US Pat. No. 11,030,151

CONSTRUCTING AN INVERTED INDEX

Avast Software s.r.o., P...

1. A method for creating an inverted index, the method comprising:
receiving a plurality of documents to be indexed;
dividing each of the plurality of documents into a plurality of n-grams, wherein an n-gram comprises a series of n bytes from the document;
hashing each n-gram of the plurality of n-grams to produce a plurality of hashed n-gram values;
for each document of the plurality of documents, placing the plurality of hashed n-gram values of the document and a document identifier associated with the document into a heap data structure, wherein the heap data structure is a tree-based data structure in which a key for a node is ordered with respect to a key for a parent of the node in the same way across all of the nodes in the heap; and
while the heap data structure is not empty, performing first operations comprising:
obtaining a first top hashed n-gram value from the heap data structure,
obtaining from the heap data structure the document identifier of every document of the plurality of documents having at least one hashed n-gram value that is the same as the first top hashed n-gram value, wherein obtaining from the heap data structure the document identifier comprises:
saving a copy of a second top hashed n-gram value; and
while the heap data structure is not empty, performing second operations comprising:
determining whether a third top hashed n-gram value matches the copy of the second top hashed n-gram value,
in response to determining that the third top hashed n-gram value does not match the copy of the second top hashed n-gram value, returning a results list,
in response to determining that the third top hashed n-gram value matches the copy of the second top hashed n-gram value, performing operations comprising:
inserting a document identifier associated with the third top hashed n-gram value into the results list, and
popping the third top hashed n-gram value from the heap, and
placing the obtained document identifiers into an inverted index in association with the first top hashed n-gram value.
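The claimed heap-merge construction can be illustrated with a minimal Python sketch. This is an interpretation, not the patented implementation: the n-gram size, the BLAKE2 hash, and the function names are all assumptions; the claim only requires a hash per n-gram and a heap ordered on the hashed values.

```python
import heapq
from hashlib import blake2b

def ngrams(data: bytes, n: int = 4):
    """Yield every n-byte sequence of the document."""
    return (data[i:i + n] for i in range(len(data) - n + 1))

def hash_ngram(gram: bytes) -> int:
    """Hash an n-gram to a fixed-size integer value (BLAKE2 chosen arbitrarily)."""
    return int.from_bytes(blake2b(gram, digest_size=8).digest(), "big")

def build_inverted_index(docs: dict[int, bytes], n: int = 4) -> dict[int, list[int]]:
    # Place every (hashed n-gram value, document identifier) pair into a min-heap.
    heap = [(hash_ngram(g), doc_id)
            for doc_id, data in docs.items() for g in ngrams(data, n)]
    heapq.heapify(heap)

    index: dict[int, list[int]] = {}
    while heap:
        top_hash = heap[0][0]          # save a copy of the top hashed value
        results: list[int] = []
        # Pop every entry whose hash matches the saved copy; a non-match
        # terminates the inner loop and "returns" the results list.
        while heap and heap[0][0] == top_hash:
            _, doc_id = heapq.heappop(heap)
            if doc_id not in results:  # one posting per document
                results.append(doc_id)
        index[top_hash] = results      # postings keyed by the hashed value
    return index
```

Because heap entries are (hash, doc id) tuples, equal hashes pop in ascending document-id order, so each postings list comes out sorted without an extra pass.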

US Pat. No. 11,030,150

SYSTEMS AND METHODS FOR CLASSIFYING ELECTRONIC FILES

NortonLifeLock Inc., Tem...

1. A computer-implemented method for classifying electronic files, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:
identifying a plurality of instances of an electronic file that comprises a plurality of versions of the electronic file;
in response to determining that a user from among a plurality of users is interacting with one or more instances of an electronic file by opening the one or more instances of the electronic file on a computing device, causing a file-categorization system to evaluate the one or more instances of the electronic file for importance;
collecting, via at least one user-state monitoring device, information about a physical state of at least one user while the user is interacting with the one or more instances of the electronic file;
determining, based on the information about the physical state of the user while the user was interacting with the one or more instances of the electronic file, whether the user considers the one or more instances of the electronic file to be important;
classifying, by the file-categorization system and based at least in part on determining whether each user in the plurality of users that has interacted with the one or more instances of the electronic file considers the one or more instances of the electronic file to be important, the plurality of instances of the electronic file as important files; and
in response to classifying the plurality of instances of the electronic file as important files, notifying at least one software security system that the plurality of instances of the electronic file are important files, thereby causing the at least one software security system to prioritize actions involving the important files over actions involving electronic files that are not classified as important files.

US Pat. No. 11,030,149

FILE FORMAT FOR ACCESSING DATA QUICKLY AND EFFICIENTLY

SAP SE, Walldorf (DE)

1. A non-transitory machine-readable medium storing a program executable by at least one processing unit of a device, the program comprising sets of instructions for:
receiving a request to create a file for storing data from a table comprising a plurality of rows, each row in the plurality of rows divided into a set of columns, each column in the set of columns configured to store a type of data;
dividing the plurality of rows into a plurality of blocks of rows, each block of rows in the plurality of blocks of rows comprising a portion of the plurality of rows;
for at least one column in the set of columns of each block of rows in the plurality of blocks of rows:
encoding the data in the column of the block of rows based on the type of data stored in the column,
storing the encoded data in the file as a page of data; and
determining a minimum value and a maximum value for the data in the column of the block of rows;
producing a set of data metadata comprising the determined minimum value and the determined maximum value for the data in the at least one column in the set of columns of each block of rows in the plurality of blocks of rows;
generating a tree data structure for the at least one column in the set of columns using the set of data metadata, wherein each leaf node of a plurality of leaf nodes of the tree data structure stores a minimum value and a maximum value of the set of data metadata, and wherein the tree data structure is generated based on a calculation of: a total number of leaf nodes in the tree data structure based on a number of blocks of rows, a height of the tree data structure based on the number of blocks of rows, and a number of leaf nodes in a penultimate level of the tree data structure based on the height of the tree data structure and the number of blocks of rows;
serializing the tree data structure;
storing the serialized tree data structure, including the data metadata of the plurality of leaf nodes, in the file as a separate page of data.
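The per-block metadata and the tree-sizing calculation in the claim can be sketched in a few lines of Python. The fanout, the row layout, and both function names are assumptions for illustration; the claim only fixes what each quantity is derived from (leaf count from the number of blocks, height from the number of blocks, penultimate-level leaf count from the height and the number of blocks).

```python
import math

def block_metadata(rows, block_size):
    """Divide rows into blocks and record a (min, max) pair per column per block."""
    meta = []
    for start in range(0, len(rows), block_size):
        block = rows[start:start + block_size]
        columns = list(zip(*block))               # column-wise view of the block
        meta.append([(min(c), max(c)) for c in columns])
    return meta

def tree_shape(num_blocks: int, fanout: int = 2):
    """One leaf per block of rows; derive height and the number of leaves
    that sit on the penultimate level when the deepest level is not full."""
    total_leaves = num_blocks
    height = max(1, math.ceil(math.log(num_blocks, fanout)))
    # Leaves that overflow to the deepest level of a tree of this height;
    # the remainder stay one level up, on the penultimate level.
    deepest = max(0, fanout * (num_blocks - fanout ** (height - 1)))
    penultimate = total_leaves - deepest
    return total_leaves, height, penultimate
```

With 5 blocks and fanout 2, for example, the tree has height 3, with 3 leaves on the penultimate level and 2 on the deepest; a power-of-two block count yields a full bottom level and an empty penultimate one.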

US Pat. No. 11,030,148

MASSIVELY PARALLEL HIERARCHICAL CONTROL SYSTEM AND METHOD

Lawrence Livermore Nation...

1. An electronic control system for controlling individually controllable electromagnetic signal directing elements of an external component, the system comprising:
a state translator subsystem comprising one or more processors for receiving a state command from an external subsystem, the state translator subsystem processing the state command and generating operational commands for controlling the elements of the external component to achieve a desired state or condition;
a programmable calibration command translation layer (PCCTL) subsystem configured to receive the operational commands and using the operational commands to generate granular level commands for controlling the elements;
a feedback layer subsystem configured to receive the granular level commands to generate final output commands to the elements, wherein the final output commands are modified as needed to control the elements in closed loop fashion;
a first plurality of parallel communication channels independently communicating with granular level commands from the feedback layer subsystem to each one of the elements in parallel fashion and receiving feedback signals back from the elements over the first plurality of parallel communication channels;
a second plurality of parallel communication channels coupling the PCCTL subsystem with the feedback layer subsystem supplying the granular level commands from the PCCTL subsystem to the feedback layer subsystem and receiving back the feedback signals from the feedback layer subsystem; and
a calibration subsystem in communication with the PCCTL subsystem supplying calibration information to the PCCTL to modify the granular level commands when needed.

US Pat. No. 11,030,147

HARDWARE ACCELERATION USING A SELF-PROGRAMMABLE COPROCESSOR ARCHITECTURE

International Business Ma...

1. A method for hardware acceleration using a self-programmable coprocessor architecture, the method comprising:
determining that an instruction cache comprises either an accelerable instruction sequence or a potentially accelerable instruction sequence;
responsive to determining that the instruction cache comprises an accelerable instruction sequence:
instead of executing, by a processor core, the accelerable instruction sequence, providing, to an accelerator block of an accelerator complex comprising a plurality of accelerator blocks, a complex instruction corresponding to the accelerable instruction sequence, wherein the accelerator block comprises one or more reprogrammable logic elements configured to execute the complex instruction; and
receiving, from the accelerator complex, a result of the complex instruction;
responsive to determining that the instruction cache comprises the potentially accelerable instruction sequence:
determining that another complex instruction for the potentially accelerable instruction sequence does not correspond to any accelerator image; and
synthesizing, based on the potentially accelerable instruction sequence, an accelerator image in parallel to executing, by the processor core, the potentially accelerable instruction sequence.

US Pat. No. 11,030,146

EXECUTION ENGINE FOR EXECUTING SINGLE ASSIGNMENT PROGRAMS WITH AFFINE DEPENDENCIES

Stillwater Supercomputing...

1. A computing device comprising:
a memory for storing data and a domain flow program;
a crossbar for sending data tokens and the domain flow programming information to a processor fabric, wherein the processor fabric comprises one or more processing elements that match data tokens in a domain flow program; and
one or more data streamers for sending data streams to the crossbar.

US Pat. No. 11,030,145

SERVER

Jabil Inc., St. Petersbu...

1. A server comprising:
an adapter device including
a base circuit board that extends along a first direction and that has a first surface and a second surface opposite to the first surface;
a plurality of spaced-apart first connectors that are disposed on the first surface along the first direction and that are electrically connected to the base circuit board, each of the first connectors extending along a second direction that is perpendicular to the first direction; and
a plurality of second connectors that are disposed on the second surface in pairs, which are spaced apart along the first direction, each pair of the second connectors being arranged along the second direction, being electrically connected to the base circuit board and being disposed between adjacent two of the first connectors, each of the second connectors extending along the second direction that is perpendicular to the first direction;
wherein the plurality of spaced-apart first connectors and the plurality of second connectors are in a non-overlapping relationship.

US Pat. No. 11,030,144

PERIPHERAL COMPONENT INTERCONNECT (PCI) BACKPLANE CONNECTIVITY SYSTEM ON CHIP (SOC)

TEXAS INSTRUMENTS INCORPO...

1. An integrated circuit, comprising:
an interconnect communication bus; and
a peripheral component interconnect (PCI) multi-function endpoint (MFN-EP) coupled to the interconnect communication bus, the MFN-EP comprising:
a first address translation unit (ATU); and
a PCI function circuit, wherein the PCI function circuit comprises:
base address registers (BARs) comprising a selected BAR, wherein the selected BAR is associated with an address internal to the integrated circuit associated with the first ATU; and
a second ATU;
wherein the PCI function circuit is configured to:
receive a transaction associated with a PCI address;
determine that the PCI address is associated with the selected BAR; and
using the selected BAR and the second ATU, determine an address internal to the integrated circuit based on the PCI address.
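The BAR-match-then-translate flow of this claim can be sketched in Python. The range-based BAR match, the offset-plus-base translation, and every name below are assumptions for illustration; real ATUs are programmable hardware tables, not objects.

```python
from dataclasses import dataclass

@dataclass
class Bar:
    """A base address register: a window of PCI address space claimed by the function."""
    pci_base: int
    size: int

    def matches(self, pci_addr: int) -> bool:
        return self.pci_base <= pci_addr < self.pci_base + self.size

@dataclass
class Atu:
    """An address translation unit mapping a BAR-relative offset
    to an address internal to the integrated circuit."""
    internal_base: int

    def translate(self, offset: int) -> int:
        return self.internal_base + offset

def route(pci_addr: int, bars: list[Bar], atus: dict[int, Atu]) -> int:
    # Determine which BAR the transaction's PCI address hits, then use that
    # BAR and its ATU to determine the internal address.
    for i, bar in enumerate(bars):
        if bar.matches(pci_addr):
            return atus[i].translate(pci_addr - bar.pci_base)
    raise ValueError("address not claimed by any BAR")
```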

US Pat. No. 11,030,143

SYSTEM FOR SHARING CONTENT BETWEEN ELECTRONIC DEVICES, AND CONTENT SHARING METHOD FOR ELECTRONIC DEVICE

Samsung Electronics Co., ...

1. An electronic device comprising:
a communication module;
a memory configured to store contents; and
a processor configured to control to:
transmit a command requesting devices information to a machine-to-machine communication system in response to an input for sharing contents,
receive the devices information from the machine-to-machine communication system,
select a first external electronic device to which to transmit the contents of the electronic device based on the devices information,
determine a communication mode between the electronic device and the first external electronic device based on content information to be shared and the devices information,
transmit at least part of the contents to a second external electronic device using the communication module based on the determined communication mode being a first communication mode, wherein the second external electronic device is connected to the first external electronic device, and
transmit the at least part of the contents to the first external electronic device using the communication module based on the determined communication mode being a second communication mode,
acquire integrated resource access information supporting external access corresponding to the at least part of the contents transmitted based on the first communication mode, from the second external electronic device, and
transmit the integrated resource access information to the first external electronic device.

US Pat. No. 11,030,142

METHOD, APPARATUS AND SYSTEM FOR DYNAMIC CONTROL OF CLOCK SIGNALING ON A BUS

Intel Corporation, Santa...

1. An apparatus comprising:
a host controller to couple to an interconnect to which a plurality of devices may be coupled, the host controller including a clock control circuit to cause the host controller to communicate a clock signal on a clock line of the interconnect, the clock control circuit to receive an indication that a first device of the plurality of devices is to send information to the host controller and to dynamically release control of the clock line of the interconnect in response to an in-band interrupt from the first device to enable the first device to drive a second clock signal onto the clock line of the interconnect for communication with the information, wherein the first device is a clock source capable device comprising a slave device.

US Pat. No. 11,030,141

APPARATUSES FOR INDEPENDENT TUNING OF ON-DIE TERMINATION IMPEDANCES AND OUTPUT DRIVER IMPEDANCES, AND RELATED METHODS, SEMICONDUCTOR DEVICES, AND SYSTEMS

Micron Technology, Inc., ...

1. An apparatus, comprising:
at least one circuit configured to:
receive at least one signal indicative of an operational mode of the apparatus;
generate, via activating at least one tuning device, a desired output driver impedance (ODI) value in response to the operational mode being a first operational mode; and
generate, via activating at least one other tuning device, a desired on-die termination (ODT) impedance value in response to the operational mode being a second operational mode.

US Pat. No. 11,030,140

BUS NETWORK TERMINATOR

EATON INTELLIGENT POWER L...

1. A bus network terminator having a termination functionality for a bus network, the bus network terminator comprising:
a diagnostic analyzer configured to:
identify whether data is present on a segment of the bus network, and
identify a degree of data activity and/or a direction of transmission on the segment of the bus network; and
a line impedance terminator offering an in-line diagnostic functionality;
wherein the diagnostic analyzer and the line impedance terminator are integrated in a bus device of the bus network; and
wherein the bus network terminator is configured to selectively initiate the termination functionality responsive to a location of the bus network terminator in the bus network.

US Pat. No. 11,030,139

CIRCUIT DEVICE, CIRCUIT DEVICE DETERMINATION METHOD, AND ELECTRONIC APPARATUS

SEIKO EPSON CORPORATION, ...

1. A circuit device comprising:
a packet output circuit configured to transmit a packet to a bus according to a USB standard in a manner capable of changing an amplitude level of the packet;
a detection circuit configured to detect whether an amplitude level of a packet transmitted to the bus exceeds a discoupling detection level; and
a control circuit configured to instruct, when the detection circuit detects that the amplitude level of the packet exceeds the discoupling detection level, the packet output circuit to lower an amplitude level of a part or all of packets, and after the instruction, to determine that, when the detection circuit detects that an amplitude level of a packet again exceeds the discoupling detection level, a USB device connected to the bus is disconnected.

US Pat. No. 11,030,138

CIRCUIT DEVICE, ELECTRONIC DEVICE, AND CABLE HARNESS

SEIKO EPSON CORPORATION, ...

1. A circuit device comprising:
a first bus compliant with a USB standard for connection;
a second bus compliant with the USB standard for connection;
a bus switch circuit that, having one end configured to be connected to the first bus and another end configured to be connected to the second bus, switches on connection between the first bus and the second bus in a first period, and switches off the connection in a second period;
a processing circuit that performs, in the second period, transfer processing for transmitting a packet received from the first bus to the second bus, and transmitting a packet received from the second bus to the first bus; and
a second-bus-side disconnection detection circuit that detects device disconnection of a device connected to the second bus side, wherein
when device disconnection of a device connected to the second bus is detected by the second-bus-side disconnection detection circuit, the bus switch circuit, in the second period, switches the connection between the first bus and the second bus from off to on after a wait period has elapsed from a timing at which the device disconnection of the device connected to the second bus is detected.

US Pat. No. 11,030,137

COMMUNICATION SYSTEM FOR CURRENT-MODULATED DATA TRANSMISSION VIA A CURRENT LOOP

1. A communication system for current-modulated data transmission between a master device and at least one slave device, the communication system comprising:
a) a current loop;
b) a master device comprising:
a first evaluation and control unit;
a first switching means connected into the current loop and actuable by the first evaluation and control unit, which is configured to open and close the current loop for transmitting data;
an electrical current source connected to the current loop, which is configured to inject a constant quiescent current into the current loop;
a first current detection means connected into the current loop and connected to the first evaluation and control unit, the first evaluation and control unit being configured to evaluate the current detected by the first current detection means;
c) at least one slave device connected to the current loop and comprising:
a second evaluation and control unit;
a second switching means connected into the current loop and actuable by the second evaluation and control unit, which is configured to open and close the current loop for transmitting data;
a third switching means actuable by the second evaluation and control unit, which is configured to short-circuit the current loop when in its closed state; wherein the second evaluation and control unit is configured to temporarily close and then reopen the third switching means during a system configuration detection phase, and wherein the first evaluation and control unit is configured to detect when the at least one slave device is connected to the current loop;
a second current detection means connected into the current loop and connected to the second evaluation and control unit, the second evaluation and control unit being configured to evaluate the current detected by the second current detection means;
(d) wherein at least one of:
(i) the master device has at least one first output that is controllable by the first evaluation and control unit; and
(ii) the at least one slave device has at least one second output that is controllable by the second evaluation and control unit;
(e) wherein:
(i) the first evaluation and control unit of the master device is able to control the first switching means in a defined manner for generating a state change request signal;
(ii) the second evaluation and control unit is able to control the second switching means in a defined manner for generating a state change request signal;
(iii) the first evaluation and control unit of the master device is configured to transfer the first output into a safe state in response to a received state change request signal; and
(iv) the second evaluation and control unit is configured to transfer the second output into a safe state in response to a received state change request signal.

US Pat. No. 11,030,136

MEMORY ACCESS OPTIMIZATION FOR AN I/O ADAPTER IN A PROCESSOR COMPLEX

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for memory access optimization for an input/output (I/O) adapter in a processor complex, the computer-implemented method comprising:
determining a memory block distance between the I/O adapter and a memory block location in the processor complex;
determining one or more memory movement type criteria between the I/O adapter and the memory block location based on the memory block distance;
selecting a memory movement operation type based on a memory movement process parameter and the one or more memory movement type criteria, wherein the one or more memory movement type criteria comprise an offload minimum memory block size, and the memory movement process parameter comprises a block size of memory at the memory block location; and
initiating a memory movement process between the I/O adapter and the memory block location using the memory movement operation type.
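The selection step of this claim can be sketched in Python. The distance metric, the threshold values, the operation-type names, and the function names are all assumptions; the claim only requires that the criteria (here, an offload minimum block size) depend on the memory block distance, and that the chosen type depends on those criteria and the block size.

```python
def select_movement(adapter_node: int, memory_node: int, block_size: int) -> str:
    """Pick a memory movement operation type for a transfer between an
    I/O adapter and a memory block location."""
    # Memory block distance: toy affinity metric between the two locations.
    distance = abs(adapter_node - memory_node)

    # Memory movement type criteria derived from the distance: local memory
    # needs a larger block to amortize offload setup than remote memory does.
    offload_min_block_size = 4096 if distance == 0 else 1024

    # Select the operation type from the criteria and the process parameter
    # (the block size at the memory block location).
    if block_size >= offload_min_block_size:
        return "async_offload"     # hand the move to an offload engine
    return "sync_copy"             # small block: a synchronous copy is cheaper
```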

US Pat. No. 11,030,135

METHOD AND APPARATUS FOR POWER REDUCTION FOR DATA MOVEMENT

Advanced Micro Devices, I...

1. A method of receiving data including:
receiving, by a controller, a first data entry and a second data entry of a set of data entries from a first buffer in a first order such that the second data entry is received before the first data entry at least partially due to the similarity between the first and second entries, wherein the first entry was written to the first buffer before the second entry was written to the first buffer,
receiving, by the controller, at least one index along with the first data entry and the second data entry, and
writing, by the controller, the first data entry to a first slot in a second buffer in a second order and the second data entry to a second slot in the second buffer in the second order, the first slot and the second slot being identified by the at least one index.
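The reorder-and-restore idea can be sketched in Python. Sorting by value is only a stand-in for a real similarity-driven schedule (consecutive similar entries toggle fewer bus lines, saving power); the function names and the sort key are assumptions.

```python
def send_order(entries):
    """Sender side: reorder entries so consecutive transfers are similar,
    attaching each entry's original index so order can be restored."""
    return sorted(enumerate(entries), key=lambda pair: pair[1])

def receive(second_buffer_size, stream):
    """Controller side: write each entry into the second-buffer slot
    identified by the index transmitted along with it."""
    out = [None] * second_buffer_size
    for index, entry in stream:
        out[index] = entry
    return out
```

However the sender permutes the stream, the indices make the receive side order-independent, which is what lets the bus schedule be chosen purely for power.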

US Pat. No. 11,030,134

COMMUNICATION SYSTEM, A COMMUNICATION CONTROLLER AND A NODE AGENT FOR CONNECTION CONTROL BASED ON PERFORMANCE MONITORING

SAAB AB, Linkoeping (SE)...

1. A centralized communication controller of a node in a network, the centralized communication controller consisting of:
a plurality of data transmission ports configured to be connected to corresponding logical channels;
a plurality of application ports configured to provide data to or from corresponding applications; and
a single node agent having node control data comprising at least one trained machine learning model, the single node agent being configured to:
monitor performance characteristics for data inflow to the plurality of data transmission ports;
monitor at least one local observable;
control a connection between either: (i) two or more of the plurality of data transmission ports causing inflow or outflow of data, or (ii) between one or more of the plurality of data transmission ports and one or more of the plurality of application ports, causing (a) inflow of data if the one or more of the plurality of application ports receives data, and (b) outflow of data if the one or more of the plurality of application ports sends data; and
control a connection between either: (i) the two or more of the plurality of data transmission ports, or (ii) between the one or more of the plurality of data transmission ports and the one or more of the plurality of application ports based on the monitored performance characteristics, the at least one local observable, the at least one trained machine learning model, and a local policy matrix,
wherein:
the local policy matrix is updated based on variations of the at least one local observable derived from at least one of an inflow of data to the centralized communication controller or variations of the performance characteristics such that a global target function defining end to end application performance metrics for providing data to or from the corresponding applications of the centralized communication controller via the plurality of data transmission ports is satisfied for a local control performed by the single node agent, and
the update of the local policy matrix comprises automatically updating the local policy matrix based on providing at least one of results from the monitoring of the at least one local observable or the performance characteristics into the at least one trained machine learning model, whereby an output of the at least one trained machine learning model is generated by the single node agent which is used by the single node agent to update the local policy matrix.

US Pat. No. 11,030,133

AGGREGATED IN-BAND INTERRUPT BASED ON RESPONSES FROM SLAVE DEVICES ON A SERIAL DATA BUS LINE

QUALCOMM Incorporated, S...

1. An apparatus, comprising:
a host controller configured to:
transmit a trigger for a series of responses to at least one slave via a serial communication bus,
receive the series of responses from the at least one slave via the serial communication bus in response to transmitting the trigger,
determine a first response of the series of responses indicating an in-band interrupt (IBI) request, and
respond to the IBI request based on a position of the first response among the series of responses.
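The positional-response idea can be sketched in Python: because each slave answers the trigger in a fixed slot order, the position of the first flagged response identifies the requester without any address byte. The names and the boolean-flag representation are assumptions for illustration.

```python
def find_ibi_requester(responses, slave_order):
    """Given the series of responses to a trigger (True where a response
    indicates an in-band interrupt request) and the fixed order in which
    slaves respond, return the slave that raised the first IBI request."""
    for position, flags_ibi in enumerate(responses):
        if flags_ibi:
            return slave_order[position]   # position maps back to a slave
    return None                            # no IBI request this round
```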

US Pat. No. 11,030,132

SYNCHRONOUS MEMORY BUS ACCESS TO STORAGE MEDIA

Micron Technology, Inc., ...

1. A computing system, comprising:
a plurality of memory components having first memory and second memory, wherein the first memory is available to a host system for read and write access over a memory bus during one or more of a first plurality of windows;
a processing device, operatively coupled with the plurality of memory components, to: receive, from a driver of the host system, a request regarding a page of data stored in the second memory;
responsive to the request, transfer the page from the second memory to a buffer, wherein the second memory has a longer response time than the first memory and the buffer; and
write the page from the buffer to the first memory;
wherein the page is written to the first memory and the first memory is refreshed during at least one of a second plurality of windows corresponding to a refresh timing for the memory bus, and the refresh timing is controlled at the host system;
wherein the host system is able to access the page of data over the memory bus after the page has been transferred to the buffer;
wherein the page is transferred from the first memory to the buffer during at least one of the first plurality of windows.

US Pat. No. 11,030,131

DATA PROCESSING PERFORMANCE ENHANCEMENT FOR NEURAL NETWORKS USING A VIRTUALIZED DATA ITERATOR

Microsoft Technology Lice...

1. A system comprising:
at least one processor;
at least one hardware iterator configured to execute multiple instances for parallel processing of one or more data iteration functions; and
at least one memory in communication with the at least one processor, the at least one memory having computer-readable instructions stored thereupon that, when executed by the at least one processor, cause the system to:
receive one or more initialization parameters comprising data representative of dimensions of data to be processed by a neural network environment;
load the data from a memory component of the neural network environment;
receive one or more instructions to traverse the loaded data according to one or more iterator operation types and the one or more initialization parameters and using one or more selected traversal patterns;
process the one or more instructions to select one or more portions of the loaded data according to one or more iterator operation types and the one or more initialization parameters; and
communicate the one or more portions of the loaded data to one or more processing components of the neural network environment;
wherein the hardware iterator is virtualized using logical mapping to the data in the memory component, the logical mapping determined using parameters indicative of the data's physical memory addressing, dimension, and volume.
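The "logical mapping determined using the data's physical memory addressing, dimension, and volume" can be illustrated with a row-major stride calculation in Python. Row-major layout and the function name are assumptions; the claim does not fix a particular layout.

```python
def logical_to_physical(base_addr, dims, element_size, coords):
    """Map a logical n-dimensional coordinate to a physical address using
    strides derived from the data's dimensions (row-major layout assumed)."""
    stride = element_size
    offset = 0
    # Walk dimensions from innermost to outermost, accumulating the offset.
    for dim, coord in zip(reversed(dims), reversed(coords)):
        offset += coord * stride
        stride *= dim
    return base_addr + offset
```

With such a mapping, an iterator instance is just a set of parameters (base, dims, element size) plus a traversal pattern over coordinates, which is what makes it cheap to virtualize multiple instances over one hardware block.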

US Pat. No. 11,030,130

STORAGE DEVICE, ACCESS METHOD AND SYSTEM UTILIZING THE SAME

WINBOND ELECTRONICS CORP....

1. A storage device comprising:
a memory array comprising:
a plurality of banks; and
a data path; and
a peripheral logic circuit operating in a copy mode or a normal mode according to a mode-switch command,
wherein in the copy mode:
the peripheral logic circuit decodes an activation command to generate activation information and activates a portion of the banks according to the activation information,
the peripheral logic circuit decodes a copy command to generate source information and assigns one among the portion of the banks as a source bank according to the source information,
the peripheral logic circuit performs a specific operation for the activation information and the source information to generate write information and assigns another one of the portion of the banks as a first target bank according to the write information, and
the peripheral logic circuit directs the source bank to output specific data to the data path and directs the first target bank to receive and store the specific data from the data path.

US Pat. No. 11,030,129

SYSTEMS AND METHODS FOR MESSAGE TUNNELING

SAMSUNG ELECTRONICS CO., ...

20. A method of tunneling a remote procedure call through a data protocol, the method comprising:
allocating at least a portion of a host memory buffer included by a host computing device for a communication between the host computing device and an enhanced storage device;
creating a data message that includes an indication that a tunneled message is stored within the portion of the host memory buffer;
transmitting the data message to the enhanced storage device;
upon receipt of the data message, the enhanced storage device reading the tunneled message from the host computing device; and
in response to the tunneled message, executing one or more instructions by the enhanced storage device.
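The data-message format and the device-side read can be sketched in Python. The byte layout, the flag value, and the function names are inventions for illustration only; the claim requires just an indicator plus enough information for the device to locate the tunneled message in the host memory buffer.

```python
import struct

TUNNEL_FLAG = 0x1  # assumed flag bit: "a tunneled message is stored in the HMB"

def create_data_message(hmb_offset: int, length: int) -> bytes:
    """Host side: build a small data message carrying only the indicator
    and the location of the tunneled message in the host memory buffer."""
    return struct.pack("<BQI", TUNNEL_FLAG, hmb_offset, length)

def device_read_tunneled(msg: bytes, host_memory_buffer: bytes) -> bytes:
    """Device side: on receipt, read the tunneled message back out of the
    host's memory rather than from the data message itself."""
    flag, offset, length = struct.unpack("<BQI", msg)
    if not flag & TUNNEL_FLAG:
        raise ValueError("no tunneled message indicated")
    return host_memory_buffer[offset:offset + length]
```

Keeping the payload in host memory lets an arbitrarily large RPC ride on a fixed-size data-protocol message.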

US Pat. No. 11,030,128

MULTI-PORTED NONVOLATILE MEMORY DEVICE WITH BANK ALLOCATION AND RELATED SYSTEMS AND METHODS

Cypress Semiconductor Cor...

1. A nonvolatile memory device, comprising:
a serial port including
at least one serial clock input, and
at least one serial data input/output (I/O) configured to receive command, address and write data in synchronism with the at least one serial clock input;
at least one parallel port including
a plurality of command address inputs configured to receive command and address data in groups of parallel bits,
a plurality of unidirectional data outputs configured to output read data in parallel on rising and falling edges of a data clock signal; and
a plurality of banks, each bank including a plurality of nonvolatile memory cells and configurable for access by the at least one serial port or the at least one parallel port, wherein when a bank is configured for access by the at least one serial port, the bank is not accessible by the at least one parallel port; and
a bank access register configured to store access values for each bank, wherein each bank is accessible by the serial port or the parallel port based on the access value for the bank stored in the bank access register.

US Pat. No. 11,030,127

MULTI-THREADED ARCHITECTURE FOR MEMORY CONTROLLER DATA PATHS

NXP USA, INC., Austin, T...

1. An article of manufacture comprising:N processing cores, wherein N is an integer greater than 1;
M memory banks, wherein M is an integer greater than or equal to 1; and
a memory controller configured to enable the N processing cores to access the M memory banks, wherein the memory controller is configured to:
receive a sequence of one or more access requests from each of one or more of the processing cores, wherein each access request is a request for access by the corresponding processing core to a specified memory bank; and
process access requests from one or more of the processing cores to generate a current sequence of one or more granted access requests to each of one or more of the memory banks, wherein:
for each processing core, the article comprises first and second buffers, each buffer configured to store an access request;
when an access request from the first buffer is granted, the first buffer is configured to receive a new access request and processing is performed to determine whether to grant an access request stored in the second buffer; and
when an access request from the second buffer is granted, the second buffer is configured to receive a new access request and processing is performed to determine whether to grant an access request stored in the first buffer.

US Pat. No. 11,030,126

TECHNIQUES FOR MANAGING ACCESS TO HARDWARE ACCELERATOR MEMORY

INTEL CORPORATION, Santa...

1. An apparatus to provide coherence bias for accessing accelerator memory, the apparatus comprising:at least one processor;
a logic device communicatively coupled to the at least one processor;
a logic device memory communicatively coupled to the logic device, the logic device memory to store bias information; and
logic, at least a portion comprised in hardware, the logic to:
receive a request to access the logic device memory from the logic device,
determine a bias mode associated with the request based on the bias information stored in the logic device memory, and
provide the logic device with access to the logic device memory via a device bias pathway responsive to the bias mode being a device bias mode.

US Pat. No. 11,030,125

POINT IN TIME COPY OPERATIONS FROM SOURCE VOLUMES TO SPACE EFFICIENT TARGET VOLUMES IN TWO STAGES VIA A NON-VOLATILE STORAGE

INTERNATIONAL BUSINESS MA...

1. A system, comprising:a memory; and
a processor coupled to the memory, wherein the processor performs operations, the operations comprising:
in response to receiving a request from a host to perform a point in time copy operation from a source volume to a space efficient target volume, performing a first set of operations, the first set of operations comprising:
updating a bitmap metadata to indicate tracks to be copied for the point in time copy operation, and in response to updating the bitmap metadata sending an indication to the host that the point in time copy operation is complete even though a corresponding physical point in time copy of data stored in the tracks has not committed; and
in response to updating the bitmap metadata, copying, by using the bitmap metadata, data stored in the tracks indicated in the bitmap metadata, from the source volume to a non-volatile storage to preserve the point in time copy operation; and
subsequent to completion of the first set of operations, performing a second set of operations, comprising asynchronously copying, via a background process, the data copied in the first set of operations to the non-volatile storage, from the non-volatile storage to the space efficient target volume to perform a commit of the physical point in time copy of the data from the source volume to the space efficient target volume, wherein point in time copy operations are preserved in the non-volatile storage via the first set of operations by copying a group of tracks to a temporary copy in the non-volatile storage, and subsequently in a commit stage that comprises the second set of operations, the temporary copy is copied over asynchronously via the background process to the space efficient target volume to commit the physical point in time copy of the data from the source volume to the space efficient target volume.
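The two-stage flow in this claim can be sketched briefly: stage one updates bitmap metadata, acknowledges the host before the physical copy commits, and preserves the tracks in non-volatile storage; stage two later drains that temporary copy to the space-efficient target in the background. All function and variable names below are illustrative assumptions.

```python
# Illustrative two-stage point-in-time copy, using dicts for volumes.

def stage1(source, tracks):
    """Update bitmap, ack the host, copy tracks to non-volatile storage."""
    bitmap = {t: True for t in tracks}            # tracks to be copied
    nonvolatile = {t: source[t] for t in bitmap}  # temporary preserved copy
    host_ack = "point-in-time copy complete"      # sent before the commit
    return bitmap, nonvolatile, host_ack

def stage2(nonvolatile, target):
    """Background commit: drain the temporary copy to the target volume."""
    for track, data in nonvolatile.items():
        target[track] = data                      # space-efficient target
    return target

source = {0: b"aaaa", 1: b"bbbb", 2: b"cccc"}
bitmap, nv, ack = stage1(source, tracks=[0, 2])
target = stage2(nv, target={})
print(ack, target)
```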

US Pat. No. 11,030,124

SEMICONDUCTOR DEVICE WITH SECURE ACCESS KEY AND ASSOCIATED METHODS AND SYSTEMS

Micron Technology, Inc., ...

1. A memory device, comprising:a memory array;
a plurality of nonvolatile memory elements configured to store a first access key; and
peripheral circuitry coupled to the plurality of nonvolatile memory elements and the memory array, and configured to:
receive a predetermined sequence of signals from a host device;
retrieve the first access key from the plurality of nonvolatile memory elements in response to receiving the predetermined sequence of signals;
configure one or more pins to output the first access key; and
transmit the first access key using the one or more pins after configuring the one or more pins.

US Pat. No. 11,030,123

FINE GRAINED MEMORY AND HEAP MANAGEMENT FOR SHARABLE ENTITIES ACROSS COORDINATING PARTICIPANTS IN DATABASE ENVIRONMENT

Oracle International Corp...

1. A non-transitory computer readable medium comprising instructions which, when executed by one or more hardware processors, cause performance of operations comprising:requesting, by a first process, a handle for a shared memory namespace to be shared with one or more other processes;
receiving, by the first process, owner access to the handle, wherein the handle references a data structure that identifies (a) any memory regions associated with the handle, and (b) any processes that have owner access to the handle;
specifying, by the first process, a set of one or more processes to be authorized to obtain owner access to the handle;
sharing, by the first process with a second process, the handle referencing the data structure;
requesting, by the second process, owner access to the handle;
responsive to the request by the second process for owner access to the handle:
(a) updating the data structure to identify the second process as having owner access to the handle; and
(b) granting to the second process, owner access to the handle,
wherein memory is managed by an access management layer that performs operations comprising:
responsive to receiving a request from the first process for a handle to a shared memory namespace to be shared with one or more other processes:
creating the handle and the data structure, wherein the data structure (a) describes the shared memory namespace and (b) identifies the first process as having owner access to the handle; and
providing the handle to the first process;
receiving from the first process, identities of one or more processes to be authorized to obtain owner access to the handle;
responsive to receiving from the second process a request for owner access to the handle:
verifying that the second process is one of the one or more processes that are authorized to obtain owner access to the handle; and
granting access to any memory regions, associated with the handle, to the second process.
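The handle bookkeeping described by this claim can be modeled compactly: the access management layer creates a handle whose backing data structure tracks the associated memory regions, the owning processes, and the set of processes authorized to obtain owner access. This is a minimal sketch under those assumptions; `AccessManager` and its methods are hypothetical names.

```python
# Hypothetical sketch of the claimed shared-memory handle management.

class AccessManager:
    def __init__(self):
        self.handles = {}

    def create_handle(self, namespace, creator):
        handle = f"h:{namespace}"
        self.handles[handle] = {
            "regions": [],          # memory regions associated with the handle
            "owners": {creator},    # processes with owner access
            "authorized": set(),    # processes allowed to request owner access
        }
        return handle

    def authorize(self, handle, processes):
        self.handles[handle]["authorized"].update(processes)

    def request_owner_access(self, handle, process):
        entry = self.handles[handle]
        if process not in entry["authorized"]:
            raise PermissionError(process)
        entry["owners"].add(process)   # (a) record owner access, (b) grant it
        return entry["regions"]

mgr = AccessManager()
h = mgr.create_handle("shared-ns", creator="p1")   # first process
mgr.authorize(h, {"p2"})                           # p1 names p2 as authorized
regions = mgr.request_owner_access(h, "p2")        # p2 obtains owner access
print(sorted(mgr.handles[h]["owners"]))
```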

US Pat. No. 11,030,122

APPARATUSES AND METHODS FOR SECURING AN ACCESS PROTECTION SCHEME

Micron Technology, Inc., ...

1. An apparatus, comprising:a memory; and
a controller configured to interpret a command received along a command path, wherein the controller is associated with a register, wherein the register is configured to store an indication of whether an access protection scheme of the memory is alterable, wherein the controller is configured to prevent alteration to the indication stored in the register until the command is authenticated as being properly issued from an authorized sender.

US Pat. No. 11,030,121

APPARATUS AND METHOD FOR COMPARING REGIONS ASSOCIATED WITH FIRST AND SECOND BOUNDED POINTERS

ARM Limited, Cambridge (...

1. An apparatus to determine whether an accessible memory region defined for a second bounded pointer is a subset of an accessible memory region defined for a first bounded pointer, each bounded pointer having a pointer value and associated upper and lower limits identifying the accessible memory region for that bounded pointer, the apparatus comprising:storage circuitry to store a first bounded pointer representation and a second bounded pointer representation, each bounded pointer representation comprising a pointer value having p bits, and identifying the upper and lower limits in a compressed form by identifying a lower limit mantissa of q bits, an upper limit mantissa of q bits and an exponent value e, where the lower limit and the upper limit each have p bits, and where a most significant p−q−e (p minus q minus e) bits of the lower limit and the upper limit is derivable from the most significant p−q−e bits of the pointer value such that the upper and lower limits are anchored by the pointer value to reside within a memory region of size 2^n, where n=q+e, where the least significant n bits of the upper limit is derivable from the q bits of the upper limit mantissa and the exponent value e, and where the least significant n bits of the lower limit is derivable from the q bits of the lower limit mantissa and the exponent value e;
mapping circuitry to map the lower limit mantissas and the upper limit mantissas of the first and second bounded pointer representations to a q+x bit address space comprising 2^x regions of size 2^n1, where n1 is the value of n determined when using the exponent value of the first bounded pointer representation, and q+x is less than p;

mantissa extension circuitry to extend the lower limit mantissas in the q+x bit address space and the upper limit mantissas in the q+x bit address space for each bounded pointer representation to create extended lower limit and upper limit mantissas comprising q+x bits, where a most significant x bits of each extended limit mantissa are mapping bits identifying which region the associated limit mantissa is mapped to; and
determination circuitry to determine whether the accessible memory region defined for the second bounded pointer is a subset of the accessible memory region defined for the first bounded pointer by comparing the extended lower limit and upper limit mantissas for the first and second bounded pointers.

US Pat. No. 11,030,120

HOST-CONVERTIBLE SECURE ENCLAVES IN MEMORY THAT LEVERAGE MULTI-KEY TOTAL MEMORY ENCRYPTION WITH INTEGRITY

Intel Corporation, Santa...

1. A processor comprising:a cryptographic engine to control access, using a secure region key identifier (ID), to one or more memory range of memory allocable for flexible conversion to secure pages of architecturally-protected memory regions; and
a processor core coupled to the cryptographic engine, the processor core to:
determine that a physical address associated with a request to access the memory corresponds to a secure page within the one or more memory range of the memory;
determine that a first key ID located within the physical address does not match the secure region key ID; and
issue a page fault and deny access to the secure page in the memory, wherein one of:
the processor core further comprises a set of instructions in firmware that performs a basic input-output system (BIOS), wherein the processor core is to execute the set of instructions to:
discover that a host-convertible secure region mode and a secure extensions mode are enabled,
program a secure extensions key into the cryptographic engine to correspond to the secure region key ID, and
reserve the one or more memory range of the memory for flexible conversion to the secure pages, or
the processor core is further to map, using the secure key ID, a guest virtual address of the secure page to a second physical address within page tables and extended page tables, such that the second physical address contains the secure region key ID.

US Pat. No. 11,030,119

STORAGE DATA ENCRYPTION AND DECRYPTION APPARATUS AND METHOD

C-SKY Microsystems Co., L...

1. A method for encrypting and decrypting data, the method performed by an embedded system and comprising:generating, by a true random number generator of the embedded system, a plurality of keys, wherein:
the plurality of keys comprise a first key and a second key,
the first key is used to encrypt acquired data to be written to a first chip of the embedded system and to decrypt encrypted data read from the first chip, and
the second key is used to encrypt acquired data to be written to a second chip of the embedded system and to decrypt encrypted data read from the second chip;
writing the plurality of keys into a key memory of the embedded system; and
performing, using the first key of the plurality of keys from the key memory, encryption operation on data to be written into a data memory of the first chip and decryption operation on data read from the data memory of the first chip.

US Pat. No. 11,030,118

DATA-LOCKING MEMORY MODULE

Rambus Inc., San Jose, C...

1. A method of operation within a memory module, the method comprising:receiving encryption information from a source external to the memory module;
storing the encryption information exclusively within a non-persistent storage element of the memory module such that the encryption information is expunged from the memory module upon power loss;
receiving write data via a plurality of data paths and storing the write data within dynamic random access memory (DRAM) components that are (i) coupled respectively to the plurality of data paths such that a respective portion of the write data is stored within each of the DRAM components, and (ii) distinct from the non-persistent storage element; and
after storing the write data within the DRAM components, receiving information indicative of potential security breach with respect to the write data other than a power-loss event, and, in response to receiving the information indicative of potential security breach:
encrypting the write data using the encryption information stored within the non-persistent storage element to produce encrypted data;
storing the encrypted data within the memory module in a nonvolatile storage that is distinct from the DRAM components and the non-persistent storage element; and
after storing the encrypted data in the nonvolatile storage, expunging the encryption information from the non-persistent storage element and expunging the write data from the DRAM components.

US Pat. No. 11,030,117

PROTECTING HOST MEMORY FROM ACCESS BY UNTRUSTED ACCELERATORS

ADVANCED MICRO DEVICES, I...

1. A method comprising:receiving, at a host processor from an accelerator, an address translation request including a virtual address in a virtual address space that is shared by the host processor and the accelerator;
encrypting, at the host processor, a physical address in a host memory indicated by the virtual address in response to the accelerator being permitted to access the physical address;
providing, from the host processor to the accelerator, the encrypted physical address; and
storing, at a translation lookaside buffer associated with the accelerator, a mapping of the virtual address to the encrypted physical address.
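A toy model of this translation flow: the host validates that the accelerator is permitted to access the physical address, encrypts it, and the accelerator caches the virtual-to-encrypted-physical mapping in its TLB. The XOR "cipher" below is a stand-in for a real one, and every name and constant is an illustrative assumption.

```python
# Toy model of the claimed encrypted-physical-address translation.

SECRET = 0x5A5A5A5A  # host-held key (assumption, not from the patent)

def host_translate(page_table, permitted, vaddr):
    """Host-side handler for an address translation request."""
    paddr = page_table[vaddr]
    if vaddr not in permitted:
        raise PermissionError(hex(vaddr))
    return paddr ^ SECRET           # encrypted physical address

page_table = {0x1000: 0x800000}     # shared virtual -> host physical
tlb = {}                            # accelerator-side TLB
enc = host_translate(page_table, permitted={0x1000}, vaddr=0x1000)
tlb[0x1000] = enc                   # accelerator stores VA -> encrypted PA
print(hex(tlb[0x1000]))
```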

US Pat. No. 11,030,116

PROCESSING CACHE MISS RATES TO DETERMINE MEMORY SPACE TO ADD TO AN ACTIVE CACHE TO REDUCE A CACHE MISS RATE FOR THE ACTIVE CACHE

International Business Ma...

1. A computer program product for managing an active cache in a computer system to cache tracks stored in a storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:determining whether adding additional memory space to the active cache would result in an active cache miss rate being less than a cache demote rate when the active cache miss rate exceeds the cache demote rate; and
generating a message to a user of the computer system indicating to add the additional memory space to the active cache in response to determining that adding the additional memory space would result in the active cache miss rate being less than the cache demote rate.
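The decision in this claim reduces to a small check: when the active cache miss rate exceeds the demote rate, recommend adding memory only if the projected miss rate with the extra space would fall below the demote rate. In this sketch the projected miss rate is supplied by the caller as a placeholder for the patent's determination step; the function name and message text are assumptions.

```python
# Minimal sketch of the claimed cache-expansion check.

def recommend_cache_expansion(miss_rate, demote_rate, projected_miss_rate):
    if miss_rate <= demote_rate:
        return None                       # check only applies when miss > demote
    if projected_miss_rate < demote_rate:
        return "add the additional memory space to the active cache"
    return None

msg = recommend_cache_expansion(miss_rate=0.30, demote_rate=0.25,
                                projected_miss_rate=0.20)
print(msg)
```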

US Pat. No. 11,030,115

DATALESS CACHE ENTRY

LENOVO Enterprise Solutio...

1. A method comprising:identifying a first cache entry in cache memory as a potential cache entry to be replaced according to a cache replacement algorithm;
comparing a data value of the first cache entry to a predefined value; and
storing a memory address tag and state bits of the first cache entry to a dataless cache entry in response to the data value of the first cache entry matching the predefined value, wherein the dataless cache entry in the cache memory stores a memory address tag and state bits associated with the memory address tag, wherein the dataless cache entry represents the predefined value, and wherein the dataless cache entry occupies fewer bits than the first cache entry.
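The dataless-entry idea above can be sketched directly: when an eviction candidate's data equals a predefined value (a zero-filled line is a natural choice, assumed here), keep only its tag and state bits in a compact dataless entry, which by construction represents that value. Class and field names are illustrative.

```python
# Sketch of the claimed dataless cache entry replacement.

PREDEFINED = b"\x00" * 64   # assumed predefined value: an all-zero cache line

class Cache:
    def __init__(self):
        self.entries = {}    # tag -> (state_bits, data): full entries
        self.dataless = {}   # tag -> state_bits only: represents PREDEFINED

    def evict(self, tag):
        state, data = self.entries.pop(tag)
        if data == PREDEFINED:
            self.dataless[tag] = state    # far fewer bits than a full entry

    def read(self, tag):
        if tag in self.dataless:
            return PREDEFINED             # the dataless entry implies the value
        return self.entries[tag][1]

c = Cache()
c.entries[0x40] = ("MES", b"\x00" * 64)
c.evict(0x40)                  # replaced by a dataless entry, not discarded
print(c.read(0x40) == PREDEFINED, 0x40 in c.entries)
```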

US Pat. No. 11,030,114

SHARED VOLUME BASED CENTRALIZED LOGGING

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for use with a hosted service computing system that is architecturally represented as a computing node stack that is organized into a plurality of layers including system software layer, with the computing node stack including a plurality of computing nodes respectively representing server computers in the hosted service computing system, the computer implemented method comprising:deploying in the hosted service computing system, a log agent instantiation such that the log agent instantiation is running in the system software layer of the computing node stack representation of the hosted service computing system;
deploying in the hosted service computing system, a log data collection agent instantiation;
deploying, in the hosted service computing system a first instantiation of a first application, with the first instantiation of the first application being reserved for the use of a first tenant of a plurality of tenants of the hosted service computing system;
receiving, by the log agent instantiation and from the log data collection agent instantiation, first application logging data that is generated by operations of the first instantiation of the first application and collected by the log data collection agent instantiation; and
storing, by the log agent instantiation, the first application logging data in a storage area network (SAN) type storage system, wherein the method includes deploying, in the hosted service computing system a second instantiation of the first application, with the second instantiation of the first application being reserved for the use of a second tenant of the plurality of tenants of the hosted service computing system.

US Pat. No. 11,030,113

APPARATUS AND METHOD FOR EFFICIENT PROCESS-BASED COMPARTMENTALIZATION

Intel Corporation, Santa...

1. A processor comprising:execution circuitry to execute instructions and process data;
memory management circuitry coupled to the execution circuitry, the memory management circuitry to manage access to a system memory by a plurality of related processes using one or more process-specific translation structures and one or more shared translation structures to be shared by the related processes; and
one or more control registers to store a process-specific base address pointer associated with a first process of the plurality of related processes and to store a shared base address pointer to identify the shared translation structures;
wherein the memory management circuitry is to use the process-specific base address pointer in combination with a first linear address provided by the first process to walk the process-specific translation structures to identify any permissions and/or physical address associated with the first linear address, wherein when permissions are identified, the memory management circuitry is to use the permissions in place of any permissions specified in the shared translation structures.

US Pat. No. 11,030,112

ENHANCED ADDRESS SPACE LAYOUT RANDOMIZATION

Red Hat, Inc., Raleigh, ...

1. A system comprising:a memory including a first memory address and a second memory address of a plurality of memory addresses, wherein at least one of the plurality of memory addresses is a decoy address; and
a memory manager on one or more processors executing to:
generate a page table associated with the memory, wherein the page table includes a plurality of page table entries;
flag each page table entry in the plurality of page table entries as in a valid state;
instantiate the page table with a first page table entry and a second page table entry of the plurality of page table entries associated with the first memory address and the second memory address respectively such that the first memory address and the second memory address are allocated;
flag the first memory address's and the second memory address's access rights as accessible;
associate a plurality of unused page table entries of the plurality of page table entries, including a decoy page table entry, with the decoy address;
write first data to the first memory address; and
after writing the first data to the first memory address, and while the first page table entry remains flagged as valid and also remains flagged as accessible:
deallocate the first memory address; and
update the first page table entry to reference the decoy address such that an access timing of the decoy address is the same as an access timing of the second memory address, wherein the decoy address comprises an address that returns a page fault.
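The decoy mechanism in this claim can be modeled in a few lines: deallocated addresses are retargeted to a decoy while their entries stay flagged valid and accessible, so probing freed memory behaves (and, in the claim, times) like probing a live page yet faults on access. The structures below are illustrative assumptions.

```python
# Toy model of the claimed decoy-address page table.

DECOY = 0xDEAD000   # decoy address: any access to it returns a page fault

class PageTable:
    def __init__(self):
        self.entries = {}   # vpn -> {"addr", "valid", "accessible"}

    def map(self, vpn, addr):
        self.entries[vpn] = {"addr": addr, "valid": True, "accessible": True}

    def deallocate(self, vpn):
        # Keep the entry flagged valid and accessible, but point it at the
        # decoy, so a probe cannot distinguish it from a live mapping.
        self.entries[vpn]["addr"] = DECOY

    def access(self, vpn):
        entry = self.entries[vpn]
        if entry["addr"] == DECOY:
            raise MemoryError("page fault")   # decoy access faults
        return entry["addr"]

pt = PageTable()
pt.map(1, 0x1000)
pt.map(2, 0x2000)
pt.deallocate(1)                    # entry 1 now references the decoy
print(pt.entries[1]["valid"], hex(pt.access(2)))
```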

US Pat. No. 11,030,111

REPRESENTING AN ADDRESS SPACE OF UNEQUAL GRANULARITY AND ALIGNMENT

International Business Ma...

1. A computer-implemented method, comprising:identifying a data write to a specific position within a virtual address space;
determining an entry within a metadata structure that corresponds to the specific position within the virtual address space; and
adding state information associated with the data write to the entry within the metadata structure, the state information including a left or right alignment of the data write within the virtual address space.

US Pat. No. 11,030,110

INTEGRATED CIRCUIT AND DATA PROCESSING SYSTEM SUPPORTING ADDRESS ALIASING IN AN ACCELERATOR

International Business Ma...

1. An integrated circuit for a coherent data processing system including a system memory, the integrated circuit comprising:a first communication interface for communicatively coupling the integrated circuit with the coherent data processing system;
a second communication interface for communicatively coupling the integrated circuit with an accelerator unit including an effective address-based accelerator cache for buffering copies of data from the system memory of the coherent data processing system;
a real address-based directory inclusive of contents of the accelerator cache, wherein the real address-based directory assigns entries based on real addresses utilized to identify storage locations in the system memory; and
request logic that communicates memory access requests and request responses with the accelerator unit via the second communication interface, wherein the request logic, responsive to receipt from the accelerator unit of a read-type request specifying an aliased second effective address of a target cache line, provides the accelerator unit a request response including a host tag indicating that the accelerator unit has associated a different first effective address with the target cache line.

US Pat. No. 11,030,109

MECHANISMS FOR A CONTENTION FREE LOOKUP IN A CACHE WITH CONCURRENT INSERTIONS AND DELETIONS

Samsung Electronics Co., ...

1. A method of contention-free lookup by a cache manager of a data cache, the method comprising:mapping, with the cache manager, a key of a cache lookup operation by performing a hash function on the key to determine an expected location, in the data cache, of object data corresponding to a key;
walking, by the cache manager, a collision chain of the expected location by determining, with the cache manager, for up to each node of a plurality of nodes of the collision chain, that a cache header signature matches or is different from a collision chain signature of the collision chain, the cache header signature comprising information indicating a state of a cache entry and a state of the object data in the cache entry;
based on determining that the cache header signature is different from the collision chain signature, again walking, with the cache manager, the collision chain;
based on determining that the cache header signature matches the collision chain signature, determining, with the cache manager, for up to each of the nodes, that a key in the cache header signature matches or is different from the key of the cache lookup operation;
based on determining that the key in the cache header signature is different from the key of the cache lookup operation in each of the nodes, reading, with the cache manager, a cache entry corresponding to the cache lookup operation from a data storage, and repopulating, by the cache manager, the cache entry into the expected location based on the cache header signature information;
based on determining that the key in the cache header signature matches the key of the cache lookup operation, acquiring, by the cache manager, an entry lock for holding a lock on a cache entry to protect only contents in the cache entry while a remainder of the collision chain is unlocked, and determining, by the cache manager, that the key in the cache header signature still matches or no longer matches the key of the cache lookup operation after acquiring the entry lock;
based on determining that the key in the cache header signature still matches the key of the cache lookup operation after acquiring the entry lock, finding, by the cache manager, the object data in the expected location; and
based on determining that the key in the cache header signature no longer matches the key of the cache lookup operation after acquiring the entry lock, releasing, with the cache manager, the entry lock, and again walking, with the cache manager, the collision chain.

US Pat. No. 11,030,108

SYSTEM, APPARATUS AND METHOD FOR SELECTIVE ENABLING OF LOCALITY-BASED INSTRUCTION HANDLING

Intel Corporation, Santa...

1. A processor comprising:a core to receive a memory access instruction having a no-locality hint to indicate that data associated with the memory access instruction has at least one of non-spatial locality and non-temporal locality;
a buffer having a plurality of entries each to store, for a memory access instruction to a particular address, information including address information;
a cache hierarchy; and
a memory controller coupled to the core, the memory controller to issue requests to a memory, the memory controller including a locality controller to receive the memory access instruction having the no-locality hint and override the no-locality hint to store the data in the cache hierarchy, based at least in part on the information stored in an entry of the buffer.

US Pat. No. 11,030,107

STORAGE CLASS MEMORY QUEUE DEPTH THRESHOLD ADJUSTMENT

Hewlett Packard Enterpris...

1. A method of a storage system comprising:determining, with a first controller of the storage system, a representative input/output (IO) request latency between the first controller and a storage class memory (SCM) read cache during a given time period in which the first controller and a second controller of the storage system are sending IO requests to the SCM read cache, the first and second controllers each comprising a respective main cache, and the storage system comprising backend storage;
determining a first total amount of data read from and written to the SCM read cache by the first controller during the given time period;
determining, with the first controller, an estimated cumulative amount of data read from and written to the SCM read cache by the first and second controllers during the given time period, based on the determined representative IO request latency for the given time period and performance data for the SCM read cache;
determining, based on the first total amount of data and the estimated cumulative amount of data, an adjustment amount for the at least one SCM queue depth threshold of the first controller;
adjusting at least one SCM queue depth threshold of the first controller by the adjustment amount, in response to a determination that the determined representative IO request latency exceeds an IO request latency threshold for the SCM read cache;
in response to an IO request of the first controller for the SCM read cache, comparing an SCM queue depth of the first controller to one of the at least one SCM queue depth threshold of the first controller;
selecting between processing the IO request using the SCM read cache, dropping the IO request, and processing the IO request without using the SCM read cache, based on a type of the IO request and a result of the comparison; and
performing the selected processing or dropping in response to the selection.
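The per-request selection at the end of this claim can be sketched as a small routing function: compare the controller's current SCM queue depth to a threshold, then choose among using the SCM read cache, bypassing it, or dropping the request based on the request type. The specific routing rules and names below are illustrative assumptions, not the patent's exact policy.

```python
# Sketch of the claimed selection between using the SCM read cache,
# dropping the IO, and processing it without the SCM read cache.

def route_io(request_type, queue_depth, threshold):
    if queue_depth <= threshold:
        return "use_scm_read_cache"
    # Over threshold (assumed policy): reads can bypass the SCM cache and
    # go to backend storage; a cache-fill (destage) write can be dropped.
    if request_type == "read":
        return "process_without_scm"
    return "drop"

print(route_io("read", queue_depth=4, threshold=8))    # under threshold
print(route_io("read", queue_depth=12, threshold=8))   # bypass the cache
print(route_io("destage_write", queue_depth=12, threshold=8))
```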

US Pat. No. 11,030,106

STORAGE SYSTEM AND METHOD FOR ENABLING HOST-DRIVEN REGIONAL PERFORMANCE IN MEMORY

Western Digital Technolog...

1. A method for enabling host-driven regional performance in memory, the method comprising:performing the following in a storage system comprising a non-volatile memory and a volatile memory:
receiving a directive from a host device as to a preferred logical region of the non-volatile memory;
based on the directive, modifying a caching policy specifying which pages of a logical-to-physical address map stored in the non-volatile memory are to be cached in the volatile memory, wherein priority is given to pages of the logical-to-physical address map that cover the preferred logical region over other pages of the logical-to-physical address map;
caching, in the volatile memory, the pages of the logical-to-physical address map that cover the preferred logical region; and
preventing the pages of the logical-to-physical address map that cover the preferred logical region from being removed from the volatile memory until a different directive from the host device is received to set a new preferred logical region.
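The caching policy in this claim amounts to pinning: pages of the logical-to-physical (L2P) map that cover the host's preferred region are loaded into volatile memory and exempted from eviction until a new directive arrives. A minimal sketch, with all class and method names assumed:

```python
# Sketch of the claimed host-directed L2P page pinning. Assumes the cache
# capacity exceeds the number of pinned pages.

class L2PCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cached = []     # page ids, front = next eviction candidate
        self.pinned = set()  # pages covering the preferred logical region

    def set_preferred(self, pages):
        self.pinned = set(pages)        # a new directive replaces the old pin
        for p in pages:
            self.load(p)

    def load(self, page):
        if page in self.cached:
            return
        while len(self.cached) >= self.capacity:
            victim = next(p for p in self.cached if p not in self.pinned)
            self.cached.remove(victim)  # never evict a pinned page
        self.cached.append(page)

cache = L2PCache(capacity=3)
cache.set_preferred([10, 11])   # directive: prefer pages 10 and 11
cache.load(5)
cache.load(6)                   # forces an eviction, but 10 and 11 survive
print(sorted(cache.cached))
```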

US Pat. No. 11,030,105

VARIABLE HANDLES

Oracle International Corp...

1. A method comprising:generating a first receiver configured to provide secure access to memory that represents a first variable type through a plurality of distinct memory fencing operations;
generating a second receiver configured to provide secure access to memory that represents a second variable type through the plurality of distinct memory fencing operations, wherein the first variable type is not the second variable type;
a particular receiver receiving a call to a function that specifies a particular memory fencing operation of the plurality of distinct memory fencing operations to perform with respect to a memory location, wherein the function has a signature that is polymorphic to indicate that the function can return all types and accept all numbers and types of arguments, wherein the particular receiver is:
selected from the group consisting of the first receiver and the second receiver, and
related to a class that declares the plurality of distinct memory fencing operations for a plurality of different variable types, but without including an implementation of the plurality of distinct memory fencing operations for at least one particular variable type of the plurality of different variable types;
the particular receiver causing, responsive to receiving said call, performance of the particular memory fencing operation with respect to the memory location; and
wherein the method is performed by one or more processors.

US Pat. No. 11,030,104

PICKET FENCE STAGING IN A MULTI-TIER CACHE

International Business Ma...

1. A computer program product configured for use with a computer system having a host, and a data storage system having storage, wherein the computer program product comprises a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor of the computer system to cause computer system operations, the computer system operations comprising:
storing in storage, a track having a set of sectors of data;
caching update data for the track in a fast cache tier of a multi-tier cache in a first track write caching operation which includes caching update data for a first subset of sectors of the track in the fast cache tier and omitting caching data for a second subset of sectors of the track in the fast cache tier to define upon completion of the first track write caching operation, a number of track holes equal to zero or more track holes of the track in the fast cache tier in which a track hole corresponds to one or more contiguous sectors of the second subset of sectors of the track for which data caching is omitted during the first track write caching operation,
determining the number of track holes defined by the second subset of sectors omitted from the first track write caching operation; and
queuing prestage requests in one of a plurality of prestage request queues as a function of the number of track holes determined to be present in the second subset of sectors absent from the fast cache tier, wherein each prestage request when executed prestages read data from storage to a slow cache tier of the multi-tier cache, for one or more sectors for the second subset of sectors identified by one or more track holes.
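The hole-counting and queue-selection steps above can be sketched as follows; the bitmap representation and queue policy are illustrative assumptions, not from the patent:

```python
def count_track_holes(cached):
    """Count maximal runs of uncached sectors (track holes) in a
    per-sector cached/uncached bitmap for one track (illustrative)."""
    holes = 0
    in_hole = False
    for sector_cached in cached:
        if not sector_cached and not in_hole:
            holes += 1            # a new hole starts here
            in_hole = True
        elif sector_cached:
            in_hole = False
    return holes

def select_prestage_queue(num_holes, num_queues):
    # Hypothetical policy: one queue per hole count, capped at the last queue.
    return min(num_holes, num_queues - 1)
```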

US Pat. No. 11,030,103

DATA COHERENCY MANAGER WITH MAPPING BETWEEN PHYSICAL AND VIRTUAL ADDRESS SPACES

Imagination Technologies ...

1. A coherency manager for receiving physically addressed snoop requests, the snoop requests being addressed in a physical address space and relating to a virtually addressed cache memory addressable using a virtual address space, the cache memory having a plurality of virtually addressed coherent cachelines, the coherency manager comprising:
a reverse translation module configured to maintain a mapping from physical addresses to virtual addresses for each coherent cacheline held in the cache memory; and
a snoop processor configured to:
receive a physically addressed snoop request relating to a physical address;
in response to the received snoop request, determine whether the physical address is mapped to a virtual address in the reverse translation module; and
process the snoop request in dependence on that determination.
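A minimal sketch of the reverse translation module and the snoop processor's determination step, with invented names and a dictionary standing in for the hardware mapping:

```python
class ReverseTranslationModule:
    """Sketch: maintain a physical->virtual mapping for each coherent
    cacheline held in a virtually addressed cache (simplified)."""

    def __init__(self):
        self.p2v = {}

    def track(self, phys, virt):
        self.p2v[phys] = virt      # cacheline brought into the cache

    def evict(self, phys):
        self.p2v.pop(phys, None)   # cacheline no longer held

def process_snoop(rtm, phys_addr):
    # Determine whether the physical address maps to a cached virtual line.
    virt = rtm.p2v.get(phys_addr)
    if virt is None:
        return ("miss", None)      # not held: respond without a cache probe
    return ("lookup", virt)        # forward a virtually addressed probe
```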

US Pat. No. 11,030,102

REDUCING MEMORY CACHE CONTROL COMMAND HOPS ON A FABRIC

Apple Inc., Cupertino, C...

1. A computing system comprising:
one or more processing units;
one or more memory controllers; and
a communication fabric distinct from the one or more processing units and the one or more memory controllers, wherein the communication fabric comprises a plurality of transaction processing queues, is coupled to the one or more processing units and the one or more memory controllers, and is configured to:
receive, from a first processing unit, a command payload and a data payload of a first write transaction;
store the command payload and the data payload in a given transaction processing queue of the plurality of transaction processing queues; and
prior to moving the command payload and the data payload out of the given transaction processing queue, complete, for the first write transaction:
coherence operations; and
a memory cache lookup of a memory cache accessed independently of the one or more processing units and the one or more memory controllers.

US Pat. No. 11,030,101

CACHE STORAGE FOR MULTIPLE REQUESTERS AND USAGE ESTIMATION THEREOF

ARM Limited, Cambridge (...

1. A cache memory comprising:
cache storage configured to store cache lines for a plurality of requesters;
cache control circuitry that controls insertion of a cache line into the cache storage when a memory access request from one of the plurality of requesters misses in the cache memory; and
cache occupancy estimation circuitry configured to hold a count of insertions of cache lines into the cache storage for each of the plurality of requesters over a defined period.
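The per-requester insertion counting can be sketched as below; the period-reset behavior is an illustrative assumption:

```python
from collections import Counter

class CacheOccupancyEstimator:
    """Sketch: estimate per-requester cache occupancy by counting line
    insertions over a defined period, resetting counts each period."""

    def __init__(self):
        self.insertions = Counter()

    def on_miss_insert(self, requester_id):
        # Called by cache control when a miss inserts a line.
        self.insertions[requester_id] += 1

    def end_period(self):
        # Return this period's counts and start a fresh period.
        snapshot = dict(self.insertions)
        self.insertions.clear()
        return snapshot
```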

US Pat. No. 11,030,100

EXPANSION OF HBA WRITE CACHE USING NVDIMM

International Business Ma...

1. A computer system comprising:
a processing device;
a host bus adaptor (HBA) in communication with the processing device;
at least one persistent storage device in communication with the HBA;
a memory operatively coupled to the processing device, the memory comprising at least one non-volatile dual in-line memory module (NVDIMM), the memory further comprising a NVDIMM HBA Write Cache Module (NHWCM), the NHWCM in communication with the HBA and the NVDIMM, the NHWCM configured to:
determine there is at least one file to be written to the persistent storage from the processing device;
determine that the NVDIMM includes sufficient storage capacity for storage of the at least one file;
write the at least one file to the NVDIMM;
determine that the HBA is available to receive the at least one file from the NVDIMM; and
write the at least one file to the persistent storage from the NVDIMM through the HBA.

US Pat. No. 11,030,099

DATA STORAGE APPARATUS AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A data storage apparatus comprising:
a nonvolatile memory device including a plurality of memory blocks in which a plurality of word lines to which one or more pages are coupled are arranged;
a data buffer configured to buffer data to be stored in the one or more pages of the nonvolatile memory device; and
a processor configured to perform a write operation on a first word line and then on a second word line adjacent to the first word line, detect, when a sudden power off (SPO) occurs after or during the write operation on the second word line, one or more first pages coupled to the first word line, in which an interference has occurred due to the write operation on the second word line, and selectively store data corresponding to data stored in the one or more first pages, among the data buffered in the data buffer, in a backup memory block of the nonvolatile memory device.

US Pat. No. 11,030,098

CONFIGURABLE BURST OPTIMIZATION FOR A PARAMETERIZABLE BUFFER

Micron Technology, Inc., ...

1. A method comprising:
designating each client of a buffer memory as either a burst over-write client or a data preserving client based on a function performed by each client;
accessing an incoming write request received from a client of the buffer memory, the incoming write request comprising a write command to transfer write data to the buffer memory;
writing an initial portion of the write data to the buffer memory;
determining that a final portion of the write data is unaligned with a memory bank width of the buffer memory;
determining a designation of the client by accessing a configuration register that stores client designations for a plurality of clients of the buffer memory, the designation of the client indicating whether data previously stored in the buffer memory is to be preserved when writing the final portion of the write data to the buffer memory, the client being designated as either a burst-overwrite client or a data preserving client; and
in response to determining that the final portion of the write data is unaligned with the memory bank width of the buffer memory, determining whether to preserve previously stored data in the buffer memory based on the designation of the client.
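The designation-dependent handling of an unaligned final burst can be sketched as follows; the register encoding, list-of-bytes model, and function names are invented for illustration:

```python
BURST_OVERWRITE, DATA_PRESERVING = 0, 1   # hypothetical register encoding

def handle_unaligned_tail(config_register, client_id, tail, old_word, width):
    """Sketch: when the final burst is narrower than a memory-bank word,
    either pad and overwrite the whole word, or merge with the bytes
    already stored, depending on the client's configured designation."""
    if config_register[client_id] == BURST_OVERWRITE:
        # Burst over-write client: previously stored bytes are discarded.
        return tail + [0] * (width - len(tail))
    # Data-preserving client: read-modify-write, keeping old trailing bytes.
    return tail + old_word[len(tail):]
```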

US Pat. No. 11,030,097

VERIFYING THE VALIDITY OF A TRANSITION FROM A CURRENT TAIL TEMPLATE TO A NEW TAIL TEMPLATE FOR A FUSED OBJECT

Oracle International Corp...

1. A method, comprising:
identifying a set of one or more instructions to instantiate a fused object associated with a head and a repeating tail;
determining a tail template to be repeated in the repeating tail;
determining a memory layout for the fused object, wherein determining the memory layout for the fused object comprises:
repeating the tail template in the repeating tail;
allocating a particular portion of memory for the fused object according to the memory layout;
wherein the tail template indicates a plurality of types corresponding respectively to a plurality of offsets;
wherein repeating the tail template in the repeating tail comprises:
determining that a first set of offsets in the repeating tail correspond respectively to the plurality of types; and
determining that a second set of offsets, subsequent to the first set of offsets, in the repeating tail correspond respectively to the plurality of types,
wherein the method is performed by at least one device including a hardware processor.
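The tail-template repetition and the offset/type correspondence check can be sketched as below; the (offset, type) pair representation and explicit stride are illustrative assumptions:

```python
def tail_layout(template, repeats, stride):
    """Sketch: lay out a fused object's repeating tail by repeating a
    template of (offset, type) pairs at a fixed stride."""
    layout = []
    for r in range(repeats):
        for off, typ in template:
            layout.append((r * stride + off, typ))
    return layout

def check_repetition(layout, template):
    # Verify each successive set of offsets corresponds, in order,
    # to the plurality of types declared by the template.
    types = [t for _, t in template]
    for start in range(0, len(layout), len(template)):
        chunk = layout[start:start + len(template)]
        if [t for _, t in chunk] != types:
            return False
    return True
```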

US Pat. No. 11,030,096

METHOD OF IDENTIFYING AND PREPARING A KEY BLOCK IN A FLASH MEMORY SYSTEM AND MEMORY CONTROLLER THEREFOR

1. A method for preparing a key block in a memory system, comprising:
selecting a candidate key block of memory;
checking a quality of the candidate key block using a non-data word line of the candidate key block;
altering operating parameters of the candidate key block;
registering the candidate key block as the key block;
receiving a value indicative of temperature of the memory system;
determining the value indicative of temperature is above a temperature threshold amount;
in response to determining the value indicative of temperature is above the temperature threshold amount, performing a read scrub operation on the key block to retrieve key data in the key block as second key data,
determining a bit error rate of the second key data is above a bit error rate threshold amount; and
in response to determining the bit error rate of the second key data is above the bit error rate threshold amount, moving the second key data to a second key block.
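The temperature-triggered maintenance flow of this claim can be sketched as a small state check; the threshold values and callback interface are invented for illustration:

```python
TEMP_THRESHOLD = 85      # hypothetical temperature threshold amount
BER_THRESHOLD = 0.01     # hypothetical bit error rate threshold amount

def maintain_key_block(temperature, read_key_block, move_to_second_block):
    """Sketch: on high temperature, read-scrub the key block; if the
    retrieved second key data exceeds the bit error rate threshold,
    relocate it to a second key block."""
    if temperature <= TEMP_THRESHOLD:
        return "no-op"
    key_data, bit_error_rate = read_key_block()   # read scrub operation
    if bit_error_rate <= BER_THRESHOLD:
        return "scrubbed"
    move_to_second_block(key_data)
    return "relocated"
```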

US Pat. No. 11,030,095

VIRTUAL SPACE MEMORY BANDWIDTH REDUCTION

ADVANCED MICRO DEVICES, I...

1. A method, comprising:
ascertaining convolutional parameters associated with an image in an image space;
determining, based on said convolutional parameters, a total result area in a virtual matrix-multiplication space of a virtual matrix-multiplication output matrix;
partitioning said total result area of said virtual matrix-multiplication output matrix into a plurality of virtual segments; and
allocating convolution operations to a plurality of compute units based on each virtual segment of said plurality of virtual segments.

US Pat. No. 11,030,094

APPARATUS AND METHOD FOR PERFORMING GARBAGE COLLECTION BY PREDICTING REQUIRED TIME

SK hynix Inc., Gyeonggi-...

1. A memory system comprising:
a nonvolatile memory device including a plurality of dies, each die including a plurality of planes, each plane including a plurality of blocks, each block including a plurality of pages, and further including a plurality of page buffers, each page buffer for caching data in a unit of a page to be inputted to, and outputted from, each of the blocks; and
a controller suitable for managing a plurality of super blocks according to a condition, each super block including N blocks capable of being read in parallel among the blocks, generating predicted required times for the super blocks, respectively, each of the predicted required times representing a time needed to extract valid data from the corresponding super block, and selecting a victim block for garbage collection from among the blocks based on the predicted required times,
wherein N is a natural number of 2 or greater, and
wherein the controller generates and manages the predicted required times during a period in which a number of free blocks among the blocks is less than or equal to a first number, and
the controller does not manage the predicted required times during a period in which the number of free blocks among the blocks is greater than or equal to a second number, which is greater than the first number.
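The conditional tracking with the two free-block thresholds forms a hysteresis loop, which can be sketched as below; the class shape and estimator callback are illustrative, not from the patent:

```python
class GcScheduler:
    """Sketch: track per-super-block predicted valid-data extraction
    times only while free blocks run low, with hysteresis between a
    first (low) and second (high) free-block count, and pick the
    cheapest super block as the garbage collection victim."""

    def __init__(self, first_num, second_num):
        assert first_num < second_num
        self.first_num, self.second_num = first_num, second_num
        self.predicted = {}       # super_block -> predicted required time
        self.tracking = False

    def update_free_blocks(self, free_blocks, estimator, super_blocks):
        if free_blocks <= self.first_num:
            self.tracking = True                  # start managing times
        elif free_blocks >= self.second_num:
            self.tracking = False                 # stop managing times
            self.predicted.clear()
        if self.tracking:
            self.predicted = {sb: estimator(sb) for sb in super_blocks}

    def pick_victim(self):
        if not self.predicted:
            return None
        return min(self.predicted, key=self.predicted.get)
```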

US Pat. No. 11,030,093

HIGH EFFICIENCY GARBAGE COLLECTION METHOD, ASSOCIATED DATA STORAGE DEVICE AND CONTROLLER THEREOF

Silicon Motion, Inc., Hs...

1. A high efficiency garbage collection method, the high efficiency garbage collection method being applicable to a data storage device, the data storage device comprising a non-volatile (NV) memory, the NV memory comprising at least one NV memory element, the high efficiency garbage collection method comprising:
starting and executing a garbage collection procedure;
determining whether a Trim command from a host device is received;
in response to the Trim command being received, determining whether target data of the Trim command is stored in a source block of the garbage collection procedure, wherein:
the step of determining whether the Trim command from the host device is received is executed multiple times, in order to respectively generate a first determination result and a second determination result, wherein the first determination result and the second determination result respectively indicate the Trim command being received and the Trim command being not received; and
the step of determining whether the target data of the Trim command is stored in the source block of the garbage collection procedure is performed in response to the first determination result;
in response to the target data being stored in the source block, determining whether the target data stored in the source block has been copied to a destination block of the garbage collection procedure;
in response to the target data stored in the source block having been copied to the destination block, changing at least one physical address of the target data of the Trim command to a Trim tag in a logical-to-physical (L2P) address mapping table, wherein the Trim tag indicates invalidation of the target data;
in response to the second determination result, determining whether to close the destination block of the garbage collection procedure; and
in response to closing the destination block of the garbage collection procedure, checking whether a physical address of valid page data of the destination block in the L2P address mapping table is the Trim tag, in order to determine whether to change said physical address of the valid page data in the L2P address mapping table to a default value.
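The Trim-tagging step for data already copied during garbage collection can be sketched as below; the tuple-based L2P table and tag value are invented for illustration:

```python
TRIM_TAG = -1   # hypothetical sentinel marking invalidated target data

def apply_trim_during_gc(l2p, lba, source_block, copied_lbas):
    """Sketch: if the Trim command's target data lives in the GC source
    block and has already been copied to the destination block, replace
    its physical address with a Trim tag in the L2P table so the stale
    destination copy is never mapped. Returns whether the target data
    was found in the source block."""
    phys = l2p.get(lba)
    if phys is None or phys == TRIM_TAG or phys[0] != source_block:
        return False
    if lba in copied_lbas:
        l2p[lba] = TRIM_TAG
    return True
```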

US Pat. No. 11,030,092

ACCESS REQUEST PROCESSING METHOD AND APPARATUS, AND COMPUTER SYSTEM

HUAWEI TECHNOLOGIES CO., ...

1. An access request processing method performed by a computer system comprising a non-volatile memory (NVM), wherein the access request processing method comprises:
receiving a write request for writing to-be-written data;
determining an object cache page corresponding to the to-be-written data, wherein the object cache page is a memory page for caching file data of an object file in an internal memory of the computer system, wherein the to-be-written data is for modifying the file data in the object cache page;
determining that the NVM stores a log chain of the object cache page, wherein the log chain comprises a first data node, wherein the first data node comprises information about a first log data chunk of the object cache page, and wherein the first log data chunk comprises modified data of the object cache page during a modification of the object file;
inserting a second data node into the log chain, wherein the second data node comprises information about a second log data chunk of the object cache page, wherein the second log data chunk comprises at least part of the to-be-written data, and wherein the information about the second log data chunk comprises the second log data chunk or a storage address of the second log data chunk in the NVM;
determining that an intra-page location of the second log data chunk overlaps an intra-page location of the first log data chunk, wherein the intra-page location of the second log data chunk is a location of the second log data chunk in the object cache page, and wherein the intra-page location of the first log data chunk is a location of the first log data chunk in the object cache page; and
setting, in the first data node, data that is in the first log data chunk and that overlaps the second log data chunk to invalid data.
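The log-chain insertion with overlap invalidation can be sketched as below; the dictionary node layout and half-open byte ranges are illustrative assumptions:

```python
def insert_log_chunk(chain, new_start, new_end, data):
    """Sketch: append a data node to a cache page's NVM log chain and
    mark the overlapped intra-page ranges of earlier chunks invalid.
    Ranges are half-open [start, end) byte offsets within the page."""
    for node in chain:
        # Record which part of the older chunk the new write supersedes.
        lo = max(node["start"], new_start)
        hi = min(node["end"], new_end)
        if lo < hi:
            node["invalid"].append((lo, hi))
    chain.append({"start": new_start, "end": new_end,
                  "data": data, "invalid": []})
```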

US Pat. No. 11,030,091

SEMICONDUCTOR STORAGE DEVICE FOR IMPROVED PAGE RELIABILITY

Winbond Electronics Corp....

1. A semiconductor storage device, comprising:
a NAND type storage unit array, comprising a plurality of blocks;
a volatile memory, storing a conversion table configured for converting a logical address into a physical address;
a processor, configured to:
detect whether a power supply voltage drops to a fixed voltage; and
perform programming of data on a page of a block selected according to the physical address; and
a non-volatile memory, wherein the non-volatile memory stores a target logical address of the page of the block currently being programmed and conversion information configured for converting the target logical address into an other physical address when the fixed voltage is detected during a period when the programming is performed by the processor,
wherein the processor compares an inputted logical address with the target logical address, and the processor converts the inputted logical address into the physical address according to the conversion table of the volatile memory when the inputted logical address and the target logical address are not identical or converts the inputted logical address into the other physical address according to the conversion information of the non-volatile memory when the inputted logical address and the target logical address are identical.

US Pat. No. 11,030,090

ADAPTIVE DATA MIGRATION

Pure Storage, Inc., Moun...

1. A method for elective garbage collection in storage memory, performed by a storage system, comprising:
selecting between a RAID rebuild and a garbage collection move to perform for data migration, based on detection of an imbalance of available storage space across a plurality of portions of storage memory available, wherein both the RAID rebuild and the garbage collection move are configurable to stripe data across each of the plurality of portions of storage memory.

US Pat. No. 11,030,089

ZONE BASED RECONSTRUCTION OF LOGICAL TO PHYSICAL ADDRESS TRANSLATION MAP

Micron Technology, Inc., ...

1. A method comprising:
loading a portion of a translation map on a first memory component of a storage system, the translation map mapping a first plurality of logical block addresses (LBAs) to a first plurality of physical block addresses, wherein the first plurality of LBAs are ordered in a logically consecutive order within the translation map;
identifying a last snapshot of the portion of the translation map stored on a second memory component of the storage system, wherein the second memory component is different from the first memory component;
identifying one or more logs of write operations that are stored on the second memory component after the last snapshot of the portion of the translation map was stored, wherein the one or more logs comprise a second plurality of LBAs mapped to a second plurality of physical block addresses and wherein the second plurality of LBAs are ordered in a chronological order in which the second plurality of LBAs were written to; and
reconstructing, by a processing device, the portion of the translation map by:
reading each of the chronologically ordered second plurality of LBAs of the one or more logs to identify a first logical block address (LBA) that matches with a second LBA within the consecutively ordered logical addresses on the loaded portion of the translation map on the first memory component; and
updating a physical block address corresponding to the second LBA on the loaded portion with a physical block address corresponding to the first LBA from the one or more logs.
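The reconstruction step reduces to replaying the chronologically ordered logs on top of the snapshot, which can be sketched as follows (data shapes are illustrative):

```python
def reconstruct_portion(snapshot, logs):
    """Sketch: rebuild a portion of the L2P translation map by replaying
    chronologically ordered write logs over the last snapshot. Later log
    entries win because they are applied in the order they were written."""
    portion = dict(snapshot)           # LBA -> physical block address
    for lba, pba in logs:              # logs are in chronological order
        if lba in portion:             # only LBAs covered by this portion
            portion[lba] = pba
    return portion
```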

US Pat. No. 11,030,088

PSEUDO MAIN MEMORY SYSTEM

Samsung Electronics Co., ...

1. A system comprising:
a processor;
a memory system comprising:
an adapter module; and
a first memory; and
a second memory connected to the processor through a management module,
wherein the adapter module has a first interface connected to the processor and a second interface connected to the first memory,
wherein the adapter module is configured to store data in, and retrieve data from, the first memory, utilizing a change of a storage capacity of the first memory, the change being configured to modify free memory of the first memory according to a ratio,
wherein the adapter module is further configured to estimate the ratio as an estimated ratio, and
wherein the processor is further configured to assess that sufficient space is available in the first memory based on the estimated ratio, the estimated ratio being a function of augmentation ratios for data stored in the first memory over an interval of time.

US Pat. No. 11,030,087

SYSTEMS AND METHODS FOR AUTOMATED INVOCATION OF ACCESSIBILITY VALIDATIONS IN ACCESSIBILITY SCRIPTS

JPMORGAN CHASE BANK, N.A....

1. A method for automated invocation of accessibility validations in accessibility scripts, comprising:
in an information processing apparatus comprising at least one computer processor, an automated accessibility test program performing the following:
invoking an automated test program;
invoking the automated accessibility test program in the automated test program wherein the automated accessibility test program comprises a binary file that is integrated into the automated test program;
loading a webpage to be validated;
identifying at least one interactive webpage element on the webpage;
causing the automated accessibility test program to validate the interactive webpage element with the automated accessibility test program;
storing a result of the validation; and
performing an action validation on the interactive webpage element.

US Pat. No. 11,030,086

MACHINE LEARNING MODEL FULL LIFE CYCLE MANAGEMENT FRAMEWORK

TENCENT AMERICA LLC, Pal...

1. An apparatus comprising:
at least one memory configured to store computer program code;
at least one hardware processor configured to access said computer program code and operate as instructed by said computer program code, said computer program code including:
model storage code configured to cause the at least one hardware processor to store an artificial intelligence (AI) model;
model serving code configured to cause the at least one hardware processor to load the AI model into a serving platform from a plurality of serving platforms;
model testing code configured to cause the at least one hardware processor to load and test a test unit against the AI model loaded into the serving platform; and
monitoring and reporting code configured to cause the at least one hardware processor to collect reports from results of storing the AI model, loading the AI model into the serving platform and testing the test unit against the AI model loaded into the serving platform.

US Pat. No. 11,030,085

USER DEFINED MOCKING SERVICE BEHAVIOR

salesforce.com, inc., Sa...

1. A method for testing an application programming interface (API) at a server, comprising:
parsing, in memory of the server, an API specification for the API to determine a parsed model for the API specification;
generating a mock implementation of the API based at least in part on the parsed model for the API specification;
receiving, from a user device, a request message indicating the mock implementation of the API and one or more response behavior parameters comprising at least a first response behavior parameter requesting dynamically generated data for a data field supported by the API specification;
generating random data according to a data type of the data field based at least in part on the first response behavior parameter;
running, in the memory of the server, the mock implementation of the API using the generated random data to determine a response to the request message according to the one or more response behavior parameters; and
sending, to the user device, the response to the request message.
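The random-data generation by field type can be sketched as below; the supported type names and value ranges are illustrative, not drawn from any real API specification:

```python
import random
import string

def random_value(data_type, rng=random.Random(0)):
    """Sketch: generate random data according to a field's data type,
    as a mocking service might for a dynamically generated field."""
    if data_type == "integer":
        return rng.randint(0, 10_000)
    if data_type == "boolean":
        return rng.choice([True, False])
    if data_type == "string":
        return "".join(rng.choices(string.ascii_lowercase, k=8))
    raise ValueError(f"unsupported type: {data_type}")

def mock_response(field_specs):
    # field_specs: {field_name: data_type}, as parsed from the API spec.
    return {name: random_value(t) for name, t in field_specs.items()}
```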

US Pat. No. 11,030,084

API SPECIFICATION PARSING AT A MOCKING SERVER

salesforce.com, Inc., Sa...

1. A method for testing an application programming interface (API) at a server, comprising:
receiving, from a user device, an identifier indicating an API specification for the API;
retrieving the API specification based at least in part on the identifier;
parsing, in memory of the server, the API specification to determine a parsed model for the API specification;
receiving, from the user device, a request message indicating a mock implementation of the API;
generating the mock implementation of the API based at least in part on the parsed model for the API specification;
generating a link to the mock implementation of the API, the link configured to be shared from the user device to an additional user device;
providing the link to the mock implementation of the API to the user device; and
running, in the memory of the server, the mock implementation of the API according to the request message.

US Pat. No. 11,030,083

SYSTEMS AND METHODS FOR REDUCING STORAGE REQUIRED FOR CODE COVERAGE RESULTS

ATLASSIAN PTY LTD., Sydn...

1. A computer implemented method performed at a computer system comprising a processor executing instructions, the method comprising:
receiving new code coverage analysis data for a new code coverage analysis performed on an updated version of a particular source code base and a test suite,
wherein the particular source code base is maintained in a source code repository of a version control system;
parsing, using the processor, a previous code coverage commit stored at a code coverage result repository of the version control system to identify a first code coverage file that includes code coverage data for a first source code file, and
a second code coverage file that includes code coverage data for a second source code file, wherein the first and second source code files are associated with a previous version of the particular source code base;
comparing the new code coverage analysis data to the previous code coverage commit to determine that the first code coverage file is unmodified and that the second code coverage file has changed; and
generating a new code coverage commit for the updated version of the particular source code base, comprising:
a link to the first code coverage file stored in the previous code coverage commit; and
a modified version of the second code coverage file comprising changes to the second code coverage file that resulted from changes made to the updated version of the particular source code base.
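The storage-saving idea of this claim (link unchanged coverage files, store only changed ones) can be sketched as follows; the commit representation and hashing choice are illustrative assumptions:

```python
import hashlib

def make_coverage_commit(previous, new_files):
    """Sketch: build a new code coverage commit that stores only changed
    coverage files; unchanged files become links into the previous
    commit (content compared by hash)."""
    def digest(data):
        return hashlib.sha256(data.encode()).hexdigest()

    commit = {}
    for name, content in new_files.items():
        prev = previous.get(name)
        if prev is not None and digest(prev) == digest(content):
            commit[name] = ("link", name)      # reuse previous commit's file
        else:
            commit[name] = ("file", content)   # store the modified version
    return commit
```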

US Pat. No. 11,030,082

APPLICATION PROGRAMMING INTERFACE SIMULATION BASED ON DECLARATIVE ANNOTATIONS

salesforce.com, inc., Sa...

1. A computer implemented method for application programming interface (API) simulation, the method comprising:
receiving, by an application programming interface (API) simulator, a description of an API schema comprising a set of methods, the description comprising, for each method:
(1) a set of input parameters,
(2) a schema of a response object, and
(3) one or more annotations, each annotation representing a constraint on the response object of the method, at least one annotation specifying a constraint based on an input parameter;
receiving, by the API simulator, a request for an invocation of a method of the API schema, the request specifying a set of values for input parameters of the method;
accessing, by the API simulator, the one or more annotations for the method as specified in the description of the API schema;
generating, by the API simulator, a synthetic response object for the method invocation, the generating comprising, assigning a value to each attribute of the synthetic response object, wherein the assigned value satisfies constraints specified by the one or more annotations; and
providing, by the API simulator, the synthetic response object as a result of the invocation of the method.
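The synthetic-response generation with annotation constraints can be sketched as below; encoding annotations as callables over the input parameters is an illustrative choice, not the patent's declarative format:

```python
def synthesize_response(schema, annotations, params):
    """Sketch: assign each response attribute a default for its declared
    type, then apply annotation constraints, some of which derive their
    value from the request's input parameters."""
    defaults = {"integer": 0, "string": "", "boolean": False}
    response = {attr: defaults[typ] for attr, typ in schema.items()}
    for attr, constraint in annotations.items():
        # Each annotation constrains one attribute of the response object.
        response[attr] = constraint(params)
    return response
```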

US Pat. No. 11,030,081

INTEROPERABILITY TEST ENVIRONMENT

Michigan Health Informati...

1. A method comprising:
receiving, at data processing hardware of a collaboration platform, from a client device, configuration data for creating a collaboration environment for building and testing a software application associated with treatment of a particular disease or medical condition;
based on the configuration data:
generating, by the data processing hardware, a simulated network of simulated services, each simulated service within the simulated network of services comprising a set of resources;
obtaining, by the data processing hardware, one or more synthetic personas configured to progress through the simulated network of simulated services, each synthetic persona comprising a respective synthetic representation of an individual; and
receiving, at the data processing hardware, from the client device, modification inputs to modify the synthetic representation of the individual of at least one of the one or more synthetic personas to specifically tailor the at least one of the one or more synthetic personas to align with building and testing the software application; and
transmitting, by the data processing hardware, visualization data associated with execution of the software application in the collaboration environment to the client device, the client device configured to display the visualization data on a user interface.

US Pat. No. 11,030,080

SYSTEM FOR OPTIMIZATION OF DATA LOADING IN A SOFTWARE CODE DEVELOPMENT PLATFORM

BANK OF AMERICA CORPORATI...

1. A system for optimization of data loading in a software code development platform, comprising:
at least one processing device;
at least one memory device; and
a module stored in the at least one memory device comprising executable instructions that when executed by the at least one processing device, cause the at least one processing device to:
gather one or more input parameters associated with data loading in a software application;
simulate a production environment based on the one or more input parameters for executing the software application, wherein the production environment is a replica of a real-time production server;
execute a data loading code associated with the software application in the simulated production environment;
calculate a data loading time based at least on historical data and an output associated with executing the data loading code in the simulated production environment;
identify that the data loading time is greater than a predetermined threshold;
automatically optimize, via a machine learning model, the data loading code, wherein the machine learning model is trained based on historical optimization data; and
display the optimized data loading code to a user.

US Pat. No. 11,030,079

SERVICE VIRTUALIZATION PLATFORM

Capital One Services, LLC...

1. A method for providing a service virtualization platform, the method comprising:
integrating, by a plugin integrator, one or more virtualization tools;
creating or managing, by a core application programming interface (API), one or more virtual services based on the one or more integrated virtualization tools; and
running, by a proxy agent, the one or more virtual services, and
wherein the plugin integrator is accessed or provided at a plugin layer and the core API is accessed or provided at a core layer, the core layer being different from the plugin layer, and
wherein the creating or managing of the one or more virtual services is performed solely by the core API and the running of the one or more virtual services is performed solely by the proxy agent.

US Pat. No. 11,030,078

SYSTEMS AND METHODS FOR DIGITAL CONTENT TESTING

Facebook, Inc., Menlo Pa...

1. A computer-implemented method comprising:
receiving, by a computing system, test device information identifying one or more user computing devices as test devices;
receiving, by the computing system, a selection of a first advertisement type of a plurality of advertisement types, wherein the first advertisement type specifies a call to action to associate with a test advertisement and wherein the call to action is for installation of an application;
receiving, by the computing system, a first advertisement request from a first user computing device within a predetermined period of time from when the selection of the first advertisement type was received;
determining, by the computing system, that the first user computing device is identified as a test device;
transmitting, by the computing system, the test advertisement of the first advertisement type to the first user computing device based on the determination that the first user computing device is identified as the test device and that the first advertisement request was received within the predetermined period of time; and
transmitting, by the computing system, an error message indicating that no advertisement is found based on a second advertisement request that was received after the predetermined period of time.
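The gating logic of this claim (serve the test advertisement only to a registered test device whose request arrives inside the window, otherwise report that no advertisement is found) can be sketched as below; all names are illustrative, not from the patent:

```python
def respond_to_ad_request(device_id, request_time, test_devices,
                          selection_time, window_seconds, test_ad):
    """Serve the test ad only when the requesting device is registered
    as a test device AND the request arrives within the predetermined
    period after the ad type was selected; otherwise report no ad."""
    within_window = 0 <= request_time - selection_time <= window_seconds
    if device_id in test_devices and within_window:
        return test_ad
    return "error: no advertisement found"
```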

US Pat. No. 11,030,077

FRAMEWORK FOR TESTING AND VALIDATING CONTENT GENERATED BY APPLICATIONS

Amazon Technologies, Inc....

1. A computer-implemented method comprising:
receiving a test specification at a provider network, the test specification being originated by a client and specifying a content validation block to be used as part of validating content generated by an application of the provider network, the content validation block identifying one or more data fields and a corresponding one or more expected data values, wherein the test specification further identifies one or more actions to be performed to test the application;
causing the one or more actions to be performed;
identifying, based on the test specification, one or more services within the provider network;
communicating with the one or more services to retrieve one or more artifacts generated by the application, wherein the artifacts are generated by the application based on performance of the one or more actions;
generating a validation result, the generating including comparing the one or more expected data values specified in the content validation block to a corresponding one or more actual data values obtained from the one or more artifacts; and
providing the validation result to the client.
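The comparison at the heart of the validation result can be sketched as a field-by-field check of the content validation block against values pulled from the artifacts; the flat-dict representation is an assumption for illustration:

```python
def validate_content(validation_block, artifact_values):
    """Compare each expected field/value in the validation block against
    the actual value obtained from the application's artifacts, and
    collect a per-field pass/fail result."""
    results = {}
    for field, expected in validation_block.items():
        actual = artifact_values.get(field)
        results[field] = {"expected": expected, "actual": actual,
                          "passed": actual == expected}
    return results
```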

US Pat. No. 11,030,076

DEBUGGING METHOD

Undo Ltd., Cambridge (GB...

1. A method of generating an output log for analysis of a computer program, the method comprising:
receiving a recording of an execution of the program;
receiving an original output log generated based upon the execution of the program associated with the recording or generated based upon a replaying of the recording, the output log including the output from one or more existing print instructions included in the program;
receiving an additional print instruction to print a value of a data item and an indication of a point in the program at which the additional print instruction is to be evaluated;
determining a corresponding point in the recording of the execution based upon the indication of the point in the program;
evaluating the additional print instruction based upon the recording of the execution and the determined corresponding point without re-compiling or re-executing the program to determine an output of the additional print instruction; and
generating a simulated output log by combining the output of the one or more existing print instructions included in the original output log and the output of the additional print instruction;
wherein combining the output of the one or more existing print instructions and the output of the additional print instructions includes determining a position to insert the output of the additional print instruction within the one or more print instructions included in the original output log according to the order in which the one or more existing print instructions are executed in the recording of the execution and the determined corresponding point in the recording associated with the additional print instruction and inserting the output of the additional print at the determined position.
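The merge described in the final clause (placing the additional print's output among the existing outputs according to execution order in the recording) reduces to an ordered insert; representing each log entry as a `(recording_position, text)` pair is an assumption for illustration:

```python
def build_simulated_log(original_entries, extra_entry):
    """original_entries: list of (recording_position, text) pairs from the
    existing print instructions; extra_entry: the additional print's
    output at its determined point in the recording. Merge by recording
    order to produce the simulated output log."""
    merged = sorted(original_entries + [extra_entry], key=lambda e: e[0])
    return [text for _, text in merged]
```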

US Pat. No. 11,030,075

EFFICIENT REGISTER BREAKPOINTS

MICROSOFT TECHNOLOGY LICE...

1. A method, implemented at a computer system that includes one or more processors, for performing a register breakpoint check, the method comprising:
decoding a machine code instruction that is a subject of a debugging session;
based on the decoding,
identifying one or more registers that the machine code instruction could potentially touch during execution of the machine code instruction, including identifying a register storing a processor flag that could be affected by execution of the machine code instruction, and
inserting an identification of the identified one or more registers into a stream of executable operations for the machine code instruction;
subsequent to inserting the identification of the identified one or more registers into the stream of executable operations for the machine code instruction, executing the machine code instruction by executing the executable operations for the machine code instruction; and
after executing the executable operations for the machine code instruction,
identifying, from the identification of the identified one or more registers that was inserted into the stream of executable operations for the machine code instruction, a particular register that was actually touched by execution of the executable operations for the machine code instruction, the particular register comprising the register storing the processor flag and which was affected by execution of the executable operations for the machine code instruction; and
performing at least the following for the touched particular register:
compare the particular register with a register breakpoint collection; and
generate an event when the particular register is in the register breakpoint collection.
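The final check in this claim (compare each actually-touched register against the breakpoint collection and raise an event on a match) can be sketched as follows, with registers modeled as plain names:

```python
def register_breakpoint_events(touched_registers, breakpoint_collection):
    """After the instruction's executable operations run, emit an event
    for every register that was actually touched and is also in the
    register breakpoint collection."""
    return [("breakpoint", reg) for reg in touched_registers
            if reg in breakpoint_collection]
```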

US Pat. No. 11,030,073

HYBRID INSTRUMENTATION FRAMEWORK FOR MULTICORE LOW POWER PROCESSORS

Oracle International Corp...

1. A method comprising:
rewriting an original binary executable into a rewritten binary executable that invokes telemetry instrumentation that makes performance observations and emits traces of said performance observations;
first executing the rewritten binary executable to make first performance observations and emit first traces of said first performance observations;
second executing the original binary executable;
suspending, based on said first traces of said first performance observations of said first executing, a thread of said original binary executable; and
generating a dynamic performance profile based on said second executing the original binary executable.

US Pat. No. 11,030,072

CREATING AND STARTING FAST-START CONTAINER IMAGES

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
receiving, at a host computer, a preview image of a container, the preview image comprising a subset of an original image of the container;
executing, at the host computer, the preview image of the container for a workload;
receiving in a background mode, at the host computer, subsequent to receiving the preview image of the container, a portion of the original image of the container that does not include the subset of the original image of the container included in the preview image, wherein the portion combined with the subset is the original image; and
based at least in part on detecting a fault during the executing of the preview image of the container, accessing, for continuing execution of the workload, the portion of the original image of the container that does not include the subset of the original image of the container included in the preview image of the container.
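The fault-and-fallback behavior can be sketched with the two image parts modeled as file maps; a miss in the preview subset plays the role of the detected fault:

```python
def read_from_container(path, preview_files, remaining_files):
    """Serve reads from the preview subset of the image; a miss stands in
    for the claim's fault, handled by reading from the remaining portion
    of the original image that was delivered in the background."""
    if path in preview_files:
        return preview_files[path]
    return remaining_files[path]  # fallback to the rest of the original image
```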

US Pat. No. 11,030,071

CONTINUOUS SOFTWARE DEPLOYMENT

Armory, Inc., San Mateo,...

1. A computer-implemented method for automated canary deployment, the method comprising:
providing a first production hardware infrastructure running a first production code;
receiving a code update in a development environment;
launching a second production hardware infrastructure running a second production code that incorporates the code update;
directing a first portion of user traffic to the first production hardware infrastructure and a second portion of user traffic to the second production hardware infrastructure;
monitoring the performance of the first production code and the second production code in serving user traffic;
determining whether the performance of the second production code falls below a threshold performance on one or more metrics;
when the second production code does not fall below the threshold performance on the one or more metrics for a period of time, redirecting the first portion of user traffic from the first production hardware infrastructure to the second production hardware infrastructure.
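The promotion decision in the final step can be sketched as a check that the canary's metric has stayed at or above the threshold for the required period, here modeled as a count of recent samples:

```python
def canary_decision(metric_history, threshold, required_samples):
    """Redirect all traffic to the new infrastructure only when the
    canary's metric has not fallen below the threshold for the required
    number of consecutive recent samples."""
    recent = metric_history[-required_samples:]
    if len(recent) == required_samples and all(m >= threshold for m in recent):
        return "promote"
    return "keep-split"
```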

US Pat. No. 11,030,070

APPLICATION HEALTH MONITORING BASED ON HISTORICAL APPLICATION HEALTH DATA AND APPLICATION LOGS

VMWARE, INC., Palo Alto,...

1. A method comprising:
obtaining historical application health data and historical application logs associated with an application for a period;
determining priority of services associated with the application based on the historical application health data associated with a portion of the period;
determining priority of exceptions associated with each of the services based on the historical application health data and the historical application logs associated with the portion of the period;
training an application regression model by correlating the priority of the services, the associated priority of the exceptions, and the corresponding historical application health data;
testing the application regression model based on the historical application health data and the historical application logs associated with a remaining portion of the period; and
predicting health of the application for an upcoming period using the application regression model based on the testing.

US Pat. No. 11,030,069

MULTI-LAYER AUTOSCALING FOR A SCALE-UP CLOUD SERVER

INTERNATIONAL BUSINESS MA...

1. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a device to cause the device to perform operations comprising:
detecting whether a performance of a system, which executes an application using a layered software environment that provides one or more lower layer software environments below an upper layer software environment, reaches a target performance;
horizontally scaling the layered software environment, including scaling a first layer software environment in the layered software environment in response to the performance of the system not reaching the target performance and scaling a second layer software environment that is above the first layer software environment in the layered software environment in response to the performance of the system not reaching the target performance despite the first layer software environment being scaled; and
vertically scaling hardware resources used for executing the layered software environment in the system in response to the performance of the system not reaching the target performance based on a response time of the application and a central processing unit (CPU) usage rate before scaling of the first layer software environment based on a lock contention.
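The escalation order of this claim (scale software layers bottom-up first, then fall back to vertical hardware scaling) can be sketched as a single decision step; the lock-contention and CPU-usage inputs of the claim are abstracted into the performance value here:

```python
def autoscale_step(performance, target, layers_scaled, total_layers):
    """Horizontally scale the next software layer while any remain
    unscaled; only when every layer has been scaled and the target is
    still missed, scale hardware resources vertically."""
    if performance >= target:
        return "no action"
    if layers_scaled < total_layers:
        return f"scale layer {layers_scaled + 1}"
    return "scale hardware vertically"
```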

US Pat. No. 11,030,068

GRAPHICAL USER INTERFACE (GUI) FOR REPRESENTING INSTRUMENTED AND UNINSTRUMENTED OBJECTS IN A MICROSERVICES-BASED ARCHITECTURE

Splunk Inc., San Francis...

1. A method of rendering a graphical user interface (GUI) comprising an application topology graph for a microservice architecture, the method comprising:
generating a plurality of traces from a first plurality of spans generated by instrumented services in the microservice architecture for a given time duration;
generating a second plurality of spans for uninstrumented services in the microservice architecture using information extracted from the first plurality of spans;
grouping the second plurality of spans with the plurality of traces;
traversing the plurality of traces and collecting a plurality of span pairs therefrom, wherein each span pair of the plurality of span pairs is associated with a call between two services in the microservice architecture;
aggregating information across the plurality of span pairs to generate aggregated information for the given time duration; and
rendering the application topology graph for the microservice architecture in the GUI using the aggregated information for the given time duration, wherein the application topology graph comprises both the instrumented services and the uninstrumented services in the microservice architecture.
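The span-pair collection and aggregation steps can be sketched with spans as dicts carrying a service name and a parent index; the edge counts are the kind of aggregated information the topology graph would render:

```python
from collections import Counter

def aggregate_span_pairs(traces):
    """Each trace is a list of spans; each span has a 'service' name and
    a 'parent' index into the trace (None for the root). Collect a
    (caller, callee) pair per parent/child call and aggregate counts
    per service edge across all traces."""
    edges = Counter()
    for trace in traces:
        for span in trace:
            parent = span["parent"]
            if parent is not None:
                edges[(trace[parent]["service"], span["service"])] += 1
    return edges
```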

US Pat. No. 11,030,067

COMPUTER SYSTEM AND METHOD FOR PRESENTING ASSET INSIGHTS AT A GRAPHICAL USER INTERFACE

Uptake Technologies, Inc....

1. A computing system comprising:
a network interface configured to communicatively couple the computing system to a plurality of data sources;
at least one processor;
a non-transitory computer-readable medium; and
program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to:
receive data related to the operation of a plurality of assets;
based on the received data, derive a plurality of insights related to the operation of at least a subset of the plurality of assets;
from the plurality of insights, define a given subset of insights to be presented to a given user;
define at least one aggregated insight, wherein the at least one aggregated insight is representative of two or more individual insights in the given subset of insights that are related to a common underlying problem, and wherein the at least one aggregated insight is included in the given subset of insights to be presented to the given user in place of the two or more individual insights; and
cause a client station associated with the given user to display a visualization of the given subset of insights to be presented to the given user that includes (i) an insights pane comprising at least a partial listing of the given subset of insights that includes the at least one aggregated insight in place of the two or more individual insights that are represented by the at least one aggregated insight and (ii) a details pane that provides additional details regarding a selected one of the given subset of insights.
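The substitution of an aggregated insight for individual insights sharing a common underlying problem can be sketched as a grouping pass; the `problem` key used to detect relatedness is an assumption:

```python
from collections import defaultdict

def build_insights_listing(insights):
    """insights: list of dicts with 'id' and 'problem'. Insights that
    share a common underlying problem are replaced in the listing by one
    aggregated entry representing all of them."""
    by_problem = defaultdict(list)
    for insight in insights:
        by_problem[insight["problem"]].append(insight["id"])
    return [{"aggregated": len(ids) > 1, "problem": problem, "members": ids}
            for problem, ids in by_problem.items()]
```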

US Pat. No. 11,030,066

DYNAMIC APPLICATION DECOMPOSITION FOR EXECUTION IN A COMPUTING ENVIRONMENT

EMC IP Holding Company LL...

1. An apparatus comprising:
at least one processing platform comprising one or more processing devices;
the at least one processing platform being configured to:
execute a portion of an application program in a first virtual computing element, wherein the application program comprises one or more portions of marked code;
receive a request for execution of a select portion of the one or more portions of marked code;
decide whether to execute the select portion of marked code identified in the request in the first virtual computing element or in a second virtual computing element; and
cause the select portion of marked code identified in the request to be executed in the second virtual computing element, when it is decided to execute the select portion of marked code in the second virtual computing element;
wherein the processing platform comprises a controller module and an interceptor module; and
wherein the interceptor module associated with the select portion of marked code identified in the request intercepts the request and
sends a query to the controller module to make the decision whether to execute the select portion of marked code identified in the request in the first virtual computing element or in the second virtual computing element.
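The interceptor/controller interaction can be sketched with callables standing in for the two virtual computing elements and the controller's decision; all names are illustrative:

```python
def intercept_marked_call(request, controller, first_element, second_element):
    """The interceptor asks the controller where the requested portion of
    marked code should execute, then dispatches the request to the
    chosen virtual computing element."""
    target = controller(request)  # returns 'first' or 'second'
    if target == "second":
        return second_element(request)
    return first_element(request)
```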

US Pat. No. 11,030,065

APPARATUS AND METHOD OF GENERATING RANDOM NUMBERS

Arm Limited, Cambridge (...

1. An apparatus comprising:
analog circuitry comprising an entropy source, the entropy source being configured to provide a random output;
first digital circuitry to receive the output of the entropy source and, based on said output, generate random numbers;
second digital circuitry to receive the output of the entropy source and, based on said output, generate random numbers, the second digital circuitry being a duplicate of the first digital circuitry; and
difference detection circuitry to determine a difference of operation between the first digital circuitry and the second digital circuitry,
wherein each of the first digital circuitry and the second digital circuitry comprises entropy checking circuitry to check the entropy of the output of the entropy source.

US Pat. No. 11,030,064

FACILITATING CLASSIFICATION OF EQUIPMENT FAILURE DATA

INTERNATIONAL BUSINESS MA...

1. A system, comprising:
a memory that stores computer executable components;
a processor that executes the computer executable components stored in the memory, wherein the computer executable components comprise:
a first grouping component that groups a first equipment failure data of a set of equipment failure data into a first failure type group based on a determined failure criterion;
an equipment assembly including one or more capacitors to increase one or more power factors, wherein the determined failure criterion comprises an average reactive power data identifying a fluid traversing from a first position to a second position; and
a selection component that selects a second equipment failure data from the set of equipment failure data based on a level of similarity between the first equipment failure data and the second equipment failure data.

US Pat. No. 11,030,063

ENSURING DATA INTEGRITY DURING LARGE-SCALE DATA MIGRATION

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:
obtaining, at a first host, a first message from a first message queue, the first message indicating a storage location of an untransformed data object, wherein the storage location is a first storage location of a data storage service of a computing resource service provider;
generating, at the first host, a reference set of checksums corresponding to a transformation of the untransformed data object into one or more first transformed data objects according to a transformation scheme;
generating, by the first host, a second message, the second message including: the first message, the reference set of checksums, and a host identifier of the first host;
providing, by the first host, the second message to a second message queue;
obtaining, at a second host, the second message from the second message queue;
parsing, at the second host, the second message to determine the storage location of the untransformed data object indicated by the first message;
obtaining, at the second host, the untransformed data object from the storage location;
determining, by the second host, whether the host identifier included in the second message is different from an identifier of the second host,
as a result of the determination that the host identifier included in the second message is different from the identifier of the second host, transforming, at the second host, the untransformed data object into one or more second transformed data objects according to the transformation scheme;
sending, by the second host, the one or more second transformed data objects and the reference set of checksums obtained from the second message, to the data storage service;
generating, by the data storage service, a verification set of checksums for the one or more second transformed data objects;
verifying, by the data storage service, that the verification set of checksums for the one or more second transformed data objects match the reference set of checksums obtained from the second message; and
as a result of the verifying, storing, by the data storage service, the one or more second transformed data objects at a second storage location of the data storage service different from the first storage location.
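The reference-then-verify checksum flow can be sketched with a hypothetical transformation scheme (fixed-size chunking) and SHA-256 per chunk; the patent does not specify either choice:

```python
import hashlib

def transform_and_checksum(data, chunk_size):
    """Stand-in transformation: split the untransformed object into
    fixed-size chunks and build the reference set of checksums, one
    SHA-256 digest per transformed chunk."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return chunks, [hashlib.sha256(c).hexdigest() for c in chunks]

def verify(chunks, reference_checksums):
    """The storage service recomputes a verification set of checksums for
    the received objects and checks it against the reference set."""
    actual = [hashlib.sha256(c).hexdigest() for c in chunks]
    return actual == reference_checksums
```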

US Pat. No. 11,030,062

CHUNK ALLOCATION

Rubrik, Inc., Palo Alto,...

21. A non-transitory machine-readable medium comprising instructions which, when read by a machine, cause the machine to perform operations at a data management system comprising a memory configured to store a first snapshot of a real or virtual machine and store a second snapshot of the real or virtual machine different from the first snapshot of the virtual machine, the operations comprising, at least:
generating a first plurality of data sets using the first snapshot of the real or virtual machine;
acquiring disk ages for a plurality of disks at a first point in time;
determining a plurality of estimated times to failure for the plurality of disks at the first point in time using the disk ages for the plurality of disks at the first point in time;
identifying a first subset of the plurality of disks less than all of the disks of the plurality of disks using the plurality of estimated times to failure for the plurality of disks at the first point in time;
identifying a second subset of the plurality of disks different from the first subset of the plurality of disks using the plurality of estimated times to failure for the plurality of disks at the first point in time;
storing a first data set of the first plurality of data sets using the first subset of the plurality of disks;
storing a second data set of the first plurality of data sets using the second subset of the plurality of disks;
acquiring a second snapshot of the real or virtual machine subsequent to acquiring the first snapshot of the real or virtual machine;
generating a second plurality of data sets using the second snapshot of the real or virtual machine;
acquiring disk ages for the plurality of disks at a second point in time subsequent to the first point in time;
determining a plurality of estimated times to failure for the plurality of disks at the second point in time using the disk ages for the plurality of disks at the second point in time;
identifying a third subset of the plurality of disks less than all of the disks of the plurality of disks using the plurality of estimated times to failure for the plurality of disks at the second point in time, wherein the first subset of the plurality of disks is different from the third subset of the plurality of disks; and
storing a third data set of the second plurality of data sets using the third subset of the plurality of disks.
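The subset selection by estimated time to failure can be sketched as a ranking; the linear `expected_lifetime - age` estimator and the fixed subset size are assumptions, since the claim leaves the estimator unspecified:

```python
def pick_disk_subsets(disk_ages, expected_lifetime, subset_size):
    """Estimate each disk's time to failure from its age, rank disks by
    that estimate, and take two disjoint subsets (each smaller than the
    full set) from the ranking."""
    ttf = {disk: expected_lifetime - age for disk, age in disk_ages.items()}
    ranked = sorted(ttf, key=ttf.get, reverse=True)
    return ranked[:subset_size], ranked[subset_size:2 * subset_size]
```

Re-running the same selection with refreshed ages at a later point in time naturally yields different subsets, as the claim requires.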

US Pat. No. 11,030,061

SINGLE AND DOUBLE CHIP SPARE

Hewlett Packard Enterpris...

1. A device comprising:
a memory controller to:
access a first portion of a memory, the first portion of the memory operating in single chip spare mode;
access a second portion of the memory, the second portion of the memory operating in double chip spare mode while the first portion is operating in the single chip spare mode;
detect an error in a first region of the first portion of the memory; and
convert the first region containing the error to operate in double chip spare mode, while leaving a second region of the first portion operating in single chip spare mode.

US Pat. No. 11,030,060

DATA VALIDATION DURING DATA RECOVERY IN A LOG-STRUCTURED ARRAY STORAGE SYSTEM

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for data validation during data recovery in a log-structured array (LSA) storage system, comprising:
reading a log record of a recovery log for a logical address to obtain a physical address at a storage backend for data of the logical address at the time of the log record;
reading reference metadata at the obtained physical address, wherein the reference metadata indicates the logical address that last wrote data to the physical address;
validating that the physical address for the log record contains valid data for the logical address of the log record by comparing the logical address of the reference metadata to the logical address of the log record; and
in response to a determination that the physical address is validated for recovery of virtual domain logical metadata, replaying the log record and mapping the logical address to the physical address of the log record,
wherein counters are maintained at the logical metadata of the virtual domain and of the log records, and wherein a counter of a log record being read is compared with a current counter of the logical metadata for the logical address such that log records with a greater counter than the current counter of the logical metadata are replayed to update the logical metadata.
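The validate-then-replay rule of this claim (the reference metadata at the physical address must name the same logical address, and only records with a counter greater than the current logical-metadata counter are replayed) can be sketched as:

```python
def replay_log_record(record, reference_metadata, logical_metadata):
    """Replay a recovery-log record only if the reference metadata at the
    record's physical address names the record's logical address, and
    the record's counter exceeds the current counter for that address.
    On replay, map the logical address to the record's physical address."""
    ref = reference_metadata.get(record["physical"])
    if ref is None or ref["logical"] != record["logical"]:
        return False  # physical address no longer holds this address's data
    current = logical_metadata.get(record["logical"], {"counter": -1})
    if record["counter"] <= current["counter"]:
        return False  # stale record; skip replay
    logical_metadata[record["logical"]] = {"counter": record["counter"],
                                           "physical": record["physical"]}
    return True
```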

US Pat. No. 11,030,059

AUTOMATION OF DATA STORAGE ACTIVITIES

Commvault Systems, Inc., ...

1. A method for allocating execution of a data storage workflow suite within a networked data storage system, the method comprising:
causing a display to display a plurality of data storage display objects associated with a plurality of data storage activities;
receiving relationship data associated with the plurality of data storage display objects, wherein the relationship data indicates an order in which data storage activities are to be performed for at least first and second client computing devices having one or more processors;
with a first computing system comprising one or more processors, generating a data storage workflow suite based at least in part on the received relationship data, the data storage workflow suite comprising executable instructions for carrying out the plurality of data storage activities, the plurality of data storage activities comprising:
one or more backup storage activities in which primary data generated by the first and second client computing devices and residing in primary storage is copied to and stored in secondary storage;
deploying the data storage workflow suite to a workflow engine which executes on a second computing system comprising one or more processors;
receiving an instruction to initiate the data storage workflow suite;
based at least in part on the instruction to initiate the data storage workflow suite, causing the workflow engine to execute the one or more backup storage activities, wherein executing the one or more backup storage activities comprises:
identifying a first data agent for implementing at least one of the one or more backup storage activities, wherein the first data agent resides on the first client computing device and is configured to process first data in preparation for backing up the first data,
identifying a second data agent for implementing at least one of the one or more backup storage activities, wherein the second data agent resides on the second client computing device and is configured to process second data in preparation for backing up the second data,
copying the primary data from a primary data store associated with the first client computing device and a primary data store associated with the second client computing device, and
storing the copied primary data in the secondary storage; and
based at least in part on the relationship data of the data storage workflow suite, identifying a second data storage activity of the data storage workflow suite to execute.

US Pat. No. 11,030,058

RESTORING ARCHIVED OBJECT-LEVEL DATABASE DATA

Commvault Systems, Inc., ...

1. A system for removing and restoring database data, the system comprising:
one or more computing devices comprising computer hardware configured to:
process a database file residing on one or more first storage devices to identify a subset of data in the database file for archiving, the database file generated by a database application executing on the one or more computing devices;
copy the subset of the data from the database file to a first volume, wherein the first volume is organized as a plurality of volume blocks having a volume block size;
delete the subset of the data from the database file;
create a snapshot of the first volume comprising the copied subset of data; and
divide the snapshot into a plurality of blocks having a common size; and
at least one secondary storage controller computer comprising hardware and residing in a secondary storage subsystem, the at least one secondary storage controller computer configured to:
receive the plurality of blocks over a network connection;
copy the plurality of blocks to one or more secondary storage devices to create a secondary copy of the plurality of blocks;
create a table that provides a mapping between the copied plurality of blocks and corresponding locations in the one or more secondary storage devices;
receive a request from the one or more computing devices to restore a database block from the one or more secondary storage devices in the secondary storage subsystem, wherein the request is initiated by the one or more computing devices in response to intercepting a read operation by the database application to access one or more database blocks that have been removed; and
in response to receiving the request to restore the database block:
access the table to determine location of the database block in the one or more secondary storage devices; and
restore the requested database block from the one or more secondary storage devices to primary storage subsystem.
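The restore path in the final clause (consult the mapping table, then fetch the block from its secondary-storage location) can be sketched as a lookup; the `(device, offset)` location format is an assumption for illustration:

```python
def restore_database_block(block_id, mapping_table, secondary_devices):
    """On an intercepted read of a removed database block, look up its
    location in the mapping table and return the block's bytes from the
    secondary storage device holding it."""
    device, offset = mapping_table[block_id]
    return secondary_devices[device][offset]
```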

US Pat. No. 11,030,057

SYSTEM AND METHOD FOR CRITICAL VIRTUAL MACHINE PROTECTION

EMC IP Holding Company LL...

1. A backup agent for facilitating restorations of virtual machines, comprising:
a persistent storage that stores backup/restoration policies; and
a backup/restoration policy updater programmed to:
identify a change of a label associated with a portion of data of a production host, wherein the label specifies a characteristic ascribed to the data by a client that utilizes services provided by a virtual machine of the virtual machines, wherein the characteristic indicates a level of importance of the portion of the data to the client;
in response to identifying the change in the label:
perform a threat analysis, using the changed label, of a virtual machine of the virtual machines to determine a new security policy for the virtual machine; and
update a policy of the backup/restoration policies associated with the virtual machine based on the new security policy,
wherein the updated policy specifies that a first quantity of computing resources are to be used to generate a backup of the portion of the data, the policy specifies that a second quantity of the computing resource are to be used to generate the backup of the portion of the data, and the first quantity is different from the second quantity.

US Pat. No. 11,030,056

DATA SYSTEM FOR MANAGING SYNCHRONIZED DATA PROTECTION OPERATIONS AT NODES OF THE DATA SYSTEM, SOURCE NODE AND DESTINATION NODE, AND COMPUTER PROGRAM PRODUCT FOR USE IN SUCH DATA SYSTEM

HITACHI VANTARA LLC, San...

1. A data system for managing synchronized data protection operations at plural nodes of the data system, comprising:
one or more physical devices which provide resources that configure the plural nodes which include a node chain including at least a first node, a second node downstream of the first node, and a third node downstream of the second node in the node chain, and the first node being communicably connected to the second node, and the second node being communicably connected to the third node, the first node receiving data from an application host,
wherein the resources are configured to store data associated with an application running on the application host,
wherein the first node is configured to operate on the basis of first sequence information including instructions of a sequence of operations to be executed by the first node for performing a first data protection operation and for synchronizing the sequence of operations to be executed by the first node with the second node and the third node,
wherein the second node is configured to operate on the basis of second sequence information including instructions of a sequence of operations to be executed by the second node for performing a second data protection operation and for synchronizing the sequence of operations to be executed by the second node with the first node and the third node,
wherein the third node is configured to operate on the basis of third sequence information including instructions of a sequence of operations to be executed by the third node for performing a third data protection operation and for synchronizing the sequence of operations to be executed by the third node with the first node and the second node,
wherein the first node is configured to put the application into a hold on the basis of a respective instruction included in the first sequence information,
wherein the first, second and third data protection operations relate to data associated with the application running on the application host, and after the first node, second node and third node each initiate their sequence of operations, respectively, the first node is configured to request to the application, on the basis of a respective instruction included in the first sequence information and before putting the application on hold, application context information which indicates one or more logical storage locations of data related to the application,
wherein the first node is configured to request, on the basis of a respective instruction included in the first sequence information and before putting the application on hold, storage context information which indicates one or more physical storage locations, of storage areas of the first node, of data related to the application associated with the one or more logical storage locations indicated in the application context information,
wherein the first node is configured to, when needed, translate the application context information into the requested storage context information, which indicates one or more physical storage locations, of the storage areas of the first node, of data related to the application associated with the one or more logical storage locations indicated in the application context information, on the basis of a respective instruction included in the first sequence information and on the basis of the application context information, and to transmit the generated storage context information to the second node,
wherein the first node is configured to transmit, on the basis of a respective instruction included in the first sequence information and before putting the application on hold, the requested application context information to the second node,
wherein the second node is configured to request, on the basis of a respective instruction included in the second sequence information, storage context information which indicates one or more physical storage locations, of storage areas of the second node, of data related to the application associated with the one or more logical storage locations indicated in the application context information,
wherein the second node is configured to transmit, on the basis of a respective instruction included in the second sequence information and before putting the application on hold, the requested application context information to the third node,
wherein the third node is configured to request, on the basis of a respective instruction included in the third sequence information, storage context information which indicates one or more physical storage locations, of storage areas of the third node, of data related to the application associated with the one or more logical storage locations indicated in the application context information,
wherein the first node, the second node and the third node are each configured to perform the first data protection operation, the second data protection operation and the third data protection operation, respectively, after putting the application into the hold,
wherein the first node is configured to perform the first data protection operation, after the application is put into the hold, and to then transmit a first synchronization notification to the second node on the basis of respective instructions included in the first sequence information,
wherein the second node is configured to receive the first synchronization notification from the first node, perform the second data protection operation in response to the first synchronization notification from the first node, and to then transmit a second synchronization notification to the third node on the basis of respective instructions included in the second sequence information,
wherein the third node is configured to receive the second synchronization notification from the second node and perform the third data protection operation in response to the received second synchronization notification from the second node,
wherein the third node is further configured to transmit a first confirmation notification to the second node when having performed the third data protection operation on the basis of a respective instruction included in the third sequence information,
wherein the second node is further configured to transmit a second confirmation notification to the first node when having performed the second data protection operation and having received the first confirmation notification from the third node on the basis of a respective instruction included in the second sequence information, and
wherein the first node is configured to resume the application upon receiving the second confirmation notification from the second node on the basis of a respective instruction included in the first sequence information.
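The claimed ordering — hold the application, let each node perform its protection operation as synchronization notifications travel downstream, then resume only after confirmations travel back upstream — can be traced in miniature. This is an illustrative sketch of the sequencing only, not Hitachi's implementation; all names are invented:

```python
def protect_chain(nodes, trace):
    """Each node performs its protection operation, passes a synchronization
    notification downstream, and confirms upstream only after its downstream
    neighbour has confirmed."""
    if not nodes:
        return
    head, rest = nodes[0], nodes[1:]
    trace.append(f"{head}:protect")
    protect_chain(rest, trace)              # synchronization notification
    if rest:
        trace.append(f"{rest[0]}->{head}:confirm")

def run(nodes):
    trace = ["app:hold"]                    # first node puts the app on hold
    protect_chain(nodes, trace)
    trace.append("app:resume")              # resume on the final confirmation
    return trace

trace = run(["first", "second", "third"])
```

Note how the confirmations appear in reverse chain order, matching the claim's third-to-second and second-to-first confirmation notifications.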

US Pat. No. 11,030,055

FAST CRASH RECOVERY FOR DISTRIBUTED DATABASE SYSTEMS

Amazon Technologies, Inc....

1. A database system, comprising:a database head node implemented by one or more computing devices, and configured to receive and process access requests from one or more clients over a network;
a second database node implemented by one or more computing devices; and
a distributed log-structured storage system comprising a plurality of storage nodes communicating with the database head node over the network, wherein each of the plurality of storage nodes comprises one or more persistent storage devices different from the one or more computing devices implementing the database head node, wherein each of the plurality of storage nodes is configured to store one or more redo log records received from the database system head node, and wherein the distributed log-structured storage system provides current versions of data of the database system for the one or more clients in response to access requests from the one or more clients, the current versions of data generated based at least in part on one or more of the stored plurality of redo log records;
wherein the database head node is configured to perform an update operation to the database system, wherein to perform the update operation the database head node is configured to:
receive, from a client of the one or more clients, a request to update a particular one or more data pages of the database system; and
send, over the network in response to the update request, one or more redo log records to one or more storage nodes of the plurality of storage nodes implementing the distributed log-structured storage system, wherein each of the one or more redo log records corresponds to an update to a data page stored in one of the one or more storage nodes, and wherein the one or more storage nodes store the one or more redo log records received from the database head node to update the particular one or more data pages; and
wherein the second database node is configured to:
subsequent to the one or more storage nodes receiving the update request, recover from a failure of the database head node, wherein to recover from the database head node failure, the second database node is configured to:
establish one or more respective connections over the network with the one or more storage nodes of the plurality of storage nodes implementing the distributed log-structured storage system; and
responsive to establishment of the one or more respective connections, provide access to a current version of the one or more data pages stored in the database system, wherein to provide access to the current version of the one or more data pages stored in the database system, the second database node is further configured to:
receive, from another client of the one or more clients, an access request for the database; and
responsive to the received access request, send a request over a connection of the one or more respective connections to a storage node of the distributed log-structured storage system for the current version of the one or more data pages, wherein the storage node of the distributed log-structured storage system generates the current version of the one or more data pages by applying the stored one or more redo log records to previously stored versions of the one or more data pages; and
receive the current version of the one or more data pages over the connection from the storage node without the second database node replaying the one or more redo log records.
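The division of labor in this claim — storage nodes apply the stored redo log records so that a recovering database node never replays them itself — can be sketched as follows. The page layout and names are assumptions for illustration, not Amazon's implementation:

```python
# A log-structured storage node keeps base page versions plus redo records;
# the "current version" of a page is materialized on request by applying the
# records in order. A recovering head node only asks for the current version.

class StorageNode:
    def __init__(self):
        self.base_pages = {}     # page_id -> stored page payload
        self.redo = {}           # page_id -> list of (offset, new_bytes)

    def append_redo(self, page_id, offset, data):
        self.redo.setdefault(page_id, []).append((offset, data))

    def current_page(self, page_id):
        page = bytearray(self.base_pages[page_id])
        for offset, data in self.redo.get(page_id, []):   # apply in order
            page[offset:offset + len(data)] = data
        return bytes(page)

node = StorageNode()
node.base_pages[7] = b"aaaaaaaa"
node.append_redo(7, 0, b"XY")      # updates sent by the (failed) head node
node.append_redo(7, 4, b"Z")
# A second database node simply requests the current version:
recovered = node.current_page(7)
```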

US Pat. No. 11,030,054

METHODS AND SYSTEMS FOR DATA BACKUP BASED ON DATA CLASSIFICATION

International Business Ma...

1. A computer-executed method comprising:maintaining a plurality of data storage systems for storing electronic data, containing one or more processors having circuitry and logic that perform calculations and logic operations, in communication with an external metadata management system containing one or more processors having circuitry and logic that perform calculations and logic operations;
operating the metadata management system to store metadata corresponding to the electronic data stored on the plurality of data storage systems;
identifying, using information included in the metadata management system, a candidate electronic data set stored on at least one of the plurality of data storage systems on which at least one of a plurality of backup actions should be performed;
in response to identifying the candidate electronic data set, identifying the at least one of the plurality of backup actions, wherein identifying the at least one of a plurality of backup actions comprises:
identifying one or more custom tags for metadata corresponding to the candidate electronic data set; and
using the one or more custom tags to identify the at least one of a plurality of backup actions; and
executing the at least one of a plurality of backup actions on the candidate electronic data set stored on the plurality of data storage systems, wherein each of the plurality of backup actions comprises storing the candidate electronic data set in secondary storage different than the plurality of data storage systems and in a form different than the format stored on the plurality of data storage systems.
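The tag-driven selection in the claim — custom metadata tags identify which of a plurality of backup actions applies to a candidate data set — can be sketched as a lookup. The tag-to-action table and every name here are illustrative assumptions, not from the patent:

```python
# Hypothetical mapping from custom metadata tags to backup actions.
TAG_ACTIONS = {
    "financial": ["encrypt_and_archive"],
    "ephemeral": [],
    "regulated": ["encrypt_and_archive", "offsite_copy"],
}

def actions_for(metadata):
    """Union, in first-seen order, of the actions for every custom tag
    attached to the candidate electronic data set's metadata."""
    actions = []
    for tag in metadata.get("custom_tags", []):
        for action in TAG_ACTIONS.get(tag, []):
            if action not in actions:
                actions.append(action)
    return actions

chosen = actions_for({"custom_tags": ["financial", "regulated"]})
```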

US Pat. No. 11,030,053

EFFICIENT DISASTER ROLLBACK ACROSS HETEROGENEOUS STORAGE SYSTEMS

Nutanix, Inc., San Jose,...

1. A method, comprising:determining target disaster recovery (DR) data for a target system from snapshot data captured from a source system at an earlier timepoint, wherein the source system has a first hypervisor type that is different from a second hypervisor type of the target system;
performing a user virtual machine activity at the target system to generate updated target DR data at a later timepoint after the earlier timepoint; and
in response to a failback event signal for a failback operation on the source system:
calculating a difference between the updated target DR data and the target DR data; and
sending at least the difference to the source system, wherein the failback operation is performed by converting the difference of the second hypervisor type into a converted difference of the first hypervisor type and by combining the difference with the snapshot data.
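The failback arithmetic in this claim — diff the updated target DR data against the snapshot-derived data, convert the diff between hypervisor formats, and combine it with the source snapshot — can be sketched with dict-keyed blocks. The representations and the `upper()` stand-in for format conversion are illustrative assumptions:

```python
def diff(updated, base):
    """Blocks added or changed in `updated` relative to `base`."""
    return {k: v for k, v in updated.items() if base.get(k) != v}

def convert(delta):
    """Stand-in for second-hypervisor -> first-hypervisor conversion."""
    return {k: v.upper() for k, v in delta.items()}

snapshot = {"b1": "x", "b2": "y"}              # captured from the source
target = {"b1": "x", "b2": "y2", "b3": "z"}    # after user VM activity
delta = diff(target, snapshot)                  # only the difference is sent
restored = {**snapshot, **convert(delta)}       # failback on the source
```

Sending only `delta` rather than the full target state is what makes the claimed failback "efficient" across the heterogeneous hypervisors.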

US Pat. No. 11,030,052

DATA PROTECTION USING CHECKPOINT RESTART FOR CLUSTER SHARED RESOURCES

EMC IP Holding Company LL...

1. A method of backing up a cluster resource, comprising:determining, by one or more processors, that backup of a cluster shared volume is to be performed; and
in response to determining that the backup of the cluster shared volume is to be performed:
causing, by the one or more processors, an active cluster node to take and store persistently on the cluster shared volume a persistent snapshot of the cluster shared volume;
using, by the one or more processors, the persistent snapshot to back up the cluster shared volume to a backup storage node connected to the active cluster node via a network, including by storing checkpoint information indicating as the backup progresses which portions of the persistent snapshot have been backed up, wherein using the active cluster node to take and store persistently on the cluster shared volume the persistent snapshot of the cluster shared volume includes invoking a virtual shadow copy service (VSS) in a manner that results in the persistent snapshot being stored persistently on the cluster shared volume; and
in response to a determination that a failure is detected, restarting a backup of the cluster shared volume based at least in part on the checkpoint information.
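Checkpoint-restart over a persistent snapshot can be sketched as follows: progress is recorded per portion as the backup runs, and after a failure the backup restarts from the checkpoint rather than from scratch. This is an illustrative model (including the simulated failure), not EMC's implementation:

```python
def backup(snapshot, checkpoint, store, fail_after=None):
    """Back up snapshot portions not yet recorded in `checkpoint`.
    `fail_after` simulates a failure after that many portions are done."""
    for i, portion in enumerate(snapshot):
        if i in checkpoint:
            continue                      # already backed up before failure
        if fail_after is not None and len(checkpoint) >= fail_after:
            return False                  # simulated mid-backup failure
        store.append(portion)
        checkpoint.add(i)                 # persist progress per portion
    return True

snapshot = ["p0", "p1", "p2", "p3"]
checkpoint, store = set(), []
ok = backup(snapshot, checkpoint, store, fail_after=2)   # fails partway
resumed = backup(snapshot, checkpoint, store)            # restart resumes
```

Because completed portions are skipped on restart, nothing is backed up twice.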

US Pat. No. 11,030,051

SYSTEM AND METHOD FOR IDENTIFYING CHANGES IN DATA CONTENT OVER TIME

PROPYLON LIMITED

1. A system comprising a data analysis engine, a first data source and a second data source, the engine being configured to cross reference content within each of the first data source and the second data source, the content within the first data source being defined relative to a first point in time, the second data source comprising a data repository configured for storing original data content and modified data content, the original and modified content being addressable for facilitating fully-consistent point-in-time retrieval thereof without the use of locks or replicas; the data repository comprising:
a first corpus defined by two or more digital files which are associated with the original content, the two or more digital files being inter-related to one another at a first point-in-time, the first corpus including hypertext links that allow users to traverse links, moving from one digital file at the first point-in-time to another inter-related digital file at the same first point-in-time via a web browser, the first corpus defining a snapshot of the two or more digital files as they existed at the first point-in-time;
a log detailing actions implemented on the two or more of the digital files;
a second corpus defined by two or more digital files which are associated with the original content, the two or more digital files being inter-related to one another at a second point-in-time, the second corpus being defined by a versioned repository generated after an action is implemented on one of the two or more digital files of the first corpus, wherein the versioned repository includes an updated version of the entire first corpus at the second point-in-time, the updated version including one or more modified digital files which are associated with the modified content and hypertext links that allow users to traverse links, moving from one digital file at the second point-in-time to another inter-related digital file at the second point-in-time via a web browser, the second corpus defining a snapshot of the two or more digital files as they existed at the second point-in-time; and
a point-in-time version identifier defining a revision number associated with each corpus such that the first corpus is defined by a first revision number and the second corpus defined by a second different revision number, each revision number providing a point-in-time identifier and retrieval point of the totality of digital files defined by each of the first corpus and the second corpus respectively, and
wherein the data repository is configured:
to use the revision number for each of the first and second corpus in combination with the hypertext links within each of the first corpus and the second corpus to constrain a user traversal of the hyperlinks defined within each of the first corpus and the second corpus to those documents that are defined wholly within the first corpus and the second corpus respectively; and
to use the revision numbers in generating point-in-time hyperlinks such that retrieved query result sets can be traversed at any particularly retrieved point-in-time using the generated point-in-time hyperlinks which are specific to that retrieved query result set; and
wherein the data analysis engine is configured to define within the first data source a tuple comprising a date stamp for the first data source's last change event and identified content within the first data source that is referenced to content within the second data source's content, the data analysis engine being further configured to use the tuple to parse the first corpus and the second corpus to identify changes in the second data source content that is relevant to the first data source content.

US Pat. No. 11,030,050

METHOD AND DEVICE OF ARCHIVING DATABASE AND METHOD AND DEVICE OF RETRIEVING ARCHIVED DATABASE

ARMIQ Co., Ltd., Seoul (...

1. A database archiving and retrieving method, comprising:
selecting at least one record group including a plurality of records from an original table of a database from which data is archivable, based on selection information on at least one of a time and a field value;
when a record group in the at least one selected record group has records the number of which exceeds a predetermined threshold value, dividing the record group into a plurality of record groups such that each of the plurality of divided record groups has records the number of which is equal to or smaller than the predetermined threshold value;
storing group compression data compressed to be created for every record group and the selection information corresponding to the group compression data in a compression table of the database, with respect to each of at least one selected record group;
receiving a retrieving condition for retrieving records from the compression table;
determining a number of database (DB) retrieving processes for retrieving the records in parallel, based on a performance of a computer which is configured to perform retrieval and the number of group compression data corresponding to the selection information satisfying the retrieving condition; and
retrieving the records satisfying the retrieving condition in parallel, based on the determined number of DB retrieving processes,
wherein the determining of the number of DB retrieving processes includes:
collecting computer performance information on at least one of a number of central processing units (CPUs) included in the computer, a capacity of a memory, and an input/output speed of a storage device;
determining the number of group compression data corresponding to the selection information satisfying the received retrieving condition among the group compression data stored in the compression table; and
determining the number of DB retrieving processes for retrieving the records in parallel, based on the collected computer performance information and the determined number of group compression data,
wherein the retrieving of the records satisfying the retrieving condition in parallel includes:
allocating at least one group compression data to each of the determined number of DB retrieving processes, based on the number of group compression data corresponding to the selection information satisfying the retrieving condition, so that all of the group compression data are allocated to all of the DB retrieving processes; and
decompressing the at least one group compression data which is allocated for every DB retrieving process and retrieving the records satisfying the retrieving condition in parallel.
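The two determinations in this claim — size the pool of DB retrieving processes from machine capability and the number of matching compressed groups, then spread every group across the pool — can be sketched as below. The sizing heuristic and the round-robin allocation are illustrative assumptions, not ARMIQ's method:

```python
def process_count(cpus, matching_groups):
    """Never more processes than CPUs, nor than groups to decompress."""
    return max(1, min(cpus, matching_groups))

def allocate(groups, n_procs):
    """Round-robin so every group is assigned to exactly one process."""
    buckets = [[] for _ in range(n_procs)]
    for i, g in enumerate(groups):
        buckets[i % n_procs].append(g)
    return buckets

groups = ["g0", "g1", "g2", "g3", "g4"]   # group compression data matching the condition
n = process_count(cpus=4, matching_groups=len(groups))
buckets = allocate(groups, n)             # each process decompresses its share
```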

US Pat. No. 11,030,049

DATA BACKUP MANAGEMENT DURING WORKLOAD MIGRATION

International Business Ma...

1. A computer-implemented method for managing data backup of workloads, the workloads being migrated from a source environment to a target environment, the computer-implemented method comprising:
identifying, by a computer, a set of workloads for migration from the source environment to the target environment in response to receiving a request to migrate the set of workloads;
initiating, by the computer, the migration of the set of workloads from the source environment to the target environment along with migration of backup data corresponding to the set of workloads; and
determining, by the computer, a backup configuration transformation from a backup configuration corresponding to the source environment to a set of backup configurations corresponding to the target environment based on semantic matching between characteristics of the backup configuration corresponding to the source environment and characteristics of the set of backup configurations corresponding to the target environment, a state of the source environment, backup configuration transformation actions, and a goal state of the target environment, wherein the characteristics include data dependencies between virtual machines executing the set of workloads and wherein the set of workloads is migrated in waves based on the data dependencies.
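Migrating "in waves based on the data dependencies" between virtual machines amounts to topological layering: each VM moves only after every VM it depends on has moved. A minimal sketch, with invented VM names and no claim that this is IBM's algorithm:

```python
def migration_waves(deps):
    """deps: vm -> set of vms it depends on. Returns waves in migration
    order; every VM appears in a wave after all of its dependencies."""
    remaining = {vm: set(d) for vm, d in deps.items()}
    waves = []
    while remaining:
        ready = sorted(vm for vm, d in remaining.items() if not d)
        if not ready:
            raise ValueError("cyclic data dependency")
        waves.append(ready)
        for vm in ready:
            del remaining[vm]
        for d in remaining.values():    # these deps are now satisfied
            d.difference_update(ready)
    return waves

waves = migration_waves({"web": {"db"}, "app": {"db"}, "db": set(), "etl": {"app"}})
```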

US Pat. No. 11,030,048

METHOD FOR A SOURCE STORAGE DEVICE TO SEND A SOURCE FILE AND A CLONE FILE OF THE SOURCE FILE TO A BACKUP STORAGE DEVICE, A SOURCE STORAGE DEVICE AND A BACKUP STORAGE DEVICE

Huawei Technologies Co., ...

1. A method for a source storage device to send data to a backup storage device, the method comprising:
sending, by a processor included in the source storage device, a data block to the backup storage device to be stored as a target file, wherein the source storage device includes the processor and one or more disks for storing files, a source file includes the data block and a first block pointer pointing to the data block, and a clone file of the source file includes a second block pointer pointing to the data block;
determining, by the processor included in the source storage device, that the source file is associated with the clone file by searching a cloning recorder with a source file ID of the source file, wherein the cloning recorder includes IDs of files each associated with an ID of a corresponding clone file;
based upon the determining, sending, by the processor included in the source storage device, a clone creating request including the source file ID and an ID of the clone file of the source file to the backup storage device to instruct the backup storage device to create a clone file of the target file;
receiving, by the processor included in the source storage device, a message in response to the clone creating request, wherein the message includes an ID of the clone file of the target file, and wherein the ID of the clone file of the target file is identical to the ID of the clone file of the source file;
determining, by the processor included in the source storage device, that the source file has been updated after the clone file of the source file is created, wherein first update data is used to update the source file and is written into the source storage device after the clone file of the source file in the source storage device is created; and
after receiving the message in response to the clone creating request, sending, by the processor included in the source storage device, the first update data to the backup storage device to be stored on the backup storage device, wherein sending the first update data to the backup storage device causes the backup storage device to:
record, by a log program, a log of the first update data and write the log into a log area, wherein the log includes an offset within the target file and a size of the first update data;
based on the offset within the target file and the size of the first update data, determine which block of the target file corresponds to a block to be replaced by the first update data; and
redirect a data pointer pointing to the block to be replaced to a block corresponding to the first update data stored on the backup storage device.
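The backup-side steps this claim describes — log the update's offset and size, locate the target-file block the offset falls in, write the update data, and redirect that block's pointer — can be sketched with fixed-size blocks. Block size, class, and field names are all assumptions for illustration:

```python
BLOCK = 4   # assumed fixed block size

class BackupTarget:
    def __init__(self, blocks):
        self.blocks = list(blocks)                 # shared data blocks
        self.pointers = list(range(len(blocks)))   # target-file block pointers
        self.log = []                              # log area

    def apply_update(self, offset, data):
        self.log.append({"offset": offset, "size": len(data)})
        index = offset // BLOCK           # which block the update replaces
        self.blocks.append(data)          # store the update data in a new block
        self.pointers[index] = len(self.blocks) - 1   # redirect the pointer

    def read(self):
        return b"".join(self.blocks[p] for p in self.pointers)

t = BackupTarget([b"AAAA", b"BBBB", b"CCCC"])
t.apply_update(4, b"XXXX")                # offset 4 falls in the second block
```

The replaced block is never overwritten, which is what lets a clone file keep pointing at the original data.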

US Pat. No. 11,030,047

INFORMATION HANDLING SYSTEM AND METHOD TO RESTORE SYSTEM FIRMWARE TO A SELECTED RESTORE POINT

Dell Products L.P., Roun...

1. An information handling system (IHS), comprising:a computer readable non-volatile memory configured to store system firmware;
a computer readable storage device configured to store an operating system (OS), a system registry, and an OS restore application; and
a processing device configured to execute program instructions within the OS restore application to restore the system registry to a selected restore point and reboot the IHS;
wherein as the IHS is in the process of being rebooted, the processing device is further configured to execute program instructions within a firmware restore application to restore the system firmware to the selected restore point.

US Pat. No. 11,030,046

CLUSTER DIAGNOSTICS DATA FOR DISTRIBUTED JOB EXECUTION

Snowflake Inc., San Mate...

1. A method comprising:receiving, by a computing cluster, an application for processing by nodes of the computing cluster, the nodes including a driver node and a plurality of execution nodes for processing of tasks of the application;
distributing, by the driver node, the tasks of the application for processing by the plurality of execution nodes;
processing the tasks by the plurality of execution nodes;
identifying, by the computing cluster, one or more errors in the application received by the computing cluster;
in response to the one or more errors, re-processing the tasks of the application by the plurality of execution nodes with a telemetry setting active in the application, the driver node passing a log request to a log object of a wrapper to bypass a native logging service of the computing cluster in response to the telemetry setting being active, the driver node further receiving telemetry metadata for access to a telemetry network service of a distributed database that provides data to the computing cluster through a database connector, the driver node receiving the telemetry metadata through the database connector of the distributed database;
distributing, by the driver node, the telemetry metadata to the plurality of execution nodes for re-processing; and
transmitting, by the plurality of execution nodes, to the telemetry network service of the distributed database, log data that is generated by the plurality of execution nodes in re-processing the tasks of the application.

US Pat. No. 11,030,045

APPARATUS AND METHOD FOR UTILIZING DIFFERENT DATA STORAGE TYPES TO STORE PRIMARY AND REPLICATED DATABASE DIRECTORIES

Futurewei Technologies, I...

1. A method, comprising:configuring each first data storage of a plurality of first data storages to store a primary database directory, the plurality of first data storages comprising a direct-access storage type, a first data storage of the plurality of first data storages being in communication with a corresponding node of a plurality of nodes, the first data storage being configured to store the primary database directory for access by only the corresponding node and not being shared with other nodes;
detecting a failure in connection with at least one node of the plurality of nodes; and
in response to detection of the failure, accessing a shared storage apparatus via a network interface coupled to the plurality of nodes, the shared storage apparatus including a plurality of second data storages, each second data storage of the plurality of second data storages configured to store a replicated database directory that replicates at least a portion of the primary database directory.

US Pat. No. 11,030,044

DYNAMIC BLOCKCHAIN DATA STORAGE BASED ON ERROR CORRECTION CODE

Alipay (Hangzhou) Informa...

1. A computer-implemented method for processing blockchain data performed by a first blockchain node of a blockchain network, the method comprising:receiving a request for performing error correction coding (ECC) to one or more blocks of a blockchain;
obtaining the one or more blocks based on blockchain data received from at least one second blockchain node of the blockchain network; and
performing ECC of the one or more blocks to generate one or more encoded blocks, wherein a code rate of the one or more encoded blocks equals a minimum number of honest blockchain nodes required by the blockchain network divided by a total number of blockchain nodes of the blockchain network.
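Reading the claimed code rate as the ratio of minimum honest nodes k to total nodes n, the blocks are erasure-coded at rate k/n: k data chunks plus n − k redundant chunks, so any k honest nodes suffice to rebuild a block. The XOR "parity" below is only a stand-in for a real ECC such as Reed-Solomon, and all names are illustrative:

```python
from fractions import Fraction

def code_rate(min_honest, total):
    """Code rate = minimum honest nodes / total nodes."""
    return Fraction(min_honest, total)

def encode(block_chunks, total):
    """k data chunks + (total - k) redundant chunks. A single XOR parity,
    repeated, stands in for a real erasure code."""
    k = len(block_chunks)
    parity = 0
    for c in block_chunks:
        parity ^= c
    return block_chunks + [parity] * (total - k)

rate = code_rate(min_honest=3, total=4)
chunks = encode([5, 9, 12], total=4)   # one chunk per blockchain node
```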

US Pat. No. 11,030,043

ERROR CORRECTION CIRCUIT AND MEMORY SYSTEM

TOSHIBA MEMORY CORPORATIO...

1. An error correction circuit comprising:a syndrome calculator to calculate syndrome information of input data comprising a plurality of bits;
an error position calculator to calculate error position information of the input data, based on the syndrome information;
a holder to hold the syndrome information or the error position information at a predetermined timing;
an error detection determiner to determine whether an error of the input data is correctly detected, based on syndrome information calculated by inputting error-corrected data, based on the error position information, to the syndrome calculator; and
an error corrector to output the input data after correcting the error of the input data based on information held by the holder when it is determined by the error detection determiner that the error is correctly detected, and to output the input data with no error correction when it is determined by the error detection determiner that the error is not correctly detected.
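The determiner's recheck — correct the bit the syndrome points at, then feed the corrected word back through the syndrome calculator, where a zero re-syndrome confirms the error was correctly detected — can be sketched with Hamming(7,4) standing in for the (unspecified) code in the claim:

```python
def syndrome(bits):
    """bits[1..7] of a Hamming(7,4) codeword, 1-indexed; the XOR of the
    positions of set bits is the error position (0 means no error)."""
    s = 0
    for pos in range(1, 8):
        if bits[pos]:
            s ^= pos
    return s

def correct(bits):
    word = list(bits)
    s = syndrome(word)
    if s:                       # non-zero syndrome: flip the indicated bit
        word[s] ^= 1
    detected_ok = syndrome(word) == 0   # recheck with the corrected data
    return word, detected_ok

code = [None] + [0] * 7        # all-zero codeword, index 0 unused
code[5] ^= 1                   # inject a single-bit error at position 5
fixed, ok = correct(code)
```

A multi-bit error beyond the code's capability would leave a non-zero re-syndrome, which is the case where the claimed circuit outputs the input data with no correction.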

US Pat. No. 11,030,042

FLASH MEMORY APPARATUS AND STORAGE MANAGEMENT METHOD FOR FLASH MEMORY

Silicon Motion, Inc., Hs...

1. A flash memory apparatus, comprising:
a flash memory module comprising a plurality of storage blocks, wherein a cell of each storage block can be used for storing data of a first bit number or data of a second bit number; and
a flash memory controller, configured for classifying data to be programmed into a plurality of groups of data, respectively executing error code encoding to generate a first corresponding parity check code, and storing the groups of data and the first corresponding parity check code into the flash memory module;
wherein the flash memory controller is further arranged for executing error correction and de-randomize operations upon the groups of data to generate de-randomized data, executing a randomize operation upon the de-randomized data according to a set of seeds to generate randomized data, performing error code encoding upon the randomized data to generate a second corresponding parity check code, and storing the randomized data and the second corresponding parity check code into the flash memory module.

US Pat. No. 11,030,041

DECODING METHOD, ASSOCIATED FLASH MEMORY CONTROLLER AND ELECTRONIC DEVICE

Silicon Motion, Inc., Hs...

1. A decoding method applicable to a flash memory controller, comprising:reading first data from a flash memory module;
decoding the first data in order to obtain a decoding result and reliability information;
comparing the decoding result with the reliability information to determine at least one specific address of the flash memory module with high reliability errors (HRE); and
reading second data from the flash memory module, and decoding the second data with regard to said specific address;
wherein data with HRE within the first data represents that the data has high reliability and an incorrect initial bit value.

US Pat. No. 11,030,040

MEMORY DEVICE DETECTING AN ERROR IN WRITE DATA DURING A WRITE OPERATION, MEMORY SYSTEM INCLUDING THE SAME, AND OPERATING METHOD OF MEMORY SYSTEM

SK hynix Inc., Gyeonggi-...

1. A memory device comprising:
a write error check circuit suitable for detecting an error in received data, transferred from a memory controller, using a received error correction code, transferred from the memory controller, during a write operation; and
a memory core suitable for storing the received data and the received error correction code that is used for detecting the error in the received data when no error is detected by the write error check circuit,
wherein, when an error in the received data is detected by the write error check circuit during the write operation, the received error correction code and the received data are not stored in the memory core.
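A minimal sketch of this write-path gate, assuming a trivial one-byte XOR parity in place of the device's real error correction code (the class and function names are illustrative, not from the patent):

```python
def ecc(data: bytes) -> int:
    """Toy 1-byte XOR parity standing in for the real error correction code."""
    p = 0
    for b in data:
        p ^= b
    return p

class MemoryDevice:
    def __init__(self):
        self.core = {}  # memory core: address -> (data, code)

    def write(self, address: int, data: bytes, code: int) -> bool:
        """Store the received data and its received code only when the
        write error check passes; on a detected error, neither the data
        nor the code reaches the memory core."""
        if ecc(data) != code:
            return False  # error detected during the write operation
        self.core[address] = (data, code)
        return True
```

The point of the claim is the early rejection: a corrupted transfer never occupies the core, so a later read cannot return silently bad data.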

US Pat. No. 11,030,039

OUT-OF-BOUNDS RECOVERY CIRCUIT

Imagination Technologies ...

1. An out-of-bounds recovery circuit for an electronic device having at least a first operating state having first non-allowable memory addresses and a second operating state having second non-allowable memory addresses, the out-of-bounds recovery circuit comprising:
detection logic configured to:
monitor one or more control and/or data signals of the electronic device,
detect an out-of-bounds violation in the electronic device, when the detection logic determines, based on the one or more control and/or data signals of the electronic device, that the electronic device is in the first operating state and a processing element of the electronic device has fetched an instruction from one of the first non-allowable memory addresses, and
detect an out-of-bounds violation in the electronic device, when the detection logic determines, based on the one or more control and/or data signals of the electronic device, that the electronic device is in the second operating state and the processing element of the electronic device has fetched an instruction from one of the second non-allowable memory addresses; and
transition logic configured to, in response to the detection logic detecting an out-of-bounds violation, cause the electronic device to transition to a predetermined safe state.
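The detection-plus-transition behavior can be sketched as below. The state names, address ranges, and mode labels are invented for illustration; the claim's circuit would do this in hardware on the device's control/data signals.

```python
# Per-operating-state non-allowable address ranges (invented examples).
NON_ALLOWABLE = {
    "state1": [(0x0000, 0x0FFF)],   # first operating state
    "state2": [(0x8000, 0xFFFF)],   # second operating state
}

class RecoveryCircuit:
    def __init__(self):
        self.mode = "normal"

    def on_fetch(self, state: str, address: int) -> bool:
        """Detect an instruction fetch from an address that is
        non-allowable in the current operating state; on a violation,
        transition the device to a predetermined safe state."""
        for lo, hi in NON_ALLOWABLE.get(state, []):
            if lo <= address <= hi:
                self.mode = "safe"   # predetermined safe state
                return True          # out-of-bounds violation detected
        return False
```

Note the key feature of the claim: the same address can be legal in one state and a violation in the other, so the check is parameterized by operating state.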

US Pat. No. 11,030,038

FAULT PREDICTION AND DETECTION USING TIME-BASED DISTRIBUTED DATA

Microsoft Technology Lice...

1. A computer-implemented method for predicting a state of a storage device, the method comprising:
collecting, by a computing device, read/write latency data for input/output operations executed at a storage device of a plurality of storage devices of a software-defined storage network;
based on the collected read/write latency data, determining, by the computing device, a time-distributed read/write latency performance profile for the storage device;
determining, by the computing device, a characteristic time-distributed read/write latency performance profile for a representative group of storage devices having common characteristics with the storage device and based on previously collected performance data for devices of the representative group;
determining, by the computing device, that a difference between the time-distributed read/write latency performance profile for the storage device and the characteristic time-distributed read/write latency performance profile exceeds a predetermined deviance threshold that is indicative of a probable failure of the storage device; and
based on the determining that the storage device exceeded the predetermined deviance threshold, redirecting read/write requests for the storage device to other storage devices.
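A sketch of the deviation test, assuming a "time-distributed profile" is simply a list of mean latencies per time bucket (e.g. hour of day) and the deviance threshold is a summed per-bucket gap; both simplifications are mine, not the patent's.

```python
def profile(latency_buckets):
    """Time-distributed profile: mean read/write latency per time bucket."""
    return [sum(b) / len(b) for b in latency_buckets]

def deviates(device_profile, group_profile, threshold):
    """True when the device's profile differs from the characteristic
    profile of its representative group by more than the predetermined
    deviance threshold -- the claim's cue to redirect read/write requests
    away from the probably-failing device."""
    gap = sum(abs(d - g) for d, g in zip(device_profile, group_profile))
    return gap > threshold
```

Usage: build `profile` for one device from its own samples, compare it against the group profile built from previously collected data for similar devices.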

US Pat. No. 11,030,037

TECHNOLOGY SYSTEM AUTO-RECOVERY AND OPTIMALITY ENGINE AND TECHNIQUES

Capital One Services, LLC...

1. An apparatus, comprising:
a memory storing programming code; and
a triage processing component, coupled to the memory and, via a communication interface, to a monitoring component that monitors operation of computer implemented processes of a network, operable to execute the stored programming code, that when executed causes the triage processing component to perform functions, including functions to:
evaluate a first process break event received from a monitoring component for a correlation to a possible cause of a potential operational breakdown of a computer process of the computer implemented processes;
based on the correlation to the possible cause of the potential operational breakdown of the computer process, identify corrective actions implementable to fix a computer implemented process exhibiting a symptom of the potential operational breakdown;
populate a risk assessment matrix with a break risk assessment value and a fix risk assessment value assigned to each of the identified corrective actions;
obtain a list of corrective actions correlated to the first process break event from a runbook, wherein the runbook includes a plurality of corrective actions that correct potential operational breakdowns of the computer implemented processes of the network; and
modify the list of corrective actions based on a rule set applied to the risk assessment matrix, wherein the modified list of corrective actions includes at least one of the identified corrective actions as an optimal corrective action.

US Pat. No. 11,030,036

EQUIPMENT FAILURE RISK DETECTION AND PREDICTION IN INDUSTRIAL PROCESS

International Business Ma...

1. A computer-implemented method of detecting equipment failure risk in an industrial process, the method performed by at least one hardware processor, comprising:
determining from a cluster of nodes, a plurality of nodes storing equipment operations data associated with an operation performed by an equipment during a time range between an installation time and a maintenance time of the equipment;
constructing a target table based on aggregated operation features determined from the plurality of nodes configured to execute a distributed processing operation in parallel, the distributed processing operation computing operation features associated with the equipment; and
performing a risk failure analysis of an instance of the equipment based on the aggregated operation features stored in the target table,
wherein the determining is performed for multiple entries in parallel as distributed parallel processing.

US Pat. No. 11,030,035

PREVENTING CASCADE FAILURES IN COMPUTER SYSTEMS

International Business Ma...

1. A method comprising:
receiving binary data that identifies multiple subcomponents in a complex stream computer system, wherein identified multiple subcomponents comprise multiple upstream subcomponents that generate multiple outputs and a downstream subcomponent that executes a downstream computational process that uses the multiple outputs, and wherein the multiple upstream subcomponents execute upstream computational processes;
determining accuracy values by assigning a determined accuracy value to each of the multiple outputs from the multiple upstream subcomponents, wherein determined accuracy values describe a confidence level of an accuracy of each of the multiple outputs;
determining weighting values by assigning a weighting value to each of multiple inputs to the downstream subcomponent, wherein determined weighting values describe a criticality level of each of the multiple inputs when executing the downstream computational process in the downstream subcomponent;
utilizing the determined accuracy values and the determined weighting values to dynamically adjust which of the multiple inputs are used by the downstream subcomponent in order to generate an output from the downstream subcomponent that meets a predefined trustworthiness level for making a first type of prediction;
determining that no variations of execution of one or more functions used by the downstream subcomponent ever produce an output that meets the predefined trustworthiness level for making the first type of prediction; and
in response to determining that no variations of execution of the one or more functions used by the downstream subcomponent ever produce the output that meets the predefined trustworthiness level for making the first type of prediction, executing, by computer hardware, a new downstream computational process that produces a different second type of prediction than that produced by the downstream subcomponent.

US Pat. No. 11,030,034

QUANTITATIVE SOFTWARE FAILURE MODE AND EFFECTS ANALYSIS

Intel Corporation, Santa...

1. A control device for performing a software failure mode and effects analysis (SW FMEA) comprising:
a processor;
memory, including instructions, which when executed by the processor, cause the processor to:
receive threshold values for safety integrity levels for a software component within a system;
monitor the software component, wherein to monitor the software component includes to monitor an interaction between the software component and at least one of: a second component or a subcomponent;
determine, based at least in part on a result of the monitoring of the software component, a preliminary risk priority number for the software component, the risk priority number based on a ranking of a possibility of failure of the software component, the ranking determined by a severity, an occurrence likelihood, and a detectability of a failure of the software component;
compare the preliminary risk priority number to the threshold values to identify a preliminary safety integrity level corresponding to the software component; and
output the preliminary safety integrity level;
apply a mitigation to the system for the software component, wherein the mitigation is at least one of: improving a code in at least one of: the software component, the second component, or the subcomponent, swapping at least a portion of at least one of: the software component, the second component or the subcomponent, rewriting at least a portion of: the software component, the second component or the subcomponent, or changing an interaction between the software component and at least one of the second component or the subcomponent; and
a display device to present, on a user interface, the preliminary safety integrity level.
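The risk priority number in this claim follows the standard FMEA formulation (severity x occurrence likelihood x detectability), so it can be sketched directly; the threshold table mapping an RPN to a safety integrity level is an invented example, since the claim receives those thresholds as input.

```python
def risk_priority_number(severity, occurrence, detectability):
    """Classic FMEA ranking; each factor is typically on a 1-10 scale."""
    return severity * occurrence * detectability

def safety_integrity_level(rpn, thresholds=((200, "high"), (50, "medium"))):
    """Compare the preliminary RPN against received threshold values to
    identify the preliminary safety integrity level (example thresholds)."""
    for limit, level in thresholds:
        if rpn >= limit:
            return level
    return "low"
```

In the claim's flow, the monitoring result feeds the three factors, and the resulting level drives which mitigation (code improvement, swap, rewrite, or interaction change) is applied.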

US Pat. No. 11,030,033

MEMORY DEVICE AND METHOD FOR HANDLING INTERRUPTS THEREOF

WINBOND ELECTRONICS CORP....

1. A memory device, comprising:
a memory cell array;
a monitoring circuit, configured to detect one or more event parameters of the memory cell array, wherein the one or more event parameters correspond to one or more interrupt events of the memory cell array;
an event-checking circuit, configured to determine whether to assert an interrupt signal according to the one or more event parameters detected by the monitoring circuit; and
a control logic, configured to control the memory cell array according to a command from a processor, wherein the control logic comprises a mode register that pre-records a plurality of operation modes of the memory cell array and a status of the interrupt event corresponding to each event parameter,
wherein in response to the event-checking circuit determining to assert the interrupt signal, the event-checking circuit directly transmits the interrupt signal to the processor via a physical interrupt pin of the memory device, so that the processor handles the one or more interrupt events of the memory device according to the interrupt signal,
wherein in response to the processor having handled each interrupt event sequentially, the processor changes the status of each interrupt event in the mode register to a normal status.

US Pat. No. 11,030,032

SYSTEM AND METHOD FOR TRANSFORMING OBSERVED METRICS INTO DETECTED AND SCORED ANOMALIES

1. A system comprising:
a normal behavior characterization circuit configured to (i) receive values for a plurality of metrics and (ii) generate, for each metric, a baseline profile corresponding to the metric that indicates normal behavior of the metric based on the received values for the metric exclusive of values for any other metrics;
a relationship data store circuit configured to store a graph as a data structure, wherein (i) nodes in the graph represent the plurality of metrics and (ii) edges in the graph represent relationships between pairs of the plurality of metrics;
an anomaly identification circuit configured to, for each metric, identify anomalies in present values of the metric in response to deviations of the present values outside the corresponding baseline profile for the metric;
an anomaly characterization circuit configured to develop a model of anomalies of each of the plurality of metrics based on the identified anomalies;
an anomaly scoring circuit configured to determine scores for anomalies identified by the anomaly identification circuit based on the anomaly models; and
a reporting circuit configured to send an alert to a designated user in response to one of the scores exceeding a threshold,
wherein the normal behavior characterization circuit is configured to update the baseline profiles in response to receiving additional metric values, and
wherein the anomaly characterization circuit is configured to update the anomaly models on a schedule.
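A minimal sketch of the baseline/identify/score pipeline, assuming a baseline profile is just the mean and standard deviation of one metric's own historical values (the patent's profiles and anomaly models are richer) and the score is the normalized deviation:

```python
from statistics import mean, stdev

def baseline(values):
    """Per-metric profile built from that metric's values only,
    exclusive of values for any other metric."""
    return mean(values), stdev(values)

def anomaly_score(value, profile):
    """Deviation of a present value outside the baseline, measured in
    standard deviations (an illustrative scoring choice)."""
    mu, sigma = profile
    return abs(value - mu) / sigma

def alert(value, profile, threshold=3.0):
    """Report to the designated user when the score exceeds a threshold."""
    return anomaly_score(value, profile) > threshold
```

The graph of metric relationships from the claim is omitted here; it would refine scores by propagating anomalies along edges between related metrics.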

US Pat. No. 11,030,031

REDUNDANT SENSOR FABRIC FOR AUTONOMOUS VEHICLES

GHOST LOCOMOTION INC., M...

1. A method for a redundant sensor fabric in an autonomous vehicle, comprising:
receiving, by a processing unit, sensor data from a first sensor of a plurality of sensors associated with a same sensing space of the autonomous vehicle;
detecting a fault associated with the first sensor;
establishing, via a switched fabric, in response to detecting the fault, a communications path between the processing unit and a second sensor of the plurality of sensors; and
receiving, by the processing unit, sensor data from the second sensor instead of the first sensor.
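The failover path can be sketched as below, assuming sensors expose a read() and a health flag; the switched-fabric hop of the claim is modeled simply as moving on to the next sensor covering the same sensing space.

```python
class Sensor:
    """Illustrative stand-in for one sensor in the redundant fabric."""
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy

    def read(self):
        if not self.healthy:
            raise IOError(f"{self.name}: fault")
        return f"data from {self.name}"

def receive(sensors):
    """Read from the first sensor for the sensing space; when a fault is
    detected, establish a path to the next redundant sensor and read
    from it instead."""
    for sensor in sensors:
        try:
            return sensor.read()
        except IOError:
            continue  # fault detected: switch fabric to the next sensor
    raise RuntimeError("all redundant sensors faulted")
```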

US Pat. No. 11,030,030

ENHANCED ADDRESS SPACE LAYOUT RANDOMIZATION

Intel Corporation, Santa...

1. An enhanced address space memory corruption detection apparatus, comprising:
memory circuitry to provide a linear address space that includes a metadata data structure;
metadata circuitry to receive a memory access request that includes a target address pointer comprising a target metadata value and target linear address; and
metadata comparison circuitry to compare the target metadata value to a metadata value stored in the metadata data structure at a location pointed to by at least a portion of the target linear address;
wherein the metadata circuitry is further to, responsive to a determination that the target metadata value and the stored metadata value are identical, grant the memory access request to allow access to the target linear address.
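A software sketch of the tagged-pointer check: the target address pointer packs a metadata value into its upper bits, and access is granted only when that value matches the stored metadata for the target linear address. The 8-bit tag / 56-bit address split and the 4 KiB metadata granule are illustrative choices, not the patent's.

```python
TAG_SHIFT = 56                      # assumed: tag lives in the top 8 bits
ADDR_MASK = (1 << TAG_SHIFT) - 1
metadata_table = {}                 # metadata data structure: slot -> value

def set_metadata(address: int, tag: int):
    metadata_table[address >> 12] = tag  # one slot per 4 KiB granule

def access(pointer: int) -> int:
    """Compare the target metadata value against the value stored at the
    location pointed to by a portion of the target linear address; grant
    the access only when they are identical."""
    tag = pointer >> TAG_SHIFT
    address = pointer & ADDR_MASK
    if metadata_table.get(address >> 12) != tag:
        raise MemoryError("metadata mismatch: possible memory corruption")
    return address  # access granted to the target linear address
```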

US Pat. No. 11,030,029

METHOD OF DETERMINING POTENTIAL ANOMALY OF MEMORY DEVICE

YANDEX EUROPE AG, Lucern...

1. A method of determining a potential anomaly of a memory device, the memory device being a hard disk drive (HDD) for processing a plurality of I/O operations, the method executable at a supervisory entity computer, the supervisory entity computer being communicatively coupled to the memory device, the method comprising:
over a pre-determined period of time:
determining a subset of input/output (I/O) operations having been sent to the memory device for processing;
applying at least one counter to determine an actual activity time of the memory device during the pre-determined period of time, the actual activity time being an approximation value representative of time the memory device took to process at least a portion of the subset of I/O operations;
applying a pre-determined model to generate an estimate of a benchmark processing time for each one of the subset of I/O operations, the applying the pre-determined model comprising:
based on a size of the sequential one of the subset of I/O operations and a pre-determined speed of execution of a type of the sequential one of the subset of I/O operations, determining an execution time for the sequential one of the subset of I/O operations;
determining if execution of the sequential one of the subset of I/O operations requires a full-cycle re-positioning of a writing head of the memory device from a position where a previous one of the subset of I/O operations terminated being recorded; and
if full-cycle re-positioning is required, adding to the execution time a pre-determined full-cycle re-positioning time to derive the benchmark processing time;
calculating a benchmark processing time for the subset of I/O operations;
generating a performance parameter based on the actual activity time and the benchmark processing time; and
based on an analysis of the performance parameter, determining if the potential anomaly is present in the memory device.
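The pre-determined model in this claim can be sketched as below. The per-type speeds and the full-cycle repositioning cost are invented constants; the claim only requires that execution time be derived from operation size and a pre-determined speed, with a repositioning surcharge added when the writing head must move.

```python
SPEED = {"read": 200.0, "write": 150.0}  # assumed bytes per millisecond
REPOSITION_MS = 8.0                       # assumed full-cycle repositioning cost

def benchmark_time(ops):
    """Benchmark processing time for the subset of I/O operations.
    Each op is (type, size_bytes, needs_repositioning): size / speed,
    plus the repositioning time when the op does not follow on from
    where the previous op terminated being recorded."""
    total = 0.0
    for kind, size, reposition in ops:
        total += size / SPEED[kind]
        if reposition:
            total += REPOSITION_MS
    return total

def performance_parameter(actual_ms, ops):
    """Ratio of actual activity time to the benchmark; values well above
    1.0 suggest a potential anomaly in the drive."""
    return actual_ms / benchmark_time(ops)
```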

US Pat. No. 11,030,028

FAILURE DETECTION APPARATUS, FAILURE DETECTION METHOD, AND NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM

YOKOGAWA ELECTRIC CORPORA...

1. A failure detection apparatus comprising:
a RAM; and
a controller configured to execute processing related to detection of a physical quantity in a predetermined sampling period;
wherein the RAM comprises a plurality of partitioned areas generated by partitioning an entire area of the RAM; and
wherein the controller is configured to execute, during each sampling period in a sequence of sampling periods, sequential failure detection on a portion of the plurality of partitioned areas during a remaining time of each sampling period, when the controller is not executing the processing in each of the sampling periods, the controller thereby executing in alternating sequence the processing related to the detection of the physical quantity and the sequential failure detection, and wherein
the sampling periods are set depending on the detected physical quantity.

US Pat. No. 11,030,027

SYSTEM FOR TECHNOLOGY ANOMALY DETECTION, TRIAGE AND RESPONSE USING SOLUTION DATA MODELING

BANK OF AMERICA CORPORATI...

1. A system for technology anomaly detection, triage, and response using solution data modeling, the system comprising:
one or more memory devices having computer readable code stored thereon;
one or more processing devices operatively coupled to the one or more memory devices, wherein the one or more processing devices are configured to execute the computer readable code to:
generate one or more solution data models comprising a plurality of asset systems and a plurality of users, wherein each of the plurality of asset systems is associated with at least one user of the plurality of users, at least a first of the plurality of asset systems is associated with at least a second of the plurality of asset systems, and the one or more solution data models are generated by:
accessing one or more authentication systems, wherein the one or more authentication systems comprise authentication information associated with the plurality of asset systems and the plurality of users;
extracting the authentication information associated with the plurality of asset systems and the plurality of users;
accessing one or more human resources systems, wherein the one or more human resources systems comprise human resources information associated with the plurality of users;
extracting the human resources information associated with the plurality of users;
accessing one or more asset management systems, wherein the one or more asset management systems comprise asset information associated with at least type and location of the plurality of asset systems;
extracting the asset information associated with the plurality of asset systems;
identifying a first set of relationships between each of the plurality of asset systems based on the extracted authentication information;
identifying a second set of relationships between each of the plurality of users and each of the plurality of asset systems based on the extracted authentication information; and
formulating the one or more solution data models based on the first set of relationships, the second set of relationships, the asset information, and the human resources information;
continuously monitor the plurality of asset systems;
detect an anomaly associated with one or more tasks associated with at least a first group of asset systems of the plurality of asset systems based on continuously monitoring the plurality of asset systems;
extract a first solution data model associated with the first group of asset systems based on detecting the anomaly associated with the one or more tasks;
identify one or more relationships associated with the first group of asset systems based on the extracted first solution data model; and
identify a point of failure associated with the anomaly and the first group of asset systems based on the one or more relationships, wherein the point of failure is associated with a first asset system of the first group of asset systems.

US Pat. No. 11,030,026

SYSTEM AND METHOD FOR ERROR DETECTION AND MONITORING OF OBJECT-ASSET PAIRS

GROUPON, INC., Chicago, ...

1. An apparatus comprising a processor and a memory, the memory comprising instructions that configure the apparatus to:
receive a plurality of first data stream sets, wherein each first data stream set of the plurality of first data stream sets is associated with a request data object of a plurality of request data objects;
receive a plurality of second data stream sets, wherein each second data stream set of the plurality of second data stream sets is associated with a network response asset of a plurality of network response assets, and wherein each network response asset of the plurality of network response assets is associated with a corresponding request data object of the plurality of request data objects;
for each object-asset pair of a plurality of object-asset pairs comprising a network response asset of the plurality of network response assets and the corresponding request data object of the plurality of request data objects that is associated with the network response asset, determine a prioritization score based on one or more prioritization parameters for the object-asset pair, wherein the one or more prioritization parameters for an object-asset pair of the plurality of object-asset pairs comprises one or more prioritization features of a requesting entity associated with the object-asset pair;
determine, based on each prioritization score for an object-asset pair of the plurality of object-asset pairs, a prioritized subset of the plurality of object-asset pairs; and
generate a monitoring system interface, wherein the monitoring system interface comprises a set of renderable data objects for each object-asset pair in the prioritized subset of the plurality of object-asset pairs.

US Pat. No. 11,030,025

MANAGING INTER-PROCESS COMMUNICATIONS IN A CONTAINERIZED APPLICATION ENVIRONMENT

VMware, Inc., Palo Alto,...

1. A method of managing inter-process communications (IPCs) in a containerized application environment, the method comprising:
identifying a generation request to generate an IPC object using a first identifier;
translating the first identifier of the IPC object to a second identifier based on a container associated with the generation request;
storing the IPC object in the memory system using the second identifier;
identifying, from an application executing in the container on a host, a request for the IPC object using the first identifier for the IPC object, wherein the first identifier is used by the application in the container and one or more other applications executing in one or more other containers on the host;
translating the first identifier to the second identifier associated with the IPC object based on the container for the application, wherein the second identifier is unique to at least the container and wherein requests from at least a portion of the one or more other containers using the first identifier are translated to one or more other unique identifiers associated with one or more other IPC objects located in the memory system with the IPC object;
accessing the IPC object in the memory system using the second identifier; and
providing the IPC object to the application.
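The identifier translation above can be sketched as a per-container namespace: the same first identifier used inside different containers maps to distinct second identifiers, so objects from different containers never collide in the shared memory system. The prefixing scheme and function names are illustrative.

```python
ipc_store = {}  # shared memory system: second identifier -> IPC object

def second_id(container: str, first_id: str) -> str:
    """Second identifier, unique to at least the container; the
    container-prefix scheme is an assumption for illustration."""
    return f"{container}:{first_id}"

def create(container: str, first_id: str, obj):
    """Translate the first identifier and store the IPC object."""
    ipc_store[second_id(container, first_id)] = obj

def lookup(container: str, first_id: str):
    """Requests using the same first identifier are translated based on
    the requesting container, reaching that container's own object."""
    return ipc_store[second_id(container, first_id)]
```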

US Pat. No. 11,030,024

ASSIGNING A SEVERITY LEVEL TO A COMPUTING SERVICE USING TENANT TELEMETRY DATA

Microsoft Technology Lice...

1. A system for determining a severity level of a computing service, the system comprising:
an electronic processor configured to
receive telemetry data associated with one or more tenants of an online service, the online service providing services through a plurality of computing services;
calculate, based on the telemetry data, a number of accesses of each of the plurality of computing services during a predetermined time period by counting each access by a unique user during the predetermined time period;
for each of the plurality of computing services, assign a severity level to each computing service based on the number of accesses of each computing service during the predetermined time period relative to the number of accesses of another computing service included in the plurality of computing services during the predetermined time period; and
in response to detecting a failure of one of the plurality of computing services, initiate a response to the failure based on the severity level assigned to the one of the plurality of computing services,
wherein the electronic processor is configured to assign the severity level to each computing service based on the number of accesses of each computing service during the predetermined time period relative to the number of accesses of another computing service included in the plurality of computing services during the predetermined time period by calculating one or more percentile thresholds based on the number of accesses of each of the plurality of computing services and assigning the severity level to each of the plurality of computing services based on a comparison of the number of accesses of each of the plurality of computing services to the one or more percentile thresholds.
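The percentile-threshold assignment can be sketched as below; the 50th/90th percentile cut points and the sev1-sev3 labels are illustrative choices, since the claim leaves the number of thresholds open.

```python
def percentile(sorted_counts, p):
    """Nearest-rank percentile over the sorted access counts."""
    k = max(0, int(round(p / 100 * len(sorted_counts))) - 1)
    return sorted_counts[k]

def assign_severity(access_counts):
    """Assign each computing service a severity level from its unique-user
    access count relative to the other services in the period."""
    ranked = sorted(access_counts.values())
    p50, p90 = percentile(ranked, 50), percentile(ranked, 90)
    levels = {}
    for service, count in access_counts.items():
        if count >= p90:
            levels[service] = "sev1"   # most heavily used services
        elif count >= p50:
            levels[service] = "sev2"
        else:
            levels[service] = "sev3"
    return levels
```

On a detected failure, the claim's response would then be keyed off the assigned level, e.g. paging on a sev1 service but only ticketing a sev3.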

US Pat. No. 11,030,023

PROCESSING SYSTEM WITH INTERSPERSED PROCESSORS DMA-FIFO

Coherent Logix, Incorpora...

1. An apparatus, comprising:
a plurality of processors configured to execute at least one program;
a plurality of memories, wherein each memory of the plurality of memories is coupled to a subset of the plurality of processors;
a plurality of configurable communication elements, wherein each configurable communication element of the plurality of configurable communication elements includes a plurality of communication ports, a first memory, and a routing engine;
wherein to execute the at least one program, each processor of the subset of the plurality of processors is configured to communicate with at least one other processor of the subset of the plurality of processors via a particular configurable communication element of the plurality of configurable communication elements; and
a plurality of direct memory access (DMA) engines, wherein each DMA engine of the plurality of DMA engines is configured to transfer data between selected memories of the plurality of memories and selected configurable communication elements of the plurality of configurable communication elements;
a controller circuit configured to coordinate operation of a first subset of DMA engines of the plurality of DMA engines independent of the plurality of processors; and
wherein the first subset of DMA engines is configured to:
receive respective input data streams and generate a single output data stream with a first data rate by storing the respective input data streams with a second data rate in a buffer located in a portion of a particular memory of the plurality of memories, wherein the first data rate is greater than the second data rate; and
in response to a determination that a particular DMA engine of the plurality of DMA engines cannot receive the single output data stream, signal a second subset of DMA engines sending the respective input data streams to pause before continuing to send the respective input data streams, wherein the particular DMA engine is not included in the first subset of DMA engines.

US Pat. No. 11,030,022

SYSTEM AND METHOD FOR CREATING AND MANAGING AN INTERACTIVE NETWORK OF APPLICATIONS

CIMPLRX CO., LTD., Seoul...

1. A method for information analysis comprising:
receiving, by a processor, a first command;
generating, by the processor, a first container in response to the first command;
invoking, by the processor, a first application for generating a first display of data in the first container;
generating, by the processor, first contextual information associated with the first container;
storing, by the processor, the first contextual information in a data store;
detecting, by the processor, interaction with the first container;
in response to the interaction, applying, by the processor, a function in relation to the data displayed in the first container;
generating, by the processor, a second container in response to applying the function;
invoking, by the processor, a second application for generating a second display of data in the second container;
generating, by the processor, second contextual information associated with the second container;
updating the data store with the second contextual information; and
linking, by the processor, the first container to the second container via a graphical link.

US Pat. No. 11,030,021

VARIABLE SELECTION OF DIFFERENT VERSIONS OF AN EVENT HANDLER

TRACELINK, INC., North R...

1. A method for variable event handling in a multi-tenant environment comprising:
receiving an event placed on an event bus in an event driven data processing system in connection with an event name, a version of a target application and an owner of a version of the target application, wherein the event corresponds to a multiplicity of different instances of a single event handler, and wherein each instance is adapted to process the event;
decoding the event to identify a version of a target application for the event;
matching the version of the target application to an end point for a particular one of the different event handlers, wherein the end point comprises a location prefix appended to the event name, the matching utilizing the event name, version of the target application and owner as a key into a routing table stored in cache memory of the event driven data processing system to identify the end point; and,
routing the event to the matched end point by inserting the event name with prefix onto the event bus.
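The routing step reduces to a table lookup keyed by the (event name, target-application version, owner) triple, yielding the location prefix of the matching handler instance. All table entries, names, and versions below are invented examples.

```python
# Routing table from the claim, held in cache memory in the real system:
# (event name, version of target application, owner) -> location prefix.
routing_table = {
    ("order.created", "2.1", "acme"): "us-east/",
    ("order.created", "3.0", "acme"): "eu-west/",
}

def route(event_name: str, version: str, owner: str) -> str:
    """Match the version to an end point and form the prefixed event
    name that is inserted back onto the event bus."""
    prefix = routing_table[(event_name, version, owner)]
    return prefix + event_name  # end point: location prefix + event name
```

This is how one logical event handler fans out into per-version instances: the same event name routes differently depending on which tenant's application version raised it.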

US Pat. No. 11,030,020

ASYNCHRONOUS HANDLING OF SERVICE REQUESTS

Oracle International Corp...

1. One or more non-transitory machine-readable media storing instructions which, when executed by one or more processors, cause execution of operations comprising:
receiving, from a requesting entity, a request comprising a function identifier and function input, the request being managed by a first thread;
responsive to receiving the request, selecting a first event handler, of a plurality of event handlers, to process the request;
translating, via the first event handler, the function identifier to a native function call; and
initiating execution of the native function call using the function input;
receiving output corresponding to the execution of the native function call;
responsive to receiving the output corresponding to the execution of the native function call, selecting a second event handler, of the plurality of event handlers, to process the output;
generating, at least in part by the second event handler, a response based on the output;
determining that the first thread managing the request has failed; and
transmitting the response to the requesting entity via a second thread.

US Pat. No. 11,030,019

DELETION OF EVENTS BASED ON A PLURALITY OF FACTORS IN A CONNECTED CAR COMPUTING ENVIRONMENT

International Business Ma...

1. A method, comprising:
maintaining, by a computational device, indications of a plurality of events associated with navigation of a plurality of vehicles in a geographical area, wherein at least one event of the plurality of events is generated by the computational device, in response to an identical automotive system being activated in at least a threshold number of different vehicles of the plurality of vehicles within a predetermined time interval; and
in response to receiving by an event deletion manager, an indication from one or more deletion determination agents that a determination cannot be made on whether to delete or not delete an event of the plurality of events, calculating an earliest decision time for the one or more deletion determination agents and requesting the one or more deletion determination agents to provide an indication of whether to delete the event by applying a relaxed criteria for deleting the event, wherein the relaxed criteria makes it more likely to allow determination of whether to indicate deletion of events in comparison to a previously applied criteria, wherein the relaxed criteria is a less stringent criteria for deletion of the event in comparison to the previously applied criteria, and wherein for a type of event generated via a delegation, the earliest decision time is derived from a location of a vehicle, based on a distance and velocity of the vehicle.

US Pat. No. 11,030,018

ON-DEMAND MULTI-TIERED HANG BUSTER FOR SMT MICROPROCESSOR

International Business Ma...

1. A computer-implemented method for breaking out of a hang condition in a processor, comprising:
determining a first plurality of actions available at each of a plurality of tiers used for breaking out of the hang condition in the processor;
after detecting the hang condition on a first thread of the processor, performing one or more actions available at a first tier of the plurality of tiers to break out of the hang condition;
after performing the one or more actions at the first tier and determining that the hang condition is still present, performing one or more actions available at one or more second tiers of the plurality of tiers to break out of the hang condition; and
upon reaching a last tier of the plurality of tiers after performing the one or more actions at the first tier and performing the one or more actions at the one or more second tiers:
determining that the hang condition is still present; and
performing a flush operation to break out of the hang condition, wherein the flush operation is one of the first plurality of actions and is performed at an initial time at the last tier of the plurality of tiers in order to break out of the hang condition.
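The tiered escalation in the claim above can be sketched as follows; the callable-based action model and all names are illustrative assumptions, not the patented implementation:

```python
def recover_from_hang(tiers, flush, hang_present):
    """Walk the tiers in order, performing each tier's recovery actions;
    if the hang survives every tier, perform the flush operation for the
    first time at the last tier to break out of the hang.

    tiers: list of lists of zero-argument action callables
    flush: zero-argument callable performing the flush operation
    hang_present: zero-argument callable probing the hang condition
    """
    for actions in tiers:
        for action in actions:
            action()
            if not hang_present():
                return action.__name__
    # last tier reached and the hang is still present: flush as final resort
    flush()
    return "flush"
```

The key property mirrored from the claim is that the flush, although one of the available actions, is only performed at the last tier after the lighter-weight actions have been exhausted.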

US Pat. No. 11,030,017

TECHNOLOGIES FOR EFFICIENTLY BOOTING SLEDS IN A DISAGGREGATED ARCHITECTURE

Intel Corporation, Santa...

1. A sled comprising:
a network interface controller;
a set of processors;
firmware that includes an operating system; and
circuitry to (i) perform, with multiple processors in the set of processors, a boot process that includes to perform a first subset of boot operations with a first processor in the set of processors, and to perform a second subset of boot operations with a second processor in the set of processors, the second subset of boot operations to include the second processor to copy an operating system boot sector into a memory of the sled, (ii) initialize the operating system present in the firmware, (iii) receive, with the network interface controller and from another sled, an assignment of a workload, and (iv) execute the assigned workload with the operating system.

US Pat. No. 11,030,016

COMPUTER SERVER APPLICATION EXECUTION SCHEDULING LATENCY REDUCTION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for assigning containers for executing functions in a software application using a server cluster, the method comprising:
creating, by a global processor of the server cluster, a first container at a first host node in the server cluster, the first container assigned for executing a first function from the software application;
intercepting, by an in-node processor of the first host node, a command from the first function for creating a second container for executing a second function;
assigning, by the in-node processor, an in-node container as the second container, the in-node container being a container executing on the first host node, wherein assigning the in-node container as the second container further comprises:
identifying, by the in-node processor, that the in-node container exists in suspended state, and in response, assigning the in-node container to the execution of the second function, the suspended state is an inactive state, wherein the in-node container is mapped to one or more resources of the first host node without the in-node container being associated with any instance of a function execution, wherein assigning the in-node container as the second container comprises:
determining, by the in-node processor, that resources available at the first host node meet resources to be used by the second function, and in response, creating the in-node container at the first host node; and
in response to the resources available at the first host node not meeting the resources to be used, the in-node processor forwards the command from the first function for creating the second container to the global processor; and
updating, by the in-node processor, a resource database that indicates that the second function is being executed in the in-node container on the first host node.

US Pat. No. 11,030,015

HARDWARE AND SOFTWARE RESOURCE OPTIMIZATION

International Business Ma...

1. A hardware and software resource optimization method comprising:
retrieving from hardware and software systems, by a processor of a hardware controller, operational parameters of said hardware and software systems;
analyzing, by said processor, said operational parameters;
determining, by said processor based on results of said analyzing, a probability of impact with respect to modified sizing requirements associated with said hardware and software systems;
generating, by said processor based on said results of said analyzing, actions comprising logical rules mapped to said operational parameters;
executing, by said processor, said actions;
determining, by said processor based on results of said execution, an actual impact with respect to executing said modified sizing requirements associated with said hardware and software systems; and
modifying, by said processor in response to results of said executing and said determining said actual impact, operational allocations of said hardware and software systems with respect to operational functionality of said hardware and software systems.

US Pat. No. 11,030,014

CONCURRENT DISTRIBUTED GRAPH PROCESSING SYSTEM WITH SELF-BALANCE

Oracle International Corp...

1. A method comprising:
each computer of a plurality of computers independently performing:
receiving a respective distributed job to process a respective data partition of a plurality of data partitions of a same application;
dividing the respective distributed job into a plurality of local computation tasks that may concurrently execute;
dispatching the plurality of local computation tasks for execution to a first subset of a local plurality of bimodal threads that are configured for a first mode that executes local computation tasks, wherein:
a second subset of the local plurality of bimodal threads are configured for a second mode that provides remote data access, and
each bimodal thread of the local plurality of bimodal threads has two modes of operation and may be reconfigured for either the first mode that executes local computation tasks or the second mode that provides remote data access;
upon completion of a local computation task of the plurality of local computation tasks, reconfiguring a bimodal thread of the local plurality of bimodal threads for either the first mode that executes local computation tasks or the second mode that provides remote data access to increase system throughput.

US Pat. No. 11,030,013

SYSTEMS AND METHODS FOR SPLITTING PROCESSING BETWEEN DEVICE RESOURCES AND CLOUD RESOURCES

Verizon Patent and Licens...

1. A method, comprising:
receiving, by a server device, a request to split processing between a user device and the server device,
wherein the processing is associated with an application installed on the user device and the server device;
determining, by the server device, a processing capability of the user device based on information in the request;
determining, by the server device, a processing capability of the server device;
identifying, by the server device, a service agreement that identifies a subscription status associated with the user device;
determining, by the server device and based on the processing capability of the user device, the processing capability of the server device, and the subscription status identified by the service agreement, a first set of processes of the application that are to be executed by the user device and a second set of processes of the application that are to be executed by the server device,
wherein the request to split processing is authorized based on the subscription status associated with the user device, and
wherein a percentage of the second set of processes of the application is determined based on the subscription status associated with the user device;
sending, by the server device, a response to the user device,
wherein the response identifies the first set of processes and instructions to permit the user device to execute the first set of processes; and
executing, by the server device, the second set of processes.
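A minimal sketch of the subscription-driven split in the claim above; the tier-to-percentage table is invented for illustration (the claim only requires that the server-side percentage be determined from the subscription status):

```python
def split_processes(processes, subscription_status):
    """Partition an application's processes into a device-side set and a
    server-side set, with the server-side share keyed off the subscription
    status. The tier percentages here are assumed values, not from the claim.
    """
    server_share = {"basic": 0.25, "standard": 0.50, "premium": 0.75}.get(
        subscription_status, 0.0)
    n_server = int(len(processes) * server_share)
    server_set = processes[:n_server]   # second set: executed by the server
    device_set = processes[n_server:]   # first set: executed by the device
    return device_set, server_set
```

A real implementation would also weigh the device's and server's measured processing capabilities, which this sketch omits.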

US Pat. No. 11,030,012

METHODS AND APPARATUS FOR ALLOCATING A WORKLOAD TO AN ACCELERATOR USING MACHINE LEARNING

Intel Corporation, Santa...

1. An apparatus for executing a workload, the apparatus comprising:
a workload attribute determiner to identify a first attribute of a first workload, the first attribute representing whether the first workload would cause an accelerator to use at least a threshold amount of memory, the workload attribute determiner to identify a second attribute of a second workload;
an accelerator selection processor to:
in response to the first attribute representing that the first workload would not cause the accelerator to use at least the threshold amount of memory, cause at least a portion of the first workload to be executed by a first accelerator and a second accelerator;
in response to the first attribute representing that the first workload would cause the accelerator to use at least the threshold amount of memory, cause at least the portion of the first workload to be executed by the first accelerator and cause at least the portion of the first workload to not be executed by the second accelerator; and
access respective performance metrics corresponding to execution of the first workload by the first accelerator and the second accelerator;
a neural network trainer to train a machine learning model based on an association between the first attribute and the performance metrics; and
a neural network processor to process, using the machine learning model, the second attribute to select one of the at least two accelerators to execute the second workload.
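The memory-attribute gate at the heart of the claim above can be sketched as a few lines; the function and parameter names are illustrative assumptions:

```python
def plan_execution(workload_mem, threshold, accelerators):
    """Decide accelerator placement from the workload's memory attribute:
    below the threshold, the workload portion may be executed by both the
    first and second accelerators; at or above it, only the first
    accelerator executes it and the second is excluded."""
    if workload_mem < threshold:
        return list(accelerators)       # split across both accelerators
    return list(accelerators)[:1]       # first accelerator only
```

In the claim, the performance metrics observed under these placements then become training data for a model that selects an accelerator for later workloads; that learning step is not modeled here.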

US Pat. No. 11,030,011

USER INTERFACE AND SYSTEM SUPPORTING USER DECISION MAKING AND READJUSTMENTS IN COMPUTER-EXECUTABLE JOB ALLOCATIONS IN THE CLOUD

International Business Ma...

1. A computer-implemented method of providing a user interface supporting user decision making and readjusting of computer job execution allocations, comprising:
receiving a list of computer executable jobs for execution on a computing environment;
receiving one or more user preferences and a plurality of constraints associated with the list of computer executable jobs, wherein the plurality of constraints comprises pricing and turnaround time;
generating a plurality of execution scheduling options for the list of computer executable jobs;
providing a visualization comprising at least the plurality of execution scheduling options with corresponding time and cost estimates;
providing a sandbox environment for editing one or more of the plurality of execution scheduling options, wherein a user may use the sandbox to explore variations in time and cost by reconfiguring the scheduling options;
receiving a user selection of an execution scheduling option from the plurality of execution scheduling options; and
iteratively performing:
monitoring execution of the computer executable jobs allocated in the execution scheduling option selected by the user;
monitoring a current resource status of the computing environment while the computer executable jobs are executing;
determining estimation of time and cost of running one or more of the computer executable jobs based on the monitoring of the execution and the current resource status;
determining uncertainty associated with the estimation of time and cost of incomplete jobs at least based on analyzing internal computing structures associated with historical jobs, wherein incomplete jobs comprise running and queued jobs;
checking the constraints against the current resource status, wherein each constraint may be applied to the current resource status to generate a warning to the user responsive to at least one of the constraints being compromised; and
allowing updating, during the execution, of the visualization associated with the generated plurality of execution scheduling options and scheduling options stored by the user using the sandbox environment with updates to the plurality of execution scheduling options based on the estimation of time and cost and the uncertainty of remaining incomplete jobs.

US Pat. No. 11,030,010

PROCESSING STORAGE MANAGEMENT REQUEST BASED ON CURRENT AND THRESHOLD PROCESSOR LOAD USING REQUEST INFORMATION

HITACHI, LTD., Tokyo (JP...

1. An information processing apparatus, comprising:
a computer resource including a processor and a memory;
a storage device coupled to the processor,
wherein the memory stores a first table indicating information of a plurality of requests, the information of each request including:
a request identification (ID), a request type, an object type, an object ID, a related object type indicating another object having a predetermined relationship with the object type, a related object ID indicating another object ID having a predetermined relationship with the object ID, information indicating one or more other request IDs required to be processed prior to the request ID, and load information indicating a load on the processor for processing the request ID,
wherein the memory stores instructions that, when executed by the processor, configure the processor to:
receive a request to be processed to manage the storage device, and store information of the request in the first table, the information of the request including the request identification (ID), object type, object ID, related object type, related object ID, information indicating one or more other request IDs required to be processed prior to the request ID, and load information of the request,
determine the information indicating one or more other request IDs for each of the requests in the first table based on a predefined dependence relationship among the respective object types of the requests in the first table,
determine an unused load capacity of the processor based on a current load of the processor and a predetermined threshold load of the processor, and
process each of the requests in the first table having load information less than the unused load capacity of the processor in descending order of the load information except for a respective request having the information indicating one or more other request IDs required to be processed prior to the respective corresponding request ID,
wherein the object type indicates a storage pool or a storage volume and the related object type indicates a storage pool or a storage volume, and
wherein the processor is configured to process each of the requests in the first table upon determining the unused load capacity of the processor is greater than a first threshold value, which is different than the predetermined threshold load of the processor.
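The scheduling rule in the claim above — process requests in descending order of load, admitting only those that fit in the processor's unused load capacity and whose prerequisite requests are already done — can be sketched as follows; the dict field names are illustrative assumptions:

```python
def select_requests(requests, completed, current_load, threshold_load):
    """Return IDs of requests to process now: descending load order, each
    request's load must be less than the unused load capacity, and any
    request whose prerequisite request IDs are not yet completed is skipped.

    requests: list of {"id": str, "load": number, "prereqs": [ids]}
    completed: set of already-processed request IDs
    """
    unused = threshold_load - current_load
    runnable = []
    for req in sorted(requests, key=lambda r: r["load"], reverse=True):
        if req["id"] in completed:
            continue
        if req["load"] < unused and all(p in completed for p in req["prereqs"]):
            runnable.append(req["id"])
    return runnable
```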

US Pat. No. 11,030,009

SYSTEMS AND METHODS FOR AUTOMATICALLY SCALING COMPUTE RESOURCES BASED ON DEMAND

ATLASSIAN PTY LTD., Sydn...

1. A computer implemented method for scaling compute resources in a compute group including one or more compute resources, the method comprising:
determining an amount of compute capacity required to complete one or more job requests;
determining an amount of allocable compute capacity available on the compute resources in the compute group;
calculating a utilization of the compute group based on the amount of required compute capacity and the amount of allocable compute capacity;
determining that the utilization of the compute group either falls above a first threshold value or below a second threshold value, the first threshold value higher than the second threshold value;
upon determining that the utilization falls above the first threshold value:
calculating a first scaling delta using the first threshold value and the utilization to determine a number of additional compute resources required to bring the utilization of the compute group below the first threshold value; and
instantiating the number of additional compute resources into the compute resources in the compute group to bring the utilization of the compute group below the first threshold value by instructing a resource provider to start up the number of additional compute resources in the compute group; and
upon determining that the utilization of the compute group falls below the second threshold value:
calculating a second scaling delta using the second threshold value and the utilization to determine a number of excess compute resources which, when removed from the compute resources, would bring the utilization of the compute group above the second threshold value; and
removing the number of excess compute resources from the compute resources in the compute group to bring the utilization of the compute group above the second threshold value by instructing the resource provider to terminate the number of excess compute resources in the compute group.
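The utilization test and the two scaling deltas in the claim above reduce to a small arithmetic exercise. In this sketch, `per_resource` (the capacity each compute resource contributes) and the threshold defaults are assumptions used to make the math concrete:

```python
import math

def scaling_decision(required, allocable, per_resource, hi=0.8, lo=0.3):
    """Compute group utilization and return a scaling action with its delta:
    instantiate enough resources to bring utilization back below `hi`, or
    terminate enough to bring it back above `lo`."""
    utilization = required / allocable
    if utilization > hi:
        # smallest n with required / (allocable + n * per_resource) <= hi
        n = math.ceil((required / hi - allocable) / per_resource)
        return ("instantiate", n)
    if utilization < lo:
        # smallest n with required / (allocable - n * per_resource) >= lo
        n = math.ceil((allocable - required / lo) / per_resource)
        return ("terminate", n)
    return ("none", 0)
```

For example, with 90 units required against 100 allocable and 10 units per resource, utilization is 0.9, and adding two resources brings it to 90/120 = 0.75, below the 0.8 threshold.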

US Pat. No. 11,030,008

CONTROLLER, MEMORY SYSTEM INCLUDING THE CONTROLLER, AND MEMORY ALLOCATION METHOD OF THE MEMORY SYSTEM

SK hynix Inc., Icheon-si...

1. A controller comprising:
an allocation manager configured to determine whether a host identification (ID) output from a host is an allocable ID;
an address manager configured to perform an allocation operation using the host ID to select logical blocks corresponding to the host ID when the host ID is received from the allocation manager, and output an address of the logical blocks as an allocation address; and
a map table component configured to store a map table in which logical block addresses and physical block addresses are respectively mapped, select a logical block address corresponding to the allocation address, and output the physical block address mapped to the selected logical block address.
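A toy model of the three claimed components — allocation manager, address manager, and map table — chained together; all data structures and names are invented stand-ins:

```python
class Controller:
    """Sketch of the claimed controller: the allocation manager checks the
    host ID, the address manager selects logical blocks for it, and the
    map table resolves logical block addresses to physical block addresses.
    """
    def __init__(self, allocable_ids, host_to_lbas, map_table):
        self.allocable_ids = allocable_ids  # allocation manager state
        self.host_to_lbas = host_to_lbas    # address manager state
        self.map_table = map_table          # LBA -> PBA mapping

    def allocate(self, host_id):
        if host_id not in self.allocable_ids:      # allocation manager
            return None
        lbas = self.host_to_lbas[host_id]          # address manager
        return [self.map_table[lba] for lba in lbas]  # map table component
```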

US Pat. No. 11,030,007

MULTI-CONSTRAINT DYNAMIC RESOURCE MANAGER

WESTERN DIGITAL TECHNOLOG...

1. An arrangement configured to couple to a host, the arrangement comprising: a controller comprising:
a removable physical storage unit comprising one or more buffers;
random access memory (RAM); and
a multi-constraints dynamic resource manager module configured to control both software clients and hardware clients, wherein the multi-constraints dynamic resource manager module is connected to the removable physical storage unit, and wherein the multi-constraints dynamic resource manager module is configured to:
allocate and deallocate resources selected from the group comprising the RAM, data buffers for reads and writes, descriptors, flow control counters, dynamic buffers, and time slots; and
allocate and deallocate the resources to control a power budget of the arrangement or to minimize latency of a task executing on the arrangement.

US Pat. No. 11,030,006

APPARATUS AND METHOD OF SECURELY AND EFFICIENTLY INTERFACING WITH A CLOUD COMPUTING SERVICE

Palantir Technologies Inc...

1. A method comprising:
assisting, by an intermediary device, in preparing configuration settings for a new deployment of one or more computing devices accessible by a cloud computing service;
generating, by the intermediary device, a specification according to the configuration settings, wherein the specification describes a configuration of one or more computing devices on the cloud computing service;
translating, by the intermediary device, contents of the specification into a plurality of commands that are compatible with the cloud computing service, wherein the plurality of commands include setup commands and deployment commands;
sending, by the intermediary device, one or more requests, which include the plurality of commands compatible with the cloud computing service, to the cloud computing service;
wherein the one or more requests are programmed to cause the cloud computing service to configure, according to the specification, one or more computing devices on the cloud computing service;
wherein the method is performed using one or more processors.

US Pat. No. 11,030,005

CONFIGURATION OF APPLICATION SOFTWARE ON MULTI-CORE IMAGE PROCESSOR

Google LLC, Mountain Vie...

1. A method performed by one or more computers, the method comprising:
receiving a request to compute kernel assignments for an image processing pipeline to be executed on a device having a plurality of stencil processors, wherein the image processing pipeline comprises a plurality of kernels;
generating a plurality of candidate kernel assignments, each candidate kernel assignment assigning each kernel of the image processing pipeline to a respective stencil processor of the plurality of stencil processors;
computing a respective total weight for each of the plurality of candidate kernel assignments, the total weight for each candidate kernel assignment being based on respective transfer sizes of data transferred between kernels according to the candidate kernel assignment;
selecting a candidate kernel assignment according to the respective total weights computed for each of the plurality of candidate kernel assignments; and
assigning kernels of the plurality of kernels to respective stencil processors according to the selected candidate kernel assignment.
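The candidate-enumeration and weight-minimization steps of the claim above can be sketched with brute-force search; the per-processor capacity cap and the cost model (only inter-processor transfers contribute weight) are assumptions layered on top of the claim:

```python
from collections import Counter
from itertools import product

def best_assignment(kernels, processors, transfer_size, cap):
    """Enumerate candidate kernel-to-stencil-processor assignments and pick
    the one with the smallest total weight, where transfer_size[(a, b)] is
    the data moved from kernel a to kernel b and only transfers between
    kernels on different processors count toward the weight."""
    best, best_weight = None, float("inf")
    for combo in product(processors, repeat=len(kernels)):
        if max(Counter(combo).values()) > cap:   # assumed capacity limit
            continue
        assignment = dict(zip(kernels, combo))
        weight = sum(size for (a, b), size in transfer_size.items()
                     if assignment[a] != assignment[b])
        if weight < best_weight:
            best, best_weight = assignment, weight
    return best, best_weight
```

Exhaustive enumeration is exponential in the number of kernels; it is used here only to make the selection criterion concrete.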

US Pat. No. 11,030,004

RESOURCE ALLOCATION FOR SOFTWARE DEVELOPMENT

International Business Ma...

1. A computer program product comprising a non-transitory computer readable medium having a plurality of instructions stored thereon, which, when executed by a processor, cause the processor to perform operations comprising:
accessing software development data indicative of development activities of a software development project carried out using a plurality of components, wherein the plurality of components are allocated system resources based at least in part on an anticipated component workload of a first phase of a plurality of phases of the software development project, wherein accessing the software development data includes accessing a transaction log storing transaction data for software development transactions that include instances of use of the plurality of components;
inferring a phase change of the software development project to a second phase of the plurality of phases based upon, at least in part, the software development data, including a relative frequency and nature of the various software development platform transactions determined from transaction data of the transaction log;
providing a plurality of application server profiles, wherein each profile of the plurality of application server profiles corresponds to a respective phase of the plurality of phases of the software development project and includes parameter settings for the plurality of components of a software development platform, wherein a component parameter controls allocation of system resources for the plurality of components and the parameter settings in each profile, which are based at least in part on an anticipated workload of the plurality of components during the corresponding phase;
in response to inferring the phase change of the software development project to the second phase, setting the component parameter in accordance with the parameter settings in the application server profile which corresponds to the second phase of the plurality of phases; and
allocating system resources to the plurality of components of the software development platform in accordance with the set component parameters.
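The profile-lookup step of the claim above is straightforward to sketch; the profile and component shapes are invented for illustration:

```python
def apply_phase_profile(profiles, inferred_phase, components):
    """On an inferred phase change, look up the application server profile
    for that phase and set each component's resource-allocation parameter
    from the profile's parameter settings."""
    settings = profiles[inferred_phase]
    for component in components:
        component["allocation"] = settings[component["name"]]
    return components
```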

US Pat. No. 11,030,003

METHOD AND CLOUD MANAGEMENT NODE FOR MANAGING A DATA PROCESSING TASK

TELEFONAKTIEBOLAGET LM ER...

1. A method for managing a data processing task requested from a client, the method comprising:
estimating an amount of energy needed for executing the data processing task;
determining a time period during which the data processing task should be executed;
obtaining for each one of a plurality of data centers a cost to process the data processing task on each data center and an energy cost during the determined time period, the cost to process the data processing task on a given data center comprising a product of a delay tolerance based service rate and a cost of processing resources on the given data center used to process the data processing task and the energy cost for the given data center comprising a total cost of energy consumed by the processing resources during any idle state and during execution of each job running on the given data center divided by a number of jobs running on the given data center;
selecting one of the plurality of data centers based on the obtained costs to process data processing tasks and energy costs;
scheduling execution of the data processing task on the selected one of the plurality of data centers within the determined time period; and
acquiring, through an electrical grid configured to distribute electricity from multiple energy sources, the needed energy from an energy source selected from among the multiple energy sources for use when executing the data processing task.
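The two cost formulas in the claim above — processing cost as the product of a delay-tolerance-based service rate and the resource cost, and energy cost as the total energy cost (idle plus per-job execution) divided by the number of running jobs — can be made concrete as follows; field names are invented for illustration:

```python
def pick_data_center(centers, service_rate):
    """Select the data center minimizing processing cost plus energy cost,
    per the cost model recited in the claim."""
    def cost(center):
        processing = service_rate * center["resource_cost"]
        energy = ((center["idle_energy_cost"] + sum(center["job_energy_costs"]))
                  / len(center["job_energy_costs"]))
        return processing + energy
    return min(centers, key=cost)
```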

US Pat. No. 11,030,002

OPTIMIZING SIMULTANEOUS STARTUP OR MODIFICATION OF INTER-DEPENDENT MACHINES WITH SPECIFIED PRIORITIES

International Business Ma...

1. A computer-implemented method of parallel administration of a multi-machine computing system, the method comprising:
identifying a plurality of nodes that correspond to individual machines of the multi-machine computing system;
obtaining, at the at least one of the individual machines, estimated total administration times for each of the plurality of nodes;
obtaining, at the at least one of the individual machines, administration priorities for each of the nodes;
constructing, in the at least one of the individual machines, a graph of dependencies among the nodes;
identifying, at the at least one of the individual machines, availability of a first set of administration resources to assist in administration of one or more of the individual machines;
selecting, at the at least one of the individual machines, a first set of the nodes for administration in response to the graph of dependencies, administration priorities, estimated total administration times, and availability of the first set of administration resources;
administering a first set of machines, corresponding to the first set of the nodes, in parallel using the first set of administration resources;
updating the graph of dependencies in response to administration of the first set of machines;
selecting, at the at least one of the individual machines, a subsequent set of nodes for administration in response to the updated graph of dependencies, administration priorities, estimated total administration times, and availability of a subsequent set of administration resources; and
administering a subsequent set of machines, corresponding to the subsequent set of the nodes, in parallel using the subsequent set of administration resources.
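The select-administer-update loop of the claim above amounts to batched topological scheduling over the dependency graph. In this sketch the tie-break (priority first, then longest estimated time) is an assumed heuristic, and the graph is assumed acyclic:

```python
def administration_batches(nodes, deps, priority, est_time, capacity):
    """Repeatedly select up to `capacity` ready nodes (all of whose
    dependencies are already administered), ordered by priority and then
    by estimated administration time, and administer each batch in
    parallel. deps maps node -> set of nodes it depends on."""
    done, batches = set(), []
    while len(done) < len(nodes):
        ready = [n for n in nodes
                 if n not in done and deps.get(n, set()) <= done]
        ready.sort(key=lambda n: (-priority[n], -est_time[n]))
        batch = ready[:capacity]
        batches.append(batch)       # administer this set in parallel
        done.update(batch)          # then update the dependency state
    return batches
```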

US Pat. No. 11,030,001

SCHEDULING REQUESTS BASED ON RESOURCE INFORMATION

INTERNATIONAL BUSINESS MA...

1. A method for execution by a request scheduler of a storage unit comprising multiple memory devices, the request scheduler including a processor, the method comprising:
determining resource requirements for each request of a set of incoming requests to be executed at the storage unit, wherein the set of incoming requests are received from at least one request issuer;
sending a query to a backing store of the storage unit requesting a location of data associated with each request of the set of incoming requests;
receiving location data from the backing store in response to the query;
determining current resource availability data for resources indicated in the resource requirements, the resource availability data including storage locations of currently stored data in one or more of the multiple memory devices;
determining, based on the current resource availability data, a level of contention for shared resources between the set of incoming requests and: (a) one or more scheduled requests queued by the request scheduler and not yet executed, and (b) one or more tasks that are currently being executed by the storage unit;
generating scheduling data for the set of requests based on the level of contention for shared resources, wherein generating the scheduling data includes identifying a first subset of the set of requests to be queued and a second subset of the set of requests to be immediately, simultaneously executed based on determining that a sum of portions of bandwidth resources, memory resources and processor resources required by the second subset of the set of requests does not exceed currently available bandwidth resources, memory resources, and processing resources;
adding the first subset of the set of requests to a queue maintained by the request scheduler in response to the scheduling data indicating the first subset of the set of requests be queued for execution;
facilitating execution of the set of requests within the storage unit in accordance with the scheduling data by facilitating immediate, simultaneous execution of the second subset of the set of requests and by facilitating serial execution of the first subset of the set of requests; and
removing the first subset of the set of requests from the queue in response to facilitating execution of the first subset of the set of requests.

US Pat. No. 11,030,000

CORE ADVERTISEMENT OF AVAILABILITY

INTEL CORPORATION, Santa...

1. A processor comprising:
a plurality of cores including at least a first and a second core;
the first core comprising:
performance monitoring circuitry to monitor performance of the first core,
core-to-core offload circuitry to:
determine an offload availability status of the first core based at least in part on values stored in the performance monitoring circuitry, and
transmit an availability indication to the second core of an availability of the first core to act as a helper core to perform one or more tasks on behalf of the second core based upon the determined offload availability status of the first core,
execution circuitry to execute decoded instructions of the one or more tasks of the second core; and
the second core comprising:
execution circuitry to execute decoded instructions of the one or more tasks of the second core, and
an offload phase tracker to maintain status information about at least an availability of the first core to act as a helper core.

US Pat. No. 11,029,999

LOTTERY-BASED RESOURCE ALLOCATION WITH CAPACITY GUARANTEES

Amazon Technologies, Inc....

1. A system, comprising:
a pool of compute resources of a multi-client provider network; and
one or more computing devices configured to implement a capacity management system that schedules jobs in the pool of compute resources of the multi-client provider network, wherein the capacity management system is configured to:
receive a job request from a first client of the provider network, wherein a first capacity guarantee with respect to a first quantity of one or more slots in the pool of compute resources of the provider network is associated with the first client, and wherein a second capacity guarantee with respect to a second quantity of one or more slots in the pool of compute resources of the provider network is associated with a second client;
determine that the first quantity of one or more slots associated with the first capacity guarantee for the first client are in use by one or more compute resources of the provider network executing one or more jobs requested by the first client and initiated prior to receiving the job request;
determine that the second quantity of one or more slots associated with the second capacity guarantee for the second client are not all in use by one or more compute resources of the provider network executing one or more jobs requested by the second client, and comprises an available slot;
allocate the available slot of the second quantity of one or more slots associated with the second capacity guarantee for the second client to the job request from the first client, wherein the available slot of the second quantity of one or more slots associated with the second capacity guarantee for the second client is allocated based at least in part on a lottery algorithm; and
perform the job request from the first client using a compute resource of the provider network in the available slot of the second quantity of one or more slots associated with the second capacity guarantee for the second client.
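The claimed lottery step can be illustrated with a short sketch: each client with unused guaranteed slots holds one ticket per free slot, and a ticket is drawn at random. All names (`allocate_slot`, `guarantees`, `in_use`) are illustrative, not from the patent.

```python
import random

def allocate_slot(requesting_client, guarantees, in_use, rng=None):
    """Pick a free guaranteed slot from another client via a lottery.

    guarantees: {client: number of guaranteed slots}
    in_use:     {client: slots currently used by that client}
    Each other client holds one "ticket" per free guaranteed slot;
    one ticket is drawn uniformly at random (a minimal lottery).
    """
    rng = rng or random.Random(0)
    tickets = []
    for client, quota in guarantees.items():
        if client == requesting_client:
            continue
        free = quota - in_use.get(client, 0)
        tickets.extend([client] * max(free, 0))
    if not tickets:
        return None  # no spare guaranteed capacity anywhere
    winner = rng.choice(tickets)
    in_use[winner] = in_use.get(winner, 0) + 1
    return winner

# First client's guarantee is exhausted; a free slot guaranteed
# to the second client is borrowed via the lottery.
guarantees = {"client_a": 2, "client_b": 2}
in_use = {"client_a": 2, "client_b": 1}
print(allocate_slot("client_a", guarantees, in_use))  # -> client_b
```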

US Pat. No. 11,029,998

GROUPING OF TASKS FOR DISTRIBUTION AMONG PROCESSING ENTITIES

INTERNATIONAL BUSINESS MA...

1. A method, comprising:maintaining a plurality of processing entities;
generating a plurality of task control block (TCB) groups, wherein each of the plurality of TCB groups are restricted to one or more different processing entities of the plurality of processing entities;
assigning a TCB to one of the plurality of TCB groups, at TCB creation time, wherein TCBs assigned to a TCB group reserve a common set of locks, wherein processing speed of TCBs assigned to the TCB group improves up to a predetermined number of processing entities allocated for processing of the TCBs in the TCB group, and processing speed of TCBs assigned to the TCB group slows down if a greater number of processing entities beyond the predetermined number of processing entities are allocated for processing the TCBs assigned to the TCB group, and wherein no more than the predetermined number of processing entities are allocated for processing the TCBs assigned to the TCB group;
indicating, for the TCB, a primary processing entities group and a secondary processing entities group; and
in response to determining that the secondary processing entities group has processing cycles available for processing additional TCBs, moving the TCB from the primary processing entities group to the secondary processing entities group for processing.
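The final limitation — moving a TCB to its secondary group when that group has spare cycles — can be sketched as follows; the group dictionaries and field names are illustrative assumptions.

```python
def dispatch_tcb(tcb, primary, secondary):
    """Move a TCB from its primary to its secondary processing-entities
    group when the secondary has processing cycles available;
    otherwise it stays in the primary group."""
    if secondary["available_cycles"] > 0:
        primary["tcbs"].remove(tcb)
        secondary["tcbs"].append(tcb)
        return "secondary"
    return "primary"

primary = {"tcbs": ["tcb-1"], "available_cycles": 0}
secondary = {"tcbs": [], "available_cycles": 5}
print(dispatch_tcb("tcb-1", primary, secondary))  # -> secondary
print(secondary["tcbs"])                          # -> ['tcb-1']
```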

US Pat. No. 11,029,997

ENTERING PROTECTED PIPELINE MODE WITHOUT ANNULLING PENDING INSTRUCTIONS

Texas Instruments Incorpo...

1. A method for executing a plurality of instructions by a processor, the method comprising:receiving a first instruction for execution on an instruction execution pipeline, wherein the instruction execution pipeline is in a first execution mode, and wherein the first instruction is configured to utilize a first memory location;
setting a lifetime tracking value of the first instruction based on an expected time for the execution of the first instruction;
beginning the execution of the first instruction on the instruction execution pipeline;
receiving an execution mode instruction to switch the instruction execution pipeline to a second execution mode;
adjusting the lifetime tracking value based on the received execution mode instruction;
switching the instruction execution pipeline to the second execution mode based on the received execution mode instruction;
receiving a second instruction for execution on the instruction execution pipeline, the second instruction configured to utilize the first memory location;
based on the lifetime tracking value having been adjusted, determining that the first instruction and the second instruction utilize the first memory location; and
stalling the execution of the second instruction based on the determining.
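The mechanism above — adjusting a lifetime tracking value on a mode switch instead of annulling pending instructions, then stalling a later instruction that reuses the same memory location — can be modeled in a toy form. Names (`Pipeline`, `must_stall`, the one-cycle adjustment) are illustrative, not from the patent.

```python
class Pipeline:
    """Toy model of lifetime-tracking hazard detection across a mode switch."""
    def __init__(self):
        self.in_flight = {}  # memory location -> remaining lifetime (cycles)
        self.mode = "first"

    def issue(self, location, expected_cycles):
        # Lifetime set from the expected execution time of the instruction.
        self.in_flight[location] = expected_cycles

    def switch_mode(self, new_mode, extra_cycles=1):
        # Enter the new mode without annulling pending instructions:
        # stretch tracked lifetimes instead of flushing the pipeline.
        self.mode = new_mode
        for loc in self.in_flight:
            self.in_flight[loc] += extra_cycles

    def must_stall(self, location):
        # A newly received instruction stalls while a pending
        # instruction still claims the same memory location.
        return self.in_flight.get(location, 0) > 0

    def tick(self):
        self.in_flight = {l: c - 1 for l, c in self.in_flight.items() if c > 1}

p = Pipeline()
p.issue(location=0x40, expected_cycles=2)
p.switch_mode("protected")   # lifetime adjusted, instruction not annulled
print(p.must_stall(0x40))    # -> True: a second instruction would reuse 0x40
p.tick(); p.tick(); p.tick()
print(p.must_stall(0x40))    # -> False once the first instruction drains
```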

US Pat. No. 11,029,996

MULTICORE OR MULTIPROCESSOR COMPUTER SYSTEM FOR THE EXECUTION OF HARMONIC TASKS

1. A computer system comprising:a plurality of processor units that are synchronous and that access resources in a non-concurrent manner in order to execute a harmonic set of tasks each having a period that divides into the period of the following task, the tasks being numbered in such a manner that the periods are ordered in an increasing order, the shorter the period of a task, the higher its priority and the lower its number; and
a task interrupt switch device having:
an input for receiving a common time base,
outputs each connected to a respective one of the processor units,
registers each corresponding to a respective output,
reinitializable counters each corresponding to a respective output for counting a number of times task interrupts have been transmitted to said output, and
a control unit arranged:
to store in each register a value corresponding to the number (i) used for ordering the task currently executed on the processor unit connected to the corresponding output;
each time a task interrupt is received, to compare the registers between them and the counters between them in order to reissue each task interrupt, upon initiation of each time step received from the common time base, to the output for which the register has the greatest value, and in the event of equality, to the output for which the counter has the smallest value;
each time a task interrupt is reissued on one of the outputs, to increment the corresponding counter by 1;
wherein each task interrupt corresponds to execution, by the processor unit connected to the corresponding output, of a new subset of tasks of the harmonic set of tasks and each task of said new subset of tasks is executed by said processor unit to completion according to the order associated with the new subset of tasks and at the end of which there is a return to executing the interrupted subset of tasks.
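The selection rule of the claimed control unit — reissue the interrupt to the output whose register holds the greatest task number, breaking ties by the smallest counter, then increment that counter — reduces to a short comparison. This sketch uses illustrative list names.

```python
def pick_output(registers, counters):
    """Select the output for the next task interrupt.

    registers[i] holds the number of the task currently executing on the
    processor unit at output i (a higher number means a longer period and
    lower priority), so the interrupt preempts the lowest-priority unit;
    ties are broken by the smallest reissue counter.
    """
    best = max(range(len(registers)),
               key=lambda i: (registers[i], -counters[i]))
    counters[best] += 1  # one more interrupt reissued on this output
    return best

registers = [3, 5, 5]  # task numbers currently executing, per output
counters = [4, 2, 1]   # interrupts already reissued, per output
print(pick_output(registers, counters))  # -> 2 (tie on register, fewer reissues)
```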

US Pat. No. 11,029,995

HARDWARE TRANSACTIONAL MEMORY-ASSISTED FLAT COMBINING

Oracle International Corp...

1. A method, comprising:performing, by one or more computing devices:
beginning execution of a multithreaded application that comprises a plurality of operations targeting a concurrent data structure, wherein the concurrent data structure is accessible by a plurality of threads of the multithreaded application to apply the plurality of operations to the concurrent data structure;
attempting, by a given thread of the plurality of threads, execution of a given operation of the plurality of operations using a hardware transaction, wherein said attempting is performed prior to adding a descriptor of the given operation to a set of published operations to be applied to the concurrent data structure;
in response to a failure of said attempted execution of the given operation:
adding, by the given thread, the descriptor of the given operation to the set of published operations;
selecting, by the given thread, a subset of operations whose descriptors are included in the set of published operations to execute, wherein the subset comprises the given operation and one or more other operations of the plurality of operations, wherein a descriptor of at least one of the one or more other operations was added to the set of published operations by a different thread of the plurality of threads; and
executing, by the given thread, the selected subset of operations using one or more hardware transactions, wherein said executing comprises applying the selected subset of operations to the concurrent data structure.
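The flow above — try the operation transactionally first, and only on failure publish a descriptor and combine published operations — can be sketched with the hardware transaction replaced by a try-lock, a software stand-in; the class and field names are illustrative.

```python
import threading

class FlatCombiningStack:
    """Sketch of HTM-assisted flat combining on a shared list; a try-lock
    stands in for the hardware transaction."""
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []
        self._published = []  # descriptors of operations awaiting a combiner

    def push(self, value):
        # Fast path: attempt the operation "transactionally" before
        # publishing any descriptor for it.
        if self._lock.acquire(blocking=False):
            try:
                self._items.append(value)
                return
            finally:
                self._lock.release()
        # Slow path: publish the descriptor, then try to become the
        # combiner and apply a batch of published operations.
        self._published.append(value)
        with self._lock:
            batch, self._published = self._published, []
            self._items.extend(batch)  # own op plus other threads' ops

s = FlatCombiningStack()
for v in (1, 2, 3):
    s.push(v)
print(s._items)  # -> [1, 2, 3]
```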

US Pat. No. 11,029,994

SERVICE CREATION AND MANAGEMENT

1. A system comprising:a processor; and
a memory that stores computer-executable instructions that, when executed by the processor, cause the processor to perform operations comprising
detecting, at a service control that controls a service, a capacity event associated with the service,
determining, based on a recipe and based on the capacity event, resources that are needed to provide functionality of a scaled version of the service, the resources comprising a virtual machine,
instructing an infrastructure control to instantiate the virtual machine, wherein the scaled version of the service comprises a service component that provides the functionality of the scaled version of the service, and wherein the virtual machine is instantiated to host the service component, and
loading, by the service control, the service component to the virtual machine.

US Pat. No. 11,029,993

SYSTEM AND METHOD FOR A DISTRIBUTED KEY-VALUE STORE

Nutanix, Inc., San Jose,...

1. An apparatus comprising a processor having programmed instructions to:deploy a container instance providing a compute resource to access metadata indicating a storage location of an object in a distributed storage platform;
deploy a virtual node implementing a key-value range of a key-value store, wherein the virtual node stores the metadata, wherein the container instance includes the virtual node;
receive a request to read the object, wherein the request includes the key-value range;
access, from the virtual node, the metadata using the container instance; and
provide, to an object controller, the metadata comprising the key-value range, wherein the object controller accesses, via a controller virtual machine (CVM) of a CVM cluster, the object in the storage location according to the metadata, wherein each CVM of the CVM cluster runs a subset of a distributed operating system for accessing data in the distributed storage platform and has the key-value range.

US Pat. No. 11,029,992

NONDISRUPTIVE UPDATES IN A NETWORKED COMPUTING ENVIRONMENT

International Business Ma...

1. A computer-implemented method for facilitating nondisruptive maintenance on a virtual machine (VM) in a networked computing environment, comprising:creating, in response to a receipt of a request to implement an update on an active VM, a copy of the active VM, wherein the copy is a snapshot VM;
installing, while saving any incoming changes directed to the active VM to a storage system, the update on the snapshot VM;
applying, when the update on the snapshot VM is complete, the saved incoming changes on the snapshot VM until the saved incoming changes are completely applied to the snapshot VM such that the active VM and the snapshot VM are running in parallel; and
switching, in response to the active VM and the snapshot VM running in parallel, from the active VM to the snapshot VM so the snapshot VM becomes a new active VM and the active VM becomes an inactive VM.

US Pat. No. 11,029,991

DISPATCH OF A SECURE VIRTUAL MACHINE

INTERNATIONAL BUSINESS MA...

1. A computer implemented method comprising:receiving, by a hypervisor that is executing on a host server, a request to dispatch a virtual machine;
based on a determination that the virtual machine is a secure virtual machine, preventing the hypervisor from directly accessing any data of the secure virtual machine; and
performing by a secure interface control of the host server:
determining a security mode of the virtual machine;
based on the security mode being a first mode, loading a virtual machine state from a first state descriptor for the virtual machine, the first state descriptor stored in a non-secure portion of memory; and
based on the security mode being a second mode, loading the virtual machine state from a second state descriptor for the virtual machine, the second state descriptor stored in a secure portion of the memory, and wherein the first state descriptor includes a pointer to the second state descriptor.
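The two-mode dispatch above can be sketched directly: in the first mode the state is loaded from a descriptor in non-secure memory, and in the second mode that descriptor's pointer is followed into secure memory. Dictionary layout and key names are illustrative assumptions.

```python
def load_state(security_mode, non_secure_memory, secure_memory):
    """Load the VM state from the descriptor selected by the security mode."""
    first_descriptor = non_secure_memory["state_descriptor"]
    if security_mode == "first":
        # First mode: state comes straight from non-secure memory.
        return first_descriptor["vm_state"]
    # Second mode: the first descriptor holds a pointer to the
    # second state descriptor in the secure portion of memory.
    second_descriptor = secure_memory[first_descriptor["secure_ptr"]]
    return second_descriptor["vm_state"]

non_secure = {"state_descriptor": {"vm_state": "plain-state",
                                   "secure_ptr": 0x1000}}
secure = {0x1000: {"vm_state": "secure-state"}}
print(load_state("first", non_secure, secure))   # -> plain-state
print(load_state("second", non_secure, secure))  # -> secure-state
```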

US Pat. No. 11,029,990

DELIVERING A SINGLE END USER EXPERIENCE TO A CLIENT FROM MULTIPLE SERVERS

Microsoft Technology Lice...

1. A remote server computer configured to provide a hosted virtual desktop infrastructure (VDI), comprising:a plurality of physical graphics processing units (GPUs);
a first host partition configured as a graphics server having access to the physical GPUs, wherein the first host partition is configured to render graphics, capture the graphics, and encode the graphics, and to provide a first interface to the physical GPUs; and
a first guest partition configured as a first virtual machine, wherein the first virtual machine includes a first guest operating system (OS) and a first virtual graphics processing unit (vGPU), wherein the first vGPU is configured to provide a second interface between the first guest OS and the physical GPUs to share physical GPU resources in a hosted multi-user environment, and wherein the first guest partition is configured to effectuate a virtual desktop for a coupled remote client computer, wherein the first guest partition is configured to communicate with the first host partition.

US Pat. No. 11,029,989

DISTRIBUTED NOTEBOOK KERNELS IN A CONTAINERIZED COMPUTING ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method, comprising:executing, using computer hardware, a notebook server in a first container, wherein the notebook server is configured to communicate with a gateway in a second container;
in response to a request for a kernel from the notebook server, the gateway requesting, using the computer hardware, a new container including the kernel from a container manager;
instantiating, using the computer hardware, the new container including the kernel within a selected computing node of a plurality of computing nodes;
publishing, using the computer hardware, communication port information for the new container to the gateway;
exchanging electronic messages, using the computer hardware, between the notebook server and the kernel through the gateway using the communication port information for the new container;
wherein in response to receiving a lifecycle management message for the kernel from the notebook server, the gateway sending the lifecycle management message to a communication port of a kernel controller; and
the kernel controller, in response to the lifecycle management message, performing a signal write to a process within the new container executing the kernel.

US Pat. No. 11,029,988

STORAGE RESERVATION POOLS FOR VIRTUAL INFRASTRUCTURE

VMWARE, INC., Palo Alto,...

1. A method for a computing system to allocate storage from a storage reservation pool to virtual disks of virtual machines, the method comprising:assigning a quota on available space from the storage reservation pool;
adding the virtual disks to the storage reservation pool;
for each virtual disk belonging to the storage reservation pool:
allocating a first epoch specific storage space to the virtual disk based on the available space in the storage reservation pool, wherein the first epoch specific storage space corresponds to a first epoch of time, and the first epoch of time occurs prior to a second epoch of time;
creating a memory map for the virtual disk to track space used by the virtual disk;
for every write request to the virtual disk during the first epoch:
updating the memory map with information about the write request, including a write location and an amount of data written;
based on the memory map and without querying the storage reservation pool, determining if the used space consumed by the virtual disk is greater than a threshold of the first epoch specific storage space; and
when the used space is greater than the threshold of the first epoch specific storage space:
predicting additional space that will be consumed by other writes for the remainder of the first epoch to the virtual disk;
determining if the additional space is available from the storage reservation pool; and
when the additional space is available from the storage reservation pool:
increasing the first epoch specific storage space by the additional space; and
proceeding with the write request to the virtual disk.
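The per-write bookkeeping above — update the memory map, check used space against a threshold locally without querying the pool, and grow the epoch-specific allocation when needed — can be modeled in a toy class. The 80% threshold and the "grow by one epoch's worth" prediction are illustrative choices, not from the patent.

```python
class VirtualDisk:
    """Toy model of epoch-specific space tracking for one virtual disk."""
    def __init__(self, pool, epoch_space, threshold=0.8):
        self.pool = pool                # shared reservation pool state
        self.epoch_space = epoch_space  # space granted for this epoch
        self.threshold = threshold
        self.memory_map = {}            # write location -> bytes written

    def write(self, location, nbytes):
        # Track the write location and amount in the memory map.
        self.memory_map[location] = self.memory_map.get(location, 0) + nbytes
        used = sum(self.memory_map.values())  # no pool query needed
        if used > self.threshold * self.epoch_space:
            # Predict additional space for the remainder of the epoch
            # and claim it from the pool if available.
            extra = self.epoch_space
            if extra <= self.pool["available"]:
                self.pool["available"] -= extra
                self.epoch_space += extra
        return True  # proceed with the write request

pool = {"available": 100}
disk = VirtualDisk(pool, epoch_space=10)
disk.write(0, 9)  # crosses the 80% threshold of the epoch allocation
print(disk.epoch_space, pool["available"])  # -> 20 90
```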

US Pat. No. 11,029,987

RECOVERY OF STATE, CONFIGURATION, AND CONTENT FOR VIRTUALIZED INSTANCES

CYBERARK SOFTWARE LTD, P...

1. A non-transitory computer-readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform operations for enabling storage and analysis of data associated with virtual computing instances, the operations comprising:identifying a virtual computing instance;
archiving, in at least one data storage system, delta data for the virtual computing instance, the delta data representing a plurality of differences between a first state of the virtual computing instance and a second state of the virtual computing instance, wherein the plurality of differences are based on a plurality of operations performed on the virtual computing instance;
reinstantiating the virtual computing instance based on the archived delta data;
simulating, based on the reinstantiated virtual computing instance, one or more previous states of the virtual computing instance; and
performing a forensics operation on the reinstantiated virtual computing instance.

US Pat. No. 11,029,986

PROCESSOR FEATURE ID RESPONSE FOR VIRTUALIZATION

Microsoft Technology Lice...

1. A method for supporting virtualization, comprising:prior to receiving, by a physical processor, a request originated from a guest partition for a processor feature ID value:
receiving, by the physical processor, a processor feature ID value from a supervisory partition; and
storing, by the physical processor, the processor feature ID value in a hardware register accessible by the physical processor;
receiving, by the physical processor, the request originated from the guest partition for the processor feature ID value; and
in response to the request, providing, by the physical processor without intervention of the supervisory partition, the requested processor feature ID value to the guest partition from the hardware register accessible by the physical processor.
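The claimed flow is a pre-load-then-serve pattern: the supervisory partition stores the feature ID value in a hardware register once, and every later guest read is answered from that register without a trap to the supervisor. A minimal sketch, with illustrative class and method names:

```python
class PhysicalProcessor:
    """Sketch of CPUID-style virtualization via a pre-loaded register."""
    def __init__(self):
        self.feature_id_register = None  # hardware register stand-in

    def supervisor_store(self, value):
        # Done once by the supervisory partition, before any guest request.
        self.feature_id_register = value

    def guest_read_feature_id(self):
        # Served directly from the register; the supervisory partition
        # is not involved on this path.
        assert self.feature_id_register is not None, "register not pre-loaded"
        return self.feature_id_register

cpu = PhysicalProcessor()
cpu.supervisor_store(0x651)              # supervisory partition pre-loads
print(hex(cpu.guest_read_feature_id()))  # -> 0x651
```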

US Pat. No. 11,029,985

PROCESSOR VIRTUALIZATION IN UNMANNED VEHICLES

GE Aviation Systems LLC, ...

1. A processing system for an unmanned vehicle (UV), comprising:a first processing unit of an integrated circuit;
a second processing unit of the integrated circuit;
a first operating system provisioned using the first processing unit, the first operating system configured to execute a first vehicle control process;
a virtualization layer configured using at least the second processing unit; and
a second operating system provisioned from the virtualization layer, the second operating system configured to execute a second vehicle control process.

US Pat. No. 11,029,984

METHOD AND SYSTEM FOR MANAGING AND USING DATA CONFIDENCE IN A DECENTRALIZED COMPUTING PLATFORM

EMC IP Holding Company LL...

1. A method for managing data in a local data system, the method comprising:performing, by a cleaning module, a cleaning process on a local data set to generate a confidence score corresponding to the local data set, wherein the local data system comprises a local data manager, a local data source, and the cleaning module, wherein the local data set is stored on the local data system;
sending, by the cleaning module, a data set entry to a catalog node, wherein the data set entry comprises the confidence score, local data set metadata, and cleaning module metadata,
wherein the local data set metadata comprises at least one of: a geographical location of the local data source, a type of the local data source, and the local data manager;
obtaining, by a virtual machine (VM) executing on a computing node, a data computation request from a client;
in response to the data computation request:
sending a metadata request to the catalog node based on the data computation request;
obtaining a plurality of data set entries from the catalog node based on the metadata request, wherein the plurality of data set entries comprises at least the data set entry;
performing a confidence score analysis on the plurality of data set entries to obtain selected data set entries, wherein the selected data set entries comprise at least the data set entry;
initiating, on the local data system, a data computation using the local data set;
obtaining, in response to the initiating, a result from the local data system; and
sending a data computation result to the client based on the data computation, wherein the data computation result is based on the result.
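The confidence score analysis step — narrowing the catalog's data set entries to those whose cleaning-derived confidence is acceptable — can be sketched as a filter. The 0.7 threshold and entry fields are illustrative assumptions.

```python
def select_entries(entries, min_confidence=0.7):
    """Filter catalog data set entries by confidence score; entries below
    the threshold are excluded from the computation."""
    return [e for e in entries if e["confidence"] >= min_confidence]

catalog = [
    {"source": "sensor-a", "confidence": 0.95},
    {"source": "sensor-b", "confidence": 0.40},
    {"source": "sensor-c", "confidence": 0.80},
]
selected = select_entries(catalog)
print([e["source"] for e in selected])  # -> ['sensor-a', 'sensor-c']
```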

US Pat. No. 11,029,982

CONFIGURATION OF LOGICAL ROUTER

NICIRA, INC., Palo Alto,...

1. A method for configuring a plurality of host computers to implement a logical router for a logical network, the method comprising:receiving a description of a logical network that comprises a logical router;
generating configuration data for a plurality of host computers, the configuration data for each host computer of the plurality of host computers comprising (i) instructions to execute a managed physical routing element (MPRE) for implementing the logical router, the MPRE connecting to a port of a managed physical switching element (MPSE) that also executes on the host computer and (ii) a list of logical interfaces (LIFs) of the logical router, each LIF corresponding to a different segment of the logical network; and
providing the generated configuration data to the plurality of host computers.

US Pat. No. 11,029,981

TEXT RESOURCES PROCESSING IN AN APPLICATION

International Business Ma...

1. A computer-implemented method comprising:in response to initiation of a plug-in, running an updated application,
wherein information displayed on at least one text resource of a plurality of text resources in an original application of the updated application is not editable, and
wherein a subset of the plurality of text resources includes the at least one text resource;
in response to a first piece of information displayed on a text resource of the at least one text resource being changed to a second piece of information:
obtaining an ID of the text resource of the at least one text resource in the updated application; and
mapping the second piece of information to the ID of the text resource in a file corresponding to the at least one text resource in the updated application, wherein the updated application, when built, deployed and run, comprises a first variable identifier which can be enabled to make the subset of the plurality of text resources in the updated application editable from a user interface (UI) of the updated application, and further comprises a second variable identifier which can be enabled to make an entirety of the plurality of text resources in the updated application editable from the UI of the updated application;
obtaining source code of the original application;
in response to determining a type of the source code as a programming language, and in response to the information displayed on the at least one text resource in the original application being not editable:
determining an ID of the at least one text resource, and code templates related to the type of the source code which comprise code with editable text resources;
inputting the ID of each text resource of the at least one text resource into the code templates to produce replaced templates;
applying the replaced templates to produce updated source code;
saving the replaced templates in code of the plug-in; and
building and deploying the updated source code to produce the updated application.

US Pat. No. 11,029,980

CUSTOMIZABLE ANIMATIONS

salesforce.com, inc., Sa...

1. A method, comprising:receiving, via a declarative programming wizard, a request to configure a visual component of an application to present an animation based on a condition and a frequency option indicating how often the animation is to be presented when the condition is true;
generating a record comprising the condition and the frequency option;
generating application code associated with the application;
receiving, from a client device in response to execution of the application code, a request for animation information associated with the record, wherein the request for animation information includes an application identifier associated with the application;
identifying the record based at least in part on the application identifier;
generating the animation information based upon the record, wherein the animation information comprises the condition and the frequency option; and
sending the animation information to the client device, wherein the application executing on the client device causes the visual component to present the animation based on evaluation of the animation information.

US Pat. No. 11,029,979

DYNAMICALLY GENERATING CUSTOM APPLICATION ONBOARDING TUTORIALS

GOOGLE LLC, Mountain Vie...

1. A method comprising:determining, by an application executing at a computing device, an amount of time since a particular feature of the application has been utilized by a user through direct interaction with the computing device, the particular feature of the application being one of a plurality of features of the application;
determining, based at least in part on the amount of time since the particular feature has been utilized by the user through direct interaction with the computing device, whether to generate and output a customized graphical user interface that includes an onboarding tutorial associated with the particular feature of the application, the onboarding tutorial including information related to how to utilize the particular feature of the application;
responsive to determining to generate and output the customized graphical user interface that is associated with the particular feature:
determining, by the application and based on contextual information associated with the computing device, content to include in a template graphical user interface of a plurality of template graphical user interfaces for the onboarding tutorial of the application, wherein the template graphical user interface is associated with the particular feature of the application;
responsive to determining the content to include in the template graphical user interface, generating, by the application and based on the template graphical user interface and the content, the customized graphical user interface; and
outputting, by the application, for display at a display device, and for presentation to the user of the computing device, an indication of the customized graphical user interface.
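The first two determining steps amount to a time-since-last-use test gating the tutorial. A minimal sketch, where the 30-day idle threshold is an illustrative assumption:

```python
import time

def should_show_tutorial(last_used_ts, now=None, idle_threshold_s=30 * 24 * 3600):
    """Decide whether to generate the onboarding tutorial UI for a feature,
    based on the time since the user last used it directly."""
    now = time.time() if now is None else now
    return (now - last_used_ts) >= idle_threshold_s

now = 1_700_000_000
print(should_show_tutorial(now - 45 * 24 * 3600, now=now))  # -> True
print(should_show_tutorial(now - 2 * 24 * 3600, now=now))   # -> False
```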

US Pat. No. 11,029,978

INDUSTRIAL CONTROLLER AND METHOD FOR AUTOMATICALLY CREATING USER INTERFACE

SIEMENS AKTIENGESELLSCHAF...

1. An industrial controller for automatically creating a user interface, the industrial controller being configured to be set in an industrial system submodule, the industrial controller comprising:a collecting part, configured to collect information generated when the industrial system submodule is operating;
a processing part, configured to extract an operation parameter of interest from the information generated when the industrial system submodule is operating and create a user interface for reproducing the operation parameter of interest according to the operation parameter of interest extracted, the user interface including a custom configuration trigger control and a custom configuration operation interface; and
a communication part, configured to send the user interface to at least one user terminal, receive an information input from the at least one user terminal and send the information input to the processing part, wherein when the communication part receives a custom configuration input from the user interface, the communication part sends the custom configuration input to the processing part and the processing part re-creates a user interface according to the custom configuration input.

US Pat. No. 11,029,977

METHODS, SYSTEMS AND APPARATUS TO TRIGGER A WORKFLOW IN A CLOUD COMPUTING ENVIRONMENT

VMware, Inc., Palo Alto,...

1. An apparatus comprising:memory; and
at least one processor to execute computer readable instructions to implement a virtual appliance in a cloud computing environment, the virtual appliance comprising:
a graphical user interface to present a template on a display device, the template corresponding to an event topic, the event topic to trigger a workflow associated with a first workflow subscription, the first workflow subscription included in a plurality of workflow subscriptions having a hierarchy, the template to include a first field to specify whether at least one other workflow subscription associated with the event topic is to be blocked until a hierarchically dominant workflow subscription has at least one of been notified of the event topic or taken action based on the event topic; and
a subscription manager to trigger the workflow in response to an event notification associated with the event topic.

US Pat. No. 11,029,976

FACILITATING MULTI-INHERITANCE WITHIN A SINGLE INHERITANCE CONTAINER-BASED ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:facilitating multi-inheritance within a single-inheritance container-based environment, the facilitating comprising:
generating, based on a configuration file with a multi-inheritance instruction, a composited image for a new container from multiple existing images of the single-inheritance container-based environment, where each existing image of the multiple existing images comprises one or more instruction layers in a respective single-inheritance tree, with the respective single-inheritance trees being derived from a common base image, the multiple existing images being identified in the multi-inheritance instruction, and the generating comprising:
creating a composited directory file which, in part, references layers of the multiple existing images of the single-inheritance container-based environment, and associating a command instruction of the configuration file with the composited directory file;
building the composited image in association with starting the new container, the building being based on the composited directory file and associated command; and
saving the composited image to storage without further saving in the storage one or more layers of the composited image re-used from previously constructed layers of the multiple existing images in the composited image.

US Pat. No. 11,029,975

AUTOMATED CONTAINER IMAGE ASSEMBLY

International Business Ma...

1. A computer-implemented method for automatically generating a container image assembly file, the computer-implemented method comprising:assessing, by a computer, a definition of an application to determine a base container image and application libraries needed as add-ons for a container image corresponding to the application;
generating, by the computer, a library dependency graph of flow from the base container image to add-on libraries for the application;
generating, by the computer, the container image assembly file based on the library dependency graph of flow from the base container image to the add-on libraries for the application;
removing, by the computer, vulnerabilities corresponding to the add-on libraries of the container image assembly file;
ingesting, by the computer, historical container image assembly file data corresponding to the application;
building, by the computer, a knowledge base of historical library dependency data based on ingested historical container image assembly file data corresponding to the application and received user feedback corresponding to container image assembly file optimization to automatically generate the container image assembly file; and
curating, by the computer, the historical library dependency data in the knowledge base.
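The assembly-file generation above follows the dependency graph from the base image outward; a topological walk captures the idea. The Dockerfile-like output format and names are illustrative, not from the patent.

```python
def build_assembly_file(base_image, dependency_graph):
    """Emit a container image assembly file from a library dependency
    graph: each library is installed only after its dependencies."""
    lines = [f"FROM {base_image}"]
    seen = set()

    def visit(lib):
        # Depth-first walk so dependencies are emitted before dependents.
        if lib in seen:
            return
        for dep in dependency_graph.get(lib, []):
            visit(dep)
        seen.add(lib)
        lines.append(f"RUN install {lib}")

    for lib in dependency_graph:
        visit(lib)
    return "\n".join(lines)

graph = {"web-framework": ["http-lib"], "http-lib": []}
print(build_assembly_file("python:3.11-slim", graph))
```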