US Pat. No. 10,990,581

TRACKING A SIZE OF A DATABASE CHANGE LOG

Amazon Technologies, Inc....

1. A system, comprising:
a database service to store a database;
a log store to store a change log for the database;
one or more computing devices of the database service and comprising one or more processors and a memory, the memory storing instructions that, when executed by the one or more processors, cause the one or more processors to:
receive an indication of a plurality of change events that have occurred at the database;
generate a new log segment for the change log for the database based on the received indication, wherein the new log segment includes the plurality of change events; and
add the new log segment to the change log, wherein said add comprises:
retrieve metadata for an end log segment of the change log to identify, in the metadata, a cumulative size for the change log;
determine whether one or more portions of the new log segment are included in the change log based at least in part on a sequence end identifier of the end log segment and a sequence start identifier of the new log segment;
remove the one or more portions from the new log segment;
determine a new cumulative size for the change log based on a size of the new log segment and the identified cumulative size; and
store the new log segment to the change log as a new end log segment with metadata indicating the new cumulative size for the change log.
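The claimed append path (read the end segment's metadata for the cumulative size, trim any portion already covered using the sequence identifiers, then store the new end segment with updated metadata) can be sketched as a minimal Python model. All names here are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class LogSegment:
    # Each event is (sequence_number, payload); names are illustrative only.
    events: list
    cumulative_size: int = 0  # running size of the change log up to this segment

    @property
    def seq_start(self):
        return self.events[0][0]

    @property
    def seq_end(self):
        return self.events[-1][0]

def append_segment(change_log, new_events):
    """Append a segment, dropping events the end segment already contains."""
    if change_log:
        end = change_log[-1]
        # Trim portions whose sequence numbers fall at or before the end
        # segment's sequence end identifier.
        new_events = [e for e in new_events if e[0] > end.seq_end]
        prior_size = end.cumulative_size
    else:
        prior_size = 0
    if not new_events:
        return change_log
    seg = LogSegment(events=new_events)
    seg.cumulative_size = prior_size + sum(len(p) for _, p in new_events)
    change_log.append(seg)
    return change_log
```

A second append whose first event overlaps the current end segment is trimmed before the new cumulative size is computed.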

US Pat. No. 10,990,576

PROVIDING SNAPSHOTS OF JOURNAL TABLES

Snowflake Inc., San Mate...

1. A method comprising:
defining a journal table of a database, the journal table comprising a snapshot and a log table, the snapshot comprising a representation of data in the journal table at a particular time, the log table comprising a listing of requested changes to the journal table since the particular time, the snapshot stored in a first micro-partition, the log table stored in a second micro-partition;
receiving, after at least one first requested transaction has been executed, a request to execute a second requested transaction on the journal table; and
generating, prior to executing the second requested transaction, a second snapshot, the second snapshot comprising a second representation of data in the journal table after the at least one first requested transaction has been executed, the second snapshot stored in a third micro-partition different than the first micro-partition and the second micro-partition.
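The journal-table idea above (a point-in-time snapshot plus a log of changes since then, with a fresh snapshot materialized before the next transaction runs) can be illustrated with a small in-memory Python model. The class and its dict-based "micro-partitions" are stand-ins, not Snowflake's implementation:

```python
# Hypothetical in-memory model: a snapshot (rows at a point in time)
# plus a log of requested changes since that snapshot.
class JournalTable:
    def __init__(self, rows):
        self.snapshots = [dict(rows)]   # each snapshot stands in for a micro-partition
        self.log = []                   # pending (key, value) upserts

    def request_change(self, key, value):
        self.log.append((key, value))

    def execute_transaction(self, key, value):
        # Before executing the new transaction, generate a second snapshot
        # reflecting every change logged since the previous snapshot.
        merged = dict(self.snapshots[-1])
        for k, v in self.log:
            merged[k] = v
        self.snapshots.append(merged)   # stored separately, like a third micro-partition
        self.log = [(key, value)]
```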

US Pat. No. 10,990,575

REORGANIZATION OF DATABASES BY SECTIONING

1. A method of reorganizing a tablespace in a database such that rows of the tablespace are arranged in a sequence defined in a balanced tree-type clustering index of the tablespace, the method comprising:
sectioning, by a processor, the balanced tree-type clustering index and the tablespace into sections comprising logically distinct sets of data by reading only tree pages of the balanced tree-type clustering index to determine logical divisions;
allocating, by a processor, an amount of output space on a storage device for each section of the tablespace and of the balanced tree-type clustering index, to provide for each section a first range of storage space for an output clustering index for the section, and a second range of storage space for an output tablespace for the section;
scheduling, by a processor, a reorg task for each section; and
executing, by at least one processor, the scheduled reorg tasks on the sections.

US Pat. No. 10,990,573

FAST INDEX CREATION SYSTEM FOR CLOUD BIG DATA DATABASE

SYSCOM COMPUTER ENGINEERI...

1. A fast index creation system for a cloud big data database, electrically and communicatively coupled to a cloud non-relational database and a user service system, for inquiring and creating an index, comprising:
one or more hardware processors;
an application exchange module executed by the one or more hardware processors, electrically and communicatively coupled to the user service system, for receiving a query string inputted from the user service system;
a data exchange module executed by the one or more hardware processors, electrically and communicatively coupled to the cloud non-relational database, and having at least one temporary index table with field data related to record data of the cloud non-relational database;
a first processing module executed by the one or more hardware processors, electrically and communicatively coupled to the data exchange module and the application exchange module, for receiving and computing the query string to generate a query instruction, and the query instruction including at least one key field and at least one sorting condition; the first processing module computing the at least one temporary index table according to the query instruction and comparing the at least one temporary index table to the at least one key field of the query to check whether or not the at least one temporary index table has any data that is same as the at least one key field and then generating a cache index table, a create instruction, or both; wherein, if the at least one temporary index table has data that is same as the at least one key field, then the first processing module will compute the at least one temporary index table according to the query instruction to generate the cache index table; and if the at least one temporary index table does not have data that is same as the at least one key field, then the first processing module will generate the create instruction;
a second processing module executed by the one or more hardware processors, electrically and communicatively coupled to the data exchange module, the first processing module and the cloud non-relational database, for receiving the create instruction and the query instruction and computing the cloud non-relational database according to the query instruction to generate an index table; and
an integrated processing module executed by the one or more hardware processors, electrically and communicatively coupled to the first processing module, the second processing module, the data exchange module and the application exchange module, for receiving and computing the cache index table, the index table or both according to the query instruction to generate a result index table, and the result index table has field data related to record data of the cloud non-relational database and is returned to the application exchange module for prompting the result index table to the user service system to let the user service system know that the result index table has been generated.
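The first processing module's branch logic reduces to a simple decision: if the temporary index table covers the query's key field, compute a cache index table from it; otherwise, emit a create instruction for the second module. A hedged Python sketch, with all names and structures assumed for illustration:

```python
# Hypothetical sketch of the first-processing-module decision. The
# temporary index table is modeled as {field_name: [records]}.
def plan_query(temp_index, key_field, sort_key):
    """Returns (cache_index_table, create_instruction); exactly one is None."""
    if key_field in temp_index:
        # Field data matches the key field: build the cache index table
        # by applying the query's sorting condition.
        cache_index = sorted(temp_index[key_field], key=sort_key)
        return cache_index, None
    # No matching data: hand off a create instruction instead.
    return None, {"action": "create_index", "field": key_field}
```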

US Pat. No. 10,990,569

SORTING SYSTEM

1. A sorting apparatus to sort elements of a list of elements comprising:
a list communication bus to supply the list of elements;
a plurality of registers, coupled in parallel to the list communication bus, wherein a register of the plurality of registers includes: a value storage to store a value of one of the elements in the list; an input node to receive an input value exist indication;
an output node to supply an output value exist indication to indicate, when asserted, that the register is storing a value of an element of the list in the value storage;
a register stack communicatively coupled to the plurality of registers, the register stack including a register identification field indexed by respective values stored in respective value storages of the plurality of registers; and
a duplicate stack communicatively coupled to the plurality of registers and configured to store one or more sets of indications of list locations of elements in the list, each of the one or more sets associated with a particular element value.

US Pat. No. 10,990,568

MACHINE LEARNING FOR AUTOMATED MODEL GENERATION WITH CONSTRAINTS

Microsoft Technology Lice...

1. An automated method for training a machine learning system in view of a set of constraints, the method comprising:
receiving a data set for modeling from a submitting party;
generating a set of candidate pipelines for modeling the received data set based on, at least in part, an analysis of similarities between the received data set and previously modeled data sets;
identifying, by a currently executing trained judge model, an optimal pipeline of the candidate pipelines;
generating optimal results according to the identified optimal pipeline, wherein the currently executing trained judge model is updated according to an optimal trained judge model identified from a set of candidate trained judge models; and
providing the optimal results to the submitting party in response to the received data set.

US Pat. No. 10,990,566

PERSISTENT FILE LOCKS IN A STORAGE SYSTEM

Pure Storage, Inc., Moun...

1. A method, comprising:
receiving, at a storage system having a distributed file system, a request for access of a file;
locking the file, through one of a plurality of persistent file locks in the storage system, a state of each of the plurality of persistent file locks maintained within memory of the storage system and transferred from the memory to nonvolatile memory of the storage system responsive to detecting a power loss;
accessing the file, through the distributed file system; and
administering the distributed file system, at least in part, by a plurality of authorities in the storage system, with each authority of the plurality of authorities configurable to own a range of user data and use the persistent file locks, and each authority of the plurality of authorities transferable from one storage node to another storage node in the storage system;
unlocking the file, through the one of the plurality of persistent file locks.

US Pat. No. 10,990,565

SYSTEM AND METHOD FOR AVERAGE ENTROPY CALCULATION

EMC IP Holding Company, L...

1. A computer-implemented method, executed, via a hardware processor, on a computing device, comprising:
processing a data portion to divide the data portion into a plurality of candidate data chunks, wherein each of the candidate data chunks is a five-hundred-twelve byte block sector;
performing an entropy analysis on each of the plurality of candidate data chunks to generate a plurality of candidate data chunk entropies, wherein the entropy analysis on each of the plurality of candidate data chunks includes utilizing a Shannon entropy equation to generate the plurality of candidate data chunk entropies, wherein the entropy analysis on each of the plurality of data chunks determines a level of compressibility associated with a deduplication candidate from the plurality of candidate data chunks;
determining an average data chunk entropy from the plurality of candidate data chunk entropies; and
performing an entropy analysis on each of a plurality of target data chunks associated with a potential target to generate a plurality of target data chunk entropies, wherein the entropy analysis on each of the plurality of target data chunks identifies one or more sector offsets between the deduplication candidate from the plurality of candidate data chunks and a deduplication target from the plurality of target data chunks.
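The claimed flow (divide the data portion into 512-byte sectors, score each with the Shannon entropy equation, and average the per-chunk scores) maps directly onto a few lines of Python. Function names are illustrative, not from the patent:

```python
import math
from collections import Counter

SECTOR = 512  # the claim's five-hundred-twelve-byte block sector

def shannon_entropy(chunk: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (one repeated value) to 8.0 (incompressible)."""
    counts = Counter(chunk)
    n = len(chunk)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def average_entropy(data: bytes) -> float:
    """Average the candidate data chunk entropies over all sectors."""
    chunks = [data[i:i + SECTOR] for i in range(0, len(data), SECTOR)]
    entropies = [shannon_entropy(c) for c in chunks]
    return sum(entropies) / len(entropies)
```

Low average entropy suggests high compressibility, which is what makes the score useful for screening deduplication candidates.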

US Pat. No. 10,990,564

DISTRIBUTED COLUMNAR DATA SET AND METADATA STORAGE

SAS INSTITUTE INC., Cary...

1. An apparatus comprising at least one processor and a storage to store instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receive, at a node device of multiple node devices, and from a control device via a network, an instruction to the multiple node devices to persistently store a data set within at least one storage device, wherein:
the data set comprises multiple data values organized into numerous rows;
each row comprises multiple data fields that each fall within a column of multiple columns;
the numerous rows of the data set are divided into subsets of multiple rows that are distributed among multiple storage spaces provided by the multiple node devices; and
the data values of the multiple rows that are stored within the storage space provided by each node device are stored in a row-wise organization in which data values within each row are stored at adjacent storage locations;
in response to receiving the instruction to persistently store the multiple rows that are stored within the storage space provided by the node device, the at least one processor is caused to perform operations comprising:
within each collection thread of a quantity of collection threads, the at least one processor is caused to perform operations comprising:
assemble a subset of the multiple rows stored within the node device into a row group with the data values reorganized into a columnar organization;
generate row group metadata corresponding to the row group that includes, for each column, indications of the highest and lowest data values, and each unique data value; and
store the row group and row group metadata within a data buffer of a buffer queue;
operate the buffer queue as a first-in-first-out (FIFO) buffer in which the first data buffer of multiple data buffers to be filled with a row group from a collection thread becomes the first data buffer from which a row group is retrieved by an aggregation thread;
within each aggregation thread of a quantity of aggregation threads, the at least one processor is caused to perform operations comprising:
assemble a data set part from multiple row groups retrieved from multiple data buffers of the buffer queue;
generate part metadata corresponding to the data set part from multiple row group metadata corresponding to the multiple row groups, and retrieved from the multiple data buffers; and
transmit at least one of the data set part and the part metadata to the at least one storage device via the network; and
in response to each instance of retrieval of a row group and corresponding row group metadata from a data buffer of the buffer queue for use within an aggregation thread, analyze a level of availability of storage space within the node device to determine whether to dynamically adjust a quantity of data buffers of the buffer queue.
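The collection/aggregation pipeline above can be sketched single-threaded in Python: "collection" pivots row-wise rows into a columnar row group with min/max/unique metadata, a FIFO buffer queue hands groups off, and "aggregation" assembles a data set part. The threading, dynamic buffer-count adjustment, and all names are simplified assumptions:

```python
from queue import Queue

def collect(rows, columns, buffer_queue):
    """Pivot row-wise rows into a columnar row group and enqueue it with metadata."""
    group = {col: [row[i] for row in rows] for i, col in enumerate(columns)}
    # Row group metadata: highest value, lowest value, and each unique value.
    meta = {col: {"min": min(vals), "max": max(vals), "uniques": sorted(set(vals))}
            for col, vals in group.items()}
    buffer_queue.put((group, meta))

def aggregate(buffer_queue, n_groups):
    """Drain the FIFO queue, assembling a data set part and its part metadata."""
    part, part_meta = [], []
    for _ in range(n_groups):
        group, meta = buffer_queue.get()   # first buffer filled is first retrieved
        part.append(group)
        part_meta.append(meta)
    return part, part_meta
```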

US Pat. No. 10,990,563

INFORMATION READ/WRITE METHOD AND APPARATUS BASED ON BLOCKCHAIN

ADVANCED NEW TECHNOLOGIES...

1. A computer-implemented method for information read/write based on a blockchain, comprising:
requesting to create a storage space in an object storage server to store files of a project;
obtaining a storage key returned from the object storage server and corresponding to the storage space;
receiving a service request from a project member of the project, wherein the service request comprises a to-be-processed service material of the project member;
sending the storage key to the object storage server for a first verification;
upon determining that the first verification failed, receiving a new storage key returned from the object storage server, wherein the new storage key corresponds to a new storage space in the object storage server;
sending the new storage key to the object storage server for a second verification;
upon determining that the second verification succeeded, sending a service file in the to-be-processed service material to the new storage space corresponding to the new storage key in the object storage server and receiving a file storage identifier returned by the object storage server;
processing service information in the to-be-processed service material to generate a service identifier; and
writing the service identifier and the file storage identifier into the blockchain, and saving a hash value that is returned by the blockchain and that corresponds to write success information.
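The verification-and-retry flow above (verify the held storage key, fall back to a freshly issued key on failure, store the service file, then record both identifiers on the chain) can be sketched in Python. The `InMemoryStore` class and `chain_write` callable are illustrative stand-ins for the object storage server and the blockchain, not real APIs:

```python
class InMemoryStore:
    """Stand-in for the object storage server in the claim."""
    def __init__(self, valid_keys):
        self.valid_keys = set(valid_keys)
        self.files = {}

    def verify(self, key):
        return key in self.valid_keys          # first/second verification

    def issue_new_key(self):
        new_key = "key-new"                    # new key for a new storage space
        self.valid_keys.add(new_key)
        return new_key

    def put(self, key, service_file):
        file_id = f"file-{len(self.files)}"    # file storage identifier
        self.files[file_id] = (key, service_file)
        return file_id

def write_service(store, chain_write, storage_key, service_file, service_info):
    if not store.verify(storage_key):          # first verification failed
        storage_key = store.issue_new_key()    # server returns a new storage key
    file_id = store.put(storage_key, service_file)
    service_id = "svc:" + service_info         # processed service identifier
    tx_hash = chain_write(service_id, file_id) # write both identifiers to the chain
    return service_id, file_id, tx_hash
```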

US Pat. No. 10,990,562

SYSTEM AND METHOD OF ASYMMETRIC SYSTEM DESCRIPTION FOR OPTIMIZED SCHEDULING

Dell Products L.P., Roun...

1. An asymmetric information handling system comprising:
a system board having a plurality of sockets;
a plurality of processors, each processor disposed in a respective socket;
a plurality of interconnect links configured to provide point-to-point links between at least some of the sockets; and
a plurality of memories corresponding to the processors;
wherein one of the processors is operable to:
determine an arrangement of the processors, the memories, and the interconnect links wherein the arrangement includes a plurality of system localities, and wherein each system locality includes one of the processors and one of the memories;
determine a first value for each of the processors, a second value for each of the memories, and a third value for each of the interconnect links, wherein the first value of the each processor is based on a weighted processor bandwidth capacity of the each processor, wherein the second value of the each memory is based on a weighted memory bandwidth capacity of the each memory, and wherein the third value of the each interconnect link is based on a weighted interconnect link bandwidth capacity of the each interconnect link;
calculate interconnect link bandwidth values for each of the interconnect links based at least in part on the first value of the each processor, the second value of the each memory, and the third value of the each interconnect link; and
populate an interconnect bandwidth table using the interconnect link bandwidth values, wherein an operating system executing on the asymmetric information handling system prioritizes resources of the asymmetric information handling system by selecting an interconnect link having maximum available bandwidth from the interconnect bandwidth table, and wherein the operating system allocates processes among a subset of the system localities in the asymmetric information handling system based on the prioritization.

US Pat. No. 10,990,561

PARAMETER SERVER AND METHOD FOR SHARING DISTRIBUTED DEEP LEARNING PARAMETER USING THE SAME

ELECTRONICS AND TELECOMMU...

1. A parameter server, comprising:
a memory storing instructions; and
a processor executing the instructions to:
send and receive a message to and from at least one of a master process and one or more worker processes and support read and write operations based on Remote Direct Memory Access (RDMA);
manage allocation and deallocation of shared memory;
calculate distributed deep-learning parameters; and
announce occurrence of an event to at least one of the master process and the one or more worker processes, corresponding to the shared memory, when the event for the shared memory has occurred,
wherein the processor executes the instructions to:
create the shared memory for storing the distributed deep-learning parameters in response to a first request for creating the shared memory that is received from the master process;
send a shared memory creation key of the shared memory and information for accessing the shared memory to the master process;
allocate the shared memory to a worker process, which has received the shared memory creation key from the master process, in response to a second request for allocating the shared memory that is received from the worker process; and
send information for accessing the allocated shared memory to the worker process.

US Pat. No. 10,990,560

USB TYPE-C SIDEBAND SIGNAL INTERFACE CIRCUIT

Cypress Semiconductor Cor...

1. A Universal Serial Bus Type-C (USB-C) controller comprising:
a first pair of terminals to communicate over a first communication protocol that is other than a Universal Serial Bus (USB) protocol;
a second pair of terminals to communicate over a second communication protocol that is other than the USB protocol;
a third pair of terminals, each of which is to be coupled to a corresponding SBU1 terminal or SBU2 terminal of a Type-C receptacle;
a multiplexer to selectively couple the first pair of terminals to the third pair of terminals and the second pair of terminals to the third pair of terminals, wherein the multiplexer comprises a first set of four low-voltage (LV) transistors to selectively couple each terminal of the first pair of terminals to one of the third pair of terminals; and
first logic to control gates of each of the first set of four LV transistors according to a mode enabled within a configuration channel (CC) signal;
wherein the USB-C controller is disposed on an integrated circuit (IC).

US Pat. No. 10,990,558

SYSTEMS AND METHODS FOR CLOUD BASED PIN PAD DEVICE GATEWAY

Worldpay, LLC, Symmes To...

1. A method of processing payment transactions, comprising:
determining whether a gateway is available for a client device;
upon determining that a gateway is not available for the client device, generating a gateway;
creating a connection between the client device and the generated gateway;
creating a message filter for the client device on a message bus;
listening for messages on the message bus;
upon finding a message on the message bus matching the message filter for the client device, translating the message from a message bus communication protocol to a communication protocol of the client device; and
transmitting the translated message to the client device by way of the generated gateway,
wherein the client device is stateless and state information for the client device, including an address of a point of sale (POS) device associated with the client device, is stored on the generated gateway.
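The gateway's message loop (listen on the bus, match against the client's filter, translate from the bus protocol to the client's protocol, deliver) can be sketched in a few lines of Python. The dict-based bus messages and JSON client protocol are assumptions for illustration:

```python
import json

def make_filter(client_id):
    """Message filter for one client device on the bus."""
    return lambda msg: msg.get("client_id") == client_id

def translate(bus_msg):
    # The bus speaks dicts here; pretend the client device speaks JSON text.
    return json.dumps({"to": bus_msg["client_id"], "body": bus_msg["payload"]})

def pump(bus_messages, msg_filter, deliver):
    """Listen for matching messages, translate, and deliver via the gateway."""
    for msg in bus_messages:
        if msg_filter(msg):
            deliver(translate(msg))
```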

US Pat. No. 10,990,557

TRANSMISSION INTERFACE COMMUNICATING METHOD AND CONNECTION INTERFACE

REALTEK SEMICONDUCTOR COR...

1. A transmission interface communicating method used in a display device, the transmission interface communicating method comprising:
electrically coupling the display device to a host device such that a hot plug detect (HPD) status of the display device becomes a high state;
receiving a first status update signal from the host device after electrically coupling the display device to the host device such that the hot plug detect (HPD) status of the display device becomes the high state;
transmitting a HPD signal having a low state to the host device in response to the first status update signal after receiving the first status update signal from the host device;
receiving a configuration signal from the host device after transmitting the HPD signal having the low state to the host device in response to the first status update signal;
transmitting a configuration acknowledgement signal to the host device in response to the configuration signal after receiving the configuration signal from the host device; and
actively transmitting the HPD signal having the high state to the host device after transmitting the configuration acknowledgement signal to the host device in response to the configuration signal.

US Pat. No. 10,990,556

PROGRAMMABLE LOGIC DEVICE WITH ON-CHIP USER NON-VOLATILE MEMORY

GOWIN Semiconductor Corpo...

1. A programmable logic device (PLD) containing on-chip non-volatile memory capability, comprising:
a programmable logic array having an SRAM array which stores user data and configuration data;
a non-volatile memory (NVM) block fabricated adjacent to the programmable logic array on the PLD and organized to have an information zone and a data zone, wherein the information zone includes product information, wherein the data zone includes multiple segments for storing the configuration data and multiple segments for storing user data which is generated during normal operation after configuration, the NVM block having a one interface configured to handle data, address, and control signals for NVM access; and
a programming controller (PC) fabricated adjacent to the NVM block on the PLD and coupling to the NVM block via the one interface, wherein the PC is configured to facilitate communication between the programmable logic array and the NVM block via the one interface coupling to a data bus and an address bus for allowing user logic in the programmable logic array to directly access memory locations in the NVM block in accordance with addresses on the address bus, the PC configured to initialize at least a portion of user data in the SRAM array retrieved from the NVM block after power to the PLD resumes.

US Pat. No. 10,990,555

PROGRAMMABLE PIPELINE AT INTERFACE OF HARDENED BLOCKS

XILINX, INC., San Jose, ...

1. A programmable integrated circuit (IC), comprising:
a hardened block comprising a first sequential element;
a programmable logic (PL) fabric comprising a second sequential element; and
an interface comprising a programmable pipeline that communicatively couples the first and second sequential elements,
wherein the programmable pipeline comprises a third sequential element and a bypass path that bypasses the third sequential element, wherein the programmable pipeline is programmed, based on a time criticality assigned to a net by a software design application, to use one of the third sequential element and the bypass path when transmitting data between the first and second sequential elements.

US Pat. No. 10,990,554

MECHANISM TO IDENTIFY FPGA AND SSD PAIRING IN A MULTI-DEVICE ENVIRONMENT

SAMSUNG ELECTRONICS CO., ...

1. A system, comprising:
a Solid State Drive (SSD), including:
a first storage for data;
a second storage for a unique SSD identifier (ID), the unique SSD ID associated with the SSD; and
a third storage for a unique co-processor ID, the unique co-processor ID associated with a co-processor;
the co-processor, including:
a fourth storage for the unique co-processor ID; and
a fifth storage for the unique SSD ID,
wherein the co-processor is operative to query the SSD for the unique SSD ID and to store the unique SSD ID in the fifth storage; and
a hardware interface between the SSD and the co-processor,
wherein an operating system may use the unique co-processor ID and the unique SSD ID to pair the SSD and the co-processor.
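The pairing mechanism reduces to a cross-reference check: each device stores its own ID plus its partner's, and the operating system pairs devices whose stored IDs point at each other. A hedged sketch, with dict inputs standing in for the per-device storages:

```python
def pair_devices(ssds, coprocessors):
    """ssds/coprocessors: {own_id: partner_id}; returns cross-referenced pairs.

    A pair is valid only when the SSD names the co-processor AND the
    co-processor names that same SSD back.
    """
    pairs = []
    for ssd_id, co_id in ssds.items():
        if coprocessors.get(co_id) == ssd_id:
            pairs.append((ssd_id, co_id))
    return pairs
```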

US Pat. No. 10,990,553

ENHANCED SSD STORAGE DEVICE FORM FACTORS

Liqid Inc., Broomfield, ...

1. A storage drive, comprising:
a 2.5-inch storage drive form factor chassis that structurally supports elements of the storage drive;
at least one host connector;
a plurality of M.2 storage device connectors;
a Peripheral Component Interconnect Express (PCIe) switch circuit configured to receive storage operations over the at least one host connector and transfer the storage operations for delivery to ones of the plurality of M.2 storage device connectors over associated device PCIe interfaces; and
power circuitry configured to provide holdup power to ones of the plurality of M.2 storage device connectors after loss of input power over the at least one host connector.

US Pat. No. 10,990,552

STREAMING INTERCONNECT ARCHITECTURE FOR DATA PROCESSING ENGINE ARRAY

XILINX, INC., San Jose, ...

1. A method, comprising:
processing data in a first data processing engine in an array of data processing engines disposed in an integrated circuit, wherein the data processing engines are coupled together using an interconnect, and wherein each of the data processing engines comprises at least one streaming interconnect configured to form the interconnect;
identifying a second data processing engine of the data processing engines as a destination for the processed data;
determining whether the second data processing engine neighbors the first data processing engine in the array and has a direct communication path to the first data processing engine; and
upon determining the second data processing engine does not have a direct communication path to the first data processing engine, transmitting the processed data to the second data processing engine using a reserved point-to-point communication path through a plurality of the streaming interconnects in the interconnect, wherein the point-to-point communication path couples the first data processing engine to the second data processing engine.

US Pat. No. 10,990,550

CUSTOM CHIP TO SUPPORT A CPU THAT LACKS A DISPLAYPORT INPUT

Dell Products L.P., Roun...

1. A method comprising:
determining, by a logic device, that a video signal is present, wherein the logic device is connected to:
a first type of output interface of a central processing unit that lacks a second type of input interface; and
a second type of output interface of a graphics processing unit; and
wherein the video signal comprises either:
a first type of video signal output by the central processing unit; or
a second type of video signal output by the graphics processing unit;
re-timing, by the logic device, the video signal to create a re-timed video signal; and
outputting, by the logic device, the re-timed video signal over a serial bus port;
receiving a frequency sweep from a test generator to determine transmission capabilities of the logic device.

US Pat. No. 10,990,548

QUALITY OF SERVICE LEVELS FOR A DIRECT MEMORY ACCESS ENGINE IN A MEMORY SUB-SYSTEM

Micron Technology, Inc., ...

15. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, cause the processing device to:
receive a DMA command for a plurality of data sectors to be moved from a source memory region to a destination memory region;
retrieve a sector priority indicator from the DMA command, wherein the sector priority indicator is reflective of sector priority values associated with the plurality of data sectors of the DMA command;
for each data sector of the plurality of data sectors, determine a corresponding sector priority value of the respective data sector in view of the sector priority indicator;
read the plurality of data sectors from the source memory region based on the corresponding sector priority value of each data sector; and
write the plurality of data sectors to the destination memory region based on the corresponding sector priority value of each data sector.
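The per-sector priority ordering the claim describes (derive a priority value for each sector from the command's indicator, then service reads and writes in priority order) can be sketched in Python. The highest-first ordering and all names are assumptions for illustration:

```python
def dma_move(source, dest, priorities):
    """Move sectors from source to dest in priority order.

    source: {sector_id: data}; priorities: {sector_id: value}, higher first.
    Returns the order in which sectors were serviced.
    """
    order = sorted(source, key=lambda s: priorities[s], reverse=True)
    moved = []
    for sector in order:
        dest[sector] = source[sector]   # read from source region, write to destination
        moved.append(sector)
    return moved
```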

US Pat. No. 10,990,545

SYSTEM AND METHOD FOR HANDLING IN-BAND INTERRUPTS WITH MULTIPLE I3C MASTERS

Dell Products L.P., Roun...

1. A multiplexor for an Improved Inter-Integrated Circuit (I3C) network, the multiplexor comprising:
a switch to couple a plurality of I3C slave interfaces to a selectable one of a plurality of I3C master interfaces, wherein each I3C slave interface is identified by an associated I3C address;
a routing map including a plurality of map entries, wherein each I3C slave interface is associated with at least one map entry, and each map entry maps the associated I3C slave interface with one of the I3C master interfaces based upon the I3C address of the I3C slave interface, such that for each map entry, an In-Band Interrupt (IBI) received from the associated I3C slave interface is mapped to the associated I3C master interface; and
an interrupt detector configured to detect a first IBI from a first one of the I3C slave interfaces, determine that a first map entry associated with the first I3C slave interface maps the first I3C slave interface with a first one of the I3C master interfaces based upon the first IBI, and direct the switch to couple the first I3C slave interface to the first I3C master interface based upon the first map entry.
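The routing map and interrupt detector amount to an address-keyed lookup followed by a switch operation. A minimal Python sketch, with the dict-based map and the `IbiMux` class assumed for illustration:

```python
class IbiMux:
    """Sketch of the claimed multiplexor: routes an IBI to its mapped master."""
    def __init__(self, routing_map):
        self.routing_map = routing_map   # {slave_i3c_address: master_id}
        self.switched = None             # last (slave, master) coupling made

    def on_ibi(self, slave_address):
        # Look up the map entry for the interrupting slave's I3C address,
        # then direct the switch to couple slave and master.
        master = self.routing_map[slave_address]
        self.switched = (slave_address, master)
        return master
```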

US Pat. No. 10,990,544

PCIE ROOT COMPLEX MESSAGE INTERRUPT GENERATION METHOD USING ENDPOINT

NXP USA, Inc., Austin, T...

1. A method comprising:
a first central processing unit (CPU) creating a first data structure that comprises one or more first source addresses mapped to one or more first destination addresses, respectively, and a predetermined source address mapped to a predetermined destination address, wherein the one or more first source addresses correspond to one or more first source locations, respectively, in a first memory system where one or more first data blocks are stored, respectively, and wherein the predetermined source address corresponds to a predetermined source location where a predefined data pattern is stored;
a direct memory access (DMA) controller participating in a DMA data transfer based on the first data structure, wherein the DMA controller is contained in a first circuit, and wherein the DMA data transfer comprises:
sequentially copying the one or more first data blocks to one or more first destination storage locations, respectively, corresponding to the one or more first destination addresses, respectively, wherein the one or more first destination storage locations are located in a second memory system; and
subsequent to the sequential copying of the one or more first data blocks to the one or more first destination storage locations, copying the predefined data pattern to a predetermined storage location corresponding to the predetermined destination address, the predetermined storage location is a register of an interrupt controller circuit and the predefined data pattern indicates an end of DMA transfer to the interrupt controller circuit.
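The claimed descriptor walk can be sketched as a small Python model, with memory as a dict and the end-pattern value and interrupt-register address as illustrative assumptions, not values from the patent:

```python
# Illustrative model of the claimed DMA transfer: data blocks are copied
# in order, then a predefined pattern is copied to a designated
# "interrupt register" address to signal end-of-transfer.
END_PATTERN = 0xDEADBEEF   # assumed predefined data pattern
IRQ_REG = 0xF000           # assumed interrupt-controller register address

def run_dma(memory, descriptors):
    """descriptors: list of (src_addr, dst_addr) pairs; the final
    descriptor maps the predetermined source (holding the pattern)
    to the interrupt register."""
    for src, dst in descriptors:
        memory[dst] = memory[src]
    # the interrupt controller "fires" when it sees the pattern
    return memory.get(IRQ_REG) == END_PATTERN
```

A usage sketch: the CPU builds the descriptor list (the first data structure), and the pattern write lands last because copying is sequential.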

US Pat. No. 10,990,542

FLASH MEMORY SYSTEM AND METHOD OF GENERATING QUANTIZED SIGNAL THEREOF

KOREA INSTITUTE OF SCIENC...

1. A flash memory system comprising:a flash memory programming a selected page, providing a plurality of reference read voltages to a selected word line coupled to the selected page to generate a plurality of signals during a quantization signal generating operation, the plurality of signals corresponding to a plurality of quantization intervals, and generating a quantized signal by performing a logical operation on the plurality of signals; and
a memory controller receiving the quantized signal from the flash memory and generating a response using the quantized signal,
wherein the memory controller receives a challenge from a host and controls the flash memory to perform the quantization signal generating operation.

US Pat. No. 10,990,541

CONTROLLER USING CACHE EVICTION POLICY BASED ON READ DATA SIZE

SK hynix Inc., Gyeonggi-...

1. A controller for controlling an operation of a semiconductor memory device, the controller comprising:a cache buffer configured to store multiple cache data;
a request analyzer configured to generate request information including information on a size of read data to be read corresponding to a read request received from a host; and
a cache controller configured to determine an eviction policy of the multiple cache data, based on the size of the read data in the request information,
wherein, in response to a determination that the size of the read data is greater than a predetermined reference value, the cache controller controls the cache buffer to evict cache data that has been least frequently used among the multiple cache data, and
wherein, in response to a determination that the size of the read data is less than or equal to the predetermined reference value, the cache controller controls the cache buffer to evict cache data that was unused for a longest time among the multiple cache data.
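The two-branch eviction policy claimed above can be sketched in Python; the class name, reference value, and bookkeeping structures are illustrative, not from the patent:

```python
from collections import OrderedDict

class SizeAwareCache:
    """Sketch of the claimed policy: evict least-frequently-used data
    when the incoming read is large, least-recently-used when small."""

    def __init__(self, reference_size):
        self.reference_size = reference_size
        self.entries = OrderedDict()   # key -> data; order tracks recency
        self.freq = {}                 # key -> access count

    def access(self, key, data):
        if key in self.entries:
            self.entries.move_to_end(key)   # mark most recently used
            self.freq[key] += 1
        else:
            self.entries[key] = data
            self.freq[key] = 1

    def evict_for_read(self, read_size):
        if read_size > self.reference_size:
            victim = min(self.freq, key=self.freq.get)   # LFU branch
        else:
            victim = next(iter(self.entries))            # LRU branch
        del self.entries[victim]
        del self.freq[victim]
        return victim
```

The rationale the claim implies: a large read will churn the cache anyway, so frequency is a better signal of long-term value than recency.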

US Pat. No. 10,990,540

MEMORY MANAGEMENT METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A memory management method comprising:traversing, for at least one process, page tables corresponding to virtual memory areas (VMAs) of the at least one process by
determining, for a currently traversed VMA, a page frame corresponding to each page table entry stored in page tables corresponding to the currently traversed VMA; and
isolating the page frame to a linked list of isolated pages;
determining memory pages in page frames stored in the linked list of isolated pages as the memory pages that need to be swapped out of a memory when a quantity of the page frames stored in the linked list of isolated pages reaches a threshold or a next VMA is to be traversed;
generating, for each memory page that needs to be swapped out, a work task reclaiming a corresponding memory page; and
allocating each work task to a dedicated worker thread for execution.

US Pat. No. 10,990,539

CONTROLLER, MEMORY SYSTEM INCLUDING THE SAME, AND METHOD OF OPERATING MEMORY SYSTEM

SK hynix Inc., Icheon (K...

1. A memory system comprising:a memory device comprising first and second memory groups; and
a controller including a resource controller and first and second flash translation layer (FTL) cores, each of the first and second FTL cores managing a plurality of logical addresses (LAs) that are mapped, respectively, to a plurality of physical addresses (PAs) of a corresponding memory group,
wherein the resource controller is configured to:
determine LA use rates of the first and second FTL cores;
select a source FTL core and a target FTL core from the first and second FTL cores using the LA use rates; and
balance the LA use rates of the source FTL core and the target FTL core by moving data stored in storage spaces associated with a portion of the LAs from the source FTL core to storage spaces associated with the target FTL core.
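A minimal sketch of the claimed rebalancing, assuming the use rate is approximated by the count of in-use logical addresses per FTL core (the dict-of-sets model and the stop condition are illustrative):

```python
def rebalance(core_las):
    """core_las: dict of core_id -> set of in-use logical addresses.
    Selects the most-loaded core as source and the least-loaded as
    target, then moves LAs (standing in for the associated data moves)
    until the counts differ by at most one."""
    source = max(core_las, key=lambda c: len(core_las[c]))
    target = min(core_las, key=lambda c: len(core_las[c]))
    while len(core_las[source]) - len(core_las[target]) > 1:
        la = core_las[source].pop()
        core_las[target].add(la)   # data in the backing storage moves too
    return source, target
```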

US Pat. No. 10,990,538

ARITHMETIC PROCESSING DEVICE, INFORMATION PROCESSING APPARATUS, AND METHOD FOR CONTROLLING ARITHMETIC PROCESSING DEVICE

FUJITSU LIMITED, Kawasak...

1. An arithmetic processing device comprising:an arithmetic operation control unit;
a first access management unit that receives, from the arithmetic operation control unit, an access request with respect to a first address and access authorization assigned to the access request, that translates the first address to a second address, that determines the suitability of the access authorization, and that outputs, when the access authorization is not suitable, the access request with respect to the first address;
a responding unit that determines, based on the first address, whether a predetermined process in which response time does not affect security is to be performed and that outputs, when the responding unit determines to perform the predetermined process, a result of the predetermined process to the arithmetic operation control unit; and
a second access management unit that receives the access request with respect to the first address output from the first access management unit, that translates the first address to the second address, that determines the suitability of the access authorization, and that outputs, when the access authorization is not suitable, a notification of access prohibition to the arithmetic operation control unit.

US Pat. No. 10,990,537

LOGICAL TO VIRTUAL AND VIRTUAL TO PHYSICAL TRANSLATION IN STORAGE CLASS MEMORY

International Business Ma...

1. A memory system for storing data, the memory system comprising:one or more memory cards, each card having a plurality of storage chips, and each chip having a plurality of dies having a plurality of memory cells;
a memory controller comprising a translation module, the translation module further comprising:
a logical to virtual translation table (LVT) having a plurality of entries, each entry in the LVT configured to map a logical address to a virtual block address (VBA), where the VBA corresponds to a group of the memory cells on the one or more memory cards,
wherein each entry in the LVT further includes a write wear level count to track the number of writing operations to the VBA mapped to that LVT entry, and a read wear level count to track the number of read operations for the VBA mapped to that LVT entry.
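The claimed LVT entry, with its per-VBA write and read wear-level counts, can be modeled as follows (field and class names are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class LVTEntry:
    """One logical-to-virtual translation entry: a virtual block
    address plus write and read wear-level counts for that VBA."""
    vba: int
    write_wear: int = 0
    read_wear: int = 0

class TranslationModule:
    def __init__(self):
        self.lvt = {}   # logical address -> LVTEntry

    def write(self, la, vba=None):
        entry = self.lvt.setdefault(la, LVTEntry(vba if vba is not None else la))
        entry.write_wear += 1   # track writes to the mapped VBA
        return entry.vba

    def read(self, la):
        entry = self.lvt[la]
        entry.read_wear += 1    # track reads of the mapped VBA
        return entry.vba
```

Keeping both counters in the translation entry means wear-leveling decisions need no extra table lookup.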

US Pat. No. 10,990,536

MEMORY CONTROLLER, OPERATING METHOD OF THE MEMORY CONTROLLER, AND STORAGE DEVICE INCLUDING THE MEMORY CONTROLLER

Samsung Electronics Co., ...

1. A memory controller comprising:a memory configured to store an address mapping table and a segment mapping table, the address mapping table including a plurality of segments, each of the plurality of segments including a plurality of mapping entries representing mapping information between logical addresses and physical addresses, and the segment mapping table including physical addresses representing areas in which each of a plurality of segments are stored in a nonvolatile memory; and
processing circuitry configured to,
update the address mapping table in response to instructing the nonvolatile memory to store data such that at least two segments among the plurality of segments are updated,
select the at least two updated segments among the plurality of segments included in the address mapping table as one page of data,
instruct the nonvolatile memory to store the data including the at least two updated segments at the physical addresses associated with the one page, and
update the segment mapping table based on the physical addresses associated with the one page such that, in the segment mapping table, the at least two updated segments are indicated as stored at the physical addresses for a same page of the nonvolatile memory.

US Pat. No. 10,990,535

STORAGE CONTROL APPARATUS AND STORAGE CONTROL METHOD FOR DEDUPLICATION

FUJITSU LIMITED, Kawasak...

1. A storage control apparatus comprising:a memory configured to store a plurality of meta-information associating a position of a logical area with a position of a physical area; and
a processor coupled to the memory and configured to:
when a first data block including data, a check code corresponding to the data, and first positional information within the logical area is stored in the physical area, and a second data block including the data, the check code, and second positional information within the logical area is written in the logical area, obtain a first position at which the first data block is present in the physical area included in meta-information of the first data block among the plurality of the meta-information, wherein the second positional information includes a plurality of pieces of additional information individually corresponding to a plurality of pieces of divided data that are included in the data;
store, in the meta-information of the second data block among the plurality of the meta-information, the first position as a position of the physical area; and
store, in the meta-information of the second data block among the plurality of the meta-information, the second positional information within the logical area obtained from the second data block in association with the first position;
store, in the memory, first additional information corresponding to the divided data present at a head of the data among the plurality of pieces of additional information when storing the second positional information in the memory; and
generate second additional information based on the first additional information present in the memory among the plurality of pieces of additional information when restoring the second data block, wherein the second additional information is the remaining pieces of additional information among the plurality of pieces of additional information.

US Pat. No. 10,990,534

DEVICE, SYSTEM AND METHOD TO FACILITATE DISASTER RECOVERY FOR A MULTI-PROCESSOR PLATFORM

Intel Corporation, Santa...

1. A circuit device comprising:a first processor to couple to a first memory, wherein a first node is to comprise the first processor and the first memory, wherein the first processor comprises:
a first cache to cache first data based on an execution of a software process by the first processor, the execution while the first node is coupled to a second node which comprises a second processor and a second memory, wherein, on behalf of the software process, the first node is to access second data at a memory location of the second memory, wherein the first data is to comprise a cached version of the second data, and wherein the first data corresponds to first address information which indicates the memory location of the second memory;
first circuitry coupled to detect a power disruption event while the first data is cached at the first cache;
second circuitry coupled to the first circuitry, wherein, based on the power disruption event, the second circuitry is to:
interrupt a software execution pipeline of the first processor;
terminate communications between the first node and the second node; and
capture an image of a state of the first processor, comprising the second circuitry to flush the first cache to a reserved region of the first memory, wherein the second circuitry is to write both the first data and the first address information to the reserved region; and
third circuitry to generate a signal after the image is captured, the signal to enable a shutdown of the first processor based on the power disruption event.

US Pat. No. 10,990,533

DATA CACHING USING LOCAL AND REMOTE MEMORY

Hewlett Packard Enterpris...

1. A method for retrieving cached data, comprising:receiving, within a cache server, a request for cached data including a key relating to the cached data;
searching a table using a key lookup to identify a data object corresponding to the cached data, wherein the table and the data object reside on a local memory of the cache server, and wherein the key lookup is processed in the local memory;
obtaining a pointer from the data object, wherein the pointer identifies a location of the cached data residing on a remote memory communicatively coupled to the cache server; and
retrieving the cached data from the remote memory.
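The claimed split between a local key table and remote cached data can be sketched as below; `RemoteMemory` is an illustrative stand-in for an RDMA-style interface, not an API from the patent:

```python
class RemoteMemory:
    """Stand-in for memory communicatively coupled to the cache server."""
    def __init__(self):
        self._store = {}
    def write(self, addr, data):
        self._store[addr] = data
    def read(self, addr):
        return self._store[addr]

class CacheServer:
    def __init__(self, remote):
        self.table = {}        # local memory: key -> data object
        self.remote = remote
    def put(self, key, addr):
        self.table[key] = {'ptr': addr}      # object holds only a pointer
    def get(self, key):
        obj = self.table[key]                # key lookup processed locally
        return self.remote.read(obj['ptr'])  # payload fetched from remote memory
```

The design point: the hot-path key lookup stays in fast local memory while bulk payloads live in cheaper remote memory.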

US Pat. No. 10,990,532

OBJECT STORAGE SYSTEM WITH MULTI-LEVEL HASHING FUNCTION FOR STORAGE ADDRESS DETERMINATION

Intel Corporation, Santa...

1. A method, comprising:performing a first hash on a name of an object with a first hardware element of an object storage system, the name of the object being part of a request that is associated with the object, a result of the first hash explicitly identifying a second hardware element directly beneath the first hardware element in a hierarchical arrangement of hardware elements in the object storage system, the second hardware element being the only hardware element identified by the result of the first hash, and where, the hierarchical arrangement of hardware elements is to include multiple hardware elements directly beneath the first hardware element;
sending the request to the second hardware element; and,
performing a second hash on the name of the object with the second hardware element, a result of the second hash identifying a third hardware element directly beneath the second hardware element in the hierarchical arrangement; and,
sending the request to the third hardware element to advance the request toward being serviced by the object storage system.
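The two-level hash routing can be sketched with SHA-256; salting each level so that the two hashes of the same object name differ is an assumption:

```python
import hashlib

def level_hash(name, fanout, salt):
    """Deterministically pick one child among `fanout` children
    directly beneath the current hardware element."""
    digest = hashlib.sha256((salt + name).encode()).digest()
    return int.from_bytes(digest[:8], 'big') % fanout

def route(name, fanout_l1, fanout_l2):
    second = level_hash(name, fanout_l1, 'L1:')  # first hash -> element under root
    third = level_hash(name, fanout_l2, 'L2:')   # second hash -> element beneath it
    return second, third
```

Because each hash result identifies exactly one child, every element can forward the request without any global directory lookup.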

US Pat. No. 10,990,531

CLOUD-BASED FREQUENCY-BASED CACHE MANAGEMENT

Intel Corporation, Santa...

20. A method comprising:in response to one or more of an installation of an application or a modification to the application, generating a lookup key based on a first file that is associated with the application;
determining that the lookup key is to be transmitted to a server; and
determining whether to store at least a portion of the first file in a memory cache based on a first frequency indicator associated with the first file from the server.
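A hedged sketch of the claimed flow, where deriving the lookup key from the file path and contents, and the 0.5 threshold, are assumptions not stated in the claim:

```python
import hashlib

def lookup_key(path, contents):
    """Generate a lookup key from the first file associated with the
    installed or modified application."""
    return hashlib.sha256(path.encode() + contents).hexdigest()

def should_cache(frequency_indicator, threshold=0.5):
    """Decide whether to keep (a portion of) the file in the memory
    cache based on the server-supplied frequency indicator."""
    return frequency_indicator >= threshold
```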

US Pat. No. 10,990,530

IMPLEMENTATION OF GLOBAL COUNTERS USING LOCALLY CACHED COUNTERS AND DELTA VALUES

EMC IP Holding Company LL...

1. A method of providing global values comprising:configuring a global memory to include a global counter;
configuring a plurality of processing cores to have a plurality of private caches, wherein each private cache of the plurality of private caches is used exclusively by a different one of the plurality of processing cores, where said each private cache includes two sets of buffers, an update toggle and a read toggle, wherein each of the two sets of buffers in each of the plurality of private caches includes a local counter and a local delta value corresponding to the global counter;
performing first processing by a first of the plurality of processing cores to read a current value for the global counter, wherein a first private cache of the plurality of private caches is used exclusively by the first processing core, wherein the first processing comprises:
determining the current value of the global counter as a mathematical sum of the local counter value and the local delta value from one of the two sets of buffers of the first private cache identified by the read toggle of the first private cache; and
performing second processing by the first processing core to modify the global counter by a first amount, wherein the second processing comprises:
updating the local delta value from a specified one of the two sets of buffers of the first private cache identified by the read toggle, wherein said updating includes adding the first amount to the local delta value from the specified one of the two sets of buffers of the first private cache identified by the update toggle of the first private cache.
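The per-core two-buffer scheme can be sketched as below; the field names and the single-threaded model (omitting the global-memory commit that would flip the toggles) are assumptions:

```python
class CoreCounterCache:
    """Per-core sketch of the claimed scheme: each of two buffer sets
    holds a locally cached copy of the global counter plus a pending
    delta. Reads sum the set named by read_toggle; updates add to the
    set named by update_toggle, avoiding global-memory traffic."""
    def __init__(self):
        self.buffers = [{'counter': 0, 'delta': 0},
                        {'counter': 0, 'delta': 0}]
        self.read_toggle = 0
        self.update_toggle = 0

    def read(self):
        b = self.buffers[self.read_toggle]
        return b['counter'] + b['delta']   # current value = cached + pending

    def update(self, amount):
        self.buffers[self.update_toggle]['delta'] += amount
```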

US Pat. No. 10,990,529

MULTI-POWER-DOMAIN BRIDGE WITH PREFETCH AND WRITE MERGING

TEXAS INSTRUMENTS INCORPO...

1. A processing system comprising:one or more processors;
a cache memory coupled to the one or more processors; and
a memory controller coupled to the one or more processors via a bridge, the bridge comprising:
first interface circuitry configured to receive a first memory request associated with a first clock domain;
address conversion circuitry configured to convert a first memory address associated with the first memory request from a first memory address format associated with the first clock domain to a second memory address format associated with the second clock domain;
a plurality of buffers configured to transition the first memory request to a second clock domain;
address hazarding circuitry configured to create a first scoreboard entry associated with the first memory request;
second interface circuitry configured to:
transmit the first memory request to a memory based on the converted first memory address; and
receive a first response to the first memory request;
wherein the plurality of buffers is further configured to transition the first response to the second clock domain; and
wherein the address hazarding circuitry is further configured to clear the first scoreboard entry based on the received response.

US Pat. No. 10,990,528

METHODS AND SYSTEMS FOR MANAGING PHYSICAL INFORMATION OF MEMORY UNITS IN A MEMORY DEVICE

Macronix International Co...

1. A method for managing physical information of memory units in a memory device, the method comprising:monitoring, by a memory controller associated with the memory device, physical statuses of one or more memory units corresponding to a subset of a total number of physical information blocks that include physical information of a plurality of memory units in the memory device, wherein the physical information of a memory unit indicates a physical condition of the memory unit and is distinct from user data stored in the memory unit, and wherein monitoring the physical statuses of the one or more memory units comprises, for at least one memory unit of the one or more memory units, checking at least one of a threshold voltage setting, a health status or a validity status;
receiving, by the memory controller, a request to update a physical status of a first memory unit stored in a memory chunk present in a table in a cache memory corresponding to the memory device, the table storing information corresponding to the subset of the total number of physical information blocks in one or more memory chunks, wherein a memory chunk includes one or more entries that store physical information of one or more memory units included in a physical information block in the subset of the total number of physical information blocks;
in response to the request, accessing, by the memory controller, the table in the cache memory at a first time;
determining, by the memory controller, that first physical information of the first memory unit is stored in a first memory chunk present in the table;
upon the determination, accessing, by the memory controller, an entry in the first memory chunk present in the table, the entry corresponding to the first memory unit; and
updating a value of the entry, comprising changing values of one or more bits included in the entry, the one or more bits representing at least one of a threshold voltage setting for the memory unit, a health status for the memory unit, a validity status for the memory unit, an encode type for content of the memory unit, a bit error rate for the memory unit, or one of an erase, program, or read count for the memory unit.

US Pat. No. 10,990,527

STORAGE ARRAY WITH N-WAY ACTIVE-ACTIVE BACKEND

EMC IP HOLDING COMPANY LL...

11. A method comprising:in a storage node comprising a first engine comprising at least one computing node with at least one drive adapter, a second engine comprising at least one computing node with at least one drive adapter, at least one drive array comprising a plurality of drives, and an interconnecting fabric via which the drives of the drive array are accessible to the drive adapter of the first engine and the drive adapter of the second engine:
organizing the drives of the drive array into hypers; and
causing each hyper to be accessible to both the drive adapter of the first engine and the drive adapter of the second engine.

US Pat. No. 10,990,526

HANDLING ASYNCHRONOUS POWER LOSS IN A MEMORY SUB-SYSTEM THAT PROGRAMS SEQUENTIALLY

Micron Technology, Inc., ...

1. A system comprising:a non-volatile memory (NVM) device;
a volatile memory coupled to the NVM device, the volatile memory to store:
a zone map data structure that maps a zone of a logical block address (LBA) space to a zone state and to a zone index within the LBA space, wherein the zone comprises a plurality of sequential LBAs that are mapped to a plurality of sequential physical addresses;
a journal data structure; and
a high frequency update table; and
a processing device coupled to the volatile memory and the NVM device, wherein the processing device is to:
write, within an entry of the high frequency update table, a value of a zone write pointer corresponding to the zone index, wherein the zone write pointer comprises a location in the LBA space where the processing device is writing to the zone in service of a write request;
write, within an entry of the zone map data structure, a table index value that points to the entry of the high frequency update table;
update, within the journal data structure, metadata of the entry of at least one of the zone map data structure or the journal data structure affected by a flush transition between the zone map data structure and the high frequency update table; and
in response to an asynchronous power loss (APL) event, flush the journal data structure and the high frequency update table to the NVM device.

US Pat. No. 10,990,525

CACHING DATA IN ARTIFICIAL NEURAL NETWORK COMPUTATIONS

Mipsology SAS

1. A system for caching data in artificial neural network (ANN) computations, the system comprising:a module configured to:
estimate, based on a structure of the ANN to be computed, times of reuse of the data in the computation of the ANN, wherein the data are associated with a logical address and include values associated with neurons of the ANN, wherein the estimation of the times of reuse of data is based on a number of the neurons using the data and a location of the neurons using the data in the structure of the ANN; and
assign, based on the times of reuse of the data, a priority to the logical address;
a plurality of physical memories, the physical memories being associated with physical addresses and physical parameters; and
a memory controller coupled to the plurality of physical memories, wherein the memory controller is configured to:
receive the data and the logical address of the data and the priority of the logical address;
determine, based on the priority of the logical address and the physical parameters, a physical address of a physical memory of the plurality of physical memories, the data to be stored solely in the physical memory; and
perform an operation associated with the data and the physical address.

US Pat. No. 10,990,524

MEMORY WITH PROCESSING IN MEMORY ARCHITECTURE AND OPERATING METHOD THEREOF

Powerchip Semiconductor M...

1. A memory with a processing in memory architecture, comprising:a memory array, comprising a plurality of memory regions;
a mode register, configured to store a plurality of memory mode settings;
a memory interface, coupled to the memory array and the mode register, and externally coupled to a special function processing core; and
an artificial intelligence core, coupled to the memory array and the mode register,
wherein the plurality of memory regions are respectively selectively assigned to the special function processing core or the artificial intelligence core according to the plurality of memory mode settings of the mode register, so that the special function processing core and the artificial intelligence core respectively access different memory regions in the memory array according to the plurality of memory mode settings,
wherein the plurality of memory regions comprise a first memory region and a second memory region, the first memory region is configured for exclusive access by the artificial intelligence core, and the second memory region is configured for exclusive access by the special function processing core,
wherein the plurality of memory regions further comprise a plurality of data buffer regions, and the artificial intelligence core and the memory interface alternately access different data in the plurality of data buffer regions,
wherein when the artificial intelligence core performs a neural network operation, the artificial intelligence core reads input data of one of the plurality of data buffer regions as an input parameter, and reads weight data of the first memory region, wherein the artificial intelligence core outputs feature map data to the first memory region.

US Pat. No. 10,990,523

MEMORY CONTROLLER AND OPERATING METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. An operating method of a memory controller for controlling a memory device comprising a plurality of banks, the operating method comprising:determining whether a number of write commands enqueued in a command queue of the memory controller exceeds a reference value;
calculating a level of write power to be consumed by the memory device in response to at least some of the write commands from among the enqueued write commands in response to the number of enqueued write commands exceeding the reference value; and
scheduling, based on the calculated level of write power, interleaving commands executing a first interleaving operation of the memory device, from among the enqueued write commands,
wherein the calculating of the level of write power comprises:
selecting at least some of the write commands from among the enqueued write commands to calculate a power level corresponding to each of the selected write commands;
checking whether there is a read command corresponding to a target bank of a first selected write command and preceding the first selected write command;
generating a pre-read command preceding the first selected write command when there is no read command and write command both preceding the first selected write command; and
comparing pre-read data read from the target bank of the memory device in response to the pre-read command with write data corresponding to the first selected write command to calculate the level of write power of the first selected write command.
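One plausible reading of the pre-read comparison in the claim is that write power scales with the number of bits that actually flip in the target bank; treating each flipped bit as one power unit is an assumption:

```python
def write_power_level(pre_read, write_data):
    """Estimate the write power for one selected write command by
    comparing the pre-read data from the target bank against the
    incoming write data: only differing bits cost programming energy."""
    flips = pre_read ^ write_data      # bit positions that must change
    return bin(flips).count('1')       # popcount of changed bits
```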

US Pat. No. 10,990,522

ELECTRONIC DEVICES RELATING TO A MODE REGISTER INFORMATION SIGNAL

SK hynix Inc., Icheon-si...

1. An electronic device comprising:an information signal storage circuit configured to store an information signal during a mode register set operation, and configured to output the stored information signal as a mode register information signal;
a write data selection circuit configured to receive the mode register information signal and input data and output selectively the mode register information signal or input data as write data; and
a cell array configured to store the write data,
wherein, during a first mode operation, the write data selection circuit receives the mode register information signal, outputs the mode register information signal as the write data to the cell array, and the cell array stores the write data, within the cell array, which corresponds to one of combinations of an address signal,
wherein the information signal storage circuit includes a register,
wherein the register is configured to store a selection information signal, in response to an internal control signal, and
wherein the register is configured to output the stored selection information signal as the mode register information signal; and
wherein the information signal comes from an external source, the address signal comes from an external source, and the first mode operation corresponds to a command received from an external source.

US Pat. No. 10,990,521

DATA STORAGE SYSTEM, DATA STORAGE DEVICE AND MANAGEMENT METHOD THEREOF

ASMedia Technology Inc., ...

1. A data storage device comprising:a memory array;
a prediction unit configured to obtain a plurality of association rules between a plurality of access locations according to a plurality of previous access commands; and
a look-up table management unit configured to manage a plurality of look-up tables according to the plurality of association rules, wherein the look-up table management unit determines whether a current access command corresponds to at least one of the plurality of look-up tables to obtain a physical address of the current access command in the memory array from the corresponding look-up table, and predicts a look-up table corresponding to a subsequent access command according to the plurality of association rules.
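The association-rule mechanism can be sketched by counting adjacent table accesses; reducing the rules to pairwise successor counts is an assumed simplification of whatever mining the patent performs:

```python
from collections import defaultdict

class TablePredictor:
    """Records which look-up table follows which in the access history
    and predicts (e.g., for prefetching) the table a subsequent access
    command is most likely to need."""
    def __init__(self):
        self.rules = defaultdict(lambda: defaultdict(int))  # table -> successor -> count
        self.prev = None

    def record(self, table_id):
        if self.prev is not None:
            self.rules[self.prev][table_id] += 1
        self.prev = table_id

    def predict_next(self, table_id):
        followers = self.rules.get(table_id)
        if not followers:
            return None
        return max(followers, key=followers.get)   # most frequent successor
```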

US Pat. No. 10,990,520

METHOD FOR GARBAGE COLLECTING FOR NON-VOLATILE MEMORY

Storart Technology Co., L...

1. A method for garbage collecting for non-volatile memory, comprising the steps of:a) providing a Solid State Drive (SSD), connected to a host, containing a plurality of Triple Level Cell (TLC) blocks and a plurality of Single Level Cell (SLC) blocks, wherein at least one clean TLC block has no data;
b) reading 3M TLC pages in a TLC block having data;
c) moving valid data read from the 3M TLC pages in the TLC block in step b) to the at least one clean TLC block;
d) sending a host program command of 1 page to the host;
e) repeating step b) to step d) until valid data in each of the TLC pages in 8 TLC blocks having data are moved;
f) reading 1 SLC page in a SLC block having data;
g) moving valid data read from the 1 SLC page in the SLC block in step f) to the at least one clean TLC block;
h) sending a host program command of ? page to the host; and
i) repeating step f) to step h) until valid data in each of the SLC pages in the SLC block having data are moved;
wherein M is an integer and ? equals to

 where n is a difference between the number of free SLC blocks and a first threshold.

US Pat. No. 10,990,519

MULTI-TENANT CLOUD ELASTIC GARBAGE COLLECTOR

International Business Ma...

1. A computer-implemented method, comprising:in response to a run-time application programming interface call to a garbage collection routine, determining a load value for each tenant of a plurality of tenants running applications in a distributed cloud computing environment on a shared computer server as a function of measuring input network traffic to each of the tenants for a predetermined interval of time, wherein the shared computer server includes a plurality of computer processor cores;
determining a capacity value for each tenant of the plurality of tenants as a difference between the determined load value and maximum core allocation values for each tenant;
identifying a first subset of the plurality of computer processor cores that are used to process the load determined for a one of the tenants that has a largest capacity relative to others of the tenants, wherein a total number of the first subset of the plurality of computer processor cores is less than a totality of the plurality of computer processor cores;
assigning a second subset of the plurality of computer processor cores to perform garbage collection for one or more tenants, wherein the second subset processor cores are not within the first subset; and
invoking garbage collection using the assigned second subset of computer processor cores which deallocates no longer used memory in a corresponding heap for the one or more tenants.
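The core-assignment arithmetic in the claim can be sketched as follows; the mapping of load values to concrete core ids is illustrative, since the claim only requires that the two subsets be disjoint:

```python
def plan_gc(tenant_load, tenant_max_cores, total_cores):
    """capacity = max core allocation - measured load; the tenant with
    the largest capacity keeps its working cores (first subset), and
    garbage collection is assigned to cores outside that set (second
    subset)."""
    capacity = {t: tenant_max_cores[t] - tenant_load[t] for t in tenant_load}
    slack_tenant = max(capacity, key=capacity.get)
    busy = set(range(tenant_load[slack_tenant]))                 # first subset
    gc_cores = [c for c in range(total_cores) if c not in busy]  # second subset
    return slack_tenant, gc_cores
```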

US Pat. No. 10,990,518

METHOD AND SYSTEM FOR I/O PARALLEL DISTRIBUTED GARBAGE COLLECTION OF A DEDUPLICATED DATASETS

EMC IP HOLDING COMPANY LL...

1. A method for distributed garbage collection of deduplicated datasets, the method comprising:creating a first set of temporary files, and storing the first set of temporary files across a plurality of nodes in a set of multiple computing device nodes, where each temporary file of the first set of temporary files stores a range of fingerprints for data within data files associated with a directory tree structure;
creating a second set of temporary files, and storing the second set of temporary files across a second plurality of nodes in the set of multiple computing device nodes, where each temporary file of the second set of temporary files stores a range of fingerprints of storage segments stored on one or more deduplicated storage containers;
sorting the fingerprints stored in each temporary file using distributed out of core sorting across each node in the set of multiple computing device nodes, wherein each temporary file of the first set and the second set are sorted in parallel at each corresponding node where the temporary file is stored, and sorting information including the temporary files of the first set of temporary files and the second set of temporary files is exchanged between the nodes to generate a first set of sorted files and a second set of sorted files and sorting the fingerprints includes transferring a first temporary file storing a first range of fingerprints to a first node associated with the first range of fingerprints, receiving a second temporary file associated with a second range of fingerprints, and storing the second temporary file in local storage;
determining an intersection of the fingerprints in the first set of sorted files and the second set of sorted files; and
generating a garbage collection recipe for each of the one or more deduplicated storage containers based on the intersection of the fingerprints.
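A single-node sketch of the fingerprint-intersection step may clarify the idea; the distributed sorting and file exchange are omitted, and all names here are mine, not the patent's.

```python
def gc_recipe(live_fingerprints, container_fingerprints):
    """Single-node sketch of the fingerprint intersection (names assumed).

    live_fingerprints: fingerprints referenced by the directory tree (the
    merged first set of temporary files); container_fingerprints maps each
    deduplicated storage container to the fingerprints of its segments
    (the second set). Segments in the intersection are still live; the
    rest are unreferenced and go into that container's reclaim recipe.
    """
    live = set(live_fingerprints)
    recipes = {}
    for container, fps in container_fingerprints.items():
        keep = sorted(set(fps) & live)   # intersection: still-referenced data
        dead = sorted(set(fps) - live)   # unreferenced -> reclaimable
        recipes[container] = {"keep": keep, "reclaim": dead}
    return recipes
```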

US Pat. No. 10,990,517

CONFIGURABLE OVERLAY ON WIDE MEMORY CHANNELS FOR EFFICIENT MEMORY ACCESS

XILINX, INC., San Jose, ...

1. A system comprising:a programmable device configured to couple to a host, the host comprising a processor and a memory controller, wherein the host is configured to generate a plurality of read/write requests, wherein the programmable device is separate from the host, wherein the programmable device is configured to receive the plurality of read/write requests and addresses associated therewith from the memory controller of the host, and wherein the programmable device is further configured to interleave the plurality of read/write requests across multiple communication channels of the programmable device, wherein a subset of bits within each address is used to identify a communication channel of the programmable device from the multiple communication channels of the programmable device; and
a memory configured to receive the plurality of read/write requests and another subset of bits associated therewith from the programmable device, wherein the memory is separate from the programmable device, wherein the memory is configured to store contents associated with the addresses for write requests and wherein the memory is configured to return contents associated with the addresses for a read request to the programmable device,
wherein the programmable device is further configured to return the received contents to the host for processing.
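The address-bit channel selection can be sketched as plain bit manipulation; the field positions and widths below are illustrative assumptions, since the claim does not fix them.

```python
def route_request(addr, channel_bits=2, channel_shift=6):
    """Sketch of channel interleaving by address bits (field layout assumed).

    A subset of bits within each address identifies one of the programmable
    device's communication channels; the remaining bits are forwarded to
    the memory as the per-channel address.
    """
    channel = (addr >> channel_shift) & ((1 << channel_bits) - 1)
    # strip the channel-select bits to form the address forwarded to memory
    low = addr & ((1 << channel_shift) - 1)
    high = addr >> (channel_shift + channel_bits)
    mem_addr = (high << channel_shift) | low
    return channel, mem_addr
```

Using low-order (or mid-order) bits for the channel, as here, tends to spread sequential accesses evenly across channels, which is the usual motivation for interleaving.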

US Pat. No. 10,990,516

METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR PREDICTIVE API TEST SUITE SELECTION

Liberty Mutual Insurance ...

1. An apparatus for selecting a test suite for an API, the apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to:receive test patterns and heuristics;
receive an input API, wherein the input API comprises a set of subroutine definitions, protocols, and tools for building a software application;
parse the input API to extract API specifications; and
based at least in part on the extracted API specifications, the test patterns, and the heuristics, programmatically generate a test suite based at least in part on a machine learning model,
wherein programmatically generating the test suite comprises selecting one or more tests for inclusion in the test suite from a plurality of recommended tests, wherein the one or more tests are selected for inclusion in the test suite based at least in part on the one or more tests being assigned a high importance indicator, and wherein the test suite comprises one or more test routines, one or more data values, and one or more expected results.

US Pat. No. 10,990,515

AUTOMATED UNIT TESTING IN A MAINFRAME ENVIRONMENT

BMC Software, Inc., Hous...

1. An automated system for unit testing an application in a mainframe execution environment, comprising:a vector table that is associated with the application and resides in a non-transitory storage medium, where the application includes one or more commands for a given type of file system and each command has an entry in the vector table with an address to a handler for the corresponding command;
a test configurator executing in the mainframe execution environment and configured to receive and parse a test input file, where the test input file includes a record for a particular file accessed by the application using the given type of file system;
upon reading the record, the test configurator calls a stub setup routine, wherein the stub setup routine is associated with the given type of file system and its associated command stub simulation routines for the particular file in the mainframe execution environment, such that simulation by the command stub simulation routines is appropriate for the given type of file system; and
an interceptor routine accessible by the application and, in response to a given command issued by the application for the given type of file system, operates to call a corresponding command simulation routine for the type of command in the application;
wherein the test configurator further operates to update an entry in the vector table for the given command by replacing the address to a handler for the given command with an address for the interceptor routine, and the test configurator and the interceptor routine are implemented by computer executable instructions executed by a computer processor.

US Pat. No. 10,990,514

DETECTING PROBLEMATIC CODE CHANGES

Red Hat, Inc., Raleigh, ...

1. A system comprising:a processing device; and
a memory device including instructions executable by the processing device for causing the processing device to:
identify a broken software build and a last stable software-build associated with a software project;
based on identifying the broken software build and the last stable software-build, generate a history of code commits associated with the software project by retrieving a plurality of commit logs from a plurality of commit repositories and aggregating together information from the plurality of commit logs about code commits that were applied to the software project after the last stable software-build and before the broken software build; and
subsequent to generating the history of code commits, iteratively test the code commits specified in the history of code commits to determine a problematic code-commit that is at least partially responsible for the broken software build, wherein iteratively testing the code commits specified in the history of code commits to determine the problematic code-commit comprises:
identifying a code commit that is halfway between two boundaries in the history of code commits,
testing the code commit to determine if it is the problematic code-commit, and
if the code commit is not the problematic code-commit, modifying one of the two boundaries to correspond to the code commit.
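The iterative halving described above is a binary search over the commit history, the same idea behind `git bisect`. A minimal sketch, with names and the test callback assumed for illustration:

```python
def find_bad_commit(history, is_broken):
    """Binary-search sketch of the claimed iterative testing (names assumed).

    history: commits ordered from the last stable build toward the broken
    build; is_broken(commit) builds and tests at that commit. Each round
    picks the commit halfway between the two boundaries, tests it, and
    moves one boundary to that commit.
    """
    lo, hi = 0, len(history) - 1    # lo: known-good side, hi: known-bad side
    while hi - lo > 1:
        mid = (lo + hi) // 2        # commit halfway between the boundaries
        if is_broken(history[mid]):
            hi = mid                # breakage introduced at or before mid
        else:
            lo = mid                # breakage introduced after mid
    return history[hi]              # first commit observed to break the build
```

Because the search space halves each round, O(log n) test runs suffice for n commits.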

US Pat. No. 10,990,513

DETERMINISTIC CONCURRENT TEST PROGRAM EXECUTOR FOR AN AUTOMATED TEST EQUIPMENT

Advantest Corporation, T...

1. An apparatus comprising:a test program executor for an automated test equipment, wherein the test program executor is configured to execute a test flow comprising a plurality of test suites,
wherein the test program executor is configured
to asynchronously execute the plurality of test suites, wherein a test suite of the test suites comprises a call of a subsystem function of a subsystem, wherein the subsystem function of the subsystem is related to a subsystem operation that is to be executed by the subsystem, and
to signal the call of the subsystem function of the subsystem by transmitting an asynchronous request to the subsystem, the asynchronous request comprising a call tree hierarchy address that is call specific and the subsystem operation that is call specific to be executed by the subsystem, and wherein the test program executor is further configured to
determine an execution order of a plurality of subsystem operations, wherein the order of the execution sequence of the subsystem operations is based on the order of respective call tree hierarchy addresses that are call specific and are associated with the subsystem operations.

US Pat. No. 10,990,511

APPARATUS AND APPLICATION INTERFACE TRAVERSING METHOD

TENCENT TECHNOLOGY (SHENZ...

1. An application interface traversing method, comprising:selecting, by processing circuitry of an apparatus, a target user interface from a plurality of user interfaces of an application to be tested, the target user interface being associated with at least one of a control element, a sub-interface, and a parent interface;
obtaining a first control list for the target user interface, the first control list indicating whether the at least one of the control element, the sub-interface, and the parent interface has been traversed;
determining whether the first control list indicates that the target user interface is associated with a non-traversed user interface corresponding to one of the at least one of the control element, the sub-interface, and the parent interface; and
when the target user interface is determined to be associated with the non-traversed user interface, selecting the non-traversed user interface to update the first control list.

US Pat. No. 10,990,510

ASSOCIATING ATTRIBUTE SEEDS OF REGRESSION TEST CASES WITH BREAKPOINT VALUE-BASED FINGERPRINTS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for associating a particular test case with a particular test fingerprint indicative of code coverage of the particular test case, the method comprising:executing the particular test case;
determining a code path traversed during execution of the particular test case;
determining a collection of breakpoints encountered during traversal of the code path;
determining the particular test fingerprint corresponding to the particular test case based at least in part on the collection of breakpoints;
determining an attribute seed associated with the particular test case; and
storing an association between the attribute seed and the particular test fingerprint,
wherein storing the association between the attribute seed and the particular test fingerprint comprises populating a translation table with an entry storing the association;
determining a desired amount of cumulative code coverage; and
selecting and executing a group of test cases to execute to obtain the desired amount of cumulative code coverage, wherein the group of test cases comprises the particular test case.
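Selecting a group of test cases to reach a coverage target can be sketched with a greedy strategy over breakpoint fingerprints; the greedy choice and all names here are assumptions for illustration, not the patent's stated method.

```python
def select_tests(fingerprints, desired_coverage):
    """Greedy sketch of picking tests to meet a coverage target (assumed).

    fingerprints: test name -> set of breakpoints encountered during that
    test's execution (its fingerprint); desired_coverage: the set of
    breakpoints the cumulative run should hit.
    """
    remaining = set(desired_coverage)
    chosen = []
    while remaining:
        # pick the test whose fingerprint covers the most uncovered breakpoints
        best = max(fingerprints, key=lambda t: len(fingerprints[t] & remaining))
        gain = fingerprints[best] & remaining
        if not gain:
            break                   # target unreachable with these tests
        chosen.append(best)
        remaining -= gain
    return chosen, remaining
```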

US Pat. No. 10,990,509

DEBUGGING SYSTEMS

Undo Ltd., Cambridge (GB...

1. A system for debugging a computer program comprising computer readable program code, the system comprising:debugging data comprising replay debugging data for replaying an execution of the computer program;
first and second user devices, configured to display a first and second graphical user interface respectively, each graphical user interface comprising data indicative of an execution of a debugger processing the debugging data, said execution of the debugger comprising replaying the execution of the computer program using the debugging data;
the first and second graphical user interfaces each configured to:
simultaneously display data indicative of a temporal position within the replay of the execution of the computer program associated with the first graphical user interface and display data indicative of a temporal position within the replay of the execution of the computer program associated with the second graphical user interface; and
display data indicative of a position within the computer readable program code associated with the other one of said graphical user interfaces.

US Pat. No. 10,990,508

COMPUTING SYSTEM WITH GUI TESTING DEVICE AND RELATED METHODS

CITRIX SYSTEMS, INC., Fo...

1. A computing system comprising:a client computing device configured to execute a software application with an associated graphical user interface (GUI), the GUI comprising a plurality of fields, each field to hold a text string; and
a GUI testing device in communication with said client computing device and configured to
execute a testing framework for interacting with the software application to generate a plurality of versions of the GUI, each of the plurality of versions being in a different spoken communication language, and defining a plurality of expected text strings in the plurality of fields,
extract the plurality of fields from the plurality of versions of the GUI,
perform optical character recognition (OCR) processing on the plurality of fields to generate a plurality of actual text strings, and
compare the plurality of actual text strings with the plurality of expected text strings.

US Pat. No. 10,990,507

SYSTEM AND METHOD FOR PROVISIONING A VIRTUAL MACHINE TEST ENVIRONMENT

Dell Products L.P., Roun...

1. A system comprising a computer processor for testing changes to a website, the system comprising:a monitoring component to monitor for changes in a web site, and to detect a change in the web site;
a repository; and
a hypervisor communicatively connected to the repository and to the monitoring component, in response to the detection of the change in the website the hypervisor to instantiate a virtual test environment, within the virtual test environment the hypervisor configured to:
instantiate a first virtual machine, from a first snapshot stored in the repository, as a first environment node in a test environment;
apply scripts to configure the first virtual machine to test a first proposed webpage of the website;
if the test is successful, then store in the repository a second snapshot of the first virtual machine as configured;
instantiate a second virtual machine, from the second snapshot, as a second environment node;
configure the second virtual machine to test a second webpage of the website; and
test an interface between the first virtual machine and the second virtual machine to test an interface between the second webpage of the website and the first proposed webpage.

US Pat. No. 10,990,506

CROSS-THREAD MEMORY INDEXING IN TIME-TRAVEL DEBUGGING TRACES

MICROSOFT TECHNOLOGY LICE...

1. A method, implemented at a computer system that includes at least one processor, for creating memory snapshot data that reduces processing for thread-focused analysis, the method comprising:identifying a plurality of trace fragments within a trace that represents prior execution of a plurality of threads, each trace fragment representing an uninterrupted consecutive execution of a plurality of executable instructions on a corresponding thread of the plurality of threads, the plurality of trace fragments including a first and a second trace fragment corresponding to a first thread, and a third trace fragment corresponding to a second thread;
determining at least a partial ordering among the plurality of trace fragments, including determining that the first trace fragment is orderable prior to the second trace fragment on the first thread, and that the third trace fragment is orderable between the first and second trace fragments;
based on the third trace fragment being orderable between the first and second trace fragments, identifying at least one memory cell that is interacted with by one or more executable instructions whose execution is represented by the third trace fragment; and
inserting memory snapshot data into trace data corresponding to the first thread, the memory snapshot data at least identifying the at least one memory cell.

US Pat. No. 10,990,505

STIPULATED OVERRIDES WITH VIOLATION RESOLUTION

DREAMWORKS ANIMATION LLC,...

1. A computer-implemented method for composing a scene using a data module, the method comprising:receiving, from a user, an instruction to instantiate the data module to produce at least a first instance of the data module in a second data module;
receiving, from the user, a first override for modifying the first instance of the data module;
receiving, from the user, a second override for modifying the data module;
identifying a conflict introduced by the first override and the second override;
configuring a display interface to display an indication informing the user of the identified conflict;
configuring the display interface to display one or more options for resolving the identified conflict;
receiving, from the user, a selection of an option of the one or more options; and
in response to the selection of the option, resolving the identified conflict by deleting the first override or the second override.

US Pat. No. 10,990,504

TIME TRAVEL SOURCE CODE DEBUGGER INCORPORATING FUTURE PREDICTION

OzCode Ltd., Herzelia (I...

1. A method, implemented on a computer that includes one or more processors, of time travel debugging for use in a source code debugger, the method comprising:obtaining source code corresponding to a debuggee application;
instrumenting the source code without altering the behavior of the debuggee application, whereby hook functions are inserted into strategic points of interest in the source code;
simulating execution of the instrumented source code in an isolated execution environment without making any changes to original process memory, wherein said hook functions are operative to collect information about runtime execution behavior of the source code;
recording chronologically memory writes generated by the simulation in a memory based journal;
generating a nested data structure documenting simulated execution flow of the instrumented source code utilizing said information about runtime execution, whereby logpoints corresponding to said strategic points of interest in the source code are entered into said nested data structure;
generating checkpoints into said memory based journal wherein each checkpoint corresponds to a particular point in time of the simulated execution of the instrumented source code; and
storing said checkpoints in appropriate locations in said nested data structure thereby associating particular points in time of execution of the instrumented source code in the future with memory locations in said journal, thereby enabling a user to time travel to the future.

US Pat. No. 10,990,502

DETAILED PERFORMANCE ANALYSIS BY FLOW AWARE MARKER MECHANISM

EMC IP Holding Company LL...

1. A method comprising:executing a set of threads in a storage system, the set of threads including at least a first thread;
executing a plurality of performance counters of the storage system, the plurality of performance counters being executed concurrently with the set of threads, the plurality of performance counters including at least: (i) a first performance counter that is executed when an operating state of the first thread is changed in response to the first thread accessing a synchronization object, and (ii) a second performance counter that is executed when a marker inserted in the first thread is executed;
generating one or more performance data containers associated with the first thread based on performance data associated with the first thread; and
generating a directed graph based on the performance data containers, the directed graph including a plurality of nodes connected to one another by a plurality of edges, the plurality of nodes including a first node corresponding to the synchronization object, and a second node corresponding to the marker.

US Pat. No. 10,990,501

MACHINE LEARNING SYSTEM FOR WORKLOAD FAILOVER IN A CONVERGED INFRASTRUCTURE

VMware, Inc., Palo Alto,...

1. A method, comprising:identifying, by at least one computing device, a cluster of virtual machines executed in a computing environment;
performing, by the at least one computing device, a plurality of simulations for the cluster of virtual machines, the plurality of simulations simulating a failure of one or more hosts in the computing environment, the plurality of simulations further simulating an effect on the cluster of virtual machines as a result of the failure;
generating, by the at least one computing device, a score for respective ones of the simulations, the score representing the effect on the cluster of virtual machines;
performing, by the at least one computing device, a clustering process on one of the simulations based upon the score, the clustering process being trained using data from at least one other deployment within a converged infrastructure environment; and
identifying, by the at least one computing device, based on the clustering process, a most similar deployment to the cluster of virtual machines within the computing environment.

US Pat. No. 10,990,500

SYSTEMS AND METHODS FOR USER ANALYSIS

BEIJING DIDI INFINITY TEC...

1. A system, comprising:at least one storage medium including a set of instructions for user mining;
at least one processor in communication with the at least one storage medium, wherein when executing the set of instructions, the at least one processor is directed to:
obtain a plurality of first feature vectors of a plurality of positive samples, each first feature vector including first feature information that describes a plurality of features of a corresponding positive sample in the plurality of positive samples;
obtain a plurality of second feature vectors of a plurality of negative samples, each second feature vector including second feature information that describes a plurality of features of a corresponding negative sample in the plurality of negative samples;
determine, based on the plurality of first feature vectors and the plurality of second feature vectors, a plurality of expanded first feature vectors and a plurality of expanded second feature vectors; and
determine, among the plurality of features corresponding to the plurality of first feature vectors, one or more core features related to the plurality of positive samples based on a trained binary model, which is produced by using the plurality of expanded first feature vectors and the plurality of expanded second feature vectors.

US Pat. No. 10,990,499

MANAGING VISITOR ACCESS

MyOmega Systems GmbH, Nu...

1. A method of managing a visitor outside of a physical area, comprising:remotely registering the visitor to an access system by validating an application on a mobile user device of the visitor using compute resources of a security module, wherein the compute resources are physically secured against access by external components and processes;
the application communicating with the access system using the compute resources of the security module;
the visitor entering personnel credentials to authorize use of the application while the visitor is outside of the physical area; and
in response to the visitor authorizing use of the application, the application and the access system exchanging data containing at least one of: information about the visitor, remote executed activities of the visitor, remote data, or information about localization of the user device.

US Pat. No. 10,990,498

DATA STORAGE DEVICE AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A data storage device comprising:a nonvolatile memory device including a plurality of dies; and
a controller configured to control the nonvolatile memory device, the controller comprising:
a processor configured to transmit operation commands to the nonvolatile memory device based on requests of a host device, and output control signals instructing to generate power consumption profiles for dies which operate in response to the operation commands; and
a power management unit configured to operate according to the control signals outputted from the processor, the power management unit comprising:
a power profile command table in which one or more power profile commands corresponding to each of the operation commands are stored;
a power profile command processing circuit configured to generate the power consumption profiles representing power consumption amounts changing with the lapse of time for each of the dies which operate, by processing the one or more power profile commands corresponding to each control signal; and
a power budget scheduler configured to determine whether to transmit an operation command other than the operation commands to the nonvolatile memory device, depending on a total power consumption amount summed at each set unit time based on the power consumption profiles,
wherein each power profile command is a bitmap which includes a mode setting region, a power profile generation time setting region and a power value setting region, and
wherein the mode setting region comprises:
a first mode setting bit to set an end reference to generation of the power profile command; and
a second mode setting bit to set a scheme in which a power value set in the power value setting region is applied for a time period set in the power profile generation time setting region.
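A decoder for such a bitmap command might look like the sketch below; the claim does not fix field positions or widths, so the layout chosen here (one bit each for the mode settings, six bits of time, eight bits of power value) is purely an assumption.

```python
def decode_power_profile_cmd(cmd):
    """Decode a hypothetical layout of the claimed power profile bitmap.

    Assumed layout: bit 15 = first mode setting bit (end reference),
    bit 14 = second mode setting bit (application scheme), bits 13..8 =
    power profile generation time, bits 7..0 = power value.
    """
    return {
        "end_reference": (cmd >> 15) & 0x1,  # first mode setting bit
        "apply_scheme":  (cmd >> 14) & 0x1,  # second mode setting bit
        "gen_time":      (cmd >> 8) & 0x3F,  # time period for the power value
        "power_value":   cmd & 0xFF,         # power value to apply
    }
```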

US Pat. No. 10,990,497

DATA STORAGE SYSTEM AND METHOD FOR OPERATING NON-VOLATILE MEMORY

Silicon Motion, Inc., Jh...

1. A data storage device, comprising:a non-volatile memory;
a plurality of thermometers, configured to detect temperature of different regions of the non-volatile memory; and
a controller, configured to operate the non-volatile memory to heat up a target region of the non-volatile memory according to a regional temperature detected by a target thermometer corresponding to the target region.

US Pat. No. 10,990,496

SYSTEM AND METHOD TO DERIVE HEALTH INFORMATION FOR A GENERAL PURPOSE PROCESSING UNIT THROUGH AGGREGATION OF BOARD PARAMETERS

Dell Products L.P., Roun...

1. An information handling system, comprising:a host processing system including:
a main processor that instantiates a management controller agent; and
a general-purpose processing unit (GPU); and
a baseboard management controller (BMC) coupled to the host processing system and to the GPU, the BMC configured to:
direct the management controller agent to retrieve first management information from the GPU;
receive the first management information from the management controller agent;
retrieve second management information from the GPU; and
provide a health indication for the GPU based upon the first management information and the second management information, wherein at least one of the first and second management information includes a page retirement count for the GPU.

US Pat. No. 10,990,495

SELECTIVELY ENABLING FEATURES BASED ON RULES

Snap Inc., Santa Monica,...

1. A method comprising:providing, by one or more processors, to a client device, a messaging application comprising a plurality of features;
accessing, by the one or more processors, a first configuration rule of a plurality of configuration rules that associates a first device property rule with a first feature of the plurality of features of the messaging application, the first device property rule comprising a specified location or a bandwidth;
determining, by the one or more processors, at a first point in time, that a first property of the client device matches the first device property rule associated with the first configuration rule, the determining comprising comparing a current location of the client device to the specified location in the first device property rule or comparing an available bandwidth of the client device to the bandwidth in the first device property rule;
in response to determining that the first property of the client device matches the first device property rule associated with the first configuration rule, enabling, by the one or more processors, the first feature of the plurality of features on the client device at the first point in time;
receiving, by the one or more processors, an updated first property of the client device at a second point in time; and
in response to determining that the updated first property of the client device fails to match the first device property rule associated with the first configuration rule at the second point in time, disabling, by the one or more processors, the first feature of the plurality of features on the client device.
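The enable/disable cycle driven by device-property rules can be sketched as a small predicate check; the rule and device dictionaries and their keys are assumptions for illustration.

```python
def evaluate_feature(rule, device, enabled):
    """Sketch of rule-based feature toggling (all names assumed).

    rule associates a device-property predicate (a specified location or a
    bandwidth threshold) with a feature. The feature is enabled while the
    device's current property matches the rule and disabled once a later
    property update stops matching.
    """
    matches = (
        device.get("location") == rule.get("location")
        or device.get("bandwidth", 0) >= rule.get("min_bandwidth", float("inf"))
    )
    if matches:
        enabled.add(rule["feature"])      # property matches: enable feature
    else:
        enabled.discard(rule["feature"])  # later mismatch: disable feature
    return enabled
```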

US Pat. No. 10,990,494

DISTRIBUTED HARDWARE TRACING

Google LLC, Mountain Vie...

1. A method performed by a hardware tracing system for capturing data describing hardware events, the method comprising:detecting, at a multi-core processor comprising a plurality of cores, a trigger that triggers capturing of the data describing the hardware events, wherein the trigger is:
i) satisfied as a result of executing program code using respective processor components within each of the plurality of cores; and
ii) configured to cause storage of event data about the operations performed at the multi-core processor;
in response to detecting the trigger, storing, at the multi-core processor, event data describing trace activity, wherein the event data is captured and stored during execution of the program code at the multi-core processor and is specific to operations involving:
i) a transfer of data between respective processor components within at least one core of the plurality of cores; and
ii) a transfer of data between respective cores of the multi-core processor; and
providing, to host software, the event data that is stored in response to detecting the trigger, wherein the event data provided to the host software represents a timeline of operations and enables analyzing performance of the program code using the host software.

US Pat. No. 10,990,491

STORAGE CONTROL APPARATUS AND RECOVERY METHOD OF A FAILED TRACK

HITACHI, LTD., Tokyo (JP...

1. A storage control apparatus configured to write data cached in track units to a physical storage, a track being a minimum unit during caching, the storage control apparatus comprising:a memory;
a format bitmap for managing whether or not the physical storage is to be formatted in page units; and
a processor communicatively coupled to the memory and the format bitmap, wherein the processor is configured to
determine whether a failed track in which a failure has occurred during writing to the physical storage corresponds to a control area or an unallocated area; and
format, when the failed track corresponds to the control area or the unallocated area, a storage area of the physical storage corresponding to the control area or the unallocated area,
wherein a page of the page units is a minimum unit of virtual allocation of the storage area,
wherein when the failed track occurs and the failed track corresponds to the control area or the unallocated area, the storage control apparatus sets a bit position of the format bitmap corresponding to a page allocated to the failed track to ON, and
wherein when the storage area corresponding to a page allocated to the failed track is formatted, the storage control apparatus sets a bit position of the format bitmap corresponding to a page included in the formatted storage area to OFF.
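The ON/OFF bookkeeping of the format bitmap can be sketched with simple bit operations; the class interface is an assumption, and only the set-on-failure / clear-on-format behavior from the claim is modeled.

```python
class FormatBitmap:
    """Sketch of the page-granular format bitmap (interface assumed).

    A set bit marks a page whose storage area still needs formatting after
    a failed track in a control or unallocated area; formatting the
    corresponding storage area clears the bit again.
    """
    def __init__(self, num_pages):
        self.num_pages = num_pages
        self.bits = 0

    def mark_failed_track(self, page):
        # failed track in a control/unallocated area: set bit position ON
        self.bits |= 1 << page

    def mark_formatted(self, page):
        # storage area for the page has been formatted: set bit position OFF
        self.bits &= ~(1 << page)

    def needs_format(self, page):
        return bool(self.bits & (1 << page))
```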

US Pat. No. 10,990,490

CREATING A SYNCHRONOUS REPLICATION LEASE BETWEEN TWO OR MORE STORAGE SYSTEMS

Pure Storage, Inc., Moun...

1. A method of establishing a synchronous replication relationship between two or more storage systems, the method comprising:
identifying, for a dataset, a plurality of storage systems across which the dataset will be synchronously replicated;
exchanging, between the plurality of storage systems, timing information for at least one of the plurality of storage systems; and
establishing, in dependence upon the timing information for at least one of the plurality of storage systems, a synchronous replication lease, the synchronous replication lease identifying a period of time during which the synchronous replication relationship is valid, wherein a request to modify the dataset may only be acknowledged after a copy of the dataset has been modified on each of the storage systems.

US Pat. No. 10,990,489

REPLICATION SYSTEM WITH NETWORK FAILOVER

Amazon Technologies, Inc....

1. A method for disk replication over a network with network failover, comprising:
generating a first write packet when a write instruction is detected from a first computing environment, where the first write packet includes: metadata associated with a data block, and a packet identifier;
storing the first write packet in a cache;
sending the first write packet from the cache to a second computing environment for storage;
determining that the first write packet has been successfully stored in the second computing environment based on receipt of an acknowledgement sent responsive to a successful snapshot being taken of the storage in the second computing environment; and
removing a plurality of write packets from the cache in response to determining that the first write packet was successfully stored in the second computing environment, wherein the packet identifier of the first write packet is larger than each other serially consecutive packet identifier of the other write packets of the plurality of write packets.
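The cache-trimming step in this claim can be sketched simply: once a packet is acknowledged (via a successful remote snapshot), it and every serially earlier cached packet can be dropped. The function and data below are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch: drop all cached write packets whose identifier is less than
# or equal to the acknowledged packet's identifier.

def trim_cache(cache, acked_id):
    """Remove write packets with id <= acked_id, keeping later ones."""
    return {pid: pkt for pid, pkt in cache.items() if pid > acked_id}

cache = {1: "w1", 2: "w2", 3: "w3", 4: "w4"}
cache = trim_cache(cache, 3)   # packets 1..3 confirmed stored remotely
print(sorted(cache))  # [4]
```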

US Pat. No. 10,990,488

SYSTEM AND METHOD FOR REDUCING FAILOVER TIMES IN A REDUNDANT MANAGEMENT MODULE CONFIGURATION

Dell Products L.P., Roun...

1. An information handling system, comprising:
a shared memory to store a first plurality of attributes for the information handling system; and
a first management module to communicate with the shared memory, the first management module including a first enclosure controller including a first local memory to store a first stack, and a first memory manager,
wherein when the first management module is set as a standby module of the information handling system, the first enclosure controller to: provide a first plurality of requests for attribute data for the information handling system; receive first response data for attribute data associated with a first subset of the first requests; store the first response data in the first local memory of the first enclosure controller; and receive request failure responses associated with a second subset of the first requests, wherein the second subset of the first requests is directed to a subset of the attributes data for the information handling system stored in the shared memory,
wherein when the first management module is set as an active module of the information handling system, the first management module is granted access to the shared memory, and the first enclosure controller to: provide retry requests for attributes associated with the request failure responses; receive second response data associated with the retry requests; and store the second response data in the first local memory of the first enclosure controller.

US Pat. No. 10,990,487

SYSTEM AND METHOD FOR HYBRID KERNEL- AND USER-SPACE INCREMENTAL AND FULL CHECKPOINTING

OPEN INVENTION NETWORK LL...

1. A system, comprising:
computer system memory;
one or more Central Processing Units (CPUs) operatively connected to said computer system memory and configured to execute one or more multi-process applications on a host with a host operating system;
wherein a checkpoint is loaded from the memory for said one or more multi-process applications;
wherein said checkpoint is comprised of at least one of an application virtualization space, an application process hierarchy, and one or more checkpoints for the processes comprising said one or more multi-process applications; and
a kernel-space checkpoint-restore module configured to capture one or more virtual memory area data structures for a process of the one or more multi-process applications during the checkpoint.

US Pat. No. 10,990,486

DATA STORAGE SYSTEM WITH REPAIR OF MID-LEVEL MAPPING BLOCKS OF INTERNAL FILE SYSTEM

EMC IP Holding Company LL...

1. A method of repairing an indirect addressing structure of a file system that has been damaged by corruption of a mid-level mapping (MID) page providing a middle-level mapping of data of a set of files used to store a family of storage objects, the file system including leaf pages associated with respective MID pages and providing a lower-level mapping of file system data, comprising:
scanning selected leaf pages to identify leaf pages associated with the corrupted MID page, the scanning including (1) based on an association of groups of leaf pages with corresponding sets of families of storage objects, scanning the leaf pages of only those groups of leaf pages that are associated with the family of storage objects for the corrupted MID page, and (2) performing a two-pass process including first identifying all leaf pages for the logical offset range of the corrupted MID page and then pruning those identified leaf pages that are reachable via non-corrupted MID pages for the logical offset range, the pruning yielding a set of resulting leaf pages for the corrupted MID page;
creating a replacement MID page by incorporating pointers to the resulting leaf pages into the replacement MID page; and
incorporating the replacement MID page into the indirect addressing structure in place of the corrupted MID page.
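The two-pass scan above reduces to a set difference: all leaf pages in the corrupted MID's offset range, minus those reachable through healthy MID pages. The sketch below models this with invented names and plain Python sets, not the file system's actual structures.

```python
# Hedged sketch of the claimed two-pass repair: pass 1 collects candidate leaf
# pages, pass 2 prunes those reachable via non-corrupted MID pages; survivors
# become the pointers of the replacement MID page.

def rebuild_mid(leaves_in_range, reachable_via_good_mids):
    # Pass-1 result minus pass-2 pruning = leaves owned by the corrupted MID.
    surviving = leaves_in_range - reachable_via_good_mids
    # In this sketch the replacement MID page is just a sorted pointer list.
    return sorted(surviving)

all_leaves = {"L1", "L2", "L3", "L4"}
good_refs = {"L2", "L4"}           # reachable via healthy MID pages
print(rebuild_mid(all_leaves, good_refs))  # ['L1', 'L3']
```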

US Pat. No. 10,990,485

SYSTEM AND METHOD FOR FAST DISASTER RECOVERY

Acronis International Gmb...

1. A computer-implemented method for restoring a computing system, comprising:
receiving a backup of a computing device executing a protected application;
generating an ancillary virtual machine (VM) comprising a virtual disk that is emulated from the backup;
generating a delta disk linked to the virtual disk of the ancillary VM, wherein the ancillary VM is given read-only access to the virtual disk and write access to the delta disk;
determining and writing, to the delta disk, configuration modifications to the ancillary VM that enable booting using the virtual disk, wherein the delta disk comprises the configuration modifications for executing the protected application on a different device with dissimilar hardware as the computing device;
deleting the generated ancillary VM;
responsive to receiving a request to perform recovery of the computing device, creating on the different device a recovery virtual machine (VM) having a base virtual disk emulated from the backup;
modifying the recovery VM by attaching the delta disk having the configuration modifications for executing the protected application; and
resuming execution of the protected application on the recovery VM.

US Pat. No. 10,990,484

PERFORMING BACKUP OPERATIONS AND INDEXING BACKUP DATA

Commvault Systems, Inc., ...

1. A storage system for generating and indexing backup data, the storage system comprising:
a first computing device comprising one or more hardware processors and computer memory;
a second computing device comprising one or more hardware processors and computer memory;
wherein the first computing device is configured to:
at a first time, perform a first backup operation,
wherein the first backup operation generates first backup data as a plurality of data chunks and stores the first backup data on one or more storage devices that are communicatively coupled to the first computing device,
generate one or more first transaction logs based on the first backup operation,
wherein for each data chunk within the first backup data, a corresponding first transaction log comprises metadata about the data chunk, and
after a data chunk within the first backup data is stored on the one or more storage devices, transmit a corresponding one of the one or more first transaction logs to the second computing device, which maintains an index,
wherein the index is configured to enable restoring backup files generated by at least the first backup operation, including restoring first backup files within the first backup data,
wherein after terminating prematurely before completion, the first backup operation resumes based on using the index in combination with the one or more first transaction logs, and
wherein the first backup operation resumes without restarting and without re-creating the one or more first transaction logs that were created before the terminating prematurely; and
wherein the second computing device is configured to:
at a second time after the first time, update the index by applying the one or more first transaction logs to the index.

US Pat. No. 10,990,483

MINIMIZING A FOOTPRINT OF INCREMENTAL BACKUPS

EMC IP HOLDING COMPANY LL...

1. A method for performing a backup operation, the method comprising:
tracking used blocks on a storage device in a used block structure associated with a filesystem, wherein the used block structure identifies used blocks of the filesystem and unused blocks of the filesystem;
tracking changed blocks on the storage device in a changed block structure since a previous backup operation by a processor;
triggering an incremental backup operation for the storage device;
in response to the triggered incremental backup operation, comparing the changed block structure with the used block structure to identify blocks that are both used and changed, wherein used blocks store data that needs to be included in the incremental backup operation; and
backing up only the blocks that are both used and that have changed since the previous backup operation, wherein blocks that are unused and that have changed are not included in the backup operation, wherein unused blocks do not store data that needs to be included in the incremental backup operation.
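The block selection claimed above is an intersection of two block structures. Modeling the bitmaps as Python sets of block numbers (an assumption for illustration), the core step is one line:

```python
# Minimal sketch: back up only blocks that are both used (per the filesystem's
# used-block structure) and changed since the previous backup. Changed-but-
# unused blocks are deliberately excluded.

def blocks_to_back_up(used_blocks, changed_blocks):
    return used_blocks & changed_blocks

used = {0, 1, 2, 5, 7}
changed = {1, 2, 3, 7, 9}    # 3 and 9 changed but are unused: skipped
print(sorted(blocks_to_back_up(used, changed)))  # [1, 2, 7]
```

A real implementation would operate on bitmaps rather than sets, but the AND of the two structures is the same idea.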

US Pat. No. 10,990,482

PARTITION LEVEL RESTORE

EMC IP HOLDING COMPANY LL...

1. A computer-implemented method for partition level restore, the method comprising:
determining a restoration source virtual machine disk image and a restoration target virtual machine disk image, the determining of the restoration target virtual machine disk image including taking a snapshot of the target virtual machine disk image;
determining partition information for the restoration source and target virtual machine disk images;
analyzing the partition information for the restoration source and target virtual machine disk images;
determining whether the partition information for the restoration source virtual machine disk image matches the partition information for the restoration target virtual machine disk image; and
in response to determining that the partition information for the restoration source virtual machine disk image matches the partition information for the restoration target virtual machine disk image, copying all data from a partition to be restored on the restoration source virtual machine disk image to a corresponding partition on the restoration target virtual machine disk image, wherein accessing the restoration source partition on the source virtual machine disk image includes setting up a first loop device, and wherein accessing the restoration target partition on the target virtual machine disk image includes setting up a second loop device.

US Pat. No. 10,990,481

USING ALTERNATE RECOVERY ACTIONS FOR INITIAL RECOVERY ACTIONS IN A COMPUTING SYSTEM

International Business Ma...

1. A computer program product for performing a recovery action upon detecting an error in a computing system, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations, the operations comprising:
detecting an error in the computing system;
determining whether to use an initial recovery action for the detected error or an alternate recovery action for the initial recovery action, wherein the alternate recovery action provided for the initial recovery action specifies a different recovery path involving at least one of a different action and a different component in the computing system than involved in the initial recovery action for which the alternate recovery action is provided; and
using the initial recovery action or the alternate recovery action determined to use to address the detected error.

US Pat. No. 10,990,480

PERFORMANCE OF RAID REBUILD OPERATIONS BY A STORAGE GROUP CONTROLLER OF A STORAGE SYSTEM

PURE STORAGE, INC., Moun...

1. A storage system comprising:
a plurality of solid-state storage devices; and
a storage group controller operatively coupled to a set of solid-state storage devices of the plurality of solid-state storage devices, the storage group controller comprising a processing device, the processing device configured to:
receive, from a central storage controller, a command comprising information associated with a RAID rebuild operation to reconstruct data stored at the set of solid-state storage devices, wherein the information associated with the RAID rebuild operation comprises a threshold amount of time to reconstruct the data;
read other data and parity data associated with the data to be reconstructed at the set of solid-state storage devices based on the information associated with the RAID rebuild operation upon receiving the information associated with the RAID rebuild operation;
upon reading the other data and the parity data stored at the set of solid-state storage devices, reconstruct the data based on the other data, the parity data and the information associated with the RAID rebuild operation; and
transmit, to the central storage controller, the reconstructed data.

US Pat. No. 10,990,479

EFFICIENT PACKING OF COMPRESSED DATA IN STORAGE SYSTEM IMPLEMENTING DATA STRIPING

EMC IP Holding Company LL...

1. An apparatus comprising:
at least one processing device comprising a processor coupled to a memory;
the processing device being configured:
to utilize a first compress block size for compressing first data stored in a first one of a plurality of stripes of a storage system implementing data striping across a plurality of storage devices; and
to utilize a second compress block size different than the first compress block size for compressing second data stored in a second one of the plurality of stripes;
wherein the first stripe and the second stripe utilize a common stripe column size;
wherein the first stripe splits the common stripe column size into a first number of rows based at least in part on the first compress block size; and
wherein the second stripe splits the common stripe column size into a second number of rows different than the first number of rows based at least in part on the second compress block size.
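The row-split relationship in the last two clauses is simple arithmetic: a common stripe column size divided by each stripe's own compress block size gives that stripe's row count. The numbers below are invented examples, not values from the patent.

```python
# Illustrative arithmetic for the claimed row split: different compress block
# sizes over the same stripe column size yield different numbers of rows.

def rows_in_stripe(column_size, compress_block_size):
    # Number of compressed rows a stripe column holds.
    return column_size // compress_block_size

COLUMN = 64 * 1024            # assumed common stripe column size: 64 KiB
print(rows_in_stripe(COLUMN, 4 * 1024))   # 16 rows with a 4 KiB compress block
print(rows_in_stripe(COLUMN, 8 * 1024))   # 8 rows with an 8 KiB compress block
```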

US Pat. No. 10,990,478

FLEXIBLE RELIABILITY CODING FOR STORAGE ON A NETWORK

Fungible, Inc., Santa Cl...

1. A method comprising:
receiving, by a data processing unit configured to implement a plurality of durability schemes for data reliability, a set of data to be stored;
identifying, by the data processing unit, a durability scheme from among the plurality of different durability schemes for data reliability, wherein each of the plurality of different durability schemes is implemented through use of a different one of a plurality of coefficient matrices;
determining, by the data processing unit and based on the identified durability scheme, an identified coefficient matrix appropriate for the identified durability scheme by determining which portions of the set of data are used to generate each of a plurality of parity fragments and by configuring the identified coefficient matrix to select, for each respective parity fragment in the plurality of parity fragments, the determined portions of the set of data when the identified coefficient matrix is applied to the set of data;
caching, by the data processing unit, a subset of coefficient matrices from the plurality of coefficient matrices, wherein caching the subset of coefficient matrices includes identifying which of the plurality of coefficient matrices has been used most recently to implement one or more of the plurality of data durability schemes; and
generating, by data durability circuitry within the data processing unit and by applying the identified coefficient matrix to the set of data, parity data, wherein the parity data includes the plurality of parity fragments.

US Pat. No. 10,990,477

DEVICE AND METHOD FOR CONTROLLING THE DATA REFRESH CYCLES IN REPROGRAMMABLE NON-VOLATILE MEMORIES

1. A method for controlling the refresh of data in reprogrammable nonvolatile memories, said memories comprising a plurality of memory pages for storing data, the steps of the method being executed during an operation of reading a memory page and comprising:
identifying with an error correction code errors in a read memory page;
computing among the identified errors the number of retention errors and of non-retention errors, the non-retention errors comprising, in particular, repeated read or programming errors;
computing the retention age of said read memory page;
estimating the remaining retention time for said read memory page as a function of the number of retention errors, the number of non-retention errors and the retention age computed beforehand;
comparing the estimated value of the remaining retention time to a predefined value corresponding to a maximum time interval between two successive operations of reading a memory page; and
determining whether said read memory page must be refreshed or not depending on the results of the comparison.
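The final comparison step can be sketched as below. The estimator shown is a made-up linear placeholder standing in for the patented model, and all thresholds are invented; only the shape of the decision (refresh when the estimated remaining retention time falls below the maximum read interval) follows the claim.

```python
# Hedged sketch of the claimed refresh decision.

def estimate_remaining(retention_errors, age_s, budget_errors=10, horizon_s=3_600_000):
    # Toy linear model (an assumption, not the patent's estimator):
    # the fraction of the error budget consumed scales down the horizon.
    used = min(retention_errors / budget_errors, 1.0)
    return max(horizon_s * (1.0 - used) - age_s, 0.0)

def needs_refresh(remaining_retention_s, max_read_interval_s):
    # Refresh when the page may not survive until the next guaranteed read.
    return remaining_retention_s < max_read_interval_s

rem = estimate_remaining(retention_errors=6, age_s=1_000_000)
print(needs_refresh(rem, max_read_interval_s=500_000))  # True: 440000 s left
```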

US Pat. No. 10,990,476

MEMORY CONTROLLER AND METHOD OF OPERATING THE SAME

SK hynix Inc., Icheon-si...

1. A memory controller for controlling a memory device that stores data, comprising:
a bit counter configured to generate a first count value by counting a number of bits in a host data to be programmed at an address in the memory device in response to a program request received from a host;
a flash translation layer configured to generate a first page information indicating the address of the programmed data stored in the memory device;
an additional data generator configured to generate a judgment data based on the first count value and the first page information;
a comparator configured to generate a comparison information by comparing the judgment data with detection data, wherein the detection data is generated using the programmed data read from the memory device in response to a read request received from the host; and
a read data controller configured to perform an operation of correcting an error in the programmed data read from the memory device based on the comparison information.

US Pat. No. 10,990,475

READ LEVEL EDGE FIND OPERATIONS IN A MEMORY SUB-SYSTEM

MICRON TECHNOLOGY, INC., ...

11. A method comprising:
receiving a request to locate one or more distribution edges of one or more programming distributions of a memory cell, the request specifying a target bit error rate (BER) for the one or more programming distributions;
selecting a first programming distribution for which to locate a first distribution edge;
measuring a first BER sample of the first programming distribution using a first offset value that is offset from a first center value corresponding to a first read level threshold;
measuring a second BER sample of the first programming distribution using a second offset value that is offset from the first offset value;
determining that the second BER sample exceeds the target BER and the first BER sample does not exceed the target BER; and
determining a first location of the first distribution edge at the target BER by interpolating between the first BER sample and the second BER sample.
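The interpolation in the last step is a standard linear interpolation between the two BER samples straddling the target. The sketch below uses invented offsets and BER values purely for illustration.

```python
# Hedged sketch: estimate the read-level offset where the distribution edge
# crosses the target BER, given one sample below and one above the target.

def interpolate_edge(offset1, ber1, offset2, ber2, target_ber):
    # Linear interpolation between (offset1, ber1) and (offset2, ber2).
    frac = (target_ber - ber1) / (ber2 - ber1)
    return offset1 + frac * (offset2 - offset1)

# Per the claim: the first sample does not exceed the target, the second does.
edge = interpolate_edge(offset1=10, ber1=1e-4, offset2=20, ber2=3e-4, target_ber=2e-4)
print(edge)  # 15.0
```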

US Pat. No. 10,990,474

COST-BENEFIT AWARE READ-AMPLIFICATION IN RAID SCRUBBING

SEAGATE TECHNOLOGY LLC, ...

1. A method, comprising:
upon a read operation for a stripe of a storage device:
determining a percentage amount of potential read amplification for the read operation;
determining a current age of the stripe in the read operation as a percentage of a longest safe elapsed time between read scrub operations on a stripe of the storage device; and
performing a read scrub operation on the stripe when the current age is greater than the percentage amount of potential read amplification.
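The cost-benefit test above compares two percentages. A minimal sketch, with invented names and example values:

```python
# Hedged sketch of the claimed decision: scrub the stripe on read only when
# its age (as a percentage of the longest safe inter-scrub interval) exceeds
# the percentage of potential read amplification for this read.

def should_scrub(read_amplification_pct, age_pct):
    # Scrub when the stripe's relative age outweighs the extra read cost.
    return age_pct > read_amplification_pct

print(should_scrub(read_amplification_pct=30.0, age_pct=45.0))  # True
print(should_scrub(read_amplification_pct=60.0, age_pct=45.0))  # False
```

The effect is that cheap scrub opportunities (low read amplification) are taken early, while expensive ones are deferred until the stripe has aged enough to justify them.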

US Pat. No. 10,990,472

SPARE SUBSTITUTION IN MEMORY SYSTEM

Micron Technology, Inc., ...

1. A method, comprising:
receiving a first portion of a code word from a memory medium, the code word comprising a set of bit fields indicative of a plurality of data bursts across a plurality of channels;
identifying a spare bit in the first portion of the code word to replace a bit field of the set based at least in part on receiving the first portion of the code word;
determining, based at least in part on identifying the spare bit, the bit field of the set to be replaced by the spare bit;
receiving a second portion of the code word based at least in part on determining the bit field of the set to be replaced by the spare bit; and
replacing the bit field of the set in the second portion of the code word with the spare bit concurrently with receiving the second portion of the code word.

US Pat. No. 10,990,470

ENTITY RESOLUTION FRAMEWORK FOR DATA MATCHING

ROVI GUIDES, INC., San J...

1. A data storage and retrieval method for matching a corrupted database record with a record of a validated database using a denoising autoencoder, the method comprising:
receiving a first corrupted record from a first database, the first corrupted record comprising a first plurality of data fields;
generating a first input data vector based on the first corrupted record;
selecting a denoising autoencoder specific to the first database, wherein the denoising autoencoder was trained using a plurality of training example data pairs;
applying the selected denoising autoencoder to the first input data vector to generate a first denoised data vector;
comparing the first denoised data vector with each of a plurality of validated data vectors generated based on records of the validated database to determine whether the first denoised data vector matches a first matching vector of the plurality of validated data vectors; and
in response to determining that the first denoised data vector matches the first matching vector:
providing a data pair comprising the first input data vector and the first matching vector as an additional training example data pair to the denoising autoencoder to train the denoising autoencoder to denoise input from the first database;
retrieving, from the validated database, a first validated record that was used to generate the first matching vector; and
outputting the retrieved first validated record.
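The matching step alone (comparing a denoised vector against validated vectors) can be sketched as below. The autoencoder itself is out of scope here; using cosine similarity with a 0.95 threshold is an assumption for illustration, as the patent does not specify the comparison metric.

```python
# Hedged sketch of the vector-matching step: return the index of the
# best-matching validated vector above a similarity threshold, or None.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_match(denoised, validated, threshold=0.95):
    best_i, best_s = None, threshold
    for i, v in enumerate(validated):
        s = cosine(denoised, v)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i

validated = [[1.0, 0.0], [0.0, 1.0]]
print(find_match([0.1, 0.99], validated))  # 1
```

On a match, the claim then feeds the (input, matched) pair back as a training example and retrieves the validated record behind the matched vector.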

US Pat. No. 10,990,469

MAINTENANCE METHODS OF DIGITAL SIGNAGE AND TROUBLESHOOTING AND WARNING METHODS, DIGITAL SIGNAGE PLAYING SYSTEMS AND PLAYERS THEREOF

ACER INCORPORATED, New T...

1. A maintenance method of digital signage applied to a player of the digital signage, wherein the player executes playback software to play multimedia information through the digital signage, the maintenance method comprising:
detecting processor usage statuses of one or more processes corresponding to the playback software;
determining whether a predetermined condition is satisfied based on the processor usage statuses of the one or more processes;
activating a screen analysis module to detect whether a screen image is abnormal in response to determining that the predetermined condition is satisfied; and
activating a troubleshooting module to automatically perform a troubleshooting procedure in response to the screen analysis module detecting that the screen image is abnormal.

US Pat. No. 10,990,468

COMPUTING SYSTEM AND ERROR HANDLING METHOD FOR COMPUTING SYSTEM

HITACHI, LTD., Tokyo (JP...

1. A computing system comprising:
a processor that executes an operating system controlling a hardware device and a storage program operating on the operating system and using the hardware device via the operating system; and
a memory that records condition management information managing a predetermined condition on which the storage program determines error handling on the hardware device,
wherein upon the operating system receiving a notification of an error that occurred in the hardware device, the operating system determines to execute a recovery processing to the hardware device and executes the recovery processing to the hardware device,
wherein the storage program sets an interrupt handler in the operating system,
wherein the interrupt handler identifies an error status of the hardware device and upon determining the error status satisfies a predetermined condition, the operating system notifies the error status to the storage program running on the operating system without performing the recovery processing,
wherein the storage program that receives a notice of the error status requests the operating system to perform hardware device processing,
wherein the operating system performs shutdown processing of the hardware device,
wherein the memory has a queue that manages processes planned to be executed by the operating system,
wherein the operating system has, as interrupt handlers, a first interrupt handler and a second interrupt handler added in response to a request from the storage program,
wherein the processor executes the first interrupt handler when receiving an interrupt as the notification of the error,
wherein the first interrupt handler registers error handling by the operating system in the queue,
wherein the processor executes the second interrupt handler after executing the first interrupt handler, and
wherein the second interrupt handler determines whether the error status satisfies the predetermined condition while referring to the condition management information, and cancels the error handling by the operating system registered in the queue when the error status satisfies the predetermined condition.

US Pat. No. 10,990,467

ACCESSING COMPUTING RESOURCE ATTRIBUTES OF AN EXTERNAL SERVICE PROVIDER

Nutanix, Inc., San Jose,...

1. A method comprising:
identifying a first external service, the first external service providing a first function to a computing resource entity;
accessing a first set of resource entity attributes that perform the first function by:
registering the first external service to an access mechanism to authorize access to a first set of resource entity attributes;
receiving a first entity state query to access a portion of the first set of resource entity attributes;
performing a first lookup operation to determine a first access type corresponding to the first set of resource entity attributes; and
returning first query results corresponding to the first entity state query and the determined first access type, wherein the first access type provides access to external service parameters for a corresponding type of computing resource entity;
accessing a second set of resource entity attributes that perform a second function by:
registering a second external service to the access mechanism to authorize access to a second set of resource entity attributes;
receiving a second entity state query to access a portion of the second set of resource entity attributes;
performing a second lookup operation to determine a second access type corresponding to the second set of resource entity attributes; and
returning second query results corresponding to the second entity state query and the determined second access type,
wherein the first access type is different from the second access type, and wherein the second access type provides access to external service parameters,
updating resource entity attributes based on external service parameters in response to an entity operation resulting in changes to one or more resource entity attributes, wherein the one or more updated resource entity attributes include a firmware update for a firewall.

US Pat. No. 10,990,466

MEMORY SUB-SYSTEM WITH DYNAMIC CALIBRATION USING COMPONENT-BASED FUNCTION(S)

Micron Technology, Inc., ...

1. A system, comprising:
a memory circuitry configured to receive a data access command from a controller, and in response to the data access command:
generate a first read result based on reading a set of memory cells using a first read voltage; and
generate a second read result based on reading the set of memory cells using a second read voltage, wherein:
the first read voltage and the second read voltage are separately associated with a read level voltage initially assigned to read the set of memory cells, and
the data access command is (1) different from a read command issued by the controller and (2) configured to provide samples that are analyzed in calibrating the read level voltage.

US Pat. No. 10,990,465

MRAM NOISE MITIGATION FOR BACKGROUND OPERATIONS BY DELAYING VERIFY TIMING

Spin Memory, Inc., Fremo...

1. A method of writing data into a memory device, the method comprising:
utilizing a pipeline to process write operations of a first plurality of data words addressed to a memory bank;
writing a second plurality of data words and associated memory addresses into an error buffer, wherein the second plurality of data words are a subset of the first plurality of data words, wherein the error buffer is associated with the memory bank and wherein further the second plurality of data words comprises data words that are awaiting write verification associated with the memory bank;
searching for a data word that is awaiting write verification in the error buffer, wherein a verify operation associated with the data word is operable to be performed in a same cycle as a write operation, and wherein the verify operation associated with the data word occurs in a same row as the write operation;
determining if an address of the data word is proximal to an address for the write operation; and
responsive to a determination that the address of the data word is proximal to the address for the write operation, delaying a start of the verify operation, wherein a rising edge of the verify operation occurs a predetermined delay after a rising edge of the write operation.

US Pat. No. 10,990,464

BLOCK-STORAGE SERVICE SUPPORTING MULTI-ATTACH AND HEALTH CHECK FAILOVER MECHANISM

Amazon Technologies, Inc....

1. A block storage system configured to host a plurality of logical volumes and to allow multiple virtual machine clients to connect to a given logical volume, the block storage system comprising:
a first computing device storing a primary replica of a logical volume, wherein the first computing device is configured to accept connections from a plurality of virtual machine clients to enable the plurality of virtual machine clients to perform I/O operations with respect to data of the logical volume; and
a second computing device storing a secondary replica of the logical volume, wherein the secondary replica receives replicated writes to the data of the logical volume from the primary replica, and wherein the second computing device is configured to:
receive, from a first virtual machine client of the plurality of virtual machine clients, a request to connect to the second computing device storing the secondary replica, whereby the secondary replica is configured to assume a role of the primary replica for the logical volume in response to accepting the connection request;
send, prior to assuming the role of primary replica, a request to a health check application programmatic interface (API) of the first computing device storing the primary replica, wherein the health check API is configured to return health information for the first computing device;
in response to receiving the health information, determine based on the health information whether the first computing device remains attached to at least one of the plurality of virtual machine clients; and
in response to determining that the first computing device remains attached to at least one of the plurality of virtual machine clients, reject the request to connect from the first virtual machine client to refrain from assuming the role of the primary replica for the logical volume.
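The health-check gate in this claim can be sketched as follows. The `check_health` callable stands in for the primary's health-check API, and the `attached_clients` field of its return value is an assumed representation of the attachment information:

```python
class SecondaryReplica:
    """Minimal sketch of the claimed failover gate, under assumed names."""

    def __init__(self, check_health):
        self.check_health = check_health   # callable returning health info
        self.is_primary = False

    def handle_connect_request(self, client_id):
        health = self.check_health()
        # If the old primary is still attached to any client, refuse the
        # connection and do not assume the primary role (avoids a split
        # brain where two replicas both act as primary).
        if health.get("attached_clients"):
            return "rejected"
        self.is_primary = True
        return "accepted"
```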

US Pat. No. 10,990,462

APPLICATION AWARE INPUT/OUTPUT FENCING

Veritas Technologies LLC,...

1. A computer-implemented method comprising:in response to detection of a network partition event in a cluster comprising a plurality of nodes,
determining whether the network partition event resulted in partitioning of the plurality of nodes into a first sub-cluster and a second sub-cluster,
wherein
each node of the plurality of nodes executes at least one application instance of a first plurality of application instances of a first application,
in response to a determination that the network partition event partitioned the plurality of nodes into the first sub-cluster and the second sub-cluster,
determining an application weight of at least one application instance of the first plurality of application instances,
determining whether to include the first application in a first group of applications, wherein
the determining whether to include the first application in the first group of applications is based, at least in part, on the application weight, and
in response to a determination that the first application should be included in the first group of applications, including the first application in the first group of applications, and
performing a cumulative fencing race, wherein the cumulative fencing race includes the first group of applications.
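The weight-based group selection at the heart of this claim reduces to a filter over per-application weights. The threshold-style inclusion policy below is an illustrative assumption; the patent leaves the decision criterion open:

```python
WEIGHT_THRESHOLD = 10  # assumed cutoff for inclusion in the fencing group

def build_fencing_group(app_weights, threshold=WEIGHT_THRESHOLD):
    """Return the group of applications that join the cumulative fencing
    race after a network partition.

    app_weights maps application name -> summed weight of its instances
    across the sub-cluster; this data shape is a sketch assumption.
    """
    return [app for app, weight in app_weights.items() if weight >= threshold]
```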

US Pat. No. 10,990,461

APPLICATION INTERACTION METHOD, INTERACTION METHOD AND APPARATUS

Beijing Xiaomi Mobile Sof...

1. An application interaction method, comprising:acquiring a target message passing through an operating system of a user terminal, wherein the target message instructs a jump from a current application to a target application;
extracting a target application parameter from the target message, wherein the target application parameter comprises at least an identifier of the target application; and
launching the target application according to the target application parameter,
wherein acquiring the target message passing through the operating system comprises:
establishing a virtual network connection between the current application and a virtual server preset in the user terminal; and
acquiring the target message passing through the operating system via the virtual network connection; and
wherein acquiring the target message passing through the operating system further comprises:
receiving a triggering operation of a user on a preset interface element in an interface of the current application, wherein the triggering operation triggers the current application to send a message passing through the operating system;
acquiring the message passing through the operating system;
determining whether the message comprises preset feature information, wherein the preset feature information comprises a keyword or a preset encoding method; and
determining the message as the target message based on a determination that the message comprises the preset feature information,
wherein the message passing through the operating system comprises a network request; and the message passing through the operating system is acquired by at least one of:
monitoring a network request sent by a browser; or
monitoring a network request sent by a network proxy service.
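The message-filtering step of this claim (match preset feature information, then extract the target-application identifier) can be sketched as below. The keyword value and the `app=` parameter format are illustrative assumptions:

```python
PRESET_KEYWORDS = ("open_app",)  # assumed preset feature information

def is_target_message(message):
    """Decide whether an intercepted network request is the target
    message that should trigger a jump to another application."""
    return any(keyword in message for keyword in PRESET_KEYWORDS)

def extract_target_app(message):
    """Pull the target-application identifier out of a target message.
    The '&'-separated 'app=' parameter format is a sketch assumption."""
    if not is_target_message(message):
        return None
    for part in message.split("&"):
        if part.startswith("app="):
            return part[len("app="):]
    return None
```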

US Pat. No. 10,990,459

DISTRIBUTED THREADED STREAMING PLATFORM READER

Chicago Mercantile Exchan...

1. A streaming platform reader comprising:a memory;
a processor;
a reader thread executed on the processor and configured to retrieve messages from each partition of a plurality of partitions of a streaming platform, wherein each message in the plurality of partitions comprises message content and is associated with a unique identifier, and wherein the streaming platform transmits an end of partition signal to the reader thread signifying an empty partition;
a first plurality of queues stored in the memory and coupled with the reader thread, wherein each queue of the first plurality of queues is associated with one of the plurality of partitions and configured to store messages or an end of partition signal from the reader thread, wherein each queue of the first plurality of queues stores messages in a sequence in which messages are retrieved by the reader thread, and wherein each queue of the first plurality of queues includes a first position that stores the earliest message stored by a queue;
an extraction thread executed on the processor and controlled by gate control logic that:
compares the identifiers of all of the messages in the first positions of the queues of the first plurality of queues;
extracts the message content from the message associated with the earliest identifier from among the first positions of the queues of the first plurality of queues; and
forwards the extracted message content to an available queue of a second plurality of queues;
wherein the gate control logic blocks the extraction thread unless each of the queues of the first plurality of queues contains message content or an end of partition signal;
a plurality of processing threads executed on the processor, each configured to retrieve the message content from one of the queues of the second plurality of queues, process the retrieved message content and store the processed message content in a buffer, the buffer being operative to automatically arrange the stored processed message content in an ordering in accordance with the associated identifiers; and
a writer thread executed on the processor and configured to extract the processed message content associated with the earliest identifier from the buffer and forward the extracted processed message content associated with the earliest identifier to a consuming application.

US Pat. No. 10,990,458

EVENT COMMUNICATION BETWEEN APPLICATIONS

ADP, LLC, Roseland, NJ (...

1. A method of communicating events between applications, comprising:receiving, by a first application, event information for an event;
performing a first action, by the first application, in response to receiving the event information;
generating an event message, by the first application, wherein the event message comprises an event name and a message payload, wherein the message payload comprises fields mapped from event information fields in the event information; and
publishing the event message, by the first application, by sending the message payload to an event message pipeline connected to domains such that each domain comprises, respectively, an event listener receiving the fields and mapping them into inputs within an action definer commanding actions by a miniapp in the domain.

US Pat. No. 10,990,457

INTERPOSITION

Apple Inc., Cupertino, C...

21. A computer system comprising:one or more processors; and
a non-transitory computer accessible storage medium coupled to the one or more processors and storing a plurality of instructions that are executable by the one or more processors, the plurality of instructions comprising:
a channel actor configured to control channels between a plurality of actors forming a system, wherein the plurality of actors communicate using the channels; and
an interposer actor configured to interpose between a first actor of the plurality of actors and at least one other actor of the plurality of actors, wherein the interposer actor is configured to communicate to the channel actor to reroute one or more channels between the at least one other actor and the first actor to the interposer actor.

US Pat. No. 10,990,456

METHODS AND SYSTEMS FOR FACILITATING APPLICATION PROGRAMMING INTERFACE COMMUNICATIONS

ROVI GUIDE, INC., San Jo...

1. A method for facilitating communications using application programming interfaces (“APIs”), the method comprising:receiving, from a device, via a network, by control circuitry of a server, an application programming interface (“API”) request for interpreting a command, wherein the API request includes an image of a user interface as displayed on a display screen of the device when the command was received, wherein the server is remote from the device;
caching, by the control circuitry, at the server, the image in the API request;
determining, by the control circuitry of the server, a command response based on the command and the image;
generating, by the server, an API response based on the command response; and
transmitting, from the server, via the network, the API response to the device.

US Pat. No. 10,990,455

MANAGEMENT OF APPLICATION PROGRAMMING INTERFACE (API) RETENTION

Moesif, Inc., San Franci...

1. A method comprising:obtaining application programming interface (API) request information associated with API requests from an API user to an API provider;
determining time stamps associated with the API requests based on the API request information;
comparing the time stamps of the API requests to at least one retention criterion to determine a retention of the API user to the API provider as a function of time, wherein the retention indicates whether the API user uses the API provider for consecutive time periods; and
generating a retention summary based on the retention of the API user.
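The retention computation in this claim (bucket request timestamps into time periods, then ask whether the user was active in consecutive periods) can be sketched as follows. The day-long period is an illustrative retention criterion:

```python
def retention_by_period(timestamps, period=86_400):
    """Bucket API-request timestamps (in seconds) into consecutive time
    periods and report, per period, whether the user was active."""
    if not timestamps:
        return []
    start, last = min(timestamps), max(timestamps)
    active = [False] * (int((last - start) // period) + 1)
    for t in timestamps:
        active[int((t - start) // period)] = True
    return active

def is_retained(timestamps, period=86_400):
    """A user is retained when every consecutive period saw activity."""
    activity = retention_by_period(timestamps, period)
    return bool(activity) and all(activity)
```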

US Pat. No. 10,990,454

MULTI-PROCESS INTERACTIVE SYSTEMS AND METHODS

Oblong Industries, Inc., ...

1. A method comprising:displaying a multi-user virtual space that includes frame data of a plurality of processes across a plurality of display devices by using a plurality of processing devices;
generating a spatial user input data capsule for each user of a process included in the virtual space, and transferring each spatial user input data capsule to a user input repository that is shared across the plurality of processes, each spatial user input data capsule including spatial user input data of a three-space coordinate system; and
at least one process of the plurality of processes accessing the user input repository and updating frame data of the virtual space based on at least one spatial user input data capsule included in the user input repository.

US Pat. No. 10,990,453

IMPROVING LATENCY BY PERFORMING EARLY SYNCHRONIZATION OPERATIONS IN BETWEEN SETS OF PROGRAM OPERATIONS OF A THREAD

Advanced Micro Devices, I...

1. A method for executing a thread synchronization operation, the method comprising:detecting an early synchronization operation between multiple sets of program operations of a first thread;
initiating the early synchronization operation, causing early synchronization sub-operations for the early synchronization operation to be performed, wherein the early synchronization sub-operations comprise operations to make data available to threads other than the first thread, wherein the data is written by program operations of the first thread;
performing inter-synchronization operations, the inter-synchronization operations comprising a set of the multiple sets of the program operations between the early synchronization operation and a resolving synchronization operation in program order, at least one of the inter-synchronization operations being performed in an overlapping time period with the early synchronization sub-operations;
initiating a resolving synchronization operation, causing resolving synchronization sub-operations for the resolving synchronization operation to be performed; and
notifying a second thread that the resolving synchronization operation has been performed.

US Pat. No. 10,990,452

RELIABILITY DETERMINATION OF WORKLOAD MIGRATION ACTIVITIES

VMWARE, INC., Palo Alto,...

1. A method comprising:determining a plurality of sub-tasks associated with a workload migration activity;
retrieving statistical data associated with an execution of the plurality of the sub-tasks corresponding to different instances of the workload migration activity within a period;
training a reliability model through machine learning using the statistical data corresponding to a portion of the period to determine reliability of the workload migration activity;
predicting the reliability of an instance of the workload migration activity corresponding to a remaining portion of the period using the trained reliability model;
determining accuracy of the trained reliability model by comparing the predicted reliability with the retrieved statistical data for the remaining portion of the period; and
determining reliability of a new workload migration activity using the trained reliability model, wherein the trained reliability model is selected to determine the reliability of the new workload migration activity when the determined accuracy of the trained reliability model is greater than or equal to a predefined threshold.
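The train/predict/validate loop in this claim can be sketched with a toy model. The mean-success-rate "model" and the accuracy definition (one minus absolute error) are stand-ins; the patent does not prescribe the learning method or accuracy metric:

```python
ACCURACY_THRESHOLD = 0.9  # assumed predefined threshold

def mean(values):
    return sum(values) / len(values)

class ReliabilityModel:
    """Toy stand-in: learns the mean sub-task success rate over the
    training portion of the period."""

    def fit(self, success_rates):
        self.expected = mean(success_rates)
        return self

    def predict(self):
        return self.expected

def model_is_usable(model, holdout_rates, threshold=ACCURACY_THRESHOLD):
    """Compare the prediction with the remaining portion of the period;
    the model is selected for new migrations only when its accuracy
    meets the threshold."""
    accuracy = 1.0 - abs(model.predict() - mean(holdout_rates))
    return accuracy >= threshold
```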

US Pat. No. 10,990,451

EFFICIENT HANDLING OF TRIGGER TRANSACTIONS IN A WORKLOAD MANAGEMENT ENVIRONMENT

International Business Ma...

1. A method comprising:receiving a first transaction at a first application system of a plurality of application systems of a distributed system;
completing the first transaction, wherein completing the first transaction comprises writing a first record to a first queue;
generating, by a first application resource monitor (ARM) of the first application system, a first response, wherein the first response includes (i) an identifier of the first record and (ii) an identifier of the first application system;
transmitting, by the first ARM, the first response to a transaction distribution system, wherein the transaction distribution system distributes transactions among the plurality of application systems in the distributed system;
receiving a second transaction at the first application system;
upon determining that the second transaction is a trigger transaction:
determining, by the first ARM, a first plurality of records that are associated with the second transaction, wherein the first plurality of records includes the first record;
retrieving, by the first ARM, the first record from the first queue; and
completing the second transaction based at least in part on the first record;
receiving a third transaction at the first application system; and
upon determining that the third transaction is not a trigger transaction and will not access the first queue:
completing the third transaction; and
refraining from transmitting a response to the transaction distribution system.
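The three transaction paths of this claim (queue-writing, trigger, and queue-independent) can be sketched as one dispatcher. The transaction dictionary shape and the list standing in for the transaction distribution system are sketch assumptions:

```python
class ApplicationSystem:
    """Minimal sketch of the claimed flow for one application system."""

    def __init__(self, system_id, distributor):
        self.system_id = system_id
        self.distributor = distributor   # collects ARM responses
        self.queue = {}                  # record id -> record payload

    def handle(self, txn):
        if txn["type"] == "queue_write":
            # Completing the transaction writes a record and tells the
            # transaction distribution system which system holds it.
            self.queue[txn["record_id"]] = txn["payload"]
            self.distributor.append((txn["record_id"], self.system_id))
            return "written"
        if txn["type"] == "trigger":
            # A trigger transaction is routed here because the record it
            # depends on sits in this system's queue.
            return self.queue[txn["record_id"]]
        # Not a trigger and will not access the queue: complete it and
        # refrain from responding to the distributor.
        return "done"
```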

US Pat. No. 10,990,450

AUTOMATIC CLUSTER CONSOLIDATION FOR EFFICIENT RESOURCE MANAGEMENT

VMware, Inc., Palo Alto,...

1. A method for automatically consolidating clusters of host computers in a distributed computer system, the method comprising:receiving digital representations of first and second clusters of host computers in the distributed computer system;
generating a digital representation of a simulated merged cluster of host computers using the digital representations of the first and second clusters of host computers, the simulated merged cluster of host computers being a simulation of a consolidation of the first and second clusters of host computers;
applying a resource management operation on the digital representation of the simulated merged cluster of host computers to produce resource management analysis results on the simulated merged cluster of host computers; and
executing an automatic consolidation operation on the first and second clusters of host computers to generate a merged cluster of host computers that includes the host computers from both the first and second clusters.

US Pat. No. 10,990,449

MANAGING APPLICATION RELATIONSHIPS IN MACHINE-TO-MACHINE SYSTEMS

Convida Wireless, LLC, W...

1. An apparatus for managing an application relationship, the apparatus comprising:a processor; and
a memory coupled with the processor, the memory storing a common services entity, wherein the common services entity comprises a service layer and an application relationship management service, and wherein the memory has stored thereon executable instructions that when executed by the processor cause the processor to effectuate operations comprising:
the service layer performing the steps of:
providing instructions to create a first resource for a first application based on a first registration request received from the first application;
providing instructions to create a second resource for a second application based on a second registration request received from the second application;
providing instructions to create a third resource for a third application having no subscription to the first application based on a third registration request received from the third application;
the application relationship management service performing the steps of:
receiving, from a requestor, a first request, the first request comprising a request to create a relationship between the first application, the second application, and the third application, wherein the first request includes:
relationship information, link to resources, names, labels, or identifiers for the first application, the second application and the third application respectively;
providing instructions based on the first request to the service layer, to create a relationship between the first application as child, the second application, and the third application as parents, wherein the first application is a composite application of the second and third applications, the creation of the relationship comprises updating the first resource, the second resource and the third resource, wherein the first resource is added with a child pointer/input pointer or link that connects to the second resource and the third resource, the second resource and the third resource are each added with a parent pointer/composite pointer or link that connects to the first resource, and the second resource and the third resource are each added with a sibling pointer or link that connects to each other;
updating, by the service layer, an application relationship record, wherein the application relationship record indicates the created relationship between the first application, the second application, and the third application;
sending, from the service layer, a response back to the requestor to notify successful creation of parent/child relationship among the first application, the second application, and the third application; and
sending, from the service layer, an application relationship notification to the first application, the second application, and the third application respectively, wherein the respective application relationship notification includes relationship information between the first application, the second application, and the third application.

US Pat. No. 10,990,448

ALLOCATION OF MEMORY RESOURCES TO SIMD WORKGROUPS

Imagination Technologies ...

1. A memory subsystem for use with a single-instruction multiple-data (SIMD) processor comprising a plurality of processing units configured for processing one or more workgroups each comprising a plurality of SIMD tasks, the memory subsystem comprising:a shared memory partitioned into a plurality of memory portions for allocation to tasks that are to be processed by the processor; and
a resource allocator configured to, in response to receiving a memory resource request for first memory resources in respect of a first-received task of a workgroup, allocate to the entire workgroup a block of memory portions sufficient in size for each task of the workgroup to receive memory resources in the block equivalent to the first memory resources.
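The sizing rule in this claim (on the first task's request, reserve a block big enough for every task in the workgroup) reduces to a ceiling-division computation. The portion-granularity parameter is an assumption drawn from the claim's "partitioned into a plurality of memory portions" language:

```python
def workgroup_block_size(per_task_request, tasks_in_workgroup, portion_size):
    """Size, in bytes, of the block reserved for the entire workgroup so
    that each task can receive resources equivalent to the first task's
    request, rounded up to whole memory portions."""
    total = per_task_request * tasks_in_workgroup
    portions = -(-total // portion_size)   # ceiling division
    return portions * portion_size
```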

US Pat. No. 10,990,447

SYSTEM AND METHOD FOR CONTROLLING A FLOW OF STORAGE ACCESS REQUESTS

Lightbits Labs Ltd., Kfa...

1. A system for controlling a flow of storage access requests from a plurality of client computers to storage media, the system comprising:a Network Interface Controller (NIC) configured to establish a plurality of connections with the plurality of client computers, and receive at least one storage access request from at least one client computer connection; and
a processor, configured to:
dynamically allocate, on a Random Access Memory (RAM) device, a buffer memory space, dedicated to each, specific client computer connection;
convey a size of an allocated buffer, on the RAM device, to at least one respective, connected client computer, to prevent said allocated buffer from overflowing, and accumulate, in said buffer, data of at least one storage access request received via the respective client computer connection; and
upon completion of the accumulation of data, propagate the buffered data to at least one storage device of the storage media, so as to control flow of storage access requests of the plurality of client computers to the storage media.

US Pat. No. 10,990,446

FAULT-TOLERANT AND HIGHLY AVAILABLE CONFIGURATION OF DISTRIBUTED SERVICES

PALANTIR TECHNOLOGIES INC...

1. A method for log forwarding, the method comprising:obtaining, at a host, network endpoint information from a replica of a distributed configuration store that is stored locally at the host, the replica of the distributed configuration store updated using a consensus protocol that is configured to keep network endpoint information of multiple replicas of the distributed configuration store consistent;
wherein the network endpoint information identifies a location on a network of a first service, the first service comprising a centralized log collection and aggregation network service;
obtaining, at the host, an identifier of a second service installed on the host from the replica of the distributed configuration store, the second service being distinct from the first service;
based, at least in part, on the obtaining the identifier of the second service, obtaining, at the host, from the replica of the distributed configuration store, information indicating where, in a file system, one or more logs generated by the second service are stored;
using the information indicating where the one or more logs generated by the second service are stored, collecting, at the host, the one or more logs generated by the second service;
using the network endpoint information, sending, by the host, the one or more logs generated by the second service to the first service.

US Pat. No. 10,990,445

HARDWARE RESOURCE ALLOCATION SYSTEM FOR ALLOCATING RESOURCES TO THREADS

Apple Inc., Cupertino, C...

1. An apparatus, comprising:first and second hardware execution pipelines that correspond to respective first and second data masters that manage execution of corresponding threads;
a plurality of hardware resource allocation circuits, at least two of which are configured to allocate a respective different type of hardware resource to the threads; and
a resource allocation management circuit coupled to a memory device configured to store a thread allocation map that identifies, for each thread, a respective state identification value, a respective execution state, and an indication of numbers of each type of hardware resource allocated to each thread, and wherein the resource allocation management circuit is configured to:
receive a resource allocation request to allocate a requested number of different types of hardware resources respectively for a first thread of the first data master that corresponds to the first hardware execution pipeline;
determine that fewer than the requested number of the different types of hardware resources is available;
select a second thread of the second data master that corresponds to the second hardware execution pipeline for deallocation, wherein the second thread is selected based on the second thread having an inactive execution state and at least one of the different types of hardware resources being allocated to the second thread as indicated by the thread allocation map;
send deallocation requests to one or more of the plurality of hardware resource allocation circuits to deallocate one or more portions of the different types of hardware resources identified by the resource allocation request, wherein the deallocation requests include a state identification value of the second thread;
send allocation requests to one or more of the plurality of hardware resource allocation circuits to allocate the one or more portions of the requested number of the different types of hardware resources to the first thread, wherein the allocation requests include a state identification value of the first thread;
update the thread allocation map after the requested number of the different types of hardware resources has been allocated to the first thread; and
indicate that execution of the first thread has been approved.

US Pat. No. 10,990,443

UTILIZATION PROFILING AND SCHEDULING OPERATIONS USING THREAD SPECIFIC EXECUTION UNITS USAGE OF A MULTI-CORE MULTI-THREADED PROCESSOR

INTERNATIONAL BUSINESS MA...

1. A method for utilization profiling of thread specific execution units and scheduling software on a multi-core processor, the method comprising:profiling, by the multi-core processor, a workload received for execution by a core of the multi-core processor;
logging, by the multi-core processor, an execution unit sensitivity to an operating system with respect to the workload; and
utilizing, by the multi-core processor, the execution unit sensitivity for subsequent workload scheduling to minimize core-sharing between workloads with same execution unit sensitivities, wherein the subsequent workload scheduling includes finding an active core with maximum capacity per execution unit of interest to a subsequent workload and wherein finding the active core includes removing an ineligible core from a core candidate list.
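The core-selection step of this claim (drop ineligible cores, then find the active core with maximum spare capacity for the execution unit the workload is sensitive to) can be sketched as below. The per-core data shape is a sketch assumption:

```python
def pick_core(cores, unit, candidate_list):
    """Return the active candidate core with maximum spare capacity for
    the given execution unit, or None if no candidate is eligible.

    cores maps core id -> {"active": bool, "capacity": {unit: float}};
    this structure is illustrative, not from the patent.
    """
    eligible = [c for c in candidate_list if cores[c]["active"]]
    if not eligible:
        return None
    return max(eligible, key=lambda c: cores[c]["capacity"].get(unit, 0.0))
```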

US Pat. No. 10,990,441

MULTI-LEVEL JOB PROCESSING QUEUES

Nutanix, Inc., San Jose,...

1. A method for scheduling job requests in multi-level job processing queues, the method performed by at least one computer and comprising:initializing a multi-level queue, the multi-level queue comprising a high priority job queue and a low priority job queue;
receiving a job request from an individual virtual machine;
locating or creating a job request group for the individual virtual machine, wherein the job request group corresponds to the individual virtual machine and different job request groups are created for different respective virtual machines;
positioning the job request into the job request group for the individual virtual machine;
positioning the job request group for the individual virtual machine into at least one of, the high priority job queue, or the low priority job queue;
scheduling execution of a job by locating a next job in a next job request group for the individual virtual machine that is at either a high priority job queue location or at a low priority job queue location; and
reorganizing the multi-level queue.
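The two-level structure this claim describes (one job request group per virtual machine, groups placed into a high- or low-priority queue) can be sketched as follows. The FIFO service order within and across groups is a sketch assumption:

```python
from collections import deque

class MultiLevelQueue:
    """Minimal sketch of the claimed multi-level job processing queue."""

    def __init__(self):
        self.high = deque()    # vm ids with a high-priority group
        self.low = deque()     # vm ids with a low-priority group
        self.groups = {}       # vm id -> deque of pending jobs

    def submit(self, vm_id, job, high_priority=False):
        group = self.groups.get(vm_id)
        if group is None:
            # Create one job request group per virtual machine and
            # position it into exactly one of the two priority queues.
            group = self.groups[vm_id] = deque()
            (self.high if high_priority else self.low).append(vm_id)
        group.append(job)

    def next_job(self):
        """Locate the next job, draining high-priority groups first."""
        for level in (self.high, self.low):
            while level:
                group = self.groups[level[0]]
                if group:
                    return group.popleft()
                level.popleft()   # drop an emptied group and move on
        return None
```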

US Pat. No. 10,990,440

REAL-TIME DISTRIBUTED JOB SCHEDULER WITH JOB SELF-SCHEDULING

Rubrik, Inc., Palo Alto,...

1. A method for operating a data management system, comprising:identifying a set of candidate jobs assigned to a first job window associated with a first node of a cluster of data storage nodes;
identifying a first subset of the set of candidate jobs;
adding the first subset of the set of candidate jobs to a job queue for the first node;
executing a first job of the first subset using the first node;
detecting that the first job comprises one of a sequence of reoccurring jobs;
detecting that a second job corresponding with a subsequent job of the reoccurring jobs should be executed within a threshold period of time prior to a next polling of candidate jobs within the first job window by the first node, the threshold period of time corresponding with a fraction of a polling frequency;
adding the second job to the job queue for the first node prior to detecting a completion of the first job;
executing the second job using the first node, wherein the executing of the second job using the first node is performed based on a polling of candidate jobs at the first node, the second job using the first node causing a snapshot of a virtual machine to be captured and stored using the first node and the data generated by the second job comprising the snapshot of the virtual machine; and
storing data generated by the second job using the first node.

US Pat. No. 10,990,439

TRACING TASK EXECUTION ACROSS SERVICES IN MICROKERNEL-BASED OPERATING SYSTEMS

Facebook Technologies, LL...

1. A method comprising, by a computing device:allocating a shared memory region accessible by a tracing service and one or more services of an operating system, wherein the shared memory region is configured to be used by each service to store entries of execution data associated with operations executed by the service;
executing one or more tasks by one or more of the services to generate a plurality of entries of execution data in the shared memory region,
wherein each of the plurality of entries of execution data (1) is associated with a task identifier corresponding to one of the one or more tasks and (2) comprises one or more of a timestamp or information associated with an executed operation associated with that task;
receiving, by the tracing service, a query for execution data associated with a desired task identifier;
retrieving, by the tracing service, a set of one or more entries of execution data from the shared memory region based on the desired task identifier,
wherein the task identifier of each entry of execution data in the set matches the desired task identifier; and
returning, by the tracing service, the set of one or more entries of execution data;
wherein the tracing service and the one or more services are running in user mode outside of a microkernel of the operating system.
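The record-and-query flow of this claim can be sketched with a plain list standing in for the shared memory region. The entry fields mirror the claim's task identifier, timestamp, and operation information:

```python
shared_log = []  # stand-in for the shared memory region

def record(task_id, op, timestamp):
    """Services append execution-data entries tagged with a task id."""
    shared_log.append({"task": task_id, "op": op, "ts": timestamp})

def query(task_id):
    """The tracing service returns every entry whose task identifier
    matches the desired one, in insertion order."""
    return [entry for entry in shared_log if entry["task"] == task_id]
```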

US Pat. No. 10,990,438

METHOD AND APPARATUS FOR MANAGING EFFECTIVENESS OF INFORMATION PROCESSING TASK

FUJITSU LIMITED, Kawasak...

1. A method of managing effectiveness of an information processing task in a decentralized data management system that includes a client and multiple execution subjects, the method comprising:sending requests for multiple information processing tasks initiated by the client to the multiple execution subjects, the client and the multiple execution subjects holding same database copies respectively and the respective database copies being updated based on results approved by both the client and the multiple execution subjects, wherein among the multiple information processing tasks, information processing tasks in a sequential information processing task list including at least two information processing tasks in an order are sent to the multiple execution subjects in the order, and the sequential information processing task list is generated by performing a concurrency risk detection on the requested multiple information processing tasks;
caching the requested multiple information processing tasks to a task cache queue, wherein the sequential information processing task list is cached as a whole to the task cache queue;
judging whether a respective information processing task in the task cache queue satisfies a predetermined conflict condition; and
with respect to the respective information processing task in the task cache queue,
moving the respective information processing task to a conflict task queue upon determining that the respective information processing task satisfies the predetermined conflict condition,
deleting the respective information processing task from the conflict task queue, and
caching the respective information processing task to the task cache queue for continuing with subsequent processing upon determining the predetermined conflict condition is unsatisfied.

US Pat. No. 10,990,437

INTERFACE DATA DISPLAY OPTIMIZATION DURING DEVICE OPERATION

PAYPAL, INC., San Jose, ...

1. A mobile device system comprising:a non-transitory memory storing instructions; and
one or more hardware processors coupled to the non-transitory memory and configured to read the instructions from the non-transitory memory to cause the mobile device system to perform operations comprising:
displaying an application interface of a first application through a graphical user interface (GUI) of the mobile device system, wherein the application interface includes an interface element displaying first data, wherein the first application comprises a mapping application that displays a route with an upcoming transition on the route, and wherein the first data includes information about the upcoming transition on the route;
receiving an indication of second data associated with a second application for display through the GUI;
determining that a display of the second data overlaps at least a portion of the first data on the GUI;
determining that a display of the first data has a higher priority than the display of the second data based on the upcoming transition;
preventing the second data from being displayed on the GUI while the first data is being displayed until a completion of the upcoming transition on the route;
determining that the upcoming transition is completed;
determining that the route does not have an additional transition in the first application for an amount of travel; and
displaying, upon determining that the first data is no longer being displayed, the second data on the GUI.
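The display-priority logic of this claim reduces to a simple gate: second-application data may appear only after the upcoming transition completes and the first data leaves the screen. A minimal sketch (the predicate name is an assumption):

```python
def may_display_second(first_displayed, transition_complete):
    """Second-application data is held back until the upcoming
    transition is completed and the first data is no longer shown."""
    return transition_complete and not first_displayed
```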

US Pat. No. 10,990,436

SYSTEM AND METHOD TO HANDLE I/O PAGE FAULTS IN AN I/O MEMORY MANAGEMENT UNIT

Dell Products L.P., Roun...

1. An information handling system, comprising:a processor;
an input/output (I/O) device that is controlled by the processor via an I/O device driver;
a memory device, wherein a portion of a system physical address (SPA) space of the memory device is allocated to the I/O device driver; and
an input/output memory management unit (I/OMMU) configured to translate virtual addresses from the I/O device to physical addresses in the SPA space;
wherein the I/O device is configured to send a first virtual address to the I/OMMU, to receive a page fault indication from the I/OMMU, and to send an error indication to the I/O device driver in response to the page fault indication, wherein the error indication indicates that the I/OMMU failed to translate the first virtual address into a first physical address; and
wherein, in response to receiving the error indication, the I/O device driver is configured to allocate a sub-portion of the portion of the SPA space to the I/O device, and to send a page table translation to the I/OMMU, the page table translation mapping the first virtual address into a second physical address within the sub-portion.

US Pat. No. 10,990,435

VIRTUAL REDUNDANCY FOR ACTIVE-STANDBY CLOUD APPLICATIONS

The Regents of the Univer...

1. A virtual machine placement scheduling system, for managing virtual machine placement scheduling within a cloud computing platform, the virtual machine placement scheduling system comprising:a processor; and
a memory having instructions stored thereon that, when executed by the processor, cause the processor to perform operations comprising
computing, for each standby virtual machine of a plurality of standby virtual machines available in the cloud computing platform, a minimum required placement overlap delta to meet an entitlement assurance rate threshold,
computing a minimum number of available virtual machine slots in the cloud computing platform for activating each standby virtual machine to meet the entitlement assurance rate threshold,
sorting, in descending order, a candidate list of servers by the minimum required placement overlap delta and a number of available virtual machine slots, and
selecting, from the candidate list of servers, a candidate server.
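The sort-and-select step of this claim can be sketched in a few lines, assuming the overlap deltas and slot counts have already been computed by the earlier steps; the tuple layout and names are illustrative:

```python
def select_candidate(servers):
    """Rank candidate servers in descending order by the minimum required
    placement overlap delta, then by available VM slots, and pick the head.

    Each entry is (name, overlap_delta, free_slots).
    """
    ranked = sorted(servers, key=lambda s: (s[1], s[2]), reverse=True)
    return ranked[0][0]
```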

US Pat. No. 10,990,433

EFFICIENT DISTRIBUTED ARRANGEMENT OF VIRTUAL MACHINES ON PLURAL HOST MACHINES

FUJITSU LIMITED, Kawasak...

1. An information processing apparatus that arranges virtual machines, the information processing apparatus comprising:a memory; and
a processor coupled to the memory and configured to:
determine a similarity of names of a plurality of virtual machines,
divide the plurality of virtual machines into clusters, based on a result of the determination, such that virtual machines having a value that represents the similarity of the names that is equal to or less than a given threshold are included in a first cluster and virtual machines having a value that represents the similarity of the names that is greater than the given threshold are included in a second cluster,
place virtual machines included in the first cluster on different host machines,
determine, between the first cluster and the second cluster, a target cluster comprising virtual machines with a largest total scheduled use amount of a computational resource whose ratio of a surplus resource that is unused is lowest in computational resources with which the plurality of host machines are equipped, and
place virtual machines included in the target cluster on host machines prior to placing virtual machines included in the remaining cluster.
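A rough illustration of the name-similarity split: here similarity is measured with Python's difflib against a single reference name, whereas the claim compares names across the plurality of VMs — a deliberate simplification, with the threshold value invented for the example:

```python
from difflib import SequenceMatcher

def cluster_by_name(vms, reference, threshold=0.5):
    """Split VM names into two clusters: similarity at or below the
    threshold goes to the first cluster (spread across hosts), above it
    to the second cluster."""
    first, second = [], []
    for vm in vms:
        sim = SequenceMatcher(None, vm, reference).ratio()
        (first if sim <= threshold else second).append(vm)
    return first, second
```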

US Pat. No. 10,990,432

METHOD AND SYSTEM FOR INTERACTIVE CYBER SIMULATION EXERCISES

1. A system for cyber exercises which will comprise replicas of real life computing environments, where these replicas are adapted for participant interaction, and where these replicas comprise logical elements such as startup sequences of individual components,
wherein said replicas comprise one or more virtual machines, or VMs, and wherein said replicas can be made from systems in use, or from systems at rest,
and also wherein a system comprises an authoring workspace, which comprises a computing environment which is adapted to execute creation tasks associated with the creation of the exercises in a simulation,
and also wherein the authoring workspace is adapted to enable publication of an exercise stage, wherein publication of an exercise stage comprises adding said exercise stage to an exercise catalog,
and wherein stages on the exercise catalog are adapted to be used as baselines that can be used to create other canvas(es) in any simulation,
and also wherein the exercise catalog is adapted to allow participants to access exercises,
and wherein an execution workspace is adapted to allow an operator to make available modifiable copies of stages involved in an exercise.

US Pat. No. 10,990,431

VIRTUAL MACHINE HOT MIGRATION METHOD AND APPARATUS, AND SYSTEM

TENCENT TECHNOLOGY (SHENZ...

1. A virtual machine hot migration method, the method being applied to a cloud computing system, the cloud computing system comprising a plurality of hosts, each host comprising a plurality of virtual machines, and the method comprising:obtaining a load of each of the plurality of hosts;
in accordance with the load of each of the plurality of hosts,
determining, among the plurality of hosts, a host whose load exceeds a preset threshold as a source host, wherein the source host is configured with more virtual machines than a number of cores of a processor on the source host;
determining, among the plurality of hosts, a target host whose load does not exceed the preset threshold after the target host bears a to-be-hot-migrated target virtual machine;
determining a to-be-hot-migrated target virtual machine in the source host, wherein the load of the source host is reduced to be below the preset threshold after hot migration of the to-be-hot-migrated target virtual machine to the target host; and
controlling memory data and disk data corresponding to the target virtual machine to be hot-migrated from the source host to the target host, the target host providing core resources and memory resources of a processor and a cloud disk in the cloud computing system providing disk resources, wherein:
hot-migrating of the target virtual machine comprises hot migrating data related to the target virtual machine; and
hot-migrating comprises a process for migrating a virtual machine from the source host to the target host without affecting normal services of the virtual machine to a remote user of the virtual machine.

US Pat. No. 10,990,430

EFFICIENT DATA MANAGEMENT IMPROVEMENTS, SUCH AS DOCKING LIMITED-FEATURE DATA MANAGEMENT MODULES TO A FULL-FEATURED DATA MANAGEMENT SYSTEM

Commvault Systems, Inc., ...

1. A method for docking at least one limited-feature module, configured to be connected to a limited-feature device, with a full-featured data management system, wherein the limited-feature device is configured to be connectable to at least one computer via a network, the method comprising:executing, at the limited-feature device, a limited-feature module, wherein the limited-feature module has fewer features than the full-featured data management system;
generating data based at least in part on execution of the limited-feature module,
wherein the data is generated without interaction with the full-featured data management system;
storing the generated data at a storage device;
docking the limited-feature device to the full-featured data management system;
utilizing at least one of metadata, profiles, configurations, and data from the limited-feature module; and
performing, by the full-featured data management system, at least one storage operation,
wherein the at least one storage operation includes at least one of:
analyzing the generated data,
integrating the generated data, and
creating copies of the generated data.

US Pat. No. 10,990,428

VIRTUAL MACHINE INTEGRITY

TELEFONAKTIEBOLAGET LM ER...

1. A method of verifying the integrity of a virtual machine in a cloud computing deployment, the method comprising:creating a virtual machine image derived from a trusted virtual machine, wherein the trusted virtual machine has a first Keyless Signature Infrastructure (KSI) signature stored in a signature store;
determining whether a computation resource can be trusted, wherein determining whether the computation resource can be trusted comprises:
receiving attested information regarding software that has been booted on the computation resource, wherein the attested information comprises a Quantum-Immune Keyless Signatures with Identity (QSI) signed value of a boot log; and
confirming whether the computation resource can be trusted based on comparing the attested information with an attested list of trusted computing pool software configurations;
as a result of determining that the computation resource can be trusted:
submitting the virtual machine image to a computation resource pool of the trusted computation resource;
verifying the virtual machine image, wherein the verifying comprises checking a signature of the virtual machine image against the first KSI signature of the trusted virtual machine;
as a result of verifying the virtual machine image, launching the virtual machine image on the trusted computation resource;
creating a second KSI signature of the virtual machine image, wherein the second KSI signature is based on the launch of the virtual machine image; and
storing the second KSI signature of the virtual machine image in a signature store.

US Pat. No. 10,990,425

SIMULATED CHANGE OF IMMUTABLE OBJECTS DURING EXECUTION RUNTIME

Morgan Stanley Services G...

1. A system for executing software, comprising:a computing device comprising at least one processor and non-transitory memory storing first software instructions for a code execution module such that, when the first software instructions are executed by the at least one processor, the at least one processor will:
receive, for execution by the code execution module, second software instructions;
create one or more immutable software nodes denoted as immutable by the second software instructions;
determine that the second software instructions comprise an instruction to begin a first simulated change of the one or more immutable software nodes during runtime, an instruction to begin a second simulated change to one or more different immutable software nodes while the first simulated change is in effect, and an instruction to end the second simulated change without ending effect of the first simulated change;
store the first simulated change to the one or more immutable software nodes and the second simulated change to one or more different immutable software nodes in a simulated change apparatus;
using the simulated change apparatus, perform one or more operations of the second software instructions as if both the one or more immutable software nodes and the one or more different immutable software nodes had been changed in the non-transitory memory, during a period of time where each of the one or more immutable software nodes is guaranteed to retain logical immutability; and
output results of the one or more operations.

US Pat. No. 10,990,423

PERFORMANCE OPTIMIZATIONS FOR EMULATORS

MICROSOFT TECHNOLOGY LICE...

1. A method, implemented at a computer system that includes at least one processor, for handling native and guest function calls within an environment with a guest architecture running within a native architecture system, the method comprising:receiving a call to a hybrid binary, wherein the call uses a function calling format having an incompatibility with the native architecture system, and wherein the hybrid binary comprises:
a native function compiled into native binary code that is natively executable on the native architecture, the native binary code being generated from source code based on using one or more preprocessor defines associated with targeting the guest architecture;
an interoperability thunk comprising code that is executable by the native architecture and that is configured to handle at least one incompatibility between how the guest architecture and the native architecture pass data between functions, the interoperability thunk comprising at least one of (i) an interoperability pop thunk comprising code that is executable by the native architecture and that is configured to pop data off a stack and into a register, or (ii) an interoperability push thunk comprising code that is executable by the native architecture and that is configured to push data onto the stack; and
native host remapping metadata that is usable by an emulator to redirect native host callable targets to the interoperability thunk; and
as a result of receiving the call, invoking the interoperability thunk to handle the incompatibility with the native architecture system, enabling the call to invoke the native function within the hybrid binary in order to natively execute the native binary code on the native architecture system.

US Pat. No. 10,990,421

AI-DRIVEN HUMAN-COMPUTER INTERFACE FOR ASSOCIATING LOW-LEVEL CONTENT WITH HIGH-LEVEL ACTIVITIES USING TOPICS AS AN ABSTRACTION

Microsoft Technology Lice...

1. A computer-implemented method, comprising:receiving, from one or more data sources, content associated with a user;
receiving, via a configuration user interface (UI), first user input that defines a first activity identifier that indicates a first activity that is engaged in by the user;
receiving, via the configuration UI, second user input that defines a second activity identifier that indicates a second activity that is engaged in by the user;
based on the first user input and the second user input, causing an artificial intelligence (AI) engine to analyze the content based on an AI model to determine:
a first topic, from the content associated with the user, that is associated with the first activity that is engaged in by the user, and
a second topic, from the content associated with the user, that is associated with the second activity that is engaged in by the user;
rendering, via the configuration UI, at least:
a first topic identifier UI element that identifies the first topic determined to be associated with the first activity that is engaged in by the user, and
a second topic identifier UI element that identifies the second topic determined to be associated with the second activity that is engaged in by the user;
causing the AI engine to generate an activity graph that is associated with the user and that includes:
a first node that corresponds to the first activity,
a second node that corresponds to the second activity,
a third node that corresponds to the first topic and that is linked, within the activity graph, to the first node that corresponds to the first activity, and
a fourth node that corresponds to the second topic and that is linked, within the activity graph, to the second node that corresponds to the second activity;
receiving third user input confirming that the first topic is associated with the first activity that is engaged in by the user;
receiving fourth user input refuting that the second topic is associated with the second activity that is engaged in by the user; and
updating the AI model and the activity graph in association with the user based on the fourth user input refuting that the second topic is associated with the second activity that is engaged in by the user.

US Pat. No. 10,990,420

CUSTOMIZING USER INTERFACE COMPONENTS

Dell Products L.P., Roun...

1. A method, comprising:presenting a user interface to a user;
determining whether or not to customize a size of one or more components on the user interface;
determining candidate components on the user interface to customize, when a determination is made to customize a size of one or more components on the user interface;
customizing the candidate components on the user interface; and
presenting a customized user interface including the candidate components to the user;
wherein determining the candidate components is based in part on historical frequency of use of the candidate components;
wherein customizing the candidate components of the user interface comprises at least one of enlarging and reducing a size of the candidate components;
wherein determining whether or not to customize a size of one or more components on the user interface further comprises:
determining a derived radius for a contact area calculated in accordance with the user directly touching the one or more components of the user interface;
determining an average radius for a contact area associated with the one or more components of the user interface; and
comparing the derived radius against the average radius; and
wherein the presenting, determining, and customizing steps are executed by a processing device operatively coupled to a memory.
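The radius comparison in the final wherein clauses can be sketched as follows, taking the derived radius to be the mean of observed touch radii — one plausible reading, since the claim does not fix the derivation:

```python
def should_customize(touch_radii, average_radius):
    """Compare a derived contact-area radius (here, the mean of the
    user's observed touch radii) against the component's average
    radius; a larger derived radius suggests enlarging the component."""
    derived = sum(touch_radii) / len(touch_radii)
    return derived > average_radius
```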

US Pat. No. 10,990,419

DYNAMIC MULTI MONITOR DISPLAY AND FLEXIBLE TILE DISPLAY

MICROSOFT TECHNOLOGY LICE...

1. A computer system comprising:one or more processors; and
one or more storage media having stored executable instructions that are operable, when executed by the one or more processors, for configuring the computer system to control configuration of an interface having multiple types of tiles that each render content for presentation on different types of display systems having different pluralities of display devices with different numbers of display devices, and by at least configuring the computer system to perform the following:
create the interface having a plurality of tiles, the interface being configured to be automatically and dynamically adjusted in size or arrangement of tiles to accommodate different display systems having different pluralities of display devices with different numbers of display devices, the interface being created with a plurality of tiles including:
a first set of one or more tiles that each has a boundary display parameter that specifies a particular type of display area boundary between two or more display devices of the different display systems that the first set of tiles is prohibited from overlapping during display, and
a second set of one or more tiles that omits the boundary display parameter or that has the boundary display parameter set to inactive;
identify a display parameter for a particular display system of the different display systems, the display parameter comprising one or more of: (i) quantity of display devices of the particular display system, or (ii) relative positioning of the display devices of the particular display system;
determine a layout presentation of the interface for rendering the interface on at least two display devices of the particular display system of the different display systems simultaneously, the layout presentation of the interface being at least partially based on the display parameter for the particular display system; and
render the interface within display areas of the at least two display devices of the particular display system simultaneously without any of the first set of tiles overlapping a boundary of the particular type of display area boundary between the at least two display devices of the particular display system.

US Pat. No. 10,990,418

SYSTEM AND METHOD FOR PROVIDING APPLIANCE OPERATION INSTRUCTIONS AND DATA ENTRY DURING THE PREPARATION OF A RECIPE

PERFECT COMPANY, Vancouv...

1. A system, comprising:a computing device having at least one processor, at least one user interface and a memory, the memory including computer-executable instructions that, when executed by the processor, cause the at least one processor to:
receive a recipe, the recipe indicating at least two ingredients and at least one action;
receive an identification of an appliance and appliance data, the appliance data including at least one data field of a control panel of the appliance, the at least one action associated with the data field of the control panel of the appliance;
render, on the user interface, a first indication associated with the at least one action and the data field of the control panel of the appliance, wherein the first indication is operable on the user interface to receive a data entry, the data entry is associated with an operation of the appliance, the action is an instruction to perform the data entry on the first indication; and
receive the data entry for the first indication; and
transmit the data entry to the appliance.

US Pat. No. 10,990,417

METHOD AND SYSTEM FOR CONNECTING USERS BASED ON A MEASURE OF CORRELATION

AUTODESK, INC., San Rafa...

1. A computer-implemented method for facilitating a connection between users, the method comprising:receiving a first activity data element associated with a first user, wherein the first activity data element comprises activity information associated with a first software application, and the first activity data element includes one or more characteristics of at least a first document that have been modified by the first user via the first software application;
receiving a second activity data element associated with a second user, wherein the second activity data element comprises activity information associated with the first software application, and the second activity data element includes one or more characteristics of at least one of the first document or a second document that have been modified by the second user via the first software application;
determining, via a processing unit, that a connection between the first user and the second user should be facilitated in response to determining, based on a measure of correlation, that the one or more characteristics of the at least the first document that have been modified by the first user are correlated with the one or more characteristics of the at least one of the first document or the second document that have been modified by the second user, wherein the measure of correlation is determined by comparing at least one of contextual information included in the first and second activity data elements or one or more types of commands issued by the first and second users; and
in response, facilitating a connection between the first user and the second user.
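One plausible instantiation of the claim's "measure of correlation" is a Jaccard overlap between the sets of document characteristics modified by each user; the claim leaves the metric open, so both the metric and the threshold here are purely illustrative:

```python
def measure_of_correlation(chars_a, chars_b):
    """Jaccard overlap of the document characteristics modified by
    two users (sets of strings)."""
    return len(chars_a & chars_b) / len(chars_a | chars_b)

def should_connect(chars_a, chars_b, threshold=0.3):
    """Facilitate a connection when the correlation clears a threshold."""
    return measure_of_correlation(chars_a, chars_b) >= threshold
```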

US Pat. No. 10,990,416

LOCATION-BASED MOBILE APPLICATION PROCESSING

NCR Corporation, Atlanta...

1. A method, comprising:continuously reporting a current physical location of a mobile device to a server;
receiving configuration data associated with an enterprise application (app) for an enterprise detected to be within a configured distance of the current physical location by the server;
transforming an existing mobile app associated with an existing enterprise to an instance of the enterprise app using the configuration data;
connecting the instance to an enterprise server of the enterprise using the configuration data;
processing the instance on the mobile device to interact with enterprise mobile services associated with the enterprise;
detecting a predefined event;
removing the instance from memory of the mobile device based on the predefined event;
launching the existing mobile app responsive to a user activation of an icon associated with the existing mobile app from a touch screen display of the mobile device;
detecting a change application option selected from the existing mobile app;
displaying a list of available mobile apps;
receiving a selected mobile app;
obtaining new configuration data for the selected mobile app;
removing an executing instance of the existing mobile app from the mobile device;
transforming the existing mobile app into the selected mobile app instance of the selected mobile app;
connecting the selected mobile app instance to a new enterprise server associated with the selected mobile app; and
launching the selected mobile app instance on the mobile device to interact with new enterprise mobile services associated with the new enterprise server.

US Pat. No. 10,990,415

DISK MANAGEMENT METHOD AND APPARATUS IN ARM DEVICE AND ARM DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A disk management method in an advanced reduced instruction set computing (RISC) Machine (ARM) device, comprising:receiving, by the ARM device, configuration information, wherein the ARM device comprises a plurality of disks, wherein each of the disks has a respective slot number, and wherein the configuration information comprises mapping data between a startup sequence of each of the disks and the respective slot number;
creating, by the ARM device, a device tree block (DTB) file comprising the configuration information, wherein creating the DTB file comprises:
recording, by the ARM device, the configuration information into a preconfigured device tree source (DTS) file; and
transferring, by the ARM device, the preconfigured DTS file into the DTB file; and
starting, by the ARM device, each of the disks in a sequence based on the DTB file.
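The sequence-to-slot mapping carried in the DTB file boils down to an ordering step; a minimal sketch, where the mapping's in-memory representation is an assumption:

```python
def startup_order(sequence_to_slot):
    """Return disk slot numbers in the order they should be started,
    given the DTB-recorded mapping of startup sequence -> slot number."""
    return [slot for _, slot in sorted(sequence_to_slot.items())]
```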

US Pat. No. 10,990,414

SYSTEM CONSTRUCTION ASSISTANCE SYSTEM, INFORMATION PROCESSING DEVICE, METHOD AND STORAGE MEDIUM FOR STORING PROGRAM

NEC CORPORATION, Tokyo (...

1. A system construction assistance system comprising:a memory that stores a set of instructions; and
at least one processor configured to execute the set of instructions to:
divide a state model into one or more groups based on at least dependency between state elements included in the state model, the state model representing a system, each of the state elements representing a component of the system and being capable of taking a plurality of states and being capable of transitioning between the plurality of states;
determine invertibility of state elements belonging to a specified group, the invertibility being a capability of the state elements to transition from any one of the plurality of states to any other of the plurality of states in the specified group;
calculate, separately for each of the groups generated by dividing, a procedure for causing the state elements belonging to the respective group to transition to a required state; and
integrate the procedure calculated for each of the groups.
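Dividing a state model by dependency, as recited above, amounts to finding connected components over the dependency relation — a minimal sketch, assuming symmetric dependencies between state elements:

```python
def divide_into_groups(elements, dependencies):
    """Divide state elements into groups of (transitively) dependent
    elements, i.e. connected components of the dependency graph."""
    adj = {e: set() for e in elements}
    for a, b in dependencies:
        adj[a].add(b)
        adj[b].add(a)
    groups, seen = [], set()
    for e in elements:
        if e in seen:
            continue
        comp, stack = set(), [e]
        while stack:                      # depth-first flood fill
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        groups.append(comp)
    return groups
```

Each group can then have its transition procedure calculated separately before the procedures are integrated.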

US Pat. No. 10,990,413

MAINFRAME SYSTEM STRUCTURING

International Business Ma...

1. A method comprising:structuring, by a processor, a mainframe of an organization, the mainframe including a transaction layer and a middleware layer and an operating system layer, wherein the transaction layer comprises applications and transactions of the application that are each indivisible computing operations to be executed on the transaction layer, wherein the processor structures the mainframe by:
defining a transaction relationship that relates a plurality of transactions of the transaction layer to an associated type of action of the organization;
loading the defined transaction relationship into a mainframe repository that stores type data and transaction data, where the action is stored as action data and each of the plurality of associated transactions is stored as transaction data;
identifying, by running an operating system (OS) discovery library adapter (DLA), a plurality of resources of the mainframe that may be used to execute the plurality of transactions, wherein the plurality of resources includes at least one processor and at least one memory;
identifying, by running middleware DLA, one or more transaction access paths between the transaction layer and the middleware layer to a first set of resources of the plurality of resources that are associated with middleware of the middleware layer;
generating, in the mainframe repository, a middleware DLA book that lists each of the first set of resources and includes respective identified transaction access path for each of the first set of resources;
preprocessing, in the mainframe repository, an OS DLA book that includes a second set of resources of the plurality of resources that are associated with an OS of the operating system layer;
loading, and in the mainframe repository, both the OS DLA book and the middleware DLA book to a service manager (SM) service component repository (SCR); and
generating a service model of the mainframe that includes a visual representation of the plurality of transactions and the plurality of resources that are related to the relationship across the middleware layer and the transaction layer and the operating system layer of the mainframe.

US Pat. No. 10,990,412

SOFTWARE DEPLOYMENT IN DISAGGREGATED COMPUTING PLATFORMS

Liqid Inc., Broomfield, ...

1. A computing apparatus comprising:one or more computer readable storage media;
a management processor operatively coupled with the one or more computer readable storage media; and
program instructions stored on the one or more computer readable storage media, that when executed by the management processor, direct the management processor to at least:
instruct a communication fabric to establish a first partitioning in the communication fabric between a first processor and a storage element;
deploy at least a software configuration to the storage element using the first partitioning;
instruct the communication fabric to de-establish the first partitioning; and
instruct the communication fabric to establish a second partitioning in the communication fabric between a second processor and the storage element comprising the software configuration, wherein the second processor operates using the software configuration.
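The two-phase partitioning flow of this claim can be modeled with a toy fabric object; the class, method names, and processor labels are all illustrative assumptions:

```python
class Fabric:
    """Toy communication fabric tracking processor<->storage partitionings."""
    def __init__(self):
        self.links = set()
    def establish(self, processor, storage):
        self.links.add((processor, storage))
    def de_establish(self, processor, storage):
        self.links.discard((processor, storage))

def deploy_software(fabric, storage, image):
    fabric.establish("deploy-cpu", storage)     # first partitioning
    contents = {storage: image}                 # deploy the software configuration
    fabric.de_establish("deploy-cpu", storage)  # tear down first partitioning
    fabric.establish("target-cpu", storage)     # second partitioning: target boots
    return contents[storage], fabric.links
```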

US Pat. No. 10,990,411

SYSTEM AND METHOD TO INSTALL FIRMWARE VOLUMES FROM NVME BOOT PARTITION

Dell Products L.P., Roun...

1. An information handling system, comprising:a processor;
a Basic Input/Output System (BIOS) read-only memory (ROM) device to store a first firmware volume of BIOS code for the information handling system; and
a non-volatile memory device including a first boot partition to store a second firmware volume of the BIOS code, and a copy of the first firmware volume, wherein the processor executes the first and second firmware volumes during a Unified Extensible Firmware Interface (UEFI) boot process.

US Pat. No. 10,990,410

SYSTEMS AND METHODS FOR VIRTUALLY PARTITIONING A MACHINE PERCEPTION AND DENSE ALGORITHM INTEGRATED CIRCUIT

quadric.io, Inc., Burlin...

1. A method for virtually partitioning an integrated circuit, the method comprising:
identifying a data partitioning scheme from a plurality of distinct data partitioning schemes for an input dataset based on:
(i) one or more dimensional attributes of the input dataset, and
(ii) one or more architectural attributes of the integrated circuit;
partitioning the input dataset into a plurality of distinct partitions of data based on the identified data partitioning scheme;
identifying a processing core partitioning scheme from a plurality of distinct processing core partitioning schemes for an architecture of the integrated circuit based on the partitioning of the input dataset; and
virtually partitioning the architecture of the integrated circuit based on (a) the data partitioning scheme and (b) the processing core partitioning scheme.
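A toy version of the claim's two-stage selection can make the dependency concrete: dataset attributes pick the data scheme, and the resulting partition count drives the core partitioning. The scheme names, the core-grid model, and the chunking policy below are hypothetical, not quadric.io's algorithm.

```python
def pick_data_scheme(rows, cols, core_grid_width):
    # (i) a dimensional attribute of the input dataset and
    # (ii) an architectural attribute of the circuit drive the choice
    return "row-major" if rows % core_grid_width == 0 else "col-major"

def partition_rows(data, parts):
    # split the dataset into `parts` distinct partitions of whole rows
    k = (len(data) + parts - 1) // parts
    return [data[i:i + k] for i in range(0, len(data), k)]

def cores_per_partition(num_partitions, total_cores):
    # core partitioning scheme derived from the data partitioning
    return total_cores // num_partitions
```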

US Pat. No. 10,990,409

CONTROL FLOW MECHANISM FOR EXECUTION OF GRAPHICS PROCESSOR INSTRUCTIONS USING ACTIVE CHANNEL PACKING

INTEL CORPORATION, Santa...

1. An apparatus comprising:
a graphics processor, including:
a plurality of processing units to execute single instruction, multiple data (SIMD) instructions; and
wherein the graphics processor is to:
detect a diverging control flow in a plurality of SIMD channels of a first SIMD instruction for an application, the first SIMD instruction comprising a plurality of sections of SIMD channels, each section comprising multiple SIMD channels;
detect which SIMD channels of the plurality of SIMD channels are active SIMD channels and determine indices of the active SIMD channels, wherein the indices comprise set bits corresponding to the active SIMD channels in an input register of the graphics processor;
determine whether a number of the active SIMD channels is below a predetermined threshold percentage of the plurality of SIMD channels; and
upon determining the number is below the threshold percentage, further to:
identify a code region of the application impacted by the diverging control flow;
duplicate the identified code region into a duplicated code region that executes using a subset of the plurality of SIMD channels, the subset comprising one or more of the plurality of sections of SIMD channels;
pack input to the active SIMD channels into the subset of the plurality of SIMD channels by applying an execution mask that utilizes the set bits of the determined indices of the active SIMD channels to identify the active SIMD channels for packing into the subset of the plurality of SIMD channels;
execute packed instructions for the subset of the plurality SIMD channels to generate computed data; and
unpack, at an output register of the graphics processor, any output from the subset of the plurality of SIMD channels that is consumed outside of the identified code region into original locations for the active SIMD channels in the plurality of SIMD channels.
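The pack/execute/unpack cycle in the claim can be modeled lane-by-lane in plain Python. This is a conceptual sketch, not GPU code: the threshold value, lane count, and function shape are assumptions for illustration.

```python
def pack_execute_unpack(inputs, active_mask, fn, threshold=0.5):
    # indices of set bits identify the active SIMD channels
    n = len(active_mask)
    active = [i for i, a in enumerate(active_mask) if a]
    if len(active) / n >= threshold:
        # enough active channels: execute in place, no packing
        return [fn(x) if a else None for x, a in zip(inputs, active_mask)]
    packed = [inputs[i] for i in active]   # pack inputs into a lane subset
    computed = [fn(x) for x in packed]     # execute packed instructions
    out = [None] * n
    for slot, lane in enumerate(active):   # unpack to original lane locations
        out[lane] = computed[slot]
    return out
```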

US Pat. No. 10,990,407

DYNAMIC INTERRUPT RECONFIGURATION FOR EFFECTIVE POWER MANAGEMENT

Intel Corporation, Santa...

23. A tangible non-transient machine readable medium having a plurality of instructions stored thereon comprising an operating system configured to be executed on a multi-core microprocessor including a central processing unit (CPU) having a plurality of processor cores to cause an apparatus including the multi-core microprocessor to perform operations comprising:
mapping interrupt vectors to the plurality of processor cores in the CPU, each interrupt vector being mapped to a particular processor core;
detecting that an interrupt workload of a first processor core has fallen below a threshold;
and in response thereto,
reconfiguring each of the interrupt vectors mapped to the first processor core to be remapped to a processor core other than the first processor core.
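The remapping step can be sketched as a pure function over a vector-to-core map. The data model and round-robin target choice are hypothetical; the claim only requires that vectors move off the under-threshold core.

```python
def remap_idle_core(vector_map, workload, core, threshold):
    # only remap when the core's interrupt workload falls below the threshold
    if workload[core] >= threshold:
        return vector_map
    others = [c for c in workload if c != core]
    # every vector mapped to `core` moves to some other processor core
    return {v: (others[i % len(others)] if c == core else c)
            for i, (v, c) in enumerate(vector_map.items())}
```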

US Pat. No. 10,990,406

INSTRUCTION EXECUTION METHOD AND INSTRUCTION EXECUTION DEVICE

SHANGHAI ZHAOXIN SEMICOND...

1. An instruction execution method, applied to a processor, wherein the processor comprises an instruction translator, an execution unit, an architecture register, and a reorder buffer, the instruction execution method comprising:
using the instruction translator to receive a macro-instruction, and to translate the macro-instruction into a first micro-instruction, a second micro-instruction and a third micro-instruction, wherein the instruction translator marks the first micro-instruction and the second micro-instruction with the same atomic operation flag;
using the execution unit to execute the first micro-instruction to generate a first execution result, and storing the first execution result in a temporary register;
using the execution unit to execute the second micro-instruction to generate a second execution result, and storing the second execution result in the architecture register; and
using the execution unit to execute the third micro-instruction to read the first execution result from the temporary register and store the first execution result in the architecture register.

US Pat. No. 10,990,405

CALL/RETURN STACK BRANCH TARGET PREDICTOR TO MULTIPLE NEXT SEQUENTIAL INSTRUCTION ADDRESSES

INTERNATIONAL BUSINESS MA...

1. A computer data processing system comprising:
a branch detection module configured to determine a first program including a first program branch that is a possible call branch preceding a next sequential instruction address (NSIA) included in the first program, and to determine a first routine including a first routine branch and to determine the first routine branch is a possible return capable branch, the first routine branch located at a first routine instruction address included in the first routine and having a target instruction address included in the first program that is detected as being offset within a defined range of allowed offsets from the NSIA; and
a branch predictor module configured to determine the first program includes a second program branch that is a possible call branch having a NSIA included in the first program, and to determine the first routine includes a second routine branch that is a predicted return branch having a predicted target instruction address included in the first program based on the NSIA of the second program branch and the offset.

US Pat. No. 10,990,404

APPARATUS AND METHOD FOR PERFORMING BRANCH PREDICTION USING LOOP MINIMUM ITERATION PREDICTION

Arm Limited, Cambridge (...

18. A method of performing branch prediction in an apparatus having processing circuitry to execute instructions, and branch prediction circuitry to make branch outcome predictions in respect of branch instructions, the method comprising:
using within the branch prediction circuitry loop minimum iteration prediction circuitry having one or more entries, each entry being associated with a loop controlling branch instruction that controls repeated execution of a loop comprising a number of instructions;
during a training phase for an entry, seeking to identify a minimum number of iterations of the associated loop from an analysis of multiple instances of execution of the loop during the training phase, where a total number of iterations of the loop varies for the multiple instances of execution of the loop considered during the training phase; and
for a given instance of execution of the loop encountered after the training phase has successfully identified the minimum number of iterations, employing the loop minimum iteration prediction circuitry to identify a branch outcome prediction for the associated loop controlling branch instruction for use during each of the minimum number of iterations of the loop, with the branch prediction circuitry then employing another prediction component to make the branch outcome prediction for the associated loop controlling branch instruction for each remaining iteration of the given instance of execution of the loop.
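The training idea in the claim reduces to tracking the minimum of observed iteration counts and predicting "taken" only within that guaranteed horizon. The class below is an illustrative software model under assumed parameters (a fixed number of training runs); the Arm circuitry is of course hardware and may train differently.

```python
class LoopMinPredictor:
    def __init__(self, training_runs=4):
        self.samples = []
        self.training_runs = training_runs
        self.min_iters = None  # set once training succeeds

    def train(self, observed_iterations):
        # record the total iteration count of one execution of the loop
        self.samples.append(observed_iterations)
        if len(self.samples) >= self.training_runs:
            self.min_iters = min(self.samples)

    def predict(self, iteration):
        # "taken" for each of the minimum number of iterations; afterwards,
        # defer to another prediction component (modeled here as None)
        if self.min_iters is None or iteration >= self.min_iters:
            return None
        return "taken"
```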

US Pat. No. 10,990,403

PREDICTING AN OUTCOME OF AN INSTRUCTION FOLLOWING A FLUSH

Arm Limited, Cambridge (...

1. An apparatus comprising:
processing circuitry to speculatively execute an earlier instruction and a later instruction by generating a prediction of an outcome of the earlier instruction and a prediction of an outcome of the later instruction, wherein the prediction of the outcome of the earlier instruction causes a first control flow path to be executed;
storage circuitry to store the outcome of the later instruction in response to the later instruction completing; and
flush circuitry to generate a flush in response to the prediction of the outcome of the earlier instruction being incorrect,
wherein when re-executing the later instruction in a second control flow path following the flush, the processing circuitry is adapted to generate the prediction of the outcome of the later instruction as the outcome stored in the storage circuitry during execution of the first control flow path.

US Pat. No. 10,990,402

ADAPTIVE CONSUMER BUFFER

Red Hat, Inc., Raleigh, ...

1. A message broker system for distributing messages to consumers comprising:
a performance metrics tracker;
a message distributor;
a memory; and
a processor, in communication with the memory, to:
receive a maximum cache limit from each consumer of a set of consumers in communication with the message broker system, wherein each maximum cache limit includes a maximum measurable quantity;
transmit, via the message distributor, a first set of messages to a first consumer of the set of consumers in communication with the message broker system, wherein the first set of messages includes a first measurable quantity less than or equal to the respective maximum measurable quantity of the maximum cache limit of the first consumer;
determine, via the performance metrics tracker, a count of all of the consumers in the set of consumers that are in communication with the message broker system;
compare, via the performance metrics tracker, the determined count of consumers to a previously determined count;
determine, via the performance metrics tracker, one or more performance metrics related to the first consumer after transmitting the first set of messages; and
transmit, via the message distributor, a second set of messages to the first consumer, wherein the second set of messages includes a second measurable quantity based on a change in the count of consumers and the one or more performance metrics meeting a respective predetermined threshold.
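One plausible reading of the second transmission is a batch-sizing policy that reacts both to a change in consumer count and to a per-consumer performance metric. The halving/quartering policy, the latency metric, and the thresholds below are hypothetical, not Red Hat's actual heuristics.

```python
def next_batch_size(max_cache, prev_count, count, latency_ms, latency_threshold_ms):
    # never exceed the consumer's reported maximum cache limit
    size = max_cache
    if count > prev_count:                  # more consumers share the broker
        size = max_cache // 2
    if latency_ms > latency_threshold_ms:   # this consumer is falling behind
        size = min(size, max_cache // 4)
    return max(size, 1)
```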

US Pat. No. 10,990,400

MEMORY APPARATUS AND DATA PROCESSING SYSTEM INCLUDING THE SAME

SK hynix Inc., Icheon-si...

7. A memory apparatus comprising:
at least one memory; and
a memory controller configured to control the at least one memory,
wherein the memory controller comprises:
a command decoder configured to generate a write command, an address recognition command, or a mode register command by decoding a command/address signal provided through shared pins according to a clock signal;
a flip-flop configured to latch the write command according to the clock signal;
a mode register set configured to generate a training mode signal according to the mode register command;
a first logic circuit configured to perform a logic operation on the address recognition command and the training mode signal and output an output signal; and
a second logic circuit configured to generate an internal write signal according to the signal latched in the flip-flop and the output signal of the first logic circuit.

US Pat. No. 10,990,398

MECHANISM FOR INTERRUPTING AND RESUMING EXECUTION ON AN UNPROTECTED PIPELINE PROCESSOR

Texas Instruments Incorpo...

1. A method for executing a plurality of instructions by a processor, the method comprising:
receiving a first instruction for execution on an instruction execution pipeline;
determining a lifetime tracking value for the execution of the first instruction;
beginning the execution of the first instruction;
updating the lifetime tracking value during the execution of the first instruction;
receiving one or more second instructions for execution on the instruction execution pipeline, the one or more second instructions associated with a higher priority task than the first instruction;
in response to the one or more second instructions, storing a register state associated with the execution of the first instruction in a memory;
determining that the one or more second instructions have completed execution; and
in response to the one or more second instructions having completed execution, restoring the register state from the memory to a location in the instruction execution pipeline determined based on the lifetime tracking value.

US Pat. No. 10,990,397

APPARATUSES, METHODS, AND SYSTEMS FOR TRANSPOSE INSTRUCTIONS OF A MATRIX OPERATIONS ACCELERATOR

Intel Corporation, Santa...

1. An apparatus comprising:
a matrix operations accelerator circuit comprising a two-dimensional grid of fused multiply accumulate circuits;
a first plurality of registers that represents an input two-dimensional matrix coupled to the matrix operations accelerator circuit;
a decoder, of a core coupled to the matrix operations accelerator circuit, to decode an instruction into a decoded instruction; and
an execution circuit of the core to execute the decoded instruction to: switch the matrix operations accelerator circuit from a first, fused multiply accumulate mode where a respective output of each of a first proper subset of fused multiply accumulate circuits of the two-dimensional grid is transmitted to a respective input of an adder circuit of each of a second proper subset of fused multiply accumulate circuits of the two-dimensional grid to form respective fused multiply accumulate values from the input two-dimensional matrix, to a second, transpose mode where a first proper subset of the input two-dimensional matrix is input to the first proper subset of fused multiply accumulate circuits of the two-dimensional grid, a second proper subset of the input two-dimensional matrix is input to the second proper subset of fused multiply accumulate circuits of the two-dimensional grid, and the second proper subset of the input two-dimensional matrix is locked from propagating to the respective inputs of the adder circuit of each of the second proper subset of fused multiply accumulate circuits until the first proper subset of the input two-dimensional matrix is propagated ahead of the second proper subset of the input two-dimensional matrix in the adder circuit of each of the second proper subset of fused multiply accumulate circuits.

US Pat. No. 10,990,396

SYSTEMS FOR PERFORMING INSTRUCTIONS TO QUICKLY CONVERT AND USE TILES AS 1D VECTORS

Intel Corporation, Santa...

1. A processor comprising: fetch circuitry to fetch an instruction having fields to specify an opcode, locations of a two-dimensional (2D) matrix and a one-dimensional (1D) vector, and a group of elements comprising one of a row, part of a row, multiple rows, a column, part of a column, multiple columns, or a rectangular sub-tile of the specified 2D matrix, and wherein the opcode is to indicate a move of the specified group between the 2D matrix and the 1D vector; decode circuitry to decode the fetched instruction; and execution circuitry, responsive to the decoded instruction, when the opcode specifies a move from 1D, to move contents of the specified 1D vector to the specified group of elements.
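Modeled on plain Python lists, the two directions of the claimed move are trivial; the group here is a single whole row, one of the element-group options the claim enumerates. This is an illustrative data-movement model, not the instruction semantics of any real Intel opcode.

```python
def move_row_to_vector(matrix, row):
    # "move from 2D": copy one row of the 2D matrix out as a 1D vector
    return list(matrix[row])

def move_vector_to_row(matrix, row, vector):
    # "move from 1D": write the 1D vector into the specified row group
    matrix[row] = list(vector)
    return matrix
```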

US Pat. No. 10,990,394

SYSTEMS AND METHODS FOR MIXED INSTRUCTION MULTIPLE DATA (XIMD) COMPUTING

Intel Corporation, Santa...

1. An integrated circuit, comprising:
a mixed instruction multiple data (xIMD) computing system, comprising:
a plurality of data processors, each data processor representative of a lane of a single instruction multiple data (SIMD) computing system, wherein the plurality of data processors are configured to use a first dominant lane for instruction execution and to fork a second dominant lane when a data dependency instruction that does not share a taken/not-taken state with the first dominant lane is encountered during execution of a program by the xIMD computing system.

US Pat. No. 10,990,393

ADDRESS-BASED FILTERING FOR LOAD/STORE SPECULATION

ADVANCED MICRO DEVICES, I...

1. A method of address-based filtering for load/store speculation, the method comprising:
maintaining a filtering table comprising table entries associated with ranges of addresses;
in response to receiving an ordering check triggering transaction, querying the filtering table using a target address of the ordering check triggering transaction to determine if an instruction dependent upon the ordering check triggering transaction has previously been generated a physical address; and
in response to determining that the filtering table lacks an indication that the instruction dependent upon the ordering check triggering transaction has previously been generated a physical address, bypassing a lookup operation in an ordering violation memory structure to determine whether the instruction dependent upon the ordering check triggering transaction is currently in-flight.
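The filtering table amounts to a cheap range-keyed lookup that gates a more expensive structure. Below is a sketch under assumptions of my own (a 64-byte range granularity, a dict-backed table); the claim does not specify either.

```python
RANGE = 64  # hypothetical bytes covered per filtering-table entry

def needs_ordering_check(filter_table, target_address):
    # True only if a dependent instruction has already generated a physical
    # address in this range; False means the violation lookup can be bypassed
    return filter_table.get(target_address // RANGE, False)

def record_dependent(filter_table, address):
    # mark the range when a dependent instruction generates a physical address
    filter_table[address // RANGE] = True
```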

US Pat. No. 10,990,392

EFFICIENT LOOP EXECUTION FOR A MULTI-THREADED, SELF-SCHEDULING RECONFIGURABLE COMPUTING FABRIC

Micron Technology, Inc., ...

1. A configurable circuit, comprising:
a configurable computation circuit;
a first memory circuit coupled to the configurable computation circuit;
a synchronous network;
a plurality of synchronous network inputs coupled to the synchronous network and to the configurable computation circuit;
a plurality of synchronous network outputs coupled to the synchronous network and to the configurable computation circuit;
an asynchronous network input queue coupled to an asynchronous packet network;
an asynchronous network output queue; and
a second configuration memory circuit coupled to the configurable computation circuit, to the plurality of synchronous network inputs, and to the plurality of synchronous network outputs; and
a control circuit coupled to the configurable computation circuit, the control circuit comprising:
a memory control circuit;
a plurality of control registers storing a thread identifier pool having a predetermined number of a plurality of thread identifiers and a completion table having a loop count of an active number of loop threads; and
a thread control circuit, wherein in response to receipt of an asynchronous packet network message returning a thread identifier of the plurality of thread identifiers to the thread identifier pool, the control circuit decrements the loop count and, when the loop count reaches zero, transmits an asynchronous packet network completion message.

US Pat. No. 10,990,391

BACKPRESSURE CONTROL USING A STOP SIGNAL FOR A MULTI-THREADED, SELF-SCHEDULING RECONFIGURABLE COMPUTING FABRIC

Micron Technology, Inc., ...

1. A configurable circuit, comprising:
a configurable computation circuit;
a plurality of synchronous network inputs coupled to the configurable computation circuit;
a plurality of synchronous network outputs coupled to the configurable computation circuit;
an asynchronous network input queue coupled to an asynchronous packet network;
an asynchronous network output queue coupled to the asynchronous packet network;
a flow control circuit coupled to the asynchronous network output queue, the flow control circuit adapted to generate a stop signal when a predetermined threshold has been reached in the asynchronous network output queue; and
a configuration memory circuit coupled to the configurable computation circuit, to the plurality of synchronous network inputs, and to the plurality of synchronous network outputs, the configuration memory circuit comprising:
a first instruction memory storing a first plurality of data path configuration instructions to configure a data path of the configurable computation circuit; and
a second instruction memory storing a second plurality of data path configuration instructions or instruction indices for selection of a master synchronous network input of the plurality of synchronous network inputs for receipt of a current data path configuration instruction or instruction index from another, different configurable circuit.

US Pat. No. 10,990,389

BIT STRING OPERATIONS USING A COMPUTING TILE

Micron Technology, Inc., ...

1. An apparatus, comprising:
a plurality of computing devices resident on a storage controller that is resident on a memory device, wherein the plurality of computing devices are coupled to one another, and wherein each of the computing devices comprises a processing unit and a memory array configured as a cache for the processing unit;
an interface coupled to the plurality of computing devices and coupleable to a host device; and
a controller coupled to the plurality of computing devices and comprising circuitry configured to:
request data comprising a bit string having a first format that supports arithmetic operations to a first level of precision from a memory device coupled to the apparatus;
write the data to a first memory array associated with a first computing device among the plurality of computing devices;
determine, by a second computing device among the plurality of computing devices, that an operation in which the bit string is converted to a second format that supports arithmetic operations to a second level of precision that is different from the first level of precision is to be performed using the second computing device;
write, in response to the determination, the data from the first memory array associated with the first computing device to a second memory array associated with the second computing device; and
cause the processing unit of the second computing device among the plurality of computing devices to perform the operation in which the bit string is converted to the second format that supports arithmetic operations to the second level of precision that is different from the first level of precision.

US Pat. No. 10,990,388

SEMICONDUCTOR DEVICE

Samsung Electronics Co., ...

1. An image processing method of a semiconductor device, wherein the semiconductor device comprises an internal register configured to store image data, a data arrange layer configured to rearrange the stored image data into N number of data rows each having a plurality of lanes, and a plurality of arithmetic logic units (ALUs) comprising N ALU groups configured to process the N number of data rows, the method comprising:
rearranging, by the data arrange layer, first data of the stored image data to provide rearranged first image data, the first data having an n×n matrix size wherein n is a natural number; and
performing, by the ALUs, a first map calculation using the rearranged first image data to generate first output data.

US Pat. No. 10,990,387

CONVERTING FLOATING-POINT OPERANDS INTO UNIVERSAL NUMBER FORMAT OPERANDS FOR PROCESSING IN A MULTI-USER NETWORK

Micron Technology, Inc., ...

1. An apparatus, comprising:
circuitry communicatively coupled to a pool of shared computing resources deployed in a multi-user network, wherein the circuitry is configured to:
receive a request to perform an arithmetic operation or a logical operation, or both, using at least one posit bit string operand, wherein the request includes a parameter corresponding to performance of the operation using the at least one posit bit string operand, wherein the parameter corresponds to a bit length of the at least one posit bit string operand, a quantity of exponent bits of the at least one posit bit string operand, or both; and
perform the arithmetic operation or the logical operation, or both, using the at least one posit bit string operand based, at least in part, on the received parameter.

US Pat. No. 10,990,384

SYSTEM, APPARATUS AND METHOD FOR DYNAMIC UPDATE TO CODE STORED IN A READ-ONLY MEMORY (ROM)

INTEL CORPORATION, Santa...

1. An apparatus comprising:
a control circuit to dynamically enable a comparison circuit based on a dynamic update to a hook table and a patch table; and
the comparison circuit coupled to the control circuit to compare an address of a program counter to at least one address stored in the hook table, and in response to a match between the address of the program counter and the at least one address stored in the hook table, cause a jump from code stored in a read only memory (ROM) to patch code stored in a patch storage, wherein in response to the match, the comparison circuit is to update the program counter to a second address, the second address obtained from the patch table and corresponding to an entry point to the patch code and obtain an index from an entry of the hook table having the matching at least one address and access an entry of the patch table according to the index, the entry of the patch table having the second address.
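The hook/patch redirection the claim describes is, in software terms, a two-table indirection on the program counter. The toy tables and address values below are hypothetical; the claim's comparison is done in hardware, not Python.

```python
def next_pc(pc, hook_table, patch_table, enabled=True):
    # compare the program counter to the hooked addresses
    if enabled and pc in hook_table:
        index = hook_table[pc]         # index stored in the matching hook entry
        return patch_table[index]      # entry point into the patch code
    return pc                          # no match: continue in ROM code
```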

US Pat. No. 10,990,383

CONSTRUCTING SOFTWARE DELTA UPDATES FOR CONTROLLER SOFTWARE AND ABNORMALITY DETECTION BASED ON TOOLCHAIN

Aurora Labs Ltd., Tel Av...

16. A computer-implemented method for changing software on a controller, the method comprising:
accessing a delta file comprising position-independent code in response to a detected deviation of controller behavior from an acceptable operational envelope, the delta file representing differences between a plurality of attributes of a software change to be executed on the controller and a plurality of attributes of current software on the controller;
configuring the position-independent code to execute on the controller; and
providing the delta file to the controller, wherein the controller is configured to link execution of the current software to execution of an instruction contained in the delta file.

US Pat. No. 10,990,382

ENCRYPTION MACHINE UPGRADE, DATA IMPORT AND REQUEST MIGRATION METHOD, APPARATUS AND DEVICE

Alibaba Group Holding Lim...

1. A method, comprising:
determining, by a controller for managing upgrading of encryption machine, a first encryption machine to be upgraded;
storing, by the controller, an upgrade software package for upgrading the first encryption machine into a file storage device;
transferring, by the controller, data of the first encryption machine to a second encryption machine; and
sending, by the controller, an upgrade command for instructing the first encryption machine to conduct upgrade, to the first encryption machine, wherein sending, by the controller, the upgrade command for instructing the first encryption machine to conduct upgrade, to the first encryption machine further comprises
sending, by the controller, the upgrade command for instructing the first encryption machine to obtain the upgrade software package in the file storage device and
utilize the upgrade software package to conduct the upgrade, to the first encryption machine, wherein the upgrade operation is executed by the first encryption machine according to the upgrade command.

US Pat. No. 10,990,381

METHOD AND DEVICE FOR UPDATING A PROGRAM

Robert Bosch GmbH, Stutt...

1. A method for a program in a memory, the memory including a plurality of blocks in which the program is stored and a single backup block, the method comprising:
executing a first version of the program (i) by accessing the program according to an address space that points to the plurality of blocks in which the program is stored and (ii) while the plurality of blocks are operated in a single level mode; and
updating the program from the first version of the program to a second version of the program while the program remains executable partly from the plurality of blocks and partly from the single backup block, the updating being performed by:
(1) executing a plurality of iterations of a series of steps, wherein the iterations are executed until an entirety of the second version is transferred onto the plurality of blocks and wherein the steps of each respective one of the iterations include:
copying a respective part of the first version of the program from a respective one of the plurality of blocks to the single backup block;
switching a respective portion of the address space, corresponding to the respective part of the first version of the program, from (a) pointing to the respective one of the plurality of blocks to instead (b) point to the single backup block;
setting the respective one of the plurality of blocks to a multi-level mode;
while the respective one of the plurality of blocks is set to the multi-level mode and the respective portion of the address space points to the single backup block, storing a respective part of the second version of the program on the respective one of the plurality of blocks so that the respective one of the plurality of blocks simultaneously stores the respective part of the first version of the program and the respective part of the second version of the program; and
switching the respective portion of the address space back from (a) pointing to the single backup block to instead (b) point to the respective one of the plurality of blocks while the one of the blocks remains in the multi-level mode, thereby freeing the single backup block to be able to be used for any other of the iterations if any other of the iterations are still to be performed; and
(2) subsequent to the execution of all of the iterations at which point none of the address space points to the single backup block, executing an entirety of the second version of the program in place of the first version of the program by accessing the program according to the address space that points to the plurality of blocks.
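Stripped of the flash-mode details, the iteration reduces to: stage the old part in the one backup block, redirect, rewrite the block, redirect back. The sketch below models the address space as a simple list and ignores single-level vs. multi-level modes; all names are illustrative, not Bosch's implementation.

```python
def update_in_place(blocks, new_version, backup):
    # address_space[i] says where logical block i currently executes from
    address_space = list(range(len(blocks)))
    for i in range(len(blocks)):
        backup[0] = blocks[i]         # copy part of the first version to backup
        address_space[i] = "backup"   # switch: execute this part from backup
        blocks[i] = new_version[i]    # store the second-version part in place
        address_space[i] = i          # switch back; backup block is freed
    return blocks                     # entirety of the second version in place
```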

US Pat. No. 10,990,380

POWER SAFE OFFLINE DOWNLOAD

WESTERN DIGITAL TECHNOLOG...

1. A method for operating a storage device, comprising:
writing new firmware to a first non-volatile memory device of the storage device;
loading the new firmware into a volatile memory device of the storage device from the first non-volatile memory device;
booting the storage device with the new firmware loaded into the volatile memory device;
updating a firmware slot on a second non-volatile memory device of the storage device with the new firmware;
rebooting the storage device with the new firmware loaded into the volatile memory device when the storage device power cycles while updating the firmware slot on the second non-volatile memory device with the new firmware; and
updating a status of the second non-volatile memory device once the firmware slot is finished updating.
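The power-safety of the sequence comes from running the new image from volatile memory while the slot is marked in-progress: a power cycle mid-update reboots back into the staged copy. The state names and dict model below are illustrative assumptions, not Western Digital's firmware layout.

```python
def boot_image(staged_nv, slot_image, slot_status):
    # if power cycled while the slot was being updated, boot the staged copy
    if slot_status == "updating":
        return staged_nv
    return slot_image if slot_status == "valid" else staged_nv

def update_slot(staged_nv, slot):
    slot["status"] = "updating"   # slot is now inconsistent until finished
    slot["image"] = staged_nv
    slot["status"] = "valid"      # status updated once the slot finishes
    return slot
```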

US Pat. No. 10,990,379

VEHICULAR SOFTWARE UPDATE APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A vehicular software update apparatus used to a vehicle to update software stored in an electronic control unit mounted on the vehicle, the vehicular software update apparatus comprising:
a low power communication device configured to perform a wide area wireless communication with a low power consumption;
an update information processor configured to
operate the low power communication device in an update confirmation state including a state where neither power generation in the vehicle nor power supplying to the vehicle is performed, and
cause the low power communication device to download software update information which is information used to update the software when the software update information is provided by a server; and
a software download processor configured to execute processing related to downloading of the software, the processing related to downloading of the software being determined based on the software update information received by the low power communication device.

US Pat. No. 10,990,378

STORAGE DEVICE AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A storage device comprising:
a semiconductor memory device including a plurality of memory blocks; and
a controller configured to control the semiconductor memory device,
wherein the semiconductor memory device stores original firmware as default firmware and one or more copies of the original firmware as pieces of backup firmware in a first memory block among the plurality of memory blocks, and
wherein the controller comprises:
a firmware load circuit configured to load the default firmware when the default firmware is valid and load one of the pieces of backup firmware sequentially according to a start page index of a region corresponding to multiple offsets where each piece of backup firmware is stored when the default firmware is not valid; and
a firmware update circuit configured to update the original firmware with an updated version of the original firmware, a currently updated version becoming the default firmware and the original firmware becoming one of the pieces of backup firmware.
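The firmware load circuit's fallback behavior can be sketched as a simple selection: load the default when it is valid, otherwise try the backup copies one by one in start-page order. The validity flags and field names here are illustrative assumptions, not the device's actual metadata format.

```python
def load_firmware(default, backups):
    """Pick the image to load: the default when valid, otherwise backup
    copies tried sequentially by start page index (hypothetical sketch)."""
    if default["valid"]:
        return default["image"]
    for copy in sorted(backups, key=lambda c: c["start_page"]):
        if copy["valid"]:
            return copy["image"]
    raise RuntimeError("no valid firmware image found")

backups = [
    {"image": "fw-copy-b", "start_page": 64, "valid": True},
    {"image": "fw-copy-a", "start_page": 32, "valid": False},
]
# Default is corrupt: the first valid backup in start-page order wins.
chosen = load_firmware({"image": "fw-default", "valid": False}, backups)
```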

US Pat. No. 10,990,377

ONLINE MARKETPLACE OF PLUGINS FOR ENHANCING DIALOG SYSTEMS

GOOGLE LLC, Mountain Vie...

1. A system for enhancing dialog systems, comprising: a memory operable to maintain an online marketplace comprising a plurality of dialog system extension elements, wherein each of the plurality of dialog system extension elements includes at least one of a dialog system plugin, a dialog system add-on, a dialog system update, and a dialog system upgrade; and
at least one processor operable to:
receive a selection of one of the plurality of dialog system extension elements from a software developer, the software developer being associated with and managing a dialog system engine of a dialog system;
based on the selection, associate the one of the plurality of dialog system extension elements with the dialog system engine associated with the software developer, wherein associating the one of the plurality of dialog system extension elements with the dialog system engine modifies at least a portion of functionality of the dialog system;
subsequent to associating the one of the plurality of dialog system extension elements with the dialog system engine:
receive a user request of a user via a dialog system interface of the dialog system, wherein the user request corresponds to voice input provided by the user, and wherein the dialog system interface is installed on a user device;
identify, based on the user request, the dialog system engine that is associated with the software developer; and
based on identifying the dialog system engine that is associated with the software developer:
process the user request by the one of the plurality of dialog system extension elements associated with the dialog system engine to generate a response to the user request, wherein the response to the user request includes at least one of: text to be delivered to the user or metadata including instructions to perform one or more actions, and
cause delivery of the response, by the dialog system interface and at the user device of the user, for visual or audio presentation to the user.
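The association-then-dispatch flow of this claim can be illustrated with a toy engine: a developer attaches a marketplace element to their engine, after which user requests routed to that engine are processed by the element to produce a response carrying text and/or action metadata. The marketplace contents and class names are invented.

```python
class DialogEngine:
    """Hypothetical dialog system engine owned by one developer."""

    def __init__(self, developer):
        self.developer = developer
        self.extension = None            # marketplace element, once associated

    def handle(self, request):
        if self.extension is None:
            return {"text": request}     # unmodified behavior without a plugin
        return self.extension(request)   # extension generates the response

# Toy marketplace: each element maps a request to a response with text
# and action metadata (names are illustrative).
marketplace = {
    "weather-plugin": lambda req: {"text": f"Forecast for: {req}",
                                   "actions": ["fetch_weather"]},
}

engine = DialogEngine(developer="dev-42")
engine.extension = marketplace["weather-plugin"]   # the developer's selection
response = engine.handle("weather in Oslo")
```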

US Pat. No. 10,990,375

SYSTEMS AND METHODS FOR APPLICATION PROGRAM AND APPLICATION PROGRAM UPDATE DEPLOYMENT TO A MOBILE DEVICE

MFOUNDRY, Inc., San Fran...

1. A computer system for deploying software updates to mobile devices, comprising: at least one memory storing instructions; and
at least one hardware processor to execute the instructions to perform operations, comprising:
receiving a communication comprising a version identifier of a software application on a mobile device;
determining, based at least in part on analyzing the received version identifier of the software application in view of a second version identifier associated with an available version of the software application, that an updated version of the software application is available;
determining that the updated version of the software application is approved for release based on at least a user selection at a device identifying the updated version of the software application as ready for release to the mobile device;
associating a user identifier with the updated version of the software application;
providing, to the mobile device and based on the determining that the updated version of the software application is approved for release, the updated version of the software application to be installed on the mobile device and the associated user identifier, wherein the provided software application is agnostic to an operating system on the mobile device, and is configured to be interpreted by the mobile device and distributed to a plurality of mobile device types.
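The gating logic of this claim reduces to two checks plus an association step: an update is pushed only when the catalog holds a newer version than the device reports and an operator has marked that version approved for release, and the pushed package is paired with a user identifier. Version tuples and field names are assumptions for the sketch.

```python
def update_to_provide(device_version, catalog_entry, user_id):
    """Decide whether to push an update (illustrative sketch): the catalog
    version must be newer than the device's, and it must be approved for
    release; the result pairs the package with the user identifier."""
    if catalog_entry["version"] <= device_version:
        return None                          # device already up to date
    if not catalog_entry["approved"]:
        return None                          # newer, but not released yet
    return {"package": catalog_entry["package"],
            "version": catalog_entry["version"],
            "user_id": user_id}

entry = {"version": (2, 1), "package": "app-2.1.pkg", "approved": True}
pushed = update_to_provide((2, 0), entry, user_id="u-17")
held = update_to_provide((2, 0), {**entry, "approved": False}, user_id="u-17")
```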

US Pat. No. 10,990,374

VIRTUAL MACHINE UPDATE WHILE KEEPING DEVICES ATTACHED TO THE VIRTUAL MACHINE

MICROSOFT TECHNOLOGY LICEN...

1. In a computing system running a host operating system and a virtual machine (VM), the host operating system executing one or more first virtual machine (VM) components, and the VM executing one or more second VM components that are to remain loaded in computing system physical memory during a servicing operation of the computing system, the one or more first VM components configured to manage the one or more second VM components via one or more identification pointers, a method for performing the servicing operation of the VM, the method comprising: suspending an operation of the VM running the one or more second VM components, the VM having one or more devices that are directly attached to it;
saving a state of the one or more first VM components of the host operating system;
saving the one or more identification pointers for the one or more second VM components in a portion of the computing system physical memory without removing any underlying data structures of the one or more second VM components from the computing system physical memory, wherein the one or more directly attached devices remain configured as attached to the VM and remain configured to communicate with the VM while the VM is suspended and while the servicing operation is performed since the underlying data structures of the one or more second VM components are not removed;
shutting down the one or more first VM components by removing any underlying data structures for the one or more first VM components from the computing system physical memory;
restoring at the completion of the servicing operation the one or more first VM components;
reconnecting the restored one or more first VM components to the one or more second VM components using the identification pointers; and
resuming the operation of the VM.
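The key idea in this claim is that the guest-facing (second) VM components stay resident in physical memory across servicing while only identification pointers to them are saved, so directly attached devices keep a valid target; the host-side (first) components are torn down and rebuilt. A toy model with a dictionary standing in for physical memory (addresses and component names invented):

```python
physical_memory = {}                 # address -> component data structure

def attach(addr, component):
    physical_memory[addr] = component

# Second (guest-facing) VM component stays resident for the whole servicing
# operation; first (host-side) component is removed and later restored.
attach(0x1000, {"name": "passthrough-device-state"})   # second VM component
attach(0x2000, {"name": "host-vm-worker", "ver": 1})   # first VM component

saved_ptrs = [0x1000]                # save pointers, keep the data structures
del physical_memory[0x2000]          # shut down the first VM components

# ... host servicing (e.g. component update) happens here ...

attach(0x2000, {"name": "host-vm-worker", "ver": 2})   # restored, updated
reconnected = [physical_memory[p] for p in saved_ptrs] # reconnect via pointers
```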

US Pat. No. 10,990,373

SERVICE MANAGERS AND FIRMWARE VERSION SELECTIONS IN DISTRIBUTED COMPUTING SYSTEMS

Nutanix, Inc., San Jose,...

1. A method comprising: displaying first and second selection interfaces in a user interface, the first and second selection interfaces corresponding to first and second firmware components, respectively, on a computing node;
displaying, in the user interface with the first and second selection interfaces, a first setpoint for the first selection interface and a second setpoint for the second selection interface, the first setpoint indicating a first firmware version of the first firmware component and the second setpoint indicating a second firmware version of the second firmware component, wherein the first firmware component includes one of a basic input/output system (BIOS) firmware, a base management controller (BMC) firmware, a host bus adapter (HBA) firmware, or a disk firmware, wherein the second firmware component includes a different one of the BIOS firmware, the BMC firmware, the HBA firmware, or the disk firmware;
receiving user input selecting a selected third setpoint on the first selection interface, the selected third setpoint indicating a third firmware version of the first firmware component;
evaluating, using a manager service of the computing node, dependencies between the first and second firmware components to provide an additional selected setpoint for the second selection interface based on the selected third setpoint, the additional selected setpoint indicating a fourth firmware version of the second firmware component; and
displaying, in the user interface with the first and second selection interfaces, an indication of selection of the additional selected setpoint for the second selection interface.
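The dependency evaluation step can be sketched as a lookup: when the user moves one firmware slider, the manager service consults a compatibility table and auto-selects the matching setpoint on the dependent slider. The table contents and version strings below are invented for illustration.

```python
# Hypothetical dependency table: a chosen version of one firmware component
# constrains which version a second component must pair with.
DEPENDENCIES = {
    ("BIOS", "3.0"): ("BMC", "2.5"),
    ("BIOS", "2.0"): ("BMC", "2.0"),
}

def additional_setpoint(component, selected_version):
    """Mimic the manager service: given the user's selected setpoint on the
    first selection interface, return the setpoint to auto-select on the
    dependent second interface, or None when there is no constraint."""
    return DEPENDENCIES.get((component, selected_version))

auto = additional_setpoint("BIOS", "3.0")   # user picks BIOS 3.0
```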

US Pat. No. 10,990,372

UPDATING AN EDGE COMPUTING DEVICE

Microsoft Technology Lice...

1. On a computing device configured as an edge device node in a cluster of computing devices configured to operate as an edge computing device located at a network edge between a local network and a cloud service to perform functions of the cloud service at the network edge, a method for updating system software of the computing device, the method comprising: booting into a system disk image at a boot location;
receiving, from a server computing device, an updated system disk image and storing the updated system disk image;
changing the boot location from a location of the system disk image to a location of the updated system disk image;
booting into the updated system disk image;
detecting a potential error, wherein the detecting is performed at the computing device; and
receiving an instruction to roll back to a specified prior version of the system disk image, wherein receiving the instruction to roll back comprises automatically generating the instruction at the computing device in response to detecting the potential error;
based on receiving the instruction to roll back, changing the boot location from the location of the updated system disk image to a location of the specified prior version of the system disk image;
booting into the specified prior version of the system disk image; and
sending an instruction to another edge computing device of the cluster of computing devices to utilize the specified prior version of the system disk image such that each node of the cluster of computing devices is operating using the same prior version of the system disk image.
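The update-and-rollback mechanism here is essentially A/B booting by pointer: applying an update retargets the boot location at the new image, and rolling back retargets it at a retained prior image, with the rollback instruction propagated so every node in the cluster converges on the same version. Class and path names are illustrative.

```python
class EdgeNode:
    """Hypothetical edge node that boots whichever image its pointer targets."""

    def __init__(self, images):
        self.images = dict(images)          # version -> disk image location
        self.boot_location = images["v1"]

    def apply_update(self, version, location):
        self.images[version] = location
        self.boot_location = location       # next boot uses the updated image

    def roll_back(self, version):
        self.boot_location = self.images[version]   # prior image is retained

cluster = [EdgeNode({"v1": "/img/v1"}) for _ in range(3)]
for node in cluster:
    node.apply_update("v2", "/img/v2")

# One node detects a potential error, generates the rollback instruction
# locally, and the instruction propagates so the whole cluster runs the
# same prior version.
for node in cluster:
    node.roll_back("v1")
```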

US Pat. No. 10,990,371

DEVICE DRIVER NON-VOLATILE BACKING-STORE INSTALLATION

CrowdStrike, Inc., Irvin...

1. A system, comprising: at least one processing unit;
a volatile memory comprising at least one tangible, non-transitory volatile computer-readable medium, wherein the volatile memory comprises a first driver; and
a non-volatile (nonV) memory communicatively connected with the at least one processing unit and comprising at least one tangible, non-transitory non-volatile computer-readable medium, wherein the nonV memory comprises:
a driver store comprising a first copy of a second driver, the second driver being a different version of the first driver; and
an installed-driver backing store comprising a second copy of the second driver,
wherein an operating system of the system is configured to prompt a reboot of the system to update a driver based on a determination that the driver store and the installed-driver backing store include different versions of the driver, and
wherein the first copy of the second driver in the driver store and the second copy of the second driver in the installed-driver backing store cause the operating system to avoid prompting the reboot of the system to update the first driver.
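The reboot-avoidance condition of this claim boils down to a comparison between two stores: the OS prompts a reboot only when the driver store and the installed-driver backing store disagree on a driver's version, so pre-seeding both with the same copy suppresses the prompt. Store layout and driver names below are assumptions.

```python
def reboot_prompt_needed(driver_store, backing_store, driver_name):
    """The OS prompts a reboot only when the two stores hold different
    versions of the named driver (illustrative sketch of the claim)."""
    return driver_store.get(driver_name) != backing_store.get(driver_name)

driver_store = {"sensor.sys": "2.0"}       # first copy of the second driver
backing_store = {"sensor.sys": "2.0"}      # second copy, same version
no_prompt = reboot_prompt_needed(driver_store, backing_store, "sensor.sys")
prompt = reboot_prompt_needed(driver_store, {"sensor.sys": "1.0"}, "sensor.sys")
```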

US Pat. No. 10,990,370

SYSTEM, APPARATUS AND METHOD FOR DEPLOYING INFRASTRUCTURE TO THE CLOUD

Candid Labs, Inc., Atlan...

1. A system to provision a plurality of software applications for operation as resources operating on a cloud computing network accessible to a plurality of users associated with an enterprise, the system comprising:a decision engine configured to generate a cloud deployment model for the plurality of software applications, the cloud deployment model based on: a) survey data provided by the enterprise for the plurality of software applications, wherein the survey data includes information defining a complexity value for each of the plurality of software applications and a business value for each of the plurality of software applications to the enterprise, wherein each respective complexity value and business value for each of the plurality of software applications is derived from the survey data; b) organizational standards for the enterprise; c) server inventory data for the plurality of software applications; and d) learned approaches for creating cloud deployment models for the enterprise, wherein the learned approaches are determined according to analysis of prior approaches applied to one or more software applications by the enterprise or other enterprises; and
a code generation module configured to convert the cloud deployment model to an infrastructure-as-code definition for deployment to the cloud computing network.

US Pat. No. 10,990,369

REPURPOSING SERVERLESS APPLICATION COPIES

EMC IP Holding Company LL...

1. A system comprising: a processor; and memory configured to store one or more sequences of instructions which, when executed by the processor, cause the processor to carry out the steps of: reviewing an application manifest associated with a serverless application to create a backup copy of the serverless application, the serverless application executing in a first function-as-a-service (FaaS) environment on a compute services platform that is provided by a cloud vendor and comprising a plurality of stateless functions, packaged as containers, that interact with backend services and are invoked according to application function mappings, wherein data including state of the serverless application is stored external to the serverless application by the backend services,
wherein the application manifest specifies the plurality of stateless functions, the backend services, and the application function mappings, and
wherein a stateless function interacts with a backend service in executing the serverless application, and an application function mapping comprises a condition under which the stateless function is invoked;
creating the backup copy of the serverless application by directing each backend service listed in the application manifest to create a copy of data associated with the backend service;
receiving a selection of the backup copy of the serverless application for restoring onto the same compute services platform supporting the first FaaS environment, but in a second FaaS environment, different from the first FaaS environment, on the same compute services platform;
accessing the application manifest of the serverless application used to create the copy of the serverless application in the first FaaS environment;
restoring the copy of the serverless application onto the same compute services platform according to the application manifest, the restored copy of the serverless application comprising a restored version of the stateless function, a restored version of the backend service, and a restored version of the application function mapping, wherein upon the restoring, the backend service is accessible to the restored copy of the serverless application; and
creating the second FaaS environment, on the same compute services platform as the first FaaS environment by changing a condition specified in the restored version of the application function mapping so that the backend service is not accessible to the restored copy of the serverless application, wherein based on the changed condition, the restored version of the stateless function is invoked when the restored version of the backend service performs an operation, and
wherein the stateless function of the serverless application in the first FaaS environment and corresponding to the restored version of the stateless function is not invoked.
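The backup half of this claim follows directly from the observation that the functions are stateless: the manifest records the functions, backend services, and mappings, so a backup only needs to direct each listed backend service to copy its own data. A sketch with an invented manifest shape:

```python
# Hypothetical application manifest: functions, backend services, and the
# conditions (mappings) under which each function is invoked.
manifest = {
    "functions": ["resize_image"],
    "backends": ["object_store"],
    "mappings": [{"function": "resize_image", "condition": "object_store.put"}],
}

def create_backup(manifest, backend_services):
    """Back up a serverless application per its manifest: the stateless
    functions need no copying, so only each listed backend service is
    directed to snapshot its data (illustrative sketch)."""
    return {
        "manifest": manifest,
        "backend_data": {name: dict(backend_services[name])
                         for name in manifest["backends"]},
    }

backends = {"object_store": {"photo.jpg": b"\x89PNG"}}
backup = create_backup(manifest, backends)
```

Restoring into a second FaaS environment would then replay this manifest while rewriting the mapping conditions, as the claim describes, so the restored copy does not touch the live backend service.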

US Pat. No. 10,990,368

ON-PREMISES AND CLOUD-BASED SOFTWARE PROVISIONING

Oracle International Corp...

1. A non-transitory computer readable medium having instructions stored thereon that, when executed by a processor, cause the processor to provision a software application on-premises or in a cloud, the provisioning comprising: creating and initializing an instance of a provisioning object;
generating a uniform graphical user interface (GUI) including a home window that allows a user to select a configure window, an orchestrate window and a deploy window that receive provisioning parameters from a user interacting with the GUI, the provisioning parameters comprising a selection of deploying the software application either on-premises or in the cloud, the uniform GUI comprising a single window that allows the previously developed software application to be provisioned either on-premises or in the cloud in response to the user interaction;
receiving the provisioning parameters from the GUI;
creating and initializing a location object and a deployment object based on the provisioning parameters, the location object including an on-premises object for a local network deployment or a cloud object for a remote network deployment, wherein the on-premises object and cloud object are child objects of the location object, the deployment object comprising a software container that includes instances of other objects;
receiving a command to deploy the software application from the GUI; and
deploying the software application on-premises in response to the selection of deploying the software application on-premises or in the cloud in response to the selection of deploying the software application in the cloud using the provisioning object, the location object and the deployment object.
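The location object described in this claim is a parent holding one of two child objects, chosen by the deployment target the user selected in the GUI. A minimal sketch, with the object structure and parameter names assumed for illustration:

```python
def build_location_object(provisioning_params):
    """Create the location object with the child matching the GUI selection:
    an on-premises object for a local network deployment or a cloud object
    for a remote one (structure is illustrative, not from the patent)."""
    child = ({"type": "cloud"} if provisioning_params["target"] == "cloud"
             else {"type": "on_premises"})
    return {"type": "location", "child": child}

cloud_loc = build_location_object({"target": "cloud"})
local_loc = build_location_object({"target": "on_premises"})
```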