US Pat. No. 10,169,252

CONFIGURING FUNCTIONAL CAPABILITIES OF A COMPUTER SYSTEM

International Business Ma...

1. A computer program product for configuring functional capabilities of a computer system comprising two or more persistent memories and two or more replaceable functional units, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising:
transferring, in response to a repair action for a first functional unit, first enablement data stored on the first functional unit to a servicer, wherein the first enablement data stored on the first functional unit is in a first persistent memory of the first functional unit, wherein the first enablement data is associated with a first unique identification number that corresponds to the first functional unit, the first enablement data specifying one or more functional capabilities of the first functional unit, and wherein the one or more functional capabilities are enabled in the first functional unit;
erasing, after transferring the first enablement data in the first persistent memory to the servicer, the first enablement data from the first persistent memory;
obtaining, by the servicer, in response to a replacement action for the first functional unit, a second unique identification number that corresponds to a second functional unit;
transferring, from the servicer, the first unique identification number and the second unique identification number to a trusted environment in the computer system;
transforming, in the trusted environment, the first enablement data to second enablement data by replacing the first unique identification number with the second unique identification number; and
transferring the second enablement data to the second functional unit, wherein the second enablement data is stored in a second persistent memory of the second functional unit, wherein the second enablement data specifies one or more functional capabilities of the second functional unit, and wherein the one or more functional capabilities of the second functional unit are the same as the one or more functional capabilities of the first functional unit.
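The transfer-erase-transform-transfer flow of this claim can be illustrated with a minimal Python sketch. All names (`make_enablement`, `transform_enablement`, the `"UID-1"`/`"UID-2"` identifiers and the `"turbo-mode"` capability) are hypothetical, not from the patent:

```python
def make_enablement(uid, capabilities):
    # Hypothetical enablement record binding capabilities to a unit ID.
    return {"uid": uid, "capabilities": list(capabilities)}

def transform_enablement(record, old_uid, new_uid):
    # Trusted-environment step: rebind the enablement data to the new unit
    # by replacing the first unique ID with the second.
    assert record["uid"] == old_uid
    rebound = dict(record)
    rebound["uid"] = new_uid
    return rebound

# Repair flow: transfer to servicer, erase, transform, store on replacement.
unit1 = {"persistent_memory": make_enablement("UID-1", ["turbo-mode"])}
servicer_copy = unit1["persistent_memory"]      # transfer to the servicer
unit1["persistent_memory"] = None               # erase the first persistent memory
unit2 = {"persistent_memory": transform_enablement(servicer_copy, "UID-1", "UID-2")}
```

The replacement unit ends up with the same capabilities, now bound to its own identification number.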

US Pat. No. 10,169,251

LIMITED EXECUTION OF SOFTWARE ON A PROCESSOR

Massachusetts Institute o...

1. A method for limiting execution of an encrypted computer program on a secure processor comprising:
executing a first set of instructions encoding a test for determining whether a value of a register of the secure processor belongs to a set of valid register values encoded in the encrypted computer program, execution of the first set of instructions causing the secure processor to:
destructively read a first register value from the register of the secure processor, the register of the secure processor configured to provide a destructive read of its value such that repeated reads of a same value of the register are prevented, and
determining whether the first register value belongs to the set of valid register values encoded in the encrypted computer program; and
preventing execution of further instructions of the encrypted computer program based on a determination that the first register value does not belong to the set of valid register values encoded in the encrypted computer program,
wherein the set of valid register values is based on a value destructively read from the register prior to the destructive read of the first register value.
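A destructive-read register consumed by a validity test can be simulated in a few lines of Python. This is an illustrative sketch of the mechanism, not the patented hardware; the class and function names are invented:

```python
class DestructiveRegister:
    # A register whose value is consumed on read, so the same value
    # can never be presented twice (no replay of a valid value).
    def __init__(self, value):
        self._value = value

    def read(self):
        value, self._value = self._value, None   # destructive read
        return value

def check_and_run(register, valid_values):
    # Gate further execution on the destructively read register value.
    return "running" if register.read() in valid_values else "halted"

reg = DestructiveRegister(3)
first = check_and_run(reg, {1, 2, 3})    # value 3 is valid
second = check_and_run(reg, {1, 2, 3})   # replay attempt: value was consumed
```

Because the read is destructive, a second check sees an empty register and execution halts.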

US Pat. No. 10,169,250

METHOD AND APPARATUS FOR CONTROLLING ACCESS TO A HASH-BASED DISK

TENCENT TECHNOLOGY (SHENZ...

1. A method for controlling access to a hash-based disk, the disk comprising a storage object, the storage object comprising a set of records and a hash value, the method comprising:
constructing a Bloom filter for the storage object, the Bloom filter comprising an initial bit and a plurality of Bloom filter bits;
determining whether the storage object is accessed for a first time, wherein when the initial bit is a predefined value, the storage object is determined as being accessed for the first time;
when the storage object is determined as not being accessed for the first time, reading the set of records in the storage object;
filtering an access request to the storage object using the Bloom filter;
counting a number of unnecessary accesses and a number of read accesses, wherein the number of unnecessary accesses is a number of access requests filtered out by the Bloom filter, and the number of read accesses is a number of access requests to the storage object;
calculating a ratio of unnecessary accesses to the storage object based on the number of unnecessary accesses and the number of read accesses, wherein the ratio of unnecessary accesses is calculated by dividing the number of unnecessary accesses by the number of read accesses; and
when the ratio of unnecessary accesses is within a threshold range, selecting the storage object and allocating the Bloom filter to a second storage object.
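The filtering and unnecessary-access accounting in this claim can be sketched in Python. This is a toy model under assumed parameters (128-bit filter, 3 SHA-256-derived hash positions), with invented names; it is not the patented implementation:

```python
import hashlib

class BloomFilter:
    def __init__(self, num_bits=128, num_hashes=3):
        self.bits = 0
        self.num_bits = num_bits
        self.num_hashes = num_hashes

    def _positions(self, key):
        # Derive hash positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        return all(self.bits >> pos & 1 for pos in self._positions(key))

class StorageObject:
    # Counts total read accesses and the accesses the filter proved unnecessary.
    def __init__(self, records):
        self.records = set(records)
        self.bloom = BloomFilter()
        for record in records:
            self.bloom.add(record)
        self.unnecessary = 0
        self.reads = 0

    def access(self, key):
        self.reads += 1                    # every request counts as a read access
        if not self.bloom.might_contain(key):
            self.unnecessary += 1          # filtered out: disk access avoided
            return None
        return key if key in self.records else None

    def unnecessary_ratio(self):
        return self.unnecessary / self.reads if self.reads else 0.0
```

When `unnecessary_ratio()` falls within a chosen threshold range, the filter is doing little work for this object and could be reallocated to another one, as the claim describes.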

US Pat. No. 10,169,249

ADJUSTING ACTIVE CACHE SIZE BASED ON CACHE USAGE

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing a cache in at least one memory device in a computer system to cache tracks stored in a storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:
maintaining an active cache list indicating tracks in an active cache comprising a first portion of the at least one memory device to cache the tracks in the storage during computer system operations;
maintaining an inactive cache list indicating tracks demoted from the active cache;
during caching operations, gathering information on active cache hits comprising access requests to tracks indicated in the active cache list and inactive cache hits comprising access requests to tracks indicated in the inactive cache list;
staging a track requested by an access request indicated in the inactive cache list from the storage to the active cache;
adding indication of the staged track to the active cache list; and
using the gathered information to determine whether to provision a second portion of the at least one memory device unavailable to cache user data to be part of the active cache for use to cache user data during the computer system operations.
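The active/inactive list bookkeeping above resembles a ghost-list cache. A minimal Python sketch (invented names, simplistic LRU and growth policy that are assumptions, not the patent's actual criteria):

```python
class AdaptiveCache:
    # The active list caches data; the inactive (ghost) list remembers only
    # the identities of demoted tracks. Frequent ghost hits suggest the
    # active cache is too small and spare memory should be provisioned.
    def __init__(self, active_capacity):
        self.active_capacity = active_capacity
        self.active = []            # LRU order: index 0 is oldest
        self.inactive = []          # demoted track ids only (no data)
        self.active_hits = 0
        self.inactive_hits = 0

    def access(self, track):
        if track in self.active:
            self.active_hits += 1
            return
        if track in self.inactive:
            self.inactive_hits += 1
            self.inactive.remove(track)               # stage back into active
        if len(self.active) >= self.active_capacity:
            self.inactive.append(self.active.pop(0))  # demote oldest track
        self.active.append(track)

    def should_grow(self):
        # Hypothetical policy: grow when ghost hits outnumber real hits.
        return self.inactive_hits > self.active_hits
```

A hit on the inactive list means the track would have been a cache hit had the active cache been larger, which is exactly the signal the claim gathers.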

US Pat. No. 10,169,248

DETERMINING CORES TO ASSIGN TO CACHE HOSTILE TASKS

INTERNATIONAL BUSINESS MA...

1. A computer program product for dispatching tasks in a computer system having a plurality of cores, wherein each core is comprised of a plurality of processing units and at least one cache memory shared by the processing units on the core to cache data from a memory, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:
processing a task to determine one of the cores on which to dispatch the task;
determining whether the processed task is classified as cache hostile, wherein a task is classified as cache hostile when the task accesses more than a threshold number of memory address ranges in the memory; and
dispatching the processed task to at least one of the cores assigned to process cache hostile tasks.
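The classify-then-dispatch step can be sketched in Python. The threshold value, core representation, and least-loaded tie-break are all assumptions for illustration:

```python
CACHE_HOSTILE_THRESHOLD = 4   # hypothetical tuning value, not from the patent

def is_cache_hostile(accessed_ranges, threshold=CACHE_HOSTILE_THRESHOLD):
    # A task touching many distinct address ranges tends to keep evicting
    # other tasks' working sets from the shared per-core cache.
    return len(accessed_ranges) > threshold

def dispatch(accessed_ranges, hostile_cores, other_cores):
    # Steer cache-hostile tasks onto cores reserved for them.
    cores = hostile_cores if is_cache_hostile(accessed_ranges) else other_cores
    return min(cores, key=lambda core: core["load"])   # least-loaded core
```

Confining hostile tasks to dedicated cores keeps their cache churn away from cache-friendly workloads.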

US Pat. No. 10,169,247

DIRECT MEMORY ACCESS BETWEEN AN ACCELERATOR AND A PROCESSOR USING A COHERENCY ADAPTER

International Business Ma...

8. An adapter for direct memory access (‘DMA’) between a processor and an accelerator, the processor coupled to the accelerator by the adapter, the adapter configured to carry out the steps of:
providing, by the adapter, a translation tag (‘XTAG’) to the accelerator; and
responsive to receiving a DMA instruction for a DMA transfer, wherein the DMA instruction comprises the XTAG, generating, by the adapter, a DMA instruction comprising a real address based on the XTAG.

US Pat. No. 10,169,246

REDUCING METADATA SIZE IN COMPRESSED MEMORY SYSTEMS OF PROCESSOR-BASED SYSTEMS

QUALCOMM Incorporated, S...

1. A compressed memory system of a processor-based system, comprising:
a metadata circuit comprising a plurality of metadata entries each having a bit size of N bits omitted from a bit size of a full physical address addressable to a system memory, the system memory comprising a plurality of 2^N compressed data regions each comprising a plurality of memory blocks each associated with a full physical address, and a set of free memory lists of a plurality of 2^N sets of free memory lists, each corresponding to a plurality of free memory blocks of the plurality of memory blocks;
the metadata circuit configured to associate a plurality of virtual addresses to a plurality of abbreviated physical addresses stored in the plurality of metadata entries, each abbreviated physical address among the plurality of abbreviated physical addresses omitting N upper bits from a corresponding full physical address addressable to the system memory; and
a compression circuit configured to:
receive a memory access request comprising a virtual address;
select a compressed data region of the plurality of 2^N compressed data regions in the system memory, and a set of free memory lists of the plurality of 2^N sets of free memory lists based on a modulus of the virtual address and 2^N;
retrieve an abbreviated physical address corresponding to the virtual address from the metadata circuit; and
perform a memory access operation on a memory block of the plurality of memory blocks associated with the abbreviated physical address in the selected compressed data region.
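The address arithmetic behind this claim can be illustrated with toy parameters. The sketch assumes (this is an assumption, not stated in the claim text above) that the N omitted upper bits of a physical address coincide with the region index selected by the modulus; the 8-bit address width and N=2 are invented for illustration:

```python
N = 2                      # upper bits omitted from stored physical addresses
ADDR_BITS = 8              # full physical address width (toy value)
NUM_REGIONS = 1 << N       # 2^N compressed data regions / free-list sets

def region_index(virtual_address):
    # Select the region and free-list set as virtual_address mod 2^N.
    return virtual_address % NUM_REGIONS

def abbreviate(full_physical_address):
    # Drop the N upper bits; the metadata entry stores only the remainder.
    return full_physical_address & ((1 << (ADDR_BITS - N)) - 1)

def expand(abbreviated, region):
    # Reconstruct the full address from the region index + abbreviated bits.
    return (region << (ADDR_BITS - N)) | abbreviated
```

Storing only `ADDR_BITS - N` bits per metadata entry is where the claimed metadata-size reduction comes from.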

US Pat. No. 10,169,245

LATENCY BY PERSISTING DATA RELATIONSHIPS IN RELATION TO CORRESPONDING DATA IN PERSISTENT MEMORY

Intel Corporation, Santa...

1. An apparatus comprising:
a memory unit for a processor, the memory unit to include a prefetcher to:
detect a relationship between two or more addresses of a byte-addressable random access persistent memory based on an access pattern to the persistent memory by an application executed by the processor;
cause information to be stored in a pre-allocated portion of the persistent memory that indicates the relationship between the two or more addresses; and
retrieve the information stored in the pre-allocated portion when the application subsequently accesses an address from among the two or more addresses to cause data to be prefetched from the two or more addresses of the persistent memory based on the relationship indicated in the information.
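A minimal Python sketch of detect-persist-prefetch, with all names invented and a naive two-access pattern detector standing in for whatever detection logic the patent actually uses:

```python
class RelationshipPrefetcher:
    # Persist detected address relationships in a pre-allocated metadata
    # region, then use them to prefetch related addresses on later accesses.
    def __init__(self):
        self.persistent = {}        # stands in for the pre-allocated region
        self.history = []

    def record_access(self, address, window=2):
        self.history.append(address)
        recent = self.history[-window:]
        if len(recent) == window:
            # Detected pattern: these addresses are accessed together.
            self.persistent.setdefault(recent[0], set()).update(recent[1:])

    def prefetch_targets(self, address):
        # On a later access, retrieve the persisted relationship.
        return sorted(self.persistent.get(address, set()))
```

Because the relationship metadata lives in persistent memory alongside the data, it survives restarts and can shorten latency on the very first accesses of a rerun.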

US Pat. No. 10,169,244

CONTROLLING ACCESS TO PAGES IN A MEMORY IN A COMPUTING DEVICE

ADVANCED MICRO DEVICES, I...

1. A method for handling memory accesses by virtual machines in a computing device, the computing device including a reverse map table (RMT) and a separate guest accessed pages table (GAPT) for each virtual machine, the RMT including a plurality of entries, each entry including information for identifying a virtual machine that is permitted to access an associated page of data in a memory, and each GAPT including a record of pages being accessed by a corresponding virtual machine, the method comprising:
receiving, in a table walker, a request to translate a virtual address to a system physical address, the request originating from a given virtual machine;
acquiring, from a corresponding guest page table, a guest physical address associated with the virtual address, and, from a nested page table, a system physical address associated with the virtual address;
checking, based on the guest physical address and the system physical address, at least one of the RMT and a corresponding GAPT to determine whether the given virtual machine has access to a corresponding page; and
when the given virtual machine does not have access to the corresponding page, terminating translating the virtual address to the system physical address.
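A simplified Python sketch of one table-walk step (dict-based toy page tables; the RMT check is reduced to an ownership lookup on the system physical address, which is a simplification of the claim's combined RMT/GAPT check):

```python
def translate(walker, vm_id, virtual_address):
    # Guest page table: virtual address -> guest physical address.
    guest_pa = walker["guest_page_table"][virtual_address]
    # Nested page table: virtual address -> system physical address.
    system_pa = walker["nested_page_table"][virtual_address]
    if walker["rmt"].get(system_pa) != vm_id:
        return None                       # terminate: page owned by another VM
    walker["gapt"][vm_id].add(guest_pa)   # record the page this guest accessed
    return system_pa
```

Returning `None` models terminating the translation when the reverse map table shows the page belongs to a different virtual machine.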

US Pat. No. 10,169,243

REDUCING OVER-PURGING OF STRUCTURES ASSOCIATED WITH ADDRESS TRANSLATION

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing purging of structure entries associated with address translation, said computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
determining, by a processor, whether a block of memory of a computing environment for which a purge request has been received is backing an address translation structure, the block of memory having an address to identify the block of memory; and
based on determining the block of memory is not backing the address translation structure, performing an action to selectively purge from a structure associated with address translation entries created from translating the address that are at one level of address translation, wherein other entries at one or more other levels of address translation created from translating the address remain in the structure.

US Pat. No. 10,169,242

HETEROGENEOUS PACKAGE IN DIMM

SK Hynix Inc., Gyeonggi-...

1. A memory system comprising:
a memory module including:
a first memory device including a first memory and a first memory controller controlling the first memory to store data; and
a second memory device including a second memory and a second memory controller controlling the second memory to store data; and
a processor executing an operating system (OS) and an application to access a data storage memory through the first and second memory devices,
wherein the first and second memories are separated from the processor,
wherein the processor accesses the second memory device through the first memory device,
wherein the first memory controller transfers a signal between the processor and the second memory device based on at least one of values of a memory selection field and a handshaking information field included in the signal,
wherein the memory module includes one or more memory stacks,
wherein one or more volatile memories as the first memory, one or more non-volatile memories as the second memory, and the first and second memory controllers are stacked in the memory stacks,
wherein the first and second memory devices stacked in the memory stacks are communicatively coupled to each other through a through-via,
wherein the first and second memory controllers interface with the first and second memories and the processor through the through-via.

US Pat. No. 10,169,241

MANAGING MEMORY ALLOCATION BETWEEN INPUT/OUTPUT ADAPTER CACHES

International Business Ma...

1. A method for managing memory allocation of caching storage input/output adapters (IOAs) in a redundant caching configuration, the method comprising:
detecting a first cache of a first IOA storing a first amount of data that satisfies a memory shortage threshold of the first cache;
transmitting a first request for extra memory for the first cache in response to detecting that the first amount of data satisfies the memory shortage threshold, wherein the first request is transmitted to a plurality of IOAs;
detecting a second cache of a second IOA of the plurality of IOAs storing a second amount of data that satisfies a memory dissemination threshold of the second cache;
allocating memory from the second cache to the first cache in response to both the first request and detecting that the second amount of data satisfies the memory dissemination threshold; and
wherein the first cache has a memory dissemination threshold and the second cache has a memory shortage threshold, further comprising:
detecting the second cache storing a third amount of data that satisfies a memory shortage threshold of the second cache;
identifying an outstanding request for extra memory, wherein the outstanding request for extra memory is a prior request for extra memory from another IOA of the plurality of IOAs, wherein the prior request has not resulted in an allocation of memory to a cache of the requesting IOA; and
deferring a new request for extra memory for the second cache in response to identifying the outstanding request for extra memory from the another IOA.

US Pat. No. 10,169,240

REDUCING MEMORY ACCESS BANDWIDTH BASED ON PREDICTION OF MEMORY REQUEST SIZE

QUALCOMM Incorporated, S...

1. A method of managing memory access bandwidth, the method comprising:
determining a size of a used portion of a first cache line stored in a first cache which is accessed by a processor, wherein the first cache is a level-two (L2) cache comprising at least a first accessed bit corresponding to a first half cache line size data of the first cache line and a second accessed bit corresponding to a second half cache line size data of the first cache line, wherein determining the size of the used portion of the first cache line is based on which one or more of the first accessed bit or second accessed bit is/are set;
determining which one or more of the first accessed bit or the second accessed bit is set when the first cache line is evicted from the L2 cache;
for a first memory region in a memory comprising the first cache line, selectively updating a prediction counter for making predictions of sizes of cache lines to be fetched from the first memory region, based on the size of the used portion, wherein selectively updating the prediction counter comprises:
incrementing the prediction counter by a first amount when only one of the first accessed bit or the second accessed bit is set when the first cache line is evicted from the L2 cache;
decrementing the prediction counter by a second amount when both the first accessed bit and the second accessed bit are set when the first cache line is evicted from the L2 cache; or
decrementing the prediction counter by a third amount when a request is received from the processor at the first cache for a portion of the first cache line which was not fetched; and
adjusting a memory access bandwidth between the processor and the memory to correspond to the sizes of the cache lines to be fetched, comprising at least one of:
adjusting the memory access bandwidth for fetching a second cache line from the first memory region to correspond to the half cache line size if the value of the prediction counter is greater than zero; or
adjusting the memory access bandwidth for fetching a second cache line from the first memory region to correspond to the full cache line size if the value of the prediction counter is less than or equal to zero.
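The per-region prediction counter can be sketched in Python. The increment/decrement amounts and the 128-byte line size are hypothetical defaults, not values from the claim:

```python
class BandwidthPredictor:
    # Per-memory-region counter: positive -> predict half-line fetches,
    # zero or negative -> predict full-line fetches.
    def __init__(self, inc=1, dec_full=1, dec_miss=2):
        self.counter = 0
        self.inc, self.dec_full, self.dec_miss = inc, dec_full, dec_miss

    def on_evict(self, first_half_used, second_half_used):
        # Called when a line is evicted from the L2 cache.
        if first_half_used != second_half_used:   # only one half was touched
            self.counter += self.inc
        elif first_half_used and second_half_used:
            self.counter -= self.dec_full         # whole line was needed

    def on_unfetched_request(self):
        # The processor asked for a portion a half-line fetch had skipped.
        self.counter -= self.dec_miss

    def fetch_size(self, full_line=128):
        return full_line // 2 if self.counter > 0 else full_line
```

The heavier penalty for an unfetched-portion request reflects that under-fetching costs an extra memory round trip, while over-fetching only wastes bandwidth.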

US Pat. No. 10,169,239

MANAGING A PREFETCH QUEUE BASED ON PRIORITY INDICATIONS OF PREFETCH REQUESTS

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing prefetch queues, said computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
obtaining a prefetch request based on executing a prefetch instruction included within a program and determining that data at a memory location designated by the prefetch instruction is not located in a selected cache, the prefetch request having a priority assigned thereto, the priority indicating confidence by the program in whether the prefetch request will be used;
determining, based on obtaining the prefetch request, whether the prefetch request may be placed on a prefetch queue, the determining comprising:
determining whether the prefetch queue is full;
checking, based on determining the prefetch queue is full, whether the priority of the prefetch request is considered a high priority;
determining, based on the checking indicating the priority of the prefetch request is considered a high priority, whether another prefetch request on the prefetch queue may be removed;
removing the other prefetch request from the prefetch queue, based on determining the other prefetch request may be removed; and
adding the prefetch request to the prefetch queue, based on removing the other prefetch request.
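The full-queue admission logic above maps directly onto a small Python class. Two priority levels and the first-low-priority victim policy are simplifying assumptions:

```python
class PrefetchQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []                     # (address, priority) pairs

    def enqueue(self, address, priority):
        # A full queue admits only high-priority requests, and only by
        # evicting a queued low-priority request.
        if len(self.entries) < self.capacity:
            self.entries.append((address, priority))
            return True
        if priority != "high":
            return False                      # full queue drops low priority
        for i, (_, queued) in enumerate(self.entries):
            if queued == "low":               # removable victim found
                del self.entries[i]
                self.entries.append((address, priority))
                return True
        return False                          # everything queued is high priority
```

The priority thus encodes the program's confidence that the prefetched data will actually be used, and low-confidence requests are the ones sacrificed under pressure.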

US Pat. No. 10,169,238

MEMORY ACCESS FOR EXACTLY-ONCE MESSAGING

International Business Ma...

1. A computer-implemented method executed for enabling exactly-once messaging, the method comprising:
transmitting a plurality of messages from a first location to a second location via read requests and write requests made to a memory;
controlling the read and write requests by a memory controller including a read queue, a write queue, and a lock address list, each slot of the lock address list associated with a lock bit;
initiating the read requests from the memory via the memory controller when associated lock bits are enabled;
initiating the write requests from the memory via the memory controller when associated lock bits are disabled; and
enabling and disabling the lock bits after the initiation of the write and read requests, respectively.
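The lock-bit gating can be sketched in Python (invented class, queues and list-walk omitted for brevity):

```python
class LockedSlotMemory:
    # Each slot has a lock bit: writes require the bit to be disabled,
    # reads require it enabled, and the bit toggles after each operation,
    # so every message written is read exactly once.
    def __init__(self, num_slots):
        self.data = [None] * num_slots
        self.locked = [False] * num_slots

    def write(self, slot, message):
        if self.locked[slot]:
            return False                 # unread message still pending
        self.data[slot] = message
        self.locked[slot] = True         # enable the read
        return True

    def read(self, slot):
        if not self.locked[slot]:
            return None                  # nothing new to read
        self.locked[slot] = False        # disable re-reads, enable next write
        return self.data[slot]
```

Rejecting a write while the bit is set prevents overwriting an undelivered message; clearing it on read prevents duplicate delivery.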

US Pat. No. 10,169,237

IDENTIFICATION OF A COMPUTING DEVICE ACCESSING A SHARED MEMORY

International Business Ma...

1. A method for identifying, in a system including two or more computing devices that are able to communicate with each other, with each computing device having a cache and connected to a corresponding memory, the computing device accessing one of the memories, the method comprising:monitoring memory access to any of the memories, wherein monitoring memory access comprises identifying respective computing devices that access respective memories, wherein monitoring memory access to any of the memories further comprises:
collecting an access time, a type of command, and a first memory address from a first memory read access to a first memory based on information acquired from a first probe attached to a first bus connecting the first memory to a first computing device, wherein the first memory is remote from the first computing device, and wherein a cache line in the first computing device is in an invalid state;
collecting a third access time, a third type of command, and a third memory address from a first memory write access to the first memory based on information acquired from the first probe, wherein the cache line in the first computing device enters a modified state;
monitoring cache coherency commands between computing devices, wherein monitoring cache coherency commands further comprises:
monitoring a first cache coherency command sent from the first computing device to at least a second computing device at a first cache coherency time based on information acquired from a second probe attached to a second interconnect connecting the first computing device and the second computing device, wherein the second probe collects the cache coherency time, a type of command, a second memory address, and an identification of the computing device issuing the first cache coherency command; and
identifying the computing device accessing one of the memories by using information related to the memory access and cache coherency commands, wherein identifying the computing device further comprises:
identifying the first computing device as the device that accessed the first memory based on the first memory address being equivalent to the second memory address, and further based on a difference between the access time and the first cache coherency time being smaller than a difference between the access time and any other cache coherency time;
wherein the first computing device is identified as the device performing the first memory write access based on the third memory address being identical to the first memory address and further based on the first computing device performing the first memory read access to the first memory;
wherein the system is configured in a non-uniform memory access (NUMA) design, wherein each memory comprises at least one dual in-line memory module (DIMM);
wherein the access time, the type of command, the first memory address, the cache coherency time, the second memory address, and the identification of the computing device issuing the first cache coherency command are stored in a hard disk drive (HDD) accessible by the system; and
wherein the system utilizes a modified, exclusive, shared, invalid (MESI) cache coherence protocol.

US Pat. No. 10,169,236

CACHE COHERENCY

ARM Limited, Cambridge (...

1. A cache coherency controller comprising:
a directory indicating, for memory addresses cached by one or more of a group of one or more cache memories connectable in a coherent cache structure, which of the cache memories are caching those memory addresses; and
control circuitry configured to detect a directory entry relating to a memory address to be accessed so as to coordinate, amongst the cache memories, an access to a memory address by one of the cache memories or a coherent agent in instances when the directory entry indicates that another of the cache memories is caching that memory address;
the control circuitry being responsive to status data indicating whether each cache memory in the group is currently subject to cache coherency control so as to take into account, in the detection of the directory entry relating to the memory address to be accessed, only those cache memories in the group which are currently subject to cache coherency control.

US Pat. No. 10,169,235

METHODS OF OVERRIDING A RESOURCE RETRY

Apple Inc., Cupertino, C...

1. An apparatus, comprising:
a memory configured to implement a first queue and a second queue, wherein the first queue has a plurality of entries, each configured to store a memory access instruction having one of a set of priority levels, wherein the second queue has fewer entries than the first queue, and wherein each entry in the second queue corresponds to one of the set of priority levels; and
a control circuit configured to:
determine an availability of a memory resource associated with a given memory access instruction, wherein the memory resource associated with the given memory access instruction is included in a plurality of memory resources;
determine a particular priority level of the given memory access instruction in response to a determination that the memory resource associated with the given memory access instruction is unavailable; and
add the given memory access instruction to the second queue in response to a determination that an entry in the second queue corresponding to the particular priority level is available, and that the particular priority level is greater than a respective priority level of each memory access instruction currently in the second queue.
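The second queue's admission rule (one slot per priority level, and the newcomer must outrank everything already queued) can be sketched in Python; the four-level default and numeric-priority encoding are assumptions:

```python
class RetryOverrideQueue:
    # Second queue with one slot per priority level; a stalled instruction
    # is admitted only if its level's slot is free and it outranks every
    # instruction currently queued.
    def __init__(self, priority_levels=4):
        self.slots = [None] * priority_levels   # index == priority level

    def try_add(self, instruction, priority):
        if self.slots[priority] is not None:
            return False                        # slot for this level taken
        occupied = [p for p, ins in enumerate(self.slots) if ins is not None]
        if occupied and priority <= max(occupied):
            return False                        # must exceed all queued levels
        self.slots[priority] = instruction
        return True
```

The one-slot-per-level layout keeps the override queue tiny while still letting the highest-priority stalled instruction bypass the normal retry path.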

US Pat. No. 10,169,234

TRANSLATION LOOKASIDE BUFFER PURGING WITH CONCURRENT CACHE UPDATES

International Business Ma...

1. A method for purging a translation lookaside buffer concurrently with cache updates in a computer system with a translation lookaside buffer and a primary cache memory having a first cache line that contains a virtual address field and a data field, the method comprising:
initiating a translation lookaside buffer purge process;
initiating a cache update process;
determining that the translation lookaside buffer purge process and the cache update process each perform a write operation to the first cache line concurrently;
in response to the determining:
overwriting, by the cache update process, the data field of the first cache line of the primary cache memory,
restoring the translation lookaside buffer purge process from a current state to an earlier state, and
restarting the translation lookaside buffer purge process from the earlier state.

US Pat. No. 10,169,233

TRANSLATION LOOKASIDE BUFFER PURGING WITH CONCURRENT CACHE UPDATES

International Business Ma...

1. A computer program product for purging a translation lookaside buffer concurrently with cache updates in a computer system with a translation lookaside buffer and a primary cache memory having a first cache line that contains a virtual address field and a data field, comprising a computer readable storage medium having stored thereon program instructions programmed to perform:
initiating a translation lookaside buffer purge process;
initiating a cache update process;
determining that the translation lookaside buffer purge process and the cache update process each perform a write operation to the first cache line concurrently;
in response to determining that the translation lookaside buffer purge process and the cache update process each perform a write operation to the first cache line concurrently:
overwriting, by the cache update process, the data field of the first cache line of the primary cache memory,
restoring the translation lookaside buffer purge process from a current state to an earlier state, and
restarting the translation lookaside buffer purge process from the earlier state.

US Pat. No. 10,169,232

ASSOCIATIVE AND ATOMIC WRITE-BACK CACHING SYSTEM AND METHOD FOR STORAGE SUBSYSTEM

Seagate Technology LLC, ...

1. A method for caching in a data storage subsystem, comprising:
receiving a write request indicating one or more logical addresses and one or more data blocks to be written correspondingly to the one or more logical addresses;
in response to the write request, allocating one or more physical locations in a cache memory from a free list;
storing the one or more data blocks in the one or more physical locations;
determining a hash table slot in response to a logical address;
determining whether any of a plurality of entries in the hash table slot identifies the logical address;
storing identification information identifying the one or more physical locations in one or more data structures;
updating an entry in a hash table to include a pointer to the one or more data structures;
maintaining a count of data access requests, including read requests, pending against each physical location in the cache memory having valid data; and
returning a physical location to the free list when the count indicates no data access requests are pending against the physical location.
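The free-list allocation and pending-request counting can be sketched in Python. The hash-table slot machinery is collapsed into a plain dict, and all names are invented:

```python
class WriteBackCache:
    # Physical slots come from a free list and return to it only when
    # no access requests remain pending against them.
    def __init__(self, num_slots):
        self.free_list = list(range(num_slots))
        self.blocks = {}            # physical slot -> data block
        self.table = {}             # logical address -> physical slot
        self.pending = {}           # physical slot -> in-flight request count

    def write(self, logical, block):
        slot = self.free_list.pop()           # allocate from the free list
        self.blocks[slot] = block
        self.table[logical] = slot            # hash-table entry -> slot
        self.pending[slot] = 0
        return slot

    def begin_read(self, logical):
        slot = self.table[logical]
        self.pending[slot] += 1               # count the pending request
        return self.blocks[slot]

    def end_read(self, logical):
        self.pending[self.table[logical]] -= 1

    def release(self, logical):
        # Return the slot to the free list only once nothing is pending.
        slot = self.table[logical]
        if self.pending[slot] == 0:
            del self.table[logical]
            self.free_list.append(slot)
            return True
        return False
```

The pending counter is what makes reclamation safe: a slot with an in-flight read cannot be recycled out from under the reader.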

US Pat. No. 10,169,231

EFFICIENT AND SECURE DIRECT STORAGE DEVICE SHARING IN VIRTUALIZED ENVIRONMENTS

International Business Ma...

1. A method of providing direct storage device sharing in a virtualized environment, comprising:
a storage controller assigning each of a plurality of virtual functions of a plurality of guests an associated memory area of a physical memory, including at a first boot of one of the guests, the storage controller receiving a request from the one of the guests, said request including an authentication key, and in response to the request, triggering an interrupt of a physical function to a hypervisor;
the storage controller providing the guests with direct access, via the authentication key and without intervention of the hypervisor, to a specified storage area in the storage device, including the storage controller receiving from the hypervisor a configuration command over the physical function, the configuration command setting up hardware in the storage controller to allocate storage in the storage device for said one of the guests and to provide a mapping function for the authentication key to provide the one of the guests with access to the specified storage area in the storage device; and
the guests directly accessing the specified storage area over the authentication key, including
said one of the guests directly accessing the storage device over the authentication key, including said one guest sending to the storage controller, over one of the virtual functions, the authentication key in a command block requesting access to the storage area allocated to said one guest; and
the storage controller receiving the command block requesting access from the one of the guests, and setting up a mapping for the authentication key to provide said one of the guests with direct access, without intervention of the hypervisor, only to the specified storage area in the storage device set up for the authentication key.

US Pat. No. 10,169,230

METHOD FOR ACCESS TO ALL THE CELLS OF A MEMORY AREA FOR PURPOSES OF WRITING OR READING DATA BLOCKS IN SAID CELLS

MORPHO, Issy-les-Mouline...

1. An access method to access to cells in a memory area of a card for purposes of writing or reading data blocks in said cells, the memory area comprising N+1 separately addressable, physically contiguous cells, where N is an integer greater than or equal to 0;
the address of each cell being between 0 and N, where each address is unique;
wherein said method comprises:
for each access time to said cells in said memory area to be accessed, the total number of access times being N+1 per memory area, performing a process of determining an address of a cell of the memory area to be accessed at said access time, the address determined for an access time not being once again determined for another access time, the address determination process comprising:
pseudorandomly determining a pseudorandom bit which can thus take either a first value or a second value;
and testing the value taken by said pseudorandom bit that switches said process either to:
(1) in the event of the first value, determining an index as being equal to the value of a first index, followed by incrementing a unit of said first index modulo N+1, or
(2) in the event of the second value, determining said index as being equal to the value of a second index, followed by decrementing a unit of said second index modulo N+1, the value of said index being the value of said address of the cell of the memory area to be accessed at said access time.
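
The address-determination process claimed above can be sketched in Python. The starting values of the two indices (0 and N) are an assumption here; the claim leaves them open. Because the first index only ever walks upward from 0 and the second only downward from N, the two together emit every address exactly once over the N+1 access times:

```python
import random

def access_order(n, rng=None):
    """Yield the addresses 0..N of an (N+1)-cell memory area in a
    pseudorandom order that still visits every cell exactly once."""
    rng = rng or random.Random()
    first, second = 0, n        # assumed starting values; the claim leaves them open
    for _ in range(n + 1):      # total number of access times is N+1
        if rng.getrandbits(1):  # first value of the pseudorandom bit
            addr = first
            first = (first + 1) % (n + 1)    # increment a unit modulo N+1
        else:                   # second value
            addr = second
            second = (second - 1) % (n + 1)  # decrement a unit modulo N+1
        yield addr
```

Whatever bit sequence the generator produces, the emitted addresses form a permutation of 0..N, which is the property the claim is after: full coverage of the memory area with an access order an observer cannot predict.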

US Pat. No. 10,169,229

PROTOCOLS FOR EXPANDING EXISTING SITES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by computing device within a dispersed storage and task network (DSTN) including at least one site housing a plurality of current distributed storage and task (DST) execution units, the method comprising:
determining that a plurality of new DST execution units are to be added to the at least one site;
in response to determining that the plurality of new DST execution units are to be added to the at least one site, assigning the new DST execution units to positions within the at least one site to limit a number of DST execution units through which data must be moved during migration of data to the new DST execution units to a maximum number, the maximum number being less than the number of current DST execution units included in the at least one site, wherein assigning the new DST execution units to positions within the at least one site includes:
obtaining first address ranges assigned to the plurality of current DST execution units;
determining a common magnitude of second address ranges to be assigned to the plurality of new DST execution units and the plurality of current DST execution units;
determining insertion points for each of the plurality of new DST execution units, wherein the insertion points are selected to intersperse the plurality of new DST execution units among the current DST execution units in a pattern arranged so that each current DST execution unit is no more than a predetermined number of current DST execution units distant from one of the plurality of new DST execution units;
determining transfer address ranges, where transfer address ranges correspond to at least a portion of the first address ranges to be transferred to the plurality of new DST execution units in accordance with the insertion points; and
facilitating transfer of address range assignments from particular current DST execution units to particular new DST execution units.
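
A minimal reading of the insertion-point step, assuming an even-spacing policy (the claim does not fix one), is to place each new unit after a proportional run of current units so that no current unit is far from a new one:

```python
def insertion_points(num_current, num_new):
    """Positions (counted in current DST execution units from the start
    of the site) after which to insert each new unit, spacing the new
    units evenly so every current unit is close to one of them.
    A hypothetical policy, not the patent's exact placement rule."""
    return [round((i + 1) * num_current / (num_new + 1))
            for i in range(num_new)]
```

For example, adding 2 new units to a site of 6 current units yields insertion points after the 2nd and 4th current units, so no current unit is more than two positions from a new unit.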

US Pat. No. 10,169,228

MULTI-SECTION GARBAGE COLLECTION

International Business Ma...

1. A computer program product for facilitating garbage collection within a computing environment, the computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
obtaining processing control by a handler executing within a processor of the computer environment based on execution of a load instruction and a determination that an object pointer to be loaded indicates a location within a selected portion of memory undergoing a garbage collection process;
obtaining by the handler an image of the load instruction and calculating an object pointer address from the image, the object pointer address specifying a location of the object pointer, the object pointer indicating a location of an object pointed to by the object pointer;
determining by the handler whether the object pointer is to be modified;
modifying by the handler, based on determining the object pointer is to be modified, the object pointer to provide a modified object pointer; and
storing the modified object pointer in a selected location.

US Pat. No. 10,169,227

MEMORY CONTROLLER AND MEMORY SYSTEM INCLUDING THE SAME

Samsung Electronics Co., ...

1. A read method of a nonvolatile memory device, comprising:
the nonvolatile memory device determining whether a read command input to the nonvolatile memory device is a read command for a word line having an upper page program state;
when the input read command is determined not to be a read command for a word line having an upper page program state, the nonvolatile memory device performing a first lower page read operation on the word line;
when the input read command is determined to be a read command for a word line having an upper page program state, the nonvolatile memory device determining whether a page address input together with the input read command is a lower page address;
when the page address input together with the input read command is determined to be the lower page address, the nonvolatile memory device performing a second lower page read operation on the word line; and
when the page address input together with the input read command is determined not to be the lower page address, the nonvolatile memory device performing an upper page read operation on the word line.
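
The claim is a three-way decision tree over two conditions; a sketch follows (the operation names are illustrative, not from the patent):

```python
def select_read_operation(word_line_has_upper_page_state, is_lower_page_address):
    """Choose among the three read operations the claim distinguishes,
    based on the word line's program state and the input page address."""
    if not word_line_has_upper_page_state:
        return "first_lower_page_read"    # word line has no upper page state
    if is_lower_page_address:
        return "second_lower_page_read"   # upper page state + lower page address
    return "upper_page_read"              # upper page state + upper page address
```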

US Pat. No. 10,169,226

PERSISTENT CONTENT IN NONVOLATILE MEMORY

Micron Technology, Inc., ...

1. A method comprising:
initiating an operation to freeze an application;
responsive to initiating the operation to freeze the application, migrating one or more pages of persistent storage from a volatile memory to a nonvolatile memory;
locking one or more page tables such that entries in the page tables refer only to pages of persistent storage in the nonvolatile memory; and
migrating the one or more frozen page tables to the nonvolatile memory.

US Pat. No. 10,169,225

MEMORY SYSTEM AND MEMORY-CONTROL METHOD WITH A PROGRAMMING STATUS

Silicon Motion, Inc., Jh...

1. A memory system with a programming status, comprising:
at least one first memory, wherein each of the at least one first memory comprises a plurality of memory regions to store data;
at least one second memory, wherein each of the at least one second memory comprises a plurality of memory regions for programming the data from the at least one first memory, and the at least one second memory is a flash memory; and
a controller, coupled to the second memory and utilized to record a programming status of the data, wherein the controller checks whether the programming is successful or not by inquiring the programming status in response to the at least one first memory or the at least one second memory being going to be accessed, and the at least one first memory stores the data until the programming is checked to be successful.

US Pat. No. 10,169,224

DATA PROTECTING METHOD FOR PREVENTING RECEIVED DATA FROM LOSING, MEMORY STORAGE APPARATUS AND MEMORY CONTROL CIRCUIT UNIT

PHISON ELECTRONICS CORP.,...

1. A data protecting method for a memory storage apparatus, comprising:
determining whether a first procedure which is about to be executed or being executed by the memory storage apparatus is a first type procedure according to a total executing time of the first procedure;
if the first procedure which is about to be executed or being executed by the memory storage apparatus is the first type procedure, temporarily stopping receiving a first data corresponding to a first write command from a host system to all buffers of the memory storage apparatus before the execution of the first procedure is finished; and
allowing the memory storage apparatus to receive the first data corresponding to the first write command only after the first procedure is finished.

US Pat. No. 10,169,222

APPARATUS AND METHOD FOR EXPANDING THE SCOPE OF SYSTEMS MANAGEMENT APPLICATIONS BY RUNTIME INDEPENDENCE

INTERNATIONAL BUSINESS MA...

1. A computer program product for automatic conversion of existing systems management software applications to run in multiple middleware runtime frameworks, the computer program product comprising:
a non-transitory readable storage medium having stored thereon program instructions executable by a processor to cause the processor to:
scan the frameworks of system management components to form individual function modules;
map application program interface calls to a generic application program interface call layer by creating an association of the individual function modules;
perform runtime dependency analysis by generating ontology alignment mechanisms and outputting a mapping table of ontologies;
perform model unification by mapping runtime dependent functions to semantic counterparts using the ontology alignment mechanisms;
generate multiple runtime independent proxy components for the system management components; and
automatically refactor each of the system management components into two modules: a runtime independent module and a runtime dependent proxy module, wherein the runtime independent module replaces runtime dependent code with runtime independent code counterparts, wherein the model unification is dictionary-based, structure-based, or a combination thereof.

US Pat. No. 10,169,221

METHOD AND SYSTEM FOR WEB-SITE TESTING

ACCELERATE GROUP LIMITED,...

1. A testing service comprising:
one or more testing-service computer systems connected to the Internet that
execute testing-service routines,
maintain one or more databases,
receive requests for modifications to a data-object-model representation of a web page under test from user computers, and
respond to a received request by selecting a web-page variant using a probability-based weight associated with the web-page variant and transferring, to the user computer from which the request was received, modifications to the data-object-model representation of the web page under test that direct a browser on the user computer to display the selected web-page variant; and
a client web server that serves web pages to users, the client web server storing a library of routines downloaded to the client web server by the testing service and storing encodings of web pages, the encoding of each web page tested by the testing service including modifications that direct a user's web browser to download the library of routines from the client web server and to request modifications to a data-object-model representation of the web page by calling a script-library routine.
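
Selecting a web-page variant "using a probability-based weight associated with the web-page variant" is essentially weighted random choice; a server-side sketch (names are illustrative):

```python
import random

def select_variant(variants, weights, rng=None):
    """Pick one web-page variant for a test request, where weights[i]
    is the probability-based weight associated with variants[i]."""
    rng = rng or random.Random()
    return rng.choices(variants, weights=weights, k=1)[0]
```

A variant with weight 0 is never served, and doubling a variant's weight doubles its share of test traffic relative to the others.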

US Pat. No. 10,169,220

PRIORITIZING RESILIENCY TESTS OF MICROSERVICES

INTERNATIONAL BUSINESS MA...

1. A system, comprising:
a memory that stores computer executable components; and
a processor that executes the computer executable components stored in the memory, wherein the computer executable components comprise:
a test execution component that:
traverses an application program interface call subgraph of a microservices-based application in a depth first traversal pattern; and
during the traversal, performs resiliency testing of parent application program interfaces of the application program interface call subgraph according to a systematic resilience testing algorithm that reduces redundant resiliency testing of parent application program interfaces, the systematic resilience testing algorithm comprising:
during the traversal at a stop at a parent application program interface of the application program interface call subgraph:
in response to the parent application program interface having multiple dependent application program interfaces, calls to all direct and indirect dependent application program interfaces of the parent application program interface annotated as having been bounded retry pattern tested and circuit breaker pattern tested, and the parent application program interface not being annotated as having been bulkhead pattern tested, perform a bulkhead pattern test on the parent application program interface and annotate the parent application program interface as bulkhead pattern tested.
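
The bulkhead-test precondition above can be restated as a predicate over the call subgraph and the test annotations. The following sketch assumes a dict-based graph and annotation store (illustrative data shapes, not from the patent):

```python
def should_bulkhead_test(api, call_graph, annotations):
    """True when the claim's precondition holds at a parent API:
    multiple dependents, every direct and indirect dependent already
    annotated as retry- and circuit-breaker-tested, and the parent not
    yet annotated as bulkhead-tested."""
    direct = call_graph.get(api, [])
    if len(direct) < 2:                 # needs multiple dependent APIs
        return False
    # Collect all direct and indirect dependents of the parent API.
    seen, stack = set(), list(direct)
    while stack:
        dep = stack.pop()
        if dep in seen:
            continue
        seen.add(dep)
        stack.extend(call_graph.get(dep, []))
    deps_ready = all({"retry", "circuit_breaker"} <= annotations.get(d, set())
                     for d in seen)
    return deps_ready and "bulkhead" not in annotations.get(api, set())
```

This is what makes the algorithm "systematic": a parent is bulkhead-tested at most once, and only after its whole dependency subtree has passed the cheaper per-edge tests.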

US Pat. No. 10,169,219

SYSTEM AND METHOD TO INFER CALL STACKS FROM MINIMAL SAMPLED PROFILE DATA

Nintendo Co., Ltd., Kyot...

1. A system for inferring call stacks from a software profile, comprising:
a processing system having at least one processor, the processing system configured to:
monitor an executing program to create a sample profile of one or more executing functions to form a call stack database, wherein each function in the sample profile forms part of a respective call stack and each call stack having a call stack depth size;
after creating the call stack database:
for each function, attempt to match, under a first inference strategy, the function and associated call stack depth size to an entry in the call stack database;
infer a call stack for each function that matches to the entry in the call stack database based on the first inference strategy;
if the function does not match with any entry in the call stack database or if the function matches to more than one entry in the call stack database, determine, under a second inference strategy, one or more functions in a temporally adjacent sample; and
infer the call stack, under the second inference strategy, based on at least the temporally adjacent sample.
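
The two inference strategies can be sketched as a lookup with a fallback. The matching criteria here (function membership plus stack depth, then narrowing by functions from the adjacent sample) are a simple reading of the claim, not the patent's exact heuristics:

```python
def infer_call_stack(function, depth, call_stack_db, adjacent_sample=None):
    """First strategy: match (function, call stack depth size) against
    the call stack database.  Second strategy: on no match or an
    ambiguous match, narrow using functions seen in a temporally
    adjacent sample."""
    matches = [stack for stack in call_stack_db
               if function in stack and len(stack) == depth]
    if len(matches) == 1:               # unique match: first strategy succeeds
        return matches[0]
    if adjacent_sample is not None:     # second strategy
        narrowed = [s for s in (matches or call_stack_db)
                    if any(f in s for f in adjacent_sample)]
        if narrowed:
            return narrowed[0]
    return None
```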

US Pat. No. 10,169,217

SYSTEM AND METHOD FOR TEST GENERATION FROM SOFTWARE SPECIFICATION MODELS THAT CONTAIN NONLINEAR ARITHMETIC CONSTRAINTS OVER REAL NUMBER RANGES

GENERAL ELECTRIC COMPANY,...

1. A method comprising:
receiving, at processing circuitry of a software test generation system, software specification models of software including at least one nonlinear arithmetic constraint over a Real number range;
generating, via the processing circuitry, satisfiable modulo theories (SMT) formulas that are semantically equivalent to the software specification models of the software including the at least one nonlinear arithmetic constraint over a Real number range;
analyzing, via the processing circuitry, the SMT formulas using at least one SMT solver of an analytical engine pool to generate test case data for each of the SMT formulas; and
post-processing, via the processing circuitry, the test case data to automatically generate one or more tests comprising inputs and expected outputs for testing the software including the at least one nonlinear arithmetic constraint over a Real number range;
wherein the generating the SMT formulas comprises flattening one or more state-based operations in the software specification models into SMT formulas that are stateless and capable of being analyzed by the at least one SMT solver;
wherein the post-processing comprises converting ranges of values indicated in the test case data into particular values for one or more input variables and one or more output variables of the software to be verified and the converting comprises truncating the ranges of values indicated in the test case data at a particular precision to yield the particular values.
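
One simple reading of the range-truncation step, assuming ranges wider than the chosen precision so the truncated bound stays inside the range:

```python
import math

def concretize(low, high, precision=2):
    """Yield a particular test value from a solver-reported range
    [low, high] by truncating its upper bound at a fixed decimal
    precision (a hypothetical reading of the claim's post-processing)."""
    factor = 10 ** precision
    value = math.trunc(high * factor) / factor
    return value if value >= low else low  # guard very narrow ranges
```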

US Pat. No. 10,169,215

METHOD AND SYSTEM FOR ANALYZING TEST CASES FOR AUTOMATICALLY GENERATING OPTIMIZED BUSINESS MODELS

COGNIZANT TECHNOLOGY SOLU...

1. A computer-implemented method for analyzing one or more test case trees for automatically generating an optimized test tree model, the computer implemented method comprising:
configuring a computer processor, the computer processor:
receiving the one or more test case trees as user input from a user interface, each tree having one or more nodes, each node corresponding to a functionality of a test case;
analyzing each of the received test case trees to identify a source tree and a target tree based on the length of the test case tree;
analyzing levels of each node of the source tree and the target tree for identifying a source node and a target node, the source node being the bottom most node of the source tree and the target node being the bottom most node of the target tree;
comparing the source node and the target node, based on one or more parameters, to obtain a match;
merging the source node with the target node when the match is obtained, else identifying a next source node and a next target node, and repeating the steps of comparing and merging the identified nodes;
optimizing the merged nodes by identifying and filtering out one or more invalid nodes and generating the optimized tree model by using the optimized nodes.

US Pat. No. 10,169,214

TESTING OF COMBINED CODE CHANGESETS IN A SOFTWARE PRODUCT

International Business Ma...

1. A method for testing changesets in a software product, the method comprising:
determining, by one or more processors, whether there is sufficient building and testing capacity to test a single changeset individually, wherein a changeset is a set of changes to a software product; and
in response to determining that there is not sufficient building and testing capacity to test the single changeset individually:
selecting, by one or more processors, a first combination of changesets from multiple changesets;
calculating, by one or more processors and for each combination of two or more changesets from the multiple changesets, an interaction between changesets in said each combination, wherein the interaction is an overlapping of code found in two or more changesets;
determining, by one or more processors, that the first combination of changesets has a lower amount of overlapping of code than any other combination of changesets from the multiple changesets; and
selecting, by one or more processors, the first combination of changesets for building and testing, wherein said first combination of changesets has the lower amount of overlapping of code than any other combination of changesets from the multiple changesets; and
building and testing, by one or more processors, the first combination of changesets.
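
If each changeset is modeled as a set of changed code locations, the "interaction" of the claim is a set intersection, and the selection step picks the combination with the smallest one. A pairwise sketch (the claim also allows larger combinations):

```python
from itertools import combinations

def pick_least_overlapping(changesets):
    """changesets: dict mapping a changeset name to its set of changed
    code locations (e.g. (file, line) pairs).  Returns the pair with
    the smallest overlap of code, plus that overlap count."""
    best, best_overlap = None, None
    for a, b in combinations(sorted(changesets), 2):
        overlap = len(changesets[a] & changesets[b])  # interaction
        if best_overlap is None or overlap < best_overlap:
            best, best_overlap = (a, b), overlap
    return best, best_overlap
```

Combining the least-interacting changesets keeps a failed combined build easiest to attribute, since the changesets touch mostly disjoint code.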

US Pat. No. 10,169,213

PROCESSING OF AN APPLICATION AND A CORRESPONDING TEST FILE IN A CONTENT REPOSITORY

Red Hat, Inc., Raleigh, ...

1. A method, comprising:
retrieving, by a continuous integration module, a first application code and a test code corresponding to the first application code from an archive of a computing system, wherein a host operating system of the computing system comprises the continuous integration module and the host operating system provides a graphical user interface, wherein the continuous integration module is interfaced by a continuous integration application programming interface (API) to access the archive, wherein the archive is coupled with a software development tool that comprises a first application corresponding to the first application code and a test file corresponding to the test code;
installing, by the continuous integration module, the first application code and the test code in a content repository of a hardware platform of the computing system, wherein the test code is installed as metadata for the first application code in the content repository;
executing, by the continuous integration module, the test file corresponding to the test code to generate test results for the application code;
storing, by the continuous integration module, the test results in the metadata for the first application code in the content repository;
determining, by the continuous integration module, additional data related to the test file, wherein the additional data is at least one type of data selected from a group consisting of archive checksum, test success ratio, test fail ratio, build creator, and current date and time;
storing, by the continuous integration module, the additional data in the metadata for the first application code in the content repository, wherein the host operating system is to provide, via the graphical user interface, access to search content comprising the test results and the additional data independent of the first application corresponding to the first application code; and
integrating, by the continuous integration module, a second application code with the first application code in the content repository, wherein the second application code is a different version of the first application code or a different application code from the first application code, wherein the integrating comprises monitoring, by the continuous integration module, the archive for the second application code and updating the content repository in view of the second application code.

US Pat. No. 10,169,160

DATABASE BATCH UPDATE METHOD, DATA REDO/UNDO LOG PRODUCING METHOD AND MEMORY STORAGE APPARATUS

Industrial Technology Res...

1. A database batch update method applicable to a data storage apparatus comprising a first memory, a second memory and a third memory, wherein the data batch update method comprises:
sequentially receiving a plurality of data access commands, wherein the data access commands require to access data from the first memory, wherein the third memory is mirrored to the first memory before the data access commands are sequentially received;
determining that a first subset of the data access commands belong to a first type and a second subset of the data access commands belong to a second type, wherein the data access commands belonging to the first type comprises commands that update data without returning it in real-time, and wherein the data access commands belonging to the second type command comprises commands that return data in real-time without updating it;
storing the first subset of the data access commands in the second memory;
sequentially updating the first memory according to the data access commands stored in the second memory in an order of physical addresses of the first memory;
determining whether the data corresponding to the second subset of the data access commands needs to be updated by inspecting the data access commands stored in the second memory;
if it is determined that the data corresponding to the second subset of the data access commands needs to be updated, updating and returning the data corresponding to the second subset of the data access commands according to the data access commands stored in the second memory; and
if it is determined that the data corresponding to the second subset of the data access commands does not need to be updated, accessing and returning the data corresponding to the second subset of the data access commands from the third memory,
wherein an access rate of sequential physical addresses in the first memory is larger than an access rate of random physical addresses in the first memory, and
wherein an access rate of sequential physical addresses in the third memory is larger than an access rate of random physical addresses in the third memory.

US Pat. No. 10,169,159

AUTOMATED DATA RECOVERY FROM REMOTE DATA OBJECT REPLICAS

International Business Ma...

1. A method for recovering data objects in a distributed data storage system, the method comprising:
storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network, wherein a first data center of the one or more data centers includes one or more clusters and each cluster of the one or more clusters includes a respective plurality of compute nodes and the each cluster further includes a respective database that stores metadata specifying list of candidate clusters from which the one or more replicas can be recovered;
recording health information metadata that is within the database about said one or more replicas, wherein the health information comprises data about availability of a replica to participate in a restoration process;
in response to determining that the first data object is to be recovered, calculating a query-priority for the first data object;
querying, based on the calculated query-priority, the health information metadata that is within the database for the one or more replicas to determine which of the one or more replicas is available for restoration of the first data object;
calculating a restoration-priority for the first data object based on the health information metadata that is within the database for the one or more replicas; and
restoring the first data object from the one or more of the available replicas, based on querying the list of candidate clusters and further based on the calculated restoration-priority, wherein the query-priority is calculated based on a priority function P(D)=Func(R(D),C(D),n), where:
D represents a data object with multiple replicas in multiple clusters;
R(D)i, i=1 . . . n, where “i” and “n” are natural numbers, represents a remote replica indexed i of D out of n remote replicas;
C(D) represents cost of losing N replicas of D;
P(D) represents priority given by the system for the query operation of D; and
Func( ) represents some function.

US Pat. No. 10,169,158

APPARATUS, SYSTEM AND METHOD FOR DATA COLLECTION, IMPORT AND MODELING

International Business Ma...

1. A method for data analysis of a backup system, the method comprising:
extracting predetermined configuration and state information from respective dump files of a plurality of different computer systems, the predetermined configuration and state information is in different native formats based on the respective dump file from which it was extracted;
translating the predetermined configuration and state information from a native format used by each of the plurality of different computer systems into a normalized format, wherein the translated configuration and state information comprises configuration and state information irrespective of which of the plurality of different computer systems from which it was generated; and
determining what components are in the backup system, how the backup system works, how data is stored in the backup system, how efficiently data is stored in the backup system, a total capacity of the backup system, a remaining capacity of the backup system, and an operating cost of the backup system by analyzing the normalized predetermined configuration and state information.

US Pat. No. 10,169,157

EFFICIENT STATE TRACKING FOR CLUSTERS

INTERNATIONAL BUSINESS MA...

1. A method for efficient state tracking for clusters by a processor device in a distributed shared memory architecture, the method comprising:
performing an asynchronous calculation of deltas while concurrently receiving client requests and concurrently tracking client requests times;
responding to each of the client requests for data of the same concurrency during a certain period with currently executing client requests with updated views based upon results of the asynchronous calculation; concurrently executing each of the client requests occurring after the certain period on the updated views, wherein all deltas and views are updated; and
bounding a latency for the client requests by a time necessitated for the asynchronous calculation of at least two of the deltas; wherein a first state snapshot is atomically taken while simultaneously calculating the at least two of the deltas, and each of the client requests received during the certain period are served with the updated views of the asynchronously calculated at least two of the deltas.

US Pat. No. 10,169,156

AUTOMATIC RESTARTING OF CONTAINERS

International Business Ma...

1. A method for automatically restarting a container, comprising:
reading, by a computing device, custom predefined policy information including one or more condition categories, each of which having a respective reference to a respective log file and defining at least one respective condition for restarting the container;
monitoring, by an agent included in a container engine executed by the computing device, one or more respective log files, each of which corresponding to the respective reference to the respective log file of a corresponding condition category of the one or more condition categories, to detect an occurrence of any one of the at least one condition defined by any one of the one or more condition categories of the custom predefined policy information;
detecting, by the agent, the occurrence of the any one of the at least one condition based on a presence of a string of characters, corresponding to the any one of the at least one condition, in a log file of a corresponding condition category of the any one of the at least one condition to which the custom predefined policy information refers, the string of characters being generated to the log file of the corresponding condition category on behalf of the container;
in response to the detecting of the occurrence of the any one of the at least one condition, saving, by the agent, a state of the container including a state of one or more applications within the container;
automatically restarting the container, by the agent, after detecting the occurrence and saving the state of the container; and
after the automatic restarting of the container, restoring, by the agent, the state of the container, including the state of the one or more applications, wherein the one or more applications continue executing from where the one or more applications left off, thereby improving performance of the computing device.
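
The detection step above amounts to scanning each referenced log for the condition strings of its category. A sketch of that check, with a hypothetical policy shape (the patent does not fix a policy file format):

```python
def check_policy(policy, read_log):
    """policy: list of condition categories, each a dict with a 'log'
    reference and 'patterns', strings whose appearance in that log
    triggers a restart.  read_log: callable returning the current text
    of a named log.  Returns the first triggered (log, pattern), or
    None when no condition has occurred."""
    for category in policy:
        text = read_log(category["log"])
        for pattern in category["patterns"]:
            if pattern in text:
                return category["log"], pattern
    return None
```

An agent in the container engine would run this check on each monitoring tick and, on a hit, save the container state, restart the container, and restore the state.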

US Pat. No. 10,169,155

SYSTEM AND METHOD FOR SYNCHRONIZATION IN A CLUSTER ENVIRONMENT

EMC IP Holding Company LL...

1. A computer-implemented method comprising:
performing, via a first computing device, a copy sweep operation to a first range of data on a source storage device;
determining that the copy sweep operation has failed;
sending a message to a second computing device to suspend I/O operations to the first range of data; and
retrying the copy sweep operation based upon, at least in part, determining that the copy sweep operation has failed, wherein the copy sweep operation is retried without the first computing device receiving acknowledgement that the I/O operations to the first range of data are suspended by the second computing device.

US Pat. No. 10,169,154

DATA STORAGE SYSTEM AND METHOD BY SHREDDING AND DESHREDDING

International Business Ma...

1. A method for encoding data for storage in a plurality of storage units by use of at least one processor comprising:
dividing data into a set of separate pieces of data;
performing a redundancy function and a plurality of transformations on a separate piece of data of the set of separate pieces of data to generate a plurality of encoded data elements, wherein a threshold number of encoded data elements of the plurality of encoded data elements is needed to recover the separate piece of data, in which the threshold number of encoded data elements is less than all of the plurality of encoded data elements, wherein the plurality of transformations includes first transformations performed before performing the redundancy function and second transformations performed after performing the redundancy function;
generating metadata regarding the plurality of encoded data elements, wherein the metadata includes identification for each encoded data element and sequencing information regarding an order in which the redundancy function and the plurality of transformations were performed;
sending the plurality of encoded data elements to the plurality of storage units; and
sending the metadata to one of the storage units of the plurality of storage units or to another storage unit separately from sending the plurality of encoded data elements to the plurality of storage units.
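
The threshold property of the claim (fewer than all encoded elements suffice to recover a piece) can be illustrated with the simplest possible scheme, a 3-of-4 split with XOR parity. The patent's actual redundancy function and transformations are unspecified; this is only a toy instance of the threshold idea:

```python
def shred(piece):
    """Split a byte string into three data elements plus one XOR parity
    element; any three of the four recover the piece."""
    assert len(piece) % 3 == 0  # keep the toy example simple
    k = len(piece) // 3
    parts = [piece[i * k:(i + 1) * k] for i in range(3)]
    parity = bytes(a ^ b ^ c for a, b, c in zip(*parts))
    return parts + [parity]

def deshred(elements):
    """Recover the piece from the four elements, tolerating the loss
    (None) of at most one of the three data elements."""
    d0, d1, d2, p = elements
    known = [x for x in (d0, d1, d2) if x is not None]
    if len(known) == 3:
        return d0 + d1 + d2
    missing = bytes(p[i] ^ known[0][i] ^ known[1][i] for i in range(len(p)))
    return b"".join(x if x is not None else missing for x in (d0, d1, d2))
```

A real deployment would also carry the metadata the claim describes: per-element identifiers plus the sequencing of the transformations and the redundancy function, so deshredding can invert them in order.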

US Pat. No. 10,169,153

REALLOCATION IN A DISPERSED STORAGE NETWORK (DSN)

INTERNATIONAL BUSINESS MA...

1. A computing device comprising:an interface configured to interface and communicate with a dispersed storage network (DSN);
memory that stores operational instructions; and
a processing module operably coupled to the interface and to the memory, wherein the processing module, when operable within the computing device based on the operational instructions, is configured to:
within a dispersed or distributed storage network (DSN) that includes a plurality of storage units (SUs) that distributedly store a set of encoded data slices (EDSs) associated with a data object, during a transition from a first system configuration of a Decentralized, or Distributed, Agreement Protocol (DAP) to a second system configuration of the DAP, direct at least one SU of the plurality of SUs to service a data access request based on at least one EDS of the set of EDSs based on a DAP transition mapping between the first system configuration of the DAP to the second system configuration of the DAP, wherein the data object is segmented into a plurality of data segments, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce the set of EDSs.

US Pat. No. 10,169,152

RESILIENT DATA STORAGE AND RETRIEVAL

International Business Ma...

1. A computer-implemented method for data recovery following loss of a volume manager, the method comprising:determining that the volume manager, and an associated volume manager index, for a distributed storage have been lost;
installing, in response to determining that the volume manager has been lost, a new volume manager, the new volume manager lacking an associated volume manager index;
receiving location information and credentials to access the distributed storage;
receiving a command to recover data from the distributed storage, the data to be recovered comprising one or more data files, each data file stored as two or more data portions, each data portion comprising metadata, the metadata comprising a file ID tag;
attempting to retrieve each data portion from the distributed storage;
retrieving a first data portion and recording a first location in the distributed storage that the first data portion was retrieved from;
reading the first file ID tag attached to the first data portion; and
constructing, in response to determining that the associated volume manager index has been lost, a new volume manager index by storing the first file ID tag and the first location associated with the first data portion in the distributed storage in the new volume manager index such that the new volume manager index provides a reference, to the new volume manager, for the first location and the first file ID tag, the reference associated with the first data portion.
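The index-reconstruction step above reduces to a scan of the distributed storage that records, for each retrieved data portion, its file ID tag and the location it was retrieved from. A hypothetical sketch (the `storage` layout and `file_id` key are assumptions):

```python
def rebuild_index(storage):
    """Rebuild a lost volume-manager index by scanning distributed
    storage: record (file_id_tag -> locations) for each data portion.
    `storage` maps location -> portion metadata with a 'file_id' tag."""
    index = {}
    for location, portion in storage.items():
        tag = portion["file_id"]
        index.setdefault(tag, []).append(location)
    return index
```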

US Pat. No. 10,169,151

UTILIZING REQUEST DEADLINES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a dispersed storage and task (DST) processing unit that includes a processor, the method comprises:generating a first plurality of access requests that include a first execution deadline time, the first plurality of access requests for transmission via a network to a corresponding first subset of a plurality of storage units;
receiving a first deadline error notification via the network from a first storage unit of the first subset;
calculating a missed deadline cost value in response to receiving the first deadline error notification;
comparing the missed deadline cost value to a new request cost threshold;
selecting a new one of the plurality of storage units not included in the first subset in response to receiving the first deadline error notification;
generating a new access request for transmission to the new one of the plurality of storage units via the network that includes an updated execution deadline time, wherein the new access request is based on a one of the first plurality of access requests sent to the first storage unit of the first subset, wherein the new one of the plurality of storage units is selected and the new access request is generated for transmission to the new one of the plurality of storage units when the missed deadline cost value compares favorably to the new request cost threshold; and
generating a proceed with execution notification for transmission via the network to the first storage unit of the first subset indicating a request to continue executing the access request when the missed deadline cost value compares unfavorably to the new request cost threshold.
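The branch in this claim — redirect versus proceed — can be sketched as a single comparison. The claim does not fix the direction of "compares favorably"; the sketch assumes favorable means the missed-deadline cost meets or exceeds the new-request cost threshold (i.e., the miss is costly enough to justify a new request). All names are hypothetical.

```python
def handle_deadline_error(missed_cost, new_request_threshold, spare_units):
    # Assumed direction: a new request is justified when the cost of
    # the missed deadline is at least the new-request cost threshold.
    if missed_cost >= new_request_threshold and spare_units:
        # Favorable: select a unit outside the first subset; a new
        # access request with an updated deadline would go to it.
        return ("new_request", spare_units[0])
    # Unfavorable: notify the first unit to proceed with execution.
    return ("proceed_with_execution", None)
```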

US Pat. No. 10,169,150

CONCATENATING DATA OBJECTS FOR STORAGE IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a computing device of a dispersed storage network (DSN), the method comprises:identifying an independent data object of a plurality of independent data objects for retrieval from DSN memory of the DSN, wherein the plurality of independent data objects is combined to produce a concatenated data object and wherein the concatenated data object is encoded in accordance with a dispersed storage error encoding function to produce a set of encoded data slices;
identifying an encoded data slice of the set of encoded data slices corresponding to the independent data object based on a mapping of the plurality of independent data objects in a data matrix;
sending a retrieval request to a storage unit of the DSN memory regarding the encoded data slice; and
when the encoded data slice is received, decoding the encoded data slice in accordance with the dispersed storage error encoding function and the mapping to reproduce the independent data object.
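Setting the error encoding aside, the mapping idea — concatenate independent objects, then retrieve one object by touching only the rows (slices) it spans — can be shown with a toy identity encoding. The row-based layout and all names are illustrative assumptions:

```python
def concatenate(objects, row_len):
    # Pack independent objects back-to-back into fixed-length rows of
    # a data matrix, recording where each object lands (the mapping).
    blob = b"".join(objects)
    if len(blob) % row_len:
        blob += b"\x00" * (row_len - len(blob) % row_len)
    rows = [blob[i:i + row_len] for i in range(0, len(blob), row_len)]
    mapping, off = {}, 0
    for oid, obj in enumerate(objects):
        first = off // row_len
        last = (off + len(obj) - 1) // row_len
        mapping[oid] = (first, last, off % row_len, len(obj))
        off += len(obj)
    return rows, mapping

def retrieve(rows, mapping, oid):
    # Read only the rows the object spans, then cut it out.
    first, last, start, n = mapping[oid]
    data = b"".join(rows[first:last + 1])
    return data[start:start + n]
```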

US Pat. No. 10,169,149

STANDARD AND NON-STANDARD DISPERSED STORAGE NETWORK DATA ACCESS

International Business Ma...

1. A method comprises:determining, by a computing device of a dispersed storage network (DSN), whether to utilize a non-standard DSN data accessing protocol or a standard DSN data accessing protocol to access data from the DSN, wherein the data is dispersed storage error encoded into one or more sets of encoded data slices and wherein the one or more sets of encoded data slices are stored in a set of storage units of the DSN;
when the computing device determines to use the non-standard DSN data accessing protocol:
generating, by the computing device, a set of non-standard data access requests regarding the data, wherein a non-standard data access request of the set of non-standard data access requests includes a network identifier of a storage unit of the set of storage units, a data identifier corresponding to the data, and a data access function;
sending, by the computing device, the set of non-standard data access requests to at least some storage units of the set of storage units, which includes the storage unit;
converting, by the storage unit, the non-standard data access request into one or more DSN slice names;
determining, by the storage unit, that the one or more DSN slice names are within a slice name range allocated to the storage unit; and
when the one or more DSN slice names are within the slice name range, executing, by the storage unit, the data access function regarding one or more encoded data slices corresponding to the one or more DSN slice names.

US Pat. No. 10,169,148

APPORTIONING STORAGE UNITS AMONGST STORAGE SITES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method of apportioning storage units in a dispersed storage network (DSN), the method comprising:generating storage unit apportioning data indicating a mapping of a plurality of desired numbers of storage units, represented as a plurality of numerical values in accordance with the plurality of desired numbers, to a plurality of storage sites based on site reliability data, wherein the mapping includes a first desired number of storage units corresponding to a first one of the plurality of storage sites that is greater than a second desired number of storage units corresponding to a second one of the plurality of storage sites in response to the site reliability data indicating that a first reliability score corresponding to the first one of the plurality of storage sites is more favorable than a second reliability score corresponding to the second one of the plurality of storage sites; and
allocating a plurality of storage units to the plurality of storage sites based on the storage unit apportioning data, wherein each of the plurality of storage units includes at least one processor and at least one memory device.
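One plausible way to realize the mapping — more units to more reliable sites — is a proportional split on reliability scores, with any remainder going to the most reliable sites first. This particular scheme and all names are assumptions; the claim only requires that the more reliable site receive the larger desired number.

```python
def apportion(total_units, reliability):
    """Map desired unit counts to sites in proportion to reliability
    scores (higher score -> more units); hypothetical scheme."""
    total = sum(reliability.values())
    alloc = {s: total_units * r // total for s, r in reliability.items()}
    # Hand any remainder to the most reliable sites first.
    leftover = total_units - sum(alloc.values())
    for s in sorted(reliability, key=reliability.get, reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc
```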

US Pat. No. 10,169,147

END-TO-END SECURE DATA STORAGE IN A DISPERSED STORAGE NETWORK

International Business Ma...

1. A method comprises:generating, by a first computing device of a dispersed storage network (DSN), a set of encryption keys;
encrypting, by the first computing device, a data matrix based on the set of encryption keys to produce an encrypted data matrix, wherein the data matrix includes data blocks of a data segment of a data object;
sending, by the first computing device, the encrypted data matrix to a second computing device of the DSN;
dispersed storage error encoding, by the second computing device, the encrypted data matrix to produce a set of encrypted encoded data slices; and
sending, by the second computing device, the set of encrypted encoded data slices to a set of storage units of the DSN for storage therein.

US Pat. No. 10,169,146

REPRODUCING DATA FROM OBFUSCATED DATA RETRIEVED FROM A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a computing device in a dispersed storage network (DSN), the method comprises:first encoding first data into a first plurality of sets of encoded data slices, wherein the first encoding is in accordance with a first dispersed error encoding function such that, for a set of encoded data slices of the first plurality of sets of encoded data slices, a first decode threshold number of encoded data slices is required to recover a corresponding first data segment of the first data;
second encoding second data into a second plurality of sets of encoded data slices, wherein the second encoding is in accordance with a second dispersed error encoding function such that, for a set of encoded data slices of the second plurality of sets of encoded data slices, a second decode threshold number of encoded data slices is required to recover a corresponding second data segment of the second data, wherein the second data segment is different from the first data segment;
creating a plurality of mixed sets of encoded data slices from the first and second plurality of sets of encoded data slices in accordance with a mixing pattern; and
outputting the plurality of mixed sets of encoded data slices to storage units of the DSN for storage therein.
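The mixing step can be sketched by representing the mixing pattern as a bit sequence that chooses, position by position, which plurality supplies the next set of slices. This encoding of the pattern is an assumption:

```python
def mix_slice_sets(first_sets, second_sets, pattern):
    # pattern: 0 takes the next set from the first plurality,
    # 1 from the second (a hypothetical pattern representation).
    a, b = iter(first_sets), iter(second_sets)
    return [next(a) if bit == 0 else next(b) for bit in pattern]
```

A reader holding the pattern can demultiplex the mixed stream back into the two pluralities; without it, the interleaving obfuscates which slices belong to which data.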

US Pat. No. 10,169,145

READ BUFFER ARCHITECTURE SUPPORTING INTEGRATED XOR-RECONSTRUCTED AND READ-RETRY FOR NON-VOLATILE RANDOM ACCESS MEMORY (NVRAM) SYSTEMS

INTERNATIONAL BUSINESS MA...

1. A system, comprising:a read buffer memory configured to store data to support integrated XOR reconstructed data and read-retry data, the read buffer memory comprising a plurality of read buffers, each read buffer being configured to store at least one data unit; and
a processor and logic integrated with and/or executable by the processor, the logic being configured to cause the processor to:
receive one or more data units and read command parameters used to read the one or more data units from at least one non-volatile random access memory (NVRAM) device;
determine an error status for each of the one or more data units, wherein the error status indicates whether each data unit comprises errored data or error-free data; and
store error-free data units and the read command parameters used to read the error-free data units to a read buffer of the read buffer memory.
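The buffering policy in the last two limitations — keep only error-free units, together with the read parameters used to fetch them — can be sketched as a simple filter. The tuple layout and names are assumptions:

```python
def fill_read_buffers(units, buffers):
    """Store only error-free data units (with the read command
    parameters used to read them) into read buffers; errored units
    are left for XOR reconstruction or read-retry instead.
    Each unit is a (data, read_params, has_error) tuple."""
    for data, read_params, has_error in units:
        if not has_error:
            buffers.append((data, read_params))
    return buffers
```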

US Pat. No. 10,169,144

NON-VOLATILE MEMORY INCLUDING SELECTIVE ERROR CORRECTION

Micron Technology, Inc., ...

1. An apparatus comprising:a first memory area included in a memory device and a second memory area included in the memory device, the first and second memory area selectively coupled to each other through a conductive path in the memory device; and
control circuitry included in the memory device to communicate with a memory controller, the memory controller including an error correction engine, the control circuitry of the memory device configured to retrieve first information stored in the first memory area and store the first information after the error correction engine performs an error detection operation on the first information, and to retrieve second information stored in the first memory area and store the second information in the second memory area without an additional error detection operation performed on the second information such that the error correction engine skips performing an additional error detection operation on the second information if a result from the error detection operation performed by the error correction engine on the first information meets a threshold condition.

US Pat. No. 10,169,143

PREFERRED STATE ENCODING IN NON-VOLATILE MEMORIES

Invensas Corporation, Sa...

1. An electronic system, comprising:a processor;
a main memory coupled to the processor;
a memory interface coupled to the processor and the main memory;
a memory controller coupled to the memory interface; and
a first non-volatile memory (NVM) integrated circuit coupled to the memory controller, the NVM integrated circuit further comprising:
a memory array having a plurality of pages, a page buffer coupled to the memory array, and a data input/output interface coupled to the page buffer,
wherein:
a first page in the memory array is selected for programming,
first user write data is stored in the page buffer,
preferred state encoding (PSE) is applied to the first user write data to generate first PSE encoded write data which is stored in the page buffer according to a first allocation map,
error correction code (ECC) encoding is applied to the first PSE encoded write data to generate first ECC encoded write data which is stored in the page buffer according to the first allocation map, and
the contents of the page buffer are programmed into the selected first page in the memory array.

US Pat. No. 10,169,142

GENERATING PARITY FOR STORAGE DEVICE

Futurewei Technologies, I...

1. A method performed by a solid state device (SSD) controller to generate a parity, the method comprising:receiving, by the SSD controller, input data to be stored to first and second pages of a storage device, wherein the first page is allocated with N codewords and at least one non-integer number of codeword, the second page is allocated with M codewords, N and M are integers, each non-integer number of codeword corresponding to a part of a codeword, and wherein a total number of codewords in the first page is different from a total number of codewords in the second page;
determining, by the SSD controller, a max impact number (MIN) of the storage device dynamically, wherein the MIN is an integer no less than N+1 and no less than M;
configuring, by the SSD controller, codewords of the first and second pages into multiple groups, wherein each group has an integer number of codewords, and wherein the integer number of codewords in each group is no less than the MIN;
generating, by the SSD controller, parities for the multiple groups; and
storing, by the SSD controller, the parities to reserved spaces of the storage device.
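The grouping-and-parity steps can be illustrated with integer codewords and XOR parity — a stand-in for whatever parity function the controller actually uses. Folding a short tail group into its predecessor keeps every group at or above the minimum size (the MIN); all names are hypothetical.

```python
def group_and_parity(codewords, min_group):
    """Configure codewords into groups of at least `min_group`
    members, then generate one XOR parity per group (toy model)."""
    groups = [codewords[i:i + min_group]
              for i in range(0, len(codewords), min_group)]
    # Fold a short tail into the previous group so every group
    # holds at least `min_group` codewords.
    if len(groups) > 1 and len(groups[-1]) < min_group:
        groups[-2].extend(groups.pop())
    parities = []
    for g in groups:
        p = 0
        for cw in g:
            p ^= cw
        parities.append(p)          # would be stored in reserved space
    return groups, parities
```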

US Pat. No. 10,169,141

MODIFIABLE STRIPE LENGTH IN FLASH MEMORY DEVICES

SK Hynix Inc., Gyeonggi-...

1. A memory device comprising:a memory comprising a plurality of memory cells for storing data; and
a controller communicatively coupled to the memory and configured to organize the data as a plurality of stripes, wherein each individual stripe of the plurality of stripes comprises:
a plurality of data groups, each of the plurality of data groups stored in the memory using a subset of the plurality of memory cells, wherein:
a stripe length for the individual stripe is determined by the controller based on detecting a condition associated with one or more data groups of the plurality of data groups, and
the stripe length for the individual stripe is a number of the plurality of data groups included in the individual stripe; and
at least one data group of the plurality of data groups for each of the individual stripes comprising parity data for correcting bit errors associated with the subset of the plurality of memory cells for the individual stripe.

US Pat. No. 10,169,140

LOADING A PHASE-LOCKED LOOP (PLL) CONFIGURATION USING FLASH MEMORY

International Business Ma...

1. A method for loading a phase-locked loop (PLL) configuration into a PLL module of an Application Specific Integrated Circuit (ASIC) using Flash memory, the method comprising:responsive to the PLL module in the ASIC locking a current PLL configuration from a set of current configuration registers in the ASIC, loading, by reset logic in the ASIC, a Flash data image configuration from the Flash memory into a set of holding registers in the ASIC;
responsive to determining that the Flash data image configuration is not corrupted, comparing, by comparison logic in the ASIC, the Flash data image configuration in the set of holding registers to the current PLL configuration in the set of current configuration registers;
responsive to the Flash data image configuration differing from the current PLL configuration, loading, by the reset logic, the Flash data image configuration onto a PLL module input; and
responsive to the PLL module locking the Flash data image configuration, loading by the reset logic, the Flash data image configuration in the set of holding registers into the set of current configuration registers.

US Pat. No. 10,169,139

USING PREDICTIVE ANALYTICS OF NATURAL DISASTER TO COST AND PROACTIVELY INVOKE HIGH-AVAILABILITY PREPAREDNESS FUNCTIONS IN A COMPUTING ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method for proactive natural disaster preparedness, comprising:receiving, at a computer of a computing environment, a notification of an impending natural disaster;
determining, by the computer, a threshold corresponding to a likelihood of the predicted disaster, further comprising:
using the likelihood to index into a data structure that stores, for each of a plurality of likelihood values, a corresponding threshold, wherein:
each of the stored thresholds represents a different cost tier;
each successively-higher cost tier corresponds to a successively-higher weighted business cost value; and
the determined threshold is the stored threshold that corresponds, in the data structure, to a likelihood value that is less than or equal to the likelihood of the predicted disaster;
determining, by the computer for the threshold, at least one proactive measure corresponding thereto for enabling the computing environment to maintain high availability, further comprising:
using the threshold to index into a data store that stores, for each of a plurality of successively-higher weighted business cost values, a corresponding proactive measure and executable functionality to invoke the corresponding proactive measure; and
selecting, as the determined at least one proactive measure, each of the corresponding proactive measures that corresponds, in the data store, to a weighted business cost value that is less than or equal to the threshold; and
automatically causing the computing environment to carry out the executable functionality to invoke each of the at least one determined proactive measure and thereby maintain the high availability of the computing environment without manual invocation.
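The two table lookups above — likelihood to cost-tier threshold, then threshold to the set of proactive measures — can be sketched directly. The table layouts are assumptions; the claim's "less than or equal to" selection rules are kept as stated.

```python
def select_measures(likelihood, tier_table, measure_table):
    """tier_table: (likelihood_value, threshold) rows, ascending by
    likelihood; pick the threshold for the largest likelihood value
    <= the predicted one. measure_table: (weighted_cost, measure)
    rows; select every measure whose cost <= the chosen threshold."""
    threshold = 0
    for lv, th in tier_table:
        if lv <= likelihood:
            threshold = th
    measures = [m for cost, m in measure_table if cost <= threshold]
    return threshold, measures
```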

US Pat. No. 10,169,137

DYNAMICALLY DETECTING AND INTERRUPTING EXCESSIVE EXECUTION TIME

International Business Ma...

1. A method, comprising:executing, by a first function of a first process executing on a processor, a plurality of calls to a second function of a second process;
programmatically generating, based on a respective amount of time required for each of the plurality of calls to complete, a time threshold for calls from the first function to the second function; and
subsequent to the plurality of calls completing:
storing, by an operating system (OS) kernel executing on the processor and in a queue of the OS kernel, an indication that the first function of the first process executing on the processor has made an additional call to the second function of the second process;
collecting process data for at least one of the first process and the second process;
determining, by the OS kernel, that an amount of time that has elapsed since the first function of the first process made the additional call to the second function of the second process exceeds the programmatically defined time threshold;
storing the queue and the process data as part of a failure data capture; and
performing a predefined operation on at least one of the first process and the second process.
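The claim only requires that the time threshold be generated programmatically from the observed call durations; one plausible heuristic — offered purely as an assumption — is the mean plus a few standard deviations:

```python
def derive_threshold(call_durations, slack=2.0):
    """Derive a call-time threshold from observed durations as the
    mean plus `slack` population standard deviations (one possible
    heuristic; the claim does not specify the formula)."""
    n = len(call_durations)
    mean = sum(call_durations) / n
    var = sum((d - mean) ** 2 for d in call_durations) / n
    return mean + slack * var ** 0.5
```

A subsequent call whose elapsed time exceeds this value would trigger the kernel's failure data capture and the predefined operation.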

US Pat. No. 10,169,136

DYNAMIC MONITORING AND PROBLEM RESOLUTION

International Business Ma...

1. A method comprising:determining, by one or more processors, a monitoring tier of a first component, of a plurality of components, that is a cause of a malfunction is activated;
in response to determining the monitoring tier of the first component is activated, determining, by one or more processors, a plurality of measurements for the plurality of components;
identifying, by one or more processors, a component of the plurality of components with the greatest number of activated monitoring tiers, based on the plurality of measurements; and
determining, by one or more processors, whether the component with the greatest number of activated monitoring tiers is the first component.
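The identification step — find the component with the greatest number of activated monitoring tiers — is a straightforward maximum over per-component tier flags. The representation is an assumption:

```python
def most_monitored(components):
    """Return the component with the greatest number of activated
    monitoring tiers; `components` maps a component name to a list
    of 0/1 flags, one per monitoring tier."""
    return max(components, key=lambda c: sum(components[c]))
```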

US Pat. No. 10,169,134

ASYNCHRONOUS MIRROR CONSISTENCY AUDIT

International Business Ma...

1. A method for auditing data consistency in an asynchronous data replication environment, the method comprising:executing, at a primary storage system, a “write with no data” command that performs functions associated with a conventional write command but writes no data to a source volume, the “write with no data” command performing the following:
serializing a source data track; and
creating a record set that contains a copy of data in the source data track, a timestamp indicating when the source data track was copied, and a command type indicating that the copy in the record set is not to be applied to a corresponding target data track of a target volume;
replicating the record set from the primary storage system, hosting the source volume, to a secondary storage system, hosting the target volume;
applying, to the target data track, all updates received for the target data track with timing prior to the timestamp;
reading the target data track at the secondary storage system after the updates have been applied;
comparing the target data track to the source data track; and
recording an error if the target data track does not match the source data track.
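The audit logic at the secondary can be sketched as: apply every update timestamped before the record set's timestamp, then compare the target track to the source copy carried in the record set. Updates are modeled as whole-track writes for simplicity; all names are hypothetical.

```python
def audit_track(source_track, target_track, updates, timestamp):
    """Apply all updates with timing prior to the record-set
    timestamp, then compare target to source; return an error
    record on mismatch, None if consistent."""
    for t, data in updates:
        if t < timestamp:
            target_track = data          # toy whole-track update
    if target_track != source_track:
        return {"error": "inconsistent", "target": target_track}
    return None
```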

US Pat. No. 10,169,133

METHOD, SYSTEM, AND APPARATUS FOR DEBUGGING NETWORKING MALFUNCTIONS WITHIN NETWORK NODES

Juniper Networks, Inc., ...

1. A method comprising:building a collection of debugging templates that comprises a first debugging template that corresponds to a first potential cause of a certain networking malfunction and a second debugging template that corresponds to a second potential cause of the certain networking malfunction by:
receiving user input from a user of a network;
creating, based at least in part on the user input, the first debugging template that defines a first set of debugging steps that, when performed by a computing system, enable the computing system to determine whether the first potential cause led to the certain networking malfunction; and
creating, based at least in part on the user input, the second debugging template that defines a second set of debugging steps that, when performed by the computing system, enable the computing system to determine whether the second potential cause led to the certain networking malfunction;
detecting a computing event that is indicative of the certain networking malfunction within a network node included in the network;
determining, based at least in part on the computing event, potential causes of the certain networking malfunction, wherein the potential causes comprise the first potential cause of the certain networking malfunction and the second potential cause of the certain networking malfunction;
performing the first set of debugging steps defined by the first debugging template that corresponds to the first potential cause, wherein the first debugging template comprises a generic debugging template that enables the computing system to determine that the certain networking malfunction resulted from the first potential cause irrespective of a software configuration of the network node; and
determining, based at least in part on the first set of debugging steps defined by the first debugging template, that the certain networking malfunction resulted from the first potential cause.

US Pat. No. 10,169,132

PREDICTING A LIKELIHOOD OF A CRITICAL STORAGE PROBLEM

International Business Ma...

1. A method for predicting, by a computerized storage-management system, a likelihood of a critical storage problem, the method comprising:a processor of a computerized storage controller of the computerized storage-management system receiving a sample set, where a sample size identifies a number of samples in the sample set, and where a first sample of the sample set identifies a first amount of storage space that was available to a storage device at a time when the first sample was recorded;
the processor deriving a mean of the sample set and a standard deviation of the sample set;
the processor further deriving a Chi-square statistic of the sample set as a function of the mean and the standard deviation, where the Chi-square statistic identifies whether the sample size is large enough to ensure that the sample set is statistically valid;
the processor, as a function of the deriving and of the further deriving, determining whether the sample set is statistically valid;
the processor, as a function of the determining, identifying a likelihood of a critical storage problem occurring within a threshold time;
the processor, as a further function of the determining, directing the computerized storage-management system to select an adjusted sample-set size,
where the duration of the threshold time is selected to allow the processor to perform the identifying in real time, and
where the adjusted sample-set size identifies a size of a future sample set that the computerized storage controller will request and receive at a future time in order to identify a future available storage space of the storage device at the future time.

US Pat. No. 10,169,131

DETERMINING A TRACE OF A SYSTEM DUMP

International Business Ma...

1. A method for improving system analytics by determining an extra trace of a system dump after an event triggering the system dump, the method comprising:receiving, by one or more computer processors, a system dump request, wherein the system dump request includes performing a system dump utilizing a dumping tool, wherein the system dump includes a trace, wherein the trace comprises one or more trace entries collected in a trace table;
determining, by one or more computer processors, an initial trace of the system dump;
determining, by one or more computer processors, the extra trace, wherein determining the extra trace includes determining a time period subsequent to the initial trace of the system dump to collect trace entries, and wherein the extra trace refers to a plurality of trace data entries collected during the time period subsequent to the initial trace of the system dump and subsequent to an event triggering the system dump;
determining, by one or more computer processors, an updated trace table, wherein determining the updated trace table includes collecting the plurality of trace entries during the time period subsequent to the initial trace of the system dump and subsequent to an event triggering the system dump, appending the trace table with the plurality of trace entries, and wrapping the one or more trace entries collected in the initial trace of the system dump in the event the updated trace table cannot store all of the plurality of trace entries; and
displaying, by one or more computer processors, the extra trace at the end of the initial trace.

US Pat. No. 10,169,130

TAILORING DIAGNOSTIC INFORMATION IN A MULTITHREADED ENVIRONMENT

International Business Ma...

1. A computer-implemented method for tailoring diagnostic information specific to current activity of multiple threads within a computer system, the method comprising:creating, by one or more processors, a system dump, including main memory and system state information;
storing, by one or more processors, the system dump to a database;
executing, by one or more processors, a program to provide tailored diagnostic information;
creating, by one or more processors, a virtual memory image of a system state, based on the memory dump, in the address space of the program, by creating a second hardware memory mapping of the hardware memory addresses of the address space of the program to the virtual memory addresses of the virtual memory image of the system state;
scanning, by one or more processors, the virtual memory image and system state information, using the second hardware memory mapping, to identify tasks that were running, tasks that have failed due to an error, and tasks that were suspended when the system dump was made;
collecting and collating based on task number, by one or more processors, from the system dump, using the second hardware memory mapping, state information and control blocks associated with the identified tasks; and
storing, by one or more processors, to the database, a formatted system dump, including the collected and collated state information and control blocks for the identified tasks.

US Pat. No. 10,169,127

COMMAND EXECUTION RESULTS VERIFICATION

International Business Ma...

1. A computer program product comprising a computer readable storage medium that is not a transitory signal per se, the computer readable storage medium having computer readable codes stored thereon that cause one or more devices to conduct a method comprising:receiving, by a processor, a file including a plurality of commands and an expected result related to the plurality of commands from a command line interface, the command line interface operating in a script mode that allows a user, with a single login to the command line interface, to define a list of commands to be executed in order by the command line interface;
executing the plurality of commands to create one or more processes for performing one or more tasks corresponding to the plurality of commands;
performing the one or more tasks;
generating one or more result codes corresponding to performance of the one or more tasks, the one or more result codes comprising a first indication of successful command execution or a second indication of errors;
determining whether the one or more result codes satisfy the expected result based on the first indication or the second indication in the one or more result codes matching the expected result; and
sending a response to the command line interface in response to determining whether the one or more result codes satisfy the expected result,
wherein:
the response includes one of an error message and a success code,
the error message comprises an error code indicating one of an unexpected error and an unexpected success in the one or more result codes,
determining whether the one or more result codes satisfy the expected results comprises determining whether the first indication of successful command execution or the second indication of errors matches at least a subset of the expected results,
sending the response to the command line interface comprises:
sending the success code to the command line interface in response to determining a match, and
sending the error code to the command line interface in response to determining a non-match,
the error code comprises one of:
a first error indicating an unexpected error in the one or more result codes in response to the subset of expected results including a successful result, and
a second error indicating an unexpected success in the one or more result codes in response to the subset of expected results including an error result.
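
The verification step above (matching result codes against an expected result, then returning a success code or an error code that distinguishes "unexpected error" from "unexpected success") can be sketched as follows. This is a minimal illustration, not the patented implementation; all names and the 0-means-success convention are assumptions.

```python
# Hypothetical sketch of the verification step: compare result codes from
# executed commands against the expected outcome and return a success code,
# or an error code distinguishing unexpected error from unexpected success.

SUCCESS_CODE = 0
ERR_UNEXPECTED_ERROR = 1    # expected success, but a command reported an error
ERR_UNEXPECTED_SUCCESS = 2  # expected an error, but every command succeeded

def verify(result_codes, expected_success):
    """Return SUCCESS_CODE when every result matches the expectation,
    otherwise the error code describing the mismatch."""
    for code in result_codes:
        succeeded = (code == 0)  # assumed convention: 0 means success
        if succeeded != expected_success:
            return (ERR_UNEXPECTED_ERROR if expected_success
                    else ERR_UNEXPECTED_SUCCESS)
    return SUCCESS_CODE
```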

US Pat. No. 10,169,126

MEMORY MODULE, MEMORY CONTROLLER AND SYSTEMS RESPONSIVE TO MEMORY CHIP READ FAIL INFORMATION AND RELATED METHODS OF OPERATION

SAMSUNG ELECTRONICS CO., ...

1. A memory module comprising:
first to Mth memory chips (where M is an integer that is equal to or greater than 2) mounted on a module board and storing data; and
an (M+1)th memory chip mounted on the module board and storing a parity code associated with multi-chip data having data portions stored by respective ones of the first to Mth memory chips, the parity code containing information to correct at least one bit error resulting from a read operation of a data portion stored by a failed one of the first to Mth memory chips,
wherein each of the first to (M+1)th memory chips comprises an on-chip error correction circuit to detect a bit error within the corresponding stored data portion of the multi-chip data and to provide a corresponding fail bit to indicate a result of the detection of a bit error, and
wherein the memory module comprises a circuit connected to receive the fail bits from the first to (M+1)th memory chips and to output fail information as a result of a calculation performed on the fail bits.
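
The module-level circuit described above collects one fail bit per chip and combines them into fail information. A toy sketch of that collation, with invented names and a simple "which chips failed" calculation standing in for the hardware logic:

```python
# Illustrative sketch (not Samsung's circuit): each chip's on-chip ECC
# reports a fail bit; a module-level function combines the bits into fail
# information identifying whether, and where, a read failed.

def fail_info(fail_bits):
    """fail_bits: list of 0/1 values, one per chip (data chips plus the
    parity chip). Returns (any_fail, failed_chip_indices)."""
    failed = [i for i, bit in enumerate(fail_bits) if bit]
    return (len(failed) > 0, failed)
```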

US Pat. No. 10,169,125

RE-ENCODING DATA IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method comprises:
determining to create a new set of encoded data slices based on an unfavorable storage performance level associated with one or more storage units (SUs) within a dispersed storage network (DSN);
partially decoding, by a storage unit (SU) of the DSN, a first encoded data slice of a set of encoded data slices in accordance with previous dispersed storage error encoding parameters having a previous threshold number to produce a partially decoded first encoded data slice, wherein the first encoded data slice is stored by another SU of the DSN and is transmitted from the another SU via the DSN and received via an interface of the SU that is configured to interface and communicate with the DSN, and wherein a data segment of a data object is encoded into the set of encoded data slices in accordance with the previous dispersed storage error encoding parameters;
partially re-encoding, by the SU, the partially decoded first encoded data slice in accordance with updated dispersed storage error encoding parameters having an updated threshold number to produce a first partially re-encoded data slice, wherein the first partially re-encoded data slice is used to create a new first encoded data slice of the new set of encoded data slices that corresponds to the data segment being dispersed storage error encoded in accordance with the updated dispersed storage error encoding parameters, wherein
the partially re-encoding comprises:
obtaining a new encoding matrix corresponding to the updated dispersed storage error encoding parameters;
reducing the new encoding matrix based on a matrix position corresponding to the new first encoded data slice of the new set of encoded data slices that corresponds to the data segment being dispersed storage error encoded in accordance with the updated dispersed storage error encoding parameters; and
matrix multiplying the reduced new encoding matrix with the partially decoded first encoded data slice to produce the first partially re-encoded data slice;
receiving, by the SU via the DSN and via the interface of the SU, a plurality of second partially re-encoded data slices from a sub-set of other SUs of the DSN, wherein the plurality of second partially re-encoded data slices is created in accordance with the updated dispersed storage error encoding parameters based on partially re-encoding by the sub-set of other SUs of the DSN; and
generating, by the SU, a new second encoded data slice of the new set of encoded data slices from the plurality of second partially re-encoded data slices.

US Pat. No. 10,169,124

UNIFIED OBJECT INTERFACE FOR MEMORY AND STORAGE SYSTEM

SAMSUNG ELECTRONICS CO., ...

1. A memory management device, comprising:
a memory;
a data structure stored in the memory, the data structure including:
an identifier of an object, wherein the identifier of the object is a hash; and
a tuple including an identifier of a physical device and a location on the physical device; and
a second data structure, the second data structure including:
a second identifier of the object; and
the hash,
wherein the object is stored on one of a plurality of physical devices including at least one volatile storage device and at least one non-volatile storage device,
wherein the second data structure maps the second identifier of the object to the hash and the data structure maps the hash to the tuple to access the object,
wherein data for the object may be accessed on behalf of an application or an operating system using the second identifier of the object, and
wherein the memory management device uses the second identifier of the object, the data structure, and the second data structure to determine the identifier of the physical device and the location on the physical device in the tuple.
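
The two-level lookup in this claim (object identifier, to hash, to a device/location tuple) can be sketched with two dictionaries. The class, method names, and use of SHA-256 are assumptions for illustration only.

```python
# Sketch of the claim's two data structures: the second structure maps an
# object identifier to a hash; the first maps the hash to a tuple of
# (physical device identifier, location on the device).

import hashlib

class ObjectDirectory:
    def __init__(self):
        self.id_to_hash = {}   # second data structure: object id -> hash
        self.hash_to_loc = {}  # first data structure: hash -> (device, offset)

    def put(self, object_id, device, offset):
        h = hashlib.sha256(object_id.encode()).hexdigest()
        self.id_to_hash[object_id] = h
        self.hash_to_loc[h] = (device, offset)

    def locate(self, object_id):
        """Resolve an object identifier to its physical (device, offset)."""
        return self.hash_to_loc[self.id_to_hash[object_id]]
```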

US Pat. No. 10,169,123

DISTRIBUTED DATA REBUILDING

INTERNATIONAL BUSINESS MA...

1. A method for use in a distributed storage network (DSN) storing sets of encoded data slices in sets of storage units, the method comprising:
identifying, by a first storage unit included in a set of storage units, a storage error associated with an encoded data slice of a set of encoded data slices, the encoded data slice assigned to be stored in the first storage unit;
selecting a second storage unit to generate a rebuilt encoded data slice to replace the encoded data slice assigned to be stored in the first storage unit;
transmitting, from the first storage unit to the second storage unit, a rebuild request associated with the storage error;
generating, by the second storage unit, the rebuilt encoded data slice in response to the rebuild request;
transmitting the rebuilt encoded data slice from the second storage unit to the first storage unit; and
storing the rebuilt encoded data slice in the first storage unit.
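
The rebuild hand-off above can be sketched as follows. A RAID-5-style XOR of surviving slices stands in for the full dispersed-storage erasure decode, and the class and function names are invented:

```python
# Toy sketch of the claimed flow: the first storage unit detects the error
# and requests a rebuild; the selected second unit generates the rebuilt
# slice, which is then stored back on the first unit.

class StorageUnit:
    """Toy storage unit; real DSN units communicate over a network."""
    def __init__(self, name):
        self.name = name
        self.slices = {}

def xor_rebuild(surviving_slices):
    """Simple-parity stand-in for erasure decoding: XOR the survivors."""
    out = bytearray(len(surviving_slices[0]))
    for s in surviving_slices:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

def rebuild_slice(first_su, second_su, slice_id, surviving_slices):
    rebuilt = xor_rebuild(surviving_slices)   # generated by the second SU
    first_su.slices[slice_id] = rebuilt       # stored on the first SU
    return rebuilt
```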

US Pat. No. 10,169,122

METHODS FOR DECOMPOSING EVENTS FROM MANAGED INFRASTRUCTURES

Moogsoft, Inc., San Fran...

1. A system for clustering events, comprising:
a first engine that receives message data from a managed infrastructure that includes managed infrastructure physical hardware which supports the flow and processing of information;
a second engine that determines common characteristics of events and produces clusters of events relating to failures or errors in the managed infrastructure, where membership in a cluster indicates a common factor of the events that is a failure or an actionable problem in the physical hardware managed infrastructure directed to supporting the flow and processing of information, and producing events that relate to the managed infrastructure while converting the events into words and subsets used to group the events that relate to failures or errors in the managed infrastructure, including the managed infrastructure physical hardware;
a compare and merge engine that receives outputs from the second engine, the compare and merge engine communicating with one or more user interfaces in a situation room; and
wherein the second engine or a third engine uses a source address for each event to make a change to at least a portion of the managed infrastructure, and in response to producing events that relate to the managed infrastructure while converting the events into words and subsets, physical changes are made to the managed infrastructure physical hardware.

US Pat. No. 10,169,121

WORK FLOW MANAGEMENT FOR AN INFORMATION MANAGEMENT SYSTEM

Commvault Systems, Inc., ...

1. A computer-implemented method for distributing tasks, in an information management system, using first and second work queues managed by a storage manager, the method comprising:
receiving tasks to be performed in the information management system at the storage manager, which facilitates a transfer of data between primary storage devices and secondary storage devices in the information management system, and which schedules and manages the tasks for multiple, different client computing devices in the information management system;
scheduling information management policy tasks for a client computing device using the storage manager,
wherein scheduling the information management policy tasks includes populating the first work queue with the information management policy tasks,
wherein the information management policy tasks include data storage operations that are defined by a data storage policy and include creating secondary copies of data on secondary storage devices from primary copies of data stored on primary storage devices, restoring the secondary copies of data from the secondary storage devices to the primary storage devices, or retaining the secondary copies of data on the secondary storage devices, and
wherein the secondary storage devices are located remotely from the primary storage devices;
transmitting the information management policy tasks from the storage manager to the client computing device, in accordance with the first work queue;
scheduling information management system tasks for the client computing device using the storage manager,
wherein scheduling the information management system tasks includes populating the second work queue with the information management system tasks,
wherein the information management system tasks include tasks that are related to maintenance of software or hardware components of the information management system and that do not read or write data to the secondary storage devices;
transmitting the information management system tasks from the storage manager to the client computing device in accordance with the second work queue and based on an availability of the client computing device;
executing, at the client computing device, the transmitted information management policy tasks, in accordance with the first work queue, and the transmitted information management system tasks, in accordance with the second work queue;
determining parameters of an information management system operation failure; and
providing an alert of failure if at least one of the parameters exceeds a predetermined threshold.

US Pat. No. 10,169,120

REDUNDANT SOFTWARE STACK

International Business Ma...

1. A method for creating redundant software stacks, the method comprising:
identifying, by one or more computer processors, a first container with a set of rules and with a first software stack and a valid multipath configuration, wherein the first software stack is a first path of the valid multipath configuration;
creating, by one or more computer processors, a second container, wherein the second container has the same set of rules as the first container;
creating, by one or more computer processors, a second software stack in the second container, wherein the second software stack is a redundant software stack of the first software stack;
creating, by one or more computer processors, a second path from the first container to the second software stack, wherein the second path bypasses the first software stack;
identifying, by one or more computer processors, a data load on the first path that is creating latency; and
sending, by one or more computer processors, at least a portion of the data load on the first path to the second path to reduce the latency on the first path.

US Pat. No. 10,169,119

METHOD AND APPARATUS FOR IMPROVING RELIABILITY OF DIGITAL COMMUNICATIONS

1. A method performed by a radio comprising:
receiving a network identifier comprising a data unit identifier, the data unit identifier configured to identify a type of a data unit being communicated;
checking validity of the data unit identifier;
combinatorially processing a data unit for which the data unit identifier is uncertain according to a plurality of possible data unit identifier values;
selecting a most likely data unit identifier value according to results of the combinatorially processing the data unit; and
performing subsequent processing of the data unit in accordance with the most likely data unit identifier value.

US Pat. No. 10,169,118

REMOTE PRODUCT INVOCATION FRAMEWORK

INTERNATIONAL BUSINESS MA...

1. A method for remote product invocation comprising:
configuring an invocation framework, the invocation framework comprising an integration module and an endpoint/handler module;
wherein the integration module is configured to:
receive a source object;
format data from the source object based on requirements of a target machine supporting an external service that performs a desired operation;
utilize the endpoint/handler module, which comprises two distinct subcomponents, an endpoint and a handler, the endpoint to contain information for making a connection to an external service, and the handler to use the information from the endpoint to make connection to the external service and execute the desired operation using the data from the source object; and
with a logical management operation of the invocation framework, defining an action to be executed in response to receiving the source object so as to provide an interface between an entity submitting the source object to the integration module and the integration module.

US Pat. No. 10,169,117

INTERFACING BETWEEN A CALLER APPLICATION AND A SERVICE MODULE

International Business Ma...

1. A method for interfacing between a caller application and a service module, said method comprising:
receiving, by a processor of a computer system, a request for performing a transaction from the caller application, wherein the request comprises at least one caller application attribute describing the request;
subsequent to said receiving the request, said processor building a service module data structure pursuant to the received request, wherein the service module data structure comprises a generic service document and at least one service module attribute, and wherein said building the service module data structure comprises:
creating one or more containers in the generic service document, wherein each container of the one or more containers is respectively associated with each service module attribute of the at least one service module attribute in each mapping of the at least one mapping in a mapping table of the service module data structure, wherein each container comprises a data value for each service module attribute of the at least one service module attribute; and
subsequent to said creating the one or more containers in the generic service document, naming each container of said at least one container in the generic service document after each mapping of said at least one mapping in the mapping table;
subsequent to said building the service module data structure, said processor storing each service module attribute in a relational table of the service module data structure;
subsequent to said storing each service module attribute, said processor servicing the request within the service module data structure, wherein said servicing results in instantiating the generic service document, and wherein said servicing comprises: performing the transaction, retrieving each mapping of at least one mapping in the mapping table of the service module data structure, and reloading each container of at least one container from the relational table into respective containers of the generic service document according to each retrieved mapping; and
subsequent to said servicing, said processor returning the generic service document to the caller application.

US Pat. No. 10,169,116

IMPLEMENTING TEMPORARY MESSAGE QUEUES USING A SHARED MEDIUM

International Business Ma...

1. A method for implementing temporary message queues using a shared medium at a coupling facility shared between multiple systems each having a queue manager handling messages from the system's applications, the method carried out at the coupling facility comprising:
defining a list structure on the shared medium wherein the list structure has multiple lists;
providing a list which is allocated to a single queue manager in which message entries are located which belong to multiple shared temporary dynamic queues (STDQs) created by the single queue manager, wherein the message entries are located by reference to a key which determines a message entry's position in the list, the list including:
a list header which can be partitioned for multiple current STDQs by assignment of key ranges to message entries belonging to each current STDQ; and
a list control entry which holds information about the assignment of key ranges to the multiple current STDQs and shares the information with other queue managers using the STDQs, wherein the list control entry is updated by the single queue manager when an STDQ is created or deleted, wherein the list control entry includes a name of an STDQ which is in accordance with a queue naming convention and includes: an indication of the list structure, an indication of the single queue manager, a list header number, a start key on the list header, and a unique identifier for the STDQ; and
sending a message from a first system of the multiple systems to a second system of the multiple systems based on the list.

US Pat. No. 10,169,115

PREDICTING EXHAUSTED STORAGE FOR A BLOCKING API

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for operating a blocking application program interface (API) of an application, the method comprising:
receiving, from a requestor, a request for data from the application;
creating, by the blocking API of the application, a buffer for the data;
receiving, by the application, a data record corresponding to the request;
storing, by the blocking API, the data record in the buffer;
based on a determination that the buffer is full, providing, by the blocking API, data records in the buffer to the requestor; and
based on a determination that the buffer is not full, determining by the blocking API, based at least in part on an amount of available storage in the buffer, whether to provide the data records in the buffer to the requestor or to wait for another data record before providing the data records in the buffer to the requestor.
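
The buffering decision above (flush when full, otherwise decide from remaining space whether to flush now or wait for more records) can be sketched like this. The free-space threshold policy and all names are assumptions, not the patented method:

```python
# Hedged sketch of the blocking API's buffer: add() stores a record and
# returns the buffered records when they should be sent to the requestor,
# or None when the API should wait for another record.

class BlockingBuffer:
    def __init__(self, capacity, flush_when_free_below=1):
        self.capacity = capacity
        self.flush_when_free_below = flush_when_free_below
        self.records = []

    def add(self, record):
        self.records.append(record)
        free = self.capacity - len(self.records)
        if free == 0 or free < self.flush_when_free_below:
            out, self.records = self.records, []  # flush: full or nearly full
            return out
        return None                               # wait for more records
```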

US Pat. No. 10,169,114

PREDICTING EXHAUSTED STORAGE FOR A BLOCKING API

INTERNATIONAL BUSINESS MA...

8. A computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:
receive, from a requestor, a request for data from an application;
create, by a blocking API of the application, a buffer for the data;
receive, by the application, a data record corresponding to the request;
store, by the blocking API, the data record in the buffer;
based on a determination that the buffer is full, provide, by the blocking API, data records in the buffer to the requestor; and
based on a determination that the buffer is not full, determine by the blocking API, based at least in part on an amount of available storage in the buffer, whether to provide the data records in the buffer to the requestor or to wait for another data record before providing the data records in the buffer to the requestor.

US Pat. No. 10,169,113

STORAGE AND APPLICATION INTERCOMMUNICATION USING ACPI

International Business Ma...

1. A method for event-driven intercommunication, the method comprising:
issuing an interrupt based on a first event from a first kernel-mode module to a second kernel-mode module via an interface,
wherein the first event corresponds to an operational parameter of a first node based, at least in part, on a shared namespace accessible by the first kernel-mode module and the second kernel-mode module, wherein the operational parameter of the first node is an anticipated status of the first node, based on one or more non-consecutive operations scheduled to be executed by the first node,
wherein the first node is a storage subsystem of a computing device in communication with the second node, the second node comprising a user-level application stored externally and accessed through a communication network by the computing device, and
issuing, by the second kernel-mode module, a second event to a second node, wherein the second event corresponds to an object of the shared namespace.

US Pat. No. 10,169,112

EVENT SEQUENCE MANAGEMENT

International Business Ma...

1. A computer-implemented method comprising:
obtaining, by one or more processors, an action sequence that includes a plurality of actions executed on behalf of a plurality of users for achieving at least one goal;
generating, by one or more processors, from the obtained action sequence an event sequence that includes a plurality of events, wherein the event sequence includes respective time points at which respective actions associated with respective events are executed, and wherein each of the plurality of events is associated with a unique type of action from the plurality of actions;
determining, by one or more processors, an association model based on the generated event sequence, wherein the association model defines a chronological relationship among events associated with the at least one goal;
building, by one or more processors, a plurality of sub-models from a plurality of sub-sequences that are extracted from the event sequence, wherein at least one of the plurality of sub-models defines a chronological relationship among events associated with a portion of the at least one goal;
combining, by one or more processors, the plurality of sub-models into the association model;
for a specific event from the plurality of events included in the event sequence, extracting from the event sequence, by one or more processors, a group of sub-sequences that include the specific event;
determining, by one or more processors, a sub-model from the extracted group of sub-sequences based on respective time points included in the sub-sequences; and
selecting from the event sequence, by one or more processors, a sub-sequence that ends at the specific event for inclusion in the sub-model.

US Pat. No. 10,169,111

FLEXIBLE ARCHITECTURE FOR NOTIFYING APPLICATIONS OF STATE CHANGES

MICROSOFT TECHNOLOGY LICE...

1. A method for providing notifications to clients in response to state property changes, comprising:
receiving a notification request at an Application Program Interface (API) from a client application on the computing device to receive a notification in response to an event that originates on the computing device; wherein the event is associated with a change in a state property of the computing device; wherein the Application Program Interface (API) is utilized by the client application to register the notification request;
ensuring that the state property is registered via the API, wherein the API is useable to register for notifications regarding state properties that are updated by different components within the computing device;
determining when the state property changes, wherein determining when the state property changes comprises using the API to specify a batching operation on changes to the state property that occur within a predetermined time period; wherein a call to the API batching operation specifies a time period for which a value of the state property is to remain constant before notifying the client application of a change to the state property;
determining when the client should receive notification of the state property change; and
notifying the client of the state property change on the computing device when determined that the client should receive notification of the state property change;
wherein the call to the API batching operation reduces a number of instances of notifying the client of the state property change during the time period.
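
The batching behavior described above, where a notification fires only after the state property has held its value for the caller-specified quiet period, can be sketched as follows. The clock is injected to keep the example deterministic; the class and method names are invented:

```python
# Sketch of a debounce-style batching notifier: rapid changes within the
# quiet period collapse into a single notification once the value settles.

class BatchedNotifier:
    def __init__(self, quiet_period, clock):
        self.quiet_period = quiet_period
        self.clock = clock        # callable returning the current time
        self.value = None
        self.changed_at = None

    def update(self, value):
        """Record a state-property change."""
        if value != self.value:
            self.value = value
            self.changed_at = self.clock()

    def poll(self):
        """Return the value to notify if it has been stable for the quiet
        period, else None; notifies once per settled change."""
        if self.changed_at is None:
            return None
        if self.clock() - self.changed_at >= self.quiet_period:
            self.changed_at = None
            return self.value
        return None
```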

US Pat. No. 10,169,108

SPECULATIVE EXECUTION MANAGEMENT IN A COHERENT ACCELERATOR ARCHITECTURE

International Business Ma...

1. A computer-implemented method for speculative execution management in a coherent accelerator architecture, the method comprising:
detecting, with respect to a set of cache lines of a single shared memory in the coherent accelerator architecture, a first access request from a first Accelerator Functional Unit (AFU);
detecting, with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, a second access request from a second AFU; and
processing, by a speculative execution management engine using a speculative execution technique, the first and second access requests with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, wherein the speculative execution technique is configured to allow both the first and second AFUs to simultaneously access the set of cache lines without a locking mechanism;
determining a number of data entries in the set of cache lines of the single shared memory;
comparing the number of data entries in the set of cache lines of the single shared memory to a threshold number of data entries;
determining an elapsed period since a previous capture on the set of cache lines of the single shared memory;
comparing the elapsed period since the previous capture to a time threshold;
capturing, in response to the number of data entries exceeding the threshold number of data entries and in response to the elapsed period exceeding the time threshold, by the speculative execution management engine, a set of checkpoint roll-back data, wherein the set of checkpoint roll-back data includes an image of the first AFU at a first point in time, an image of the second AFU at the first point in time, and an image of the set of cache lines of the single shared memory at the first point in time;
evaluating the first and second access requests, wherein evaluating the first and second access requests further comprises:
identifying a first subset of target cache lines of the set of cache lines for the first access request;
identifying a second subset of target cache lines of the set of cache lines for the second access request, wherein the first and second subsets of target cache lines indicate read and write operations by the first and second access requests; and
determining, based on the identified first and second subset of target cache lines, whether a conflict exists;
in response to a determination that a conflict does not exist:
updating, in response to processing the first and second access requests with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, a host memory directory in a batch fashion which includes a set of update data for both the first and second access requests in a single set of data traffic; and
in response to a determination that a conflict exists:
identifying a subset of cache lines of the set of cache lines where the conflict exists;
determining a number of cache lines in the subset of cache lines;
comparing the number of cache lines to a severity threshold;
rolling-back, in response to the number of cache lines exceeding the severity threshold, based on the set of checkpoint roll-back data, the coherent accelerator architecture to a prior state, wherein the rolling-back includes rolling back only the subset of cache lines where the conflict exists;
retrying, without using the speculative execution technique and in a separate fashion in relation to the second access request, the first access request with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture; and
retrying, without using the speculative execution technique and in the separate fashion in relation to the first access request, the second access request with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture.

US Pat. No. 10,169,107

ALMOST FAIR BUSY LOCK

International Business Ma...

1. A method comprising:
publishing a current state of a lock and a claim non-atomically to the lock by a next owning thread, in an ordered set of threads, that has requested to own the lock, the claim comprising a structure capable of being read and written only in a single memory access,
obtaining, by each thread in the ordered set of threads, a ticket,
wherein the claim comprises an identifier of a ticket obtained by the next owning thread, and an indication that the next owning thread is claiming the lock;
comparing the ticket obtained by the next owning thread with a current ticket;
responsive to a match between the ticket obtained by the next owning thread and the current ticket, preventing thread monitoring preemptions; and
responsive to a match between the ticket obtained by the next owning thread and the current ticket, non-atomically acquiring the lock.
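
The ticket mechanism above can be sketched with a toy ticket lock: each thread takes a ticket, and the thread whose ticket matches the "now serving" counter acquires the lock. Python's GIL makes this a teaching model only; the single-memory-access atomicity constraints in the claim are exactly what a real implementation must handle in hardware-specific code.

```python
# Toy ticket-lock sketch (illustrative, not the patented mechanism).

import itertools

class TicketLock:
    def __init__(self):
        self._next_ticket = itertools.count()  # each thread draws a ticket
        self._now_serving = 0                  # the current ticket

    def take_ticket(self):
        return next(self._next_ticket)

    def try_acquire(self, ticket):
        """Acquire only when this thread's ticket matches the current ticket."""
        return ticket == self._now_serving

    def release(self):
        self._now_serving += 1                 # hand the lock to the next ticket
```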

US Pat. No. 10,169,106

METHOD FOR MANAGING CONTROL-LOSS PROCESSING DURING CRITICAL PROCESSING SECTIONS WHILE MAINTAINING TRANSACTION SCOPE INTEGRITY

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for managing critical section processing, the method comprising:
generating, using a processor, a transaction scope for a process in response to processing in a critical section;
collecting data related to the process;
generating, using the processor, a plurality of requests using the collected data;
storing the plurality of requests and data as a plurality of pending items chained together to form an ordered list in a private storage during critical section processing; and
processing, using the processor, the request based on the transaction scope, the processing comprising implementing a check of the process for any pending items in response to a transaction scope application programming interface being called or other processing relating to the pending items,
wherein the pending items are processed in the order they are created by using the ordered list, and
wherein one of the plurality of requests is a rollback request that includes at least one from a group consisting of removing the pending items from the private storage, releasing the private storage for all pending items, and resuming normal processing.

US Pat. No. 10,169,105

METHOD FOR SIMPLIFIED TASK-BASED RUNTIME FOR EFFICIENT PARALLEL COMPUTING

QUALCOMM Incorporated, S...

1. A method of scheduling and executing lightweight computational procedures in a computing device, comprising:
determining whether a first task pointer in a task queue is a simple task pointer for a lightweight computational procedure;
scheduling a first simple task for the lightweight computational procedure for execution by a first thread in response to determining that the first task pointer is a simple task pointer;
retrieving a kernel pointer for the lightweight computational procedure from an entry of a simple task table, wherein the entry is associated with the simple task pointer; and
directly executing the lightweight computational procedure as the first simple task.
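
The dispatch path above can be sketched with a task queue of pointers and a simple task table mapping each simple task pointer to its kernel, which is executed directly. The dict-based table and all names are assumptions for illustration:

```python
# Sketch of the simple-task fast path: a pointer found in the simple task
# table has its kernel looked up and executed directly on the calling
# thread, skipping the full scheduling machinery.

from collections import deque

simple_task_table = {}  # simple task pointer -> kernel function

def register_simple_task(ptr, kernel):
    simple_task_table[ptr] = kernel

def run_next(task_queue):
    """Pop the next task pointer; if it is a simple task pointer, execute
    its kernel directly, else fall back to full task scheduling (elided)."""
    ptr = task_queue.popleft()
    if ptr in simple_task_table:          # "is this a simple task pointer?"
        return simple_task_table[ptr]()   # direct lightweight execution
    raise NotImplementedError("full task scheduling path elided")
```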

US Pat. No. 10,169,104

VIRTUAL COMPUTING POWER MANAGEMENT

International Business Ma...

1. A computer-implemented method comprising:receiving a request for a computing resource for a computing task, wherein the computing task is active in a computing system, and wherein the computing system includes a current power consumption profile for the computing task and a historical power consumption profile for the computing task;
determining whether a peak in a current power consumption profile is expected based on the historical power consumption profile for the computing task; and
responsive to determining a peak is expected, delaying the request for the computing resource by: (i) initiating an allocation timeout, (ii) determining whether the allocation timeout is effective in reducing the current power consumption profile, (iii) responsive to determining the allocation timeout is not effective in reducing the current power consumption profile, granting the request for the computing resource, and (iv) updating the historical power consumption profile.
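
The four numbered steps of the delay branch can be sketched as below; the peak-prediction rule (a fraction of the historical maximum) and all names are assumptions of this illustration:

```python
def handle_request(current, history, sample_after_timeout, threshold=0.9):
    # peak expected if current consumption nears the historical maximum
    peak_expected = current >= threshold * max(history)
    delayed = effective = False
    if peak_expected:
        delayed = True                               # (i) initiate an allocation timeout
        effective = sample_after_timeout < current   # (ii) did it reduce consumption?
        # (iii) if not effective, the request is granted despite the peak
    history.append(current)                          # (iv) update the historical profile
    return {"granted": True, "delayed": delayed, "timeout_effective": effective}

history = [40, 55, 90]
outcome = handle_request(current=85, history=history, sample_after_timeout=88)
```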

US Pat. No. 10,169,103

MANAGING SPECULATIVE MEMORY ACCESS REQUESTS IN THE PRESENCE OF TRANSACTIONAL STORAGE ACCESSES

International Business Ma...

1. A processing unit, comprising:a processor core;
a cache memory coupled to the processor core; and
transactional memory logic that, responsive to receipt of a speculative memory access request at the cache memory that includes a target address of data speculatively requested for the processor core, determines whether the target address of the speculative memory access request matches an address in a set of addresses forming a store footprint of a memory transaction and, responsive to determining that the target address of the speculative memory access request matches an address in the set of addresses forming the store footprint of a memory transaction, causes the cache memory to reject servicing the speculative memory access request;
wherein:
the transactional memory logic determines whether the speculative memory access request is a transactional speculative memory access request or a non-transactional speculative memory access request;
the transactional memory logic causes the cache memory to reject servicing the speculative memory access request in response to determining the memory access request is a transactional speculative memory access request; and
the transactional memory logic fails the memory transaction in response to determining the speculative memory access request is a non-transactional speculative memory access request.
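
The claimed reject/fail decision can be sketched as follows; the set standing in for the store footprint and the dict standing in for transaction state are assumptions of this illustration:

```python
def handle_speculative_access(target, store_footprint, transactional, txn_state):
    if target not in store_footprint:
        return "serviced"          # no conflict with the transaction's stores
    if transactional:
        return "rejected"          # transactional speculative request: reject servicing
    txn_state["failed"] = True     # non-transactional: fail the memory transaction
    return "rejected"

footprint = {0x1000, 0x2000}       # store footprint of the memory transaction
txn = {"failed": False}
r1 = handle_speculative_access(0x3000, footprint, transactional=True, txn_state=txn)
r2 = handle_speculative_access(0x1000, footprint, transactional=True, txn_state=txn)
r3 = handle_speculative_access(0x2000, footprint, transactional=False, txn_state=txn)
```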

US Pat. No. 10,169,102

LOAD CALCULATION METHOD, LOAD CALCULATION PROGRAM, AND LOAD CALCULATION APPARATUS

FUJITSU LIMITED, Kawasak...

7. A load calculation method comprising:acquiring processor usage information including usage of a processor of a managed physical machine, and usage of the processor for each of a plurality of virtual machines generated by a hypervisor executed on the managed physical machine;
acquiring data transmission information including a data transmission amount for each of a plurality of virtual network interfaces used by the plurality of virtual machines;
determining a virtual network interface having a second correlation between overhead processor usage, obtained by subtracting a sum of processor usage of the plurality of virtual machines from processor usage of the managed physical machine, and a data transmission amount of the virtual network interface, to be a second virtual network interface that performs data transmission via the hypervisor among the plurality of virtual network interfaces;
determining a virtual network interface having a first correlation that is smaller than the second correlation between the overhead processor usage and the data transmission amount of the virtual network interface, to be a first virtual network interface that performs data transmission without routing through the hypervisor, among the plurality of virtual network interfaces;
calculating load information including amount of increase or decrease of the processor usage for data transmission in the managed physical machine, which a virtual machine to be added or deleted will be added to or deleted from, based on whether each of the virtual machines uses the first virtual network interface or the second virtual network interface; and
adding or deleting the virtual machine to be added or deleted to or from a managed physical machine that is selected based on the calculated amount of increase or decrease of the processor usage for data transmission.

US Pat. No. 10,169,101

SOFTWARE BASED COLLECTION OF PERFORMANCE METRICS FOR ALLOCATION ADJUSTMENT OF VIRTUAL RESOURCES

International Business Ma...

1. A method for collecting and processing performance metrics, the method comprising:assigning, by the one or more computer processors, an identifier corresponding to a first workload, wherein a first workload includes inbound input-output transactions from input-output devices and accelerators associated with a first virtual machine, wherein the first virtual machine is a container;
recording, by the one or more computer processors, resource consumption data, wherein the resource consumption data is selected from a group consisting of: one or more time stamps, one or more identified workloads, and one or more resource consumption estimates associated with the one or more time stamps, of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
creating, by the one or more computer processors, a relational association of the first workload and the first virtual machine to the resource consumption data of the at least one processor, wherein creating a relational association between the first workload and the first virtual machine further comprises using the calculated difference in resource consumption between the performance monitoring interrupt and a previous interrupt to track a change in resource consumption of the at least one processor over time;
determining, by the one or more computer processors, if the first workload is complete; responsive to determining that the first workload is not complete, calculating, by the one or more computer processors, a difference in recorded resource consumption data between the performance monitoring interrupt and a previous performance monitoring interrupt;
assigning, by the one or more computer processors, an identifier corresponding to a second workload associated with the first virtual machine;
recording, by the one or more computer processors, resource consumption data of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
creating, by the one or more computer processors, a relational association of the second workload and the first virtual machine to the resource consumption data of the at least one processor;
determining, by the one or more computer processors, if the second workload is complete;
responsive to determining that the second workload is complete, switching, by the one or more computer processors, the first virtual machine to a third workload;
aggregating, by the one or more computer processors, the recorded resource consumption data to provide one or more resource consumption estimates; and
notifying, by the one or more computer processors, a resource manager, wherein the resource manager is a hardware component, of a workload switch between the second workload and the third workload and data regarding changes in resource consumption of the at least one processor over time.
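
A simplified sketch of the per-interrupt recording and delta tracking the claim describes; the flat record layout and names are hypothetical:

```python
class WorkloadTracker:
    def __init__(self):
        self.records = []   # (timestamp, workload_id, consumption) per interrupt
        self.deltas = {}    # workload_id -> consumption deltas between interrupts

    def record(self, ts, workload_id, consumption):
        if self.records:
            # difference in consumption since the previous interrupt,
            # attributed to the workload active at this interrupt
            prev = self.records[-1][2]
            self.deltas.setdefault(workload_id, []).append(consumption - prev)
        self.records.append((ts, workload_id, consumption))

    def estimate(self, workload_id):
        # aggregate recorded deltas into a resource consumption estimate
        return sum(self.deltas.get(workload_id, []))

t = WorkloadTracker()
t.record(0, "w1", 100)
t.record(1, "w1", 130)
t.record(2, "w2", 150)
```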

US Pat. No. 10,169,100

SOFTWARE-DEFINED STORAGE CLUSTER UNIFIED FRONTEND

INTERNATIONAL BUSINESS MA...

1. A method, comprising:initializing a plurality of first layer software defined storage (SDS) clusters, each of the first layer SDS clusters comprising multiple storage nodes, each of the multiple storage nodes executing in separate independent virtual machines on respective separate independent servers;
defining a second layer SDS cluster comprising a combination of the first layer SDS clusters; and
managing, using a distributed management application, the second layer SDS cluster, the distributed management application comprising multiple management nodes executing on all of the servers; wherein each of the separate independent virtual machines comprises a first virtual machine, and wherein each server comprises a second virtual machine that executes a given management node; and wherein the distributed management application comprising the multiple management nodes executing on all of the servers provides a unified front-end interface for accessing each of the first layer SDS clusters and the second layer SDS cluster.

US Pat. No. 10,169,099

REDUCING REDUNDANT VALIDATIONS FOR LIVE OPERATING SYSTEM MIGRATION

International Business Ma...

1. A method to reduce redundant validations for live operating system migration to increase performance of a previous mobility event, the method comprising:monitoring, by a virtualization manager, for configuration changes in a validation inventory, wherein the validation inventory comprises data selected from a group consisting of: physical hardware data related to a previous mobility event, and virtual hardware data related to the previous mobility event, and wherein the live operating system migration is performed by a control point and the virtualization manager in combination, and wherein monitoring for the configuration changes in the validation inventory is based on determining configuration changes in one or more of a Virtual Fiber Channel (VFC), a Storage Area Network (SAN), and an external storage subsystem;
receiving a request to perform the live operating system migration of a logical partition (LPAR) from a source LPAR on a source computer to a target LPAR on a target computer, wherein the target LPAR is created using the validation inventory corresponding to the source LPAR,
receiving from a repository of validation inventory and based on the received request the validation inventory corresponding to the source LPAR;
in response to determining that the monitored validation inventory has changed, re-validating the received validation inventory prior to beginning the live operating system migration of the source LPAR to the target LPAR and re-caching the repository of validation inventory with the re-validated validation inventory, perform the live operating system migration of the source LPAR to the target LPAR;
and
in response to determining that the monitored validation inventory has not changed, perform the live operating system migration of the source LPAR to the target LPAR by using contents of the unchanged validation inventory, allowing the source LPAR to continually run during the live operating system migration, and without performing the re-validation of the received validation inventory.
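
The cache-or-revalidate decision can be sketched as below; the dict repository and the `validate` callable are assumptions of this illustration:

```python
validated_cache = {}  # repository of validation inventory, keyed by source LPAR

def migrate(lpar, inventory, changed, validate):
    # re-validate and re-cache only when the monitored inventory changed;
    # otherwise migrate using the cached, unchanged validation inventory
    if changed or lpar not in validated_cache:
        validated_cache[lpar] = validate(inventory)
    return {"migrated": True, "validated_with": validated_cache[lpar]}

validation_calls = []
def validate(inventory):
    validation_calls.append(inventory)
    return f"ok:{inventory}"

r1 = migrate("lpar1", "inv-v1", changed=True, validate=validate)
r2 = migrate("lpar1", "inv-v1", changed=False, validate=validate)  # no re-validation
```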

US Pat. No. 10,169,098

DATA RELOCATION IN GLOBAL STORAGE CLOUD ENVIRONMENTS

INTERNATIONAL BUSINESS MA...

1. A method of data relocation in global storage cloud environments, comprising:providing a computer system, being operable to:
mapping a user device to a home data server to store data of a user;
locating a data server near a travel location of the user based on one or more travel plans of the user, the one or more travel plans include one or more final travel locations and one or more intermediate travel locations including temporary locations the user travels prior to reaching the one or more final travel locations including a stopover or a layover;
locating the one or more intermediate travel locations during a user's travels using online travel web sites;
indexing and sorting one or more user-defined policies based on an owner and class of each policy of the one or more policies;
accessing the one or more user-defined policies by a primary key which includes an owner and a class of a desired policy out of the one or more user-defined policies;
filtering data from the stored data based on the one or more user-defined policies to determine which stored data is to be transferred; and
transferring the filtered data from the home data server near a home location of the user to the data server near the travel location.
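
The policy lookup and filtering steps can be sketched as below; the (owner, class) tuple key models the claimed primary key, and the policy contents are hypothetical:

```python
# user-defined policies accessed by a primary key of (owner, class)
policies = {
    ("alice", "photos"): {"transfer": True},
    ("alice", "finance"): {"transfer": False},
}

def filter_for_transfer(stored, owner):
    # keep only items whose policy allows transfer to the travel-location server
    out = []
    for item in stored:
        policy = policies.get((owner, item["class"]), {"transfer": False})
        if policy["transfer"]:
            out.append(item)
    return out

stored = [{"name": "img1", "class": "photos"}, {"name": "tax", "class": "finance"}]
to_transfer = filter_for_transfer(stored, "alice")
```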

US Pat. No. 10,169,097

DYNAMIC QUORUM FOR DISTRIBUTED SYSTEMS

Microsoft Technology Lice...

1. In a distributed computing system in which performance of a computing task within the distributed system is based at least in part upon each of a minimum number of nodes or devices providing authorization for performance of the computing task, a method of dynamically managing the minimum number of nodes or devices required to enable performance of the computing task, the method comprising:instantiating a dynamic quorum daemon in the distributed system, the dynamic quorum daemon running as a background task in the distributed system and the dynamic quorum daemon managing a set of nodes within the distributed system that are enabled to authorize performance of a computing task in the distributed system;
establishing that each of one or more nodes and zero or more devices in the distributed system is designated as an authorizing entity enabled to authorize performance of a computing task in the distributed system;
establishing a minimum number of the authorizing entities which are required to authorize performance of the computing task in order to allow performance of the computing task in the distributed system;
the dynamic quorum daemon determining that a state of a node or device in the distributed system has changed;
based on the determined change in the state of a node or device, the dynamic quorum daemon changing a designation of whether the node or device is an authorizing entity that is enabled to authorize performance of a computing task in the distributed system; and
based on the change of the designation of the node or device, the dynamic quorum daemon adjusting the minimum number of authorizing entities which are required to authorize performance of a computing task in order to allow performance of the computing task in the distributed system, the adjustment of the minimum number of authorizing entities being based at least in part upon a quorum policy which comprises one of node-majority with disk witness or node-majority with file share witness.
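
Under the node-majority-with-witness policy the claim recites, the witness contributes one extra vote and the quorum is a strict majority of total votes; the arithmetic below is a sketch of that rule, not the patented daemon:

```python
def quorum_minimum(authorizing_nodes, witness_present):
    # total votes = authorizing nodes plus one vote for the disk or
    # file-share witness; quorum is a strict majority
    total_votes = authorizing_nodes + (1 if witness_present else 0)
    return total_votes // 2 + 1

m_before = quorum_minimum(authorizing_nodes=4, witness_present=True)  # 5 votes
# the daemon detects a node going down and removes it from the
# authorizing set, then adjusts the minimum accordingly
m_after = quorum_minimum(authorizing_nodes=3, witness_present=True)   # 4 votes
```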

US Pat. No. 10,169,096

IDENTIFYING MULTIPLE RESOURCES TO PERFORM A SERVICE REQUEST

Hewlett-Packard Developme...

1. A method for scheduling a service request, the method comprising:receiving the service request including a latency associated with a publication of a result of the service request;
retrieving heterogeneous data upon the receipt of the service request;
filtering the heterogeneous data to obtain data relevant to the service request;
determining an amount of relevant data prior to computing a workload for the service request;
computing the workload for the service request;
identifying multiple resources, based on the latency and the workload, to perform the service request;
distributing the workload to the identified multiple resources; and
publishing the results of the service request in accordance with the latency.
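
The resource-identification step can be sketched as a capacity calculation; the fixed per-resource processing rate and even workload split are assumptions of this illustration:

```python
import math

def resources_needed(relevant_items, rate_per_resource, latency_s):
    # enough resources that the workload completes within the
    # latency associated with publishing the result
    capacity_per_resource = rate_per_resource * latency_s
    return math.ceil(relevant_items / capacity_per_resource)

def distribute(relevant_items, n):
    # spread the workload as evenly as possible over the n resources
    base, extra = divmod(relevant_items, n)
    return [base + (1 if i < extra else 0) for i in range(n)]

n = resources_needed(relevant_items=1000, rate_per_resource=50, latency_s=4)
shares = distribute(1000, n)
```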

US Pat. No. 10,169,092

SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT

International Business Ma...

1. A system for performing exclusive control among tasks, the system comprising:a lock status storage unit for storing update information that is updated in response to acquisition and release of an exclusive lock by one task and for storing task identification information for identifying a task that has acquired an exclusive lock;
an exclusive execution unit for causing processing in a critical section included in a first task to be performed by acquiring the exclusive lock, wherein the exclusive execution unit releases the exclusive lock and updates the update information after the processing in the critical section included in the first task; and
a nonexclusive execution unit for causing processing in a critical section included in a second task, the processing in the critical section included in the second task excluding acquiring the exclusive lock;
wherein, when the processing in the critical section by the second task has not been successfully completed when a predetermined condition has been reached and the update information has been changed, the nonexclusive execution unit acquires the exclusive lock and the processing in the critical section included in the second task is performed.
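
The claimed scheme resembles lock elision: a version counter serves as the update information, and a nonexclusive attempt falls back to the real lock if the counter changed underneath it. The sketch below is illustrative; the in-critical-section version bump merely simulates interference from another task:

```python
import threading

class ElidedLock:
    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0   # update information, bumped on acquire and release
        self.owner = None  # task identification information

    def exclusive(self, task_id, critical):
        with self._lock:
            self.owner = task_id
            self.version += 1          # acquisition updates the version
            critical()
            self.owner = None
            self.version += 1          # release updates the version

    def nonexclusive(self, task_id, critical):
        start = self.version
        critical()                     # run the critical section without the lock
        if self.version != start:      # update information changed: conflict
            self.exclusive(task_id, critical)  # redo under the exclusive lock
            return "retried"
        return "elided"

results = []
el = ElidedLock()
out1 = el.nonexclusive("t2", lambda: results.append("fast"))

def conflicted():
    results.append("slow")
    el.version += 1  # simulate another task acquiring the lock mid-section

out2 = el.nonexclusive("t2", conflicted)
```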

US Pat. No. 10,169,091

EFFICIENT MEMORY VIRTUALIZATION IN MULTI-THREADED PROCESSING UNITS

NVIDIA CORPORATION, Sant...

1. A method for scheduling tasks for execution in a parallel processor comprising two or more streaming multiprocessors, the method comprising:receiving a set of tasks associated with a first processing context related to a first page table included in a plurality of page tables;
selecting a first task that is associated with a first address space identifier (ASID) from the set of tasks and associated with the first processing context;
determining a minimum number of streaming multiprocessors included in the two or more streaming multiprocessors able to execute the tasks included in the set of tasks based on a number of tasks each streaming multiprocessor is able to execute concurrently, wherein the minimum number of streaming multiprocessors includes at least a first streaming multiprocessor;
assigning the tasks included in the set of tasks to the minimum number of streaming multiprocessors;
selecting the first streaming multiprocessor from the two or more streaming multiprocessors to execute the first task;
scheduling the first task to execute on the first streaming multiprocessor;
selecting a second task that is associated with a second ASID from the set of tasks and associated with the first processing context; and
scheduling the second task to execute on the first streaming multiprocessor, wherein scheduling the second task occurs prior to scheduling any other task from the set of tasks to execute on a second streaming multiprocessor included in the two or more streaming multiprocessors.
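
The minimum-multiprocessor computation and the fill-before-spill placement (which keeps same-context tasks co-resident) can be sketched as follows; the capacity numbers are hypothetical:

```python
import math

def assign_tasks(tasks, num_sms, per_sm_capacity):
    # minimum number of streaming multiprocessors able to run all tasks,
    # given how many tasks each can execute concurrently
    needed = math.ceil(len(tasks) / per_sm_capacity)
    assert needed <= num_sms, "not enough streaming multiprocessors"
    assignment = {sm: [] for sm in range(needed)}
    for i, task in enumerate(tasks):
        # fill each multiprocessor before spilling to the next, so tasks
        # from the same processing context share a multiprocessor
        assignment[i // per_sm_capacity].append(task)
    return assignment

a = assign_tasks(["t0", "t1", "t2"], num_sms=4, per_sm_capacity=2)
```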

US Pat. No. 10,169,089

COMPUTER AND QUALITY OF SERVICE CONTROL METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A computer, comprising:a system bus comprising a bus management device;
a processor coupled to the system bus;
a storage coupled to the system bus, the storage comprising an operating system, and the operating system comprising a scheduling subsystem; and
at least one other device coupled to the system bus, the processor being configured to invoke the scheduling subsystem to allocate, to at least one container of the computer, a container identity (ID) corresponding one-to-one to the at least one container, the processor or the at least one other device being configured to send a bus request carrying the container ID and a hardware device ID of a hardware device used by the at least one container indicated by the container ID to the system bus, the hardware device comprising a memory in the storage or the processor, and the bus request being sent to the system bus comprising sending, to the system bus using a first memory management (MM) subsystem in the operating system, the bus request carrying the container ID and the hardware device ID, and
the bus management device being configured to:
search, according to the bus request, for a quality of service (QoS) parameter corresponding to both the container ID and the hardware device ID, the QoS parameter being stored in the bus management device; and
configure, according to the found QoS parameter, a resource required when the at least one container corresponding to the QoS parameter uses the hardware device corresponding to the QoS parameter, the resource comprising at least one of bandwidth, a delay, or a priority.
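
The bus management device's lookup keyed by both IDs can be sketched as a table indexed by a (container ID, hardware device ID) pair; the table contents are hypothetical:

```python
# QoS parameters stored in the bus management device, keyed by
# (container ID, hardware device ID)
qos_table = {("c1", "mem0"): {"bandwidth": 100, "delay": 5, "priority": 2}}

def handle_bus_request(container_id, hw_device_id):
    # search for the QoS parameter corresponding to both IDs, then
    # configure the resource the container may use on that device
    qos = qos_table.get((container_id, hw_device_id))
    if qos is None:
        return None
    return {"device": hw_device_id, "configured": qos}

r = handle_bus_request("c1", "mem0")
```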

US Pat. No. 10,169,088

LOCKLESS FREE MEMORY BALLOONING FOR VIRTUAL MACHINES

1. A method of managing memory, comprising:receiving, by a hypervisor, an inflate notification from a guest running on a virtual machine, the virtual machine and the hypervisor running on a host machine, the inflate notification including a first identifier corresponding to a first time, and the inflate notification indicating that a set of guest memory pages is unused by the guest at the first time;
determining whether the first identifier precedes a last identifier corresponding to a second time and included in a previously sent inflate request to the guest;
if the first identifier does not precede the last identifier:
for a first subset of the set of guest memory pages modified since the first time, determining, by the hypervisor, to not reclaim a first set of host memory pages corresponding to the first subset of guest memory pages, and
for a second subset of the set of guest memory pages not modified since the first time, reclaiming, by the hypervisor, a second set of host memory pages corresponding to the second subset of guest memory pages; and
if the first identifier precedes the last identifier, discarding the inflate notification, wherein discarding the inflate notification includes determining to not reclaim a set of host memory pages corresponding to the set of guest memory pages specified in the inflate notification.
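
The staleness check and the modified-page split can be sketched as below; representing the notification as a dict and the modified set as plain page names are assumptions of this illustration:

```python
def handle_inflate(notification, last_sent_id, modified_since):
    # a notification older than the last inflate request is stale:
    # discard it and reclaim nothing
    if notification["id"] < last_sent_id:
        return []
    # reclaim only host pages whose guest pages were not modified
    # since the time the notification refers to
    return [p for p in notification["pages"] if p not in modified_since]

reclaimed = handle_inflate({"id": 7, "pages": ["p1", "p2", "p3"]},
                           last_sent_id=5, modified_since={"p2"})
stale = handle_inflate({"id": 3, "pages": ["p4"]},
                       last_sent_id=5, modified_since=set())
```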

US Pat. No. 10,169,087

TECHNIQUE FOR PRESERVING MEMORY AFFINITY IN A NON-UNIFORM MEMORY ACCESS DATA PROCESSING SYSTEM

International Business Ma...

1. A non-transitory computer readable device having a computer program product for preserving memory affinity in a non-uniform memory access data processing system, said non-transitory computer readable device comprising:program code for, in response to a request for memory access to a page within a first memory affinity domain, determining whether or not said request is initiated by a remote processor associated with a second memory affinity domain;
program code for, in response to a determination that said request is initiated by a remote processor associated with a second memory affinity domain, determining whether or not a page migration tracking module associated with said first memory affinity domain includes an entry for said remote processor;
program code for, in response to a determination that said first page migration tracking module includes an entry for said remote processor, incrementing an access counter associated with said entry within said page migration tracking module;
program code for determining whether or not there is a page ID match with an entry within said page migration tracking module;
program code for, in response to a determination that there is no page ID match with any entry within said page migration tracking module, selecting an entry within said page migration tracking module and providing said entry with a new page ID and a new memory affinity ID;
program code for, in response to the determination that there is a page ID match with an entry within said page migration tracking module, determining whether or not there is a memory affinity ID match with said entry having the page ID field match;
program code for in response to a determination that there is no memory affinity ID match, updating said entry with the page ID field match with a new memory affinity ID;
program code for, in response to a determination that there is a memory affinity ID match, incrementing an access counter of said entry having the page ID field match;
program code for determining whether or not said access counter has reached a predetermined threshold; and
program code for, in response to a determination that said access counter has reached a predetermined threshold, migrating said page from said first memory affinity domain to said second memory affinity domain.
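
A condensed sketch of the tracking-module bookkeeping: each remote processor's entry carries a page ID, a memory affinity ID, and an access counter, and a mismatch on either ID reseeds the entry. This collapses the claim's separate match steps into one check and is illustrative only:

```python
class MigrationTracker:
    def __init__(self, threshold):
        self.threshold = threshold
        self.entries = {}  # remote_cpu -> [page_id, affinity_id, access_counter]

    def remote_access(self, remote_cpu, page_id, affinity_id):
        entry = self.entries.get(remote_cpu)
        if entry is None or entry[0] != page_id or entry[1] != affinity_id:
            # no match: (re)seed the entry with the new page and affinity IDs
            self.entries[remote_cpu] = [page_id, affinity_id, 1]
            return False
        entry[2] += 1
        # migrate the page once the access counter reaches the threshold
        return entry[2] >= self.threshold

t = MigrationTracker(threshold=3)
hits = [t.remote_access(1, "pg7", "dom2") for _ in range(3)]
```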

US Pat. No. 10,169,086

CONFIGURATION MANAGEMENT FOR A SHARED POOL OF CONFIGURABLE COMPUTING RESOURCES

International Business Ma...

1. A computer-implemented method of managing a shared pool of configurable computing resources, the method comprising:collecting a set of scaling factor data related to an active workload on a configuration of the shared pool of configurable computing resources, wherein the set of scaling factor data includes an actual number of transactions per time period being processed;
ascertaining a set of workload resource data associated with the active workload, wherein the set of workload resource data includes a hardware configuration template specifying a processor requirement and a memory requirement for an expected number of transactions per time period to be processed;
computing, by subtracting the actual number of transactions per time period from the expected number of transactions per time period, a transaction per time period difference value;
comparing the transaction per time period difference value to a transaction per time period difference threshold to determine whether the transaction per time period difference value exceeds the transaction per time period difference threshold;
detecting, in response to a determination that the transaction per time period difference value exceeds the transaction per time period difference threshold, a triggering event; and
performing, in response to detecting the triggering event, a configuration action with respect to the configuration of the shared pool of configurable computing resources, wherein the configuration action includes:
reconfiguring the configuration of the shared pool of configurable computing resources.
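
The triggering computation reduces to a subtraction and a threshold comparison; the numbers below are hypothetical:

```python
def check_scaling(actual_tps, expected_tps, diff_threshold):
    # transactions-per-period difference: expected minus actual
    diff = expected_tps - actual_tps
    # triggering event detected when the difference exceeds the threshold
    triggered = diff > diff_threshold
    return {"difference": diff, "reconfigure": triggered}

r = check_scaling(actual_tps=400, expected_tps=1000, diff_threshold=500)
```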

US Pat. No. 10,169,085

DISTRIBUTED COMPUTING OF A TASK UTILIZING A COPY OF AN ORIGINAL FILE STORED ON A RECOVERY SITE AND BASED ON FILE MODIFICATION TIMES

International Business Ma...

1. A computer-implemented method comprising:receiving a request to perform a task indicating at least a first file used to perform the task, wherein the first file is modified at a first update time and is stored on a production site comprising hardware resources configured to store original data and process tasks associated with the original data, and wherein a first copy of the first file is created at a first copy time and stored on a recovery site comprising hardware resources configured to store copies of the original data and process tasks associated with the copies of the original data;
determining the task is a candidate for processing on the recovery site by:
determining that processing the task comprises reading the first file and creating a result file based on the first file;
determining that processing the task can be completed without user input;
determining that the task does not define a physical location for processing the task; and
determining that the task does not alter the first file as a result of performing the task;
determining the first file and the first copy of the first file match by determining the first update time is earlier than the first copy time, wherein determining the first file and the first copy of the first file match further comprises:
determining a first difference between a current time and the first copy time is above a time threshold, wherein the first copy time indicates a time the first copy of the first file began to be created, wherein the time threshold comprises an amount of time greater than an amount of time used to create the first copy of the first file;
performing the task using resources of the recovery site and using the first copy of the first file stored in the recovery site in response to determining that the first file and the first copy of the first file match, wherein performing the task further comprises:
selecting a first resource on the recovery site for processing the task based on the first resource having a first processing utilization below a processing utilization threshold, wherein the first processing utilization comprises an ongoing processing amount in the first resource divided by a processing capacity amount of the first resource, wherein the processing utilization threshold comprises 80%;
selecting the first resource on the recovery site for processing the task further based on the first resource having a first network speed above a network speed threshold, wherein the first network speed is calculated by sending a test file from the first resource to a second resource connected to the first resource via a network and measuring a transfer speed of the test file, wherein the network speed threshold comprises five megabytes per second; and
storing a result file in the recovery site in response to performing the task; and
outputting a file path in response to performing the task, wherein the file path indicates a location of the result file stored in the recovery site.
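
The candidacy checks and the copy-freshness test can be sketched as two predicates; the task dict layout and the numeric timestamps are assumptions of this illustration:

```python
def is_candidate(task):
    # the four claimed checks: read-only, no user input, no fixed
    # processing location, and the input file is not altered
    return (task["read_only"] and not task["needs_user_input"]
            and task["location"] is None and not task["alters_input"])

def copy_matches(update_time, copy_time, now, copy_duration):
    # the copy started after the file's last modification, and long
    # enough ago that the copy must have finished
    return update_time < copy_time and (now - copy_time) > copy_duration

task = {"read_only": True, "needs_user_input": False,
        "location": None, "alters_input": False}
ok = is_candidate(task) and copy_matches(update_time=100, copy_time=150,
                                         now=400, copy_duration=60)
```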

US Pat. No. 10,169,084

DEEP LEARNING VIA DYNAMIC ROOT SOLVERS

International Business Ma...

1. A computer implemented method comprising:identifying, by a host computer processor, graphic processor units (GPUs) that are available (available GPUs);
identifying, by the host computer processor, GPUs that are idle (initially idle GPUs) among the available GPUs for an initial iteration of deep learning;
choosing, by the host computer processor, one of the initially idle GPUs as an initial root solver GPU for the initial iteration;
initializing, by the host computer processor, weight data for an initial set of multidimensional data;
transmitting, by the host computer processor, the initial set of multidimensional data to the available GPUs;
forming, by the host computer processor, an initial set of GPUs into an initial binary tree architecture, wherein the initial set comprises the initially idle GPUs and the initial root solver GPU, wherein the initial root solver GPU is the root of the initial binary tree architecture;
calculating, by the initial set of GPUs, initial gradients and a set of initial adjusted weight data with respect to the weight data and the initial set of multidimensional data via the initial binary tree architecture;
in response to the calculating the initial gradients and the initial adjusted weight data, identifying, by the host computer processor, a first GPU among the available GPUs to become idle (first currently idle GPU) for a current iteration of deep learning;
choosing, by the host computer processor, the first currently idle GPU as a current root solver GPU for the current iteration;
transmitting, by the host computer processor, a current set of multidimensional data to the current root solver GPU;
in response to the identifying the first currently idle GPU, identifying, by the host computer processor, additional GPUs that are currently idle (additional currently idle GPUs) among the available GPUs;
transmitting, by the host computer processor, the current set of multidimensional data to the additional currently idle GPUs;
forming, by the host computer processor, a current set of GPUs into a current binary tree architecture, wherein the current set comprises the additional currently idle GPUs and the current root solver GPU, wherein the current root solver GPU is the root of the current binary tree architecture;
calculating, by the current set of GPUs, current gradients and a set of current adjusted weight data with respect to at least the weight data and the current set of multidimensional data via the current binary tree architecture;
in response to the initial root solver GPU receiving a set of calculated initial adjusted weight data, transmitting, by the initial root solver GPU, an initial update to the weight data to the available GPUs;
in response to the current root solver GPU receiving a set of current initial adjusted weight data, transmitting, by the current root solver GPU, a current update to the weight data to the available GPUs; and
repeating the identifying, the choosing, the transmitting, the forming, and the calculating with respect to the weight data, updates to the weight data, and subsequent sets of multidimensional data.
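
The tree-forming step can be sketched with a heap-style list layout, root first; treating the idle GPU list as the insertion order is an assumption of this illustration:

```python
def form_binary_tree(root_solver, idle_gpus):
    # root solver at index 0; children of index i at 2i+1 and 2i+2
    nodes = [root_solver] + list(idle_gpus)
    children = {gpu: [] for gpu in nodes}
    for i, gpu in enumerate(nodes):
        for c in (2 * i + 1, 2 * i + 2):
            if c < len(nodes):
                children[gpu].append(nodes[c])
    return children

# the first GPU to become idle is chosen as the current root solver
tree = form_binary_tree("gpu2", ["gpu0", "gpu1", "gpu3"])
```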

US Pat. No. 10,169,083

SCALABLE METHOD FOR OPTIMIZING INFORMATION PATHWAY

EMC IP Holding Company LL...

1. An apparatus comprising:a receiving module configured to receive a request for task execution at a central processing node for worldwide data; wherein the central processing node is connected to sub-processing network nodes; wherein the sub-processing network nodes are grouped into clusters; wherein each cluster has a distributed file system mapping out network nodes for each respective cluster; wherein each cluster stores a subset of the worldwide data; and wherein each cluster is enabled to use the network nodes of the cluster to perform parallel processing; wherein the central processing node is communicatively coupled to a global distributed file system that maps over each of the cluster's distributed file systems to enable orchestration between the clusters;
a dividing module configured to divide by a worldwide job tracker the request for task execution into worldwide task trackers to be distributed to sub-processing network nodes of the clusters; wherein the network sub-nodes manage a portion of the worldwide data for each respective cluster; wherein each worldwide task tracker maintains records of sub-activities executed as part of the worldwide job;
a transmitting module configured to transmit to each of the sub-processing network nodes for each respective cluster the respective portion of the divided task execution by assigning each worldwide task tracker corresponding to the respective portion to each respective cluster; and
a leveraging module configured to generate a graph layout of data pathways, the pathways calculated based upon physical distance between the processing nodes and bandwidth constraints, the leveraging module further configured to distribute task execution based upon the processing power of the processing nodes, graph layout, and the size of data processed by the sub-processing network nodes to reduce data movement between the central processing node and the sub-processing nodes.
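
The leveraging module's placement decision can be sketched with a toy cost model in which an edge's cost grows with physical distance and shrinks with available bandwidth, and a task is placed where moving its data is cheapest. The cost function and cluster names below are invented for illustration; the patent does not specify the formula.

```python
def edge_cost(distance_km, bandwidth_gbps, data_gb):
    """Illustrative pathway cost: transfer time plus a distance penalty."""
    return data_gb / bandwidth_gbps + 0.01 * distance_km

def place_task(clusters, data_gb):
    """clusters: {name: (distance_km, bandwidth_gbps)}.
    Choose the cluster minimizing data movement cost to the central node."""
    return min(clusters, key=lambda c: edge_cost(*clusters[c], data_gb))
```

With two clusters at equal bandwidth, the nearer one wins, reducing data movement between the central processing node and the sub-processing nodes as claimed.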

US Pat. No. 10,169,082

ACCESSING DATA IN ACCORDANCE WITH AN EXECUTION DEADLINE

INTERNATIONAL BUSINESS MA...

1. A method for execution by a processing module of a dispersed storage and task (DST) execution unit that includes a processor, the method comprises:receiving a data request for execution by the DST execution unit, the data request including an execution deadline;
comparing the execution deadline to a current time, which includes:
determining an estimated un-accelerated processing duration and an estimated accelerated processing duration;
determining that the execution deadline compares favorably to the current time when an addition of the estimated un-accelerated processing duration to the current time does not exceed the execution deadline; and
determining that the execution deadline compares favorably to the current time when an addition of the estimated accelerated processing duration to the current time does not exceed the execution deadline and the addition of the estimated un-accelerated processing duration to the current time exceeds the execution deadline;
generating an error response when the execution deadline compares unfavorably to the current time; and
when the execution deadline compares favorably to the current time:
determining a priority level based on the execution deadline; and
executing the data request in accordance with the priority level, wherein executing the data request includes accelerating the executing of the data request when the addition of the estimated accelerated processing duration to the current time does not exceed the execution deadline and the addition of the estimated un-accelerated processing duration to the current time exceeds the execution deadline.
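
The claim's three-way deadline comparison reduces to a small decision function: accept un-accelerated if the un-accelerated estimate fits, accelerate if only the accelerated estimate fits, and return an error otherwise. A minimal sketch, with invented names and return labels:

```python
def classify_request(now, deadline, est_unaccelerated, est_accelerated):
    """Return how the DST execution unit would handle the request:
    'normal' / 'accelerate' when the deadline compares favorably,
    'error' when it compares unfavorably."""
    if now + est_unaccelerated <= deadline:
        return "normal"        # un-accelerated processing duration fits
    if now + est_accelerated <= deadline:
        return "accelerate"    # only the accelerated duration fits
    return "error"             # neither estimate meets the deadline
```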

US Pat. No. 10,169,081

USE OF CONCURRENT TIME BUCKET GENERATIONS FOR SCALABLE SCHEDULING OF OPERATIONS IN A COMPUTER SYSTEM

Oracle International Corp...

1. A non-transitory computer readable medium comprising instructions, which when executed by one or more hardware processors, cause performance of operations comprising:determining a time for performing an action on a first object stored in a data repository, wherein the action comprises one of:
deleting the first object from the data repository,
modifying content of the first object, or
transferring the first object from one location in the repository to another location in the repository;
responsive to determining, at runtime, that a first time bucket generation of a plurality of time bucket generations is a time bucket generation last-configured for storing references included in an object processing index: selecting the first time bucket generation of the plurality of time bucket generations for storing a first reference to the first object, wherein each time bucket generation comprises time buckets that are (a) of a same interval size and (b) correspond to different time periods;
wherein the object processing index comprises references to objects that are to be processed at a particular time;
responsive to selecting the first time bucket generation: selecting a first time bucket of the first time bucket generation based on the time for performing the action on the first object;
storing the first reference to the first object in the first time bucket of the first time bucket generation;
adding a second time bucket generation to the plurality of time bucket generations by configuring the second time bucket generation for the object processing index;
wherein the first time bucket generation and the second time bucket generation are concurrently configured for the object processing index on a temporary basis while the object processing index is transitioned from using the first time bucket generation to using the second time bucket generation;
determining a time for performing an action on a second object stored in the data repository, wherein the action comprises one of:
deleting the second object from the data repository,
modifying the content of the second object, or
transferring the second object from one location in the repository to another location in the repository;
responsive to determining, at runtime, that the second time bucket generation of the plurality of time bucket generations is the time bucket generation last-configured for storing references included in the object processing index: selecting the second time bucket generation of the plurality of time bucket generations for storing a second reference to the second object;
responsive to selecting the second time bucket generation: selecting a second time bucket of the second time bucket generation based on the time for performing the action on the second object;
storing the second reference to the second object in the second time bucket of the second time bucket generation,
wherein the first object corresponding to the first time bucket in the first time bucket generation and the second object corresponding to the second time bucket in the second time bucket generation are processed in accordance with the object processing index.
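
The concurrent-generation scheme can be sketched as follows: each generation has one fixed bucket interval, a reference lands in the bucket covering its action time, and new references always go to the generation last-configured for the index, so both generations coexist only during the transition. Class and method names here are illustrative assumptions.

```python
from collections import defaultdict

class TimeBucketGeneration:
    def __init__(self, interval):
        self.interval = interval             # same size for all buckets
        self.buckets = defaultdict(list)     # bucket index -> object references

    def store(self, ref, action_time):
        """Select the bucket whose time period covers the action time."""
        self.buckets[action_time // self.interval].append(ref)

class ObjectProcessingIndex:
    def __init__(self, interval):
        self.generations = [TimeBucketGeneration(interval)]

    def add_generation(self, interval):
        # Both generations are concurrently configured while the index
        # transitions from the old generation to the new one.
        self.generations.append(TimeBucketGeneration(interval))

    def schedule(self, ref, action_time):
        # New references go to the last-configured generation.
        self.generations[-1].store(ref, action_time)
```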

US Pat. No. 10,169,080

METHOD FOR WORK SCHEDULING IN A MULTI-CHIP SYSTEM

Cavium, LLC, Santa Clara...

1. A method of processing work items in a multi-chip system, the method comprising:designating, by a work source component associated with a source chip device, a work item to a scheduler processor for scheduling, the source chip device being one of multiple chip devices of the multi-chip system, the work source component comprising a core processor or a coprocessor configured to create work items;
assigning, by the scheduler processor, the work item to a destination chip device of the multiple chip devices for processing, the scheduler processor being one of one or more scheduler processors each associated with a corresponding chip device of the multiple chip devices.

US Pat. No. 10,169,079

TASK STATUS TRACKING AND UPDATE SYSTEM

INTERNATIONAL BUSINESS MA...

1. A method for providing status updates while collaboratively resolving an issue, the method comprising:receiving, using a processing device, an electronic text-based message from a user;
identifying, using the processing device, one or more key phrases in the electronic text-based message, wherein the one or more key phrases are identified based at least in part on training a neural network using training data and applying the neural network to the electronic text-based message, wherein the training data includes key phrases manually indicated by a user;
in response to identifying the one or more key phrases in the received electronic text-based message, automatically displaying, by the processing device, the one or more key phrases to the user with highlighted text;
receiving, by the processing device, a selection from a user of a displayed key phrase from the one or more key phrases that were displayed with highlighted text; and
in response to the user selecting the displayed key phrase from the one or more key phrases displayed with highlighted text, providing at least one status-based suggestion to the user to change a status milestone associated with a problem resolution based on the user selected key phrase;
wherein the providing of the at least one status-based suggestion to the user based on the user selected key phrase comprises:
building a table to map a key phrase to one or more status identifiers;
mapping the key phrase to one or more status identifiers to associate the key phrase with the at least one status-based suggestion;
in response to the user selecting the displayed key phrase having highlighted text, matching the highlighted text to the key phrase of the table to identify the at least one status-based suggestion that is associated with the matching key phrase in the table and then displaying the at least one status-based suggestion to the user for selection; and
displaying a corresponding status milestone based on the user selecting from the at least one status-based suggestion.
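
The table-building and lookup steps in the claim amount to a key-phrase-to-status mapping. A minimal sketch, where the example phrases and status identifiers are invented:

```python
def build_status_table(pairs):
    """Build the claimed table mapping each key phrase to its status identifiers."""
    table = {}
    for phrase, status in pairs:
        table.setdefault(phrase, []).append(status)
    return table

def suggestions_for(table, selected_phrase):
    """Match the user-selected highlighted phrase against the table and return
    the associated status-based suggestions (empty if no match)."""
    return table.get(selected_phrase, [])
```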

US Pat. No. 10,169,078

MANAGING THREAD EXECUTION IN A MULTITASKING COMPUTING ENVIRONMENT

International Business Ma...

1. A method for managing thread execution, the method comprising:predicting, by one or more computer processors, an amount of processor usage that would be used by a thread in a computing system for execution of a critical section of code, where the critical section of code is defined by a starting marker and an ending marker in a program code that contains the critical section of code;
determining that the thread has a sufficient processor usage allowance to execute the critical section of code to completion; and
in response to determining that the thread has sufficient processor usage allowance to execute the critical section of code to completion:
scheduling, by one or more computer processors, the thread for execution of the critical section of code;
receiving, by one or more computer processors, a request to deschedule the thread, wherein the request is made in response to determining that the thread has insufficient processor usage allowance to continue execution;
responsive to receiving a request to deschedule the thread, scheduling, by one or more computer processors, the thread to complete execution of the critical section of code;
responsive to scheduling the thread to complete execution, determining, by one or more computer processors, processor usage debt accumulated by the thread;
determining that the thread has completed execution of the critical section of code;
responsive to determining that the thread has completed execution of the critical section of code, suspending the thread; and
preventing further execution of the thread until after the processor has executed one or more other threads for an amount of time equal to the amount of processor usage debt accumulated by the thread;
wherein:
the predicted amount of processor usage is a percentage of total execution capacity of the processor that the thread is predicted to use during execution of the critical section of code;
the processor usage debt comprises an amount of time for which the thread is executing while the thread has both insufficient processor usage allowance to continue execution and is executing the critical section of code; and
the one or more computer processors are one or more field programmable gate arrays.

US Pat. No. 10,169,077

SYSTEMS, DEVICES, AND METHODS FOR MAINFRAME DATA MANAGEMENT

United Services Automobil...

1. A method comprising:loading, by a processor, a utility program onto an operating system of a mainframe, wherein the operating system hosts an application that includes a plurality of running jobs, wherein the utility program includes a set of configuration metadata for the application;
configuring, by the processor, the utility program such that the utility program is configured to receive a user input from a workstation that is in communication with the mainframe and the utility program is configured to interface with the application based on the set of configuration metadata responsive to the user input, wherein the mainframe is in communication with the workstation;
creating, by the processor, a job via the utility program interfacing with the application based on the set of configuration metadata responsive to the user input;
submitting, by the processor, the job to the application via the utility program interfacing with the application based on the set of configuration metadata responsive to the user input;
querying, by the processor, the job at the application before completion for an error via the utility program based on the set of configuration metadata;
triggering, by the processor, an alert based on the set of configuration metadata via the utility program responsive to the error; and
outputting, by the processor, the alert to the workstation in communication with the utility program.

US Pat. No. 10,169,076

DISTRIBUTED BATCH JOB PROMOTION WITHIN ENTERPRISE COMPUTING ENVIRONMENTS

International Business Ma...

1. A computer-implemented method for batch code promotion between enterprise scheduling system environments, the method comprising the steps of:connecting, by one or more processors, a graphical interface of an entity to one or more enterprise scheduling environments for promoting changes of batch code of the entity between the one or more enterprise scheduling environments, the batch code is processed during a batch job, the batch job is a low priority job, wherein the low priority batch job is processed by the one or more enterprise scheduling environments;
mapping, by the one or more processors, parameters to batch code fields of the batch code that change from a first scheduling level of the one or more enterprise scheduling environments to a second scheduling level of the one or more enterprise scheduling environments to create a mapping table to the batch code fields that change between the first scheduling level and the second scheduling level, wherein the parameters include at least one batch job scheduling object identification, wherein the scheduling object identification further includes a container for all low priority batch jobs, an identification of the batch code, and an identification of the network workstations of the one or more enterprise scheduling environments for promoting the batch code from the first scheduling level to the second scheduling level;
generating a backup, in memory, of the mapping table to the batch code fields;
in response to an action on the graphical interface to promote the changes of the batch code fields between the mapped parameters of the first scheduling level and the second scheduling level, assigning, by the one or more processors, identification to the changes of the batch code fields;
in response to a request to promote the identified changes of the batch code fields, promoting, by the one or more processors, the requested identified changes from the first scheduling level to the second scheduling level using the mapped parameters of the first scheduling level and the second scheduling level; and
correlating, by the one or more processors, the mapping table of changed batch code fields of the first scheduling level with the mapping table of changed batch code fields of the second scheduling level, wherein the correlated mapping table of the batch code fields that change between the first scheduling level and the second scheduling level includes metadata of batch code for each one of the first and the second scheduling levels, and wherein the metadata of the batch code for each one of the first and the second scheduling levels is identified for promoting changes of the batch code fields from the first scheduling level to the second scheduling level, further including the steps of:
creating the batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
verifying the created batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
operating the verified batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
generating a second mapping table based on the mapped parameters and the created, verified, and operated batch job of batch code fields; and
promoting the operated batch job between the second scheduling level and a third scheduling level based on the second mapping table.

US Pat. No. 10,169,075

METHOD FOR PROCESSING INTERRUPT BY VIRTUALIZATION PLATFORM, AND RELATED DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A method for processing an interrupt by a virtualization platform, wherein the method is applied to a computing node, wherein the computing node comprises a physical hardware layer, a host running at the physical hardware layer, at least one virtual machine (VM) running on the host, and virtual hardware that is virtualized on the at least one VM, wherein the physical hardware layer comprises X physical central processing units (pCPUs) and Y physical input/output devices, wherein the virtual hardware comprises Z virtual central processing units (vCPUs), wherein the Y physical input/output devices comprise a jth physical input/output device, wherein the at least one VM comprises a kth VM, wherein the jth physical input/output device directs to the kth VM, wherein the method is executed by the host, and wherein the method comprises:determining an nth pCPU from U target pCPUs when an ith physical interrupt occurs in the jth physical input/output device, wherein the U target pCPUs are pCPUs that comprise an affinity relationship with both the ith physical interrupt and V target vCPUs, wherein the V target vCPUs are vCPUs that are virtualized on the kth VM and comprise an affinity relationship with an ith virtual interrupt, wherein the ith virtual interrupt corresponds to the ith physical interrupt, wherein the X pCPUs comprise the U target pCPUs, and wherein the Z vCPUs comprise the V target vCPUs;
setting the nth pCPU to process the ith physical interrupt;
determining the ith virtual interrupt according to the ith physical interrupt; and
determining an mth vCPU from the V target vCPUs such that the kth VM uses the mth vCPU to execute the ith virtual interrupt,
wherein X, Y, and Z are positive integers greater than 1, wherein U is a positive integer greater than or equal to 1 and less than or equal to X, wherein V is a positive integer greater than or equal to 1 and less than or equal to Z, and wherein i, j, k, m, and n are positive integers.
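
One way to read the selection of the U target pCPUs is as a set intersection: take the pCPUs with an affinity to the physical interrupt, intersect them with the pCPUs any target vCPU has affinity to, and pick one to service the interrupt. The sketch below assumes that reading (the claim does not pin down the set algebra), and all identifiers are invented.

```python
def target_pcpus(irq_affinity, vcpu_affinities):
    """pCPUs having an affinity relationship with both the physical interrupt
    and at least one of the VM's target vCPUs."""
    vcpu_union = set().union(*vcpu_affinities)
    return sorted(set(irq_affinity) & vcpu_union)

def pick_pcpu(irq_affinity, vcpu_affinities):
    """Determine the n-th pCPU to be set to process the physical interrupt."""
    candidates = target_pcpus(irq_affinity, vcpu_affinities)
    if not candidates:
        raise ValueError("no pCPU satisfies both affinity relationships")
    return candidates[0]   # e.g. the lowest-numbered eligible pCPU
```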

US Pat. No. 10,169,074

MODEL DRIVEN OPTIMIZATION OF ANNOTATOR EXECUTION IN QUESTION ANSWERING SYSTEM

International Business Ma...

1. A method, in a data processing system comprising a processor and a memory, for scheduling execution of pre-execution operations of an annotator of a question and answer (QA) system pipeline, the method comprising:using, by the data processing system, a model to represent a system of annotators of the QA system pipeline, wherein the model represents each annotator in the system of annotators as a node having one or more performance parameters for indicating a performance of an execution of an annotator corresponding to the node, wherein each annotator in the system of annotators is a program that takes a portion of unstructured input text, extracts structured information from the portion of the unstructured input text, and generates annotations or metadata that are attached by the annotator to a source of the unstructured input text, wherein, for each node in the model, the one or more performance parameters corresponding to the node comprise an arrival rate parameter and a service rate parameter of the annotator associated with the node, wherein the arrival rate parameter indicates a number of jobs arriving in the node per second, and wherein the service rate parameter indicates a number of jobs being serviced by the node per second;
determining, by the data processing system, for each annotator in a set of annotators of the system of annotators, an effective response time for the annotator based on the one or more performance parameters;
calculating, by the data processing system, a pre-execution start interval for a first annotator based on an effective response time of a second annotator, wherein execution of the first annotator is sequentially after execution of the second annotator; and
scheduling, by the data processing system, execution of pre-execution operations associated with the first annotator based on the calculated pre-execution start interval for the first annotator.
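
Given arrival rate λ and service rate μ per node, a standard queueing estimate of the effective response time is 1/(μ − λ), and the downstream annotator's pre-execution operations can be started that far ahead. The M/M/1-style formula is an assumption for illustration; the claim only says the response time is derived from the two rate parameters.

```python
def effective_response_time(arrival_rate, service_rate):
    """Mean time a job spends at an annotator node (queueing + service),
    under an M/M/1-style assumption: 1 / (mu - lambda)."""
    if service_rate <= arrival_rate:
        raise ValueError("unstable node: service rate must exceed arrival rate")
    return 1.0 / (service_rate - arrival_rate)

def pre_execution_start_interval(upstream_arrival_rate, upstream_service_rate):
    """Schedule the first annotator's pre-execution operations to overlap the
    second (upstream) annotator's effective response time."""
    return effective_response_time(upstream_arrival_rate, upstream_service_rate)
```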

US Pat. No. 10,169,073

HARDWARE ACCELERATORS AND METHODS FOR STATEFUL COMPRESSION AND DECOMPRESSION OPERATIONS

Intel Corporation, Santa...

1. A hardware processor comprising:a core to execute a thread and offload at least one of a compression thread and a decompression thread; and
a hardware compression and decompression accelerator to execute the at least one of the compression thread and the decompression thread to consume input data and generate output data, wherein the hardware compression and decompression accelerator is coupled to a plurality of input buffers to store the input data, a plurality of output buffers to store the output data, an input buffer descriptor array with an entry for each respective input buffer, an input buffer response descriptor array with a corresponding response entry for each respective input buffer, an output buffer descriptor array with an entry for each respective output buffer, and an output buffer response descriptor array with a corresponding response entry for each respective output buffer.

US Pat. No. 10,169,072

HARDWARE FOR PARALLEL COMMAND LIST GENERATION

NVIDIA CORPORATION, Sant...

1. A method for providing an initial default state for a multi-threaded processing environment, the method comprising:receiving, from an application program, a plurality of separate command lists corresponding to a plurality of parallel threads associated with the application program, wherein each thread in the plurality of parallel threads generates a separate command list in the plurality of command lists;
causing a first command list associated with a first thread included in the plurality of parallel threads to be executed by a processing unit based on a first processing state, wherein the first processing state includes a set of graphics parameters;
after the processing unit executes the first command list, causing a second command list associated with a second thread included in the plurality of parallel threads to be executed by the processing unit based on the first processing state inherited from the first command list;
causing a single unbind method to be executed by the processing unit, wherein the unbind method resets one or more parameters included in the set of graphics parameters to an initial processing state; and
causing commands included in a third command list to be executed by the processing unit after the unbind method is executed.

US Pat. No. 10,169,071

HYPERVISOR-HOSTED VIRTUAL MACHINE FORENSICS

MICROSOFT TECHNOLOGY LICE...

1. A computing system comprising:a processor; and
memory storing instructions executable by the processor, wherein the instructions, when executed, provide a hypervisor configured to:
host a virtualization environment that includes a set of virtual machine (VM) partitions that each include an isolated execution environment managed by the hypervisor, the set of VM partitions comprising:
a root virtual machine (VM) partition,
a first child VM partition that is hypervisor-aware,
a second child VM partition that is non-hypervisor-aware, and
a forensics VM partition that:
includes a forensics service application programming interface (API),
is configured to directly access hardware resources associated with the computing system, and
is separate from, and more privileged than, the first child VM partition; and
create, in the virtualization environment:
a first inter-partition communication mechanism configured to provide a communication channel between the forensics VM partition and the first child VM partition, and
a second inter-partition communication mechanism;
wherein the forensics VM partition is configured to:
acquire, by the first inter-partition communication mechanism, forensics data from a VM running in the first child VM partition; and
provide the forensics data to a forensics service using the forensics service API.

US Pat. No. 10,169,070

MANAGING DEDICATED AND FLOATING POOL OF VIRTUAL MACHINES BASED ON DEMAND

United Services Automobil...

1. A computer-implemented method for managing demand of a pool of virtual machines, the method comprising:determining a demand for a use of virtual machines in a pool of virtual machines, wherein, for a pool that is managed as a dedicated pool, the demand is determined based on resource usage per virtual machine in the pool, and, for a pool that is managed as a floating pool, the demand is determined based on times that one or more virtual machines in the pool are unassigned to users of the pool;
identifying that the determined demand is outside a threshold resource usage of the pool; and
provisioning one or more additional resources to the pool.
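
The two demand signals in the claim can be sketched directly: a dedicated pool is sized from per-VM resource usage, while a floating pool is sized from how long its VMs sit unassigned. The concrete formulas and thresholds below are invented examples, not from the patent.

```python
def dedicated_pool_demand(usage_per_vm):
    """Dedicated pool: demand from resource usage per virtual machine."""
    return sum(usage_per_vm) / len(usage_per_vm)

def floating_pool_demand(unassigned_minutes_per_vm, window_minutes):
    """Floating pool: demand from the fraction of the window in which the
    pool's VMs were assigned to users (1 minus the unassigned fraction)."""
    idle = sum(unassigned_minutes_per_vm) / (
        len(unassigned_minutes_per_vm) * window_minutes)
    return 1.0 - idle

def needs_more_capacity(demand, threshold):
    """Provision additional resources when demand is outside the threshold."""
    return demand > threshold
```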

US Pat. No. 10,169,069

SYSTEM LEVEL UPDATE PROTECTION BASED ON VM PRIORITY IN A MULTI-TENANT CLOUD ENVIRONMENT

International Business Ma...

1. A computer-implemented method for managing system activities in a cloud computing environment, comprising:determining a type of system activity to perform on one or more servers in the cloud computing environment;
identifying a set of locking parameters at a plurality of hierarchal levels of the cloud computing environment available for restricting system activity on the one or more servers, wherein each locking parameter corresponds to a different type of system activity and is associated with a particular hierarchal level of the plurality of hierarchal levels;
determining whether to perform the type of system activity based on a value of a locking parameter of the set of locking parameters that is associated with the type of system activity, the particular hierarchal level of the plurality of hierarchal levels associated with the locking parameter of the set of locking parameters, and a priority associated with the type of system activity; and
performing the type of system activity after determining to perform the type of system activity.
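
One plausible reading of the hierarchal locking check: each locking parameter binds one activity type to one level, an activity is blocked by a lock at its level or any broader level, and a sufficiently high activity priority overrides the lock. The level names and lock encoding below are invented to illustrate that reading.

```python
# Example hierarchal levels, broadest to narrowest (invented names).
LEVELS = ["cloud", "server-group", "server", "vm"]

def may_perform(activity, level, priority, locks):
    """locks: {(activity_type, level): min_priority_to_override}.
    The activity runs only if no lock at its level or a broader level
    blocks it, unless its priority meets the lock's override threshold."""
    for lvl in LEVELS[: LEVELS.index(level) + 1]:
        threshold = locks.get((activity, lvl))
        if threshold is not None and priority < threshold:
            return False
    return True
```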

US Pat. No. 10,169,068

LIVE MIGRATION FOR VIRTUAL COMPUTING RESOURCES UTILIZING NETWORK-BASED STORAGE

Amazon Technologies, Inc....

1. A system, comprising:a plurality of compute nodes comprising one or more processors and memory, configured to implement:
a plurality of hosts for virtual compute instances;
a control plane; and
the control plane, configured to:
for a virtual compute instance that is identified for migration from a source host to a destination host and is a client of a network-based storage resource that stores data for which access is enforced according to a lease state for hosts connected to the network-based resource:
direct the destination host to establish a connection with the network-based storage resource with a standby lease state; and
direct that a request be sent to the network-based storage resource to promote the standby lease state for the destination host to a primary lease state and to change a primary lease state for the source host to another lease state.
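
The lease handoff reads as a small state machine: the destination host connects with a standby lease, then a single request promotes it to primary and moves the source host to another lease state. The lease-state names come from the claim; the class and the "inactive" state chosen for the source are illustrative assumptions.

```python
class StorageResource:
    """Network-based storage that enforces access by per-host lease state."""

    def __init__(self):
        self.leases = {}                      # host -> lease state

    def connect(self, host, lease="standby"):
        self.leases[host] = lease

    def promote(self, destination, source):
        """Promote the destination's standby lease to primary and change the
        source host's primary lease to another lease state."""
        self.leases[destination] = "primary"
        self.leases[source] = "inactive"
```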

US Pat. No. 10,169,067

ASSIGNMENT OF PROXIES FOR VIRTUAL-MACHINE SECONDARY COPY OPERATIONS INCLUDING STREAMING BACKUP JOB

Commvault Systems, Inc., ...

1. A computer-readable medium, excluding transitory propagating signals, storing instructions that, when executed by a computing device having one or more processors and non-transitory computer-readable memory, cause the computing device to perform a method comprising:identifying, by a first data agent executing on the computing device, one or more proxies in a storage management system that are eligible to back up a given virtual machine in a first set of virtual machines in the storage management system,
wherein any one proxy among the one or more proxies is one of:
(a) a first virtual machine that executes on a first computing device, wherein the first virtual machine executes a second data agent for virtual-machine backup, and
(b) a second computing device that executes a second data agent for virtual-machine backup;
wherein the identifying comprises:
(i) determining (A) a set of candidate proxies for backing up the given virtual machine, and (B) a mode of access available to each respective candidate proxy for accessing the given virtual machine's data as a source for backup,
wherein the mode of access has a predefined tier of preference,
wherein the determining is based on analyzing, by the first data agent, data from a database that is associated with a storage manager component that manages the storage management system, and
wherein the storage manager component designates the first data agent as a coordinator data agent for a first backup job for the first set of virtual machines,
(ii) classifying each candidate proxy in the set of candidate proxies based on the predefined tier of preference for the respective candidate proxy's mode of access to the given virtual machine's data as the source for backup, and
(iii) defining one or more candidate proxies that are classified in a highest tier of preference as being eligible to back up the given virtual machine; and
wherein if the defining results in the given virtual machine being stranded without an eligible proxy, subsequently defining one or more candidate proxies, which are classified in a next highest tier of preference that is less than the highest tier of preference, as being eligible to back up the given virtual machine.
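
The tiered proxy selection can be sketched as: classify each candidate by the preference tier of its access mode, define the best-tier proxies as eligible, and fall back to the next tier if the VM would otherwise be stranded. The access-mode names and tier ordering below are invented examples, not Commvault's actual modes.

```python
# Example preference tiers for access modes (lower = more preferred; invented).
ACCESS_TIER = {"hotadd": 1, "san": 2, "nbd": 3}

def eligible_proxies(candidates):
    """candidates: list of (proxy_name, access_mode).
    Return the proxies classified in the highest (most preferred) tier."""
    best = min(ACCESS_TIER[mode] for _, mode in candidates)
    return [name for name, mode in candidates if ACCESS_TIER[mode] == best]

def eligible_with_fallback(candidates, unavailable):
    """If the VM is stranded (no best-tier proxy available), define proxies
    from the next highest tier of preference as eligible instead."""
    for tier in sorted({ACCESS_TIER[mode] for _, mode in candidates}):
        chosen = [n for n, m in candidates
                  if ACCESS_TIER[m] == tier and n not in unavailable]
        if chosen:
            return chosen
    return []
```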

US Pat. No. 10,169,065

LIVE MIGRATION OF HARDWARE ACCELERATED APPLICATIONS

Altera Corporation, San ...

1. A method of migrating a hardware accelerated application from a source server to a destination server, wherein the source server comprises a processor connected to an external migration controller and reconfigurable circuitry, wherein the reconfigurable circuitry includes a plurality of accelerator resource slots and a memory management unit for the accelerator resource slots, the method comprising:at the source server, receiving a migration notification from the migration controller, wherein the migration notification specifies a set of accelerator resource slots of the plurality of accelerator resource slots to be migrated and an identifier for the resources in the memory management unit to be migrated; wherein the migration controller is further configured for:
saving an image of state information associated with the hardware accelerated application from the source server to network attached storage in response to receiving the migration notification;
copying the image of state information associated with the hardware accelerated application from the network attached storage to the destination server; and
running the hardware accelerated application in parallel on the source server and the destination server.
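The controller's three steps — save state to network attached storage, copy it to the destination, and run the application on both servers in parallel — can be sketched as below. All class and attribute names are hypothetical stand-ins; the NAS is modeled as a dictionary.

```python
# Minimal sketch of the claimed migration flow. Server and
# MigrationController are invented stand-ins, not the patent's components.

class Server:
    def __init__(self, name):
        self.name, self.state = name, None

    def save_state(self, slots):
        # Snapshot of the accelerator resource slots to be migrated.
        return {"slots": slots, "owner": self.name}

    def load_state(self, image):
        self.state = image

    def run(self):
        return f"{self.name} running"

class MigrationController:
    def __init__(self, nas):
        self.nas = nas                              # network attached storage stand-in

    def migrate(self, source, destination, slots):
        self.nas["image"] = source.save_state(slots)   # save image to NAS
        destination.load_state(self.nas["image"])      # copy NAS -> destination
        # Run the accelerated application on both servers in parallel.
        return source.run(), destination.run()

src, dst = Server("src"), Server("dst")
print(MigrationController({}).migrate(src, dst, [0, 1]))
```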

US Pat. No. 10,169,063

HYPERVISOR CAPABILITY ACCESS PROVISION

Red Hat Israel, LTD., Ra...

1. A method, comprising:receiving, via a user interface provided by a host controller, a first request for a first hypervisor capability of a hypervisor executing on a host server;
determining, by a processing device, that the first hypervisor capability can be provisioned by a virtualization manager executing on the host controller in view of inclusion within a hypervisor capability subset offered by the virtualization manager;
receiving, via the user interface, a second request for a second hypervisor capability of the hypervisor;
determining, by the processing device, that the virtualization manager cannot provision the second hypervisor capability in view of lack of inclusion within the hypervisor capability subset offered by the virtualization manager;
providing, via the user interface, a first indication of successful provision of the first hypervisor capability in response to the first hypervisor capability being provisioned by the virtualization manager; and
providing, via the user interface, a second indication of successful provision of the second hypervisor capability in response to the second hypervisor capability being provisioned by a hypervisor accessor bypassing the virtualization manager and using one or more first common gateway interface (CGI) scripts hosted by the hypervisor accessor to directly access the hypervisor via a command line tool of the hypervisor, wherein a set of hypervisor capabilities is accessible by the hypervisor accessor executing on the host server, the set comprising the hypervisor capability subset that is accessible by the virtualization manager and a plurality of hypervisor capabilities that are inaccessible by the virtualization manager, the plurality of hypervisor capabilities comprising the second hypervisor capability.
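The core control flow of this claim — provision through the virtualization manager when the capability is in its subset, otherwise bypass the manager through the hypervisor accessor — can be sketched as below. The capability names are invented for illustration.

```python
# Illustrative sketch: capabilities inside the virtualization manager's
# subset are provisioned by the manager; all others fall back to a
# hypervisor accessor that bypasses the manager (per the claim, via CGI
# scripts driving the hypervisor's command line tool directly).

MANAGER_SUBSET = {"live_snapshot", "memory_ballooning"}   # hypothetical subset

def provision(capability):
    if capability in MANAGER_SUBSET:
        return f"manager: {capability}"
    # Bypass path: direct access through the hypervisor accessor.
    return f"accessor: {capability}"

print(provision("live_snapshot"))   # first request: within the subset
print(provision("nested_virt"))     # second request: outside the subset
```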

US Pat. No. 10,169,062

PARALLEL MAPPING OF CLIENT PARTITION MEMORY TO MULTIPLE PHYSICAL ADAPTERS

International Business Ma...

1. A method for performing an input/output (I/O) request, the method comprising:mapping an address for at least a first page associated with a virtual I/O request to an entry in a virtual translation control entry (TCE) table;
identifying a plurality of physical adapters required to service the virtual I/O request;
and upon determining, for each of the identified physical adapters, that an entry in the respective physical TCE table corresponding to the physical adapter is available, for the identified available physical adapters:
mapping the entry in the virtual TCE table to an entry in the respective physical TCE table corresponding to the identified available physical adapters in parallel, and
issuing a physical I/O request corresponding to each physical TCE table entry to the respective available physical adapters in parallel.


US Pat. No. 10,169,061

SCALABLE AND FLEXIBLE OPERATING SYSTEM PLATFORM

FORD GLOBAL TECHNOLOGIES,...

1. A system, comprising a computer having a processor and a memory, wherein the memory includes:at least one bootloader program that includes instructions to instantiate a management layer that includes a first operating system kernel and a virtual machine manager that executes in the context of the operating system kernel;
instructions in the management layer to instantiate, after the management layer is running, at least one second operating system that executes in the context of the virtual machine manager; and wherein the memory of the computer further includes at least one application; and
at least one security layer that includes instructions for receiving, from at least one application, a request to instantiate and execute, and for identifying a guest operating system to instantiate and execute the at least one application;
wherein the at least one boot loader program includes a primary boot loader and a secondary boot loader, wherein the secondary boot loader includes instructions for receiving updates to support the guest operating system according to a download initiated by the guest operating system.

US Pat. No. 10,169,060

OPTIMIZATION OF PACKET PROCESSING BY DELAYING A PROCESSOR FROM ENTERING AN IDLE STATE

Amazon Technologies, Inc....

1. A method, comprising:determining, by a computer system, a first processing time for a first stage of a pipeline and a second processing time for a second stage of the pipeline, the first stage and the second stage included in a plurality of stages of the pipeline;
determining a first delay value based at least in part on the first processing time, the first delay value being associated with the first stage of the pipeline;
obtaining a first network packet at the pipeline, the first network packet being obtained from a particular source within a first time that includes at least the first processing time and the second processing time;
causing a processor to process the first network packet at the first stage of the pipeline while the processor is in a processing state;
causing the processor to remain in the processing state after processing the first network packet at the first stage of the pipeline for at least the first delay value;
causing the processor to process the first network packet at the second stage of the pipeline while the processor is in the processing state;
determining a second delay value based at least in part on the second processing time, the second delay value being associated with the second stage of the pipeline, the first delay value being different from the second delay value, and the second delay value being calculated based at least in part on which of polling the processor at the first stage of the pipeline, the first processing time, or the second processing time is a greatest value;
causing the processor to remain in the processing state after processing the first network packet for at least the second delay value; and
obtaining a second network packet at the pipeline, the second network packet being obtained after the first network packet is obtained, and the second network packet being able to be processed while the processor is in the processing state.
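The delay computation the claim describes is concrete enough to show directly: the first delay derives from the first stage's processing time, and the second delay is calculated from whichever of the polling time, first processing time, or second processing time is greatest. The numeric values below are illustrative only.

```python
# Sketch of the claimed idle-delay computation: after a stage finishes, the
# processor is held in the processing state for a delay, and the second
# stage's delay takes the greatest of the polling time and the two stage
# processing times.

def first_delay(first_processing_time):
    # Delay associated with the first stage, based on its processing time.
    return first_processing_time

def second_delay(polling_time, first_processing_time, second_processing_time):
    # Greatest of the three values, per the claim.
    return max(polling_time, first_processing_time, second_processing_time)

print(first_delay(12))          # -> 12
print(second_delay(5, 12, 8))   # -> 12
```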

US Pat. No. 10,169,059

ANALYSIS SUPPORT METHOD, ANALYSIS SUPPORTING DEVICE, AND RECORDING MEDIUM

FUJITSU LIMITED, Kawasak...

11. An analysis support computer, comprising:a memory; and
a processor coupled to the memory and configured to:
first search, in configuration identification information that identifies configuration values indicating identification of software, hardware resource specification, and hardware resource utilization of physical machines and virtual machines executed on the physical machines, before and after respective changes of virtual machines on the physical machines and of the virtual machines respectively executed on the physical machines, for a second physical machine that has configuration identification information which configuration values of software, hardware resource specification and hardware resource utilization are similar to configuration values of a first physical machine, on which configuration identification information of a first virtual machine to be analyzed for migration is executed, before and after a change of a state of at least one virtual machine among virtual machines executed on the second physical machine,
second search in the configuration identification information for a second virtual machine among the at least one virtual machine that is executed on the second physical machine and which configuration values of software, hardware resource specification and hardware resource utilization are similar to configuration values of software, hardware resource specification and hardware resource utilization of the first virtual machine, before and after the change of the state of the second virtual machine executed on the second physical machine, to output information for execution of a process to control the migration of the first virtual machine, in response to the first and second searching, and
perform analysis processing that analyzes operational trends of the virtual machines by using the hardware resource utilization of the first virtual machine and the hardware resource utilization of the second virtual machine.

US Pat. No. 10,169,058

SCRIPTING LANGUAGE FOR ROBOTIC STORAGE AND RETRIEVAL DESIGN FOR WAREHOUSES

1. A system for scripting language for design and operation of a robotic storage and retrieval system in a warehouse, said system comprising:a processor and memory operable to provide:
a scripting language framework for directed operation of a control system of said robotic storage and retrieval system, said scripting language framework providing a shelving descriptor and a robot descriptor;
said shelving descriptor operable to model a shelving to be deployed in said warehouse, said shelving descriptor further having associated shelving attributes defining properties of said shelving descriptor;
said robot descriptor operable to model a robot to be deployed in said warehouse, said robot descriptor further having associated robot attributes defining properties of said robot descriptor;
a scripting editor comprising a user interface operable to receive input scripting language code conforming to said scripting language framework and based on warehouse metadata;
a parser operable to interpret or compile said input scripting language code into a runtime system;
said runtime system configured to issue control operations to a robot in said warehouse and communicatively interposed between said robot and a control system of said robotic storage and retrieval system.

US Pat. No. 10,169,057

SYSTEM AND METHOD FOR DEVELOPING AN APPLICATION

Taplytics Inc., Toronto ...

1. A method of remotely modifying a user interface of an application deployed on a plurality of computing devices, the method to be performed at a server that is remote from the computing devices, the method comprising:identifying a first set of parameters corresponding to at least one user interface element of the user interface;
identifying a second set of parameters, the second set of parameters including second update parameters for updating the at least one user interface element of the user interface, the at least one user interface element being identified at the server by a programming language unit for the user interface element in the program code of the application;
identifying at least one first computing device and at least one second computing device in the plurality of computing devices;
associating the at least one first computing device with the first set of parameters;
associating the at least one second computing device with the second set of parameters;
sending the second update parameters to the at least one second computing device, wherein each computing device in the at least one second computing device
updates the at least one user interface element of the deployed application on the second computing device with the second update parameters; and
displays a modified user interface for the deployed application, the modified user interface comprising the updated at least one user interface element.
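Server-side, this claim amounts to associating device groups with parameter sets and pushing the update parameters only to the second group, whose devices then render the modified element. A minimal sketch, with all names and parameter values hypothetical:

```python
# Sketch of the claimed remote UI modification: devices are split into two
# groups, each associated with a parameter set, and only the second group's
# devices render the updated user interface element.

first_params = {"button.title": "Buy"}         # baseline parameters
second_params = {"button.title": "Buy now!"}   # second update parameters

device_params = {"dev-a": first_params, "dev-b": second_params}

def render_ui(device_id):
    # Each device renders the UI element with its associated parameters.
    params = device_params.get(device_id, first_params)
    return {"button.title": params["button.title"]}

print(render_ui("dev-a"))   # unmodified element
print(render_ui("dev-b"))   # modified element
```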

US Pat. No. 10,169,056

EFFECTIVE MANAGEMENT OF VIRTUAL CONTAINERS IN A DESKTOP ENVIRONMENT

International Business Ma...

1. A method for identifying installed software components in a container running in a virtual execution environment, wherein the container is created by instantiating image data, the method comprising:determining a respective identifier for each of individual layers of a layered structure of the image data;
retrieving from a repository storage arrangement storing information for non-container-based software and container-based software, the information for the container-based software identifying at least one of the installed software components in the container based on the respective identifier for at least one of the individual layers;
forming, from the information stored in the repository storage arrangement, a displayable data structure allowing row filtering for software management and at least specifying as a respective row in the displayable data structure, for each of the installed software components, (i) a type as one of the non-container-based software or the container-based software, (ii) a virtual machine and an operating system corresponding thereto, and (iii) an operating status of started or stopped; and
displaying, on a display device, the displayable data structure.

US Pat. No. 10,169,055

ACCESS IDENTIFIERS FOR GRAPHICAL USER INTERFACE ELEMENTS

SAP SE, Walldorf (DE)

1. A non-transitory computer readable storage medium storing instructions, which when executed by a computer cause the computer to perform operations comprising:receiving a trigger to render at least one graphical user interface element on a graphical user interface associated with a display;
retrieving one or more pre-defined accessibility parameters associated with the at least one graphical user interface element, wherein retrieving the one or more pre-defined accessibility parameters associated with the at least one graphical user interface element comprises accessing one or more application programming interfaces (APIs) associated with the at least one graphical user interface element, and wherein the one or more APIs return a set of requirements associated with rendering the triggered at least one graphical user interface element;
performing an access control check in real time to determine accessibility information associated with an application and corresponding to the pre-defined accessibility parameters, wherein the access control check determines whether the accessibility information meets the one or more pre-defined accessibility parameters based on whether the one or more pre-defined accessibility parameters are met;
associating a visual identifier representing an accessibility status to the at least one graphical user interface element determined based on the access control check; and
rendering the at least one graphical user interface element with the visual identifier on the graphical user interface, wherein each of the at least one graphical user interface elements are augmented with the associated visual identifier indicating a real-time accessibility status of the associated graphical user interface element.

US Pat. No. 10,169,054

UNDO AND REDO OF CONTENT SPECIFIC OPERATIONS

International Business Ma...

1. A method for performing undo or redo requests, the method comprising:receiving, by one or more computer processors, a list of performed operations, wherein the list of performed operations contains all operations performed in an order of processing;
receiving, by one or more computer processors, a request from a user, wherein the request includes at least one of an undo request of a last performed operation or a redo request of a last performed undo request from the list of performed operations;
requesting, by one or more computer processors, the user provide a selection of at least one content type, wherein the at least one content type is at least one of the following categories: text, audio, video, images;
receiving, by one or more computer processors and from the user, the selection of at least one content type;
determining, by one or more computer processors, a content type of each performed operation in the list of performed operations;
determining, by one or more computer processors, a group of all performed operations from the list of performed operations that have a content type the same as one content type of the at least one content types;
determining, by one or more computer processors, that the group of all performed operations from the list of performed operations that have a content type the same as one content type of the at least one content types consists of zero performed operations;
requesting, by one or more computer processors, that the user provide an additional selection of at least one content type which consists of one or more performed operations;
receiving, by one or more computer processors and from the user, the additional selection of at least one content type; and
responsive to determining the group of all performed operations from the list of performed operations that have a content type the same as one content type of the at least one content types, performing, by one or more computer processors, the at least one of the undo request of a last performed operation or the redo request of a last performed undo request from the list of performed operations that have one content type of the at least one content types.
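The filtering logic of this claim — restrict the performed-operation list to the user's selected content types, re-prompt if the filtered group is empty, otherwise undo the last matching operation — can be sketched as below. The operation records are invented for illustration.

```python
# Sketch of the claimed content-type-filtered undo: the list of performed
# operations (in processing order) is filtered by the selected content
# types; an empty group means the user must select a different type;
# otherwise the last performed operation of a selected type is undone.

operations = [
    {"id": 1, "type": "text"},
    {"id": 2, "type": "image"},
    {"id": 3, "type": "text"},
]

def undo_last(selected_types):
    group = [op for op in operations if op["type"] in selected_types]
    if not group:
        return None      # zero matching operations: request a new selection
    return group[-1]     # undo the last performed operation of that type

print(undo_last({"text"}))    # -> {'id': 3, 'type': 'text'}
print(undo_last({"video"}))   # -> None (user re-prompted)
```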

US Pat. No. 10,169,053

LOADING A WEB PAGE

International Business Ma...

1. A method for loading a web page, the method comprising:searching, by one or more processors, a web application for user interface change portions, wherein execution of the user interface change portions triggers a user interface to change, and wherein the web application is renderable on the user interface as a web page by a browser;
marking, by one or more processors, the user interface change portions to interrupt, upon execution of the web application, the execution of the web application;
interrupting, by one or more processors, execution of the web application upon an initial execution of the web application;
displaying, by one or more processors, the user interface change portions;
displaying, by one or more processors, other portions of the web page at an N unit time delay after a time that the user interface change portions are displayed;
storing, by one or more processors, code for identified user interface change portions from the web page in a ready queue;
storing, by one or more processors, code for the other portions of the web page in a candidate queue, wherein the ready queue and the candidate queue are different queues;
retrieving and executing, by one or more processors, the code for the identified user interface change portions from the ready queue in order to display the identified user interface change portions of the web page;
in response to retrieving and executing the code from the ready queue in order to display the identified user interface change portions of the web page, moving, by one or more processors, the code for the other portions of the web page from the candidate queue to the ready queue; and
retrieving and executing, by one or more processors, the code in the ready queue for the other portions in order to display the other portions of the web page.
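The two-queue discipline of this claim — execute the UI-changing code from the ready queue first, then promote the other portions from the candidate queue and execute them — can be sketched as below. The code strings are placeholders for the web page's actual portions.

```python
# Sketch of the claimed two-queue loading scheme: UI change portions sit in
# a ready queue and run first; other portions wait in a separate candidate
# queue and are moved to the ready queue only after the UI portions run.
from collections import deque

ready = deque(["render_header()", "render_menu()"])        # UI change portions
candidate = deque(["load_analytics()", "prefetch_ads()"])  # other portions

executed = []
while ready:                     # display UI change portions first
    executed.append(ready.popleft())
ready.extend(candidate)          # promote candidate code to the ready queue
candidate.clear()
while ready:                     # then display the other portions
    executed.append(ready.popleft())

print(executed)
```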

US Pat. No. 10,169,052

AUTHORIZING A BIOS POLICY CHANGE FOR STORAGE

Hewlett-Packard Developme...

1. A method executable by a computing device, the method comprising:receiving a basic input output system (BIOS) policy change;
authorizing the BIOS policy change; and
upon the authorization of the BIOS policy change, storing a first copy of the BIOS policy change in a first memory accessible by a central processing unit and transmitting a second copy of the BIOS policy change for storage in a second memory electrically isolated from the central processing unit;
wherein the BIOS policy comprises at least one of a boot order of the BIOS, hardware configurations, and a BIOS security mechanism, and
wherein the BIOS policy change is a modification to the at least one of a boot order of the BIOS, hardware configurations, and a BIOS security mechanism.
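The storage step of this claim — after authorization, write one copy of the policy change to CPU-accessible memory and a second copy to memory isolated from the CPU — can be sketched with the two memories modeled as dictionaries (an illustrative simplification of the hardware isolation).

```python
# Sketch of the claimed dual-copy storage: an authorized BIOS policy change
# is written both to a first memory accessible by the CPU and to a second
# memory electrically isolated from the CPU (both modeled here as dicts).

cpu_memory = {}        # first memory, CPU-accessible
isolated_memory = {}   # second memory, isolated from the CPU

def apply_policy_change(change, authorized):
    if not authorized:
        return False                                   # reject unauthorized change
    cpu_memory["bios_policy"] = dict(change)           # first copy
    isolated_memory["bios_policy"] = dict(change)      # second copy
    return True

apply_policy_change({"boot_order": ["ssd", "usb"]}, authorized=True)
print(cpu_memory["bios_policy"] == isolated_memory["bios_policy"])  # -> True
```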

US Pat. No. 10,169,051

DATA PROCESSING DEVICE, PROCESSOR CORE ARRAY AND METHOD FOR CHARACTERIZING BEHAVIOR OF EQUIPMENT UNDER OBSERVATION

Blue Yonder GmbH, Karlsr...

1. A data processing device for characterizing behavior properties of an equipment under observation, the data processing device comprising:a plurality of processing units configured to:
pre-process historic data from a plurality of master equipment in order to define a configuration in advance; and
process input values based on numerical transfer functions to generate output values by implementing an input to output mapping based on the configuration defined, wherein the configuration corresponds to behavior properties of one of the plurality of master equipment, wherein some of the output values represent the behavior properties of the equipment under observation, and wherein the plurality of processing units is cascaded into a first processing stage and a second processing stage, wherein
the first processing stage comprises a plurality of first processing units that are adapted to receive a plurality of equipment data values from the equipment under observation as input, and adapted to provide a plurality of intermediate data values as output, according to a plurality of first numerical transfer functions, and
the second processing stage comprises a second processing unit that is adapted to receive the plurality of intermediate data values as input and adapted to provide behavior data as output values according to a second numerical transfer function.

US Pat. No. 10,169,050

SOFTWARE APPLICATION PROVISIONING IN A DISTRIBUTED COMPUTING ENVIRONMENT

International Business Ma...

1. A software provisioning system for a computer system comprising client devices connected via a communication network to a computing infrastructure, the computing infrastructure being configured to provide, upon a user's request, a software application package to an already running machine, wherein the software provisioning system is configured to:retrieve session information about a user logged in to the computing infrastructure via a client device, thereby creating a session;
determine a list of software application packages that the user is entitled to request to be provided to the running machine so that the user is able to use a software application contained in the software application packages; and
calculate software application usage information from the session information and the list of software application packages.

US Pat. No. 10,169,048

PREPARING COMPUTER NODES TO BOOT IN A MULTIDIMENSIONAL TORUS FABRIC NETWORK

International Business Ma...

1. A method for preparing a plurality of computer nodes to boot in a multidimensional fabric network, comprising:retrieving, by a fabric processor (FP) of a computer node within the multidimensional fabric network, a MAC address from a baseboard management controller (BMC) of the computer node and configuring a DHCP discovery packet using the BMC MAC address and sending that packet into the multi-host switch, wherein the BMC is directly connected to the FP by a management port, and wherein the BMC, the multi-host switch, and the FP are located inside the computer node;
establishing an exit node from the multidimensional fabric network to a service provisioning node (SPN) outside the multidimensional fabric network, wherein the SPN is not part of the multidimensional fabric network;
forwarding, by the exit node to the SPN, DHCP requests for IP addresses from the multi-host switch of the computer node within the multidimensional fabric network, wherein the computer node is identified by the BMC MAC address found in the DHCP discovery packet coming from that node's multi-host switch;
receiving, from the SPN by the exit node, a location-based IP address, and forwarding the received location-based IP address to the computer node, wherein the location-based IP address is a computed IP address that uniquely identifies the physical location of the computer node within the multidimensional fabric network;
calculating, by the FP, a host MAC address, wherein the host MAC address is the FP received location-based IP address plus a value of one, combined with a fixed, three byte value for a high twenty-four bits of a forty-eight bit MAC address, the fixed three byte value being known by all nodes and by the SPN; and
programming, by the FP, the calculated host MAC address onto the multi-host switch, wherein the calculated host MAC address replaces the factory default MAC address in NVRAM.
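The MAC derivation in this claim is concrete arithmetic: the host MAC is the location-based IP address plus one, combined with a fixed three-byte value in the high 24 bits of the 48-bit MAC. The claim does not spell out here how a 32-bit IPv4 value fits below a 24-bit prefix, so this sketch assumes the low 24 bits of (IP + 1) are used; the fixed prefix value is also an assumption.

```python
# Sketch of the claimed host MAC calculation: (location-based IP + 1) in
# the low bits, a fixed three-byte value in the high 24 bits. Assumption:
# only the low 24 bits of the incremented IP are kept.
import ipaddress

FIXED_HIGH_24 = 0x02AA00   # hypothetical fixed value known to all nodes and the SPN

def host_mac(location_ip: str) -> str:
    low = (int(ipaddress.IPv4Address(location_ip)) + 1) & 0xFFFFFF
    mac = (FIXED_HIGH_24 << 24) | low
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))

print(host_mac("10.1.2.3"))   # -> 02:aa:00:01:02:04
```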

US Pat. No. 10,169,047

COMPUTING DEVICES, METHODS, AND STORAGE MEDIA FOR A SENSOR LAYER AND SENSOR USAGES IN AN OPERATING SYSTEM-ABSENT ENVIRONMENT

Intel Corporation, Santa...

1. A computing device for computing, comprising:a processor; and
firmware to be operated by the processor while the computing device is operating without an operating system (OS) that includes one or more modules, including an environmental factor boot module, and a sensor layer,
wherein the sensor layer is to:
receive sensor data produced by a plurality of sensors, wherein the plurality of sensors is of the computing device or operatively coupled with the computing device;
aggregate the sensor data from the plurality of sensors; and
selectively provide the sensor data or the aggregated sensor data to the one or more modules via an interface of the sensor layer that abstracts the plurality of sensors; and
wherein the environmental factor boot module is to selectively instantiate one or more drivers for one or more corresponding sensors of the plurality of sensors, based at least in part on a portion of sensor data or aggregated sensor data associated with one or more environmental factors.

US Pat. No. 10,169,046

OUT-OF-ORDER PROCESSOR THAT AVOIDS DEADLOCK IN PROCESSING QUEUES BY DESIGNATING A MOST FAVORED INSTRUCTION

International Business Ma...

1. A processor for executing software instructions, the processor comprising:a plurality of processing queues that process the software instructions and provide out-of-order processing of the software instructions when specified conditions are satisfied;
an instruction sequencing unit circuit that determines a sequence of the software instructions executed by the processor, wherein the instruction sequencing unit circuit comprises a most favored instruction circuit that selects an instruction as the most favored instruction (MFI) and communicates the MFI to the plurality of processing queues; and
wherein at least one of the plurality of processing queues comprises a plurality of slots that receive any instruction that is not the most favored instruction when written to one of the plurality of slots, and a dedicated slot for processing the MFI, wherein the dedicated slot cannot process any instruction that is not the MFI.

US Pat. No. 10,169,045

METHOD FOR DEPENDENCY BROADCASTING THROUGH A SOURCE ORGANIZED SOURCE VIEW DATA STRUCTURE

Intel Corporation, Santa...

1. A method for dependency broadcasting through a source organized source view data structure, the method comprising:receiving an incoming instruction sequence using a global front end;
grouping the instructions to form instruction blocks;
populating the register template with block numbers corresponding to the instruction blocks, wherein the block numbers corresponding to the instruction blocks indicate interdependencies among the instruction blocks wherein an incoming instruction block writes its respective block number into fields of the register template corresponding to destination registers referred to by the incoming instruction block;
populating a source organized source view data structure, wherein the source view data structure stores the instruction sources corresponding to the instruction blocks as read from the register template by incoming instruction blocks;
upon dispatch of one block of the instruction blocks, broadcasting a number belonging to the one block to a row of the source view data structure that relates to the one block and marking sources of the row accordingly; and
updating dependency information of remaining instruction blocks in accordance with the broadcast.

US Pat. No. 10,169,044

PROCESSING AN ENCODING FORMAT FIELD TO INTERPRET HEADER INFORMATION REGARDING A GROUP OF INSTRUCTIONS

Microsoft Technology Lice...

1. A method comprising:fetching a group of instructions, configured to execute atomically by a processor, and a group header for the group of instructions, wherein the group header comprises a plurality of fields including an encoding format field, wherein the encoding format field is configured to provide to the processor information concerning how to interpret a format of at least one of a remaining of the plurality of fields of the group header for the group of instructions, and wherein the plurality of fields of the group header comprises: a first field comprising first information regarding exit types for use by a branch predictor in making branch predictions for the group of instructions and a second field comprising second information about whether during execution of the group of instructions each of the group of instructions requires independent vector lanes, a third field comprising third information about whether during the execution of the group of instructions branch prediction is inhibited, and a fourth field comprising fourth information about whether during the execution of the group of instructions predicting memory dependencies between memory operations is inhibited; and
processing the encoding format field to: (1) interpret the first information in the first field to generate a first signal for a branch predictor associated with the processor, (2) interpret the second information in the second field to generate a second signal for an instruction decoder or an instruction scheduler associated with the processor, (3) interpret the third information in the third field to generate a third signal for the branch predictor associated with the processor, and (4) interpret the fourth information in the fourth field to generate a fourth signal to inhibit dependencies between memory operations, including load/store operations.
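A group header with an encoding-format field plus exit-type, vector-lane, and prediction-inhibit fields can be modeled as a bitfield decode. The widths and bit positions below are assumptions chosen for illustration, not the patent's actual encoding.

```python
# Illustrative decode of a group header into the claim's four fields plus
# the encoding format field. Layout (an assumption): bits 0-3 encoding
# format, bits 4-9 exit types, bit 10 independent vector lanes, bit 11
# inhibit branch prediction, bit 12 inhibit memory dependence prediction.

def decode_group_header(header: int) -> dict:
    return {
        "encoding_format": header & 0xF,
        "exit_types": (header >> 4) & 0x3F,            # for the branch predictor
        "vector_lanes": bool((header >> 10) & 1),      # independent vector lanes
        "inhibit_branch_prediction": bool((header >> 11) & 1),
        "inhibit_memory_dependence_prediction": bool((header >> 12) & 1),
    }

print(decode_group_header(0b1_0_1_000011_0010))
```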

US Pat. No. 10,169,043

EFFICIENT EMULATION OF GUEST ARCHITECTURE INSTRUCTIONS

Microsoft Technology Lice...

1. In a computing environment a method of converting data for 80 bit registers of a guest architecture to data for 64-bit registers on a host system, the method comprising:determining that an operation should be performed to restore a first set of 80 bits stored in memory for a first 80 bit register of a guest architecture on a host having 64-bit registers;
storing a first set of 64 bits from the first set of 80 bits stored in memory, wherein the first set of 64 bits could be used for 64-bit SIMD operations in the guest architecture, in a first host register;
storing a first set of remaining 16 bits from the first set of 80 bits stored in memory in a supplemental memory storage;
documenting that the remaining 16 bits stored in the supplemental memory are padding bits;
identifying a SIMD operation that should be performed to operate on the first 80-bit register for the guest architecture; and
as a result of identifying a SIMD operation that should be performed to operate on the first 80-bit register for the guest architecture, determining to not convert the first set of 64 bits in the first host register to a floating point number.
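The bit-level split in this claim — low 64 bits of the 80-bit register value go to a 64-bit host register, the remaining 16 bits go to supplemental storage and are documented as padding — can be shown directly with integer arithmetic:

```python
# Sketch of the claimed 80-bit to 64-bit conversion: the low 64 bits
# (usable for 64-bit SIMD in the guest) go into a host register, the
# remaining high 16 bits go to supplemental memory, recorded as padding.

def split_80_bit(value_80: int):
    low_64 = value_80 & ((1 << 64) - 1)     # stored in the 64-bit host register
    high_16 = (value_80 >> 64) & 0xFFFF     # stored in supplemental memory
    metadata = {"padding": True}            # documented as padding bits
    return low_64, high_16, metadata

def join_80_bit(low_64: int, high_16: int) -> int:
    return (high_16 << 64) | low_64         # restore the full 80-bit value

v = (0xABCD << 64) | 0x1122334455667788
low, high, meta = split_80_bit(v)
print(hex(high), hex(low))   # -> 0xabcd 0x1122334455667788
```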

US Pat. No. 10,169,042

MEMORY DEVICE THAT PERFORMS INTERNAL COPY OPERATION

Samsung Electronics Co., ...

1. A memory device comprising:a memory cell array including at least one bank that includes at least one block, each of the at least one block having a plurality of memory cell rows having memory cells therein; and
processing circuitry configured to,
receive, from an external source, an internal copy command along with a source address and a destination address associated therewith, the source address indicating a source bank of the at least one bank and a source block of the at least one block within the source bank, and the destination address indicating a destination bank of the at least one bank and a destination block of the at least one block within the destination bank,
compare one or more of (i) the source bank and the destination bank and (ii) the source block and the destination block,
generate one or more of a bank comparison signal and a block comparison signal based on a result of comparing the one or more of (i) the source bank with the destination bank and (ii) the source block with the destination block, the bank comparison signal indicating whether the source bank and the destination bank are a same bank or different banks, and the block comparison signal indicating whether the source block and the destination block are a same block or different blocks,
select a selected internal copy operation from among an internal block copy operation, an inter-bank copy operation or an internal bank copy operation based on the one or more of the bank comparison signal and the block comparison signal,
perform the selected internal copy operation on the memory cell array from the memory cells associated with the source address to the memory cells associated with the destination address, and
output a copy-done signal indicating that the selected internal copy operation is complete, if the selected internal copy operation is complete.
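The selection among the three copy operations can be sketched from the two comparison signals. The mapping assumed here (same bank and block → internal block copy; same bank, different block → internal bank copy; different banks → inter-bank copy) is a plausible reading, not stated explicitly in the claim.

```python
from enum import Enum

class CopyOp(Enum):
    INTERNAL_BLOCK = "internal block copy"
    INTERNAL_BANK = "internal bank copy"
    INTER_BANK = "inter-bank copy"

def select_copy_op(src_bank: int, src_block: int,
                   dst_bank: int, dst_block: int) -> CopyOp:
    """Derive the bank/block comparison signals and select the
    internal copy operation, per the assumed mapping above."""
    same_bank = src_bank == dst_bank      # bank comparison signal
    same_block = src_block == dst_block   # block comparison signal
    if same_bank and same_block:
        return CopyOp.INTERNAL_BLOCK
    if same_bank:
        return CopyOp.INTERNAL_BANK
    return CopyOp.INTER_BANK
```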

US Pat. No. 10,169,041

EFFICIENT POINTER LOAD AND FORMAT

International Business Ma...

1. A method comprising:receiving a microprocessor instruction for processing by a microprocessor;
maintaining, by the microprocessor, the microprocessor instruction as a single instruction in an instruction queue until the microprocessor instruction is removed from the instruction queue for processing by the microprocessor;
processing the microprocessor instruction in a multi-cycle operation, wherein processing the microprocessor instruction comprises:
retrieving, by a load-store unit of the microprocessor and from cache memory of the microprocessor, a unit of data having a plurality of ordered bits during a first clock cycle;
zeroing, after the retrieving and during the first clock cycle, any of the bits that are not required for use with a predefined addressing mode;
shifting, by the load-store unit, the unit of data by a number of bits during a second clock cycle immediately following the first clock cycle;
placing, after the shifting and during the second clock cycle, the unit of data into a register of the microprocessor; and
providing, after the shifting and during the second clock cycle, the unit of data to comparison logic of the microprocessor, wherein the microprocessor instruction is maintained as a single instruction in a completion table during the processing of the microprocessor instruction.
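The two-cycle zero-then-shift sequence can be modeled as plain bit arithmetic. The 64-bit width and the particular mask and shift values are illustrative parameters, not taken from the patent.

```python
def load_and_format(raw: int, addr_mask: int, shift: int) -> int:
    """Model of the claimed multi-cycle operation:
    cycle 1 retrieves the data and zeroes any bits not required by the
    addressing mode; cycle 2 shifts the result, which is then written
    to a register and also fed to comparison logic."""
    masked = raw & addr_mask                    # cycle 1: zero unused bits
    return (masked << shift) & ((1 << 64) - 1)  # cycle 2: shift, 64-bit wrap
```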

US Pat. No. 10,169,040

SYSTEM AND METHOD FOR SAMPLE RATE CONVERSION

Ceva D.S.P. Ltd., Herzli...

1. A method for performing sample rate conversion by an execution unit, the method comprising:receiving an instruction, wherein the instruction comprises an irregular shifting pattern of data elements stored in a vector register; and
shifting the data elements in the vector register according to the irregular shifting pattern,
wherein the sample rate conversion comprises downsampling, and wherein the irregular shifting pattern is provided by an indication stating whether a memory element in the input vector register loads a data element from an immediate next memory element, or whether the memory element loads a data element previously stored in a shadow vector register and the data element stored in the immediate next memory element is loaded into the shadow vector register.
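One reading of the irregular shifting pattern with a shadow register can be sketched as follows; the per-lane semantics (and zero-fill past the end of the vector) are assumptions for illustration.

```python
def irregular_shift(vec, shadow, pattern):
    """For each lane i, pattern[i] is the claim's indication:
    True  -> the lane loads the data element from the immediate next lane;
    False -> the lane loads the element previously stored in the shadow
             register, and the next lane's element is loaded into the
             shadow register instead."""
    vec, shadow = list(vec), list(shadow)
    out = []
    for i, take_next in enumerate(pattern):
        nxt = vec[i + 1] if i + 1 < len(vec) else 0   # zero past the end
        if take_next:
            out.append(nxt)
        else:
            out.append(shadow[i])
            shadow[i] = nxt
    return out, shadow
```

Skipping some lanes while capturing their inputs in the shadow register is what makes the pattern usable for downsampling.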

US Pat. No. 10,169,039

COMPUTER PROCESSOR THAT IMPLEMENTS PRE-TRANSLATION OF VIRTUAL ADDRESSES

OPTIMUM SEMICONDUCTOR TEC...

1. A processor, comprising:a register file comprising one or more registers; and
processing logic circuit, communicatively coupled to the register file, to:
identify a value stored in a first register of the register file as a virtual address, the virtual address comprising a corresponding virtual base page number;
translate the virtual base page number to a corresponding real base page number and zero or more real page numbers, wherein zero or more real page numbers correspond to zero or more virtual page numbers associated with the virtual base page number;
store, in the one or more registers, the real base page number and the zero or more real page numbers;
responsive to identifying at least one input value stored in at least one register of the register file specified by an instruction, combine the at least one input value to produce a result value;
compute, based on real translation information stored in the one or more registers, a real translation to a real address of the result value; and
access, based on the computed real translation, a memory.

US Pat. No. 10,169,037

IDENTIFYING EQUIVALENT JAVASCRIPT EVENTS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented process for identifying equivalent JavaScript events, comprising:receiving source code containing two JavaScript events;
extracting, to form extracted HTML elements, an HTML element containing an event from each of the two JavaScript events; and
identifying that the two JavaScript events are equivalent based upon:
a determination that the extracted HTML elements are of a same type according to equivalency criteria B,
a determination that the extracted HTML elements have a same number of attributes according to equivalency criteria C,
a determination that JavaScript function calls of each of the two JavaScript events are similar according to equivalency criteria A, and
a determination that other attributes of the extracted HTML elements satisfy equivalency criteria D.
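The four equivalency criteria can be sketched over simplified extracted elements. The dict layout and the concrete tests for criteria A and D (identical handler names; equal `type` attributes) are stand-ins for whatever the patent's criteria actually specify.

```python
def events_equivalent(el_a: dict, el_b: dict) -> bool:
    """el_a/el_b are simplified extracted HTML elements, e.g.
    {"tag": "button", "attrs": {...}, "handler": "save"}."""
    same_type = el_a["tag"] == el_b["tag"]                       # criteria B
    same_attr_count = len(el_a["attrs"]) == len(el_b["attrs"])   # criteria C
    similar_calls = el_a["handler"] == el_b["handler"]           # criteria A
    other_ok = el_a["attrs"].get("type") == el_b["attrs"].get("type")  # criteria D
    return same_type and same_attr_count and similar_calls and other_ok
```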

US Pat. No. 10,169,036

SYNCHRONIZING COMMENTS IN SOURCE CODE WITH TEXT DOCUMENTS

International Business Ma...

1. A method, with a processor of an information processing system, for synchronizing comments in a source code file with text of a source code document, the method comprising:analyzing a source code file;
identifying, based on the analyzing, a set of source code comment text within the source code file;
extracting, based on the identifying, a set of text from the set of source code comment text that has been identified;
generating, based on the identifying, a set of metadata for at least the set of text, the set of metadata comprises at least a unique representation of the set of text, and wherein the set of metadata at least identifies one or more line numbers in the source code file associated with the set of text;
applying a plurality of markup tags to the set of text, the plurality of markup tags at least one of formatting and stylizing the set of text when presented to the user; and
generating a source code document comprising one or more of the set of text, the set of metadata, and the plurality of markup tags.
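The extract/metadata/markup pipeline can be sketched for `#`-style comments. The SHA-256 digest stands in for the claim's "unique representation", and the `<em>` tag is an illustrative markup choice.

```python
import hashlib
import re

def extract_comment_metadata(source: str):
    """Identify '#' comments in a source file and build, per comment:
    the text, a unique representation of it (a content digest), the
    line number it came from, and a markup-tagged rendering."""
    entries = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = re.search(r"#\s*(.+)", line)
        if m:
            text = m.group(1)
            entries.append({
                "text": text,
                "digest": hashlib.sha256(text.encode()).hexdigest(),
                "line": lineno,
                "markup": f"<em>{text}</em>",
            })
    return entries
```

The digest plus line numbers give the synchronization handle: a later pass over the generated document can locate each comment's origin in the source file.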

US Pat. No. 10,169,035

CUSTOMIZED STATIC SOURCE CODE ANALYSIS

INTERNATIONAL BUSINESS MA...

1. A system comprising:a memory; and
a processor coupled with the memory, the processor configured to perform a customized static source code analysis of a source code, the customized static source code analysis comprising:
parsing a source code, the parsing comprising identifying a first application programming interface (API) call, and a second API call;
identifying a first analysis configuration file corresponding to the first API call, and a second analysis configuration file corresponding to the second API call;
determining, based on the first analysis configuration file, a description of the first API call and an identification of a first target resource invoked by the first API call;
determining, based on the second analysis configuration file, a second description of the second API call and an identification of a second target resource invoked by the second API call; and
generating a static source code analysis report that includes the description of the first API call and the identification of the first target resource corresponding to the first API call, and the description of the second API call and the identification of the second target resource corresponding to the second API call.
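The config-driven report generation reduces to a lookup per parsed API call. Plain dicts stand in here for the per-call analysis configuration files.

```python
def build_analysis_report(api_calls, configs):
    """api_calls: API call names identified while parsing the source.
    configs: call name -> {"description": ..., "resource": ...},
    standing in for the analysis configuration files."""
    report = []
    for call in api_calls:
        cfg = configs[call]
        report.append({"call": call,
                       "description": cfg["description"],
                       "resource": cfg["resource"]})
    return report
```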

US Pat. No. 10,169,034

VERIFICATION OF BACKWARD COMPATIBILITY OF SOFTWARE COMPONENTS

International Business Ma...

1. A method of determining backward compatibility of a software component, the method comprising:generating, by a processor, respective tree structures for both a first version of the software component and a second version of the software component, wherein the respective tree structures include a name for each respective attribute, a type for each respective attribute, a name for each respective operation, and a type for each respective operation included in the respective tree structure;
identifying, by the processor, one or more programming interfaces that are exposed by the first version of the software component by: converting attributes of exposed programming interfaces into corresponding operations of a first tree structure that includes a name and a type for each parameter, return, and fault associated with each respective operation; and
determining, by the processor, a backward compatibility of the first version of the software component by comparing the operations of the first version of the software component to one or more operations of the second version of the software component based on the respective tree structures.
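Once both versions are reduced to trees, the comparison step can be sketched as below. Requiring every old operation to exist unchanged is a simplified compatibility rule; the patent's tree comparison may be more permissive (e.g. allowing additions within a tree).

```python
def backward_compatible(old_ops: dict, new_ops: dict) -> bool:
    """old_ops/new_ops map operation name -> signature tree, e.g.
    {"getUser": {"params": [("id", "int")], "return": "User"}}.
    The new version is backward compatible (under this reading) if
    every old operation is still present with an identical tree."""
    return all(name in new_ops and new_ops[name] == tree
               for name, tree in old_ops.items())
```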

US Pat. No. 10,169,033

ASSIGNING A COMPUTER TO A GROUP OF COMPUTERS IN A GROUP INFRASTRUCTURE

International Business Ma...

1. A computer-implemented method for assigning a given computer to a computer group of a set of computer groups, the method comprising:scanning, by a computer, software components installed on the given computer, resulting in a list of discovered software components of the given computer;
for at least one computer group of the set of computer groups:
obtaining, by the computer, a first list of software components most frequently installed on computers of the at least one computer group, wherein the first list is unique for all computer groups of the set of computer groups; and
comparing, by the computer, the first list with the list of discovered software components and, based on the comparison, computing a first likelihood that the given computer belongs to the at least one computer group; and
in case only one of the first likelihoods exceeds a first threshold, assigning, by the computer, the given computer to the at least one computer group for which the first likelihood exceeds the first threshold.
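The likelihood computation and the single-threshold assignment rule can be sketched as follows. Using the installed fraction of each group's list as the likelihood is an assumption; the claim only requires some likelihood computed from the comparison.

```python
def assign_group(discovered, group_lists, threshold=0.5):
    """discovered: set of software components found on the computer.
    group_lists: group name -> list of components most frequently
    installed in that group (unique per group, per the claim).
    Assign only if exactly one group's likelihood exceeds the threshold."""
    likelihoods = {g: len(discovered & set(lst)) / len(lst)
                   for g, lst in group_lists.items()}
    above = [g for g, p in likelihoods.items() if p > threshold]
    return above[0] if len(above) == 1 else None
```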

US Pat. No. 10,169,032

PARALLEL DEVELOPMENT OF DIVERGED SOURCE STREAMS

International Business Ma...

1. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions, when executed by a computer, cause the computer to:manage, using a parallel development tool implemented in programmable logic circuitry, diverged source streams, wherein the parallel development tool is to:
track, on a position-by-position basis in a diverged code history associated with a diverged source stream, an origin source stream and an original position of code contained within the diverged source stream, wherein the position-by-position basis is one or more of an argument order or an operand order, and wherein the diverged code history is to be a separate file;
detect a modification to a first portion of the code contained within the diverged source stream at a first position;
automatically document the modification and the first position in the diverged code history, wherein the modification triggers a modification indicator to be prepended to an origin stream indicator and an original position indicator in the diverged code history, and wherein the modification indicator indicates that the modification occurred in a transition to a version of the diverged source stream;
detect a move of a second portion of the code in the diverged source stream from a second position to a third position;
automatically document the move and the second position in the diverged code history, wherein the move triggers a move indicator to be prepended to the origin stream indicator and the original position indicator in the diverged code history, and wherein the move indicator indicates the diverged source stream and a version level at which the move occurred;
detect an addition of a third portion of the code in the diverged source stream at a fourth position;
automatically document the addition with a timestamp in the diverged code history;
present, through a user interface device, an option to ignore the detected modification;
receive a request to merge shifted code contained within the diverged source stream with the origin source stream;
search, using the modification indicator, the original position indicator, the origin stream indicator, the move indicator, and the timestamp, an origin code history for content corresponding to the shifted code in the diverged code history to resolve the request;
track, on a position-by-position basis in the origin code history associated with the origin source stream, the origin source stream, an original position, the diverged source stream and a diverged position of code contained within the origin source stream, using the modification indicator, the original position indicator, the origin stream indicator, the move indicator, and the timestamp;
receive a request to merge out-of-order code contained within the diverged source stream with the origin source stream, wherein the out-of-order code is to be modified prior to other code that has already been merged; and
present, through the user interface device, an alignment option to merge conflicted code, wherein the conflicted code is to be bounded by modified code, wherein the alignment option identifies the modified code at one or more locations to use to correctly merge the conflicted code, and wherein at least one of the one or more locations of the conflicted code includes one or more of the modification, the move, the shifted code, the addition or the out-of-order code.

US Pat. No. 10,169,031

PROGRAM CODE LIBRARY SEARCHING AND SELECTION IN A NETWORKED COMPUTING ENVIRONMENT

International Business Ma...

1. A computer-implemented method for searching for program code libraries in multiple programming languages in a networked computing environment, comprising:receiving, in a computer memory medium, a request to search at least one program code library repository associated with an integrated development environment (IDE) for a program code library;
searching, based on an annotation to the request, the at least one program code library repository for the program code library corresponding to a programming language of a project;
expanding the searching, using emulation based on another annotation to the request, to include a substitute programming language, wherein the emulation creates a wrapper method in the substitute programming language and emulates a library programming language during execution; and
providing, based on the expanded searching, a result to a device hosting the IDE.

US Pat. No. 10,169,030

REFRESHING A SOFTWARE COMPONENT WITHOUT INTERRUPTION

International Business Ma...

1. A computer-implemented method for refreshing a software component without interruption, comprising:detecting when a current instance of the software component is inactive;
activating a refresh process of the software component in parallel to the current instance, including starting a new instance of the software component;
monitoring a state of the current instance and, when the current instance ceases to be inactive, canceling the refresh process;
determining that the refresh process is complete; and
switching from the current instance to the new instance of the software component.
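The claimed flow is a small state machine: refresh in parallel only while the current instance is inactive, cancel if it wakes up, switch on completion. Instance names here are placeholders.

```python
class Refresher:
    """Minimal sketch of the claimed refresh-without-interruption flow."""
    def __init__(self):
        self.current, self.new, self.refreshing = "v1", None, False

    def on_inactive(self):          # current instance detected inactive
        self.new, self.refreshing = "v2", True  # start new instance in parallel

    def on_active(self):            # current instance ceased to be inactive
        if self.refreshing:
            self.new, self.refreshing = None, False  # cancel the refresh

    def on_refresh_complete(self):  # refresh process determined complete
        if self.refreshing:
            self.current, self.new, self.refreshing = self.new, None, False
```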

US Pat. No. 10,169,029

PATTERN BASED MIGRATION OF INTEGRATION APPLICATIONS

International Business Ma...

1. A method for migrating applications, the method comprising:obtaining configuration information from a source integration application;
determining a set of features for the source integration application based on the configuration information;
identifying a set of integration patterns, wherein each integration pattern in the set of integration patterns defines a set of expected characteristics;
determining a respective fitness score for each of the set of integration patterns by, for each respective integration pattern of the set of integration patterns:
determining a respective feature score for at least two respective features of the set of features, wherein each of the respective feature scores represent a likelihood that the respective feature matches the respective integration pattern; and
aggregating the respective feature scores to generate the respective fitness score for the respective integration pattern;
selecting one or more integration patterns from the set of integration patterns based on the respective fitness score associated with each of the respective integration patterns; and
migrating the source integration application based on the selected one or more integration patterns.
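The fitness scoring can be sketched with summation as the aggregation step; the claim does not fix the aggregation function, so the sum (and the flat dict layout) are assumptions.

```python
def select_patterns(features, patterns, top_n=1):
    """features: names of features determined for the source application.
    patterns: pattern name -> {feature name: likelihood the feature
    matches that pattern}. Aggregate per-feature scores into a fitness
    score per pattern and select the best-scoring pattern(s)."""
    fitness = {p: sum(scores.get(f, 0.0) for f in features)
               for p, scores in patterns.items()}
    ranked = sorted(fitness, key=fitness.get, reverse=True)
    return fitness, ranked[:top_n]
```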

US Pat. No. 10,169,028

SYSTEMS AND METHODS FOR ON DEMAND APPLICATIONS AND WORKFLOW MANAGEMENT IN DISTRIBUTED NETWORK FUNCTIONS VIRTUALIZATION

Ciena Corporation, Hanov...

1. A workloads management method for on-demand applications in distributed Network Functions Virtualization Infrastructure (dNFVI), the workloads management method comprising:receiving usage data from a unikernel implementing one or more functions of a plurality of functions related to a Virtual Network Function (VNF);
determining an update to the one or more functions in the unikernel based on the usage data;
updating the unikernel by requesting generation of application code for the unikernel based on the update; and
starting the updated unikernel and redirecting service requests thereto,
wherein the unikernel and the updated unikernel are each a specialized, single address space machine image constructed using library operating systems which is executed directly on a hypervisor.

US Pat. No. 10,169,027

UPGRADE OF AN OPERATING SYSTEM OF A VIRTUAL MACHINE

International Business Ma...

1. A method, comprising:receiving, by one or more processors of a computer system, a virtual machine (VM) deletion request, wherein if the VM deletion request includes a first flag then the VM deletion request is a request to upgrade a base operating system (OS) of the VM, and wherein if the VM deletion request does not include the first flag then the VM deletion request is a request to delete the VM;
said one or more processors determining whether the received VM deletion request includes the first flag;
in response to the one or more processors determining that the received VM deletion request includes the first flag, said one or more processors storing metadata of the VM into a resource registry;
after said storing the metadata of the VM into the resource registry, said one or more processors receiving a VM creation request, wherein if the VM creation request includes a second flag then the VM creation request is a request to upgrade the base OS of the VM, and wherein if the VM creation request does not include the second flag then the VM creation request is a request to create a new VM;
said one or more processors determining whether the received VM creation request includes the second flag;
in response to the one or more processors determining that the received VM creation request includes the second flag, said one or more processors retrieving the metadata from the resource registry;
after said retrieving the metadata from the resource registry, said one or more processors loading a new version of the base OS onto the VM and using the retrieved metadata to configure the VM with the new version of the base OS; and
said one or more processors deploying the VM with the new version of the base OS.

US Pat. No. 10,169,026

TRANSFERRING OPERATING ENVIRONMENT OF REGISTERED NETWORK TO UNREGISTERED NETWORK

KT CORPORATION, Gyeonggi...

1. A method of providing an operation environment of a registered network having first devices to a user in an unregistered network having second devices, the method comprising:detecting the second devices in the unregistered network when user equipment associated with the user enters a service area of the unregistered network;
as compatible devices, selecting devices compatible with the first devices in the registered network from the detected second devices;
obtaining system images of the first devices compatible with the selected compatible devices;
installing the obtained system images of the first devices at the selected compatible devices, respectively;
generating a user interface for enabling the user to control at least one of the compatible devices; and
transmitting the generated user interface to the user equipment,
wherein the user equipment provides the user interface, receives a user input to control the at least one of the compatible devices through the user interface, and controls the at least one of the compatible devices installed with the system image and in the unregistered network based on the received user input by generating a control signal based on the user input and transmitting the generated control signal to the at least one of the compatible devices.

US Pat. No. 10,169,025

DYNAMIC MANAGEMENT OF SOFTWARE LOAD AT CUSTOMER PREMISE EQUIPMENT DEVICE

ARRIS Enterprises LLC, S...

1. A method comprising:detecting a request to load a requested executable software component to volatile memory;
determining that the size of the requested executable software component is greater than the size of available space in the volatile memory;
determining a probability to unload value for each executable software component of one or more executable software components currently loaded in the volatile memory, wherein the probability to unload value for each respective one executable software component is calculated based upon one or more criteria associated with an execution of the respective one executable software component;
based upon the probability to unload values determined for each of the one or more executable software components that are currently loaded in the volatile memory, identifying one or more of the executable software components for removal from the volatile memory;
removing the identified one or more executable software components from the volatile memory; and
loading the requested executable software component to the volatile memory.
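The eviction loop can be sketched with recency of execution as the unload criterion; the claim allows any "criteria associated with an execution", so deriving the probability-to-unload from staleness is one assumed choice.

```python
def make_room(loaded, needed, capacity, last_used, now):
    """loaded: {component: size in volatile memory}. Components are
    removed in descending probability-to-unload (here: staleness,
    now - last_used) until the requested component fits."""
    used = sum(loaded.values())
    if needed <= capacity - used:
        return []                      # already fits, nothing to remove
    by_staleness = sorted(loaded, key=lambda c: now - last_used[c], reverse=True)
    removed = []
    for comp in by_staleness:
        if needed <= capacity - used:
            break
        used -= loaded[comp]
        removed.append(comp)
    return removed
```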

US Pat. No. 10,169,024

SYSTEMS AND METHODS FOR SHORT RANGE WIRELESS DATA TRANSFER

Arm Limited, Cambridge (...

1. A method for device control of a primary device using an accessory device as a proxy, the method comprising:executing a device application in an operating system of the primary device;
establishing a short range wireless link between the device application and an accessory application on the accessory device in accordance with a protocol implemented by a device low energy stack on the primary device and an accessory low energy stack on the accessory device, where a procedure of the protocol implemented in the device low energy stack is inaccessible by the device application via the operating system and where the procedure accesses connection parameters of the link;
sending a device control message from a device application to an accessory application requesting performance of the procedure;
responsive to the device control message, the accessory application connecting with the accessory low energy stack and requesting the procedure;
the accessory low energy stack performing the procedure to access the connection parameters of the short range wireless link;
sending, by the accessory application, a response to the device control message; and
receiving the response, at the device application, for the device control message.

US Pat. No. 10,169,022

INFORMATION PROCESSING APPARATUS AND RESOURCE MANAGEMENT METHOD

Canon Kabushiki Kaisha, ...

1. An information processing apparatus that consumes resources of an amount that depends on how many libraries are open, comprising:a determining unit, configured to determine whether the number of libraries including classes set for an installed program is two or more; and
an integrating unit, configured to integrate, in a case where it is determined that the number of libraries is two or more, the classes included in the libraries into a smaller number of libraries, wherein
the determining unit specifies, with reference to a class path indicating class locations that is described in a configuration file of the installed program, a library including a class that is set for the program,
the program includes a system program and an application, with a boot class path and a system class path being described in the configuration file with regard to the system program and an application class path being described in the configuration file with regard to the application, and
the integrating unit integrates the classes included in the library based on the class paths.

US Pat. No. 10,169,021

SYSTEM AND METHOD FOR DEPLOYING A DATA-PATH-RELATED PLUG-IN FOR A LOGICAL STORAGE ENTITY OF A STORAGE SYSTEM

1. A method for deploying a data-path-related plug-in for a logical storage entity of a storage system, the method comprising:deploying the data-path-related plug-in for the logical storage entity, wherein the deploying includes creating a plug-in inclusive data-path specification and wherein the plug-in inclusive data-path specification includes operation of the data-path-related plug-in;
creating a verification data-path specification, wherein the verification data-path specification does not include operation of the data-path-related plug-in;
executing a task related to the data-path-related plug-in on a data-path defined by the plug-in inclusive data-path specification to yield a first execution result;
executing the task on a data-path defined by the verification data-path specification to yield a second execution result;
verifying the first execution result using the second execution result thereby validating the task execution;
if any discrepancy exists between the first execution result and the second execution result, performing one or more failure actions; and
removing the verification data-path and performing one or more validation actions when a validation of the data-path-related plug-in is complete, wherein the one or more validation actions include one or more of the following actions:
(a) increasing a grade associated with the data-path-related plug-in; and
(b) issuing a notification indicating that the validation is complete to a user of the logical storage entity.
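The dual-path verification reduces to running one task twice and comparing. All callables below are placeholders for the storage system's real data paths and action hooks.

```python
def verify_plugin(task_input, plugin_path, verification_path,
                  on_failure, on_validated):
    """Execute the same task on the plug-in inclusive data path and on
    the verification data path (without the plug-in), then compare."""
    first = plugin_path(task_input)         # plug-in inclusive path
    second = verification_path(task_input)  # verification path
    if first != second:
        on_failure(first, second)           # discrepancy -> failure actions
        return False
    on_validated()                          # e.g. raise grade, notify user
    return True
```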

US Pat. No. 10,169,020

SOFTWARE GLOBALIZATION OF DISTRIBUTED PACKAGES

International Business Ma...

1. A method for globalizing distributed software packages using a global application programming interface (API), the method comprising:extracting, by a computer, a text string in a source language from an independent package program code;
calculating, by the computer using an algorithm, a resource message key for the extracted text string from content of the extracted text string;
storing, by the computer, the resource message key and the extracted text string in a source language resource file;
translating, by the computer, the extracted text string into an additional language to create a translated text string;
storing, by the computer, the translated text string with the resource message key in an additional language resource file; and
distributing, by the computer, an independent package with the source language resource file, the additional language resource file, and the independent package program code bundled in the independent package.
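Calculating the resource message key from the string's own content can be sketched with a content hash; the claim requires only that the key be derived from the content, so the truncated SHA-256 and the `msg_` prefix are illustrative choices.

```python
import hashlib

def resource_key(text: str) -> str:
    """Calculate a resource message key from the extracted text itself,
    so the same string always yields the same key in every package."""
    return "msg_" + hashlib.sha256(text.encode()).hexdigest()[:12]

def build_resource_files(strings, translate):
    """Build the source-language resource file and an additional-language
    file sharing the same keys, ready to bundle with the package."""
    source = {resource_key(s): s for s in strings}
    additional = {k: translate(v) for k, v in source.items()}
    return source, additional
```

Because the key is a pure function of the text, independently built packages agree on keys without coordinating through a shared registry.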

US Pat. No. 10,169,019

CALCULATING A DEPLOYMENT RISK FOR A SOFTWARE DEFINED STORAGE SOLUTION

International Business Ma...

1. An apparatus comprising:a processor;
a memory storing code executable by the processor to perform:
querying a deployed data storage solution for performance data, wherein the data storage solution provides data storage using hardware elements, software elements, an operating system, and drivers for the software elements;
receiving the performance data from the deployed data storage solution;
storing the performance data;
receiving failure data;
calculating discrepancy data for the deployed data storage solution from the failure data;
storing the discrepancy data;
generating a data storage solution that provides configurable data storage for data storage deployment, wherein the data storage solution is organized as a data structure comprising a plurality of data storage components, and each data storage component comprises a hardware identifier for the hardware elements, software prerequisites for the software elements, an operating system identifier, and a driver identifier for the software elements;
calculating a deployment risk for the data storage solution using a trade-off analytics function performed by a neural network and based on the discrepancy data, the performance data, a product match, an operating system match, and a software prerequisites match between the data storage solution and data storage parameters, wherein the neural network is trained on data storage solution field data comprising discrepancy data, performance data, and failure data for deployed data storage solutions; and
in response to the deployment risk not exceeding a risk threshold, deploying the data storage solution by providing the hardware and software elements.

US Pat. No. 10,169,018

DOWNLOADING A PACKAGE OF CODE

International Business Ma...

1. A computer-implemented method comprising:receiving at a server a request from a client, for download of a package of code, the request specifying the package of code to be downloaded, wherein upon receipt of the package of code, the client can execute source code comprising the package of code locally;
acquiring information from the request received relating to a user of the client, wherein the information comprises a role of the user of the client, wherein the acquiring information relating to the user of the client comprises identifying the user of the client in an entry in a user registry and ascertaining user access rights from the entry in the user registry to determine the role of the user of the client;
automatically modifying the package of code according to the acquired information to provide a modified package of code specific to the user of the client, wherein functionality of the modified package of code, when executed on the client, is based on the role of the user of the client, wherein the automatically modifying the package of code according to the acquired information to produce the modified package of code comprises automatically removing one or more methods from the package of code, wherein each method of the one or more methods comprises source code, and wherein the one or more methods comprise a known access level, and wherein the automatically removing the one or more methods from the package of code is based on the role of the user of the client not permitting use of methods of the known access level of the one or more methods;
compiling, at the server, the modified package of code; and
transmitting the modified package of code to the client, wherein the client can immediately execute source code comprising the modified package of code locally.

US Pat. No. 10,169,017

CROWDSOURCING LOCATION BASED APPLICATIONS AND STRUCTURED DATA FOR LOCATION BASED APPLICATIONS

International Business Ma...

1. A method for deploying a location based application providing crowdsourced structured points of input for data entry, the method comprising:selecting different combinations of different location based application components by different end users from over a computer communications network in order to define a different deployable application for each of the different end users;
storing by each of the different end users, a corresponding one of the different deployable applications in a deployable application repository from which each deployable application is downloaded and consumed by others of the different end users and associating during the storing, a map with a corresponding one of the location based components in each deployable application for each corresponding one of the different end users;
defining for each map, a point of input into which structured data defined by the point of input as being a pre-defined selection of selectable data is received in connection with a single location on the map irrespective of a particular one of the combinations of the different location based components of an associated one of the different deployable applications;
deploying each deployable application to a corresponding one of the different end users; and,
subsequent to the deploying of each deployable application to a corresponding one of the different end users, in each one of the deployable applications that has been deployed to a corresponding one of the different end users, receiving, from a corresponding one of the different end users, a selection through the point of input of one of the selectable data presented in connection with the single location on the map and aggregating each selection by each corresponding one of the different end users in a repository in association with the single location on the map and thereafter, in response to a request by a new one of the different end users to deploy a particular one of the deployable applications to a mobile device of the new one of the different end users, deploying the particular one of the deployable applications with the single location on the map and all aggregated selections of the selectable data from others of the different end users as stored in connection with the deployable application.
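The storage and aggregation steps above can be modeled with a simple repository. This is an illustrative data-model sketch under assumed names (`DeployableAppRepository`, the `SELECTABLE` choices, tuple-keyed locations), not the patented system:

```python
# Illustrative sketch: aggregate each user's structured selection for a
# single map location, so later deployments ship the map together with
# all prior selections from other end users.
from collections import defaultdict

SELECTABLE = ("open", "closed", "crowded")   # assumed pre-defined choices

class DeployableAppRepository:
    def __init__(self):
        # (app name, location) -> list of selections from all end users
        self.aggregated = defaultdict(list)

    def submit(self, app, location, selection):
        if selection not in SELECTABLE:
            raise ValueError("point of input only accepts pre-defined data")
        self.aggregated[(app, location)].append(selection)

    def deploy(self, app, location):
        # A new deployment includes the location plus every aggregated
        # selection stored in connection with the application.
        return {"app": app, "location": location,
                "selections": list(self.aggregated[(app, location)])}

repo = DeployableAppRepository()
repo.submit("cafe-finder", (40.74, -73.99), "open")
repo.submit("cafe-finder", (40.74, -73.99), "crowded")
print(repo.deploy("cafe-finder", (40.74, -73.99)))
```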

US Pat. No. 10,169,016

EXECUTING OPTIMIZED LOCAL ENTRY POINTS

International Business Ma...

1. A computer system comprising:a memory;
a processor, communicatively coupled to the memory;
an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and
the processor executing program code from an application module loaded into memory, the loaded application module selected from the group consisting of: the first application module, the second application module, and the third application module, the executing including:
to load an address of a function, including to load an entry point address of the function, wherein the symbolic reference can be resolved using a local entry point address of the function, the entry point address selected from the group consisting of: a local entry point address of the function defined in the loaded application module and a global entry point address, and to transfer execution to the entry point address; and
to perform the indirect function call using the address of the function, the performance of the indirect function call further including to transfer execution to the entry point address selected from the group consisting of: the local entry point address of the function defined in the loaded application module and the global entry point address of the function, wherein resolution of the symbolic reference to the function address through the local entry point address of the function comprises a reduction of a quantity of operations executed through the program code.
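The entry-point selection described above can be illustrated with a toy resolver. The two-operation setup cost and the dictionary-based function descriptor are assumptions for this sketch; on real ABIs (e.g., the 64-bit ELFv2 ABI for POWER), the global entry point performs TOC-pointer setup that a same-module call through the local entry point can skip:

```python
# Illustrative simulation: a symbolic reference resolved to the target's
# local entry point skips the global entry point's environment setup,
# reducing the quantity of operations executed.
GLOBAL_ENTRY_SETUP_OPS = 2       # assumed setup cost at the global entry

def resolve_entry_point(function, caller_module):
    """Pick the entry point address an indirect call should use."""
    if function["module"] == caller_module:
        # Same module: environment already established, use local entry.
        return function["local_entry"], 0
    return function["global_entry"], GLOBAL_ENTRY_SETUP_OPS

target = {"module": "libm", "global_entry": 0x1000, "local_entry": 0x1008}
addr, extra_ops = resolve_entry_point(target, "libm")
print(hex(addr), extra_ops)      # local entry, no setup operations
```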

US Pat. No. 10,169,015

COMPACT DATA MARSHALLER GENERATION

International Business Ma...

1. A method for compact data marshaller generation, comprising:determining a plurality of data types having a same memory layout from data to be marshalled using a processor, each of the plurality of data types being associated with separate data marshallers; and
modifying marshalling code to marshal the data by:
unifying the separate data marshallers to provide a single data marshaller for the plurality of data types for compact data marshaller generation; and
identifying and replacing consecutive data definitions in a first class of the data types that are also part of a second class of the data types with a call to the second class.
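The unification step above can be sketched by keying marshallers on memory layout rather than on type. The type names and `struct` format strings below are assumptions for illustration; types whose layouts match share one marshaller instead of carrying separate ones:

```python
# Illustrative sketch: unify separate per-type data marshallers into a
# single marshaller per distinct memory layout.
import struct

# Hypothetical data types and their field layouts.
LAYOUTS = {
    "Point":  "<ii",   # two 32-bit ints
    "Size":   "<ii",   # same layout as Point -> same marshaller
    "Sample": "<di",   # double + int: a distinct layout
}

def build_marshallers(layouts):
    """Return one struct.Struct per distinct layout, shared by types."""
    unified = {}
    marshallers = {}
    for type_name, fmt in layouts.items():
        if fmt not in unified:            # reuse an existing marshaller
            unified[fmt] = struct.Struct(fmt)
        marshallers[type_name] = unified[fmt]
    return marshallers

m = build_marshallers(LAYOUTS)
assert m["Point"] is m["Size"]            # one marshaller, two types
print(m["Point"].pack(3, 4))
```

The claim's second step, replacing consecutive data definitions shared with a second class by a call to that class, would shrink the generated marshalling code further; it is omitted here for brevity.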

US Pat. No. 10,169,014

COMPILER METHOD FOR GENERATING INSTRUCTIONS FOR VECTOR OPERATIONS IN A MULTI-ENDIAN INSTRUCTION SET

International Business Ma...

1. A computer-implemented method executed by at least one processor for a compiler to process a plurality of instructions in a computer program, the method comprising the steps of:specifying to the compiler a code generation endian preference;
the compiler reading the plurality of instructions;
the compiler selecting a vector instruction in the plurality of instructions;
the compiler determining when the vector instruction generates a vector load instruction that does not satisfy the code generation endian preference; and
when the vector instruction is a vector load instruction that does not satisfy the code generation endian preference, the compiler adding to the plurality of instructions in the computer program at least one vector element reverse instruction after the vector load instruction to correct a mismatch between an endian bias of the vector load instruction and the code generation endian preference.
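The fix-up described above can be modeled abstractly. This toy simulation captures only element numbering, not real byte-level vector semantics: a load whose endian bias disagrees with the code generation preference delivers lanes in reversed element order, and the compiler-inserted vector element reverse restores the expected order.

```python
# Illustrative simulation of the endian mismatch correction; which bias
# delivers memory order is an arbitrary modeling choice for this sketch.
def vector_load(memory, bias):
    # Toy model: a load with the opposite endian bias numbers vector
    # elements from the other end, so lanes arrive reversed.
    return list(memory) if bias == "little" else list(reversed(memory))

def compiled_load(memory, load_bias, preference):
    lanes = vector_load(memory, load_bias)
    if load_bias != preference:
        lanes.reverse()      # the inserted vector element reverse instruction
    return lanes

mem = [10, 20, 30, 40]
print(compiled_load(mem, "big", "little"))   # -> [10, 20, 30, 40]
```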

US Pat. No. 10,169,013

ARRANGING BINARY CODE BASED ON CALL GRAPH PARTITIONING

International Business Ma...

1. A method, in a data processing system, for arranging binary code to reduce instruction cache conflict misses, comprising:generating, by a processor of the data processing system executing a compiler, a call graph of a portion of code;
weighting, by the compiler, nodes and edges in the call graph to generate a weighted call graph;
partitioning, by the compiler, the weighted call graph according to the weights, affinities between nodes of the call graph, and the size of cache lines in an instruction cache of the data processing system, so that binary code associated with one or more subsets of nodes in the call graph is combined into individual cache lines based on the partitioning, wherein the partitioning comprises, for each iteration in a plurality of iterations, merging nodes of unprocessed, maximum weight edges of the weighted call graph into a new node to modify the weighted call graph, until for a next iteration, no maximum weight unprocessed edge is present in the modified weighted call graph; and
outputting, by the compiler, the binary code corresponding to the partitioned call graph for execution in a computing device, wherein each node in the call graph is weighted according to a size of code associated with the node.
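The iterative merging above can be sketched as a greedy pass over edges in descending weight order. The cache line size, per-function code sizes, and call-frequency weights below are assumptions for illustration:

```python
# Illustrative sketch: merge the endpoints of the heaviest unprocessed
# call-graph edge while the combined code still fits one cache line, so
# hot caller/callee pairs share a line and conflict misses drop.
CACHE_LINE = 128                 # bytes; assumed line size
code_size = {"a": 40, "b": 48, "c": 96, "d": 32}          # bytes per function
edges = {("a", "b"): 50, ("b", "c"): 30, ("c", "d"): 10}  # edge weights

def partition(code_size, edges, line_size):
    """Greedy maximum-weight-edge merging under a cache-line budget."""
    groups = {n: frozenset([n]) for n in code_size}
    size = {frozenset([n]): s for n, s in code_size.items()}
    for (u, v), _w in sorted(edges.items(), key=lambda e: -e[1]):
        gu, gv = groups[u], groups[v]
        if gu == gv or size[gu] + size[gv] > line_size:
            continue             # already merged, or would overflow a line
        merged = gu | gv
        size[merged] = size.pop(gu) + size.pop(gv)
        for n in merged:
            groups[n] = merged
    return set(groups.values())

# a+b (88 bytes) and c+d (128 bytes) each fit a 128-byte line; b+c would not.
print(partition(code_size, edges, CACHE_LINE))
```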

US Pat. No. 10,169,012

COMPILER OPTIMIZATIONS FOR VECTOR OPERATIONS THAT ARE REFORMATTING-RESISTANT

International Business Ma...

1. An apparatus comprising:at least one processor;
a memory coupled to the at least one processor;
a computer program residing in the memory, the computer program including a plurality of instructions that includes at least one vector operation and that includes a plurality of reformatting-resistant vector operations that comprises a sink instruction without a corresponding reformatting operation; and
a compiler residing in the memory and executed by the at least one processor, the compiler including a vector instruction optimization mechanism that optimizes at least one of the plurality of reformatting-resistant vector operations in the computer program to enhance run-time performance of the computer program.

US Pat. No. 10,169,011

COMPARISONS IN FUNCTION POINTER LOCALIZATION

International Business Ma...

1. A computer system comprising:a memory;
a processor, communicatively coupled to the memory;
an indirect function call configuration, the configuration to define a first application module with a target function of an indirect function call, a second application module with a symbolic reference to the target function of the indirect function call, and a third application module to originate the indirect function call; and
a compiler in communication with the processor, the compiler to generate program code for an application module selected from the group consisting of: the first application module, the second application module, and the third application module, the generating including:
create a group with at least two symbolic references;
load addresses of functions by using the at least two symbolic references contained in the group;
determine that employed values of the at least two symbolic references are used to perform an operation selected from the group consisting of: the indirect function call in the first application module, a comparison to at least one symbolic reference contained in the group, and a comparison to a NULL value; and
indicate, in the generated program code, that the at least two symbolic references can be resolved using local entry point addresses of the functions, wherein resolution of the at least two symbolic references to function addresses through local entry point addresses of the functions comprises a reduction of a quantity of operations executed through the generated program code.

US Pat. No. 10,169,010

PERFORMING REGISTER PROMOTION OPTIMIZATIONS IN A COMPUTER PROGRAM IN REGIONS WHERE MEMORY ALIASING MAY OCCUR AND EXECUTING THE COMPUTER PROGRAM ON PROCESSOR HARDWARE THAT DETECTS MEMORY ALIASING

International Business Ma...

1. A method for running a computer program comprising:compiling the computer program using a compiler that comprises an optimizer that performs register promotion optimizations using a special store instruction for regions of a computer program where memory aliasing can occur;
executing the compiled computer program on a processor that includes:
instruction decode logic that recognizes the special store instruction;
a plurality of registers that each includes an address tag for storing an address; and
hardware that detects memory aliasing at run-time using the address tags for the plurality of registers and recovers from the memory aliasing to provide functional correctness when memory aliasing occurs, wherein the hardware in the processor comprises a load/store unit that includes logic for handling the special store instruction and providing in-order execution of instructions when out-of-order execution of instructions produces memory aliasing, wherein the load/store unit detects when a younger load instruction targeting the address for the special store instruction executes before the special store instruction, and in response, flushes instructions in an instruction pipeline of the processor after the special store instruction.
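The detection path above can be caricatured in a few lines. This simulation is heavily simplified and uses assumed names; it shows only the trigger condition: a younger load (later in program order) to an address tagged by a pending special store signals aliasing, which would flush the instructions after the store.

```python
# Illustrative simulation of the load/store unit's aliasing trigger;
# the age-based bookkeeping is an assumption of this sketch.
class LoadStoreUnit:
    def __init__(self):
        self.pending_special_stores = {}   # address tag -> program order (age)
        self.flushed = False

    def issue_special_store(self, address, age):
        self.pending_special_stores[address] = age

    def complete_store(self, address):
        self.pending_special_stores.pop(address, None)

    def issue_load(self, address, age):
        store_age = self.pending_special_stores.get(address)
        # A younger load ran before the special store it depends on:
        if store_age is not None and age > store_age:
            self.flushed = True            # flush pipeline after the store
        return self.flushed

lsu = LoadStoreUnit()
lsu.issue_special_store(0x1000, age=5)     # special store still pending
print(lsu.issue_load(0x1000, age=7))       # younger load -> aliasing: True
```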

US Pat. No. 10,169,009

PROCESSOR THAT DETECTS MEMORY ALIASING IN HARDWARE AND ASSURES CORRECT OPERATION WHEN MEMORY ALIASING OCCURS

International Business Ma...

1. A processor for executing software instructions, the processor comprising:instruction decode logic that recognizes a special store instruction that is used in regions of a computer program where memory aliasing can occur;
a plurality of registers that each includes an address tag for storing an address; and
a load/store unit that includes logic for handling the special store instruction and providing in-order execution of instructions when out-of-order execution of instructions produces memory aliasing, wherein the load/store unit detects when a younger load instruction targeting the address for the special store instruction executes before the special store instruction, and in response, flushes instructions in an instruction pipeline of the processor after the special store instruction.

US Pat. No. 10,169,008

INFORMATION PROCESSING DEVICE AND COMPILATION METHOD

FUJITSU LIMITED, Kawasak...

1. An information processing device comprising:a memory; and
a processor coupled to the memory and the processor configured to:
extract a class in which a copy constructor included in source code or an assignment operator included in the source code is used;
identify a call to the copy constructor or the assignment operator included in the class extracted by the processor;
calculate a number of times of access to member variables of a copy source and a copy destination of a copy process executed based on the call, as indicated in the call identified by the processor and a periphery of the call;
compare the calculated number with a number of times of memory access related to the copy source and the copy destination of the call and the periphery of the call when a default copy process is executed by the processor based on the call; and
generate, based on the call and when the number, calculated by the processor, of times of the access is smaller than the number of times of the memory access, an intermediate code having, added thereto, information to be used to execute a process for copying by the copy constructor or the assignment operator in units of member variables.
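The decision in the claim above reduces to a cost comparison. The numbers and names below are assumptions for illustration; the per-member-variable copy annotation is added only when the counted member accesses are fewer than the memory accesses of the default copy:

```python
# Illustrative sketch of the cost comparison that gates the
# per-member-variable copy annotation in the intermediate code.
def choose_copy_strategy(member_accesses, default_memory_accesses):
    """Pick the intermediate-code annotation for one copy call site."""
    if member_accesses < default_memory_accesses:
        # Copy only in units of the member variables actually accessed.
        return {"copy": "per-member-variable", "cost": member_accesses}
    return {"copy": "default", "cost": default_memory_accesses}

# Call site where only 2 member variables of the source/destination are
# touched, while a default (whole-object) copy would make 8 memory accesses:
print(choose_copy_strategy(member_accesses=2, default_memory_accesses=8))
# -> {'copy': 'per-member-variable', 'cost': 2}
```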