US Pat. No. 10,922,245

INTELLIGENT BLUETOOTH BEACON I/O EXPANSION SYSTEM

Geotab Inc., Oakville (C...

16. A system comprising:a plurality of objects, each object of the plurality of objects being associated with a device of a plurality of devices;
a vehicle; and
at least one monitoring device comprising:
at least one processor; and
at least one non-transitory computer-readable storage medium having encoded thereon executable instructions that, when executed by the at least one processor, cause the at least one processor to carry out a method comprising:
determining objects associated with the vehicle, wherein determining the objects associated with the vehicle comprises:
at a first location of the vehicle, identifying a first set of devices within communication range of the at least one monitoring device;
for each first device of the first set of devices within communication range of the at least one monitoring device at the first vehicle location:
 obtaining information indicating a first location of the first device;
 comparing the information indicating the first location of the first device to a geofence associated with the vehicle; and
 when a result of the comparing indicates that the first device is located within the geofence associated with the vehicle at the first vehicle location, determining that the first device is associated with the vehicle at the first vehicle location;
at a second location of the vehicle, identifying a second set of devices within communication range of the at least one monitoring device;
for each second device of the second set of devices within communication range of the at least one monitoring device at the second vehicle location:
 obtaining information indicating a second location of the second device;
 comparing the information indicating the second location of the second device to the geofence associated with the vehicle; and
 when a result of the comparing indicates that the second device is located within the geofence associated with the vehicle at the second vehicle location, determining that the second device is associated with the vehicle at the second vehicle location; and
determining whether an object of the plurality of objects is associated with the vehicle based on determinations of whether the first set of devices is associated with the vehicle at the first vehicle location and the second set of devices is associated with the vehicle at the second vehicle location, wherein determining whether the object of the plurality of objects is associated with the vehicle comprises:
determining that the object of the plurality of objects is associated with the vehicle in response to determining that i) a device associated with the object is determined to be associated with the vehicle at the first vehicle location and ii) the device associated with the object is determined to be associated with the vehicle at the second vehicle location.
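
A minimal sketch of the two-location check this claim describes, assuming a rectangular geofence and illustrative helper names (point_in_geofence, devices_associated_at_location) that are not taken from the patent:

    # Illustrative sketch only; the claim does not specify an implementation.
    def point_in_geofence(point, geofence):
        """Axis-aligned geofence given as (min_lat, min_lon, max_lat, max_lon)."""
        lat, lon = point
        min_lat, min_lon, max_lat, max_lon = geofence
        return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

    def devices_associated_at_location(devices_in_range, geofence):
        """Device ids whose reported position falls inside the vehicle geofence."""
        return {d["id"] for d in devices_in_range
                if point_in_geofence(d["position"], geofence)}

    def objects_associated_with_vehicle(first_scan, second_scan, geofence, object_by_device):
        """An object is associated with the vehicle only if its device was inside
        the geofence at both the first and the second vehicle location."""
        at_first = devices_associated_at_location(first_scan, geofence)
        at_second = devices_associated_at_location(second_scan, geofence)
        return {object_by_device[d] for d in (at_first & at_second) if d in object_by_device}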

US Pat. No. 10,922,244

SECURE STORAGE OF DATA THROUGH A MULTIFACETED SECURITY SCHEME

Wells Fargo Bank, N.A., ...

1. A method comprising:storing, by a computing system and across a subset of nodes of a plurality of nodes on a network, a plurality of data blocks, each data block generated by encrypting one or more fragments of data;
storing, by the computing system, instructions for assembling a file from the plurality of data blocks;
moving, by the computing system, each of the plurality of data blocks to a different subset of nodes on the network;
storing, by the computing system, updated instructions for assembling the file, wherein the updated instructions enable reassembling the file after moving the plurality of data blocks to the different subset of nodes; and
continuing to move at least some of the data blocks to other subsets of nodes on the network, wherein the data blocks are moved at a frequency based on utilization of the nodes on the network, and wherein the data blocks are moved more frequently during periods of low utilization.
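
A short sketch of the relocation scheduling described here, under the assumption that network utilization is normalized to [0, 1]; the interval mapping and the random node selection are illustrative choices, not the patented method:

    import random

    def move_interval_seconds(utilization):
        """Low utilization -> short interval (frequent moves); high -> long."""
        return 60 + int(utilization * 3540)   # 1 minute at idle, ~1 hour at full load

    def relocate_block(block_id, current_nodes, all_nodes, instructions):
        """Move an encrypted block to a different subset of nodes and record
        updated instructions for reassembling the file."""
        candidates = [n for n in all_nodes if n not in current_nodes]
        new_nodes = random.sample(candidates, k=len(current_nodes))
        instructions[block_id] = new_nodes
        return new_nodes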

US Pat. No. 10,922,243

SECURE MEMORY

Intel Corporation, Santa...

1. A data storage system with encryption support comprising:a data storage device comprising a plurality of storage locations, the plurality of storage locations comprising a first storage location associated with a first address;
a storage controller to receive a write request comprising an indication of a first data unit and address data indicating the first address;
an encryption system to generate a master parity key;
wherein the encryption system is also to generate a first location parity key based at least in part on the master parity key and the first address;
wherein the encryption system is also to encrypt a first set of parity bits for the first data unit based at least in part on the first location parity key for the first address to generate an encrypted first set of parity bits; and
wherein the storage controller is to write the first data unit and the encrypted first set of parity bits to the data storage device.
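
A hedged sketch of the per-address parity-key idea: the claim requires a first location parity key derived from the master parity key and the first address, but does not name a derivation function or cipher, so HMAC-SHA256 and an XOR stream are stand-ins here:

    import hmac, hashlib, os

    master_parity_key = os.urandom(32)

    def location_parity_key(master_key, address):
        """Derive a key bound to a storage address from the master parity key."""
        return hmac.new(master_key, address.to_bytes(8, "little"), hashlib.sha256).digest()

    def encrypt_parity_bits(parity_bits, address, master_key):
        """Encrypt the parity bytes of a data unit with the key for its address."""
        key = location_parity_key(master_key, address)
        return bytes(p ^ k for p, k in zip(parity_bits, key))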

US Pat. No. 10,922,242

ADAPTABLE LOGICAL TO PHYSICAL TABLES FOR MULTIPLE SECTOR PATTERN SUPPORT

WESTERN DIGITAL TECHNOLOG...

1. A storage device, comprising:a memory device containing a logical to physical table for a computing arrangement, wherein the table provides a map of logical sector addresses to physical addresses and a mapped state of the memory device, wherein sectors of the memory device are grouped into indirection units; and
a controller coupled to the memory device, the controller is configured to:
receive a read command specifying read data at the controller for the memory device, the read command originating from a host;
determine, through the logical to physical table, if the read command received at the controller corresponds to an unmapped portion of the memory device;
perform a read of the memory device when the logical to physical table indicates a mapped sector;
determine a desired sector pattern from the logical to physical entry when the logical to physical table is unmapped;
review the read command received at the controller to determine if a pattern is specified for at least one sector and that the pattern specified is different than an existing pattern for the at least one sector; and
return a specified pattern for the read data to the host.

US Pat. No. 10,922,241

SUPPORTING SECURE MEMORY INTENT

Intel Corporation, Santa...

1. A system on a chip (SoC) comprising:an interconnect;
a processor coupled with the interconnect, the processor including:
a shared cache; and
a plurality of cores, including a first core, coupled to the shared cache, the first core including a decode unit to decode instructions, including a given instruction received from a guest virtual machine and indicating a given virtual address;
at least one translation lookaside buffer (TLB) to store translations of virtual addresses to physical addresses;
a page miss handler to perform a page table walk in page tables to identify a page table entry to map the given virtual address to a corresponding physical address and having a security indicator bit corresponding to the physical address, the security indicator bit to either be set to one to indicate a page of a memory at the physical address is an encrypted page, or cleared to zero to indicate the page is an unencrypted page;
a plurality of memory controllers, including a first memory controller, coupled with the interconnect, the first memory controller to control access to the page of the memory;
a memory encryption engine to encrypt data stored to the page, and decrypt data read from the page, if the security indicator bit is set to one;
a register to indicate which bit of the page table entry is the security indicator bit, wherein the register is readable by software;
a plurality of bus controller units coupled with the interconnect, the plurality of bus controller units to control access to a bus; and
a system agent unit coupled with the interconnect, the system agent unit to regulate a power state of the plurality of cores.

US Pat. No. 10,922,240

MEMORY SYSTEM, STORAGE SYSTEM AND METHOD OF CONTROLLING THE MEMORY SYSTEM

TOSHIBA MEMORY CORPORATIO...

1. A memory system connectable to a host, comprising:a nonvolatile memory configured to store a multi-level mapping table for logical-to-physical address translation;
a cache configured to cache a part of the multi-level mapping table; and
a controller configured to control the nonvolatile memory and the cache,
the multi-level mapping table including a plurality of tables corresponding to a plurality of hierarchical levels, each of the tables containing a plurality of address translation data portions, and each of the plurality of address translation data portions included in the table of each of the hierarchical levels covering a logical address range according to each hierarchical level, wherein
the controller is further configured to:
execute, by referring to the address translation data portion corresponding to each of the hierarchical levels stored in the cache, an address resolution process for the logical-to-physical address translation and an address update process of updating a physical address corresponding to a logical address;
obtain, for each of the hierarchical levels, a degree of bias of reference with respect to the address translation data portion stored in the cache;
set a priority for each of the hierarchical levels such that a high priority is set to a hierarchical level in which the degree of bias of reference is weak and a low priority is set to a hierarchical level in which the degree of bias of reference is strong, based on the degree of bias of reference for each of the hierarchical levels obtained; and
execute an operation of preferentially caching each of the address translation data portions of the hierarchical level with the high priority into the cache, over each of the address translation data portions of the hierarchical level with the low priority.
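
A minimal sketch of the bias-driven priority assignment, assuming a simple bias metric (fraction of references that hit the hottest address translation data portion); the claim leaves the measure of bias unspecified:

    from collections import Counter

    def reference_bias(reference_log):
        """Bias score in (0, 1]: 1.0 means every reference hit one portion."""
        counts = Counter(reference_log)
        return max(counts.values()) / len(reference_log)

    def caching_priority(reference_logs_by_level):
        """Weak bias -> high priority (cache first); strong bias -> low priority."""
        biases = {lvl: reference_bias(log) for lvl, log in reference_logs_by_level.items()}
        return sorted(biases, key=biases.get)   # ascending bias = descending priority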

US Pat. No. 10,922,239

DEVICE FOR PERFORMING ITERATOR OPERATION IN DATABASE

Samsung Electronics Co., ...

1. A storage device, comprising:a first memory to store data using entire keys that are character strings; and
a controller, wherein:
the controller is to receive, from a host, a request for information about any and all entire keys having a subset that matches a partial key, the partial key being a subset of an entire character string of an entire key,
the controller is to manage partial key-value address mapping information indicating a correspondence relationship between the partial key and a value address, the value address indicating a region of the first memory,
the controller is to return, to the host in response to the host request, the requested information about any and all entire keys having the subset that matches the partial key regardless of other characters outside the partial key in the entire character string, and
the controller is to determine the information to be returned to the host by:
determining a partial region of the first memory based on the partial key-value address mapping information, and
performing a read operation on the partial region to obtain an entire key.
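
A small sketch of the partial-key lookup path, assuming an in-memory dict stands in for the first memory and for the partial key-value address mapping; the substring test is an illustrative way to confirm a match regardless of the other characters in the entire key:

    first_memory = {0: "sensor/temp/01", 1: "sensor/temp/02", 2: "sensor/humidity/01"}

    # partial key -> value addresses (regions of first_memory) that may hold matches
    partial_key_index = {"temp": [0, 1], "humidity": [2]}

    def entire_keys_matching(partial_key):
        """Return every stored entire key containing the partial key."""
        addresses = partial_key_index.get(partial_key, [])
        return [first_memory[a] for a in addresses if partial_key in first_memory[a]]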

US Pat. No. 10,922,238

METHOD FOR STORING CONTENT, METHOD FOR CONSULTING CONTENT, METHOD FOR MANAGING CONTENT AND CONTENT READERS

ORANGE, Paris (FR)

1. A storage method comprising:storing content by a first content reader, the first content reader comprising a processor, a first content storage memory, a virtualization layer and a hardware abstraction layer, wherein the storing comprises:
receiving a request for storing a first content of a given format;
in response to the request, generating a first autonomous content, comprising creating a first container in which are stored at least the first content to be stored in the given format and a first access processing suited to the given format and associated with the first content to be stored, and wherein data stored in the first container constitutes the first autonomous content;
generating a first execution processing, stored in the virtualization layer in association with the first autonomous content, the first execution processing being adapted to execute the first access processing of the first autonomous content using the hardware abstraction layer of the first content reader; and
in response to a modification, on the first content reader, of the first access processing, updating, as a function of the modified first access processing, at least the first access processing associated with the first content in the first container of the first autonomous content stored on the first content storage memory.

US Pat. No. 10,922,237

ACCELERATING ACCESSES TO PRIVATE REGIONS IN A REGION-BASED CACHE DIRECTORY SCHEME

Advanced Micro Devices, I...

1. A system comprising:a plurality of processing nodes, wherein each processing node of the plurality of processing nodes comprises one or more processors and a cache subsystem;
one or more memory devices; and
one or more region-based cache directories, wherein each region-based cache directory is configured to track shared regions of memory which have cache lines cached by at least two different processing nodes; and
wherein each processing node of the plurality of processing nodes is configured to maintain an entry with a reference count field to track a number of accesses by the processing node to separate cache lines of a given region responsive to receiving an indication from a corresponding region-based cache directory that the given region is private, wherein a private region is accessed by only a single processing node.

US Pat. No. 10,922,236

CASCADE CACHE REFRESHING

Advanced New Technologies...

1. A computer-implemented method for refreshing a cascade cache, the method comprising:obtaining a dependency relationship between a plurality of caches in the cascade cache;
determining, based on the dependency relationship, one or more cache priorities, wherein a particular cache that does not depend on any other cache has a higher priority than another cache that depends on a different cache;
determining, based on the cache priorities, a cache refreshing sequence associated with the plurality of caches in the cascade cache;
determining, based on the cache refreshing sequence, that a first cache of the plurality of caches is to be refreshed; and
responsive to determining that the first cache is to be refreshed, refreshing the first cache; and determining whether a second cache of the plurality of caches, the second cache following the first cache in the cache refreshing sequence, is to be refreshed after the first cache is refreshed.
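
A minimal sketch of the refresh ordering, assuming the dependency relationship is given as a mapping from each cache to the caches it depends on; graphlib's topological sort then yields a sequence in which a cache that depends on nothing is refreshed first:

    from graphlib import TopologicalSorter

    def refresh_sequence(dependencies):
        """dependencies: {cache: set of caches it depends on}. Caches with no
        dependencies come first, matching the higher priority the claim assigns them."""
        return list(TopologicalSorter(dependencies).static_order())

    def refresh_cascade(dependencies, refresh_fn, needs_refresh):
        for cache in refresh_sequence(dependencies):
            if needs_refresh(cache):
                refresh_fn(cache)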

US Pat. No. 10,922,235

METHOD AND SYSTEM FOR ADDRESS TABLE EVICTION MANAGEMENT

Western Digital Technolog...

1. An apparatus, comprising:a cache utilization manager configured to track cache utilization metrics of a mapping table cache based on a logical address, the logical address comprising a partition identifier, a starting logical block address, and a count; and
a cache swap manager configured to, in response to a cache miss, replace a cache entry in the mapping table cache with a replacement cache entry, the cache entry having cache utilization metrics based on a partition access frequency, the cache utilization metrics satisfying a cache eviction threshold.

US Pat. No. 10,922,234

METHOD AND SYSTEM FOR ONLINE RECOVERY OF LOGICAL-TO-PHYSICAL MAPPING TABLE AFFECTED BY NOISE SOURCES IN A SOLID STATE DRIVE

Alibaba Group Holding Lim...

1. A computer-implemented method for facilitating error recovery, the method comprising:receiving an input/output request indicating data associated with a first logical block address;
detecting, in a mapping table stored in a memory, an error associated with the first logical block address, wherein the mapping table maps logical block addresses to physical block addresses;
identifying a dedicated block which stores log entries with logical block addresses corresponding to sequentially programmed physical blocks;
performing a search in the dedicated block for an earliest log entry and other log entries for the first logical block address, which comprises:
obtaining, based on the earliest log entry in the dedicated block for the first logical block address, an earliest physical block address corresponding to the first logical block address based on the sequentially programmed physical blocks; and
in response to obtaining no other log entries for the first logical block address, using the earliest physical block address as a first physical block address to be accessed in executing the I/O request;
determining the first physical block address corresponding to the first logical block address based on the sequentially programmed physical blocks; and
executing the I/O request by accessing the first physical block address.
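
A hedged sketch of the recovery step, assuming log entries are available as (logical address, physical address) pairs in programming order; the rule applied when multiple entries exist is an assumption, since the claim only pins down the single-entry case:

    def recover_physical_address(log_entries, target_lba):
        """log_entries: list of (lba, pba) in programming order (earliest first).
        Returns the PBA to use for target_lba, or None if the LBA never appears."""
        matches = [pba for lba, pba in log_entries if lba == target_lba]
        if not matches:
            return None
        # With no other entries, the earliest PBA is used; otherwise the most
        # recently programmed entry is assumed to reflect the current location.
        return matches[0] if len(matches) == 1 else matches[-1]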

US Pat. No. 10,922,233

STORAGE CLASS MEMORY QUEUE DEPTH THRESHOLD ADJUSTMENT

Hewlett Packard Enterpris...

1. A method of a storage system comprising:determining, with a first controller of the storage system, a representative input/output (IO) request latency between the first controller and a storage class memory (SCM) read cache during a given time period in which the first controller and a second controller of the storage system are sending IO requests to the SCM read cache, the first and second controllers each comprising a respective main cache, and the storage system comprising backend storage;
adjusting at least one SCM queue depth threshold of the first controller, in response to a determination that the determined representative IO request latency exceeds an IO request latency threshold for the SCM read cache;
in response to an IO request of the first controller for the SCM read cache, comparing an SCM queue depth of the first controller to one of the at least one SCM queue depth threshold of the first controller;
selecting between processing the IO request using the SCM read cache, dropping the IO request, and processing the IO request without using the SCM read cache, based on a type of the IO request and a result of the comparison; and
performing the selected processing or dropping in response to the selection.
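
A rough sketch of the threshold adjustment and per-request selection, under assumed units (microseconds) and an assumed halving policy; the claim does not specify how far the threshold moves or how request types map to the three outcomes:

    def adjust_threshold(current_threshold, representative_latency_us, latency_limit_us):
        """Lower the SCM queue depth threshold when the SCM read cache looks congested."""
        if representative_latency_us > latency_limit_us:
            return max(1, current_threshold // 2)
        return current_threshold

    def dispatch_io(request_type, scm_queue_depth, threshold):
        """Choose among: use the SCM read cache, bypass it, or drop the request."""
        if scm_queue_depth < threshold:
            return "use_scm_read_cache"
        # Over threshold: reads fall back to backend storage, other requests are dropped.
        return "bypass_scm" if request_type == "read" else "drop"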

US Pat. No. 10,922,232

USING CACHE MEMORY AS RAM WITH EXTERNAL ACCESS SUPPORT

Apple Inc., Cupertino, C...

1. An apparatus, comprising:a cache memory circuit configured to store a plurality of cache lines in different ones of a plurality of regions; and
a control circuit configured to:
receive a first access request and a second access request to access the cache memory circuit;
in response to a determination that the first access request is from a processor core coupled to the control circuit, and that the first access request includes a first address associated with a particular cache line in a particular region of the plurality of regions, store the first access request in a cache access queue;
in response to a determination that the second access request is received via a communication bus from a functional circuit, and that the second access request includes a second address that is included in a range of a memory address space mapped to a subset of the plurality of regions, store the second access request in a memory access queue, wherein the subset of the plurality of regions excludes the particular region; and
arbitrate access to the cache memory circuit between the first access request and the second access request.

US Pat. No. 10,922,231

SYSTEMS AND METHODS THAT PREDICTIVELY READ AHEAD DIFFERENT AMOUNTS OF DATA FOR DIFFERENT CONTENT STORED IN A MULTI-CLIENT STORAGE SYSTEM

Open Drives LLC, Culver ...

1. A method comprising:receiving a first set of requests at a first rate to read a first set of data of first content from a first storage device in a storage system, and a second set of requests at a different second rate to read a first set of data of second content from a second storage device in the storage system;
determining a first measure of performance for the first storage device that is different than a second measure of performance for the second storage device;
prioritizing an allocation of cache based on a first difference between the first rate and the second rate, and a second difference between the first measure of performance and the second measure of performance; and
prefetching a first amount of data for the first content from the first storage device and a different second amount of data for the second content from the second storage device to the cache based on the first difference and the second difference.

US Pat. No. 10,922,230

SYSTEM AND METHOD FOR IDENTIFYING PENDENCY OF A MEMORY ACCESS REQUEST AT A CACHE ENTRY

ADVANCED MICRO DEVICES, I...

1. A method, comprising:in response to a cache miss for a first request for first data at a cache, assigning, by a processor, a cache entry of the cache to store the first data;
while the first data is being retrieved from a different level of a memory subsystem to the cache,
storing an indicator at the cache entry that the first request is pending at the cache, the indicator comprising a main memory address of the first data and a status bit indicating that the first data is the subject of a pending cache miss; and
storing an identifier comprising a data index and location information of the cache entry in a buffer;
in response to receiving, at the cache, a second request for the first data while the first data is being retrieved to the cache, reading the indicator at the cache entry;
in response to identifying, based on the indicator, that the first request is pending at the cache, placing the second request for the first data in a pending state until the first data has been retrieved to the cache and treating the second request as a cache hit that has already been copied to the processor; and
in response to receiving the first data at the cache, storing the first data at the cache entry based on the identifier.

US Pat. No. 10,922,229

IN-MEMORY NORMALIZATION OF CACHED OBJECTS TO REDUCE CACHE MEMORY FOOTPRINT

Microsoft Technology Lice...

1. A computing system, comprising:at least one processor; and
memory storing instructions executable by the at least one processor, wherein the instructions, when executed, cause the computing system to:
receive a data access request from a requesting computing system and identify a data table in a database to be accessed;
obtain the identified data table from the database;
parse the data table into a normalized cache data object, including non-sharable data properties and a reference to an inter-row sharable data object, that includes data properties that are sharable across a row of a table; and
load the normalized cache data object into a cache store corresponding to the requesting computing system.

US Pat. No. 10,922,228

MULTIPLE LOCATION INDEX

EMC IP HOLDING COMPANY LL...

1. In a system that includes a memory, a cache and a storage system, a method for accessing data stored in the cache and/or the storage system, the cache comprising a solid state memory, the method comprising:performing an access, modify and write operation such that an index lookup operation in an index stored in the cache is performed a single time for data accessed by the access, modify and write operation by:
performing the single index lookup operation for requested data stored in at least one of the cache and the storage system in an index stored in the cache, wherein each entry in the index includes one or more locations of data and wherein an entry associated with the requested data includes at least two locations of the requested data;
determining which of the at least two locations are valid locations;
determining that a first location of the valid locations is an optimal location;
returning the requested data from the first location and returning location information for the requested data based on the single lookup operation with the requested data, wherein the location information returned with the requested data identifies that the requested data was retrieved from the first location;
storing the location information in a location manager, wherein the location manager is in a memory separate from the cache;
invalidating the requested data associated with the location information when the requested data has been modified without accessing the index by marking an entry in the location manager stored in the memory that corresponds to the entry in the index and is associated with the requested data that is being invalidated; and
performing a batch operation to clean the index based on invalidated entries in the location manager.

US Pat. No. 10,922,227

RESOURCE-SPECIFIC FLUSHES AND INVALIDATIONS OF CACHE AND MEMORY FABRIC STRUCTURES

Intel Corporation, Santa...

1. An apparatus comprising:a substrate; and
pipeline logic coupled to the substrate, wherein the pipeline logic includes a plurality of stages and is implemented in one or more of configurable logic or fixed-functionality logic hardware, the pipeline logic to:
detect, at a current stage of the pipeline logic, a flush request with respect to a first resource, wherein the flush request is to be detected based on a signal from an upstream stage,
execute, by the current stage, one or more transactions associated with a second resource,
conduct, by the current stage, one or more flush operations on the first resource, and
communicate the flush request to a downstream stage.

US Pat. No. 10,922,226

SCRATCHPAD MEMORY MANAGEMENT IN A COMPUTING SYSTEM

XILINX, INC., San Jose, ...

1. A computing system, comprising:a memory including regular memory and scratchpad memory;
a peripheral device configured to send a page request for accessing the memory, the page request indicating whether the page request is for the regular memory or the scratchpad memory; and
a processor having a memory management unit (MMU), the MMU configured to:
receive the page request from the peripheral device;
determine whether the request is for scratchpad memory; and
in response to a determination that it is, prevent any of the requested pages from being marked as dirty.

US Pat. No. 10,922,225

FAST CACHE REHEAT

Drobo, Inc., Sunnyvale, ...

1. A method for fast cache reheat in a data storage system, the method comprising:providing a volatile memory cache that includes content corresponding to a set of data blocks in a non-volatile second data store, wherein the non-volatile second data store is disk storage or solid state storage;
periodically storing, in a non-volatile first data store, a snapshot of an index identifying storage locations of the set of data blocks, wherein such periodically storing of the snapshot is performed prior to a restart of the data storage system;
upon a restart of the data storage system causing loss or corruption of the contents of the volatile memory cache, restoring the cache from data in the non-volatile second data store that is persistently stored and available for retrieval upon the restart of the data storage system by:
retrieving from the first data store the index from the last snapshot stored prior to the restart, the last snapshot identifying content of the cache at a time of storing the last snapshot;
retrieving, from the non-volatile second data store, data from the storage locations identified in the index from the last snapshot; and
storing the retrieved data in the cache, wherein the cache is separate from the first and second data stores.

US Pat. No. 10,922,224

METHOD AND DEVICE FOR PROCESSING RECLAIMABLE MEMORY PAGES, AND STORAGE MEDIUM

GUANGDONG OPPO MOBILE TEL...

1. A method for processing a memory, the method being carried out in an electronic device and comprising:determining a plurality of reclaimable memory pages occupied by an application to be processed, comprising:
determining all memory pages occupied by the application to be processed by querying a memory occupied by the application to be processed; and
determining, as the plurality of reclaimable memory pages, all of the memory pages except at least one particular memory page, wherein data stored on each of the at least one particular memory page is being used by the application to be processed or is data necessary for keeping normal running of the application to be processed;
determining a reclaiming proportion corresponding to the application to be processed, wherein the reclaiming proportion represents a proportion of memory pages to be reclaimed in the plurality of reclaimable memory pages;
determining a reclaiming number according to the plurality of reclaimable memory pages and the reclaiming proportion; and
selecting the reclaiming number of reclaimable memory pages from the plurality of reclaimable memory pages and reclaiming the reclaiming number of memory pages,
wherein the determining a reclaiming proportion corresponding to the application to be processed comprises:
determining restarting durations and an application type of the application to be processed, the application type indicating correspondences between restarting durations for the application to be processed and reclaiming proportions, each restarting duration indicating a duration taken for restarting the application to be processed after a reclaiming proportion of the plurality of reclaimable memory pages of the application to be processed is reclaimed; and
determining a reclaiming proportion corresponding to the application to be processed according to a reclaiming model corresponding to the application type.
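
A minimal sketch of the reclaiming arithmetic, assuming a lookup table keyed by application type and restart duration stands in for the reclaiming model; the table values are illustrative only:

    reclaim_model = {            # application type -> [(max_restart_seconds, proportion)]
        "background": [(1.0, 0.8), (3.0, 0.5), (float("inf"), 0.2)],
        "foreground": [(1.0, 0.3), (float("inf"), 0.1)],
    }

    def reclaiming_proportion(app_type, restart_duration_s):
        for max_duration, proportion in reclaim_model[app_type]:
            if restart_duration_s <= max_duration:
                return proportion

    def pages_to_reclaim(reclaimable_pages, app_type, restart_duration_s):
        proportion = reclaiming_proportion(app_type, restart_duration_s)
        count = int(len(reclaimable_pages) * proportion)   # the "reclaiming number"
        return reclaimable_pages[:count]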

US Pat. No. 10,922,223

STORAGE DEVICE, COMPUTING SYSTEM INCLUDING STORAGE DEVICE, AND METHOD OF OPERATING THE SAME

SK hynix Inc., Gyeonggi-...

15. A computing system comprising:a storage device; and
a host configured to access the storage device,
wherein the storage device comprises:
a memory device configured to store logical to physical (L2P) mapping information including a plurality of L2P address segments; and
a memory controller including a map cache for storing map data, and configured to:
store the plurality of L2P address segments received from the memory device;
provide at least one L2P address segment of the plurality of L2P address segments to the host in response to a map data request of the host; and
remove an L2P address segment from the map cache, and
wherein the L2P address segment is selected, among the plurality of L2P address segments, based on a least recently used (LRU) frequency and whether the L2P address segment is provided to the host.

US Pat. No. 10,922,222

DATA PROCESSING SYSTEM AND OPERATING METHOD FOR GENERATING PHYSICAL ADDRESS UPDATE HISTORY

SK hynix Inc., Gyeonggi-...

1. A memory system comprising:a memory device configured to store data in a location determined by a physical address corresponding to a logical address; and
a controller configured to update the physical address corresponding to the logical address for moving the data associated with the logical address to another location, store the data including update history of the physical address corresponding to the logical address into the another location determined by the updated physical address, and track the data associated with the logical address based on the update history of the physical address.

US Pat. No. 10,922,221

MEMORY MANAGEMENT

Micron Technology, Inc., ...

1. A method for memory management, comprising:maintaining a first data structure comprising logical address to physical address mappings for managed units corresponding to a memory;
maintaining a second data structure whose entries correspond to respective physical managed unit addresses, wherein each entry of the second data structure comprises:
an activity counter field corresponding to the respective physical managed unit address; and
a number of additional fields indicating whether the respective physical managed unit address is in one or more of a number of additional data structures that are accessed in association with performing at least one of:
a wear leveling operation on the respective physical managed unit address; and
a neighbor disturb mitigation operation on physical managed unit addresses corresponding to neighbors of the respective physical managed unit address; and
performing a write operation to the memory, wherein performing the write operation to the memory comprises:
accessing the first data structure to determine a particular physical managed unit address to which a logical managed unit address is mapped;
accessing the second data structure based on the determined particular physical managed unit address;
incrementing a value of the activity counter field of the entry in the second data structure corresponding to the particular physical managed unit address;
adding the particular physical managed unit address to a first of the number of additional data structures responsive to determining the value of the activity counter field has reached a disturb threshold; and
adding the particular physical managed unit address to the first and a second of the number of additional data structures responsive to determining the value of the activity counter field has reached a swap threshold.
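
An illustrative sketch of the write-path bookkeeping, with assumed threshold values and container names; only the control flow (increment the activity counter, enqueue at the disturb threshold, enqueue to both structures at the swap threshold) follows the claim:

    DISTURB_THRESHOLD = 64
    SWAP_THRESHOLD = 1024

    l2p = {}                  # first data structure: logical MU address -> physical MU address
    entries = {}              # second data structure: physical MU address -> {"count": int}
    disturb_queue = set()     # first additional data structure (disturb mitigation)
    swap_queue = set()        # second additional data structure (wear leveling)

    def on_write(logical_mu):
        physical_mu = l2p[logical_mu]
        entry = entries.setdefault(physical_mu, {"count": 0})
        entry["count"] += 1
        if entry["count"] >= DISTURB_THRESHOLD:
            disturb_queue.add(physical_mu)
        if entry["count"] >= SWAP_THRESHOLD:
            disturb_queue.add(physical_mu)
            swap_queue.add(physical_mu)
        return physical_mu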

US Pat. No. 10,922,220

READ AND PROGRAM OPERATIONS IN A MEMORY DEVICE

Intel Corporation, Santa...

1. A system operable to program memory cells, the system comprising:a plurality of memory cells; and
a memory controller comprising logic to:
receive a page of data;
segment the page of data into a group of data segments; and
program the group of data segments to memory cells in the plurality of memory cells that are associated with a memory cell region, wherein the group of data segments for the page of data is programmed using all bits included in each of the memory cells associated with the memory cell region, wherein the page of data is allocated into one wordline using all bits included in each of the memory cells associated with the memory cell region.

US Pat. No. 10,922,219

A/B TEST APPARATUS, METHOD, PROGRAM, AND SYSTEM

SONY CORPORATION, Tokyo ...

1. An A/B test system comprising:a developer interface communicatively coupled to a test deployment module, and the test deployment module communicatively coupled to a database server;
wherein the database server includes a first database and a second database, wherein the first database stores data that is executed during an operation initiated at a client terminal, the operation causing accessing of the first database, and the second database is used during a time when the test deployment module runs an A/B test;
wherein, upon a start of the A/B test, the test deployment module duplicates the first database to the second database;
wherein the developer interface receives an indication of a first developer submitted input indicating that the A/B test affects the first database; and
wherein the developer interface receives a second developer submitted input indicating a test definition, wherein the test definition is registered at a test management database that is communicatively coupled to the test deployment module; and
the test deployment module deploys, to a test implementation server, a configuration in accordance with the test definition causing a proxy server to distribute a request from the client terminal to an implementation server and to the test implementation server in accordance with the test definition; and
when the A/B test ends, in accordance with the indication of the first developer submitted input indicating that the A/B test affects the first database, the test deployment module writes back contents of the second database to the first database.

US Pat. No. 10,922,218

IDENTIFYING SOFTWARE INTERDEPENDENCIES USING LINE-OF-CODE BEHAVIOR AND RELATION MODELS

Aurora Labs Ltd., Tel Av...

1. A non-transitory computer readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform operations for identifying software interdependencies based on functional line-of-code behavior and relation models, comprising:identifying a first portion of controller-executable code associated with a first controller;
accessing a functional line-of-code behavior and relation model representing functionality of the first portion of controller-executable code and a second portion of controller-executable code, wherein:
the functional line-of-code behavior and relation model was generated by simulating execution of at least the first and second portions of controller-executable code on a simulation controller in a physical or virtual computing environment; and
the functional line-of-code behavior and relation model models symbols and symbol relationships wherein the modeling is based on the first and second portions of controller-executable code;
determining, based on the functional line-of-code behavior and relation model, that the second portion of controller-executable code is interdependent with the first portion of controller-executable code; and
generating, based on the determined interdependency, a report identifying the interdependent first portion of controller-executable code and second portion of controller-executable code.

US Pat. No. 10,922,217

ADAPTIVE REGRESSION TESTING

APPLIED MATERIALS ISRAEL ...

1. A method for adaptive regression testing, the method comprises:generating or receiving monitoring results that are indicative of: (i) relevant data segments, wherein the relevant data segments are data segments that were accessed during an execution of regression tests of a source code or amended during the execution of the regression tests, wherein the executing of the regression tests comprises executing multiple test cases, and (ii) relevant source code segments related to the relevant data segments;
generating, based on the monitoring results, a first mapping that maps test cases of the multiple test cases to at least some of the relevant data segments;
detecting detected data changes introduced during a monitoring period that follows the execution of the regression tests;
selecting, based on the detected data changes and the first mapping, one or more selected test cases for evaluating an impact of the detected data changes;
evaluating the impact of the detected data changes by executing the one or more selected test cases;
generating the monitoring results by monitoring the executing of the regression tests;
generating, based on the monitoring results, a second mapping that maps the relevant data segments to the relevant source code segments; and
generating, based on the second mapping, a third mapping that maps the relevant data segments to test cases.
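
A compact sketch of the first mapping and the test selection step, assuming monitoring results arrive as (test case, data segment) access records; the record shape is an assumption, not part of the claim:

    def build_first_mapping(monitoring_results):
        """monitoring_results: iterable of (test_case, data_segment) access records."""
        mapping = {}
        for test_case, segment in monitoring_results:
            mapping.setdefault(test_case, set()).add(segment)
        return mapping

    def select_test_cases(first_mapping, changed_segments):
        """Pick every test case whose accessed segments intersect the detected changes."""
        changed = set(changed_segments)
        return [tc for tc, segments in first_mapping.items() if segments & changed]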

US Pat. No. 10,922,216

INTELLIGENT AUTOMATION TEST WORKFLOW

Oracle International Corp...

1. A non-transitory computer readable medium comprising instructions which, when executed by one or more hardware processors, causes performance of operations comprising:determining a plurality of programmatic flows from a software product, each programmatic flow being accessed by at least one of a plurality of test cases for testing the software product;
subsequent to initial testing of the software product using the plurality of test cases: determining that a particular code artifact, of a plurality of code artifacts of the software product, is affected by one or more changes to the software product;
mapping the particular code artifact to at least a first programmatic flow and a second programmatic flow of the plurality of programmatic flows, wherein mapping the particular code artifact to the first programmatic flow comprises operations for:
determining a first subset of code artifacts from the plurality of code artifacts that are: (a) accessed by the first programmatic flow, or (b) dependent on the first programmatic flow; and
storing a relationship between (a) the first programmatic flow, and (b) the first subset of code artifacts, wherein the first subset of code artifacts comprises the particular code artifact;
responsive to mapping the particular code artifact to at least the first programmatic flow and the second programmatic flow: selecting the first programmatic flow and the second programmatic flow for testing;
mapping the first programmatic flow and the second programmatic flow to a first subset of test cases from the plurality of test cases; and
testing the software product by executing the first subset of test cases without executing a second subset of test cases that are: (a) not mapped to the first programmatic flow, and (b) not mapped to the second programmatic flow.

US Pat. No. 10,922,215

FEATURE TOGGLING USING A PLUGIN ARCHITECTURE IN A REMOTE NETWORK MANAGEMENT PLATFORM

ServiceNow, Inc., Santa ...

8. A computer-implemented method comprising:operating, by one or more server devices, a remote network management platform software application containing one or more application programming interfaces (APIs) configured to facilitate the use of first plugin software and second plugin software with the remote network management platform software application, wherein the one or more APIs are associated with logic configured to check whether a toggle variable is active or inactive, wherein the first plugin software is implemented in a scripting language, wherein the first plugin software comprises a first unit of program code within the remote network management platform software application, wherein the second plugin software is implemented in the scripting language, wherein the second plugin software comprises a second unit of program code within the remote network management platform software application, and wherein the second plugin software defines the toggle variable as active or inactive;
based on the toggle variable being inactive, executing the first unit of program code within the remote network management platform software application;
and
based on the toggle variable being active, executing the second unit of program code within the remote network management platform software application.

US Pat. No. 10,922,214

SYSTEM AND METHOD FOR IMPLEMENTING A TEST OPTIMIZATION MODULE

JPMORGAN CHASE BANK, N.A....

1. A method for implementing a test optimization module for automated testing of an application by utilizing one or more processors and one or more memories, the method comprising:providing a repository that stores a first version of an application and a plurality of test cases;
providing a modification to a source code of the first version of the application;
causing a memory to store the modified first version of the application onto the repository as a second version of the application;
determining what files and line-numbers have been changed in the source code based on comparing the first and second versions of the application;
generating a change scope based on the determined changed files and line-numbers in the source code;
statically analyzing byte code for each test class to find all relations inside the source code affected by the modification to the source code;
creating a change dependency graph (CDG) based on the change scope and the analyzed bytecode;
traversing the CDG to generate a list of test cases among the plurality of test cases that are directly and/or indirectly related to the modification to the source code; and
automatically executing only the test cases selected from the generated list to test the second version of the application,
wherein creating the CDG further comprises:
providing a git interface to generate a git blame command;
for each node in the CDG, calling the git blame command to generate a list of Jira identifications (Jira_Ids) by line numbers corresponding to the list of test cases among the plurality of test cases that are directly and/or indirectly related to the modification to the source code; and
enriching the CDG with the list of Jira_Ids as metadata.

US Pat. No. 10,922,213

EMBEDDED QUALITY INDICATION DATA FOR VERSION CONTROL SYSTEMS

Red Hat, Inc., Raleigh, ...

1. A method comprising:accessing a code object in a version data store, wherein the version data store comprises a change set applied to the code object, wherein the code object comprises a plurality of versions arranged on a plurality of branches, and wherein the change set is incorporated into versions of the code object on at least two of the plurality of branches;
initiating a test of the code object;
accessing test data for the code object, wherein the test data comprises output of the test;
storing the test data in the version data store;
associating the change set with the test data in the version data store;
detecting that the change set is incorporated into a new version of the code object;
analyzing the test data associated with the change set to determine a testing configuration that has not yet been tested;
initiating a test of the new version of the code object on the untested configuration; and
provisioning one or more virtual machine instances with the new version of the code object.

US Pat. No. 10,922,212

SYSTEMS AND METHODS FOR SERVICE CATALOG ANALYSIS

ServiceNow, Inc., Santa ...

1. A system, comprising:non-transitory memory; and
one or more hardware processors configured to read instructions from the non-transitory memory to perform operations comprising:
receiving a test having one or more steps to be executed on a service catalog item of a plurality of service catalog items, wherein the service catalog item comprises an item presented using a service catalog that is configured to offer goods or services;
obtaining test step settings for a plurality of test steps corresponding to steps of the test, wherein the test step settings comprise variables that are entered into the service catalog item during execution of the test;
receiving a selection of a variable of the variables through a graphic interface used to display the service catalog item to potential customers;
receiving an indication to execute the test;
automating through the plurality of test steps step-by-step based at least in part on the test step settings;
responsive to the selection of the variable, initiating a watch session to track changes to the variable;
during the watch session, tracking each change to the variable in the test; and
displaying the change in a variable log along with an indicator of a source of the change.

US Pat. No. 10,922,211

TESTING RESPONSES OF SOFTWARE APPLICATIONS TO SPATIOTEMPORAL EVENTS USING SIMULATED ENVIRONMENTS

Red Hat, Inc., Raleigh, ...

1. A system comprising:a processing device; and
a memory device including instructions that are executable by the processing device for causing the processing device to:
execute simulation software to generate a simulated environment having simulated distributed devices positioned at a plurality of spatial locations in the simulated environment, wherein simulated distributed devices are configured to emulate operation of physical distributed devices corresponding to the simulated distributed devices;
simulate a spatiotemporal event propagating through the simulated environment by modifying a device simulation property of each simulated distributed device based on the spatiotemporal event and a respective spatial location of the simulated distributed device in the simulated environment to produce simulation outputs impacted by the spatiotemporal event; and
provide the simulation outputs as input to a target software application that is separate from the simulated environment to test a response to the spatiotemporal event by the target software application.

US Pat. No. 10,922,210

AUTOMATIC SOFTWARE BEHAVIOR IDENTIFICATION USING EXECUTION RECORD

MICROSOFT TECHNOLOGY LICE...

1. A computing system comprising:one or more processors; and
one or more computer-readable storage media having thereon computer-executable instructions that are structured such that, when executed by the one or more processors, cause the computing system to perform the following:
analyze a plurality of execution records of a software application in a debugging environment to identify one or more pattern-behavior pairs, each of the one or more pattern-behavior pairs including a code execution pattern and a corresponding execution behavior that is likely to occur when the code execution pattern exists within the software application;
record a new execution of the software application in the debugging environment as a new execution record, each of the plurality of execution records and the new execution record comprising execution traces that reproducibly represent the execution of the software application;
analyze the new execution record in the debugging environment by using the new execution record to rerun the new execution of the software application, in order to find at least one particular code execution pattern that is produced by rerunning the new execution record of the software application;
identify a particular execution behavior corresponding to the at least one particular code execution pattern based on the one or more pattern-behavior pairs; and
in response to identifying the particular execution behavior, suggest a modification of the at least one particular code execution pattern within the software application to prevent the particular execution behavior from occurring during execution of the software application.

US Pat. No. 10,922,209

DEVICE AND METHOD FOR AUTOMATICALLY REPAIRING MEMORY DEALLOCATION ERRORS

KOREA UNIVERSITY RESEARCH...

1. A device for automatically repairing memory deallocation errors, the device comprising:a static analysis unit configured to generate status information for each one of objects included in a source code of a program by way of a static analysis of the source code, the status information comprising position information, pointer information, and patch information, the position information associated with allocation sites of the objects, the pointer information associated with pointers pointing to the objects, the patch information associated with deallocation statements capable of deallocating the objects for each point of the source code;
a decision unit configured to choose patch candidates from the patch information and decide on a combination of the patch candidates capable of deallocating each of the objects only once; and
a repair unit configured to repair the source code according to the combination of patch candidates,
wherein
the static analysis unit generates the pointer information by way of point-to analysis, and
the pointer information comprises first pointer information, second pointer information, and third pointer information, and
the first pointer information is associated with information on pointers obtained by over-approximating all pointers possibly pointing to the one object, the second pointer information is associated with information on pointers pointing to the one object, and the third pointer information is associated with pointers not pointing to the one object.

US Pat. No. 10,922,208

OBSERVER FOR SIMULATION TEST AND VERIFICATION

The MathWorks, Inc., Nat...

1. A computer-implemented method comprising:for an executable simulation model that includes a component and an observer, wherein the observer is configured to access at least one of (i) model data of the component generated during execution of the component or (ii) an intrinsic execution event generated during the execution of the component,
establishing, by at least one processor, for the executable simulation model, a first execution space and a second execution space, where the first execution space is separate from the second execution space, such that values of attributes of the observer are inaccessible to the first execution space;
executing, by the at least one processor, the component of the executable simulation model utilizing the first execution space, the executing of the component of the executable simulation model producing at least one of (i) the model data of the component generated during the execution of the component or (ii) the intrinsic execution event generated during the execution of the component;
executing, by the at least one processor, the observer of the executable simulation model utilizing the second execution space, such that the values of the attributes of the observer are blocked from propagating to the component; and
providing at least one of (i) the model data of the component generated during the execution of the component or (ii) the intrinsic execution event generated during the execution of the component to the observer during execution of the observer of the executable simulation model utilizing the second execution space.

US Pat. No. 10,922,207

METHOD, APPARATUS, AND COMPUTER-READABLE MEDIUM FOR MAINTAINING VISUAL CONSISTENCY

SPARTA SYSTEMS, INC., Ha...

1. A method executed by one or more server-side computing devices for maintaining visual consistency of a presentation layer, the method comprising:receiving, by at least one of the one or more server-side computing devices, one or more images and associated metadata from one or more client-side computing devices, the associated metadata indicating a corresponding feature and a corresponding state for each image in the one or more images;
retrieving, by at least one of the one or more server-side computing devices, a baseline image corresponding to each image in the one or more images from a server-side memory based on the corresponding feature and the corresponding state for each image;
performing, by at least one of the one or more server-side computing devices, a visual regression analysis between each image in the one or more images and the corresponding baseline image to determine one or more values of one or more indicators;
identifying, by at least one of the one or more server-side computing devices, at least one key performance indicator in the one or more indicators based at least in part on one or more user settings associated with the one or more client-side computing devices;
determining, by at least one of the one or more server-side computing devices, whether at least one value of the at least one key performance indicator is outside of a predetermined range of values defined in the one or more user settings or does not match an expected value defined in the one or more user settings; and
transmitting, by at least one of the one or more server-side computing devices, the one or more alerts to at least one of the one or more client-side computing devices based at least in part on a determination that the at least one value of the at least one key performance indicator is outside of the predetermined range of values defined in the one or more user settings or does not match the expected value defined in the one or more user settings.

US Pat. No. 10,922,206

SYSTEMS AND METHODS FOR DETERMINING PERFORMANCE METRICS OF REMOTE RELATIONAL DATABASES

Capital One Services, LLC...

1. A method comprising:automatically discovering one or more relational databases stored on one or more remote servers accessible through a network connection, wherein the automatically discovering is performed by auto-discovery code;
receiving an application programming interface (API) call at a gateway, the gateway interfacing with a time series collector to collect time series performance metrics relating to the one or more relational databases;
extracting, from the one or more relational databases via the network connection, performance data relating to a performance of the one or more relational databases, wherein extracting the performance data is performed by auto-scaling triggered code configured not to incur a charge when the auto-scaling triggered code is not running;
converting the performance data into performance metrics, the performance metrics represented as time series data configured to be stored in a time series database accessible to the time series collector; and
responding to the API call with the performance metrics.

US Pat. No. 10,922,205

MONITORING APPLICATIONS RUNNING ON CONTAINERS

VMWARE, INC., Palo Alto,...

1. A method of monitoring an application executing across a plurality of containers in a computing system, comprising:requesting a list of containers created on a computing system;
retrieving information associated with a creation of each container in the list;
parsing the information associated with each container in the list to identify a cluster of related containers that are running the application; and
assessing a health of the application executing on the cluster of related containers based on metrics collected from the cluster of related containers, wherein parsing the information associated with each container in the list to identify the cluster of related containers comprises identifying the cluster of related containers by parsing the information to search for at least one of:
a link command,
a common network,
creation of a configuration file for the application,
a semaphore command related to one or more containers,
a command creating a control group, or
containers that use a same storage volume.
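
As a rough illustration (not VMware's implementation), the following Python sketch clusters containers by parsing creation metadata for the signals the claim lists, such as links, a common network, or a shared storage volume; the record layout is assumed.

def related(a, b):
    """Two container records are related if any listed signal connects them."""
    if b["name"] in a.get("links", []) or a["name"] in b.get("links", []):
        return True
    if set(a.get("networks", [])) & set(b.get("networks", [])):
        return True
    return bool(set(a.get("volumes", [])) & set(b.get("volumes", [])))

def find_cluster(containers, app_name):
    """Return the cluster of containers running `app_name`, grown transitively."""
    cluster = [c for c in containers if c.get("app") == app_name]
    changed = True
    while changed:
        changed = False
        for c in containers:
            if c not in cluster and any(related(c, m) for m in cluster):
                cluster.append(c)
                changed = True
    return cluster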

US Pat. No. 10,922,204

EFFICIENT BEHAVIORAL ANALYSIS OF TIME SERIES DATA

CA, Inc., New York, NY (...

1. A method comprising:gathering, by a first processor of a computing device, first data measured during execution of an application by another computing device in communication with the computing device, the first data comprising time-series measurements related to operation of a second processor of the another computing device, a second memory of the another computing device, or a second disk interface of the another computing device;
generating, by the first processor, one or more tiles for each of a plurality of segments of the first data;
determining, by the first processor, that a first tile for a first segment of the plurality of segments matches a second tile of a second segment of the plurality of segments; and
based on determining that the first tile matches the second tile of the second segment of the plurality of segments, indicating, by the first processor, a relationship between the first segment and the second segment;
generating, by the first processor, a behavioral analysis for the application based, at least in part, on the relationship between the first segment and the second segment;
comparing, by the first processor, a third segment of second data collected from the application to segments included in the behavioral analysis; and
based on determining that no tiles of the third segment match tiles of segments included in the behavioral analysis, indicating, by the first processor, that the application is experiencing an anomaly.
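
A rough Python sketch of the tile idea, for illustration only: each segment of the time series is summarized into coarse tiles (here, quantized per-sub-window means), segments whose tiles match are treated as related, and a new segment with no matching tiles is flagged as anomalous. The particular tile construction is an assumption, not the patented method.

def tiles(segment, n_tiles=4, bucket=10.0):
    size = max(1, len(segment) // n_tiles)
    chunks = [segment[i:i + size] for i in range(0, len(segment), size)][:n_tiles]
    return tuple(round(sum(c) / len(c) / bucket) for c in chunks)

def build_behavior(segments):
    return {tiles(seg) for seg in segments}

def is_anomalous(segment, behavior):
    return tiles(segment) not in behavior

baseline = build_behavior([[10, 12, 11, 13] * 4, [50, 52, 49, 51] * 4])
print(is_anomalous([10, 11, 12, 12] * 4, baseline))   # False: matches a known tile
print(is_anomalous([95, 97, 99, 96] * 4, baseline))   # True: no matching tile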

US Pat. No. 10,922,203

FAULT INJECTION ARCHITECTURE FOR RESILIENT GPU COMPUTING

NVIDIA Corporation, Sant...

23. A system, comprising:a processing unit configured to:
execute a multi-threaded program;
receive a command to emulate a soft error causing a state error at a specified physical storage node within a specified subsystem of the processing unit;
wait for at least one halt condition to be satisfied;
responsive to satisfaction of the at least one halt condition, halt execution of the multi-threaded program within the subsystem;
inject the state error at the specified physical storage node by:
configuring data storage circuits within the subsystem to form a scan chain having a scan input signal at the beginning of the scan chain and a scan output signal at the end of the scan chain, wherein each data storage circuit of the data storage circuits includes an initial data value;
configuring the data storage circuits to receive a scan clock signal;
synchronously enabling the scan clock signal after a functional clock signal is disabled for a functional mode of the processing unit;
shifting, synchronously with the scan clock signal, data stored in the data storage circuits along the scan chain through one complete traverse of the scan chain, wherein the scan output signal is configured to loop back into the scan input signal, and wherein the specified physical storage node includes a data storage circuit comprising one of a flip-flop, a latch, or a random access memory (RAM) bit and an initial data value for the specified physical storage node is inverted at the scan input signal during a corresponding clock count of the scan clock signal;
synchronously disabling the scan clock signal before the functional clock signal is enabled;
configuring the data storage circuits to receive the functional clock signal; and
configuring the data storage circuits to operate in the functional mode; and
resume execution of the multi-threaded program, wherein the command is transmitted from a host processor to a programmable management unit within the processing unit through a host interface; and
a host processor coupled to the processing unit and configured to:
execute a user application configured to initiate execution of the multi-threaded program; and
execute a fault injection system, wherein the fault injection system generates and transmits the command, wherein the state error is injected during execution of the multi-threaded program.
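
The claim describes hardware behavior, but a small software model can illustrate the scan-chain injection: the chain is shifted through one full traverse with the output looped back to the input, and the targeted bit is inverted as it re-enters, so every storage node ends with its original value except the specified one. This Python sketch is an illustration under those assumptions only.

def inject_via_scan_chain(chain_bits, target_index):
    n = len(chain_bits)
    state = list(chain_bits)
    for clock in range(n):                 # one complete traverse
        out_bit = state[-1]                # value leaving the scan output
        # The bit leaving the chain at this clock originated at index n-1-clock.
        if (n - 1 - clock) == target_index:
            out_bit ^= 1                   # invert the targeted node's value
        state = [out_bit] + state[:-1]     # loop back into the scan input
    return state

original = [1, 0, 1, 1, 0]
print(inject_via_scan_chain(original, target_index=2))  # [1, 0, 0, 1, 0]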

US Pat. No. 10,922,202

APPLICATION SERVICE-LEVEL CONFIGURATION OF DATALOSS FAILOVER

MICROSOFT TECHNOLOGY LICE...

1. A computer system comprising:one or more processor(s); and
one or more computer-readable hardware storage device(s) having stored thereon computer-executable instructions that are executable by the one or more processor(s) to cause the computer system to trigger either a lossless failover of data for an application service from a primary data store to a secondary data store in which none of the data is lost and/or a dataloss failover of the application service's data from the primary data store to the secondary data store in which at least some of the data is lost by causing the computer system to at least:
determine a lossless failover parameter for the application service, the lossless failover parameter defining a first point in time that, if reached subsequent to a time when the primary data store becomes at least partially unavailable, triggers an attempt to perform the lossless failover of the application service's data from the primary data store to the secondary data store;
determine a dataloss failover parameter for the application service, the dataloss failover parameter defining a second point in time that, if reached subsequent to the time when the primary data store becomes at least partially unavailable, triggers initiation of the dataloss failover of the application service's data from the primary data store to the secondary data store, wherein the second point in time occurs after the first point in time such that the attempt to perform the lossless failover occurs prior to the initiation of the dataloss failover; and
in response to i) the primary data store becoming at least partially unavailable and ii) the first point in time occurring subsequent to the time when the primary data store becomes at least partially unavailable, trigger the attempt to perform the lossless failover when the first point in time is reached.
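
For illustration, a minimal Python sketch of the two per-service parameters this claim defines: a first point in time for attempting the lossless failover and a later point for initiating the dataloss failover, both measured from when the primary data store became unavailable. The field names are assumptions.

from dataclasses import dataclass

@dataclass
class FailoverPolicy:
    lossless_after_s: float    # first point in time (seconds after outage)
    dataloss_after_s: float    # second point in time; must be later

    def action(self, seconds_since_outage: float) -> str:
        if seconds_since_outage >= self.dataloss_after_s:
            return "initiate dataloss failover"
        if seconds_since_outage >= self.lossless_after_s:
            return "attempt lossless failover"
        return "wait"

policy = FailoverPolicy(lossless_after_s=30, dataloss_after_s=300)
print(policy.action(45))    # attempt lossless failover
print(policy.action(600))   # initiate dataloss failover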

US Pat. No. 10,922,201

METHOD AND DEVICE OF DATA REBUILDING IN STORAGE SYSTEM

EMC IP Holding Company LL...

1. A method of data rebuilding in a storage system, comprising:in response to failure of a first disk in the storage system, determining a second disk having a high risk of failure in the storage system;
determining whether the second disk contains a second data block that is associated with a first data block to be rebuilt in the first disk, the first and second data blocks being from a same data stripe in the storage system; and
in response to determining that the second disk contains the second data block and the second data block has not yet been replicated into a third disk for backup in the storage system,
reading the second data block from the second disk to rebuild the first data block, and
replicating the read second data block into the third disk.

US Pat. No. 10,922,200

MEMORY SYSTEM AND METHOD OF OPERATING THE SAME

SK hynix Inc., Gyeonggi-...

1. A memory system, comprising:a memory controller; and
a plurality of memory devices coupled to the memory controller through a channel,
wherein the memory controller sets memory blocks respectively included in the plurality of memory devices as a first super block,
wherein, when a memory block among the memory blocks in the first super block is determined as a bad block, the memory controller generates a second super block by replacing the memory block with a normal memory block included in another super block, and
wherein, when the memory system is in a power saving mode, the memory controller performs a determination operation of determining whether rest memory blocks other than the memory block included in the first super block are bad blocks.

US Pat. No. 10,922,199

ROLE MANAGEMENT OF COMPUTE NODES IN DISTRIBUTED CLUSTERS

VMWARE, INC., Palo Alto,...

1. A distributed cluster comprising:a plurality of compute nodes comprising a master node and a replica node;
an in-memory data grid formed from memory associated with the plurality of compute nodes;
a first high availability agent running on the replica node to:
determine a failure of the master node by accessing data in the in-memory data grid; and
designate a role of the replica node as a new master node to perform cluster management tasks of the master node upon determining the failure; and
a second high availability agent running on the master node to:
determine that the new master node is available in the distributed cluster by accessing the data in the in-memory data grid when the master node is restored after the failure; and
demote a role of the master node to a new replica node upon determining the new master node.

US Pat. No. 10,922,198

CLONING FAILING MEMORY DEVICES IN A DISPERSED STORAGE NETWORK

PURE STORAGE, INC., Moun...

1. A method for execution by a processing unit that includes a processor, the method comprises:identifying a failing memory device based on memory device diagnostic data;
initiating a cloning duration time period to store, in a replacement memory device, encoded slices stored in the failing memory device;
receiving a write request via a network at a receiving time during the cloning duration time period that includes a new encoded slice, wherein the new encoded slice is assigned to a temporary memory device based on an identifier of the new encoded slice;
storing the new encoded slice in the temporary memory device; and
transferring the new encoded slice from the temporary memory device to the replacement memory device in response to an elapsing of the cloning duration time period.

US Pat. No. 10,922,197

CREATING A CUSTOMIZED BOOTABLE IMAGE FOR A CLIENT COMPUTING DEVICE FROM AN EARLIER IMAGE SUCH AS A BACKUP COPY

Commvault Systems, Inc., ...

1. A method comprising:using one or more computing devices comprising one or more hardware processors:
creating a generic bootable image that includes a default kernel and one or more default drivers;
creating a first image of a first computing device that includes a kernel of the first computing device and one or more first drivers, wherein the first image comprises a non-default system state of the first computing device at a first time;
restoring a target computing device to the non-default system state at the first time, comprising:
(i) creating a customized bootable image configured to directly restore the target computing device to the non-default system state, based on the first image and the generic bootable image,
wherein the customized bootable image comprises the kernel from the non-default system state of the first computing device, and excludes any default drivers from the generic bootable image that are unsuitable for hardware at the target computing device, and includes first drivers from the first image that are suitable for hardware at the target computing device,
(ii) loading the customized bootable image at the target computing device, and
(iii) booting the target computing device from the kernel of the first computing device, without booting from the default kernel of the generic bootable image and further without modifying drivers included in the customized bootable image.

US Pat. No. 10,922,196

METHOD AND DEVICE FOR FILE BACKUP AND RECOVERY

EMC IP Holding Company LL...

1. A method for file backup, comprising:receiving, at a file backup system from a data storage device, a file to be backed up and metadata describing attributes of the file, the attributes of the file including a type of the file and a storage position of the metadata in the data storage device;
storing the metadata into a cache of the file backup system;
storing the file into a backup memory;
receiving, at the file backup system from the backup memory, information that indicates a storage position of the file in the backup memory;
determining an address of the cache of the file backup system based on the type of the file and the storage position of the metadata in the data storage device, the storage position of the metadata in the data storage device being indicated by an index node value;
mapping the type of the file to a storage address of a root node of a corresponding tree in the cache of the file backup system; and
mapping the metadata to a leaf node of the corresponding tree based on the index node value; and
storing, in the cache of the file backup system at the determined address, the information that indicates the storage position of the file in the backup memory.
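
A hedged Python sketch of the cache addressing this claim describes: the file type selects the root of a per-type tree, and the metadata's index-node (inode) value selects the leaf where the file's backup-memory position is stored. The structures are illustrative; a real backup system would use its own tree implementation.

class MetadataCache:
    def __init__(self):
        self._trees = {}                      # file type -> {inode: position}

    def store(self, file_type, inode, backup_position):
        self._trees.setdefault(file_type, {})[inode] = backup_position

    def lookup(self, file_type, inode):
        return self._trees.get(file_type, {}).get(inode)

cache = MetadataCache()
cache.store("document", inode=48213, backup_position="volume-2:offset-7168")
print(cache.lookup("document", 48213))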

US Pat. No. 10,922,195

CONSENSUS SYSTEM DOWNTIME RECOVERY

ADVANCED NEW TECHNOLOGIES...

1. A computer-implemented consensus method to be implemented on a blockchain maintained by a number (N) of nodes, wherein one of the nodes acts as a primary node and the other (N-1) nodes act as backup nodes, and the method is performed by one of the backup nodes, the method comprising:obtaining a pre-prepare message from the primary node;
multicasting a prepare message to at least some of the primary node and the other (N-2) backup nodes, the prepare message indicating an acceptance of the pre-prepare message;
obtaining (Q-1) or more prepare messages respectively from (Q-1) or more of the backup nodes, wherein Q (quorum) is (N+F+1)/2 rounded up to the nearest integer, and F is (N-1)/3 rounded down to the nearest integer;
storing at least a minimal amount of consensus messages for recovery after one or more of the N nodes crash, wherein the minimal amount of consensus messages comprise the pre-prepare message and at least (Q-1) of the (Q-1) or more prepare messages;
after the one or more of the N nodes crash, loading at least the stored minimal amount of consensus messages;
based on the loaded at least the stored minimal amount of consensus messages,
multicasting a commit message to at least some of the primary node and the other backup nodes, the commit message indicating that the one backup node agrees to the (Q-1) or more prepare messages; and
obtaining, respectively from Q or more nodes among the primary node and the backup nodes, Q or more commit messages each indicating that a corresponding node of the Q or more nodes agrees to (Q-1) or more prepare messages received by the corresponding node.
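
A worked example of the quorum arithmetic in this claim, with F = floor((N-1)/3) and Q = ceil((N+F+1)/2); a backup node persists the pre-prepare message plus at least Q-1 prepare messages so it can resume the commit phase after a crash. The Python below is an illustration of that arithmetic only.

import math

def pbft_parameters(n: int):
    f = (n - 1) // 3                      # maximum tolerated faulty nodes
    q = math.ceil((n + f + 1) / 2)        # quorum size
    return f, q

for n in (4, 7, 10):
    f, q = pbft_parameters(n)
    print(f"N={n}: F={f}, Q={q}, prepare messages to store >= {q - 1}")
# N=4: F=1, Q=3   N=7: F=2, Q=5   N=10: F=3, Q=7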

US Pat. No. 10,922,194

DATA BACKUP METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM

GUANGDONG OPPO MOBILE TEL...

1. A data backup method in a terminal, comprising:acquiring application data to be backed up and update frequencies and data changes of the application data in the terminal;
generating backup priorities based on the update frequencies and data changes by querying an association table with correspondence between data changes, update frequencies and backup priorities; and
transmitting the application data to be backed up to a server based on the backup priorities;
wherein transmitting the application data to be backed up to the server based on the backup priorities comprises:
transmitting the application data to be backed up corresponding to a current priority to the server;
detecting whether the application data to be backed up corresponding to the current priority is completely transmitted; and
transmitting the application data to be backed up corresponding to a next priority to the server, in response to detecting that the application data to be backed up corresponding to the current priority is completely transmitted, wherein the current priority is higher than the next priority;
wherein, in response to detecting that the application data to be backed up corresponding to the current priority is not completely transmitted, the method further comprises:
detecting whether a current network connection state of the terminal is in a disconnection state; and
recording backup progress information of the application data to be backed up corresponding to the current priority, in response to detecting that the current network connection state of the terminal is in the disconnection state;
wherein recording the backup progress information of the application data to be backed up corresponding to the current priority comprises:
acquiring an initial amount of the application data to be backed up corresponding to the current priority and a backup amount of the application data to be backed up corresponding to the current priority;
calculating a ratio of the backup amount to the initial amount to acquire a backup proportion; and
recording the backup proportion as the backup progress information.
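
For illustration only, a minimal Python sketch of the claimed flow: look up a backup priority from an association table keyed by data change and update frequency, transmit data in priority order, and on disconnection record progress as the ratio of backed-up to initial data. The table contents and categories are assumptions.

PRIORITY_TABLE = {            # (data_change, update_frequency) -> priority
    ("large", "high"): 1,
    ("large", "low"): 2,
    ("small", "high"): 3,
    ("small", "low"): 4,
}

def backup_priority(data_change, update_frequency):
    return PRIORITY_TABLE[(data_change, update_frequency)]

def backup_progress(initial_bytes, backed_up_bytes):
    """Backup proportion recorded when the connection drops mid-priority."""
    return backed_up_bytes / initial_bytes

apps = [("contacts", "small", "low"), ("photos", "large", "high")]
for name, change, freq in sorted(apps, key=lambda a: backup_priority(a[1], a[2])):
    print(f"transmit {name} (priority {backup_priority(change, freq)})")
print(f"progress: {backup_progress(1000, 250):.0%}")   # 25%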

US Pat. No. 10,922,193

DATA BACKUP METHOD, STORAGE MEDIUM, AND TERMINAL

GUANGDONG OPPO MOBILE TEL...

1. A data backup method, comprising:acquiring remaining storage capacity of a cloud space;
collecting usage characteristic information of an application when the remaining storage capacity is less than a preset value;
grading the application based on the usage characteristic information of the application, to acquire a grade of the application; and
backing up updated data to the cloud space when the grade of the application is consistent with a preset grade and data in the application is updated;
wherein collecting the usage characteristic information of the application when the remaining storage capacity is less than the preset value comprises: collecting a clicking rate of the application within a preset time period when the remaining storage capacity is less than the preset value; grading the application based on the usage characteristic information of the application, to acquire the grade of the application comprises: grading the application based on the clicking rate of the application, to acquire the grade of the application.

US Pat. No. 10,922,192

METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR DATA BACKUP INCLUDING DETERMINING A HARDWARE CONFIGURATION ALLOCATED TO A PROXY VIRTUAL MACHINE BASED ON A WORKLOAD OF A BACKUP JOB

EMC IP Holding Company LL...

1. A method of data backup, comprising:receiving, at a managing node and from a destination node, a workload of a backup job, wherein the workload is determined by the destination node in response to a request for the backup job from a source node;
determining, by the managing node and based on the workload, a hardware configuration to be allocated to a proxy virtual machine arranged in a plurality of virtual machines on the source node, the proxy virtual machine comprising a backup application for performing data backup for the plurality of virtual machines; and
transmitting, by the managing node, an indication of the hardware configuration to the proxy virtual machine of the source node to enable the backup application to perform the backup job using the hardware configuration.

US Pat. No. 10,922,191

VIRTUAL PROXY BASED BACKUP

EMC IP HOLDING COMPANY LL...

1. A backup method, comprising:configuring, by one or more processors, one or more virtual proxies associated with backup operations, wherein the one or more virtual proxies are hosted by one or more physical nodes in a cluster environment;
localizing, by one or more processors, one or more shared disks that store data of at least one of one or more virtual machines in the cluster environment, the one or more shared disks being localized to at least one of the one or more physical nodes in the cluster environment;
assigning, by one or more processors, the one or more virtual machines in the cluster environment to a corresponding at least one of the one or more virtual proxies; and
performing, by one or more processors, data rollover during backup of at least one of the one or more virtual machines in the cluster environment that is subjected to backup using the corresponding at least one of the one or more virtual proxies to which the at least one of the one or more virtual machines is assigned.

US Pat. No. 10,922,190

UPDATING DATABASE RECORDS WHILE MAINTAINING ACCESSIBLE TEMPORAL HISTORY

INTUIT, INC., Mountain V...

1. A method for updating database records while maintaining accessible temporal history, comprising:receiving a request, at a database, to select an instance of a record from the database at a specific point in time;
reading the instance of the record from a snapshot of the database, wherein the snapshot of the database was made prior to the specific point in time;
loading one or more deltas associated with the record from the database, wherein each delta in the one or more deltas comprises a difference between a new state of the record and a prior state of the record;
chronologically applying the one or more deltas to the instance of the record to create the instance of the record;
returning the instance of the record;
determining that the request has made a percentage of recent requests exceed a threshold for requests for most-current data; and
creating a new snapshot of the database.
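
A short Python sketch, for illustration, of reconstructing a record as of a point in time from the latest snapshot made before that time plus its chronologically applied deltas, as the claim describes; the delta format (field to new value) is an assumption.

def record_as_of(snapshot_record, deltas, point_in_time):
    """Apply deltas up to `point_in_time` in chronological order."""
    record = dict(snapshot_record)
    for ts, changes in sorted(deltas):          # (timestamp, {field: new_value})
        if ts > point_in_time:
            break
        record.update(changes)
    return record

snapshot = {"balance": 100, "status": "open"}
deltas = [(5, {"balance": 120}), (9, {"status": "closed"})]
print(record_as_of(snapshot, deltas, point_in_time=7))   # {'balance': 120, 'status': 'open'}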

US Pat. No. 10,922,189

HISTORICAL NETWORK DATA-BASED SCANNING THREAD GENERATION

Commvault Systems, Inc., ...

1. A computer-implemented method of performing multi-threaded scanning of a network storage system, the computer-implemented method comprising:as implemented by a data agent executing within a client computing device configured in a primary storage environment, wherein the client computing device comprises one or more hardware processors, determining whether historical network characteristics of a network, which was previously used by the client computing device to communicate with a network storage system, are stored in a management database associated with a storage manager that manages at least in part backup operations of files stored at the network storage system, wherein:
the historical network characteristics are weighted according to a recency in time when the historical network characteristics were obtained,
more recent historical network characteristics are weighted heavier than less recent historical network characteristics, and
recency in time for weighting the historical network characteristics is measured from a time when a corresponding backup process was performed for corresponding historical network characteristics;
in response to determining, by the data agent, the historical network characteristics are stored at the management database:
accessing the historical network characteristics at the management database;
determining, by the data agent, current network characteristics of the network;
determining, by the data agent, an amount of scanning threads to initiate based at least in part on an aggregation of the historical network characteristics and the current network characteristics between the client computing device and the network storage system, wherein:
the historical network characteristics and the current network characteristics are weighted differently in the aggregation of the historical network characteristics and the current network characteristics, and
the scanning threads are configured to scan a network storage repository of the network storage system to identify files to back up that are stored at the network storage repository;
triggering initiation of the amount of the scanning threads at the network storage system; and
initiating scanning of the network storage repository using the scanning threads to identify the files to back up.
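
An illustrative Python sketch of the recency-weighted aggregation this claim describes: historical throughput samples are weighted by how recently their backup ran, blended with the current measurement, and the result is sized into a scanning-thread count. The weighting scheme and scaling constant are assumptions, not the patentee's values.

def scanning_thread_count(history, current_mbps, max_threads=16,
                          history_weight=0.4, current_weight=0.6):
    """history: list of (age_in_days, throughput_mbps); newer samples weigh more."""
    if history:
        weights = [1.0 / (1.0 + age) for age, _ in history]   # more recent -> heavier
        hist_avg = sum(w * v for (_, v), w in zip(history, weights)) / sum(weights)
        blended = history_weight * hist_avg + current_weight * current_mbps
    else:
        blended = current_mbps                                  # no history stored
    return max(1, min(max_threads, round(blended / 100)))       # ~1 thread per 100 Mbps

print(scanning_thread_count([(1, 900), (30, 300)], current_mbps=800))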

US Pat. No. 10,922,188

METHOD AND SYSTEM TO TAG AND ROUTE THE STRIPED BACKUPS TO A SINGLE DEDUPLICATION INSTANCE ON A DEDUPLICATION APPLIANCE

EMC IP Holding Company LL...

1. A method for managing backups, the method comprising:receiving, by a tag router and via a first backup stream, first data associated with a first tagged backup stripe,
wherein the first tagged backup stripe is associated with a first routing tag,
wherein a host generated a first backup stripe and second backup stripe from a backup;
wherein the host assigned the first routing tag to the first backup stripe to generate the first tagged backup stripe based on an association of the first backup stripe to the backup,
wherein the host assigned the first routing tag to the second backup stripe to generate a second tagged backup stripe based on an association of the second backup stripe to the backup, and
wherein the host initiated transmission of the first tagged backup stripe to the tag router;
directing, by the tag router and based on the first routing tag, the first data to a first backup instance;
receiving, by the tag router and via a second backup stream, second data associated with the second tagged backup stripe wherein the host initiated transmission of the second tagged backup stripe to the tag router;
directing, by the tag router and based on the first routing tag, the second data to the first backup instance; and
performing, after the first data and the second data are stored in the first backup instance, a first deduplication operation on the first data and the second data,
wherein a backup storage system comprises the tag router and the first backup instance,
wherein the host is operatively connected to the backup storage system.

US Pat. No. 10,922,187

DATA REDIRECTOR FOR SCALE OUT

Quantum Corporation, San...

1. A non-transitory computer-readable storage device storing instructions that when executed by a processor control the processor to perform operations for distributing data from a source to a plurality of deduplication blockpools, the operations comprising:accessing a binary large object (BLOB) having a first size, where the BLOB includes a plurality of blocklets, a blocklet having a hash value and a second size, where the second size is smaller than the first size;
upon determining that the plurality of blocklets includes less than a threshold number of blocklets:
selecting, according to a first rule set, a target blockpool from among the plurality of deduplication blockpools;
upon determining that the plurality of blocklets includes at least the threshold number of blocklets:
selecting, according to a second, different rule set, a target blockpool from among the plurality of deduplication blockpools, where the second, different rule set includes a BalanceQuery rule that computes a fitness value for a member of the plurality of deduplication blockpools, and that selects a target blockpool from among the plurality of deduplication blockpools based, at least in part, on the fitness value; and
providing the BLOB to the target blockpool.
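
A hedged Python sketch of the two rule sets in this claim: BLOBs with fewer than a threshold number of blocklets are routed by a simple first rule (round-robin here, as an assumption), while larger BLOBs are routed by a BalanceQuery-style fitness value computed per blockpool. The fitness function shown is illustrative only.

import itertools

_round_robin = itertools.cycle(range(3))   # assume 3 deduplication blockpools

def fitness(pool, blocklet_hashes):
    """Higher is better: favor pools already holding many of these blocklets,
    penalize pools that are nearly full."""
    overlap = len(blocklet_hashes & pool["known_hashes"])
    return overlap - pool["fill_ratio"] * len(blocklet_hashes)

def select_blockpool(pools, blocklet_hashes, threshold=64):
    if len(blocklet_hashes) < threshold:              # first rule set
        return pools[next(_round_robin)]
    return max(pools, key=lambda p: fitness(p, blocklet_hashes))  # second rule set

pools = [
    {"name": "pool-a", "known_hashes": {"h1", "h2"}, "fill_ratio": 0.2},
    {"name": "pool-b", "known_hashes": set(), "fill_ratio": 0.9},
    {"name": "pool-c", "known_hashes": {"h3"}, "fill_ratio": 0.5},
]
blob = {f"h{i}" for i in range(100)}       # 100 blocklet hashes
print(select_blockpool(pools, blob)["name"])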

US Pat. No. 10,922,186

METHOD AND SYSTEM FOR IMPLEMENTING CURRENT, CONSISTENT, AND COMPLETE BACKUPS BY ROLLING A CHANGE LOG BACKWARDS

GRAVIC, INC., Malvern, P...

1. A method of backing up an online database, the online database being actively changed by one or more applications, and subsequently restoring the backed up online database, the online database including a plurality of records, the method comprising:(a) backing up the online database by:
(i) copying the plurality of records in the online database to a storage device, and
(ii) during the copying of the plurality of records in the online database to the storage device, writing changes that are made to the plurality of records in the online database to a change log for a portion of the plurality of records in the online database that has already been copied to the storage device, and not writing changes that are made to the plurality of records in the online database to the change log for a portion of the plurality of records in the online database that has not yet been copied to the storage device; and
(b) restoring the backed up online database by:
(i) loading the copied plurality of records in the online database from the storage device to a target database, wherein the target database is a distinct storage entity from the storage device, and
(ii) applying the changes in the change log to the target database by rolling the change log backwards, and applying only the most recent change contained in the change log for each data item in the target database,
wherein the change log includes one or more rows, records or blocks of data that have been changed.
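
For illustration, a minimal Python sketch of restore step (b)(ii): roll the change log backwards and apply only the most recent change per data item, skipping older changes to keys that have already been restored. The change-record shape is an assumption.

def apply_log_backwards(target_db, change_log):
    """change_log: chronologically ordered list of (key, new_value) changes."""
    applied = set()
    for key, new_value in reversed(change_log):    # newest change first
        if key not in applied:
            target_db[key] = new_value             # most recent change wins
            applied.add(key)
    return target_db

db = {"r1": "copied-v0", "r2": "copied-v0"}
log = [("r1", "v1"), ("r2", "v1"), ("r1", "v2")]
print(apply_log_backwards(db, log))   # r1 -> v2 (not v1), r2 -> v1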

US Pat. No. 10,922,185

I/O TO UNPINNED MEMORY SUPPORTING MEMORY OVERCOMMIT AND LIVE MIGRATION OF VIRTUAL MACHINES

Google LLC, Mountain Vie...

1. A method of error handling in a network interface card of a host device, the method comprising:receiving, at a processor of the network interface card, a data packet addressed to a virtual machine executing on the host device;
selecting, by the processor, a receive queue corresponding to the virtual machine and residing in a memory of the network interface card;
retrieving, by the processor, a buffer descriptor from the receive queue, wherein the buffer descriptor includes a virtual memory address; and
in response to determining that the virtual memory address is not associated with a valid translated memory address associated with the virtual machine:
retrieving, by the processor, a backup buffer descriptor from a hypervisor queue residing in the network interface card memory and corresponding to a hypervisor executing on the host device; and
storing, by the processor, contents of the data packet in a host device memory location indicated by a backup memory address in the backup buffer descriptor.

US Pat. No. 10,922,184

DATA ACCESS DURING DATA RECOVERY

EMC IP HOLDING COMPANY LL...

1. A method, comprising:intercepting, during a data recovery operation, a request to access an object that is being recovered as part of the data recovery operation, wherein the object includes one or more sub-objects;
in response to intercepting the access request, determining a recovery status for a first sub-object of the one or more sub-objects in relation to the data recovery operation;
determining an access method to be implemented during the data recovery operation, the access method being implemented to make available at least one of the one or more sub-objects for access in connection with the request to access the object wherein:
the access method is determined based at least in part on the determined recovery status of at least the first sub-object;
the determining the access method comprises changing a recovery priority of the first sub-object relative to recovery priorities of the one or more sub-objects; and
the recovery priorities of the one or more sub-objects pertain to an order with which the one or more sub-objects are recovered; and
providing, in connection with the request to access the object, access to the at least one of the one or more sub-objects in accordance with the determined access method.

US Pat. No. 10,922,183

IN-PLACE DISK UNFORMATTING

MICROSOFT TECHNOLOGY LICE...

1. A distributed system comprising:a preparation component configured for:
creating a backup file on a disk;
a preformatting component configured for:
using the backup file to occupy a predetermined location that defines a backup zone on the disk, wherein the backup file is stored in the backup zone,
wherein the backup zone includes a first portion of the backup file in a first portion of the backup zone as a space-holder for backing up primal data, and
wherein the backup zone includes a second portion of the backup file in a second portion of the backup zone as a space-holder for backing up file table data;
copying primal data from a primal zone to the first portion of the backup zone;
overwriting the first portion of the backup file in the first portion of the backup zone, wherein overwriting the first portion of the backup file backs up metadata files in the first portion of the backup zone, wherein the primal data comprises metadata;
copying file table data to the second portion of the backup zone; and
overwriting the second portion of the backup file in the second portion of the backup zone, wherein overwriting the second portion of the backup file backs up a file system structure in the second portion of the backup zone, wherein the file table data comprises the file system structure; and
a formatting component configured for:
formatting the disk having the primal data and the file table data in the backup zone;
an unformatting component configured for:
copying the primal data from the backup zone to the primal zone;
copying the file table data to at least a file table zone; and
unformatting the disk to a preformat configuration.

US Pat. No. 10,922,182

MOTOR DRIVING DEVICE AND DETERMINATION METHOD

FANUC CORPORATION, Yaman...

1. A motor driving device for driving a motor, comprising:a rectifier circuit configured to rectify an AC input voltage supplied from an AC power supply to a DC voltage;
a smoothing capacitor configured to smooth the DC voltage rectified by the rectifier circuit;
an inverter configured to convert a capacitor voltage across the smoothing capacitor into an AC voltage to drive the motor;
a relay configured to be turned on and output a contact signal when the input voltage is input to the rectifier circuit from the AC power supply;
a controller programmed to perform the following steps:
an input voltage detecting step of detecting the input voltage;
a capacitor voltage detecting step of detecting the capacitor voltage; and
a backup start determining step of determining whether or not to start a backup operation of transferring the information stored in a first storage to a second storage, based on at least one of the contact signal output from the relay, the input voltage, and the capacitor voltage,
wherein the backup start determining step determines whether the rectifier circuit is being driven,
when the rectifier circuit is being driven,
the backup start determining step determines to start the backup operation when both the input voltage and the capacitor voltage have lowered,
when the rectifier circuit is not being driven,
the backup start determining step determines to start the backup operation when the input voltage has lowered.
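
A direct Python sketch of the claimed decision logic, for illustration: while the rectifier circuit is being driven, the backup starts only when both the AC input voltage and the capacitor voltage have lowered; when it is not being driven, the input voltage alone decides. The threshold values are assumptions.

def should_start_backup(rectifier_driven, input_v, capacitor_v,
                        input_threshold=180.0, capacitor_threshold=250.0):
    input_low = input_v < input_threshold
    capacitor_low = capacitor_v < capacitor_threshold
    if rectifier_driven:
        return input_low and capacitor_low
    return input_low

print(should_start_backup(True, input_v=150, capacitor_v=240))   # True
print(should_start_backup(True, input_v=150, capacitor_v=300))   # False
print(should_start_backup(False, input_v=150, capacitor_v=300))  # True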

US Pat. No. 10,922,181

USING STORAGE LOCATIONS GREATER THAN AN IDA WIDTH IN A DISPERSED STORAGE NETWORK

PURE STORAGE, INC., Moun...

1. A method comprises:encoding, by a processing unit of a storage network, a data segment using an information dispersal algorithm (IDA) with a first pillar width number to produce a set of encoded data slices;
generating, by the processing unit, a set of storage addresses for the set of encoded data slices based on the first pillar width number, a second pillar width number and a storage address mapping function, wherein the second pillar width number is greater than the first pillar width number;
identifying, by the processing unit, based on the set of storage addresses, a first group of storage units of a set of storage units, wherein the set of storage units includes the second pillar width number of storage units and wherein the first group of storage units includes the first pillar width number of storage units; and
sending, by the processing unit, the set of encoded data slices to the first group of storage units in accordance with the set of storage addresses for storage therein.

US Pat. No. 10,922,180

HANDLING UNCORRECTED MEMORY ERRORS INSIDE A KERNEL TEXT SECTION THROUGH INSTRUCTION BLOCK EMULATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for handling an uncorrected memory error (UE) inside a kernel text section, the method comprising:detecting a UE that affects a kernel text section stored in a memory that is operably coupled to a CPU executing kernel program instructions;
identifying a current instruction affected by the UE;
recovering the UE-affected instruction by loading a copy thereof into the memory from a kernel image maintained in persistent storage;
emulating the UE-affected instruction using the copy of the UE-affected instruction;
incrementing an instruction pointer of the CPU to point to a next instruction in the memory that would normally be executed following the UE-affected instruction had there been no UE; and
storing the copy of the UE-affected instruction in an emulated instruction cache in the memory for future lookup in response to another UE so that the UE-affected instruction can be subsequently recovered and emulated from the emulated instruction cache instead of the persistent storage.

US Pat. No. 10,922,179

POST REBUILD VERIFICATION

PURE STORAGE, INC., Moun...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:determining an encoded data slice to verify;
obtaining the encoded data slice;
determining a dispersed storage (DS) unit of a storage set of DS units to produce a selected DS unit;
sending an encoded data slice request message to each of the storage set of DS units, including the selected DS unit;
receiving encoded data slice response messages to produce a selected encoded data slice;
determining encoded data slice partials of the encoded data slice;
determining whether a sum of the encoded data slice partials compares favorably to the selected encoded data slice wherein encoded data slice partials of the sum of encoded data slice partials includes the encoded data slice partial of the encoded data slice and excludes an encoded data slice partial associated with the selected DS unit;
indicating a failed test when the processing module determines that the comparison is not favorable; and
indicating a passed test when the processing module determines that the comparison is favorable.

US Pat. No. 10,922,178

MASTERLESS RAID FOR BYTE-ADDRESSABLE NON-VOLATILE MEMORY

Hewlett Packard Enterpris...

1. A system comprising:a memory semantic fabric, wherein the memory semantic fabric is to:
track transactions received from a requester, and
route the transactions to a NVM media controller;
a plurality of byte-addressable non-volatile memory (NVM) modules; and
the NVM media controller, wherein the NVM media controller comprises a RAID grouping mapping table and a link layer device, and the NVM media controller is to:
connect to the memory semantic fabric via the link layer device internal to the NVM media controller, and
connect to a fabric bridge via the link layer device internal to the NVM media controller,
wherein the RAID grouping mapping table provides RAID functionality for the NVM media controller, and
wherein the NVM media controller is communicatively coupled to other media controllers to cooperatively provide redundant array of independent disks (RAID) functionality at a granularity at which the NVM modules are byte-addressable without employing a master RAID controller.

US Pat. No. 10,922,177

METHOD, DEVICE AND COMPUTER READABLE STORAGE MEDIA FOR REBUILDING REDUNDANT ARRAY OF INDEPENDENT DISKS

EMC IP Holding Company LL...

1. A method of rebuilding a redundant array of independent disks (RAID), the method comprising:in response to detecting at least one fault disk in the RAID, adding a new disk to the RAID for rebuilding;
determining, from a mapping table, a first set of storage blocks marked as “free” in the at least one fault disk, the mapping table indicating usage state of storage space in the RAID; and
writing a predetermined value into a second set of storage blocks corresponding to the first set of storage blocks in the new disk,
wherein the mapping table indicates usage state of storage blocks in the RAID, the storage blocks corresponding to logic storage slices in a mapping logic unit, and wherein the method further comprises:
in response to the mapping logic unit receiving a write request, (i) allocating a new slice from a slice pool and (ii) sending a first indication to a RAID controller that controls the RAID, the first indication identifying the new slice;
in response to the RAID controller receiving the first indication, marking as used in the mapping table a block that corresponds to the new slice;
sending, by the RAID controller to the mapping logic unit, a second indication that the new slice is available; and
in response to the mapping logic unit receiving the second indication, marking the new slice as available in the mapping logic unit.

US Pat. No. 10,922,176

RECOVERY OF PARITY BASED STORAGE SYSTEMS

SEAGATE TECHNOLOGY LLC, ...

1. A storage system comprising:a plurality of storage nodes in a group of storage nodes, each storage node including one or more storage containers, the one or more storage containers including one or more data containers, one or more parity containers, or one or more spare containers, or any combination thereof; and
a hardware controller configured to:
identify a first failed storage container on a first storage node from the group of storage nodes, identify data associated with the first failed storage container on at least a second storage container on a second storage node from the plurality of storage nodes, and recover the data associated with the first failed storage container from at least the second storage container on the second storage node, wherein recovery of the data associated with the first failed storage container is based at least in part on a first parity container of the first storage node; and
identify data associated with a second failed storage container on a third storage node from the group of storage nodes from the recovered data associated with the first failed storage container, and recover the data associated with the second failed storage container from at least the recovered data associated with the first failed storage container.

US Pat. No. 10,922,175

METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR FAILURE RECOVERY OF STORAGE SYSTEM

EMC IP Holding Company LL...

1. A method for failure recovery of a storage system, comprising:in response to detecting that a disk group of a memory system failed, recording failure duration of the disk group; and
maintaining the disk group in a degraded but not ready state if the failure duration does not reach a predetermined ready time limit and the disk group is in a degraded state, wherein the predetermined ready time limit is shorter than a logic unit number debounce time limit to avoid a data unavailable event.

US Pat. No. 10,922,174

SELECTIVE ERROR RATE INFORMATION FOR MULTIDIMENSIONAL MEMORY

Micron Technology, Inc., ...

1. A memory apparatus, comprising:a memory device including three-dimensional memory entities each comprising a plurality of two-dimensional memory entities; and
a controller coupled to the memory device, wherein the controller is configured to:
determine a quantity of two-dimensional memory entities that have a greater error rate than a remainder of the two-dimensional memory entities based on error rate information;
determine a quantity of portions of three-dimensional memory entities that have a greater error rate than a remainder of the portions of three-dimensional memory entities based on the error rate information excluding error rate information for portions of the two-dimensional memory entities associated with the quantity of two-dimensional memory entities; and
cull the quantity of the two-dimensional memory entities and the quantity of the portions of the three-dimensional memory entities from the memory apparatus.

US Pat. No. 10,922,173

FAULT-TOLERANT DISTRIBUTED DIGITAL STORAGE

1. A method for operating a distributed digital storage system comprising at least one processor and a plurality of digital storage devices, the method comprising:using the at least one processor to direct storing of a set of k source data symbols, wherein k is an integer greater than 1, on the plurality of digital storage devices by:
generating a plurality of encoding symbols from the set of k source data symbols using a Fountain encoder;
wherein generating the plurality of encoding symbols comprises systematic encoding via concatenation of the k source data symbols with a number of non-systematic symbols;
wherein each of the non-systematic symbols comprises a subset of d source data symbols selected uniformly at random from the set of k source data symbols and the Fountain encoded symbols are calculated as an exclusive-or combination of the uniformly selected subset of d source data symbols;
wherein a distribution for d = 2, 3, . . . , k comprises a multi-objective optimization performed in the following steps:
(i) determining probability of failure Pfail of a Belief Propagation decoder;
(ii) maximizing the probability of successful decoding based on Pfail;
(iii) minimizing an average encoding/decoding complexity to obtain an objective value ƒ*;
(iv) minimizing an average repair locality subject to a constraint based on the objective value ƒ* for an expected encoding degree E(?(d));
storing the plurality of encoding symbols over the plurality of digital storage devices;
wherein the steps (i)-(iv) enable:
determining a minimum repair locality within the set of the k source data symbols; and
reducing computational complexity during decoding of a random subset of the encoded symbols by using a low complexity decoder.
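
A conceptual Python sketch of the systematic Fountain encoding this claim builds on: the k source symbols are kept verbatim and concatenated with non-systematic symbols, each formed by XOR-ing a degree-d subset of source symbols chosen uniformly at random. The degree distribution used here is a stand-in, not the optimized distribution produced by steps (i)-(iv).

import random

def fountain_encode(source_symbols, n_repair, degree_choices=(2, 3, 4)):
    k = len(source_symbols)
    encoded = list(source_symbols)                    # systematic part
    for _ in range(n_repair):
        d = random.choice(degree_choices)             # stand-in for the optimized degree distribution
        subset = random.sample(range(k), min(d, k))   # uniform subset of d source symbols
        symbol = 0
        for i in subset:
            symbol ^= source_symbols[i]               # exclusive-or combination
        encoded.append(symbol)
    return encoded

print(fountain_encode([0x12, 0x34, 0x56, 0x78, 0x9A], n_repair=3))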

US Pat. No. 10,922,172

ON THE FLY RAID PARITY CALCULATION

Toshiba Memory Corporatio...

1. A data storage device comprising:a storage array containing first data;
a buffer containing RAID units; and
a controller in communication with the storage array and the buffer,
the controller configured to:
receive a read request from a host device for a second data stored in the storage array,
determine an identifier associated with the requested second data,
determine if the requested second data contains an unrecoverable error, and
accumulate first data, including a parity value, contained in the storage array associated with the same identifier as the requested second data in a reconstruction buffer, if the requested second data contains an unrecoverable error.
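
A short Python sketch, for illustration, of the reconstruction path this claim describes: when the requested data contains an unrecoverable error, the controller accumulates (XORs) the remaining data units and the parity value that share its identifier into a reconstruction buffer, recovering the lost data. The stripe layout is an assumption.

from functools import reduce

def reconstruct(stripe_units, parity, failed_index):
    """XOR all surviving units with the parity to rebuild the failed unit."""
    survivors = [u for i, u in enumerate(stripe_units) if i != failed_index]
    return reduce(lambda a, b: a ^ b, survivors + [parity])

stripe = [0b1010, 0b0110, 0b1111]
parity = stripe[0] ^ stripe[1] ^ stripe[2]
print(bin(reconstruct(stripe, parity, failed_index=1)))   # 0b110 == stripe[1]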

US Pat. No. 10,922,171

ERROR CORRECTION CODE CIRCUITS, SEMICONDUCTOR MEMORY DEVICES AND MEMORY SYSTEMS

Samsung Electronics Co., ...

1. An error correction code (ECC) circuit of a semiconductor memory device, the ECC circuit comprising:a syndrome generation circuit configured to generate a syndrome based on a message and first parity bits in a codeword read from a memory cell array by using one of a first parity check matrix and a second parity check matrix, in response to a decoding mode signal, the message including system check bits;
an ECC encoder configured to generate the first parity bits based on the message; and
a correction circuit configured to
receive the codeword,
correct at least a portion of (t1+t2) error bits in the codeword, based on the syndrome, wherein t1 and t2 are natural numbers, and
output a corrected message.

US Pat. No. 10,922,170

MEMORY MODULE INCLUDING A VOLATILE MEMORY DEVICE, MEMORY SYSTEM INCLUDING THE MEMORY MODULE AND METHODS OF OPERATING A MULTI-MODULE MEMORY DEVICE

Samsung Electronics Co., ...

1. A memory system comprising:a memory device having a plurality of volatile memory modules therein; and
a memory controller electrically coupled to the plurality of volatile memory modules, said memory controller configured to correct an error in a first of the plurality of volatile memory modules in response to generation of an alert signal by the first of the plurality of volatile memory modules, concurrently with an operation to refresh at least a portion of a second of the plurality of volatile memory modules in response to the generation of the alert signal, which is provided from the first of the plurality of volatile memory modules to the second of the plurality of volatile memory modules.

US Pat. No. 10,922,169

ERROR DETECTING MEMORY DEVICE

GSI Technology Inc., Sun...

1. A comparing unit comprising:a non-destructive memory array comprising memory cells arranged in rows and columns;
a plurality of word lines, each word line to activate memory cells in a row, thereby to establish an activated row;
first bit lines and second bit lines to connect memory cells in columns, each said first bit line providing the result of a Boolean AND operation between data stored in a first activated row and data stored in a second activated row, and each said second bit line providing the result of a Boolean NOR operation between data stored in said first activated row and data stored in said second activated row; and
a NOR gate per column, each connected to said per column first and second bit lines thereby comparing data stored in said first activated row with data stored in said second activated row.

US Pat. No. 10,922,168

DYNAMIC LINK ERROR PROTECTION IN MEMORY SYSTEMS

QUALCOMM Incorporated, S...

1. An apparatus, comprising:a memory configured to communicate with a host over a link, the memory comprising:
a mode register corresponding to a link speed and comprising a plurality of operands to indicate write link error correction code disable or write link error correction code enable, and to indicate read link error correction code disable or read link error correction code enable,
wherein the mode register is one of a plurality of mode registers of the memory, the plurality of mode registers corresponding to different link speeds.

US Pat. No. 10,922,167

STORAGE DEVICE AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A memory controller for controlling a memory device including a register for storing a plurality of parameters, the memory controller comprising:a register information storage configured to receive, when power is supplied to the memory device, the plurality of parameters from the memory device, and store the plurality of parameters as a plurality of setting parameters respectively corresponding to the plurality of parameters;
a register controller configured to provide the memory device with a parameter change command for requesting a selected parameter, among the plurality of parameters, to be changed to a set value, and acquire, from the memory device, Cyclic Redundancy Check (CRC) calculation information that is obtained by performing a CRC calculation on the plurality of parameters including the selected parameter changed in response to the parameter change command;
a CRC reference information generator configured to generate CRC reference information by performing a CRC calculation on the plurality of setting parameters including a setting parameter that corresponds to the selected parameter and is changed to the set value; and
a CRC information comparator configured to determine whether an error is included in the plurality of parameters according to a comparison result between the CRC calculation information and the CRC reference information,
wherein the plurality of parameters represent device setting information related to the memory device, and
wherein the device setting information is used in a read operation, a program operation or an erase operation of the memory device.

US Pat. No. 10,922,166

APPARATUS AND METHOD FOR PROBABILISTIC ERROR CORRECTION OF A QUANTUM COMPUTING SYSTEM

Intel Corporation, Santa...

1. An apparatus comprising:a quantum controller to generate sequences of pulses associated with qubits on a quantum processor in response to operations specified in a quantum runtime;
quantum measurement circuitry to measure quantum values associated with the qubits following completion of at least a first cycle of quantum runtime operations; and
a probabilistic compute engine to analyze the quantum values using inferencing and to responsively adjust a quantum error correction depth value for minimizing a number of errors to be detected on subsequent cycles of the quantum runtime operations, wherein the quantum error correction depth value maps to a depth of quantum error correcting code.

US Pat. No. 10,922,165

SEMICONDUCTOR DEVICE AND SEMICONDUCTOR SYSTEM EQUIPPED WITH THE SAME

RENESAS ELECTRONICS CORPO...

1. A semiconductor device comprising:a master circuit which outputs a first write request signal used for requesting to write data;
a bus which receives the data and the first write request signal;
a bus control unit which is arranged on the bus, generates an error detection code for the data and generates a second write request signal which includes second address information corresponding to first address information included in the first write request signal; and
a memory controller which writes the data transmitted from the bus into a storage area of an address which is designated by the first write request signal transmitted from the bus and writes the error detection code transmitted from the bus into a storage area of an address which is designated by the second write request signal transmitted from the bus in storage areas of a memory,
wherein the memory controller comprises:
a first memory controller which accesses a first memory included in the memory; and
a second memory controller which accesses a second memory included in the memory,
wherein the bus control unit is configured to 1) transmit the first write request signal from the bus to the first memory controller via a first signal path and 2) transmit the second write request signal to the second memory controller via a second signal path, and
wherein an entirety of the second signal path is distinct from the first signal path such that the second write request signal does not travel through any part of the first signal path.

US Pat. No. 10,922,164

FAULT ANALYSIS AND PREDICTION USING EMPIRICAL ARCHITECTURE ANALYTICS

Accenture Global Solution...

1. A method for fault analysis and prediction in an enterprise environment, the method comprising:obtaining data from a plurality of sources in the enterprise environment, wherein the plurality of sources includes at least one or more systems, users, or applications;
associating the obtained data with identifiers that include a theme selected from a set of themes and one or more keywords, wherein the keywords are specific to each theme;
generating a workflow for a user based on a session identifier and/or timestamps associated with activity by the user, wherein the workflow identifies a time-based sequence of interactions by the user with the at least one or more systems or applications in the enterprise environment; and
determining at least one fault identification or fault prediction associated with the enterprise environment based on the workflow and identifiers associated with the obtained data that corresponds to the workflow.

US Pat. No. 10,922,163

DETERMINING SERVER ERROR TYPES

Verizon Patent and Licens...

1. A device, comprising:one or more memories; and
one or more processors, communicatively coupled to the one or more memories, to:
obtain a plurality of server logs from a plurality of servers,
wherein each server log, of the plurality of server logs, includes a plurality of log entries;
generate, based on the plurality of log entries of the plurality of server logs, a plurality of data structures,
wherein each data structure, of the plurality of data structures, includes one or more log entries of the plurality of log entries, of the plurality of server logs, that concern a server request;
identify a set of data structures, of the plurality of data structures, associated with one or more server errors;
process the set of data structures using an artificial intelligence technique and a machine learning technique to determine a respective classification score of each data structure of the set of data structures,
wherein the one or more processors process the set of data structures using a machine learning model trained on historical data structure data to determine the respective classification score of each data structure;
determine, based on the respective classification score of each data structure of the set of data structures, a respective server error type of each data structure of the set of data structures;
cause display of information concerning the set of data structures and at least one server error type associated with the set of data structures; and
cause at least one server of the plurality of servers to perform an action concerning the at least one server error type.

US Pat. No. 10,922,162

CAPTURING VIDEO DATA AND SERIAL DATA DURING AN INFORMATION HANDLING SYSTEM FAILURE

Dell Products, L.P., Rou...

1. A method for capturing a screenshot during an information handling system (IHS) failure, the method comprising:detecting, via a controller, an occurrence of a system event log (SEL) incident in the IHS, wherein the SEL incident comprises an intelligent platform management interface (IPMI) SEL addition command;
in response to detecting the occurrence of the SEL incident in the IHS, retrieving a data recording window from a volatile controller memory, the data recording window containing video data and serial data recorded for a time period from prior to and up to a time of detection of the SEL incident; and
storing the data recording window including the video data and the serial data for the time period to a non-volatile controller memory.

US Pat. No. 10,922,161

APPARATUS AND METHOD FOR SCALABLE ERROR DETECTION AND REPORTING

Intel Corporation, Santa...

1. A method comprising:detecting an error in a component of a first tile within a tile-based hierarchy of a processing device;
classifying the error and recording first error data based on the classification;
communicating the first error data related to the error to a first tile interface, the first tile interface to combine the first error data with second error data received from one or more other components associated with the first tile to generate first accumulated error data;
communicating the first accumulated error data to a master tile interface, the master tile interface to combine the first accumulated error data with second accumulated error data received from at least one other tile interface to generate third accumulated error data; and
providing the third accumulated error data to a host executing an application to process the third accumulated error data.
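
The claim describes a two-level roll-up of error data: components report to their tile interface, tile interfaces report to a master tile interface, and the combined data goes to a host application. A minimal sketch of that accumulation, with the ErrorData fields and names assumed:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ErrorData:
    source: str
    severity: str   # classification recorded when the error was detected

@dataclass
class TileInterface:
    """Combines error data from the components of one tile (first accumulation)."""
    accumulated: List[ErrorData] = field(default_factory=list)

    def report(self, err: ErrorData):
        self.accumulated.append(err)

@dataclass
class MasterTileInterface:
    """Combines accumulated error data from several tile interfaces (second accumulation)."""
    accumulated: List[ErrorData] = field(default_factory=list)

    def collect(self, tile: TileInterface):
        self.accumulated.extend(tile.accumulated)

# Component errors flow: tile interface -> master tile interface -> host application.
tile0, tile1, master = TileInterface(), TileInterface(), MasterTileInterface()
tile0.report(ErrorData("tile0.alu", "corrected"))
tile1.report(ErrorData("tile1.cache", "uncorrected"))
master.collect(tile0)
master.collect(tile1)
print([(e.source, e.severity) for e in master.accumulated])  # handed to the host for processing
```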

US Pat. No. 10,922,160

MANAGING PHYS OF A DATA STORAGE TARGET DEVICE


WESTERN DIGITAL TECHNOLOG...

1. A data storage device, comprising:a non-volatile memory, wherein the data storage device is configured to read data from the non-volatile memory and write data to the non-volatile memory;
a plurality of phys; and
a controller coupled to the non-volatile memory and the plurality of phys, the controller, when coupled to a host device, configured to:
complete a first link reset on a first phy of the data storage device;
check whether a wide port auto configuration is enabled for the data storage device upon completion of the first link reset on the first phy;
receive a first host phy SAS address for a first host phy on the first phy from the host device upon completion of the first link reset on the first phy;
record the first host phy SAS address for the first phy in a lookup table, wherein the lookup table comprises a list of host phy SAS addresses;
search for a matching host phy SAS address in the lookup table, wherein searching comprises determining whether the first host phy SAS address corresponds to a host phy SAS address from the list of host phy SAS addresses;
determine the first host phy SAS address matches a first host phy SAS address in the lookup table;
transmit a first identify frame, based on the searching, for the first phy to the host device, wherein an identify frame comprises a target phy SAS address and a phy identifier, wherein the first identify frame links the first phy to the first host phy SAS address, and wherein the first identify frame comprises a first matched host phy SAS address; and
establish a link between the data storage device and the host device based on the transmitted first identify frame.

US Pat. No. 10,922,159

MINIMALLY DISRUPTIVE DATA CAPTURE FOR SEGMENTED APPLICATIONS

International Business Ma...

1. A method for performing a data dump of a segmented application, the method comprising:detecting an error in a segmented application, the segmented application comprising an address space for storing executables and a buffer for storing data; and
in response to detecting the error, performing the following:
quiescing the address space and copying content of the address space to another location while the address space is quiesced;
reactivating the address space after the content of the address space is completely copied;
suspending write access to the buffer and copying content of the buffer to another location while write access to the buffer is suspended;
while write access to the buffer is suspended, redirecting writes intended for the buffer to a temporary storage area, and directing reads intended for the buffer to both the buffer and the temporary storage area depending on where valid data is stored; and
maintaining an index to determine where valid data is stored in the buffer and the temporary storage area.
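
The buffer-side behaviour, reduced to its essentials: while write access is suspended for the copy, writes are redirected to a temporary area, reads consult an index to find where the valid data lives, and that index is maintained per location. A minimal Python sketch under those assumptions (the dict-based buffer and offset granularity are illustrative):

```python
class BufferDumper:
    """While the buffer copy is in flight, writes go to a temporary area and an
    index records where the valid copy of each offset lives."""

    def __init__(self, buffer):
        self.buffer = buffer            # original buffer content
        self.temp = {}                  # temporary storage area for redirected writes
        self.index = {}                 # offset -> "buffer" or "temp"
        self.write_suspended = False

    def start_copy(self):
        self.write_suspended = True
        return dict(self.buffer)        # content copied elsewhere while writes are suspended

    def write(self, offset, value):
        if self.write_suspended:
            self.temp[offset] = value   # redirect the write
            self.index[offset] = "temp"
        else:
            self.buffer[offset] = value

    def read(self, offset):
        # Direct the read to wherever the index says the valid data is stored.
        if self.index.get(offset) == "temp":
            return self.temp[offset]
        return self.buffer.get(offset)

d = BufferDumper({0: "a", 1: "b"})
snapshot = d.start_copy()
d.write(1, "b2")                       # lands in the temporary area
print(d.read(0), d.read(1), snapshot)  # a b2 {0: 'a', 1: 'b'}
```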

US Pat. No. 10,922,158

METHOD AND SYSTEM FOR TRANSFORMING INPUT DATA STREAMS

OPEN TEXT SA ULC, Halifa...

1. A system for processing input data streams comprising:a processing system coupled to an input and an output, the processing system having a non-transitory computer readable medium comprising instructions to:
read an electronic input data stream of data received at the input;
in a job thread, detect patterns in the input data stream to identify events;
in the job thread, create a message for each event, the message containing data associated with the event according to a data structure corresponding to the event;
in the job thread, execute a process configured to generate a meta record based on the message and send the meta record to an output pipeline; and
provide an output data stream to a destination from the output pipeline via the output, wherein at least a portion of the output data stream contains data from the message converted according to the meta record.

US Pat. No. 10,922,157

MANAGING FUNCTIONS ON AN IOS MOBILE DEVICE USING ANCS NOTIFICATIONS

CELLCONTROL, INC., Baton...

1. A method for preventing user interaction with a mobile device, comprising:establishing, by a processor of the mobile device, a connection between the mobile device and an external control device connected with a vehicle via an application downloaded to the mobile device used to prevent user interaction with the mobile device, wherein the external control device is to transmit commands to the mobile device to prevent user interaction with one or more functions on the mobile device based on a status of the mobile device within the vehicle;
detecting, by an event notification service native to an operating system (OS) of the mobile device and by the application, an initiation of at least one of the functions on the mobile device;
sending, by the event notification service and the application via the connection, an event notification to the external control device indicative of the initiation of the at least one of the functions on the mobile device;
receiving, from the external control device via the connection, an action responsive to the event notification; and
processing the action, wherein the action causes the mobile device to prevent the user interaction with the at least one of the functions on the mobile device.

US Pat. No. 10,922,156

SELF-EXECUTING BOT BASED ON CACHED USER DATA

PAYPAL, INC., San Jose, ...

1. A system, comprising:a non-transitory memory; and
one or more hardware processors coupled to the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising:
obtaining a first cached data from a first device, the first cached data including data saved on the first device in response to electronic searches or electronic messaging performed by a first user using the first device;
determining, at least in part via the first cached data, an intended use context associated with the electronic searches or the electronic messaging;
determining, based on the intended use context, a first confidence level; and
in response to the determined first confidence level meeting or exceeding a predefined first threshold, automatically executing a transaction involving the first user.

US Pat. No. 10,922,155

METHODS OF COMMUNICATION BETWEEN A REMOTE RESOURCE AND A DATA PROCESSING DEVICE

ARM IP LIMITED, Cambridg...

1. A method of providing for asynchronous communication between a remote resource and a data processing device, the method including:storing in a transaction queue a first message received from the remote resource, the transaction queue being configured such that a plurality of remote resources can post messages to the transaction queue but only the data processing device can pull messages from the transaction queue;
receiving a request from the data processing device to create a guest transaction queue;
creating the guest transaction queue, the guest transaction queue being accessible by both the data processing device and the remote resource and being configured such that a plurality of remote resources can pull messages from the guest transaction queue without requiring credentials;
in response to a pull request received from the data processing device, providing the first message stored in the transaction queue to the data processing device;
storing, in the guest transaction queue, a second message received from the data processing device in response to the first message; and
in response to a pull request received from the remote resource, providing the second message stored in the guest transaction queue to the remote resource,
wherein the request to create the guest transaction queue received from the data processing device is sent to the data processing device by the remote resource.

US Pat. No. 10,922,154

SYSTEMS AND METHODS FOR INTER-PROCESS COMMUNICATION WITHIN A ROBOT

X Development LLC, Mount...

1. A method comprising:creating a publisher configured to send messages over a channel having a shared memory, wherein the shared memory comprises a plurality of sequentially-related memory slots, and wherein each sent message sequentially occupies a memory slot of the plurality of memory slots;
creating at least one subscriber configured to receive the messages over the channel by sequentially referencing memory slots of the plurality of memory slots;
determining, at a first attempt for sending a message by the publisher, based on an indicator associated with a next sequential memory slot in the plurality of memory slots, that the next sequential memory slot is currently referenced by a subscriber;
delaying sending the message by the publisher based on determining that the next sequential memory slot is currently referenced by the subscriber;
receiving an event trigger indicative of message reading by the subscriber;
responsive to receiving the event trigger, determining, at a second attempt for sending the message by the publisher, based on the indicator associated with the next sequential memory slot, that the next sequential memory slot is not currently referenced by any of the at least one subscriber; and
sending, by the publisher, the message to the next sequential memory slot based on determining that the next sequential memory slot is not currently referenced by any of the at least one subscriber.
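
A minimal sketch of the slot-based channel: each slot carries an indicator of how many subscribers currently reference it, the publisher delays when the next sequential slot is still referenced, and a read-completion event triggers the retry. The threading primitives and the two-slot ring are assumptions for illustration.

```python
import threading

class Channel:
    """Ring of sequential slots; each slot has an indicator counting the
    subscribers currently referencing it."""

    def __init__(self, n_slots):
        self.slots = [None] * n_slots
        self.referenced = [0] * n_slots       # indicator per slot
        self.read_event = threading.Event()   # signalled when a subscriber finishes a read
        self.write_pos = 0

    def publish(self, message):
        slot = self.write_pos % len(self.slots)
        while self.referenced[slot]:          # first attempt fails: slot still referenced
            self.read_event.wait()            # delay until a read-completion event arrives
            self.read_event.clear()           # then retry (second attempt)
        self.slots[slot] = message
        self.write_pos += 1

    def begin_read(self, slot):
        self.referenced[slot] += 1
        return self.slots[slot]

    def end_read(self, slot):
        self.referenced[slot] -= 1
        self.read_event.set()                 # event trigger indicating a message was read

chan = Channel(n_slots=2)
chan.publish("m0")
chan.publish("m1")
msg = chan.begin_read(0)                      # a subscriber holds slot 0
threading.Timer(0.1, chan.end_read, args=(0,)).start()
chan.publish("m2")                            # waits until slot 0 is released, then writes
print(msg, chan.slots)                        # m0 ['m2', 'm1']
```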

US Pat. No. 10,922,153

COMMUNICATION METHOD AND DEVICE FOR VIRTUAL BASE STATIONS

Alcatel Lucent, Nozay (F...

1. A method implemented at a baseband processing unit having a plurality of virtual base stations arranged thereon, the method comprising:enabling a hardware accelerator in the baseband processing unit to process data to be transmitted in the plurality of virtual base stations, the processed data being stored in a first group of a plurality of buffers in the hardware accelerator which are associated with the plurality of virtual base stations;
reading, from the first group of the plurality of buffers, the processed data in a predetermined order associated with the plurality of virtual base stations; and
writing the processed data into a second group of a plurality of buffers in a general purpose processor of the baseband processing unit for further processing in the general purpose processor.

US Pat. No. 10,922,152

EVENT HANDLER NODES FOR VISUAL SCRIPTING

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:receiving, from a visual scripting interface of a visual scripting service, a selection of an event-based message stream;
generating, using the visual scripting system, an event node corresponding to the event-based message stream;
configuring the event node with at least one event handler, to receive messages transmitted on the event-based message stream; and
generating, through the visual scripting system, application code based at least in part on the messages received by the at least one event handler.

US Pat. No. 10,922,151

UNIFIED EVENTS FRAMEWORK

SAP SE, Walldorf (DE)

15. A system, comprising:a computing device; and
a computer-readable storage device coupled to the computing device and having instructions stored thereon which, when executed by the computing device, cause the computing device to perform operations for detecting and managing events from data of an Internet-of-Things (IoT) network, the operations comprising:
providing a unified events framework that is accessible to each of a plurality of enterprises operating respective IoT networks, that is agnostic to the respective IoT networks and to enterprises operating the respective IoT networks, the unified events framework comprising an event configuration component and an event ingestion framework;
storing, by the unified events framework, one or more event configurations, each event configuration being customized by an enterprise of the plurality of enterprises through the event configuration component, each enterprise associated with a respective application, each event configuration being specific to a respective event and comprising an event type, one or more statuses defining respective states of the respective event, a severity and a code associated with one of messaging and notification of the respective event;
receiving, by the unified events framework, calls from respective applications of enterprises in the plurality of enterprises, the calls comprising a first call from a first application that executes functionality specific to a first enterprise of the plurality of enterprises, the first call comprising an identifier and timeseries data from one or more IoT devices in a first IoT network, the first IoT network being exclusive to the first enterprise, the identifier identifying the first enterprise among the plurality of enterprises;
retrieving, by the event ingestion framework of the unified events framework, a rule set for processing the timeseries data, the rule set being selected from a plurality of rule sets based on the identifier; and
determining, by the event ingestion framework of the unified events framework, that an anomaly is represented in the timeseries data based on the rule set, and in response:
selecting a first event configuration from an event configuration data store of the unified events framework, the event configuration data store storing event configurations for each enterprise of the plurality of enterprises, the first event configuration being specific to the first enterprise and being distinct from a second event configuration that is specific to a second enterprise,
generating an event based on the first event configuration that defines the event,
executing an event workflow to transition the event between states, and
transmitting an event response to the first application.

US Pat. No. 10,922,150

DEEP HARDWARE ACCESS AND POLICY ENGINE

Dell Products L.P., Roun...

1. An information handling system comprising:a host system comprising at least one processor;
a management controller communicatively coupled to the at least one processor and configured to provide out-of-band management of the information handling system;
a debugging circuit; and
a logic device coupled to the host system and to the management controller, wherein the logic device is configured to:
determine that a trigger event has taken place, wherein the trigger event comprises a boot failure of the management controller caused by a failure of a power supply of the information handling system; and
in response to the trigger event, provide a serial data stream corresponding to the trigger event to the debugging circuit, wherein the serial data stream includes debugging data from the power supply;
wherein the debugging circuit is configured to send the serial data stream to a debugging information handling system via a wireless interface without a request from the debugging information handling system, wherein the serial data stream is automatically transmitted repeatedly.

US Pat. No. 10,922,149

SYSTEM COMPRISING A PLURALITY OF VIRTUALIZATION SYSTEMS

OPENSYNERGY GMBH, Berlin...

1. A system comprising:a processor comprising a plurality of cores having a same instruction set architecture;
at least one memory connected to said processor;
a plurality of guest systems;
a plurality of virtualization systems, each virtualization system running on a separate corresponding one of said cores, each virtualization system being a respective hypervisor between the corresponding core and the guest system, and each hypervisor assigning hardware resources to respective guest systems, the plurality of virtualization systems comprising:
a first virtualization system adapted to run on a first core; and
a second virtualization system adapted to run on a second core, wherein the first virtualization system has a first characteristic with a first parameter and the second virtualization system has a second characteristic with a second parameter, and wherein the parameter of the first characteristic and the parameter of the second characteristic are incompatible when implemented in a single virtualization system; and
a communication module for enabling said plurality of virtualization systems to communicate with each other.

US Pat. No. 10,922,148

INTEGRATED ANDROID AND WINDOWS DEVICE

Intel Corporation, Santa...

1. An apparatus, comprising:a housing;
a first processor board, including,
a first processor;
a graphics processor unit (GPU), either built into the first processor or operatively coupled to the first processor;
first memory, operably coupled to the first processor;
display output circuitry, either built into one of the first processor and the GPU or operatively coupled to at least one of the first processor and the GPU;
a touchscreen display, communicatively coupled to the display output circuitry; and
non-volatile storage in which first instructions are stored comprising an Android operating system and a plurality of Android applications;
a second processor board, communicatively coupled to the first processor board, including,
a second processor;
second memory, operatively coupled to the second processor; and
non-volatile storage in which second instructions are stored comprising a Windows operating system and a plurality of Windows applications,
wherein, upon operation, the apparatus enables a user to selectively run the plurality of Android applications and the plurality of Windows applications, wherein the plurality of Android applications are executed on the first processor board and the plurality of Windows applications are executed on the second processor board, and wherein the first processor board is installed within the housing and the second processor board is one of installed within the housing or installed in a backpack coupled to the housing.

US Pat. No. 10,922,147

STORAGE SYSTEM DESTAGING BASED ON SYNCHRONIZATION OBJECT WITH WATERMARK

EMC IP Holding Company LL...

1. An apparatus comprising:a storage system comprising a plurality of storage devices, a data structure associated with at least one of the plurality of storage devices, and a storage controller associated with the data structure and the plurality of storage devices;
wherein the storage controller is configured to:
obtain a threshold value for a synchronization object associated with the data structure; and
activate a plurality of threads, each thread configured to:
determine a count value of the synchronization object, the count value corresponding to a number of entries in the data structure;
determine whether the count value of the synchronization object exceeds the threshold value plus a predetermined number of entries; and
destage a number of entries of the data structure equal to the predetermined number of entries in response to determining that the count value of the synchronization object exceeds the threshold value plus the predetermined number of entries.
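
The destaging rule can be stated compactly: each thread reads the synchronization object's count, and destages exactly the predetermined batch only when that count exceeds the threshold plus the batch size. A minimal Python sketch with assumed names and an in-memory list standing in for the data structure:

```python
import threading

class DestageController:
    def __init__(self, threshold, batch):
        self.threshold = threshold     # threshold value for the synchronization object
        self.batch = batch             # predetermined number of entries destaged at once
        self.entries = []              # data structure entries awaiting destage
        self.destaged = []
        self.lock = threading.Lock()

    def thread_body(self):
        with self.lock:
            count = len(self.entries)                 # count value of the synchronization object
            if count > self.threshold + self.batch:   # only destage past the watermark
                for _ in range(self.batch):
                    self.destaged.append(self.entries.pop(0))

ctl = DestageController(threshold=4, batch=2)
ctl.entries = [f"entry-{i}" for i in range(8)]        # 8 > 4 + 2, so one batch is destaged
threads = [threading.Thread(target=ctl.thread_body) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(ctl.destaged), len(ctl.entries))            # 2 6
```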

US Pat. No. 10,922,146

SYNCHRONIZATION OF CONCURRENT COMPUTATION ENGINES

Amazon Technologies, Inc....

1. A computer-implemented method for generating, by a compiler, program code for an integrated circuit device having multiple concurrently operating execution engines, the method comprising:receiving, at a computing device, an input data set, wherein the input data set is organized in a graph, wherein nodes in the graph represent operations to be performed by a first execution engine or a second execution engine of the integrated circuit device, and wherein connections between the nodes represent data or resource dependencies between the nodes;
identifying a first node in the input data set that includes a first operation to be performed by the first execution engine, wherein the first operation generates a result;
identifying a second node in the input data set that has a connection from the first node, wherein the second node includes a second operation to be performed by the second execution engine, wherein the second operation uses the result generated by the first operation;
assigning a checkpoint value to the connection, wherein the checkpoint value is associated with a checkpoint register of the integrated circuit device;
generating a first set of program code including instructions for performing the first operation, wherein a last instruction in the first set of program code causes the first execution engine to set the checkpoint value corresponding to the connection in the checkpoint register; and
generating a second set of program code including instructions for performing the second operation, wherein a first instruction in the second set of program code is a wait instruction, wherein the wait instruction causes the second execution engine to wait for a condition associated with the checkpoint value corresponding to the connection in the checkpoint register,
wherein the checkpoint register comprises a plurality of checkpoint registers, and wherein each of the plurality of checkpoint registers is accessible to both the first execution engine and the second execution engine, and
wherein, when the checkpoint value is set in the checkpoint register, the integrated circuit device is operable to broadcast the checkpoint value to the second execution engine.
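
The compiler's handling of one dependency edge is the essence of the claim: the producer's program ends by setting a checkpoint value, and the consumer's program begins with a wait on that value, so the engines can run concurrently while still honouring the dependency. A minimal sketch, with the engine names, register numbering, and pseudo-instruction mnemonics assumed:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    name: str
    engine: str      # which execution engine runs the operation
    op: str

def compile_edge(producer: Node, consumer: Node, checkpoint: int) -> Tuple[List[str], List[str]]:
    """Emit per-engine instruction streams for one dependency edge.

    The producer's stream ends by setting the checkpoint value; the consumer's
    stream begins by waiting on it.
    """
    producer_code = [f"{producer.engine}: {producer.op}",
                     f"{producer.engine}: SET_CHECKPOINT r{checkpoint}"]
    consumer_code = [f"{consumer.engine}: WAIT_CHECKPOINT r{checkpoint}",
                     f"{consumer.engine}: {consumer.op}"]
    return producer_code, consumer_code

matmul = Node("matmul", engine="pe_array", op="MATMUL a, b -> acc")
act = Node("relu", engine="activation", op="RELU acc -> out")
prog_a, prog_b = compile_edge(matmul, act, checkpoint=0)
print("\n".join(prog_a + prog_b))
```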

US Pat. No. 10,922,145

SCHEDULING SOFTWARE JOBS HAVING DEPENDENCIES

Target Brands, Inc., Min...

1. A computer-implemented method comprising:obtaining a job, wherein the job defines one or more tasks that are to be performed;
analyzing the job to identify one or more dependencies of the job, wherein the analyzing includes:
identifying that the job relies on one or more mutable data structures; and
identifying each of the one or more mutable data structures as a dependency of the job required as a data source to complete one or more of the tasks;
generating a directed graph describing the job based on the analyzing of the job, wherein the directed graph includes the identified one or more dependencies of the job as nodes of the directed graph;
determining, using the directed graph, whether to take an action with respect to the job; and
taking the action responsive to determining to take the action with respect to the job.

US Pat. No. 10,922,144

ATTRIBUTE COLLECTION AND TENANT SELECTION FOR ON-BOARDING TO A WORKLOAD

Microsoft Technology Lice...

1. A method performed by a computing system, the method comprising:collecting tenant attributes representing a plurality of different tenants to be on-boarded to a workload;
selecting, based on the tenant attributes, a model that models tenant usage behavior relative to the workload;
for each particular tenant of the plurality of tenants and prior to on-boarding of the particular tenant to the workload,
generating an engagement value indicative of a likely usage of the workload by the particular tenant based on the model of tenant usage behavior;
generating a rank ordered list of the tenants based on the engagement value for each tenant; and
controlling a display system to generate an interactive user interface display that displays the rank ordered list.

US Pat. No. 10,922,143

SYSTEMS, METHODS AND DEVICES FOR DETERMINING WORK PLACEMENT ON PROCESSOR CORES

Intel Corporation, Santa...

1. An apparatus, comprising:a processor including:
a plurality of cores; and
a favored core module operable by the processor to:
generate a ranking of the plurality of cores based on differing physical characteristics of the plurality of cores;
identify one or more cores of the plurality of cores as favored with respect to one or more other cores of the plurality of cores based on the ranking, and include the one or more cores identified as favored in a favored core list;
for each thread of a plurality of threads:
determine whether a demand of the thread is greater than a threshold; and
in response to a determination that the demand of the thread is greater than the threshold, select the thread for affinitization;
determine whether a first count of selected thread(s) is less than or equal to a second count of the core(s) of the favored core list; and
in response to a determination that the first count is less than or equal to the second count, affinitize all selected thread(s) to a respective one of the core(s) of the favored core list, including:
perform a first affinitization of an initial thread of the selected thread(s) to a core of an initial entry in the favored core list; and
perform one or more second affinitizations of any one or more next threads, respectively, of the selected thread(s) to one or more cores of the one or more next entries in the favored core list, respectively; and
in response to a determination that the first count is greater than the second count, unaffinitize at least one thread-core affinitization.
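
A minimal sketch of the favored-core policy: rank cores by a physical characteristic, select threads whose demand exceeds a threshold, and pin the selected threads to favored cores only when there are enough favored cores to go around. The frequency-based ranking, the demand metric, and the figures are illustrative assumptions.

```python
def rank_cores(core_characteristics):
    """Rank cores by a physical characteristic (here: maximum stable frequency, assumed)."""
    return sorted(core_characteristics, key=core_characteristics.get, reverse=True)

def affinitize(threads, core_characteristics, demand_threshold, n_favored):
    favored = rank_cores(core_characteristics)[:n_favored]        # favored core list
    selected = [t for t, demand in threads.items() if demand > demand_threshold]
    if len(selected) <= len(favored):
        # Pin each selected thread to the next entry in the favored core list.
        return dict(zip(selected, favored))
    # With more selected threads than favored cores, the claim instead
    # unaffinitizes at least one existing pinning (not modelled in this sketch).
    return {}

cores = {"core0": 3.9, "core1": 4.4, "core2": 4.2, "core3": 3.8}   # GHz, illustrative
threads = {"t0": 0.9, "t1": 0.2, "t2": 0.7}                         # utilization demand
print(affinitize(threads, cores, demand_threshold=0.5, n_favored=2))
# {'t0': 'core1', 't2': 'core2'}
```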

US Pat. No. 10,922,142

MULTI-STAGE IOPS ALLOCATION

Nutanix, Inc., San Jose,...

1. A method for multi-stage input/output operations (IOPS) allocations, the method comprising:identifying at least one policy that specifies at least one IOPS limit;
determining two or more virtual machines associated with the at least one policy;
determining two or more nodes that host the two or more virtual machines;
performing a first allocation to apportion the at least one IOPS limit over the two or more nodes, the first allocation resulting in two or more node-level IOPS apportionments that correspond to the two or more nodes; and
performing a second allocation to apportion the node-level IOPS apportionments to the two or more virtual machines, the second allocation resulting in one or more VM-level IOPS apportionments to the two or more virtual machines.
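
The two allocation stages can be shown in a few lines: the policy-level IOPS limit is first apportioned across the nodes hosting the policy's virtual machines, then each node-level share is apportioned across the VMs on that node. Even splits are an assumption; the claim does not prescribe the apportionment rule.

```python
from collections import defaultdict

def allocate_iops(policy_limit, vm_to_node):
    """Two-stage apportionment: policy limit -> per-node shares -> per-VM shares."""
    vms_per_node = defaultdict(list)
    for vm, node in vm_to_node.items():
        vms_per_node[node].append(vm)

    # Stage 1: apportion the policy-level limit over the nodes hosting the VMs.
    node_share = policy_limit / len(vms_per_node)
    node_limits = {node: node_share for node in vms_per_node}

    # Stage 2: apportion each node-level share over the VMs on that node.
    vm_limits = {vm: node_limits[node] / len(vms)
                 for node, vms in vms_per_node.items() for vm in vms}
    return node_limits, vm_limits

node_limits, vm_limits = allocate_iops(1000, {"vm1": "nodeA", "vm2": "nodeA", "vm3": "nodeB"})
print(node_limits)   # {'nodeA': 500.0, 'nodeB': 500.0}
print(vm_limits)     # {'vm1': 250.0, 'vm2': 250.0, 'vm3': 500.0}
```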

US Pat. No. 10,922,141

PRESCRIPTIVE ANALYTICS BASED COMMITTED COMPUTE RESERVATION STACK FOR CLOUD COMPUTING RESOURCE SCHEDULING

Accenture Global Solution...

1. A system comprising:network interface circuitry configured to:
receive historical resource utilization data for a set of virtual machines;
receive consumption metric data for the set of virtual machines;
receive tagging data defining a functional grouping for a first virtual machine of the set of virtual machines;
send a reservation matrix to a host interface configured to control static reservation and dynamic requisition for at least the first virtual machine, the reservation matrix including one or more requests for one or more committed compute virtual machines of the set of virtual machines;
reservation circuitry in data communication with the network interface circuitry, the reservation circuitry configured to execute a committed compute reservation (CCR) stack,
the CCR stack comprising:
a data staging layer;
an input layer;
a transformation layer;
an iterative analysis layer; and
a prescriptive engine layer;
the CCR stack configured to:
obtain, via a data control tool at the input layer, the historical utilization data, the consumption metric data, and the tagging data;
store, at the data staging layer, the historical utilization data, the consumption metric data, and the tagging data;
access, at the transformation layer, the historical utilization data and the tagging data via a memory resource provided by the data staging layer;
process, at the transformation layer, the historical utilization data and the tagging data to generate a time-mapping of active periods for virtual machines within the functional grouping;
store, via operation at the data staging layer, the time-mapping;
obtain, at the iterative analysis layer, a current committed compute (CC) state for the set of virtual machines, the current CC state detailing one or more pre-analysis committed compute states for one or more virtual machines of the set of virtual machines;
access, at the iterative analysis layer, the consumption metric data;
within an analysis space of CC states, use boundary conditions to determine a search space for a non-linear search, the search space around the current CC state, where a disallowed CC state is outside the search space but within the analysis space;
based on the time-mapping, the consumption metric data, and the previous CC state, determine, responsive to a non-linear search iteration of the non-linear search, a consumption-constrained CC state comprising a static reservation prescription for at least the first virtual machine;
store, via operation at the data staging layer, the consumption-constrained CC state;
access the consumption-constrained CC state at the prescriptive engine layer;
analyze the consumption-constrained CC state responsive to a feedback history, the feedback history based on commands from a committed compute (CC) control interface;
responsive to analyzing the consumption-constrained CC state, generate, at the prescriptive engine layer, a committed compute window (CC-window) presentation for the CC control interface; and
after generation of the CC-window presentation, generate the reservation matrix based on at least the consumption-constrained CC state and the feedback history.

US Pat. No. 10,922,140

RESOURCE SCHEDULING SYSTEM AND METHOD UNDER GRAPHICS PROCESSING UNIT VIRTUALIZATION BASED ON INSTANT FEEDBACK OF APPLICATION EFFECT

SHANGHAI JIAOTONG UNIVERS...

1. A resource scheduling system in a virtualized environment, comprising a physical GPU instruction dispatch, a physical GPU interface, an agent and a scheduling controller, which are all in a host wherein:the agent is connected between the physical GPU instruction dispatch and the physical GPU interface;
the scheduling controller is connected with the agent, and
the scheduling controller receives user commands, and delivers the user commands to the agent; the agent receives the user commands coming from the scheduling controller, monitors a set of GPU conditions of a guest application executing in a virtual machine, transmits GPU condition results of the guest application to the scheduling controller, calculates periodically/on an event basis a minimum delay time necessary to meet the GPU conditions of the guest application according to a scheduling algorithm designated by the scheduling controller, and delays sending instructions and data in the physical GPU instruction dispatch to the physical GPU interface; and the scheduling controller receives a scheduling result and a scheduling condition coming from the agent, processes the scheduling result and the scheduling condition, and displays the processed scheduling result and scheduling condition.

US Pat. No. 10,922,139

SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR PROCESSING LARGE DATA SETS BY BALANCING ENTROPY BETWEEN DISTRIBUTED DATA SEGMENTS

Visa International Servic...

11. A computer program product for load balancing for processing large data sets based on distribution entropy using a plurality of processors, comprising at least one non-transitory computer-readable medium including program instructions that, when executed by at least one processor, cause the at least one processor to:identify a number of segments based on the number of the plurality of processors, and a transaction data set comprising transaction data for a plurality of payment transactions distributed over a range of transaction values, the transaction data for each payment transaction of the plurality of payment transactions comprising a transaction value representing a payment amount;
determine a distribution entropy of the transaction data set based on the transaction value of each payment transaction of the plurality of payment transactions;
repeatedly segment the transaction data set into respective pairs of segments until the number of segments is reached, each segment of the number of segments having various sizes, based on the distribution entropy of the transaction data set and balancing respective distribution entropies of each segment of the number of segments such that respective distribution entropies of each segment are balanced to within a predefined tolerance of the distribution entropy of at least one adjacent segment; and
distribute processing tasks associated with each segment of the number of segments to the plurality of processors to process each transaction in each respective segment.
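
A minimal sketch of entropy-balanced segmentation: compute the Shannon entropy of transaction values in a segment, split a segment at the point that best balances the entropies of the resulting pair, and repeat until one segment per processor exists. The split-the-largest strategy and the sample amounts are assumptions, not the claimed procedure.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of the distribution of transaction values in a segment."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def split_balanced(values):
    """Split one segment into the pair whose entropies are most nearly balanced."""
    best = min(range(1, len(values)),
               key=lambda i: abs(entropy(values[:i]) - entropy(values[i:])))
    return values[:best], values[best:]

def segment(values, n_segments):
    """Repeatedly split the largest segment until the target segment count is reached."""
    segments = [sorted(values)]
    while len(segments) < n_segments:
        largest = max(segments, key=len)
        segments.remove(largest)
        segments.extend(split_balanced(largest))
    return segments

amounts = [5, 5, 7, 9, 20, 20, 20, 45, 80, 80, 120, 300]   # transaction values
for seg in segment(amounts, n_segments=4):                  # one segment per processor
    print(seg, round(entropy(seg), 2))
```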

US Pat. No. 10,922,138

RESOURCE CONSERVATION FOR CONTAINERIZED SYSTEMS

Google LLC, Mountain Vie...

1. A method comprising:receiving, at data processing hardware, an event-criteria list from a resource controller, the event-criteria list comprising one or more events watched by the resource controller, the resource controller controlling at least one target resource and configured to respond to the one or more events from the event-criteria list that occur;
determining, by the data processing hardware, whether the resource controller is idle; and
when the resource controller is idle:
terminating, by the data processing hardware, the resource controller;
determining, by the data processing hardware, whether any event from the event-criteria list occurs after terminating the resource controller; and
when at least one event of the one or more events from the event-criteria list occurs after terminating the resource controller, recreating, by the data processing hardware, the resource controller.

US Pat. No. 10,922,137

DYNAMIC THREAD MAPPING

Hewlett Packard Enterpris...

1. A method for dynamic thread mapping, comprising:initiating a set of thread-specific registers of a multi-threaded central processing unit (CPU) having multiple cores, wherein the set of thread-specific registers comprises a first counter of the set of thread-specific registers for first memory loads and stores and a second counter of the set of thread-specific registers for second memory loads and stores;
tracking using the first counter a first number of in-flight memory accesses of the first memory;
tracking using the second counter a second number of in-flight memory accesses of the second memory;
when a first thread activity is classified as a load operation of the first memory, incrementing the first counter;
when a second thread activity is classified as a commit operation of the second memory, decrementing the second counter;
assigning each thread to the multiple cores wherein the number of cores having only first memory accesses and the number of cores having only second memory accesses are both maximized based at least in part on the set of thread-specific registers;
migrating the assigned threads to the respective CPU cores; and
adjusting at least one of voltage and frequency to increase the performance of the multiple CPU cores with only first memory accesses and decreasing the performance of the multiple CPU cores with only second memory accesses.

US Pat. No. 10,922,136

SUBSCRIPTION SERVER

iHeartMedia Management Se...

8. A subscription server including:a processor;
memory coupled to the processor;
a communications interface coupled to the processor;
the processor configured to:
implement a clock generation module configured to generate a clock template including a plurality of slots, each slot associated with at least one content type, and wherein the clock template includes information indicating timing relationships of the plurality of slots relative to one another;
determine that a media log is to be generated from the clock template for one or more subscribers;
implement a subscription verification module configured to obtain trust parameters associated with the one or more subscribers;
implement a log generation module configured to generate a media log including at least one slot assigned a restriction level determined based on the trust parameters; and
the communications interface configured to transmit the media log, including information indicating the restriction level, to at least one of the one or more subscribers.

US Pat. No. 10,922,135

DYNAMIC MULTITASKING FOR DISTRIBUTED STORAGE SYSTEMS BY DETECTING EVENTS FOR TRIGGERING A CONTEXT SWITCH

EMC IP Holding Company LL...

1. A method for dynamic multitasking in a storage system, the storage system including a first storage server configured to execute a first I/O service process and one or more second storage servers, the method comprising:detecting a first event for triggering a context switch;
causing the second storage servers to stop transmitting internal I/O requests to the first I/O service process;
deactivating the first I/O service process by pausing one or more components of the first I/O service process; and
after the first I/O service process is deactivated, executing a first context switch between the first I/O service process and a second process.

US Pat. No. 10,922,134

METHOD, DEVICE AND COMPUTER PROGRAM PRODUCT FOR PROCESSING DATA

EMC IP Holding Company LL...

1. A method of processing data, comprising:determining priorities of a plurality of cores of a processor based on metadata stored in a plurality of queues associated with the plurality of cores, respectively, the metadata being related to data blocks to be processed that are associated with the respective cores, and the metadata in each of the queues being sorted by arrival times of the respective data blocks to be processed;
storing core identifiers of the plurality of cores into a cache by an order of the priorities;
determining a core identifier with the highest priority from the cache;
setting the priority of the core corresponding to the core identifier as a predefined priority; and
updating the priorities of the plurality of cores based on the predefined priority;
wherein the processor resides within a data storage system that performs host input/output (I/O) operations on behalf of a set of host computers; and
wherein storing the core identifiers of the plurality of cores into the cache includes:
writing the core identifiers of the plurality of cores into the cache while the data storage system performs the host I/O operations on behalf of the set of host computers.

US Pat. No. 10,922,133

METHOD AND APPARATUS FOR TASK SCHEDULING

ALIBABA GROUP HOLDING LIM...

1. A method comprising:obtaining, by a task management device of a distributed system, network resources needed to perform cross-cluster reading and writing, the network resources associated with a task and comprising an amount of input data and an amount of output data for the task based on a history record of past executions of the task;
analyzing, by the task management device of the distributed system, the network resources by calculating an input-output ratio representing a proportion of the network resources needed for reading and writing for the task, the input-output ratio equal to a ratio of the amount of input data to the amount of output data; and
scheduling, by the task management device, the task according to the network resources.

US Pat. No. 10,922,132

SECURE MIGRATION OF SERVERS FROM CUSTOMER NETWORKS TO SERVICE PROVIDER SYSTEMS

Amazon Technologies, Inc....

1. A computer-implemented method comprising:receiving, over one or more networks, a first request to securely migrate a virtual machine (VM) from a customer network to a service provider system, wherein the first request includes an identifier of a key encryption key (KEK) associated with a customer account of the service provider system, and wherein the customer network is managed by an entity that is separate from an entity that manages the service provider system;
sending a second request to a backup proxy located within the customer network, wherein the second request includes an identifier of the KEK, and wherein the second request causes the backup proxy to:
generate replication data for the VM,
obtain a data encryption key associated with the KEK from a key management service (KMS),
encrypt the replication data using the data encryption key to obtain encrypted replication data, and
upload the encrypted replication data to a storage location at the service provider system;
obtaining the encrypted replication data from the storage location at the service provider system;
obtaining the data encryption key associated with the customer account from the KMS, wherein the data encryption key is obtained based on the identifier of the KEK;
decrypting the encrypted replication data using the data encryption key to obtain decrypted replication data; and
generating, based on the decrypted replication data, a VM image to be used to create migrated VM instances at the service provider system.

US Pat. No. 10,922,131

APPLICATION FUNCTION CONTROL METHOD AND RELATED PRODUCT

GUANGDONG OPPO MOBILE TEL...

1. A mobile terminal, comprising a universal processor, configured to:in response to detecting a starting instruction for a first application, generate a first instruction containing an application identifier of the first application;
in response to finding out according to the first instruction that a disabled function set comprises at least one first function of the first application, generate a second instruction containing a function identifier of the at least one first function; and
run, according to the second instruction, one or more functions, except the at least one first function, in multiple functions of the first application.

US Pat. No. 10,922,130

INFORMATION PROCESSING DEVICE FOR APPLYING CHANGES TO A TASK OF A PROCESS FLOW WHEN A DYNAMICALLY CHANGEABLE FLAG IS ATTACHED TO THE TASK

FUJI XEROX CO., LTD., To...

1. An information processing device connected to a management device that stores a master process flow comprising a plurality of tasks, sets a dynamically changeable flag on at least one task of the plurality of tasks, and installs a process corresponding to the master process flow on the information processing device for execution, the information processing device, comprising:a memory storing instructions for executing the installed process flow comprising the plurality of tasks including a first task and a second task subsequent to the first task; and
a processor programmed to execute the instructions to:
start executing the installed process flow including the first task that does not have the dynamically changeable flag attached;
after executing the first task:
determine whether the dynamically changeable flag is attached to the second task when execution of the second task is started;
in response to determination that the dynamically changeable flag is attached to the second task:
communicate with the management device to confirm whether or not a change exists in a corresponding second task of the master process flow being managed by the management device;
upon confirming by the management device that the change exists in the corresponding second task of the master process flow, determine whether a parameter of the second task in the information processing device is dynamically changeable;
apply and execute the change to the second task in the information processing device by updating the parameter of the second task when the parameter of the second task is determined to be dynamically changeable; and
upon determining that the change is not confirmed by the management device or upon determining that the parameter of the second task is not dynamically changeable, execute the process flow without applying the change to the second task; and
in response to determination that the dynamically changeable flag is not attached to the second task, execute the second task without communicating with the management device to confirm whether the change exists in the master process flow.

US Pat. No. 10,922,129

OPERATION PROCESSING DEVICE AND CONTROL METHOD OF OPERATION PROCESSING DEVICE

FUJITSU LIMITED, Kawasak...

1. A processor core for out-of-order execution comprising:an operation unit configured to execute an operation;
a first register unit including a plurality of first registers configured to hold data to be used for the operation in the operation unit;
a first selection unit configured to receive a read address signal and to select data held by a first register, indicated by the read address signal, among the plurality of first registers;
a second selection unit configured to select, based on a bypass selection signal, data from a data group including the data selected by the first selection unit and data indicative of a result of the operation executed by the operation unit;
a second register unit configured to hold the data selected by the second selection unit and to output the held data to the operation unit;
a timing adjustment unit configured to output the read address signal to the first selection unit;
a storage unit configured to sequentially store the read address signal; and
a bypass control unit configured to generate the bypass selection signal based on a write address signal of a preceding instruction and a read address signal of a subsequent instruction or a read address signal of a preceding instruction and the read address signal of the subsequent instruction, where the write address signal of the preceding instruction, the read address signal of the subsequent instruction, or the read address signal of the preceding instruction is indicative of a first register in which the data indicative of the result of the operation in the operation unit is to be stored,
wherein when the read address signal of the subsequent instruction matches the read address signal of the preceding instruction, the second selection unit is configured to select data read by the preceding instruction for the read address signal without instruction retirements based on the bypass selection signal and the bypass control unit is configured to stop an operation of the timing adjustment unit, and
wherein when the read address signal of the subsequent instruction matches the write address signal of the preceding instruction, the second selection unit is configured to select the data indicative of the result of the operation executed by the operation unit without instruction retirements based on the bypass selection signal and the bypass control unit is configured to stop the operation of the timing adjustment unit.

US Pat. No. 10,922,128

EFFICIENTLY MANAGING THE INTERRUPTION OF USER-LEVEL CRITICAL SECTIONS

VMWARE, INC., Palo Alto,...

1. A method comprising:executing, by a physical central processing unit (CPU) of a computer system, a first critical section of a user-level thread of an application, wherein program code for the first critical section is marked with one or more CPU instructions indicating that the first critical section should be executed in an atomic manner by the physical CPU;
detecting, by the physical CPU while executing the first critical section, a first event to be handled by an operating system (OS) kernel of the computer system;
upon detecting the first event, reverting, by the physical CPU, memory writes performed by the physical CPU within the first critical section;
invoking, by the physical CPU, a trap handler of the OS kernel a first time, wherein in response to the invoking of the trap handler the first time, the OS kernel invokes a first instance of a user-level handler of the application;
executing, by the physical CPU, a second critical section within the first instance of the user-level handler, wherein program code for the second critical section is marked with one or more CPU instructions indicating that the second critical section should be executed in an atomic manner by the physical CPU;
detecting, by the physical CPU while executing the second critical section, a second event to be handled by the OS kernel; and
upon detecting the second event, invoking, by the physical CPU, the trap handler a second time, wherein in response to the invoking of the trap handler the second time, the OS kernel invokes a second instance of the user-level handler and passes information to the second instance of the user-level handler including:
an identity of the user-level thread;
an indication of the first event;
a state of the physical CPU upon detecting the first event; and
an indication that the first instance of the user-level handler was interrupted in the second critical section, while handling an interruption of the user-level thread in the first critical section.

US Pat. No. 10,922,127

TRANSACTIONAL MESSAGING SUPPORT IN CONNECTED MESSAGING NETWORKS

Snap Inc., Santa Monica,...

1. A method for transactional messaging support in connected messaging networks, the method comprising:receiving, at a computing system comprising a proxy application operating between a first messaging network which does not support transactional processing, and a second messaging network which does support transactional processing, a message from a first application on the first messaging network comprising instructions to use advanced features supported by the second messaging network on behalf of the first application;
reading the message to obtain instructions from the first application regarding transactional processing supported by the second messaging network for the message;
adding, by the computing system, the transactional processing to the message according to the instructions of the first application obtained from the message;
forwarding, by the computing system, to the second messaging network, the message with the added transactional processing for the advanced features supported by the second messaging network;
determining an outcome of the message by determining a success or failure of the message processed on behalf of the first application and generating a commit response; and
returning the outcome of the message to the first application on the first messaging network by sending the generated commit response message to the first application.

US Pat. No. 10,922,126

CONTEXT PROCESSING METHOD AND APPARATUS IN SWITCHING PROCESS OF MULTIPLE VIRTUAL MACHINES, AND ELECTRONIC DEVICE

CLOUDMINDS (SHENZHEN) ROB...

1. A context processing method in a switching process of multiple virtual machines, comprising:in a Kernel Virtual Machine (KVM) module, receiving a switching request of switching from a third party Hypervisor to a platform Hypervisor, the third party Hypervisor being a Hypervisor other than the platform Hypervisor;
in the KVM module, triggering an exception to the third party Hypervisor; and
in the third party Hypervisor, storing a context of the third party Hypervisor in a specified location of a storage space of the third party Hypervisor, and loading the context of the platform Hypervisor pre-stored in the storage space of the third party Hypervisor; and
wherein a data structure form of the storage space is a stack, the method further comprising updating a recording state of the context at the bottom of a stack space of the third party Hypervisor.

US Pat. No. 10,922,125

CAPABILITY LIVENESS OF CONTAINERIZED SERVICES

MICRO FOCUS LLC, Santa C...

1. An apparatus comprising:a processor; and
a non-transitory computer readable medium on which is stored instructions that when executed by the processor, cause the processor to:
access an output of a containerized service, wherein the containerized service is to execute in a container and is to provide a capability based on successful execution of the containerized service in the container;
determine, based on the output, a capability liveness that indicates whether the containerized service is able to provide the capability, wherein the capability liveness is separate from a container liveness check that determines whether the container is responsive to requests;
wherein to determine the capability liveness based on the output, the instructions, when executed by the processor, cause the processor to: access a set of log patterns associated with the containerized service having an inability to provide the capability; parse the output based on the set of log patterns; and determine whether the output matches one or more log patterns from among the set of log patterns;
determine that the containerized service is unable to provide the capability based on the determined capability liveness; and
cause the container to be restarted responsive to the determination that the containerized service is unable to provide the capability.
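
Capability liveness as claimed is distinct from container liveness: the container may still answer requests while its log output matches patterns showing the capability itself is gone. A minimal sketch with assumed, hypothetical log patterns and a placeholder restart hook:

```python
import re

# Hypothetical log patterns associated with the service being unable to provide its capability.
FAILURE_PATTERNS = [
    re.compile(r"could not connect to backing store"),
    re.compile(r"licen[cs]e expired"),
]

def capability_liveness(service_output: str) -> bool:
    """Return True when the service can still provide its capability.

    A container liveness probe could still pass while these patterns appear.
    """
    return not any(p.search(line)
                   for line in service_output.splitlines()
                   for p in FAILURE_PATTERNS)

def check_and_restart(container_name: str, service_output: str) -> str:
    if not capability_liveness(service_output):
        # Placeholder for restarting the container via the runtime's CLI or API.
        return f"restarting {container_name}"
    return "healthy"

print(check_and_restart("reports-svc", "2024-01-01 ERROR could not connect to backing store"))
```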

US Pat. No. 10,922,124

NETWORK CONTROL SYSTEM FOR CONFIGURING MIDDLEBOXES

NICIRA, INC., Palo Alto,...

1. A non-transitory machine readable medium storing a network controller application for execution by at least one processing unit, the network controller application comprising sets of instructions for:receiving a middlebox configuration for distribution to a middlebox element having a plurality of middlebox instances;
generating an identifier for association with a particular middlebox instance at the middlebox element;
distributing the middlebox configuration and the generated identifier to the middlebox element; and
distributing the generated identifier to a plurality of managed forwarding elements executing on a plurality of host computers in order for each managed forwarding element to exchange data packets with the middlebox element.

US Pat. No. 10,922,123

CONTAINER MIGRATION IN COMPUTING SYSTEMS

Microsoft Technology Lice...

1. A method performed in a computing system having a source device interconnected to a destination device by a computer network, the method comprising:receiving, at the destination device, a request to migrate a source container currently executing on the source device to the destination device, the source container including a software package having a software application in a filesystem sufficiently complete for execution of the software application in an operating system by a processor of the source device to provide a display output of the software application; and
in response to the received request from the source device, at the destination device,
starting a virtual machine having an operating system that is compatible with that of the source device;
instantiating, in the started virtual machine, a destination container using a copy of an image and a memory snapshot of the source container on the source device to execute a copy of the software application to produce a remote display output of the software application; and
upon completion of instantiating the destination container at the destination device to execute the copy of the software application, transmitting, via the computer network, the produced remote display output of the copy of the software application from the destination container to the source device to be surfaced on the source device in place of the display output from the source container.

US Pat. No. 10,922,122

SYSTEM AND METHOD FOR VIRTUAL MACHINE RESOURCE TAGGING

EMC IP Holding Company LL...

1. A remote agent for managing virtual machines, comprising:a persistent storage that stores backup/restoration policies comprising:
a first portion of policies that are selectively keyed to tags applied to the virtual machines, and
a second portion of policies that are selectively keyed to the virtual machines;
a backup manager that generates backups of the virtual machines based on the backup/restoration policies; and
a resource tagger programmed to:
obtain a management request for a virtual machine of the virtual machines;
in response to obtaining the management request:
perform a remote resource analysis of the virtual machine to obtain an application profile of the virtual machine;
perform a multidimensional application analysis of the application profile to identify at least one tag that indicates a consumption rate of computing resources by an application specified by the application profile;
apply the at least one tag to the virtual machine; and
generate a backup of the virtual machine based on:
one of the first portion of the policies, and
one of the second portion of the policies.

US Pat. No. 10,922,120

SYSTEM AND METHOD FOR GUIDED SYSTEM RESTORATION

EMC IP Holding Company LL...

1. A remote agent for managing virtual machines, comprising:a persistent storage that stores backup/restoration policies;
a graphical user interface manager; and
a restoration manager programmed to:
obtain a restoration request via a first pane of a graphical user interface generated by the graphical user interface manager, wherein the restoration request is for a virtual machine of the virtual machines, wherein the virtual machines are hosted by production hosts;
in response to obtaining the restoration request:
predict a restoration load for performing a restoration of the virtual machine, wherein the restoration load is based on a number of backups and on a size of each of the number of backups;
perform a resource availability analysis of the production hosts using the restoration load to obtain a list of production hosts for performing a restoration of the virtual machine;
make a first determination that the list specifies at least one production host of the production hosts; and
in response to the first determination:
modify a second pane of the graphical user interface based on the list to obtain a modified second pane;
obtain a user selection of a restoration option displayed in the modified second pane; and
restore the virtual machine using the restoration option and the backup/restoration policies,
wherein performing the resource availability analysis of the production hosts comprises:
identifying a non-restoration load for each of the production hosts, wherein the non-restoration load is identified by at least one of: tracking an amount of data stored in each of the production hosts, tracking an average amount of computing processing performed by each of the production hosts during a predetermined period of time, and tracking memory usage of each of the production hosts during the predetermined period of time;
generating future load estimates for the production hosts based on both of the restoration load and each respective non-restoration load;
identifying a portion of the production hosts that each have more computing resources than a future load estimate of the future load estimates associated with each respective production host; and
populating the list with the identified portion of the production hosts.
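
The resource availability analysis lends itself to a small worked example. The sketch below, with hypothetical names and an arbitrary load formula, estimates a restoration load from the number and sizes of backups, adds each host's non-restoration load to form a future load estimate, and keeps only hosts with more resources than that estimate.

```python
from dataclasses import dataclass

@dataclass
class ProductionHost:
    name: str
    capacity: float               # available computing resources (arbitrary units)
    non_restoration_load: float   # load tracked from stored data, CPU, and memory usage

def predict_restoration_load(backup_sizes):
    # Per the claim, the restoration load is based on the number of backups and their sizes;
    # the weighting here is an arbitrary illustration.
    return len(backup_sizes) * 0.5 + sum(backup_sizes)

def resource_availability_analysis(hosts, restoration_load):
    """Return the hosts whose capacity exceeds the estimated future load
    (restoration load plus the host's own non-restoration load)."""
    candidates = []
    for host in hosts:
        future_load = restoration_load + host.non_restoration_load
        if host.capacity > future_load:
            candidates.append(host.name)
    return candidates

if __name__ == "__main__":
    backup_sizes = [2.0, 3.5, 1.0]                 # sizes of the backups to restore
    load = predict_restoration_load(backup_sizes)
    hosts = [
        ProductionHost("prod-1", capacity=12.0, non_restoration_load=3.0),
        ProductionHost("prod-2", capacity=9.0, non_restoration_load=4.0),
    ]
    print(resource_availability_analysis(hosts, load))   # hosts able to take the restore
```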

US Pat. No. 10,922,119

COMMUNICATIONS BETWEEN VIRTUAL DUAL CONTROL MODULES IN VIRTUAL MACHINE ENVIRONMENT

EMC IP Holding Company LL...

1. A computer-implemented method, comprising:deploying a first virtual control module and a second virtual control module in a virtual storage, the first virtual control module and the second virtual control module being redundant with each other;
creating a virtual Peripheral Component Interconnect Express (PCIe) switch for emulating a physical PCIe switch; and
synchronizing cache data between the first virtual control module and the second virtual control module via the virtual PCIe switch,
wherein deploying a first virtual control module and a second virtual control module comprises deploying a respective virtual PCIe interface within each of the virtual control modules to interface the virtual control modules to the virtual PCIe switch, using single-root I/O virtualization by which the virtual control modules can share a single PCIe hardware interface,
and wherein the virtual PCIe interface includes a transaction layer, a data link layer and a hardware abstraction layer, the transaction layer including a plurality of virtual circuits as master interfaces for communicating with other components in the virtual PCIe interface, the data link layer including a PCIe transmission module for implementing data communication at a link layer, and the hardware abstraction layer being configured and operative to abstract hardware circuitry to provide operational interface to the transaction layer and data link layer in which hardware and virtual hardware are indistinguishable;
and wherein the hardware abstraction layer includes a DMA module configured and operative to provide direct-memory access (DMA) operation between the virtual control modules via the virtual PCIe switch, the DMA operation being used in the synchronizing of cache data between the first virtual control module and the second virtual control module via the virtual PCIe switch.

US Pat. No. 10,922,118

DISTRIBUTED CONTAINER IMAGE REPOSITORY SERVICE

INTERNATIONAL BUSINESS MA...

1. A method for use with a plurality of peer container host nodes, the plurality of peer container host nodes including a first container host node and a second container host node and defining a set of peer container host nodes of the first container host node, the method comprising:receiving, by a first container host node, container image configuration data, the container image configuration data referencing a plurality of images;
determining, by the first container host node, that a first image of the plurality of images is not available at the first container host node;
responsive to the determination that the first image of the plurality of images is not available at the first container host node, multicasting, by the first container host node, to the set of peer container host nodes of the first container host node, a request for the first image;
responsive to the multicasting of the request, receiving, by the first container host node, from a second container host node, the first image;
building, by the first container host node, a container image according to the container image configuration data, the building including using the first image received from the second container host node; and
instantiating, by the first container host node, a container from the container image;
wherein the method includes hashing container images downloaded from a container image repository to (a) the first container host node and (b) container host nodes of the set of peer container host nodes to provide image hash table data that associates container image IDs and container image hash IDs to timestamps that specify a time of performing the hashing, wherein the method includes sending, by the first container host node, an image pull request to pull a certain image from a certain container host node of the set of peer container host nodes, wherein the method includes determining the certain container host node using the image hash table data, the image hash table data including a timestamp that indicates that the certain container host node stores a most recent version of the certain image accessible from the set of peer container host nodes.
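
A minimal sketch of the peer-to-peer image pull described above, assuming in-memory peer objects: a node that lacks an image asks its peers for it, then hashes the downloaded image and records the hash with a timestamp, loosely mirroring the claimed image hash table. Names such as ContainerHostNode and multicast_request are hypothetical, and the "multicast" is reduced to iterating over peers.

```python
import hashlib
import time

class ContainerHostNode:
    """Toy peer node: stores images locally and answers image requests from peers."""

    def __init__(self, name, peers=None):
        self.name = name
        self.peers = peers or []
        self.images = {}              # image ID -> image bytes
        self.image_hash_table = {}    # image ID -> (hash ID, timestamp, source node name)

    def record_download(self, image_id, image_bytes, source_node):
        # Hash downloaded images and remember when and where the hash was taken.
        hash_id = hashlib.sha256(image_bytes).hexdigest()
        self.image_hash_table[image_id] = (hash_id, time.time(), source_node)

    def multicast_request(self, image_id):
        # Ask every peer for the missing image; return the first copy offered.
        for peer in self.peers:
            if image_id in peer.images:
                return peer.images[image_id], peer.name
        return None, None

    def build(self, config):
        # The configuration references the image IDs needed for the build.
        for image_id in config["images"]:
            if image_id not in self.images:
                image, source = self.multicast_request(image_id)
                if image is not None:
                    self.images[image_id] = image
                    self.record_download(image_id, image, source)
        return {"built_from": sorted(self.images)}

if __name__ == "__main__":
    second = ContainerHostNode("node-2")
    second.images["base:1.0"] = b"base layer bytes"
    first = ContainerHostNode("node-1", peers=[second])
    print(first.build({"images": ["base:1.0"]}))
    print(first.image_hash_table)
```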

US Pat. No. 10,922,117

VTPM-BASED VIRTUAL MACHINE SECURITY PROTECTION METHOD AND SYSTEM

Huawei Technologies Co., ...

1. A virtual trusted platform module (vTPM)-based virtual machine security protection method, wherein the method comprises:receiving, by a physical host, a primary seed acquisition request sent by a virtual machine, wherein the primary seed acquisition request includes at least a universally unique identifier (UUID) of the virtual machine and requests acquisition of a primary seed from a key management center (KMC) that is external to the physical host, wherein the primary seed is used by the virtual machine to create a root key of a virtual trusted platform module (vTPM);
sending, by the physical host, the UUID to the KMC, wherein a primary seed is generated by the KMC using the UUID; and
receiving, by the physical host, the primary seed generated by the KMC; and
sending the primary seed to the virtual machine, wherein the primary seed is used by the virtual machine to create the root key of the vTPM, and wherein the root key is used by the vTPM to create a key for the virtual machine to protect security of the virtual machine.
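
A compact sketch of the request/relay flow in the claim above: the physical host forwards the VM's UUID to an external KMC and returns the generated primary seed to the VM. The HMAC-based derivation inside the stand-in KMC is only a placeholder assumption, not the patented key scheme.

```python
import hashlib
import hmac

class KeyManagementCenter:
    """Stand-in KMC: derives a per-VM primary seed from the VM's UUID.
    The HMAC derivation is a placeholder for whatever the real KMC does."""

    def __init__(self, master_secret: bytes):
        self._master_secret = master_secret

    def generate_primary_seed(self, vm_uuid: str) -> bytes:
        return hmac.new(self._master_secret, vm_uuid.encode(), hashlib.sha256).digest()

class PhysicalHost:
    """Relays the primary-seed acquisition request from the VM to the external KMC."""

    def __init__(self, kmc: KeyManagementCenter):
        self._kmc = kmc

    def handle_primary_seed_request(self, vm_uuid: str) -> bytes:
        # Send the UUID to the KMC and return the generated primary seed to the VM,
        # which uses it to create the root key of its vTPM.
        return self._kmc.generate_primary_seed(vm_uuid)

if __name__ == "__main__":
    host = PhysicalHost(KeyManagementCenter(master_secret=b"kmc-master-secret"))
    seed = host.handle_primary_seed_request("123e4567-e89b-12d3-a456-426614174000")
    print(seed.hex())
```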

US Pat. No. 10,922,116

CREATING OPERATING SYSTEM VOLUMES

Hewlett Packard Enterpris...

1. A method, comprising:creating, by a computing device, a volume as a file in memory, wherein the volume is based on an operating system (OS) image;
executing a virtual machine;
attaching the OS image to the virtual machine;
attaching the file to the virtual machine as a disk of the virtual machine;
booting the virtual machine using the attached OS image;
determining, by the virtual machine, a set of advanced configuration power management interface (ACPI) tables for different permutations of hardware;
storing, by the virtual machine, the set of ACPI tables in the file;
modifying, by the virtual machine, the volume based on the set of determined ACPI tables such that the modified volume is bootable by any of the different permutations of hardware; and
storing, by the computing device, the modified volume on a storage device of the computing device.

US Pat. No. 10,922,115

COMMISSIONING OF VIRTUALIZED ENTITIES

TELEFONAKTIEBOLAGET LM ER...

1. A method for configuring a first virtualized entity in a computerized virtualization environment, the method comprising:the first virtualized entity acquiring environment data from the computerized virtualization environment for determining what kind of functionality the first virtualized entity is to provide;
after acquiring the environment data, the first virtualized entity determining, based on the acquired environment data, a functionality the first virtualized entity is to provide;
the first virtualized entity acquiring configuration parameters based on the acquired environment data;
the first virtualized entity configuring itself based on the acquired configuration parameters, wherein the configuring comprises the first virtualized entity applying the acquired configuration parameters;
before the first virtualized entity acquires the environment data from the computerized virtualization environment, a central management node (CMN) of the computerized virtualization environment monitoring performance of the computerized virtualization environment; and
before the first virtualized entity acquires the environment data from the computerized virtualization environment, the CMN determining whether a condition to add a virtualized entity to the computerized virtualization environment is satisfied, wherein the first virtualized entity is created as a result of the CMN determining that the condition is satisfied.
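
To make the commissioning sequence concrete, the sketch below (hypothetical names, toy environment dictionary) has a central management node create a virtualized entity when a scale-out condition is met; the entity then reads environment data to decide its functionality and applies configuration parameters chosen from that data.

```python
class VirtualizedEntity:
    def commission(self, environment):
        # Acquire environment data to determine what functionality to provide.
        needed = environment["missing_function"]
        self.functionality = needed
        # Acquire configuration parameters based on the environment data, then apply them.
        self.config = environment["configs"][needed]

class CentralManagementNode:
    """Toy CMN: monitors environment load and creates a virtualized entity
    when a scale-out condition is satisfied."""

    def __init__(self, environment):
        self.environment = environment

    def monitor_and_scale(self):
        if self.environment["load"] > self.environment["scale_out_threshold"]:
            ve = VirtualizedEntity()
            ve.commission(self.environment)
            return ve
        return None

if __name__ == "__main__":
    env = {
        "load": 0.9,
        "scale_out_threshold": 0.8,
        "missing_function": "packet-gateway",
        "configs": {"packet-gateway": {"ports": 4, "mtu": 9000}},
    }
    ve = CentralManagementNode(env).monitor_and_scale()
    print(ve.functionality, ve.config)
```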

US Pat. No. 10,922,114

SYSTEM AND METHOD TO IMPROVE NESTED VIRTUAL MACHINE MONITOR PERFORMANCE

Intel Corporation, Santa...

1. A processing system, comprising:a first register to store an invalidation mode flag associated with a virtual processor identifier (VPID); and
a processing core, communicatively coupled to the first register, the processing core comprising a logic circuit to execute a virtual machine monitor (VMM) environment, the VMM environment comprising a root mode VMM supporting a non-root mode VMM, the non-root mode VMM to execute a virtual machine (VM) identified by the VPID, the logic circuit further comprising an invalidation circuit to:
execute a virtual processor invalidation (INVVPID) instruction issued by the non-root mode VMM, the INVVPID instruction comprising a reference to an INVVPID descriptor that specifies a linear address and the VPID; and
responsive to determining that the invalidation mode flag is set, invalidate, without triggering a VM exit event, a memory address mapping associated with the linear address.

US Pat. No. 10,922,113

METHOD FOR VEHICLE BASED DATA TRANSMISSION AND OPERATION AMONG A PLURALITY OF SUBSCRIBERS THROUGH FORMATION OF VIRTUAL MACHINES

Volkswagen AG

1. A method for data transmission and operation among a plurality of subscribers including a first vehicle-based subscriber having a vehicle operating system, a second vehicle-based subscriber, and at least one further subscriber, wherein at least one of the first and second vehicle-based subscribers is formed on a vehicle infotainment system of a transportation vehicle and the other of the first and second vehicle-based subscribers is hosted by the same transportation vehicle, and the at least one further subscriber is remote to the same transportation vehicle, the method comprising:forming, from the second vehicle-based subscriber, at least one virtual machine having an operating system different from the vehicle operating system of the first vehicle-based subscriber, wherein the at least one virtual machine forms part of a client-server communication network for communicating with the at least one further subscriber;
performing stateless communication between the second vehicle-based subscriber and the at least one further subscriber in the client-server communication network, such that communication is initiated by a client-based request by one of the subscribers acting as a client in accordance with a first transmission protocol in the client-server communication network, and such that communication is initiated by a server-based event by one of the subscribers acting as a server in accordance with a further transmission protocol, different from the first transmission protocol, in the client-server communication network, and such that the at least one further subscriber transmits persistent data that requires the operating system of the virtual machine different than the vehicle operating system of the first vehicle-based subscriber when communicating to the at least one virtual machine that acts as the server of the client-server communication network; and
operating using the persistent data via the operating system of the virtual machine different than the vehicle operating system of the first vehicle-based subscriber, based on communication between the operating system of the virtual machine and the vehicle operating system of the first vehicle-based subscriber.

US Pat. No. 10,922,112

APPLICATION AWARE STORAGE RESOURCE MANAGEMENT

VMware, Inc., Palo Alto,...

1. A method for provisioning a virtual machine in a virtual machine infrastructure management server comprising:receiving, by a computer, configuration information that identifies a plurality of data devices along communication channels connected between a plurality of virtual machine host computer systems and a plurality of storage devices;
receiving, by the computer, capability information for one or more of the data devices, the capability information comprising a metric relating to how one or more functions are performed by each data device;
receiving, by the computer, a virtual machine (VM) profile for a VM to be provisioned, wherein the VM profile sets forth minimum capabilities or capacities of one or more switch devices for the VM to be provisioned; and
provisioning, by the computer, the VM, including selecting one or more switch devices from among the plurality of data devices using the VM profile, each selected switch device having a capability or capacity that is equal to or greater than the minimum capability or capacity of a switch device set forth in the VM profile, wherein the virtual machine is provisioned using the selected one or more switch devices, wherein a communication channel between said one of the virtual machine host computer systems and one of the storage devices comprises the selected one or more switch devices.
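
The provisioning step reduces, in simplified form, to filtering devices against the minimum capabilities set forth in the VM profile. The sketch below assumes a toy device list and profile format; select_switch_devices and the capability names are illustrative only.

```python
def select_switch_devices(data_devices, vm_profile):
    """Pick switch devices whose reported capability meets or exceeds the minimum
    in the VM profile (a simplified reading of the claimed provisioning step)."""
    selected = []
    for device in data_devices:
        if device["type"] != "switch":
            continue
        meets_all = all(
            device["capabilities"].get(name, 0) >= minimum
            for name, minimum in vm_profile["minimum_capabilities"].items()
        )
        if meets_all:
            selected.append(device["name"])
    return selected

if __name__ == "__main__":
    devices = [
        {"name": "switch-a", "type": "switch",
         "capabilities": {"bandwidth_gbps": 32, "queue_depth": 2048}},
        {"name": "switch-b", "type": "switch",
         "capabilities": {"bandwidth_gbps": 8, "queue_depth": 512}},
    ]
    profile = {"minimum_capabilities": {"bandwidth_gbps": 16, "queue_depth": 1024}}
    print(select_switch_devices(devices, profile))   # ['switch-a']
```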

US Pat. No. 10,922,111

INTERRUPT SIGNALING FOR DIRECTED INTERRUPT VIRTUALIZATION

INTERNATIONAL BUSINESS MA...

1. A computer program product for providing an interrupt signal to a guest operating system executed using one or more processors of a plurality of processors of a computer system assigned for usage by the guest operating system, the computer program product comprising:at least one computer readable storage medium readable by at least one processing circuit and storing instructions for performing a method comprising:
receiving, by a bus attachment device from a bus connected module of a plurality of bus connected modules operationally coupled to the plurality of processors via the bus attachment device, an interrupt signal with an interrupt target ID, the interrupt target ID identifying one processor of the plurality of processors assigned for usage by the guest operating system as a target processor to handle the interrupt signal;
selecting, by the bus attachment device, a directed interrupt signal vector assigned to the interrupt target ID to which the interrupt signal is addressed;
selecting, by the bus attachment device in the directed interrupt signal vector, a directed interrupt signal indicator assigned to the bus connected module which issued the interrupt signal;
updating, by the bus attachment device, the directed interrupt signal indicator such that the directed interrupt signal indicator indicates that there is the interrupt signal issued by the bus connected module and addressed to the interrupt target ID to be handled; and
forwarding, by the bus attachment device, the interrupt signal to the target processor.

US Pat. No. 10,922,110

METHOD FOR STORING DATA IN A VIRTUALIZED STORAGE SYSTEM

Bull Sas, Les Clayes-sou...

1. A method for storing data of an application running on a virtual machine managed by a virtualization hypervisor, in a virtualized storage system corresponding to the emulation of at least one magnetic tape and at least one associated magnetic tape drive at least partially unsupported by and incompatible with the virtualization hypervisor, said virtualized storage system being neither a real magnetic tape nor a real magnetic tape drive, said virtualized storage system providing sequential data access for reads and writes via a network-level data exchange protocol which is not dedicated to data storage and which is supported by the virtualization hypervisor, said data exchange protocol performing an encapsulation of commands, incompatible with and unsupported by the virtualization hypervisor, from the virtualized storage device so as to render the commands transparent to the virtualization hypervisor which, as it no longer sees the incompatible characteristics of the virtualized storage device, accepts supporting the incompatible commands without obstruction, thereby bypassing the incompatibility between the virtualization hypervisor and the virtualized storage device.

US Pat. No. 10,922,109

COMPUTER ARCHITECTURE FOR EMULATING A NODE IN A CORRELITHM OBJECT PROCESSING SYSTEM

Bank of America Corporati...

1. A device configured to emulate a node in a correlithm object processing system, comprising:a memory operable to store a node table that identifies:
a plurality of source correlithm objects, wherein each source correlithm object is a point in a first n-dimensional space represented by a binary string; and
a plurality of target correlithm objects, wherein:
each target correlithm object is a point in a second n-dimensional space represented by a binary string, and
each target correlithm object is linked with a source correlithm object from among the plurality of source correlithm objects; and
a node operably coupled to the memory, the node implemented by a processor and configured to:
receive an input correlithm object;
determine n-dimensional distances between the input correlithm object and each of the source correlithm objects in the node table in response to receiving the input correlithm object;
determine that the input correlithm object is not within an n-dimensional distance threshold from any of the source correlithm objects in the node table;
add the input correlithm object to the node table as a new source correlithm object in response to determining that the input correlithm object is not within the n-dimensional distance threshold from any of the source correlithm objects already in the node table; and
link a new target correlithm object to the new source correlithm object in the node table.
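
Reading the binary-string points as bit strings, the n-dimensional distance can be illustrated with a Hamming distance. The sketch below (hypothetical Node class and threshold) returns the linked target when an input falls within the threshold of a stored source correlithm object, and otherwise adds the input as a new source object linked to a new target.

```python
def hamming_distance(a: str, b: str) -> int:
    # The n-dimensional distance between two points represented as equal-length binary strings.
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

class Node:
    """Toy node table: each source correlithm object is linked to a target correlithm object."""

    def __init__(self, distance_threshold: int):
        self.table = {}                 # source binary string -> target binary string
        self.threshold = distance_threshold

    def process(self, input_obj: str, new_target: str) -> str:
        distances = {src: hamming_distance(input_obj, src) for src in self.table}
        if distances and min(distances.values()) <= self.threshold:
            closest = min(distances, key=distances.get)
            return self.table[closest]
        # Not within the threshold of any stored source object: add the input as a new
        # source correlithm object and link a new target correlithm object to it.
        self.table[input_obj] = new_target
        return new_target

if __name__ == "__main__":
    node = Node(distance_threshold=2)
    print(node.process("101010", "111111"))   # added as a new source/target pair
    print(node.process("101011", "000000"))   # within threshold -> existing target returned
```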

US Pat. No. 10,922,108

INFORMATION PROCESSING APPARATUS, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING PROGRAM

Ricoh Company, Ltd., Tok...

1. An information processing apparatus comprising:a processor; and
a memory storing program instructions that cause the processor to
execute an installation process including
installing a program,
validating provisional installation information for the program, the validated provisional installation information indicating that the program has not been executed after installation of the program,
executing the program upon installation of the program, and
invalidating the provisional installation information in a case where the execution of the program upon the installation of the program is successful, the invalidated provisional installation information indicating that a normal execution of the program has been checked upon installation; and
execute a launch process of the program installed in the information processing apparatus upon starting up the information processing apparatus, the launch process including
launching the program installed in the information processing apparatus; and
determining whether the provisional installation information of the program is validated when the program is launched upon starting up the information processing apparatus wherein the program is not launched in a case where it is determined that the provisional installation information is validated.
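
A small state-machine sketch of the provisional installation information described above, under the assumption that a program can be modeled as a callable: the flag is validated at install time, invalidated only when the first execution succeeds, and a still-validated flag prevents the program from launching at startup. InstallationManager and its method names are hypothetical.

```python
class InstallationManager:
    """Toy model of the provisional-installation flag: validated at install time,
    invalidated only if the program's first execution succeeds, and a still-validated
    flag blocks the program from launching at the next startup."""

    def __init__(self):
        self.programs = {}         # program name -> callable standing in for the program
        self.provisional = {}      # program name -> True while the first run is unverified

    def install(self, name, program):
        self.programs[name] = program
        self.provisional[name] = True          # validate provisional installation information
        try:
            program()                           # execute the program upon installation
            self.provisional[name] = False      # success: invalidate the provisional information
        except Exception:
            pass                                # failure: the flag stays validated

    def launch(self, name):
        # At startup, a program whose provisional information is still validated is not launched.
        if self.provisional.get(name, False):
            return f"{name}: not launched (first execution after installation never succeeded)"
        self.programs[name]()
        return f"{name}: launched"

def failing_program():
    raise RuntimeError("first run failed")

if __name__ == "__main__":
    mgr = InstallationManager()
    mgr.install("viewer", lambda: None)        # first run succeeds
    mgr.install("broken", failing_program)     # first run fails
    print(mgr.launch("viewer"))
    print(mgr.launch("broken"))
```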

US Pat. No. 10,922,107

APPARATUS AND METHOD FOR REALIZING RUNTIME SYSTEM FOR PROGRAMMING LANGUAGE

International Business Ma...

1. An apparatus for realizing a runtime system for a class-based object-oriented programming language, comprising:a storage unit configured to store a first class that is an existing class in the class-based object-oriented programming language, and a second class that is a class that includes a member that is accessible from outside of the first class and that is a class which is specialized for a specific use, wherein the first class comprises a java.math.BigInteger class, and wherein the second class comprises a 32-bit field containing the bit length of the bit string and a length until a first non-zero bit appears in the bit string; and
a processing unit configured to perform processing using the second class in accordance with a predetermined instruction in software that realizes the runtime system, wherein performing processing using the second class further comprises storing a bit string having a bit length less than 128, and further configured to perform processing using the first class in accordance with an instruction to check an identity of the second class in a user program product that is executed by the runtime system.

US Pat. No. 10,922,106

SYSTEMS AND METHODS FOR PROVIDING GLOBALIZATION FEATURES IN A SERVICE MANAGEMENT APPLICATION INTERFACE

Cherwell Software, LLC, ...

1. A computer-implemented method comprising:receiving one or more text strings written in a first language, wherein a first component comprises the one or more text strings, and a second component comprises the one or more text strings, wherein the first component and the second component are within an application program;
identifying, within the one or more text strings, a definition comprising at least one function written in the first language;
identifying, within the definition, at least one embedded function corresponding to the at least one function and written in a second language and a third language;
receiving a first indication of the first component to be translated to the second language;
receiving a second indication of the second component to be translated to the third language;
converting, based at least in part on the at least one embedded function and the first indication, the one or more text strings associated with the first component to first translated text strings written in the second language and comprising at least a first portion of the definition;
converting, based at least in part on the at least one embedded function and the second indication, the one or more text strings associated with the second component to second translated text strings written in the third language and comprising at least a second portion of the definition;
altering the first component, wherein the first component comprises one or more of a database schema, object definition, and/or object definition storage to facilitate localization and translation without manual input from a user;
altering the second component;
displaying the first translated text strings; and
displaying the second translated text strings.

US Pat. No. 10,922,105

REALTIME GENERATED ASSISTANCE VIDEO

International Business Ma...

1. A computer-implemented method comprising:detecting a user performing a first process on a first computing device, wherein the first process includes one or more discrete tasks;
retrieving, based on detection of the first process, a process flow graph related to the first process, the process flow graph having been generated before the performing of the first process;
identifying, based on the first process and based on the process flow graph, a user action that relates to the one or more discrete tasks of the first process;
determining, based on the process flow graph and based on identification of the user action, a current state of the first process; and
generating, based on the current state of the first process and based on the user action, a first video that depicts one or more future actions that may be performed by the user to successfully perform the first process on the first computing device.

US Pat. No. 10,922,104

SYSTEMS AND METHODS FOR DETERMINING AND PRESENTING A GRAPHICAL USER INTERFACE INCLUDING TEMPLATE METRICS

Asana, Inc., San Francis...

1. A system configured to provide a graphical user interface including template metrics, the system comprising:one or more hardware processors configured by machine-readable instructions to:
manage environment state information maintaining a collaboration environment, the collaboration environment being configured to facilitate interaction by users with the collaboration environment, the environment state information including user records and work unit records, the user records including values of user parameters corresponding to the users interacting with and viewing the collaboration environment, the work unit records including values for work unit parameters associated with units of work managed by individual users, created by the individual users, and assigned to the individual users within the collaboration environment;
manage templates for the work unit records, wherein the templates for the work unit records pre-populate the values for a portion of the work unit parameters in the work unit records;
create individual sets of the work unit records based on individual ones of the templates;
monitor the units of work created using the templates to determine template information by tracking one or more of the users associated with the units of work, status updates for the units of work, the interaction by the users with the units of work, or changes to the values for the work unit parameters associated with the units of work;
determine template metric values for template metrics associated with the templates based on the template information, the template metrics including one or more of a completion metric, a collaboration metric, or a personalized metric; and
effectuate presentation of a graphical user interface including the templates and the template metric values for the template metrics associated with the templates.

US Pat. No. 10,922,103

ELECTRONIC TRANSACTION METHOD AND APPARATUS

1. An electronic product transaction method at a merchant aggregator server operated by a merchant aggregator service provider, the method comprising:generating and outputting an interface to a user, the interface comprising a plurality of selectable merchant options, each merchant option comprising a link to an interface for a plurality of selectable product options for a respective merchant, each product option comprising information on a product offered for sale, rent or hire by the respective merchant, wherein the interfaces are hosted by the merchant aggregator server;
receiving, from the user, a selection of a plurality of merchant options and a plurality of product options for the plurality of merchants using the interfaces for the purchase of the products; and
generating and outputting a merchant interface to receive product data from the merchants to generate the product options for the merchant option for each merchant, including receiving code from the merchants to generate interfaces for at least one of the merchant options and the product options on the merchant aggregator server, and modifying the code so that any links and functions in the code operate on the merchant aggregator server.

US Pat. No. 10,922,102

METHOD OF CONTROLLING APPLICATIONS IN A TERMINAL AND TERMINAL

BOE TECHNOLOGY GROUP CO.,...

1. A method of controlling applications in a terminal, the method comprising:receiving a user operation instruction during a display of a first operation interface of a first application by the terminal, wherein the user operation instruction includes a voice instruction or a gesture instruction that is not in contact with the terminal;
determining a first control corresponding to the user operation instruction from at least one control on the first operation interface, which includes:
obtaining a first control instruction corresponding to both the first operation interface and the user operation instruction from a database; and
obtaining identification information of the first control corresponding to the first control instruction from the database;
executing a response program of the first control;
wherein, the method further comprising:
displaying setting information of applications installed in the terminal, wherein the setting information is used to indicate whether configuration files of the applications are included in the database, wherein,
if setting information of any application indicates that a configuration file of the application is included in the database, receiving a first setting instruction directed at the application, and deleting the configuration file of the application from the database according to the first setting instruction; and
if setting information of any application indicates that a configuration file of the application is not included in the database, receiving a second setting instruction directed at the application, and adding the configuration file of the application into the database according to the second setting instruction.

US Pat. No. 10,922,101

USER INTERFACE WIDGET RECOMMENDATION

International Business Ma...

1. A computer-implemented method comprising:receiving, by one or more processors, a plurality of widgets from one or more sources, each widget including one or more components that perform a specific user function;
applying, by one or more processors, natural language processing to the plurality of widgets to determine features wherein:
the features include contexts and layouts associated with the plurality of widgets, and
applying the natural language processing includes using computer vision techniques: (i) to identify the contexts and layouts for the plurality of widgets which are non-declarative widgets and (ii) to find the non-declarative widgets by using background color and edge detection;
training, by one or more processors, a widget classifier based on the determined features, the widget classifier predicting a widget type;
training, by one or more processors, a component classifier based on the widget type associated with the determined features, the component classifier predicting a component type and a component element type;
presenting, by one or more processors, a first widget to a user based on the trained widget classifier, the trained component classifier, and an input of the user; and
performing, by one or more processors, an inference operation to identify a second widget to present to the user.

US Pat. No. 10,922,100

METHOD AND ELECTRONIC DEVICE FOR CONTROLLING DISPLAY

Samsung Electronics Co., ...

1. An electronic device comprising:a housing including a first plate and a second plate facing in a direction opposite the first plate;
a touch screen display located between the first plate and the second plate, viewable through the first plate, and having a first aspect ratio;
a wireless communication circuit located within the housing;
at least one processor located within the housing and electrically connected to the display and the wireless communication circuit; and
a memory located within the housing and electrically connected to the processor,
wherein the memory is configured to store at least one application and further store instructions, which when executed, cause the processor to:
execute an application, from among the at least one application, selected from a home screen;
acquire a second aspect ratio of a user interface of the application,
compare the second aspect ratio with the first aspect ratio,
when the second aspect ratio is smaller than the first aspect ratio, display the user interface in a first area of the touch screen display having a ratio substantially equal to the second aspect ratio, and display function buttons that are for navigating to other applications and are not a part of the application, in a second area of the touch screen display that does not overlap the first area, and
when the second aspect ratio is substantially equal to the first aspect ratio, display the user interface in substantially an entire area of the touch screen display, and
display the function buttons in the second area of the touch screen display, such that the function buttons overlap the user interface,
wherein the function buttons include a first function button and a second function button, and
wherein the first function button and the second function button are displayed in the second area based on a size of the second area being greater than or equal to a first size, and a third function button, having a first function of the first function button and a second function of the second function button, is displayed in the second area based on the size of the second area being smaller than the first size.
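
The display logic amounts to an aspect-ratio comparison plus a size check on the second area. The sketch below is an assumed, simplified decision function; the button names, pixel values, and plan_layout signature are illustrative, not taken from the patent.

```python
def plan_layout(display_ratio, app_ratio, second_area_height, min_height_for_two_buttons):
    """Rough sketch of the claimed display logic: letterbox the application UI when its
    aspect ratio is smaller than the display's, and collapse the two function buttons
    into a single combined button when the second area is too small."""
    layout = {}
    if app_ratio < display_ratio:
        layout["app_area"] = "first area matching the application's aspect ratio"
        layout["buttons_area"] = "second area outside the application UI"
    else:
        layout["app_area"] = "entire display"
        layout["buttons_area"] = "second area overlapping the application UI"
    if second_area_height >= min_height_for_two_buttons:
        layout["buttons"] = ["back", "home"]             # first and second function buttons
    else:
        layout["buttons"] = ["combined back+home"]        # third button merging both functions
    return layout

if __name__ == "__main__":
    # Example: an 18.5:9 display showing a 16:9 application UI, with a 96 px second area.
    print(plan_layout(display_ratio=18.5 / 9, app_ratio=16 / 9,
                      second_area_height=96, min_height_for_two_buttons=72))
```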

US Pat. No. 10,922,099

METHODS FOR USER INTERFACE GENERATION AND APPLICATION MODIFICATION

VERSATA FZ-LLC, Austin, ...

1. A method of modifying behavior of a user interface comprising a plurality of user interface elements, the method comprising:performing by a computer system programmed with code stored in a memory and executing by a processor of the computer system to configure the computer system into a machine for:
detecting an event representing activity with the user interface at a communicating object representing one of a plurality of communicating objects, wherein a model of the user interface is defined using the plurality of communicating objects;
receiving, via a message bus, a notification of occurrence of the detected event that indicates the activity associated with the event;
processing the notification of the event detected at one of the plurality of communicating objects to determine a modification from a first behavior of a first function of a first application computer program associated with the communicating object to a second behavior of a second function of the first application computer program in response to the detected event to modify the user interface, wherein processing the notification of the event comprises:
copying a portion of the first function of the first application computer program to be modified from a first memory location to a second memory location and adding, in the second memory location, a return call to a location after the first memory location so as to return to the first application computer program after modifying the behavior of the user interface;
replacing the copied portion of the first application computer program with a function call located within the first memory location, wherein the function call is to the second function to modify the user interface with the second behavior;
executing the function call to the second function to replace the first function with the second function in response to the detected event;
executing the second function;
modifying the behavior of the user interface from the first behavior to the second behavior in accordance with the executed second function;
returning to the portion of the first application computer program copied to the second memory location; and
executing the portion of the first application computer program copied to the second memory location to return control to the first application computer program copied to the first memory location.
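
The claim describes what is commonly known as a detour or trampoline hook: copy the start of a function elsewhere, overwrite it with a call to the replacement, and return through the copied portion. Since portable Python cannot patch machine code, the sketch below simulates the two memory locations as lists of instruction strings; it is an analogy for illustration, not the claimed mechanism.

```python
# A toy "memory" of instruction strings mirrors the claimed steps: copy the start of the
# first function to a second location, overwrite it with a call to the second
# (UI-modifying) function, and return through the copied portion afterwards.

def build_memory():
    # First memory location: the original first function of the application program.
    first_location = ["draw_button_plain", "handle_click", "rest_of_first_function"]
    second_location = []          # second memory location, initially empty
    return first_location, second_location

def install_hook(first_location, second_location):
    copied = first_location[:1]                          # portion of the first function to modify
    second_location.extend(copied)
    second_location.append("return_to_first_function")   # return call back to the original flow
    first_location[0] = "call second_function"           # replace the copied portion with a call

def execute(first_location, second_location):
    trace = []
    for instruction in first_location:
        if instruction == "call second_function":
            trace.append("second_function: draw_button_highlighted")  # modified UI behavior
            trace.extend(second_location)                              # run the copied portion
        else:
            trace.append(instruction)
    return trace

if __name__ == "__main__":
    first, second = build_memory()
    install_hook(first, second)
    for step in execute(first, second):
        print(step)
```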

US Pat. No. 10,922,098

DSP EXECUTION SLICE ARRAY TO PROVIDE OPERANDS TO MULTIPLE LOGIC UNITS

Micron Technology, Inc., ...

1. An apparatus, comprising:a plurality of configurable logic blocks including a plurality of digital signal processing (DSP) slices; and
an interconnect configured to connect the plurality of configurable logic blocks,
wherein a first DSP slice of the plurality of DSP slices is connected to at least a second DSP slice of the plurality of DSP slices, the first DSP slice comprising:
a plurality of configurable logic units including a first configurable logic unit and a second configurable logic unit;
an operation mode control configured to receive a control signal indicating an operation mode of the first DSP slice;
a first operand register configured to communicate a first operand to the first configurable logic unit; and
a switch configured to receive, directly from the interconnect, the first operand, wherein the switch is further configured to receive, directly from the interconnect, a second operand, wherein the switch is further configured to receive the control signal, wherein the switch is further configured to communicate the first operand or the second operand to the second configurable logic unit based on the control signal, and wherein the interconnect is configured to provide the first operand to the first operand register and the switch.

US Pat. No. 10,922,097

COLLABORATIVE MODEL EXECUTION

International Business Ma...

1. A computing node comprising:a network interface configured to receive a request to execute a software model that has been decomposed into a plurality of sequential sub-components; and
a processor configured to execute a sub-component from among the plurality of sub-components based on input data included in the received request to generate output data, hash the input data and the output data to generate a hashed execution result of the sub-component, and store the hashed execution result of the sub-component within a block among a hash-linked chain of blocks which include hashed execution results of other sub-components of the software model executed by other nodes;
wherein the processor is further configured to aggregate the hashed execution result of the sub-component with the hashed execution results of the other sub-components to generate an aggregated execution result of the software model.
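
A minimal sketch of the hashed, chained execution results, assuming each sub-component is just a callable: the input and output of a sub-component are hashed, each hash is stored in a block linked to the previous block's hash, and the per-block results are aggregated into a single digest. The JSON/SHA-256 choices are assumptions for illustration.

```python
import hashlib
import json

def execute_sub_component(sub_component, input_data):
    """Run one sub-component of the decomposed model and hash its input and output together."""
    output_data = sub_component(input_data)
    digest = hashlib.sha256(
        json.dumps({"in": input_data, "out": output_data}, sort_keys=True).encode()
    ).hexdigest()
    return output_data, digest

def append_block(chain, hashed_result):
    # Each block links to the previous block's hash, forming a hash-linked chain.
    previous_hash = chain[-1]["block_hash"] if chain else "0" * 64
    payload = json.dumps({"prev": previous_hash, "result": hashed_result}).encode()
    chain.append({"prev": previous_hash,
                  "result": hashed_result,
                  "block_hash": hashlib.sha256(payload).hexdigest()})

def aggregate(chain):
    # Aggregate the hashed execution results of all sub-components into one digest.
    combined = "".join(block["result"] for block in chain)
    return hashlib.sha256(combined.encode()).hexdigest()

if __name__ == "__main__":
    chain = []
    data = 3
    for stage in (lambda x: x * 2, lambda x: x + 1):      # two sequential sub-components
        data, hashed = execute_sub_component(stage, data)
        append_block(chain, hashed)
    print("aggregated execution result:", aggregate(chain))
```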

US Pat. No. 10,922,096

REDUCING SUBSEQUENT NETWORK LAUNCH TIME OF CONTAINER APPLICATIONS

VMware, Inc., Palo Alto,...

1. In a computer system configured to support execution of a virtualized application, wherein files of the virtualized application are stored in and retrieved from network storage, a method of launching the virtualized application, said method comprising:performing a first launch of the virtualized application, wherein the performing of the first launch comprises executing a driver to:
fetch, from the network storage, a subset of the files of the virtualized application that are required for launching the virtualized application, and
store the subset of the files in a system memory of the computer system;
closing the virtualized application;
performing a second launch of the virtualized application;
creating a file object for each file of the fetched subset of the files, each file object having a pointer; and
modifying the pointer of each file object to point to a location in the system memory, the location in the system memory having data stored thereon from a previous launch of the file with which the file object is associated.

US Pat. No. 10,922,095

SOFTWARE APPLICATION PERFORMANCE REGRESSION ANALYSIS

SALESFORCE.COM, INC., Sa...

19. A method comprising:retrieving, by a computer system, a plurality of values of a performance metric for a software application, each respective value associated with a respective time period;
identifying, by the computer system, a regression based on a value from the plurality of values that is beyond a predetermined threshold;
retrieving, by the computer system, metadata associated with the respective time period for the identified value;
retrieving, by the computer system, profile data associated with the performance metric prior to the respective time period for the identified value;
retrieving, by the computer system, profile data associated with the performance metric subsequent to the respective time period for the identified value; and
identifying, by the computer system, a portion of the software application associated with the regression based on: the metadata, the profile data associated with the performance metric prior to the respective time period for the identified value, and the profile data associated with the performance metric subsequent to the respective time period for the identified value, wherein identifying the portion of the software application associated with the regression includes building an aggregated data set based on the profile data associated with the performance metric prior to the respective time period and the profile data associated with the performance metric subsequent to the respective time period.
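
The detection and localization steps can be illustrated with a few lines of arithmetic. The sketch below flags the first metric value beyond a threshold and then aggregates per-method profile samples from before and after that period, reporting the method whose cost grew the most; the growth-based heuristic is an assumption, not the claimed analysis.

```python
from collections import defaultdict

def find_regression(metric_values, threshold):
    # Return the index of the first value beyond the predetermined threshold, if any.
    for period, value in enumerate(metric_values):
        if value > threshold:
            return period
    return None

def localize_regression(profile_before, profile_after):
    """Aggregate per-method profile samples from before and after the regressed period
    and report the method whose cost grew the most."""
    aggregated = defaultdict(lambda: [0.0, 0.0])
    for method, cost in profile_before:
        aggregated[method][0] += cost
    for method, cost in profile_after:
        aggregated[method][1] += cost
    return max(aggregated, key=lambda m: aggregated[m][1] - aggregated[m][0])

if __name__ == "__main__":
    response_times_ms = [110, 112, 109, 180, 178]       # one value per time period
    period = find_regression(response_times_ms, threshold=150)
    before = [("render", 40), ("query", 30)]
    after = [("render", 42), ("query", 95)]
    print("regressed period:", period)
    print("suspect portion:", localize_regression(before, after))   # 'query'
```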

US Pat. No. 10,922,094

SYSTEMS AND METHODS FOR PROACTIVELY PROVIDING RECOMMENDATIONS TO A USER OF A COMPUTING DEVICE

Apple Inc., Cupertino, C...

1. A method for proactively providing predictions to a user of a mobile device, the method comprising, at a prediction center executing on the mobile device:for each prediction engine of a plurality of prediction engines executing locally on the mobile device:
receiving, from the prediction engine, a respective request for the prediction engine to function as an expert on at least one prediction category of a plurality of prediction categories, and
updating a configuration of the prediction center to reflect a registration of the prediction engine as an expert on the at least one prediction category;
receiving, from a search application executing on the mobile device, a request for an optimized list of applications that the user is likely to activate, wherein the request corresponds to a particular prediction category, and the particular prediction category relates to applications that the user is likely to activate;
identifying, among the plurality of prediction engines, at least two prediction engines that are registered as experts on the particular prediction category;
issuing, to each prediction engine of the at least two prediction engines, requests for respective lists of applications that the user is likely to activate;
receiving the respective lists from the at least two prediction engines;
processing the respective lists of applications to generate the optimized list of applications; and
providing the optimized list to the search application.
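
A toy version of the expert-registration and fan-out pattern above: prediction engines register for categories, a request for a category is sent to every registered engine, and the returned lists are merged by a simple rank score into one optimized list. The scoring rule and engine names are illustrative assumptions.

```python
from collections import defaultdict

class PredictionCenter:
    """Toy prediction center: engines register as experts on categories; a request for a
    category is fanned out to the registered engines and their lists are merged."""

    def __init__(self):
        self.experts = defaultdict(list)    # prediction category -> engines

    def register(self, engine, categories):
        for category in categories:
            self.experts[category].append(engine)

    def predict(self, category):
        lists = [engine(category) for engine in self.experts[category]]
        # Merge the per-engine lists by summed rank score to produce the optimized list.
        scores = defaultdict(float)
        for ranked in lists:
            for rank, app in enumerate(ranked):
                scores[app] += len(ranked) - rank
        return sorted(scores, key=scores.get, reverse=True)

def recents_engine(category):
    return ["mail", "maps", "camera"]

def time_of_day_engine(category):
    return ["camera", "mail"]

if __name__ == "__main__":
    center = PredictionCenter()
    center.register(recents_engine, ["apps_likely_to_activate"])
    center.register(time_of_day_engine, ["apps_likely_to_activate"])
    print(center.predict("apps_likely_to_activate"))
```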

US Pat. No. 10,922,093

DATA PROCESSING DEVICE, PROCESSOR CORE ARRAY AND METHOD FOR CHARACTERIZING BEHAVIOR OF EQUIPMENT UNDER OBSERVATION

Blue Yonder Group, Inc., ...

1. A data processing device for characterizing behavior properties of equipment under observation, the data processing device comprising:a processor core array comprising two or more processor cores configured to select processor cores of the two or more processor cores as a first processing stage and processor cores of the two or more processor cores as a second processing stage according to an availability of the processor cores and according to a presence of equipment data values from an equipment under observation, wherein:
the processor cores of the first processing stage receive two or more equipment data values from the equipment under observation as input and provide two or more intermediate data values as output, according to two or more first numerical transfer functions, and wherein the processor cores of the second processing stage receive the two or more intermediate data values as input and provide behavior data as output values according to a second numerical transfer function;
the selected processor cores process input values based on numerical transfer functions to output values by implementing an input-to-output mapping based on a configuration obtained by a pre-processing of historic data from two or more master equipment and related to the behavior properties of the equipment under observation; and
the output values predict future behavior properties of the equipment under observation.

US Pat. No. 10,922,092

ADMINISTRATOR-MONITORED REINFORCEMENT-LEARNING-BASED APPLICATION MANAGER

VMware, Inc., Palo Alto,...

1. An automated administrator-monitored reinforcement-learning-based application manager that manages one or more applications and a computing environment within which the one or more applications run, the computing environment comprising one or more of a distributed computing system having multiple computer systems interconnected by one or more networks, a standalone computer system, and a processor-controlled user device, the administrator-monitored reinforcement-learning based application manager comprising:one or more processors, one or more memories, and one or more communications subsystems;
a control loop that iteratively receives a reward and an observation from the computing environment and, in response to the received reward and observation, consults an internally maintained policy π to determine a next action to issue to the computing environment;
an action-proposal subsystem through which the reinforcement-learning-based application manager, prior to issuing a next action to the computing environment, proposes the next action to a human administrator and through which the reinforcement-learning-based application manager receives a proposed-action result; and
decision logic that uses the proposed-action result to determine to either issue the next action, determine an alternative action for issuance, or issue no action.
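
A minimal control-loop sketch under the stated structure: receive a reward and observation, consult a policy for the next action, propose it to an administrator, and issue the action, an alternative, or nothing depending on the result. The policy, the administrator verdicts, and the alternative action are all placeholder assumptions.

```python
import random

def control_loop(environment, policy, administrator, steps=3):
    """Sketch of the claimed loop: reward/observation in, policy consulted, action proposed
    to a human administrator, and the proposed-action result decides what is issued."""
    for _ in range(steps):
        reward, observation = environment.step_result()
        action = policy(observation, reward)
        verdict = administrator(action)                   # action-proposal subsystem
        if verdict == "approve":
            environment.issue(action)
        elif verdict == "substitute":
            environment.issue("scale_up_by_one")           # alternative action (illustrative)
        # "reject" -> issue no action this iteration

class Environment:
    def __init__(self):
        self.actions = []
    def step_result(self):
        return random.random(), {"cpu_load": random.random()}
    def issue(self, action):
        self.actions.append(action)

if __name__ == "__main__":
    random.seed(0)
    env = Environment()
    policy = lambda obs, reward: "scale_up_by_two" if obs["cpu_load"] > 0.5 else "no_op"
    admin = lambda action: "approve" if action == "no_op" else "substitute"
    control_loop(env, policy, admin)
    print(env.actions)
```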

US Pat. No. 10,922,091

DISTRIBUTED REALTIME EDGE-CORE ANALYTICS WITH FEEDBACK

Hitachi, Ltd., Tokyo (JP...

1. An apparatus configured to manage a plurality of edge nodes, the apparatus configured to receive streaming data from the plurality of edge nodes, the plurality of edge nodes configured to output the streaming data based on first parameters, the apparatus comprising:a memory, configured to manage state information for each of the plurality of edge nodes, the state information comprising the first parameters of each of the plurality of edge nodes; and
a processor, configured to, for when an update is issued from the apparatus to the plurality of edge nodes to update the first parameters to second parameters:
determine, from the state information, a first mode or a second mode based on edge nodes that have applied the second parameters, wherein the second mode is indicated by a number of the plurality of edge nodes operating on the second parameters exceeding a threshold number of edge nodes;
for the determination indicative of the first mode, conduct reprocessing on the streaming data from the plurality of edge nodes based on the second parameters, and conduct analytics on the reprocessed streaming data; and
for the determination indicative of the second mode, conduct analytics on the streaming data,
wherein the streaming data output by the plurality of edge nodes comprises temperature data of the plurality of edge nodes and the conducting analytics comprises conducting analytics on the temperature data.

US Pat. No. 10,922,090

METHODS AND SYSTEMS FOR EXECUTING A SOFTWARE APPLICATION USING A CONTAINER

EMC IP HOLDING COMPANY LL...

1. A computer-implemented method for running an application program on a database host, the method comprising:receiving a container image name from an application server;
communicating with a daemon installed on the database host to command the daemon to retrieve a container image of the application program based on the container image name, wherein the daemon is configured to:
determine whether the daemon includes the container image,
retrieve the container image from a container registry of a registry server in response to determining that the daemon does not include the container image,
start a container from the container image, the container comprising an instance of the application program, wherein the container includes dependencies to natively run the instance of the application program on the database host, and wherein the instance of the application program runs within the container natively on the database host to query data from a database of the database host and generate one or more outputs based on the queried data, and
the container communicates one or more outputs generated by the instance of the application program to the application server;
inputting credentials of the database as an environment variable of an environment in which the instance of the application program runs to provide the instance of the application program running within the container access to a database within the database host; and
duplicating the container using the container image to start a second container, the container and the second container to execute instances of the application program in parallel.

US Pat. No. 10,922,089

MOBILE SERVICE APPLICATIONS

GROUPON, INC., Chicago, ...

1. A mobile device configured for providing a mobile application, the mobile application including a plurality of service applications, the mobile device comprising circuitry configured to:determine a dependency tree for the mobile application, wherein the dependency tree defines startup and shutdown dependencies among the plurality of service applications, wherein each service application is configured to execute within a separate thread upon startup, and wherein each thread is associated with a unique execution context;
determine to execute a first service application of the plurality of service applications, wherein the first service application is configured to generate an application page for presentation on at least a portion of a user interface operating on the mobile device, and
responsive to receiving a service application request comprising application page input parameters:
determine, based on the dependency tree, one or more parent service applications of the first service application;
startup the one or more parent service applications based on the dependency tree in one or more threads;
subsequent to starting up the one or more parent service applications, startup the first service application in a first thread separate from the one or more threads;
subsequent to starting up the first service application, execute the first service application within the first thread; and
provide, based at least in part on the application page input parameters, the application page corresponding with the first service application for presentation on the at least a portion of the user interface.
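
The startup dependencies can be illustrated with a depth-first walk of a dependency tree that starts parents before dependents, each in its own thread. The service names, the tree, and the join-based sequencing below are assumptions made for a runnable example, not the patented mobile framework.

```python
import threading

# Hypothetical dependency tree: each service application lists the parents that must be
# started before it (a simplified reading of the claimed startup dependencies).
DEPENDENCY_TREE = {
    "deal_page": ["auth", "network"],
    "auth": ["network"],
    "network": [],
}

def startup_order(app, tree, seen=None):
    # Depth-first walk of the dependency tree: parents first, then the app itself.
    seen = seen if seen is not None else []
    for parent in tree.get(app, []):
        startup_order(parent, tree, seen)
    if app not in seen:
        seen.append(app)
    return seen

def start_service(app):
    print(f"{app} running in thread {threading.current_thread().name}")

if __name__ == "__main__":
    order = startup_order("deal_page", DEPENDENCY_TREE)
    for app in order:
        # Each service application executes within its own thread upon startup.
        t = threading.Thread(target=start_service, args=(app,), name=f"{app}-thread")
        t.start()
        t.join()   # wait so that parents are up before dependents start
    print("startup order:", order)
```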

US Pat. No. 10,922,088

PROCESSOR INSTRUCTION SUPPORT TO DEFEAT SIDE-CHANNEL ATTACKS

Intel Corporation, Santa...

1. An apparatus comprising:a decoder to decode a first single instruction, the first instruction having at least a first field for a first opcode to indicate that execution circuitry is to set a first flag in a first register to indicate a mode of operation that is to cause a redirection of program flow to an exception handler upon the occurrence of a cache eviction event and a second field to indicate an address for the exception handler;
execution circuitry to execute the decoded first single instruction to set the first flag in the first register to indicate the mode of operation and to store the address of the exception handler in a second register; and
a cache, an entry in the cache including a second flag that, when set, is to identify an entry that, upon eviction, is to cause the first flag in the first register to be cleared and the second flag in the entry to be cleared.

US Pat. No. 10,922,087

BLOCK BASED ALLOCATION AND DEALLOCATION OF ISSUE QUEUE ENTRIES

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:tracking relative ages of instructions in an issue queue of an out-of-order (OoO) processor, the tracking comprising:
grouping entries in the issue queue into a pool of fixed sized blocks, each block containing a plurality of contiguous entries that are configured to be allocated and deallocated as a single unit, each entry in the plurality of entries configured to store an instruction;
selecting, in any order, blocks from the pool of blocks for allocation;
allocating the selected blocks;
tracking relative ages of the allocated blocks based at least in part on an order that the blocks are allocated;
configuring each allocated block as a first-in-first-out (FIFO) queue of entries, each allocated block configured to add instructions to the block in a sequential order and to remove instructions from the block in any order including a non-sequential order; and
tracking relative ages of instructions within each allocated block,
wherein upon allocation of a block, entries in the block are available to store instructions received from a dispatch unit, and
wherein upon deallocation of the block, the block is put back into the pool and entries in the block are no longer available to store instructions received from the dispatch unit.
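
A toy model of block-based entry management: fixed-size blocks are allocated from a pool as whole units, relative block age follows allocation order, instructions are appended to a block in FIFO order, and a deallocated block returns to the pool. The IssueQueue class and its sizes are hypothetical.

```python
from collections import deque

class IssueQueue:
    """Toy model of block-based issue-queue management: entries are grouped into fixed
    sized blocks allocated from a pool, block age follows allocation order, and each
    allocated block behaves as a FIFO for incoming instructions."""

    def __init__(self, num_blocks=4, block_size=4):
        self.free_pool = list(range(num_blocks))      # block IDs available for allocation
        self.block_size = block_size
        self.allocated = []                            # allocation order == relative block age
        self.entries = {}                              # block ID -> deque of instructions

    def allocate_block(self):
        block_id = self.free_pool.pop()                # blocks may be selected in any order
        self.allocated.append(block_id)
        self.entries[block_id] = deque()
        return block_id

    def dispatch(self, instruction):
        # Add to the youngest allocated block with room; allocate a new block if needed.
        for block_id in reversed(self.allocated):
            if len(self.entries[block_id]) < self.block_size:
                self.entries[block_id].append(instruction)
                return block_id
        block_id = self.allocate_block()
        self.entries[block_id].append(instruction)
        return block_id

    def deallocate_block(self, block_id):
        # The whole block is returned to the pool; its entries are no longer usable.
        self.allocated.remove(block_id)
        del self.entries[block_id]
        self.free_pool.append(block_id)

if __name__ == "__main__":
    iq = IssueQueue(num_blocks=2, block_size=2)
    first = iq.allocate_block()
    for instr in ["add", "mul", "load"]:
        iq.dispatch(instr)
    print("age order of blocks:", iq.allocated)
    print({b: list(q) for b, q in iq.entries.items()})
    iq.deallocate_block(first)
    print("free pool after deallocation:", iq.free_pool)
```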

US Pat. No. 10,922,086

REDUCTION OPERATIONS IN DATA PROCESSORS THAT INCLUDE A PLURALITY OF EXECUTION LANES OPERABLE TO EXECUTE PROGRAMS FOR THREADS OF A THREAD GROUP IN PARALLEL

Arm Limited, Cambridge (...

1. A method of operating a data processing system, the data processing system including:a data processor operable to execute programs to perform data processing operations and in which execution threads executing a program to perform data processing operations may be grouped together into thread groups in which the plural threads of a thread group can each execute a set of instructions in lockstep;
the data processor comprising an execution processing circuit operable to execute instructions to perform processing operations for execution threads executing a program, the execution processing circuit being configured as a plurality of execution lanes, each execution lane being operable to perform processing operations for a respective execution thread of a thread group;
the method comprising:
to cause the data processor to perform a reduction operation to combine initial data values for the threads in a thread group in accordance with an operation defined for the reduction operation:
including in a program to be executed by the execution processing circuit of the data processor, a sequence of one or more instructions for execution by the execution processing circuit for performing the reduction operation, the sequence of instructions being operable to, when executed for each thread of a thread group that is being executed by the execution lanes of the execution processing circuit, perform for each thread of the thread group that is executing the sequence of instructions:
a first combining step that combines in accordance with the operation defined for the reduction operation, an initial data value for the execution lane for the thread with the initial data value for a selected another execution lane of the execution processing circuit, and stores the combined data value result of the combining operation for the thread;
and
one or more further combining steps that each combine in accordance with the operation defined for the reduction operation, the stored combined data value result of the previous combining operation for the thread with the combined data value result of the previous combining operation for a selected another execution lane of the execution processing circuit that has not yet contributed to the stored combined data value result for the thread, and store the combined data value result of the combining operation for the thread,
such that the thread will have stored for it in accordance with the reduction operation, the combination of the initial data values for all the threads of the thread group;
wherein the sequence of instructions is further configured so as to cause the data processor to, when combining a data value for the thread with the combined data value result of a previous combining operation for a selected another execution lane of the execution processing circuit that has not yet contributed to the stored combined data value result for the thread, select as the another execution lane of the execution processing circuit that has not yet contributed to the combined data value result for the thread, an execution lane from a group of execution lanes whose values have been combined in the previous combining step and that have not yet contributed to the combined data value result for the thread, and having a particular relative position in the group of execution lanes;
the method further comprising:
providing the program, including the sequence of one or more instructions for performing the reduction operation, to the data processor for execution by the execution processing circuit of the data processor;
issuing a group of execution threads to the execution processing circuit to execute the program for the execution threads of the thread group;
and
the execution processing circuit of the data processor executing the program, including the sequence of one or more instructions for performing the reduction operation, for the group of execution threads, using a respective execution lane of the execution processing circuit for each thread of the thread group;
wherein the executing the program for the threads of the thread group comprises, in response to the sequence of one or more instructions for performing the reduction operation, performing for each thread in the thread group:
a first combining step that combines in accordance with the operation defined for the reduction operation, an initial data value for the execution lane for the thread with the initial data value for a selected another execution lane of the execution processing circuit, and stores the combined data value result of the combining operation for the thread;
and
one or more further combining steps that each combine in accordance with the operation defined for the reduction operation, the stored combined data value result of the previous combining operation for the thread with the combined data value result of the previous combining operation for a selected another execution lane of the execution processing circuit that has not yet contributed to the stored combined data value result for the thread, and store the combined data value result of the combining operation for the thread,
such that the thread has stored for it in accordance with the reduction operation, the combination of the initial data values for all the threads of the thread group;
wherein the executing the program for a thread of the thread group further comprises, in response to the sequence of one or more instructions for performing the reduction operation:
when combining a data value for the thread with the combined data value result of a previous combining operation for a selected another execution lane of the execution processing circuit that has not yet contributed to the stored combined data value result for the thread, selecting as the another execution lane of the execution processing circuit that has not yet contributed to the combined data value result for the thread, an execution lane from a group of execution lanes whose values have been combined in the previous combining step and that have not yet contributed to the combined data value result for the thread, and having a particular relative position in the group of execution lanes;
wherein the sequence of instructions is operable to cause the execution processing circuit to select the execution lane from a group of execution lanes whose values have been combined in the previous combining step and that have not yet contributed to the combined data value result for the thread that is selected when combining a data value for a thread with the combined data value result of a previous combining operation for a selected another execution lane of the execution processing circuit in accordance with the following calculation:
lane number of the execution lane whose data value to be combined with = (lane number of execution lane that is executing the sequence of instructions XOR modifier value) AND (bitwise complement (modifier value − 1)),
where the lane number of the execution lane whose data value is to be combined with (the source lane) is the lane number of the another execution lane from which the data value for combining is retrieved; the lane number of the execution lane that is executing the sequence of instructions is the number of the execution lane that is executing the instruction; and the modifier value is a numerical value that is used to modify the calculation result.
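
For illustration only, the lane-selection calculation above can be simulated in Python as follows; the eight-lane width, the use of addition as the combining operation, and the function names are assumptions made for this sketch and are not part of the claim.

def partner_lane(lane, modifier):
    # (lane XOR modifier) AND (bitwise complement of (modifier - 1))
    return (lane ^ modifier) & ~(modifier - 1)

def reduce_across_lanes(values):
    """Combine per-lane initial values so every lane ends up holding the
    combination of the initial values of all lanes (addition used here)."""
    n = len(values)              # assumed power-of-two thread-group width
    stored = list(values)        # per-thread stored combined result
    modifier = 1
    while modifier < n:
        snapshot = list(stored)  # every lane reads the previous step's results
        for lane in range(n):
            stored[lane] = snapshot[lane] + snapshot[partner_lane(lane, modifier)]
        modifier <<= 1
    return stored

print(reduce_across_lanes([1, 2, 3, 4, 5, 6, 7, 8]))   # every lane holds 36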

US Pat. No. 10,922,085

SCHEDULING OF THREADS FOR EXECUTION UTILIZING LOAD BALANCING OF THREAD GROUPS

INTEL CORPORATION, Santa...

1. An apparatus comprising:one or more processors including a graphics processor, the one or more processors to analyze an application kernel to determine a magnitude of barrier messages in the application kernel and generate barrier usage data having a value corresponding to the magnitude of barrier messages in the application kernel; and
a memory to store the barrier usage data;
wherein the graphics processor includes:
a plurality of streaming multiprocessors, each streaming multiprocessor including a plurality of cores for execution of threads, and
a scheduler to schedule a plurality of thread groups for execution by the plurality of streaming multiprocessors according to a scheduling policy, the scheduling policy being based at least in part on load balancing of thread groups across the plurality of streaming multiprocessors, wherein the scheduler is to perform load balancing by performing one or more cost functions based at least in part on thread groups scheduled for the plurality of streaming multiprocessors, the barrier usage data, or both, wherein the one or more cost functions include a first cost function based on thread groups scheduled for each of the plurality of streaming multiprocessors and a second cost function based at least in part on the barrier usage data, the first cost function providing for determining a maximum of a number of free threads in each of the plurality of multiprocessors after scheduling threads of a thread group.
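
A hypothetical Python sketch of the kind of load-balancing decision described in the claim follows; the barrier weighting, the per-SM group counts, and all names are illustrative assumptions rather than the patented scheduling policy.

def pick_streaming_multiprocessor(free_threads_per_sm, groups_per_sm,
                                  group_size, barrier_usage, barrier_weight=1.0):
    """Score each streaming multiprocessor with two illustrative cost terms:
    the free threads it would have left after scheduling the group (first term,
    maximized) and a penalty scaled by the kernel's barrier usage data and the
    thread groups already resident on that SM (second term)."""
    best_sm, best_score = None, None
    for sm, free in enumerate(free_threads_per_sm):
        if free < group_size:
            continue                                   # the group does not fit here
        free_after = free - group_size                 # first cost function
        barrier_cost = barrier_weight * barrier_usage * groups_per_sm[sm]  # second cost function
        score = free_after - barrier_cost
        if best_score is None or score > best_score:
            best_sm, best_score = sm, score
    return best_sm

# Example: four SMs, a 64-thread group, moderate barrier usage in the kernel.
print(pick_streaming_multiprocessor([128, 32, 256, 96], [2, 1, 4, 3], 64, barrier_usage=2))   # -> 2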

US Pat. No. 10,922,084

HANDLING OF INTER-ELEMENT ADDRESS HAZARDS FOR VECTOR INSTRUCTIONS

ARM Limited, Cambridge (...

1. An apparatus comprising:processing circuitry to perform data processing in response to instructions, wherein in response to a vector load instruction, the processing circuitry is configured to load respective data elements of a vector value with data from respective locations of a data store, and in response to a vector store instruction, the processing circuitry is configured to store data from respective data elements of a vector value to respective locations of the data store;
wherein the processing circuitry is responsive to a transaction start event to speculatively execute one or more subsequent instructions, and responsive to a transaction end event to commit speculative results of the one or more subsequent instructions speculatively executed following the transaction start event;
the apparatus comprises hazard detection circuitry to detect whether an inter-element address hazard occurs between an address corresponding to data element J for an earlier vector load instruction speculatively executed following the transaction start event and an address corresponding to data element K for a later vector store instruction speculatively executed following the transaction start event, where K is different to J, and both the earlier vector load instruction and the later vector store instruction are from the same thread of instructions processed by the processing circuitry;
wherein in response to detecting the inter-element address hazard, the hazard detection circuitry is configured to trigger the processing circuitry to abort further processing of the instructions subsequent to the transaction start event and prevent said speculative results being committed; and
the apparatus comprising hazard tracking storage circuitry to store hazard tracking data for tracking addresses used for one or more earlier vector load instructions speculatively executed following the transaction start event.
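
One way to picture the hazard tracking described above is the following Python sketch: loaded addresses are recorded per element position during a transaction, and a later vector store is checked against records from other element positions. The class and method names are assumptions made for illustration.

class HazardTracker:
    """Records addresses used by speculatively executed vector loads and flags
    an inter-element hazard when a later vector store in the same transaction
    hits an address that was loaded for a different element position."""

    def __init__(self):
        self.loaded = []   # list of (element_index, address) pairs

    def track_vector_load(self, addresses):
        for j, addr in enumerate(addresses):
            self.loaded.append((j, addr))

    def check_vector_store(self, addresses):
        for k, addr in enumerate(addresses):
            for j, loaded_addr in self.loaded:
                if addr == loaded_addr and j != k:
                    return True    # hazard: abort and discard speculative results
        return False

tracker = HazardTracker()
tracker.track_vector_load([0x100, 0x104, 0x108, 0x10C])
print(tracker.check_vector_store([0x104, 0x200, 0x204, 0x208]))   # True: element 0 stores to element 1's load address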

US Pat. No. 10,922,083

DETERMINING PROBLEM DEPENDENCIES IN APPLICATION DEPENDENCY DISCOVERY, REPORTING, AND MANAGEMENT TOOL

Capital One Services, LLC...

1. A computer-implemented method comprising:determining a plurality of dependencies associated with a first application and organized as a dependency tree, wherein the plurality of dependencies comprises:
one or more immediate dependencies of the first application; and
one or more sub-dependencies of the first application, wherein each of the sub-dependencies corresponds to a respective dependency of a corresponding immediate dependency or sub-dependency of the first application;
configuring a monitoring application to monitor the plurality of dependencies using a plurality of monitoring interfaces;
automatically identifying, by the monitoring application, a problem dependency of the plurality of dependencies through a tree traversal process, wherein the tree traversal process begins at a first layer, as a current layer, of the dependency tree that corresponds to the one or more immediate dependencies of the first application and comprises:
determining, based on the plurality of monitoring interfaces, an operating status of each dependency included in the current layer of the dependency tree, wherein determining the operating status of a given dependency comprises determining, using a corresponding monitoring interface, at least one metric associated with the given dependency;
when a first dependency of the current layer is identified as having an unhealthy status, continuing to traverse the dependency tree to identify the problem dependency by advancing to a next layer of the dependency tree as the current layer, wherein the next layer of the dependency tree comprises dependencies corresponding to one or more dependencies of the first dependency; and
when each dependency of the current layer is identified as having a healthy status, identifying a parent dependency, of an immediately prior layer of the dependency tree, that has an unhealthy status as the problem dependency; and
generating, by the monitoring application, a notification indicating that the parent dependency is the problem dependency.
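
A simplified Python sketch of the layer-by-layer traversal described in the claim; the node structure, status values, and function names are illustrative assumptions.

def find_problem_dependency(immediate_dependencies, get_status, get_children):
    """Walk the dependency tree one layer at a time. If a dependency in the
    current layer is unhealthy, descend into its dependencies; once a layer is
    entirely healthy, the unhealthy parent in the prior layer is the problem."""
    current_layer = list(immediate_dependencies)
    problem = None
    while current_layer:
        unhealthy = [d for d in current_layer if get_status(d) == "unhealthy"]
        if not unhealthy:
            return problem                         # all children healthy: parent is the root cause
        problem = unhealthy[0]                     # remember the unhealthy parent
        current_layer = get_children(problem)      # advance to the next layer
    return problem

statuses = {"db": "unhealthy", "db-host": "unhealthy", "db-disk": "healthy"}
children = {"db": ["db-host"], "db-host": ["db-disk"], "db-disk": []}
print(find_problem_dependency(["db"], statuses.get, children.get))   # -> "db-host"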

US Pat. No. 10,922,082

BRANCH PREDICTOR

Arm Limited, Cambridge (...

1. An apparatus comprising:processing circuitry to perform data processing in response to instructions fetched from an instruction cache wherein the processing circuitry is configured to demand said instructions from the instruction cache;
a branch predictor having at least one branch prediction structure to store branch prediction data for predicting at least one branch property of an instruction fetched for processing by the processing circuitry; and
an instruction prefetcher to speculatively prefetch instructions into the instruction cache before the instructions are demanded by the processing circuitry, wherein the instructions demanded by the processing circuitry are based at least in part on predictions of the branch predictor; wherein:
on prefetching of a given instruction into the instruction cache by the instruction prefetcher before the given instruction is demanded, the branch predictor is configured to perform a prefetch-triggered update of the branch prediction data based on information derived from the given instruction prefetched by the instruction prefetcher.
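
For orientation, a loose Python sketch of a prefetch-triggered update: when a line is prefetched, any branch found in it seeds a toy prediction structure before the instruction is ever demanded. The decode step, table layout, and names are assumptions, not the claimed design.

branch_target_buffer = {}   # branch address -> predicted target (toy prediction structure)

def decode_branches(cache_line):
    """Placeholder: yields (branch_address, target_address) pairs found in a
    prefetched line; a real front end would derive these from instruction bytes."""
    return cache_line.get("branches", [])

def on_prefetch(cache_line):
    # Prefetch-triggered update: populate the prediction structure from the
    # prefetched instructions, before any demand fetch reaches them.
    for branch_addr, target in decode_branches(cache_line):
        branch_target_buffer.setdefault(branch_addr, target)

on_prefetch({"base": 0x4000, "branches": [(0x4010, 0x4800)]})
print(hex(branch_target_buffer[0x4010]))   # 0x4800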

US Pat. No. 10,922,081

CONDITIONAL BRANCH FRAME BARRIER

Oracle International Corp...

1. One or more non-transitory machine-readable media storing instructions which, when executed by one or more processors, cause:initiating a first process that traverses each of a plurality of frames on a call stack associated with a second process to perform a particular operation with respect to each of the plurality of frames on the call stack associated with the second process, wherein the plurality of frames includes a first frame associated with a first function, and a second frame associated with a second function that called the first function;
during a first time period in which the first frame has been traversed to perform the particular operation, the second frame has not been traversed to perform the particular operation, and the second process has not returned to the second function:
executing, by the second process, a function epilogue of the first function, wherein the function epilogue comprises:
determining whether a return condition is true based on a first comparison between a first value of a stack pointer and a second value of a process-specific value associated with the second process;
responsive at least to determining that the return condition is true: establishing a frame barrier for the second frame by jumping to a third function rather than returning to the second function;
during a second time period in which the first frame has been traversed to perform the particular operation, the second frame has been traversed to perform the particular operation, a third frame of the plurality of frames has not been traversed to perform the particular operation, and the second process has not returned to the second function:
determining whether the return condition is true based on a second comparison between a third value of the stack pointer and a fourth value of the process-specific value associated with the second process;
responsive to determining that the return condition is not true: returning to the second function.
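
A rough Python rendering of the epilogue check in the claim; the direction of the comparison, the watermark value maintained by the traversing process, and the function names are all illustrative assumptions.

def function_epilogue(stack_pointer, traversal_watermark,
                      return_to_caller, frame_barrier_stub):
    """Epilogue check: compare the stack pointer with a process-specific value
    maintained by the traversing process. When the return condition is true,
    jump to a barrier stub instead of returning directly to the caller."""
    if stack_pointer >= traversal_watermark:   # comparison direction is an assumption
        return frame_barrier_stub()            # establish the frame barrier
    return return_to_caller()                  # condition not true: ordinary return

print(function_epilogue(stack_pointer=0x7ffc, traversal_watermark=0x7f00,
                        return_to_caller=lambda: "returned to caller",
                        frame_barrier_stub=lambda: "frame barrier taken"))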

US Pat. No. 10,922,080

SYSTEMS AND METHODS FOR PERFORMING VECTOR MAX/MIN INSTRUCTIONS THAT ALSO GENERATE INDEX VALUES

Intel Corporation, Santa...

1. A processor comprising:fetch circuitry to fetch a single instruction, a format of the single instruction including a first field to identify a first operand, a second field to identify a second operand, a third field to identify a third operand, and an opcode to indicate that execution circuitry is to determine on a per data element position of the identified first and second operands a maximum value, store the determined maximum values in corresponding data element positions of the identified first operand, and determine and store, in each data element position of the identified third operand, an indication of where the maximum value came from, and a fourth field to identify an iteration number of a sequence of operations, wherein the first operand is the same throughout the sequence of operations;
decode circuitry to decode the fetched single instruction; and
execution circuitry to execute the decoded single instruction to determine on a per data element position of the identified first and second operands a maximum value, store the determined maximum values in corresponding data element positions of the identified first operand, and determine and store, in each data element position of the identified third operand, an indication of where the maximum value came from.
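
The per-element behavior can be pictured with this Python sketch; the operand names, the use of the iteration number as the recorded indication, and the result layout are assumptions for illustration.

def vector_max_with_index(first, second, indices, iteration):
    """Per element position: keep the maximum in `first` and record where it
    came from in `indices` (here, the iteration number when `second` wins)."""
    for i in range(len(first)):
        if second[i] > first[i]:
            first[i] = second[i]
            indices[i] = iteration   # the maximum came from this iteration's source
    return first, indices

acc = [3, 7, 1, 9]
idx = [0, 0, 0, 0]
vector_max_with_index(acc, [5, 2, 8, 4], idx, iteration=1)
print(acc, idx)   # [5, 7, 8, 9] [1, 0, 1, 0]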

US Pat. No. 10,922,079

METHOD AND APPARATUS TO EFFICIENTLY PERFORM FILTER OPERATIONS FOR AN IN-MEMORY RELATIONAL DATABASE

Intel Corporation, Santa...

1. A processor comprising:a central processing module including at least one processor core; and
a hardware accelerator communicatively coupled to the processor core via a system fabric interconnect to offload computation from the processor core, the hardware accelerator to select a data element from an array of data elements read from a column of a table in a column-oriented in-memory database stored in a memory communicatively coupled to the processor core, in response to a command received from the processor core, the command including a bit vector to identify the data element to be selected and a bit width from a range of widths for the data elements in the array of data elements, the hardware accelerator to expand the bit vector based on the bit width for the data elements to provide an expanded bit vector and to use the expanded bit vector to select the data element from the array of data elements, wherein a number of data elements processed per cycle is fully contained within less than a full width of a data path to the hardware accelerator.
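
A small Python sketch of the bit-vector expansion step described above: each selection bit is widened to the element's bit width so the expanded vector can mask a packed column directly. The packing scheme and names are assumptions made for illustration.

def expand_bit_vector(bit_vector, bit_width):
    """Replicate each selection bit across the element's bit width."""
    expanded = []
    for bit in bit_vector:
        expanded.extend([bit] * bit_width)
    return expanded

def select_elements(packed_bits, bit_vector, bit_width):
    """Use the expanded bit vector to pull selected elements out of a packed column."""
    mask = expand_bit_vector(bit_vector, bit_width)
    selected_bits = [b for b, m in zip(packed_bits, mask) if m]
    # Re-group the surviving bits into bit_width-sized elements.
    return [selected_bits[i:i + bit_width]
            for i in range(0, len(selected_bits), bit_width)]

column = [0, 1, 1, 0, 1, 1]                              # three 2-bit elements (bit order illustrative)
print(select_elements(column, [1, 0, 1], bit_width=2))   # [[0, 1], [1, 1]]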

US Pat. No. 10,922,078

HOST PROCESSOR CONFIGURED WITH INSTRUCTION SET COMPRISING RESILIENT DATA MOVE INSTRUCTIONS

EMC IP Holding Company LL...

1. A system comprising:a host processor; and
at least one storage device coupled to the host processor;
the host processor being configured to execute instructions of an instruction set, the instruction set comprising a first move instruction for moving data identified by at least one operand of the first move instruction into each of multiple distinct storage locations, the first move instruction being configured to direct movement of data identified by the at least one operand of the first move instruction into each of the multiple distinct storage locations;
wherein the host processor in executing the first move instruction is configured:
to store the data in a first one of the storage locations identified by one or more additional operands of the first move instruction; and
to store the data in a second one of the storage locations identified based at least in part on the first storage location.
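
The effect of the first move instruction can be sketched in Python as follows; the way the second location is derived from the first is an assumption made purely for illustration.

def resilient_move(data, storage, primary_location, mirror_offset=1):
    """Store the same data at a primary location and at a second location that is
    identified based on the first (here, simply the next location over)."""
    secondary_location = primary_location + mirror_offset
    storage[primary_location] = data
    storage[secondary_location] = data
    return primary_location, secondary_location

storage = {}
print(resilient_move(b"payload", storage, primary_location=0x10))   # (16, 17)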

US Pat. No. 10,922,077

APPARATUSES, METHODS, AND SYSTEMS FOR STENCIL CONFIGURATION AND COMPUTATION INSTRUCTIONS

Intel Corporation, Santa...

1. An apparatus comprising:a matrix operations accelerator circuit comprising a two-dimensional grid of fused multiply accumulate circuits coupled by a network;
a first plurality of registers that represents a first two-dimensional matrix coupled to the matrix operations accelerator circuit;
a second plurality of registers that represents a second two-dimensional matrix coupled to the matrix operations accelerator circuit;
a decoder, of a core coupled to the matrix operations accelerator circuit, to decode a single instruction into a decoded single instruction; and
a circuit of the core to execute the decoded single instruction to:
switch the matrix operations accelerator circuit from a first mode where each fused multiply accumulate circuit operates on corresponding, same positioned elements of the first two-dimensional matrix and the second two-dimensional matrix to a second mode where a first set of input values from the first plurality of registers is sent to a first plurality of fused multiply accumulate circuits that form a first row of the two-dimensional grid, a second set of input values from the first plurality of registers is sent to a second plurality of fused multiply accumulate circuits that form a second row of the two-dimensional grid, a first coefficient value from the second plurality of registers is broadcast to a third plurality of fused multiply accumulate circuits that form a first column of the two-dimensional grid, and a second coefficient value from the second plurality of registers is broadcast to a fourth plurality of fused multiply accumulate circuits that form a second column of the two-dimensional grid.
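
To make the second mode concrete, a small Python sketch follows in which each grid row receives its own set of input values and each grid column receives a broadcast coefficient; the grid size, data, and names are assumptions.

def stencil_mode_grid(row_inputs, column_coefficients):
    """Cell (r, c) multiplies the r-th row's input value at position c by the
    coefficient broadcast down column c and accumulates the product."""
    rows, cols = len(row_inputs), len(column_coefficients)
    acc = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            acc[r][c] += row_inputs[r][c] * column_coefficients[c]
    return acc

print(stencil_mode_grid([[1, 2], [3, 4]], [10, 100]))   # [[10.0, 200.0], [30.0, 400.0]]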

US Pat. No. 10,922,076

METHODS AND SYSTEMS FOR MANAGING AGILE DEVELOPMENT

Agile Worx, LLC, Alphare...

1. A system comprising:one or more processors; and
memory storing instructions that, when executed by the one or more processors, cause the system to:
receive development data comprising:
customer data including a first feature of a project;
project timeline data including a feature deadline and a development deadline;
budget data; and
project requirements data comprising:
requirements of the first feature;
a first priority value indicating a level of priority of the first feature;
a first penalty value indicating a penalty for failing to deliver the first feature by at least one of (i) the feature deadline and (ii) the development deadline;
a first cost value indicating a financial cost of developing the first feature; and
a first time value indicating a time required for development of the first feature;
calculate project agility by calculating the ratio of the time value compared to an amount of time remaining until the feature deadline; and
calculate market agility by, at least in part, calculating the ratio of the cost value compared to the budget data.
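
The two ratios in the claim reduce to straightforward arithmetic, as the following Python sketch shows; the field names are assumptions.

def project_agility(time_value, time_remaining_until_feature_deadline):
    # Ratio of the development time required to the time left before the feature deadline.
    return time_value / time_remaining_until_feature_deadline

def market_agility(cost_value, budget):
    # Ratio of the feature's development cost to the available budget.
    return cost_value / budget

print(project_agility(time_value=20, time_remaining_until_feature_deadline=40))   # 0.5
print(market_agility(cost_value=15000, budget=60000))                             # 0.25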

US Pat. No. 10,922,075

SYSTEM AND METHOD FOR CREATING AND VALIDATING SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC) DIGITAL ARTIFACTS

Morgan Stanley Services G...

1. A computer-implemented system, comprising:a memory; and
a processor coupled to the memory, the processor configured to:
receive a software development lifecycle (SDLC) artifact that includes a template created on a server-based user interface (UI), without coding, via uploading of a document to the UI and/or using the UI to edit a representation of a previously uploaded document, the template's contents comprising one or more sections, each section comprising one or more properties, wherein the UI is configured to represent one or more requirements of the SDLC artifact based upon one or more rules indicating one or more tiers of approval, a tier of approval needed for the SDLC artifact being selected based at least in part on both an estimated technical complexity and a level of asset risk associated with a software change represented by the SDLC artifact;
determine whether the template of the SDLC artifact is valid;
map the one or more sections and the one or more properties to the template in response to a determination that the template of the SDLC artifact is valid;
associate software asset metadata and one or more approvers of the SDLC artifact based upon the selected tier of approval; and
transform the SDLC artifact into a standard-compliant SDLC artifact.
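
As a toy illustration of selecting a tier of approval from an estimated technical complexity and an asset-risk level, the following Python sketch may help; the thresholds, the 0-10 scale, and the tier names are assumptions, not the patented rules.

def select_approval_tier(technical_complexity, asset_risk):
    """Map an estimated technical complexity and an asset-risk level (both on an
    illustrative 0-10 scale) to one of three illustrative approval tiers."""
    score = max(technical_complexity, asset_risk)
    if score >= 8:
        return "tier-3"   # highest scrutiny: multiple senior approvers
    if score >= 4:
        return "tier-2"
    return "tier-1"       # routine change: single approver

print(select_approval_tier(technical_complexity=6, asset_risk=9))   # tier-3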

US Pat. No. 10,922,074

DEFERRED STATE MUTATION

Oracle International Corp...

10. A method of providing deferred state mutation, the method comprising:providing a design visual development time tool with a user interface, wherein the user interface provides a set of icon boxes and icon arrows for user definition of actions in action chains of a client application, and wherein the action chains include a first action chain and a second action chain;
receiving, performed by the user interface, information defining the action chains implementing part of the client application, wherein the received information includes explicit computer executable instructions, wherein the explicit computer executable instructions include a first explicit computer executable instruction and a second explicit computer executable instruction, wherein at least one of the explicit computer executable instructions is associated with each of the actions in the action chains, wherein the first action chain has the first explicit computer executable instruction and the second action chain has the second explicit computer executable instruction for modifying a global state associated with each of the action chains, wherein the global state is not associated with a subset of the explicit computer executable instructions, wherein the information defining each of the action chains further includes at least a subset of the icon boxes and at least a subset of the icon arrows joining the at least subset of icon boxes to create visual representations of the action chains, wherein each of the actions in the action chains is defined with one of the subsets of the icon boxes, and wherein the first action chain has a different design of icon boxes and icon arrows than the second action chain;
automatically generating, performed by the design visual development time tool, computer executable instructions for each of the action chains to create respective private views of the global state for each of the action chains;
creating a first implicit computer executable instruction using the first explicit computer executable instruction as a first template;
creating a second implicit computer executable instruction using the second explicit computer executable instruction as a second template, wherein the first template is different from the second template;
automatically associating, performed by the design visual development time tool, the first implicit computer executable instruction with the first explicit computer executable instruction and the second implicit computer executable instruction with the second explicit computer executable instruction to create respective private views of the global state for each of the action chains; and
executing, performed by the design visual development tool, the client application including executing the action chains in parallel, wherein the first and second implicit computer executable instructions are executed during the executing of the client application instead of the first and second explicit computer executable instructions.
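
A loose Python sketch of the private-view idea in the claim: each action chain mutates a private copy of the global state through an implicit instruction generated from its explicit one, so chains executed in parallel do not interfere. The class and method names are assumptions.

import copy

class ActionChain:
    def __init__(self, name, global_state):
        self.name = name
        self.private_view = copy.deepcopy(global_state)   # per-chain view of the global state

    def implicit_mutation(self, key, value):
        # Generated from the chain's explicit mutation, but applied to the
        # private view instead of the shared global state.
        self.private_view[key] = value

global_state = {"cart_total": 0}
chain_a = ActionChain("chainA", global_state)
chain_b = ActionChain("chainB", global_state)
chain_a.implicit_mutation("cart_total", 10)
chain_b.implicit_mutation("cart_total", 99)
print(global_state, chain_a.private_view, chain_b.private_view)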

US Pat. No. 10,922,073

DISTRIBUTED INCREMENTAL UPDATING OF TRAYS USING A SOURCE CONTROL SYSTEM

ServiceNow, Inc., Santa ...

1. A system for distributed incremental updating of software containers, the system comprising:one or more processors; and
a memory having instructions stored thereon, that when executed, are configured to cause the one or more processors to:
execute an instance of an application of a first version of a software container on a server, wherein the software container includes software dependencies used to execute the instance of the application within a computing environment;
receive, by a software container manager, a request to update the software container to a second version, wherein the request comprises a changeset from the first version to the second version;
determine, by the software container manager, that there are no pending updates preventing an update from the first version to the second version, wherein the determination that there are no pending requests preventing the update is based on a determination by the software container manager that there are no other pending requests that indicate a potential change to a file of the software dependencies that differs from a corresponding potential change to the file in the second version;
determine that the software dependencies of the first version are compatible with the second version; and
responsive to a determination by the software container manager that there are no pending requests preventing an update and that the software dependencies of the first version are compatible with the second version, update the software container from the first version to the second version by updating files in the software container according to the request.
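
The pending-update check can be pictured with the following Python sketch; the changeset format and names are assumptions made for illustration.

def has_blocking_pending_update(pending_requests, incoming_changeset):
    """Return True if any other pending request proposes a change to a file of
    the software dependencies that differs from the change the incoming
    request makes to the same file."""
    for request in pending_requests:
        for path, new_content in request["changeset"].items():
            if path in incoming_changeset and incoming_changeset[path] != new_content:
                return True
    return False

pending = [{"changeset": {"deps/libfoo.so": "v2"}}]
incoming = {"deps/libfoo.so": "v3", "app/config.yml": "timeout: 30"}
print(has_blocking_pending_update(pending, incoming))   # True: conflicting change to the same file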

US Pat. No. 10,922,072

APPARATUS, SYSTEM, AND METHOD FOR SEAMLESS AND REVERSIBLE IN-SERVICE SOFTWARE UPGRADES

Juniper Networks, Inc., S...

1. A method comprising:detecting an in-service software upgrade that is to upgrade a first version of an operating system to a second version of the operating system on a computing device;
performing the in-service software upgrade on the computing device by:
constructing a second software stack for the second version of the operating system while a first software stack for the first version of the operating system is active on the computing device, wherein the first software stack and the second software stack:
share one or more filesystem components in common; and
differ from one another with respect to at least one filesystem component;
staging the second software stack while the first software stack is running on the computing device;
identifying one or more active processes that are currently managed by the first version of the operating system on the computing device;
deactivating the first version of the operating system and activating the second version of the operating system such that management of the active processes is transitioned from the first version of the operating system to the second version of the operating system without rebooting the computing device; and
switching a root of a union filesystem on the computing device from the first software stack to the second software stack to facilitate the transition.
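
The filesystem side of the upgrade can be sketched in Python as switching the active root from the old software stack to the new one while a shared layer stays in place; the layer model and names are assumptions.

class UnionFilesystem:
    """A toy union filesystem: lookups fall through an ordered list of layers."""
    def __init__(self, layers):
        self.layers = layers           # topmost layer first

    def read(self, path):
        for layer in self.layers:
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

shared_layer = {"/etc/hosts": "127.0.0.1 localhost"}     # filesystem component shared by both stacks
old_stack = UnionFilesystem([{"/bin/osd": "v1"}, shared_layer])
new_stack = UnionFilesystem([{"/bin/osd": "v2"}, shared_layer])

active_root = old_stack                                   # first software stack is active
active_root = new_stack                                   # switch the root to the second stack
print(active_root.read("/bin/osd"), active_root.read("/etc/hosts"))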

US Pat. No. 10,922,071

CENTRALIZED OFF-BOARD FLASH MEMORY FOR SERVER DEVICES

QUANTA COMPUTER INC., Ta...

13. A method of updating a flash Basic Input and Output System (BIOS) firmware image of two or more flash memory components, the method comprising:validating the flash BIOS firmware image;
updating a first flash BIOS firmware image of a first one of the two or more flash memory components to replace the first flash BIOS firmware image with the validated flash BIOS firmware image, wherein the two or more flash memory components are stored in a centralized flash memory module;
upon determining success of the first updating, updating a second flash BIOS firmware image of a second one of the two or more flash memory components to replace the second flash BIOS firmware image with the validated flash BIOS firmware image, wherein the second flash BIOS firmware image is associated with a golden flash BIOS image;
identifying, using a flash memory management controller, at least one flash memory component that successfully replaced the second flash BIOS firmware image; and
powering on at least one server device connected to the at least one flash memory component stored in the centralized flash memory module.
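
As an illustration of the update ordering in the claim, a Python sketch follows; the validation details, the golden-image handling, and all names are assumptions.

def update_bios_images(new_image, flash_components, validate, write_image):
    """Validate once, update the first component, and only update the second
    (golden-image) component after the first update is confirmed to succeed."""
    if not validate(new_image):
        return False
    primary, golden = flash_components[0], flash_components[1]
    if not write_image(primary, new_image):
        return False                     # keep the golden image untouched on failure
    return write_image(golden, new_image)

ok = update_bios_images(b"BIOS-2.1", ["primary", "golden"],
                        validate=lambda img: img.startswith(b"BIOS"),
                        write_image=lambda component, img: True)
print(ok)   # True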

US Pat. No. 10,922,070

HARDWARE ASSISTED FIRMWARE DOWNLOAD SYNCING

WESTERN DIGITAL TECHNOLOG...

1. A method for performing a download operation, comprising:detecting an updated firmware for installation, wherein the updated firmware is disposed in a data storage device;
transmitting at least one slice of the updated firmware from an updated firmware location of the data storage device to a second firmware location of the data storage device;
determining if a synchronization has completed with the at least one slice of the updated firmware;
determining if additional slices are to be synchronized when the synchronization has completed with the at least one slice of the updated firmware;
sending at least one additional slice of the updated firmware from the updated firmware location of the data storage device to the second firmware location of the data storage device when the determining indicates that additional slices are to be synchronized after the synchronization has completed with the at least one slice of the updated firmware; and
performing a synchronization of the at least one additional slice of the updated firmware.
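
A minimal Python sketch of the slice-by-slice copy loop described in the claim; the slice size, the location representation, and the completion check are assumptions.

def sync_firmware(updated_firmware, slice_size, second_location):
    """Copy the updated firmware to the second location one slice at a time,
    confirming each slice has synchronized before sending the next."""
    offset = 0
    while offset < len(updated_firmware):
        slice_ = updated_firmware[offset:offset + slice_size]
        second_location[offset:offset + slice_size] = slice_          # transmit the slice
        assert second_location[offset:offset + slice_size] == slice_  # synchronization completed?
        offset += slice_size                                          # any additional slices?
    return bytes(second_location)

firmware = bytes(range(16))
print(sync_firmware(firmware, slice_size=4, second_location=bytearray(16)).hex())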