US Pat. No. 10,169,194

MULTI-THREAD SEQUENCING

International Business Ma...

1. A method for identifying errors in a multi-threaded application comprising the steps of:
running, by a processor, the multi-threaded application being tested for errors;
generating, by the processor, a thread sequence of the multi-threaded application during runtime;
storing, by the processor, the thread sequence being generated as a thread sequence representation file;
analyzing, by the processor, the thread sequence representation file by comparing the thread sequence stored in the thread sequence representation file with a benchmark thread sequence file;
identifying, by the processor, inconsistencies between the thread sequence representation file and benchmark thread sequence file as a function of the analyzing step, wherein the inconsistencies cause a mis-run thread sequence; and
reporting, by the processor, the inconsistencies to a user.
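As a rough, non-authoritative sketch of the comparison and identification steps (the function, thread names, and the length-mismatch convention are invented for illustration, not taken from the patent):

```python
def find_sequence_inconsistencies(recorded, benchmark):
    """Compare a recorded thread sequence against a benchmark sequence
    and collect positions where they diverge (a mis-run thread sequence)."""
    inconsistencies = []
    for position, (got, expected) in enumerate(zip(recorded, benchmark)):
        if got != expected:
            inconsistencies.append((position, expected, got))
    # A length mismatch between the files is itself an inconsistency.
    if len(recorded) != len(benchmark):
        inconsistencies.append(("length", len(benchmark), len(recorded)))
    return inconsistencies
```

The returned tuples (position, expected thread, observed thread) are one plausible shape for the reporting step; the claim itself does not constrain the report format.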

US Pat. No. 10,169,193

COMMON DEBUG SCRIPTING FRAMEWORK FOR DRIVING HYBRID APPLICATIONS CONSISTING OF COMPILED LANGUAGES AND INTERPRETED LANGUAGES

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
providing, by a processor, a debug extension library on top of a programming language interpreter;
providing, by the processor, a common debug interface as part of the debug extension library;
providing, by the processor, at least two debug interface implementations as part of the common debug interface, a first one of the at least two debug interface implementations being dedicated to a native debugger of an interpreted language computer program and a second one of the at least two debug interface implementations being dedicated to a native debugger of a compiled language computer program, wherein an application contains a first portion written in an interpreted programming language and a second portion written in a compiled programming language; and
responding, by the processor, to a user command provided through a debug script program to debug the application by commanding one of the native debugger of an interpreted language computer program or the native debugger of a compiled language computer program through the corresponding dedicated debug interface implementation, wherein the debug script program is a single script program,
wherein the common debug interface contains a plurality of features abstracted from the native debugger of the interpreted language computer program and from the native debugger of the compiled language computer program, and wherein the plurality of abstracted features are common to both the native debugger of the interpreted language computer program and the native debugger of the compiled language computer program, the plurality of abstracted common features including launching, breakpoint, stepping, terminate/resume/suspend, variable inspection, expression evaluation, and stack frame source location.
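A minimal sketch of such a common debug interface with two dedicated implementations, assuming pdb- and gdb-style backends purely for illustration (the class and method names are invented, and the real commands a native debugger accepts would differ):

```python
from abc import ABC, abstractmethod

class CommonDebugInterface(ABC):
    """Features abstracted from both native debuggers (hypothetical subset)."""
    @abstractmethod
    def launch(self): ...
    @abstractmethod
    def set_breakpoint(self, location): ...
    @abstractmethod
    def step(self): ...

class InterpretedDebugInterface(CommonDebugInterface):
    # Dedicated to the interpreted-language native debugger (e.g. pdb-like).
    def launch(self): return "pdb: launch"
    def set_breakpoint(self, location): return f"pdb: break {location}"
    def step(self): return "pdb: step"

class CompiledDebugInterface(CommonDebugInterface):
    # Dedicated to the compiled-language native debugger (e.g. gdb-like).
    def launch(self): return "gdb: run"
    def set_breakpoint(self, location): return f"gdb: break {location}"
    def step(self): return "gdb: step"

def run_script_command(backend: CommonDebugInterface, command, *args):
    """A single debug script drives either backend through the interface."""
    return getattr(backend, command)(*args)
```

Because both implementations satisfy the same interface, one script can debug the interpreted and the compiled portions of a hybrid application by swapping the backend object.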

US Pat. No. 10,169,192

AUTOMATIC COLLECTION AND PRESENTATION OF RUNTIME DATA SEMANTICS

International Business Ma...

1. A method for collection and presentation of runtime data semantics in a Software Test Environment, the method comprising:
receiving, by one or more computer processors, code-coverage history and system runtime history from a server connected by a network;
receiving, by the one or more computer processors, code version information from the server, wherein the code version information comprises revision data affecting each line of source code and wherein the code version information is based on a collection of change sets comprising a record of revisions from a plurality of source code builds in a source code repository, created during a software build process for the plurality of source code builds and maintained as part of the development of a software product;
responsive to receiving an inspection line of code and an inspection variable, retrieving, by the one or more computer processors, runtime data semantics from the server, comprising the code-coverage history, the system runtime history and the code version information, wherein the system runtime history comprises a current variable memory address range and a historic variable memory address range retrieved from process maps created by memory mapping program variables output during runtime from the plurality of test runs of the software product, and wherein the runtime data semantics are filtered based on the inspection line of code and the inspection variable;
outputting to a debugger, by the one or more computer processors, the runtime data semantics wherein the code-coverage history displays variables affected by the inspection variable value based on a plurality of test runs of the software product and the plurality of source code builds and wherein the code version information displays the record of revisions from the plurality of source code builds based on the inspection line of code and an inspection variable; and
identifying and resolving, by the debugger, logic problems in the source code using the runtime data semantics.
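The filtering step above can be sketched as follows; the record layout (one dict per coverage, runtime, or revision fact) is an assumption for illustration, not the patent's data model:

```python
def filter_runtime_semantics(records, inspection_line, inspection_variable):
    """Filter collected runtime data semantics (code-coverage history,
    runtime history, code version info) to one line and one variable."""
    return [record for record in records
            if record["line"] == inspection_line
            and record["variable"] == inspection_variable]
```

The filtered records would then be handed to the debugger for display, as in the outputting step.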

US Pat. No. 10,169,191

WARNING DATA MANAGEMENT WITH RESPECT TO A DEVELOPMENT PHASE

International Business Ma...

1. A computer-implemented method of managing a set of warning data with respect to a development phase in a computing environment, the method comprising:
detecting, with respect to the development phase, the set of warning data for utilization to develop an application wherein the set of warning data is collected as compile time warning messages associated with a compiler;
storing the set of warning data with compiled code associated with the application;
identifying, by analyzing the set of warning data, a relationship between the set of warning data and a component of the application;
providing, for utilization to develop the application, an indication of the relationship between the set of warning data and the component of the application wherein the component and the associated warning data are accessed in an environment comprising only compiled code objects;
classifying, with respect to a set of computing challenges, the set of warning data;
correlating the set of computing challenges with the computing object based on annotating a code location of the computing object with a computing challenge tag indicating a relationship to a particular computing challenge, and storing, in an integrated development environment, an element of the correlation;
evaluating the set of computing challenges to identify a set of candidate development actions including both a first candidate development action and a second candidate development action;
computing a first expected resultant computing challenge for the first candidate development action;
computing a second expected resultant computing challenge for the second candidate development action;
comparing the first and second expected resultant computing challenges; and
selecting, based on the second expected resultant computing challenge exceeding the first expected resultant computing challenge, the first candidate development action.
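The final evaluate-compute-compare-select steps amount to picking the candidate action with the lower expected resultant challenge. A hedged sketch (action names and the challenge-scoring callable are invented for illustration):

```python
def select_development_action(candidates, expected_challenge):
    """Compute an expected resultant computing challenge per candidate
    development action and select the one with the lowest score."""
    best, best_score = None, None
    for action in candidates:
        score = expected_challenge(action)   # "computing" steps
        if best_score is None or score < best_score:   # "comparing" step
            best, best_score = action, score
    return best                              # "selecting" step
```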

US Pat. No. 10,169,190

CALL TRACE GENERATION VIA BEHAVIOR COMPUTATION

Lenvio Inc., Manassas, V...

1. A method of statically computing a behavior of a computer program in terms of function call traces, the method comprises:
tracking, by one or more computing devices, function calls in a synthetic call trace state variable of a computer program;
extending, by the one or more computing devices, instruction semantics of call instructions with additional semantics by adding a current function call, including one or more of a local function call or an external API call, to an existing call trace represented by the synthetic call trace state variable;
adding, by the one or more computing devices, additional updates to a stack register to instruction semantics of one or more instructions of a single function call to account for function argument cleanup processed by the single function call; and
extracting a computed behavior of the computer program.
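A toy sketch of the tracking and stack-adjustment steps over a simplified instruction list (the two-tuple opcode format and names are invented; real behavior computation works on actual machine-code semantics):

```python
def compute_call_trace(instructions):
    """Statically 'execute' a simplified instruction list, tracking calls
    in a synthetic call-trace state variable plus stack-pointer cleanup."""
    call_trace = []      # the synthetic call trace state variable
    stack_pointer = 0    # models argument-cleanup updates to the stack register
    for opcode, operand in instructions:
        if opcode == "call":              # local function or external API call
            call_trace.append(operand)    # extended call-instruction semantics
        elif opcode == "add_sp":          # function argument cleanup
            stack_pointer += operand
    return call_trace, stack_pointer
```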

US Pat. No. 10,169,189

FUNCTIONAL TEST AUTOMATION OF MOBILE APPLICATIONS INTERACTING WITH NATIVE STOCK APPLICATIONS

International Business Ma...

1. A method comprising:
receiving, by one or more computer processors, a first view hierarchy data set including information indicative of a first view hierarchy for a first native stock application, with the first native stock application being a system application that is packaged with an operating system (OS) for a first type of mobile computing device, and with the first view hierarchy being data organized into a tree structure that defines relationships among and between views generated by the first native stock application including views structurally specified by parent-child relationships;
generating, by one or more computer processors, based, at least in part, on the first view hierarchy data set, a template table for the first native stock application, with the template table including an identification of the type of first type of mobile computing device, a set of possible orientation(s) of the first type of mobile computing device, an identification of the OS, an identification of a version on the first type of mobile computing device, name of the first native stock application, activity information of the first native stock application, action information for the first native stock application, and bounding coordinates for each feature of a set of feature(s) of the first native stock application; and
performing automated testing of a first application under test (AUT) using the template table.

US Pat. No. 10,169,188

RUNTIME EVALUATION OF USER INTERFACES FOR ACCESSIBILITY COMPLIANCE

INTERNATIONAL BUSINESS MA...

1. A method for runtime evaluation of a User Interface (UI) for accessibility compliance, comprising:
determining, from an element hierarchy of the User Interface, a UI element, the UI element being selected for inclusion in the UI at runtime;
determining whether the UI element is an instantiation of one of a native element and a user-defined element;
categorizing, responsive to the UI element being an instantiation of the native element, the UI element to a category of the native element;
categorizing, responsive to the UI element being an instantiation of the user-defined element, the UI element to a category of a parent class of the user-defined element, wherein the parent class of the user-defined element is present in the native class hierarchy of the platform;
associating with the UI element a subset of a set of accessibility compliance rules, wherein the subset of accessibility compliance rules corresponds to the category of the UI element;
analyzing the UI element to determine that the UI element fails to satisfy an accessibility compliance rule in the subset of accessibility compliance rules; and
outputting, responsive to the analyzing, in an accessibility compliance report, a violation information describing the UI element and the accessibility compliance rule from the subset of accessibility compliance rules.
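The categorize-then-associate logic can be sketched as a category lookup with a parent-class fallback; the class names, categories, and rule identifiers below are all assumptions for illustration, not any platform's real accessibility rule set:

```python
NATIVE_CATEGORIES = {"Button": "button", "TextField": "text-input"}  # assumed
RULES_BY_CATEGORY = {  # assumed accessibility-rule subsets
    "button": ["has-accessible-label"],
    "text-input": ["has-accessible-label", "announces-validation-errors"],
}

def rules_for_element(element_class, parent_class=None):
    """Native elements map directly to a category; user-defined elements
    fall back to the category of their native parent class."""
    category = NATIVE_CATEGORIES.get(element_class)
    if category is None and parent_class is not None:
        category = NATIVE_CATEGORIES.get(parent_class)
    return RULES_BY_CATEGORY.get(category, [])

def violations(element_class, satisfied_rules, parent_class=None):
    """Report every associated rule the element fails to satisfy."""
    return [rule for rule in rules_for_element(element_class, parent_class)
            if rule not in satisfied_rules]
```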

US Pat. No. 10,169,187

PROCESSOR CORE HAVING A SATURATING EVENT COUNTER FOR MAKING PERFORMANCE MEASUREMENTS

INTERNATIONAL BUSINESS MA...

1. A method of monitoring performance of a computer system, the method comprising:
within a processor core of the computer system, detecting events indicative of the performance of the computer system and generating one or more event signals indicative of the events;
receiving the one or more event signals by a performance monitor circuit integrated within the processor core;
within the processor core, responsive to the receiving the one or more event signals by the performance monitor circuit, a control logic circuit of the performance monitor circuit generating an increment signal and providing the increment signal to a saturating counter circuit of the performance monitor circuit to cause the saturating counter circuit to increment in response to the one or more event signals being asserted;
at first predetermined periods, clocking a periodic counter circuit within the performance monitor circuit that times a second predetermined period so that the second predetermined period has a duration greater than the first predetermined period;
the periodic counter circuit generating a decrement signal when a count value of the periodic counter circuit has reached a count indicating the second predetermined period has elapsed;
providing the decrement signal to a decrement input of the saturating counter circuit to cause the saturating counter circuit to decrement when the second predetermined period has elapsed;
reading the saturating counter circuit to obtain the count value; and
computing a performance level of the processor from the count value.
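A software model of the counter behavior the claim describes in hardware: increments on events, saturates at a ceiling, and is decremented once per longer (second) period, so a count near the ceiling indicates a sustained high event rate. The ceiling value and method names are illustrative:

```python
class SaturatingEventCounter:
    """Models the claimed hardware: event signals increment the count up
    to a ceiling; each elapsed second period decrements it toward zero."""
    def __init__(self, ceiling):
        self.ceiling = ceiling
        self.count = 0
    def event(self):                   # increment signal asserted
        self.count = min(self.count + 1, self.ceiling)
    def periodic_decrement(self):      # second predetermined period elapsed
        self.count = max(self.count - 1, 0)
    def read(self):                    # read step: obtain the count value
        return self.count
```

Computing a performance level from `read()` would then be a matter of scaling the count against the ceiling.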

US Pat. No. 10,169,186

EFFICIENT TESTING OF DIRECT MEMORY ADDRESS TRANSLATION

International Business Ma...

10. An integrated circuit comprising:
a first translation table that includes a plurality of translation entries, wherein each of the plurality of translation entries contains translation information to translate direct memory access (DMA) addresses for one of a plurality of agents connected to the integrated circuit; and
a random DMA mode (RDM) circuit that randomly selects a translation control entry in the first translation table during a test mode to select from a plurality of entries in a translation control entry table when there is only a single agent connected to the integrated circuit;
wherein the RDM circuit comprises:
a random generator signal connected to a specified input of a multiplexer;
a select input of the multiplexer that selects the specified input; and
an output of the multiplexer that provides the random generator signal to the translation table during the test mode to randomly select an entry of the translation table during testing; and
wherein the first translation table is a translation validation table which contains translation validation entries for each of the plurality of agents that point to a translation control entry table and wherein the RDM circuit randomly selects a translation validation entry that points to a translation control entry in the translation control entry table.

US Pat. No. 10,169,185

EFFICIENT TESTING OF DIRECT MEMORY ADDRESS TRANSLATION

International Business Ma...

9. A computer-implemented method of testing a link processing unit on an integrated circuit, the method comprising:
loading a first translation table with a plurality of translation entries, wherein each of the plurality of translation entries contains translation information to translate direct memory access (DMA) addresses for one of a plurality of agents connected to the integrated circuit, wherein the agents comprise a central processing unit and at least one graphics processing unit;
using a single agent connected to the integrated circuit in a test mode to test all the translation entries in the translation table by randomly selecting an entry of the table with translation information to run multiple consecutive tests of address translation using the single agent;
providing a random signal on an input of a multiplexer;
selecting the random signal using a select input to the multiplexer; and
providing the random signal from the multiplexer to the translation table to randomly select an entry of a translation control entry table during the test mode.

US Pat. No. 10,169,184

IDENTIFICATION OF STORAGE PERFORMANCE SHORTFALLS

International Business Ma...

1. A method for determining performance shortfall in a storage system, the method comprising:
determining, by one or more processors, latency in data transfer rates for a first storage system exceeds a threshold during a specified time frame;
creating, by one or more processors, a snapshot of each write operation performed during the specified time frame based on a log of I/O operations and associated operational parameters, wherein each snapshot identifies a write operation and a set of associated operational parameters;
performing, by one or more processors, on a second storage system, a portion of the write operations based on the created snapshots prior to performing a replay on the second storage system;
performing, by one or more processors, the replay on the second storage system based at least on the log of I/O operations and the associated operational parameters for the specified time frame, wherein the replay includes a remaining portion of the write operations based on the created snapshots;
comparing, by one or more processors, an I/O operation response time on the second storage system to an I/O operation response time on the first storage system for the specified time frame; and
responsive to determining that the I/O operation response time on the second storage system exceeds the I/O operation response time on the first storage system for the specified time frame, identifying, by one or more processors, the first storage system was not the cause of the performance shortfall.
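The final comparison step reduces to an exoneration test over the replayed window. A sketch, with averaging over per-operation response times as an invented simplification (the claim does not say how the two response times are aggregated):

```python
def first_system_not_cause(first_response_times, second_response_times):
    """Compare replayed I/O response times: if the second (reference)
    system is slower over the window, the first system is exonerated."""
    avg_first = sum(first_response_times) / len(first_response_times)
    avg_second = sum(second_response_times) / len(second_response_times)
    return avg_second > avg_first
```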

US Pat. No. 10,169,183

MOBILE DEVICE AND CHASSIS WITH CONTACTLESS TAGS TO DIAGNOSE HARDWARE AND SOFTWARE FAULTS

International Business Ma...

1. A method for diagnosing faults in a hardware appliance, the method comprising:
providing a distributed contactless tag datastore that includes a plurality of contactless tags with a directory on one of the contactless tags with directory information that identifies one or more other contactless tags of the distributed contactless tag datastore on which information to be read is located, the distributed contactless tag datastore being associated with a plurality of components in a hardware appliance;
writing system status information associated with the plurality of components to the distributed contactless datastore;
detecting a fault in at least one of the plurality of components of the hardware appliance;
writing information associated with the component fault to the distributed contactless datastore;
writing a pointer to the distributed contactless datastore containing the information associated with the component fault into one or more of the plurality of the contactless tags of the hardware appliance; and
in response to detecting a power fail in the hardware appliance, writing key information associated with the fault from the distributed contactless datastore to a contactless tag of the hardware appliance.

US Pat. No. 10,169,182

MONITORING LEVELS OF UTILIZATION OF DEVICE

International Business Ma...

1. A method for monitoring a level of utilization, the method comprising:
determining, by one or more embedded processors of a device, a threshold based, at least in part, on a count of service channels of the device;
determining, by one or more embedded processors of the device, an upper boundary value of a numerical range based, at least in part, on the count of service channels;
determining, by one or more embedded processors of the device, a lower boundary value of the numerical range based, at least in part, on the threshold;
determining, by one or more embedded processors of the device, whether a count of outstanding requests of the device is contained within the numerical range;
determining, by one or more embedded processors of the device, an estimated level of utilization of the device based, at least in part, on the upper boundary value, the lower boundary value, the count of service channels, and the count of outstanding requests; and
reporting, by one or more embedded processors of the device, the estimated level of utilization.
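The claim specifies which quantities feed the estimate but not the formulas, so the boundary derivations below are assumptions chosen only to make the steps concrete (threshold = half the channel count, upper boundary = twice it):

```python
def estimate_utilization(service_channels, outstanding_requests):
    """Estimate utilization from the channel count, boundaries, and
    outstanding-request count. Boundary formulas are assumed, not claimed."""
    threshold = service_channels // 2        # assumed derivation
    lower, upper = threshold, 2 * service_channels   # assumed boundaries
    if outstanding_requests < lower:
        return 0.0                           # below the range: near idle
    if outstanding_requests > upper:
        return 1.0                           # above the range: saturated
    return (outstanding_requests - lower) / (upper - lower)
```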

US Pat. No. 10,169,181

EFFICIENT VALIDATION OF TRANSACTIONAL MEMORY IN A COMPUTER PROCESSOR

International Business Ma...

1. An apparatus for testing a computer processor comprising:
a processor with a memory and hardware to support a memory transaction;
a test case executor that loads a transactional memory test into the memory of the processor, wherein the transactional memory test includes a transactional memory instruction that indicates to branch and execute code of the transactional memory test in a non-transaction mode when there is a failure of the transaction and after the hardware has attempted to restore the context of the processor to a state before the transaction.

US Pat. No. 10,169,179

METHODS AND SYSTEMS FOR MONITORING THE INTEGRITY OF A GPU

CHANNEL ONE HOLDINGS INC....

1. A method for monitoring integrity of a graphics processing unit (GPU), comprising:
a) determining a known-good result associated with an operation of the GPU;
b) generating a test image comprising a test subject using the operation of the GPU, the test subject being associated with the known-good result;
c) writing the test image to a video memory and writing the known-good result to a system memory;
d) writing the test subject from the test image in video memory to the system memory;
e) comparing the test subject in the system memory with the known-good result in the system memory; and
f) writing a flag to system memory indicating failure if comparing the test subject with the known-good result indicates a difference between the test subject and the known-good result.
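Steps b) through f) form a render/read-back/compare loop. A minimal sketch, modeling video and system memory as plain lists and taking the render operation as a callable (all invented stand-ins for the actual GPU and memory interfaces):

```python
def gpu_integrity_check(render_test_subject, known_good_result):
    """Render the test subject, copy it back to 'system memory',
    compare against the known-good result, and return a failure flag."""
    video_memory = render_test_subject()      # b)/c) draw into video memory
    system_memory = list(video_memory)        # d) read test subject back
    failure_flag = system_memory != list(known_good_result)  # e)/f) compare
    return failure_flag
```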

US Pat. No. 10,169,178

IMPLEMENTING SHARED ADAPTER CONFIGURATION UPDATES CONCURRENT WITH MAINTENANCE ACTIONS IN A VIRTUALIZED SYSTEM

International Business Ma...

1. A method for implementing concurrent shared adapter configuration updates with maintenance actions for an input/output (I/O) adapter in a computer system, said method comprising:
decoupling configuration of the adapter from a saved configuration state of the adapter during a recovery period;
responsive to receiving a configuration request during execution of an error recovery sequence, validating the received configuration request including a system hypervisor validating the received configuration request;
responsive to identifying a valid received configuration request, updating the saved configuration state;
responsive to updating the saved configuration state and the adapter being in the error recovery sequence, returning success to the configuration request;
responsive to the adapter completing the error recovery sequence, restoring the adapter to the updated saved configuration state; and
responsive to the updated saved configuration state, updating the adapter configuration to make the updated configuration available for use.

US Pat. No. 10,169,177

NON-DESTRUCTIVE ONLINE TESTING FOR SAFETY CRITICAL APPLICATIONS

XILINX, INC., San Jose, ...

1. An integrated circuit, comprising:
boot critical circuitry configured to change a mode of the integrated circuit from a boot mode to a test mode;
first circuitry;
logic built-in self test (LBIST) circuitry configured to perform a test on the first circuitry in response to switching from the boot mode to the test mode, wherein the test is non-destructive to the boot critical circuitry such that a boot mode state of the integrated circuit is preserved when performing the test; and
output isolation circuitry coupled to an output of a first scan chain in the first circuitry, wherein the output isolation circuitry is configured to prevent output signals generated by the first scan chain from destroying the boot mode state during the test.

US Pat. No. 10,169,176

SCALING OUT A HYBRID CLOUD STORAGE SERVICE

International Business Ma...

1. A method comprising:
receiving a disaster recovery policy with respect to a first storage system, wherein the disaster recovery policy comprises the following information: (i) identification of a first set of nodes that are available to be used by a gateway in a disaster recovery mode; and (ii) identification of a second set of nodes that are available to be used in a recall storm mode;
pre-deploying resources according to the disaster recovery policy, to provide pre-deployed resources, wherein the pre-deployed resources comprise: (i) network bandwidth sufficient to meet a maximum data recovery time threshold; (ii) at least one load balancer sufficient to meet the maximum data recovery time threshold; (iii) a first storage tiering service installed on the first set of nodes; and (iv) a second storage tiering service installed on the second set of nodes;
monitoring the first storage system with respect to a configuration thereof;
determining that the first storage system underwent a configuration change; and
in response to determining that the first storage system underwent the configuration change, automatically adjusting the disaster recovery policy in accordance with the configuration change;
receiving information indicative of a disaster recovery situation with respect to a first set of data stored on the first storage system; and
activating the pre-deployed resources according to the disaster recovery policy, wherein activating the pre-deployed resources according to the disaster recovery policy comprises: (i) automatically adding the first set of nodes to at least one existing node group; (ii) automatically adding the second set of nodes to at least one existing node group; (iii) automatically restoring a name-space with respect to the first storage system; and (iv) initiating restoration of the first set of data.

US Pat. No. 10,169,175

PROVIDING FAILOVER CONTROL ON A CONTROL SYSTEM

GE Aviation Systems LLC, ...

1. A method of providing failover control in a computing system, the method comprising:
monitoring a data stream generated by a plurality of computing nodes in a computing system; and
selecting a first subset of the plurality of computing nodes based at least in part on the monitored data stream;
generating one or more control grant signals for each computing node of the first subset;
determining that the one or more control grant signals for one or more of the computing nodes in the first subset satisfies a predetermined threshold;
in response to determining the one or more control grant signals for one or more of the computing nodes in the first subset satisfies the predetermined threshold, granting control authority of the computing system to the one or more computing nodes of the first subset;
subsequent to granting control authority of the computing system to the one or more computing nodes of the first subset, identifying at least one control capable computing node that has not been granted control authority of the computing system; and
selecting the at least one control capable computing node as a second subset of the plurality of computing nodes.

US Pat. No. 10,169,174

DISASTER RECOVERY AS A SERVICE USING VIRTUALIZATION TECHNIQUE

International Business Ma...

1. A method comprising:
receiving a replication of an information technology environment;
identifying one or more core applications;
generating a recovery plan for the environment, the recovery plan comprising a first process and a second process, wherein the first process of the recovery plan is generated based on the identified one or more core applications; and
in response to the service provider receiving a disaster recovery request associated with the environment, the service provider executing a disaster recovery protocol, including:
simultaneously executing the first process and the second process, wherein the first process is configured to operate a workload associated with core applications of the environment, and wherein the second process is a background process configured to create a replica of the environment; and
after completion of the replica, migrating the workload to the replica.

US Pat. No. 10,169,173

PRESERVING MANAGEMENT SERVICES WITH DISTRIBUTED METADATA THROUGH THE DISASTER RECOVERY LIFE CYCLE

International Business Ma...

1. A method comprising:
during normal operation, at a first site, of a disaster recovery management unit comprising at least one customer workload machine and at least one management service machine implementing at least one management service, replicating to a remote disaster recovery site said at least one customer workload machine, said at least one management service machine, and metadata for said at least one management service implemented on said at least one management service machine, at least a portion of said metadata not being isolated within said at least one management service;
after a disaster at said first site, initiating a failover process comprising:
bringing up, at said remote disaster recovery site, a replicated version of said at least one customer workload machine;
bringing up, at said remote disaster recovery site, a replicated version of said at least one management service machine;
operating, at said remote disaster recovery site, said replicated version of said at least one customer workload machine and said replicated version of said at least one management service machine, in accordance with said metadata for said at least one management service implemented on said at least one management service machine; and
creating an initial snapshot of a distributed metadata state of said metadata for said at least one management service implemented on said replicated version of said at least one management service machine, wherein said distributed metadata is distributed across at least two of a provisioning service, a customer virtual machine, a hypervisor, a network switch or bridge, a storage system, and said replicated version of said at least one management service machine;
subsequent to initiating said failover process, initiating a failback process comprising:
creating a representation of state changes for said at least one management service implemented on said replicated version of said at least one management service machine made in said remote disaster recovery site since said failover process and calculating therefrom a delta description from said initial snapshot;
transmitting said delta description to said first site; and
creating a reverse replica of all the workload components from the remote disaster recovery site at the first site and playing back the delta description to restore a distributed metadata state that existed in the remote disaster recovery site and re-create it in the first site,
wherein said at least one management service comprises a provisioning service, said method further comprising:
subsequent to said step of operating said replicated version of said at least one customer workload machine and said replicated version of said at least one management service machine in accordance with said metadata for said at least one management service, carrying out and tracking additional provisioning at said remote disaster recovery site; and
subsequent to said additional provisioning, upon said first site coming back up, restoring said first site to reflect said tracked additional provisioning.

US Pat. No. 10,169,172

PASSIVE DETECTION OF LIVE SYSTEMS DURING CONTROLLER FAILOVER IN DISTRIBUTED ENVIRONMENTS

INTERNATIONAL BUSINESS MA...

1. A method for passive detection of live systems during controller failover in a distributed data processing environment, the method comprising:
configuring a second controller system as a failover controller in the distributed data processing environment, wherein the distributed data processing environment comprises a first controller system as a primary controller and a set of member systems reporting to the first controller system;
sorting the set of member systems according to heartbeat periods used by members in the set of member systems;
determining an amount of elapsed time since a failure of the first controller system in the distributed data processing environment;
selecting, from the sorted set of member systems, a first member system due to a first heartbeat period of the first member system being a shortest heartbeat period in all heartbeat periods in the sorted set of member systems;
computing, using a processor and a memory at the second controller system, a timeout period, wherein the timeout period is an amount of time remaining in the first heartbeat period after the amount of elapsed time; and
removing the first member system from the sorted set of member systems after the timeout period expires.
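The timeout computation in the claim above can be sketched as follows. This is a minimal illustration, not the patented implementation; the member identifiers and data shapes are assumptions for the example.

```python
def passive_detection_order(members, elapsed):
    """Sketch of the claimed passive-detection step: members are sorted by
    heartbeat period (shortest first), and each member's timeout is the time
    remaining in its heartbeat period after the elapsed time since the
    primary controller failed. `members` maps a hypothetical member id to
    its heartbeat period in seconds."""
    # Sort the member set by heartbeat period, shortest first.
    ordered = sorted(members.items(), key=lambda kv: kv[1])
    schedule = []
    for member_id, heartbeat in ordered:
        # A non-positive remainder means the member's heartbeat is already due.
        timeout = max(0.0, heartbeat - elapsed)
        schedule.append((member_id, timeout))
    return schedule

# Example: the primary failed 5 s ago; members report every 10 s, 30 s, 60 s.
plan = passive_detection_order({"m1": 30.0, "m2": 10.0, "m3": 60.0}, elapsed=5.0)
```

A member that has not reported by the end of its own timeout can then be removed from the sorted set, as the final claim step describes.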

US Pat. No. 10,169,171

METHOD AND APPARATUS FOR ENABLING TEMPORAL ALIGNMENT OF DEBUG INFORMATION

NXP USA, Inc., Austin, T...

1. A signal processing device for communication within a signal processing system comprising a master node and multiple signal processing devices including the signal processing device, the master node including circuitry and being in communication with the multiple signal processing devices, the signal processing device comprising:at least one processing core configured and arranged to execute computer program code and to transmit data across at least one data layer, including a data link layer;
at least one timestamp generation component, including circuitry, configured and arranged to generate at least one local timestamp value, and to provide the at least one local timestamp value;
a data link layer module, including circuitry, configured and arranged to receive the at least one local timestamp value for timestamping of data packets within the data link layer; and
at least one debug module, including circuitry, configured and arranged to:
receive the at least one local timestamp value and to cause temporal alignment of debug information across the multiple signal processing devices within the signal processing system by:
timestamping the debug information corresponding to the signal processing system based at least partly on the at least one local timestamp value and timing information obtained from the master node, and
outputting the timestamped debug information to a debug tool of the signal processing system.

US Pat. No. 10,169,170

CONTROLLING CONFIGURABLE VARIABLE DATA REDUCTION

Quantum Corporation, San...

1. An apparatus, comprising:a processor;
a memory; and
an interface to connect the processor, memory, and a set of logics, the set of logics comprising:
two or more boundary logics configured to determine a set of chunk boundaries for an object to be data reduced by a data reducer, wherein the chunk boundaries are two or more of run based, delimiter based, rules based, or data dependent; and
a control logic to control the data reducer to chunk the object based, at least in part, on the set of chunk boundaries,
where the two or more boundary logics are constrained to place boundaries in locations that satisfy one or more of, a minimum chunk size, and a maximum chunk size.
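One of the claimed boundary logics, delimiter-based chunking constrained by minimum and maximum chunk sizes, can be sketched as below. The function name and byte-oriented interface are illustrative assumptions, not the patent's design.

```python
def delimiter_boundaries(data: bytes, delimiter: bytes, min_size: int, max_size: int):
    """Sketch of a delimiter-based boundary logic: boundaries are placed just
    after a delimiter, but never before min_size or after max_size from the
    previous boundary."""
    boundaries = []
    start = 0
    while start < len(data):
        # Only search the window where a boundary is allowed to land.
        window = data[start + min_size : start + max_size]
        hit = window.find(delimiter)
        if hit >= 0:
            end = start + min_size + hit + len(delimiter)
        else:
            # No delimiter in the window: fall back to the maximum chunk size.
            end = min(start + max_size, len(data))
        boundaries.append(end)
        start = end
    return boundaries

chunks = delimiter_boundaries(b"aa|bbbb|cc|dd", b"|", min_size=3, max_size=6)
```

Note that a delimiter falling inside the first `min_size` bytes of a chunk is deliberately ignored, which is how the minimum-size constraint of the claim is satisfied.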

US Pat. No. 10,169,169

HIGHLY AVAILABLE TRANSACTION LOGS FOR STORING MULTI-TENANT DATA SETS ON SHARED HYBRID STORAGE POOLS

Cisco Technology, Inc., ...

1. A non-transitory machine-readable medium having executable instructions to cause one or more processing units to perform a method to store a transaction entry in a distributed storage system, wherein storage controller functions of the distributed storage system are separated from distributed storage system storage media, the distributed storage system storage media including a plurality of storage pools, the method comprising:receiving the transaction entry in a first storage pool of the plurality of storage pools of the distributed storage system, wherein the transaction entry is associated with storage controller functions of the distributed storage system that indicates an object is to be stored in at least one logical block address space of the distributed storage system storage media, the at least one logical block address space being defined over one or more storage containers of a plurality of storage containers associated with the plurality of storage pools;
looking up a transaction log to store the transaction entry, the transaction log being associated with a second storage pool of the plurality of storage pools, wherein the second storage pool is separate from the first storage pool, and wherein the transaction log is a log that is a history of actions executed by storage controller functions of the distributed storage system and includes one or more logical logs, wherein the logical log is a log defined over a logical block address space;
routing the transaction entry to the second storage pool, wherein the second storage pool stores the transaction entry in the transaction log; and
replicating the transaction log to another transaction log across a plurality of fault domains, wherein the plurality of fault domains comprises the plurality of storage pools and/or the plurality of storage containers; and
wherein a failure of a component for the transaction log associated with the second storage pool does not affect the another transaction log replicated across the plurality of fault domains.

US Pat. No. 10,169,168

METADATA RECOVERY FOR DE-DUPLICATED DATA

International Business Ma...

1. A system comprising: a memory; and a processor in communication with the memory, the processor configured to obtain instructions from the memory that cause the processor to perform a method comprising:receiving a data stream including a file to be stored in storage media, the storage media including a data storage entity and metadata storage entity;
dividing the received file into a plurality of chunks;
comparing each chunk of the plurality of chunks with existing chunks stored in the data storage entity;
for each chunk of the plurality of chunks that does not match any of the existing chunks:
storing the chunk in the data storage entity;
if the stored chunk is not the first chunk in the file or the last chunk in the file, embedding with the stored chunk a metadata field that includes a pointer to a chunk following the stored chunk in the file, and a pointer to a chunk preceding the stored chunk in the file;
if the stored chunk is the first chunk in the file, embedding with the stored chunk a metadata field that includes a pointer to a chunk following the stored chunk in the file and an indicator that the stored chunk is the first chunk in the file;
if the stored chunk is the last chunk in the file, embedding with the stored chunk a metadata field that includes an indicator that the stored chunk is the last chunk in the file, and the metadata field further includes a pointer to a chunk preceding the stored chunk in the file; and updating file metadata stored in the metadata storage entity to include a pointer to the stored chunk;
for each chunk of the plurality of chunks that does match an existing chunk: not storing the chunk in the data storage entity;
if the not-stored chunk is not the first chunk in the file or the last chunk in the file, updating a metadata field embedded in the existing chunk to include a pointer to a chunk following the not-stored chunk in the file, and a pointer to a chunk preceding the not-stored chunk in the file;
if the not-stored chunk is the first chunk in the file, updating the metadata field embedded in the existing chunk to include a pointer to a chunk following the not-stored chunk in the file and an indicator that the not-stored chunk is the first chunk in the file;
if the not-stored chunk is the last chunk in the file, updating the metadata field embedded in the existing chunk to include an indicator that the not-stored chunk is the last chunk in the file, and a pointer to a chunk preceding the not-stored chunk in the file; and
updating file metadata stored in the metadata storage entity to include a pointer to the existing chunk.
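The embedded prev/next metadata layout claimed above can be sketched as follows. Using content hashes as chunk identifiers and a dict as the data storage entity are assumptions for the example, not the patented format.

```python
import hashlib

def store_file(name, data, chunk_size, data_store, file_meta):
    """Sketch of the claimed de-dup layout: each stored chunk embeds a
    metadata field with prev/next pointers (chunk hashes here) and
    first/last indicators, so file structure is recoverable from the data
    store alone; matching chunks are not stored again, only their embedded
    metadata is updated."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    ids = [hashlib.sha256(c).hexdigest() for c in chunks]
    for i, (cid, chunk) in enumerate(zip(ids, chunks)):
        meta = {
            "prev": ids[i - 1] if i > 0 else None,
            "next": ids[i + 1] if i < len(ids) - 1 else None,
            "first": i == 0,
            "last": i == len(ids) - 1,
        }
        if cid in data_store:
            data_store[cid]["meta"].update(meta)   # existing chunk: metadata only
        else:
            data_store[cid] = {"data": chunk, "meta": meta}
        file_meta.setdefault(name, []).append(cid)  # file metadata points at every chunk

store, files = {}, {}
# The repeated b"AAAA" chunk is stored once; the second occurrence only
# updates the embedded metadata, per the matching-chunk branch of the claim.
store_file("a.txt", b"AAAABBBBAAAA", 4, store, files)
```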

US Pat. No. 10,169,167

REDUCED RECOVERY TIME IN DISASTER RECOVERY/REPLICATION SETUP WITH MULTITIER BACKEND STORAGE

International Business Ma...

1. A method for data recovery in a data processing environment, comprising:receiving, by a first computer, a signal that a second computer is back online after being offline, wherein the second computer was offline because of a failure;
taking, by the first computer, a first snapshot of a storage system, wherein the storage system includes a data hierarchy storage system that comprises different storage drives, wherein data that has a higher access frequency is stored on a first drive and data that has a lower access frequency is stored on a second drive;
retrieving, by the first computer, a previously taken second snapshot of the storage from a snapshot storage unit;
determining, by the first computer, a snapshot difference between the first snapshot and the second snapshot;
receiving, by the first computer, a determination if the snapshot difference is accurate or not;
transmitting, by the first computer, the snapshot difference and the first snapshot to the second computer;
transmitting, by the first computer, the data stored on the first drive to the second computer based on the determination if the snapshot difference is accurate or not;
promoting, by the first computer, the data stored on the second drive to be considered equivalent to data stored on the first drive; and
transmitting, by the first computer, the promoted data stored on the second drive to the second computer at the same transmission rate as the data stored on the first drive based on the determination if the snapshot difference is accurate or not.
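The snapshot-difference step in the claim above can be sketched with block maps. Representing a snapshot as a block-id-to-content mapping is an assumption for illustration only.

```python
def snapshot_difference(first, second):
    """Sketch of the snapshot-difference step: blocks that are new or
    changed in the first (fresh) snapshot relative to the second
    (previously taken) snapshot."""
    return {block: value for block, value in first.items()
            if second.get(block) != value}

older = {"b0": "x", "b1": "y"}
newer = {"b0": "x", "b1": "z", "b2": "w"}
delta = snapshot_difference(newer, older)
```

Only the delta then needs to be transmitted to the computer that came back online, which is what makes the recovery time reduction of the title possible.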

US Pat. No. 10,169,166

REAL-TIME FAULT-TOLERANT ARCHITECTURE FOR LARGE-SCALE EVENT PROCESSING

BEIJING CHUANGXIN JOURNEY...

1. A system, comprising:one or more processor devices; and
a plurality of event nodes, each for receiving a respective portion of a plurality of event notifications; and
a plurality of log aggregation nodes configured to receive the plurality of event notifications from the plurality of event nodes, a first log aggregation node of the plurality of log aggregation nodes being configured to publish the plurality of event notifications to a log, the log storing the event notifications for a first period of time;
a backup node configured to record the plurality of event notifications from the log in a separate data store and to store the event notifications for a second period of time that is longer than the first period of time; and
a processing node configured to retrieve the plurality of event notifications from the log and to process the plurality of event notifications according to a ruleset,
wherein, upon a failure of the processing node during processing of the plurality of event notifications and subsequent restoration of functionality of the processing node:
the processing node is configured to resume processing of the event notifications by retrieving remaining ones of the plurality of event notifications from the log when the subsequent restoration of functionality of the processing node is prior to expiration of the first period of time; and
the processing node is configured to resume processing of the event notifications by retrieving remaining ones of the plurality of event notifications from the backup node when the subsequent restoration of functionality of the processing node is after expiration of the first period of time.
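The two recovery branches of the claim, resuming from the short-lived log versus the longer-lived backup store, can be sketched as one source-selection decision. Class and attribute names are illustrative assumptions.

```python
class ReplayingProcessor:
    """Sketch of the claimed recovery choice: resume from the log while its
    retention window still covers the downtime, otherwise fall back to the
    backup node's separate data store, which keeps events longer."""
    def __init__(self, log, backup, log_retention):
        self.log = log                  # events kept for `log_retention` seconds
        self.backup = backup            # events kept for a longer second period
        self.log_retention = log_retention
        self.processed = []

    def resume(self, offset, downtime):
        # Pick the source by comparing downtime to the log's retention window.
        source = self.log if downtime < self.log_retention else self.backup
        for event in source[offset:]:
            self.processed.append(event)
        return self.processed

p = ReplayingProcessor(log=["e1", "e2", "e3"], backup=["e1", "e2", "e3"],
                       log_retention=60)
# Node restored 10 s after failing, having already processed one event:
resumed = p.resume(offset=1, downtime=10)
```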

US Pat. No. 10,169,165

RESTORING DATA

International Business Ma...

1. A computer-implemented method for restoring a plurality of pieces of data in a data processing system, comprising:storing management information for managing a plurality of pieces of data as a plurality of files in a storage device provided in the data processing system to restore the management information, the data including medium identification information for identifying recording media associated with the individual plurality of pieces of data;
accepting connection of the plurality of recording media storing the plurality of files and information on the plurality of files;
storing information on one or more of the plurality of files including one or more pieces of the medium identification information in the storage device, wherein the one or more files comprises less than 0.1% of the plurality of files, wherein the one or more files are grouped by respective tape identifiers in a preferred-recall list, wherein preferred recall is executed on files written within 48 hours before backup, wherein respective groups are sorted by block numbers associated with respective files of the one or more files, and wherein the information for respective files of the one or more files comprises a respective file name, a respective file path, a respective physical position, a respective access control list, and respective extended attributes of the respective file;
switching to a setting for reading the one or more files using the information on the one or more files, the information being stored in the storage device, instead of the information on the plurality of files, the information being stored in the plurality of recording media, wherein switching to a setting for reading the one or more files further comprises designating an option for mounting a tape file system without forming dcache files, wherein designating the option causes only dentry files to be created for respective files in the preferred-recall list and no dcache file to be created;
identifying one or more recording media from which the one or more files are to be read on the basis of the information on the one or more files and reading the one or more files from the identified one or more recording media to the storage device, wherein identifying one or more recording media further comprises identifying the one or more recording media based on respective tape identifiers located in respective file names for the one or more files;
deleting the information on the one or more files from the storage device by unmounting the tape file system;
switching to a second setting for reading the plurality of files using the information on the files stored in the plurality of recording media, wherein the second setting further comprises undesignating the option for mounting a tape file system without forming dcache files; and
with a second data processing system that holds the plurality of files, executing a process of writing the plurality of files to the plurality of recording media, wherein respective files of the plurality of files are in a resident state during writing the plurality of files, wherein the resident state comprises a state in which the file on the drive is deleted and the entity of the file is present only in the shared disk.

US Pat. No. 10,169,164

BACKUPS USING APPLICATION MAPS

EMC IP Holding Company LL...

1. A method for organizing information about one or more client computers connected to a network, wherein each client computer contains one or more data modules, the method comprising:sending, by a profiler executed by a processor, an information request to each client computer of the one or more client computers, the information request requesting a backup degree for each data module of the one or more data modules on the client computer, the backup degree describing at least an amount of data associated with a data module currently stored in the client computer that has been backed up to a backup node, wherein the backup node is separated from the client computer, wherein each data module describes one or more data files or data file locations;
receiving, by the profiler, an information message from each client computer of the one or more client computers, wherein each information message contains the backup degree for each data module of the one or more data modules on the client computer;
storing, by the profiler, information contained in the information message in a profile information store, wherein the profile information store comprises a plurality of entries, each entry corresponding to one of a plurality of client computer identifiers, one or more data module identifiers associated with each client computer identifier, and a backup degree associated with each data module identifier;
generating a single graphical map by a mapper executed by the processor, wherein the single graphical map includes a first graphical representation of each of the one or more client computers, a second graphical representation of each of the one or more data modules on each client computer, and a third graphical representation of each data module's backup degree, wherein the third graphical representation of each data module's backup degree is graphically correlated to the second graphical representation of the data module that the backup degree describes, and wherein the third graphical representation of each data module's backup degree graphically indicates a proportional amount of data associated with the data module that has been backed up to the backup node, wherein the single graphical map also includes a fourth graphical representation of one of a total degree of backup or an average degree of backup of all of the data modules contained on all of the one or more client computers connected to the network; and
displaying the single graphical map on a display device.

US Pat. No. 10,169,163

MANAGING BACKUP OPERATIONS FROM A CLIENT SYSTEM TO A PRIMARY SERVER AND SECONDARY SERVER

International Business Ma...

1. A computer program product for replicating client data from a client system between a primary server and a secondary server, wherein the computer program product comprises at least one computer readable storage medium including a client program embodied therewith, wherein the client program is executable by a processor to cause operations, the operations comprising:determining, by the client program, whether a state of data on the secondary server permits a backup operation in response to determining that the primary server is unavailable;
determining whether a failover delay timer has expired in response to determining that the state of the data on the secondary server permits the backup operation; and
attempting, by the client program, to connect to the primary server to perform the backup operation at the primary server in response to determining that the failover delay timer has not expired.

US Pat. No. 10,169,162

CONVEYING VALUE OF IMPLEMENTING AN INTEGRATED DATA MANAGEMENT AND PROTECTION SYSTEM

Commvault Systems, Inc., ...

1. A computer-implementable method of providing data associated with implementing an integrated data management and protection system, the method comprising:maintaining value functions for quantifying value associated with implementing an integrated data management and protection system;
receiving user input regarding value associated with implementing the integrated data management and protection system,
wherein the value associated with implementing the integrated data management and protection system relates to at least two of cost reduction, risks, and obtaining value from data;
providing an interface to display a request for a user to submit, via the interface, information associated with data management and protection,
wherein the interface includes data entry fields for receiving information associated with data management and protection, and
wherein the information associated with data management and protection includes information related to at least three of—
complexity of an existing data management and protection system,
data protection reliability,
data recovery time, and
operational oversight for the existing data management and protection system;
receiving user-submitted information via the data entry fields of the interface;
identifying value functions for quantifying value associated with implementing the integrated data management and protection system;
applying the value functions to the received information,
wherein the value functions result in value data for the integrated data management and protection system when applied to the received user-submitted information, and
wherein the applying includes generating a single, combined index score that aggregates the value data obtained by applying the value functions to the user-submitted information; and,
generating a value dashboard that displays the value data for the integrated data management and protection system,
wherein the value data is indicative of value associated with implementing the integrated data management and protection system versus use of the existing data management and protection system.

US Pat. No. 10,169,161

HIGH SPEED BACKUP

EMC IP Holding Company LL...

1. A method of backing up data of a target volume to a virtual hard disk (VHD) format, comprising:receiving a hint data indicating a last known file system extent associated with a previously-processed data zone;
using the hint data to determine a starting file system extent at which to begin processing file system extent data of the target volume to find file system extents associated with a VHD data zone that is currently being processed; and
backing up the data of the target volume at least in part by using the hint data to skip over one or more file system extents found previously to be associated with one or more previously-processed VHD data zones.
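The hint-based skipping in the claim above can be sketched over a sorted extent list. Treating the hint as the index of the last extent known to belong to a previously processed zone is an interpretive assumption for the example.

```python
def extents_for_zone(extents, zone_start, zone_end, hint):
    """Sketch of the claimed hint use: the scan for the current VHD data
    zone starts just past the hinted extent instead of at extent 0, skipping
    extents already matched to previously processed zones. `extents` is a
    sorted list of (start, length) tuples."""
    selected = []
    for i in range(hint + 1, len(extents)):     # skip previously processed extents
        start, length = extents[i]
        if start >= zone_end:
            break                               # sorted list: nothing further overlaps
        if start + length > zone_start:
            selected.append(i)                  # extent overlaps the current zone
    next_hint = selected[-1] if selected else hint
    return selected, next_hint

extents = [(0, 10), (10, 10), (20, 10), (30, 10)]   # (start, length), sorted
found, hint = extents_for_zone(extents, zone_start=20, zone_end=40, hint=1)
```

The returned `next_hint` becomes the hint for the following zone, so each zone's scan does progressively less work.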

US Pat. No. 10,169,160

DATABASE BATCH UPDATE METHOD, DATA REDO/UNDO LOG PRODUCING METHOD AND MEMORY STORAGE APPARATUS

Industrial Technology Res...

1. A database batch update method applicable to a data storage apparatus comprising a first memory, a second memory and a third memory, wherein the database batch update method comprises:sequentially receiving a plurality of data access commands, wherein the data access commands request access to data from the first memory, wherein the third memory is mirrored to the first memory before the data access commands are sequentially received;
determining that a first subset of the data access commands belong to a first type and a second subset of the data access commands belong to a second type, wherein the data access commands belonging to the first type comprise commands that update data without returning it in real-time, and wherein the data access commands belonging to the second type comprise commands that return data in real-time without updating it;
storing the first subset of the data access commands in the second memory;
sequentially updating the first memory according to the data access commands stored in the second memory in an order of physical addresses of the first memory;
determining whether the data corresponding to the second subset of the data access commands needs to be updated by inspecting the data access commands stored in the second memory;
if it is determined that the data corresponding to the second subset of the data access commands needs to be updated, updating and returning the data corresponding to the second subset of the data access commands according to the data access commands stored in the second memory; and
if it is determined that the data corresponding to the second subset of the data access commands does not need to be updated, accessing and returning the data corresponding to the second subset of the data access commands from the third memory,
wherein an access rate of sequential physical addresses in the first memory is larger than an access rate of random physical addresses in the first memory, and
wherein an access rate of sequential physical addresses in the third memory is larger than an access rate of random physical addresses in the third memory.
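The three-memory scheme in the claim above can be sketched as follows. The class shape and the use of Python lists and dicts for the memories are assumptions for illustration, not the patented apparatus.

```python
class BatchUpdater:
    """Sketch of the claimed scheme: a main store favoring sequential access
    (first memory), a buffer of pending updates (second memory), and a
    read-only mirror taken before updates began (third memory)."""
    def __init__(self, main):
        self.main = list(main)        # first memory
        self.pending = {}             # second memory: address -> new value
        self.mirror = list(main)      # third memory, mirrored beforehand

    def update(self, addr, value):
        # First-type command: update data without returning it in real-time.
        self.pending[addr] = value

    def read(self, addr):
        # Second-type command: return data in real-time without updating it.
        if addr in self.pending:      # data was updated: serve the buffered value
            return self.pending[addr]
        return self.mirror[addr]      # otherwise serve from the mirror

    def flush(self):
        # Apply buffered updates in ascending physical-address order, matching
        # the first memory's faster sequential access rate.
        for addr in sorted(self.pending):
            self.main[addr] = self.pending[addr]
        self.pending.clear()

db = BatchUpdater([10, 20, 30])
db.update(2, 99)
v_updated, v_plain = db.read(2), db.read(0)
db.flush()
```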

US Pat. No. 10,169,159

AUTOMATED DATA RECOVERY FROM REMOTE DATA OBJECT REPLICAS

International Business Ma...

1. A method for recovering data objects in a distributed data storage system, the method comprising:storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network, wherein a first data center of the one or more data centers includes one or more clusters and each cluster of the one or more clusters includes a respective plurality of compute nodes and each cluster further includes a respective database that stores metadata specifying a list of candidate clusters from which the one or more replicas can be recovered;
recording health information metadata that is within the database about said one or more replicas, wherein the health information comprises data about availability of a replica to participate in a restoration process;
in response to determining that the first data object is to be recovered, calculating a query-priority for the first data object;
querying, based on the calculated query-priority, the health information metadata that is within the database for the one or more replicas to determine which of the one or more replicas is available for restoration of the first data object;
calculating a restoration-priority for the first data object based on the health information metadata that is within the database for the one or more replicas; and
restoring the first data object from the one or more of the available replicas, based on querying the list of candidate clusters and further based on the calculated restoration-priority, wherein the query-priority is calculated based on a priority function P(D)=Func(R(D),C(D),n), where:
D represents a data object with multiple replicas in multiple clusters;
R(D)_i, i=1 . . . n, where "i" and "n" are natural numbers, represents the remote replica indexed i of D out of n remote replicas;
C(D) represents the cost of losing n replicas of D;
P(D) represents the priority given by the system for the query operation of D; and
Func( ) represents some function.

US Pat. No. 10,169,158

APPARATUS, SYSTEM AND METHOD FOR DATA COLLECTION, IMPORT AND MODELING

International Business Ma...

1. A method for data analysis of a backup system, the method comprising:extracting predetermined configuration and state information from respective dump files of a plurality of different computer systems, the predetermined configuration and state information is in different native formats based on the respective dump file from which it was extracted;
translating the predetermined configuration and state information from a native format used by each of the plurality of different computer systems into a normalized format, wherein the translated configuration and state information comprises configuration and state information irrespective of which of the plurality of different computer systems generated it; and
determining what components are in the backup system, how the backup system works, how data is stored in the backup system, how efficiently data is stored in the backup system, a total capacity of the backup system, a remaining capacity of the backup system, and an operating cost of the backup system by analyzing the normalized predetermined configuration and state information.
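The translate-to-normalized-format step in the claim above can be sketched with two assumed native dump formats (JSON for one system, key=value lines for another); the formats, field names, and normalized schema are all illustrative assumptions.

```python
import json

def normalize(dump_name: str, raw: str):
    """Sketch of the claimed translation step: per-system parsers map each
    native dump into one normalized schema, so later analysis is agnostic
    to which computer system generated the dump."""
    if dump_name.endswith(".json"):                 # system A dumps JSON
        native = json.loads(raw)
        return {"capacity": native["cap_total"], "used": native["cap_used"]}
    if dump_name.endswith(".txt"):                  # system B dumps key=value lines
        native = dict(line.split("=", 1)
                      for line in raw.splitlines() if "=" in line)
        return {"capacity": int(native["total"]), "used": int(native["used"])}
    raise ValueError(f"unknown dump format: {dump_name}")

a = normalize("sysA.json", '{"cap_total": 100, "cap_used": 40}')
b = normalize("sysB.txt", "total=200\nused=50")
```

Because both results share one schema, the capacity and efficiency analysis in the final claim step can run over them uniformly.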

US Pat. No. 10,169,157

EFFICIENT STATE TRACKING FOR CLUSTERS

INTERNATIONAL BUSINESS MA...

1. A method for efficient state tracking for clusters by a processor device in a distributed shared memory architecture, the method comprising:performing an asynchronous calculation of deltas while concurrently receiving client requests and concurrently tracking client request times;
responding to each of the client requests for data of the same concurrency during a certain period with currently executing client requests with updated views based upon results of the asynchronous calculation;
concurrently executing each of the client requests occurring after the certain period on the updated views, wherein all deltas and views are updated; and
bounding a latency for the client requests by a time necessitated for the asynchronous calculation of at least two of the deltas; wherein a first state snapshot is atomically taken while simultaneously calculating the at least two of the deltas, and each of the client requests received during the certain period are served with the updated views of the asynchronously calculated at least two of the deltas.

US Pat. No. 10,169,156

AUTOMATIC RESTARTING OF CONTAINERS

International Business Ma...

1. A method for automatically restarting a container, comprising:reading, by a computing device, custom predefined policy information including one or more condition categories, each of which having a respective reference to a respective log file and defining at least one respective condition for restarting the container;
monitoring, by an agent included in a container engine executed by the computing device, one or more respective log files, each of which corresponding to the respective reference to the respective log file of a corresponding condition category of the one or more condition categories, to detect an occurrence of any one of the at least one condition defined by any one of the one or more condition categories of the custom predefined policy information;
detecting, by the agent, the occurrence of the any one of the at least one condition based on a presence of a string of characters, corresponding to the any one of the at least one condition, in a log file of a corresponding condition category of the any one of the at least one condition to which the custom predefined policy information refers, the string of characters being generated to the log file of the corresponding condition category on behalf of the container;
in response to the detecting of the occurrence of the any one of the at least one condition, saving, by the agent, a state of the container including a state of one or more applications within the container;
automatically restarting the container, by the agent, after detecting the occurrence and saving the state of the container; and
after the automatic restarting of the container, restoring, by the agent, the state of the container, including the state of the one or more applications, wherein the one or more applications continue executing from where the one or more applications left off, thereby improving performance of the computing device.
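The condition-detection step of the claim, matching trigger strings in the log files named by the policy, can be sketched as below. The policy shape and log access are illustrative stand-ins, not a real container-engine API.

```python
def check_restart_conditions(policy, read_log):
    """Sketch of the claimed agent check: each condition category in the
    custom predefined policy references a log file and defines a trigger
    string; the first category whose string appears in its log is returned,
    so the caller can save container state and restart."""
    for category, rule in policy.items():
        log_text = read_log(rule["log_file"])
        if rule["trigger"] in log_text:
            return category           # condition occurred: restart is due
    return None

policy = {
    "oom": {"log_file": "/var/log/app.log", "trigger": "OutOfMemoryError"},
    "deadlock": {"log_file": "/var/log/app.log", "trigger": "deadlock detected"},
}
logs = {"/var/log/app.log": "... java.lang.OutOfMemoryError: heap ..."}
hit = check_restart_conditions(policy, logs.get)
```

On a hit, the claim's remaining steps apply: save the container state (including application state), restart, then restore so the applications continue where they left off.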

US Pat. No. 10,169,155

SYSTEM AND METHOD FOR SYNCHRONIZATION IN A CLUSTER ENVIRONMENT

EMC IP Holding Company LL...

1. A computer-implemented method comprising:performing, via a first computing device, a copy sweep operation to a first range of data on a source storage device;
determining that the copy sweep operation has failed;
sending a message to a second computing device to suspend I/O operations to the first range of data; and
retrying the copy sweep operation based upon, at least in part, determining that the copy sweep operation has failed, wherein the copy sweep operation is retried without the first computing device receiving acknowledgement that the I/O operations to the first range of data are suspended by the second computing device.

US Pat. No. 10,169,154

DATA STORAGE SYSTEM AND METHOD BY SHREDDING AND DESHREDDING

International Business Ma...

1. A method for encoding data for storage in a plurality of storage units by use of at least one processor comprising:dividing data into a set of separate pieces of data;
performing a redundancy function and a plurality of transformations on a separate piece of data of the set of separate pieces of data to generate a plurality of encoded data elements, wherein a threshold number of encoded data elements of the plurality of encoded data elements is needed to recover the separate piece of data, in which the threshold number of encoded data elements is less than all of the plurality of encoded data elements, wherein the plurality of transformations includes first transformations performed before performing the redundancy function and second transformations performed after performing the redundancy function;
generating metadata regarding the plurality of encoded data elements, wherein the metadata includes identification for each encoded data element and sequencing information regarding an order in which the redundancy function and the plurality of transformations were performed;
sending the plurality of encoded data elements to the plurality of storage units; and
sending the metadata to one of the storage units of the plurality of storage units or to another storage unit separately from sending the plurality of encoded data elements to the plurality of storage units.
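A minimal shred/deshred round trip in the spirit of the claim can be sketched with XOR parity as the redundancy function (an assumption: the claim leaves the redundancy function unspecified, and omits the pre/post transformations). Here any `n_data` of the `n_data + 1` encoded elements suffice to recover the piece, and the metadata records length and operation order.

```python
from functools import reduce

def shred(piece: bytes, n_data: int):
    """Split a piece into n_data elements plus one XOR parity element."""
    size = -(-len(piece) // n_data)                  # ceil division
    padded = piece.ljust(size * n_data, b"\0")
    elements = [padded[i * size:(i + 1) * size] for i in range(n_data)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*elements))
    # Metadata carries sequencing information, as the claim requires.
    meta = {"length": len(piece), "order": ["split", "xor-parity"]}
    return elements + [parity], meta

def deshred(elements, meta, missing: int):
    """Recover the piece with one data element lost: XOR of the survivors
    (including parity) rebuilds the missing element."""
    present = [e for i, e in enumerate(elements) if i != missing and e is not None]
    rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*present))
    parts = elements[:-1]                            # drop the parity element
    parts = parts[:missing] + [rebuilt] + parts[missing + 1:]
    return b"".join(parts)[:meta["length"]]

shreds, meta = shred(b"hello world", 4)
damaged = list(shreds)
damaged[2] = None                                    # simulate one lost storage unit
restored = deshred(damaged, meta, missing=2)
```

XOR parity gives a threshold of n out of n+1; the claim's general threshold scheme would use a stronger erasure code, but the storage and metadata flow is the same.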

US Pat. No. 10,169,153

REALLOCATION IN A DISPERSED STORAGE NETWORK (DSN)

INTERNATIONAL BUSINESS MA...

1. A computing device comprising:an interface configured to interface and communicate with a dispersed storage network (DSN);
memory that stores operational instructions; and
a processing module operably coupled to the interface and to the memory, wherein the processing module, when operable within the computing device based on the operational instructions, is configured to:
within a dispersed or distributed storage network (DSN) that includes a plurality of storage units (SUs) that distributedly store a set of encoded data slices (EDSs) associated with a data object, during a transition from a first system configuration of a Decentralized, or Distributed, Agreement Protocol (DAP) to a second system configuration of the DAP, direct at least one SU of the plurality of SUs to service a data access request based on at least one EDS of the set of EDSs based on a DAP transition mapping between the first system configuration of the DAP to the second system configuration of the DAP, wherein the data object is segmented into a plurality of data segments, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce the set of EDSs.
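One common way a Decentralized Agreement Protocol is realized is weighted rendezvous-style hashing; under that assumption, the DAP transition mapping is simply the set of slices whose highest-scoring storage unit differs between the two system configurations. This sketch is not the patent's algorithm, only a plausible instance:

```python
import hashlib

def dap_rank(slice_name, storage_units):
    """Pick the SU with the highest deterministic score for this slice name
    (rendezvous hashing as an assumed stand-in for the DAP)."""
    def score(su):
        return int(hashlib.sha256(f"{slice_name}|{su}".encode()).hexdigest(), 16)
    return max(storage_units, key=score)

def transition_mapping(slice_names, old_sus, new_sus):
    """Map each slice whose responsible SU changes between configurations;
    during the transition, requests for these slices are directed per the map."""
    moves = {}
    for name in slice_names:
        src, dst = dap_rank(name, old_sus), dap_rank(name, new_sus)
        if src != dst:
            moves[name] = (src, dst)
    return moves
```

Because the score is a pure function of slice name and unit, every device computes the same mapping independently, which is the point of a decentralized agreement.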

US Pat. No. 10,169,152

RESILIENT DATA STORAGE AND RETRIEVAL

International Business Ma...

1. A computer-implemented method for data recovery following loss of a volume manager, the method comprising:determining that the volume manager, and an associated volume manager index, for a distributed storage have been lost;
installing, in response to determining that the volume manager has been lost, a new volume manager, the new volume manager lacking an associated volume manager index;
receiving location information and credentials to access the distributed storage;
receiving a command to recover data from the distributed storage, the data to be recovered comprising one or more data files, each data file stored as two or more data portions, each data portion comprising metadata, the metadata comprising a file ID tag;
attempting to retrieve each data portion from the distributed storage;
retrieving a first data portion and recording a first location in the distributed storage that the first data portion was retrieved from;
reading the first file ID tag attached to the first data portion; and
constructing, in response to determining that the associated volume manager index has been lost, a new volume manager index by storing the first file ID tag and the first location associated with the first data portion in the distributed storage in the new volume manager index such that the new volume manager index provides a reference, to the new volume manager, for the first location and the first file ID tag, the reference associated with the first data portion.
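The index-reconstruction step generalizes naturally from the first data portion to all retrievable portions: scan the distributed storage, read each portion's file ID tag, and record tag plus location in the new index. A minimal sketch, modeling the storage as a `{location: portion}` mapping (an assumption of this example):

```python
def rebuild_index(distributed_storage):
    """Rebuild a lost volume-manager index: each retrieved data portion
    contributes (file ID tag -> location) references for the new manager."""
    new_index = {}
    for location, portion in distributed_storage.items():
        file_id = portion["file_id"]          # the file ID tag in the metadata
        new_index.setdefault(file_id, []).append(location)
    return new_index
```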

US Pat. No. 10,169,151

UTILIZING REQUEST DEADLINES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a dispersed storage and task (DST) processing unit that includes a processor, the method comprises:generating a first plurality of access requests that include a first execution deadline time, the first plurality of access requests for transmission via a network to a corresponding first subset of a plurality of storage units;
receiving a first deadline error notification via the network from a first storage unit of the first subset;
calculating a missed deadline cost value in response to receiving the first deadline error notification;
comparing the missed deadline cost value to a new request cost threshold;
selecting a new one of the plurality of storage units not included in the first subset in response to receiving the first deadline error notification;
generating a new access request for transmission to the new one of the plurality of storage units via the network that includes an updated execution deadline time, wherein the new access request is based on a one of the first plurality of access requests sent to the first storage unit of the first subset, wherein the new one of the plurality of storage units is selected and the new access request is generated for transmission to the new one of the plurality of storage units when the missed deadline cost value compares favorably to the new request cost threshold; and
generating a proceed with execution notification for transmission via the network to the first storage unit of the first subset indicating a request to continue executing the access request when the missed deadline cost value compares unfavorably to the new request cost threshold.
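The branch in this claim (re-issue the request to a fresh unit versus tell the original unit to proceed) turns on how the missed-deadline cost compares to the new-request cost threshold. Reading "compares favorably" as "exceeds" is an assumption; the structure is sketched below with illustrative request shapes:

```python
def handle_deadline_error(missed_cost, cost_threshold, all_units,
                          first_subset, original_request, updated_deadline):
    """On a deadline error: if retrying is worth it, build a new access
    request for a storage unit outside the first subset with an updated
    deadline; otherwise notify the original unit to proceed."""
    if missed_cost > cost_threshold:               # favorable comparison (assumed)
        new_unit = next(u for u in all_units if u not in first_subset)
        new_request = dict(original_request,
                           target=new_unit, deadline=updated_deadline)
        return "retry", new_request
    return "proceed", original_request             # proceed-with-execution path
```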

US Pat. No. 10,169,150

CONCATENATING DATA OBJECTS FOR STORAGE IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a computing device of a dispersed storage network (DSN), the method comprises:identifying an independent data object of a plurality of independent data objects for retrieval from DSN memory of the DSN, wherein the plurality of independent data objects is combined to produce a concatenated data object and wherein the concatenated data object is encoded in accordance with a dispersed storage error encoding function to produce a set of encoded data slices;
identifying an encoded data slice of the set of encoded data slices corresponding to the independent data object based on a mapping of the plurality of independent data objects in a data matrix;
sending a retrieval request to a storage unit of the DSN memory regarding the encoded data slice; and
when the encoded data slice is received, decoding the encoded data slice in accordance with the dispersed storage error encoding function and the mapping to reproduce the independent data object.

US Pat. No. 10,169,149

STANDARD AND NON-STANDARD DISPERSED STORAGE NETWORK DATA ACCESS

International Business Ma...

1. A method comprises:determining, by a computing device of a dispersed storage network (DSN), whether to utilize a non-standard DSN data accessing protocol or a standard DSN data accessing protocol to access data from the DSN, wherein the data is dispersed storage error encoded into one or more sets of encoded data slices and wherein the one or more sets of encoded data slices are stored in a set of storage units of the DSN;
when the computing device determines to use the non-standard DSN data accessing protocol:
generating, by the computing device, a set of non-standard data access requests regarding the data, wherein a non-standard data access request of the set of non-standard data access requests includes a network identifier of a storage unit of the set of storage units, a data identifier corresponding to the data, and a data access function;
sending, by the computing device, the set of non-standard data access requests to at least some storage units of the set of storage units, which includes the storage unit;
converting, by the storage unit, the non-standard data access request into one or more DSN slice names;
determining, by the storage unit, that the one or more DSN slice names are within a slice name range allocated to the storage unit; and
when the one or more DSN slice names are within the slice name range, executing, by the storage unit, the data access function regarding one or more encoded data slices corresponding to the one or more DSN slice names.
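On the storage-unit side, the non-standard path reduces to: derive slice names from the data identifier, gate on the unit's allocated slice-name range, and only then run the access function. The name-derivation scheme below is purely illustrative:

```python
def handle_nonstandard_request(request, allocated_range, slice_store):
    """A storage unit converts a non-standard data access request into DSN
    slice names and executes the access function only when every derived
    name falls inside this unit's allocated slice-name range."""
    names = [f"{request['data_id']}-{i}" for i in range(request["width"])]
    lo, hi = allocated_range
    if not all(lo <= n <= hi for n in names):
        return None                         # names outside this unit's range
    fn = request["access_function"]
    return [fn(slice_store.get(n)) for n in names]
```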

US Pat. No. 10,169,148

APPORTIONING STORAGE UNITS AMONGST STORAGE SITES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method of apportioning storage units in a dispersed storage network (DSN), the method comprising:generating storage unit apportioning data indicating a mapping of a plurality of desired numbers of storage units, represented as a plurality of numerical values in accordance with the plurality of desired numbers, to a plurality of storage sites based on site reliability data, wherein the mapping includes a first desired number of storage units corresponding to a first one of the plurality of storage sites that is greater than a second desired number of storage units corresponding to a second one of the plurality of storage sites in response to the site reliability data indicating that a first reliability score corresponding to the first one of the plurality of storage sites is more favorable than a second reliability score corresponding to the second one of the plurality of storage sites; and
allocating a plurality of storage units to the plurality of storage sites based on the storage unit apportioning data, wherein each of the plurality of storage units includes at least one processor and at least one memory device.
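The apportioning rule only requires that a more reliable site receive a greater desired number of units; allocating proportionally to reliability score is one simple policy that satisfies it (an assumption, not the claimed formula):

```python
def apportion_units(total_units, site_reliability):
    """Map desired numbers of storage units to sites so that sites with more
    favorable reliability scores receive at least as many units."""
    total_score = sum(site_reliability.values())
    plan = {site: int(total_units * score / total_score)
            for site, score in site_reliability.items()}
    leftover = total_units - sum(plan.values())
    # Hand any rounding remainder to the most reliable sites first.
    for site, _ in sorted(site_reliability.items(), key=lambda kv: -kv[1]):
        if leftover == 0:
            break
        plan[site] += 1
        leftover -= 1
    return plan
```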

US Pat. No. 10,169,147

END-TO-END SECURE DATA STORAGE IN A DISPERSED STORAGE NETWORK

International Business Ma...

1. A method comprises:generating, by a first computing device of a dispersed storage network (DSN), a set of encryption keys;
encrypting, by the first computing device, a data matrix based on the set of encryption keys to produce an encrypted data matrix, wherein the data matrix includes data blocks of a data segment of a data object;
sending, by the first computing device, the encrypted data matrix to a second computing device of the DSN;
dispersed storage error encoding, by the second computing device, the encrypted data matrix to produce a set of encrypted encoded data slices; and
sending, by the second computing device, the set of encrypted encoded data slices to a set of storage units of the DSN for storage therein.
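The two-device pipeline (first device encrypts, second device erasure-encodes without ever seeing plaintext) can be sketched with stand-in primitives. Both the SHA-256 keystream cipher and the single-XOR-parity "dispersed error encoding" are toy assumptions chosen for brevity:

```python
import hashlib
from functools import reduce

def encrypt_matrix(data_blocks, keys):
    """First computing device: XOR each data block of the segment with a
    keystream derived from its own key from the generated key set."""
    out = []
    for block, key in zip(data_blocks, keys):
        stream = hashlib.sha256(key).digest()
        out.append(bytes(b ^ stream[i % 32] for i, b in enumerate(block)))
    return out

def encode_slices(encrypted_blocks):
    """Second computing device: toy erasure encoding — the encrypted blocks
    plus one XOR parity slice, so any single missing slice is recoverable."""
    width = max(len(b) for b in encrypted_blocks)
    padded = [b.ljust(width, b"\0") for b in encrypted_blocks]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)
    return padded + [parity]
```

Because the second device only handles ciphertext, the storage units receive encrypted encoded data slices end to end.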

US Pat. No. 10,169,146

REPRODUCING DATA FROM OBFUSCATED DATA RETRIEVED FROM A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a computing device in a dispersed storage network (DSN), the method comprises:first encoding first data into a first plurality of sets of encoded data slices, wherein the first encoding is in accordance with a first dispersed error encoding function such that, for a set of encoded data slices of the first plurality of sets of encoded data slices, a first decode threshold number of encoded data slices is required to recover a corresponding first data segment of the first data;
second encoding second data into a second plurality of sets of encoded data slices, wherein the second encoding is in accordance with a second dispersed error encoding function such that, for a set of encoded data slices of the second plurality of sets of encoded data slices, a second decode threshold number of encoded data slices is required to recover a corresponding second data segment of the second data, wherein the second data segment is different from the first data segment;
creating a plurality of mixed sets of encoded data slices from the first and second plurality of sets of encoded data slices in accordance with a mixing pattern; and
outputting the plurality of mixed sets of encoded data slices to storage units of the DSN for storage therein.
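The mixing step can be sketched as interleaving the two pluralities of slice sets under a repeating pattern; the pattern itself is what a reader would need to de-obfuscate. The "A"/"B" pattern encoding here is an illustrative assumption:

```python
def mix_slice_sets(first_sets, second_sets, pattern):
    """Interleave two pluralities of slice sets per a repeating mixing
    pattern ('A' draws from the first plurality, 'B' from the second)."""
    a, b, mixed = iter(first_sets), iter(second_sets), []
    for i in range(len(first_sets) + len(second_sets)):
        src = a if pattern[i % len(pattern)] == "A" else b
        try:
            mixed.append(next(src))
        except StopIteration:        # one plurality exhausted; drain the other
            mixed.append(next(b if src is a else a))
    return mixed
```

Reproducing the original data then requires both the dispersed error encoding functions and knowledge of the mixing pattern, which is the obfuscation property the claim relies on.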

US Pat. No. 10,169,145

READ BUFFER ARCHITECTURE SUPPORTING INTEGRATED XOR-RECONSTRUCTED AND READ-RETRY FOR NON-VOLATILE RANDOM ACCESS MEMORY (NVRAM) SYSTEMS

INTERNATIONAL BUSINESS MA...

1. A system, comprising:a read buffer memory configured to store data to support integrated XOR reconstructed data and read-retry data, the read buffer memory comprising a plurality of read buffers, each read buffer being configured to store at least one data unit; and
a processor and logic integrated with and/or executable by the processor, the logic being configured to cause the processor to:
receive one or more data units and read command parameters used to read the one or more data units from at least one non-volatile random access memory (NVRAM) device;
determine an error status for each of the one or more data units, wherein the error status indicates whether each data unit comprises errored data or error-free data; and
store error-free data units and the read command parameters used to read the error-free data units to a read buffer of the read buffer memory.

US Pat. No. 10,169,144

NON-VOLATILE MEMORY INCLUDING SELECTIVE ERROR CORRECTION

Micron Technology, Inc., ...

1. An apparatus comprising:a first memory area included in a memory device and a second memory area included in the memory device, the first and second memory area selectively coupled to each other through a conductive path in the memory device; and
control circuitry included in the memory device to communicate with a memory controller, the memory controller including an error correction engine, the control circuitry of the memory device configured to retrieve first information stored in the first memory area and store the first information after the error correction engine performs an error detection operation on the first information, and to retrieve second information stored in the first memory area and store the second information in the second memory area without an additional error detection operation performed on the second information such that the error correction engine skips performing an additional error detection operation on the second information if a result from the error detection operation performed by the error correction engine on the first information meets a threshold condition.

US Pat. No. 10,169,143

PREFERRED STATE ENCODING IN NON-VOLATILE MEMORIES

Invensas Corporation, Sa...

1. An electronic system, comprising:a processor;
a main memory coupled to the processor;
a memory interface coupled to the processor and the main memory;
a memory controller coupled to the memory interface; and
a first non-volatile memory (NVM) integrated circuit coupled to the memory controller, the NVM integrated circuit further comprising:
a memory array having a plurality of pages, a page buffer coupled to the memory array, and a data input/output interface coupled to the page buffer,
wherein:
a first page in the memory array is selected for programming,
first user write data is stored in the page buffer,
preferred state encoding (PSE) is applied to the first user data to generate first PSE encoded write data which is stored in the page buffer according to a first allocation map,
error correction code (ECC) encoding is applied to the first PSE encoded write data to generate first ECC encoded write data which is stored in the page buffer according to the first allocation map, and
the contents of the page buffer are programmed into the selected first page in the memory array.

US Pat. No. 10,169,142

GENERATING PARITY FOR STORAGE DEVICE

Futurewei Technologies, I...

1. A method performed by a solid state device (SSD) controller to generate a parity, the method comprising:receiving, by the SSD controller, input data to be stored to first and second pages of a storage device, wherein the first page is allocated with N codewords and at least one non-integer number of codewords, the second page is allocated with M codewords, N and M are integers, each non-integer number of codewords corresponding to a part of a codeword, and wherein a total number of codewords in the first page is different from a total number of codewords in the second page;
determining, by the SSD controller, a max impact number (MIN) of the storage device dynamically, wherein the MIN is an integer no less than N+1 and no less than M;
configuring, by the SSD controller, codewords of the first and second pages into multiple groups, wherein each group has an integer number of codewords, and wherein the integer number of codewords in each group is no less than the MIN;
generating, by the SSD controller, parities for the multiple groups; and
storing, by the SSD controller, the parities to reserved spaces of the storage device.
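The grouping constraint (every group holds an integer number of codewords, no fewer than the MIN) can be sketched by flattening the pages' codewords into contiguous groups; parity generation per group is then a separate step. The greedy split below, where the final group absorbs the remainder, is an illustrative policy:

```python
def group_codewords(page_codeword_counts, min_impact_number):
    """Partition the pages' codewords (given as per-page counts) into
    contiguous groups of at least MIN codewords each; one parity would then
    be generated per group and stored to reserved space."""
    total = sum(page_codeword_counts)
    groups, start = [], 0
    while total - start >= 2 * min_impact_number:
        groups.append((start, start + min_impact_number))
        start += min_impact_number
    groups.append((start, total))   # remainder group still holds >= MIN codewords
    return groups
```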

US Pat. No. 10,169,141

MODIFIABLE STRIPE LENGTH IN FLASH MEMORY DEVICES

SK Hynix Inc., Gyeonggi-...

1. A memory device comprising:a memory comprising a plurality of memory cells for storing data; and
a controller communicatively coupled to the memory and configured to organize the data as a plurality of stripes, wherein each individual stripe of the plurality of stripes comprises:
a plurality of data groups, each of the plurality of data groups stored in the memory using a subset of the plurality of memory cells, wherein:
a stripe length for the individual stripe is determined by the controller based on detecting a condition associated with one or more data groups of the plurality of data groups, and
the stripe length for the individual stripe is a number of the plurality of data groups included in the individual stripe; and
at least one data group of the plurality of data groups for each of the individual stripes comprising parity data for correcting bit errors associated with the subset of the plurality of memory cells for the individual stripe.

US Pat. No. 10,169,140

LOADING A PHASE-LOCKED LOOP (PLL) CONFIGURATION USING FLASH MEMORY

International Business Ma...

1. A method for loading a phase-locked loop (PLL) configuration into a PLL module of an Application Specific Integrated Circuit (ASIC) using Flash memory, the method comprising:responsive to the PLL module in the ASIC locking a current PLL configuration from a set of current configuration registers in the ASIC, loading, by reset logic in the ASIC, a Flash data image configuration from the Flash memory into a set of holding registers in the ASIC;
responsive to the Flash data image configuration not being corrupted, comparing, by comparison logic in the ASIC, the Flash data image configuration in the set of holding registers to the current PLL configuration in the set of current configuration registers;
responsive to the Flash data image configuration differing from the current PLL configuration, loading, by the reset logic, the Flash data image configuration onto a PLL module input; and
responsive to the PLL module locking the Flash data image configuration, loading by the reset logic, the Flash data image configuration in the set of holding registers into the set of current configuration registers.

US Pat. No. 10,169,139

USING PREDICTIVE ANALYTICS OF NATURAL DISASTER TO COST AND PROACTIVELY INVOKE HIGH-AVAILABILITY PREPAREDNESS FUNCTIONS IN A COMPUTING ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method for proactive natural disaster preparedness, comprising:receiving, at a computer of a computing environment, a notification of an impending natural disaster;
determining, by the computer, a threshold corresponding to a likelihood of the predicted disaster, further comprising:
using the likelihood to index into a data structure that stores, for each of a plurality of likelihood values, a corresponding threshold, wherein:
each of the stored thresholds represents a different cost tier;
each successively-higher cost tier corresponds to a successively-higher weighted business cost value; and
the determined threshold is the stored threshold that corresponds, in the data structure, to a likelihood value that is less than or equal to the likelihood of the predicted disaster;
determining, by the computer for the threshold, at least one proactive measure corresponding thereto for enabling the computing environment to maintain high availability, further comprising:
using the threshold to index into a data store that stores, for each of a plurality of successively-higher weighted business cost values, a corresponding proactive measure and executable functionality to invoke the corresponding proactive measure; and
selecting, as the determined at least one proactive measure, each of the corresponding proactive measures that corresponds, in the data store, to a weighted business cost value that is less than or equal to the threshold; and
automatically causing the computing environment to carry out the executable functionality to invoke each of the at least one determined proactive measure and thereby maintain the high availability of the computing environment without manual invocation.
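The two table lookups this claim chains together (likelihood → cost-tier threshold, then threshold → proactive measures at or below that cost) can be sketched directly; the table shapes and values are illustrative assumptions:

```python
def determine_threshold(likelihood, tier_table):
    """Index the disaster likelihood into a likelihood-value -> threshold
    table, taking the entry for the greatest stored likelihood value that is
    less than or equal to the predicted likelihood."""
    eligible = [lv for lv in tier_table if lv <= likelihood]
    return tier_table[max(eligible)]

def select_measures(threshold, measure_table):
    """Select every proactive measure whose weighted business cost value is
    less than or equal to the determined threshold."""
    return [measure for cost, measure in sorted(measure_table.items())
            if cost <= threshold]
```

Invoking each selected measure's executable functionality would then be the automated final step, with no manual invocation.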

US Pat. No. 10,169,138

SYSTEM AND METHOD FOR SELF-HEALING A DATABASE SERVER IN A CLUSTER

WALMART APOLLO, LLC, Ben...

1. A system comprising:a plurality of database servers, each database server in the plurality of database servers hosting shards of a database, each shard of the shards of the database having been split from a partition of the database and each partition of the database having been split from the database, each database server in the plurality of database servers having a unique identifier such that a status of each database server in the plurality of database servers can be accessed by other servers in the plurality of database servers, wherein each database server in the plurality of database servers is configured to:
receive a triggering action comprising:
receiving an indication that a minimum timer has expired; and
receiving a pre-determined number of queries;
detect a suspicious observation;
discover that a particular server is underperforming;
compile a plurality of statistics regarding itself, wherein the plurality of statistics is chosen from one of the following: memory usage, disk activity levels, CPU load, and error rates; and
store the plurality of statistics in a data store accessible by:
(1) each database server in the plurality of database servers; and
(2) a load balancer; and
the load balancer configured to:
allocate queries among the plurality of database servers using load balancing techniques;
determine when a condition has occurred by:
accessing the plurality of statistics in the data store; and
determining that a malfunctioning database server of the plurality of database servers is malfunctioning, comprising determining when one or more of the plurality of statistics stored in the data store by the malfunctioning database server does not meet performance thresholds;
initiate an automatic self-corrective action in a database server in the plurality of database servers, the automatic self-corrective action comprising the database server taking itself out of a rotation for a predetermined amount of time configured to allow the database server to catch up; and
perform a corrective action on the malfunctioning database server comprising:
determining that the malfunctioning database server cannot correct itself;
writing an entry in the data store indicating that the malfunctioning database server is not available;
causing the malfunctioning database server to no longer receive instructions; and
forwarding shard-level queries originally directed to the malfunctioning database server to one or more other database servers of the plurality of database servers.

US Pat. No. 10,169,137

DYNAMICALLY DETECTING AND INTERRUPTING EXCESSIVE EXECUTION TIME

International Business Ma...

1. A method, comprising:executing, by a first function of a first process executing on a processor, a plurality of calls to a second function of a second process;
programmatically generating, based on a respective amount of time required for each of the plurality of calls to complete, a time threshold for calls from the first function to the second function; and
subsequent to the plurality of calls completing:
storing, by an operating system (OS) kernel executing on the processor and in a queue of the OS kernel, an indication that the first function of the first process executing on the processor has made an additional call to the second function of the second process;
collecting process data for at least one of the first process and the second process;
determining, by the OS kernel, that an amount of time that has elapsed since the first function of the first process made the additional call to the second function of the second process exceeds the programmatically defined time threshold;
storing the queue and the process data as part of a failure data capture; and
performing a predefined operation on at least one of the first process and the second process.
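The claim's threshold is "programmatically generated" from the completion times of earlier calls; one plausible rule (an assumption, the patent does not state a formula) is the mean plus a multiple of the standard deviation:

```python
from statistics import mean, stdev

def derive_time_threshold(call_durations, k=3.0):
    """Derive a time threshold for calls from the first function to the
    second function, based on observed completion times of prior calls."""
    if len(call_durations) < 2:
        return call_durations[0] * k   # too few samples for a spread estimate
    return mean(call_durations) + k * stdev(call_durations)
```

The OS kernel would then flag any queued call whose elapsed time exceeds this value and trigger the failure data capture.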

US Pat. No. 10,169,136

DYNAMIC MONITORING AND PROBLEM RESOLUTION

International Business Ma...

1. A method comprising:determining, by one or more processors, a monitoring tier of a first component, of a plurality of components, that is a cause of a malfunction is activated;
in response to determining the monitoring tier of the first component is activated, determining, by one or more processors, a plurality of measurements for the plurality of components;
identifying, by one or more processors, a component of the plurality of components with the greatest number of activated monitoring tiers, based on the plurality of measurements; and
determining, by one or more processors, whether the component with the greatest number of activated monitoring tiers is the first component.
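The final two steps reduce to a maximum over activated-tier counts followed by an identity check; a minimal sketch, modeling each component's monitoring tiers as activation flags (an assumed representation):

```python
def most_monitored_component(tier_state):
    """tier_state maps component -> list of monitoring-tier activation
    flags; return the component with the greatest number of activated tiers."""
    return max(tier_state, key=lambda c: sum(tier_state[c]))

def cause_confirmed(first_component, tier_state):
    """True when the most-monitored component is the first component already
    suspected as the cause of the malfunction."""
    return most_monitored_component(tier_state) == first_component
```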

US Pat. No. 10,169,135

COMPUTER SYSTEM AND METHOD OF DETECTING MANUFACTURING NETWORK ANOMALIES

Uptake Technologies, Inc....

1. A computing system comprising:a network interface;
at least one processor;
a non-transitory computer-readable medium; and
program instructions stored on the non-transitory computer-readable medium that are executable by the at least one processor to cause the computing system to:
monitor operation of a plurality of nodes in a manufacturing network that comprises a plurality of edge nodes, a plurality of intermediate nodes, and a root node;
while monitoring the operation of the plurality of nodes in the manufacturing network, identify a given time at which at least one node in the manufacturing network satisfies node-level threshold criteria indicating anomalous operation of the node;
in response to identifying the given time at which at least one node satisfies the node-level threshold criteria, evaluate the operation of the manufacturing network at the given time using one or more of (a) macro-level threshold criteria indicating anomalous operation of the manufacturing network as a whole, (b) micro-level threshold criteria indicating anomalous operation of any micro-network in the manufacturing network, (c) path-level threshold criteria indicating anomalous operation of any node path in the manufacturing network, and (d) node-level threshold criteria indicating anomalous operation of any individual node in the manufacturing network;
based on the evaluation, identify at least one anomaly in the manufacturing network at the given time; and
cause a client station to present an alert indicating the at least one anomaly identified in the manufacturing network at the given time.

US Pat. No. 10,169,134

ASYNCHRONOUS MIRROR CONSISTENCY AUDIT

International Business Ma...

1. A method for auditing data consistency in an asynchronous data replication environment, the method comprising:executing, at a primary storage system, a “write with no data” command that performs functions associated with a conventional write command but writes no data to a source volume, the “write with no data” command performing the following:
serializing a source data track; and
creating a record set that contains a copy of data in the source data track, a timestamp indicating when the source data track was copied, and a command type indicating that the copy in the record set is not to be applied to a corresponding target data track of a target volume;
replicating the record set from the primary storage system, hosting the source volume, to a secondary storage system, hosting the target volume;
applying, to the target data track, all updates received for the target data track with timing prior to the timestamp;
reading the target data track at the secondary storage system after the updates have been applied;
comparing the target data track to the source data track; and
recording an error if the target data track does not match the source data track.
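The secondary-side audit (apply all updates timestamped before the record set's timestamp, then compare target to source) can be sketched with tracks as byte buffers and updates as `(timestamp, offset, data)` tuples — shapes assumed for illustration:

```python
def audit_track(source_track, target_track, pending_updates, record_timestamp):
    """Apply every queued update older than the 'write with no data' record
    set's timestamp to the target track, then compare the tracks
    byte-for-byte; return an error record on mismatch, None if consistent."""
    for ts, offset, data in sorted(pending_updates):
        if ts < record_timestamp:
            target_track[offset:offset + len(data)] = data
    if bytes(target_track) != bytes(source_track):
        return {"error": "track mismatch", "timestamp": record_timestamp}
    return None
```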

US Pat. No. 10,169,133

METHOD, SYSTEM, AND APPARATUS FOR DEBUGGING NETWORKING MALFUNCTIONS WITHIN NETWORK NODES

Juniper Networks, Inc., ...

1. A method comprising:building a collection of debugging templates that comprises a first debugging template that corresponds to a first potential cause of a certain networking malfunction and a second debugging template that corresponds to a second potential cause of the certain networking malfunction by:
receiving user input from a user of a network;
creating, based at least in part on the user input, the first debugging template that defines a first set of debugging steps that, when performed by a computing system, enable the computing system to determine whether the first potential cause led to the certain networking malfunction; and
creating, based at least in part on the user input, the second debugging template that defines a second set of debugging steps that, when performed by the computing system, enable the computing system to determine whether the second potential cause led to the certain networking malfunction;
detecting a computing event that is indicative of the certain networking malfunction within a network node included in the network;
determining, based at least in part on the computing event, potential causes of the certain networking malfunction, wherein the potential causes comprise the first potential cause of the certain networking malfunction and the second potential cause of the certain network malfunction;
performing the first set of debugging steps defined by the first debugging template that corresponds to the first potential cause, wherein the first debugging template comprises a generic debugging template that enables the computing system to determine that the certain networking malfunction resulted from the first potential cause irrespective of a software configuration of the network node; and
determining, based at least in part on the first set of debugging steps defined by the first debugging template, that the certain networking malfunction resulted from the first potential cause.

US Pat. No. 10,169,132

PREDICTING A LIKELIHOOD OF A CRITICAL STORAGE PROBLEM

International Business Ma...

1. A method for predicting, by a computerized storage-management system, a likelihood of a critical storage problem, the method comprising:a processor of a computerized storage controller of the computerized storage-management system receiving a sample set, where a sample size identifies a number of samples in the sample set, and where a first sample of the sample set identifies a first amount of storage space that was available to a storage device at a time when the first sample was recorded;
the processor deriving a mean of the sample set and a standard deviation of the sample set;
the processor further deriving a Chi-square statistic of the sample set as a function of the mean and the standard deviation, where the Chi-square statistic identifies whether the sample size is large enough to ensure that the sample set is statistically valid;
the processor, as a function of the deriving and of the further deriving, determining whether the sample set is statistically valid;
the processor, as a function of the determining, identifying a likelihood of a critical storage problem occurring within a threshold time;
the processor, as a further function of the determining, directing the computerized storage-management system to select an adjusted sample-set size,
where the duration of the threshold time is selected to allow the processor to perform the identifying in real time, and
where the adjusted sample-set size identifies a size of a future sample set that the computerized storage controller will request and receive at a future time in order to identify a future available storage space of the storage device at the future time.
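The statistical core of this claim can be sketched with the standard chi-square statistic for a sample variance, chi2 = (n − 1)·s²/σ0². The target variance and the critical bounds below are assumptions; the claim only states that the statistic gauges whether the sample size is large enough for validity:

```python
from statistics import mean, stdev

def sample_set_check(samples, target_variance, chi2_lower, chi2_upper):
    """Derive the mean and standard deviation of the free-space samples,
    compute the chi-square statistic against an assumed target variance, and
    call the sample set statistically valid when the statistic falls between
    the supplied critical values."""
    n = len(samples)
    m, s = mean(samples), stdev(samples)
    chi2 = (n - 1) * s * s / target_variance
    return chi2_lower <= chi2 <= chi2_upper, m, chi2
```

An invalid result would drive the controller to select an adjusted (typically larger) sample-set size for the next collection cycle.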

US Pat. No. 10,169,131

DETERMINING A TRACE OF A SYSTEM DUMP

International Business Ma...

1. A method for improving system analytics by determining an extra trace of a system dump after an event triggering the system dump, the method comprising:receiving, by one or more computer processors, a system dump request, wherein the system dump request includes performing a system dump utilizing a dumping tool, wherein the system dump includes a trace wherein the trace comprises one or more trace entries collected in a trace table;
determining, by one or more computer processors, an initial trace of the system dump;
determining, by one or more computer processors, the extra trace, wherein determining the extra trace includes determining a time period subsequent to the initial trace of the system dump to collect trace entries, and wherein the extra trace refers to a plurality of trace data entries collected during the time period subsequent to the initial trace of the system dump and subsequent to an event triggering the system dump;
determining, by one or more computer processors, an updated trace table, wherein determining the updated trace table includes collecting the plurality of trace entries during the time period subsequent to the initial trace of the system dump and subsequent to an event triggering the system dump, appending the trace table with the plurality of trace entries, and wrapping the one or more trace entries collected in the initial trace of the system dump in the event the updated trace table cannot store all of the plurality of trace entries; and
displaying, by one or more computer processors, the extra trace at the end of the initial trace.
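The updated trace table of the claim appends extra-trace entries and, when capacity is exhausted, wraps (discards) the oldest entries from the initial trace. A minimal sketch of that wrapping behavior using a bounded deque (class and method names are illustrative):

```python
from collections import deque

class TraceTable:
    """Fixed-capacity trace table: appending extra-trace entries beyond
    capacity wraps the oldest initial-trace entries, as in the claim."""

    def __init__(self, capacity):
        self.entries = deque(maxlen=capacity)

    def append(self, entry):
        # deque(maxlen=...) silently evicts from the head when full
        self.entries.append(entry)

    def snapshot(self):
        return list(self.entries)
```

With capacity 4, appending three initial-trace entries and then two extra-trace entries wraps the first initial entry, leaving the extra trace at the end of the (remaining) initial trace.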

US Pat. No. 10,169,130

TAILORING DIAGNOSTIC INFORMATION IN A MULTITHREADED ENVIRONMENT

International Business Ma...

1. A computer-implemented method for tailoring diagnostic information specific to current activity of multiple threads within a computer system, the method comprising:creating, by one or more processors, a system dump, including main memory and system state information;
storing, by one or more processors, the system dump to a database;
executing, by one or more processors, a program to provide tailored diagnostic information;
creating, by one or more processors, a virtual memory image of a system state, based on the memory dump, in the address space of the program, by creating a second hardware memory mapping of the hardware memory addresses of the address space of the program to the virtual memory addresses of the virtual memory image of the system state;
scanning, by one or more processors, the virtual memory image and system state information, using the second hardware memory mapping, to identify tasks that were running, tasks that have failed due to an error, and tasks that were suspended when the system dump was made;
collecting and collating based on task number, by one or more processors, from the system dump, using the second hardware memory mapping, state information and control blocks associated with the identified tasks; and
storing, by one or more processors, to the database, a formatted system dump, including the collected and collated state information and control blocks for the identified tasks.

US Pat. No. 10,169,129

DISPERSED B-TREE DIRECTORY TREES

INTERNATIONAL BUSINESS MA...

1. A computing device comprising:an interface configured to interface and communicate with a dispersed or distributed storage network (DSN);
memory that stores operational instructions; and
processing circuitry operably coupled to the interface and to the memory, wherein the processing circuitry is configured to execute the operational instructions to:
obtain, via the DSN and via the interface, directory metrics associated with a directory structure that is associated with a directory file that is segmented into a plurality of data segments, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce a set of encoded directory slices that are stored in at least one DSN memory at at least one DSN address corresponding to a source name of the directory file;
determine whether to reconfigure the directory structure based on the directory metrics; and
based on a determination to reconfigure the directory structure based on the directory metrics:
determine a number of layers for a reconfigured directory structure;
determine a number of spans per layer of the number of layers for the reconfigured directory structure;
determine directory entry reassignments; and
reconfigure the directory structure based on the number of layers, the spans per layer, and the directory entry reassignments to generate the reconfigured directory structure including at least one of to create one or more children directory files, facilitate movement within the DSN of one or more directory entries from a parent directory file to the one or more children directory files, or to add pointers associated with the one or more children directory files to the parent directory file.
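Determining the number of layers and spans per layer amounts to sizing a B-tree-like hierarchy so no directory file exceeds a per-file entry limit. A hedged sketch of one way to compute that plan (the patent does not give a formula; the uniform fan-out below, with span >= 2, is an assumption for illustration):

```python
def plan_directory_tree(total_entries, span):
    """Return (layers, files_per_layer) so that a tree with the given
    fan-out ('span', assumed >= 2) can index total_entries directory
    entries. Layer i holds span**i directory files."""
    layers = 1
    capacity = span
    while capacity < total_entries:
        layers += 1
        capacity *= span
    files_per_layer = [span ** i for i in range(layers)]
    return layers, files_per_layer
```

For example, 1,000 entries with a span of 10 per file need 3 layers (1 root file, 10 children, 100 grandchildren); the directory entry reassignments then move entries from the parent file into the new children files.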

US Pat. No. 10,169,128

REDUCED WRITE STATUS ERROR POLLING FOR NON-VOLATILE RESISTIVE MEMORY DEVICE

CROSSBAR, INC., Santa Cl...

1. A method for reducing error polling for a memory controller device, comprising:issuing a memory command to a bank of non-volatile resistive switching memory of a non-volatile resistive switching memory device;
receiving a signal on a dedicated error pin for the non-volatile resistive switching memory device;
determining whether the signal indicates occurrence of an error for the memory command; and
referencing a status register associated with the bank of non-volatile resistive switching memory and identifying error information pertaining to the error in response to determining the signal indicates the occurrence of the error.

US Pat. No. 10,169,127

COMMAND EXECUTION RESULTS VERIFICATION

International Business Ma...

1. A computer program product comprising a computer readable storage medium that is not a transitory signal per se, the computer readable storage medium having computer readable codes stored thereon that cause one or more devices to conduct a method comprising:receiving, by a processor, a file including a plurality of commands and an expected result related to the plurality of commands from a command line interface, the command line interface operating in a script mode that allows a user, with a single login to the command line interface, to define a list of commands to be executed in order by the command line interface;
executing the plurality of commands to create one or more processes for performing one or more tasks corresponding to the plurality of commands;
performing the one or more tasks;
generating one or more result codes corresponding to performance of the one or more tasks, the one or more result codes comprising a first indication of successful command execution or a second indication of errors;
determining whether the one or more result codes satisfy the expected result based on the first indication or the second indication in the one or more result codes matching the expected result; and
sending a response to the command line interface in response to determining whether the one or more result codes satisfy the expected result,
wherein:
the response includes one of an error message and a success code,
the error message comprises an error code indicating one of an unexpected error and an unexpected success in the one or more result codes,
determining whether the one or more result codes satisfy the expected results comprises determining whether the first indication of successful command execution or the second indication of errors matches at least a subset of the expected results,
sending the response to the command line interface comprises:
sending the success code to the command line interface in response to determining a match, and
sending the error code to the command line interface in response to determining a non-match,
the error code comprises one of:
a first error indicating an unexpected error in the one or more result codes in response to the subset of expected results including a successful result, and
a second error indicating an unexpected success in the one or more result codes in response to the subset of expected results including an error result.
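The response logic of this claim distinguishes two failure modes: an unexpected error (a command failed where success was expected) and an unexpected success (a command succeeded where an error was expected). A minimal sketch of that classification (the code strings are illustrative placeholders, not the patent's actual codes):

```python
def verify_results(result_codes, expected_results):
    """Compare result codes against expected results. Return a success
    code on a match; otherwise classify the first mismatch as an
    unexpected error or an unexpected success."""
    if all(code == exp for code, exp in zip(result_codes, expected_results)):
        return "SUCCESS"
    for code, exp in zip(result_codes, expected_results):
        if code != exp:
            # Expected success but got an error, or vice versa
            return ("ERR_UNEXPECTED_ERROR" if exp == "OK"
                    else "ERR_UNEXPECTED_SUCCESS")
```

The command line interface, running in script mode, would receive one of these codes as the response for the whole command list.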

US Pat. No. 10,169,126

MEMORY MODULE, MEMORY CONTROLLER AND SYSTEMS RESPONSIVE TO MEMORY CHIP READ FAIL INFORMATION AND RELATED METHODS OF OPERATION

SAMSUNG ELECTRONICS CO., ...

1. A memory module comprising:first to Mth memory chips (where M is an integer that is equal to or greater than 2) mounted on a module board and storing data; and
an (M+1)th memory chip mounted on the module board and storing a parity code associated with multi-chip data having data portions stored by respective ones of the first to Mth memory chips, the parity code containing information to correct at least one bit error resulting from a read operation of a data portion stored by a failed one of the first to Mth memory chips,
wherein each of the first to (M+1)th memory chips comprises an on-chip error correction circuit to detect a bit error within the corresponding stored data portion of the multi-chip data and to provide a corresponding fail bit to indicate a result of the detection of a bit error, and
wherein the memory module comprises a circuit connected to receive the fail bits from the first to (M+1)th memory chips and to output fail information as a result of a calculation performed on the fail bits.
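The (M+1)th chip's parity code lets the module reconstruct the data portion of a single failed chip from the survivors. The claim does not name the code; the simplest instance consistent with "correct at least one bit error ... of a failed one of the chips" is XOR parity, sketched here (byte-level, for illustration):

```python
def parity_chip(data_chips):
    """XOR parity across the data portions stored by the M data chips;
    the (M+1)th chip would store this parity code."""
    parity = bytes(len(data_chips[0]))
    for chunk in data_chips:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return parity

def reconstruct(surviving_chips, parity):
    """Recover a failed chip's data portion: XOR of all surviving data
    portions and the parity equals the missing portion."""
    return parity_chip(surviving_chips + [parity])
```

The module-level circuit of the claim would use the per-chip fail bits to decide which chip's portion to reconstruct.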

US Pat. No. 10,169,125

RE-ENCODING DATA IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method comprises:determining to create a new set of encoded data slices based on an unfavorable storage performance level associated with one or more storage units (SUs) within a dispersed storage network (DSN);
partially decoding, by a storage unit (SU) of the DSN, a first encoded data slice of a set of encoded data slices in accordance with previous dispersed storage error encoding parameters having a previous threshold number to produce a partially decoded first encoded data slice, wherein the first encoded data slice is stored by another SU of the DSN and is transmitted from the another SU via the DSN and received via an interface of the SU that is configured to interface and communicate with the DSN, and wherein a data segment of a data object is encoded into the set of encoded data slices in accordance with the previous dispersed storage error encoding parameters;
partially re-encoding, by the SU, the partially decoded first encoded data slice in accordance with updated dispersed storage error encoding parameters having an updated threshold number to produce a first partially re-encoded data slice, wherein the first partially re-encoded data slice is used to create a new first encoded data slice of the new set of encoded data slices that corresponds to the data segment being dispersed storage error encoded in accordance with the updated dispersed storage error encoding parameters, wherein
the partially re-encoding comprises:
obtaining a new encoding matrix corresponding to the updated dispersed storage error encoding parameters;
reducing the new encoding matrix based on a matrix position corresponding to the new first encoded data slice of the new set of encoded data slices that corresponds to the data segment being dispersed storage error encoded in accordance with the updated dispersed storage error encoding parameters; and
matrix multiplying the reduced new encoding matrix with the partially decoded first encoded data slice to produce the first partially re-encoded data slice;
receiving, by the SU via the DSN and via the interface of the SU, a plurality of second partially re-encoded data slices from a sub-set of other SUs of the DSN, wherein the plurality of second partially re-encoded data slices is created in accordance with the updated dispersed storage error encoding parameters based on partially re-encoding by the sub-set of other SUs of the DSN; and
generating, by the SU, a new second encoded data slice of the new set of encoded data slices from the plurality of second partially re-encoded data slices.
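The matrix steps of the partial re-encoding (obtain the new encoding matrix, reduce it to the row for the target slice, multiply by the partially decoded slice) can be sketched directly. This illustration uses ordinary integer arithmetic; a real dispersed-storage system performs the same multiplication over a Galois field such as GF(2^8):

```python
def reduce_matrix(encoding_matrix, slice_index):
    """Reduce the new encoding matrix to the single row whose matrix
    position corresponds to the new encoded data slice being produced."""
    return [encoding_matrix[slice_index]]

def partially_re_encode(reduced_matrix, partial_decoded):
    """Matrix-multiply the reduced encoding matrix with the partially
    decoded slice (treated as a column vector) to produce the
    partially re-encoded data slice."""
    return [sum(m * d for m, d in zip(row, partial_decoded))
            for row in reduced_matrix]
```

Each SU contributes such a partial product; summing the partial products from a threshold number of SUs (in the field arithmetic) yields the new encoded data slice.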

US Pat. No. 10,169,124

UNIFIED OBJECT INTERFACE FOR MEMORY AND STORAGE SYSTEM

SAMSUNG ELECTRONICS CO., ...

1. A memory management device, comprising:a memory;
a data structure stored in the memory, the data structure including:
an identifier of an object, wherein the identifier of the object is a hash; and
a tuple including an identifier of a physical device and a location on the physical device; and
a second data structure, the second data structure including:
a second identifier of the object; and
the hash,
wherein the object is stored on one of a plurality of physical devices including at least one volatile storage device and at least one non-volatile storage device,
wherein the second data structure maps the second identifier of the object to the hash and the data structure maps the hash to the tuple to access the object,
wherein data for the object may be accessed on behalf of an application or an operating system using the second identifier of the object, and
wherein the memory management device uses the second identifier of the object, the data structure, and the second data structure to determine the identifier of the physical device and the location on the physical device in the tuple.
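The two data structures form a two-level lookup: object identifier to content hash, then hash to a (device, location) tuple. A minimal sketch of that indirection (class and field names are illustrative, not the patent's structures):

```python
import hashlib

class ObjectDirectory:
    """Two-level mapping per the claim: second identifier -> hash
    (second data structure), hash -> (device id, location) tuple
    (first data structure)."""

    def __init__(self):
        self.id_to_hash = {}     # second data structure
        self.hash_to_tuple = {}  # first data structure

    def put(self, object_id, data, device_id, location):
        h = hashlib.sha256(data).hexdigest()  # identifier of the object
        self.id_to_hash[object_id] = h
        self.hash_to_tuple[h] = (device_id, location)

    def locate(self, object_id):
        """Resolve an object id to the physical device and location."""
        return self.hash_to_tuple[self.id_to_hash[object_id]]
```

Because the physical placement is keyed by the hash alone, the object can move between volatile and non-volatile devices by updating only the hash-to-tuple entry.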

US Pat. No. 10,169,123

DISTRIBUTED DATA REBUILDING

INTERNATIONAL BUSINESS MA...

1. A method for use in a distributed storage network (DSN) storing sets of encoded data slices in sets of storage units, the method comprising:identifying, by a first storage unit included in a set of storage units, a storage error associated with an encoded data slice of a set of encoded data slices, the encoded data slice assigned to be stored in the first storage unit;
selecting a second storage unit to generate a rebuilt encoded data slice to replace the encoded data slice assigned to be stored in the first storage unit;
transmitting, from the first storage unit to the second storage unit, a rebuild request associated with the storage error;
generating, by the second storage unit, the rebuilt encoded data slice in response to the rebuild request;
transmitting the rebuilt encoded data slice from the second storage unit to the first storage unit; and
storing the rebuilt encoded data slice in the first storage unit.

US Pat. No. 10,169,122

METHODS FOR DECOMPOSING EVENTS FROM MANAGED INFRASTRUCTURES

Moogsoft, Inc., San Fran...

1. A system for clustering events, comprising:a first engine that receives message data from a managed infrastructure that includes managed infrastructure physical hardware which supports the flow and processing of information;
a second engine that determines common characteristics of events and produces clusters of events relating to failures or errors in the managed infrastructure, where membership in a cluster indicates a common factor of the events that is a failure or an actionable problem in the physical hardware managed infrastructure directed to supporting the flow and processing of information, and producing events that relate to the managed infrastructure while converting the events into words and subsets used to group the events that relate to failures or errors in the managed infrastructure, including the managed infrastructure physical hardware;
a compare and merge engine that receives outputs from the second engine, the compare and merge engine communicating with one or more user interfaces in a situation room; and
wherein the second engine or a third engine uses a source address for each event to make a change to at least a portion of the managed infrastructure, and in response to producing events that relate to the managed infrastructure while converting the events into words and subsets, physical changes are made to the managed infrastructure physical hardware.

US Pat. No. 10,169,121

WORK FLOW MANAGEMENT FOR AN INFORMATION MANAGEMENT SYSTEM

Commvault Systems, Inc., ...

1. A computer-implemented method for distributing tasks, in an information management system, using first and second work queues managed by a storage manager, the method comprising:receiving tasks to be performed in the information management system at the storage manager, which facilitates a transfer of data between primary storage devices and secondary storage devices in the information management system, and which schedules and manages the tasks for multiple, different client computing devices in the information management system;
scheduling information management policy tasks for a client computing device using the storage manager,
wherein scheduling the information management policy tasks includes populating the first work queue with the information management policy tasks,
wherein the information management policy tasks include data storage operations that are defined by a data storage policy and include creating secondary copies of data on secondary storage devices from primary copies of data stored on primary storage devices, restoring the secondary copies of data from the secondary storage devices to the primary storage devices, or retaining the secondary copies of data on the secondary storage devices, and
wherein the secondary storage devices are located remotely from the primary storage devices;
transmitting the information management policy tasks from the storage manager to the client computing device, in accordance with the first work queue;
scheduling information management system tasks for the client computing device using the storage manager,
wherein scheduling the information management system tasks includes populating the second work queue with the information management system tasks,
wherein the information management system tasks include tasks that are related to maintenance of software or hardware components of the information management system and that do not read or write data to the secondary storage devices;
transmitting the information management system tasks from the storage manager to the client computing device in accordance with the second work queue and based on an availability of the client computing device;
executing, at the client computing device, the transmitted information management policy tasks, in accordance with the first work queue, and the transmitted information management system tasks, in accordance with the second work queue;
determining parameters of an information management system operation failure; and
providing an alert of failure if at least one of the parameters exceeds a predetermined threshold.
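The claim's storage manager keeps two separate work queues: policy tasks (backup, restore, retention) are always transmitted, while system maintenance tasks are transmitted only when the client reports itself available. A minimal sketch of that dual-queue dispatch (names are illustrative):

```python
from collections import deque

class StorageManager:
    """Dual work queues per the claim: policy tasks vs. system
    (maintenance) tasks, the latter gated on client availability."""

    def __init__(self):
        self.policy_queue = deque()   # first work queue
        self.system_queue = deque()   # second work queue

    def schedule_policy(self, task):
        self.policy_queue.append(task)

    def schedule_system(self, task):
        self.system_queue.append(task)

    def transmit(self, client_available):
        """Drain the policy queue unconditionally; drain the system
        queue only when the client is available."""
        sent = list(self.policy_queue)
        self.policy_queue.clear()
        if client_available:
            sent += list(self.system_queue)
            self.system_queue.clear()
        return sent
```

A busy client thus still receives its data-protection tasks on schedule, while software-update-style tasks wait for an idle window.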

US Pat. No. 10,169,120

REDUNDANT SOFTWARE STACK

International Business Ma...

1. A method for creating redundant software stacks, the method comprising:identifying, by one or more computer processors, a first container with a set of rules and with a first software stack and a valid multipath configuration, wherein the first software stack is a first path of the valid multipath configuration;
creating, by one or more computer processors, a second container, wherein the second container has the same set of rules as the first container;
creating, by one or more computer processors, a second software stack in the second container, wherein the second software stack is a redundant software stack of the first software stack;
creating, by one or more computer processors, a second path from the first container to the second software stack, wherein the second path bypasses the first software stack;
identifying, by one or more computer processors, a data load on the first path that is creating latency; and
sending, by one or more computer processors, at least a portion of the data load on the first path to the second path to reduce the latency on the first path.

US Pat. No. 10,169,119

METHOD AND APPARATUS FOR IMPROVING RELIABILITY OF DIGITAL COMMUNICATIONS

1. A method performed by a radio comprising:receiving a network identifier comprising a data unit identifier, the data unit identifier configured to identify a type of a data unit being communicated;
checking validity of the data unit identifier;
combinatorially processing a data unit for which the data unit identifier is uncertain according to a plurality of possible data unit identifier values;
selecting a most likely data unit identifier value according to results of the combinatorially processing the data unit; and
performing subsequent processing of the data unit in accordance with the most likely data unit identifier value.
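Combinatorial processing here means decoding the data unit under every plausible identifier value and keeping the hypothesis that scores best (for instance, the one whose decode passes an integrity check). A hedged sketch of that selection step (the scoring function is an assumption; real radios would use a CRC or soft-decision metric):

```python
def resolve_identifier(payload_bits, candidates, check):
    """Process the data unit under each candidate identifier value and
    return the candidate whose processing result scores highest.
    `check(payload_bits, candidate)` returns a numeric score."""
    scored = [(check(payload_bits, c), c) for c in candidates]
    return max(scored)[1]
```

As a toy integrity check, suppose a valid identifier must equal the payload's bit-sum modulo 4; the resolver then picks the one candidate satisfying it.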

US Pat. No. 10,169,118

REMOTE PRODUCT INVOCATION FRAMEWORK

INTERNATIONAL BUSINESS MA...

1. A method for remote product invocation comprising:configuring an invocation framework, the invocation framework comprising an integration module and an endpoint/handler module;
wherein the integration module is configured to:
receive a source object;
format data from the source object based on requirements of a target machine supporting an external service that performs a desired operation;
utilize the endpoint/handler module, which comprises two distinct subcomponents, an endpoint and a handler, the endpoint to contain information for making a connection to an external service, and the handler to use the information from the endpoint to make connection to the external service and execute the desired operation using the data from the source object; and
with a logical management operation of the invocation framework, defining an action to be executed in response to receiving the source object so as to provide an interface between an entity submitting the source object to the integration module and the integration module.

US Pat. No. 10,169,117

INTERFACING BETWEEN A CALLER APPLICATION AND A SERVICE MODULE

International Business Ma...

1. A method for interfacing between a caller application and a service module, said method comprising:receiving, by a processor of a computer system, a request for performing a transaction from the caller application, wherein the request comprises at least one caller application attribute describing the request;
subsequent to said receiving the request, said processor building a service module data structure pursuant to the received request, wherein the service module data structure comprises a generic service document and at least one service module attribute, and wherein said building the service module data structure comprises:
creating one or more containers in the generic service document, wherein each container of the one or more containers is respectively associated with each service module attribute of the at least one service module attribute in each mapping of the at least one mapping in a mapping table of the service module data structure, wherein each container comprises a data value for each service module attribute of the at least one service module attribute; and
subsequent to said creating the one or more containers in the generic service document, naming each container of said at least one container in the generic service document after each mapping of said at least one mapping in the mapping table;
subsequent to said building the service module data structure, said processor storing each service module attribute in a relational table of the service module data structure;
subsequent to said storing each service module attribute, said processor servicing the request within the service module data structure, wherein said servicing results in instantiating the generic service document, and wherein said servicing comprises: performing the transaction, retrieving each mapping of at least one mapping in the mapping table of the service module data structure, and reloading each container of at least one container from the relational table into respective containers of the generic service document according to each retrieved mapping; and
subsequent to said servicing, said processor returning the generic service document to the caller application.

US Pat. No. 10,169,116

IMPLEMENTING TEMPORARY MESSAGE QUEUES USING A SHARED MEDIUM

International Business Ma...

1. A method for implementing temporary message queues using a shared medium at a coupling facility shared between multiple systems each having a queue manager handling messages from the system's applications, the method carried out at the coupling facility comprising:defining a list structure on the shared medium wherein the list structure has multiple lists;
providing a list which is allocated to a single queue manager in which message entries are located which belong to multiple shared temporary dynamic queues (STDQs) created by the single queue manager, wherein the message entries are located by reference to a key which determines a message entry's position in the list, the list including:
a list header which can be partitioned for multiple current STDQs by assignment of key ranges to message entries belonging to each current STDQ; and
a list control entry which holds information about the assignment of key ranges to the multiple current STDQs and shares the information with other queue managers using the STDQs, wherein the list control entry is updated by the single queue manager when an STDQ is created or deleted, wherein the list control entry includes a name of an STDQ which is in accordance with a queue naming convention and includes: an indication of the list structure, an indication of the single queue manager, a list header number, a start key on the list header, and a unique identifier for the STDQ; and
sending a message from a first system of the multiple systems to a second system of the multiple systems based on the list.

US Pat. No. 10,169,115

PREDICTING EXHAUSTED STORAGE FOR A BLOCKING API

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for operating a blocking application program interface (API) of an application, the method comprising:receiving, from a requestor, a request for data from the application;
creating, by the blocking API of the application, a buffer for the data;
receiving, by the application, a data record corresponding to the request;
storing, by the blocking API, the data record in the buffer;
based on a determination that the buffer is full, providing, by the blocking API, data records in the buffer to the requestor; and
based on a determination that the buffer is not full, determining by the blocking API, based at least in part on an amount of available storage in the buffer, whether to provide the data records in the buffer to the requestor or to wait for another data record before providing the data records in the buffer to the requestor.
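The not-full branch of the claim is a prediction: flush early if the next record is unlikely to fit, rather than blocking the requestor waiting for a record that would overflow the buffer anyway. A minimal sketch of that decision (sizes are in bytes; the expected-record-size heuristic is an assumption for illustration):

```python
def should_flush(buffer_used, buffer_size, expected_record_size):
    """Return True when the blocking API should hand the buffered
    records to the requestor now: either the buffer is full, or the
    remaining space is predicted too small for the next record."""
    if buffer_used >= buffer_size:
        return True  # full: always provide the records
    remaining = buffer_size - buffer_used
    return remaining < expected_record_size
```

Callers would typically estimate `expected_record_size` from the records seen so far (e.g. a running average).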

US Pat. No. 10,169,114

PREDICTING EXHAUSTED STORAGE FOR A BLOCKING API

INTERNATIONAL BUSINESS MA...

8. A computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:receive, from a requestor, a request for data from an application;
create, by a blocking API of the application, a buffer for the data;
receive, by the application, a data record corresponding to the request;
store, by the blocking API, the data record in the buffer;
based on a determination that the buffer is full, provide, by the blocking API, data records in the buffer to the requestor; and
based on a determination that the buffer is not full, determine by the blocking API, based at least in part on an amount of available storage in the buffer, whether to provide the data records in the buffer to the requestor or to wait for another data record before providing the data records in the buffer to the requestor.

US Pat. No. 10,169,113

STORAGE AND APPLICATION INTERCOMMUNICATION USING ACPI

International Business Ma...

1. A method for event-driven intercommunication, the method comprising:issuing an interrupt based on a first event from a first kernel-mode module to a second kernel-mode module via an interface,
wherein the first event corresponds to an operational parameter of a first node based, at least in part, on a shared namespace accessible by the first kernel-mode module and the second kernel-mode module, wherein the operational parameter of the first node is an anticipated status of the first node, based on one or more non-consecutive operations scheduled to be executed by the first node,
wherein the first node is a storage subsystem of a computing device in communication with the second node, the second node comprising a user-level application stored externally and accessed through a communication network by the computing device, and
issuing, by the second kernel-mode module, a second event to a second node, wherein the second event corresponds to an object of the shared namespace.

US Pat. No. 10,169,112

EVENT SEQUENCE MANAGEMENT

International Business Ma...

1. A computer-implemented method comprising:obtaining, by one or more processors, an action sequence that includes a plurality of actions executed on behalf of a plurality of users for achieving at least one goal;
generating, by one or more processors, from the obtained action sequence an event sequence that includes a plurality of events, wherein the event sequence includes respective time points at which respective actions associated with respective events are executed, and wherein each of the plurality of events is associated with a unique type of action from the plurality of actions;
determining, by one or more processors, an association model based on the generated event sequence, wherein the association model defines a chronological relationship among events associated with the at least one goal;
building, by one or more processors, a plurality of sub-models from a plurality of sub-sequences that are extracted from the event sequence, wherein at least one of the plurality of sub-models defines a chronological relationship among events associated with a portion of the at least one goal;
combining, by one or more processors, the plurality of sub-models into the association model;
for a specific event from the plurality of events included in the event sequence, extracting from the event sequence, by one or more processors, a group of sub-sequences that include the specific event;
determining, by one or more processors, a sub-model from the extracted group of sub-sequences based on respective time points included in the sub-sequences; and
selecting from the event sequence, by one or more processors, a sub-sequence that ends at the specific event for inclusion in the sub-model.

US Pat. No. 10,169,111

FLEXIBLE ARCHITECTURE FOR NOTIFYING APPLICATIONS OF STATE CHANGES

MICROSOFT TECHNOLOGY LICE...

1. A method for providing notifications to clients in response to state property changes, comprising:receiving a notification request at an Application Program Interface (API) from a client application on the computing device to receive a notification in response to an event that originates on the computing device; wherein the event is associated with a change in a state property of the computing device; wherein the Application Program Interface (API) is utilized by the client application to register the notification request;
ensuring that the state property is registered via the API, wherein the API is useable to register for notifications regarding state properties that are updated by different components within the computing device;
determining when the state property changes, wherein determining when the state property changes comprises using the API to specify a batching operation on changes to the state property that occur within a predetermined time period; wherein a call to the API batching operation specifies a time period for which a value of the state property is to remain constant before notifying the client application of a change to the state property;
determining when the client should receive notification of the state property change; and
notifying the client of the state property change on the computing device when determined that the client should receive notification of the state property change;
wherein the call to the API batching operation reduces a number of instances of notifying the client of the state property change during the time period.
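The batching operation of this claim is effectively a debounce: the client is notified only after the state property's value has remained constant for the specified time period, which reduces the number of notifications during a burst of changes. A minimal sketch (time is passed in explicitly so the behavior is deterministic; names are illustrative):

```python
class BatchingNotifier:
    """Debounce-style batching per the claim: notify only after the
    state property has held its value for `hold` time units."""

    def __init__(self, hold):
        self.hold = hold
        self.pending = None   # latest changed value, not yet notified
        self.since = None     # time the pending value was last set
        self.notified = []    # notifications delivered to the client

    def on_change(self, value, now):
        # A new change restarts the hold period
        self.pending, self.since = value, now

    def tick(self, now):
        if self.pending is not None and now - self.since >= self.hold:
            self.notified.append(self.pending)
            self.pending = None
```

Rapid successive changes keep restarting the hold period, so only the final stable value reaches the client.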

US Pat. No. 10,169,110

NAVIGATION APPLICATION PROGRAMMING INTERFACE

Google LLC, Mountain Vie...

1. A non-transitory computer-readable medium storing instructions that implement an application programming interface for providing a navigation service as part of a software application executed on a computing device, the computing device having one or more processors and a display device, the application programming interface comprising:a first set of instructions specifying one or more first parameters to control the implementation of a navigation service by the software application, the navigation service providing navigation information to a user of the software application, the first set of instructions implemented as a class;
a second set of instructions specifying logic to control interaction with a routing engine based at least in part on the one or more first parameters specified in the first set of instructions;
wherein the first set of instructions specify one or more configurable event listener interfaces, the configurable event listener interfaces operable to obtain data associated with one or more navigation events to update the navigation information provided as part of the navigation service, the one or more configurable event listener interfaces each comprising one or more parameters specifiable by a developer as part of the first set of instructions implemented as a class;
wherein the one or more configurable event listener interfaces are configured to obtain data associated with the one or more navigation events in response to one or more navigation events specified by the one or more parameters specifiable by the developer; and
wherein the one or more configurable event listener interfaces comprise a route changed listener interface operable to be called when a route provided as part of the navigation service changes;
wherein the one or more configurable event listener interfaces comprise an arrival listener interface operable to be called when a user has arrived at a specified waypoint.
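The two listener interfaces named in this claim, route-changed and arrival, can be sketched as callback registrations on a toy navigation service. The class and method names are illustrative assumptions, not the API of any shipped SDK.

```python
class Navigator:
    """Toy navigation service exposing the two listener interfaces the claim
    names: route-changed and arrival. All names are illustrative."""

    def __init__(self, route):
        self.route = list(route)              # ordered list of waypoints
        self.route_changed_listeners = []
        self.arrival_listeners = []

    def add_route_changed_listener(self, cb):
        self.route_changed_listeners.append(cb)

    def add_arrival_listener(self, cb):
        self.arrival_listeners.append(cb)

    def reroute(self, new_route):
        # A navigation event: the route changed, so notify registered listeners.
        self.route = list(new_route)
        for cb in self.route_changed_listeners:
            cb(self.route)

    def arrive(self, waypoint):
        # A navigation event: the user reached a specified waypoint.
        for cb in self.arrival_listeners:
            cb(waypoint)
```

A developer configures which events to observe simply by choosing which listeners to register, matching the claim's "parameters specifiable by a developer".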

US Pat. No. 10,169,109

SWITCHED APPLICATION PROCESSOR APPARATUS FOR CELLULAR DEVICES

ELTA SYSTEMS LTD., Ashdo...

1. A cellular device architecture, comprising:a Modem-AP switch configured to select between different processing routes;
a first application processor adapted for processing a first type of data and selectively connected to said Modem-AP switch;
a second application processor adapted for processing a second type of data and selectively connected to said Modem-AP switch;
at least two modems selectively connected to said Modem-AP switch, each of said modems communicating with a respective antenna; and
a Controller module coupled to said Modem-AP switch and being configured to:
detect whether received data is said first type of data or said second type of data; and
select a processing route having one of the two application processors that matches the detected type of data, said selection comprising:
in response to receipt of data of said first type of data received in said Modem-AP switch as received at one of said modems through an associated antenna, commanding said Modem-AP switch to select a first processing route wherein said first application processor is switched to connect to said modem and to its associated antenna, and said second application processor is not in said first processing route; and
in response to receipt of data of said second type of data received in said Modem-AP switch as received at one of said modems through an associated antenna, commanding said Modem-AP switch to select a second processing route wherein said second application processor is switched to connect to said modem and to its associated antenna, and said first application processor is not in said second processing route.
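The controller's selection step reduces to a dispatch on the detected data type: only the matching application processor joins the processing route. The type labels and AP names below are illustrative assumptions; the hardware switching itself is not modeled.

```python
class Controller:
    """Toy version of the claim's route selection: detect the type of the
    received data and connect only the matching application processor to the
    receiving modem. Type and AP names are illustrative."""

    AP_FOR_TYPE = {"first": "AP1", "second": "AP2"}

    def select_route(self, data_type, modem):
        ap = self.AP_FOR_TYPE[data_type]      # the other AP stays out of the route
        return (modem, ap)
```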

US Pat. No. 10,169,108

SPECULATIVE EXECUTION MANAGEMENT IN A COHERENT ACCELERATOR ARCHITECTURE

International Business Ma...

1. A computer-implemented method for speculative execution management in a coherent accelerator architecture, the method comprising:detecting, with respect to a set of cache lines of a single shared memory in the coherent accelerator architecture, a first access request from a first Accelerator Functional Unit (AFU);
detecting, with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, a second access request from a second AFU; and
processing, by a speculative execution management engine using a speculative execution technique, the first and second access requests with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, wherein the speculative execution technique is configured to allow both the first and second AFU's to simultaneously access the set of cache lines without a locking mechanism;
determining a number of data entries in the set of cache lines of the single shared memory;
comparing the number of data entries in the set of cache lines of the single shared memory to a threshold number of data entries;
determining an elapsed period since a previous capture on the set of cache lines of the single shared memory;
comparing the elapsed period since the previous capture to a time threshold;
capturing, in response to the number of data entries exceeding the threshold number of data entries and in response to the elapsed period exceeding the time threshold, by the speculative execution management engine, a set of checkpoint roll-back data, wherein the set of checkpoint roll-back data includes an image of the first AFU at a first point in time, an image of the second AFU at the first point in time, and an image of the set of cache lines of the single shared memory at the first point in time;
evaluating the first and second access requests, wherein evaluating the first and second access requests further comprises:
identifying a first subset of target cache lines of the set of cache lines for the first access request;
identifying a second subset of target cache lines of the set of cache lines for the second access request, wherein the first and second subsets of target cache lines indicate read and write operations by the first and second access requests; and
determining, based on the identified first and second subset of target cache lines, whether a conflict exists;
in response to a determination that a conflict does not exist:
updating, in response to processing the first and second access requests with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, a host memory directory in a batch fashion which includes a set of update data for both the first and second access requests in a single set of data traffic; and
in response to a determination that a conflict exists:
identifying a subset of cache lines of the set of cache lines where the conflict exists;
determining a number of cache lines in the subset of cache lines;
comparing the number of cache lines to a severity threshold;
rolling-back, in response to the number of cache lines exceeding the severity threshold, based on the set of checkpoint roll-back data, the coherent accelerator architecture to a prior state, wherein the rolling-back includes rolling back only the subset of cache lines where the conflict exists;
retrying, without using the speculative execution technique and in a separate fashion in relation to the second access request, the first access request with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture; and
retrying, without using the speculative execution technique and in the separate fashion in relation to the first access request, the second access request with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture.
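The conflict-evaluation logic in this claim can be condensed into a small sketch: intersect the two requests' target cache lines, treat any shared line with at least one write as a conflict, and compare the conflict count against the severity threshold. The function names are assumptions, and the below-threshold branch is an assumption too, since the claim only specifies the exceeding case.

```python
def find_conflicts(req_a, req_b):
    """Each request maps cache-line id -> 'r' or 'w'. Lines touched by both
    requests conflict when at least one of the two accesses is a write."""
    return {line for line in req_a.keys() & req_b.keys()
            if "w" in (req_a[line], req_b[line])}

def resolve(req_a, req_b, severity_threshold):
    """Condensed from the claim: no conflict -> one batched host-directory
    update; more conflicting lines than the severity threshold -> roll back
    just those lines and retry both requests serially without speculation."""
    conflicts = find_conflicts(req_a, req_b)
    if not conflicts:
        return "commit-batched-update"
    if len(conflicts) > severity_threshold:
        return "rollback-and-retry-serially"
    return "retry-serially"   # assumption: claim leaves this branch open
```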

US Pat. No. 10,169,107

ALMOST FAIR BUSY LOCK

International Business Ma...

1. A method comprising:publishing a current state of a lock and a claim non-atomically to the lock by a next owning thread, in an ordered set of threads, that has requested to own the lock, the claim comprising a structure capable of being read and written only in a single memory access,
obtaining, by each thread in the ordered set of threads, a ticket,
wherein the claim comprises an identifier of a ticket obtained by the next owning thread, and an indication that the next owning thread is claiming the lock;
comparing the ticket obtained by the next owning thread with a current ticket;
responsive to a match between the ticket obtained by the next owning thread and the current ticket, preventing thread monitoring preemptions; and
responsive to a match between the ticket obtained by the next owning thread and the current ticket, non-atomically acquiring the lock.
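The ticket discipline underlying this claim is the classic ticket lock: each thread takes a ticket and acquires the lock when its ticket matches the currently served one. The sketch below shows only that FIFO ordering; the patent's non-atomic claim publication and preemption control are not modeled.

```python
import itertools

class TicketLock:
    """Minimal ticket-lock sketch: FIFO ordering via tickets. Names are
    illustrative; this is not the patented implementation."""

    def __init__(self):
        self._tickets = itertools.count()     # ticket dispenser
        self.now_serving = 0                  # the current ticket

    def acquire(self):
        ticket = next(self._tickets)          # obtain a ticket
        while self.now_serving != ticket:     # spin until it becomes current
            pass
        return ticket

    def release(self):
        self.now_serving += 1                 # hand the lock to the next ticket
```

In a real multi-threaded version, the spin loop is what the claim's "preventing thread monitoring preemptions" step protects: once a thread's ticket matches, it should not be preempted before taking the lock.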

US Pat. No. 10,169,106

METHOD FOR MANAGING CONTROL-LOSS PROCESSING DURING CRITICAL PROCESSING SECTIONS WHILE MAINTAINING TRANSACTION SCOPE INTEGRITY

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for managing critical section processing, the method comprising:generating, using a processor, a transaction scope for a process in response to processing in a critical section;
collecting data related to the process;
generating, using the processor, a plurality of requests using the collected data;
storing the plurality of requests and data as a plurality of pending items chained together to form an ordered list in a private storage during critical section processing; and
processing, using the processor, the request based on the transaction scope, the processing comprising implementing a check of the process for any pending items in response to a transaction scope application programming interface being called or other processing relating to the pending items,
wherein the pending items are processed in the order they are created by using the ordered list, and
wherein one of the plurality of requests is a rollback request that includes at least one from a group consisting of removing the pending items from the private storage, releasing the private storage for all pending items, and resuming normal processing.
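The pending-item mechanism this claim describes, requests chained in private storage during a critical section and replayed in creation order or discarded on rollback, can be sketched as follows. Class and method names are assumptions.

```python
class TransactionScope:
    """Sketch of the claim's pending-item handling: requests created during
    critical-section processing are chained as an ordered list in private
    storage; commit replays them in creation order, rollback discards them."""

    def __init__(self):
        self.pending = []                 # ordered list of (request, data)

    def defer(self, request, data):
        # Store the request as a pending item instead of processing it now.
        self.pending.append((request, data))

    def commit(self, apply):
        # Pending items are processed in the order they were created.
        for request, data in self.pending:
            apply(request, data)
        self.pending.clear()

    def rollback(self):
        # Remove pending items and release the private storage.
        self.pending.clear()
```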

US Pat. No. 10,169,105

METHOD FOR SIMPLIFIED TASK-BASED RUNTIME FOR EFFICIENT PARALLEL COMPUTING

QUALCOMM Incorporated, S...

1. A method of scheduling and executing lightweight computational procedures in a computing device, comprising:determining whether a first task pointer in a task queue is a simple task pointer for a lightweight computational procedure;
scheduling a first simple task for the lightweight computational procedure for execution by a first thread in response to determining that the first task pointer is a simple task pointer;
retrieving a kernel pointer for the lightweight computational procedure from an entry of a simple task table, wherein the entry is associated with the simple task pointer; and
directly executing the lightweight computational procedure as the first simple task.
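The fast path this claim describes, a tagged task pointer that bypasses the full scheduler and executes a kernel looked up in a simple-task table, can be sketched with a tag bit. The tag encoding and names are assumptions made for the example.

```python
SIMPLE_TAG = 0x1          # low bit marks a simple task pointer (an assumption)

simple_task_table = {}    # simple-task id -> kernel (a callable)

def post_simple_task(task_id, kernel):
    """Register a lightweight kernel and return its tagged task pointer."""
    simple_task_table[task_id] = kernel
    return (task_id << 1) | SIMPLE_TAG

def execute(task_pointer):
    """Dispatch: a simple task pointer skips the full task machinery and the
    kernel pointer retrieved from the table is executed directly."""
    if task_pointer & SIMPLE_TAG:
        kernel = simple_task_table[task_pointer >> 1]
        return kernel()
    raise NotImplementedError("full task objects are not modeled in this sketch")
```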

US Pat. No. 10,169,104

VIRTUAL COMPUTING POWER MANAGEMENT

International Business Ma...

1. A computer-implemented method comprising:receiving a request for a computing resource for a computing task, wherein the computing task is active in a computing system, and wherein the computing system includes a current power consumption profile for the computing task and a historical power consumption profile for the computing task;
determining whether a peak in a current power consumption profile is expected based on the historical power consumption profile for the computing task; and
responsive to determining a peak is expected, delaying the request for the computing resource by: (i) initiating an allocation timeout, (ii) determining whether the allocation timeout is effective in reducing the current power consumption profile, (iii) responsive to determining the allocation timeout is not effective in reducing the current power consumption profile, granting the request for the computing resource, and (iv) updating the historical power consumption profile.
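The claim's decision flow condenses to two small functions: predict a peak from the historical profile, then either grant, delay via an allocation timeout, or grant after the timeout proves ineffective. The slot-indexed history and the threshold rule are simplifying assumptions.

```python
def peak_expected(history, slot, threshold):
    """A peak is predicted when the historical consumption recorded for the
    current time slot exceeds a threshold (one simple reading of the claim)."""
    return history[slot] > threshold

def handle_request(peak, timeout_reduces_consumption):
    """Condensed decision logic: no peak -> grant immediately; peak -> start
    an allocation timeout, and grant anyway if the timeout turns out not to
    reduce the current power consumption profile."""
    if not peak:
        return "grant"
    if timeout_reduces_consumption:
        return "delay"
    return "grant-after-timeout"
```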

US Pat. No. 10,169,103

MANAGING SPECULATIVE MEMORY ACCESS REQUESTS IN THE PRESENCE OF TRANSACTIONAL STORAGE ACCESSES

International Business Ma...

1. A processing unit, comprising:a processor core;
a cache memory coupled to the processor core; and
transactional memory logic that, responsive to receipt of a speculative memory access request at the cache memory that includes a target address of data speculatively requested for the processor core, determines whether the target address of the speculative memory access request matches an address in a set of addresses forming a store footprint of a memory transaction and, responsive to determining that the target address of the speculative memory access request matches an address in the set of addresses forming the store footprint of a memory transaction, causes the cache memory to reject servicing the speculative memory access request;
wherein:
the transactional memory logic determines whether the speculative memory access request is a transactional speculative memory access request or a non-transactional speculative memory access request;
the transactional memory logic causes the cache memory to reject servicing the speculative memory access request in response to determining the memory access request is a transactional speculative memory access request; and
the transactional memory logic fails the memory transaction in response to determining the speculative memory access request is a non-transactional speculative memory access request.

US Pat. No. 10,169,102

LOAD CALCULATION METHOD, LOAD CALCULATION PROGRAM, AND LOAD CALCULATION APPARATUS

FUJITSU LIMITED, Kawasak...

7. A load calculation method comprising:acquiring processor usage information including usage of a processor of a managed physical machine, and usage of the processor for each of a plurality of virtual machines generated by a hypervisor executed on the managed physical machine;
acquiring data transmission information including a data transmission amount for each of a plurality of virtual network interfaces used by the plurality of virtual machines;
determining a virtual network interface having a second correlation between overhead processor usage, obtained by subtracting a sum of processor usage of the plurality of virtual machines from processor usage of the managed physical machine, and a data transmission amount of the virtual network interface, to be a second virtual network interface that performs data transmission via the hypervisor among the plurality of virtual network interfaces;
determining a virtual network interface having a first correlation that is smaller than the second correlation between the overhead processor usage and the data transmission amount of the virtual network interface, to be a first virtual network interface that performs data transmission without routing through the hypervisor, among the plurality of virtual network interfaces;
calculating load information including amount of increase or decrease of the processor usage for data transmission in the managed physical machine, which a virtual machine to be added or deleted will be added to or deleted from, based on whether each of the virtual machines uses the first virtual network interface or the second virtual network interface; and
adding or deleting the virtual machine to be added or deleted to or from a managed physical machine that is selected based on the calculated amount of increase or decrease of the processor usage for data transmission.
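The classification step in this claim hinges on correlation between hypervisor overhead CPU usage and per-interface transmission volume. A sketch with a hand-rolled Pearson coefficient shows the idea; the 0.8 cutoff and function names are assumptions.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient over two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def classify_vnic(overhead_cpu, tx_bytes, cutoff=0.8):
    """High correlation with hypervisor overhead -> traffic is routed via the
    hypervisor (the claim's second interface); low correlation -> the
    interface bypasses the hypervisor (the first interface)."""
    return "via-hypervisor" if pearson(overhead_cpu, tx_bytes) >= cutoff else "bypass"
```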

US Pat. No. 10,169,101

SOFTWARE BASED COLLECTION OF PERFORMANCE METRICS FOR ALLOCATION ADJUSTMENT OF VIRTUAL RESOURCES

International Business Ma...

1. A method for collecting and processing performance metrics, the method comprising:assigning, by the one or more computer processors, an identifier corresponding to a first workload, wherein the first workload includes inbound input-output transactions from input-output devices and accelerators associated with a first virtual machine, wherein the first virtual machine is a container;
recording, by the one or more computer processors, resource consumption data, wherein the resource consumption data is selected from a group consisting of: one or more time stamps, one or more identified workloads, and one or more resource consumption estimates associated with the one or more time stamps, of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
creating, by the one or more computer processors, a relational association of the first workload and the first virtual machine to the resource consumption data of the at least one processor, wherein creating a relational association between the first workload and the first virtual machine further comprises using the calculated difference in resource consumption between the performance monitoring interrupt and a previous interrupt to track a change in resource consumption of the at least one processor over time;
determining, by the one or more computer processors, if the first workload is complete; responsive to determining that the first workload is not complete, calculating, by the one or more computer processors, a difference in recorded resource consumption data between the performance monitoring interrupt and a previous performance monitoring interrupt;
assigning, by the one or more computer processors, an identifier corresponding to a second workload associated with the first virtual machine;
recording, by the one or more computer processors, resource consumption data of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
creating, by the one or more computer processors, a relational association of the second workload and the first virtual machine to the resource consumption data of the at least one processor;
determining, by the one or more computer processors, if the second workload is complete;
responsive to determining that the second workload is complete, switching, by the one or more computer processors, the first virtual machine to a third workload;
aggregating, by the one or more computer processors, the recorded resource consumption data to provide one or more resource consumption estimates; and
notifying, by the one or more computer processors, a resource manager, wherein the resource manager is a hardware component, of a workload switch between the second workload and the third workload and data regarding changes in resource consumption of the at least one processor over time.

US Pat. No. 10,169,100

SOFTWARE-DEFINED STORAGE CLUSTER UNIFIED FRONTEND

INTERNATIONAL BUSINESS MA...

1. A method, comprising:initializing a plurality of first layer software defined storage (SDS) clusters, each of the first layer SDS clusters comprising multiple storage nodes, each of the multiple storage nodes executing in separate independent virtual machines on respective separate independent servers;
defining a second layer SDS cluster comprising a combination of the first layer SDS clusters; and
managing, using a distributed management application, the second layer SDS cluster, the distributed management application comprising multiple management nodes executing on all of the servers; wherein each of the separate independent virtual machines comprises a first virtual machine, and wherein each server comprises a second virtual machine that executes a given management node; and wherein the distributed management application comprising the multiple management nodes executing on all of the servers provides a unified front-end interface for accessing each of the first layer SDS clusters and the second layer SDS cluster.

US Pat. No. 10,169,099

REDUCING REDUNDANT VALIDATIONS FOR LIVE OPERATING SYSTEM MIGRATION

International Business Ma...

1. A method to reduce redundant validations for live operating system migration to increase performance of a previous mobility event, the method comprising:monitoring, by a virtualization manager, for configuration changes in a validation inventory, wherein the validation inventory comprises data selected from a group consisting of: physical hardware data related to a previous mobility event, and virtual hardware data related to the previous mobility event, and wherein the live operating system migration is performed by a control point and the virtualization manager in combination, and wherein monitoring for the configuration changes in the validation inventory is based on determining configuration changes in one or more of a Virtual Fiber Channel (VFC), a Storage Area Network (SAN), and an external storage subsystem;
receiving a request to perform the live operating system migration of a logical partition (LPAR) from a source LPAR on a source computer to a target LPAR on a target computer, wherein the target LPAR is created using the validation inventory corresponding to the source LPAR;
receiving from a repository of validation inventory and based on the received request the validation inventory corresponding to the source LPAR;
in response to determining that the monitored validation inventory has changed, re-validating the received validation inventory prior to beginning the live operating system migration of the source LPAR to the target LPAR, re-caching the repository of validation inventory with the re-validated validation inventory, and performing the live operating system migration of the source LPAR to the target LPAR; and
in response to determining that the monitored validation inventory has not changed, performing the live operating system migration of the source LPAR to the target LPAR by using contents of the unchanged validation inventory, allowing the source LPAR to continually run during the live operating system migration, and without performing the re-validation of the received validation inventory.

US Pat. No. 10,169,098

DATA RELOCATION IN GLOBAL STORAGE CLOUD ENVIRONMENTS

INTERNATIONAL BUSINESS MA...

1. A method of data relocation in global storage cloud environments, comprising:providing a computer system, being operable to:
mapping a user device to a home data server to store data of a user;
locating a data server near a travel location of the user based on one or more travel plans of the user, the one or more travel plans include one or more final travel locations and one or more intermediate travel locations including temporary locations the user travels prior to reaching the one or more final travel locations including a stopover or a layover;
locating the one or more intermediate travel locations during a user's travels using online travel web sites;
indexing and sorting one or more user-defined policies based on an owner and class of each policy of the one or more policies;
accessing the one or more user-defined policies by a primary key which includes an owner and a class of a desired policy out of the one or more user-defined policies;
filtering data from the stored data based on the one or more user-defined policies to determine which stored data is to be transferred; and
transferring the filtered data from the home data server near a home location of the user to the data server near the travel location.

US Pat. No. 10,169,097

DYNAMIC QUORUM FOR DISTRIBUTED SYSTEMS

Microsoft Technology Lice...

1. In a distributed computing system in which performance of a computing task within the distributed system is based at least in part upon each of a minimum number of nodes or devices providing authorization for performance of the computing task, a method of dynamically managing the minimum number of nodes or devices required to enable performance of the computing task, the method comprising:instantiating a dynamic quorum daemon in the distributed system, the dynamic quorum daemon running as a background task in the distributed system and the dynamic quorum daemon managing a set of nodes within the distributed system that are enabled to authorize performance of a computing task in the distributed system;
establishing that each of one or more nodes and zero or more devices in the distributed system is designated as an authorizing entity enabled to authorize performance of a computing task in the distributed system;
establishing a minimum number of the authorizing entities which are required to authorize performance of the computing task in order to allow performance of the computing task in the distributed system;
the dynamic quorum daemon determining that a state of a node or device in the distributed system has changed;
based on the determined change in the state of a node or device, the dynamic quorum daemon changing a designation of whether the node or device is an authorizing entity that is enabled to authorize performance of a computing task in the distributed system; and
based on the change of the designation of the node or device, the dynamic quorum daemon adjusting the minimum number of authorizing entities which are required to authorize performance of a computing task in order to allow performance of the computing task in the distributed system, the adjustment of the minimum number of authorizing entities being based at least in part upon a quorum policy which comprises one of node-majority with disk witness or node-majority with file share witness.
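The daemon's two recited adjustments, flipping a node's authorizing-entity designation and recomputing the required minimum, can be sketched as a majority calculation. The witness flag models the node-majority-with-witness policies the claim names; all identifiers are assumptions.

```python
class DynamicQuorumDaemon:
    """Sketch of the claim's daemon: it tracks which nodes are enabled as
    authorizing entities and recomputes the minimum number of authorizations
    (a simple majority of votes) whenever a node's state changes."""

    def __init__(self, nodes, witness=False):
        self.authorizing = set(nodes)
        self.witness = witness        # disk or file-share witness adds one vote

    def node_state_changed(self, node, is_up):
        # The daemon changes the node's authorizing-entity designation.
        if is_up:
            self.authorizing.add(node)
        else:
            self.authorizing.discard(node)

    def minimum_required(self):
        # Majority of current votes, per the node-majority quorum policies.
        votes = len(self.authorizing) + (1 if self.witness else 0)
        return votes // 2 + 1
```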

US Pat. No. 10,169,096

IDENTIFYING MULTIPLE RESOURCES TO PERFORM A SERVICE REQUEST

Hewlett-Packard Developme...

1. A method for scheduling a service request, the method comprising:receiving the service request including a latency associated with a publication of a result of the service request;
retrieving heterogeneous data upon the receipt of the service request;
filtering the heterogeneous data to obtain data relevant to the service request;
determining an amount of relevant data prior to computing a workload for the service request;
computing the workload for the service request;
identifying multiple resources, based on the latency and the workload, to perform the service request;
distributing the workload to the identified multiple resources; and
publishing the results of the service request in accordance with the latency.
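The sizing step of this claim, picking enough resources so the computed workload finishes within the publication latency, reduces to a ceiling division once a per-resource processing rate is assumed. The rate model and units below are assumptions.

```python
import math

def resources_needed(relevant_items, per_resource_rate, latency):
    """Condensed version of the claim's identification step: after filtering,
    the amount of relevant data and the allowed latency determine how many
    resources must share the workload. Assumes each resource processes
    per_resource_rate items per unit time."""
    capacity_per_resource = per_resource_rate * latency
    return math.ceil(relevant_items / capacity_per_resource)
```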

US Pat. No. 10,169,095

AUTOMATED CAPACITY PROVISIONING METHOD USING HISTORICAL PERFORMANCE DATA

BMC Software, Inc., Hous...

1. A method for automatically allocating computer resources in a computer system, the method comprising:obtaining performance data characterizing the computer system, the computer system implementing services with fluctuating demand over time;
generating a system resource usage profile based on the performance data, the performance data comprising central processing unit (CPU) utilization data and response time data, the response time data defined for one or more of transactions, workloads, jobs, tasks, applications, or threads comprising the services;
receiving service level objectives for the services, the service level objectives characterizing a manner in which the services are provided to users over time;
automatically generating one or more provisioning policies based on the system resource usage profile and one or more of the service level objectives, including executing a series of workload scenarios, the workload scenarios including combinations of workload levels and service level objectives; and
provisioning the computer resources based on at least one of the provisioning policies, wherein the provisioning includes:
allocating one or more additional servers for a first time slot of a plurality of time slots in response to the response time data being greater than a first threshold level; and
allocating one or more additional servers for a second time slot of the plurality of time slots in response to the CPU utilization data being greater than a second threshold level.
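The two allocation rules at the end of this claim, extra servers for slots whose response time or CPU utilization crosses a threshold, can be condensed into one pass over per-slot metrics. The metric keys and one-server-per-breach rule are assumptions.

```python
def servers_to_add(slot_metrics, resp_threshold, cpu_threshold):
    """slot_metrics: time slot -> {'response_time': ..., 'cpu_util': ...}.
    One additional server is allocated for a slot per threshold crossed,
    mirroring the claim's two provisioning rules."""
    extra = {}
    for slot, m in slot_metrics.items():
        n = 0
        if m["response_time"] > resp_threshold:
            n += 1
        if m["cpu_util"] > cpu_threshold:
            n += 1
        extra[slot] = n
    return extra
```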

US Pat. No. 10,169,094

DYNAMIC TRANSACTION-PERSISTENT SERVER LOAD BALANCING

Hewlett Packard Enterpris...

1. A non-transitory machine readable storage medium having stored thereon machine readable instructions to cause a computer processor to:receive, at a particular device, a first authentication request corresponding to a client device;
determine, for each of a plurality of servers, a number of outstanding authentication requests;
select a first server, from the plurality of servers, based on the number of outstanding authentication requests for each server of the plurality of servers, and a transmission latency between the particular device and each of the plurality of servers,
wherein the transmission latency is inferred based on the number of outstanding authentication requests for each server among the plurality of servers;
transmit, from the device to the first server:
the first authentication request corresponding to the client device; and
a second authentication request corresponding to the client device in response to a determination that the second authentication request is in a same first transaction as the first authentication request, wherein subsequent requests within the first transaction will be received by the first server;
select a second server, based on the number of outstanding authentication requests, to receive a third authentication request in response to the determination that the third authentication request is in a second transaction different from the previous authentication requests; and
select the first server to receive a fourth authentication request in a third transaction based on the determination that the first and the second authentication requests were completed and the third authentication request is pending.
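The balancing policy in this claim, least outstanding requests as a latency proxy plus transaction stickiness, can be sketched in a few lines. Class and method names are assumptions.

```python
class AuthBalancer:
    """Sketch of the claim's policy: pick the server with the fewest
    outstanding requests (the inferred-latency proxy), but pin every request
    of a transaction to the server that received its first request."""

    def __init__(self, servers):
        self.outstanding = {s: 0 for s in servers}
        self.txn_server = {}          # transaction -> pinned server
        self.txn_count = {}           # transaction -> requests in flight

    def route(self, txn_id):
        if txn_id not in self.txn_server:
            # New transaction: least-outstanding server wins.
            self.txn_server[txn_id] = min(self.outstanding,
                                          key=self.outstanding.get)
        server = self.txn_server[txn_id]
        self.outstanding[server] += 1
        self.txn_count[txn_id] = self.txn_count.get(txn_id, 0) + 1
        return server

    def complete(self, txn_id):
        # Transaction finished: release its outstanding count and the pin.
        server = self.txn_server.pop(txn_id)
        self.outstanding[server] -= self.txn_count.pop(txn_id)
```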

US Pat. No. 10,169,093

TOPOLOGY-AWARE PROCESSOR SCHEDULING

SYBASE, INC., Dublin, CA...

1. A method of operating a task scheduler for one or more processors, the method comprising:obtaining a topology of the one or more processors, the topology indicating a plurality of execution units and physical resources associated with each of the plurality of execution units;
receiving a task to be performed by the one or more processors;
identifying a plurality of available execution units from the plurality of execution units;
determining an optimal execution unit, from the plurality of execution units, to which to assign the task based on the topology and a policy of utilizing a maximum count of sockets;
assigning the task to the optimal execution unit; and
sending the task to the optimal execution unit for execution.
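The "maximum count of sockets" policy this claim recites can be sketched as a scheduler that prefers a free execution unit on a socket that currently has no assigned task, spreading work across sockets. The topology encoding is an assumption.

```python
def pick_unit(topology, busy):
    """topology: execution unit -> socket id; busy: units already assigned.
    Policy from the claim: maximize the number of sockets in use, so prefer
    a free unit whose socket is currently idle."""
    used_sockets = {topology[u] for u in busy}
    free = [u for u in topology if u not in busy]
    for u in free:
        if topology[u] not in used_sockets:
            return u                  # opens a new socket
    return free[0] if free else None  # fall back to any free unit
```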

US Pat. No. 10,169,092

SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT

International Business Ma...

1. A system for performing exclusive control among tasks, the system comprising:a lock status storage unit for storing update information that is updated in response to acquisition and release of an exclusive lock by one task and for storing task identification information for identifying a task that has acquired an exclusive lock;
an exclusive execution unit for causing processing in a critical section included in a first task to be performed by acquiring the exclusive lock, wherein the exclusive execution unit releases the exclusive lock and updates the update information after the processing in the critical section included in the first task; and
a nonexclusive execution unit for causing processing in a critical section included in a second task, the processing in the critical section included in the second task excluding acquiring the exclusive lock;
wherein, when the processing in the critical section by the second task has not been successfully completed when a predetermined condition has been reached and the update information has been changed, the nonexclusive execution unit acquires the exclusive lock and the processing in the critical section included in the second task is performed.
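The two execution paths this claim describes resemble optimistic concurrency: a nonexclusive task runs the critical section without the lock and retries under the lock only if the update information changed meanwhile. In the sketch below a version counter stands in for the update information; names are assumptions, and real cross-thread memory ordering is not modeled.

```python
class OptimisticLock:
    """Sketch of the system's exclusive and nonexclusive execution units.
    The version counter plays the role of the claim's update information."""

    def __init__(self):
        self.version = 0

    def exclusive(self, critical):
        # Exclusive path: run under the lock, then update the update
        # information on release.
        try:
            return critical()
        finally:
            self.version += 1

    def nonexclusive(self, critical):
        # Nonexclusive path: run without the lock; if the update information
        # changed (a concurrent exclusive commit), redo under the lock.
        start = self.version
        result = critical()
        if self.version != start:
            return self.exclusive(critical)
        return result
```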

US Pat. No. 10,169,091

EFFICIENT MEMORY VIRTUALIZATION IN MULTI-THREADED PROCESSING UNITS

NVIDIA CORPORATION, Sant...

1. A method for scheduling tasks for execution in a parallel processor comprising two or more streaming multiprocessors, the method comprising:receiving a set of tasks associated with a first processing context related to a first page table included in a plurality of page tables;
selecting a first task that is associated with a first address space identifier (ASID) from the set of tasks and associated with the first processing context;
determining a minimum number of streaming multiprocessors included in the two or more streaming multiprocessors able to execute the tasks included in the set of tasks based on a number of tasks each streaming multiprocessor is able to execute concurrently, wherein the minimum number of streaming multiprocessors includes at least a first streaming multiprocessor;
assigning the tasks included in the set of tasks to the minimum number of streaming multiprocessors;
selecting the first streaming multiprocessor from the two or more streaming multiprocessors to execute the first task;
scheduling the first task to execute on the first streaming multiprocessor;
selecting a second task that is associated with a second ASID from the set of tasks and associated with the first processing context; and
scheduling the second task to execute on the first streaming multiprocessor, wherein scheduling the second task occurs prior to scheduling any other task from the set of tasks to execute on a second streaming multiprocessor included in the two or more streaming multiprocessors.
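
The packing steps above, finding the minimum number of streaming multiprocessors and filling the first SM before scheduling anything to a second, can be illustrated with a toy scheduler. A hedged Python sketch under a simplified model (uniform per-SM concurrent-task capacity; all names are hypothetical):

```python
import math

def schedule_min_sms(tasks, num_sms, per_sm_capacity):
    """Pack one context's tasks onto the fewest SMs able to hold them,
    filling each SM to capacity before spilling to the next."""
    need = math.ceil(len(tasks) / per_sm_capacity)  # minimum SM count
    if need > num_sms:
        raise ValueError("not enough streaming multiprocessors")
    assignment = {sm: [] for sm in range(need)}
    for i, task in enumerate(tasks):
        # Consecutive tasks land on the same SM until it is full.
        assignment[i // per_sm_capacity].append(task)
    return assignment
```

Note the second task always lands on SM 0 while capacity remains, mirroring the claim's requirement that it be scheduled before any task goes to a second streaming multiprocessor.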

US Pat. No. 10,169,090

FACILITATING TIERED SERVICE MODEL-BASED FAIR ALLOCATION OF RESOURCES FOR APPLICATION SERVERS IN MULTI-TENANT ENVIRONMENTS

salesforce.com, inc., Sa...

1. A method comprising:
collecting, by a resource-management server computing device of a database system, data relating to job types associated with multiple tenants within a multi-tenant environment;
based on the data, computing actual resource usages and expected resource allocations of the job types and actual resource usages and expected resource allocations of the tenants;
assigning the job types to service tiers based on the actual resource usages and the expected resource allocations associated with the job types, wherein each job type is at least one of a high-tiered job type or a low-tiered job type;
assigning the tenants to the service tiers based on the actual resource usages and the expected resource allocations associated with the tenants, wherein each tenant is classified as a high-tiered tenant type or a low-tiered tenant type; and
real-time reassigning and executing of the job types to one or more of the service tiers while ensuring that resources are distributed between the job types and the tenants such that actual resource usage does not exceed expected resource allocation for each job type and each tenant.
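
The tier-assignment step admits many classification rules; one plausible reading, in which entities whose actual usage exceeds their expected allocation are high-tiered, can be sketched as below (illustrative only; the claim does not fix the classification function, and all names are assumptions):

```python
def assign_tiers(entities):
    """entities: {name: (actual_usage, expected_allocation)}.
    Returns {name: "high" | "low"} under the assumed rule that
    exceeding the expected allocation makes an entity high-tiered."""
    return {
        name: "high" if actual > expected else "low"
        for name, (actual, expected) in entities.items()
    }

def capped_usage(actual, expected):
    # The claim's invariant: actual resource usage must not exceed
    # the expected allocation for each job type and tenant.
    return min(actual, expected)
```
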

US Pat. No. 10,169,089

COMPUTER AND QUALITY OF SERVICE CONTROL METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A computer, comprising:
a system bus comprising a bus management device;
a processor coupled to the system bus;
a storage coupled to the system bus, the storage comprising an operating system, and the operating system comprising a scheduling subsystem; and
at least one other device coupled to the system bus,
the processor being configured to invoke the scheduling subsystem to allocate, to at least one container of the computer, a container identity (ID) corresponding one-to-one to the at least one container,
the processor or the at least one other device being configured to send a bus request carrying the container ID and a hardware device ID of a hardware device used by the at least one container indicated by the container ID to the system bus, the hardware device comprising a memory in the storage or the processor, and the bus request being sent to the system bus comprising sending, to the system bus using a first memory management (MM) subsystem in the operating system, the bus request carrying the container ID and the hardware device ID, and
the bus management device being configured to:
search, according to the bus request, for a quality of service (QoS) parameter corresponding to both the container ID and the hardware device ID, the QoS parameter being stored in the bus management device; and
configure, according to the found QoS parameter, a resource required when the at least one container corresponding to the QoS parameter uses the hardware device corresponding to the QoS parameter, the resource comprising at least one of bandwidth, a delay, or a priority.
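
The bus management device's search step is essentially a table lookup keyed by the (container ID, hardware device ID) pair. A minimal sketch under assumed names (real hardware would program bandwidth, delay, and priority into the bus rather than return a dictionary):

```python
class BusManager:
    """Sketch of the bus management device: QoS parameters stored
    per (container ID, hardware device ID) pair."""
    def __init__(self):
        self._qos = {}  # (container_id, device_id) -> parameter dict

    def set_qos(self, container_id, device_id, **params):
        self._qos[(container_id, device_id)] = params

    def handle_request(self, container_id, device_id):
        # Search for the QoS parameter corresponding to BOTH IDs,
        # then hand back the resource configuration.
        params = self._qos.get((container_id, device_id))
        if params is None:
            raise KeyError("no QoS parameter for this container/device pair")
        return params
```
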

US Pat. No. 10,169,088

LOCKLESS FREE MEMORY BALLOONING FOR VIRTUAL MACHINES

1. A method of managing memory, comprising:
receiving, by a hypervisor, an inflate notification from a guest running on a virtual machine, the virtual machine and the hypervisor running on a host machine, the inflate notification including a first identifier corresponding to a first time, and the inflate notification indicating that a set of guest memory pages is unused by the guest at the first time;
determining whether the first identifier precedes a last identifier corresponding to a second time and included in a previously sent inflate request to the guest;
if the first identifier does not precede the last identifier:
for a first subset of the set of guest memory pages modified since the first time, determining, by the hypervisor, to not reclaim a first set of host memory pages corresponding to the first subset of guest memory pages, and
for a second subset of the set of guest memory pages not modified since the first time, reclaiming, by the hypervisor, a second set of host memory pages corresponding to the second subset of guest memory pages; and
if the first identifier precedes the last identifier, discarding the inflate notification, wherein discarding the inflate notification includes determining to not reclaim a set of host memory pages corresponding to the set of guest memory pages specified in the inflate notification.
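
The identifier comparison and the modified/unmodified split above condense into one function. A sketch, with `modified_since` standing in for whatever dirty-page tracking the hypervisor actually uses (all names are hypothetical):

```python
def handle_inflate(notification, last_sent_id, modified_since):
    """notification: (identifier, guest_pages). Returns the set of
    guest pages whose host pages may be reclaimed.

    last_sent_id: identifier from the last inflate request sent to
    the guest; modified_since(page, t) reports whether the page was
    modified after time/identifier t."""
    ident, pages = notification
    if ident < last_sent_id:
        # Stale notification precedes the last request: discard it
        # and reclaim nothing.
        return set()
    # Reclaim only host pages backing guest pages that were not
    # modified since the notification's first time.
    return {p for p in pages if not modified_since(p, ident)}
```
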

US Pat. No. 10,169,087

TECHNIQUE FOR PRESERVING MEMORY AFFINITY IN A NON-UNIFORM MEMORY ACCESS DATA PROCESSING SYSTEM

International Business Ma...

1. A non-transitory computer readable device having a computer program product for preserving memory affinity in a non-uniform memory access data processing system, said non-transitory computer readable device comprising:
program code for, in response to a request for memory access to a page within a first memory affinity domain, determining whether or not said request is initiated by a remote processor associated with a second memory affinity domain;
program code for, in response to a determination that said request is initiated by a remote processor associated with a second memory affinity domain, determining whether or not a page migration tracking module associated with said first memory affinity domain includes an entry for said remote processor;
program code for, in response to a determination that said first page migration tracking module includes an entry for said remote processor, incrementing an access counter associated with said entry within said page migration tracking module;
program code for determining whether or not there is a page ID match with an entry within said page migration tracking module;
program code for, in response to a determination that there is no page ID match with any entry within said page migration tracking module, selecting an entry within said page migration tracking module and providing said entry with a new page ID and a new memory affinity ID;
program code for, in response to the determination that there is a page ID match with an entry within said page migration tracking module, determining whether or not there is a memory affinity ID match with said entry having the page ID field match;
program code for, in response to a determination that there is no memory affinity ID match, updating said entry with the page ID field match with a new memory affinity ID;
program code for, in response to a determination that there is a memory affinity ID match, incrementing an access counter of said entry having the page ID field match;
program code for determining whether or not said access counter has reached a predetermined threshold; and
program code for, in response to a determination that said access counter has reached a predetermined threshold, migrating said page from said first memory affinity domain to said second memory affinity domain.
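
The per-remote-processor tracking entries and the threshold-driven migration decision can be modeled as a small table. An illustrative Python sketch (the claim leaves details such as counter resets unspecified, so those choices here are assumptions):

```python
class MigrationTracker:
    """Per-domain page migration tracking module: one entry per
    remote processor holding (page ID, memory affinity ID, counter)."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.entries = {}  # remote cpu -> [page_id, affinity_id, count]

    def remote_access(self, cpu, page_id, affinity_id):
        """Record a remote access; return True when the access counter
        reaches the threshold, i.e. the page should migrate."""
        e = self.entries.get(cpu)
        if e is None or e[0] != page_id:
            # No page ID match: select the entry and give it a new
            # page ID and a new memory affinity ID.
            self.entries[cpu] = [page_id, affinity_id, 1]
            return False
        if e[1] != affinity_id:
            # Page matches but affinity does not: update the affinity
            # ID (resetting the counter is an assumption).
            e[1], e[2] = affinity_id, 1
            return False
        e[2] += 1
        return e[2] >= self.threshold
```
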

US Pat. No. 10,169,086

CONFIGURATION MANAGEMENT FOR A SHARED POOL OF CONFIGURABLE COMPUTING RESOURCES

International Business Ma...

1. A computer-implemented method of managing a shared pool of configurable computing resources, the method comprising:
collecting a set of scaling factor data related to an active workload on a configuration of the shared pool of configurable computing resources, wherein the set of scaling factor data includes an actual number of transactions per time period being processed;
ascertaining a set of workload resource data associated with the active workload, wherein the set of workload resource data includes a hardware configuration template specifying a processor requirement and a memory requirement for an expected number of transactions per time period to be processed;
computing, by subtracting the actual number of transactions per time period from the expected number of transactions per time period, a transaction per time period difference value;
comparing the transaction per time period difference value to a transaction per time period difference threshold to determine whether the transaction per time period difference value exceeds the transaction per time period difference threshold;
detecting, in response to a determination that the transaction per time period difference value exceeds the transaction per time period difference threshold, a triggering event; and
performing, in response to detecting the triggering event, a configuration action with respect to the configuration of the shared pool of configurable computing resources, wherein the configuration action includes:
reconfiguring the configuration of the shared pool of configurable computing resources.
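
The triggering arithmetic above reduces to a subtraction and one comparison. A minimal sketch (function and parameter names are illustrative):

```python
def triggering_event(expected_tps, actual_tps, threshold):
    """Subtract the actual transactions per time period from the
    expected number and compare the difference to the threshold;
    True signals the triggering event that drives reconfiguration."""
    difference = expected_tps - actual_tps
    return difference > threshold
```
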

US Pat. No. 10,169,085

DISTRIBUTED COMPUTING OF A TASK UTILIZING A COPY OF AN ORIGINAL FILE STORED ON A RECOVERY SITE AND BASED ON FILE MODIFICATION TIMES

International Business Ma...

1. A computer-implemented method comprising:
receiving a request to perform a task indicating at least a first file used to perform the task, wherein the first file is modified at a first update time and is stored on a production site comprising hardware resources configured to store original data and process tasks associated with the original data, and wherein a first copy of the first file is created at a first copy time and stored on a recovery site comprising hardware resources configured to store copies of the original data and process tasks associated with the copies of the original data;
determining the task is a candidate for processing on the recovery site by:
determining that processing the task comprises reading the first file and creating a result file based on the first file;
determining that processing the task can be completed without user input;
determining that the task does not define a physical location for processing the task; and
determining that the task does not alter the first file as a result of performing the task;
determining the first file and the first copy of the first file match by determining the first update time is earlier than the first copy time, wherein determining the first file and the first copy of the first file match further comprises:
determining a first difference between a current time and the first copy time is above a time threshold, wherein the first copy time indicates a time the first copy of the first file began to be created, wherein the time threshold comprises an amount of time greater than an amount of time used to create the first copy of the first file;
performing the task using resources of the recovery site and using the first copy of the first file stored in the recovery site in response to determining that the first file and the first copy of the first file match, wherein performing the task further comprises:
selecting a first resource on the recovery site for processing the task based on the first resource having a first processing utilization below a processing utilization threshold, wherein the first processing utilization comprises an ongoing processing amount in the first resource divided by a processing capacity amount of the first resource, wherein the processing utilization threshold comprises 80%;
selecting the first resource on the recovery site for processing the task further based on the first resource having a first network speed above a network speed threshold, wherein the first network speed is calculated by sending a test file from the first resource to a second resource connected to the first resource via a network and measuring a transfer speed of the test file, wherein the network speed threshold comprises five megabytes per second; and
storing a result file in the recovery site in response to performing the task; and
outputting a file path in response to performing the task, wherein the file path indicates a location of the result file stored in the recovery site.
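
The four candidacy tests and the update-time/copy-time match can each be written as a simple predicate. A sketch under assumed field names; the 2x margin on the copy duration is an illustrative choice satisfying the claim's requirement that the threshold exceed the time used to create the copy:

```python
def is_recovery_candidate(task):
    """The claim's four candidacy checks on a hypothetical task record."""
    return (task["reads_only"]             # reads input, creates a result file
            and not task["needs_user_input"]
            and task["location"] is None   # no pinned physical location
            and not task["alters_input"])  # input file left unmodified

def copies_match(update_time, copy_time, now, copy_duration):
    """File and copy match when the last update precedes the copy time
    and the copy is old enough to have finished."""
    threshold = copy_duration * 2  # assumed margin, > copy_duration
    return update_time < copy_time and (now - copy_time) > threshold
```
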

US Pat. No. 10,169,084

DEEP LEARNING VIA DYNAMIC ROOT SOLVERS

International Business Ma...

1. A computer implemented method comprising:
identifying, by a host computer processor, graphic processor units (GPUs) that are available (available GPUs);
identifying, by the host computer processor, GPUs that are idle (initially idle GPUs) among the available GPUs for an initial iteration of deep learning;
choosing, by the host computer processor, one of the initially idle GPUs as an initial root solver GPU for the initial iteration;
initializing, by the host computer processor, weight data for an initial set of multidimensional data;
transmitting, by the host computer processor, the initial set of multidimensional data to the available GPUs;
forming, by the host computer processor, an initial set of GPUs into an initial binary tree architecture, wherein the initial set comprises the initially idle GPUs and the initial root solver GPU, wherein the initial root solver GPU is the root of the initial binary tree architecture;
calculating, by the initial set of GPUs, initial gradients and a set of initial adjusted weight data with respect to the weight data and the initial set of multidimensional data via the initial binary tree architecture;
in response to the calculating the initial gradients and the initial adjusted weight data, identifying, by the host computer processor, a first GPU among the available GPUs to become idle (first currently idle GPU) for a current iteration of deep learning;
choosing, by the host computer processor, the first currently idle GPU as a current root solver GPU for the current iteration;
transmitting, by the host computer processor, a current set of multidimensional data to the current root solver GPU;
in response to the identifying the first currently idle GPU, identifying, by the host computer processor, additional GPUs that are currently idle (additional currently idle GPUs) among the available GPUs;
transmitting, by the host computer processor, the current set of multidimensional data to the additional currently idle GPUs;
forming, by the host computer processor, a current set of GPUs into a current binary tree architecture, wherein the current set comprises the additional currently idle GPUs and the current root solver GPU, wherein the current root solver GPU is the root of the current binary tree architecture;
calculating, by the current set of GPUs, current gradients and a set of current adjusted weight data with respect to at least the weight data and the current set of multidimensional data via the current binary tree architecture;
in response to the initial root solver GPU receiving a set of calculated initial adjusted weight data, transmitting, by the initial root solver GPU, an initial update to the weight data to the available GPUs;
in response to the current root solver GPU receiving a set of current adjusted weight data, transmitting, by the current root solver GPU, a current update to the weight data to the available GPUs; and
repeating the identifying, the choosing, the transmitting, the forming, and the calculating with respect to the weight data, updates to the weight data, and subsequent sets of multidimensional data.

US Pat. No. 10,169,083

SCALABLE METHOD FOR OPTIMIZING INFORMATION PATHWAY

EMC IP Holding Company LL...

1. An apparatus comprising:
a receiving module configured to receive a request for task execution at a central processing node for worldwide data; wherein the central processing node is connected to sub-processing network nodes; wherein the sub-processing network nodes are grouped into clusters; wherein each cluster has a distributed file system mapping out network nodes for each respective cluster; wherein each cluster stores a subset of the worldwide data; and wherein each cluster is enabled to use the network nodes of the cluster to perform parallel processing; wherein the central processing node is communicatively coupled to a global distributed file system that maps over each of the cluster's distributed file systems to enable orchestration between the clusters;
a dividing module configured to divide by a worldwide job tracker the request for task execution into worldwide task trackers to be distributed to sub-processing network nodes of the clusters; wherein the network sub-nodes manage a portion of the worldwide data for each respective cluster; wherein each worldwide task tracker maintains records of sub-activities executed as part of the worldwide job;
a transmitting module configured to transmit to each of the sub-processing network nodes for each respective cluster the respective portion of the divided task execution by assigning each worldwide task tracker corresponding to the respective portion to each respective cluster; and
a leveraging module configured to generate a graph layout of data pathways, the pathways calculated based upon physical distance between the processing nodes and bandwidth constraints, the leveraging module further configured to distribute task execution based upon the processing power of the processing nodes, graph layout, and the size of data processed by the sub-processing network nodes to reduce data movement between the central processing node and the sub-processing nodes.

US Pat. No. 10,169,082

ACCESSING DATA IN ACCORDANCE WITH AN EXECUTION DEADLINE

INTERNATIONAL BUSINESS MA...

1. A method for execution by a processing module of a dispersed storage and task (DST) execution unit that includes a processor, the method comprises:
receiving a data request for execution by the DST execution unit, the data request including an execution deadline;
comparing the execution deadline to a current time, which includes:
determining an estimated un-accelerated processing duration and an estimated accelerated processing duration;
determining that the execution deadline compares favorably to the current time when an addition of the estimated un-accelerated processing duration to the current time does not exceed the execution deadline; and
determining that the execution deadline compares favorably to the current time when an addition of the estimated accelerated processing duration to the current time does not exceed the execution deadline and the addition of the estimated un-accelerated processing duration to the current time exceeds the execution deadline;
generating an error response when the execution deadline compares unfavorably to the current time; and
when the execution deadline compares favorably to the current time:
determining a priority level based on the execution deadline; and
executing the data request in accordance with the priority level, wherein executing the data request includes accelerating the executing of the data request when the addition of the estimated accelerated processing duration to the current time does not exceed the execution deadline and the addition of the estimated un-accelerated processing duration to the current time exceeds the execution deadline.
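
The two favorable/unfavorable comparisons above map onto a three-way outcome: execute normally, execute accelerated, or return an error. A minimal Python sketch (names are illustrative; durations and times share one unit):

```python
def classify_request(now, deadline, slow_eta, fast_eta):
    """Compare the execution deadline to the current time using the
    estimated un-accelerated (slow) and accelerated (fast) durations."""
    if now + slow_eta <= deadline:
        return "execute"              # meets deadline without acceleration
    if now + fast_eta <= deadline:
        return "execute-accelerated"  # only the accelerated path fits
    return "error"                    # deadline compares unfavorably
```
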

US Pat. No. 10,169,081

USE OF CONCURRENT TIME BUCKET GENERATIONS FOR SCALABLE SCHEDULING OF OPERATIONS IN A COMPUTER SYSTEM

Oracle International Corp...

1. A non-transitory computer readable medium comprising instructions, which when executed by one or more hardware processors, cause performance of operations comprising:
determining a time for performing an action on a first object stored in a data repository, wherein the action comprises one of:
deleting the first object from the data repository,
modifying content of the first object, or
transferring the first object from one location in the repository to another location in the repository;
responsive to determining, at runtime, that a first time bucket generation of a plurality of time bucket generations is a time bucket generation last-configured for storing references included in an object processing index: selecting the first time bucket generation of the plurality of time bucket generations for storing a first reference to the first object, wherein each time bucket generation comprises time buckets that are (a) of a same interval size and (b) correspond to different time periods;
wherein the object processing index comprises references to objects that are to be processed at a particular time;
responsive to selecting the first time bucket generation: selecting a first time bucket of the first time bucket generation based on the time for performing the action on the first object;
storing the first reference to the first object in the first time bucket of the first time bucket generation;
adding a second time bucket generation to the plurality of time bucket generations by configuring the second time bucket generation for the object processing index;
wherein the first time bucket generation and the second time bucket generation are concurrently configured for the object processing index on a temporary basis while the object processing index is transitioned from using the first time bucket generation to using the second time bucket generation;
determining a time for performing an action on a second object stored in the data repository, wherein the action comprises one of:
deleting the second object from the data repository,
modifying the content of the second object, or
transferring the second object from one location in the repository to another location in the repository;
responsive to determining, at runtime, that the second time bucket generation of the plurality of time bucket generations is the time bucket generation last-configured for storing references included in the object processing index: selecting the second time bucket generation of the plurality of time bucket generations for storing a second reference to the second object;
responsive to selecting the second time bucket generation: selecting a second time bucket of the second time bucket generation based on the time for performing the action on the second object;
storing the second reference to the second object in the second time bucket of the second time bucket generation,
wherein the first object corresponding to the first time bucket in the first time bucket generation and the second object corresponding to the second time bucket in the second time bucket generation are processed in accordance with the object processing index.
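
The "last-configured generation" rule and the fixed-interval buckets above can be sketched with a generation that maps a time to a period index, plus an index that always stores into the generation configured last, so two generations coexist during a transition. All names below are hypothetical:

```python
class BucketGeneration:
    """One generation: buckets of a single interval size, each bucket
    covering a different time period."""
    def __init__(self, interval):
        self.interval = interval
        self.buckets = {}  # period index -> list of object references

    def add(self, ref, when):
        self.buckets.setdefault(int(when // self.interval), []).append(ref)

class ProcessingIndex:
    """References go into the generation configured last, letting two
    generations be configured concurrently while the index transitions."""
    def __init__(self, interval):
        self.generations = [BucketGeneration(interval)]

    def reconfigure(self, interval):
        self.generations.append(BucketGeneration(interval))

    def schedule(self, ref, when):
        self.generations[-1].add(ref, when)
```
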

US Pat. No. 10,169,080

METHOD FOR WORK SCHEDULING IN A MULTI-CHIP SYSTEM

Cavium, LLC, Santa Clara...

1. A method of processing work items in a multi-chip system, the method comprising:
designating, by a work source component associated with a source chip device, a work item to a scheduler processor for scheduling, the source chip device being one of multiple chip devices of the multi-chip system, the work source component comprising a core processor or a coprocessor configured to create work items;
assigning, by the scheduler processor, the work item to a destination chip device of the multiple chip devices for processing, the scheduler processor being one of one or more scheduler processors each associated with a corresponding chip device of the multiple chip devices.

US Pat. No. 10,169,079

TASK STATUS TRACKING AND UPDATE SYSTEM

INTERNATIONAL BUSINESS MA...

1. A method for providing status updates while collaboratively resolving an issue, the method comprising:
receiving, using a processing device, an electronic text-based message from a user;
identifying, using the processing device, one or more key phrases in the electronic text-based message, wherein the one or more key phrases are identified based at least in part on training a neural network using training data and applying the neural network to the electronic text-based message, wherein the training data includes key phrases manually indicated by a user;
in response to identifying the one or more key phrases in the received electronic text-based message, automatically displaying, by the processing device, the one or more key phrases to the user with highlighted text;
receiving, by the processing device, a selection from a user of a displayed key phrase from the one or more key phrases that were displayed with highlighted text; and
in response to the user selecting the displayed key phrase from the one or more key phrases displayed with highlighted text, providing at least one status-based suggestion to the user to change a status milestone associated with a problem resolution based on the user selected key phrase;
wherein the providing of the at least one status-based suggestion to the user based on the user selected key phrase comprises:
building a table to map a key phrase to one or more status identifiers;
mapping the key phrase to one or more status identifiers to associate the key phrase with the at least one status-based suggestion;
in response to the user selecting the displayed key phrase having highlighted text, matching the highlighted text to the key phrase of the table to identify the at least one status-based suggestion that is associated with the matching key phrase in the table and then displaying the at least one status-based suggestion to the user for selection; and
displaying a corresponding status milestone based on the user selecting from the at least one status-based suggestion.
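
The table-building and matching steps (build a phrase-to-status table, then match the user's highlighted selection to it) can be sketched directly; the phrase detection itself, a trained neural network in the claim, is out of scope here. Names are illustrative:

```python
class StatusSuggester:
    """Key-phrase-to-status-identifier table from the claim."""
    def __init__(self):
        self.table = {}  # key phrase (lowercased) -> status identifiers

    def map_phrase(self, phrase, statuses):
        # Build the table mapping a key phrase to status identifiers.
        self.table[phrase.lower()] = list(statuses)

    def suggest(self, selected_text):
        # Match the highlighted text the user selected to a key phrase
        # and return the associated status-based suggestions.
        return self.table.get(selected_text.lower(), [])
```
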

US Pat. No. 10,169,078

MANAGING THREAD EXECUTION IN A MULTITASKING COMPUTING ENVIRONMENT

International Business Ma...

1. A method for managing thread execution, the method comprising:
predicting, by one or more computer processors, an amount of processor usage that would be used by a thread in a computing system for execution of a critical section of code, where the critical section of code is defined by a starting marker and an ending marker in a program code that contains the critical section of code;
determining that the thread has a sufficient processor usage allowance to execute the critical section of code to completion; and
in response to determining that the thread has sufficient processor usage allowance to execute the critical section of code to completion:
scheduling, by one or more computer processors, the thread for execution of the critical section of code;
receiving, by one or more computer processors, a request to deschedule the thread, wherein the request is made in response to determining that the thread has insufficient processor usage allowance to continue execution;
responsive to receiving a request to deschedule the thread, scheduling, by one or more computer processors, the thread to complete execution of the critical section of code;
responsive to scheduling the thread to complete execution, determining, by one or more computer processors, processor usage debt accumulated by the thread;
determining that the thread has completed execution of the critical section of code;
responsive to determining that the thread has completed execution of the critical section of code, suspending the thread; and
preventing further execution of the thread until after the processor has executed one or more other threads for an amount of time equal to the amount of processor usage debt accumulated by the thread;
wherein:
the predicted amount of processor usage is a percentage of total execution capacity of the processor that the thread is predicted to use during execution of the critical section of code;
the processor usage debt comprises an amount of time for which the thread is executing while the thread has both insufficient processor usage allowance to continue execution and is executing the critical section of code; and
the one or more computer processors are one or more field programmable gate arrays.
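
The debt accounting above, where runtime spent finishing the critical section after the allowance is exhausted becomes debt that other threads' runtime later pays down, can be sketched in a few lines (names and the unit of time are assumptions):

```python
class DebtScheduler:
    """Sketch of the claim's usage-debt bookkeeping for one thread."""
    def __init__(self, allowance):
        self.allowance = allowance  # remaining processor usage allowance
        self.debt = 0               # accumulated processor usage debt

    def tick_in_critical_section(self, t=1):
        # The thread keeps running to complete the critical section;
        # the allowance drains first, and any time beyond it while
        # still inside the section accumulates as debt.
        covered = min(self.allowance, t)
        self.allowance -= covered
        self.debt += t - covered

    def credit_other_threads(self, t):
        # Other threads' execution time pays the debt down; the thread
        # may resume only once the debt reaches zero.
        self.debt = max(0, self.debt - t)
        return self.debt == 0
```
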

US Pat. No. 10,169,077

SYSTEMS, DEVICES, AND METHODS FOR MAINFRAME DATA MANAGEMENT

United Services Automobil...

1. A method comprising:
loading, by a processor, a utility program onto an operating system of a mainframe, wherein the operating system hosts an application that includes a plurality of running jobs, wherein the utility program includes a set of configuration metadata for the application;
configuring, by the processor, the utility program such that the utility program is configured to receive a user input from a workstation that is in communication with the mainframe and the utility program is configured to interface with the application based on the set of configuration metadata responsive to the user input, wherein the mainframe is in communication with the workstation;
creating, by the processor, a job via the utility program interfacing with the application based on the set of configuration metadata responsive to the user input;
submitting, by the processor, the job to the application via the utility program interfacing with the application based on the set of configuration metadata responsive to the user input;
querying, by the processor, the job at the application before completion for an error via the utility program based on the set of configuration metadata;
triggering, by the processor, an alert based on the set of configuration metadata via the utility program responsive to the error; and
outputting, by the processor, the alert to the workstation in communication with the utility program.

US Pat. No. 10,169,076

DISTRIBUTED BATCH JOB PROMOTION WITHIN ENTERPRISE COMPUTING ENVIRONMENTS

International Business Ma...

1. A computer-implemented method for batch code promotion between enterprise scheduling system environments, the method comprising the steps of:
connecting, by one or more processors, a graphical interface of an entity to one or more enterprise scheduling environments for promoting changes of batch code of the entity between the one or more enterprise scheduling environments, the batch code is processed during a batch job, the batch job is a low priority job, wherein the low priority batch job is processed by the one or more enterprise scheduling environments;
mapping, by the one or more processors, parameters to batch code fields of the batch code that changes between a first scheduling level of the one or more enterprise scheduling environments to a second scheduling level of the one or more enterprise scheduling environments to create a mapping table to the batch code fields that change from the first scheduling level and the second scheduling level, wherein the parameters include at least one batch job scheduling object identification, wherein the scheduling object identification further includes a container for all low priority batch jobs, an identification of the batch code, and an identification of the network workstations of the one or more enterprise scheduling environments for promoting the batch code between the first scheduling level to the second scheduling level;
generating a backup, in memory, of the mapping table to the batch code fields;
in response to an action on the graphical interface to promote the changes of the batch code fields between the mapped parameters of the first scheduling level and the second scheduling level, assigning, by the one or more processors, identification to the changes of the batch code fields;
in response to a request to promote the identified changes of the batch code fields, promoting, by the one or more processors, the requested identified changes from the first scheduling level to the second scheduling level using the mapped parameters of the first scheduling level and the second scheduling level; and
correlating, by the one or more processors, the mapping table of changed batch code fields of the first scheduling level with the mapping table of changed batch code fields of the second scheduling level, wherein the correlated mapping table of the batch code fields that change between the first scheduling level and the second scheduling level includes metadata of batch code for each one of the first and the second scheduling levels, and wherein the metadata of the batch code for each one of the first and the second scheduling levels is identified for promoting changes of the batch code fields from first scheduling level to the second scheduling level, further includes the steps of:
creating the batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
verifying the created batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
operating the verified batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
generating a second mapping table based on the mapped parameters and the created, verified, and operated batch job of batch code fields; and
promoting the operated batch job between the second scheduling level and a third scheduling level based on the second mapping table.

US Pat. No. 10,169,075

METHOD FOR PROCESSING INTERRUPT BY VIRTUALIZATION PLATFORM, AND RELATED DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A method for processing an interrupt by a virtualization platform, wherein the method is applied to a computing node, wherein the computing node comprises a physical hardware layer, a host running at the physical hardware layer, at least one virtual machine (VM) running on the host, and virtual hardware that is virtualized on the at least one VM, wherein the physical hardware layer comprises X physical central processing units (pCPUs) and Y physical input/output devices, wherein the virtual hardware comprises Z virtual central processing units (vCPUs), wherein the Y physical input/output devices comprise a jth physical input/output device, wherein the at least one VM comprises a kth VM, wherein the jth physical input/output device directs to the kth VM, wherein the method is executed by the host, and wherein the method comprises:determining an nth pCPU from U target pCPUs when an ith physical interrupt occurs in the jth physical input/output device, wherein the U target pCPUs are pCPUs that comprise an affinity relationship with both the ith physical interrupt and V target vCPUs, wherein the V target vCPUs are vCPUs that are virtualized on the kth VM and comprise an affinity relationship with an ith virtual interrupt, wherein the ith virtual interrupt corresponds to the ith physical interrupt, wherein the X pCPUs comprise the U target pCPUs, and wherein the Z vCPUs comprise the V target vCPUs;
setting the nth pCPU to process the ith physical interrupt;
determining the ith virtual interrupt according to the ith physical interrupt; and
determining an mth vCPU from the V target vCPUs such that the kth VM uses the mth vCPU to execute the ith virtual interrupt,
wherein X, Y, and Z are positive integers greater than 1, wherein U is a positive integer greater than or equal to 1 and less than or equal to X, wherein V is a positive integer greater than or equal to 1 and less than or equal to Z, and wherein i, j, k, m, and n are positive integers.
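The pCPU-selection step in the claim above can be sketched as a set intersection: the U target pCPUs are those affine both to the physical interrupt and to the V target vCPUs of the owning VM. A minimal Python illustration, not from the patent; the data structures and the lowest-id selection policy are assumptions:

```python
# Hypothetical sketch of determining the nth pCPU from the U target pCPUs.
def select_target_pcpu(irq_affinity, vcpu_affinities, target_vcpus):
    """irq_affinity: set of pCPU ids the physical interrupt may run on.
    vcpu_affinities: dict mapping vCPU id -> set of pCPU ids it may run on.
    target_vcpus: vCPU ids affine to the matching virtual interrupt."""
    # pCPUs affine to at least one of the V target vCPUs
    vcpu_pcpus = set().union(*(vcpu_affinities[v] for v in target_vcpus))
    # U target pCPUs: intersection with the interrupt's affinity mask
    targets = irq_affinity & vcpu_pcpus
    if not targets:
        raise ValueError("no pCPU is affine to both interrupt and vCPUs")
    # picking the lowest-numbered target is a policy choice, not claimed
    return min(targets)
```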

US Pat. No. 10,169,074

MODEL DRIVEN OPTIMIZATION OF ANNOTATOR EXECUTION IN QUESTION ANSWERING SYSTEM

International Business Ma...

1. A method, in a data processing system comprising a processor and a memory, for scheduling execution of pre-execution operations of an annotator of a question and answer (QA) system pipeline, the method comprising:using, by the data processing system, a model to represent a system of annotators of the QA system pipeline, wherein the model represents each annotator in the system of annotators as a node having one or more performance parameters for indicating a performance of an execution of an annotator corresponding to the node, wherein each annotator in the system of annotators is a program that takes a portion of unstructured input text, extracts structured information from the portion of the unstructured input text, and generates annotations or metadata that are attached by the annotator to a source of the unstructured input text, wherein, for each node in the model, the one or more performance parameters corresponding to the node comprise an arrival rate parameter and a service rate parameter of the annotator associated with the node, wherein the arrival rate parameter indicates a number of jobs arriving in the node per second, and wherein the service rate parameter indicates a number of jobs being serviced by the node per second;
determining, by the data processing system, for each annotator in a set of annotators of the system of annotators, an effective response time for the annotator based on the one or more performance parameters;
calculating, by the data processing system, a pre-execution start interval for a first annotator based on an effective response time of a second annotator, wherein execution of the first annotator is sequentially after execution of the second annotator; and
scheduling, by the data processing system, execution of pre-execution operations associated with the first annotator based on the calculated pre-execution start interval for the first annotator.
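With arrival and service rates per node, one natural reading of "effective response time" is the M/M/1 queueing formula 1/(service rate − arrival rate); the pre-execution start interval for the first annotator is then derived from the second annotator's value. A minimal Python sketch under that assumption (the patent does not fix the formula):

```python
# Hypothetical sketch: treat each annotator as an M/M/1 queue node.
def effective_response_time(arrival_rate, service_rate):
    """Rates in jobs/second, as in the claim's node parameters."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable node: arrivals exceed service capacity")
    return 1.0 / (service_rate - arrival_rate)

def pre_execution_start_interval(prev_arrival_rate, prev_service_rate):
    # start the next annotator's pre-execution operations one effective
    # response time of the preceding annotator ahead of its completion
    return effective_response_time(prev_arrival_rate, prev_service_rate)
```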

US Pat. No. 10,169,073

HARDWARE ACCELERATORS AND METHODS FOR STATEFUL COMPRESSION AND DECOMPRESSION OPERATIONS

Intel Corporation, Santa...

1. A hardware processor comprising:a core to execute a thread and offload at least one of a compression thread and a decompression thread; and
a hardware compression and decompression accelerator to execute the at least one of the compression thread and the decompression thread to consume input data and generate output data, wherein the hardware compression and decompression accelerator is coupled to a plurality of input buffers to store the input data, a plurality of output buffers to store the output data, an input buffer descriptor array with an entry for each respective input buffer, an input buffer response descriptor array with a corresponding response entry for each respective input buffer, an output buffer descriptor array with an entry for each respective output buffer, and an output buffer response descriptor array with a corresponding response entry for each respective output buffer.

US Pat. No. 10,169,072

HARDWARE FOR PARALLEL COMMAND LIST GENERATION

NVIDIA CORPORATION, Sant...

1. A method for providing an initial default state for a multi-threaded processing environment, the method comprising:receiving, from an application program, a plurality of separate command lists corresponding to a plurality of parallel threads associated with the application program, wherein each thread in the plurality of parallel threads generates a separate command list in the plurality of command lists;
causing a first command list associated with a first thread included in the plurality of parallel threads to be executed by a processing unit based on a first processing state, wherein the first processing state includes a set of graphics parameters;
after the processing unit executes the first command list, causing a second command list associated with a second thread included in the plurality of parallel threads to be executed by the processing unit based on the first processing state inherited from the first command list;
causing a single unbind method to be executed by the processing unit, wherein the unbind method resets one or more parameters included in the set of graphics parameters to an initial processing state; and
causing commands included in a third command list to be executed by the processing unit after the unbind method is executed.
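The state model in the claim above — back-to-back command lists inherit graphics state, and a single unbind resets tracked parameters to the initial default — can be sketched as follows; the parameter names and class are illustrative assumptions, not NVIDIA's API:

```python
# Hypothetical sketch of command-list state inheritance with an unbind reset.
INITIAL_STATE = {"blend": "off", "depth_test": "less", "viewport": (0, 0)}

class ProcessingUnit:
    def __init__(self):
        self.state = dict(INITIAL_STATE)

    def execute(self, command_list):
        # each command mutates the current state, which the next command
        # list inherits unless an unbind intervenes
        for param, value in command_list:
            self.state[param] = value

    def unbind(self):
        # single unbind: reset parameters to the initial processing state
        self.state = dict(INITIAL_STATE)
```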

US Pat. No. 10,169,071

HYPERVISOR-HOSTED VIRTUAL MACHINE FORENSICS

MICROSOFT TECHNOLOGY LICE...

1. A computing system comprising:a processor; and
memory storing instructions executable by the processor, wherein the instructions, when executed, provide a hypervisor configured to:
host a virtualization environment that includes a set of virtual machine (VM) partitions that each include an isolated execution environment managed by the hypervisor, the set of VM partitions comprising:
a root virtual machine (VM) partition,
a first child VM partition that is hypervisor-aware,
a second child VM partition that is non-hypervisor-aware, and
a forensics VM partition that:
includes a forensics service application programming interface (API),
is configured to directly access hardware resources associated with the computing system, and
is separate from, and more privileged than, the first child VM partition; and
create, in the virtualization environment:
a first inter-partition communication mechanism configured to provide a communication channel between the forensics VM partition and the first child VM partition, and
a second inter-partition communication mechanism;
wherein the forensics VM partition is configured to:
acquire, by the first inter-partition communication mechanism, forensics data from a VM running in the first child VM partition; and
provide the forensics data to a forensics service using the forensics service API.

US Pat. No. 10,169,070

MANAGING DEDICATED AND FLOATING POOL OF VIRTUAL MACHINES BASED ON DEMAND

United Services Automobil...

1. A computer-implemented method for managing demand of a pool of virtual machines, the method comprising:determining a demand for a use of virtual machines in a pool of virtual machines, wherein, for a pool that is managed as a dedicated pool, the demand is determined based on resource usage per virtual machine in the pool, and, for a pool that is managed as a floating pool, the demand is determined based on times that one or more virtual machines in the pool are unassigned to users of the pool;
identifying that the determined demand is outside a threshold resource usage of the pool; and
provisioning one or more additional resources to the pool.
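The two demand signals in the claim above — per-VM resource usage for a dedicated pool, unassigned time for a floating pool — can be sketched as below. The metrics, threshold, and field names are illustrative assumptions:

```python
# Hypothetical sketch of the dedicated vs. floating demand computation.
def pool_demand(pool_type, vms):
    if pool_type == "dedicated":
        # demand = mean resource usage across VMs (0.0 .. 1.0)
        return sum(vm["usage"] for vm in vms) / len(vms)
    if pool_type == "floating":
        # demand = fraction of VMs currently assigned to users;
        # VMs sitting unassigned for long pull this figure down
        assigned = sum(1 for vm in vms if vm["assigned"])
        return assigned / len(vms)
    raise ValueError(pool_type)

def needs_provisioning(demand, threshold=0.8):
    # provision additional resources when demand exceeds the threshold
    return demand > threshold
```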

US Pat. No. 10,169,069

SYSTEM LEVEL UPDATE PROTECTION BASED ON VM PRIORITY IN A MULTI-TENANT CLOUD ENVIRONMENT

International Business Ma...

1. A computer-implemented method for managing system activities in a cloud computing environment, comprising:determining a type of system activity to perform on one or more servers in the cloud computing environment;
identifying a set of locking parameters at a plurality of hierarchal levels of the cloud computing environment available for restricting system activity on the one or more servers, wherein each locking parameter corresponds to a different type of system activity and is associated with a particular hierarchal level of the plurality of hierarchal levels;
determining whether to perform the type of system activity based on a value of a locking parameter of the set of locking parameters that is associated with the type of system activity, the particular hierarchal level of the plurality of hierarchal levels associated with the locking parameter of the set of locking parameters, and a priority associated with the type of system activity; and
performing the type of system activity after determining to perform the type of system activity.
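The hierarchical lock check in the claim above can be sketched as walking the levels and testing the activity's priority against any lock found; the level names and the "minimum priority" rule are illustrative assumptions:

```python
# Hypothetical sketch of per-level, per-activity locking parameters.
LEVELS = ["cloud", "host_group", "server", "vm"]  # most to least general

def may_perform(activity, priority, locks):
    """locks: dict level -> dict activity -> minimum priority allowed."""
    for level in LEVELS:
        min_priority = locks.get(level, {}).get(activity)
        if min_priority is not None and priority < min_priority:
            return False  # locked at this hierarchal level for this priority
    return True  # no applicable level forbids the activity
```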

US Pat. No. 10,169,068

LIVE MIGRATION FOR VIRTUAL COMPUTING RESOURCES UTILIZING NETWORK-BASED STORAGE

Amazon Technologies, Inc....

1. A system, comprising:a plurality of compute nodes comprising one or more processors and memory, configured to implement:
a plurality of hosts for virtual compute instances;
a control plane; and
the control plane, configured to:
for a virtual compute instance that is identified for migration from a source host to a destination host and is a client of a network-based storage resource that stores data for which access is enforced according to a lease state for hosts connected to the network-based resource:
direct the destination host to establish a connection with the network-based storage resource with a standby lease state; and
direct that a request be sent to the network-based storage resource to promote the standby lease state for the destination host to a primary lease state and to change a primary lease state for the source host to another lease state.
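The lease hand-off in the claim above — destination connects on a standby lease, then one request promotes it to primary while the source's primary lease changes to another state — can be sketched as a small state transition; the state names beyond "standby" and "primary" are assumptions:

```python
# Hypothetical sketch of the lease-state hand-off during live migration.
def migrate_lease(leases, source, destination):
    """leases: dict host -> lease state ('primary' | 'standby' | 'released')."""
    assert leases.get(source) == "primary"
    leases[destination] = "standby"   # destination connects with standby lease
    # single request: promote destination, demote source
    leases[destination] = "primary"
    leases[source] = "released"       # "another lease state" per the claim
    return leases
```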

US Pat. No. 10,169,067

ASSIGNMENT OF PROXIES FOR VIRTUAL-MACHINE SECONDARY COPY OPERATIONS INCLUDING STREAMING BACKUP JOB

Commvault Systems, Inc., ...

1. A computer-readable medium, excluding transitory propagating signals, storing instructions that, when executed by a computing device having one or more processors and non-transitory computer-readable memory, cause the computing device to perform a method comprising:identifying, by a first data agent executing on the computing device, one or more proxies in a storage management system that are eligible to back up a given virtual machine in a first set of virtual machines in the storage management system,
wherein any one proxy among the one or more proxies is one of:
(a) a first virtual machine that executes on a first computing device, wherein the first virtual machine executes a second data agent for virtual-machine backup, and
(b) a second computing device that executes a second data agent for virtual-machine backup;
wherein the identifying comprises:
(i) determining (A) a set of candidate proxies for backing up the given virtual machine, and (B) a mode of access available to each respective candidate proxy for accessing the given virtual machine's data as a source for backup,
wherein the mode of access has a predefined tier of preference,
wherein the determining is based on analyzing, by the first data agent, data from a database that is associated with a storage manager component that manages the storage management system, and
wherein the storage manager component designates the first data agent as a coordinator data agent for a first backup job for the first set of virtual machines,
(ii) classifying each candidate proxy in the set of candidate proxies based on the predefined tier of preference for the respective candidate proxy's mode of access to the given virtual machine's data as the source for backup, and
(iii) defining one or more candidate proxies that are classified in a highest tier of preference as being eligible to back up the given virtual machine; and
wherein if the defining results in the given virtual machine being stranded without an eligible proxy, subsequently defining one or more candidate proxies, which are classified in a next highest tier of preference that is less than the highest tier of preference, as being eligible to back up the given virtual machine.
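Steps (i)–(iii) and the stranded-VM fallback above amount to grouping candidate proxies by the preference tier of their access mode and walking tiers from best to worst. A minimal Python sketch; the access-mode names and tier ordering are illustrative assumptions, not Commvault's:

```python
# Hypothetical sketch of tier-based proxy eligibility with fallback.
TIER = {"san": 1, "hotadd": 2, "nbd": 3}  # lower number = more preferred

def eligible_proxies(candidates):
    """candidates: list of (proxy_name, access_mode, available)."""
    by_tier = {}
    for name, mode, available in candidates:
        by_tier.setdefault(TIER[mode], []).append((name, available))
    for tier in sorted(by_tier):
        # proxies in the best populated tier are eligible; fall back to the
        # next tier whenever the current one leaves the VM stranded
        eligible = [name for name, ok in by_tier[tier] if ok]
        if eligible:
            return eligible
    return []  # stranded: no eligible proxy at any tier
```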

US Pat. No. 10,169,066

SYSTEM AND METHOD FOR ENHANCING ADVANCED DRIVER ASSISTANCE SYSTEM (ADAS) AS A SYSTEM ON A CHIP (SOC)

iOnRoad Technologies Ltd....

1. A System on Chip (SoC), comprising:an Integrated Circuit (IC) integrating the following into a single chip:
at least one Advanced Driver Assistance System (ADAS) processing unit;
at least one application processing unit;
at least one memory storing ADAS code comprising ADAS computer instructions adapted to be executed on said at least one ADAS processing unit for processing vehicle sensor data and Virtual Machine (VM) code for executing on said at least one application processing unit at least one VM, wherein said VM code is executed separately and independently from an execution of said ADAS code; and
a hypervisor which manages an execution of at least one Operation System (OS) of said at least one VM and an access to a processor shared memory of said at least one ADAS processing unit for acquiring an outcome of executing said ADAS computer instructions for the completion of an ADAS enhancing function by said execution of said at least one VM on said at least one application processing unit.

US Pat. No. 10,169,065

LIVE MIGRATION OF HARDWARE ACCELERATED APPLICATIONS

Altera Corporation, San ...

1. A method of migrating a hardware accelerated application from a source server to a destination server, wherein the source server comprises a processor connected to an external migration controller and reconfigurable circuitry, wherein the reconfigurable circuitry includes a plurality of accelerator resource slots and a memory management unit for the accelerator resource slots, the method comprising:at the source server, receiving a migration notification from the migration controller, wherein the migration notification specifies a set of accelerator resource slots of the plurality of accelerator resource slots to be migrated and an identifier for the resources in the memory management unit to be migrated; wherein the migration controller is further configured for:
saving an image of state information associated with the hardware accelerated application from the source server to network attached storage in response to receiving the migration notification;
copying the image of state information associated with the hardware accelerated application from the network attached storage to the destination server; and
running the hardware accelerated application in parallel on the source server and the destination server.

US Pat. No. 10,169,064

AUTOMATIC NETWORK CONFIGURATION OF A PRE-CONFIGURED HYPER-CONVERGED COMPUTING DEVICE

VMware, Inc., San Jose, ...

1. A computer-implemented method for automatic network configuration of a pre-configured hyper-converged computing device, comprising:requesting network configuration information from another pre-configured hyper-converged computing device already configured on a network, said another pre-configured hyper-converged computing device includes pretested, pre-configured and pre-integrated storage, server and network components, including software, that are located in an enclosure;
said another pre-configured hyper-converged computing device further including a hypervisor that supports a virtualization infrastructure, wherein said pre-configured hyper-converged computing device is offered for sale as a single stock keeping unit (SKU), said pre-configured hyper-converged computing device not required to include any additional hardware or software to support and manage said virtualization infrastructure, wherein upon powering on said pre-configured hyper-converged computing device for a first time, only a single end-user license agreement (EULA), pertaining to said hypervisor and said pre-configured and pre-integrated storage, is displayed to an end-user;
receiving said network configuration information from said another pre-configured hyper-converged computing device; and
automatically performing network configuration by said pre-configured hyper-converged computing device such that said pre-configured hyper-converged computing device is automatically configured to said network, said pre-configured hyper-converged computing device includes pretested, pre-configured and pre-integrated storage, server and network components, including software, that are located in an enclosure; said pre-configured hyper-converged computing device further including a hypervisor that supports a virtualization infrastructure.

US Pat. No. 10,169,063

HYPERVISOR CAPABILITY ACCESS PROVISION

Red Hat Israel, LTD., Ra...

1. A method, comprising:receiving, via a user interface provided by a host controller, a first request for a first hypervisor capability of a hypervisor executing on a host server;
determining, by a processing device, that the first hypervisor capability can be provisioned by a virtualization manager executing on the host controller in view of inclusion within a hypervisor capability subset offered by the virtualization manager;
receiving, via the user interface, a second request for a second hypervisor capability of the hypervisor;
determining, by the processing device, that the virtualization manager cannot provision the second hypervisor capability in view of lack of inclusion within the hypervisor capability subset offered by the virtualization manager;
providing, via the user interface, a first indication of successful provision of the first hypervisor capability in response to the first hypervisor capability being provisioned by the virtualization manager; and
providing, via the user interface, a second indication of successful provision of the second hypervisor capability in response to the second hypervisor capability being provisioned by a hypervisor accessor bypassing the virtualization manager and using one or more first common gateway interface (CGI) scripts hosted by the hypervisor accessor to directly access the hypervisor via a command line tool of the hypervisor, wherein a set of hypervisor capabilities is accessible by the hypervisor accessor executing on the host server, the set comprising the hypervisor capability subset that is accessible by the virtualization manager and a plurality of hypervisor capabilities that are inaccessible by the virtualization manager, the plurality of hypervisor capabilities comprising the second hypervisor capability.

US Pat. No. 10,169,062

PARALLEL MAPPING OF CLIENT PARTITION MEMORY TO MULTIPLE PHYSICAL ADAPTERS

International Business Ma...

1. A method for performing an input/output (I/O) request, the method comprising:mapping an address for at least a first page associated with a virtual I/O request to an entry in a virtual translation control entry (TCE) table;
identifying a plurality of physical adapters required to service the virtual I/O request; and
upon determining, for each of the identified physical adapters, that an entry in the respective physical TCE table corresponding to the physical adapter is available, for the identified available physical adapters:
mapping the entry in the virtual TCE table to an entry in the respective physical TCE table corresponding to the identified available physical adapters in parallel, and
issuing a physical I/O request corresponding to each physical TCE table entry to the respective available physical adapters in parallel.
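The parallel fan-out in the claim above — one virtual TCE entry mapped into each adapter's physical TCE table concurrently — can be sketched with a thread pool; the table representation and return value are illustrative assumptions:

```python
# Hypothetical sketch of mapping one virtual TCE entry to multiple
# physical TCE tables in parallel.
from concurrent.futures import ThreadPoolExecutor

def map_and_issue(virtual_entry, adapters):
    """adapters: dict adapter_name -> physical TCE table (a list with free slots)."""
    def map_one(item):
        name, table = item
        table.append(virtual_entry)   # map entry into this adapter's table
        return (name, len(table) - 1)  # target slot for the physical I/O request
    with ThreadPoolExecutor() as pool:
        # each adapter's mapping (and subsequent I/O issue) runs in parallel
        return list(pool.map(map_one, adapters.items()))
```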

US Pat. No. 10,169,061

SCALABLE AND FLEXIBLE OPERATING SYSTEM PLATFORM

FORD GLOBAL TECHNOLOGIES,...

1. A system, comprising a computer having a processor and a memory, wherein the memory includes:at least one bootloader program that includes instructions to instantiate a management layer that includes a first operating system kernel and a virtual machine manager that executes in the context of the operating system kernel;
instructions in the management layer to instantiate, after the management layer is running, at least one second operating system that executes in the context of the virtual machine manager; and wherein the memory of the computer further includes at least one application; and
at least one security layer that includes instructions for receiving, from at least one application, a request to instantiate and execute, and for identifying a guest operating system to instantiate and execute the at least one application;
wherein the at least one boot loader program includes a primary boot loader and a secondary boot loader, wherein the secondary boot loader includes instructions for receiving updates to support the guest operating system according to a download initiated by the guest operating system.

US Pat. No. 10,169,060

OPTIMIZATION OF PACKET PROCESSING BY DELAYING A PROCESSOR FROM ENTERING AN IDLE STATE

Amazon Technologies, Inc....

1. A method, comprising:determining, by a computer system, a first processing time for a first stage of a pipeline and a second processing time for a second stage of the pipeline, the first stage and the second stage included in a plurality of stages of the pipeline;
determining a first delay value based at least in part on the first processing time, the first delay value being associated with the first stage of the pipeline;
obtaining a first network packet at the pipeline, the first network packet being obtained from a particular source within a first time that includes at least the first processing time and the second processing time;
causing a processor to process the first network packet at the first stage of the pipeline while the processor is in a processing state;
causing the processor to remain in the processing state after processing the first network packet at the first stage of the pipeline for at least the first delay value;
causing the processor to process the first network packet at the second stage of the pipeline while the processor is in the processing state;
determining a second delay value based at least in part on the second processing time, the second delay value being associated with the second stage of the pipeline, the first delay value being different from the second delay value, and the second delay value being calculated based at least in part on which of polling the processor at the first stage of the pipeline, the first processing time, or the second processing time is a greatest value;
causing the processor to remain in the processing state after processing the first network packet for at least the second delay value; and
obtaining a second network packet at the pipeline, the second network packet being obtained after the first network packet is obtained, and the second network packet being able to be processed while the processor is in the processing state.
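The delay arithmetic recited above is concrete for the second stage: its delay is the greatest of the polling time and the two stage processing times. A minimal Python sketch; the first stage's rule is stated only as "based at least in part on the first processing time", so equating them is an assumption:

```python
# Hypothetical sketch of the per-stage idle-delay computation.
def first_delay(first_processing_time):
    # claim: based at least in part on the first processing time
    # (assumed equal here for illustration)
    return first_processing_time

def second_delay(polling_time, first_processing_time, second_processing_time):
    # claim: calculated from whichever of polling time, first processing
    # time, or second processing time is the greatest value
    return max(polling_time, first_processing_time, second_processing_time)
```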

US Pat. No. 10,169,059

ANALYSIS SUPPORT METHOD, ANALYSIS SUPPORTING DEVICE, AND RECORDING MEDIUM

FUJITSU LIMITED, Kawasak...

11. An analysis support computer, comprising:a memory; and
a processor coupled to the memory and configured to:
first search, in configuration identification information that identifies configuration values indicating identification of software, hardware resource specification, and hardware resource utilization of physical machines and virtual machines executed on the physical machines, before and after respective changes of virtual machines on the physical machines and of the virtual machines respectively executed on the physical machines, for a second physical machine that has configuration identification information which configuration values of software, hardware resource specification and hardware resource utilization are similar to configuration values of a first physical machine, on which configuration identification information of a first virtual machine to be analyzed for migration is executed, before and after a change of a state of at least one virtual machine among virtual machines executed on the second physical machine,
second search in the configuration identification information for a second virtual machine among the at least one virtual machine that is executed on the second physical machine and which configuration values of software, hardware resource specification and hardware resource utilization are similar to configuration values of software, hardware resource specification and hardware resource utilization of the first virtual machine, before and after the change of the state of the second virtual machine executed on the second physical machine, to output information for execution of a process to control the migration of the first virtual machine, in response to the first and second searching, and
perform analysis processing that analyzes operational trends of the virtual machines by using the hardware resource utilization of the first virtual machine and the hardware resource utilization of the second virtual machine.

US Pat. No. 10,169,058

SCRIPTING LANGUAGE FOR ROBOTIC STORAGE AND RETRIEVAL DESIGN FOR WAREHOUSES

1. A system for scripting language for design and operation of a robotic storage and retrieval system in a warehouse, said system comprising:a processor and memory operable to provide:
a scripting language framework for directed operation of a control system of said robotic storage and retrieval system, said scripting language framework providing a shelving descriptor and a robot descriptor;
said shelving descriptor operable to model a shelving to be deployed in said warehouse, said shelving descriptor further having associated shelving attributes defining properties of said shelving descriptor;
said robot descriptor operable to model a robot to be deployed in said warehouse, said robot descriptor further having associated robot attributes defining properties of said robot descriptor;
a scripting editor comprising a user interface operable to receive input scripting language code conforming to said scripting language framework and based on warehouse metadata;
a parser operable to interpret or compile said input scripting language code into a runtime system;
said runtime system configured to issue control operations to a robot in said warehouse and communicatively interposed between said robot and a control system of said robotic storage and retrieval system.
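The two descriptors the framework provides — a shelving descriptor and a robot descriptor, each with attributes defining its properties — can be sketched as simple data types; all attribute names here are illustrative assumptions:

```python
# Hypothetical sketch of the shelving and robot descriptors.
from dataclasses import dataclass, field

@dataclass
class ShelvingDescriptor:
    shelf_id: str
    rows: int
    columns: int
    attributes: dict = field(default_factory=dict)  # e.g. max load, zone

@dataclass
class RobotDescriptor:
    robot_id: str
    speed_mps: float
    attributes: dict = field(default_factory=dict)  # e.g. lift height, payload
```

A scripting editor would collect code conforming to these descriptors, and a parser would compile it into the runtime that drives the control system.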

US Pat. No. 10,169,057

SYSTEM AND METHOD FOR DEVELOPING AN APPLICATION

Taplytics Inc., Toronto ...

1. A method of remotely modifying a user interface of an application deployed on a plurality of computing devices, the method to be performed at a server that is remote from the computing devices, the method comprising:identifying a first set of parameters corresponding to at least one user interface element of the user interface;
identifying a second set of parameters, the second set of parameters including second update parameters for updating the at least one user interface element of the user interface, the at least one user interface element being identified at the server by a programming language unit for the user interface element in the program code of the application;
identifying at least one first computing device and at least one second computing device in the plurality of computing devices;
associating the at least one first computing device with the first set of parameters;
associating the at least one second computing device with the second set of parameters;
sending the second update parameters to the at least one second computing device, wherein each computing device in the at least one second computing device
updates the at least one user interface element of the deployed application on the second computing device with the second update parameters; and
displays a modified user interface for the deployed application, the modified user interface comprising the updated at least one user interface element.

US Pat. No. 10,169,056

EFFECTIVE MANAGEMENT OF VIRTUAL CONTAINERS IN A DESKTOP ENVIRONMENT

International Business Ma...

1. A method for identifying installed software components in a container running in a virtual execution environment, wherein the container is created by instantiating image data, the method comprising:determining a respective identifier for each of individual layers of a layered structure of the image data;
retrieving from a repository storage arrangement storing information for non-container-based software and container-based software, the information for the container-based software identifying at least one of the installed software components in the container based on the respective identifier for at least one of the individual layers;
forming, from the information stored in the repository storage arrangement, a displayable data structure allowing row filtering for software management and at least specifying as a respective row in the displayable data structure, for each of the installed software components, (i) a type as one of the non-container-based software or the container-based software, (ii) a virtual machine and an operating system corresponding thereto, and (iii) an operating status of started or stopped; and
displaying, on a display device, the displayable data structure.
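The lookup in the claim above — each image layer's identifier keys repository records of installed software, which are flattened into display rows carrying type, VM/OS, and status — can be sketched as follows; the repository schema and digest format are assumptions:

```python
# Hypothetical sketch of resolving container layers to displayable rows.
def installed_components(layer_digests, repository):
    """repository: dict layer_digest -> list of (name, vm, os, status) records."""
    rows = []
    for digest in layer_digests:
        for name, vm, os_name, status in repository.get(digest, []):
            rows.append({"software": name, "type": "container-based",
                         "vm": vm, "os": os_name, "status": status})
    return rows  # one filterable row per installed software component
```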

US Pat. No. 10,169,055

ACCESS IDENTIFIERS FOR GRAPHICAL USER INTERFACE ELEMENTS

SAP SE, Walldorf (DE)

1. A non-transitory computer readable storage medium storing instructions, which when executed by a computer cause the computer to perform operations comprising:receiving a trigger to render at least one graphical user interface element on a graphical user interface associated with a display;
retrieving one or more pre-defined accessibility parameters associated with the at least one graphical user interface element, wherein retrieving the one or more pre-defined accessibility parameters associated with the at least one graphical user interface element comprises accessing one or more application programming interfaces (APIs) associated with the at least one graphical user interface element, and wherein the one or more APIs return a set of requirements associated with rendering the triggered at least one graphical user interface element;
performing an access control check in real time to determine accessibility information associated with an application and corresponding to the pre-defined accessibility parameters, wherein the access control check determines whether the accessibility information meets the one or more pre-defined accessibility parameters based on whether the one or more pre-defined accessibility parameters are met;
associating a visual identifier representing an accessibility status to the at least one graphical user interface element determined based on the access control check; and
rendering the at least one graphical user interface element with the visual identifier on the graphical user interface, wherein each of the at least one graphical user interface elements are augmented with the associated visual identifier indicating a real-time accessibility status of the associated graphical user interface element.

US Pat. No. 10,169,054

UNDO AND REDO OF CONTENT SPECIFIC OPERATIONS

International Business Ma...

1. A method for performing undo or redo requests, the method comprising:
receiving, by one or more computer processors, a list of performed operations, wherein the list of performed operations contains all operations performed in an order of processing;
receiving, by one or more computer processors, a request from a user, wherein the request includes at least one of an undo request of a last performed operation or a redo request of a last performed undo request from the list of performed operations;
requesting, by one or more computer processors, the user provide a selection of at least one content type, wherein the at least one content type is at least one of the following categories: text, audio, video, images;
receiving, by one or more computer processors and from the user, the selection of at least one content type;
determining, by one or more computer processors, a content type of each performed operation in the list of performed operations;
determining, by one or more computer processors, a group of all performed operations from the list of performed operations that have a content type the same as one content type of the at least one content types;
determining, by one or more computer processors, that the group of all performed operations from the list of performed operations that have a content type the same as one content type of the at least one content types consists of zero performed operations;
requesting, by one or more computer processors, that the user provide an additional selection of at least one content type which consists of one or more performed operations;
receiving, by one or more computer processors and from the user, the additional selection of at least one content type; and
responsive to determining the group of all performed operations from the list of performed operations that have a content type the same as one content type of the at least one content types, performing, by one or more computer processors, the at least one of the undo request of a last performed operation or the redo request of a last performed undo request from the list of performed operations that have one content type of the at least one content types.
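The core mechanism of this claim — filtering the operation list by user-selected content types, and re-prompting when the filtered group is empty — can be sketched briefly. A minimal illustration under assumed data shapes, not the patented implementation.

```python
def filter_by_type(operations, content_types):
    """Group of performed operations whose content type matches a selection."""
    return [op for op in operations if op["type"] in content_types]

def undo_last(operations, content_types):
    """Undo the most recent operation of a selected content type.

    Returns the undone operation, or None when the group consists of
    zero operations (the claim then asks the user for another selection).
    """
    group = filter_by_type(operations, content_types)
    if not group:
        return None
    target = group[-1]          # last performed operation of that type
    operations.remove(target)   # the undo itself
    return target
```

Note the claim's four categories (text, audio, video, images) map onto the `type` field here.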

US Pat. No. 10,169,053

LOADING A WEB PAGE

International Business Ma...

1. A method for loading a web page, the method comprising:
searching, by one or more processors, a web application for user interface change portions, wherein execution of the user interface change portions triggers a user interface to change, and wherein the web application is renderable on the user interface as a web page by a browser;
marking, by one or more processors, the user interface change portions to interrupt, upon execution of the web application, the execution of the web application;
interrupting, by one or more processors, execution of the web application upon an initial execution of the web application;
displaying, by one or more processors, the user interface change portions;
displaying, by one or more processors, other portions of the web page at an N unit time delay after a time that the user interface change portions are displayed;
storing, by one or more processors, code for identified user interface change portions from the web page in a ready queue;
storing, by one or more processors, code for the other portions of the web page in a candidate queue, wherein the ready queue and the candidate queue are different queues;
retrieving and executing, by one or more processors, the code for the identified user interface change portions from the ready queue in order to display the identified user interface change portions of the web page;
in response to retrieving and executing the code from the ready queue in order to display the identified user interface change portions of the web page, moving, by one or more processors, the code for the other portions of the web page from the candidate queue to the ready queue; and
retrieving and executing, by one or more processors, the code in the ready queue for the other portions in order to display the other portions of the web page.
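The ready-queue/candidate-queue ordering in this claim — execute the UI-changing code first, then promote and execute everything else — can be sketched with two queues. An illustrative sketch only; the chunk representation and `execute` callback are assumptions.

```python
from collections import deque

def load_page(ui_change_chunks, other_chunks, execute):
    """Two-queue load order: UI change portions run before other portions."""
    ready = deque(ui_change_chunks)   # code for identified UI change portions
    candidate = deque(other_chunks)   # code for the other portions
    while ready:
        execute(ready.popleft())      # display UI change portions first
    while candidate:                  # then move the rest into the ready queue
        ready.append(candidate.popleft())
    while ready:
        execute(ready.popleft())      # display the other portions
```

The N-unit delay in the claim corresponds to the gap between draining the first queue and the second.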

US Pat. No. 10,169,052

AUTHORIZING A BIOS POLICY CHANGE FOR STORAGE

Hewlett-Packard Developme...

1. A method executable by a computing device, the method comprising:
receiving a basic input output system (BIOS) policy change;
authorizing the BIOS policy change; and
upon the authorization of the BIOS policy change, storing a first copy of the BIOS policy change in a first memory accessible by a central processing unit and transmitting a second copy of the BIOS policy change for storage in a second memory electrically isolated from the central processing unit;
wherein the BIOS policies comprises at least one of a boot order of the BIOS, hardware configurations, and a BIOS security mechanism, and
wherein the BIOS policy change is a modification to the at least one of a boot order of the BIOS, hardware configurations, and a BIOS security mechanism.

US Pat. No. 10,169,051

DATA PROCESSING DEVICE, PROCESSOR CORE ARRAY AND METHOD FOR CHARACTERIZING BEHAVIOR OF EQUIPMENT UNDER OBSERVATION

Blue Yonder GmbH, Karlsr...

1. A data processing device for characterizing behavior properties of an equipment under observation, the data processing device comprising:
a plurality of processing units configured to:
pre-process historic data from a plurality of master equipment in order to define a configuration in advance; and
process input values based on numerical transfer functions to generate output values by implementing an input to output mapping based on the configuration defined, wherein the configuration corresponds to behavior properties of one of the plurality of master equipment, wherein some of the output values represent the behavior properties of the equipment under observation, and wherein the plurality of processing units is cascaded into a first processing stage and a second processing stage, wherein
the first processing stage comprises a plurality of first processing units that are adapted to receive a plurality of equipment data values from the equipment under observation as input, and adapted to provide a plurality of intermediate data values as output, according to a plurality of first numerical transfer functions, and
the second processing stage comprises a second processing unit that is adapted to receive the plurality of intermediate data values as input and adapted to provide behavior data as output values according to a second numerical transfer function.

US Pat. No. 10,169,050

SOFTWARE APPLICATION PROVISIONING IN A DISTRIBUTED COMPUTING ENVIRONMENT

International Business Ma...

1. A software provisioning system for a computer system comprising client devices connected via a communication network to a computing infrastructure, the computing infrastructure being configured to provide, upon a user's request, a software application package to an already running machine, wherein the software provisioning system is configured to:
retrieve session information about a user logged in to the computing infrastructure via a client device, thereby creating a session;
determine a list of software application packages that the user is entitled to request to be provided to the running machine so that the user is able to use a software application contained in the software application packages; and
calculate software application usage information from the session information and the list of software application packages.

US Pat. No. 10,169,049

APPLICATION SYSTEM INDEPENDENT DYNAMIC PROCESS ORIENTED HELP

Software AG, Darmstadt (...

1. A method for generating help from an application system for a process, the method comprising:
receiving, by a processor, a help request to trigger help for a process in response to a user selection of help, wherein an identification of an application system for which help is requested and an identifier of a task for which help is requested are passed as a parameter in the help request that triggers help, wherein the application system for which help is requested is modeled in a repository, wherein a plurality of different application systems are modeled in the repository and the different application systems share a part of connection-relations in the repository, the repository provides an application programming interface (API) that navigates and accesses (i) objects in the repository and (ii) process model definitions in the repository, wherein the process for which help is requested is derived from a process model definition stored in the repository, the repository is accessed through an application programming interface (API), the repository stores, for each application system, the process model definition used to implement the each application system, each process model definition includes relations that define navigation between objects that provide functions in the application system;
determining, by the processor, in response to the help request, which configuration of a plurality of configurations stored in a dynamic process help generator (DPHG) storage to use to provide help, based on the identification of the application system which is passed as the parameter in the help request that triggers help;
obtaining, by the processor, from the determined configuration in the DPHG storage, information indicating (i) the relations of the repository, said relations consisting essentially of relations from which the application system was implemented, and (ii) process models and objects in the repository for said relations;
requesting, by the processor, from the repository through the API of the repository, (i) the process models and the objects consisting of those from which the application system was implemented based on the information obtained from the determined configuration, and (ii) information that indicates how to identify a task which is a current task currently being executed in the application system and how to navigate from the current task to process models consisting essentially of those used by the current task in the application system, wherein the application system was implemented from the relations; and causing the processor to perform navigation through the repository from which the plurality of application systems are modeled, wherein the navigation is limited to (a) the relations and (b) the process models and (c) the objects, which are both specific to the application system which was actually implemented and used by the current task; and
providing, by the processor, as a response to the help request, the relations, and the process models and the objects consisting essentially of the relations, the process models, and the objects, from which the application system was implemented and over which the processor navigated as triggered by the help request, which are received by the processor in response to the help request for the application system,
wherein the limited navigation starts at a typed-object which is the task for which help is requested, and steps iteratively through the repository via the connection-relations using as input a typed-object which results from a previous step to get a next typed-object in the repository, and results of each of the steps of the limited navigation of the repository are collected and provided as the response to the help request for the process.

US Pat. No. 10,169,048

PREPARING COMPUTER NODES TO BOOT IN A MULTIDIMENSIONAL TORUS FABRIC NETWORK

International Business Ma...

1. A method for preparing a plurality of computer nodes to boot in a multidimensional fabric network, comprising:
retrieving, by a fabric processor (FP) of a computer node within the multidimensional fabric network, a MAC address from a baseboard management controller (BMC) of the computer node and configuring a DHCP discovery packet using the BMC MAC address and sending that packet into the multi-host switch, wherein the BMC is directly connected to the FP by a management port, and wherein the BMC, the multi-host switch, and the FP are located inside the computer node;
establishing an exit node from the multidimensional fabric network to a service provisioning node (SPN) outside the multidimensional fabric network, wherein the SPN is not part of the multidimensional fabric network;
forwarding, by the exit node to the SPN, DHCP requests for IP addresses from the multi-host switch of the computer node within the multidimensional fabric network, wherein the computer node is identified by the BMC MAC address found in the DHCP discovery packet coming from that node's multi-host switch;
receiving, from the SPN by the exit node, a location-based IP address, and forwarding the received location-based IP address to the computer node, wherein the location-based IP address is a computed IP address that uniquely identifies the physical location of the computer node within the multidimensional fabric network;
calculating, by the FP, a host MAC address, wherein the host MAC address is the FP received location-based IP address plus a value of one, combined with a fixed, three byte value for a high twenty-four bits of a forty-eight bit MAC address, the fixed three byte value being known by all nodes and by the SPN; and
programming, by the FP, the calculated host MAC address onto the multi-host switch, wherein the calculated host MAC address replaces the factory default MAC address in NVRAM.
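The host MAC calculation in this claim — a fixed three-byte value for the high 24 bits, combined with the location-based IP address plus one — can be sketched numerically. This is an illustration, not the patented implementation: the fixed prefix value is hypothetical, and the assumption that only the low 24 bits of the incremented IPv4 address fill the remaining MAC bytes is mine (24 fixed bits plus a full 32-bit address would exceed 48 bits).

```python
import ipaddress

FIXED_PREFIX = bytes.fromhex("02ab00")  # hypothetical fixed high 24 bits

def host_mac(location_ip: str) -> str:
    """48-bit MAC: fixed 3-byte prefix + low 24 bits of (location IP + 1)."""
    incremented = int(ipaddress.IPv4Address(location_ip)) + 1
    low24 = incremented & 0xFFFFFF
    mac = FIXED_PREFIX + low24.to_bytes(3, "big")
    return ":".join(f"{b:02x}" for b in mac)
```

Because every node and the SPN know the fixed prefix, any node can derive a peer's host MAC from its location-based IP without discovery traffic.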

US Pat. No. 10,169,047

COMPUTING DEVICES, METHODS, AND STORAGE MEDIA FOR A SENSOR LAYER AND SENSOR USAGES IN AN OPERATING SYSTEM-ABSENT ENVIRONMENT

Intel Corporation, Santa...

1. A computing device for computing, comprising:
a processor; and
firmware to be operated by the processor while the computing device is operating without an operating system (OS) that includes one or more modules, including an environmental factor boot module, and a sensor layer,
wherein the sensor layer is to:
receive sensor data produced by a plurality of sensors, wherein the plurality of sensors is of the computing device or operatively coupled with the computing device;
aggregate the sensor data from the plurality of sensors; and
selectively provide the sensor data or the aggregated sensor data to the one or more modules via an interface of the sensor layer that abstracts the plurality of sensors; and
wherein the environmental factor boot module is to selectively instantiate one or more drivers for one or more corresponding sensors of the plurality of sensors, based at least in part on a portion of sensor data or aggregated sensor data associated with one or more environmental factors.

US Pat. No. 10,169,046

OUT-OF-ORDER PROCESSOR THAT AVOIDS DEADLOCK IN PROCESSING QUEUES BY DESIGNATING A MOST FAVORED INSTRUCTION

International Business Ma...

1. A processor for executing software instructions, the processor comprising:
a plurality of processing queues that process the software instructions and provide out-of-order processing of the software instructions when specified conditions are satisfied;
an instruction sequencing unit circuit that determines a sequence of the software instructions executed by the processor, wherein the instruction sequencing unit circuit comprises a most favored instruction circuit that selects an instruction as the most favored instruction (MFI) and communicates the MFI to the plurality of processing queues; and
wherein at least one of the plurality of processing queues comprises a plurality of slots that receive any instruction that is not the most favored instruction when written to one of the plurality of slots, and a dedicated slot for processing the MFI, wherein the dedicated slot cannot process any instruction that is not the MFI.

US Pat. No. 10,169,045

METHOD FOR DEPENDENCY BROADCASTING THROUGH A SOURCE ORGANIZED SOURCE VIEW DATA STRUCTURE

Intel Corporation, Santa...

1. A method for dependency broadcasting through a source organized source view data structure, the method comprising:
receiving an incoming instruction sequence using a global front end;
grouping the instructions to form instruction blocks;
populating the register template with block numbers corresponding to the instruction blocks, wherein the block numbers corresponding to the instruction blocks indicate interdependencies among the instruction blocks wherein an incoming instruction block writes its respective block number into fields of the register template corresponding to destination registers referred to by the incoming instruction block;
populating a source organized source view data structure, wherein the source view data structure stores the instruction sources corresponding to the instruction blocks as read from the register template by incoming instruction blocks;
upon dispatch of one block of the instruction blocks, broadcasting a number belonging to the one block to a row of the source view data structure that relates to the one block and marking sources of the row accordingly; and
updating dependency information of remaining instruction blocks in accordance with the broadcast.

US Pat. No. 10,169,044

PROCESSING AN ENCODING FORMAT FIELD TO INTERPRET HEADER INFORMATION REGARDING A GROUP OF INSTRUCTIONS

Microsoft Technology Lice...

1. A method comprising:
fetching a group of instructions, configured to execute atomically by a processor, and a group header for the group of instructions, wherein the group header comprises a plurality of fields including an encoding format field, wherein the encoding format field is configured to provide to the processor information concerning how to interpret a format of at least one of a remaining of the plurality of fields of the group header for the group of instructions, and wherein the plurality of fields of the group header comprises: a first field comprising first information regarding exit types for use by a branch predictor in making branch predictions for the group of instructions and a second field comprising second information about whether during execution of the group of instructions each of the group of instructions requires independent vector lanes, a third field comprising third information about whether during the execution of the group of instructions branch prediction is inhibited, and a fourth field comprising fourth information about whether during the execution of the group of instructions predicting memory dependencies between memory operations is inhibited; and
processing the encoding format field to: (1) interpret the first information in the first field to generate a first signal for a branch predictor associated with the processor, (2) interpret the second information in the second field to generate a second signal for an instruction decoder or an instruction scheduler associated with the processor, (3) interpret the third information in the third field to generate a third signal for the branch predictor associated with the processor, and (4) interpret the fourth information in the fourth field to generate a fourth signal to inhibit dependencies between memory operations, including load/store operations.

US Pat. No. 10,169,043

EFFICIENT EMULATION OF GUEST ARCHITECTURE INSTRUCTIONS

Microsoft Technology Lice...

1. In a computing environment a method of converting data for 80 bit registers of a guest architecture to data for 64-bit registers on a host system, the method comprising:
determining that an operation should be performed to restore a first set of 80 bits stored in memory for a first 80 bit register of a guest architecture on a host having 64-bit registers;
storing a first set of 64 bits from the first set of 80 bits stored in memory, wherein the first set of 64 bits could be used for 64-bit SIMD operations in the guest architecture, in a first host register;
storing a first set of remaining 16 bits from the first set of 80 bits stored in memory in a supplemental memory storage;
documenting that the remaining 16 bits stored in the supplemental memory are padding bits;
identifying a SIMD operation that should be performed to operate on the first 80-bit register for the guest architecture; and
as a result of identifying a SIMD operation that should be performed to operate on the first 80-bit register for the guest architecture, determining to not convert the first set of 64 bits in the first host register to a floating point number.
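The bit-splitting step of this claim — a 64-bit portion stored in a host register and the remaining 16 bits kept in supplemental storage — can be sketched with plain integer arithmetic. An illustration of the data layout only, not Microsoft's emulator.

```python
def split_80bit(value_80: int):
    """Split an 80-bit guest register image into a 64-bit host register
    value and the remaining 16 bits for supplemental memory storage."""
    assert 0 <= value_80 < 1 << 80
    low64 = value_80 & ((1 << 64) - 1)   # usable directly for 64-bit SIMD
    high16 = value_80 >> 64              # documented as padding bits
    return low64, high16

def join_80bit(low64: int, high16: int) -> int:
    """Reassemble the 80-bit value from its two stored halves."""
    return (high16 << 64) | low64
```

Leaving the low 64 bits unconverted is what lets a subsequent guest SIMD operation use the host register directly, as the claim's final step describes.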

US Pat. No. 10,169,042

MEMORY DEVICE THAT PERFORMS INTERNAL COPY OPERATION

Samsung Electronics Co., ...

1. A memory device comprising:
a memory cell array including at least one bank that includes at least one block, each of the at least one block having a plurality of memory cell rows having memory cells therein; and
processing circuitry configured to,
receive, from an external source, an internal copy command along with a source address and a destination address associated therewith, the source address indicating a source bank of the at least one bank and a source block of the at least one block within the source bank, and the destination address indicating a destination bank of the at least one bank and a destination block of the at least one block within the destination bank,
compare one or more of (i) the source bank and the destination bank and (ii) the source block and the destination block,
generate one or more of a bank comparison signal and a block comparison signal based on a result of comparing the one or more of (i) the source bank with the destination bank and (ii) the source block with the destination block, the bank comparison signal indicating whether the source bank and the destination bank are a same bank or different banks, and the block comparison signal indicating whether the source block and the destination block are a same block or different blocks,
select a selected internal copy operation from among an internal block copy operation, an inter-bank copy operation or an internal bank copy operation based on the one or more of the bank comparison signal and the block comparison signal,
perform the selected internal copy operation on the memory cell array from the memory cells associated with the source address to the memory cells associated with the destination address, and
output a copy-done signal indicating that the selected internal copy operation is complete, if the selected internal copy operation is complete.
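The selection logic of this claim — two comparison signals (same bank? same block?) choosing among three internal copy operations — can be sketched as a small decision function. The mapping of signal combinations to the three named operations is my reading of the claim, not a confirmed implementation detail.

```python
def select_copy_op(src_bank, src_block, dst_bank, dst_block):
    """Pick the internal copy operation from the two comparison signals."""
    same_bank = src_bank == dst_bank     # bank comparison signal
    same_block = src_block == dst_block  # block comparison signal
    if not same_bank:
        return "inter-bank copy"         # different banks
    if same_block:
        return "internal block copy"     # same bank, same block
    return "internal bank copy"          # same bank, different blocks
```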

US Pat. No. 10,169,041

EFFICIENT POINTER LOAD AND FORMAT

International Business Ma...

1. A method comprising:
receiving a microprocessor instruction for processing by a microprocessor;
maintaining, by the microprocessor, the microprocessor instruction as a single instruction in an instruction queue until the microprocessor instruction is removed from the instruction queue for processing by the microprocessor;
processing the microprocessor instruction in a multi-cycle operation, wherein processing the microprocessor instruction comprises:
retrieving, by a load-store unit of the microprocessor and from cache memory of the microprocessor, a unit of data having a plurality of ordered bits during a first clock cycle;
zeroing, after the retrieving and during the first clock cycle, any of the bits that are not required for use with a predefined addressing mode;
shifting, by the load store unit, the unit of data by a number of bits during a second clock cycle immediately following the first clock cycle;
placing, after the shifting and during the second clock cycle, the unit of data into a register of the microprocessor; and
providing, after the shifting and during the second clock cycle, the unit of data to comparison logic of the microprocessor, wherein the microprocessor instruction is maintained as a single instruction in a completion table during the processing of the microprocessor instruction.

US Pat. No. 10,169,040

SYSTEM AND METHOD FOR SAMPLE RATE CONVERSION

Ceva D.S.P. Ltd., Herzli...

1. A method for performing sample rate conversion by an execution unit, the method comprising:
receiving an instruction, wherein the instruction comprises an irregular shifting pattern of data elements stored in a vector register; and
shifting the data elements in the vector register according to the irregular shifting pattern,
wherein the sample rate conversion comprises downsampling, and wherein the irregular shifting pattern is provided by an indication stating whether a memory element in the input vector register loads a data element from an immediate next memory element, or whether the memory element loads a data element previously stored in a shadow vector register and the data element stored in the immediate next memory element is loaded into the shadow vector register.

US Pat. No. 10,169,039

COMPUTER PROCESSOR THAT IMPLEMENTS PRE-TRANSLATION OF VIRTUAL ADDRESSES

OPTIMUM SEMICONDUCTOR TEC...

1. A processor, comprising:
a register file comprising one or more registers; and
processing logic circuit, communicatively coupled to the register file, to:
identify a value stored in a first register of the register file as a virtual address, the virtual address comprising a corresponding virtual base page number;
translate the virtual base page number to a corresponding real base page number and zero or more real page numbers, wherein zero or more real page numbers correspond to zero or more virtual page numbers associated with the virtual base page number;
store, in the one or more registers, the real base page number and the zero or more real page numbers;
responsive to identifying at least one input value stored in at least one register of the register file specified by an instruction, combine the at least input value to produce a result value;
compute, based on real translation information stored in the one or more registers, a real translation to a real address of the result value; and
access, based on the computed real translation, a memory.

US Pat. No. 10,169,037

IDENTIFYING EQUIVALENT JAVASCRIPT EVENTS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented process for identifying equivalent JavaScript events, comprising:
receiving source code containing two JavaScript events;
extracting, to form extracted HTML elements, an HTML element containing an event from each of the two JavaScript events; and
identifying that the two JavaScript events are equivalent based upon:
a determination that the extracted HTML elements are of a same type according to equivalency criteria B,
a determination that the extracted HTML elements have a same number of attributes according to equivalency criteria C,
a determination that JavaScript function calls of each of the two JavaScript events are similar according to equivalency criteria A, and
a determination that other attributes of the extracted HTML elements satisfy equivalency criteria D.
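The four-criteria check can be sketched over a simplified element representation. Every detail here is a stand-in: the claim does not define criteria A–D concretely, so same-handler-string for "similar function calls" (A) and a class-attribute match for "other attributes" (D) are assumptions made only to illustrate the conjunction.

```python
def events_equivalent(el_a, el_b):
    """Apply the four equivalency criteria to two extracted HTML elements,
    each a dict with 'tag', 'attrs', and 'handler' keys (hypothetical shape)."""
    same_type = el_a["tag"] == el_b["tag"]                      # criteria B
    same_attr_count = len(el_a["attrs"]) == len(el_b["attrs"])  # criteria C
    similar_calls = el_a["handler"] == el_b["handler"]          # criteria A (strict stand-in)
    other_attrs_ok = (el_a["attrs"].get("class")
                      == el_b["attrs"].get("class"))            # criteria D (stand-in)
    return same_type and same_attr_count and similar_calls and other_attrs_ok
```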

US Pat. No. 10,169,036

SYNCHRONIZING COMMENTS IN SOURCE CODE WITH TEXT DOCUMENTS

International Business Ma...

1. A method, with a processor of an information processing system, for synchronizing comments in a source code file with text of a source code document, the method comprising:
analyzing a source code file;
identifying, based on the analyzing, a set of source code comment text within the source code file;
extracting, based on the identifying, a set of text from the set of source code comment text that has been identified;
generating, based on the identifying, a set of metadata for at least the set of text, the set of metadata comprises at least a unique representation of the set of text, and wherein the set of metadata at least identifies one or more line numbers in the source code file associated with the set of text;
applying a plurality of markup tags to the set of text, the plurality of markup tags at least one of formatting and stylizing the set of text when presented to the user; and
generating a source code document comprising one or more of the set of text, the set of metadata, and the plurality of markup tags.

US Pat. No. 10,169,035

CUSTOMIZED STATIC SOURCE CODE ANALYSIS

INTERNATIONAL BUSINESS MA...

1. A system comprising:
a memory; and
a processor coupled with the memory, the processor configured to perform a customized static source code analysis of a source code, the customized static source code analysis comprising:
parsing a source code, the parsing comprising identifying a first application programming interface (API) call, and a second API call;
identifying a first analysis configuration file corresponding to the first API call, and a second analysis configuration file corresponding to the second API call;
determining, based on the first analysis configuration file, a description of the first API call and an identification of a first target resource invoked by the first API call;
determining, based on the second analysis configuration file, a second description of the second API call and an identification of a second target resource invoked by the second API call; and
generating a static source code analysis report that includes the description of the first API call and the identification of the first target resource corresponding to the first API call, and the description of the second API call and the identification of the second target resource corresponding to the second API call.

US Pat. No. 10,169,034

VERIFICATION OF BACKWARD COMPATIBILITY OF SOFTWARE COMPONENTS

International Business Ma...

1. A method of determining backward compatibility of a software component, the method comprising:
generating, by a processor, respective tree structures for both a first version of the software component and a second version of the software component, wherein the respective tree structures include a name for each respective attribute, a type for each respective attribute, a name for each respective operation, and a type for each respective operation included in the respective tree structure;
identifying, by the processor, one or more programming interfaces that are exposed by the first version of the software component by: converting attributes of exposed programming interfaces into corresponding operations of a first tree structure that includes a name and a type for each parameter, return, and fault associated with each respective operation; and
determining, by the processor, a backward compatibility of the first version of the software component by comparing the operations of the first version of the software component to one or more operations of the second version of the software component based on the respective tree structures.
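The comparison step of this claim reduces to: every operation exposed by the first (old) version must appear, with an unchanged signature, in the second (new) version. A minimal sketch under an assumed flattened representation (operation name mapped to a signature tuple), not the patented tree comparison.

```python
def backward_compatible(old_ops, new_ops):
    """New version is backward compatible if it preserves every old
    operation name with an identical signature; new additions are fine."""
    return all(name in new_ops and new_ops[name] == sig
               for name, sig in old_ops.items())
```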

US Pat. No. 10,169,033

ASSIGNING A COMPUTER TO A GROUP OF COMPUTERS IN A GROUP INFRASTRUCTURE

International Business Ma...

1. A computer-implemented method for assigning a given computer to a computer group of a set of computer groups, the method comprising:
scanning, by a computer, software components installed on the given computer, resulting in a list of discovered software components of the given computer;
for at least one computer group of the set of computer groups:
obtaining, by the computer, a first list of software components most frequently installed on computers of the at least one computer group, wherein the first list is unique for all computer groups of the set of computer groups; and
comparing, by the computer, the first list with the list of discovered software components and, based on the comparison, computing a first likelihood that the given computer belongs to the at least one computer group; and
in case only one of the first likelihoods exceeds a first threshold, assigning, by the computer, the given computer to the at least one computer group for which the first likelihood exceeds the first threshold.
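A sketch of the claimed flow: compare the computer's discovered software against each group's most-frequently-installed list, compute a likelihood per group, and assign only when exactly one likelihood clears the threshold. The overlap-ratio scoring used here is an assumed stand-in for the patent's likelihood computation.

```python
# Assumed likelihood: fraction of the group's frequent-software list
# that is also installed on the given computer.
def likelihood(group_list, discovered):
    if not group_list:
        return 0.0
    return len(set(group_list) & set(discovered)) / len(group_list)

def assign_group(groups: dict, discovered, threshold=0.5):
    """Assign only if exactly one group's likelihood exceeds the threshold."""
    above = [g for g, lst in groups.items()
             if likelihood(lst, discovered) > threshold]
    return above[0] if len(above) == 1 else None  # ambiguous or no match
```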

US Pat. No. 10,169,032

PARALLEL DEVELOPMENT OF DIVERGED SOURCE STREAMS

International Business Ma...

1. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions, when executed by a computer, cause the computer to:
manage, using a parallel development tool implemented in programmable logic circuitry, diverged source streams, wherein the parallel development tool is to:
track, on a position-by-position basis in a diverged code history associated with a diverged source stream, an origin source stream and an original position of code contained within the diverged source stream, wherein the position-by-position basis is one or more of an argument order or an operand order, and wherein the diverged code history is to be a separate file;
detect a modification to a first portion of the code contained within the diverged source stream at a first position;
automatically document the modification and the first position in the diverged code history, wherein the modification triggers a modification indicator to be prepended to an origin stream indicator and an original position indicator in the diverged code history, and wherein the modification indicator indicates that the modification occurred in a transition to a version of the diverged source stream;
detect a move of a second portion of the code in the diverged source stream from a second position to a third position;
automatically document the move and the second position in the diverged code history, wherein the move triggers a move indicator to be prepended to the origin stream indicator and the original position indicator in the diverged code history, and wherein the move indicator indicates the diverged source stream and a version level at which the move occurred;
detect an addition of a third portion of the code in the diverged source stream at a fourth position;
automatically document the addition with a timestamp in the diverged code history;
present, through a user interface device, an option to ignore the detected modification;
receive a request to merge shifted code contained within the diverged source stream with the origin source stream;
search, using the modification indicator, the original position indicator, the origin stream indicator, the move indicator, and the timestamp, an origin code history for content corresponding to the shifted code in the diverged code history to resolve the request;
track, on a position-by-position basis in the origin code history associated with the origin source stream, the origin source stream, an original position, the diverged source stream and a diverged position of code contained within the origin source stream, using the modification indicator, the original position indicator, the origin stream indicator, the move indicator, and the timestamp;
receive a request to merge out-of-order code contained within the diverged source stream with the origin source stream, wherein the out-of-order code is to be modified prior to other code that has already been merged; and
present, through the user interface device, an alignment option to merge conflicted code, wherein the conflicted code is to be bounded by modified code, wherein the alignment option identifies the modified code at one or more locations to use to correctly merge the conflicted code, and wherein at least one of the one or more locations of the conflicted code includes one or more of the modification, the move, the shifted code, the addition or the out-of-order code.

US Pat. No. 10,169,031

PROGRAM CODE LIBRARY SEARCHING AND SELECTION IN A NETWORKED COMPUTING ENVIRONMENT

International Business Ma...

1. A computer-implemented method for searching for program code libraries in multiple programming languages in a networked computing environment, comprising:
receiving, in a computer memory medium, a request to search at least one program code library repository associated with an integrated development environment (IDE) for a program code library;
searching, based on an annotation to the request, the at least one program code library repository for the program code library corresponding to a programming language of a project;
expanding the searching, using emulation based on another annotation to the request, to include a substitute programming language, wherein the emulation creates a wrapper method in the substitute programming language and emulates a library programming language during execution; and
providing, based on the expanded searching, a result to a device hosting the IDE.

US Pat. No. 10,169,030

REFRESHING A SOFTWARE COMPONENT WITHOUT INTERRUPTION

International Business Ma...

1. A computer-implemented method for refreshing a software component without interruption, comprising:
detecting when a current instance of the software component is inactive;
activating a refresh process of the software component in parallel to the current instance, including starting a new instance of the software component;
monitoring a state of the current instance and, when the current instance ceases to be inactive, canceling the refresh process;
determining that the refresh process is complete; and
switching from the current instance to the new instance of the software component.
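The refresh protocol above can be sketched as a small state machine: a refresh starts only while the current instance is inactive, is canceled if the instance becomes active again, and ends with a switch to the new instance. The class and method names are illustrative assumptions, not the patented implementation.

```python
class Component:
    """Minimal sketch of refresh-without-interruption (names assumed)."""
    def __init__(self):
        self.current = "v1"
        self.active = False
        self.refreshing = False
        self.new_instance = None

    def try_refresh(self, new_version):
        if self.active:
            return False            # refresh only while inactive
        self.refreshing = True
        self.new_instance = new_version
        return True

    def on_activity(self):
        self.active = True
        if self.refreshing:         # instance ceased to be inactive: cancel
            self.refreshing = False
            self.new_instance = None

    def complete_refresh(self):
        if self.refreshing and self.new_instance:
            self.current = self.new_instance  # switch instances
            self.refreshing = False
            self.new_instance = None
```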

US Pat. No. 10,169,029

PATTERN BASED MIGRATION OF INTEGRATION APPLICATIONS

International Business Ma...

1. A method for migrating applications, the method comprising:
obtaining configuration information from a source integration application;
determining a set of features for the source integration application based on the configuration information;
identifying a set of integration patterns, wherein each integration pattern in the set of integration patterns defines a set of expected characteristics;
determining a respective fitness score for each of the set of integration patterns by, for each respective integration pattern of the set of integration patterns:
determining a respective feature score for at least two respective features of the set of features, wherein each of the respective feature scores represent a likelihood that the respective feature matches the respective integration pattern; and
aggregating the respective feature scores to generate the respective fitness score for the respective integration pattern;
selecting one or more integration patterns from the set of integration patterns based on the respective fitness score associated with each of the respective integration patterns; and
migrating the source integration application based on the selected one or more integration patterns.
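The fitness-scoring step lends itself to a short sketch: score each feature against each integration pattern, aggregate the feature scores into a fitness score per pattern, and select the best-scoring pattern. The simple membership scorer here is an assumed stand-in for the patent's likelihood computation.

```python
# Assumed per-feature scorer: 1.0 if the pattern's expected
# characteristics include the feature, else 0.0.
def feature_score(feature, pattern_characteristics):
    return 1.0 if feature in pattern_characteristics else 0.0

def fitness(features, pattern_characteristics):
    """Aggregate (sum) the feature scores into one fitness score."""
    return sum(feature_score(f, pattern_characteristics) for f in features)

def select_pattern(features, patterns: dict):
    """Pick the pattern with the highest fitness score."""
    return max(patterns, key=lambda name: fitness(features, patterns[name]))
```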

US Pat. No. 10,169,028

SYSTEMS AND METHODS FOR ON DEMAND APPLICATIONS AND WORKFLOW MANAGEMENT IN DISTRIBUTED NETWORK FUNCTIONS VIRTUALIZATION

Ciena Corporation, Hanov...

1. A workloads management method for on-demand applications in distributed Network Functions Virtualization Infrastructure (dNFVI), the workloads management method comprising:
receiving usage data from a unikernel implementing one or more functions of a plurality of functions related to a Virtual Network Function (VNF);
determining an update to the one or more functions in the unikernel based on the usage data;
updating the unikernel by requesting generation of application code for the unikernel based on the update; and
starting the updated unikernel and redirecting service requests thereto,
wherein the unikernel and the updated unikernel are each a specialized, single address space machine image constructed using library operating systems which is executed directly on a hypervisor.

US Pat. No. 10,169,027

UPGRADE OF AN OPERATING SYSTEM OF A VIRTUAL MACHINE

International Business Ma...

1. A method, comprising:
receiving, by one or more processors of a computer system, a virtual machine (VM) deletion request, wherein if the VM deletion request includes a first flag then the VM deletion request is a request to upgrade a base operating system (OS) of the VM, and wherein if the VM deletion request does not include the first flag then the VM deletion request is a request to delete the VM;
said one or more processors determining whether the received VM deletion request includes the first flag;
in response to the one or more processors determining that the received VM deletion request includes the first flag, said one or more processors storing metadata of the VM into a resource registry;
after said storing the metadata of the VM into the resource registry, said one or more processors receiving a VM creation request, wherein if the VM creation request includes a second flag then the VM creation request is a request to upgrade the base OS of the VM, and wherein if the VM creation request does not include the second flag then the VM creation request is a request to create a new VM;
said one or more processors determining whether the received VM creation request includes the second flag;
in response to the one or more processors determining that the received VM creation request includes the second flag, said one or more processors retrieving the metadata from the resource registry;
after said retrieving the metadata from the resource registry, said one or more processors loading a new version of the base OS onto the VM and using the retrieved metadata to configure the VM with the new version of the base OS; and
said one or more processors deploying the VM with the new version of the base OS.
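The flag-driven upgrade flow above can be sketched as follows: a deletion request carrying the upgrade flag stashes the VM's metadata in a registry, and a creation request carrying the flag retrieves that metadata to configure the VM with the new base OS. All names and the registry shape are illustrative assumptions, not the patented API.

```python
registry = {}  # assumed stand-in for the claim's resource registry

def handle_delete(vm_id, metadata, upgrade_flag=False):
    if upgrade_flag:
        registry[vm_id] = metadata   # keep metadata for the coming upgrade
    return "deleted"

def handle_create(vm_id, new_os, upgrade_flag=False):
    if upgrade_flag and vm_id in registry:
        meta = registry.pop(vm_id)   # restore config onto the new base OS
        return {"os": new_os, **meta}
    return {"os": new_os}            # plain new-VM path (no flag)
```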

US Pat. No. 10,169,026

TRANSFERRING OPERATING ENVIRONMENT OF REGISTERED NETWORK TO UNREGISTERED NETWORK

KT CORPORATION, Gyeonggi...

1. A method of providing an operation environment of a registered network having first devices to a user in an unregistered network having second devices, the method comprising:
detecting the second devices in the unregistered network when user equipment associated with the user enters a service area of the unregistered network;
as compatible devices, selecting devices compatible with the first devices in the registered network from the detected second devices;
obtaining system images of the first devices compatible with the selected compatible devices;
installing the obtained system images of the first devices at the selected compatible devices, respectively;
generating a user interface for enabling the user to control at least one of the compatible devices; and
transmitting the generated user interface to the user equipment,
wherein the user equipment provides the user interface, receives a user input to control the at least one of the compatible devices through the user interface, and controls the at least one of the compatible devices installed with the system image and in the unregistered network based on the received user input by generating a control signal based on the user input and transmitting the generated control signal to the at least one of the compatible devices.

US Pat. No. 10,169,025

DYNAMIC MANAGEMENT OF SOFTWARE LOAD AT CUSTOMER PREMISE EQUIPMENT DEVICE

ARRIS Enterprises LLC, S...

1. A method comprising:
detecting a request to load a requested executable software component to volatile memory;
determining that the size of the requested executable software component is greater than the size of available space in the volatile memory;
determining a probability to unload value for each executable software component of one or more executable software components currently loaded in the volatile memory, wherein the probability to unload value for each respective one executable software component is calculated based upon one or more criteria associated with an execution of the respective one executable software component;
based upon the probability to unload values determined for each of the one or more executable software components that are currently loaded in the volatile memory, identifying one or more of the executable software components for removal from the volatile memory;
removing the identified one or more executable software components from the volatile memory; and
loading the requested executable software component to the volatile memory.
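The eviction flow in this claim is straightforward to sketch: when the requested component does not fit in volatile memory, rank the loaded components by a probability-to-unload value and remove the highest-ranked ones until the request fits. The scoring criterion used here (idle time) is one assumed example of the claim's execution-based criteria.

```python
# Assumed criterion: components idle longer are more likely to be unloaded.
def probability_to_unload(component):
    return component["idle_seconds"]

def make_room(loaded, capacity, used, needed):
    """Evict highest probability-to-unload components until `needed` fits."""
    evicted = []
    for comp in sorted(loaded, key=probability_to_unload, reverse=True):
        if used + needed <= capacity:
            break                    # enough space freed
        used -= comp["size"]
        evicted.append(comp["name"])
    return evicted, used
```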

US Pat. No. 10,169,024

SYSTEMS AND METHODS FOR SHORT RANGE WIRELESS DATA TRANSFER

Arm Limited, Cambridge (...

1. A method for device control of a primary device using an accessory device as a proxy, the method comprising:
executing a device application in an operating system of the primary device;
establishing a short range wireless link between the device application and an accessory application on the accessory device in accordance with a protocol implemented by a device low energy stack on the primary device and an accessory low energy stack on the accessory device, where a procedure of the protocol implemented in the device low energy stack is inaccessible by the device application via the operating system and where the procedure accesses connection parameters of the link;
sending a device control message from the device application to the accessory application requesting performance of the procedure;
responsive to the device control message, the accessory application connecting with the accessory low energy stack and requesting the procedure;
the accessory low energy stack performing the procedure to access the connection parameters of the short range wireless link;
sending, by the accessory application, a response to the device control message; and
receiving the response, at the device application, for the device control message.

US Pat. No. 10,169,023

VIRTUAL CONTAINER DEPLOYMENT

International Business Ma...

1. A computer-implemented method of virtual container deployment, the computer-implemented method comprising:
retrieving runtime information of a plurality of virtual environments and containers installed in a computing system, each virtual environment selected from a virtual machine and a virtual appliance, the runtime information including information of a plurality of read-only layers in the plurality of virtual environments and containers, wherein each read-only layer of the plurality of read-only layers has a respective weight value assigned thereto;
retrieving at least one deployment policy specifying to select at least one read-only layer having the highest or lowest accumulative weight value among the plurality of read-only layers for installation of a first container in the computing system;
determining, by operation of one or more computer processors and based on the runtime information and the at least one deployment policy, a first virtual environment of the plurality of virtual environments, to host the first container and that includes one or more read-only layers selected based on the at least one deployment policy; and
installing the first container in the first virtual environment, including adding a writable layer on top of the one or more read-only layers selected based on the at least one deployment policy.
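A sketch of the layer-selection policy described above: pick the virtual environment whose read-only layers have the highest (or lowest) accumulated weight, then add a writable layer on top for the new container. The data shapes and function names are illustrative assumptions.

```python
def accumulated_weight(env):
    """Sum of the weight values of an environment's read-only layers."""
    return sum(layer["weight"] for layer in env["layers"])

def deploy_container(envs, policy="highest"):
    """Select the host environment per policy and add a writable layer."""
    pick = max if policy == "highest" else min
    env = pick(envs, key=accumulated_weight)
    env["layers"].append({"name": "writable", "weight": 0, "writable": True})
    return env["name"]
```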

US Pat. No. 10,169,022

INFORMATION PROCESSING APPARATUS AND RESOURCE MANAGEMENT METHOD

Canon Kabushiki Kaisha, ...

1. An information processing apparatus that consumes resources of an amount that depends on how many libraries are open, comprising:
a determining unit, configured to determine whether the number of libraries including classes set for an installed program is two or more; and
an integrating unit, configured to integrate, in a case where it is determined that the number of libraries is two or more, the classes included in the libraries into a smaller number of libraries, wherein
the determining unit specifies, with reference to a class path indicating class locations that is described in a configuration file of the installed program, a library including a class that is set for the program,
the program includes a system program and an application, with a boot class path and a system class path being described in the configuration file with regard to the system program and an application class path being described in the configuration file with regard to the application, and
the integrating unit integrates the classes included in the library based on the class paths.

US Pat. No. 10,169,021

SYSTEM AND METHOD FOR DEPLOYING A DATA-PATH-RELATED PLUG-IN FOR A LOGICAL STORAGE ENTITY OF A STORAGE SYSTEM

1. A method for deploying a data-path-related plug-in for a logical storage entity of a storage system, the method comprising:
deploying the data-path-related plug-in for the logical storage entity, wherein the deploying includes creating a plug-in inclusive data-path specification and wherein the plug-in inclusive data-path specification includes operation of the data-path-related plug-in;
creating a verification data-path specification, wherein the verification data-path specification does not include operation of the data-path-related plug-in;
executing a task related to the data-path-related plug-in on a data-path defined by the plug-in inclusive data-path specification to yield a first execution result;
executing the task on a data-path defined by the verification data-path specification to yield a second execution result;
verifying the first execution result using the second execution result thereby validating the task execution;
if any discrepancy exists between the first execution result and the second execution result, performing one or more failure actions; and
removing the verification data-path and performing one or more validation actions when a validation of the data-path-related plug-in is complete, wherein the one or more validation actions include one or more of the following actions:
(a) increasing a grade associated with the data-path-related plug-in; and
(b) issuing a notification indicating that the validation is complete to a user of the logical storage entity.
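The dual-path verification scheme above can be sketched compactly: run the same task on a data path that includes the plug-in and on a verification path that does not, compare the results, and either perform a failure action on discrepancy or a validation action (such as raising the plug-in's grade) on agreement. Function names and the grade mechanism's shape are illustrative assumptions.

```python
def verify_plugin(task, plugin_path, verification_path, grade=0):
    """Run `task` on both paths; compare results to validate the plug-in."""
    first = plugin_path(task)        # plug-in inclusive data-path
    second = verification_path(task) # verification data-path (no plug-in)
    if first != second:
        return grade, "failure-action"   # discrepancy detected
    return grade + 1, "validated"        # e.g. increase the plug-in's grade
```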

US Pat. No. 10,169,020

SOFTWARE GLOBALIZATION OF DISTRIBUTED PACKAGES

International Business Ma...

1. A method for globalizing distributed software packages using a global application programming interface (API), the method comprising:
extracting, by a computer, a text string in a source language from an independent package program code;
calculating, by the computer using an algorithm, a resource message key for the extracted text string from content of the extracted text string;
storing, by the computer, the resource message key and the extracted text string in a source language resource file;
translating, by the computer, the extracted text string into an additional language to create a translated text string;
storing, by the computer, the translated text string with the resource message key in an additional language resource file; and
distributing, by the computer, an independent package with the source language resource file, the additional language resource file, and the independent package program code bundled in the independent package.
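The key step in this claim is deriving the resource message key from the extracted string's own content, so every package computes the same key for the same string. A content hash is one possible such algorithm; the sketch below uses it as an assumed example, with hypothetical file structures as plain dictionaries.

```python
import hashlib

def resource_key(text: str) -> str:
    """Derive a stable message key from the string's content (assumed
    algorithm: truncated SHA-1 of the UTF-8 bytes)."""
    return "msg_" + hashlib.sha1(text.encode("utf-8")).hexdigest()[:8]

source_file, german_file = {}, {}   # stand-ins for the resource files

def register(text, translation):
    key = resource_key(text)
    source_file[key] = text          # source-language resource file
    german_file[key] = translation   # additional-language resource file
    return key
```

Because the key is a pure function of the string's content, independently built packages agree on keys without a shared registry.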

US Pat. No. 10,169,019

CALCULATING A DEPLOYMENT RISK FOR A SOFTWARE DEFINED STORAGE SOLUTION

International Business Ma...

1. An apparatus comprising:
a processor;
a memory storing code executable by the processor to perform:
querying a deployed data storage solution for performance data, wherein the data storage solution provides data storage using hardware elements, software elements, an operating system, and drivers for the software elements;
receiving the performance data from the deployed data storage solution;
storing the performance data;
receiving failure data;
calculating discrepancy data for the deployed data storage solution from the failure data;
storing the discrepancy data;
generating a data storage solution that provides configurable data storage for data storage deployment, wherein the data storage solution is organized as a data structure comprising a plurality of data storage components, and each data storage component comprises a hardware identifier for the hardware elements, software prerequisites for the software elements, an operating system identifier, and a driver identifier for the software elements;
calculating a deployment risk for the data storage solution using a trade-off analytics function performed by a neural network and based on the discrepancy data, the performance data, a product match, an operating system match, and a software prerequisites match between the data storage solution and data storage parameters, wherein the neural network is trained on data storage solution field data comprising discrepancy data, performance data, and failure data for deployed data storage solutions; and
in response to the deployment risk not exceeding a risk threshold, deploying the data storage solution by providing the hardware and software elements.

US Pat. No. 10,169,018

DOWNLOADING A PACKAGE OF CODE

International Business Ma...

1. A computer-implemented method comprising:
receiving, at a server, a request from a client for download of a package of code, the request specifying the package of code to be downloaded, wherein upon receipt of the package of code, the client can execute source code comprising the package of code locally;
acquiring information from the request received relating to a user of the client, wherein the information comprises a role of the user of the client, wherein the acquiring information relating to the user of the client comprises identifying the user of the client in an entry in a user registry and ascertaining user access rights from the entry in the user registry to determine the role of the user of the client;
automatically modifying the package of code according to the acquired information to provide a modified package of code specific to the user of the client, wherein functionality of the modified package of code, when executed on the client, is based on the role of the user of the client, wherein the automatically modifying the package of code according to the acquired information to produce the modified package of code comprises automatically removing one or more methods from the package of code, wherein each method of the one or more methods comprises source code, and wherein the one or more methods comprise a known access level, and wherein the automatically removing the one or more methods from the package of code is based on the role of the user of the client not permitting use of methods of the known access level of the one or more methods;
compiling, at the server, the modified package of code; and
transmitting the modified package of code to the client, wherein the client can immediately execute source code comprising the modified package of code locally.
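The role-based modification step of this claim can be sketched as a filter applied before compiling and transmitting the package: any method whose access level the requesting user's role does not permit is removed. The roles, access levels, and tuple layout below are illustrative assumptions.

```python
# Assumed role-to-permitted-access-level mapping.
ROLE_LEVELS = {"admin": {"public", "internal"}, "guest": {"public"}}

def tailor_package(methods, role):
    """Remove methods whose access level the role does not permit.

    methods: list of (name, access_level, source) tuples.
    """
    allowed = ROLE_LEVELS.get(role, set())
    return [m for m in methods if m[1] in allowed]
```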