US Pat. No. 10,339,065

OPTIMIZING MEMORY MAPPING(S) ASSOCIATED WITH NETWORK NODES

Ampere Computing LLC, Sa...

1. A system for optimizing memory mappings associated with a plurality of network nodes in a multi-node system, comprising:a first network node of the plurality of nodes configured for generating a memory page request in response to an invalid memory access associated with a virtual central processing unit of the first network node and, in response to a determination that a second network node of the plurality of nodes comprises a memory space associated with the memory page request, transmitting the memory page request to the second network node via a communication channel; and
the second network node configured for receiving the memory page request, retrieving a memory page associated with the memory page request, and transmitting the memory page to the first network node via the communication channel, the first network node being further configured for mapping a memory page associated with the memory page request based on a set of memory page mappings stored by the first network node.
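The claimed flow — a local page fault resolved by asking the node that owns the address range for the page — can be illustrated with a minimal Python sketch. All names here (`Node`, `handle_invalid_access`, the direct method call standing in for the communication channel) are hypothetical and are not the patented implementation.

```python
class Node:
    """Hypothetical network node backing a slice of the global address space."""

    def __init__(self, node_id, owned_ranges):
        self.node_id = node_id
        self.owned_ranges = owned_ranges   # list of (start, end) page ranges this node backs
        self.local_pages = {}              # page_number -> bytes stored on this node
        self.page_mappings = {}            # page_number -> mapped page (the claimed set of memory page mappings)

    def owns(self, page_number):
        return any(start <= page_number < end for start, end in self.owned_ranges)

    def handle_invalid_access(self, page_number, peers):
        """First-node behaviour: build a page request and send it to the owning node."""
        if page_number in self.page_mappings:
            return self.page_mappings[page_number]
        for peer in peers:                 # determine which node comprises the memory space
            if peer.owns(page_number):
                page = peer.serve_page_request(page_number)   # "communication channel" is a direct call here
                self.page_mappings[page_number] = page        # map the received page locally
                return page
        raise MemoryError(f"no node owns page {page_number}")

    def serve_page_request(self, page_number):
        """Second-node behaviour: retrieve the page and transmit it back."""
        return self.local_pages.setdefault(page_number, bytes(4096))


# Usage: node 0 faults on a page owned by node 1 and receives it over the "channel".
n0 = Node(0, [(0, 100)])
n1 = Node(1, [(100, 200)])
page = n0.handle_invalid_access(150, peers=[n1])
print(len(page), 150 in n0.page_mappings)   # 4096 True
```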

US Pat. No. 10,339,064

HOT CACHE LINE ARBITRATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for hot cache line arbitration, the method comprising:detecting, by a processing device, a hot cache line scenario;
tracking, by the processing device, hot cache line requests from requestors to determine subsequent satisfaction of the requests; and
facilitating, by the processing device, servicing of the hot cache line requests according to a hierarchy of the requestors, the hierarchy of the requestors being based at least in part on a location of the requestors relative to one another.

US Pat. No. 10,339,063

SCHEDULING INDEPENDENT AND DEPENDENT OPERATIONS FOR PROCESSING

Advanced Micro Devices, I...

1. A method, comprising:adding a first operation to a tracking array of a processor in response to the first operation being received for scheduling for execution at the processor and assigning the first operation a first age value;
adjusting the first age value in response to scheduling a second operation from the tracking array;
selecting the first operation for execution based on the first age value;
after selecting the first operation for execution, blocking the first operation from being issued to an execution unit responsive to identifying that the first operation is dependent on a third operation;
in response to blocking the first operation, resetting the first age value to an initial value and maintaining the first operation at the tracking array;
adjusting a value of a counter by a first adjustment in response to blocking the first operation from being scheduled for execution while the first operation is stored at the tracking array; and
in response to the value of the counter exceeding a threshold, suppressing scheduling of execution operations not stored at the tracking array.
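A toy Python sketch of the claimed age-tracked scheduler follows: operations sit in a tracking array with an age value, a blocked dependent operation has its age reset and stays in the array, and a counter gates scheduling of operations that are not in the array. All names and the threshold value are illustrative assumptions.

```python
# Hypothetical sketch of the claimed scheduling behaviour; not the patented design.
INITIAL_AGE = 0
BLOCK_THRESHOLD = 3

class Scheduler:
    def __init__(self):
        self.tracking_array = {}   # operation name -> age value
        self.block_counter = 0

    def add(self, op):
        self.tracking_array[op] = INITIAL_AGE          # assign the operation its first age value

    def note_scheduled(self, scheduled_op):
        for op in self.tracking_array:
            if op != scheduled_op:
                self.tracking_array[op] += 1           # adjust ages when another operation is scheduled

    def try_issue(self, op, unresolved_deps):
        if unresolved_deps:                            # dependent on another, unfinished operation
            self.tracking_array[op] = INITIAL_AGE      # reset the age, keep the op in the array
            self.block_counter += 1                    # adjust the counter on each block
            return False
        self.tracking_array.pop(op)
        return True

    def may_schedule_untracked(self):
        return self.block_counter <= BLOCK_THRESHOLD   # suppress untracked ops past the threshold

sched = Scheduler()
sched.add("op1")
sched.note_scheduled("op2")
oldest = max(sched.tracking_array, key=sched.tracking_array.get)   # select by age
print(oldest, sched.try_issue(oldest, unresolved_deps=True), sched.may_schedule_untracked())
```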

US Pat. No. 10,339,062

METHOD AND SYSTEM FOR WRITING DATA TO AND READING DATA FROM PERSISTENT STORAGE

EMC IP Holding Company LL...

1. A method for managing data stored in a persistent storage, the method comprising:receiving a write request comprising a logical address and a first datum;
storing a table entry corresponding to the logical address in a primary cache entry table;
updating a bitmap entry corresponding to the logical address;
storing the first datum in an external memory, wherein the external memory is operatively connected to the persistent storage;
transmitting a copy of the first datum to the persistent storage;
receiving a write request comprising a second logical address and second datum;
storing a second table entry corresponding to the second logical address in an overflow table;
updating a bitmap entry corresponding to the second logical address;
storing the second datum in the external memory; and
transmitting a copy of the second datum to the persistent storage.
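An illustrative Python sketch of the claimed write path follows: the first write lands in a primary cache entry table, the second in an overflow table, both update a bitmap, and both data are staged in "external memory" before a copy goes to persistent storage. The fixed table size and dictionary-backed stores are assumptions, not the patented data structures.

```python
# Minimal sketch of the claimed two-table write path, under assumed data structures.
PRIMARY_SLOTS = 4

primary_table = {}        # logical address -> table entry
overflow_table = {}       # logical address -> table entry
bitmap = set()            # logical addresses with live entries
external_memory = {}      # staging area operatively connected to persistent storage
persistent_storage = {}

def write(logical_addr, datum):
    entry = {"addr": logical_addr}
    if logical_addr in primary_table or len(primary_table) < PRIMARY_SLOTS:
        primary_table[logical_addr] = entry        # store the table entry in the primary cache entry table
    else:
        overflow_table[logical_addr] = entry       # second write path: overflow table
    bitmap.add(logical_addr)                       # update the bitmap entry
    external_memory[logical_addr] = datum          # store the datum in external memory
    persistent_storage[logical_addr] = datum       # transmit a copy to persistent storage

write(0x10, b"first datum")
write(0x20, b"second datum")
print(sorted(bitmap), 0x10 in primary_table)
```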

US Pat. No. 10,339,061

CACHING FOR HETEROGENEOUS PROCESSORS

Intel Corporation, Santa...

1. A system comprising:a first processor comprising:
a plurality of cores on a single semiconductor chip, and
a first cache on the single semiconductor chip, the first cache to be shared by two or more of the plurality of cores on the single semiconductor chip;
a data processing device comprising:
a plurality of accelerator devices having a different instruction processing architecture from the plurality of cores, and
a second cache to be shared by the plurality of accelerator devices; and
an interconnect to couple the second cache to the first cache, wherein the interconnect comprises circuitry to perform coherence actions between the first cache and the second cache.

US Pat. No. 10,339,060

OPTIMIZED CACHING AGENT WITH INTEGRATED DIRECTORY CACHE

Intel Corporation, Santa...

1. A system comprising:a plurality of processing units, wherein each processing unit comprises one or more processing cores;
a memory coupled to and shared by the plurality of processing units; and
a cache/home agent (“CHA”) of a first processing unit, the CHA to:
maintain a remote snoop filter (“RSF”) corresponding to the first processing unit to track cache lines, wherein a cache line is tracked by the RSF if the cache line is stored in both the memory and one or more other processing units;
receive a request to access a target cache line from a processing core of the first processing unit;
allocate a tracker entry corresponding to the request, the tracker entry used to track a status of the request;
perform a lookup in the RSF for the target cache line; and
deallocate the tracker entry responsive to a detection that the target cache line is not tracked by the RSF.
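A short Python sketch of the claimed caching/home agent flow: a tracker entry is allocated per request, the remote snoop filter (RSF) is consulted, and the entry is released early when the RSF shows no other processing unit holds the line, so no cross-socket snoop is needed. All names are illustrative.

```python
# Hypothetical RSF lookup path; stands in for the claimed CHA behaviour.
remote_snoop_filter = {0x100}     # cache lines also cached by other processing units
tracker = {}                      # request id -> tracked status

def handle_request(request_id, cache_line):
    tracker[request_id] = "pending"                      # allocate a tracker entry for the request
    if cache_line in remote_snoop_filter:                # RSF hit: remote copies may exist
        tracker[request_id] = "awaiting snoop responses"
    else:                                                # RSF miss: the memory copy is the only copy
        del tracker[request_id]                          # deallocate the tracker entry
    return cache_line not in remote_snoop_filter

print(handle_request(1, 0x200), tracker)     # True {}  (no snoops needed)
print(handle_request(2, 0x100), tracker)     # False {2: 'awaiting snoop responses'}
```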

US Pat. No. 10,339,059

GLOBAL SOCKET TO SOCKET CACHE COHERENCE ARCHITECTURE

Mellanox Technologies, Lt...

11. Apparatus comprising:a memory controller for maintaining cache coherency, the controller configured for a system comprising plural multi-core processor chips in plural sockets, the apparatus configured to:
receive a read request from a requesting one of the plural sockets for a requested cache line;
access an ownership directory that keeps track of which blocks are owned by the plural sockets to determine if the read request hits or misses in the ownership directory, with the ownership directory tracking only cache lines of data that are owned by one or more of the plural sockets;
activate a hybrid cache coherency protocol, either:
when the read request misses in the ownership directory by configuring the apparatus to:
mark the requested cache line with a shared status;
return the requested cache line corresponding to the read request without sending a snoop message to other sockets
or when the read request hits in the ownership directory by configuring the apparatus to:
change status of the requested cache line to a shared status;
remove from the ownership directory, a directory entry corresponding to the requested cache line having the changed shared status;
produce a specific socket ID corresponding to the multi-core processor chip socket that has the exclusive copy of the requested cache line; and
send a directed message to the specified socket to supply the requested cache line.

US Pat. No. 10,339,058

AUTOMATIC CACHE COHERENCY FOR PAGE TABLE DATA

QUALCOMM Incorporated, S...

1. A method of automatic cache coherency for page table data on a computing device, comprising:modifying, by a first processing device, page table data stored in a first cache associated with the first processing device;
receiving, at a page table coherency unit, a page table cache invalidate signal from the first processing device;
issuing, by the page table coherency unit, a cache maintenance operation command to the first processing device; and
writing, by the first processing device, the modified page table data stored in the first cache to a shared memory accessible by the first processing device and a second processing device associated with a second cache storing the page table data.

US Pat. No. 10,339,057

STREAMING ENGINE WITH FLEXIBLE STREAMING ENGINE TEMPLATE SUPPORTING DIFFERING NUMBER OF NESTED LOOPS WITH CORRESPONDING LOOP COUNTS AND LOOP OFFSETS

TEXAS INSTRUMENTS INCORPO...

1. A digital data processor comprising:an instruction memory to store a plurality of instructions, each of the instructions specifying a data processing operation and at least one data operand;
an instruction decoder connected to the instruction memory to recall the instructions from the instruction memory and to determine, for each recalled instruction, the specified data processing operation and the at least one data operand;
at least one functional unit connected to a data register file and the instruction decoder to perform a data processing operation upon at least one data operand corresponding to an instruction decoded by the instruction decoder and to cause a result of the data processing operation to be stored in the data register file; and
a streaming engine connected to the instruction decoder to, in response to a stream start instruction, recall a data stream from a memory, wherein the data stream includes a sequence of data elements, wherein the sequence of data elements includes a selected number of nested loops, wherein each nested loop has a respective corresponding loop count, and wherein the streaming engine includes:
an address generator to generate stream memory addresses corresponding to the sequence of data elements of the data stream; and
a stream head register to store one or more data elements of the data stream so that the one or more data elements stored in the stream head register are available to be supplied to the at least one functional unit, wherein the at least one functional unit is configured to receive one of the one or more data elements from the stream head register as a data operand in response to a stream operand instruction;
wherein the data elements of the data stream are specified at least in part by a stream definition template stored in a stream definition template register, wherein the stream definition template includes a loop count field to indicate a loop format for the nested loops, wherein a first configuration indicated by the loop count field specifies a first format for the nested loops and a second configuration indicated by the loop count field specifies a second format for the nested loops, wherein, in the first format, the nested loops include a first number of nested loops and, in the second format, the nested loops include a second number of nested loops, the first number being different from the second number; and
wherein the address generator is configured to generate the stream memory addresses corresponding to the first number of nested loops when the loop count field specifies the first format and to generate the stream memory addresses corresponding to the second number of nested loops when the loop count field specifies the second format.
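A minimal Python sketch of the kind of address generation the claim describes: a loop-count field selects how many nested loops are active, and each active loop has its own count and offset (stride). The flat byte-addressed memory model and the argument layout are assumptions, not the patented template encoding.

```python
# Illustrative nested-loop address generator, innermost loop first.
def stream_addresses(base, loop_counts, loop_offsets):
    """Yield one memory address per data element of the stream."""
    assert len(loop_counts) == len(loop_offsets)
    def walk(level, addr):
        if level < 0:                    # all loop levels consumed: emit one element address
            yield addr
            return
        for i in range(loop_counts[level]):
            yield from walk(level - 1, addr + i * loop_offsets[level])
    yield from walk(len(loop_counts) - 1, base)

# "First format": two nested loops.  "Second format": three nested loops.
print(list(stream_addresses(0x1000, [2, 3], [4, 64])))                   # 6 addresses
print(len(list(stream_addresses(0x1000, [2, 3, 2], [4, 64, 1024]))))     # 12 addresses
```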

US Pat. No. 10,339,056

SYSTEMS, METHODS AND APPARATUS FOR CACHE TRANSFERS

SANDISK TECHNOLOGIES LLC,...

23. A system, comprising:a first host computing device, comprising:
a first cache manager configured to:
cache data of a particular virtual machine in cache storage of the first host computing device in association with respective cache tags allocated to the particular virtual machine, wherein the respective cache tags are stored outside of a memory space of the particular virtual machine; and
retain the cache data of the particular virtual machine within the cache storage of the first host computing device in response to the particular virtual machine being migrated from the first host computing device; and
a second host computing device, comprising:
a cache provisioner configured to allocate cache storage capacity within a cache storage device of the second host computing device to the particular virtual machine in response to the particular virtual machine being migrated to operate on the second host computing device; and
a second cache manager configured to:
receive the cache tags of the particular virtual machine from the first host computing device;
use the received cache tags to determine that the cache data of the particular virtual machine is being retained at the first host computing device;
access a portion of the cache data of the particular virtual machine retained at the first host computing device by use of the received cache tags; and
populate the cache storage capacity allocated to the particular virtual machine at the second host computing device by transferring the portion of the cache data of the particular virtual machine accessed from the first host computing device to the cache storage capacity of the second host computing device.

US Pat. No. 10,339,055

CACHE SYSTEM WITH MULTIPLE CACHE UNIT STATES

Red Hat, Inc., Raleigh, ...

1. A method comprising:determining, by a processing device, that a hit ratio is below a first hit ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and
responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, loading a dataset into the first cache unit rather than the second cache unit, wherein the first cache unit and the second cache unit are available to load the dataset.
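The selection rule in the claim reduces to a simple comparison, sketched below in Python: when the observed hit ratio falls below the first cache unit's threshold but stays above the second cache unit's threshold, the dataset is loaded into the first cache unit. The threshold values and two-unit layout are assumed for illustration.

```python
# Toy sketch of the claimed cache-unit selection rule.
FIRST_THRESHOLD = 0.8
SECOND_THRESHOLD = 0.4

def choose_cache_unit(hit_ratio):
    if SECOND_THRESHOLD < hit_ratio < FIRST_THRESHOLD:
        return "first cache unit"     # load the dataset here rather than the second unit
    return "second cache unit"

print(choose_cache_unit(0.6), choose_cache_unit(0.3))
```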

US Pat. No. 10,339,054

INSTRUCTION ORDERING FOR IN-PROGRESS OPERATIONS

Cavium, LLC, Santa Clara...

1. An apparatus comprising:one or more modules configured to execute memory instructions that access data stored in physical memory based on virtual addresses translated to physical addresses based on mappings in a page table; and
memory management circuitry coupled to the one or more modules, the memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses;
wherein the memory management circuitry is configured to execute operations from the one or more modules, the executing including selectively ordering each of a plurality of in-progress operations that were in progress within a processor pipeline when a first operation was received by the memory management circuitry,
wherein said selectively ordering is with respect to completing execution within said processor pipeline, and is performed in response to the first operation being received,
wherein the first operation invalidates at least a first virtual address as a result of inserting an instruction into the pipeline within a pre-determined maximum number of cycles after the first operation was received, wherein the pre-determined maximum number of cycles is determined based at least in part on (1) a guaranteed maximum latency and (2) a maximum number of cycles needed for the inserted instruction to propagate through the pipeline, and
wherein a position in said selective ordering of a particular in-progress operation depends on whether or not the particular in-progress operation provides results to at least one of the first cache or second cache.

US Pat. No. 10,339,053

VARIABLE CACHE FLUSHING

Hewlett Packard Enterpris...

1. A method for variable cache flushing, the method comprising:detecting, by a storage controller, a cache flush failure;
in response to the detecting, executing, by the storage controller, a first reattempt of the cache flush after a first time period has elapsed; and
adjusting, by the storage controller, durations of time periods between reattempts of the cache flush subsequent to the first reattempt based at least on a rate of input/output (I/O) errors for a backing medium to which cache lines corresponding to the cache flush are to be written.
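A hedged Python sketch of the claimed variable flush retry: after a flush failure the controller reattempts after a fixed period, and the wait between later reattempts scales with the I/O error rate of the backing medium. The linear scaling shown is only one plausible policy, not the patented formula.

```python
# Illustrative retry-interval policy keyed to the backing medium's I/O error rate.
BASE_PERIOD_S = 1.0

def next_retry_delay(reattempt_number, io_error_rate):
    if reattempt_number == 1:
        return BASE_PERIOD_S                          # first reattempt after a fixed time period
    return BASE_PERIOD_S * (1.0 + io_error_rate * reattempt_number)

for attempt in range(1, 5):
    print(attempt, round(next_retry_delay(attempt, io_error_rate=0.5), 2))
```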

US Pat. No. 10,339,052

MASSIVE ACCESS REQUEST FOR OUT-OF-CORE TEXTURES BY A PARALLEL PROCESSOR WITH LIMITED MEMORY

CENTILEO LLC, Moscow (RU...

1. A method, comprising:organizing access requests by one or more processors to the elements of textures, wherein a storage representation of the plurality of all the textures comprises a larger size than a capacity of processor memory, wherein the plurality of all the textures are stored only out-of-core, wherein an access is requested to incoherent data locations randomly distributed across the plurality of all the textures;
dividing by the one or more processors each texture from a plurality of all textures into plural evenly sized pages, each page comprising a subset of texels that are proximal to each other relative to the other of the texels of a particular texture, wherein all the plural pages comprise a representation of the plurality of textures, wherein each page contains up to M texels, where M is an integer number;
storing the plurality of all the pages in external memory and creating and allocating a cache system, the cache system comprising a page table and page cache in the processor memory;
allocating in the processor memory a page cache in a form of index structure capable of storing a subset of descriptors of the plurality of all pages wherein a page cache size depends on several limitations comprising the number of plural textures and the texture's size, the size of page in texels, the processor memory size or user setting;
allocating in the processor memory the page cache capable of storing CachedNP pages, wherein CachedNP depends on processor memory size and a user setting; and
allocating in the processor memory the page table comprising CachedNP descriptors of pages stored in the page cache, wherein each page descriptor comprises a page key, page time and a base address of the page data in the page cache, wherein the page key is a unique identity number of the page among the plurality of all the pages.

US Pat. No. 10,339,051

CONFIGURABLE COMPUTER MEMORY

Hewlett Packard Enterpris...

1. A method for configuring a memory, comprising:accessing, from a BIOS of a computer system, a set of memory settings for each of a set of memory segments of a memory, wherein each set of memory settings comprises a current memory setting that includes information about resiliency, power consumption, and performance of the corresponding memory segment and a set of potential memory settings, each potential memory setting including information about resiliency, power consumption, and performance available with the potential memory setting;
replacing, by an operating system of the computer system, a first current memory setting of a first segment of the memory with a first potential memory setting of a first set of memory settings for the first segment;
booting the computer system after the replacement of the first current memory setting with the first potential memory setting by the operating system;
supporting, by the operating system, a set of virtual machines, the set of virtual machines including a first virtual machine;
storing, by the operating system, the first virtual machine in the first segment responsive to determining that the first potential memory setting of the first segment is configured to support mirroring.

US Pat. No. 10,339,050

APPARATUS INCLUDING A MEMORY CONTROLLER FOR CONTROLLING DIRECT DATA TRANSFER BETWEEN FIRST AND SECOND MEMORY MODULES USING DIRECT TRANSFER COMMANDS

Arm Limited, Cambridge (...

1. An apparatus comprising:a memory controller and a plurality of memory modules;
wherein:
the memory controller, in order to control direct data transfer, is configured to:
issue a first direct transfer command to a first memory module of the plurality of memory modules, wherein the first direct transfer command comprises information indicating that the first memory module should transmit data bypassing the memory controller; and
issue a second direct transfer command to a second memory module of the plurality of memory modules, wherein the second direct transfer command comprises information indicating that the second memory module should store the data received directly from the first memory module;
the first memory module is configured to:
receive the first direct transfer command from the memory controller; and
directly transmit the data for receipt by the second memory module in dependence on the first direct transfer command; and
the second memory module is configured to:
receive the second direct transfer command from the memory controller;
receive the data from the first memory module directly; and
store the data in dependence on the second direct transfer command.

US Pat. No. 10,339,049

GARBAGE COLLECTION FACILITY GROUPING INFREQUENTLY ACCESSED DATA UNITS IN DESIGNATED TRANSIENT MEMORY AREA

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method of managing memory, the method comprising:executing a memory management process within a computing environment, the executing the memory management process comprising:
establishing an area of a memory as a transient memory area and an area of the memory as a conventional memory area;
tracking, for each data unit in the transient memory area or the conventional memory area, a number of accesses to the data unit, the tracking providing a respective access count for each data unit;
performing garbage collection processing on the memory, the garbage collection processing facilitating consolidation of the data units within the transient memory area or the conventional memory area, and the garbage collection processing comprising:
determining, for each data unit in the transient memory area or the conventional memory area, whether the respective access count is below a transient threshold ascertained to separate frequently accessed data units and infrequently accessed data units; and
grouping data units with respective access counts below the transient threshold together as transient data units within the transient memory area;
repeating the garbage collection processing over multiple garbage collection processing cycles; and
applying, between one garbage collection processing cycle and another garbage collection processing cycle of the multiple garbage collection processing cycles, an adjustment to lower, at least in part, the respective access counts of the data units to facilitate the garbage collection processing on the memory.
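An illustrative Python sketch of the claimed grouping step: data units whose access counts fall below a transient threshold are consolidated into the transient memory area, and counts are lowered between garbage-collection cycles. The halving decay and the threshold value are assumptions.

```python
# Toy model of grouping infrequently accessed data units into a transient area.
TRANSIENT_THRESHOLD = 5

access_counts = {"a": 12, "b": 2, "c": 7, "d": 1}
transient_area, conventional_area = set(), set()

def gc_cycle():
    for unit, count in access_counts.items():
        if count < TRANSIENT_THRESHOLD:
            transient_area.add(unit)               # group infrequently accessed units together
            conventional_area.discard(unit)
        else:
            conventional_area.add(unit)
            transient_area.discard(unit)

def decay_counts():
    for unit in access_counts:                     # adjustment applied between GC cycles
        access_counts[unit] //= 2

gc_cycle(); decay_counts(); gc_cycle()
print(sorted(transient_area), sorted(conventional_area), access_counts)
```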

US Pat. No. 10,339,048

ENDURANCE ENHANCEMENT SCHEME USING MEMORY RE-EVALUATION

International Business Ma...

1. An apparatus, comprising:non-volatile memory configured to store data; and
a controller and logic integrated with and/or executable by the controller, the logic being configured to:
determine, by the controller, that at least one block of the non-volatile memory and/or portion of a block of the non-volatile memory meets a retirement condition;
re-evaluate, by the controller, the at least one block and/or the portion of a block to determine whether to retire the at least one block and/or the portion of a block;
indicate, by the controller, that the at least one block and/or the portion of a block remains usable when a result of the re-evaluation is not to retire the block; and
indicate, by the controller, that the at least one block and/or the portion of a block is retired when the result of the re-evaluation is to retire the block,
wherein the re-evaluating includes:
assigning the at least one block and/or the portion of a block into a delay queue for at least a dwell time and/or a read delay,
performing one or more erase operations on the at least one block and/or the portion of a block,
writing data to the at least one block and/or the portion of a block,
performing a calibration of the at least one block and/or the portion of a block, and
performing a read sweep on the at least one block and/or the portion of a block,
wherein performing the calibration includes determining an optimal threshold voltage shift value for each of the at least one block and/or the portion of a block.

US Pat. No. 10,339,047

ALLOCATING AND CONFIGURING PERSISTENT MEMORY

Intel Corporation, Santa...

1. An apparatus comprising:memory controller logic, coupled to non-volatile memory (NVM), to configure the NVM into a plurality of partitions at least in part based on one or more attributes,
wherein one or more volumes visible to an application or operating system are to be formed from one or more of the plurality of partitions, wherein each of the one or more volumes is to comprise one or more of the plurality of partitions having at least one similar attribute from the one or more attributes, wherein the NVM is to utilize at least two management partitions, wherein the management partitions are to be accessible prior to the NVM having been mapped into a processor's address space, wherein a first management partition from the at least two management partitions is read or write accessible by a Basic Input/Output System (BIOS), wherein the first management partition is read or write inaccessible by the application or the operating system.

US Pat. No. 10,339,046

DATA MOVING METHOD AND STORAGE CONTROLLER

SHENZHEN EPOSTAR ELECTRON...

1. A data moving method, adapted for controlling a storage device equipped with a flash memory, the storage device being controlled by a storage controller, the flash memory comprising a plurality of dice, the dice comprising a first die corresponding to a first channel and a second die corresponding to a second channel, each of the dice comprising a first plane and a second plane, the method comprising:performing a data moving operation by the storage controller to obtain a valid data from a plurality of source blocks, the valid data comprising a first data, a second data, a third data and a fourth data;
determining whether the valid data is a sequential data by the storage controller;
when the valid data is the sequential data, transmitting a first 2-plane read command by the storage controller to read the first data and the second data respectively from the first plane and the second plane of the first die through the first channel, transmitting a second 2-plane read command to read the third data and the fourth data respectively from the first plane and the second plane of the second die through the second channel, and transmitting the first data, the third data, the second data and the fourth data to a buffer memory in order; and
transmitting a first 2-plane programming command by the storage controller to respectively program the first data and the second data to the first plane and the second plane of a third die among the dice, and transmitting a second 2-plane programming command to respectively program the third data and the fourth data to the first plane and the second plane of a fourth die among the dice.

US Pat. No. 10,339,045

VALID DATA MANAGEMENT METHOD AND STORAGE CONTROLLER

SHENZHEN EPOSTAR ELECTRON...

1. A valid data management method, adapted to a storage device having a rewritable non-volatile memory module, wherein the rewritable non-volatile memory module has a plurality of physical units, each physical unit among the physical units comprises a plurality of physical sub-units, and the method comprises:creating a valid data mark table and a valid logical addresses table corresponding to a target physical unit according to the target physical unit among the physical units, a logical-to-physical table corresponding to the rewritable non-volatile memory module and a target physical-to-logical table corresponding to the target physical unit, wherein the target physical-to-logical table records target logical addresses of a plurality of target logical sub-units mapped to a plurality of target physical sub-units according to an arrangement order of the target physical sub-units of the target physical unit, and the target logical addresses respectively correspond to a plurality of target physical addresses of the target physical sub-units,
wherein the created valid data mark table records a plurality of mark values respectively corresponding to the target logical addresses, wherein each mark value among the mark values is a first bit value or a second bit value, wherein the first bit value is configured to indicate that the corresponding target logical address is valid, and the second bit value is configured to indicate that the corresponding target logical address is invalid,
wherein the created valid logical addresses table only records one or more valid target logical addresses respectively corresponding to one or more said first bit values according to an order of the one or more said first bit values in the valid data mark table, wherein the one or more valid target logical addresses are the target logical addresses determined as valid among the target logical addresses, wherein the valid data mark table is smaller than the valid logical addresses table, and the valid logical addresses table is smaller than the target physical-to-logical table; and
identifying one or more valid data stored in the target physical unit according to the logical-to-physical table, the valid data mark table and the valid logical addresses table corresponding to the target physical unit.

US Pat. No. 10,339,044

METHOD AND SYSTEM FOR BLENDING DATA RECLAMATION AND DATA INTEGRITY GARBAGE COLLECTION

SanDisk Technologies LLC,...

1. A method of recycling data in a non-volatile memory system, comprising:determining occurrences of triggering events, wherein the triggering events include:
data reclamation events, urgent data integrity recycling events, and scheduled data integrity recycling events, wherein the data reclamation events include events that each corresponds to the occurrence of one or more host data write operations in accordance with a target reclamation to host write ratio, a respective urgent data integrity recycling event occurs when a respective memory portion of the non-volatile memory system satisfies predefined urgent read disturb criteria, and the scheduled data integrity recycling events include events that occur at a rate corresponding to a projected quantity of memory units for which data integrity recycling is to be performed by the non-volatile memory system over a period of time;
in response to each of a plurality of triggering events, recycling data in a predefined quantity of memory units from a source memory portion to a target memory portion of the non-volatile memory system; and
in response to determining occurrences of a first type of non-urgent recycling events and a second type of the non-urgent recycling events, calculating a hybrid timeout period in accordance with a first timeout period and a second timeout period.

US Pat. No. 10,339,043

SYSTEM AND METHOD TO MATCH VECTORS USING MASK AND COUNT

MoSys, Inc., San Jose, C...

1. An apparatus for calculating an index into a main memory, the apparatus comprising:an index-generating logic coupleable to the main memory and having a plurality of inputs and an output;
a local memory for storing a plurality of population counts of compressed data that is stored in the main memory, the local memory selectively coupled to the index-generating logic in order to selectively provide at least a portion of the plurality of population counts to the index-generating logic; and
a register coupled to provide the index-generating logic a plurality of multi-bit strides (MBSs) of a prefix string; and wherein:
the index-generating logic generates a composite index on its output to a data location in the main memory; and
the data location in the main memory is a longest prefix match (LPM) for the prefix string and any data associated with the LPM.
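The general technique the claim builds on — using stored population counts to index compressed data — can be sketched in Python: a bitmap records which multi-bit-stride values are populated, only populated entries are stored, and an entry's index is the count of set bits preceding its position. The 4-bit stride and the data layout are assumptions, not the patented composite-index logic.

```python
# Popcount-based indexing into a compressed table (illustrative only).
STRIDE_BITS = 4

bitmap = 0
compressed_data = []                      # packed entries, in bitmap order (stands in for main memory)

def insert(stride_value, payload):
    global bitmap
    pos = stride_value
    rank = bin(bitmap & ((1 << pos) - 1)).count("1")   # population count below this position
    bitmap |= (1 << pos)
    compressed_data.insert(rank, payload)

def lookup(stride_value):
    pos = stride_value
    if not (bitmap >> pos) & 1:
        return None                                    # no entry for this stride value
    rank = bin(bitmap & ((1 << pos) - 1)).count("1")   # composite index into the packed data
    return compressed_data[rank]

insert(0b0011, "prefix 3")
insert(0b1010, "prefix 10")
print(lookup(0b1010), lookup(0b0110))                  # 'prefix 10' None
```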

US Pat. No. 10,339,042

MEMORY DEVICE INCLUDING COLUMN REDUNDANCY

Samsung Electronics Co., ...

1. A memory device comprising:a memory cell array comprising a plurality of mats connected to a word line and a plurality of bit lines; and
a column decoder comprising a first repair circuit in which a first repair column address is stored, and a second repair circuit in which a second repair column address is stored,
wherein when the first repair column address coincides with a received column address in a read command or a write command, the column decoder is configured to select other bit lines from among the plurality of bit lines instead of bit lines from among the plurality of bit lines corresponding to the received column address in one mat among the plurality of mats, and
wherein when the second repair column address coincides with the received column address, the column decoder is configured to select other bit lines from among the plurality of bit lines instead of the bit lines corresponding to the received column address in the plurality of mats.

US Pat. No. 10,339,041

SHARED MEMORY ARCHITECTURE FOR A NEURAL SIMULATOR

QUALCOMM Incorporated, S...

1. A computer-implemented method for allocating memory in an artificial nervous system simulator implemented in hardware, comprising:performing a simulation of a plurality of artificial neurons of an artificial nervous system;
determining memory resource requirements of each of the artificial neurons of the artificial nervous system being simulated based on at least one of a state or a type of the artificial neuron being simulated;
dynamically allocating portions of a shared memory pool to the artificial neurons based on the determination of memory resource requirements as the memory resource requirements change during the simulation, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients; and
accessing, for each artificial neuron, the portion of the shared memory pool allocated to the artificial neuron during the simulation via the write clients and the read clients.

US Pat. No. 10,339,040

CORE DATA SERVICES TEST DOUBLE FRAMEWORK AUTOMATION TOOL

SAP SE, Walldorf (DE)

1. A computer-implemented method for evaluating integrity of data models, comprising:selecting a package comprising a semantic and reusable data model expressed in data definition language;
selecting a class to create a plurality of local test classes;
generating, based on a class name and a package name, a plurality of local test class templates for the package; and
determining an integrity of the data model by comparing an actual result for the data model and an expected result for the data model.

US Pat. No. 10,339,039

VIRTUAL SERVICE INTERFACE

CA, Inc., Islandia, NY (...

1. A method comprising:identifying a virtualization request to initiate a virtualized transaction involving a first software component and a virtual service simulating a second software component;
determining a location of a reference, within the first software component, to the second software component based on a type of the first software component, wherein the location of the reference comprises a particular file associated with the first software component, and the first software component is to use the location of the reference to determine a first network location of the second software component and communicate with the second software component based on the first network location;
determining a second network location of a system to host the virtual service; and
changing the reference within the particular file, using a plug-in installed on the first software component, to direct communications of the first software component to the second network location instead of the first network location responsive to the virtualization request, wherein the virtualized transaction is to comprise a request sent from the first software component to the virtual service and a synthetic response generated by the virtual service to model a real response by the second software component to the request.

US Pat. No. 10,339,038

METHOD AND SYSTEM FOR GENERATING PRODUCTION DATA PATTERN DRIVEN TEST DATA

JPMORGAN CHASE BANK, N.A....

1. A computer implemented system that implements a test data tool that generates test data based on production data patterns, the test data tool comprising:a data input that interfaces with one or more production environments;
an output interface that transmits test data to one or more user acceptance testing (UAT) environments;
a communication network that receives production data from the one or more production environments and transmits test data to the one or more UAT environments; and
a computer server comprising at least one processor, coupled to the data input, the interactive user interface and the communication network, the processor configured to:
receive, via the data input, production data from the one or more production environments, the production data comprises personally identifiable information;
identify a plurality of attributes from the production data;
for each attribute, identify one or more data patterns;
generate one or more rules that define the one or more data patterns for each attribute;
generate a configuration file based on the one or more rules;
apply the configuration file to generate test data in a manner that obscures personally identifiable information existing in the production data; and
transmit the test data to a UAT environment.

US Pat. No. 10,339,037

RECOMMENDATION ENGINE FOR RECOMMENDING PRIORITIZED PERFORMANCE TEST WORKLOADS BASED ON RELEASE RISK PROFILES

INTUIT INC., Mountain Vi...

1. A method for recommending prioritized performance test workloads, the method comprising:retrieving, by a processor, a baseline workload and variability coverage matrix associated with an aspect of a software release, wherein the variability coverage matrix identifies variations of the baseline workload in the software release;
retrieving, by the processor, from one or more external resources, information about the software release based on keywords associated with a baseline test workload for a software release;
creating, by the processor, a risk profile for the software release based, at least in part, on a number of matches to each of the keywords in the retrieved information, wherein the risk profile includes weightings to apply to the baseline workload for each variation of a workload in the variability coverage matrix, and wherein each of the keywords has a corresponding weight, and creating the risk profile comprises adjusting a weighting associated with each keyword in the baseline test workload by the corresponding weight for each instance of the keyword in the retrieved information;
generating, by the processor, a prioritized test workload for execution over one or more prioritized variability dimensions based on the risk profile and the baseline test workload, wherein generating the prioritized test workload comprises adjusting, for a variability dimension in the variability coverage matrix, a distribution of tests to execute for each value defined for the variability dimension; and
executing, by the processor, a test of the software release based on the prioritized test workload.
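A small Python sketch of the claimed risk-profile construction: keyword matches in retrieved release information adjust per-keyword weightings, which then skew the distribution of tests across workload variations. The keyword weights and the proportional allocation are illustrative assumptions.

```python
# Illustrative risk-profile weighting and test prioritization.
baseline_weights = {"payment": 1.0, "login": 1.0, "reporting": 1.0}
keyword_weight = {"payment": 0.5, "login": 0.3, "reporting": 0.1}

def build_risk_profile(retrieved_text):
    profile = dict(baseline_weights)
    for keyword, weight in keyword_weight.items():
        matches = retrieved_text.lower().count(keyword)   # number of matches to the keyword
        profile[keyword] += weight * matches              # adjust the weighting per match
    return profile

def prioritize_tests(profile, total_tests=100):
    total = sum(profile.values())
    return {k: round(total_tests * v / total) for k, v in profile.items()}

notes = "Payment flow rewritten; payment retries added; login page restyled."
profile = build_risk_profile(notes)
print(profile, prioritize_tests(profile))
```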

US Pat. No. 10,339,036

TEST AUTOMATION USING MULTIPLE PROGRAMMING LANGUAGES

Accenture Global Solution...

1. A device, comprising:one or more memories; and
one or more processors, communicatively coupled to the one or more memories, to:
receive information identifying a set of steps to perform,
the set of steps being related to a test of a program,
one or more steps, of the set of steps, being written in a first programming language;
determine whether the set of steps is associated with a first artifact that is similar to a second artifact associated with another set of steps based on the information identifying the set of steps,
the first artifact identifying information related to the test of the program and the second artifact identifying information related to another test of another program;
determine whether two or more steps, of the set of steps, can be combined into a combined set of steps based on determining whether the set of steps is associated with the first artifact that is similar to the second artifact;
identify program code written in a second programming language based on determining whether the two or more steps, of the set of steps, can be combined into the combined set of steps,
the second programming language being different from the first programming language; and
perform an action related to the test of the program based on identifying the program code.

US Pat. No. 10,339,035

TEST DB DATA GENERATION APPARATUS

HITACHI, LTD., Tokyo (JP...

1. A test DB data generation apparatus for generating a database for testing, which approximates an existing database having a plurality of tables, each table having a plurality of corresponding columns that take on a corresponding plurality of values, the test DB data generation apparatus comprising:a column distribution extraction module extracting distribution information of values of each column of the existing database, wherein the column distribution information indicates, for one or more columns in a corresponding table, a frequency distribution of a range of the plurality of values taken on by said one or more columns in the existing database;
a column dependency extraction module extracting column dependency information of the existing database, wherein the column dependency information indicates a probability that, when a first column in a first table takes on a source value that falls within a range of values, a second column in a second table takes on a target value;
a data generation module generating test DB data based on the distribution information and the column dependency information, wherein the test DB data indicates, for the first table in the existing database, that when a first column in the first table takes on a first value, a second column in the first table takes on a second value in a proportion that corresponds to the probability indicated by the column dependency information; and
a column dependency degree calculation module measuring a degree of dependency between columns of the existing database,
wherein the column dependency extraction module is configured to:
group pieces of data with a rule for each column of the existing database;
replace the pieces of data with group names of respective groups obtained by the grouping; and
calculate a degree of co-occurrence of pieces of data for a combination of two columns, and
wherein the data generation module is configured to determine whether or not to generate test DB data by using the column dependency information for each column based on the degree of dependency between columns calculated by the column dependency degree calculation module.

US Pat. No. 10,339,034

DYNAMICALLY GENERATED DEVICE TEST POOL FOR STAGED ROLLOUTS OF SOFTWARE APPLICATIONS

Google LLC, Mountain Vie...

1. A method comprising:receiving, by a computing system that includes an application repository, an updated version of an executable application;
determining, by the computing system, based at least in part on one or more characteristics of a particular computing device and one or more characteristics of a group of computing devices that excludes the particular computing device,
whether the particular computing device contributes additional test scope for the updated version of the executable application beyond existing test scope for the updated version of the executable application that is contributed by the group of computing devices; and
responsive to determining that the particular computing device contributes additional test scope for the updated version of the executable application beyond the existing test scope for the updated version of the executable application that is contributed by the group of computing devices:
adding, by the computing system, the particular computing device to the group of computing devices; and
sending, by the computing system, the updated version of the executable application to the particular computing device for installation at the particular computing device.
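A Python sketch of the claimed test-pool growth rule: a device is added to the pool only when its characteristics add test scope the existing pool does not already cover. Representing device characteristics as key/value pairs is an assumption made for illustration.

```python
# Toy coverage check for a staged-rollout device test pool.
test_pool = [
    {"model": "A", "os": "11", "screen": "small"},
    {"model": "B", "os": "12", "screen": "large"},
]

def contributes_additional_scope(candidate, pool):
    covered = {(k, v) for device in pool for k, v in device.items()}
    return any((k, v) not in covered for k, v in candidate.items())

def maybe_add(candidate):
    if contributes_additional_scope(candidate, test_pool):
        test_pool.append(candidate)        # add the device, then send it the updated application
        return True
    return False

print(maybe_add({"model": "A", "os": "11", "screen": "small"}))   # False: nothing new
print(maybe_add({"model": "C", "os": "12", "screen": "small"}))   # True: new model value
```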

US Pat. No. 10,339,033

RUNTIME DETECTION OF UNINITIALIZED VARIABLE ACROSS FUNCTIONS

International Business Ma...

1. A computer implemented method for detecting uninitialized variables, the computer implemented method comprising:running a first function, wherein the first function comprises a local variable and a first flag associated with the local variable for indicating an initialization state of the local variable;
calling a second function from the first function, with the local variable as a parameter of the second function, wherein the second function comprises a second flag associated with the parameter for indicating an initialization state of the parameter;
in response to the local variable not indicating the initialization state of the parameter, providing a global variable to the second function as a second parameter, wherein the global variable indicates the availability state of the second flag to the first function;
in response to the second flag to the first function determined as available, returning the second flag from the second function to the first function; and
updating the first flag based at least on the second flag and the global variable being available, wherein the global variable is associated with the second flag returned to the first function from the second function.

US Pat. No. 10,339,032

SYSTEM FOR MONITORING AND REPORTING PERFORMANCE AND CORRECTNESS ISSUES ACROSS DESIGN, COMPILE AND RUNTIME

Microsoft Technology Lice...

1. A method performed on a computing device that includes at least one processor and memory, the method comprising:receiving, by the computing device from a development tool, at least one event configured to identify a design-time, a compile-time, and a run-time issue associated with code;
determining, from a plurality of categories, a category of the at least one event, wherein the plurality of categories comprise: best practices, application performance, accessibility, localization, or any other aspect of application development or operation;
mapping, by the computing device based on a mapping information in a mapping store, the category of the at least one event to at least one rule of a plurality of rules in a rule store associated with development code;
identifying, by the computing device based at least on the mapping, the at least one rule of the plurality of rules in the rule store associated with the development of the code, wherein each rule of the plurality of rules in the rule store includes an identifier that uniquely identifies the each rule from each other of the plurality of rules;
evaluating, by the computing device based on the at least one identified rule, the at least one received event according to the at least one rule resulting in identification of a development issue associated with the code;
generating, by the computing device based on said evaluating, a rule output that identifies a cause of the development issue associated with the code;
displaying, by the computing device in a user interface host based at least on the evaluating, the identification of the development issue associated with the code and a proposed solution for the development issue associated with the code; and
modifying, by the computing device based on the rule output, the code according to the proposed solution for the development issue associated with the code.

US Pat. No. 10,339,031

EFFICIENT METHOD DATA RECORDING

BMC Software Israel Ltd.,...

1. A method executed by a processor, the method comprising:collecting subroutine call data regarding a plurality of subroutine calls executed by a software application;
selecting one or more of the plurality of subroutine calls for data recording, each of the selected subroutine calls being assigned a unique call identification (ID);
storing the recorded data for the selected subroutine calls in a subroutine data registry; and
pruning, as the plurality of subroutine calls execute, a subroutine call tree to include only the selected subroutine calls and one or more parent subroutine calls of each of the selected subroutine calls, the unique call IDs of the subroutine calls in the pruned subroutine call tree providing a mapping to the recorded data for the selected subroutine calls in the subroutine data registry,
wherein pruning the subroutine call tree includes maintaining an array of elements, each array element being associated with a respective subroutine call, the maintaining including:
adding a first element to the array when a first selected subroutine is called;
retaining, when the first selected subroutine exits, the first element in the array based on determining that the first selected subroutine is associated with a parent subroutine included in selected subroutine calls;
selecting a second element present in the array;
removing the selected second element from the array; and
generating the subroutine call tree based on the elements remaining in the array, the subroutine call tree including the first subroutine call associated with the first element and omitting the second subroutine call associated with the selected second element.

US Pat. No. 10,339,030

DUPLICATE BUG REPORT DETECTION USING MACHINE LEARNING ALGORITHMS AND AUTOMATED FEEDBACK INCORPORATION

Oracle International Corp...

1. A method comprising:for each particular set of bug reports, in a first plurality of sets of bug reports, identifying:
(a) a user-classification of the particular set of bug reports as including duplicate bug reports or non-duplicate bug reports;
(b) a first plurality of correlation values, each of which corresponds to a respective feature, of a plurality of features, between bug reports in the particular set of bug reports;
based on (a) and (b), for the first plurality of sets of bug reports, generating a model to identify any set of bug reports as including duplicate bug reports or non-duplicate bug reports;
receiving a request to determine whether a particular bug report is a duplicate of any of a second plurality of bug reports;
identifying a first category associated with the particular bug report;
identifying a first subset of bug reports, of the second plurality of bug reports, associated with the first category;
identifying a second subset of bug reports, of the second plurality of bug reports, that have been previously identified as a duplicate of at least one bug report of the first subset of bug reports;
identifying a set of candidate bug reports that:
(a) includes one or more of the first subset of bug reports;
(b) includes one or more of the second subset of bug reports; and
(c) does not include a third subset of bug reports, of the second plurality of bug reports, that (i) are not associated with the first category and (ii) have not been previously identified as a duplicate of any bug report of the first subset of bug reports;
applying the model to obtain a classification of the particular bug report and a candidate bug report, of the set of candidate bug reports, as duplicate bug reports or non-duplicate bug reports, and refraining from applying the model to classify the particular bug report and any of the third subset of bug reports as duplicate bug reports or non-duplicate bug reports.
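A Python sketch of the claimed candidate filtering: a new report is compared only against reports in its category plus reports previously marked as duplicates of those, and a pairwise classifier scores each candidate pair. The word-overlap similarity function below is a stand-in for the generated model, not the patented one.

```python
# Illustrative candidate-set construction and stubbed pairwise classification.
reports = {
    1: {"category": "ui", "title": "button misaligned"},
    2: {"category": "ui", "title": "button misaligned on save dialog"},
    3: {"category": "db", "title": "query timeout"},
}
known_duplicates = {2: {1}}        # report 2 was previously marked a duplicate of report 1

def candidate_set(new_report):
    same_category = {rid for rid, r in reports.items()
                     if r["category"] == new_report["category"]}
    dup_of_those = {rid for rid, dups in known_duplicates.items() if dups & same_category}
    return same_category | dup_of_those        # everything else is excluded from classification

def model_is_duplicate(a, b):
    shared = set(a["title"].split()) & set(b["title"].split())
    return len(shared) >= 2                    # stand-in for the trained model

new = {"category": "ui", "title": "save dialog button misaligned"}
for rid in sorted(candidate_set(new)):
    print(rid, model_is_duplicate(new, reports[rid]))
```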

US Pat. No. 10,339,029

AUTOMATICALLY DETECTING INTERNATIONALIZATION (I18N) ISSUES IN SOURCE CODE AS PART OF STATIC SOURCE CODE ANALYSIS

CA, Inc., New York, NY (...

1. A method, comprising:installing a plug-in component in a stand-alone static source code analysis program/application, wherein the plug-in component contains a plurality of sets of internationalization rules, wherein each respective set is configured to enable the detection of internationalization issues in source code of a particular programming language type that is different from respective programming language types corresponding to other sets;
automatically creating a repository comprising the plurality of sets of internationalization rules during the installation of the plug-in;
accessing a first set of the plurality of sets of internationalization rules;
creating a first quality profile for a first programming language type using the first set of the plurality of sets of internationalization rules, corresponding to the first programming language type;
accessing a second set of the plurality of sets of internationalization rules;
creating a second quality profile for a second programming language type using the second set of the plurality of sets of internationalization rules, corresponding to the second programming language type;
scanning source code of a software product for potential issues, wherein scanning source code comprises scanning at block level and searching code by comparing each block to a rule in a quality profile, wherein the quality profile used is the first quality profile when the source code is written in the first programming language type or the second quality profile if the source code is written in the second programming language type;
identifying detected internationalization issues in the source code when a block of code matches or meets a rule in the quality profile;
formatting for display the detected internationalization issues; and
suggesting a solution to fix the detected internationalization issues.

US Pat. No. 10,339,028

LOG STORAGE VIA APPLICATION PRIORITY LEVEL

FUJITSU LIMITED, Kawasak...

1. An information processing device comprising:a memory; and
a processor coupled to the memory and the processor configured to:
determine, in accordance with a first depth corresponding to a first condition associated with an application in a hierarchical structure, a priority level of the application that provides a service based on the first condition included in a plurality of conditions, each condition of the plurality of conditions corresponding to each depth in the hierarchical structure,
perform, in accordance with the priority level of the application, determination whether the application is a collection target, and
when the application is the collection target, collect a log of the application from a terminal which has downloaded the application.

US Pat. No. 10,339,027

AUTOMATION IDENTIFICATION DIAGNOSTIC TOOL

Accenture Global Solution...

1. A system, comprising:a database interface configured to communicate with a database library storing a set of automation rules;
a communication interface configured to communicate with a computing device;
a processor configured to communicate with the database interface and the communication interface, the processor further configured to:
receive, through the communication interface, a recording request to commence recording of actions interacting with a program running on the computing device;
in response to receiving the recording request, record a recording session capturing the actions interacting with the program running on the computing device;
detect an actionable input to the computing device during recording of the recording session;
capture a screenshot of the actionable input in response to detecting the actionable input;
receive, through the communication interface, a stop recording request to stop recording of the recording session;
in response to receiving the stop recording request, stop recording of the recording session;
compare the recording session to a predetermined automation list;
determine the actionable input is one of an automatable process or a potentially automatable process based on the comparison;
generate a workflow diagram describing the recording session; and
generate an analysis report graphical user interface (GUI) based on the workflow diagram, wherein the analysis report GUI includes the actionable input, wherein the actionable input is tagged in the analysis report GUI as being one of the automatable process or the potentially automatable process.

US Pat. No. 10,339,026

TECHNOLOGIES FOR PREDICTIVE MONITORING OF A CHARACTERISTIC OF A SYSTEM

Intel Corporation, Santa...

1. A predictive sensor module to monitor a characteristic of a monitored system, the predictive sensor module comprising:a primary sensor to produce primary sensor data indicative of a primary characteristic of the monitored system;
one or more secondary sensors to produce secondary sensor data indicative of a secondary characteristic, different from the primary characteristic, of the monitored system; and
a sensor controller to (i) determine a measured value of the primary characteristic based on the primary sensor data, (ii) determine a measured value of the secondary characteristic based on the secondary sensor data, (iii) predict a predicted value of the primary characteristic using a predictive model with the measured value of the secondary characteristic as an input to the predictive model, and (iv) determine whether to update the predictive model based on whether a difference between the measured value and the predicted value of the primary characteristic exceeds a threshold.

US Pat. No. 10,339,025

STATUS MONITORING SYSTEM AND METHOD

EMC IP Holding Company LL...

1. A signal generation subsystem configured to:receive a plurality of binary status signals from a plurality of monitored subcomponents within a system being monitored, wherein each binary status signal includes a warning of an upcoming change in power output of each monitored subcomponent; and
generate a cumulatively-encoded status signal based, at least in part, upon the plurality of binary status signals, which is indicative of the overall health of the system being monitored, wherein an amplitude of the cumulatively-encoded status signal indicates a number of monitored subcomponents with the warning of an upcoming change in power output, and the cumulatively-encoded status signal is configured to control the power demand of one or more controlled subcomponents based, at least in part, upon the amplitude of the cumulatively-encoded status signal.
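
Read as logic, the amplitude of the cumulatively-encoded signal is simply the count of subcomponents currently asserting the warning, scaled to a level. A minimal sketch; the step size and names are illustrative.

    def encode_status(binary_warnings, step_volts=0.1):
        """binary_warnings: iterable of 0/1 flags, one per monitored subcomponent."""
        asserted = sum(1 for w in binary_warnings if w)
        return asserted * step_volts   # amplitude grows with the number of asserted warnings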

US Pat. No. 10,339,024

PASSIVE DEVICE DETECTION

Microsoft Technology Lice...

1. A system comprising:memory;
a processor;
a passive device identifier stored in the memory and executable by the processor to:
increment a current supplied to a passive electronic device at discrete intervals;
sample a voltage of the passive electronic device at each one of the discrete intervals to generate a dataset of current-voltage pairs; and
identify the passive electronic device based on the generated dataset.
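
A minimal sketch of the identification loop implied by the claim, assuming a hypothetical driver API (set_current, read_voltage) and a simple nearest-resistance match over the generated current-voltage dataset.

    def identify_passive_device(driver, currents_amps, profiles):
        """profiles: dict mapping device name -> expected resistance in ohms (an assumption)."""
        pairs = []
        for i in currents_amps:                       # increment current at discrete intervals
            driver.set_current(i)
            pairs.append((i, driver.read_voltage()))  # sample voltage at each interval
        nonzero = [(i, v) for i, v in pairs if i]
        r_est = sum(v / i for i, v in nonzero) / len(nonzero)   # crude resistance estimate
        return min(profiles, key=lambda name: abs(profiles[name] - r_est))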

US Pat. No. 10,339,023

CACHE-AWARE ADAPTIVE THREAD SCHEDULING AND MIGRATION

Intel Corporation, Santa...

1. A processor comprising:a plurality of cores each to independently execute instructions, the plurality of cores included on a single die of the processor;
a shared cache memory coupled to the plurality of cores, the shared cache memory having a plurality of cache portions, wherein the shared cache memory is a single memory structure included on the single die of the processor, wherein a first cache portion of the shared cache memory is associated with a first core of the plurality of cores, wherein a second cache portion of the shared cache memory is associated with a second core of the plurality of cores;
a plurality of cache activity monitors each associated with one of the plurality of cache portions of the shared cache memory, wherein each cache activity monitor is to monitor a cache miss rate of an associated cache portion and to output cache miss rate information;
a plurality of thermal sensors each associated with one of the plurality of cache portions and to output thermal information including a temperature of the corresponding cache portion; and
a logic coupled to the plurality of cores to:
receive the cache miss rate information from the plurality of cache activity monitors and the thermal information,
in response to a determination that the cache miss rate of the first cache portion of the shared cache memory exceeds a cache miss threshold stored in the processor, migrate a first thread from the first core associated with the first cache portion to the second core associated with the second cache portion of the shared cache memory.
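
The migration decision is a threshold test on per-portion miss rates, with thermal data available as a tiebreak. A minimal sketch; the dictionaries and the migrate hook are assumptions.

    def maybe_migrate(threads, miss_rate, temperature, miss_threshold, migrate):
        """threads/miss_rate/temperature: dicts keyed by core id; migrate(thread, src, dst): scheduler hook."""
        for core, rate in miss_rate.items():
            if rate > miss_threshold and threads.get(core):
                # Prefer a destination whose cache portion has a low miss rate (cooler as tiebreak).
                dst = min(miss_rate, key=lambda c: (miss_rate[c], temperature.get(c, 0)))
                if dst != core:
                    migrate(threads[core][0], core, dst)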

US Pat. No. 10,339,022

NON-INTRUSIVE MONITORING AND CONTROL OF INTEGRATED CIRCUITS

ALTERA CORPORATION, San ...

1. A tangible, non-transitory, machine-readable medium, comprising machine-readable instructions to:establish a connection to an electrical device that comprises a programmable logic device;
send an incremental configuration data over the connection, wherein the incremental configuration data comprises a trigger condition, wherein the incremental configuration data is based at least in part on a first configuration data previously loaded on the programmable logic device, and wherein the incremental configuration data is configured to cause the programmable logic device of the electrical device to generate debug information upon meeting the trigger condition; and
receive the debug information from the programmable logic device over the connection.

US Pat. No. 10,339,021

METHOD AND APPARATUS FOR OPERATING HYBRID STORAGE DEVICES

EMC IP Holding Company LL...

1. A method for operating a hybrid storage device, the hybrid storage device including a storage device of a first type and a storage device of a second type different from the first type, the method comprising:synchronously writing data into the storage device of the first type and the storage device of the second type, wherein the hybrid storage device further includes a volatile memory;
in response to a failure of the synchronous writing, transmitting, by the volatile memory, information indicating a success of writing the data to a host;
rewriting the data in the storage device of the first type;
in response to the failure of the synchronous writing, updating metadata in the storage device of the first type;
writing the data in the storage device of the first type using the data written in the storage device of the second type; and
updating again the metadata in the storage device of the first type.

US Pat. No. 10,339,020

OBJECT STORAGE SYSTEM, CONTROLLER AND STORAGE MEDIUM

Toshiba Memory Corporatio...

1. An object storage system configured to store a key and a value in association with each other, the object storage system comprising:a first storage region in which the value is stored;
a second storage region in which first information and second information are stored, the first information being used for managing an association between the key and a storage position of the value, the second information being used for managing a position of a defective storage area in the first storage region; and
a controller configured to control the first storage region and the second storage region, wherein the controller comprises a write processor configured to:
determine whether there is a defective storage area in a storage area reserved in the first storage region as a write area for a write value or not based on the second information;
execute, when determining that there is a defective storage area, write processing of writing the write value in the first storage region by arranging the write value for an area other than the defective storage area in the storage area reserved in the first storage region to avoid the defective storage area; and
execute, when determining that there is no defective storage area, write processing of writing the write value in the first storage region by arranging the write value for the entire storage area reserved in the first storage region.

US Pat. No. 10,339,019

PACKET CAPTURING SYSTEM, PACKET CAPTURING APPARATUS AND METHOD

FUJITSU LIMITED, Kawasak...

13. A method of capturing a plurality of packets through a network, the method comprising:storing, by a capturing apparatus coupled to the network, into a storage device, a first mirror packet which is generated by mirroring a first packet transmitted in the network;
determining whether another capturing apparatus is in an operation state or a non-operation state, the another capturing apparatus being coupled to the network and storing the first mirror packet into another storage device while the another capturing apparatus is in the operation state;
deleting by the capturing apparatus, when the capturing apparatus determines the another capturing apparatus is in the operation state, the first mirror packet stored in the storage device; and
storing into the another storage device, when the capturing apparatus determines the another capturing apparatus is in the non-operation state, a second mirror packet generated by mirroring a second packet transmitted in the network, while maintaining the first mirror packet stored in the storage device.

US Pat. No. 10,339,018

REDUNDANCY DEVICE, REDUNDANCY SYSTEM, AND REDUNDANCY METHOD

Yokogawa Electric Corpora...

1. A redundancy device which is configured to communicate with a redundancy opposite device and perform a redundancy execution, the redundancy device comprising:receivers configured to receive individually HB signals transmitted from the redundancy opposite device;
a calculator configured to calculate a number of normal communication paths among communication paths of the HB signals based on a reception result of the receivers;
a comparator configured to compare a calculation result of the calculator with a predetermined threshold value; and
a changer configured to change the redundancy device from a standby state to an operating state, or change the redundancy device from the standby state to a not-standby state in which the redundancy execution is released, based on the calculation result of the calculator and a comparison result of the comparator.
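
The state change reduces to counting healthy heartbeat paths and comparing the count against the threshold. A minimal sketch; the state names and the direction of the decision are assumptions, since the claim leaves both outcomes open.

    def next_state(hb_received, threshold):
        """hb_received: list of booleans, one per HB communication path."""
        normal_paths = sum(1 for ok in hb_received if ok)
        # Assumed policy: too few healthy paths means the opposite device is presumed down,
        # so the standby device takes over; otherwise it remains in standby.
        return "operating" if normal_paths < threshold else "standby"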

US Pat. No. 10,339,017

METHODS AND SYSTEMS FOR USING A WRITE CACHE IN A STORAGE SYSTEM

NETAPP, INC., Sunnyvale,...

8. A non-transitory machine readable medium having stored thereon instructions for performing a method comprising machine executable code which when executed by at least one machine, causes the machine to:store received data from a client system in a first write cache, the first write cache comprising volatile storage;
transfer the received data from the first write cache to a second write cache in response to an input/output (I/O) request size exceeding a threshold value, the second write cache comprising nonvolatile storage;
update a recovery control block with a location of the received data stored in the second write cache and an entry in a tracking structure used to track valid data stored at the second write cache; and
transfer, in response to a power failure, the recovery control block to a persistent storage device instead of all data contents of the first write cache.
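
A minimal sketch of the tiering rule: writes land in the volatile cache, large requests spill to the nonvolatile cache and are recorded in a recovery control block, and only that block is persisted on power failure. All structure names are illustrative.

    def handle_write(data, io_size, threshold, volatile_cache, nv_cache, recovery_block, tracker):
        volatile_cache.append(data)
        if io_size > threshold:
            volatile_cache.remove(data)       # spill to nonvolatile storage
            location = len(nv_cache)
            nv_cache.append(data)
            tracker[location] = True          # tracking-structure entry for valid NV data
            recovery_block.append({"location": location, "tracker_entry": location})

    def on_power_failure(recovery_block, persist):
        persist(recovery_block)               # persist the small control block, not the volatile cache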

US Pat. No. 10,339,015

MAINTAINING SYSTEM RELIABILITY IN A CPU WITH CO-PROCESSORS

INTERNATIONAL BUSINESS MA...

1. A computer program product for maintaining reliability of a computer having a Central Processing Unit (CPU) and multiple co-processors, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising:launching a same set of operations in each of an original co-processor and a redundant co-processor, from among the multiple co-processors, to obtain respective execution signatures from the original co-processor and the redundant co-processor;
detecting an error in an execution of the set of operations by the original co-processor, by comparing the respective execution signatures;
designating the execution of the set of operations by the original co-processor as error-free and committing a result of the execution, responsive to identifying a match between the respective execution signatures; and
performing an error recovery operation that replays the set of operations by the original co-processor and the redundant co-processor, responsive to identifying a mismatch between the respective execution signatures.
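
A minimal sketch of the lockstep check, assuming the execution signature is any comparable value returned after running the operations on each co-processor.

    def run_with_redundancy(ops, run_original, run_redundant, commit, replay):
        """run_original/run_redundant: callables that execute ops and return a signature (assumed)."""
        sig_a = run_original(ops)
        sig_b = run_redundant(ops)
        if sig_a == sig_b:
            commit(sig_a)     # signatures match: execution is error-free, commit the result
        else:
            replay(ops)       # mismatch: error recovery replays the set of operations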

US Pat. No. 10,339,014

QUERY OPTIMIZED DISTRIBUTED LEDGER SYSTEM

McAfee, LLC, Santa Clara...

1. A method for indexing a distributed ledger, the method comprising:receiving, with a hardware processor of a data node, a first snapshot of transaction data, wherein the first snapshot of transaction data is data added to the distributed ledger that has not been included in an original master table or an original index of the distributed ledger;
identifying, with a hardware processor of a data node, attributes of the transaction data of the first snapshot;
verifying, with the hardware processor, the first snapshot;
copying, with a hardware processor of a data node, the attributes of the transaction data of the first snapshot to a first master table;
constructing, with a hardware processor of a data node, a first index for a first attribute of the transaction data of the first snapshot;
publishing, with a hardware processor of a data node, completion of the first index for the first attribute of the transaction data of the first snapshot;
concatenating the original master table and the first master table;
concatenating the original index and the first index;
receiving a request to query the distributed ledger transaction data; and
processing the query on the indexed attributes.
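
A minimal sketch of the incremental indexing flow: attributes of the new snapshot go into a delta master table and a delta index, which are then concatenated with the originals before queries are served. Data layouts and names are assumptions.

    def index_snapshot(snapshot, original_master, original_index, attribute):
        """snapshot: list of transaction-attribute dicts; original_index: value -> list of row numbers."""
        delta_master = list(snapshot)
        delta_index = {}
        for row, tx in enumerate(delta_master):
            delta_index.setdefault(tx[attribute], []).append(row + len(original_master))
        master = original_master + delta_master                       # concatenate master tables
        index = {k: original_index.get(k, []) + delta_index.get(k, [])
                 for k in set(original_index) | set(delta_index)}     # concatenate indexes
        return master, index

    def query(master, index, value):
        return [master[i] for i in index.get(value, [])]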

US Pat. No. 10,339,013

ACCELERATED RECOVERY AFTER A DATA DISASTER

International Business Ma...

1. A system for restoring data from a first system onto a second system comprising:at least one processor configured to:
receive via a network interface at the second system, a metadata file from the first system, and initialize a database on the second system based on the metadata file;
receive an image from the first system, via the network interface of the second system, and initiate restoration of the image on the second system, wherein the image includes information of the first system to be restored on the second system;
receive via the network interface and at the initialized database on the second system, during the restoration of the image on the second system, one or more log files from the first system indicating transactions performed on the first system during the restoration of the image on the second system and relating to the information to be restored; and
perform the transactions of the log files to synchronize the restored data on the second system with the first system, in response to completion of the restoration.

US Pat. No. 10,339,012

FAULT TOLERANT APPLICATION STORAGE VOLUMES FOR ENSURING APPLICATION AVAILABILITY AND PREVENTING DATA LOSS USING SUSPEND-RESUME TECHNIQUES

VMware, Inc., Palo Alto,...

1. A method for fault tolerant delivery of an application to a virtual machine (VM) being executed by a server in a remote desktop environment using application storage volumes, comprising:delivering the application to the VM by attaching a primary application storage volume (ASV) containing components of the application to the VM;
cloning the primary ASV to create a backup ASV;
executing the application on the VM from the primary ASV;
monitoring the primary ASV to detect failures;
detecting a failure of the primary ASV;
in response to the detecting the failure of the primary ASV, suspending execution of the application;
attaching the backup ASV to the VM; and
resuming the execution of the application from the backup ASV by redirecting operating system calls accessing the application to the backup ASV.

US Pat. No. 10,339,011

METHOD AND SYSTEM FOR IMPLEMENTING DATA LOSSLESS SYNTHETIC FULL BACKUPS

EMC IP Holding Company LL...

1. A method for archiving data, comprising:selecting a virtual machine (VM) executing on a first computing system;
identifying at least one virtual disk (VD) associated with the VM;
for each VD of the at least one VD:
obtaining a user-checkpoint tree (UCT) for the VD;
identifying, within the UCT, a set of user-checkpoint branches (UCBs) comprising an active UCB and at least one inactive UCB;
generating a VD image (VDI) based on the at least one inactive UCB and the active UCB; and
after generating the VDI for each VD of the at least one VD, to obtain at least one VDI:
generating, for the VM, a VM image (VMI) comprising the at least one VDI.

US Pat. No. 10,339,010

SYSTEMS AND METHODS FOR SYNCHRONIZATION OF BACKUP COPIES

1. A method providing a technical solution to the technical problem of how to efficiently create and maintain multiple backup copies without having to separately read the original data for each of the backup copies, the method comprising:(a) receiving, at a data processing engine running on a first electronic device from a second electronic device associated with original data storage, first data corresponding to a first version of original data stored in the original data storage;
(b) effecting creating, in primary backup storage at a location remote to the first electronic device based on the received first data, a primary backup copy of the first version of the original data;
(c) receiving, at the data processing engine running on the first electronic device from the primary backup storage, second data corresponding to the primary backup copy stored at the primary backup storage;
(d) effecting creating, in secondary backup storage at a second location remote to the first electronic device based on the received second data, a secondary backup copy of the first version of the original data;
(e) periodically, based on a first time interval,
(i) determining, by the data processing engine, whether the original data stored in the original data storage has been changed, and
(ii) if it is determined that the original data stored in the original data storage has been changed, automatically synchronizing, by the data processing engine based on data received from the original data storage, the primary backup copy stored in the primary backup storage to correspond to an updated version of the original data,
(iii) wherein this includes
(A) determining, by the data processing engine, that the original data stored in the original data storage has been changed to a second version, and
(B) based on determining that the original data stored in the original data storage has been changed to the second version, automatically synchronizing, by the data processing engine based on data received from the original data storage, the primary backup copy stored in the primary backup storage to correspond to the second version; and
(g) periodically, based on a second time interval,
(i) determining, by the data processing engine, whether the primary backup copy stored in the primary backup storage differs from the secondary backup copy stored in the secondary backup storage, and
(ii) if it is determined that the primary backup copy stored in the primary backup storage differs from the secondary backup copy stored in the secondary backup storage, automatically synchronizing, by the data processing engine based on differential data received from the primary backup storage, the secondary backup copy to correspond to the primary backup copy,
(iii) wherein this includes
(A) determining, by the data processing engine, that the primary backup copy stored in the primary backup storage differs from the secondary backup copy stored in the secondary backup storage, and
(B) based on determining that the primary backup copy stored in the primary backup storage differs from the secondary backup copy stored in the secondary backup storage, automatically synchronizing, by the data processing engine based on differential data received from the primary backup storage, the secondary backup copy to correspond to the second version;
(h) wherein, via performance of this method, the secondary backup copy stored in the secondary backup storage is periodically indirectly synchronized to data stored in the original data storage via the primary backup copy stored in the primary backup storage being both
(i) periodically synchronized, based on the first time interval, to data in the original data storage, and
(ii) periodically used to synchronize, based on the second time interval, the secondary backup copy stored in the secondary backup storage.
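
Stripped of the claim language, the method is two independent periodic loops: original to primary on the first interval, primary to secondary on the second, so the secondary copy is only ever synchronized indirectly. A minimal sketch; every object and method here is an assumed stand-in.

    import threading
    import time

    def sync_primary(original, primary):
        if original.version() != primary.version():       # the original data has changed
            primary.apply(original.read_changes())        # sync the primary backup copy

    def sync_secondary(primary, secondary):
        if primary.checksum() != secondary.checksum():    # the two backup copies differ
            secondary.apply(primary.read_diff(secondary)) # differential sync; the original is never read

    def _every(interval, fn, *args):
        while True:
            fn(*args)
            time.sleep(interval)

    def run(original, primary, secondary, first_interval, second_interval):
        threading.Thread(target=_every, args=(first_interval, sync_primary, original, primary), daemon=True).start()
        threading.Thread(target=_every, args=(second_interval, sync_secondary, primary, secondary), daemon=True).start()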

US Pat. No. 10,339,009

SYSTEM FOR FLAGGING DATA MODIFICATION DURING A VIRTUAL MACHINE BACKUP

International Business Ma...

6. A computer program product for virtual machine backup in a computer system, the computer system comprising:a processor unit arranged to run a hypervisor running one or more virtual machines;
a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag, wherein the image modification flag indicates whether a virtual machine being backed up has modified the cache line; and
a memory connected to the cache and arranged to store an image of at least one virtual machine;
the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to:
define a log in the memory;
in response to a determination that a virtual machine that modified the cache line is being backed up, set, in the cache, the image modification flag for the cache line modified by the virtual machine being backed up, wherein the image modification flag is not set if the cache line is modified by a virtual machine not being backed up; and
write only the memory address of the cache rows flagged with the image modification flag in the defined log.

US Pat. No. 10,339,008

DETERMINING TYPE OF BACKUP

Hewlett Packard Enterpris...

1. A system comprising:a physical processor; and
a non-transitory storage medium storing machine readable instructions executable on the physical processor to:
for each respective backup parameter of a set of backup parameters comprising backup parameters selected from among a parameter relating to an amount of storage space of a storage system, a parameter relating to a type of data to be backed up, and a parameter relating to an available resource of the system, the respective backup parameter having a respective predefined threshold for the respective backup parameter:
obtain a respective actual value of the respective backup parameter; and
determine a respective weightage for the respective backup parameter based on the respective predefined threshold and the respective actual value of the respective backup parameter, the respective weightage indicating a degree of contribution of the respective backup parameter for determining a type of backup to be executed in the system;
select the type of backup from a plurality of different types of backup based on the respective weightage and the respective actual value of each respective backup parameter of the set of backup parameters, wherein the machine-readable instructions are executable on the physical processor to select the type of backup by:
determining a respective weightage change value for each respective backup parameter of the set of backup parameters based on a respective maximum weightage, a respective minimum weightage, a respective predefined threshold, and a respective maximum value of the respective predefined threshold,
wherein the respective weightage change value is a value associated with a variation of the respective weightage with relation to an increase of the respective predefined threshold for the respective backup parameter; and
perform the selected type of backup to backup data from a source to the storage system.
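
A minimal sketch of the weightage-based selection: each parameter contributes a weightage derived from its actual value, its threshold, and a weightage change value, and the aggregate drives the choice of backup type. The linear scoring rule and the two-way full/incremental choice are assumptions; the claim leaves both abstract.

    def select_backup_type(params):
        """params: list of dicts with keys 'actual', 'threshold', 'max_w', 'min_w', 'max_threshold'."""
        score = 0.0
        for p in params:
            # Weightage change value: how the weightage varies as the threshold grows (assumed linear).
            w_change = (p["max_w"] - p["min_w"]) * p["threshold"] / p["max_threshold"]
            # Weightage: contribution scaled by how far the actual value sits beyond its threshold.
            weight = p["min_w"] + w_change * max(0.0, p["actual"] - p["threshold"]) / p["threshold"]
            score += weight
        return "full" if score > len(params) / 2 else "incremental"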

US Pat. No. 10,339,007

AGILE RE-ENGINEERING OF INFORMATION SYSTEMS

CA, Inc., Islandia, NY (...

1. A non-transitory computer-readable storage medium, with instructions stored thereon, which when executed by at least one processor of a computer, cause the computer to:receive, from a user interface utilized to select patterns, a selected pattern to be implemented for a service model that corresponds to a system comprising a set of information technology (IT) resources, wherein the selected pattern includes a group of configuration item configuration settings;
when the selected pattern is received for implementation, issue commands to one or more of the IT resources that correspond to configuration items included in the selected pattern to modify configuration item configuration settings based on the selected pattern;
when a configuration change resulting from the commands to modify configuration item configuration settings is identified, compare a current configuration of the service model to a previous configuration to identify modified configuration item configuration settings;
in response to determining, based on a performance indicator for the system, that the system performance is improved, store the identified modified configuration item configuration settings as a candidate pattern in a pattern database;
identify a performance indicator violation within a dataset of performance metric data for the system;
query the pattern database to retrieve the candidate pattern including the group of configuration item configuration settings;
instantiate the configuration item configuration settings to implement the retrieved candidate pattern; and
apply at least one performance metric to the IT resources to confirm the performance indicator violation has been resolved.

US Pat. No. 10,339,006

PROXYING SLICE ACCESS REQUESTS DURING A DATA EVACUATION

INTERNATIONAL BUSINESS MA...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:selecting a second storage unit based on a decentralized agreement module decision decided by a decentralized agreement module, wherein the decentralized agreement module receives a ranked scoring information request from a requestor with regards to a set of candidate storage unit resources and, for each of the candidate storage unit resources, the decentralized agreement module performs a deterministic function on a location identifier (ID) of the candidate storage unit resource or an asset ID of the ranked scoring information request;
initiating an evacuation of encoded data slices from a first storage unit to the second storage unit;
receiving, at the second storage unit, a checked write slice request from a requesting entity, the checked write slice request including a requested encoded data slice;
determining, at the second storage unit, that locally stored encoded data slices do not include the requested encoded data slice; and
generating, at the second storage unit, a response to include one or more of: a code associated with the checked write slice request, a name of the encoded data slice, or a revision level.

US Pat. No. 10,339,005

STRIPE MAPPING IN MEMORY

Micron Technology, Inc., ...

1. A method for stripe mapping, comprising:storing a first stripe map, wherein the first stripe map includes a number of stripe indexes to identify a number of stripes stored in a plurality of memory devices and a number of element identifiers to identify elements included in each of the number of stripes;
storing a second stripe map, wherein the second stripe map is an inverse stripe map of the first stripe map; and
performing a redundant array of independent disks (RAID) read error recovery operation using the second stripe map to identify a plurality of stripes that each include a bad element, wherein the RAID read error recovery operation corrects data in the bad element using parity data, moves the corrected data to a different element, and updates element identifiers of the plurality of stripes to include an identifier for the different element.
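
A minimal sketch of the forward and inverse stripe maps, and of how the inverse map finds every stripe that includes a bad element during RAID read error recovery. Structures and helper names are assumptions.

    def build_maps(stripes):
        """stripes: dict stripe_index -> list of element ids (the first stripe map)."""
        forward = dict(stripes)
        inverse = {}                          # element id -> stripe indexes (the inverse stripe map)
        for s, elements in stripes.items():
            for e in elements:
                inverse.setdefault(e, []).append(s)
        return forward, inverse

    def recover_bad_element(bad, spare, forward, inverse, rebuild_from_parity):
        for s in inverse.get(bad, []):                    # every stripe containing the bad element
            rebuild_from_parity(s, bad, spare)            # correct data via parity, move it to the spare
            forward[s] = [spare if e == bad else e for e in forward[s]]   # update element identifiers
        inverse[spare] = inverse.pop(bad, [])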

US Pat. No. 10,339,004

CONTROLLER AND OPERATION METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A controller within a memory system comprising:an initialization circuit suitable for initializing values and states of variable nodes and initializing values of check nodes;
a variable node update circuit suitable for updating the values and states of the variable nodes provided from the initialization circuit;
a check node update circuit suitable for updating the values of the check nodes based on the updated values and states of the variable nodes provided from the variable node update circuit; and
a syndrome check circuit suitable for deciding iteration of the operation of the variable node update circuit and the check node update circuit when the values of the check nodes provided from the check node update circuit are not all in a satisfied state,
wherein the variable node update circuit calculates reliability values of the variable nodes and a reference flip value based on a result of a previous iteration, and
wherein the variable node update circuit updates the values and states of the variable nodes based on the reference flip value and the reliability values and states of the variable nodes.
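
The claim describes a bit-flipping style iterative decoder: variable nodes carry values and states, check nodes are recomputed each pass, and a syndrome check decides whether another iteration is needed. A heavily simplified sketch over a parity-check matrix; the flip rule here is a generic one, not the patent's reliability calculation.

    def decode(H, hard_bits, max_iters=20):
        """H: list of checks, each a list of variable-node indexes; hard_bits: initial 0/1 decisions."""
        v = list(hard_bits)
        for _ in range(max_iters):
            checks = [sum(v[i] for i in row) % 2 for row in H]   # check node update
            if not any(checks):
                return v                                          # syndrome satisfied, stop iterating
            fails = [0] * len(v)                                  # variable node update: count failed checks
            for c, row in zip(checks, H):
                if c:
                    for i in row:
                        fails[i] += 1
            reference_flip = max(fails)                           # stand-in for the reference flip value
            for i, f in enumerate(fails):
                if f and f >= reference_flip:
                    v[i] ^= 1                                     # flip the least reliable bits
        return v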

US Pat. No. 10,339,003

PROCESSING DATA ACCESS TRANSACTIONS IN A DISPERSED STORAGE NETWORK USING SOURCE REVISION INDICATORS

INTERNATIONAL BUSINESS MA...

1. A method comprises:sending, by a dispersed storage (DS) processing unit of a dispersed storage network (DSN), a set of data access requests to a set of storage units of the DSN, wherein the set of data access requests is regarding a data access transaction involving a set of encoded data slices, wherein a data segment of a data object is dispersed storage error encoded into the set of encoded data slices, and wherein the set of storage units stores, or is to store, the set of encoded data slices;
receiving, by the DS processing unit from each of at least some storage units of the set of storage units, a storage-revision indicator, wherein the storage-revision indicator includes a content-revision field, a delete-counter field, and a contest-counter field, wherein the content-revision uniquely identifies content of an encoded data slice of the set of encoded data slices, wherein the delete-counter indicates a number of times the encoded data slice has been deleted, and wherein the contest-counter indicates a number of data access contests the encoded data slice has participated in;
generating, by the DS processing unit, an anticipated storage-revision indicator for the data access transaction based on a current revision level of the set of encoded data slices and based on a data access type of the data access transaction;
comparing, by the DS processing unit, the anticipated storage-revision indicator with the storage-revision indicators received from the at least some storage units; and
when a threshold number of the storage-revision indicators received from the at least some storage units substantially match the anticipated storage-revision indicator, executing, by the DS processing unit, the data access transaction.

US Pat. No. 10,339,002

CATASTROPHIC DATA LOSS AVOIDANCE

VMware, Inc., Palo Alto,...

1. A computer-implemented method for recovering data that has been divided into a plurality of portions, the method comprising:detecting an indication of a loss of at least one portion of the plurality of portions of the data, wherein the data is recoverable using a subset of the plurality of portions of the data stored in multiple storage devices;
copying remaining portions of the data not indicated as being lost to backup storage devices in response to the detected indication; and
after the copying of the remaining portions of the data is initiated, recovering the data using the remaining portions of the data,
wherein the copying of the remaining portions of the data to the backup storage devices results in a risk reduction of further loss of the portions of the data.

US Pat. No. 10,339,001

METHOD AND SYSTEM FOR IMPROVING FLASH STORAGE UTILIZATION BY PREDICTING BAD M-PAGES

EMC IP Holding Company LL...

1. A method for managing persistent storage, the method comprising:issuing, by a control module, a proactive read request to a page in the persistent storage;
receiving, in response to the proactive read request, a bit error value (BEV) for data stored on the page, wherein the BEV is based on a number of incorrect bits in the data;
obtaining, by the control module and based on at least one parameter associated with the page, a BEV threshold (T); and
based on a determination that the BEV is greater than T, setting an m-page as non-allocatable for future operations, wherein the m-page is a set of pages in the persistent storage and the page is in the set of pages.
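
A minimal sketch of the proactive-read check: read a page, compare its bit error value against a threshold derived from the page's parameters, and retire the containing m-page when the threshold is exceeded. All helper names are assumptions.

    def check_page(page, read_page, bev_threshold_for, m_page_of, mark_non_allocatable):
        data, bit_error_value = read_page(page)       # proactive read returns data and its BEV
        threshold = bev_threshold_for(page)           # T depends on parameters associated with the page
        if bit_error_value > threshold:
            # Retire the whole m-page (the set of pages containing this page) for future operations.
            mark_non_allocatable(m_page_of(page))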

US Pat. No. 10,339,000

STORAGE SYSTEM AND METHOD FOR REDUCING XOR RECOVERY TIME BY EXCLUDING INVALID DATA FROM XOR PARITY

SanDisk Technologies LLC,...

1. A storage system comprising:a memory; and
a controller in communication with the memory, wherein the controller is configured to:
generate a first exclusive-or (XOR) parity for pages of data written to the memory, wherein the pages of data are protected by a data protection scheme;
after the first XOR parity has been generated, determine whether a percentage of errors for the pages is above a threshold;
based on a determination that the percentage of errors for the pages is below the threshold:
determine that the data protection scheme cannot correct at least one error in a page; and
use the first XOR parity to recover the page that contains the error;
based on a determination that the percentage of errors for the pages is above the threshold:
generate a second XOR parity for the pages of data that excludes the at least one page of invalid data, wherein the second XOR parity is generated by performing an XOR operation using the first XOR parity and the at least one page of invalid data as inputs;
determine that the data protection scheme cannot correct an error in a page; and
use the second XOR parity to recover the page that contains the error, wherein using the second XOR parity to recover the page that contains the error is faster than using the first XOR parity to recover the page that contains the error.
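
The key step is that XORing the first parity with an invalid page cancels that page's contribution (X ^ X = 0), so recovery no longer has to read it. A minimal sketch over equal-length byte strings.

    def xor_pages(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def build_first_parity(pages):
        parity = bytes(len(pages[0]))
        for p in pages:
            parity = xor_pages(parity, p)
        return parity

    def exclude_invalid(first_parity, invalid_page):
        return xor_pages(first_parity, invalid_page)   # second parity, with the invalid page removed

    def recover(parity, remaining_pages):
        page = parity                                  # XOR the parity with every page still covered
        for p in remaining_pages:
            page = xor_pages(page, p)
        return page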

US Pat. No. 10,338,999

CONFIRMING MEMORY MARKS INDICATING AN ERROR IN COMPUTER MEMORY

International Business Ma...

1. A method of confirming memory marks indicating an error in computer memory, the method comprising:detecting, by memory logic responsive to a memory read operation, an error in a memory location;
marking, by the memory logic in an entry in a hardware mark table, the memory location as containing the error, the entry including one or more parameters for correcting the error; and
responsive to detecting the error in the memory location, retrying, by the memory logic, the memory read operation, including:
responsive to again detecting the error in the memory location, determining whether the error is correctable at the memory location using the parameters included in the entry; and
if the error is correctable at the memory location using the one or more parameters included in the entry, confirming the error in the entry of the hardware mark table.

US Pat. No. 10,338,998

METHODS FOR PRIORITY WRITES IN AN SSD (SOLID STATE DISK) SYSTEM AND APPARATUSES USING THE SAME

SHANNON SYSTEMS LTD., Sh...

1. A method for priority writes in an SSD (Solid State Disk) system, performed by a processing unit, comprising:receiving a priority write command instructing the processing unit to write first data whose length is less than a page length in a storage unit;
directing a buffer controller to store the first data from the next available sub-region of a buffer, which is associated with a priority write, in a first direction;
receiving a non-priority write command instructing the processing unit to write second data whose length is less than a page length in the storage unit; and
directing the buffer controller to store the second data from the next available sub-region of the buffer, which is associated with a non-priority write, in a second direction.
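
A minimal sketch of the two-direction buffering: priority sub-page writes fill the buffer from one end and non-priority writes from the other. The concrete buffer layout is an assumed realization of the claimed first and second directions.

    class DualDirectionBuffer:
        def __init__(self, num_subregions):
            self.slots = [None] * num_subregions
            self.front = 0                      # next available sub-region for priority writes
            self.back = num_subregions - 1      # next available sub-region for non-priority writes

        def store_priority(self, data):         # first direction
            self.slots[self.front] = data
            self.front += 1

        def store_non_priority(self, data):     # second direction
            self.slots[self.back] = data
            self.back -= 1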

US Pat. No. 10,338,997

APPARATUSES AND METHODS FOR FIXING A LOGIC LEVEL OF AN INTERNAL SIGNAL LINE

Micron Technology, Inc., ...

1. A method comprising:enabling a register of a semiconductor device, wherein the enabled register is configured to provide data bus inversion information or data mask information to a data terminal of the semiconductor device;
providing, from the register of the semiconductor device, a first control signal to a control circuit of the semiconductor device, the first control signal including information indicative of an operation mode of an error check operation being enabled and an operation mode of a data bus inversion operation being disabled; and
responsive to the first control signal, providing a voltage level of a signal line coupled to an external terminal of the semiconductor device at a constant level, the external terminal configured to receive the data bus inversion information or the data mask information.

US Pat. No. 10,338,996

PIPELINED DECODER AND METHOD FOR CONDITIONAL STORAGE

NXP USA, Inc., Austin, T...

1. A receiver for a wireless communication system, the receiver comprising:first, second and third memory locations, each being arranged to store information associated with one received block of a transmission signal;
a Log-Likelihood Ratio, LLR, computing unit arranged to provide soft bits based on a received block of the transmission signal;
a Hybrid Automatic Repeat Request, HARQ, combining unit operably coupled to the LLR computation unit and to the first, second and third memory locations, the HARQ combining unit being arranged to:
combine the soft bits with a retransmission of the received block; and,
store the combined soft bits into a cyclically selected one of the first, second and third memory locations;
a Forward Error Correction, FEC, decoding unit operably coupled to the first, second and third memory locations, the FEC decoding unit being arranged to:
decode combined soft bits into hard bits based on a previous cyclically selected memory location of the first, second and third memory locations; and,
provide a cyclic redundancy check, CRC, value of the hard bits;
a processing unit, such as a processor, operably coupled to the FEC decoding unit and the first, second and third memory locations, the processing unit being arranged to store in an external memory:
the combined soft bits of the previously cyclically selected memory location when the CRC value is representative of a CRC failure; and,
the hard bits when the CRC value is representative of a CRC success, wherein storage of the combined soft bits and the hard bits is deferred until after the CRC failure or CRC success, respectively, reducing the amount of memory required for the soft bits and the hard bits,
wherein, in operation,
the HARQ combining unit and FEC decoding unit concurrently operate on soft bits associated with different blocks of the transmission signal.

US Pat. No. 10,338,994

PREDICTING AND ADJUSTING COMPUTER FUNCTIONALITY TO AVOID FAILURES

SAS INSTITUTE INC., Cary...

1. A system comprising:a processing device; and
a memory device including instructions that are executable by the processing device for causing the processing device to:
receive prediction data representing a prediction, wherein the prediction data forms a time series that spans a future time-period;
receive a plurality of files defining abnormal data-point patterns to be identified in the prediction data, wherein each file in the plurality of files includes customizable program-code for identifying a respective abnormal pattern of data-point values in the prediction data;
automatically identify a plurality of abnormal data-point patterns in the prediction data by interpreting and executing the customizable program-code in the plurality of files;
automatically determine a plurality of override processes that correspond to the plurality of abnormal data-point patterns in response to identifying the plurality of abnormal data-point patterns in the prediction data, wherein the plurality of override processes are automatically determined using correlations between the plurality of abnormal data-point patterns and the plurality of override processes, and wherein an override process involves replacing a value of at least one data point in the prediction data with another value that is configured to mitigate an impact of an abnormal data-point pattern on the prediction;
automatically determine that the plurality of override processes are to be applied to the prediction data in a particular order;
automatically generate a corrected version of the prediction data in response to determining the plurality of override processes, wherein the corrected version of the prediction data is generated by executing the plurality of override processes in the particular order; and
automatically adjust one or more computer parameters based on the corrected version of the prediction data.

US Pat. No. 10,338,993

ANALYSIS OF FAILURES IN COMBINATORIAL TEST SUITE

SAS Institute Inc., Cary...

1. A computer-program product tangibly embodied in a non-transitory machine-readable storage medium, the computer-program product including instructions operable to cause a computing device to:generate a test suite that provides test cases for testing a system comprising different components, wherein each element of a test case of the test suite is a test condition for testing one of categorical factors for the system, each of the categorical factors representing one of the different components, and wherein a test condition in the test suite comprises one of different levels representing different options assigned to a categorical factor for the system;
receive a set of input weights for one or more levels of the test suite;
receive a failure indication indicating a test conducted according to the test cases failed;
in response to receiving the failure indication, determine a plurality of cause indicators based on the set of input weights and any commonalities between test conditions of any failed test cases of the test suite that resulted in a respective failed test outcome, wherein each cause indicator represents a likelihood that a test condition or combination of test conditions of the any failed test cases caused the respective failed test outcome;
identify, based on comparing the plurality of cause indicators, a most likely potential cause for a potential failure of the system; and
output an indication of the most likely potential cause for the potential failure of the system.

US Pat. No. 10,338,992

SEMICONDUCTOR APPARATUS AND DISPLAY APPARATUS

Japan Display Inc., Toky...

1. A semiconductor apparatus comprising:a plurality of semiconductor devices that includes a first semiconductor device including a first anomaly detection circuit and a second semiconductor device including a second anomaly detection circuit,
wherein the first anomaly detection circuit is configured to detect anomalies in a plurality of first functions implemented in the first semiconductor device and output a first anomaly detection signal to the second anomaly detection circuit and a device outside the semiconductor apparatus,
wherein the second anomaly detection circuit is configured to detect anomalies in a plurality of second functions implemented in the second semiconductor device and output a second anomaly detection signal to the first anomaly detection circuit,
wherein the first anomaly detection circuit is configured to generate the first anomaly detection signal when the first anomaly detection circuit detects (a) an anomaly in at least one of the first functions, (b) the second anomaly detection signal that is output from the second anomaly detection circuit, or (c) both, and
wherein the second anomaly detection circuit is configured to generate the second anomaly detection signal when the second anomaly detection circuit detects an anomaly in at least one of the second functions.

US Pat. No. 10,338,991

CLOUD-BASED RECOVERY SYSTEM

Microsoft Technology Lice...

1. A computing system, comprising:a communication system configured to:
receive a diagnostic data package from a client computing device that is remote from the computing system, the diagnostic data package including:
a problem scenario identifier that identifies a problem scenario indicative of a problem associated with the client computing device, and
first problem-specific diagnostic data that is obtained from the client computing device and specific to the problem associated with the client computing device;
a state-based diagnostic system configured to:
identify a problem-specific diagnostic analyzer, that is specific to the problem associated with the client computing device, based on mapping information that maps the problem scenario to the problem-specific diagnostic analyzer; and
run the problem-specific diagnostic analyzer to:
obtain second problem-specific diagnostic data from a server environment in which the computing system is deployed, the second problem-specific diagnostic data being specific to the problem associated with the client computing device; and
aggregate the first problem-specific diagnostic data and the second problem-specific diagnostic data to obtain aggregated data;
data analysis logic configured to:
identify an estimated root cause for the problem scenario based on the aggregated data; and
identify a suggested recovery action, based on the estimated root cause,
wherein the communication system is configured to communicate the suggested recovery action to the client computing device.

US Pat. No. 10,338,990

CULPRIT MODULE DETECTION AND SIGNATURE BACK TRACE GENERATION

VMware, Inc., Palo Alto,...

1. A computer-implemented method for identifying a culprit module and for generating a signature back trace corresponding to a symptom of a crash of a computer system, said method comprising:receiving a core dump at a crash analyzer, wherein said core dump corresponds to said crash of said computer system;
generating, at said crash analyzer, an essential stack of functions corresponding to said crash of said computer system;
determining a tag sequence and a tag depth corresponding to said essential stack of functions, at said crash analyzer;
deriving a list of permissible tag permutations corresponding to said computer system;
utilizing said tag sequence and said tag depth in combination with said list of permissible tag permutations, by said crash analyzer, to identify a culprit module responsible for said computer crash; and
generating a signature back trace from said essential stack of functions including at least one function corresponding to said culprit module, and providing said signature back trace as an output from said crash analyzer, wherein said signature back trace pertains to a symptom of said crash of said computer system.

US Pat. No. 10,338,989

DATA TUPLE TESTING AND ROUTING FOR A STREAMING APPLICATION

International Business Ma...

1. An apparatus comprising:at least one processor;
a memory coupled to the at least one processor; and
a streaming application residing in the memory and executed by the at least one processor, the streaming application comprising a flow graph that includes a plurality of operators that process a plurality of data tuples, wherein the plurality of operators comprises:
a plurality of parallel test operators that test in parallel the plurality of data tuples; and
a tuple testing and routing operator that routes the plurality of data tuples to the plurality of parallel test operators, receives feedback from the plurality of parallel test operators regarding the results of testing the plurality of data tuples, and routes a first selected data tuple from the plurality of data tuples to a first operator when the first selected data tuple passes the plurality of parallel test operators according to a specified pass threshold;
wherein the streaming application is executed under control of a streams manager and is configured by the streams manager according to a specified routing method that determines a number of the plurality of parallel test operators that operate in parallel on each of the plurality of data tuples.

US Pat. No. 10,338,988

STATUS MONITORING SYSTEM AND METHOD

EMC IP Holding Company LL...

1. A user-configurable decoder circuit, associated with a controlled subcomponent, configured to:receive a cumulatively-encoded status signal, wherein the cumulatively-encoded status signal includes a warning of an upcoming change in power output of at least one monitored subcomponent of a plurality of monitored subcomponents and an amplitude of the cumulatively-encoded status signal indicates a number of monitored subcomponents with the warning of an upcoming change in power output;
compare the cumulatively-encoded status signal to a user-definable threshold that defines a subcomponent policy for the controlled subcomponent, wherein the amplitude of the user-definable threshold is based upon, at least in part, a threshold number of monitored subcomponents asserting the warning of an upcoming change in power output; and
effectuate a procedure on the controlled subcomponent based, at least in part, upon the comparison of the cumulatively-encoded status signal and the user-definable threshold, wherein effectuating the procedure on the controlled subcomponent includes reducing a power demand of the controlled subcomponent prior to the upcoming change in the power output of the at least one monitored subcomponent based, at least in part, upon the subcomponent policy defined for the controlled subcomponent.

US Pat. No. 10,338,987

TESTING MODULE COMPATIBILITY

Dell Products LP, Round ...

1. A method for checking support and compatibility of a module without inserting the module into one of one or more empty slots of a chassis comprising:obtaining a platform specification and a configuration of the chassis;
receiving information about the module from a Near Field Communication (NFC) tag coupled to the module;
analyzing the information about the module against the platform specification and the chassis configuration;
based on the analysis, determining that one of a plurality of conditions exists, wherein the plurality of conditions comprise:
a first condition exists when the module will not be supported according to the platform specification;
a second condition exists when the module will be supported according to the platform specification and there are no empty slots for which the module will be compatible with the chassis configuration; and
a third condition exists when the module will be supported according to the platform specification and there is at least one empty slot for which the module will be compatible with the chassis configuration; and
generating an indication, perceptible to a user, of a determined condition to allow the user to decide whether to insert the module.

US Pat. No. 10,338,986

SYSTEMS AND METHODS FOR CORRELATING ERRORS TO PROCESSING STEPS AND DATA RECORDS TO FACILITATE UNDERSTANDING OF ERRORS

Microsoft Technology Lice...

1. A system comprising:a processor and memory; and
machine readable instructions, when executed by the processor and memory, configured to:
receive one or more commands from a computer program file;
assign a first set of identifiers to different portions of the computer program file to link errors occurring during execution of one or more of a plurality of processing steps to corresponding portions of the computer program file;
execute the plurality of processing steps to process data in a data processing system distributed over a plurality of nodes in a cluster, wherein each of the nodes is a computing device;
compose a graph including vertices representing the plurality of processing steps;
assign a second set of identifiers to the vertices;
collect, from the plurality of nodes, in response to an error occurring on executing a first processing step of the processing steps on the plurality of nodes, information about the error stored on the plurality of nodes, the information about the error including a first identifier associated with one of the vertices representing the first processing step in the graph;
process the information about the error from the plurality of nodes to correlate the error to the first processing step based on the first identifier that is associated with the first processing step and that is included in the information about the error stored on the plurality of nodes;
generate, based on the processed information, correlation between the error and the first processing step, wherein the correlation between the error and the first processing step indicates a cause and a location of the error; and
in response to one or more of the plurality of processing steps being modified by reordering, consolidating, or discarding the one or more steps prior to executing the plurality of processing steps, retain identifiers of the one or more processing steps being modified with the modified one or more processing steps,
wherein the retained identifiers are used to correlate one or more errors to the one or more processing steps in the event of the modification.

US Pat. No. 10,338,985

INFORMATION PROCESSING DEVICE, EXTERNAL STORAGE DEVICE, HOST DEVICE, RELAY DEVICE, CONTROL PROGRAM, AND CONTROL METHOD OF INFORMATION PROCESSING DEVICE

Toshiba Memory Corporatio...

1. An information processing system comprising:a host device and a storage device coupled with the host device;
the storage device including:
a nonvolatile memory including a plurality of blocks; and
a first controller configured to
control the nonvolatile memory,
determine whether a data write operation to the nonvolatile memory is prohibited based on a first value and a first threshold value, the first value being a value of a number of free blocks, the first threshold value corresponding to the first value, and
send, when determining that the data write operation to the nonvolatile memory is prohibited, information indicating that the data write operation to the nonvolatile memory is prohibited;
the host device being connectable to a display, the host device including a second controller, the second controller configured to
acquire a second value from the storage device, the second value being at least one value of a plurality of pieces of statistical information,
cause the display to show a certain message when the acquired second value exceeds a second threshold value, the second threshold value corresponding to the second value, and
recognize the storage device as a read only device that supports only a read operation of read and write operations of the nonvolatile memory when receiving the information.

US Pat. No. 10,338,984

STORAGE CONTROL APPARATUS, STORAGE APPARATUS, AND STORAGE CONTROL METHOD

SONY CORPORATION, Tokyo ...

1. A storage control apparatus, comprising:circuitry configured to determine a unit-of-storage of a memory cell for a non-volatile memory as suspected of having a defect,
wherein the unit-of-storage is determined as suspected of having the defect based on a number of errors in one of a reset operation or a set operation of the non-volatile memory that exceeds a first threshold value and based on a total value of a number of errors in the reset operation and a number of errors in the set operation that exceeds a second threshold value.

US Pat. No. 10,338,983

METHOD AND SYSTEM FOR ONLINE PROGRAM/ERASE COUNT ESTIMATION

EMC IP Holding Company LL...

1. A method for managing persistent storage, the method comprising:selecting a sample set of physical addresses in a solid state memory module (SSMM), wherein the sample set of physical addresses is associated with a region in the SSMM;
performing a garbage collection operation on the sample set of physical addresses;
after the garbage collection operation, issuing a write request to the sample set of physical addresses;
after issuing the write request, issuing a read request to the sample set of physical addresses to obtain a copy of data stored in the sample set of physical addresses;
determining an error rate in the copy of the data stored using at least one selected from a group consisting of an Error Correction Code (ECC) codeword and known data in the write request;
determining a calculated P/E cycle value for the SSMM using at least the error rate; and
updating an in-memory data structure in a control module with the calculated P/E cycle value.
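
A minimal sketch of the estimation step: after garbage collection, write known data to the sampled addresses, read it back, measure the error rate, and map that rate to an estimated P/E cycle value. The rate-to-cycles calibration is a stand-in; the claim does not fix one.

    def estimate_pe_cycles(addresses, write, read, known_pattern, rate_to_cycles):
        """write/read: assumed SSMM accessors; rate_to_cycles: assumed calibration curve."""
        for a in addresses:
            write(a, known_pattern)
        errors = total = 0
        for a in addresses:
            data = read(a)
            errors += sum(b1 != b2 for b1, b2 in zip(data, known_pattern))
            total += len(known_pattern)
        error_rate = errors / total if total else 0.0
        return rate_to_cycles(error_rate)        # calculated P/E cycle value for the sampled region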

US Pat. No. 10,338,982

HYBRID AND HIERARCHICAL OUTLIER DETECTION SYSTEM AND METHOD FOR LARGE SCALE DATA PROTECTION

International Business Ma...

1. A method comprising:receiving metadata associated with one or more data backup jobs performed on one or more storage devices, wherein the metadata comprises univariate time series data for each variable of a multivariate time series, and the multivariate time series comprises different variables that exhibit different characteristics over time; and
decreasing likelihood of a failure in data protection involving the one or more data backup jobs by:
for each variable of the multivariate time series:
selecting, from different anomaly detection models with different performance costs, an anomaly detection model suitable for the variable based on one or more characteristics exhibited by corresponding univariate time series data for the variable and covariations and interactions between the variable and at least one other variable of the multivariate time series; and
detecting an anomaly on the variable utilizing the anomaly detection model selected for the variable; and
based on each anomaly detection model selected for each variable of the multivariate time series,
determining whether the multivariate time series is anomalous at a particular time point, and generating data indicative of whether the multivariate time series is anomalous at the particular time point.
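
A minimal sketch of the per-variable model selection and the joint decision; the detector interface, the selection heuristic, and the joint rule are all illustrative assumptions.

    def anomalous_at(multivariate, models, select_model, t):
        """multivariate: dict variable -> list of (time, value) pairs.
        models: dict model name -> detector factory; select_model: assumed heuristic that weighs
        the series' characteristics and its covariation with the other variables."""
        flagged = {}
        for var, series in multivariate.items():
            others = {k: v for k, v in multivariate.items() if k != var}
            detector = models[select_model(series, others)]()   # cheaper model for simpler series
            flagged[var] = detector.anomalous_times(series)     # assumed detector API
        # Assumed joint rule: the multivariate series is anomalous at time t when
        # more than one of its variables is flagged there.
        return sum(t in times for times in flagged.values()) > 1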

US Pat. No. 10,338,981

SYSTEMS AND METHODS TO FACILITATE INFRASTRUCTURE INSTALLATION CHECKS AND CORRECTIONS IN A DISTRIBUTED ENVIRONMENT

VMware, Inc., Palo Alto, ...

1. An apparatus comprising:a first virtual appliance including a first management endpoint, the first virtual appliance to organize tasks to be executed to install a computing infrastructure; and
a first component server including a first management agent to communicate with the first management endpoint, the first virtual appliance to assign a first role to the first component server and to determine a subset of prerequisites associated with the first role, the subset of prerequisites selected from a plurality of prerequisites based on an applicability of the subset of prerequisites to the first role, each of the subset of prerequisites associated with an error correction script, the first component server to determine whether the first component server satisfies the subset of prerequisites associated with the first role, the first component server to address an error when the first component server is determined not to satisfy at least one of the subset of prerequisites by executing the error correction script associated with the at least one of the subset of prerequisites.
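A compact Python sketch of the prerequisite-selection and self-correction loop on a component server; Prerequisite, check, and correction_script are hypothetical stand-ins for whatever artifacts the virtual appliance actually distributes.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Prerequisite:
    name: str
    applies_to_roles: set                   # roles for which this prerequisite is relevant
    check: Callable[[], bool]               # returns True when the server satisfies it
    correction_script: Callable[[], None]   # executed when the check fails

def check_and_correct(role: str, prerequisites: List[Prerequisite]) -> Dict[str, bool]:
    """Select the subset of prerequisites applicable to the assigned role, verify each
    one on the component server, and run the associated error correction script for
    any prerequisite that is not satisfied."""
    results = {}
    for prereq in (p for p in prerequisites if role in p.applies_to_roles):
        ok = prereq.check()
        if not ok:
            prereq.correction_script()
            ok = prereq.check()   # re-verify after the correction script runs
        results[prereq.name] = ok
    return results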

US Pat. No. 10,338,980

BINDING SMART OBJECTS

KONINKLIJKE KPN N.V., Ro...

1. A method comprising:receiving, at a sensor device, from a binding initiator, a first REST(Representational State Transfer) request for a first REST resource hosted by the sensor device, the first REST request comprising at least an identification of an action to be executed on an actuator device and an identification of a condition of a state of the first REST resource for executing the action on the actuator device;
storing, in a binding table of the sensor device, the identification of the action to be executed on the actuator device and the identification of the condition for executing the action on the actuator device as information related to the first REST resource; monitoring, by the sensor device, the state of the first REST resource to determine whether the state satisfies the condition identified in the first REST request; after determining that the state of the first REST resource satisfies the condition,
providing, from the sensor device directly to the actuator device, a trigger for the actuator device to execute the action identified in the first REST request, wherein the trigger is provided in a form of a second REST request.
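Sketched below, using hypothetical helper names, is the sensor-side bookkeeping: a binding table filled by the first REST request and a monitor that fires the second REST request directly at the actuator once the stored condition holds. The urllib call stands in for whatever constrained REST stack the device really uses.

import urllib.request

binding_table = {}   # resource path -> {"condition": callable, "action_url": str, "action": str}

def register_binding(resource, condition, actuator_action_url, action):
    """Store the action and its firing condition as information related to the
    resource (the content a first REST request would carry)."""
    binding_table[resource] = {"condition": condition,
                               "action_url": actuator_action_url,
                               "action": action}

def on_state_change(resource, new_state):
    """Monitor the resource state; when the stored condition is satisfied, send the
    trigger directly to the actuator as a second REST request."""
    binding = binding_table.get(resource)
    if binding and binding["condition"](new_state):
        req = urllib.request.Request(binding["action_url"],
                                     data=binding["action"].encode(),
                                     method="POST")
        urllib.request.urlopen(req)   # direct sensor-to-actuator trigger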

US Pat. No. 10,338,979

MESSAGE PATTERN DETECTION AND PROCESSING SUSPENSION

Chicago Mercantile Exchan...

1. A computer implemented method for processing electronic data transaction request messages for a data object in a data transaction processing system, the method comprising:receiving, by a processor from a first source, a first electronic data transaction request message to perform a first transaction of a first transaction type on a data object;
processing, by the processor, the first electronic data transaction request message, wherein processing an electronic data transaction request comprises determining whether the electronic data transaction request message matches with another electronic data transaction request message;
receiving, by the processor from a second source, a second electronic data transaction request message to perform a second transaction of the first transaction type on the data object;
processing, by the processor, the second electronic data transaction request message;
receiving, by the processor from the first source, a third electronic data transaction request message to undo results of processing the first electronic data transaction request message;
processing, by the processor, the third electronic data transaction request message;
receiving, by the processor from the first source, within a first predetermined amount of time after receiving the third electronic data transaction request message, a fourth electronic data transaction request message to perform a first transaction of a second transaction type on the data object;
upon determining that processing the fourth electronic data transaction request message would result in a match between the second and the fourth electronic data transaction request messages, automatically preventing, by the processor, further processing of the fourth electronic data transaction request message; and
after a passage of a second predetermined amount of time, enabling further processing, by the processor, of the fourth electronic data transaction request message.
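The timing logic of this claim can be illustrated with a small state tracker: remember when a source undid a transaction, and if a would-be-matching request of another type arrives from that source within a first window, hold it for a second window. The class and method names below are hypothetical, and the matching check itself is assumed to be supplied by the caller.

import time

class PatternGuard:
    """Minimal sketch: if a source undoes a prior request and, within window_1
    seconds, submits a request of another type that would match, hold that new
    request for window_2 seconds before allowing further processing."""

    def __init__(self, window_1: float, window_2: float):
        self.window_1 = window_1
        self.window_2 = window_2
        self.last_undo = {}        # source -> timestamp of its most recent undo request
        self.suspended_until = {}  # message id -> earliest permitted processing time

    def record_undo(self, source: str) -> None:
        self.last_undo[source] = time.monotonic()

    def should_suspend(self, source: str, msg_id: str, would_match: bool) -> bool:
        undone_at = self.last_undo.get(source)
        recent = undone_at is not None and time.monotonic() - undone_at <= self.window_1
        if recent and would_match:
            self.suspended_until[msg_id] = time.monotonic() + self.window_2
            return True
        return False

    def may_process(self, msg_id: str) -> bool:
        return time.monotonic() >= self.suspended_until.get(msg_id, 0.0)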

US Pat. No. 10,338,978

ELECTRONIC DEVICE TEST SYSTEM AND METHOD THEREOF

PRIMAX ELECTRONICS LTD., ...

1. A method for testing a Macintosh compliant electronic device and labeling the electronic device through a Windows system computer, used to detect a memory serial number of the Macintosh compliant electronic device and using the Windows system to generate a bar code label corresponding to the memory serial number, the method comprising the following steps:(a) using a Macintosh computer to detect the memory serial number of the Macintosh compliant electronic device;
(b) using the Macintosh computer to transmit the memory serial number to the Windows system computer by means of an RS232 interface;
(c) using the Windows system computer to compare whether the memory serial number satisfies a coding rule; if the memory serial number does not satisfy the coding rule, the Windows system computer generates an alarm message, and if the memory serial number satisfies the coding rule, the Windows system computer performs the next steps (d) and (e):
(d) executing an analog keyboard event to input the memory serial number; and
(e) driving a printer to print a bar code label that includes the memory serial number of the Macintosh compliant electronic device; and
(f) adhering the bar code label to the Macintosh compliant electronic device.

US Pat. No. 10,338,977

CLUSTER-BASED PROCESSING OF UNSTRUCTURED LOG MESSAGES

ORACLE INTERNATIONAL CORP...

10. A computer-implemented method comprising:accessing a data store that associates, for each machine-generated data record of a set of machine-generated data records, an identifier of the machine-generated data record with one or more value identifiers, each value identifier of the one or more value identifiers representing one or more values included within the machine-generated data record;
selecting a representative machine-generated data record from amongst the set of machine-generated data records;
identifying, for each component of a plurality of components of the representative machine-generated data record, a value for the component that is included in a part of the representative machine-generated data record that corresponds to the component; and
determining, for each component of a plurality of components, that the component corresponds to a variable component, thereby indicating that the set of machine-generated data records includes one or more other values for the component;
facilitating a presentation that includes, for each component of the plurality of components:
the value for the component; and
one or more interactive options configured to, upon detecting input of a defined type corresponding to the value, identify at least one of the one or more other values for the component, wherein each of the at least one of the one or more other values is included in a part of another machine-generated data record in the set of machine-generated data records.
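A toy Python sketch of the representative-record summary: pick one record of the cluster and, for each component, collect the other values observed across the cluster so the presentation layer can mark it as a variable component. The flat list-of-lists layout is an assumption made for brevity and is not the claimed data-store schema.

def summarize_cluster(records):
    """records: list of lists, each inner list holding the value for every component
    of one machine-generated record in the cluster (assumed layout).
    Returns the representative record plus, per component, the representative value
    and the set of other values seen across the cluster."""
    representative = records[0]
    summary = []
    for position, value in enumerate(representative):
        others = {r[position] for r in records[1:] if r[position] != value}
        summary.append({"value": value,
                        "is_variable": bool(others),
                        "other_values": sorted(others)})
    return representative, summary

# Example: the timestamp and host components vary, the message text does not.
recs = [["10:01", "hostA", "disk full"],
        ["10:02", "hostB", "disk full"]]
rep, components = summarize_cluster(recs)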

US Pat. No. 10,338,976

METHOD AND APPARATUS FOR PROVIDING SCREENSHOT SERVICE ON TERMINAL DEVICE AND STORAGE MEDIUM AND DEVICE

Baidu Online Network Tech...

1. A method for providing a screenshot service on a terminal device, the method comprising:executing, by a producer thread, a screenshot operation in response to a received screenshot command instruction, and writing screen data captured into a buffer; and
reading, by a consumer thread, the screen data stored by the producer thread from the buffer, executing image processing on the screen data to generate a screenshot image, and returning the screenshot image to an application invoking the screenshot service;
the method further comprising:
starting, by a main thread of the screenshot service, the producer thread and the consumer thread, and establishing, at a specified port, a session connection to the application invoking the screenshot service; and
determining, by the producer thread, a screenshot command instruction being received, by listening to a data reading instruction on a session connection.
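A minimal producer/consumer sketch in Python: the producer writes captured screen data into a bounded buffer and the consumer turns it into an image and hands it back. capture_screen and encode_image are placeholders for the platform-specific calls, and the session/port handling of the claim is omitted.

import queue
import threading

buffer = queue.Queue(maxsize=8)          # shared buffer between the two threads

def capture_screen() -> bytes:           # placeholder for the real screen capture
    return b"raw-screen-data"

def encode_image(raw: bytes) -> bytes:   # placeholder for the real image processing
    return b"png:" + raw

def producer(commands: queue.Queue) -> None:
    """Wait for screenshot commands and write raw screen data into the buffer."""
    while True:
        if commands.get() == "shutdown":
            buffer.put(None)             # tell the consumer to stop
            return
        buffer.put(capture_screen())

def consumer(reply) -> None:
    """Read raw screen data from the buffer, process it, return the screenshot image."""
    while True:
        raw = buffer.get()
        if raw is None:
            return
        reply(encode_image(raw))

commands = queue.Queue()
t_prod = threading.Thread(target=producer, args=(commands,))
t_cons = threading.Thread(target=consumer, args=(print,))
t_prod.start()
t_cons.start()
commands.put("screenshot")
commands.put("shutdown")
t_prod.join()
t_cons.join()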

US Pat. No. 10,338,975

CONTENTION MANAGEMENT IN A DISTRIBUTED INDEX AND QUERY SYSTEM

VMware, Inc., Palo Alto,...

1. A method of contention management in a distributed index and query system, the method comprising:utilizing one or more index processing threads of an index thread pool in a distributed index and query system to index documents buffered into a work queue buffer in a memory of the distributed index and query system after being received via a network connection;
simultaneous to the indexing, utilizing one or more query processing threads of a query thread pool to process queries, received via the network connection, of indexed documents, wherein a sum of the index processing threads and the query processing threads is a plurality of processing threads;
responsive to the work queue buffer reaching a predefined fullness, emptying the work queue buffer by backing up the work queue buffer into an allotted storage space in a data storage device of the distributed index and query system; and
setting a number of index processing threads, of the plurality of processing threads allocated to the index thread pool, in a linear relationship to a ratio of a utilized amount of the allotted storage space to a total amount of the allotted storage space.
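The last limitation is essentially one formula: index threads scale linearly with how full the allotted backup storage is. A hedged Python rendering, with hypothetical parameter names:

def index_thread_count(total_threads: int, used_backup_bytes: int,
                       allotted_backup_bytes: int, minimum: int = 1) -> int:
    """Allocate index processing threads in linear proportion to the ratio of used
    to total allotted backup storage; the remaining threads stay in the query pool."""
    ratio = used_backup_bytes / allotted_backup_bytes if allotted_backup_bytes else 0.0
    ratio = min(max(ratio, 0.0), 1.0)
    return max(minimum, round(ratio * total_threads))

# Example: 16 threads total, backup space 75% full -> 12 index threads, 4 query threads.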

US Pat. No. 10,338,974

VIRTUAL RETRY QUEUE

Intel Corporation, Santa...

1. An apparatus comprising:a computing system block comprising logic implemented at least in part in hardware circuitry, wherein the logic is to:
enter a starvation mode based on a determination that one or more particular requests in one or more retry queues of the computing system block fail to make forward progress, wherein the starvation mode blocks the one or more retry queues and activates one or more virtual retry queues for the one or more particular requests, each of the one or more virtual retry queues comprises a respective table of pointers to entries in one or more of the retry queues, the virtual retry queues are to be used for retries during the starvation mode instead of the retry queues, and ordering of retries defined in at least one of the virtual retry queues is different from ordering of retries defined in a corresponding one of the one or more retry queues;
identify in a particular one of the virtual retry queues, a particular dependency of a first one of the particular requests, wherein the first request is in a particular one of the one or more retry queues;
determine that the particular dependency is acquired; and
retry the first request during the starvation mode based on acquisition of the particular dependency, wherein the first request is retried before another request ahead of the first request in the particular retry queue.

US Pat. No. 10,338,973

CROSS-CLOUD ORCHESTRATION OF DATA ANALYTICS

The MITRE Corporation, M...

1. A system comprising:at least one processor; and
a memory operatively coupled to the at least one processor, the at least one processor configured to:
receive, by a command and control (C&C) service application from an executive service application, a first C&C request to execute a first analytic application of a workflow, wherein the workflow is executed by the executive service application and includes one or more analytic applications to be executed by one or more analytics computing environments;
transmit, by the C&C service application, a first storage request to a storage system to request transfer of source data ingested from one or more information source systems to a first analytics computing environment configured to execute the first analytic application, in response to the first C&C request to execute the first analytic application;
generate, by the C&C service application, a first native access request to request execution of the first analytic application at the first analytics computing environment, wherein the first native access request includes execution application information that identifies the first analytic application to be executed and execution input information that identifies the ingested source data for analysis by the first analytic application;
transmit, by the C&C service application, the first native access request to the first analytics computing environment, wherein
the first analytic application is executed by the first analytics computing environment, in response to the first native access request,
the first analytic application is configured to perform analysis on the ingested source data and generate first execution result data based on the analysis of the ingested source data,
the first analytics computing environment is configured to generate first provenance class information based on execution input information and execution application information associated with each execution of the first analytic application,
the first provenance class information includes first provenance instance information that identifies a derivation history of the first execution result data, and
the first provenance instance information includes at least provenance timestamp information identifying a time and date of the first analytic application execution, provenance input information identifying the ingested source data, and provenance execution output information identifying the first execution result data; and
transmit, by the C&C service application, an adapter request to an adapter component to transfer the first execution result data generated by the execution of the first analytic application to a knowledge datastore, wherein the adapter component is configured to convert at least a portion of the first execution result data to an ontology data model.

US Pat. No. 10,338,972

PREFIX BASED PARTITIONED DATA STORAGE

Amazon Technologies, Inc....

5. A system, comprising:one or more processors; and
memory with instructions that, as a result of being executed by the one or more processors, cause the system to:
for a request to access a data object identified by an identifier, determine a subsequence of the identifier associated with a partition in which the data object is stored, the partition tracks a maximum number of subsequences;
increment a counter corresponding to the subsequence, the counter maintained by the partition; and
perform one or more mitigating actions as a result of the counter reaching a threshold value, the one or more mitigating actions includes generating a new subsequence associated with a generated second partition such that the generated second partition fulfills requests for the new subsequence.
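One way to read this claim is as a per-prefix hot-spot counter with a split action. The Python sketch below is illustrative only; the prefix lengths, names, and the "partition-default" fallback are assumptions, not the patented scheme.

import threading
from collections import defaultdict

class PrefixPartitioner:
    """Sketch: count accesses per identifier subsequence (prefix); when a prefix
    counter reaches the threshold, mint a longer subsequence served by a new
    partition so future requests for that subsequence are spread out."""

    def __init__(self, threshold: int, prefix_len: int = 4):
        self.threshold = threshold
        self.prefix_len = prefix_len
        self.counters = defaultdict(int)
        self.partitions = {}          # subsequence -> partition name
        self.lock = threading.Lock()

    def route(self, object_id: str) -> str:
        prefix = object_id[: self.prefix_len]
        with self.lock:
            self.counters[prefix] += 1
            if self.counters[prefix] >= self.threshold and prefix not in self.partitions:
                # Mitigating action: create a partition keyed by a longer subsequence.
                new_prefix = object_id[: self.prefix_len + 2]
                self.partitions[new_prefix] = f"partition-{len(self.partitions) + 1}"
            # Longest matching registered subsequence wins, else the default partition.
            for sub in sorted(self.partitions, key=len, reverse=True):
                if object_id.startswith(sub):
                    return self.partitions[sub]
            return "partition-default"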

US Pat. No. 10,338,971

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM

Ricoh Company, Ltd., Tok...

8. A method of processing information performed by an information processing apparatus connected to a plurality of computational resource groups, each group including a plurality of computational resources, through a network, the information processing method comprising:monitoring a state of each computational resource belonging to each computational resource group;
identifying an unavailable computational resource group, in which a ratio of unusable computational resources to total computational resources is equal to or greater than a threshold, from among the plurality of computational resource groups, the identifying being based on the state of each computational resource monitored by the monitoring; and
receiving a target request that includes an allocation destination designating a computational resource from among the plurality of computational resources to execute a requested process, wherein
in a case where the computational resource designated by the allocation destination of the target request is a usable computational resource included within the computational resource group identified as the unavailable computational resource group, the target request is sent to the computational resource designated by the allocation destination, and
in a case where the computational resource designated by the allocation destination of the target request is an unusable computational resource included within the computational resource group identified as the unavailable computational resource group, the target request is sent to one or more usable computational resources belonging to a computational resource group other than the unavailable computational resource group.

US Pat. No. 10,338,970

MULTI-PLATFORM SCHEDULER FOR PERMANENT AND TRANSIENT APPLICATIONS

INTERNATIONAL BUSINESS MA...

1. A method of scheduling assignment of computer resources to a plurality of applications, the method comprising:determining, by a processor, shares of the computer resources assigned to each application during a first period;
determining, by the processor, shares of the computer resources assigned to each application during a second period that occurs after the first period;
determining, by the processor, an imbalance value for each application that is based on a sum of the shares assigned to the corresponding application over both periods;
determining, by the processor, a first set of the imbalance values that are below a range; and
assigning a first container to a first application among the applications associated with the lowest imbalance value of the first set to satisfy a first request of the first application for computer resources when the first container is available with enough computer resources to satisfy the first request.
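A small Python sketch of the imbalance computation and selection step; the dictionary layout and the (low, high) range tuple are assumptions made for illustration.

def pick_application(shares_period1, shares_period2, value_range):
    """shares_*: dict mapping application -> share of resources in that period.
    Compute an imbalance value per application as the sum of its shares over both
    periods, keep the applications whose value falls below the range, and return
    the one with the lowest value as the next container recipient."""
    low, _high = value_range
    imbalance = {app: shares_period1.get(app, 0.0) + shares_period2.get(app, 0.0)
                 for app in set(shares_period1) | set(shares_period2)}
    below = {app: value for app, value in imbalance.items() if value < low}
    if not below:
        return None
    return min(below, key=below.get)

# Example: application "b" is the most under-served and receives the next free container.
next_app = pick_application({"a": 0.4, "b": 0.1}, {"a": 0.3, "b": 0.05}, (0.5, 0.8))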

US Pat. No. 10,338,969

MANAGING A VIRTUALIZED APPLICATION WORKSPACE ON A MANAGED COMPUTING DEVICE

VMware, Inc., Palo Alto,...

1. A method for managing a virtualized application workspace on a managed computing device, the method comprising:authenticating a first user and verifying that the first user belongs to a first group maintained by a directory service, based on first user credentials, wherein belonging to a group indicates entitlements;
obtaining the entitlements associated with the authenticated user from the directory service, the entitlements including one or more indications of software available to the authenticated user, and the entitlements further including a list of application family objects associated with the authenticated user, the list comprising a first application family object comprising a first set of rules wherein the first set of rules determines that the first application family object resolves to a local application object that is local to the managed computing device, the list further comprising a second application family object comprising a second set of rules wherein the second set of rules determines that the second application family object resolves to a remote application object that is remote to the managed computing device;
resolving the first set of rules of the first application family object and the second set of rules of the second application family object so as to obtain a result vector that includes indications of application objects that are available to the authenticated user, wherein the result vector comprises (a) the local application object, and (b) the remote application object;
processing the result vector to identify application objects for which installation operations are to be performed, wherein the processing comprises determining that (a) the local application object is local to the managed computing device, or (b) that the remote application object is remote to the managed computing device; and
subsequent to the processing, performing the installation operations to install one or more of the identified application objects on the managed computing device.

US Pat. No. 10,338,968

DISTRIBUTED NEUROMORPHIC PROCESSING PERFORMANCE ACCOUNTABILITY

SAS INSTITUTE INC., Cary...

1. An apparatus comprising a processor and a storage to store instructions that, when executed by the processor, cause the processor to perform operations comprising:receive, at a portal, and from a remote device via a network, a request to repeat an earlier performance, described in a first instance log of multiple instance logs stored in one or more federated areas, of a first job flow defined in a first job flow definition of multiple job flow definitions stored in the one or more federated areas, or to provide objects to the remote device to enable the remote device to repeat the earlier performance, wherein:
the portal is provided on the network to control access by the remote device to the one or more federated areas via the network;
the one or more federated areas are maintained within one or more storage devices to store at least the multiple job flow definitions and the multiple instance logs; and
the request specifies a first instance log identifier of the first instance log;
use the first instance log identifier to retrieve the first instance log from among the multiple instance logs stored in the one or more federated areas, wherein the first instance log comprises a first job flow identifier of the first job flow definition, a task routine identifier for each task routine used to perform a task specified in the first job flow definition, and a data object identifier for each data object associated with the earlier performance of the first job flow;
analyze the first job flow definition to determine whether performances of the first job flow comprise use of a neural network;
in response to a determination that performances of the first job flow do comprise use of the neural network, analyze an object associated with the first job flow to determine whether the neural network was trained to perform an analytical function using a training data set derived from at least one performance of a second job flow defined by a second job flow definition stored in the one or more federated areas, wherein:
the object associated with the first job flow comprises at least one of the first job flow definition, the first instance log, or a task routine executed during the earlier performance of the first job flow; and
performances of the second job flow comprise performances of the analytical function in a manner that does not use any neural network; and
in response to the request comprising a request to repeat the earlier performance, in response to a determination that performances of the first job flow do comprise use of the neural network, and in response to a determination that the neural network was trained using the training data set derived from at least one performance of the second job flow, the processor is caused to perform operations comprising:
repeat the earlier performance of the first job flow with one or more data sets associated with the earlier performance of the first job flow, wherein the repetition of the earlier performance of the first job flow comprises execution, by the processor, of each task routine identified by a task routine identifier in the first instance log;
perform the second job flow with the one or more data sets associated with the earlier performance of the first job flow, wherein the performance of the second job flow comprises execution, by the processor, of a most recent version of a task routine to perform each task identified by a flow task identifier in the second job flow definition;
analyze an output of the repetition of the earlier performance of the first job flow relative to a corresponding output of the performance of the second job flow to determine a degree of accuracy of the first job flow in performing the analytical function relative to a predetermined threshold of accuracy to determine whether the second job flow is to be used in place of the first job flow to perform the analytical function; and
transmit at least the output of the repetition of the earlier performance of the first job flow and an indication of the degree of accuracy or the results of the comparison to the requesting device.

US Pat. No. 10,338,967

SYSTEMS AND METHODS FOR PREDICTING PERFORMANCE OF APPLICATIONS ON AN INTERNET OF THINGS (IOT) PLATFORM

Tata Consultancy Services...

1. A method for predicting performance of one or more applications being executed on an Internet of Things (IoT) platform, comprising:obtaining, by said IoT platform, at least one of (i) one or more user requests and (ii) one or more sensor observations from one or more sensors;
identifying and invoking one or more Application Programming Interface (APIs) of said IoT platform based on said at least one of (i) one or more user requests and (ii) one or more sensor observations from said one or more sensors;
identifying, based on said one or more invoked APIs, one or more open flow requests and one or more closed flow requests of one or more systems connected to said IoT platform;
identifying one or more workload characteristics of said one or more open flow requests and said one or more closed flow requests to obtain one or more segregated open flow requests and one or more segregated closed flow requests, and a combination of open and closed flow requests;
executing one or more performance tests with said one or more invoked APIs based on said one or more workload characteristics;
concurrently measuring utilization of one or more resources of said one or more systems and computing one or more service demands of each of said one or more resources;
executing said one or more performance tests with said one or more invoked APIs based on a volume of workload characteristics pertaining to said one or more applications; and
predicting, using a queuing network, performance of said one or more applications for said volume of workload characteristics.

US Pat. No. 10,338,966

INSTANTIATING CONTAINERS WITH A UNIFIED DATA VOLUME

Red Hat, Inc., Raleigh, ...

1. A system comprising:a first host including a first memory;
a second memory located across a network from the first host;
one or more processors;
an orchestrator including a scheduler and a container engine;
the orchestrator executing on the one or more processors to:
request, by the scheduler, a first persistent storage to be provisioned in the second memory based on at least one of an image file and metadata associated with the image file, wherein the first persistent storage is mounted to the first host;
copy, by the container engine, the image file to the first memory as a lower system layer of an isolated guest based on the image file, wherein the lower system layer is write protected;
construct, by the container engine, an upper system layer in the first persistent storage based on the image file, wherein a baseline snapshot is captured of the first persistent storage including the upper system layer after the upper system layer is constructed; and
launch the isolated guest, wherein the isolated guest is attached to the lower system layer and the upper system layer.

US Pat. No. 10,338,965

MANAGING A SET OF RESOURCES

Hewlett Packard Enterpris...

1. A cache controller for managing resources, comprising:a first tracker structure, in a coherent request tracker, having first tracker entries each statically associated with one of a first set of transaction identifiers that are assigned to the coherent request tracker for use by the coherent request tracker to manage coherent requests issued to a processor external to the cache controller;
a second tracker structure, in a victim request tracker, having second tracker entries each dynamically associable with one of a second variable set of the transaction identifiers that are assigned to the victim request tracker for use by the victim request tracker to manage writes to a memory that are associated with eviction of entries from a directory cache;
a resource sharing mechanism to, in response to determining that no transaction identifier in the second variable set of the transaction identifiers is available to process a resource request from the cache controller, borrow an idle transaction identifier associated with a first tracker entry among the first tracker entries, associate the borrowed transaction identifier with a second tracker entry among the second tracker entries, and lock the first tracker entry; and
the victim request tracker to assign the borrowed transaction identifier to the resource request.

US Pat. No. 10,338,964

COMPUTING NODE JOB ASSIGNMENT FOR DISTRIBUTION OF SCHEDULING OPERATIONS

Capital One Services, LLC...

1. A method, comprising:receiving, by a computing node included in a set of computing nodes, a corresponding set of heartbeat messages that originated at the set of computing nodes,
wherein the set of heartbeat messages is related to selecting a scheduler computing node, of the set of computing nodes, for scheduling a set of jobs associated with the set of computing nodes,
wherein each heartbeat message indicates, for a corresponding computing node of the set of computing nodes:
a number of times that the corresponding computing node has been selected as the scheduler computing node, and
whether the corresponding computing node is currently executing a scheduler to schedule one or more jobs for the set of computing nodes;
determining, by the computing node and based on the set of heartbeat messages, that the computing node has been selected as the scheduler computing node the fewest number of times as compared to other computing nodes included in the set of computing nodes;
determining, by the computing node and based on the set of heartbeat messages, that the scheduler is not being executed by any computing node included in the set of computing nodes; and
selecting, by the computing node, the computing node as the scheduler computing node based on determining that the computing node has been selected as the scheduler computing node the fewest number of times and that the scheduler is not being executed by any computing node included in the set of computing nodes.
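The election rule can be condensed into a small predicate evaluated by each node over the heartbeat set it receives. The Python sketch below uses an invented Heartbeat record; tie-breaking among nodes with equal counts is left out.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Heartbeat:
    node_id: str
    times_selected: int        # how often this node has been the scheduler node
    running_scheduler: bool    # whether it is currently executing the scheduler

def should_become_scheduler(self_id: str, heartbeats: List[Heartbeat]) -> bool:
    """A node elects itself as the scheduler computing node only if no node is
    currently running the scheduler and it has been selected the fewest times."""
    if any(hb.running_scheduler for hb in heartbeats):
        return False
    mine: Optional[Heartbeat] = next((hb for hb in heartbeats if hb.node_id == self_id), None)
    if mine is None:
        return False
    return all(mine.times_selected <= hb.times_selected for hb in heartbeats)

# Example: node "n2" has been scheduler least often and nobody is scheduling now.
hbs = [Heartbeat("n1", 3, False), Heartbeat("n2", 1, False), Heartbeat("n3", 2, False)]
assert should_become_scheduler("n2", hbs)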

US Pat. No. 10,338,963

SYSTEM AND METHOD OF SCHEDULE VALIDATION AND OPTIMIZATION OF MACHINE LEARNING FLOWS FOR CLOUD COMPUTING

Atlantic Technical Organi...

1. A computer system comprising:a memory storing a data array in the form of an execution matrix comprising a horizontal and vertical grid of slots;
an interface system by which a user may configure a plurality of functionality icons, wherein each functionality icon represents an operator that is part of a process to be executed;
an execution manager configured to generate a segmentation of said operators by extracting a plurality of nodes and links from a user-made configuration of the plurality of functionality icons, determining X and Y dimensions of said execution matrix, placing said operators according to said links along the X and Y dimensions of said execution matrix, and verifying predecessor and successor dependencies of said operators; and
a controller configured to coordinate parallel execution of operations represented by said operators based on the positions of said functionality icons in said data array, said coordinating including polling a computing infrastructure to determine its capabilities, reorganizing said operations represented by said operators, and reorganizing said computing infrastructure to optimize said process to be executed;
wherein said interface system generates a graphical user interface comprising a display of said functionality icons placed in cells segmented via vertical and horizontal lines as a representation of said data array for coordinating the parallel execution of said process to be executed, wherein said functionality icons are reorganized on the graphical user interface in response to said reorganizing of said operations represented by said operators.

US Pat. No. 10,338,962

USE OF METRICS TO CONTROL THROTTLING AND SWAPPING IN A MESSAGE PROCESSING SYSTEM

Microsoft Technology Lice...

1. A system comprising a processor and a memory communicatively coupled to the processor, the memory comprising instructions that, when executed by the processor, cause the system to:determine a workload of the system based on performance metrics of the system;
receive a message and, in response to receiving the message, create an instance of a process or route the message to an existing instance of a process;
idle the created instance or the existing instance based on the determined workload of the system;
determine a predicted duration for the idling based on the performance metrics;
based on the predicted duration, move the idled instance out of active memory and into secondary storage associated with the system; and
update the determined workload based on updated performance metrics and said moving the idled instance out of active memory.

US Pat. No. 10,338,961

FILE OPERATION TASK OPTIMIZATION

Google LLC, Mountain Vie...

1. A computer-implemented method comprising:receiving, by a data processing apparatus, a plurality of file operation requests, each file operation request representing a request to perform an operation on at least one file maintained in a distributed file system and corresponding to a priority and to an operation type;
indexing, by the data processing apparatus, the plurality of file operation requests at least by the priority corresponding to the requests as a priority index;
indexing, by the data processing apparatus, the plurality of file operation requests at least by the operation type corresponding to the requests as an operation index;
selecting, by the data processing apparatus, a particular file operation request from the priority index based on a level of priority of the particular file operation request;
in response to selecting the particular file operation request from the priority index based on the level of priority of the particular file operation request, selecting, by the data processing apparatus and based on the operation type of the particular file operation request selected from the priority index, a group of file operation requests from the operation index that have an operation type in common with the particular file operation request selected from the priority index; and
sending, by the data processing apparatus, a request to perform the group of file operation requests, including the particular operation request, that have the common operation type.
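A hedged Python sketch of the double indexing: a priority index to pick the particular request and an operation-type index to batch every pending request of the same type with it. The request layout and method names are invented for the example.

from collections import defaultdict

class FileOpScheduler:
    """Index file operation requests by priority and by operation type; pick the
    highest-priority request, then group all same-type requests with it."""

    def __init__(self):
        self.by_priority = defaultdict(list)    # priority level -> [request]
        self.by_operation = defaultdict(list)   # operation type -> [request]

    def add(self, request):
        # request is a dict such as {"id": 1, "priority": 5, "op": "delete", "path": "..."}
        self.by_priority[request["priority"]].append(request)
        self.by_operation[request["op"]].append(request)

    def next_batch(self):
        if not self.by_priority:
            return []
        top = max(self.by_priority)                        # highest priority level
        selected = self.by_priority[top][0]                # the particular request
        batch = list(self.by_operation[selected["op"]])    # same-operation group
        for req in batch:                                  # drop batched requests from both indexes
            self.by_priority[req["priority"]].remove(req)
            if not self.by_priority[req["priority"]]:
                del self.by_priority[req["priority"]]
        del self.by_operation[selected["op"]]
        return batch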

US Pat. No. 10,338,960

PROCESSING DATA SETS IN A BIG DATA REPOSITORY BY EXECUTING AGENTS TO UPDATE ANNOTATIONS OF THE DATA SETS

International Business Ma...

1. A computer-implemented method for processing data sets in a data repository for storing at least unstructured data, the method comprising:providing agents, wherein each of the agents triggers processing of one or more of the data sets, wherein execution of each of the agents is triggered in response to one or more conditions assigned to that agent being met for a data set whose processing is triggered by the agent;
in response to executing a first agent of the agents to trigger processing of a first data set of the one or more of the data sets, wherein the execution is triggered by the one or more conditions of the first agent being met for the first data set,
updating, by the first agent, annotations of the first data set, thereby including a result of the processing of the first data set triggered by the first agent in the annotations; and
executing a second agent of the agents, wherein the execution is triggered by the updated annotations of the first data set meeting the one or more conditions of the second agent, wherein the execution of the second agent triggers a further processing of the first data set and a further updating of the annotations of the first data set by the second agent; and
in response to generation, by the first agent, of a second data set that is a derivative of the first data set,
updating, by the first agent, the annotations of the first data set to add a link that points to a storage location of the second data set; and
processing, by the second agent, the second data set.

US Pat. No. 10,338,959

TASK STATE TRACKING IN SYSTEMS AND SERVICES

Microsoft Technology Lice...

14. A computer-implemented method for decoupling task state tracking from an execution of a task, comprising:receiving, at a task-agnostic shared task completion platform, task registration data for a plurality of tasks from a plurality of task owner resources, wherein each task in the plurality of tasks is executable by a particular task owner resource and wherein the task registration data for each task in the plurality of tasks comprises at least one mandatory parameter that is necessary to be collected for execution of each task in the plurality of tasks and at least one optional parameter;
storing, in a storage device associated with the task-agnostic shared task state platform, the task registration data;
receiving an input from a user of the shared task state platform;
determining, based at least in part, on the received input and the task registration data, whether the received input is associated with at least one task;
when it is determined that the received input is associated with the at least one task, updating, by the task-agnostic shared task completion platform, a task state tracker that manages the state of the at least one task based, at least in part, on the received input and determines a subsequent action associated with processing the received input;
determining whether at least the at least one mandatory parameter for the at least one task has been collected; and
when it is determined that at least the at least one mandatory parameter data has been collected, transmitting the at least one mandatory parameter data, over a network, to at least one of the plurality of task owner resources that is determined to be responsible for executing the at least one task.

US Pat. No. 10,338,958

STREAM ADAPTER FOR BATCH-ORIENTED PROCESSING FRAMEWORKS

Amazon Technologies, Inc....

1. A system, comprising:one or more computing devices comprising one or more hardware processors and memory and configured to:
receive, from a client of a batch-oriented data processing service implementing a MapReduce programming model at a provider network, an indication of an input data stream comprising a plurality of data records that are to be batched for at least a first computation at the data processing service, wherein the plurality of data records are received from a plurality of data producers and retained during a time window by a multi-tenant stream management service of the provider network;
retrieve from the stream management service, based at least in part on respective sequence numbers associated with data records of the input data stream by the stream management service, a set of data records of the input data stream on which the first computation is to be performed during a particular processing iteration at the batch-oriented data processing service, wherein the set of data records comprises a plurality of retrieved data records of the input data stream;
save, in a persistent repository, metadata that corresponds to the set of data records of the input data stream, the metadata for the set of data records of the input data stream comprising an iteration identifier that uniquely identifies the particular processing iteration for the plurality of retrieved data records of the input data stream with respect to at least one of previous processing iterations completed prior to the particular processing iteration for sets of data records of input data streams and subsequent processing iterations to be performed subsequent to the particular processing iteration for other sets of data records of input data streams at the batch-oriented data processing service, and wherein the metadata for the set of data records of the input data stream further comprises two or more sequence numbers of respective records of the set of data records that indicate a range of sequence numbers of the set of data records of the input data stream on which the first computation is to be performed during the particular processing iteration for the plurality of retrieved records of the input data stream at the batch-oriented data processing service;
generate a batch representation of the set of data records in accordance with a data input format supported at the batch-oriented processing service;
schedule the particular processing iteration at selected nodes of the batch-oriented data processing service; and
execute the scheduled particular processing iteration at the selected nodes based on the saved metadata.

US Pat. No. 10,338,957

PROVISIONING KEYS FOR VIRTUAL MACHINE SECURE ENCLAVES

Intel Corporation, Santa...

1. At least one non-transitory machine accessible storage medium having code stored thereon, the code when executed on a machine, causes the machine to:identify a launch of a particular virtual machine on a host computing system, wherein the particular virtual machine is launched to comprise a secure quoting enclave to perform an attestation of one or more aspects of the virtual machine;
generate, using a secure migration enclave hosted on the host computing system, a root key for the particular virtual machine, wherein the root key is to be used in association with provisioning the secure quoting enclave with an attestation key to be used in the attestation; and
register the root key with a virtual machine registration service.

US Pat. No. 10,338,956

APPLICATION PROFILING JOB MANAGEMENT SYSTEM, PROGRAM, AND METHOD

FUJITSU LIMITED, Kawasak...

1. An application profiling job management system, configured to compose and initiate one or more application profiling tasks for profiling a software application, the application profiling job management system comprising:processing hardware coupled to memory hardware, the memory hardware storing processing instructions which, when executed by the processing hardware, cause the processing hardware to perform a process comprising:
receiving user input information, wherein the user input information specifies a profiling target and profiling execution requirements, the profiling target including the software application;
storing, in a profiler specification storage area of the memory hardware, a profiler specification of each of a plurality of application profilers accessible to the application profiling job management system;
determining which of the plurality of application profilers satisfy the profiling execution requirements, based on one of respective profiler specifications, and for each of the application profilers determined to satisfy the profiling execution requirements, generating one or more application profiling tasks, each application profiling task specifying an application profiler from among the plurality of application profilers and the profiling target;
selecting one or more systems of hardware resources to perform each of the application profiling tasks; and
initiating execution of each of one or more application profiling tasks with the respective selected one or more systems of hardware resources.

US Pat. No. 10,338,955

SYSTEMS AND METHODS THAT EFFECTUATE TRANSMISSION OF WORKFLOW BETWEEN COMPUTING PLATFORMS

GoPro, Inc., San Mateo, ...

1. A system configured to effectuate transmission of workflow between computing platforms, the system comprising:one or more physical computer processors configured by computer readable instructions to:
receive, from a client computing platform, a first command, the first command including a proxy image representing a lower resolution version of an image stored on the client computing platform;
associate an identifier with the proxy image;
effectuate transmission of the identifier to the client computing platform, the identifier to be associated with the image stored on the client computing platform;
determine edits, at a remote computing platform, to the image based on the proxy image;
effectuate transmission of instructions from the remote computing platform to the client computing platform, the instructions including the identifier and causing the client computing platform to process the edits on the image; and
determine classifications of the image based on one or more objects recognized within the proxy image.

US Pat. No. 10,338,954

METHOD OF SWITCHING APPLICATION AND ELECTRONIC DEVICE THEREFOR

Samsung Electronics Co., ...

1. An electronic device, comprising:a display; and
at least one processor that is configured to:
control the display to display an execution screen of an application,
control the display to display a reduced size object corresponding to the application based on a reducing event generated for the execution screen,
control the display to display the execution screen of the application in an area of the display if a hovering input is detected on the reduced size object corresponding to the application, and
control the display to display the reduced size object corresponding to the application if the hovering input is released.

US Pat. No. 10,338,953

FACILITATING EXECUTION-AWARE HYBRID PREEMPTION FOR EXECUTION OF TASKS IN COMPUTING ENVIRONMENTS

Intel Corporation, Santa...

1. An apparatus comprising:detection/reception logic to detect a software application being hosted by a computing device, wherein the software application is to facilitate one or more tasks that are capable of being executed by a graphics processor of the computing device;
preemption selection logic to select one of a fine grain preemption or a coarse grain preemption based on comparison of a first time estimation and a second time estimation relating to the one or more tasks at thread level execution and work group level execution, respectively, the preemption selection logic to select the one of the fine grain preemption or the coarse grain preemption in response to the detection/reception logic detecting a preemption request while the fine grain preemption and the coarse grain preemption are not being performed;
preemption initiation and application logic to initiate performance of the selected one of the fine grain preemption and the coarse grain preemption; and
watermark time logic to set a timer to pause the work group level execution to wait to receive a refined set of the second time estimation, wherein the preemption selection logic is operable to select the fine grain preemption if the timer expires prior to receiving the refined second time estimation set, wherein the preemption selection logic is further operable to select the coarse grain preemption based on the refined second time estimation set.
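Read narrowly, the selection logic is a comparison between the thread-level and work-group-level time estimates with a timer fallback. The sketch below is one possible interpretation, not the claimed hardware logic; all identifiers are hypothetical.

from typing import Optional

def select_preemption(thread_level_estimate: float,
                      refined_workgroup_estimate: Optional[float] = None,
                      timer_expired: bool = False) -> str:
    """Pick fine grain (thread-level) or coarse grain (work-group-level) preemption.
    If the refined work-group estimate has not arrived before the watermark timer
    expires, fall back to fine grain; otherwise choose whichever boundary is
    estimated to preempt sooner."""
    if refined_workgroup_estimate is None:
        return "fine" if timer_expired else "wait"
    if thread_level_estimate <= refined_workgroup_estimate:
        return "fine"
    return "coarse"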

US Pat. No. 10,338,952

PROGRAM EXECUTION WITHOUT THE USE OF BYTECODE MODIFICATION OR INJECTION

International Business Ma...

1. A processor-implemented method for registering a plurality of callbacks, the method comprising:registering each of a plurality of callback functions in a virtual machine tool interface within a virtual machine to a list of callback functions for an event based on a plurality of event context elements associated with each callback function;
in response to the event occurring, generating a local frame for each registered callback function within the list of callback functions for the determined event; and
executing each registered callback function, concurrently, based on each generated local frame associated with each at least one registered callback function.

US Pat. No. 10,338,951

VIRTUAL MACHINE EXIT SUPPORT BY A VIRTUAL MACHINE FUNCTION

Red Hat, Inc., Raleigh, ...

1. A method of securing a state of a guest, comprising:determining, by a virtual machine function within a guest running on a virtual machine, a guest central processing unit (CPU) state that is stored in one or more registers of a CPU and associated with the guest;
encrypting, by the virtual machine function, a first portion of the guest CPU state that is not used to execute a privileged instruction being attempted by the guest;
sending, by the virtual machine function, one or more requests based on the privileged instruction to a hypervisor, the virtual machine and the hypervisor running on a common host machine; and
after execution of the privileged instruction is completed, decrypting, by the virtual machine function, the first portion of the guest CPU state.

US Pat. No. 10,338,950

SYSTEM AND METHOD FOR PROVIDING PREFERENTIAL I/O TREATMENT TO DEVICES THAT HOST A CRITICAL VIRTUAL MACHINE

Veritas Technologies LLC,...

1. A computer-implemented method comprising:generating a mapping of a group of virtual machine disk blocks to a group of corresponding offsets in a logical unit number (LUN) of a storage unit, wherein the LUN is one of a plurality of LUNs of the storage unit,
each corresponding offset of the group of corresponding offsets corresponds to a corresponding virtual machine disk block of the group of virtual machine disk blocks,
the mapping identifies a plurality of universally unique identifiers (UUIDs),
each UUID of the plurality of UUIDs uniquely identifies the corresponding virtual machine disk block that begins at the corresponding offset, and
each UUID is stored at a fixed offset in a related LUN of the plurality of LUNs;
detecting that an input/output (I/O) operation is directed to a specific LUN among the plurality of LUNs in the storage unit;
based on the mapping, determining a specific virtual machine from which the I/O operation originated, wherein
the specific virtual machine is one of a plurality of virtual machines, and
the determining comprises identifying a specific UUID of the plurality of UUIDs that is associated with the related LUN, wherein
the identifying comprises reading data at the fixed offset in the related LUN;
identifying a priority level of the specific virtual machine, wherein
the priority level is identified based on the mapping; and
assigning the I/O operation a matching quality rating based on the priority level, wherein
the matching quality rating represents a quality of one or more shared computing resources available to the specific virtual machine.

US Pat. No. 10,338,949

VIRTUAL TRUSTED PLATFORM MODULE FUNCTION IMPLEMENTATION METHOD AND MANAGEMENT DEVICE

Huawei Technologies Co., ...

1. A virtual trusted platform module (vTPM) function implementation method for use at an exception level EL3 of a processor that uses an ARM V8 architecture, the method comprising:generating, according to requirements of one or more virtual machines (VMs), one or more vTPM instances corresponding to each VM, and storing the generated one or more vTPM instances in preset secure space, wherein each vTPM instance has a dedicated instance communication queue for a VM corresponding to itself to use, and a physical address is allocated to each instance communication queue; and
interacting with a virtual machine monitor (VMM) and the VM, for causing the VM to acquire a VM communication queue virtual address, in VM virtual address space, corresponding to a communication queue physical address of the vTPM instance, by:
sending a first query request to an EL2, wherein the first query request comprises the communication queue physical address of the vTPM instance, for causing the EL2 to determine, according to the first query request and a mapping table that is between a physical address and an intermediate physical address and is stored at the EL2, an intermediate physical address corresponding to the communication queue physical address of the vTPM instance, and send the intermediate physical address to the EL3;
receiving the intermediate physical address sent by the EL2; and
sending a second query request to an EL1, wherein the second query request comprises the intermediate physical address, for causing the EL1 to determine, according to the second query request and a mapping table that is between an intermediate physical address and a virtual address and is stored at the EL1, a virtual address corresponding to the intermediate physical address,
wherein the determined virtual address is the VM communication queue virtual address, and
wherein the VM communicates with a vTPM instance communication queue by using the VM communication queue virtual address.

US Pat. No. 10,338,948

METHOD AND DEVICE FOR MANAGING EXECUTION OF SCRIPTS BY A VIRTUAL COMPUTING UNIT

Wipro Limited, Bangalore...

1. A method for managing execution of scripts by a virtual computing unit, on a host computing device, comprising:configuring, by a host computing device, one or more ports for establishing a communication interface between the host computing device and a virtual computing unit, wherein the virtual computing unit is configured in the host computing device;
providing, by the host computing device, one or more scripts to be executed by the virtual computing unit and one or more parameters associated with the one or more scripts to the virtual computing unit via the communication interface, wherein the virtual computing unit executes the one or more scripts upon locating the one or more scripts from an associated memory location;
receiving, by the host computing device, during the execution of the one or more scripts, real time status of the execution of the one or more scripts from the virtual computing unit via the communication interface, wherein the real time status comprises information of successfully executed scripts, information of unsuccessfully executed scripts, a number of exceptions, type of exceptions, a number of errors, and type of errors; and
instructing, by the host computing device, the virtual computing unit to complete execution of unsuccessfully executed scripts upon handling each of the exceptions and errors, wherein the exceptions and the errors are handled based on the one or more parameters, and wherein the exceptions and the errors are handled based on at least one of priority, availability of data and severity associated with each of the scripts.

US Pat. No. 10,338,947

EXTENT VIRTUALIZATION

Microsoft Technology Lice...

1. A method, comprising:employing at least one processor configured to execute computer-executable instructions stored in memory to perform the following acts:
identifying a first set of one or more contiguous storage blocks to be allocated for storage of a master-image virtual hard disk;
extending the first set of one or more contiguous storage blocks by one or more additional storage blocks reserved for patches to the master-image virtual hard disk different from updates to the master-image virtual hard disk that are represented by one or more differencing virtual hard disks, wherein the one or more differencing virtual hard disks are dependent on the master-image virtual hard disk;
allocating space in a physical file system for the extended first set of contiguous storage blocks for the master-image virtual hard disk and for the patches to the master-image virtual hard disk; and
allocating additional space in the physical file system for a second set of contiguous storage blocks for the one or more differencing virtual hard disks, wherein the additional space in the physical file system is physically contiguous with and after the space in the physical file system.

US Pat. No. 10,338,946

COMPOSABLE MACHINE IMAGE

Amazon Technologies, Inc....

1. A method for executing a computer system image on a computing node, comprising:receiving from a user data indicative of a selection of a specification file from a plurality of specification files, wherein the plurality of specification files are defined by a plurality of other users, wherein the user selects one of the plurality of specification files;
obtaining, based on the data indicative of the selection, the specification file, wherein the specification file comprises references to components of the computer system image, the components including a base system image and a resource, the specification file also comprising at least a signature associated with the resource for validating the specification file;
preparing the computer system image based on the components specified by the specification file by at least ensuring that the resource is incorporated into the computer system image; and
executing the computer system image on the computing node.

US Pat. No. 10,338,945

HETEROGENEOUS FIELD DEVICES CONTROL MANAGEMENT SYSTEM BASED ON INDUSTRIAL INTERNET OPERATING SYSTEM

KYLAND TECHNOLOGY CO., LT...

1. A heterogeneous field devices control management system based on an industrial internet operating system, wherein the heterogeneous field devices control management system comprises a plurality of servers, each of the plurality of servers comprises a memory storing first instructions, a physical communication interface and at least one processor; wherein a virtual machine management layer, a real-time virtual machine, and a non-real-time virtual machine are operated on each of the plurality of servers, and each real-time virtual machine and each non-real-time virtual machine are respectively installed with a plurality of service instances; and wherein the at least one processor is configured to read and execute the first instructions to:control the virtual machine management layer to perform a configuration, operating scheduling and hardware access management of the real-time virtual machine and the non-real-time virtual machine;
control the real-time virtual machine to communicate with heterogeneous field devices, and to control the heterogeneous field devices to perform corresponding operations;
control the non-real-time virtual machine to communicate with an off-site device and process a specified service without a real-time requirement; and
control the real-time virtual machine and the non-real-time virtual machine to:
for any service instance, ascertain whether or not the service instance has a bound physical communication interface, according to a one-to-one binding relationship between a service instance and a physical communication interface;
when the service instance has a bound physical communication interface, transmit information of the service instance to a destination service instance via the bound physical communication interface; and
when the service instance does not have a bound physical communication interface, ascertain a server where the destination service instance is located by means of logical addressing, upon sending the information of the service instance to the destination service instance; submit the information of the service instance to an internal transmission queue when the service instance and the destination service instance are in a same server, and send the information of the service instance to the destination service instance via the internal transmission queue; or call an interface driver to transmit the information of the service instance to the destination service instance, when the service instance and the destination service instance are in different servers.
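
A minimal Python sketch, not drawn from the patent, of the routing decision recited in the claim above: use a bound physical interface when one exists, otherwise resolve the destination server by logical addressing and either enqueue internally or hand off to an interface driver. The dictionaries standing in for the bindings and the logical-address registry are assumptions.

# Illustrative routing of service-instance messages per the claimed scheme.
# Names (bindings, registry, queues) are assumptions, not from the patent.
bindings = {"svc_a": "eth1"}                            # service instance -> bound physical interface
registry = {"svc_b": "server_2", "svc_c": "server_1"}   # destination -> hosting server (logical addressing)
internal_queue = []

def send(src, dst, payload, local_server="server_1"):
    if src in bindings:
        return f"sent via bound interface {bindings[src]}"
    server = registry[dst]                       # ascertain destination server by logical addressing
    if server == local_server:
        internal_queue.append((dst, payload))    # same server: internal transmission queue
        return "queued internally"
    return f"sent to {server} via interface driver"   # different server: call the interface driver

print(send("svc_a", "svc_b", "hello"))
print(send("svc_x", "svc_c", "hello"))
print(send("svc_x", "svc_b", "hello"))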

US Pat. No. 10,338,944

AUTOMATIC DISCOVERY AND CLASSIFICATION OF JAVA VIRTUAL MACHINES RUNNING ON A LOGICAL PARTITION OF A COMPUTER

INTERNATIONAL BUSINESS MA...

1. A method of automatic discovery and classification of Java virtual machines on a logical partition (LPAR) of a computing system, the computing system comprising a main storage memory comprising at least volatile memory, wherein the volatile memory includes a common collector and a plurality of address space control blocks (ASCB), the plurality of ASCBs comprising at least one ASCB for each address space of a plurality of address spaces of the LPAR, wherein the common collector comprises a data space in system memory, the method comprising:constructing a Service Request Block (SRB) routine in the common collector along with a parameter list, wherein the SRB routine is independent from at least one system service, wherein the at least one system service includes a network service;
examining, via the SRB routine, each ASCB of the plurality of ASCBs to identify one or more address spaces of the plurality of address spaces of the LPAR that are eligible to operate a Java virtual machine (JVM), wherein examining each ASCB includes examining each ASCB for flags that indicate that a corresponding address space is dubbed into UNIX System Services (USS) and that the corresponding address space of the ASCB is dispatchable;
retrieving CSVINFO, by a JVM management system via a CSVINFO macro call to each of the plurality of ASCBs on the LPAR of the computing system, in a predetermined interval;
automatically discovering, through the CSVINFO retrieved, one or more JVMs running on the LPAR of the computing system; and
automatically classifying, through a plurality of Content Directory Entries examined using the CSVINFO macro call, the one or more JVMs discovered;
wherein the retrieving and the discovering comprises:
for each ASCB of the plurality of ASCBs:
calling the SRB routine to retrieve CSVINFO from the ASCB, wherein the CSVINFO from the ASCB includes a list of modules loaded on the ASCB; and
discovering a JVM when the CSVINFO from the ASCB includes one or more JVM modules by at least detecting by the SRB routine, whether a JVM module named libjvm.so is present in the list of modules, wherein the SRB routine is configured to return a path name to the libjvm.so module in response to detecting that the libjvm.so module is present.
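
As an illustrative reading of the discovery step in the claim above, a short Python sketch that filters eligible address spaces and looks for a libjvm.so entry in each module list; the dictionary standing in for ASCB/CSVINFO data is an assumption for demonstration only.

# Hypothetical data standing in for CSVINFO module lists per address space (ASCB).
address_spaces = {
    "ASCB1": {"dispatchable": True, "dubbed_uss": True,
              "modules": ["/lib/libc.so", "/java/lib/libjvm.so"]},
    "ASCB2": {"dispatchable": True, "dubbed_uss": False, "modules": ["/lib/libc.so"]},
}

def discover_jvms(spaces):
    found = {}
    for name, info in spaces.items():
        # Only address spaces dubbed into USS and dispatchable are eligible.
        if not (info["dispatchable"] and info["dubbed_uss"]):
            continue
        for module in info["modules"]:
            if module.endswith("libjvm.so"):
                found[name] = module          # return the path to the libjvm.so module
    return found

print(discover_jvms(address_spaces))          # {'ASCB1': '/java/lib/libjvm.so'}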

US Pat. No. 10,338,943

TECHNIQUES FOR EMULATING MICROPROCESSOR INSTRUCTIONS

SYMANTEC CORPORATION, Mo...

1. A computer-implemented method for emulating microprocessor instructions, the method comprising:identifying, in a computing device, an instruction of a first software application using a second software application that emulates instructions of a type of microprocessor, wherein the instruction includes an instruction prefix and an operation code;
adding, in the computing device, an additional bit to a length of the operation code of the instruction to create an extended operation code, wherein the additional bit accounts for a program state set by the instruction prefix and wherein the extended operation code, including the additional bit, is represented in an operation code table of the second software application; and
emulating, in the computing device, execution of the instruction using the second software application and the extended operation code.
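
A hedged Python sketch of the opcode-extension idea in the claim above: one extra bit distinguishes the prefixed from the unprefixed form of an instruction in the emulator's opcode table. The 0x66 prefix byte and the handler names are illustrative assumptions.

# Sketch of extending an opcode with one extra bit that records a prefix-set state.
OPCODE_TABLE = {}          # extended opcode -> handler name

def extended_opcode(opcode: int, prefix_present: bool) -> int:
    # Append one bit so that prefixed and unprefixed forms get distinct table slots.
    return (opcode << 1) | int(prefix_present)

OPCODE_TABLE[extended_opcode(0x90, False)] = "emulate_nop"
OPCODE_TABLE[extended_opcode(0x90, True)] = "emulate_nop_with_prefix_state"

def emulate(byte_stream: bytes):
    prefix = byte_stream[0] == 0x66            # assumed prefix byte for illustration
    opcode = byte_stream[1] if prefix else byte_stream[0]
    return OPCODE_TABLE[extended_opcode(opcode, prefix)]

print(emulate(bytes([0x90])))        # emulate_nop
print(emulate(bytes([0x66, 0x90])))  # emulate_nop_with_prefix_state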

US Pat. No. 10,338,942

PARALLEL PROCESSING OF DATA

Google LLC, Mountain Vie...

1. A computer-implemented method, comprising:executing a deferred, combined parallel operation, which is included in a dataflow graph that comprises deferred parallel data objects and deferred, combined parallel operations corresponding to a data parallel pipeline, to produce materialized parallel data objects corresponding to deferred parallel data objects, wherein the executing comprises:
determining an estimated size of data associated with the deferred, combined parallel operation being executed;
determining that the estimated size of data associated with the deferred, combined parallel operation does not exceed a threshold size based at least on accessing annotations in the dataflow graph that represent an estimate of the size of the data associated with the deferred, combined parallel operation; and
in response to determining that the estimated size does not exceed the threshold size, executing the deferred, combined parallel operation as a local, sequential operation.
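
Illustration only: a small Python sketch of the size check in the claim above, executing an operation as a local sequential loop when an annotated size estimate stays under a threshold and otherwise deferring to a stubbed distributed path. The threshold value and the operation/annotation structure are assumptions.

# Sketch: choose local sequential execution when annotated size estimates stay
# under a threshold; the node/annotation structure is assumed for illustration.
THRESHOLD_BYTES = 64 * 1024 * 1024

def execute(operation, annotations):
    estimated = annotations.get(operation["id"], {}).get("estimated_bytes", float("inf"))
    if estimated <= THRESHOLD_BYTES:
        # Run the combined parallel operation as a plain local loop.
        return [operation["fn"](x) for x in operation["inputs"]]
    return submit_to_cluster(operation)        # otherwise fall back to distributed execution

def submit_to_cluster(operation):
    raise NotImplementedError("placeholder for the remote/parallel path")

op = {"id": "square", "fn": lambda x: x * x, "inputs": [1, 2, 3]}
print(execute(op, {"square": {"estimated_bytes": 1024}}))   # [1, 4, 9]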

US Pat. No. 10,338,941

ADJUSTING ADMINISTRATIVE ACCESS BASED ON WORKLOAD MIGRATION

International Business Ma...

1. A computer-implemented method of migrating a workload from a source system to a target system, the method comprising:detecting migration of the workload from the source system to the target system, wherein the workload includes one or more virtual machines, and wherein the source system is an unallocated server and the target system is a server allocated to a system pool;
scanning a hardware management console (HMC) on the source system to determine an identity of an administrator associated with the workload;
adjusting access rights of the identified administrator to an HMC on the target system to provide access to the migrated workload based, at least in part, on access rights of the identified administrator to the source system;
adjusting access rights of the identified administrator to the HMC on the source system based on the migration of the workload from the source system to the target system, wherein adjusting access rights of the identified administrator to the HMC on the source system comprises:
determining whether to revoke access rights of the identified administrator to the source system based on whether or not the identified administrator owns workloads other than the migrated workload executing on the source system,
granting the administrator access rights to the server allocated to the system pool consistent with access rights of the administrator to the unallocated server,
scanning a management console of the system pool to determine categories of policies available in the system pool,
granting the administrator access rights with respect to policies within the categories that are analogous to policies applicable to the workload on the unallocated server,
revoking access rights of the administrator to tasks that conflict with the active policies defined for the system pool within the categories, and
upon determining that the administrator no longer owns a workload on the unallocated server subsequent to the migration, revoking access rights of the administrator to the unallocated server; and
executing the migrated workload on the target system based on the adjusted access rights of the identified administrator.

US Pat. No. 10,338,940

ADJUSTING ADMINISTRATIVE ACCESS BASED ON WORKLOAD MIGRATION

International Business Ma...

1. A non-transitory computer-readable storage medium storing an application, which, when executed on a processor, performs an operation of migrating a workload from a source system to a target system, the operation comprising:detecting migration of the workload from the source system to the target system, wherein the workload includes one or more virtual machines, and wherein the source system is an unallocated server and the target system is a server allocated to a system pool;
scanning a hardware management console (HMC) on the source system to determine an identity of an administrator associated with the workload;
adjusting access rights of the identified administrator to an HMC on the target system to provide access to the migrated workload based, at least in part, on access rights of the identified administrator to the source system;
adjusting access rights of the identified administrator to the HMC on the source system based on the migration of the workload from the source system to the target system, wherein adjusting access rights of the identified administrator to the HMC on the source system comprises:
determining whether to revoke access rights of the identified administrator to the source system based on whether or not the identified administrator owns workloads other than the migrated workload executing on the source system,
granting the administrator access rights to the server allocated to the system pool consistent with access rights of the administrator to the unallocated server,
scanning a management console of the system pool to determine categories of policies available in the system pool,
granting the administrator access rights with respect to policies within the categories that are analogous to policies applicable to the workload on the unallocated server,
revoking access rights of the administrator to tasks that conflict with the active policies defined for the system pool within the categories, and
upon determining that the administrator no longer owns a workload on the unallocated server subsequent to the migration, revoking access rights of the administrator to the unallocated server; and
executing the migrated workload on the target system based on the adjusted access rights of the identified administrator.

US Pat. No. 10,338,939

SENSOR-ENABLED FEEDBACK ON SOCIAL INTERACTIONS

Bose Corporation, Framin...

1. A computer-implemented method comprising:receiving location information associated with a location of a user-device, the location information comprising information representing a number of individuals interacting with a user of the user-device at the location;
receiving, from one or more sensor devices, user-specific information about the user associated with the user device;
estimating, by one or more processors, based on (i) the user-specific information and (ii) the location information, a set of one or more parameters indicative of social interactions of the user, the one or more parameters indicating, at least in part,
(i) a participation metric indicative of a relative amount of time the user speaks in a conversation with the number of individuals, and
(ii) a social outing metric indicative of a measure of spread of the user's physical locations;
determining that the relative amount of time the user speaks in the conversation with the number of individuals satisfies a first threshold condition;
determining that the measure of spread of the user's physical locations satisfies a second threshold condition;
responsive to determining that the relative amount of time the user speaks in the conversation with the number of individuals satisfies the first threshold condition, and determining that the measure of spread of the user's physical locations satisfies the second threshold condition, generating a signal representing informational output regarding the user's participation in the conversation and participation in social outings; and
presenting the informational output on a user-interface displayed on an output device, the user-interface configured to provide feedback to the user regarding the user's participation in the conversation and participation in social outings.
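
A simplified Python sketch, not from the patent, of how the two metrics named in the claim above might be computed and compared against thresholds; using standard deviation as the measure of spread and the specific threshold values are assumptions.

import statistics

def participation_metric(user_speech_seconds, total_conversation_seconds):
    # Relative amount of time the user speaks in the conversation.
    return user_speech_seconds / total_conversation_seconds

def social_outing_metric(locations):
    # Spread of visited locations, here the std. dev. of latitudes plus longitudes.
    lats, lons = zip(*locations)
    return statistics.pstdev(lats) + statistics.pstdev(lons)

p = participation_metric(120, 600)               # user spoke 20% of the time
s = social_outing_metric([(42.3, -71.1), (42.4, -71.0), (42.2, -71.2)])
if p < 0.3 and s < 0.5:                           # both threshold conditions satisfied (assumed values)
    print("feedback: consider speaking up more and varying your outings")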

US Pat. No. 10,338,938

PRESENTING ELEMENTS BASED ON CONFIGURATION OF DEVICE

Lenovo (Singapore) Pte. L...

1. An apparatus, comprising:a touch-enabled display;
a processor; and
storage accessible to the processor and bearing instructions executable by the processor to:
make a first determination that a device is being or has been transitioned between a laptop configuration and a tablet configuration; and
at least in part based on the first determination, make a second determination pertaining to at least one change in presentation of an element presented on the touch-enabled display relative to its presentation prior to the first determination, the element associated with an application, the change in presentation being from a first presentation to a second presentation;
wherein the instructions are executable by the processor to make the second determination at least in part based on a third determination that the application has launched a threshold number of launches each following a transition to one of the laptop configuration and tablet configuration.

US Pat. No. 10,338,937

MULTI-PANE GRAPHICAL USER INTERFACE WITH DYNAMIC PANES TO PRESENT WEB DATA

Red Hat, Inc., Raleigh, ...

1. A method comprising:receiving a selection of one or more system administration data items in a data item pane in a web application graphical user interface (GUI), the web application GUI comprising the data item pane and a multi-selection pane in a window of the web application GUI, the data item pane comprising a set of system administration data items of the web application;
determining a number of the one or more system administration data items from the set of system administration data items of the web application that have been selected in the data item pane;
responsive to determining that a single system administration data item of the set of system administration items has been selected in the data item pane, sliding, by a processing device, another pane with a display of permission information for the selected system administration data item in a horizontal direction away from the data item pane within the window of the web application GUI to cover the multi-selection pane in the window, wherein content of the data item pane is fully visible in the window of the web application GUI;
responsive to determining that a plurality of the system administration data items of the set of system administration items have been selected in the data item pane, providing one or more actions to remove permissions from each system administration data item of the selected plurality of system administration data items, the one or more actions being provided in the multi-selection pane;
receiving a request to perform an action of the one or more actions for the selected plurality of system administration data items in the data item pane to remove a permission from each of the plurality of system administration data items selected in the data item pane from the window of the web application GUI, the request corresponding to a selection of the action at the multi-selection pane; and
responsive to receiving the request, removing, by the processing device, the permission from each of the plurality of system administration data items selected in the data item pane from the window of the web application GUI.

US Pat. No. 10,338,936

METHOD FOR CONTROLLING SCHEDULE OF EXECUTING APPLICATION IN TERMINAL DEVICE AND TERMINAL DEVICE IMPLEMENTING THE METHOD

Sony Corporation, Tokyo ...

1. A terminal device comprising:a memory configured to store a first application and a second application, the first application and the second application being applications that have data related with the first application and the second application respectively preserved while in a standby mode; and
circuitry, comprising a processor, configured to
implement a first timer associated with the first application;
implement a second timer different from the first timer;
determine whether the first timer wakes up the processor;
in a case where the circuitry determines the first timer wakes up the processor, wake up the processor from the standby mode when the first timer measures a first predetermined amount of elapsed time and, after the processor is woken up, execute the first application and the second application on the processor, wherein the second application does not have a function to wake up the processor from the standby mode in the case where the circuitry determines the first timer wakes up the processor; and
in a case where the circuitry determines the first timer does not wake up the processor, wake up the processor from the standby mode when the second timer measures a second predetermined amount of elapsed time and, after the processor is woken up, execute the first application and the second application on the processor, wherein neither the first application nor the second application has a function to wake up the processor from the standby mode in the case where the circuitry determines the first timer does not wake up the processor.

US Pat. No. 10,338,935

CUSTOMIZING PROGRAM LOGIC FOR BOOTING A SYSTEM

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for generating customized program logic, the method comprising:determining, by a reporting unit of a target system, one or more hardware devices operatively connected with the target system;
sending, by the reporting unit, a first list of identifiers of the determined hardware devices to a server system;
receiving, by the server system, the first list of device identifiers;
automatically selecting from a set of drivers, by the server system, for each of the device identifiers in the received first list, at least one driver operable to control the identified device, thereby generating a sub-set of said set of drivers;
retrieving, by the server system, a core program logic being free of any drivers of the target system;
automatically complementing the core program logic with said driver sub-set to generate the customized program logic; and
deploying the customized program logic to the target system for loading of the customized program logic into a memory of the target system, the customized program logic configured to use the sub-set of drivers for downloading an operating system to the target system.
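
Illustrative only: a Python sketch of the server-side selection step in the claim above, matching reported device identifiers against a driver set and bundling the resulting sub-set with a driver-free core program. The device identifiers and driver names are invented for the example.

# Sketch of selecting a driver sub-set for the reported devices and complementing
# a driver-free core program with it. All names below are assumptions.
DRIVER_SET = {
    "pci:8086:15b8": "e1000e.ko",
    "pci:15b3:1017": "mlx5_core.ko",
    "usb:0951:1666": "usb_storage.ko",
}

def build_customized_program(reported_device_ids, core_program="core_boot_logic"):
    driver_subset = {dev: DRIVER_SET[dev] for dev in reported_device_ids if dev in DRIVER_SET}
    # Complement the driver-free core logic with the selected driver sub-set.
    return {"core": core_program, "drivers": sorted(driver_subset.values())}

print(build_customized_program(["pci:8086:15b8", "usb:0951:1666"]))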

US Pat. No. 10,338,934

INTER-OBJECT VALIDATION SYSTEM AND METHOD USING CHAINED SPECIALIZED CONFIGURATION APPLICATIONS

VCE IP Holding Company LL...

1. A configuration application computing system comprising:at least one processor;
at least one memory to store instructions that are executed by the at least one processor; and
a plurality of components that have been incorporated into a customized integrated computing system by a specialized configuration application controller that is executed by the at least one processor to:
receive a customized integrated computing system configuration comprising a plurality of design elements associated with the plurality of components of the customized integrated computing system;
select, from a plurality of specialized configuration applications executed by the specialized configuration application controller, a first specialized configuration application to determine whether at least a subset of the design elements in the customized integrated computing system configuration meet a first specified criteria associated with a behavior that a subset of the design elements provides to the customized integrated computing system, wherein the first specified criteria are in accordance with a particular market segment independent of customized integrated computing system component interoperability; and
determine that the customized integrated computing system configuration satisfies one or more second specified criteria associated with a behavior of at least a second subset of the design elements from a second specialized configuration application from the plurality of specialized configuration applications.

US Pat. No. 10,338,933

METHOD FOR GENERATING CUSTOM BIOS SETUP INTERFACE AND SYSTEM THEREFOR

Dell Products, LP, Round...

1. A computer implemented method comprising:actuating a predetermined key during boot initialization of an information handling system to display a basic input/output system (BIOS) setup interface;
determining that a first configuration option is not available at the BIOS setup interface;
exiting the BIOS setup interface to allow the information handling system to complete the boot initialization and to load an operating system;
invoking, by a user of the information handling system, a runtime application, the runtime application identifying a plurality of configuration options, including the first configuration option; and
selecting, by the user at an interface of the runtime application, the first configuration option from the plurality of configuration options, the selecting causing a software agent to update BIOS firmware to include the first configuration option at the BIOS setup interface.

US Pat. No. 10,338,932

BOOTSTRAPPING PROFILE-GUIDED COMPILATION AND VERIFICATION

Google LLC, Mountain Vie...

1. A method, comprising:receiving, at a server computing device, a request to provide a software package for a particular software application;
determining composite application execution information (AEI) for at least the particular software application using the server computing device, the composite AEI comprising a composite list of software for at least the particular software application, wherein the composite list of software comprises data about software methods of the particular software application executed by at least one computing device other than the server computing device, the software methods of the particular software application including a frequently-executed software method and an initialization software method;
extracting particular AEI related to the particular software application from the composite AEI using the server computing device, the particular AEI providing a compiler hint for indicating to compile the frequently-executed software method before runtime of the particular software application and for indicating to compile the initialization software method during runtime of the particular software application;
generating the software package using the server computing device, wherein the software package includes the particular software application and the particular AEI; and
providing the software package using the server computing device.

US Pat. No. 10,338,931

APPROXIMATE SYNCHRONIZATION FOR PARALLEL DEEP LEARNING

INTERNATIONAL BUSINESS MA...

1. A computer program product for deep learning, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processing component to cause the processing component to:generate, by the processing component, first output data based on first input data associated with machine learning and received by the processing component and one or more other processing components;
transmit, by the processing component, the first output data to a first processing component from the one or more other processing components, wherein the first processing component is determined by the processing component or the one or more other processing components, wherein the processing component and the first processing component are operated in synchronization for the deep learning, and wherein the first output data is stored in a memory operatively coupled to the processing component;
generate, by the processing component, second output data based on the first output data and communication data that is generated by the first processing component; and
transmit, by the processing component, the second output data to a second processing component from the processing component, wherein the second processing component is determined by the processing component or the one or more other processing components, and wherein the processing component and the second processing component are operated in synchronization for the deep learning.

US Pat. No. 10,338,930

DUAL-RAIL DELAY INSENSITIVE ASYNCHRONOUS LOGIC PROCESSOR WITH SINGLE-RAIL SCAN SHIFT ENABLE

Eta Compute, Inc., Westl...

1. A self-timed processor comprising:combinatorial logic comprising multi-rail delay insensitive asynchronous logic (DIAL) to output one or more multi-rail data values to a multiplexer;
a test pattern input to output a test pattern bit stream of multi-rail test values to the multiplexer;
the multiplexer comprising Boolean logic to output one or more multi-rail multiplexed values to a latch, the multiplexer having a single rail selector input to select whether the multi-rail multiplexed values are the multi-rail data values or the multi-rail test values;
the Boolean logic comprising:
a first multiplexer logic that is I) a first Boolean AND gate to receive a) a first-rail input of the multi-rail data values, and b) an inverse of a first Boolean input from the single rail selector input; II) a second Boolean AND gate to receive a) a first-rail input of the multi-rail test values, and b) the first Boolean input from the single rail selector input; and III) a Boolean OR gate to receive outputs of the first and second Boolean AND gates of the first multiplexer logic; and
a second multiplexer logic that is I) a first Boolean AND gate to receive a) a second-rail input of the multi-rail data values, and b) the inverse of the first Boolean input from the single rail selector input; II) a second Boolean AND gate to receive a) a second-rail input of the multi-rail test values, and b) the first Boolean input from the single rail selector input; and III) a Boolean OR gate to receive outputs of the first and second Boolean AND gates of the second multiplexer logic.
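
A bit-level Python model of the per-rail Boolean logic recited in the claim above, where each output rail is (data AND NOT select) OR (test AND select) with a single-rail select input; this is a behavioral sketch, not a hardware description.

# Behavioral model of the claimed per-rail multiplexer.
def mux_rail(data_bit: int, test_bit: int, sel: int) -> int:
    # out = (data AND NOT sel) OR (test AND sel)
    return (data_bit & (1 - sel)) | (test_bit & sel)

def dual_rail_mux(data, test, sel):
    # data and test are (rail0, rail1) pairs of a dual-rail encoded value.
    return (mux_rail(data[0], test[0], sel), mux_rail(data[1], test[1], sel))

print(dual_rail_mux((0, 1), (1, 0), sel=0))   # (0, 1): combinatorial data passes through
print(dual_rail_mux((0, 1), (1, 0), sel=1))   # (1, 0): scan/test pattern selected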

US Pat. No. 10,338,929

METHOD FOR HANDLING EXCEPTIONS IN EXCEPTION-DRIVEN SYSTEM

Imagination Technologies ...

1. A method of processing exceptions in an exception-driven computing-based system, the method comprising:executing, using a processor of the system, a main program which causes the system to operate first in an initialisation mode and then in an exception-driven mode;
detecting, using the processor, that an exception has occurred;
in response to detecting that an exception has occurred, executing, using the processor, one of one or more sets of exception handling instructions using a main register set; and
wherein when the system is operating in the initialisation mode the set of exception handling instructions that are executed invoke a first exception handler that causes the processor to save the main register set prior to processing the exception and restore the main register set after processing the exception, and when the system is operating in the exception-driven mode the set of exception handling instructions that are executed invoke a second exception handler that does not cause the processor to save and restore the main register set.

US Pat. No. 10,338,928

UTILIZING A STACK HEAD REGISTER WITH A CALL RETURN STACK FOR EACH INSTRUCTION FETCH

Oracle International Corp...

1. A method comprising:fetching a first instruction; and
in response to detecting the first instruction is a call instruction based on decode information generated from an opcode fetched with the first instruction:
determining a first return address corresponding to the call instruction;
storing the first return address in a stack head register; and
pushing the first return address onto a call return stack that is separate from the stack head register; and
on every instruction fetch:
reading the stack head register to obtain a speculative return address stored therein; and
storing the speculative return address in a temporary storage location prior to determining whether a corresponding fetched instruction comprises a return instruction.
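
A behavioral Python sketch of the arrangement in the claim above: a call pushes its return address into both a stack head register and a separate call return stack, and every fetch speculatively reads the head register into temporary storage. The class and method names are assumptions.

class ReturnPredictor:
    def __init__(self):
        self.stack_head = None     # stack head register
        self.call_stack = []       # call return stack, separate from the head register
        self.temp = None           # temporary storage read on every fetch

    def on_fetch(self):
        # Read the head register before knowing whether this fetch is a return.
        self.temp = self.stack_head

    def on_call(self, return_address):
        self.stack_head = return_address
        self.call_stack.append(return_address)

    def on_return(self):
        predicted = self.temp                  # speculative target captured at fetch time
        self.call_stack.pop()
        self.stack_head = self.call_stack[-1] if self.call_stack else None
        return predicted

p = ReturnPredictor()
p.on_fetch(); p.on_call(0x1004)
p.on_fetch(); p.on_call(0x2008)
p.on_fetch(); print(hex(p.on_return()))        # 0x2008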

US Pat. No. 10,338,927

METHOD AND APPARATUS FOR IMPLEMENTING A DYNAMIC OUT-OF-ORDER PROCESSOR PIPELINE

Intel Corporation, Santa...

1. An apparatus comprising:an instruction fetch unit to fetch Very Long Instruction Words (VLIWs) in program order from memory, each of the VLIWs comprising a plurality of reduced instruction set computing (RISC) instruction syllables grouped into the VLIWs in an order which removes data-flow dependencies and false output dependencies between the syllables, and wherein the plurality of RISC instruction syllables in the VLIWs include one or more false anti-dependencies;
a decode unit to decode the VLIWs in program order and output the syllables of each decoded VLIW in parallel; and
an out-of-order execution engine to execute at least some of the syllables in parallel with other syllables, wherein at least some of the syllables are to be executed in a different order than the order in which they are received from the decode unit.

US Pat. No. 10,338,926

PROCESSOR WITH CONDITIONAL INSTRUCTIONS

BULL SAS, Les Clayes-sou...

1. A computer implemented method for processing machine instructions by a physical processor, comprising:receiving, from a memory, at least one machine instruction, wherein the at least one machine instruction comprises a first identification of first and third operations to execute and a conditional prefix representing a condition for verifying whether to execute at least the first and third operations, wherein the conditional prefix comprises:
a second identification of a value of a predicate register, and
a third identification of a second operation to perform on the value of the predicate register for the verification, wherein the second operation comprises a wait until the value of the predicate register is met;
executing, using a processing unit, a comparison instruction, a resulting value of which is stored in the predicate register of the received machine instructions such that the value of the predicate register is met;
evaluating, using a management module, the conditional prefix, wherein evaluating the conditional prefix comprises the verification of the value of the predicate register;
executing, using a processing unit, the first operation identified in the at least one machine instruction, according to whether the condition is verified; and
executing the third operation identified in the at least one machine instruction, according to whether the condition is verified.

US Pat. No. 10,338,925

TENSOR REGISTER FILES

Microsoft Technology Lice...

1. An apparatus comprising:a plurality of tensor operation calculators each configured to perform a type of tensor operation of a plurality of types of tensor operations, the plurality of tensor operation calculators comprising multiple instances of tensor operation calculators configured to perform a first type of the tensor operations;
a plurality of tensor register files, each of the tensor register files being associated with one of the plurality of tensor operation calculators; and
logic configured to store tensors in the plurality of tensor register files in accordance with the type of tensor operation to be performed on the respective tensors;
wherein the logic is further configured to store multiple separate copies of tensors, for which the apparatus is to perform the first type of tensor operation, in each of the dedicated tensor register files that are associated with the multiple instances of the tensor operation calculators configured to perform the first type of tensor operation, to the exclusion of others of the plurality of tensor register files.

US Pat. No. 10,338,924

CONFIGURABLE EVENT SELECTION FOR MICROCONTROLLER TIMER/COUNTER UNIT CONTROL

RENESAS ELECTRONICS CORPO...

1. A microcontroller comprising:a central processing unit (CPU);
a memory for storing instructions executable by the CPU;
first and second input pins for receiving first and second external event signals, respectively, from one or more devices that are external to the microcontroller;
a first timer/counter (T/C) channel coupled to receive control values generated by the CPU in response to executing the instructions, and further coupled to receive the first external event signal, and the second external event signal;
wherein each of the first and second external event signals is a binary signal that can transition between a first state and a second state;
wherein the first T/C channel is configured to generate a plurality of event signals based on the first external event signal or the second external event signal;
wherein the first T/C channel is configured to select one of the plurality of event signals based on one or more of the control values;
wherein the first T/C channel is configured to generate a first control signal as a function of the selected event signal;
wherein a first function of the first T/C channel can be controlled by the first control signal.

US Pat. No. 10,338,923

BRANCH PREDICTION PATH WRONG GUESS INSTRUCTION

INTERNATIONAL BUSINESS MA...

1. A method comprising:receiving, with a processor, a branch wrong guess instruction located at a branch wrong guess instruction address;
determining, by the processor, whether any branch address in a branch prediction mechanism matches the branch wrong guess instruction address;
subsequent to the determining whether any branch address in the branch prediction mechanism matches the branch wrong guess instruction address, receiving, by the processor, an end branch wrong guess instruction that includes the branch wrong guess instruction address, wherein the end branch wrong guess instruction is distinct and separate from the branch wrong guess instruction;
responsive to determining that the branch wrong guess instruction address does not match any branch address in the branch prediction mechanism:
inducing a branch prediction error by prefetching an instruction immediately sequentially following the branch wrong guess instruction address; and
decoding and executing instructions in a state invariant region, wherein the state invariant region is a two-instruction state invariant region comprising decode wrong stream instructions, and the state invariant region immediately sequentially follows the branch wrong guess instruction and immediately sequentially precedes the end branch wrong guess instruction; and
prefetching, by the processor, an instruction at a branch target address in response to the end branch wrong guess instruction, even if the branch wrong guess instruction has not yet been executed.

US Pat. No. 10,338,922

SOFT SENSOR DEVICE FOR ACQUIRING A STATE VARIABLE BY CALCULATION USING A PLURALITY OF PROCESSOR CORES

TOYOTA JIDOSHA KABUSHIKI ...

1. A soft sensor device that acquires a state variable x by calculation using a plurality of processor cores, in which a value of the state variable x changes in relation to an input variable u that is observable, and a time derivative dx/dt of the state variable x is expressed by a function f(x, g(x), u) having the state variable x, an inner function g(x) having the state variable x as an independent variable, and the input variable u, as independent variables, the soft sensor device, comprising:a first arithmetic operation device that is an arithmetic operation device configured to perform an arithmetic operation by using one or a plurality of processor cores, and is programmed to calculate a function f(x, v, u) by using respective values of the state variable x, an intervening variable v that is defined by the inner function g(x), and the input variable u, and further obtain a value of the state variable x by time-integrating a value of the function f(x, v, u); and
a second arithmetic operation device that is an arithmetic operation device configured to perform an arithmetic operation by using one or a plurality of processor cores that is or are different from the processor core or the processor cores used in the first arithmetic operation device, and is programmed to calculate the intervening variable v,
wherein the first arithmetic operation device is programmed to calculate the function f(x, v, u) by using a value of the state variable x that is calculated in the first arithmetic operation device in processing of a previous time, a value of the intervening variable v that is calculated in the second arithmetic operation device in the processing of the previous time, and a value of the input variable u that is inputted in processing of this time.
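
Illustration only: a sequential Python sketch of the split described in the claim above, where one arithmetic path integrates x using f(x, v, u) while the other evaluates the intervening variable v = g(x), each consuming the other's previous-step value. The particular f, g, time step, and input are assumed.

# Sequential sketch of the claimed split between two arithmetic paths.
def f(x, v, u):        # time derivative dx/dt
    return -x + v + u

def g(x):              # inner function defining the intervening variable v
    return 0.5 * x

dt, x, v = 0.01, 1.0, g(1.0)
for step in range(100):
    u = 0.2                                    # observed input at this step
    x_prev, v_prev = x, v
    x = x_prev + dt * f(x_prev, v_prev, u)     # "first arithmetic operation device"
    v = g(x_prev)                              # "second arithmetic operation device" (previous-step x)
print(round(x, 4))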

US Pat. No. 10,338,921

ASYNCHRONOUS INSTRUCTION EXECUTION APPARATUS WITH EXECUTION MODULES INVOKING EXTERNAL CALCULATION RESOURCES

Huawei Technologies Co., ...

13. An asynchronous instruction execution method, wherein the method is executed by an asynchronous instruction execution apparatus, the asynchronous instruction execution apparatus comprises a vector execution unit control (VXUC) module and n vector execution unit data (VXUD) modules, n is a positive integer, the n VXUD modules are cascaded and separately connected to the VXUC module, a bit width of data processed by the asynchronous instruction execution apparatus is M, a bit width of each VXUD module is N, n=M/N, and the method comprises:decoding, by the VXUC module, an instruction from a vector instruction fetcher (VIF), to obtain decoded instruction information;
managing, by the VXUC module according to the decoded instruction information obtained by a decoding submodule, token transfer between the asynchronous instruction execution apparatus and another asynchronous instruction execution apparatus and token transfer inside the asynchronous instruction execution apparatus;
when the decoded instruction information obtained by the decoding submodule indicates that an external calculation resource needs to be invoked, generating, by the VXUC module, a clock pulse signal corresponding to the external calculation resource, and sending control information comprised in the decoded instruction information and the clock pulse signal to a first VXUD module of the n VXUD modules;
sending, by the first VXUD module, the clock pulse signal and the control information to the external calculation resource, to enable the external calculation resource to perform a data calculation according to the clock pulse signal and the control information; and
receiving, by the first VXUD module, a data calculation result from the external calculation resource.

US Pat. No. 10,338,920

INSTRUCTIONS AND LOGIC FOR GET-MULTIPLE-VECTOR-ELEMENTS OPERATIONS

Intel Corporation, Santa...

1. A processor, comprising:a decoder to decode an instruction, the instruction including an opcode and fields to identify a first source vector register, a second source vector register, and a destination vector register;
the first source vector register to store data elements of at least two tuples, each tuple to include at least three data elements;
the second source vector register to store data elements of at least two tuples, each tuple to include at least three data elements;
execution circuitry to execute the decoded instruction, the execution circuitry to:
extract a respective data element from a specific position within each tuple in the first source vector register, the specific position to be dependent on an encoding for the instruction;
extract a respective data element from the specific position within each tuple in the second source vector register;
store the data elements to be extracted from the first source vector register and the second source vector register in contiguous locations in the destination vector register; and
a retirement unit to retire the executed decoded instruction.
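
A simplified Python sketch of the extract-and-pack behavior the claim above describes: the element at one position of every three-element tuple in two source registers is gathered into contiguous destination locations. The register contents and the tuple size are illustrative.

# Pull the element at one position from every tuple in two source "registers"
# and pack the results contiguously into the destination.
def get_multiple_vector_elements(src1, src2, position, tuple_size=3):
    def extract(src):
        return [src[i + position] for i in range(0, len(src), tuple_size)]
    return extract(src1) + extract(src2)       # contiguous destination layout

# Two source registers, each holding two (x, y, z) tuples.
src1 = [1, 2, 3, 4, 5, 6]
src2 = [7, 8, 9, 10, 11, 12]
print(get_multiple_vector_elements(src1, src2, position=1))   # [2, 5, 8, 11] (all y components)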

US Pat. No. 10,338,919

GENERALIZED ACCELERATION OF MATRIX MULTIPLY ACCUMULATE OPERATIONS

NVIDIA Corporation, Sant...

1. A processor, comprising:a datapath configured to execute a matrix multiply and accumulate (MMA) operation to generate a plurality of elements of a result matrix at an output of the datapath,
wherein each element in the plurality of elements of the result matrix is generated by calculating at least one dot product of corresponding pairs of vectors associated with matrix operands specified in an instruction for the MMA operation, and
wherein a dot product operation for calculating each dot product in the at least one dot product comprises:
generating a plurality of partial products by multiplying each element of a first vector with a corresponding element of a second vector,
aligning the plurality of partial products based on the exponents associated with each element of the first vector and each element of the second vector, and
accumulating the plurality of aligned partial products in parallel into a result queue utilizing at least one adder.
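
A software sketch, not a hardware model, of the MMA step in the claim above: each result element accumulates a dot product of a row of one operand with a column of the other, with the partial products formed explicitly before summation; exponent alignment is abstracted away here.

def mma(A, B, C):
    # Each result element is a dot product of a row of A with a column of B,
    # accumulated into the corresponding element of C.
    n, k, m = len(A), len(A[0]), len(B[0])
    R = [row[:] for row in C]
    for i in range(n):
        for j in range(m):
            partial_products = [A[i][p] * B[p][j] for p in range(k)]
            R[i][j] += sum(partial_products)   # accumulate the partial products
    return R

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[0.5, 0.5], [0.5, 0.5]]
print(mma(A, B, C))   # [[19.5, 22.5], [43.5, 50.5]]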

US Pat. No. 10,338,918

VECTOR GALOIS FIELD MULTIPLY SUM AND ACCUMULATE INSTRUCTION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method of executing instructions, the computer-implemented method comprising:obtaining, by a processor, an instruction for execution, the instruction having associated therewith:
an opcode identifying a Vector Galois Field Multiply Sum and Accumulate operation; and
a plurality of operands including a first operand, a second operand, a third operand, and a fourth operand; and
a control to specify a size of elements of the second operand and the third operand, wherein the control is specified by a mask associated with the instruction; and
executing the instruction, the executing comprising:
multiplying one or more elements of the second operand with one or more elements of the third operand using carryless multiplication to obtain a plurality of products;
performing a first mathematical operation on the plurality of products to obtain a first result;
performing a second mathematical operation on the first result and one or more selected elements of the fourth operand to obtain a second result; and
placing the second result in the first operand.
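
Illustrative Python sketch of the operation in the claim above on small element vectors: corresponding elements are multiplied carrylessly (GF(2) polynomial multiplication), the products are XOR-summed, and the sum is combined with an accumulator element. Element widths and operand layout are assumptions.

def clmul(a: int, b: int) -> int:
    """Carryless multiplication of two integers (polynomial multiply over GF(2))."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def vgfmsa(second, third, fourth_element):
    products = [clmul(x, y) for x, y in zip(second, third)]
    summed = 0
    for p in products:          # "sum" of carryless products as an XOR reduction
        summed ^= p
    return summed ^ fourth_element

print(hex(vgfmsa([0x53, 0xCA], [0x02, 0x03], 0xFF)))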

US Pat. No. 10,338,917

METHOD, APPARATUS, AND SYSTEM FOR READING AND WRITING FILES

Alibaba Group Holding Lim...

1. A method comprising:receiving an access request;
acquiring file identifier information and first user identifier information according to the access request;
querying a cache server for a locked file corresponding to the file identifier information;
acquiring a first original file corresponding to the file identifier information from a file server in response to determining that the locked file corresponding to the file identifier information is not found;
generating a locked file according to the first user identifier information and the first original file;
submitting the locked file to the cache server;
receiving an editing operation for the first original file; and
submitting the edited first original file to the cache server to request the cache server to update the locked file according to the edited first original file.

US Pat. No. 10,338,916

SOFTWARE VERSION FINGERPRINT GENERATION AND IDENTIFICATION

SAP SE, Walldorf (DE)

1. A computer-implemented method comprising:accessing, at a server computer, a source code repository comprising a plurality of versions of code for a software component;
analyzing, by the server computer, the plurality of versions of code of the component to compute values of metrics to identify each version of code for the software component;
wherein the metrics are stored in a metrics matrix along with the plurality of versions of code;
analyzing, by the server computer, the metrics to determine a subset of the metrics to use as a fingerprint definition to identify each version of the code for the software component by:
(1) determining a best candidate metric among the metrics in the metrics matrix to identify the most versions of the plurality of versions of code;
(2) adding the best candidate metric to a set of optimal metrics;
(3) determining which versions of code of the plurality of versions of code can be identified with the best candidate metric;
(4) removing the versions of code of the plurality of versions of code that can be identified with the best candidate metric and the optimal metrics from the metrics matrix to generate a reduced metrics matrix; and
(5) repeating (1)-(4) of the process on the reduced metrics matrix until all versions of the plurality of versions of code are uniquely identified by a combination of selected metrics or until there are one or more versions of code that cannot be uniquely identified using the metrics; and
based on determining that all versions of the plurality of versions of code are uniquely identified by the combination of selected metrics, and wherein the set of optimal metrics is the subset of the metrics to use as the fingerprint definition, generating, by the server computer, a fingerprint for each version of code for the software component, using the fingerprint definition;
generating, by the server computer, a fingerprint matrix with the fingerprint for each version of code for the software component; and
storing, by the server computer, the fingerprint definition and the fingerprint matrix.
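
A hedged Python sketch of the greedy selection loop in steps (1)-(5) of the claim above: repeatedly pick the metric whose values single out the most remaining versions, keep it, drop the versions it identifies, and stop when every version is covered or no candidate helps. The example metric values are invented.

metrics_matrix = {              # metric -> {version: value}
    "loc":       {"v1": 100, "v2": 100, "v3": 150, "v4": 180},
    "n_classes": {"v1": 10,  "v2": 12,  "v3": 12,  "v4": 12},
    "n_methods": {"v1": 40,  "v2": 41,  "v3": 40,  "v4": 44},
}

def identified_by(metric_values, versions):
    # Versions whose value for this metric is unique among the remaining versions.
    counts = {}
    for v in versions:
        counts.setdefault(metric_values[v], []).append(v)
    return {v for vals in counts.values() if len(vals) == 1 for v in vals}

remaining = set(next(iter(metrics_matrix.values())))
optimal = []
while remaining:
    candidates = [m for m in metrics_matrix if m not in optimal]
    if not candidates:
        break
    best = max(candidates, key=lambda m: len(identified_by(metrics_matrix[m], remaining)))
    covered = identified_by(metrics_matrix[best], remaining)
    if not covered:
        break                    # remaining versions cannot be uniquely identified
    optimal.append(best)         # add the best candidate metric to the set of optimal metrics
    remaining -= covered         # remove the versions it identifies
print(optimal, "unidentified:", remaining)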

US Pat. No. 10,338,915

AUTOMATED IDENTIFICATION OF CODE DEPENDENCY

SAP SE, Walldorf (DE)

1. One or more non-transitory computer-readable storage media storing computer-executable instructions for causing a computing system to perform processing to identify a code relationship between code versions, the processing comprising:receiving a first code update to a first code version, the first code update comprising at least a first code change, the at least a first code change being specified by (1) operations to locate a code segment to be changed; and one or more of (2) operations to modify the located code segment or code proximate the located code segment; (3) operations to remove code from the located code segment or code proximate the located code segment; or (4) operations to add code to the located code segment or code proximate the located code segment;
determining at least one software object changed by the first code update;
identifying a second code update of a plurality of code updates that modifies the at least one software object;
generating a second code version, the second code version being a reference code version, by undoing at least the second code update, the second code update having been previously applied to the first code version, the second code update comprising at least a second code change;
the at least a second code change being specified by (1) operations to locate a code segment to be changed; and one or more of (2) operations to modify the located code segment or code proximate the located code segment; (3) operations to remove code from the located code segment or code proximate the located code segment; or (4) operations to add code to the located code segment or code proximate the located code segment;
wherein undoing the at least a second code change comprises one or more of (1) adding previously deleted code; (2) removing previously added code; or (3) reversing a previous code modification; and
determining whether the first code update can be implemented on the second version, the determining comprising attempting to apply the first code update to the second code version, and wherein it is determined that the first code update cannot be implemented on the second code version if the code segment to be changed cannot be located in the second code version.

US Pat. No. 10,338,914

DYNAMICALLY APPLYING A PATCH TO A SHARED LIBRARY

Hewlett Packard Enterpris...

1. A non-transitory machine-readable storage medium comprising instructions to dynamically apply a patch to a shared library, the instructions executable by a processor to:perform an initial testing operation related to a computer application that refers to a shared library;
in response to the initial testing operation, invoke an external process to control target processes of the computer application, wherein the target processes refer to the shared library;
direct the external process to:
abort execution of the target processes;
examine a user stack for threads corresponding to the target processes to determine whether the target processes are at a safe point; and
upon a determination that the target processes are at an unsafe point, resume the target processes for an interval prior to a reexamination;
in response to a request from a target process, amongst the target processes determined to be at the safe point, to access a target function in the shared library, direct the request to a special function in a dynamic loader, wherein the target function is a function to be patched, and wherein the target process is dynamically updated when it invokes the target function in the shared library subsequent to reaching the safe point;
direct the special function to determine whether a shared patch library, comprising a patched version of the target function or a new function, is loaded for the target process; and in response to a determination that the shared patch library is not loaded for the target process, direct the dynamic loader to load the shared patch library only for the target process, and route the request for the target function to the patched version of the target function or the new function, in the shared patch library.

US Pat. No. 10,338,913

ACTIVE ADAPTATION OF NETWORKED COMPUTE DEVICES USING VETTED REUSABLE SOFTWARE COMPONENTS

Archemy, Inc., New York,...

1. A non-transitory processor-readable medium storing code representing instructions to cause a processor to:receive, at a processor, a signal representing a text description of at least one system capability request made by a user and associated with a system capability;
convert the text description of the at least one system capability request into a normalized description of the at least one system capability request;
query a repository via a query, using a search algorithm of the processor and in response to receiving the signal, to identify a plurality of candidate application software units, the repository stored in a memory operably coupled to the processor, the query referencing the normalized description of the at least one system capability request;
send a signal to cause display of a representation of each candidate application software unit from the plurality of candidate application software units to the user;
receive a user selection of a candidate application software unit from the plurality of candidate application software units, the user selection made by the user; and
cause deployment of the user-selected candidate application software unit to at least one remote compute device in response to receiving the user selection, such that the user-selected candidate application software unit is integrated into a software package of the at least one remote compute device to define a modified software package that is configured to provide the system capability.

US Pat. No. 10,338,912

UPDATING SOFTWARE BASED ON FUNCTION USAGE STATISTICS

Metrological Media Innova...

1. A computer-implemented method of measuring and updating a software program in a client system, the software program comprising plural functions, the method comprising receiving in a server system, statistics on the usage of certain functions in the software program in a predetermined time period,
generating and issuing an optimized update to the software to the client system based on the statistics, the optimized update subsequently adopted by the client system,
deriving with the server system a function mapping table mapping functions of the software program to abbreviations, said mapping being arranged in order of frequency of use of the functions as indicated in previously received statistics, and
communicating said table to the client system, where said statistics received from the client system refer to the abbreviations instead of the functions.
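
Illustration only: a short Python sketch of the mapping table described in the claim above, assigning the shortest abbreviations to the most frequently used functions from earlier statistics so that later usage reports can refer to the abbreviations; the function names and counts are assumptions.

from collections import Counter

previous_stats = Counter({"render_frame": 950, "load_asset": 300, "open_settings": 12})

def build_mapping_table(stats):
    table = {}
    for index, (function, _count) in enumerate(stats.most_common()):
        table[function] = f"f{index}"        # shortest codes go to the most-used functions
    return table

table = build_mapping_table(previous_stats)
print(table)                                  # {'render_frame': 'f0', 'load_asset': 'f1', ...}

# Client-side report in a later period, keyed by abbreviation instead of function name.
report = {table["render_frame"]: 1021, table["open_settings"]: 4}
print(report)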

US Pat. No. 10,338,911

METHOD AND DEVICE FOR DOWNLOADING SOFTWARE VERSION, AND STORAGE MEDIUM

ZTE Corporation, Shenzhe...

1. A method for downloading a software version, comprising:determining, by electronic equipment, n partitions to be downloaded of a software version to be sent, n≥1;
indicating, by the electronic equipment, a mobile terminal to format a mapping partition corresponding to an mth partition to be downloaded in the n partitions to be downloaded in the mobile terminal, n≥m≥1;
receiving, by the mobile terminal, a formatting indication from the electronic equipment, and formatting the mapping partition corresponding to the mth partition to be downloaded of the software version from the electronic equipment in the mobile terminal according to the formatting indication, wherein formatting comprises clearing all data in an existing partition of the mobile terminal;
determining that formatting of the mapping partition corresponding to the mth partition to be downloaded is completed and succeeds;
in response to determining that the formatting is completed and succeeds, reading, by the electronic equipment, a first data block with a preset address length from the mth partition to be downloaded, and judging whether the first data block of the mth partition to be downloaded is an all-0 data block or not;
in response to judging, packing, by the electronic equipment, data of the mth partition to be downloaded into an all-0 data packet or a non-0 data packet, and sending the all-0 data packet or the non-0 data packet;
receiving, by the mobile terminal, the all-0 data packet and not performing a writing operation on the all-0 data packet, or receiving the non-0 data packet and writing the non-0 data packet into the mapping partition corresponding to the mth partition to be downloaded; and
in response to determining that m=n, sending, by the electronic equipment, a downloading completion instruction to the mobile terminal.
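
A minimal Python sketch, not from the patent, of the all-zero check and packing step in the claim above: the first block of a partition image is inspected and the partition is packed either as an all-0 marker (so no write is performed) or as a raw data packet. The block size is an assumed value.

BLOCK_SIZE = 4096                       # preset address length (assumed)

def pack_partition(partition_bytes: bytes):
    first_block = partition_bytes[:BLOCK_SIZE]
    if first_block == b"\x00" * len(first_block):
        return {"type": "all_zero", "length": len(partition_bytes)}   # nothing to write
    return {"type": "data", "payload": partition_bytes}

print(pack_partition(b"\x00" * 8192)["type"])               # all_zero
print(pack_partition(b"\x7fELF" + b"\x00" * 100)["type"])   # data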

US Pat. No. 10,338,909

SYSTEM AND METHOD OF DISTRIBUTING SOFTWARE UPDATES

AO Kaspersky Lab, Moscow...

1. A method for distributing software updates to terminal nodes in a network comprising:installing, by a network administration server, on a plurality of terminal nodes in the network, security applications configured to at least manage security of said terminal nodes;
receiving, by the network administration server, from the security applications installed on the plurality of terminal nodes in the network, criteria characterizing the terminal nodes on which said security applications are installed and identifiers of other terminal nodes in broadcast domains of the terminal nodes on which said security applications are installed, wherein the criteria comprises at least information assessing the vulnerability of the terminal node to a malware attack;
based on the criteria characterizing the terminal nodes, selecting, by the network administration server, terminal nodes to be used as active and passive update agents for each broadcast domain, wherein an active update agent is configured to receive software updates from the network administration server and other update agents, and a passive update agent is configured to receive software updates from other update agents only; and
transmitting, by the network administration server, to the security applications of the selected active update agents for each broadcast domain in the network, one or more software updates for further distribution of the software updates by the active update agents to one or more passive update agents and the plurality of terminal nodes in the same broadcast domain.
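
Illustration only: a Python sketch of one plausible selection rule for the claim above, grouping terminal nodes by broadcast domain and ranking them by a reported vulnerability score to pick active and passive update agents; the scoring field and the ranking rule are assumptions, not the patent's criteria.

nodes = [
    {"id": "pc-1", "domain": "10.0.1.0/24", "vulnerability": 0.2},
    {"id": "pc-2", "domain": "10.0.1.0/24", "vulnerability": 0.7},
    {"id": "pc-3", "domain": "10.0.2.0/24", "vulnerability": 0.4},
]

def select_agents(nodes):
    by_domain = {}
    for n in nodes:
        by_domain.setdefault(n["domain"], []).append(n)
    plan = {}
    for domain, members in by_domain.items():
        ranked = sorted(members, key=lambda n: n["vulnerability"])
        plan[domain] = {"active": ranked[0]["id"],                 # least vulnerable node
                        "passive": [n["id"] for n in ranked[1:2]]} # next node, if any
    return plan

print(select_agents(nodes))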

US Pat. No. 10,338,908

MODULARIZED APPLICATION FRAMEWORK

SAP SE, Walldorf (DE)

1. A computer-implemented method, comprising:receiving, at a framework, first core module data from a first application module;
receiving, at the framework, a contract file from a second application module in response to an interaction invoking the second application module, wherein the contract file specifies that the second application module is subscribed to the first application module;
receiving, at the framework, a first request from the second application module for the first core module data;
in response to receiving the first request from the second application module for the first core module data, transmitting the first core module data from the framework to the second application module;
receiving, at the framework, a second request from the second application module for second core module data associated with the first application module;
determining, at the framework, that the first application module is unresponsive;
in response to the determining, generating, at the framework, substitute core module data; and
transmitting, from the framework, the substitute core module data to the second application module as the second core module data.
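
The substitute-data behavior described above can be pictured with a small Python broker; the method names (publish, register_contract, request) and the shape of the substitute payload are assumptions for illustration, not the framework's actual API.

class Framework:
    """Toy broker: modules publish core data; subscribers declared in a contract
    may request it; if the publisher is unresponsive, substitute data is returned."""

    def __init__(self):
        self.core_data = {}       # module name -> latest core data
        self.contracts = {}       # subscriber -> set of modules it may read
        self.responsive = {}      # module name -> liveness flag

    def publish(self, module, data):
        self.core_data[module] = data
        self.responsive[module] = True

    def register_contract(self, subscriber, subscribed_to):
        self.contracts[subscriber] = set(subscribed_to)

    def mark_unresponsive(self, module):
        self.responsive[module] = False

    def request(self, subscriber, module):
        if module not in self.contracts.get(subscriber, set()):
            raise PermissionError("no contract for this module")
        if self.responsive.get(module):
            return self.core_data[module]
        return {"module": module, "substitute": True, "payload": None}  # substitute core data

fw = Framework()
fw.publish("module_a", {"orders": 3})
fw.register_contract("module_b", ["module_a"])
print(fw.request("module_b", "module_a"))   # real core module data
fw.mark_unresponsive("module_a")
print(fw.request("module_b", "module_a"))   # substitute core module data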

US Pat. No. 10,338,907

CLOUD SERVICE FRAMEWORK FOR TOOLKIT DEPLOYMENT

SAP SE, Walldorf (DE)

1. A computer-implemented method for toolkit deployment, comprising:opening a connection between a server and a local software development environment, the server hosting a cloud based development environment, the local software development environment comprising a web browser with a plugin that functions as the connector, the server and the local software development environment forming part of a hybrid development platform providing both local and cloud-based software development functionality;
in response to opening the connection, invoking, by the plugin, a web service to query the local software development environment to determine a version of an installed software toolkit;
determining if the determined version of the installed software toolkit is compatible with the server by retrieving the installed software toolkit from a local cache associated with the cloud based development environment;
when the determined version of the installed software toolkit is not compatible with the server, deploying a compatible version of the toolkit to the local software development environment and initiating installation of the compatible version of the toolkit; and
initiating, in the local software development environment using the installed software toolkit, an emulator that is configured to debug code generated at the server in the local software development environment.
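
A short Python sketch of the compatibility check and conditional deployment described above, assuming dotted version strings and a deploy callback standing in for pushing and installing the toolkit into the local environment.

def ensure_toolkit(installed_version, server_min_version, deploy):
    """Compare dotted versions; deploy() is a stand-in for deploying and
    installing a compatible toolkit into the local development environment."""
    def parse(v):
        return tuple(int(p) for p in v.split("."))
    if installed_version is None or parse(installed_version) < parse(server_min_version):
        deploy(server_min_version)
        return server_min_version
    return installed_version

print(ensure_toolkit("1.4.0", "1.6.2", lambda v: print("deploying toolkit", v)))
print(ensure_toolkit("2.0.1", "1.6.2", lambda v: print("deploying toolkit", v)))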

US Pat. No. 10,338,906

CONTROLLING FEATURE RELEASE USING GATES

Facebook, Inc., Menlo Pa...

1. A method performed by a computing system, comprising:receiving, at a server computer, a request object from a client device, the request object representing a request from a user for accessing a host application executing at the server computer;
receiving, by a gate application executing on the server computer and from the host application, the request object for determining whether to make available a feature of the host application to the user, the gate application including a plurality of different gates, each gate controlling availability of one or more specified features of the host application to the user based on a criterion defined in the gate application, the gate application implementing at least one of the plurality of gates to control the availability of a specified feature at runtime, the gate application including multiple parameters and each of the parameters being associated with a parameter value;
obtaining, by the gate application, multiple attributes of the request object, the attributes associated with the user, each of the attributes having an attribute value;
determining, by the gate application, whether the attributes satisfy the criterion;
responsive to a determination that the attributes satisfy the criterion, causing execution, at the server computer, of a portion of host application code associated with the specified feature to make the feature available to the user on the server computer; and
sending, from the server computer, a response to the client device, the response indicating availability of the feature.
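
A toy Python sketch of runtime gating in the spirit of the claim above; the country filter and percentage rollout are merely examples of parameterized criteria, and the hash-based bucketing is an assumption, not the patent's mechanism.

import hashlib

class Gate:
    """One gate controls the availability of one feature based on attributes
    of the request object."""

    def __init__(self, feature, country=None, rollout_percent=100):
        self.feature = feature
        self.country = country
        self.rollout_percent = rollout_percent

    def allows(self, request_attrs):
        if self.country and request_attrs.get("country") != self.country:
            return False
        bucket = int(hashlib.sha1(request_attrs["user_id"].encode()).hexdigest(), 16) % 100
        return bucket < self.rollout_percent

gates = [Gate("new_feed", country="US", rollout_percent=50)]

def handle_request(request_attrs):
    enabled = {g.feature for g in gates if g.allows(request_attrs)}
    # the host application would execute the feature's code path only if enabled
    return {"features": sorted(enabled)}

print(handle_request({"user_id": "alice", "country": "US"}))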

US Pat. No. 10,338,905

COMPUTING INSTANCE SOFTWARE PACKAGE INSTALLATION

Amazon Technologies, Inc....

1. A computing system for installation associated with a software package comprising:one or more processors; and
one or more memories having stored therein instructions that, upon execution by the one or more processors, cause the computing system to perform operations comprising:
receiving, by one or more first components executing on a first virtual machine instance, a first request indicating the software package and requesting the installation, wherein the first request is also received by at least a second virtual machine instance for installation associated with the software package on at least the second virtual machine instance, wherein the second virtual machine instance has at least one of an operating system type or an architecture type that differs from at least one of an operating system type or an architecture type of the first virtual machine instance;
selecting, by the one or more first components executing on the first virtual machine instance, based at least in part on at least one of the operating system type or the architecture type of the first virtual machine instance, a package type associated with a first version of the software package for the first virtual machine instance;
retrieving an information collection comprising data associated with the package type and an instruction set for installing one or more second components associated with the package type;
uninstalling, by the one or more first components executing on the first virtual machine instance, a second version of the software package from the first virtual machine instance; and
installing, by the one or more first components executing on the first virtual machine instance, the one or more second components on the first virtual machine instance based, at least in part, on the first instruction set.
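
A brief Python sketch of selecting a package type from the instance's operating system and architecture and then running the retrieved instruction set; the mapping table and the fetch_info/run_steps callbacks are hypothetical stand-ins.

# Hypothetical mapping from (os_type, architecture) to a package type; a real
# system would consult metadata associated with the software package.
PACKAGE_TYPES = {
    ("linux", "x86_64"): "rpm",
    ("linux", "arm64"):  "rpm_arm64",
    ("windows", "x86_64"): "msi",
}

def select_package_type(os_type, arch):
    try:
        return PACKAGE_TYPES[(os_type, arch)]
    except KeyError:
        raise ValueError(f"no package type for {os_type}/{arch}")

def install(os_type, arch, fetch_info, run_steps):
    """fetch_info(pkg_type) returns {'data': ..., 'instructions': [...]};
    run_steps executes the instruction set. Both are illustrative stand-ins."""
    pkg_type = select_package_type(os_type, arch)
    info = fetch_info(pkg_type)
    run_steps(info["instructions"])
    return pkg_type

print(install("linux", "x86_64",
              lambda t: {"data": t, "instructions": [f"unpack {t}", "register components"]},
              lambda steps: [print("step:", s) for s in steps]))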

US Pat. No. 10,338,904

SPECIALIZED APP DEVELOPMENT AND DEPLOYMENT SYSTEM AND METHOD

SCHNEIDER ELECTRIC BUILDI...

1. A system comprising:one or more memory elements collectively storing:
a plurality of widgets including a plurality of default identifiers; and
a plurality of identifiers of a plurality of devices associated with an identified space;
at least one processor in data communication with the one or more memory elements; and
a deployment component executable by the at least one processor and configured to:
receive a request to bind the plurality of widgets to the plurality of devices; and
bind, in response to receiving the request, the plurality of widgets to the plurality of devices by replacing one or more default identifiers of the plurality of default identifiers with one or more identifiers of the plurality of identifiers,
wherein the request to bind is included in a request to deploy and the deployment component is further configured to transmit, in response to receiving the request to deploy, a user app to a mobile computing device, the user app including the plurality of widgets and being configured to monitor and control the plurality of devices via the plurality of widgets.
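
The binding step above amounts to swapping default identifiers for real device identifiers; a minimal Python sketch follows, with positional pairing of widgets to devices as an illustrative assumption (a real deployment component would match on metadata such as device type or location).

def bind_widgets(widgets, device_ids):
    """Replace each widget's default identifier with an identifier of a device
    associated with the identified space."""
    bound = []
    for widget, device_id in zip(widgets, device_ids):
        w = dict(widget)
        w["device_id"] = device_id          # overwrite the default identifier
        bound.append(w)
    return bound

widgets = [{"name": "thermostat_widget", "device_id": "DEFAULT-1"},
           {"name": "lighting_widget",   "device_id": "DEFAULT-2"}]
print(bind_widgets(widgets, ["hvac-42", "light-7"]))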

US Pat. No. 10,338,903

SIMULATION-BASED CODE DUPLICATION

ORACLE INTERNATIONAL CORP...

1. A method for analyzing a program, comprising:generating an initial control flow graph (CFG) for the program;
identifying a plurality of merge blocks of the initial CFG;
identifying a plurality of predecessor-merge pairs based on identifying a plurality of predecessor blocks for each of the plurality of merge blocks;
simulating a duplication of each of the plurality of predecessor-merge pairs;
for each of a plurality of optimizations, each of the plurality of optimizations comprising a precondition:
determining whether the duplication satisfies the precondition of the optimization;
applying, in response to satisfying the precondition, the optimization to the duplication; and
generating a simulation result for the predecessor-merge pair corresponding to the duplication, the simulation result comprising the optimization and a benefit of applying the optimization to the duplication; and
duplicating, in the initial CFG, a predecessor-merge pair of the plurality of predecessor-merge pairs based on the simulation result corresponding to the predecessor-merge pair.
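
A simplified Python sketch of simulating duplication for each predecessor-merge pair and scoring the optimizations it would enable; blocks are plain dictionaries of facts and the optimizations are (name, precondition, benefit) triples, a toy stand-in for real CFG blocks and compiler passes.

def simulate_duplications(pairs, optimizations):
    """pairs: list of (predecessor, merge) blocks. Returns the simulation result
    with the greatest estimated benefit, or None if there are no pairs."""
    results = []
    for pred, merge in pairs:
        duplicated = {**merge, **pred}          # the duplicated merge block now sees pred's facts
        applied, total_benefit = [], 0
        for name, precondition, benefit in optimizations:
            if precondition(duplicated):
                applied.append(name)
                total_benefit += benefit(duplicated)
        results.append({"pair": (pred["id"], merge["id"]),
                        "optimizations": applied,
                        "benefit": total_benefit})
    return max(results, key=lambda r: r["benefit"]) if results else None

optimizations = [
    ("constant-fold", lambda b: "const_x" in b, lambda b: 10),
    ("branch-elim",   lambda b: b.get("branch_on") == "x", lambda b: 25),
]
pairs = [({"id": "P1", "const_x": 3}, {"id": "M", "branch_on": "x"}),
         ({"id": "P2"},               {"id": "M", "branch_on": "x"})]
print(simulate_duplications(pairs, optimizations))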

US Pat. No. 10,338,902

METHOD AND SYSTEM FOR A COMPILER AND DECOMPILER

Unity IPR ApS, Copenhage...

1. A system comprising:one or more computer processors;
one or more computer memories;
one or more instructions incorporated into the computer memories, the one or more instructions configuring the one or more computer processors to perform operations for optimizing computer code, the operations comprising:
receiving a block of mixed intermediate representation (MIR) code;
for each instruction in the block of MIR code, in reverse order, generating a partially-decompiled block of computer code, the generating of the partially-decompiled block of computer code including computing a native expression vector for the instruction and repeating a set of pattern-matching operations until no transformations occur, the set of pattern-matching operations including matching all patterns involving the instruction in the block of MIR code, performing transformations based on the matching of all of the patterns in the block of the MIR code, and recomputing the expression vector after each of the transformations;
for each instruction in the partially-decompiled block of MIR code, in order, computing an additional native expression vector for the instruction in the partially-decompiled block of MIR code and performing an additional set of pattern-matching operations until no additional transformations occur, the additional set of pattern-matching operations including matching all additional patterns involving the instruction in the partially-decompiled block of computer code, and recomputing the expression vector after each of the additional transformations;
generating a fully-decompiled block of computer code from the generated partially-decompiled block of computer code, the fully-decompiled block of computer code having a semantic level that is raised; and
providing the fully-decompiled block of computer code for deployment on an architecture, the deployment including lowering the semantics of the computer code to a level that corresponds to a central processing unit (CPU) or graphical processing unit (GPU) supported by the architecture.

US Pat. No. 10,338,901

TRANSLATION OF A VISUAL REPRESENTATION INTO AN EXECUTABLE INFORMATION EXTRACTION PROGRAM

INTERNATIONAL BUSINESS MA...

1. A method for generating an executable extraction program from a visual representation, the method comprising:utilizing at least one processor to execute computer code that performs the steps of:
receiving input indicating at least one concept from at least one document of an input document collection;
generating a validated data model representing an extractor for each of a plurality of concepts indicated from the received input, wherein each of the concepts is represented as a visual data structure comprising semantics associated with the visual data structure;
generating at least one intermediate model object by parsing the validated data model, wherein each of the intermediate model objects comprises a concept object and wherein the at least one intermediate model object identifies concept dependencies;
translating the at least one intermediate model object into executable source code, wherein the translating comprises importing at least one pre-built extractor having a dependency related to at least one of the plurality of concepts and translating at least one rule identified from the visual data structure; and
generating an executable information extraction program from the executable source code, wherein the generating comprises generating at least one rule for the executable information extraction program based on the identified concept dependencies.

US Pat. No. 10,338,900

METHOD FOR GENERATING SOFTWARE ARCHITECTURES FOR MANAGING DATA

Siemens Aktiengesellschaf...

1. A method for generating data elaboration software architectures suitable for a manufacturing execution system (MES) or a manufacturing operation management (MOM) system, which comprises the steps of:providing at least one source program block configured for receiving data and generating a signal from the received data; and
providing at least one elaboration program block configured for elaborating the signal generated by the source program block;
wherein the source program block is only configured to format the received data and supply the formatted received data to the elaboration program block.

US Pat. No. 10,338,899

DYNAMICALLY COMPILED ARTIFACT SHARING ON PAAS CLOUDS

International Business Ma...

1. A method for sharing artifacts between instances of an application deployed in a cloud computing environment, the method comprising:upon determining, during staging of the application, that a set of artifacts are not available for instances of the application to share in the cloud computing environment:
generating the set of artifacts via a first application instance by executing the first application instance for a predetermined period of time during a run phase of the staging of the application;
storing, via the first application instance, the set of artifacts in a computing system located in the cloud computing environment based on a hashed value of the application after the predetermined period of time has elapsed; and
requesting a re-staging of the application via the first application instance;
during the re-staging of the application:
generating, via a second application instance, a file for deploying scaled instances of the application;
requesting, via the second application instance, the stored set of artifacts from the computing system located in the cloud computing environment during a compile phase of the re-staging;
verifying, via the second application instance, based on the hashed value of the application, that the stored set of artifacts belong to the application; and
packing the stored set of artifacts into the generated file via the second application instance after verifying that the stored set of artifacts belong to the application; and
executing an instance scale out of the application with the generated file comprising the stored set of artifacts.
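
A compact Python sketch of sharing staged artifacts keyed by a hash of the application, as outlined above; the in-memory store, the SHA-256 keying, and the artifact names are illustrative assumptions.

import hashlib

class ArtifactStore:
    """In-memory stand-in for the shared artifact store in the cloud environment."""
    def __init__(self):
        self._store = {}

    def put(self, app_hash, artifacts):
        self._store[app_hash] = artifacts

    def get(self, app_hash):
        return self._store.get(app_hash)

def app_hash(app_source: bytes) -> str:
    return hashlib.sha256(app_source).hexdigest()

store = ArtifactStore()
source = b"application code v1"

# first instance: no artifacts yet -> generate, store keyed by the app hash, re-stage
key = app_hash(source)
if store.get(key) is None:
    generated = {"jit_cache.bin": b"..."}       # produced by running the instance briefly
    store.put(key, generated)

# second instance (re-staging): fetch, verify the hash matches this application,
# then pack the artifacts into the file used to scale out further instances
fetched = store.get(app_hash(source))
assert fetched is not None and app_hash(source) == key
print("packing artifacts:", list(fetched))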

US Pat. No. 10,338,898

STATE-SPECIFIC EXTERNAL FUNCTIONALITY FOR SOFTWARE DEVELOPERS

Samsung Electronics Co., ...

1. A computing device comprising:a data store configured to store information identifying a plurality of functions, wherein each of the plurality of functions corresponds to external functionality available from third party applications; and
at least one processor configured to:
receive a selection, from a first electronic device corresponding to a first application developer, of a first function of the plurality of functions to supplement functionality of a first application under development by the first application developer, and
provide a software development kit (SDK) library associated with the first function to the first electronic device,
wherein the SDK library includes instructions for:
when a first screen of the first application including a first user interface element is displayed at a user device which executes the first application developed at least based on the SDK library, in response to user selection of the first user interface element, preparing a query wrapper including a combination of a predefined text string corresponding to the first function and a name of a first entity inputted at the user device,
transmitting the query wrapper to a search system,
receiving a result set from the search system, wherein the result set includes a plurality of items, and wherein a first item of the plurality of items includes an identifier of a target application and an access mechanism for a specified screen of the target application associated with the query wrapper,
displaying the plurality of items, and
in response to user selection of the first item, actuating the access mechanism to execute the target application and display the specified screen.
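
A small Python sketch of the query-wrapper flow described above, with made-up field names and a fake search system returning a deep link as the access mechanism.

def build_query_wrapper(function_text: str, entity_name: str) -> dict:
    """Combine the predefined text string for the selected function with the
    entity name the user entered; the field names are illustrative only."""
    return {"query": f"{function_text} {entity_name}", "version": 1}

def fake_search_system(wrapper: dict) -> list:
    # stand-in for the remote search system: returns items with app deep links
    return [{"app_id": "com.example.rides",
             "access_mechanism": f"rides://book?dest={wrapper['query'].split()[-1]}"}]

wrapper = build_query_wrapper("book a ride to", "Airport")
results = fake_search_system(wrapper)
print(results[0]["access_mechanism"])   # actuating this would open the target app's screen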

US Pat. No. 10,338,897

VISUAL PROTOCOL DESIGNER

Stratedigm, Inc., San Jo...

1. A method for programming protocols for a flow cytometry machine, the method comprising:generating a graphical user interface (GUI), the GUI comprising a workspace and a bank, wherein the bank includes a plurality of selectable graphical elements, wherein each selectable graphical element in the bank corresponds to program logic, and wherein each of the selectable graphical elements in the bank can be dragged over to the workspace;
displaying the GUI to a user;
receiving two or more first user inputs for dragging selectable graphical elements from the bank to the workspace of the GUI, wherein at least one of the selectable graphical elements is a conditional block;
receiving a second user input selecting: one or more target wells, a metric for the one or more target wells, a comparison number, and a comparison operator describing a satisfactory relationship between the metric and the comparison number;
updating the workspace of the GUI with a graphical representation for a protocol, wherein the graphical representation comprises an arrangement of the selectable graphical elements, and wherein the arrangement of the graphical representation conveys a sequence for performing the program logic corresponding to the selectable graphical elements;
translating the graphical representation for the protocol into one or more executable instructions sent to a flow cytometry machine to perform the protocol, wherein the one or more executable instructions follows the sequence for performing the program logic corresponding to the selectable graphical elements conveyed by the graphical representation,
wherein the program logic corresponding to the conditional block executes subsequent program logic in the sequence conveyed by the arrangement of the graphical representation only when the flow cytometry machine produces a metric by processing detected signals from a flow cytometer for the one or more target wells which satisfies the conditional relationship described by the second user input.

US Pat. No. 10,338,896

SYSTEMS AND METHODS FOR DEVELOPING AND USING REAL-TIME DATA APPLICATIONS

PTC Inc., Boston, MA (US...

1. A computer-implemented method for operating a real-time Web application, the computer-implemented method comprising:providing, at a client-side application executing on a computing device, a graphical user interface comprising one or more rendering widgets for presentation of data associated with a plurality of second computing devices,
wherein the client-side application is defined by a plurality of application definition files having instructions to invoke a plurality of Web service objects, and
wherein the instructions, during runtime, cause retrieval of data from one or more storage computing devices for presentation on the one or more rendering widgets;
receiving, by the client-side application, from the one or more storage computing devices, one or more datasets corresponding to the invoked Web service objects;
responsive to receipt of the one or more datasets, caching, by the client-side application, each of the received one or more datasets;
responsive to the received one or more datasets being cached, presenting, by the client-side application, data of the cached one or more datasets via the one or more rendering widgets;
receiving, by the client-side application, a manifest file listing one or more updated application definition files having second instructions to invoke a second plurality of Web service objects;
responsive to receipt of the manifest file, retrieving, by the client-side application, the one or more updated application definition files listed in the manifest file;
responsive to receipt of the one or more updated application definition files, caching, by the client-side application, each of the received one or more updated application definition files; and
responsive to the received one or more updated application definition files being cached, updating, by the client-side application, the plurality of application definition files with the cached one or more updated application definition files.
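
A minimal Python sketch of applying a manifest to a client-side cache of application definition files; the manifest shape and the rule of refetching only files whose version changed are assumptions for illustration.

def apply_manifest(manifest, fetch, cache):
    """manifest: {'files': {name: version}}; fetch(name, version) returns file
    content; cache is a dict acting as the client-side cache."""
    for name, version in manifest["files"].items():
        cached = cache.get(name)
        if cached is None or cached["version"] != version:
            cache[name] = {"version": version, "content": fetch(name, version)}
    return cache

cache = {"layout.json": {"version": 1, "content": "..."}}
manifest = {"files": {"layout.json": 2, "bindings.json": 1}}
print(apply_manifest(manifest, lambda n, v: f"<{n} v{v}>", cache))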

US Pat. No. 10,338,895

INTEGRATED DEVELOPER ENVIRONMENT FOR INTERNET OF THINGS APPLICATIONS

Cisco Technology, Inc., ...

1. A method, comprising:establishing, on a computer, a graphical user interface (GUI) for an Internet of Things (IoT) integrated developer environment (IDE) with one or more visual developer tools;
providing, by the IoT IDE on the computer, nodes within the IoT IDE having connectivity and functionality, the nodes selected from a) discovered real nodes in communication with the IoT IDE or b) virtual nodes within the IoT IDE;
connecting a plurality of the nodes, by the IoT IDE on the computer, as a logical and executable graph for a flow-based programming framework virtualized across one or more IoT layers that are undeveloped;
programming nodes of at least one IoT layer of the one or more IoT layers, by the IoT IDE on the computer, based on respective connectivity and functionality, such that the logical and executable graph has one or more real and/or virtual inputs, one or more real and/or virtual processing functions, and one or more real and/or virtual actions;
deploying, by the IoT IDE on the computer, the node programming to one or more corresponding platform emulators configured to execute the node programming; and
simulating, by the IoT IDE on the computer, the logical and executable graph by executing the node programming to produce the one or more actions based on the one or more inputs and the one or more processing functions, the simulating including abstracting one or more of the IoT layers that are different than the at least one IoT layer by emulating functionality to produce an outcome necessary for remaining developed IoT layers.

US Pat. No. 10,338,894

GENERATING APPLICATIONS BASED ON DATA DEFINITION LANGUAGE (DDL) QUERY VIEW AND APPLICATION PAGE TEMPLATE

SAP SE, Walldorf (DE)

1. A non-transitory computer-readable medium to store instructions, which when executed by a computer, cause the computer to perform operations comprising:receive an application configuration, wherein the application configuration is retrieved from a user at an application configuration application, wherein the application configuration comprises a Data Definition Language (DDL) query view, and wherein the DDL query view is obtained by modifying a combined DDL view to define the DDL query view for a navigational state of a query;
transfer the application configuration to an application generator;
receive, at a user interface, a request to generate an application;
execute, based on the received request and by the application generator, the DDL query view comprised in the application configuration to obtain a query view and a data transfer service, wherein the DDL query view comprises a definition of at least one data transfer service defined to transfer data from a database to the application;
retrieve, from a database, data based on the query view and the data transfer service;
generate, based on the received request, an application page template including a plurality of user interface (UI) related elements of the application; and
bind the application page template and the retrieved data to generate a plurality of application pages of the application, and render the application pages on the user interface.

US Pat. No. 10,338,893

MULTI-STEP AUTO-COMPLETION MODEL FOR SOFTWARE DEVELOPMENT ENVIRONMENTS

Microsoft Technology Lice...

1. A method for providing auto-completion functionality in a source code editor, comprising:identifying a plurality of code entities that comprise all candidate code entities available from the source code editor for a single auto-completion operation for a single code entity based on data that a user has input via a user interface (UI) into the source code editor;
determining, based at least in part upon a total number of code entities in the plurality of code entities, a number of menus in a plurality of auto-completion menus by which to present the plurality of code entities to the user via the UI for the single auto-completion operation, each menu in the plurality of auto-completion menus being configured to present a unique subset of the plurality of code entities at a given time; and
subsequent to determining the number of menus, presenting a first menu of the plurality of auto-completion menus that allows access to the plurality of code entities in the UI, the first menu including an element that is a code entity representation that when activated by the user via a selection input causes a second menu of the plurality of auto-completion menus to be displayed for the single auto-completion operation, each of the first and second menus including one or more of the candidate code entities for the single auto-completion operation for the single code entity.
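
A short Python sketch of splitting the candidate entities across chained auto-completion menus; the per-menu capacity and the 'more...' chaining element are illustrative assumptions.

def plan_menus(candidates, per_menu=8):
    """Split candidate code entities across chained menus; 'more...' stands in
    for the element that, when selected, causes the next menu to be displayed."""
    if len(candidates) <= per_menu:
        return [candidates]
    slot = per_menu - 1                       # reserve one slot for the chaining element
    chunks = [candidates[i:i + slot] for i in range(0, len(candidates), slot)]
    return [c + ["more..."] for c in chunks[:-1]] + [chunks[-1]]

candidates = [f"set{c}" for c in "ABCDEFGHIJKLMNOP"]      # 16 candidate entities
for i, menu in enumerate(plan_menus(candidates), 1):
    print(f"menu {i}: {menu}")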

US Pat. No. 10,338,892

DYNAMIC PROVISIONING OF A SET OF TOOLS BASED ON PROJECT SPECIFICATIONS

Accenture Global Solution...

1. A device, comprising:one or more memories; and
one or more processors, communicatively coupled to the one or more memories, to:
receive project information associated with a project,
the project information including information related to:
a set of specifications associated with the project, and
a set of descriptions associated with roles for the project;
process the project information to identify an attribute of the project based on receiving the project information,
the attribute of the project being identified based on at least one of:
natural language processing,
machine learning, or
artificial intelligence;
identify a first device used to develop at least one of software or an application;
identify a second device used to manage development of at least one of the software or the application;
identify a third device used to test at least one of the software or the application;
identify a set of tools to be provisioned on each of the first device, the second device, and the third device based on processing the project information,
each of the set of tools being associated with the set of specifications and the set of descriptions,
each of the set of tools being different for the first device, the second device, and the third device,
each of the set of tools being identified based upon a trained model,
the trained model being trained based on input from crowdsourcing;
receive, from the first device, the second device, and the third device, a request for the set of tools;
identify a set of scripts associated with the set of tools,
the set of scripts to obtain or configure the set of tools;
provide the set of scripts associated with the set of tools to the first device, the second device, and the third device based on the request; and
cause installation or configuration of the set of tools by executing the set of scripts on the first device, the second device, and the third device,
the installation or configuration of the set of tools to occur in parallel.

US Pat. No. 10,338,891

MIGRATION BETWEEN MODEL ELEMENTS OF DIFFERENT TYPES IN A MODELING ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method comprising:obtaining an indication of a first Unified Modeling Language (UML) model element of a first model element type which is to be migrated to a second UML model element of a second model element type; and
automatically migrating the first UML model element of the first model element type to the second UML model element of the second model element type, wherein the automatically migrating migrates a relationship between the first UML model element of the first model element type and a related UML model element, of a third model element type, to a relationship between the second UML model element of the second model element type and the related UML model element, and wherein the automatically migrating comprises:
obtaining indications of relationship types that are deemed valid or invalid as between model element types, including as between the second model element type and the third model element type, the relationship types being for relationships between UML model elements, the relationships having at least one directional property associated therewith;
checking the relationship between the first UML model element and the related UML model element to identify a relationship type of the relationship between the first UML model element and the related UML model element; and
checking the obtained indications to determine whether the identified relationship type of the relationship between the first UML model element and the related UML model element is indicated to be a valid relationship type that may exist as between the second UML model element of the second model element type and the related UML model element of the third model element type, wherein the checking obtained indications checks one or more data structures comprising the indications and determines that the identified relationship type is an invalid type of relationship as between the second model element type and the third model element type; and
based on determining that the identified relationship type is an invalid type of relationship as between the second model element type and the third model element type, identifying a replacement relationship type that is indicated to be a valid type of relationship that may exist as between the second model element type and the third model element type, wherein migrating the relationship between the first UML model element of the first model element type and the related UML model element to the relationship between the second UML model element of the second model element type and the related UML model element applies the identified replacement relationship type to the relationship between the second UML model element of the second model element type and the related UML model element.
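
A tiny Python sketch of the validity check and replacement-relationship lookup described above; the validity table, the replacement rule, and the element-type names are hypothetical examples.

# Hypothetical validity table: (source element type, target element type) -> set of
# relationship types deemed valid between them.
VALID = {
    ("Component", "Interface"): {"realization", "dependency"},
    ("Class", "Interface"): {"realization", "dependency", "usage"},
}
# Hypothetical replacement rule applied when the original relationship type is invalid.
REPLACEMENT = {"generalization": "realization"}

def migrate_relationship(new_src_type, related_type, rel_type):
    valid_types = VALID.get((new_src_type, related_type), set())
    if rel_type in valid_types:
        return rel_type
    replacement = REPLACEMENT.get(rel_type)
    if replacement in valid_types:
        return replacement
    raise ValueError(f"no valid relationship from {new_src_type} to {related_type}")

# a Class->Interface 'generalization' migrated so the source element becomes a Component:
print(migrate_relationship("Component", "Interface", "generalization"))  # -> realization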

US Pat. No. 10,338,890

RANDOM VALUES FROM DATA ERRORS

Seagate Technology LLC, ...

1. An apparatus comprising:an extractor circuit configured to:
collect data messages;
select data messages having an error from the collected data messages;
calculate a random value based on the selected data messages; and
provide the random value to a random number generator.
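
A minimal Python sketch of extracting a random value from messages whose integrity check failed; hashing the errored payloads into a seed is an illustrative software analogue of the extractor circuit, not the circuit itself.

import hashlib

def random_from_errors(messages):
    """messages: list of (payload, crc_ok) pairs. Only messages whose integrity
    check failed contribute to the extracted value."""
    h = hashlib.sha256()
    for payload, crc_ok in messages:
        if not crc_ok:
            h.update(payload)
    return int.from_bytes(h.digest()[:8], "big")     # value handed to the RNG as a seed

msgs = [(b"\x01\x02\x03", True), (b"\x09\xfe\x11", False), (b"\x42\x42", False)]
print(random_from_errors(msgs))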

US Pat. No. 10,338,889

APPARATUS AND METHOD FOR CONTROLLING ROUNDING WHEN PERFORMING A FLOATING POINT OPERATION

ARM Limited, Cambridge (...

1. An apparatus comprising:argument reduction circuitry to perform an argument reduction operation;
reduce and round circuitry to generate from a supplied floating point value a modified floating point value to be input to the argument reduction circuitry; and
rescaling circuitry to receive at least one floating point operand value and, for each floating point operand value, to perform a rescaling operation to generate a corresponding rescaled floating point value such that the rescaled floating point value and the floating point operand value differ by a scaling factor;
the reduce and round circuitry being arranged to modify a significand of the supplied floating point value, based on a specified value N, in order to produce a truncated significand with a specified rounding applied, the truncated significand being N bits shorter than the significand of the supplied floating point value, and being used as a significand for the modified floating point value;
the specified value N being such that the argument reduction operation performed using the modified floating point value inhibits roundoff error in a result of the argument reduction operation; wherein:
the argument reduction circuitry and the reduce and round circuitry are employed in a plurality of iterations, in each iteration other than a first iteration, the supplied floating point value received by the reduce and round circuitry being derived from a result value generated by the argument reduction circuitry in a preceding iteration;
the argument reduction circuitry is arranged to receive, in addition to the modified floating point value, at least one additional floating point value, said at least one additional floating point value being determined based on said at least one rescaled floating point value;
the scaling factor being such that an exponent of each rescaled floating point value will ensure that no denormal values will be encountered when performing the plurality of iterations of the argument reduction operation; and
the rescaling circuitry is arranged to generate first and second rescaled floating point values, wherein the apparatus is arranged to generate a result equal to an alternative result generated by performing a specified floating point operation on first and second floating point operand values from which said first and second rescaled floating point values are generated by the rescaling circuitry, the specified floating point operation being a floating point divide operation to divide the first floating point operand value by the second floating point operand value, and the result for the floating point divide operation being determined by an adder arranged to add together a plurality of modified floating point values produced by the reduce and round circuitry during said plurality of iterations;
the adder is arranged to add together the plurality of modified floating point values as a series of addition operations; and
when the scaling factor associated with the first rescaled floating point value is different from the scaling factor associated with the second rescaled floating point value, the adder is arranged to perform an augmented, final addition operation in dependence on the difference between the scaling factors when performing the final addition.
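
The reduce-and-round step, taken on its own, can be illustrated in Python on IEEE-754 doubles: shorten the significand by N bits with round-to-nearest so that later argument-reduction arithmetic avoids roundoff. This is a software illustration under that assumption, not the hardware datapath.

import math

def reduce_and_round(x: float, n: int) -> float:
    """Return x with its significand shortened by n bits, rounding to nearest.
    Python floats are IEEE-754 doubles with 53 significand bits."""
    if x == 0.0 or n <= 0:
        return x
    mantissa, exponent = math.frexp(x)           # x = mantissa * 2**exponent, 0.5 <= |mantissa| < 1
    scaled = mantissa * (1 << (53 - n))          # keep 53 - n significand bits
    return math.ldexp(round(scaled), exponent - (53 - n))

x = 1.234567890123456
y = reduce_and_round(x, 20)
print(y, x)                                      # coarser value, small controlled difference
assert abs(y - x) <= 2.0 ** (math.frexp(x)[1] - (53 - 20) - 1)   # at most half an ulp of y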

US Pat. No. 10,338,887

METHOD FOR SELECTIVE CALIBRATION OF VEHICLE SAFETY SYSTEMS IN RESPONSE TO VEHICLE ALIGNMENT CHANGES

Hunter Engineering Compan...

1. An improved vehicle service or inspection system, comprising:a processing system configured with software instructions to receive data representative of a measure of at least one wheel alignment characteristic of a vehicle configured with at least one onboard vehicle safety system sensor;
wherein said processing system is further configured with software instructions to generate an indication to an operator when a change to said at least one wheel alignment characteristic alters a thrust angle of the vehicle by an amount which either:
i. exceeds a self-adjustment limit associated with said at least one onboard vehicle safety system sensor for responding to changes in an alignment characteristic of the vehicle, or
ii. exceeds an established limit for an amount of change associated with said thrust angle; and
wherein said processing system is further configured with software instructions to generate an output to an operator of a need to recalibrate said at least one onboard sensor when said change or adjustment to the at least one wheel alignment characteristic alters the thrust angle by an amount which exceeds at least one of said limits.
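
A trivial Python sketch of the limit comparison described above; the numeric limits are made-up examples, not values from the patent.

def needs_recalibration(thrust_change_deg, sensor_self_adjust_limit_deg=0.1,
                        established_limit_deg=0.2):
    """Flag the need to recalibrate the onboard safety-system sensor when the
    thrust-angle change exceeds either limit."""
    return (abs(thrust_change_deg) > sensor_self_adjust_limit_deg or
            abs(thrust_change_deg) > established_limit_deg)

print(needs_recalibration(0.05))   # within both limits -> False
print(needs_recalibration(0.15))   # exceeds the sensor's self-adjustment limit -> True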