US Pat. No. 10,339,122

ENRICHING HOW-TO GUIDES BY LINKING ACTIONABLE PHRASES

Conduent Business Service...

1. A linking method comprising:providing a knowledge base comprising a corpus of documents, each of the documents describing a respective procedure, wherein for each of a plurality of the documents, an actuable link in a first of the plurality of documents links to another document in the document corpus describing another procedure, the actuable links in the knowledge base having been generated by a method comprising:
providing a collection of at least 100 domain-specific terms;
for each of a plurality of documents in the document corpus, applying rules for identifying action verbs in at least a part of the document corresponding to a procedure and identifying at least one actionable phrase including one of the action verbs in at least a first of the plurality of documents, the identifying of the at least one actionable phrase comprising:
identifying a candidate action object, the candidate action object including a direct object of the identified action verb;
comparing the identified candidate action object to terms in the collection of terms; and
when at least the direct object of the compared candidate action object is found in the collection of terms, identifying an actionable phrase comprising the action verb and respective action object;
for each of the at least one identified actionable phrase:
identifying a set of documents in a document corpus using a scoring function which takes into account occurrences of words of the actionable phrase in each identified document; and
linking the actionable phrase in the document to at least a part of another one of the documents in the set of documents or to information extracted therefrom;
after providing the knowledge base with the actuable links, receiving a query from a user;
retrieving one of the corpus of documents from the knowledge base which is responsive to the query;
providing for the user to actuate one of the links in the retrieved document; and
when the user actuates the link, retrieving information from the respective linked other document relating to the actionable phrase and presenting the retrieved information to the user to allow the user to find out more detail, related to the actionable phrase of the retrieved document's procedure, on how to perform a specific instruction,
wherein the providing of the knowledge base, receiving the query, retrieving information, and presenting the retrieved information is performed with a processor.
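The claim describes a rule-based pipeline: find an action verb, take its direct object as a candidate action object, accept the pair as an actionable phrase only if the object appears in a collection of domain-specific terms, then score other documents by occurrences of the phrase's words and link to the best match. The Python sketch below illustrates that flow; the verb list, term list, crude direct-object heuristic, and word-count scoring are illustrative assumptions, not the patented implementation.

ACTION_VERBS = {"replace", "remove", "install", "reset"}        # assumed rule set
DOMAIN_TERMS = {"toner cartridge", "fuser unit", "paper tray"}  # claim requires >= 100 terms

def find_actionable_phrases(sentence):
    """Return (verb, object) pairs where the direct object is a domain term."""
    words = [w.strip(",.:") for w in sentence.lower().split()]
    phrases = []
    for i, word in enumerate(words):
        if word in ACTION_VERBS:
            rest = [w for w in words[i + 1:i + 4] if w not in {"the", "a", "an"}]
            candidate = " ".join(rest[:2])          # crude direct-object guess
            if candidate in DOMAIN_TERMS:
                phrases.append((word, candidate))
    return phrases

def score(phrase, text):
    """Count occurrences of the phrase's words in a candidate target document."""
    text = text.lower()
    return sum(text.count(w) for w in (phrase[0] + " " + phrase[1]).split())

def link(phrase, corpus):
    """Link the phrase to the highest-scoring other document in the corpus."""
    return max(corpus, key=lambda doc_id: score(phrase, corpus[doc_id]))

corpus = {
    "doc1": "To clear a jam, remove the toner cartridge and close the cover.",
    "doc2": "How to remove the toner cartridge: open the front door and pull it out.",
}
for phrase in find_actionable_phrases(corpus["doc1"]):
    others = {k: v for k, v in corpus.items() if k != "doc1"}
    print(phrase, "->", link(phrase, others))

Running it links the ("remove", "toner cartridge") phrase found in doc1 to doc2, the only other document describing that procedure.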

US Pat. No. 10,339,121

DATA COMPRESSION

SAP SE, Walldorf (DE)

1. A computer implemented method to compress a dataset, comprising:based on one or more attributes associated with a unified dataset stored in an in-memory data store, determining a dataset including sensor node identifier data, sensor node timestamp data and sensor measurement data;
determining at least one frequently repetitive type of data pattern in the sensor node identifier data based on a data transmission frequency from a plurality of sensor nodes, wherein the sensor node identifier data includes one or more types of data patterns and a first type of data pattern is associated with a first data transmission frequency and a second type of data pattern is associated with a second data transmission frequency, and at least one of the first type of data pattern and the second type of data pattern is the frequently repetitive type of data pattern based on a respective count of the first type and the second type of data pattern;
determining a first data compression logic to compress the sensor node identifier data based on the determined at least one frequently repetitive type of data pattern, wherein the first data compression logic determines a position of a missing sensor node identifier in the determined at least one frequently repetitive type of data pattern, wherein compression reduces an amount of memory the sensor node identifier data uses in the in-memory data store; and
determining a second data compression logic to compress the sensor measurement data, wherein the determination of the second data compression logic is based on determining a type of data pattern associated with the sensor measurement data, wherein compression reduces an amount of memory the sensor measurement data uses in the in-memory data store;
executing a sensor node timestamp data compression model to compress the sensor node timestamp data, wherein compression reduces an amount of memory the sensor node timestamp data uses in the in-memory data store; and
storing the compressed sensor node identifier data, the compressed sensor measurement data and the compressed sensor node timestamp data in the in-memory data store.
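The claim selects different compression logic per column of the unified sensor dataset: a pattern-based scheme for the repeating node identifiers (including the position of any missing identifier) and other schemes for the measurements and timestamps. A minimal sketch, assuming run-length encoding of the identifier cycle and delta encoding of timestamps; neither encoding is taken from the patent.

def compress_ids(ids):
    """Run-length encode the repeating node-ID cycle and flag deviating positions.
    (A crude stand-in for determining the position of a missing identifier.)"""
    cycle = sorted(set(ids[: len(set(ids))]))           # assumed transmission cycle
    missing_at = [i for i, v in enumerate(ids) if v != cycle[i % len(cycle)]]
    return {"cycle": cycle, "repeats": len(ids) // len(cycle), "missing_at": missing_at}

def compress_timestamps(timestamps):
    """Store the first timestamp plus deltas, which stay small for periodic sensors."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {"start": timestamps[0], "deltas": deltas}

ids = [1, 2, 3, 1, 2, 3, 1, 2, 3]
ts = [1000, 1001, 1002, 2000, 2001, 2002, 3000, 3001, 3002]
print(compress_ids(ids))
print(compress_timestamps(ts))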

US Pat. No. 10,339,120

METHOD AND SYSTEM FOR RECORDING INFORMATION ABOUT RENDERED ASSETS

SONY CORPORATION, Tokyo ...

1. An improved method of storing data about a composite product, the composite product including a plurality of assets, at least one of the plurality having associated version information therewith, comprising:receiving a command to render a model file indicating a desired composite product, the composite product being a shot in a video, the model file indicating one or more computer graphics assets and respective version indicators constituting the composite product;
locking each asset referenced by the model file against modification, so that the one or more computer graphics assets are included in the rendering as they were at a time of the receiving a command to render a model file, and not as later modified;
rendering the model file and during the rendering, recording calls to a versioning and publishing application programming interface (API), the versioning and publishing application programming interface (API) enabling access to registered assets; and
during the rendering, monitoring calls to an operating system to record data about at least one file opened by an application associated with the render, the at least one file not referenced by the versioning and publishing application programming interface (API), and storing, together in a file, the data from the recording of calls to the versioning and publishing API and from the monitoring of calls to the operating system,
wherein a use of the model file allows retrieval of prior versions of assets, locked in the locking step and referenced by the model file, that have been subsequently modified.

US Pat. No. 10,339,119

CALIBRATION OF A FIRST SEARCH QUERY BASED ON A SECOND SEARCH QUERY

International Business Ma...

1. A computer-implemented method for calibrating site-level search results, the method comprising:generating a first result set based on an execution of a first search query from an administrator of an electronic commerce site;
determining, based on an analysis of the first result set, the first result set does not include one or more desired query results;
generating a second result set based on an execution of a second search query from the administrator of the electronic commerce site;
determining, based on an analysis of the second result set, the second result set includes the one or more desired query results;
automatically associating the second result set with the first search query; and
returning a set of document files corresponding to the one or more desired query results when the first search query is subsequently executed by a third party searcher, who is not the administrator of the electronic commerce site.
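The calibration loop in the claim is essentially a mapping from a query that misses the desired results to the result set of a query that finds them. A small sketch under that reading; the substring-match "search engine", the calibration dictionary, and the example catalog are assumptions.

calibration = {}          # first query string -> curated result set

def run_query(query, index):
    return [doc for doc in index if query.lower() in doc.lower()]

def calibrate(first_query, second_query, desired, index):
    first = run_query(first_query, index)
    if not all(d in first for d in desired):
        second = run_query(second_query, index)
        if all(d in second for d in desired):
            calibration[first_query] = second   # associate second results with first query

def search(query, index):
    """Third-party searches return the calibrated set when one exists."""
    return calibration.get(query, run_query(query, index))

index = ["red running shoes", "trail runners", "dress shoes"]
calibrate("sneakers", "running shoes", ["red running shoes"], index)
print(search("sneakers", index))   # -> ['red running shoes']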

US Pat. No. 10,339,118

DATA NORMALIZATION SYSTEM

Palantir Technologies Inc...

1. A data normalization system comprising:one or more computer processors; and
one or more computer-readable mediums storing instructions that, when executed by the one or more computer processors, cause the data normalization system to perform operations comprising:
receiving a first string and a second string, the first string and the second string being ordered according to an initial string ordering;
converting, using a metaphone algorithm, the first string into a first metaphone string, and the second string into a second metaphone string;
searching, based on the first metaphone string and the second metaphone string, a name index including a listing of metaphone strings representing common names and probabilities that the common names are either a given name or a surname;
determining, based on searching the name index, a confidence score indicating a confidence level that the first string represents the given name and that the second string represents the surname;
determining that the confidence score does not meet or exceed a threshold confidence score;
in response to determining that the confidence score does not meet or exceed the threshold confidence score, analyzing the first string and the second string based on a list of known character sets included in surnames, yielding an analysis;
determining, based on the analysis, that a set of characters in the second string matches a known character set included in the list of known character sets included in surnames; and
in response to determining that the set of characters in the second string matches a known character set included in the list of known character sets included in surnames, ordering the first string and the second string according to an updated string ordering.
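A hedged sketch of the ordering check: a phonetic key is looked up in a name index of given-name/surname probabilities, and if the resulting confidence is below a threshold the strings are re-examined against known surname character sets. The toy phonetic key stands in for a real metaphone encoding, and the index, threshold, character sets, and the direction of the reordering are invented for illustration.

def phonetic_key(s):
    """Placeholder for a metaphone encoding: uppercase consonant skeleton."""
    s = s.upper()
    return s[:1] + "".join(c for c in s[1:] if c not in "AEIOU")

NAME_INDEX = {                # phonetic key -> (P(given name), P(surname))
    phonetic_key("John"): (0.95, 0.05),
    phonetic_key("Smith"): (0.10, 0.90),
}
SURNAME_CHARSETS = ["berg", "son", "smith"]   # assumed "known character sets"
THRESHOLD = 0.8

def order_names(first, second):
    p_given = NAME_INDEX.get(phonetic_key(first), (0.5, 0.5))[0]
    p_surname = NAME_INDEX.get(phonetic_key(second), (0.5, 0.5))[1]
    confidence = p_given * p_surname
    if confidence >= THRESHOLD:
        return first, second            # initial ordering looks right
    # Fall back to the surname-character-set analysis from the claim: one
    # reasonable reading is to keep the order if the second string looks like
    # a surname, and otherwise swap.
    if any(cs in second.lower() for cs in SURNAME_CHARSETS):
        return first, second
    return second, first

print(order_names("John", "Smith"))        # ('John', 'Smith')
print(order_names("Anderson", "Maria"))    # ('Maria', 'Anderson')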

US Pat. No. 10,339,117

SPECIFYING AND APPLYING RULES TO DATA

Ab Initio Technology LLC,...

1. A computing system for specifying one or more validation rules for validating data included in one or more fields of each element of a plurality of elements of a dataset, the computing system including:a user interface module configured to render a plurality of cells arranged in a two-dimensional grid having a first axis and a second axis, the two-dimensional grid including
one or more subsets of the cells extending in a direction along the first axis of the two-dimensional grid, each subset of the one or more subsets associated with a respective field of an element of the plurality of elements of the dataset, and
multiple subsets of the cells extending in a direction along the second axis of the two-dimensional grid, each of one or more of the multiple subsets including a plurality of cells associated with a same validation rule; and
a processing module, including at least one processor, configured to apply validation rules to at least one element of the dataset based on user input received from at least some of the cells;
wherein at least some cells, associated with a field and a validation rule, each include
an input element for receiving input determining whether or not the associated validation rule is applied to the associated field, and
an indicator for indicating feedback associated with a validation result based on applying the associated validation rule to data included in the associated field of the element.
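The grid the claim describes is, at bottom, a two-dimensional structure keyed by field and by validation rule, where each cell holds an on/off input and a feedback indicator. A minimal in-memory sketch, with made-up fields and rules standing in for the interface module's cells.

RULES = {
    "not_empty": lambda v: v not in ("", None),
    "is_number": lambda v: str(v).replace(".", "", 1).isdigit(),
}
FIELDS = ["name", "price"]

# grid[field][rule] = {"enabled": bool, "indicator": "pass" / "fail" / None}
grid = {f: {r: {"enabled": False, "indicator": None} for r in RULES} for f in FIELDS}
grid["name"]["not_empty"]["enabled"] = True      # user ticks the cell's input element
grid["price"]["is_number"]["enabled"] = True

def apply_rules(element):
    """Apply every enabled rule to its field and record the feedback indicator."""
    for field, cells in grid.items():
        for rule, cell in cells.items():
            if cell["enabled"]:
                cell["indicator"] = "pass" if RULES[rule](element[field]) else "fail"

apply_rules({"name": "widget", "price": "12.50"})
print(grid["price"]["is_number"]["indicator"])   # pass
apply_rules({"name": "", "price": "n/a"})
print(grid["name"]["not_empty"]["indicator"])    # fail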

US Pat. No. 10,339,116

COMPOSITE SHARDING

ORACLE INTERNATIONAL CORP...

1. A method, comprising:maintaining a sharded database that includes a plurality of shards;
wherein the plurality of shards are grouped into a plurality of shardspaces;
wherein each shardspace of the plurality of shardspaces includes at least one shard of the plurality of shards;
using one or more levels of partitioning criteria, performing one or more levels of partitioning on a table to produce a first plurality of partitions;
receiving, from a user, user-specified code that specifies constraints for distributing the first plurality of partitions among the plurality of shardspaces;
selecting a shardspace, of the plurality of shardspaces, for each partition of the first plurality of partitions based, at least in part, on the user-specified code; and
within each shardspace, of the plurality of shardspaces, distributing each partition of the first plurality of partitions to a specific shard in the shardspace based on system-managed distribution criteria;
wherein the user-specified code specifies constraints for distributing the first plurality of partitions among the plurality of shardspaces based on a first key and the system-managed distribution criteria specifies distributing each partition of the first plurality of partitions to a specific shard in the shardspace based on a second key that is different than the first key.
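The two-level placement can be read as: a user-specified rule keyed on a first key picks the shardspace, and a system-managed rule keyed on a second key picks the shard inside it. A sketch under that reading; the tier/customer-id keys, the shardspace layout, and the hash-based distribution are assumptions, not Oracle's mechanism.

import hashlib

SHARDSPACES = {"gold": ["shard1", "shard2"], "silver": ["shard3"]}

def user_specified_shardspace(partition):
    """User-specified constraint keyed on the first key (service tier)."""
    return "gold" if partition["tier"] == "gold" else "silver"

def system_managed_shard(partition, shards):
    """System-managed distribution keyed on the second key (customer id)."""
    digest = hashlib.sha256(str(partition["customer_id"]).encode()).hexdigest()
    return shards[int(digest, 16) % len(shards)]

def place(partition):
    space = user_specified_shardspace(partition)
    return space, system_managed_shard(partition, SHARDSPACES[space])

print(place({"tier": "gold", "customer_id": 42}))
print(place({"tier": "basic", "customer_id": 7}))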

US Pat. No. 10,339,115

METHOD FOR ASSOCIATING ITEM VALUES, NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM AND INFORMATION PROCESSING DEVICE

FUJITSU LIMITED, Kawasak...

1. A method for associating item values that is executed by a computer, the method comprising:receiving a plurality of data files, each of which includes a plurality of items and item values respectively associated therewith, wherein the plurality of data files include a first data file that includes a first item, a second item and a key item that is associated with a key item included in an integrated database, and include a second data file that does not include the key item but includes the first item and the second item;
displaying a matrix in which the items included in the first data file and the second data file received are arranged in either one of a row direction or a column direction of the matrix, and candidate items, with which a part or all of the items are associated, are arranged in the other direction;
receiving specification of a position on the matrix, the position indicating specification of a set of the first item of the first data file and a first candidate item in the candidate items, a set of the second item of the first data file and a second candidate item in the candidate items, or any item in the items of the first data file and any candidate item in the candidate items;
receiving specification of a link position on the matrix, the link position indicating specification of a set of the first item of the second data file and the first candidate item and specification of a position on the matrix, the position indicating specification of a set of the second item of the second data file and the second candidate item; and
storing a value of an item among the items of the first data file, the item being indicated by the position specified, in the integrated database, in association with a candidate item among the candidate items, the candidate item being indicated by the position specified, wherein the storing includes storing in the integrated database a value of the second item of the second data file, instead of a value of the second item of the first data file, in association with the second candidate item, when a value of the first item of the first data file equals a value of the first item of the second data file.

US Pat. No. 10,339,114

SYSTEM AND METHOD FOR PROVIDING A MODERN-ERA RETROSPECTIVE ANALYSIS FOR RESEARCH AND APPLICATIONS (MERRA) DATA ANALYTIC SERVICE

The United States of Amer...

1. A system comprising:a data analytics platform comprising an assemblage of compute and storage nodes that provide a compute-storage fabric upon which high-performance parallel operations are performed over a collection of climate data stored in a distributed file system;
a hardware sequencer that transforms the climate data encoded in a native model output file format to yield flat serialized block compressed sequence files and loads the flat serialized block compressed sequence files into the distributed file system when a calling application submits an order service request to said data analytics platform via a system interface through which a client device can access the climate data via the data analytics platform, the order service request indicating an operation to be performed and specific predetermined parameters that further specify the order service request, wherein the system interface maps the incoming service request to a first order module, which launches an operation as a MapReduce computation on the data analytics platform and returns a session identifier (ID) through the interface to the calling application; wherein once the order request is launched, the calling application issues status service requests, the session ID is used to monitor progress of the order request, and the system interface maps a status request to the appropriate call to a services library and receives a status update, which the system interface passes back to the calling application;
a hardware desequencer that transforms the flat serialized block compressed sequence files from the native model output file format to a second climate data file format and moves data stored in the second climate data file format out of the distributed file system, where the data is prepared for retrieval by the calling application as a separate file when the calling application submits a download service request via the system interface mapped to the services library;
a services library comprising a plurality of software applications that dynamically create data objects from the data stored in the second climate data file format as reduced final results; and
a utilities library comprising a plurality of software applications that can process the flat serialized block compressed sequence files, whereby the services library returns the data, which the system interface relays to the calling application, and the calling application sends the climate data via the data analytics platform through a client device to an end user;
whereby the compute and storage nodes comprise a processor configured as containing multiple cores or processors, a bus, a memory controller, and a cache, including multiple distributed processors located in multiple separate computing devices working together via a communications network, sharing resources such as memory and the cache or operating using independent resources, configured from an application specific integrated circuit (ASIC) or a programmable gate array (PGA) including a field PGA, and utilizing a system bus selected from one of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus connected to storage devices such as a hard disk drive, a magnetic disk drive, an optical disk drive, a tape drive, a solid-state drive, a RAM drive, removable storage devices, a redundant array of inexpensive disks (RAID), or a hybrid storage device.

US Pat. No. 10,339,113

METHOD AND SYSTEM FOR EFFECTING INCREMENTAL CHANGES TO A REPOSITORY

ORACLE INTERNATIONAL CORP...

1. A computer-implemented method comprising:receiving, by a computing system, an incremental feature package, wherein
the incremental feature package comprises one or more revisions to be made to a repository, the repository including a first set of data that provides a logical construction for the interpretation of a second set of data stored in an associated database, the repository storing a third set of data describing one or more objects, and further wherein
the one or more revisions include information for updating a portion of at least one first object stored in the repository, the first object comprising a plurality of components, and
the incremental feature package comprises a first delta file, the first delta file being generated based at least in part on a location of a change in the first object and an object tag table comprising the change in the first object;
causing, by the computing system, the one or more revisions to be merged with the repository, wherein the one or more revisions update the portion of the first object stored in the repository; and
causing, by the computing system, a schema definition of the associated database to be synchronized with a schema definition of the repository.

US Pat. No. 10,339,112

RESTORING DATA IN DEDUPLICATED STORAGE

Veritas Technologies LLC,...

1. A method comprising:receiving, at a backup computing system, a backup copy of data from a source computing system;
deduplicating the backup copy at the backup computing system, wherein
the backup copy is deduplicated at the backup computing system by using a first deduplication methodology that is not recognized by the source computing system;
after the backup copy has been deduplicated by the backup computing system, receiving, at the backup computing system, a restore request from the source computing system, wherein
the restore request requires restoration of an amount of data that is greater than an amount of storage that is available on the source computing system;
in response to receiving the restore request, rehydrating the backup copy to create a rehydrated backup copy, wherein
the rehydrated backup copy is created by the backup computing system, and
the rehydrated backup copy comprises a set of data objects;
determining an amount of available memory space on the source computing system for storing data;
transmitting a first portion of the rehydrated backup copy to the source computing system, wherein the transmitting comprises specifying a size of the first portion of the rehydrated backup copy based on the amount of available memory space on the source computing system,
the first portion of the rehydrated backup copy is less than all of the rehydrated backup copy,
the first portion of the rehydrated backup copy comprises some, but not all, of the data requested via the restore request, and
the first portion of the rehydrated backup copy comprises an amount of data that does not exceed the amount of storage that is available on the source computing device; and
after the source computing system deduplicates the first portion of the rehydrated backup copy using a second deduplication methodology, transmitting a second portion of the rehydrated backup copy to the source computing system, wherein
the first portion of the rehydrated backup copy is transmitted before the backup computing system transmits the second portion of the rehydrated backup copy,
a revised amount of available storage indicates an amount of storage that is available on the source computing device after the first portion of the rehydrated backup copy has been deduplicated by the source computing device by using the second deduplication methodology,
the second deduplication methodology is not recognized by the backup computing system,
the second portion of the rehydrated backup copy comprises some, but not all, of the data requested via the restore request,
the second portion of the rehydrated backup copy comprises an amount of data that does not exceed the revised amount of available storage on the source computing device, and
the first portion of the rehydrated backup copy and the second portion of the rehydrated backup copy comprise different data.
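Operationally, the claim amounts to a staged transfer: rehydrate the backup, then repeatedly send a portion no larger than the space currently free on the source and wait for the source to deduplicate it before sending more. A sketch of that loop; the object sizes, callbacks, and toy free-space numbers are assumptions.

def restore_in_portions(rehydrated_objects, get_free_space, send, wait_for_dedup):
    remaining = list(rehydrated_objects)
    while remaining:
        budget = get_free_space()                 # space currently free on the source
        portion, used = [], 0
        while remaining and used + remaining[0]["size"] <= budget:
            obj = remaining.pop(0)
            portion.append(obj)
            used += obj["size"]
        if not portion:
            raise RuntimeError("largest remaining object exceeds available space")
        send(portion)
        wait_for_dedup()                          # source dedups, freeing space again

# Toy usage: three 100-unit objects, source has room for roughly two at a time.
free = [250, 250]
objects = [{"name": n, "size": 100} for n in "abc"]
restore_in_portions(
    objects,
    get_free_space=lambda: free[0] if free else 250,
    send=lambda p: print("sent", [o["name"] for o in p]),
    wait_for_dedup=lambda: free.pop(0) if free else None,
)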

US Pat. No. 10,339,109

OPTIMIZING HASH TABLE STRUCTURE FOR DIGEST MATCHING IN A DATA DEDUPLICATION SYSTEM

INTERNATIONAL BUSINESS MA...

1. A method for optimizing a hash table structure for digest matching in a data deduplication system using a processor device in a computing environment, comprising:determining a repository data interval as similar to an input data interval, and subsequent to determining the repository data interval as similar to the input data interval, identifying identical sub-intervals comprising subsets of the data intervals previously stored in the repository, wherein the input and repository data intervals are each produced using a single linear scan of rolling hash values to calculate both similarity elements and digest block boundaries corresponding to the data intervals; and wherein each of the rolling hash values is discarded upon contributing to the calculation;
loading into a search structure a plurality of repository digests corresponding to the similar repository data interval, in a sequential representation corresponding to a placement order of calculated values of the plurality of repository digests, the placement order of the calculated values of the plurality of repository digests correlative to an order in which input digest values were calculated, such that the plurality of digests are stored in a linear form independent of a deduplicated form by which the data that the plurality of digests describe is stored; and
incorporating into entries of the search structure a compact index pointing to a position in the sequential representation of a plurality of digests.
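The search structure in the claim keeps the repository digests of the similar interval in a flat, calculation-ordered sequence with a compact index into positions of that sequence, so matching input digests can be extended as runs. A sketch of that layout; the dictionary index and the run-extension routine are illustrative assumptions.

from collections import defaultdict

def build_search_structure(repo_digests):
    sequence = list(repo_digests)                 # linear form, calculation order
    index = defaultdict(list)                     # digest value -> positions in sequence
    for pos, digest in enumerate(sequence):
        index[digest].append(pos)
    return sequence, index

def match_run(input_digests, sequence, index):
    """Starting from the first input digest, count how many line up in order."""
    for start in index.get(input_digests[0], []):
        run = 0
        while (run < len(input_digests) and start + run < len(sequence)
               and sequence[start + run] == input_digests[run]):
            run += 1
        if run:
            return start, run
    return None

seq, idx = build_search_structure(["d1", "d2", "d3", "d4"])
print(match_run(["d2", "d3"], seq, idx))   # (1, 2): two digests match at position 1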

US Pat. No. 10,339,108

PROGRAMMATICALLY CHOOSING PREFERRED STORAGE PARAMETERS FOR FILES IN LARGE-SCALE DISTRIBUTED STORAGE SYSTEMS

Google LLC, Mountain Vie...

1. A method comprising:receiving, at a processor, trace data representing access information about files stored in a distributed storage system, the trace data comprising traces corresponding to log files comprising time-ordered records of events associated with the files stored in the distributed storage system;
identifying, by the processor, file access patterns based on the trace data;
receiving, at the processor, metadata information associated with the files stored in the distributed storage system;
generating, by the processor, a preferred storage parameter for each file based on the received metadata information and the identified file access patterns;
receiving, at the processor, file reliability or accessibility information of a new file from a user computing device in communication with the processor through a network, the file reliability or accessibility information indicating whether the new file requires reliable access from the distributed storage system or whether the new file allows some probability of failure when accessing the new file from the distributed storage system;
determining, by the processor, whether the received file reliability or accessibility information of the new file matches information of a file group of the files stored in the distributed storage system; and
when the file reliability or accessibility information of the new file matches the information of the file group, storing the new file in the distributed storage system based on the preferred storage parameter associated with the file group.
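The final steps of the claim reduce to a lookup: the new file's reliability/accessibility requirement is matched against stored file groups, and the matching group's preferred storage parameter is applied. A minimal sketch with invented group names and parameters.

FILE_GROUPS = {
    # group -> (requires reliable access?, preferred storage parameter)
    "hot-logs": (False, {"encoding": "replication", "replicas": 2}),
    "billing":  (True,  {"encoding": "reed-solomon", "stripes": "6+3"}),
}

def choose_storage(new_file_requires_reliability):
    """Pick the preferred storage parameter of the group whose reliability matches."""
    for group, (reliable, params) in FILE_GROUPS.items():
        if reliable == new_file_requires_reliability:
            return group, params
    return None, {"encoding": "replication", "replicas": 3}   # assumed default

print(choose_storage(True))    # billing group, erasure-coded parameters
print(choose_storage(False))   # hot-logs group, replication parameters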

US Pat. No. 10,339,107

MULTI-LEVEL COLOCATION AND PROCESSING OF SPATIAL DATA ON MAPREDUCE

International Business Ma...

1. A method, comprising:correlating multiple items of spatial data and multiple items of attribute data within a file system to generate multiple blocks of correlated data, wherein said correlating is carried out by a file system component executing on at least one computing device;
colocating each of the multiple blocks of correlated data on a given node within the file system based on a data block placement policy, wherein said colocating is carried out by the file system component executing on the at least one computing device; and
clustering at least a portion of multiple replicas generated for each of the multiple data blocks at multiple distinct and user-determined levels of spatial granularity, wherein all portions of the multiple replicas corresponding to a particular one of the multiple levels of spatial granularity are stored on a particular node attributed to the particular one of the multiple levels of spatial granularity, wherein portions of the multiple replicas correspond to specific ones of the multiple levels of spatial granularity based at least in part on spatial information contained within the portions of the multiple replicas, and wherein said clustering is carried out by the file system component executing on the at least one computing device within the file system.

US Pat. No. 10,339,106

HIGHLY REUSABLE DEDUPLICATION DATABASE AFTER DISASTER RECOVERY

Commvault Systems, Inc., ...

1. A networked information management system configured to verify synchronization of deduplication information, the networked information management system comprising:a data storage database comprising a plurality of first job identifiers, wherein each first job identifier comprises a time that a respective job occurred;
a data storage computer, the data storage computer comprising computer hardware configured to receive an indication that the data storage database is being restored to an earlier version of the data storage database; and
a media agent that executes on one or more computer processors and that is configured to:
receive a first instruction from the data storage computer in response to the data storage computer receiving the indication, wherein the first instruction instructs the media agent to stop scheduled secondary storage operations associated with a deduplication database, wherein the deduplication database comprises a job identifier table which correlates deduplication information with a plurality of second job identifiers, wherein the deduplication database comprises a plurality of reference counters, each reference counter representing a number of links to a respective data block in a secondary storage system; and
for each of the second job identifiers that does not correlate with any first job identifiers in a subset of the first job identifiers, instruct the deduplication database to prune an entry in the job identifier table associated with the second job identifier and to decrement by one each of the reference counters of data blocks that are associated with the second job identifier.
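The media agent's clean-up can be pictured as set difference plus reference-count maintenance: any job known to the deduplication database but missing from the restored data storage database is pruned, and each of its blocks loses one reference. A sketch under that reading, with invented table shapes.

restored_job_ids = {"job1", "job2"}                 # first job identifiers (restored subset)

dedup_job_table = {                                 # second job id -> data block ids
    "job1": ["blockA"],
    "job2": ["blockA", "blockB"],
    "job3": ["blockB", "blockC"],                   # ran after the restore point
}
ref_counters = {"blockA": 2, "blockB": 2, "blockC": 1}

for job_id in list(dedup_job_table):
    if job_id not in restored_job_ids:
        for block in dedup_job_table.pop(job_id):   # prune the job's entry
            ref_counters[block] -= 1                # drop one link per block

print(ref_counters)   # {'blockA': 2, 'blockB': 1, 'blockC': 0}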

US Pat. No. 10,339,105

RELAY SERVER AND STORAGE MEDIUM

BROTHER KOGYO KABUSHIKI K...

1. A relay server configured to relay service provision from a service provision server to a communication apparatus, the relay server comprising:a processor; and
a memory storing an application program interface (API) for the service provision server, and instructions that, when executed by the processor, cause the relay server to perform:
receiving folder selection information from the communication apparatus, the folder selection information indicating a first folder;
registering the received folder selection information indicating the first folder in a database, the database being different from the service provision server and the communication apparatus, and the database being connected with the relay server via the Internet;
receiving a request for selection screen data from the communication apparatus through the Internet in a case that a predetermined instruction is provided to the communication apparatus by a specific user, the request comprising an identifier of the specific user and an identifier of the service, the selection screen data indicating a selection screen by which the specific user selects a folder;
transmitting to the database through the Internet, in response to receiving the request for the selection screen data, a request for history information, the request comprising the identifier of the specific user and the identifier of the service;
receiving, from the database through the Internet, first folder history information corresponding to the specific user by communicating with the database, the first folder history information identifying the first folder indicated by the registered folder selection information, the service provision server comprising a plurality of folders, the plurality of folders comprising one or more folders capable of being accessed by the specific user, the one or more folders comprising the first folder, and the first folder being a folder which has previously been accessed by the specific user in response to an instruction from the specific user;
transmitting an information request to the service provision server through the Internet using the API, the information request comprising the first folder history information corresponding to the specific user received from the database;
receiving, from the service provision server through the Internet using the API, a plurality of folder names including a folder name of the first folder and a folder name of a second folder in response to the transmission of the information request, the service provision server being different from the database and the communication apparatus;
generating the selection screen data such that the folder name of the first folder is displayed on the display unit of the communication apparatus preferentially over the folder name of the second folder, and
supplying the generated selection screen data to the communication apparatus through the Internet, the selection screen data being usable by the specific user for selecting a folder among the folders including the first folder and the second folder.

US Pat. No. 10,339,104

INFORMATION PROCESSING APPARATUS, FILE MANAGEMENT METHOD, AND COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN FILE MANAGEMENT PROGRAM

FUJITSU LIMITED, Kawasak...

1. An information processing apparatus comprising:a memory configured to store item information and file information, the item information indicating, for each of a plurality of items included in data of a plurality of files, the number of files among the plurality of files including the item and an identifier per item, and the file information indicating whether each of the plurality of files includes each of the plurality of items and includes a bit sequence indicating whether or not each of the files includes an item corresponding to the identifier according to whether a value of a bit position corresponding to the identifier is valid or invalid; and
a processor coupled to the memory and the processor configured to:
upon receipt of a deletion request of a file, update the number of files of items included in the data of the file of a deletion target in the item information stored in the memory and utilized in searching, and, when the number of files becomes 0, delete an item whose number of files becomes 0 and the number of files including the item from the item information stored in the memory and utilized in the searching,
specify, subsequent to the updating of the number of files based upon the deletion request, an identifier of an item included in a search condition based on the item information,
extract search target files whose values of bit positions corresponding to the specified identifier are valid, based on the file information and subsequent to the updating of the number of files based upon the deletion request,
generate a search bit sequence whose value of the bit position corresponds to the specified identifier, and
search for the file satisfying the search condition, from the search target files whose bit sequences in the file information, when the product of the search bit sequence and each such bit sequence is taken, match the search bit sequence.
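The search side of the claim is a classic bitmap-index lookup: each file carries a bit per item identifier, the search condition is turned into a search bit sequence, and a file qualifies when the bitwise product of the two equals the search bits. A small sketch; the item identifiers and file masks are invented.

ITEM_IDS = {"title": 0, "author": 1, "year": 2}     # item -> bit position (identifier)

file_info = {                                       # file -> bit sequence of its items
    "f1.json": 0b111,     # title, author, year
    "f2.json": 0b011,     # title, author
}

def search(required_items):
    """Return files whose bit sequence ANDed with the search bits equals the search bits."""
    search_bits = 0
    for item in required_items:
        search_bits |= 1 << ITEM_IDS[item]
    return [f for f, bits in file_info.items() if bits & search_bits == search_bits]

print(search(["title", "year"]))    # ['f1.json']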

US Pat. No. 10,339,103

STEGANOGRAPHY OBFUSCATION

PAYPAL, INC., San Jose, ...

1. A system, comprising:a non-transitory memory storing instructions; and
one or more hardware processors coupled to the non-transitory memory and configured to read the instructions from the non-transitory memory to cause the system to perform operations comprising:
receiving a file over a network communication;
determining the file is a media file;
determining the media file is large enough to have a message using steganography stored within the media file;
checking for a prior steganography configured to obscure any steganography messages within the media file; and
responsive to determining the file is the media file, responsive to determining the media file is large enough, and responsive to the checking for the prior steganography, performing a steganography on the media file, the steganography configured to obscure any steganography messages within the media file.
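A hedged sketch of the gating logic in the claim, with least-significant-bit randomization standing in for "a steganography configured to obscure any steganography messages"; the size threshold, the media-type check, and the LSB scheme are assumptions, not PayPal's method.

import os, random

MIN_STEGO_SIZE = 4096        # assumed minimum size able to carry a hidden message

def is_media_file(name):
    return os.path.splitext(name)[1].lower() in {".png", ".bmp", ".wav"}

def obscure(sample_bytes):
    """Randomize the lowest bit of each sample, destroying LSB-embedded payloads."""
    return bytes((b & 0xFE) | random.getrandbits(1) for b in sample_bytes)

def handle_upload(name, payload):
    if not is_media_file(name) or len(payload) < MIN_STEGO_SIZE:
        return payload                      # nothing to obscure
    return obscure(payload)                 # applied whether or not a message was found

clean = handle_upload("photo.png", bytes(range(256)) * 32)
print(len(clean))    # 8192 bytes, with scrambled least-significant bits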

US Pat. No. 10,339,102

AUTOMATING SCRIPT CREATION FOR A LOG FILE

VMware, Inc., Palo Alto,...

1. A method for automating script creation for a log file, the method comprising:displaying a first log file for a first component;
receiving a first selection of a string within the first log file;
receiving a search operation to be performed using the string;
performing the search operation on a second log file using the string, wherein the second log file corresponds to a second component;
receiving a second selection of a variable in the second log file;
automatically creating a script based on the search operation using the variable;
storing the script such that it is accessible for use in other log files;
running the script on a third log file associated with a third component; and
displaying an output from the third log file that filters entries based on the string from the first log file and the variable from the second log file.
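The claim's "script" can be pictured as a reusable filter built from a string selected in one component's log and a variable selected in another's, then replayed against a third log. A sketch under that reading; the regex form of the script and the example logs are assumptions.

import re

def create_script(string, variable_pattern):
    """Return a reusable filter built from the selected string and variable."""
    pattern = re.compile(re.escape(string) + r".*(?:" + variable_pattern + r")")
    def script(log_lines):
        return [line for line in log_lines if pattern.search(line)]
    return script

log1 = ["2024-01-01 ERROR task-42 failed"]                 # first component
log2 = ["scheduler retry task-42 node=worker-7"]           # second component
log3 = ["worker-7 ERROR task-42 failed again", "worker-3 ok"]   # third component

script = create_script("task-42", r"node=\S+|failed")      # selected string + variable
print(script(log3))    # ['worker-7 ERROR task-42 failed again']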

US Pat. No. 10,339,100

FILE MANAGEMENT METHOD AND FILE SYSTEM

Huawei Technologies Co., ...

1. A file management method, comprising:acquiring input/output (IO) access information for a file in a source storage medium when an operation is repeatedly performed on the file, wherein the source storage medium comprises a storage medium type, and wherein the operation uses a current management manner of the file that comprises using a first file management granularity for the storage medium type;
determining an IO access mode corresponding to the IO access information when the operation is repeatedly performed;
matching the IO access mode with a preset access mode in a mode matching library to obtain a file management policy associated with the IO access mode, wherein the file management policy comprises one of a second file management granularity, a file storage medium type, or the second file management granularity and the file storage medium type, and wherein the mode matching library comprises a correspondence between the IO access mode and the file management policy; and
adjusting the current management manner to a new management manner when the current management manner is inconsistent with the file management policy comprising:
applying, to a target storage medium, for a storage block used to store the file, wherein the target storage medium is a storage medium consistent with the file storage medium type comprised in the file management policy;
migrating the file from the source storage medium to the storage block, wherein the file is migrated and is operated according to another IO access request for operating the file, wherein a first part of the file is not stored in the source storage medium and a second part of the file is stored in the source storage medium, and wherein the first part of the file in the target storage medium is operated after the file is migrated and the second part of the file is operated in the source storage medium according to the file storage medium type in the file management policy corresponding to the IO access mode; and
creating new metadata such that the new metadata comprises storage information of the migrated file, wherein the new management manner uses the second file management granularity and the file storage medium type corresponding to the IO access mode.

US Pat. No. 10,339,098

CONTAINER-LEVEL ARRAY STORAGE

Spectra Logic, Corp., Bo...

1. An apparatus comprising:a processor-based storage controller;
a nontransient, tangible computer memory configured to store a plurality of data files; and
computer instructions stored in the computer memory defining container-level array storage logic that is configured to be executed by the controller to logically containerize the data files in a predetermined plurality of virtual storage containers, the data files stored sequentially from a beginning of one of the predetermined plurality of virtual storage containers to an end of another one of the predetermined plurality of virtual storage containers, all of the sequentially stored files in each of the predetermined plurality of virtual storage containers defining a respective container-level stripe unit, and to flush the predetermined plurality of virtual storage containers by migrating each container-level stripe unit to a respective physical storage device.

US Pat. No. 10,339,097

HISTORY ARCHIVE OF LIVE AUDIO AND METHODS OF USING THE SAME

SIEMENS INDUSTRY, INC., ...

1. A fire control panel with archiving capabilities, comprising:a processor in signal communication with a memory and configured to execute, in response to an emergency event, a plurality of instructions of a control panel application stored in the memory, the plurality of instructions including instructions for monitoring and managing safety field devices comprising fire and smoke detectors;
wherein upon receiving an alert indicative of the emergency event, the processor, under the control of the control panel application, is configured to:
identify one or more safety field devices responsive to the emergency event;
identify an audio file representative of an audio message corresponding to the emergency event and broadcast via a speaker;
generate a log file in response to the emergency event, and record one or more values corresponding to the one or more safety field devices as an entry in the log file, the one or more values comprising an emergency event date and data indicating activated safety field devices; and
embed the audio message into the log file via an embedding means,
wherein the embedding means is a log file generator residing in the memory of the fire control panel, and wherein embedding comprises, under control of the log file generator, extracting data representing the audio message from the audio file, integrating the extracted data into metadata of the log file, and converting the data from its extracted format into a format compatible with being embedded into the log file, and
further comprising a timer operably coupled to the processor, the timer defining a time period for identifying the audio file,
wherein the processor, under the control of the control panel application, identifies the audio file based on the audio file's presence within the defined time period.

US Pat. No. 10,339,096

EFFICIENT PATTERN MATCHING

British Telecommunication...

1. A computer implemented method to generate a pattern matching state machine to identify matches of a plurality of symbol patterns in a sequence of input symbols, wherein one or more of the symbol patterns includes a plurality of wildcard symbols, the method comprising:by a processor and a memory:
receiving the plurality of symbol patterns;
providing a first state machine of states and directed transitions between states corresponding to the plurality of symbol patterns;
identifying one or more mappings between states of the first state machine such that a state representing a sequence of symbols is mapped to other states constituting a proper suffix of a prefix of the sequence of symbols, wherein mappings for states representing a sequence of symbols including wildcard symbols include conditional mappings based on input symbols to be received, by the pattern matching state machine in use, to constitute the wildcard symbols;
generating a dictionary of patterns based on the conditional mappings, each pattern in the dictionary including symbol sequences required to constitute wildcard symbols for a conditional mapping;
providing a second state machine corresponding to patterns in the dictionary and being executable at a runtime of the pattern matching state machine to identify applicable conditional mappings based on input symbols received to constitute wildcard symbols; and
generating the pattern matching state machine based on the first state machine and the second state machine such that, when executed to identify locations of occurrences of matches of the plurality of symbol patterns in the sequence of input symbols, the identified locations correspond to the applicable conditional mappings based on input symbols received to constitute wildcard symbols.
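A much-simplified, hedged stand-in for the behavior the claim targets: patterns containing wildcards are accepted at a position only if the concrete symbols consumed at the wildcard positions satisfy a separate condition table, which plays the role of the dictionary/second state machine. The naive window scan below replaces the real state-machine construction entirely.

PATTERNS = {
    "a?c": lambda fills: fills[0] in "xyz",   # conditional mapping on the wildcard fill
    "abc": lambda fills: True,                # no wildcards, unconditional
}

def find_matches(text):
    matches = []
    for pattern, condition in PATTERNS.items():
        for i in range(len(text) - len(pattern) + 1):
            window = text[i:i + len(pattern)]
            fills = [w for p, w in zip(pattern, window) if p == "?"]
            if all(p in ("?", w) for p, w in zip(pattern, window)) and condition(fills):
                matches.append((i, pattern))
    return sorted(matches)

print(find_matches("abc axc aqc"))   # [(0, 'abc'), (4, 'a?c')]; 'aqc' fails the condition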

US Pat. No. 10,339,093

USB INTERFACE USING REPEATERS WITH GUEST PROTOCOL SUPPORT

Intel Corporation, Santa...

1. A system for side band communication comprising:a processor;
a system-on-chip (SOC); and
a repeater communicatively coupled to the processor and the SOC, the repeater to:
receive packets from a first transceiver;
detect a pattern in the packets to identify a guest protocol that is not natively supported by the repeater and that uses data plus (DP) and data minus (DM) pins for data transmission; and
send the packets from the first transceiver to the SOC via a second transceiver based on the identified guest protocol.

US Pat. No. 10,339,092

WIRELESS GIGABIT ALLIANCE (WIGIG) ACCESSORIES

Hewlett-Packard Developme...

1. A wireless gigabit alliance (WiGig) accessory, comprising:a connector to couple the WiGig accessory to a universal serial bus (USB) port of a computing device that is separate and distinct from the WiGig accessory, wherein the USB port operates in a plurality of modes, and in response to receiving a message, the WiGig accessory puts the USB port into a WiGig mode and forms a WiGig path;
a WiGig component to form a WiGig enabled communication system having a wireless peripheral component interconnect express (PCIe) communication capability when coupled to the computing device; and
wherein the computing device comprises a PCIe bus, a first demultiplexor directly coupled to the PCIe bus, and a second demultiplexor directly coupled to the first demultiplexor such that the first demultiplexor is between the PCIe bus and the second demultiplexor to enable wireless PCIe communication between the WiGig enabled computing system and an electronic device via the WiGig path.

US Pat. No. 10,339,091

PACKET DATA PROCESSING METHOD, APPARATUS, AND SYSTEM

HUAWEI TECHNOLOGIES CO., ...

1. A packet data processing method executed by a first processing apparatus, the method comprising:acquiring, by the first processing apparatus, first packet data that needs to be processed, the first packet data that needs to be processed comprising first packet data information and second packet data stored at a first storage address in the first processing apparatus, and the first packet data information comprising a header of the first packet data and the first storage address of the second packet data;
sending, by the first processing apparatus to a second processing apparatus, the first packet data information including the first storage address in the first processing apparatus at which the second packet data is stored;
receiving, by the first processing apparatus from the second processing apparatus, first updated packet data information that comprises a first updated header and the first storage address in the first processing apparatus at which the second packet data is stored;
subsequent to receiving the first updated packet data information from the second processing apparatus, acquiring, by the first processing apparatus, the second packet data using the first storage address;
processing, by the first processing apparatus, the first updated packet data information and the second packet data to generate first finished packet data by associating the first updated header with the second packet data;
storing the first finished packet data in a second storage address in the first processing apparatus;
acquiring, by the first processing apparatus, the first finished packet data from the second storage address in the first processing apparatus at which the first finished packet data is stored;
sending, by the first processing apparatus to the second processing apparatus, the first updated header and the second storage address in the first processing apparatus at which the first finished packet data is stored;
receiving, by the first processing apparatus from the second processing apparatus, second updated packet data information that comprises a second updated header and the second storage address in the first processing apparatus at which the first finished packet data is stored;
subsequent to receiving the second updated packet data information from the second processing apparatus, acquiring, by the first processing apparatus, the second packet data using the second storage address at which the first finished packet data is stored; and
processing, by the first processing apparatus, the second updated packet data information and the second packet data to generate second finished packet data by associating the second updated header with the second packet data.

US Pat. No. 10,339,089

ENHANCED COMMUNICATIONS OVER A UNIVERSAL SERIAL BUS (USB) TYPE-C CABLE

QUALCOMM Incorporated, S...

1. A universal serial bus (USB) host, comprising:a USB Type-C interface configured to couple to a USB Type-C cable, wherein the USB Type-C interface comprises a sideband use (SBU) interface and a configuration channel (CC) interface;
a plurality of communication circuits each configured to transmit and receive protocol-specific data based on a specified communication protocol; and
a link control circuit communicatively coupled to the USB Type-C interface and the plurality of communication circuits, wherein the link control circuit is configured to:
select a communication circuit among the plurality of communication circuits to transmit and receive the protocol-specific data over the SBU interface based on the specified communication protocol of the selected communication circuit;
configure the SBU interface according to the specified communication protocol of the selected communication circuit;
provide the protocol-specific data received from the selected communication circuit to the SBU interface; and
provide the protocol-specific data received from the SBU interface to the selected communication circuit.

US Pat. No. 10,339,088

SYSTEM AND METHOD TO BLACKLIST EQUALIZATION COEFFICIENTS IN A HIGH-SPEED SERIAL INTERFACE

Dell Products, LP, Round...

1. A serial interface, comprising:a receiver including:
a first input compensation module with a first setting that selects a first value from among a plurality of first values for a first input characteristic of the receiver, the first values including a low value, an intermediate value, and a high value;
a memory to store a first blacklist value from among the first values; and
a control module to receive the first blacklist value from the memory, to select each of the first values, except for the first blacklist value, to evaluate an indication of a performance level of the receiver for each of the selected first values, without evaluating the indication of the performance level of the receiver for the first blacklist value, and to select a particular first value based upon the indications of the performance level of the receiver.

US Pat. No. 10,339,086

USB COMMUNICATION CONTROL METHOD FOR USB HOST

Hyundai Motor Company, S...

1. A universal serial bus (USB) communication control method for a USB host connected to a USB accessory through a USB cable, the method comprising:receiving a request signal for switching from a first service to a second service in the USB accessory when the first service is being executed in the USB accessory;
initializing a USB port of the USB host so as to perform switching to the second service in the USB accessory while the USB host is connected to the USB accessory through the USB cable;
when the initialization of the USB port of the USB host fails, waiting for a standby time needed for service switching in the USB accessory, and then reattempting to initialize the USB port of the USB host so as to perform the switching to the second service in the USB accessory; and
when the switching to the second service through initialization or re-initialization of the USB port is successfully performed, executing the second service in the USB accessory.

US Pat. No. 10,339,085

METHOD OF SCHEDULING SYSTEM-ON-CHIP INCLUDING REAL-TIME SHARED INTERFACE

Samsung Electronics Co., ...

1. A scheduling method performed by a scheduler located between a plurality of masters and a slave, the scheduling method comprising:receiving a plurality of access requests from the plurality of masters;
setting the plurality of access requests in a plurality of registers;
scheduling the plurality of access requests, wherein the scheduling of the plurality of access requests comprises:
setting a plurality of time limit values based on the plurality of access requests, and
determining whether a system satisfies preconditions for operations, based on the plurality of time limit values; and
transmitting, when the system does not satisfy the preconditions for operations, a schedule uncontrollability message to the plurality of masters.

US Pat. No. 10,339,084

COMMUNICATION SYSTEM, MANAGEMENT APPARATUS, AND CONTROLLING APPARATUS

FUJITSU LIMITED, Kawasak...

1. A communication system comprising:a communication apparatus comprising a plurality of first connectors;
a controlling apparatus that comprises a plurality of second connectors and is connected to the communication apparatus through a plurality of communication cables; and
a management apparatus that is connected to the communication apparatus and the controlling apparatus;
the management apparatus comprises:
an apparatus identifier setting unit that sets an apparatus identifier for identifying the controlling apparatus, the controlling apparatus being connected to the management apparatus via a communication path that is different from the communication cables;
a first gathering unit that gathers a first identifier that is a serial number of each of the plurality of communication cables coupled to the plurality of first connectors of the communication apparatus; and
the controlling apparatus comprises:
an apparatus identifier obtaining unit that obtains the apparatus identifier from the apparatus identifier setting unit via the communication path;
a second gathering unit that gathers a second identifier that is a serial number of each of the plurality of communication cables coupled to the plurality of second connectors of the controlling apparatus; and
a connection normality determining unit that determines whether or not the communication cables that connect between the controlling apparatus and the communication apparatus are connected properly, by comparing the first identifiers with the second identifiers,
the connection normality determining unit obtains the first identifier for the respective communication cables coupled to the first connectors which the apparatus specifications specify to be connected to the second connectors, from among the first identifiers gathered by the first gathering unit, by referring to the apparatus specifications specifying respective combinations of the first connectors and the second connectors of the controlling apparatus associated with the apparatus identifier obtained by the apparatus identifier obtaining unit, and when the second identifier of each of the plurality of communication cables matches the first identifier of the communication cable, the connection normality determining unit determines that the second connector coupled to the communication cable is correctly connected, and when the second identifier of each of the plurality of communication cables does not match the first identifier of the communication cable, the connection normality determining unit determines that the second connector coupled to the communication cable is not correctly connected.

US Pat. No. 10,339,083

HOST DEVICE, SLAVE DEVICE, AND REMOVABLE SYSTEM

PANASONIC INTELLECTUAL PR...

1. A host device to be connected to a slave device by a plurality of interfaces of different maximum voltage levels, the host device comprising:a power supply unit that supplies power to the slave device;
a transmitter that transmits a signal to the slave device via a first signal line; and
a receiver that receives a signal from the slave device via a second signal line, wherein
the following sequence of events occurs:
1) the power supply unit supplies power,
2) the transmitter transmits a non-repetitive sequence of a signal of a first voltage level and a signal of a second voltage level via the first signal line, and
3) when the receiver receives a signal of the first voltage level via the second signal line, the transmission of the signal of the second voltage level is stopped in the first signal line.

US Pat. No. 10,339,082

TECHNOLOGIES FOR STABLE SECURE CHANNEL IDENTIFIER MAPPING FOR STATIC AND DYNAMIC DEVICES

Intel IP Corporation, Sa...

1. A computing device for device identifier mapping, the computing device comprising:a device manager to determine a device path to an I/O device of the computing device;
a firmware interface to (i) identify a firmware method as a function of the device path and (ii) invoke the firmware method; and
a firmware device mapper to determine a channel identifier as a function of the device path in response to invocation of the firmware method;
wherein the device manager is further to establish a trusted I/O channel with the I/O device with the channel identifier in response to a determination of the channel identifier.

US Pat. No. 10,339,081

METHODS AND DEVICES THAT UTILIZE HARDWARE TO MOVE BLOCKS OF OPERATING PARAMETER DATA FROM MEMORY TO A REGISTER SET

MEDTRONIC, INC., Minneap...

1. A method of controlling parameters of an active device, comprising:writing a plurality of block navigation data and corresponding parameter data and address value pairs to locations within a memory device of a block moving hardware-based controller, each block navigation datum and corresponding parameter data and address value pairs defining a block;
receiving a trigger at the block moving hardware-based controller;
in response to receiving the trigger, reading the block navigation datum from the memory device for a first block of the memory device and reading a number of parameter data and address value pairs corresponding to the block navigation datum; and
upon reading the number of parameter data and address value pairs, writing the parameter data values that have been read from memory to a set of registers corresponding to the address values.

US Pat. No. 10,339,080

SYSTEM AND METHOD FOR OPERATING THE SAME

SK hynix Inc., Gyeonggi-...

1. A system comprising:a central processing unit (CPU);
main and auxiliary storage devices coupled to a plurality of memory ports;
a memory bus suitable for coupling the CPU and the plurality of memory ports; and
a memory controller suitable for, when the CPU calls data stored in the auxiliary storage device, controlling the called data to be transferred from the auxiliary storage device to the main storage device and stored in the main storage device,
wherein the memory controller comprises:
a first instruction input unit suitable for receiving a first instruction to control the main storage device through a first path;
a second instruction input unit suitable for receiving a second instruction to control the auxiliary storage device through a second path; and
an instruction alignment unit suitable for aligning and outputting the first instruction inputted to the first instruction input unit and the second instruction inputted to the second instruction input unit.

US Pat. No. 10,339,079

SYSTEM AND METHOD OF INTERLEAVING DATA RETRIEVED FROM FIRST AND SECOND BUFFERS

WESTERN DIGITAL TECHNOLOG...

1. A host system that communicates with a non-volatile memory (NVM) device over a network, the host system comprising:a memory including a first buffer and a second buffer; and
a processor configured to execute a host interface configured to:
store blocks of application data to be communicated to the NVM device in the first buffer;
generate a respective block of metadata for each respective block of application data;
store the respective blocks of metadata in the second buffer;
store a first descriptor type that includes a first buffer address, a first buffer interleave burst length, and a burst count indicating a total number of blocks contained in the first buffer, wherein there is a one-to-one correlation between blocks of application data and blocks of metadata; and a second descriptor type that includes a second buffer address and a second buffer interleave burst length but no burst count in a scatter/gather list (SGL) stored in the memory, the second descriptor created by a host interface driver, wherein only a first descriptor of the first descriptor type and a second descriptor of the second descriptor type are required to interleave blocks of application data retrieved from the first buffer with associated blocks of protection data retrieved from the second buffer, wherein the second descriptor employs the burst count of the first descriptor for said interleaving;
generate the scatter/gather list having pairs of descriptors wherein each pair is made from the first descriptor type and the second descriptor type, wherein a single pair of descriptors is configured to provide sufficient information for the NVM device to retrieve each of a plurality of blocks of data from the first buffer and the second buffer and to provide an interleaving of the data.
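
A rough illustration of how a single pair of descriptors could drive the claimed interleaving, where the second descriptor carries no burst count of its own and instead reuses the first descriptor's. The descriptor classes, field names, and 1:1 burst sizes below are invented for the sketch and are not taken from the patent or from any real SGL format:

```python
# Hypothetical sketch of descriptor-pair interleaving; names and field layout
# are illustrative, not a real NVMe/SGL definition.
from dataclasses import dataclass

@dataclass
class FirstDescriptor:          # describes the application-data buffer
    buffer: list                # first buffer contents, split into blocks
    burst_length: int           # blocks to take per interleave burst
    burst_count: int            # total number of blocks in the first buffer

@dataclass
class SecondDescriptor:         # describes the metadata buffer; no burst count
    buffer: list
    burst_length: int

def interleave(d1: FirstDescriptor, d2: SecondDescriptor) -> list:
    """Interleave bursts from the two buffers; the second descriptor reuses
    the first descriptor's burst count, as the claim describes."""
    out, i1, i2 = [], 0, 0
    remaining = d1.burst_count
    while remaining > 0:
        take = min(d1.burst_length, remaining)
        out.extend(d1.buffer[i1:i1 + take]); i1 += take
        out.extend(d2.buffer[i2:i2 + d2.burst_length]); i2 += d2.burst_length
        remaining -= take
    return out

# Example: 4 data blocks interleaved 1:1 with 4 metadata blocks.
data = [f"D{i}" for i in range(4)]
meta = [f"M{i}" for i in range(4)]
print(interleave(FirstDescriptor(data, 1, 4), SecondDescriptor(meta, 1)))
# ['D0', 'M0', 'D1', 'M1', 'D2', 'M2', 'D3', 'M3']
```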

US Pat. No. 10,339,078

SMART DEVICE AND METHOD OF OPERATING THE SAME

Samsung Electronics Co., ...

1. A smart device comprising:a processor; and
a sensor comprising a plurality of algorithms that each correspond to a different movement type, the sensor configured to:
detect movement of the smart device,
identify at least one movement type based on the detected movement using at least one of the plurality of algorithms,
generate an interrupt signal including both an identifier indicating the identified movement type and information on a movement range comprising a value indicating a range of the detected movement with regard to the identified movement type, and
output the interrupt signal including both the identifier and the information on the movement range to the processor,
wherein the processor is configured to:
receive the interrupt signal including both the identifier and the information on the movement range, from the sensor, and
in response to the interrupt signal, control an action, determined based on the information on the movement range corresponding to the identified at least one movement type included in the interrupt signal, to be performed.

US Pat. No. 10,339,077

SYSTEMS AND METHODS FOR IMPLEMENTING TOPOLOGY-BASED IDENTIFICATION PROCESS IN A MOCHI ENVIRONMENT

Marvell World Trade Ltd.,...

11. A system for identifying a topology of a modular chip system prior to a boot-up of an operating system, the system comprising:a master System-on-Chip (“SoC”) that is configured to:
prior to boot-up of an operating system that uses the master SoC, detect an initialization command;
in response to detecting the initialization command, assign, by the master SoC, a first chip identifier to the master SoC;
transmit, by the master SoC, a discovery communication from the master SoC to a slave SoC that is one hop away from the master SoC; and
a slave SoC that is configured to:
determine, in response to receiving the discovery communication, whether the slave SoC is a last hop SoC;
in response to determining that the slave SoC is a last hop SoC, transmit a reply communication to the master SoC; wherein
the master SoC is further configured to assign, based on the reply communication, a second chip identifier to the slave SoC.

US Pat. No. 10,339,076

SYSTEM AND METHOD FOR ADAPTABLE FABRIC CONSISTENCY VALIDATION AND ISSUE MITIGATION IN AN INFORMATION HANDLING SYSTEM

Dell Products, LP, Round...

1. A method performed by an information handling system (IHS), the method comprising:receiving fabric consistency validation rules for the IHS;
detecting that a first device has been connected to the IHS in response to detecting that software associated with the first device has been updated;
in response to detecting that the first device has been connected, updating one of the fabric consistency validation rules associated with the first device or with one or more other devices that are connected to the first device by one or more links; and
validating that the first device is compatible with each of the other devices based on the updated fabric consistency validation rules of the IHS.

US Pat. No. 10,339,075

CLOCK TREE STRUCTURE IN A MEMORY SYSTEM

Micron Technology, Inc., ...

1. A computing system, comprising:multiple memory devices each being an integrated circuit memory device;
one or more command and address buses, each connected to one or more of the multiple memory devices to transmit command and address signals to each of the multiple memory devices; and
multiple clock lines connected to the multiple memory devices in a tree structure to transmit multiple distributed clock signals to each memory device of the multiple memory devices, the tree structure allowing each distributed clock signal of the multiple distributed clock signals to be individually trained such that the multiple distributed clock signals provide each memory device with a respective distributed clock signal that is temporally aligned with the command and address signals as received by that memory device, the multiple clock lines including a clock line directly connected to two or more memory devices of the multiple memory devices to allow a distributed clock signal of the multiple distributed signals to be individually trained for the two or more directly connected memory devices receiving that distributed clock signal.

US Pat. No. 10,339,073

SYSTEMS AND METHODS FOR REDUCING WRITE LATENCY

Keysight Technologies, In...

1. A computer system that reduces an amount of time that is required to write data to memory, the computer system comprising:memory; and
processing circuitry configured to execute a volume filter driver (VFD), wherein when the processing circuitry receives input/output (IO) requests to write data associated with a file to the memory while the VFD is in a fast termination (FT) mode of operations, the VFD causes metadata associated with received IO write requests to be written to a volume of the memory while preventing actual data associated with received IO write requests from being written to the volume of the memory, and wherein after the FT mode of operations is terminated, the VFD enters a quiescent mode of operations during which the VFD passes all IO write requests to the volume, thereby allowing actual data associated with the file to be written to the volume.

US Pat. No. 10,339,072

READ DELIVERY FOR MEMORY SUBSYSTEM WITH NARROW BANDWIDTH REPEATER CHANNEL

Intel Corporation, Santa...

1. A memory circuit, comprising:a first group of memory devices coupled to a first memory channel, the first memory channel having a first bandwidth to send read data to a host device;
a second group of memory devices coupled to a second memory channel, the second memory channel coupled to the first memory channel and having a second bandwidth to send read data to the host device, the second bandwidth a portion of the first bandwidth; and
a repeater to couple the second memory channel to the first memory channel, the repeater to share the first bandwidth between the first and second groups of memory devices, wherein the repeater is configured to provide access to up to the portion of the first bandwidth to the second group of memory devices to send read data to the host device, and to provide access to the first group of memory devices to either the first bandwidth or to the first bandwidth less the portion to send read data to the host device when the first group of memory devices and the second group of memory devices are coupled, respectively, to the first and second memory channels.

US Pat. No. 10,339,071

SYSTEM AND METHOD FOR INDIVIDUAL ADDRESSING

Micron Technology, Inc., ...

1. A device comprising:a bus interface; and
a plurality of state machine engines connected to the bus interface in a rank, wherein each of the plurality of state machine engines is configured to analyze data and to receive a respective address of a plurality of addresses from the bus interface for loading prior to executing a command from a processor or an instruction buffer, wherein the bus interface comprises a processor, an indirect address storage (IAS), and a multiplexer configured to switch to transmit the respective address of the plurality of addresses comprising an indirect address stored in the IAS when an indirect action is issued by the processor and an enable bit stored in the IAS is set.

US Pat. No. 10,339,070

ACCESS OF VIRTUAL MACHINES TO STORAGE AREA NETWORKS

International Business Ma...

1. A method comprising:defining a storage validation list, wherein the storage validation list indicates at least that:
for a first storage area network connecting a host computer system to a storage system, respective virtual machines of a plurality of virtual machines executing on the host computer system have access to a first group of logical units via a first target port of the storage system; and
for a second storage area network connecting the host computer system to the storage system, respective virtual machines of the plurality of virtual machines executing on the host computer system have access to a second group of logical units via a second target port of the storage system; and
returning, by a respective switch of a respective storage area network connecting the host computer system with the storage system, success information to the host computer system in response to a respective virtual machine port name having access to a target port associated with a target port name specified in a validate access command.

US Pat. No. 10,339,069

CACHING LARGE OBJECTS IN A COMPUTER SYSTEM WITH MIXED DATA WAREHOUSING AND ONLINE TRANSACTION PROCESSING WORKLOAD

ORACLE INTERNATIONAL CORP...

1. A method for managing cached data objects, the method comprising:receiving a request to access a target data object;
in response to the request to access the target data object, increasing a first access-level value associated with the target data object;
after increasing the first access-level value associated with the target data object, comparing the first access-level value associated with the target data object with a set of one or more other access-level values associated with data objects residing in a cache;
based on said comparing, replacing at least one data object with the target data object;
adjusting, based on a second access-level value associated with the at least one data object, a rate at which access-level values are adjusted;
based on the rate at which access-level values are adjusted, adjusting the first access-level value of the target data object; and
wherein the method is performed by one or more computing devices.
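
As a rough reading of the claim, the sketch below keeps a per-object access-level value, replaces the lowest-valued resident object when a newly requested object's level exceeds it, and feeds the evicted object's level back into the rate at which levels are adjusted. The class, the decay factor, and the capacity are all assumptions made for illustration:

```python
# Illustrative sketch only: access-level based replacement with a simplified
# feedback on the adjustment rate.
class AccessLevelCache:
    def __init__(self, capacity, step=1.0):
        self.capacity = capacity
        self.step = step                 # rate at which levels are adjusted
        self.levels = {}                 # object id -> access-level value
        self.resident = set()            # objects currently in the cache

    def access(self, obj):
        self.levels[obj] = self.levels.get(obj, 0.0) + self.step
        if obj in self.resident:
            return "hit"
        if len(self.resident) < self.capacity:
            self.resident.add(obj)
            return "filled"
        victim = min(self.resident, key=lambda o: self.levels[o])
        if self.levels[obj] > self.levels[victim]:
            self.resident.remove(victim)
            self.resident.add(obj)
            # feedback: evicting a frequently used object slows future adjustments
            if self.levels[victim] > 1:
                self.step = max(0.1, self.step * 0.9)
            return f"replaced {victim}"
        return "bypassed"

cache = AccessLevelCache(capacity=2)
for obj in ["a", "b", "c", "c", "c", "a"]:
    print(obj, cache.access(obj))
```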

US Pat. No. 10,339,067

MECHANISM FOR REDUCING PAGE MIGRATION OVERHEAD IN MEMORY SYSTEMS

Advanced Micro Devices, I...

1. A method for use in a memory system comprising:swapping a first plurality of pages of a first memory of the memory system with a second plurality of pages of a second memory of the memory system, the first memory having a first latency and the second memory having a second latency, the first latency being less than the second latency; and
updating a page table and triggering a translation lookaside buffer shootdown to associate a virtual address of each of the first plurality of pages with a corresponding physical address in the second memory and to associate a virtual address for each of the second plurality of pages with a corresponding physical address in the first memory,
wherein the swapping comprises:
copying the first plurality of pages from the first memory to a staging buffer;
copying the second plurality of pages from the second memory to the staging buffer; and
writing data to a copy of a first page of the first plurality of pages in the staging buffer and writing the data to the first page in the first memory in response to a write instruction to the first page during the copying of the first plurality of pages to the staging buffer.
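
A toy model of the claimed swap path: both page groups pass through a staging buffer, and a write that arrives while the first group is being copied is applied to the staged copy as well as to the original page. Page tables and TLB shootdowns are omitted, and the frame names and data values are invented:

```python
# Simplified model; dicts stand in for the two memories and the staging buffer.
fast_mem = {"f0": "A", "f1": "B"}      # frames in the lower-latency memory
slow_mem = {"s0": "X", "s1": "Y"}      # frames in the higher-latency memory
staging = {}

def stage(frames, source):
    for f in frames:
        staging[f] = source[f]

def write_during_copy(frame, data):
    """A write to a fast-memory frame arriving while it is being staged is
    applied to both the staged copy and the frame itself."""
    if frame in staging:
        staging[frame] = data
    fast_mem[frame] = data

stage(["f0", "f1"], fast_mem)          # copy first group to staging
write_during_copy("f0", "A2")          # in-flight write during the copy
stage(["s0", "s1"], slow_mem)          # copy second group to staging

# Finish the swap: fast frames receive the slow pages and vice versa.
fast_mem["f0"], fast_mem["f1"] = staging["s0"], staging["s1"]
slow_mem["s0"], slow_mem["s1"] = staging["f0"], staging["f1"]
print(fast_mem)   # {'f0': 'X', 'f1': 'Y'}
print(slow_mem)   # {'s0': 'A2', 's1': 'B'}
```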

US Pat. No. 10,339,066

OPEN-ADDRESSING PROBING BARRIER

International Business Ma...

1. A method, in a data processing system, for utilizing an open address probing barrier in association with a memory container, the method comprising:responsive to receiving a request from an application to find an item in the memory container, calculating a starting memory slot for the item;
responsive to the item failing to occupy the starting memory slot, probing a first predetermined number of memory slots immediately following the starting memory slot for the item;
responsive to the item occupying one of the first predetermined number of memory slots immediately following the starting memory slot, returning the item to the application;
responsive to the item failing to occupy one of the first predetermined number of memory slots immediately following the starting memory slot, determining whether a barrier bit has been set in association with the last of the first predetermined number of memory slots;
responsive to the barrier bit being set, probing at least a portion of the memory container for the item and, responsive to the item being found in the portion of the memory container, returning the item to the application;
responsive to the barrier bit failing to be set, returning a notification that the item does not exist in the memory container to the application;
responsive to receiving a request to insert the item in one of the plurality of memory slots in the memory container, calculating the starting memory slot for the item;
responsive to the starting memory slot being empty, storing the item in the starting memory slot;
responsive to the starting memory slot being occupied, probing the first predetermined number of memory slots immediately following the starting memory slot for the item;
responsive to one of the first predetermined number of memory slots immediately following the starting memory slot being empty, storing the item in the empty memory slot;
responsive to all of the first predetermined number of memory slots immediately following the starting memory slot being occupied, probing at least the portion of the memory container for an empty memory slot;
responsive to the empty memory slot being identified, storing the item in the empty memory slot and setting the barrier bit in association with the last of the first predetermined number of memory slots;
responsive to the empty memory slot failing to be identified, returning a notification that the item cannot be stored in the memory container to the application.
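
The claim reads like open addressing with a bounded probe window plus a per-slot barrier bit, so the following toy table sketches that shape: lookups probe the starting slot and a small window, continue past the window only if the barrier bit on the window's last slot is set, and inserts that spill beyond the window set that bit. Table size, window length, and the hash are illustrative choices, not the patent's parameters:

```python
# Toy open-addressing table with a per-slot barrier bit.
SIZE, WINDOW = 16, 3

slots = [None] * SIZE          # stored items (or None)
barrier = [False] * SIZE       # barrier bit per slot

def start(item):
    return hash(item) % SIZE

def find(item):
    s = start(item)
    for i in range(WINDOW + 1):                # starting slot plus the window
        if slots[(s + i) % SIZE] == item:
            return (s + i) % SIZE
    if not barrier[(s + WINDOW) % SIZE]:       # barrier not set: item absent
        return None
    for i in range(WINDOW + 1, SIZE):          # barrier set: probe further
        if slots[(s + i) % SIZE] == item:
            return (s + i) % SIZE
    return None

def insert(item):
    s = start(item)
    for i in range(WINDOW + 1):
        if slots[(s + i) % SIZE] is None:
            slots[(s + i) % SIZE] = item
            return True
    for i in range(WINDOW + 1, SIZE):          # spill beyond the window
        if slots[(s + i) % SIZE] is None:
            slots[(s + i) % SIZE] = item
            barrier[(s + WINDOW) % SIZE] = True
            return True
    return False                               # table full

for x in range(8):
    insert(x)
print(find(3) is not None, find(99) is None)   # True True
```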

US Pat. No. 10,339,065

OPTIMIZING MEMORY MAPPING(S) ASSOCIATED WITH NETWORK NODES

Ampere Computing LLC, Sa...

1. A system for optimizing memory mappings associated with a plurality of network nodes in a multi-node system, comprising:a first network node of the plurality of nodes configured for generating a memory page request in response to an invalid memory access associated with a virtual central processing unit of the first network node and, in response to a determination that a second network node of the plurality of nodes comprises a memory space associated with the memory page request, transmitting the memory page request to the second network node via a communication channel; and
the second network node configured for receiving the memory page request, retrieving a memory page associated with the memory page request, and transmitting the memory page to the first network node via the communication channel, the first network node being further configured for mapping a memory page associated with the memory page request based on a set of memory page mappings stored by the first network node.

US Pat. No. 10,339,064

HOT CACHE LINE ARBITRATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for hot cache line arbitration, the method comprising:detecting, by a processing device, a hot cache line scenario;
tracking, by the processing device, hot cache line requests from requestors to determine subsequent satisfaction of the requests; and
facilitating, by the processing device, servicing of the hot cache line requests according to a hierarchy of the requestors, the hierarchy of the requestors being based at least in part on a location of the requestors relative to one another.

US Pat. No. 10,339,063

SCHEDULING INDEPENDENT AND DEPENDENT OPERATIONS FOR PROCESSING

Advanced Micro Devices, I...

1. A method, comprising:adding a first operation to a tracking array of a processor in response to the first operation being received for scheduling for execution at the processor and assigning the first operation a first age value;
adjusting the first age value in response to scheduling a second operation from the tracking array;
selecting the first operation for execution based on the first age value;
after selecting the first operation for execution, blocking the first operation from being issued to an execution unit responsive to identifying that the first operation is dependent on a third operation;
in response to blocking the first operation, resetting the first age value to an initial value and maintaining the first operation at the tracking array;
adjusting a value of a counter by a first adjustment in response to blocking the first operation from being scheduled for execution while the first operation is stored at the tracking array; and
in response to the value of the counter exceeding a threshold, suppressing scheduling of execution operations not stored at the tracking array.
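
A schematic Python model of the tracking-array behavior described above: waiting operations accumulate age, the oldest ready operation is selected, a blocked dependent operation has its age reset and stays in the array, and a counter of blocked selections can suppress acceptance of operations not yet tracked. The class name, tie-breaking, and threshold are assumptions for illustration:

```python
# Schematic only; not a cycle-accurate scheduler model.
class Scheduler:
    def __init__(self, threshold=2):
        self.tracking = {}          # operation name -> age value
        self.blocked_count = 0
        self.threshold = threshold

    def add(self, op):
        self.tracking[op] = 0       # initial age value

    def pick(self, unresolved_deps):
        # every scheduling attempt ages the waiting operations
        for op in self.tracking:
            self.tracking[op] += 1
        oldest = max(self.tracking, key=self.tracking.get)
        if oldest in unresolved_deps:       # dependent: block and reset age
            self.tracking[oldest] = 0
            self.blocked_count += 1
            return None
        del self.tracking[oldest]
        return oldest

    def accepting_new_ops(self):
        # suppress untracked operations once too many picks were blocked
        return self.blocked_count <= self.threshold

sched = Scheduler()
for op in ["op1", "op2"]:
    sched.add(op)
print(sched.pick(unresolved_deps={"op1"}))   # None: op1 blocked, age reset
print(sched.pick(unresolved_deps=set()))     # op2 issues
print(sched.accepting_new_ops())             # True: counter still below threshold
```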

US Pat. No. 10,339,062

METHOD AND SYSTEM FOR WRITING DATA TO AND READING DATA FROM PERSISTENT STORAGE

EMC IP Holding Company LL...

1. A method for managing data stored in a persistent storage, the method comprising:receiving a write request comprising a logical address and a first datum;
storing a table entry corresponding to the logical address in a primary cache entry table;
updating a bitmap entry corresponding to the logical address;
storing the first datum in an external memory, wherein the external memory is operatively connected to the persistent storage;
transmitting a copy of the first datum to the persistent storage;
receiving a write request comprising a second logical address and second datum;
storing a second table entry corresponding to the second logical address in an overflow table;
updating a bitmap entry corresponding to the second logical address;
storing the second datum in the external memory; and
transmitting a copy of the second datum to the persistent storage.
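
A minimal sketch of the claimed write path using plain dicts and a set: the first write's table entry lands in a primary cache entry table, a later write's entry lands in an overflow table, a bitmap entry is updated for each logical address, and the datum is stored in external memory while a copy goes to persistent storage. The claim does not say why an entry overflows; the tiny primary-table capacity below is an assumption made so the second write demonstrates the overflow case:

```python
# Illustrative write path; real implementations would use fixed-size
# hardware tables and a bit-per-address bitmap.
PRIMARY_CAPACITY = 1                 # assumed, so the second write overflows

primary_table, overflow_table = {}, {}
bitmap = set()                       # logical addresses with pending entries
external_memory, persistent = {}, {}

def write(logical_addr, datum):
    table = primary_table if len(primary_table) < PRIMARY_CAPACITY else overflow_table
    table[logical_addr] = {"addr": logical_addr}     # store table entry
    bitmap.add(logical_addr)                         # update bitmap entry
    external_memory[logical_addr] = datum            # store datum in external memory
    persistent[logical_addr] = datum                 # transmit copy to persistent storage

write(0x10, "first datum")
write(0x20, "second datum")
print(sorted(primary_table), sorted(overflow_table), sorted(bitmap))
# [16] [32] [16, 32]
```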

US Pat. No. 10,339,061

CACHING FOR HETEROGENEOUS PROCESSORS

Intel Corporation, Santa...

1. A system comprising:a first processor comprising:
a plurality of cores on a single semiconductor chip, and
a first cache on the single semiconductor chip, the first cache to be shared by two or more of the plurality of cores on the single semiconductor chip;
a data processing device comprising:
a plurality of accelerator devices having a different instruction processing architecture from the plurality of cores, and
a second cache to be shared by the plurality of accelerator devices; and
an interconnect to couple the second cache to the first cache, wherein the interconnect comprises circuitry to perform coherence actions between the first cache and the second cache.

US Pat. No. 10,339,060

OPTIMIZED CACHING AGENT WITH INTEGRATED DIRECTORY CACHE

Intel Corporation, Santa...

1. A system comprising:a plurality of processing units, wherein each processing unit comprises one or more processing cores;
a memory coupled to and shared by the plurality of processing units; and
a cache/home agent (“CHA”) of a first processing unit, the CHA to:
maintain a remote snoop filter (“RSF”) corresponding to the first processing unit to track cache lines, wherein a cache line is tracked by the RSF if the cache line is stored in both the memory and one or more other processing units;
receive a request to access a target cache line from a processing core of the first processing unit;
allocate a tracker entry corresponding to the request, the tracker entry used to track a status of the request;
perform a lookup in the RSF for the target cache line; and
deallocate the tracker entry responsive to a detection that the target cache line is not tracked by the RSF.

US Pat. No. 10,339,058

AUTOMATIC CACHE COHERENCY FOR PAGE TABLE DATA

QUALCOMM Incorporated, S...

1. A method of automatic cache coherency for page table data on a computing device, comprising:modifying, by a first processing device, page table data stored in a first cache associated with the first processing device;
receiving, at a page table coherency unit, a page table cache invalidate signal from the first processing device;
issuing, by the page table coherency unit, a cache maintenance operation command to the first processing device; and
writing, by the first processing device, the modified page table data stored in the first cache to a shared memory accessible by the first processing device and a second processing device associated with a second cache storing the page table data.

US Pat. No. 10,339,057

STREAMING ENGINE WITH FLEXIBLE STREAMING ENGINE TEMPLATE SUPPORTING DIFFERING NUMBER OF NESTED LOOPS WITH CORRESPONDING LOOP COUNTS AND LOOP OFFSETS

TEXAS INSTRUMENTS INCORPO...

1. A digital data processor comprising:an instruction memory to store a plurality of instructions, each of the instructions specifying a data processing operation and at least one data operand;
an instruction decoder connected to the instruction memory to recall the instructions from the instruction memory and to determine, for each recalled instruction, the specified data processing operation and the at least one data operand;
at least one functional unit connected to a data register file and the instruction decoder to perform a data processing operation upon at least one data operand corresponding to an instruction decoded by the instruction decoder and to cause a result of the data processing operation to be stored in the data register file; and
a streaming engine connected to the instruction decoder to, in response to a stream start instruction, recall a data stream from a memory, wherein the data stream includes a sequence of data elements, wherein the sequence of data elements includes a selected number of nested loops, wherein each nested loop has a respective corresponding loop count, and wherein the streaming engine includes:
an address generator to generate stream memory addresses corresponding to the sequence of data elements of the data stream; and
a stream head register to store one or more data elements of the data stream so that the one or more data elements stored in the stream head register are available to be supplied to the at least one functional unit, wherein the at least one functional unit is configured to receive one of the one or more data elements from the stream head register as a data operand in response to a stream operand instruction;
wherein the data elements of the data stream are specified at least in part by a stream definition template stored in a stream definition template register, wherein the stream definition template includes a loop count field to indicate a loop format for the nested loops, wherein a first configuration indicated by the loop count field specifies a first format for the nested loops and a second configuration indicated by the loop count field specifies a second format for the nested loops, wherein, in the first format, the nested loops include a first number of nested loops and, in the second format, the nested loops include a second number of nested loops, the first number being different from the second number; and
wherein the address generator is configured to generate the stream memory addresses corresponding to the first number of nested loops when the loop count field specifies the first format and to generate the stream memory addresses corresponding to the second number of nested loops when the loop count field specifies the second format.

US Pat. No. 10,339,056

SYSTEMS, METHODS AND APPARATUS FOR CACHE TRANSFERS

SANDISK TECHNOLOGIES LLC,...

23. A system, comprising:a first host computing device, comprising:
a first cache manager configured to:
cache data of a particular virtual machine in cache storage of the first host computing device in association with respective cache tags allocated to the particular virtual machine, wherein the respective cache tags are stored outside of a memory space of the particular virtual machine; and
retain the cache data of the particular virtual machine within the cache storage of the first host computing device in response to the particular virtual machine being migrated from the first host computing device; and
a second host computing device, comprising:
a cache provisioner configured to allocate cache storage capacity within a cache storage device of the second host computing device to the particular virtual machine in response to the particular virtual machine being migrated to operate on the second host computing device; and
a second cache manager configured to:
receive the cache tags of the particular virtual machine from the first host computing device;
use the received cache tags to determine that the cache data of the particular virtual machine is being retained at the first host computing device;
access a portion of the cache data of the particular virtual machine retained at the first host computing device by use of the received cache tags; and
populate the cache storage capacity allocated to the particular virtual machine at the second host computing device by transferring the portion of the cache data of the particular virtual machine accessed from the first host computing device to the cache storage capacity of the second host computing device.

US Pat. No. 10,339,055

CACHE SYSTEM WITH MULTIPLE CACHE UNIT STATES

Red Hat, Inc., Raleigh, ...

1. A method comprising:determining, by a processing device, that a hit ratio is below a first hit ratio threshold associated with a first cache unit and above a second hit ratio threshold associated with a second cache unit, wherein the first hit ratio threshold is different from the second hit ratio threshold; and
responsive to determining that the hit ratio is below the first hit ratio threshold associated with the first cache unit and above the second hit ratio threshold associated with the second cache unit, loading a dataset into the first cache unit rather than the second cache unit, wherein the first cache unit and the second cache unit are available to load the dataset.
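
The placement rule in the claim reduces to a two-threshold comparison, sketched below with made-up threshold values; the function name and defaults are illustrative only:

```python
# Minimal sketch of the claimed placement decision.
def choose_cache_unit(hit_ratio, first_threshold=0.8, second_threshold=0.4):
    """Return which cache unit should receive the dataset."""
    if second_threshold < hit_ratio < first_threshold:
        return "first cache unit"
    return "second cache unit"

print(choose_cache_unit(0.6))   # first cache unit
print(choose_cache_unit(0.3))   # second cache unit
```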

US Pat. No. 10,339,054

INSTRUCTION ORDERING FOR IN-PROGRESS OPERATIONS

Cavium, LLC, Santa Clara...

1. An apparatus comprising:one or more modules configured to execute memory instructions that access data stored in physical memory based on virtual addresses translated to physical addresses based on mappings in a page table; and
memory management circuitry coupled to the one or more modules, the memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses;
wherein the memory management circuitry is configured to execute operations from the one or more modules, the executing including selectively ordering each of a plurality of in-progress operations that were in progress within a processor pipeline when a first operation was received by the memory management circuitry,
wherein said selectively ordering is with respect to completing execution within said processor pipeline, and is performed in response to the first operation being received,
wherein the first operation invalidates at least a first virtual address as a result of inserting an instruction into the pipeline within a pre-determined maximum number of cycles after the first operation was received, wherein the pre-determined maximum number of cycles is determined based at least in part on (1) a guaranteed maximum latency and (2) a maximum number of cycles needed for the inserted instruction to propagate through the pipeline, and
wherein a position in said selective ordering of a particular in-progress operation depends on whether or not the particular in-progress operation provides results to at least one of the first cache or second cache.

US Pat. No. 10,339,053

VARIABLE CACHE FLUSHING

Hewlett Packard Enterpris...

1. A method for variable cache flushing, the method comprising:detecting, by a storage controller, a cache flush failure;
in response to the detecting, executing, by the storage controller, a first reattempt of the cache flush after a first time period has elapsed; and
adjusting, by the storage controller, durations of time periods between reattempts of the cache flush subsequent to the first reattempt based at least on a rate of input/output (I/O) errors for a backing medium to which cache lines corresponding to the cache flush are to be written.
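
A hedged sketch of the retry policy described above: after a detected flush failure the controller reattempts the flush, and the delay before later reattempts grows with the backing medium's I/O error rate. The specific scaling formula, delays, and callback shapes are assumptions:

```python
# Illustrative retry loop; the error-rate scaling is an assumed formula.
import time

def retry_flush(flush, base_delay=0.01, max_attempts=5, io_error_rate=lambda: 0.0):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if flush():
            return attempt                       # flush eventually succeeded
        time.sleep(delay)
        # wait longer when the backing medium reports more I/O errors
        delay = base_delay * (1.0 + 10.0 * io_error_rate())
    return None

attempts = iter([False, False, True])            # fail twice, then succeed
print(retry_flush(lambda: next(attempts), io_error_rate=lambda: 0.2))  # 3
```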

US Pat. No. 10,339,051

CONFIGURABLE COMPUTER MEMORY

Hewlett Packard Enterpris...

1. A method for configuring a memory, comprising:accessing, from a BIOS of a computer system, a set of memory settings for each of a set of memory segments of a memory, wherein each set of memory settings comprises a current memory setting that includes information about resiliency, power consumption, and performance of the corresponding memory segment and a set of potential memory settings, each potential memory setting including information about resiliency, power consumption, and performance available with the potential memory setting;
replacing, by an operating system of the computer system, a first current memory setting of a first segment of the memory with a first potential memory setting of a first set of memory settings for the first segment;
booting the computer system after the replacement of the first current memory setting with the first potential memory setting by the operating system;
supporting, by the operating system, a set of virtual machines, the set of virtual machines including a first virtual machine;
storing, by the operating system, the first virtual machine in the first segment responsive to determining that the first potential memory setting of the first segment is configured to support mirroring.

US Pat. No. 10,339,050

APPARATUS INCLUDING A MEMORY CONTROLLER FOR CONTROLLING DIRECT DATA TRANSFER BETWEEN FIRST AND SECOND MEMORY MODULES USING DIRECT TRANSFER COMMANDS

Arm Limited, Cambridge (...

1. An apparatus comprising:a memory controller and a plurality of memory modules;wherein:the memory controller, in order to control direct data transfer, is configured to:
issue a first direct transfer command to a first memory module of the plurality of memory modules, wherein the first direct transfer command comprises information indicating that the first memory module should transmit data bypassing the memory controller; and
issue a second direct transfer command to a second memory module of the plurality of memory modules, wherein the second direct transfer command comprises information indicating that the second memory module should store the data received directly from the first memory module;
the first memory module is configured to:
receive the first direct transfer command from the memory controller; and
directly transmit the data for receipt by the second memory module in dependence on the first direct transfer command; and
the second memory module is configured to:
receive the second direct transfer command from the memory controller;
receive the data from the first memory module directly; and
store the data in dependence on the second direct transfer command.

US Pat. No. 10,339,049

GARBAGE COLLECTION FACILITY GROUPING INFREQUENTLY ACCESSED DATA UNITS IN DESIGNATED TRANSIENT MEMORY AREA

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method of managing memory, the method comprising:executing a memory management process within a computing environment, the executing the memory management process comprising:
establishing an area of a memory as a transient memory area and an area of the memory as a conventional memory area;
tracking, for each data unit in the transient memory area or the conventional memory area, a number of accesses to the data unit, the tracking providing a respective access count for each data unit;
performing garbage collection processing on the memory, the garbage collection processing facilitating consolidation of the data units within the transient memory area or the conventional memory area, and the garbage collection processing comprising:
determining, for each data unit in the transient memory area or the conventional memory area, whether the respective access count is below a transient threshold ascertained to separate frequently accessed data units and infrequently accessed data units; and
grouping data units with respective access counts below the transient threshold together as transient data units within the transient memory area;
repeating the garbage collection processing over multiple garbage collection processing cycles; and
applying, between one garbage collection processing cycle and another garbage collection processing cycle of the multiple garbage collection processing cycles, an adjustment to lower, at least in part, the respective access counts of the data units to facilitate the garbage collection processing on the memory.
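
A schematic of the grouping and decay steps of the claim: data units whose access counts fall below a transient threshold are grouped into the transient area during a garbage-collection cycle, and counts are lowered between cycles. The threshold, decay factor, and unit names are invented for the example:

```python
# Schematic grouping pass; counts and threshold are made up.
access_counts = {"u1": 12, "u2": 1, "u3": 0, "u4": 7}
TRANSIENT_THRESHOLD = 3

def gc_cycle(counts):
    transient = [u for u, c in counts.items() if c < TRANSIENT_THRESHOLD]
    conventional = [u for u in counts if u not in transient]
    return transient, conventional

def decay(counts, factor=0.5):
    # applied between cycles so old activity gradually stops counting
    return {u: int(c * factor) for u, c in counts.items()}

print(gc_cycle(access_counts))            # (['u2', 'u3'], ['u1', 'u4'])
access_counts = decay(access_counts)
print(gc_cycle(access_counts))            # u4 (now 3) stays conventional
```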

US Pat. No. 10,339,048

ENDURANCE ENHANCEMENT SCHEME USING MEMORY RE-EVALUATION

International Business Ma...

1. An apparatus, comprising:non-volatile memory configured to store data; and
a controller and logic integrated with and/or executable by the controller, the logic being configured to:
determine, by the controller, that at least one block of the non-volatile memory and/or portion of a block of the non-volatile memory meets a retirement condition;
re-evaluate, by the controller, the at least one block and/or the portion of a block to determine whether to retire the at least one block and/or the portion of a block;
indicate, by the controller, that the at least one block and/or the portion of a block remains usable when a result of the re-evaluation is not to retire the block; and
indicate, by the controller, that the at least one block and/or the portion of a block is retired when the result of the re-evaluation is to retire the block,
wherein the re-evaluating includes:
assigning the at least one block and/or the portion of a block into a delay queue for at least a dwell time and/or a read delay,
performing one or more erase operations on the at least one block and/or the portion of a block,
writing data to the at least one block and/or the portion of a block,
performing a calibration of the at least one block and/or the portion of a block, and
performing a read sweep on the at least one block and/or the portion of a block,
wherein performing the calibration includes determining an optimal threshold voltage shift value for each of the at least one block and/or the portion of a block.

US Pat. No. 10,339,047

ALLOCATING AND CONFIGURING PERSISTENT MEMORY

Intel Corporation, Santa...

1. An apparatus comprising:memory controller logic, coupled to non-volatile memory (NVM), to configure the NVM into a plurality of partitions at least in part based on one or more attributes,
wherein one or more volumes visible to an application or operating system are to be formed from one or more of the plurality of partitions, wherein each of the one or more volumes is to comprise one or more of the plurality of partitions having at least one similar attribute from the one or more attributes, wherein the NVM is to utilize at least two management partitions, wherein the management partitions are to be accessible prior to the NVM having been mapped into a processor's address space, wherein a first management partition from the at least two management partitions is read or write accessible by a Basic Input/Output System (BIOS), wherein the first management partition is read or write inaccessible by the application or the operating system.

US Pat. No. 10,339,046

DATA MOVING METHOD AND STORAGE CONTROLLER

SHENZHEN EPOSTAR ELECTRON...

1. A data moving method, adapted for controlling a storage device equipped with a flash memory, the storage device being controlled by a storage controller, the flash memory comprising a plurality of dice, the dice comprising a first die corresponding to a first channel and a second die corresponding to a second channel, each of the dice comprising a first plane and a second plane, the method comprising:performing a data moving operation by the storage controller to obtain a valid data from a plurality of source blocks, the valid data comprising a first data, a second data, a third data and a fourth data;
determining whether the valid data is a sequential data by the storage controller;
when the valid data is the sequential data, transmitting a first 2-plane read command by the storage controller to read the first data and the second data respectively from the first plane and the second plane of the first die through the first channel, transmitting a second 2-plane read command to read the third data and the fourth data respectively from the first plane and the second plane of the second die through the second channel, and transmitting the first data, the third data, the second data and the fourth data to a buffer memory in order; and
transmitting a first 2-plane programming command by the storage controller to respectively program the first data and the second data to the first plane and the second plane of a third die among the dice, and transmitting a second 2-plane programming command to respectively program the third data and the fourth data to the first plane and the second plane of a fourth die among the dice.

US Pat. No. 10,339,045

VALID DATA MANAGEMENT METHOD AND STORAGE CONTROLLER

SHENZHEN EPOSTAR ELECTRON...

1. A valid data management method, adapted to a storage device having a rewritable non-volatile memory module, wherein the rewritable non-volatile memory module has a plurality of physical units, each physical unit among the physical units comprises a plurality of physical sub-units, and the method comprises:creating a valid data mark table and a valid logical addresses table corresponding to a target physical unit according to the target physical unit among the physical units, a logical-to-physical table corresponding to the rewritable non-volatile memory module and a target physical-to-logical table corresponding to the target physical unit, wherein the target physical-to-logical table records target logical addresses of a plurality of target logical sub-units mapped to a plurality of target physical sub-units according to an arrangement order of the target physical sub-units of the target physical unit, and the target logical addresses respectively correspond to a plurality of target physical addresses of the target physical sub-units,
wherein the created valid data mark table records a plurality of mark values respectively corresponding to the target logical addresses, wherein each mark value among the mark values is a first bit value or a second bit value, wherein the first bit value is configured to indicate that the corresponding target logical address is valid, and the second bit value is configured to indicate that the corresponding target logical address is invalid,
wherein the created valid logical addresses table only records one or more valid target logical addresses respectively corresponding to one or more said first bit values according to an order of the one or more said first bit values in the valid data mark table, wherein the one or more valid target logical addresses are the target logical addresses determined as valid among the target logical addresses, wherein the valid data mark table is smaller than the valid logical addresses table, and the valid logical addresses table is smaller than the target physical-to-logical table; and
identifying one or more valid data stored in the target physical unit according to the logical-to-physical table, the valid data mark table and the valid logical addresses table corresponding to the target physical unit.
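
The two tables the claim builds can be illustrated with a short construction: a bit-per-position valid data mark table derived from the physical-to-logical and logical-to-physical mappings, and a compact valid logical addresses table holding only the addresses whose mark bit is set, in the same order. The mapping contents below are made up:

```python
# Physical-to-logical table of one target physical unit: index = physical
# sub-unit position, value = logical address stored there (fabricated data).
p2l = [100, 101, 102, 103]

# Logical-to-physical map: a logical address is valid for this unit only if
# it still points back at the corresponding physical sub-unit.
l2p = {100: ("unit0", 0), 101: ("other", 5), 102: ("unit0", 2), 103: ("unit0", 1)}

valid_mark_table = [
    1 if l2p.get(la) == ("unit0", pos) else 0
    for pos, la in enumerate(p2l)
]
valid_logical_addresses = [la for la, bit in zip(p2l, valid_mark_table) if bit]

print(valid_mark_table)          # [1, 0, 1, 0]
print(valid_logical_addresses)   # [100, 102]
```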

US Pat. No. 10,339,044

METHOD AND SYSTEM FOR BLENDING DATA RECLAMATION AND DATA INTEGRITY GARBAGE COLLECTION

SanDisk Technologies LLC,...

1. A method of recycling data in a non-volatile memory system, comprising:determining occurrences of triggering events, wherein the triggering events include:
data reclamation events, urgent data integrity recycling events, and scheduled data integrity recycling events, wherein the data reclamation events include events that each corresponds to the occurrence of one or more host data write operations in accordance with a target reclamation to host write ratio, a respective urgent data integrity recycling event occurs when a respective memory portion of the non-volatile memory system satisfies predefined urgent read disturb criteria, and the scheduled data integrity recycling events include events that occur at a rate corresponding to a projected quantity of memory units for which data integrity recycling is to be performed by the non-volatile memory system over a period of time;
in response to each of a plurality of triggering events, recycling data in a predefined quantity of memory units from a source memory portion to a target memory portion of the non-volatile memory system; and
in response to determining occurrences of a first type of non-urgent recycling events and a second type of the non-urgent recycling events, calculating a hybrid timeout period in accordance with a first timeout period and a second timeout period.

US Pat. No. 10,339,043

SYSTEM AND METHOD TO MATCH VECTORS USING MASK AND COUNT

MoSys, Inc., San Jose, C...

1. An apparatus for calculating an index into a main memory, the apparatus comprising:an index-generating logic coupleable to the main memory and having a plurality of inputs and an output;
a local memory for storing a plurality of population counts of compressed data that is stored in the main memory, the local memory selectively coupled to the index-generating logic in order to selectively provide at least a portion of the plurality of population counts to the index-generating logic; and
a register coupled to provide the index-generating logic a plurality of multi-bit strides (MBSs) of a prefix string; and wherein:
the index-generating logic generates a composite index on its output to a data location in the main memory; and
the data location in the main memory is a longest prefix match (LPM) for the prefix string and any data associated with the LPM.
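
The mask-and-count idea is in the same family as popcount-indexed compressed arrays, so the sketch below shows that generic technique rather than the patent's exact logic: a bitmap marks which stride values have entries, and the index into the compressed data is the population count of the set bits below the stride value:

```python
# Generic popcount-indexed lookup; bitmap and entries are illustrative.
def popcount_index(bitmap: int, stride_value: int):
    """Return the compressed-array index for stride_value, or None."""
    if not (bitmap >> stride_value) & 1:
        return None
    below = bitmap & ((1 << stride_value) - 1)
    return bin(below).count("1")              # population count

bitmap = 0b10110            # entries exist for stride values 1, 2, 4
compressed = ["entry-for-1", "entry-for-2", "entry-for-4"]

for v in (1, 2, 3, 4):
    i = popcount_index(bitmap, v)
    print(v, compressed[i] if i is not None else "no match")
```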

US Pat. No. 10,339,042

MEMORY DEVICE INCLUDING COLUMN REDUNDANCY

Samsung Electronics Co., ...

1. A memory device comprising:a memory cell array comprising a plurality of mats connected to a word line and a plurality of bit lines; and
a column decoder comprising a first repair circuit in which a first repair column address is stored, and a second repair circuit in which a second repair column address is stored,
wherein when the first repair column address coincides with a received column address in a read command or a write command, the column decoder is configured to select other bit lines from among the plurality of bit lines instead of bit lines from among the plurality of bit lines corresponding to the received column address in one mat among the plurality of mats, and
wherein when the second repair column address coincides with the received column address, the column decoder is configured to select other bit lines from among the plurality of bit lines instead of the bit lines corresponding to the received column address in the plurality of mats.

US Pat. No. 10,339,041

SHARED MEMORY ARCHITECTURE FOR A NEURAL SIMULATOR

QUALCOMM Incorporated, S...

1. A computer-implemented method for allocating memory in an artificial nervous system simulator implemented in hardware, comprising:performing a simulation of a plurality of artificial neurons of an artificial nervous system;
determining memory resource requirements of each of the artificial neurons of the artificial nervous system being simulated based on at least one of a state or a type of the artificial neuron being simulated;
dynamically allocating portions of a shared memory pool to the artificial neurons based on the determination of memory resource requirements as the memory resource requirements change during the simulation, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients; and
accessing, for each artificial neuron, the portion of the shared memory pool allocated to the artificial neuron during the simulation via the write clients and the read clients.

US Pat. No. 10,339,040

CORE DATA SERVICES TEST DOUBLE FRAMEWORK AUTOMATION TOOL

SAP SE, Walldorf (DE)

1. A computer-implemented method for evaluating integrity of data models, comprising:selecting a package comprising a semantic and reusable data model expressed in data definition language;
selecting a class to create a plurality of local test classes;
generating, based on a class name and a package name, a plurality of local test class templates for the package; and
determining an integrity of the data model by comparing an actual result for the data model and an expected result for the data model.

US Pat. No. 10,339,039

VIRTUAL SERVICE INTERFACE

CA, Inc., Islandia, NY (...

1. A method comprising:identifying a virtualization request to initiate a virtualized transaction involving a first software component and a virtual service simulating a second software component;
determining a location of a reference, within the first software component, to the second software component based on a type of the first software component, wherein the location of the reference comprises a particular file associated with the first software component, and the first software component is to use the location of the reference to determine a first network location of the second software component and communicate with the second software component based on the first network location;
determining a second network location of a system to host the virtual service; and
changing the reference within the particular file, using a plug-in installed on the first software component, to direct communications of the first software component to the second network location instead of the first network location responsive to the virtualization request, wherein the virtualized transaction is to comprise a request sent from the first software component to the virtual service and a synthetic response generated by the virtual service to model a real response by the second software component to the request.

US Pat. No. 10,339,038

METHOD AND SYSTEM FOR GENERATING PRODUCTION DATA PATTERN DRIVEN TEST DATA

JPMORGAN CHASE BANK, N.A....

1. A computer implemented system that implements a test data tool that generates test data based on production data patterns, the test data tool comprising:a data input that interfaces with one or more production environments;
an output interface that transmits test data to one or more user acceptance testing (UAT) environments;
a communication network that receives production data from the one or more production environments and transmits test data to the one or more UAT environments; and
a computer server comprising at least one processor, coupled to the data input, the output interface and the communication network, the processor configured to:
receive, via the data input, production data from the one or more production environments, the production data comprises personally identifiable information;
identify a plurality of attributes from the production data;
for each attribute, identify one or more data patterns;
generate one or more rules that define the one or more data patterns for each attribute;
generate a configuration file based on the one or more rules;
apply the configuration file to generate test data in a manner that obscures personally identifiable information existing in the production data; and
transmit the test data to a UAT environment.

US Pat. No. 10,339,037

RECOMMENDATION ENGINE FOR RECOMMENDING PRIORITIZED PERFORMANCE TEST WORKLOADS BASED ON RELEASE RISK PROFILES

INTUIT INC., Mountain Vi...

1. A method for recommending prioritized performance test workloads, the method comprising:retrieving, by a processor, a baseline workload and variability coverage matrix associated with an aspect of a software release, wherein the variability coverage matrix identifies variations of the baseline workload in the software release;
retrieving, by the processor, from one or more external resources, information about the software release based on keywords associated with a baseline test workload for a software release;
creating, by the processor, a risk profile for the software release based, at least in part, on a number of matches to each of the keywords in the retrieved information, wherein the risk profile includes weightings to apply to the baseline workload for each variation of a workload in the variability coverage matrix, and wherein each of the keywords has a corresponding weight, and creating the risk profile comprises adjusting a weighting associated with each keyword in the baseline test workload by the corresponding weight for each instance of the keyword in the retrieved information;
generating, by the processor, a prioritized test workload for execution over one or more prioritized variability dimensions based on the risk profile and the baseline test workload, wherein generating the prioritized test workload comprises adjusting, for a variability dimension in the variability coverage matrix, a distribution of tests to execute for each value defined for the variability dimension; and
executing, by the processor, a test of the software release based on the prioritized test workload.
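
A hedged sketch of the weighting step: keyword matches in retrieved release information bump per-keyword weights into a risk profile, and the profile then skews how a fixed test budget is distributed across one variability dimension. The keywords, weights, and budget are invented for illustration:

```python
# Illustrative risk-profile weighting; all inputs are fabricated.
baseline_keywords = {"login": 1.0, "payments": 1.0, "search": 1.0}
keyword_weight = {"login": 0.5, "payments": 2.0, "search": 1.0}
release_notes = "payments refactor; payments retries; login UI tweak"

# Risk profile: adjust each keyword's weighting once per match in the notes.
risk_profile = {
    k: baseline_keywords[k] + keyword_weight[k] * release_notes.count(k)
    for k in baseline_keywords
}

# Prioritized distribution of a fixed test budget over one variability dimension.
total = sum(risk_profile.values())
test_budget = 100
distribution = {k: round(test_budget * w / total) for k, w in risk_profile.items()}
print(risk_profile)      # payments dominates after two matches
print(distribution)
```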

US Pat. No. 10,339,036

TEST AUTOMATION USING MULTIPLE PROGRAMMING LANGUAGES

Accenture Global Solution...

1. A device, comprising:one or more memories; and
one or more processors, communicatively coupled to the one or more memories, to:
receive information identifying a set of steps to perform,
the set of steps being related to a test of a program,
one or more steps, of the set of steps, being written in a first programming language;
determine whether the set of steps is associated with a first artifact that is similar to a second artifact associated with another set of steps based on the information identifying the set of steps,
the first artifact identifying information related to the test of the program and the second artifact identifying information related to another test of another program;
determine whether two or more steps, of the set of steps, can be combined into a combined set of steps based on determining whether the set of steps is associated with the first artifact that is similar to the second artifact;
identify program code written in a second programming language based on determining whether the two or more steps, of the set of steps, can be combined into the combined set of steps,
the second programming language being different from the first programming language; and
perform an action related to the test of the program based on identifying the program code.

US Pat. No. 10,339,035

TEST DB DATA GENERATION APPARATUS

HITACHI, LTD., Tokyo (JP...

1. A test DB data generation apparatus for generating a database for testing, which approximates an existing database having a plurality of tables, each table having a plurality of corresponding columns that take on a corresponding plurality of values, the test DB data generation apparatus comprising:a column distribution extraction module extracting distribution information of values of each column of the existing database, wherein the column distribution information indicates, for one or more columns in a corresponding table, a frequency distribution of a range of the plurality of values taken on by said one or more columns in the existing database;
a column dependency extraction module extracting column dependency information of the existing database, wherein the column dependency information indicates a probability that, when a first column in a first table takes on a source value that falls within a range of values, a second column in a second table takes on a target value;
a data generation module generating test DB data based on the distribution information and the column dependency information, wherein the test DB data indicates, for the first table in the existing database, that when a first column in the first table takes on a first value, a second column in the first table takes on a second value in a proportion that corresponds to the probability indicated by the column dependency information; and
a column dependency degree calculation module measuring a degree of dependency between columns of the existing database,
wherein the column dependency extraction module is configured to:
group pieces of data with a rule for each column of the existing database;
replace the pieces of data with group names of respective groups obtained by the grouping; and
calculate a degree of co-occurrence of pieces of data for a combination of two columns, and
wherein the data generation module is configured to determine whether or not to generate test DB data by using the column dependency information for each column based on the degree of dependency between columns calculated by the column dependency degree calculation module.
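
A simplified generator in the spirit of the claim: sample the first column from its extracted frequency distribution, then choose the dependent column from the extracted conditional probabilities, but only when the measured dependency degree clears a threshold. All distributions, probabilities, and thresholds below are fabricated for the example:

```python
# Toy test-DB row generator; inputs are made-up extraction results.
import random

column_distribution = {"A": 0.7, "B": 0.3}                 # values of column 1
column_dependency = {"A": {"X": 0.9, "Y": 0.1},            # P(column 2 | column 1)
                     "B": {"X": 0.2, "Y": 0.8}}
dependency_degree = 0.85
DEPENDENCY_THRESHOLD = 0.5

def generate_row():
    c1 = random.choices(list(column_distribution), list(column_distribution.values()))[0]
    if dependency_degree >= DEPENDENCY_THRESHOLD:
        cond = column_dependency[c1]
        c2 = random.choices(list(cond), list(cond.values()))[0]
    else:
        c2 = random.choice(["X", "Y"])       # ignore weak dependencies
    return c1, c2

random.seed(0)
print([generate_row() for _ in range(5)])
```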

US Pat. No. 10,339,034

DYNAMICALLY GENERATED DEVICE TEST POOL FOR STAGED ROLLOUTS OF SOFTWARE APPLICATIONS

Google LLC, Mountain Vie...

1. A method comprising:receiving, by a computing system that includes an application repository, an updated version of an executable application;
determining, by the computing system, based at least in part on one or more characteristics of a particular computing device and one or more characteristics of a group of computing devices that excludes the particular computing device,
whether the particular computing device contributes additional test scope for the updated version of the executable application beyond existing test scope for the updated version of the executable application that is contributed by the group of computing devices; and
responsive to determining that the particular computing device contributes additional test scope for the updated version of the executable application beyond the existing test scope for the updated version of the executable application that is contributed by the group of computing devices:
adding, by the computing system, the particular computing device to the group of computing devices; and
sending, by the computing system, the updated version of the executable application to the particular computing device for installation at the particular computing device.
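
A minimal sketch of the test-scope check, assuming device characteristics can be compared as simple key-value sets (names are illustrative, not Google's implementation):

```python
# A device contributes additional test scope if its characteristic set is not
# already covered by the characteristics of the devices in the test group.
def characteristics(device):
    return frozenset(device.items())

def adds_test_scope(candidate, group):
    covered = {characteristics(d) for d in group}
    return characteristics(candidate) not in covered

group = [{"os": "11", "abi": "arm64", "screen": "1080p"}]
candidate = {"os": "12", "abi": "arm64", "screen": "1080p"}

if adds_test_scope(candidate, group):
    group.append(candidate)          # add the device to the test pool
    # send_update(candidate)         # then push the updated application to it
print(len(group))  # -> 2
```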

US Pat. No. 10,339,033

RUNTIME DETECTION OF UNINITIALIZED VARIABLE ACROSS FUNCTIONS

International Business Ma...

1. A computer implemented method for detecting uninitialized variables, the computer implemented method comprising:running a first function, wherein the first function comprises a local variable and a first flag associated with the local variable for indicating an initialization state of the local variable;
calling a second function from the first function, with the local variable as a parameter of the second function, wherein the second function comprises a second flag associated with the parameter for indicating an initialization state of the parameter;
in response to the local variable not indicating the initialization state of the parameter, providing a global variable to the second function as a second parameter, wherein the global variable indicates the availability state of the second flag to the first function;
in response to the second flag to the first function determined as available, returning the second flag from the second function to the first function; and
updating the first flag based at least on the second flag and the global variable being available, wherein the global variable is associated with the second flag returned to the first function from the second function.

US Pat. No. 10,339,032

SYSTEM FOR MONITORING AND REPORTING PERFORMANCE AND CORRECTNESS ISSUES ACROSS DESIGN, COMPILE AND RUNTIME

Microsoft Technology Lice...

1. A method performed on a computing device that includes at least one processor and memory, the method comprising:receiving, by the computing device from a development tool, at least one event configured to identify a design-time, a compile-time, and a run-time issue associated with code;
determining, from a plurality of categories, a category of the at least one event, wherein the plurality of categories comprise: best practices, application performance, accessibility, localization, or any other aspect of application development or operation;
mapping, by the computing device based on mapping information in a mapping store, the category of the at least one event to at least one rule of a plurality of rules in a rule store associated with development code;
identifying, by the computing device based at least on the mapping, the at least one rule of the plurality of rules in the rule store associated with the development of the code, wherein each rule of the plurality of rules in the rule store includes an identifier that uniquely identifies the each rule from each other of the plurality of rules;
evaluating, by the computing device based on the at least one identified rule, the at least one received event according to the at least one rule resulting in identification of a development issue associated with the code;
generating, by the computing device based on said evaluating, a rule output that identifies a cause of the development issue associated with the code;
displaying, by the computing device in a user interface host based at least on the evaluating, the identification of the development issue associated with the code and a proposed solution for the development issue associated with the code; and
modifying, by the computing device based on the rule output, the code according to the proposed solution for the development issue associated with the code.

US Pat. No. 10,339,030

DUPLICATE BUG REPORT DETECTION USING MACHINE LEARNING ALGORITHMS AND AUTOMATED FEEDBACK INCORPORATION

Oracle International Corp...

1. A method comprising:for each particular set of bug reports, in a first plurality of sets of bug reports, identifying:
(a) a user-classification of the particular set of bug reports as including duplicate bug reports or non-duplicate bug reports;
(b) a first plurality of correlation values, each of which corresponds to a respective feature, of a plurality of features, between bug reports in the particular set of bug reports;
based on (a) and (b), for the first plurality of sets of bug reports, generating a model to identify any set of bug reports as including duplicate bug reports or non-duplicate bug reports;
receiving a request to determine whether a particular bug report is a duplicate of any of a second plurality of bug reports;
identifying a first category associated with the particular bug report;
identifying a first subset of bug reports, of the second plurality of bug reports, associated with the first category;
identifying a second subset of bug reports, of the second plurality of bug reports, that have been previously identified as a duplicate of at least one bug report of the first subset of bug reports;
identifying a set of candidate bug reports that:
(a) includes one or more of the first subset of bug reports;
(b) includes one or more of the second subset of bug reports; and
(c) does not include a third subset of bug reports, of the second plurality of bug reports, that (i) are not associated with the first category and (ii) have not been previously identified as a duplicate of any bug report of the first subset of bug reports;
applying the model to obtain a classification of the particular bug report and a candidate bug report, of the set of candidate bug reports, as duplicate bug reports or non-duplicate bug reports, and refraining from applying the model to classify the particular bug report and any of the third subset of bug reports as duplicate bug reports or non-duplicate bug reports.
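
The pipeline shape of this claim can be sketched as follows; the feature functions, the nearest-centroid stand-in for the learned model, and all field names are assumptions, not Oracle's algorithm.

```python
# Pairwise feature values for labelled report pairs train a tiny classifier,
# and only candidates sharing the new report's category are ever scored.
from difflib import SequenceMatcher

def features(a, b):
    """Per-feature similarity values between two bug reports."""
    return [SequenceMatcher(None, a["title"], b["title"]).ratio(),
            1.0 if a["component"] == b["component"] else 0.0]

def train(labelled_pairs):
    """Nearest-centroid stand-in for the learned model."""
    dup = [features(a, b) for a, b, y in labelled_pairs if y]
    non = [features(a, b) for a, b, y in labelled_pairs if not y]
    mean = lambda vs: [sum(col) / len(vs) for col in zip(*vs)]
    return mean(dup), mean(non)

def is_duplicate(model, a, b):
    dup_c, non_c = model
    f = features(a, b)
    dist = lambda c: sum((x - y) ** 2 for x, y in zip(f, c))
    return dist(dup_c) < dist(non_c)

history = [({"title": "crash on save", "component": "ui"},
            {"title": "crash when saving", "component": "ui"}, True),
           ({"title": "crash on save", "component": "ui"},
            {"title": "slow startup", "component": "core"}, False)]
model = train(history)

new = {"title": "app crashes on save", "component": "ui"}
candidates = [r for r, _, _ in history if r["component"] == new["component"]]
print([is_duplicate(model, new, c) for c in candidates])
```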

US Pat. No. 10,339,029

AUTOMATICALLY DETECTING INTERNATIONALIZATION (I18N) ISSUES IN SOURCE CODE AS PART OF STATIC SOURCE CODE ANALYSIS

CA, Inc., New York, NY (...

1. A method, comprising:installing a plug-in component in a stand-alone static source code analysis program/application, wherein the plug-in component contains a plurality of sets of internationalization rules, wherein each respective set is configured to enable the detection of internationalization issues in source code of a particular programming language type that is different from respective programming language types corresponding to other sets;
automatically creating a repository comprising the plurality of sets of internationalization rules during the installation of the plug-in;
accessing a first set of the plurality of sets of internationalization rules;
creating a first quality profile for a first programming language type using the first set of the plurality of sets of internationalization rules, corresponding to the first programming language type;
accessing a second set of the plurality of sets of internationalization rules;
creating a second quality profile for a second programming language type using the second set of the plurality of sets of internationalization rules, corresponding to the second programming language type;
scanning source code of a software product for potential issues, wherein scanning source code comprises scanning at block level and searching code by comparing each block to a rule in a quality profile, wherein the quality profile used is the first quality profile when the source code is written in the first programming language type or the second quality profile if the source code is written in the second programming language type;
identifying detected internationalization issues in the source code when a block of code matches or meets a rule in the quality profile;
formatting for display the detected internationalization issues; and
suggesting a solution to fix the detected internationalization issues.

US Pat. No. 10,339,028

LOG STORAGE VIA APPLICATION PRIORITY LEVEL

FUJITSU LIMITED, Kawasak...

1. An information processing device comprising:a memory; and
a processor coupled to the memory and the processor configured to:
determine, in accordance with a first depth corresponding to a first condition associated with an application in a hierarchical structure, a priority level of the application that provides a service based on the first condition included in a plurality of conditions, each condition of the plurality of conditions corresponding to each depth in the hierarchical structure,
perform, in accordance with the priority level of the application, determination whether the application is a collection target, and
when the application is the collection target, collect a log of the application from a terminal which has downloaded the application.

US Pat. No. 10,339,027

AUTOMATION IDENTIFICATION DIAGNOSTIC TOOL

Accenture Global Solution...

1. A system, comprising:a database interface configured to communicate with a database library storing a set of automation rules;
a communication interface configured to communicate with a computing device;
a processor configured to communicate with the database interface and the communication interface, the processor further configured to:
receive, through the communication interface, a recording request to commence recording of actions interacting with a program running on the computing device;
in response to receiving the recording request, record a recording session capturing the actions interacting with the program running on the computing device;
detect an actionable input to the computing device during recording of the recording session;
capture a screenshot of the actionable input in response to detecting the actionable input;
receive, through the communication interface, a stop recording request to stop recording of the recording session;
in response to receiving the stop recording request, stop recording of the recording session;
compare the recording session to a predetermined automation list;
determine the actionable input is one of an automatable process or a potentially automatable process based on the comparison;
generate a workflow diagram describing the recording session; and
generate an analysis report graphical user interface (GUI) based on the workflow diagram, wherein the analysis report GUI includes the actionable input, wherein the actionable input is tagged in the analysis report GUI as being one of the automatable process or the potentially automatable process.

US Pat. No. 10,339,026

TECHNOLOGIES FOR PREDICTIVE MONITORING OF A CHARACTERISTIC OF A SYSTEM

Intel Corporation, Santa...

1. A predictive sensor module to monitor a characteristic of a monitored system, the predictive sensor module comprising:a primary sensor to produce primary sensor data indicative of a primary characteristic of the monitored system;
one or more secondary sensors to produce secondary sensor data indicative of a secondary characteristic, different from the primary characteristic, of the monitored system; and
a sensor controller to (i) determine a measured value of the primary characteristic based on the primary sensor data, (ii) determine a measured value of the secondary characteristic based on the secondary sensor data, (iii) predict a predicted value of the primary characteristic using a predictive model with the measured value of the secondary characteristic as an input to the predictive model, and (iv) determine whether to update the predictive model based on whether a difference between the measured value and the predicted value of the primary characteristic exceeds a threshold.
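
A minimal sketch of the predict-and-compare loop, assuming a simple linear predictive model and hypothetical sensor values:

```python
# Predict the primary characteristic from a secondary sensor with a simple
# linear model and flag the model for an update when measurement and
# prediction diverge beyond a threshold.
class PredictiveSensor:
    def __init__(self, slope, intercept, threshold):
        self.slope, self.intercept, self.threshold = slope, intercept, threshold

    def predict_primary(self, secondary_value):
        return self.slope * secondary_value + self.intercept

    def needs_model_update(self, primary_measured, secondary_measured):
        predicted = self.predict_primary(secondary_measured)
        return abs(primary_measured - predicted) > self.threshold

# e.g. predict die temperature (primary) from fan speed (secondary)
sensor = PredictiveSensor(slope=-0.01, intercept=95.0, threshold=5.0)
print(sensor.needs_model_update(primary_measured=88.0, secondary_measured=1000.0))
```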

US Pat. No. 10,339,025

STATUS MONITORING SYSTEM AND METHOD

EMC IP Holding Company LL...

1. A signal generation subsystem configured to:receive a plurality of binary status signals from a plurality of monitored subcomponents within a system being monitored, wherein each binary status signal includes a warning of an upcoming change in power output of each monitored subcomponent; and
generate a cumulatively-encoded status signal based, at least in part, upon the plurality of binary status signals, which is indicative of the overall health of the system being monitored, wherein an amplitude of the cumulatively-encoded status signal indicates a number of monitored subcomponents with the warning of an upcoming change in power output, and the cumulatively-encoded status signal is configured to control the power demand of one or more controlled subcomponents based, at least in part, upon the amplitude of the cumulatively-encoded status signal.
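
A minimal sketch of the cumulative encoding, assuming an arbitrary amplitude step per asserted warning:

```python
# The encoded signal's amplitude is proportional to how many monitored
# subcomponents are asserting the warning of an upcoming power change.
VOLTS_PER_WARNING = 0.5  # assumed step size per asserted warning

def encode_status(binary_status_signals):
    """Amplitude encodes the count of subcomponents warning of a power change."""
    return sum(binary_status_signals) * VOLTS_PER_WARNING

print(encode_status([1, 0, 1, 1]))  # 3 warnings -> 1.5 (arbitrary units)
```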

US Pat. No. 10,339,024

PASSIVE DEVICE DETECTION

Microsoft Technology Lice...

1. A system comprising:memory;
a processor;
a passive device identifier stored in the memory and executable by the processor to:
increment a current supplied to a passive electronic device at discrete intervals;
sample a voltage of the passive electronic device at each one of the discrete intervals to generate a dataset of current-voltage pairs; and
identify the passive electronic device based on the generated dataset.
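
A minimal sketch of the current-sweep identification, with a simulated device in place of real hardware and an assumed table of known resistances:

```python
# Step the supplied current, sample the voltage at each step, and identify the
# passive device by its fitted slope (resistance for a resistor-like device).
def sweep(measure_voltage, currents):
    return [(i, measure_voltage(i)) for i in currents]

def fit_resistance(iv_pairs):
    """Least-squares slope of V against I through the origin."""
    num = sum(i * v for i, v in iv_pairs)
    den = sum(i * i for i, _ in iv_pairs)
    return num / den

def identify(resistance, known_devices, tolerance=0.05):
    for name, nominal in known_devices.items():
        if abs(resistance - nominal) / nominal <= tolerance:
            return name
    return "unknown"

known = {"headset": 32.0, "line-in adapter": 470.0}
measured = sweep(lambda i: i * 32.2, [0.001 * k for k in range(1, 11)])
print(identify(fit_resistance(measured), known))  # -> "headset"
```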

US Pat. No. 10,339,023

CACHE-AWARE ADAPTIVE THREAD SCHEDULING AND MIGRATION

Intel Corporation, Santa...

1. A processor comprising:a plurality of cores each to independently execute instructions, the plurality of cores included on a single die of the processor;
a shared cache memory coupled to the plurality of cores, the shared cache memory having a plurality of cache portions, wherein the shared cache memory is a single memory structure included on the single die of the processor, wherein a first cache portion of the shared cache memory is associated with a first core of the plurality of cores, wherein a second cache portion of the shared cache memory is associated with a second core of the plurality of cores;
a plurality of cache activity monitors each associated with one of the plurality of cache portions of the shared cache memory, wherein each cache activity monitor is to monitor a cache miss rate of an associated cache portion and to output cache miss rate information;
a plurality of thermal sensors each associated with one of the plurality of cache portions and to output thermal information including a temperature of the corresponding cache portion; and
a logic coupled to the plurality of cores to:
receive the cache miss rate information from the plurality of cache activity monitors and the thermal information, and
in response to a determination that the cache miss rate of the first cache portion of the shared cache memory exceeds a cache miss threshold stored in the processor, migrate a first thread from the first core associated with the first cache portion to the second core associated with the second cache portion of the shared cache memory.
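
A minimal sketch of one possible migration policy consistent with this claim (illustrative only, not Intel's logic): migrate when the current core's cache portion misses above the stored threshold, preferring the coolest alternative core.

```python
# Decide where a thread should run from per-portion miss rates and temperatures.
def pick_core(miss_rates, temperatures, current_core, miss_threshold):
    """Return the core the thread should run on after this evaluation."""
    if miss_rates[current_core] <= miss_threshold:
        return current_core                      # no migration needed
    candidates = [c for c in miss_rates if c != current_core]
    return min(candidates, key=lambda c: temperatures[c])

miss_rates = {"core0": 0.42, "core1": 0.10}      # per associated cache portion
temperatures = {"core0": 78.0, "core1": 61.0}    # degrees C
print(pick_core(miss_rates, temperatures, "core0", miss_threshold=0.25))  # core1
```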

US Pat. No. 10,339,021

METHOD AND APPARATUS FOR OPERATING HYBRID STORAGE DEVICES

EMC IP Holding Company LL...

1. A method for operating a hybrid storage device, the hybrid storage device including a storage device of a first type and a storage device of a second type different from the first type, the method comprising:synchronously writing data into the storage device of the first type and the storage device of the second type, wherein the hybrid storage device further includes a volatile memory;
in response to a failure of the synchronous writing, transmitting, by the volatile memory, information indicating a success of writing the data to a host;
rewriting the data in the storage device of the first type;
in response to the failure of the synchronous writing, updating metadata in the storage device of the first type;
writing the data in the storage device of the first type using the data written in the storage device of the second type; and
updating again the metadata in the storage device of the first type.

US Pat. No. 10,339,020

OBJECT STORAGE SYSTEM, CONTROLLER AND STORAGE MEDIUM

Toshiba Memory Corporatio...

1. An object storage system configured to store a key and a value in association with each other, the object storage system comprising:a first storage region in which the value is stored;
a second storage region in which first information and second information are stored, the first information being used for managing an association between the key and a storage position of the value, the second information being used for managing a position of a defective storage area in the first storage region; and
a controller configured to control the first storage region and the second storage region, wherein the controller comprises a write processor configured to:
determine whether there is a defective storage area in a storage area reserved in the first storage region as a write area for a write value or not based on the second information;
execute, when determining that there is a defective storage area, write processing of writing the write value in the first storage region by arranging the write value for an area other than the defective storage area in the storage area reserved in the first storage region to avoid the defective storage area; and
execute, when determining that there is no defective storage area, write processing of writing the write value in the first storage region by arranging the write value for the entire storage area reserved in the first storage region.

US Pat. No. 10,339,019

PACKET CAPTURING SYSTEM, PACKET CAPTURING APPARATUS AND METHOD

FUJITSU LIMITED, Kawasak...

13. A method of capturing a plurality of packets through a network, the method comprising:storing, by a capturing apparatus coupled to the network, into a storage device, a first mirror packet which is generated by mirroring a first packet transmitted in the network;
determining whether another capturing apparatus is in an operation state or a non-operation state, the another capturing apparatus being coupled to the network and storing the first mirror packet into another storage device while the another capturing apparatus is in the operation state;
deleting by the capturing apparatus, when the capturing apparatus determines the another capturing apparatus is in the operation state, the first mirror packet stored in the storage device; and
storing into the another storage device, when the capturing apparatus determines the another capturing apparatus is in the non-operation state, a second mirror packet generated by mirroring a second packet transmitted in the network, while maintaining the first mirror packet stored in the storage device.

US Pat. No. 10,339,018

REDUNDANCY DEVICE, REDUNDANCY SYSTEM, AND REDUNDANCY METHOD

Yokogawa Electric Corpora...

1. A redundancy device which is configured to communicate with a redundancy opposite device and perform a redundancy execution, the redundancy device comprising:receivers configured to receive individually HB signals transmitted from the redundancy opposite device;
a calculator configured to calculate a number of normal communication paths among communication paths of the HB signals based on a reception result of the receivers;
a comparator configured to compare a calculation result of the calculator with a predetermined threshold value; and
a changer configured to change the redundancy device from a standby state to an operating state, or change the redundancy device from the standby state to a not-standby state in which the redundancy execution is released, based on the calculation result of the calculator and a comparison result of the comparator.

US Pat. No. 10,339,015

MAINTAINING SYSTEM RELIABILITY IN A CPU WITH CO-PROCESSORS

INTERNATIONAL BUSINESS MA...

1. A computer program product for maintaining reliability of a computer having a Central Processing Unit (CPU) and multiple co-processors, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising:launching a same set of operations in each of an original co-processor and a redundant co-processor, from among the multiple co-processors, to obtain respective execution signatures from the original co-processor and the redundant co-processor;
detecting an error in an execution of the set of operations by the original co-processor, by comparing the respective execution signatures;
designating the execution of the set of operations by the original co-processor as error-free and committing a result of the execution, responsive to identifying a match between the respective execution signatures; and
performing an error recovery operation that replays the set of operations by the original co-processor and the redundant co-processor, responsive to identifying a mismatch between the respective execution signatures.

US Pat. No. 10,339,014

QUERY OPTIMIZED DISTRIBUTED LEDGER SYSTEM

McAfee, LLC, Santa Clara...

1. A method for indexing a distributed ledger, the method comprising:receiving, with a hardware processor of a data node, a first snapshot of transaction data, wherein the first snapshot of transaction data is data added to the distributed ledger that has not been included in an original master table or an original index of the distributed ledger;
identifying, with a hardware processor of a data node, attributes of the transaction data of the first snapshot;
verifying, with the hardware processor, the first snapshot;
copying, with a hardware processor of a data node, the attributes of the transaction data of the first snapshot to a first master table;
constructing, with a hardware processor of a data node, a first index for a first attribute of the transaction data of the first snapshot;
publishing, with a hardware processor of a data node, completion of the first index for the first attribute of the transaction data of the first snapshot;
concatenating the original master table and the first master table;
concatenating the original index and the first index;
receiving a request to query the distributed ledger transaction data; and
processing the query on the indexed attributes.
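
A minimal sketch of the delta master table and delta index being concatenated with the originals (data structures and field names are assumptions):

```python
# Copy the attributes of a new snapshot of ledger transactions into a delta
# master table, build a delta index for one attribute, then concatenate both
# with the originals so queries can run over the indexed attribute.
from collections import defaultdict

def build_index(master_table, attribute):
    index = defaultdict(list)
    for row_id, row in enumerate(master_table):
        index[row[attribute]].append(row_id)
    return index

original_master = [{"sender": "a", "amount": 5}, {"sender": "b", "amount": 7}]
original_index = build_index(original_master, "sender")

snapshot = [{"sender": "a", "amount": 9}]                 # new, unindexed data
first_master = [dict(tx) for tx in snapshot]              # copy attributes
first_index = build_index(first_master, "sender")

# Concatenate master tables; re-base the delta index before merging.
combined_master = original_master + first_master
combined_index = defaultdict(list, {k: list(v) for k, v in original_index.items()})
offset = len(original_master)
for key, row_ids in first_index.items():
    combined_index[key].extend(r + offset for r in row_ids)

# Query over the indexed attribute.
print([combined_master[r] for r in combined_index["a"]])
```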

US Pat. No. 10,339,013

ACCELERATED RECOVERY AFTER A DATA DISASTER

International Business Ma...

1. A system for restoring data from a first system onto a second system comprising:at least one processor configured to:
receive via a network interface at the second system, a metadata file from the first system, and initialize a database on the second system based on the metadata file;
receive an image from the first system, via the network interface of the second system, and initiate restoration of the image on the second system, wherein the image includes information of the first system to be restored on the second system;
receive via the network interface and at the initialized database on the second system, during the restoration of the image on the second system, one or more log files from the first system indicating transactions performed on the first system during the restoration of the image on the second system and relating to the information to be restored; and
perform the transactions of the log files to synchronize the restored data on the second system with the first system, in response to completion of the restoration.

US Pat. No. 10,339,012

FAULT TOLERANT APPLICATION STORAGE VOLUMES FOR ENSURING APPLICATION AVAILABILITY AND PREVENTING DATA LOSS USING SUSPEND-RESUME TECHNIQUES

VMware, Inc., Palo Alto,...

1. A method for fault tolerant delivery of an application to a virtual machine (VM) being executed by a server in a remote desktop environment using application storage volumes, comprising:delivering the application to the VM by attaching a primary application storage volume (ASV) containing components of the application to the VM;
cloning the primary ASV to create a backup ASV;
executing the application on the VM from the primary ASV;
monitoring the primary ASV to detect failures;
detecting a failure of the primary ASV;
in response to the detecting the failure of the primary ASV, suspending execution of the application;
attaching the backup ASV to the VM; and
resuming the execution of the application from the backup ASV by redirecting operating system calls accessing the application to the backup ASV.

US Pat. No. 10,339,011

METHOD AND SYSTEM FOR IMPLEMENTING DATA LOSSLESS SYNTHETIC FULL BACKUPS

EMC IP Holding Company LL...

1. A method for archiving data, comprising:selecting a virtual machine (VM) executing on a first computing system;
identifying at least one virtual disk (VD) associated with the VM;
for each VD of the at least one VD:
obtaining a user-checkpoint tree (UCT) for the VD;
identifying, within the UCT, a set of user-checkpoint branches (UCBs) comprising an active UCB and at least one inactive UCB;
generating a VD image (VDI) based on the at least one inactive UCB and the active UCB; and
after generating the VDI for each VD of the at least one VD, to obtain at least one VDI:
generating, for the VM, a VM image (VMI) comprising the at least one VDI.

US Pat. No. 10,339,010

SYSTEMS AND METHODS FOR SYNCHRONIZATION OF BACKUP COPIES

1. A method providing a technical solution to the technical problem of how to efficiently create and maintain multiple backup copies without having to separately read the original data for each of the backup copies, the method comprising:(a) receiving, at a data processing engine running on a first electronic device from a second electronic device associated with original storage, first data corresponding to a first version of original data stored in the original data storage;
(b) effecting creating, in primary backup storage at a location remote to the first electronic device based on the received first data, a primary backup copy of the first version of the original data;
(c) receiving, at the data processing engine running on the first electronic device from the primary backup storage, second data corresponding to the primary backup copy stored at the primary backup storage;
(d) effecting creating, in secondary backup storage at a second location remote to the first electronic device based on the received second data, a secondary backup copy of the first version of the original data;
(e) periodically, based on a first time interval,
(i) determining, by the data processing engine, whether the original data stored in the original data storage has been changed, and
(ii) if it is determined that the original data stored in the original data storage has been changed, automatically synchronizing, by the data processing engine based on data received from the original data storage, the primary backup copy stored in the primary backup storage to correspond to an updated version of the original data,
(iii) wherein this includes
(A) determining, by the data processing engine, that the original data stored in the original data storage has been changed to a second version, and
(B) based on determining that the original data stored in the original data storage has been changed to the second version, automatically synchronizing, by the data processing engine based on data received from the original data storage, the primary backup copy stored in the primary backup storage to correspond to the second version; and
(g) periodically, based on a second time interval,
(i) determining, by the data processing engine, whether the primary backup copy stored in the primary backup storage differs from the secondary backup copy stored in the secondary backup storage, and
(ii) if it is determined that the primary backup copy stored in the primary data storage differs from the secondary backup copy stored in the secondary data storage, automatically synchronizing, by the data processing engine based on differential data received from the primary backup storage, the secondary backup copy to correspond to the first backup copy,
(iii) wherein this includes
(A) determining, by the data processing engine, that the primary backup copy stored in the primary backup storage differs from the secondary backup copy stored in the secondary backup storage, and
(B) based on determining that the primary backup copy stored in the primary data storage differs from the secondary backup copy stored in the secondary data storage, automatically synchronizing, by the data processing engine based on differential data received from the primary backup storage, the secondary backup copy to correspond to the second version;
(h) wherein, via performance of this method, the secondary backup copy stored in the secondary data storage is periodically indirectly synchronized to data stored in the original data storage via the primary backup copy stored in the primary data storage being both
(i) periodically synchronized, based on the first time interval, to data in the original data storage, and
(ii) periodically used to synchronize, based on the second time interval, the secondary backup copy stored in the secondary data storage.

US Pat. No. 10,339,009

SYSTEM FOR FLAGGING DATA MODIFICATION DURING A VIRTUAL MACHINE BACKUP

International Business Ma...

6. A computer program product for virtual machine backup in a computer system, the computer system comprising:a processor unit arranged to run a hypervisor running one or more virtual machines;
a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line and an image modification flag, wherein the image modification flag indicates whether a virtual machine being backed up has modified the cache line; and
a memory connected to the cache and arranged to store an image of at least one virtual machine;
the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to:
define a log in the memory;
in response to a determination that a virtual machine that modified the cache line is being backed up, set, in the cache, the image modification flag for the cache line modified by the virtual machine being backed up, wherein the image modification flag is not set if the cache line is modified by a virtual machine not being backed up; and
write only the memory address of the cache rows flagged with the image modification flag in the defined log.

US Pat. No. 10,339,007

AGILE RE-ENGINEERING OF INFORMATION SYSTEMS

CA, Inc., Islandia, NY (...

1. A non-transitory computer-readable storage medium, with instructions stored thereon, which when executed by at least one processor of a computer, cause the computer to:receive, from a user interface utilized to select patterns, a selected pattern to be implemented for a service model that corresponds to a system comprising a set of information technology (IT) resources, wherein the selected pattern includes a group of configuration item configuration settings;
when the selected pattern is received for implementation, issue commands to one or more of the IT resources that correspond to configuration items included in the selected pattern to modify configuration item configuration settings based on the selected pattern;
when a configuration change resulting from the commands to modify configuration item configuration settings is identified, compare a current configuration of the service model to a previous configuration to identify modified configuration item configuration settings;
in response to determining, based on a performance indicator for the system, that the system performance is improved, store the identified modified configuration item configuration settings as a candidate pattern in a pattern database;
identify a performance indicator violation within a dataset of performance metric data for the system;
query the pattern database to retrieve the candidate pattern including the group of configuration item configuration settings;
instantiate the configuration item configuration settings to implement the retrieved candidate pattern; and
apply at least one performance metric to the IT resources to confirm the performance indicator violation has been resolved.

US Pat. No. 10,339,006

PROXYING SLICE ACCESS REQUESTS DURING A DATA EVACUATION

INTERNATIONAL BUSINESS MA...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:selecting a second storage unit based on a decentralized agreement module decision decided by a decentralized agreement module, wherein the decentralized agreement module receives a ranked scoring information request from a requestor with regards to a set of candidate storage unit resources and, for each of the candidate storage unit resources, the decentralized agreement module performs a deterministic function on a location identifier (ID) of the candidate storage unit resource or an asset ID of the ranked scoring information request;
initiating an evacuation of encoded data slices from a first storage unit to the second storage unit;
receiving, at the second storage unit, a checked write slice request from a requesting entity, the checked write slice request including a requested encoded data slice;
determining, at the second storage unit, that locally stored encoded data slices do not include the requested encoded data slice; and
generating, at the second storage unit, a response to include one or more of: a code associated with the checked write slice request, a name of the encoded data slice, or a revision level.
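
A minimal sketch of a deterministic ranked-scoring function in the spirit of the decentralized agreement module, here implemented as rendezvous-style hashing over the asset ID and each candidate's location ID (an assumption, not IBM's scoring function):

```python
# Each candidate storage unit is scored deterministically from the asset ID
# and its location ID; the highest-ranked unit is selected.
import hashlib

def score(asset_id, location_id):
    digest = hashlib.sha256(f"{asset_id}:{location_id}".encode()).hexdigest()
    return int(digest, 16)

def rank_storage_units(asset_id, location_ids):
    return sorted(location_ids, key=lambda loc: score(asset_id, loc), reverse=True)

candidates = ["su-1", "su-2", "su-3"]
ranking = rank_storage_units("slice-0042", candidates)
print(ranking[0])  # the storage unit selected for this asset
```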

US Pat. No. 10,339,005

STRIPE MAPPING IN MEMORY

Micron Technology, Inc., ...

1. A method for stripe mapping, comprising:storing a first stripe map, wherein the first stripe map includes a number of stripe indexes to identify a number of stripes stored in a plurality of memory devices and a number of element identifiers to identify elements included in each of the number of stripes;
storing a second stripe map, wherein the second stripe map is an inverse stripe map of the first stripe map; and
performing a redundant array of independent disks (RAID) read error recovery operation using the second stripe map to identify a plurality of stripes that each include a bad element, wherein the RAID read error recovery operation corrects data in the bad element using parity data, moves the corrected data to a different element, and updates element identifiers of the plurality of stripes to include an identifier for the different element.
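
A minimal sketch of the forward and inverse stripe maps and of updating the affected stripes after a bad element's data has been moved (layout and names assumed; the parity reconstruction itself is omitted):

```python
# A forward stripe map lists the elements of each stripe, the inverse map
# answers "which stripes use this element", and recovery points every affected
# stripe at the spare element that received the corrected data.
def invert(stripe_map):
    inverse = {}
    for stripe, elements in stripe_map.items():
        for e in elements:
            inverse.setdefault(e, []).append(stripe)
    return inverse

def recover_bad_element(stripe_map, inverse_map, bad, spare):
    for stripe in inverse_map.pop(bad, []):          # stripes touched by the bad element
        elements = stripe_map[stripe]
        elements[elements.index(bad)] = spare        # point the stripe at the spare
        inverse_map.setdefault(spare, []).append(stripe)

stripes = {0: ["d0", "d1", "p0"], 1: ["d1", "d2", "p1"]}
inverse = invert(stripes)
recover_bad_element(stripes, inverse, bad="d1", spare="d9")
print(stripes)   # both stripes now reference d9 instead of d1
```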

US Pat. No. 10,339,004

CONTROLLER AND OPERATION METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A controller within a memory system comprising:an initialization circuit suitable for initializing values and states of variable nodes and initializing values of check nodes;
a variable node update circuit suitable for updating the values and states of the variable nodes provided from the initialization circuit;
a check node update circuit suitable for updating the values of the check nodes based on the updated values and states of the variable nodes provided from the variable node update circuit; and
a syndrome check circuit suitable for deciding iteration of the operation of the variable node update circuit and the check node update circuit when the values of the check nodes provided from the check node update circuit are not all in a satisfied state,
wherein the variable node update circuit calculates reliability values of the variable nodes and a reference flip value based on a result of a previous iteration, and
wherein the variable node update circuit updates the values and states of the variable nodes based on the reference flip value and the reliability values and states of the variable nodes.

US Pat. No. 10,339,003

PROCESSING DATA ACCESS TRANSACTIONS IN A DISPERSED STORAGE NETWORK USING SOURCE REVISION INDICATORS

INTERNATIONAL BUSINESS MA...

1. A method comprises:sending, by a dispersed storage (DS) processing unit of a dispersed storage network (DSN), a set of data access requests to a set of storage units of the DSN, wherein the set of data access requests is regarding a data access transaction involving a set of encoded data slices, wherein a data segment of a data object is dispersed storage error encoded into the set of encoded data slices, and wherein the set of storage units stores, or is to store, the set of encoded data slices;
receiving, by the DS processing unit from each of at least some storage units of the set of storage units, a storage-revision indicator, wherein the storage-revision indicator includes a content-revision field, a delete-counter field, and a contest-counter field, wherein the content-revision uniquely identifies content of an encoded data slice of the set of encoded data slices, wherein the delete-counter indicates a number of times the encoded data slice has been deleted, and wherein the contest-counter indicates a number of data access contests the encoded data slice has participated in;
generating, by the DS processing unit, an anticipated storage-revision indicator for the data access transaction based on a current revision level of the set of encoded data slices and based on a data access type of the data access transaction;
comparing, by the DS processing unit, the anticipated storage-revision indicator with the storage-revision indicators received from the at least some storage units; and
when a threshold number of the storage-revision indicators received from the at least some storage units substantially match the anticipated storage-revision indicator, executing, by the DS processing unit, the data access transaction.

US Pat. No. 10,339,002

CATASTROPHIC DATA LOSS AVOIDANCE

VMware, Inc., Palo Alto,...

1. A computer-implemented method for recovering data that has been divided into a plurality of portions, the method comprising:detecting an indication of a loss of at least one portion of the plurality of portions of the data, wherein the data is recoverable using a subset of the plurality of portions of the data stored in multiple storage devices;
copying remaining portions of the data not indicated as being lost to backup storage devices in response to the detected indication; and
after the copying of the remaining portions of the data is initiated, recovering the data using the remaining portions of the data,
wherein the copying of the remaining portions of the data to the backup storage devices results in a risk reduction of further loss of the portions of the data.

US Pat. No. 10,339,001

METHOD AND SYSTEM FOR IMPROVING FLASH STORAGE UTILIZATION BY PREDICTING BAD M-PAGES

EMC IP Holding Company LL...

1. A method for managing persistent storage, the method comprising:issuing, by a control module, a proactive read request to a page in the persistent storage;
receiving, in response to the proactive read request, a bit error value (BEV) for data stored on the page, wherein the BEV is based on a number of incorrect bits in the data;
obtaining, by the control module and based on at least one parameter associated with the page, a BEV threshold (T); and
based on a determination that the BEV is greater than T, setting an m-page as non-allocatable for future operations, wherein the m-page is a set of pages in the persistent storage and the page is in the set of pages.
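
A minimal sketch of the threshold check, with an assumed policy for deriving the BEV threshold from a page parameter:

```python
# Issue a proactive read, compare the page's bit error value against a
# threshold derived from the page's parameters, and mark the whole m-page
# non-allocatable when the threshold is exceeded.
def bev_threshold(page_params):
    # Assumed policy: tolerate fewer bit errors as program/erase count grows.
    return max(4, 64 - page_params["pe_cycles"] // 100)

def check_page(page, m_pages):
    if page["bit_errors"] > bev_threshold(page["params"]):
        m_pages[page["m_page"]]["allocatable"] = False

m_pages = {7: {"allocatable": True}}
page = {"m_page": 7, "bit_errors": 61, "params": {"pe_cycles": 900}}
check_page(page, m_pages)
print(m_pages[7]["allocatable"])  # -> False
```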

US Pat. No. 10,339,000

STORAGE SYSTEM AND METHOD FOR REDUCING XOR RECOVERY TIME BY EXCLUDING INVALID DATA FROM XOR PARITY

SanDisk Technologies LLC,...

1. A storage system comprising:a memory; and
a controller in communication with the memory, wherein the controller is configured to:
generate a first exclusive-or (XOR) parity for pages of data written to the memory, wherein the pages of data are protected by a data protection scheme;
after the first XOR parity has been generated, determine whether a percentage of errors for the pages is above a threshold;
based on a determination that the percentage of errors for the pages is below the threshold:
determine that the data protection scheme cannot correct at least one error in a page; and
use the first XOR parity to recover the page that contains the error;
based on a determination that the percentage of errors for the pages is above the threshold:
generate a second XOR parity for the pages of data that excludes the at least one page of invalid data, wherein the second XOR parity is generated by performing an XOR operation using the first XOR parity and the at least one page of invalid data as inputs;
determine that the data protection scheme cannot correct an error in a page; and
use the second XOR parity to recover the page that contains the error, wherein using the second XOR parity to recover the page that contains the error is faster than using the first XOR parity to recover the page that contains the error.
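
The parity arithmetic behind the second XOR parity can be sketched with byte strings standing in for flash pages:

```python
# XOR-ing the invalid page into the original parity removes it, so recovery of
# a valid page no longer has to read or cancel the invalid data.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity(pages):
    acc = bytes(len(pages[0]))
    for p in pages:
        acc = xor(acc, p)
    return acc

pages = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # page 2 holds invalid data
first_parity = parity(pages)
second_parity = xor(first_parity, pages[2])        # exclude the invalid page

# Recover page 1 using only the remaining valid page and the second parity.
recovered = xor(second_parity, pages[0])
print(recovered == pages[1])  # -> True
```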

US Pat. No. 10,338,999

CONFIRMING MEMORY MARKS INDICATING AN ERROR IN COMPUTER MEMORY

International Business Ma...

1. A method of confirming memory marks indicating an error in computer memory, the method comprising:detecting, by memory logic responsive to a memory read operation, an error in a memory location;
marking, by the memory logic in an entry in a hardware mark table, the memory location as containing the error, the entry including one or more parameters for correcting the error; and
responsive to detecting the error in the memory location, retrying, by the memory logic, the memory read operation, including:
responsive to again detecting the error in the memory location, determining whether the error is correctable at the memory location using the parameters included in the entry; and
if the error is correctable at the memory location using the one or more parameters included in the entry, confirming the error in the entry of the hardware mark table.

US Pat. No. 10,338,998

METHODS FOR PRIORITY WRITES IN AN SSD (SOLID STATE DISK) SYSTEM AND APPARATUSES USING THE SAME

SHANNON SYSTEMS LTD., Sh...

1. A method for priority writes in an SSD (Solid State Disk) system, performed by a processing unit, comprising:receiving a priority write command instructing the processing unit to write first data whose length is less than a page length in a storage unit;
directing a buffer controller to store the first data from the next available sub-region of a buffer, which is associated with a priority write, in a first direction;
receiving a non-priority write command instructing the processing unit to write second data whose length is less than a page length in the storage unit; and
directing the buffer controller to store the second data from the next available sub-region of the buffer, which is associated with a non-priority write, in a second direction.
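
A minimal sketch of the two-direction buffer, assuming a fixed number of sub-regions filled from opposite ends:

```python
# Priority sub-pages are placed from the front of the buffer and non-priority
# sub-pages from the back, so the two classes of partial-page data never
# collide until the buffer is full.
class SubPageBuffer:
    def __init__(self, sub_regions):
        self.slots = [None] * sub_regions
        self.front = 0                   # next slot for priority writes
        self.back = sub_regions - 1      # next slot for non-priority writes

    def write(self, data, priority):
        if self.front > self.back:
            raise BufferError("buffer full, flush to the storage unit first")
        if priority:
            self.slots[self.front] = data
            self.front += 1
        else:
            self.slots[self.back] = data
            self.back -= 1

buf = SubPageBuffer(4)
buf.write(b"urgent-log", priority=True)
buf.write(b"bulk-data", priority=False)
print(buf.slots)  # [b'urgent-log', None, None, b'bulk-data']
```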

US Pat. No. 10,338,997

APPARATUSES AND METHODS FOR FIXING A LOGIC LEVEL OF AN INTERNAL SIGNAL LINE

Micron Technology, Inc., ...

1. A method comprising:enabling a register of a semiconductor device, wherein the enabled register is configured to provide data bus inversion information or data mask information to a data terminal of the semiconductor device;
providing, from the register of the semiconductor device, a first control signal to a control circuit of the semiconductor device, the first control signal including information indicative of an operation mode of an error check operation being enabled and an operation mode of a data bus inversion operation being disabled; and
responsive to the first control signal, providing a voltage level of a signal line coupled to an external terminal of the semiconductor device at a constant level, the external terminal configured to receive the data bus inversion information or the data mask information.

US Pat. No. 10,338,994

PREDICTING AND ADJUSTING COMPUTER FUNCTIONALITY TO AVOID FAILURES

SAS INSTITUTE INC., Cary...

1. A system comprising:a processing device; and
a memory device including instructions that are executable by the processing device for causing the processing device to:
receive prediction data representing a prediction, wherein the prediction data forms a time series that spans a future time-period;
receive a plurality of files defining abnormal data-point patterns to be identified in the prediction data, wherein each file in the plurality of files includes customizable program-code for identifying a respective abnormal pattern of data-point values in the prediction data;
automatically identify a plurality of abnormal data-point patterns in the prediction data by interpreting and executing the customizable program-code in the plurality of files;
automatically determine a plurality of override processes that correspond to the plurality of abnormal data-point patterns in response to identifying the plurality of abnormal data-point patterns in the prediction data, wherein the plurality of override processes are automatically determined using correlations between the plurality of abnormal data-point patterns and the plurality of override processes, and wherein an override process involves replacing a value of at least one data point in the prediction data with another value that is configured to mitigate an impact of an abnormal data-point pattern on the prediction;
automatically determine that the plurality of override processes are to be applied to the prediction data in a particular order;
automatically generate a corrected version of the prediction data in response to determining the plurality of override processes, wherein the corrected version of the prediction data is generated by executing the plurality of override processes in the particular order; and
automatically adjust one or more computer parameters based on the corrected version of the prediction data.

US Pat. No. 10,338,993

ANALYSIS OF FAILURES IN COMBINATORIAL TEST SUITE

SAS Institute Inc., Cary...

1. A computer-program product tangibly embodied in a non-transitory machine-readable storage medium, the computer-program product including instructions operable to cause a computing device to:generate a test suite that provides test cases for testing a system comprising different components, wherein each element of a test case of the test suite is a test condition for testing one of categorical factors for the system, each of the categorical factors representing one of the different components, and wherein a test condition in the test suite comprises one of different levels representing different options assigned to a categorical factor for the system;
receive a set of input weights for one or more levels of the test suite;
receive a failure indication indicating a test conducted according to the test cases failed;
in response to receiving the failure indication, determine a plurality of cause indicators based on the set of input weights and any commonalities between test conditions of any failed test cases of the test suite that resulted in a respective failed test outcome, wherein each cause indicator represents a likelihood that a test condition or combination of test conditions of the any failed test cases caused the respective failed test outcome;
identify, based on comparing the plurality of cause indicators, a most likely potential cause for a potential failure of the system; and
output an indication of the most likely potential cause for the potential failure of the system.
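
A minimal sketch of scoring commonalities among failed test cases with input weights (an illustrative scoring rule, not SAS's algorithm):

```python
# Score each test condition by how often it appears across the failed test
# cases, weighted by user-supplied level weights, and report the
# highest-scoring condition as the most likely cause.
from collections import Counter
from itertools import chain

def cause_indicators(failed_cases, weights):
    counts = Counter(chain.from_iterable(case.items() for case in failed_cases))
    total = len(failed_cases)
    return {cond: (n / total) * weights.get(cond, 1.0)
            for cond, n in counts.items()}

failed = [{"os": "linux", "browser": "edge", "db": "pg"},
          {"os": "mac",   "browser": "edge", "db": "pg"}]
weights = {("browser", "edge"): 1.0, ("db", "pg"): 0.5}   # input weights per level

scores = cause_indicators(failed, weights)
print(max(scores, key=scores.get))  # -> ('browser', 'edge')
```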

US Pat. No. 10,338,992

SEMICONDUCTOR APPARATUS AND DISPLAY APPARATUS

Japan Display Inc., Toky...

1. A semiconductor apparatus comprising:a plurality of semiconductor devices that includes a first semiconductor device including a first anomaly detection circuit and a second semiconductor device including a second anomaly detection circuit,
wherein the first anomaly detection circuit is configured to detect anomalies in a plurality of first functions implemented in the first semiconductor device and output a first anomaly detection signal to the second anomaly detection circuit and a device outside the semiconductor apparatus,
wherein the second anomaly detection circuit is configured to detect anomalies in a plurality of second functions implemented in the second semiconductor device and output a second anomaly detection signal to the first anomaly detection circuit,
wherein the first anomaly detection circuit is configured to generate the first anomaly detection signal when the first anomaly detection circuit detects (a) an anomaly in at least one of the first functions, (b) the second anomaly detection signal that is output from the second anomaly detection circuit, or (c) both, and
wherein the second anomaly detection circuit is configured to generate the second anomaly detection signal when the second anomaly detection circuit detects an anomaly in at least one of the second functions.

US Pat. No. 10,338,991

CLOUD-BASED RECOVERY SYSTEM

Microsoft Technology Lice...

1. A computing system, comprising:a communication system configured to:
receive a diagnostic data package from a client computing device that is remote from the computing system, the diagnostic data package including:
a problem scenario identifier that identifies a problem scenario indicative of a problem associated with the client computing device, and
first problem-specific diagnostic data that is obtained from the client computing device and specific to the problem associated with the client computing device;
a state-based diagnostic system configured to:
identify a problem-specific diagnostic analyzer, that is specific to the problem associated with the client computing device, based on mapping information that maps the problem scenario to the problem-specific diagnostic analyzer; and
run the problem-specific diagnostic analyzer to:
obtain second problem-specific diagnostic data from a server environment in which the computing system is deployed, the second problem-specific diagnostic data being specific to the problem associated with the client computing device; and
aggregate the first problem-specific diagnostic data and the second problem-specific diagnostic data to obtain aggregated data;
data analysis logic configured to:
identify an estimated root cause for the problem scenario based on the aggregated data; and
identify a suggested recovery action, based on the estimated root cause,
wherein the communication system is configured to communicate the suggested recovery action to the client computing device.

US Pat. No. 10,338,990

CULPRIT MODULE DETECTION AND SIGNATURE BACK TRACE GENERATION

VMware, Inc., Palo Alto,...

1. A computer-implemented method for identifying a culprit module and for generating a signature back trace corresponding to a symptom of a crash of a computer system, said method comprising:receiving a core dump at a crash analyzer, wherein said core dump corresponds to said crash of said computer system;
generating, at said crash analyzer, an essential stack of functions corresponding to said crash of said computer system;
determining a tag sequence and a tag depth corresponding to said essential stack of functions, at said crash analyzer;
deriving a list of permissible tag permutations corresponding to said computer system;
utilizing said tag sequence and said tag depth in combination with said list of permissible tag permutations, by said crash analyzer, to identify a culprit module responsible for said computer crash; and
generating a signature back trace from said essential stack of functions including at least one function corresponding to said culprit module, and providing said signature back trace as an output from said crash analyzer, wherein said signature back trace pertains to a symptom of said crash of said computer system.

US Pat. No. 10,338,989

DATA TUPLE TESTING AND ROUTING FOR A STREAMING APPLICATION

International Business Ma...

1. An apparatus comprising:at least one processor;
a memory coupled to the at least one processor; and
a streaming application residing in the memory and executed by the at least one processor, the streaming application comprising a flow graph that includes a plurality of operators that process a plurality of data tuples, wherein the plurality of operators comprises:
a plurality of parallel test operators that test in parallel the plurality of data tuples; and
a tuple testing and routing operator that routes the plurality of data tuples to the plurality of parallel test operators, receives feedback from the plurality of parallel test operators regarding the results of testing the plurality of data tuples, and routes a first selected data tuple from the plurality of data tuples to a first operator when the first selected data tuple passes the plurality of parallel test operators according to a specified pass threshold;
wherein the streaming application is executed under control of a streams manager and is configured by the streams manager according to a specified routing method that determines a number of the plurality of parallel test operators that operate in parallel on each of the plurality of data tuples.

US Pat. No. 10,338,988

STATUS MONITORING SYSTEM AND METHOD

EMC IP Holding Company LL...

1. A user-configurable decoder circuit, associated with a controlled subcomponent, configured to:receive a cumulatively-encoded status signal, wherein the cumulatively-encoded status signal includes a warning of an upcoming change in power output of at least one monitored subcomponent of a plurality of monitored subcomponents and an amplitude of the cumulatively-encoded status signal indicates a number of monitored subcomponents with the warning of an upcoming change in power output;
compare the cumulatively-encoded status signal to a user-definable threshold that defines a subcomponent policy for the controlled subcomponent, wherein the amplitude of the user-definable threshold is based upon, at least in part, a threshold number of monitored subcomponents asserting the warning of an upcoming change in power output; and
effectuate a procedure on the controlled subcomponent based, at least in part, upon the comparison of the cumulatively-encoded status signal and the user-definable threshold, wherein effectuating the procedure on the controlled subcomponent includes reducing a power demand of the controlled subcomponent prior to the upcoming change in the power output of the at least one monitored subcomponent based, at least in part, upon the subcomponent policy defined for the controlled subcomponent.

US Pat. No. 10,338,987

TESTING MODULE COMPATIBILITY

Dell Products LP, Round ...

1. A method for checking support and compatibility of a module without inserting the module into one of one or more empty slots of a chassis comprising:obtaining a platform specification and a configuration of the chassis;
receiving information about the module from a Near Field Communication (NFC) tag coupled to the module;
analyzing the information about the module against the platform specification, and the chassis configuration;
based on the analysis, determining that one of a plurality of conditions exists, wherein the plurality of conditions comprise:
a first condition exists when the module will not be supported according to the platform specification;
a second condition exists when the module will be supported according to the platform specification and there are no empty slots for which the module will be compatible with the chassis configuration; and
a third condition exists when the module will be supported according to the platform specification and there is at least one empty slot for which the module will be compatible with the chassis configuration; and
generating an indication, perceptible to a user, of a determined condition to allow the user to decide whether to insert the module.

US Pat. No. 10,338,986

SYSTEMS AND METHODS FOR CORRELATING ERRORS TO PROCESSING STEPS AND DATA RECORDS TO FACILITATE UNDERSTANDING OF ERRORS

Microsoft Technology Lice...

1. A system comprising:a processor and memory; and
machine readable instructions, when executed by the processor and memory, configured to:
receive one or more commands from a computer program file;
assign a first set of identifiers to different portions of the computer program file to link errors occurring during execution of one or more of a plurality of processing steps to corresponding portions of the computer program file;
execute the plurality of processing steps to process data in a data processing system distributed over a plurality of nodes in a cluster, wherein each of the nodes is a computing device;
compose a graph including vertices representing the plurality of processing steps;
assign a second set of identifiers to the vertices;
collect, from the plurality of nodes, in response to an error occurring on executing a first processing step of the processing steps on the plurality of nodes, information about the error stored on the plurality of nodes, the information about the error including a first identifier associated with one of the vertices representing the first processing step in the graph;
process the information about the error from the plurality of nodes to correlate the error to the first processing step based on the first identifier that is associated with the first processing step and that is included in the information about the error stored on the plurality of nodes;
generate, based on the processed information, correlation between the error and the first processing step, wherein the correlation between the error and the first processing step indicates a cause and a location of the error; and
in response to one or more of the plurality of processing steps being modified by reordering, consolidating, or discarding the one or more steps prior to executing the plurality of processing steps, retain identifiers of the one or more processing steps being modified with the modified one or more processing steps,
wherein the retained identifiers are used to correlate one or more errors to the one or more processing steps in the event of the modification.

US Pat. No. 10,338,985

INFORMATION PROCESSING DEVICE, EXTERNAL STORAGE DEVICE, HOST DEVICE, RELAY DEVICE, CONTROL PROGRAM, AND CONTROL METHOD OF INFORMATION PROCESSING DEVICE

Toshiba Memory Corporatio...

1. An information processing system comprising:a host device and a storage device coupled with the host device;
the storage device including:
a nonvolatile memory including a plurality of blocks; and
a first controller configured to
control the nonvolatile memory,
determine whether a data write operation to the nonvolatile memory is prohibited based on a first value and a first threshold value, the first value being a value of a number of free blocks, the first threshold value corresponding to the first value, and
send, when determining the data write operation to the nonvolatile memory is prohibited, information indicating that data write operation to the nonvolatile memory is prohibited;
the host device being connectable to a display, the host device including a second controller, the second controller configured to
acquire a second value from the storage device, the second value being at least one value of a plurality of pieces of statistical information,
cause the display to show a certain message when the acquired second value exceeds a second threshold value, the second threshold value corresponding to the second value, and
recognize the storage device as a read only device that supports only a read operation of read and write operations of the nonvolatile memory when receiving the information.
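The free-block and statistics thresholds can be pictured with a small Python sketch; the class and attribute names below are assumptions made for illustration, not terms from the claim.

    # Attribute names and threshold values are invented for the example.
    class StorageDevice:
        def __init__(self, free_blocks, free_block_threshold):
            self.free_blocks = free_blocks
            self.free_block_threshold = free_block_threshold

        def write_prohibited(self):
            # First controller: prohibit writes when free blocks reach the threshold.
            return self.free_blocks <= self.free_block_threshold

    class HostController:
        def __init__(self, statistic_threshold):
            self.statistic_threshold = statistic_threshold

        def handle(self, device, statistic_value):
            if statistic_value > self.statistic_threshold:
                print("display: certain message shown to the user")
            if device.write_prohibited():
                print("recognizing the storage device as read only")

    HostController(statistic_threshold=90).handle(
        StorageDevice(free_blocks=3, free_block_threshold=5), statistic_value=95)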

US Pat. No. 10,338,984

STORAGE CONTROL APPARATUS, STORAGE APPARATUS, AND STORAGE CONTROL METHOD

SONY CORPORATION, Tokyo ...

1. A storage control apparatus, comprising:circuitry configured to determine a unit-of-storage of a memory cell for a non-volatile memory as suspected of having a defect,
wherein the unit-of-storage is determined as suspected of having the defect based on a number of errors in one of a reset operation or a set operation of the non-volatile memory that exceeds a first threshold value and based on a total value of a number of errors in the reset operation and a number of errors in the set operation that exceeds a second threshold value.
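A minimal sketch of the two-threshold test described in this claim, with placeholder threshold values.

    # Placeholder thresholds; the rule combines the two tests named in the claim.
    def suspected_defective(reset_errors, set_errors,
                            per_operation_threshold, total_threshold):
        exceeds_single = (reset_errors > per_operation_threshold or
                          set_errors > per_operation_threshold)
        exceeds_total = (reset_errors + set_errors) > total_threshold
        return exceeds_single and exceeds_total

    print(suspected_defective(reset_errors=12, set_errors=3,
                              per_operation_threshold=10, total_threshold=14))  # True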

US Pat. No. 10,338,983

METHOD AND SYSTEM FOR ONLINE PROGRAM/ERASE COUNT ESTIMATION

EMC IP Holding Company LL...

1. A method for managing persistent storage, the method comprising:selecting a sample set of physical addresses in a solid state memory module (SSMM), wherein the sample set of physical addresses is associated with a region in the SSMM;
performing a garbage collection operation on the sample set of physical addresses;
after the garbage collection operation, issuing a write request to the sample set of physical addresses;
after issuing the write request, issuing a read request to the sample set of physical addresses to obtain a copy of data stored in the sample set of physical addresses;
determining an error rate in the copy of the data stored using at least one selected from a group consisting of an Error Correction Code (ECC) codeword and known data in the write request;
determining a calculated P/E cycle value for the SSMM using at least the error rate; and
updating an in-memory data structure in a control module with the calculated P/E cycle value.
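The estimation step can be sketched as follows; the error-rate-to-P/E lookup table is invented for the example, since the claim does not specify the estimation function.

    # The lookup table mapping error rate to P/E cycles is a made-up placeholder.
    ERROR_RATE_TO_PE = [(0.001, 1_000), (0.005, 3_000), (0.01, 5_000), (0.02, 10_000)]

    def estimate_pe_cycles(written, read_back):
        # Error rate = fraction of the known written data that reads back differently.
        errors = sum(1 for w, r in zip(written, read_back) if w != r)
        rate = errors / len(written)
        for max_rate, pe_cycles in ERROR_RATE_TO_PE:
            if rate <= max_rate:
                return pe_cycles
        return ERROR_RATE_TO_PE[-1][1]

    # Update the in-memory structure in the control module with the calculated value.
    in_memory_structure = {"region_0": estimate_pe_cycles(b"\x00" * 1000,
                                                          b"\x00" * 999 + b"\x01")}
    print(in_memory_structure)  # {'region_0': 1000}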

US Pat. No. 10,338,982

HYBRID AND HIERARCHICAL OUTLIER DETECTION SYSTEM AND METHOD FOR LARGE SCALE DATA PROTECTION

International Business Ma...

1. A method comprising:receiving metadata associated with one or more data backup jobs performed on one or more storage devices, wherein the metadata comprises univariate time series data for each variable of a multivariate time series, and the multivariate time series comprises different variables that exhibit different characteristics over time; and
decreasing likelihood of a failure in data protection involving the one or more data backup jobs by:
for each variable of the multivariate time series:
selecting, from different anomaly detection models with different performance costs, an anomaly detection model suitable for the variable based on one or more characteristics exhibited by corresponding univariate time series data for the variable and covariations and interactions between the variable and at least one other variable of the multivariate time series; and
detecting an anomaly on the variable utilizing the anomaly detection model selected for the variable; and
based on each anomaly detection model selected for each variable of the multivariate time series,
determining whether the multivariate time series is anomalous at a particular time point, and generating data indicative of whether the multivariate time series is anomalous at the particular time point.
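As a loose illustration of per-variable model selection, the heuristic below chooses among three hypothetical detectors based on simple characteristics of each univariate series; the detector names and selection rules are assumptions, and the claim's covariation and interaction criteria are not modeled.

    # Hypothetical detectors and selection heuristics only.
    def pick_detector(series):
        mean = sum(series) / len(series)
        variance = sum((x - mean) ** 2 for x in series) / len(series)
        if variance < 1e-6:
            return "constant-threshold detector"       # cheap model for flat series
        jumps = any(abs(series[i] - series[i - 1]) > 2 * variance ** 0.5
                    for i in range(1, len(series)))    # crude abrupt-jump heuristic
        return "change-point detector" if jumps else "seasonal forecasting detector"

    multivariate = {"backup_duration": [10, 11, 10, 40, 9, 11],
                    "bytes_copied": [5.0, 5.0, 5.0, 5.0, 5.0, 5.0]}
    print({name: pick_detector(values) for name, values in multivariate.items()})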

US Pat. No. 10,338,981

SYSTEMS AND METHODS TO FACILITATE INFRASTRUCTURE INSTALLATION CHECKS AND CORRECTIONS IN A DISTRIBUTED ENVIRONMENT

VMware, Inc, Palo Alto, ...

1. An apparatus comprising:a first virtual appliance including a first management endpoint, the first virtual appliance to organize tasks to be executed to install a computing infrastructure; and
a first component server including a first management agent to communicate with the first management endpoint, the first virtual appliance to assign a first role to the first component server and to determine a subset of prerequisites associated with the first role, the subset of prerequisites selected from a plurality of prerequisites based on an applicability of the subset of prerequisites to the first role, each of the subset of prerequisites associated with an error correction script, the first component server to determine whether the first component server satisfies the subset of prerequisites associated with the first role, the first component server to address an error when the first component server is determined not to satisfy at least one of the subset of prerequisites by executing the error correction script associated with the at least one of the subset of prerequisites.
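A hedged sketch of the prerequisite check and error correction step, assuming each prerequisite is a pair of callables, a check and a correction script; the role name and the disk-space check are illustrative only.

    # Each prerequisite here is a (check, correction_script) pair of callables,
    # standing in for the claim's prerequisite checks and error correction scripts.
    import shutil

    prerequisites_by_role = {
        "database": [
            (lambda: shutil.disk_usage("/").free > 10 * 2**30,   # roughly 10 GiB free
             lambda: print("executing the associated error correction script")),
        ],
    }

    def prepare_component_server(role):
        for check, correction_script in prerequisites_by_role.get(role, []):
            if not check():
                correction_script()   # address the unmet prerequisite

    prepare_component_server("database")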

US Pat. No. 10,338,979

MESSAGE PATTERN DETECTION AND PROCESSING SUSPENSION

Chicago Mercantile Exchan...

1. A computer implemented method for processing electronic data transaction request messages for a data object in a data transaction processing system, the method comprising:receiving, by a processor from a first source, a first electronic data transaction request message to perform a first transaction of a first transaction type on a data object;
processing, by the processor, the first electronic data transaction request message, wherein processing an electronic data transaction request comprises determining whether the electronic data transaction request message matches with another electronic data transaction request message;
receiving, by the processor from a second source, a second electronic data transaction request message to perform a second transaction of the first transaction type on the data object;
processing, by the processor, the second electronic data transaction request message;
receiving, by the processor from the first source, a third electronic data transaction request message to undo results of processing the first electronic data transaction request message;
processing, by the processor, the third electronic data transaction request message;
receiving, by the processor from the first source, within a first predetermined amount of time after receiving the third electronic data transaction request message, a fourth electronic data transaction request message to perform a first transaction of a second transaction type on the data object;
upon determining that processing the fourth electronic data transaction request message would result in a match between the second and the fourth electronic data transaction request messages, automatically preventing, by the processor, further processing of the fourth electronic data transaction request message; and
after a passage of a second predetermined amount of time, enabling further processing, by the processor, of the fourth electronic data transaction request message.

US Pat. No. 10,338,978

ELECTRONIC DEVICE TEST SYSTEM AND METHOD THEREOF

PRIMAX ELECTRONICS LTD., ...

1. A method for testing a Macintosh compliant electronic device and labeling the electronic device through a Windows system computer, used to detect a memory serial number of the Macintosh compliant electronic device and using the Windows system to generate a bar code label corresponding to the memory serial number, the method comprising the following steps:(a) using a Macintosh computer to detect the memory serial number of the Macintosh compliant electronic device;
(b) using the Macintosh computer to transmit the memory serial number to the Windows system computer by means of an RS232 interface;
(c) using the Windows system computer to compare whether the memory serial number satisfies a coding rule; if the memory serial number does not satisfy the coding rule, the Windows system computer generates an alarm message, and if the memory serial number satisfies the coding rule, the Windows system computer performs the next steps (d) and (e):
(d) executing an analog keyboard event to input the memory serial number; and
(e) driving a printer to print a bar code label that includes the memory serial number of the Macintosh compliant electronic device; and
(f) adhering the bar code label to the Macintosh compliant electronic device.
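A small Python sketch of steps (c) through (e), assuming a made-up coding rule (a MEM- prefix followed by ten digits) and replacing the keyboard event and printer driver with print statements.

    import re

    CODING_RULE = re.compile(r"^MEM-\d{10}$")   # invented coding rule

    def process_serial(serial_number):
        if not CODING_RULE.match(serial_number):
            print("alarm: serial number does not satisfy the coding rule")
            return
        # Stand-ins for the analog keyboard event and the bar code printer driver.
        print("keyboard input:", serial_number)
        print("printing bar code label for", serial_number)

    process_serial("MEM-0123456789")   # passes the rule
    process_serial("BAD-SERIAL")       # triggers the alarm message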

US Pat. No. 10,338,976

METHOD AND APPARATUS FOR PROVIDING SCREENSHOT SERVICE ON TERMINAL DEVICE AND STORAGE MEDIUM AND DEVICE

Baidu Online Network Tech...

1. A method for providing a screenshot service on a terminal device, the method comprising:executing, by a producer thread, a screenshot operation in response to a received screenshot command instruction, and writing screen data captured into a buffer; and
reading, by a consumer thread, the screen data stored by the producer thread from the buffer, executing image processing on the screen data to generate a screenshot image, and returning the screenshot image to an application invoking the screenshot service;
the method further comprising:
starting, by a main thread of the screenshot service, the producer thread and the consumer thread, and establishing, at a specified port, a session connection to the application invoking the screenshot service; and
determining, by the producer thread, a screenshot command instruction being received, by listening to a data reading instruction on a session connection.

US Pat. No. 10,338,975

CONTENTION MANAGEMENT IN A DISTRIBUTED INDEX AND QUERY SYSTEM

VMware, Inc., Palo Alto,...

1. A method of contention management in a distributed index and query system, the method comprising:utilizing one or more index processing threads of an index thread pool in a distributed index and query system to index documents buffered into a work queue buffer in a memory of the distributed index and query system after being received via a network connection;
simultaneous to the indexing, utilizing one or more query processing threads of a query thread pool to process queries, received via the network connection, of indexed documents, wherein a sum of the index processing threads and the query processing threads is a plurality of processing threads;
responsive to the work queue buffer reaching a predefined fullness, emptying the work queue buffer by backing up the work queue buffer into an allotted storage space in a data storage device of the distributed index and query system; and
setting a number of index processing threads, of the plurality of processing threads allocated to the index thread pool, in a linear relationship to a ratio of a utilized amount of the allotted storage space to a total amount of the allotted storage space.
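The linear relationship in the last step can be sketched as below; the total thread count, rounding policy, and minimum are assumptions not stated in the claim.

    def index_thread_count(total_threads, used_backup_bytes, allotted_backup_bytes,
                           minimum_index_threads=1):
        # More spilled work-queue data -> proportionally more indexing threads.
        ratio = used_backup_bytes / allotted_backup_bytes
        return max(minimum_index_threads, round(total_threads * ratio))

    total = 16
    index_threads = index_thread_count(total, used_backup_bytes=6 * 2**30,
                                       allotted_backup_bytes=8 * 2**30)
    print(index_threads, total - index_threads)   # 12 index threads, 4 query threads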

US Pat. No. 10,338,974

VIRTUAL RETRY QUEUE

Intel Corporation, Santa...

1. An apparatus comprising:a computing system block comprising logic implemented at least in part in hardware circuitry, wherein the logic is to:
enter a starvation mode based on a determination that one or more particular requests in one or more retry queues of the computing system block fail to make forward progress, wherein the starvation mode blocks the one or more retry queues and activates one or more virtual retry queues for the one or more particular requests, each of the one or more virtual retry queues comprises a respective table of pointers to entries in one or more of the retry queues, the virtual retry queues are to be used for retries during the starvation mode instead of the retry queues, and ordering of retries defined in at least one of the virtual retry queues is different from ordering of retries defined in a corresponding one of the one or more retry queues;
identify in a particular one of the virtual retry queues, a particular dependency of a first one of the particular requests, wherein the first request is in a particular one of the one or more retry queues;
determine that the particular dependency is acquired; and
retry the first request during the starvation mode based on acquisition of the particular dependency, wherein the first request is retried before another request ahead of the first request in the particular retry queue.

US Pat. No. 10,338,972

PREFIX BASED PARTITIONED DATA STORAGE

Amazon Technologies, Inc....

5. A system, comprising:one or more processors; and
memory with instructions that, as a result of being executed by the one or more processors, cause the system to:
for a request to access a data object identified by an identifier, determine a subsequence of the identifier associated with a partition in which the data object is stored, the partition tracks a maximum number of subsequences;
increment a counter corresponding to the subsequence, the counter maintained by the partition; and
perform one or more mitigating actions as a result of the counter reaching a threshold value, the one or more mitigating actions includes generating a new subsequence associated with a generated second partition such that the generated second partition fulfills requests for the new subsequence.
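An illustrative sketch of the per-subsequence counter and the mitigating action, assuming the subsequence is a fixed-length key prefix and modeling mitigation as splitting a hot prefix into a longer one served by a new partition.

    from collections import defaultdict

    PREFIX_LEN, THRESHOLD = 2, 3
    partition_of = {}                    # subsequence (prefix) -> partition
    counters = defaultdict(int)          # per-subsequence access counters

    def access(key):
        prefix = key[:PREFIX_LEN]
        partition = partition_of.setdefault(prefix, "partition-" + prefix)
        counters[prefix] += 1
        if counters[prefix] >= THRESHOLD:
            # Mitigating action: generate a new, longer subsequence backed by a
            # second partition that will serve requests for that subsequence.
            new_prefix = key[:PREFIX_LEN + 1]
            partition_of[new_prefix] = "partition-" + new_prefix
            counters[prefix] = 0
        return partition

    for key in ("ab123", "ab456", "ab789"):
        access(key)
    print(partition_of)   # hot prefix 'ab' split off into 'ab7'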

US Pat. No. 10,338,971

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM

Ricoh Company, Ltd., Tok...

8. A method of processing information performed by an information processing apparatus connected to a plurality of computational resource groups, each group including a plurality of computational resources, through a network, the information processing method comprising:monitoring a state of each computational resource belonging to each computational resource group;
identifying an unavailable computational resource group, in which a ratio of unusable computational resources to total computational resources is equal to or greater than a threshold, from among the plurality of computational resource groups, the identifying being based on the state of each computational resource monitored by the monitoring; and
receiving a target request that includes an allocation destination designating a computational resource from among the plurality of computational resources to execute a requested process, wherein
in a case where the computational resource designated by the allocation destination of the target request is a usable computational resource included within the computational resource group identified as the unavailable computational resource group, the target request is sent to the computational resource designated by the allocation destination, and
in a case where the computational resource designated by the allocation destination of the target request is an unusable computational resource included within the computational resource group identified as the unavailable computational resource group, the target request is sent to one or more usable computational resources belonging to a computational resource group other than the unavailable computational resource group.
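A rough Python sketch of the routing rule, assuming each group is a mapping from resource name to a usable flag and using the claim's ratio-versus-threshold test for unavailability.

    THRESHOLD = 0.5   # ratio of unusable resources that marks a group unavailable

    def unavailable(group):
        unusable = sum(1 for usable in group.values() if not usable)
        return unusable / len(group) >= THRESHOLD

    def route(destination, groups):
        target_group, target_resource = destination
        group = groups[target_group]
        if not unavailable(group) or group[target_resource]:
            return [destination]          # send to the designated resource
        # Unusable resource in an unavailable group: send to usable resources
        # belonging to other, available groups.
        return [(name, res)
                for name, grp in groups.items()
                if name != target_group and not unavailable(grp)
                for res, usable in grp.items() if usable]

    groups = {"A": {"a1": False, "a2": False, "a3": True},
              "B": {"b1": True, "b2": True}}
    print(route(("A", "a1"), groups))   # rerouted to group B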

US Pat. No. 10,338,970

MULTI-PLATFORM SCHEDULER FOR PERMANENT AND TRANSIENT APPLICATIONS

INTERNATIONAL BUSINESS MA...

1. A method of scheduling assignment of computer resources to a plurality of applications, the method comprising:determining, by a processor, shares of the computer resources assigned to each application during a first period;
determining, by the processor, shares of the computer resources assigned to each application during a second period that occurs after the first period;
determining, by the processor, an imbalance value for each application that is based on a sum of the shares assigned to the corresponding application over both periods;
determining, by the processor, a first set of the imbalance values that are below a range; and
assigning a first container to a first application among the applications associated with the lowest imbalance value of the first set to satisfy a first request of the first application for computer resources when the first container is available with enough computer resources to satisfy the first request.
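The imbalance computation and container assignment can be sketched as follows; treating the claimed range as a single lower bound is a simplification for the example.

    def pick_application(shares_period1, shares_period2, lower_bound):
        # Imbalance value = sum of an application's shares over both periods.
        imbalance = {app: shares_period1[app] + shares_period2[app]
                     for app in shares_period1}
        below = {app: value for app, value in imbalance.items() if value < lower_bound}
        return min(below, key=below.get) if below else None

    period1 = {"etl": 0.4, "web": 0.1, "batch": 0.2}
    period2 = {"etl": 0.5, "web": 0.1, "batch": 0.3}
    print(pick_application(period1, period2, lower_bound=0.6))   # 'web' gets the container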

US Pat. No. 10,338,969

MANAGING A VIRTUALIZED APPLICATION WORKSPACE ON A MANAGED COMPUTING DEVICE

VMware, Inc., Palo Alto,...

1. A method for managing a virtualized application workspace on a managed computing device, the method comprising:authenticating a first user and verifying that the first user belongs to a first group maintained by a directory service, based on first user credentials, wherein belonging to a group indicates entitlements;
obtaining the entitlements associated with the authenticated user from the directory service, the entitlements including one or more indications of software available to the authenticated user, and the entitlements further including a list of application family objects associated with the authenticated user, the list comprising a first application family object comprising a first set of rules wherein the first set of rules determines that the first application family object resolves to a local application object that is local to the managed computing device, the list further comprising a second application family object comprising a second set of rules wherein the second set of rules determines that the second application family object resolves to a remote application object that is remote to the managed computing device;
resolving the first set of rules of the first application family object and the second set of rules of the second application family object so as to obtain a result vector that includes indications of application objects that are available to the authenticated user, wherein the result vector comprises (a) the local application object, and (b) the remote application object;
processing the result vector to identify application objects for which installation operations are to be performed, wherein the processing comprises determining that (a) the local application object is local to the managed computing device, or (b) that the remote application object is remote to the managed computing device; and
subsequent to the processing, performing the installation operations to install one or more of the identified application objects on the managed computing device.

US Pat. No. 10,338,968

DISTRIBUTED NEUROMORPHIC PROCESSING PERFORMANCE ACCOUNTABILITY

SAS INSTITUTE INC., Cary...

1. An apparatus comprising a processor and a storage to store instructions that, when executed by the processor, cause the processor to perform operations comprising:receive, at a portal, and from a remote device via a network, a request to repeat an earlier performance, described in a first instance log of multiple instance logs stored in one or more federated areas, of a first job flow defined in a first job flow definition of multiple job flow definitions stored in the one or more federated areas, or to provide objects to the remote device to enable the remote device to repeat the earlier performance, wherein:
the portal is provided on the network to control access by the remote device to the one or more federated areas via the network;
the one or more federated areas are maintained within one or more storage devices to store at least the multiple job flow definitions and the multiple instance logs; and
the request specifies a first instance log identifier of the first instance log;
use the first instance log identifier to retrieve the first instance log from among the multiple instance logs stored in the one or more federated areas, wherein the first instance log comprises a first job flow identifier of the first job flow definition, a task routine identifier for each task routine used to perform a task specified in the first job flow definition, and a data object identifier for each data object associated with the earlier performance of the first job flow;
analyze the first job flow definition to determine whether performances of the first job flow comprise use of a neural network;
in response to a determination that performances of the first job flow do comprise use of the neural network, analyze an object associated with the first job flow to determine whether the neural network was trained to perform an analytical function using a training data set derived from at least one performance of a second job flow defined by a second job flow definition stored in the one or more federated areas, wherein:
the object associated with the first job flow comprises at least one of the first job flow definition, the first instance log, or a task routine executed during the earlier performance of the first job flow; and
performances of the second job flow comprise performances of the analytical function in a manner that does not use any neural network; and
in response to the request comprising a request to repeat the earlier performance, in response to a determination that performances of the first job flow do comprise use of the neural network, and in response to a determination that the neural network was trained using the training data set derived from at least one performance of the second job flow, the processor is caused to perform operations comprising:
repeat the earlier performance of the first job flow with one or more data sets associated with the earlier performance of the first job flow, wherein the repetition of the earlier performance of the first job flow comprises execution, by the processor, of each task routine identified by a task routine identifier in the first instance log;
perform the second job flow with the one or more data sets associated with the earlier performance of the first job flow, wherein the performance of the second job flow comprises execution, by the processor, of a most recent version of a task routine to perform each task identified by a flow task identifier in the second job flow definition;
analyze an output of the repetition of the earlier performance of the first job flow relative to a corresponding output of the performance of the second job flow to determine a degree of accuracy of the first job flow in performing the analytical function relative to a predetermined threshold of accuracy to determine whether the second job flow is to be used in place of the first job flow to perform the analytical function; and
transmit at least the output of the repetition of the earlier performance of the first job flow and an indication of the degree of accuracy or the results of the comparison to the requesting device.

US Pat. No. 10,338,967

SYSTEMS AND METHODS FOR PREDICTING PERFORMANCE OF APPLICATIONS ON AN INTERNET OF THINGS (IOT) PLATFORM

Tata Consultancy Services...

1. A method for predicting performance of one or more applications being executed on an Internet of Things (IoT) platform, comprising:obtaining, by said IoT platform, at least one of (i) one or more user requests and (ii) one or more sensor observations from one or more sensors;
identifying and invoking one or more Application Programming Interface (APIs) of said IoT platform based on said at least one of (i) one or more user requests and (ii) one or more sensor observations from said one or more sensors;
identifying, based on said one or more invoked APIs, one or more open flow requests and one or more closed flow requests of one or more systems connected to said IoT platform;
identifying one or more workload characteristics of said one or more open flow requests and said one or more closed flow requests to obtain one or more segregated open flow requests and one or more segregated closed flow requests, and a combination of open and closed flow requests;
executing one or more performance tests with said one or more invoked APIs based on said one or more workload characteristics;
concurrently measuring utilization of one or more resources of said one or more systems and computing one or more service demands of each of said one or more resources;
executing said one or more performance tests with said one or more invoked APIs based on a volume of workload characteristics pertaining to said one or more applications; and
predicting, using a queuing network, performance of said one or more applications for said volume of workload characteristics.

US Pat. No. 10,338,966

INSTANTIATING CONTAINERS WITH A UNIFIED DATA VOLUME

Red Hat, Inc., Raleigh, ...

1. A system comprising:a first host including a first memory;
a second memory located across a network from the first host;
one or more processors;
an orchestrator including a scheduler and a container engine;
the orchestrator executing on the one or more processors to:
request, by the scheduler, a first persistent storage to be provisioned in the second memory based on at least one of an image file and metadata associated with the image file, wherein the first persistent storage is mounted to the first host;
copy, by the container engine, the image file to the first memory as a lower system layer of an isolated guest based on the image file, wherein the lower system layer is write protected;
construct, by the container engine, an upper system layer in the first persistent storage based on the image file, wherein a baseline snapshot is captured of the first persistent storage including the upper system layer after the upper system layer is constructed; and
launch the isolated guest, wherein the isolated guest is attached to the lower system layer and the upper system layer.

US Pat. No. 10,338,965

MANAGING A SET OF RESOURCES

Hewlett Packard Enterpris...

1. A cache controller for managing resources, comprising:a first tracker structure, in a coherent request tracker, having first tracker entries each statically associated with one of a first set of transaction identifiers that are assigned to the coherent request tracker for use by the coherent request tracker to manage coherent requests issued to a processor external to the cache controller;
a second tracker structure, in a victim request tracker, having second tracker entries each dynamically associable with one of a second variable set of the transaction identifiers that are assigned to the victim request tracker for use by the victim request tracker to manage writes to a memory that are associated with eviction of entries from a directory cache;
a resource sharing mechanism to, in response to determining that no transaction identifier in the second variable set of the transaction identifiers is available to process a resource request from the cache controller, borrow an idle transaction identifier associated with a first tracker entry among the first tracker entries, associate the borrowed transaction identifier with a second tracker entry among the second tracker entries, and lock the first tracker entry; and
the victim request tracker to assign the borrowed transaction identifier to the resource request.

US Pat. No. 10,338,964

COMPUTING NODE JOB ASSIGNMENT FOR DISTRIBUTION OF SCHEDULING OPERATIONS

Capital One Services, LLC...

1. A method, comprising:receiving, by a computing node included in a set of computing nodes, a corresponding set of heartbeat messages that originated at the set of computing nodes,
wherein the set of heartbeat messages is related to selecting a scheduler computing node, of the set of computing nodes, for scheduling a set of jobs associated with the set of computing nodes,
wherein each heartbeat message indicates, for a corresponding computing node of the set of computing nodes:
a number of times that the corresponding computing node has been selected as the scheduler computing node, and
whether the corresponding computing node is currently executing a scheduler to schedule one or more jobs for the set of computing nodes;
determining, by the computing node and based on the set of heartbeat messages, that the computing node has been selected as the scheduler computing node the fewest number of times as compared to other computing nodes included in the set of computing nodes;
determining, by the computing node and based on the set of heartbeat messages, that the scheduler is not being executed by any computing node included in the set of computing nodes; and
selecting, by the computing node, the computing node as the scheduler computing node based on determining that the computing node has been selected as the scheduler computing node the fewest number of times and that the scheduler is not being executed by any computing node included in the set of computing nodes.
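A minimal sketch of the selection rule, assuming each heartbeat is a dictionary carrying the two fields named in the claim; ties on the selection count are resolved permissively here, which the claim does not address.

    def should_become_scheduler(my_node_id, heartbeats):
        if any(hb["running_scheduler"] for hb in heartbeats.values()):
            return False                  # a scheduler is already executing somewhere
        my_count = heartbeats[my_node_id]["times_selected"]
        return all(my_count <= hb["times_selected"]
                   for node, hb in heartbeats.items() if node != my_node_id)

    heartbeats = {
        "node-a": {"times_selected": 2, "running_scheduler": False},
        "node-b": {"times_selected": 5, "running_scheduler": False},
        "node-c": {"times_selected": 4, "running_scheduler": False},
    }
    print(should_become_scheduler("node-a", heartbeats))   # True: fewest selections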

US Pat. No. 10,338,962

USE OF METRICS TO CONTROL THROTTLING AND SWAPPING IN A MESSAGE PROCESSING SYSTEM

Microsoft Technology Lice...

1. A system comprising a processor and a memory communicatively coupled to the processor, the memory comprising instructions that, when executed by the processor, cause the system to:determine a workload of the system based on performance metrics of the system;
receive a message and, in response to receiving the message, create an instance of a process or route the message to an existing instance of a process;
idle the created instance or the existing instance based on the determined workload of the system;
determine a predicted duration for the idling based on the performance metrics;
based on the predicted duration, move the idled instance out of active memory and into secondary storage associated with the system; and
update the determined workload based on updated performance metrics and said moving the idled instance out of active memory.

US Pat. No. 10,338,961

FILE OPERATION TASK OPTIMIZATION

Google LLC, Mountain Vie...

1. A computer-implemented method comprising:receiving, by a data processing apparatus, a plurality of file operation requests, each file operation request representing a request to perform an operation on at least one file maintained in a distributed file system and corresponding to a priority and to an operation type;
indexing, by the data processing apparatus, the plurality of file operation requests at least by the priority corresponding to the requests as a priority index;
indexing, by the data processing apparatus, the plurality of file operation requests at least by the operation type corresponding to the requests as an operation index;
selecting, by the data processing apparatus, a particular file operation request from the priority index based on a level of priority of the particular file operation request;
in response to selecting the particular file operation request from the priority index based on the level of priority of the particular file operation request, selecting, by the data processing apparatus and based on the operation type of the particular file operation request selected from the priority index, a group of file operation requests from the operation index that have an operation type in common with the particular file operation request selected from the priority index; and
sending, by the data processing apparatus, a request to perform the group of file operation requests, including the particular operation request, that have the common operation type.
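The two indexes and the grouping step can be pictured with the sketch below, assuming requests are dictionaries, the indexes are plain hash maps, and a lower number means higher priority.

    from collections import defaultdict

    requests = [
        {"id": 1, "priority": 1, "op": "delete", "path": "/a"},
        {"id": 2, "priority": 3, "op": "copy",   "path": "/b"},
        {"id": 3, "priority": 2, "op": "delete", "path": "/c"},
    ]

    priority_index = defaultdict(list)
    operation_index = defaultdict(list)
    for req in requests:
        priority_index[req["priority"]].append(req)
        operation_index[req["op"]].append(req)

    # Select the highest-priority request, then group everything sharing its type.
    selected = priority_index[min(priority_index)][0]
    group = operation_index[selected["op"]]
    print([r["id"] for r in group])   # [1, 3] sent as one grouped 'delete' request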

US Pat. No. 10,338,960

PROCESSING DATA SETS IN A BIG DATA REPOSITORY BY EXECUTING AGENTS TO UPDATE ANNOTATIONS OF THE DATA SETS

International Business Ma...

1. A computer-implemented method for processing data sets in a data repository for storing at least unstructured data, the method comprising:providing agents, wherein each of the agents triggers processing of one or more of the data sets, wherein execution of each of the agents is triggered in response to one or more conditions assigned to that agent being met for a data set whose processing is triggered by the agent;
in response to executing a first agent of the agents to trigger processing of a first data set of the one or more of the data sets, wherein the execution is triggered by the one or more conditions of the first agent being met for the first data set,
updating, by the first agent, annotations of the first data set, thereby including a result of the processing of the first data set triggered by the first agent in the annotations; and
executing a second agent of the agents, wherein the execution is triggered by the updated annotations of the first data set meeting the one or more conditions of the second agent, wherein the execution of the second agent triggers a further processing of the first data set and a further updating of the annotations of the first data set by the second agent; and
in response to generation, by the first agent, of a second data set that is a derivative of the first data set,
updating, by the first agent, the annotations of the first data set to add a link that points to a storage location of the second data set; and
processing, by the second agent, the second data set.

US Pat. No. 10,338,959

TASK STATE TRACKING IN SYSTEMS AND SERVICES

Microsoft Technology Lice...

14. A computer-implemented method for decoupling task state tracking from an execution of a task, comprising:receiving, at a task-agnostic shared task completion platform, task registration data for a plurality of tasks from a plurality of task owner resources, wherein each task in the plurality of tasks is executable by a particular task owner resource and wherein the task registration data for each task in the plurality of tasks comprises at least one mandatory parameter that is necessary to be collected for execution of each task in the plurality of tasks and at least one optional parameter;
storing, in a storage device associated with the task-agnostic shared task state platform, the task registration data;
receiving an input from a user of the shared task state platform;
determining, based at least in part, on the received input and the task registration data, whether the received input is associated with at least one task;
when it is determined that the received input is associated with the at least one task, updating, by the task-agnostic shared task completion platform, a task state tracker that manages the state of the at least one task based, at least in part, on the received input and determines a subsequent action associated with processing the received input;
determining whether at least the at least one mandatory parameter for the at least one task has been collected; and
when it is determined that at least the at least one mandatory parameter data has been collected, transmitting the at least one mandatory parameter data, over a network, to at least one of the plurality of task owner resources that is determined to be responsible for executing the at least one task.

US Pat. No. 10,338,958

STREAM ADAPTER FOR BATCH-ORIENTED PROCESSING FRAMEWORKS

Amazon Technologies, Inc....

1. A system, comprising:one or more computing devices comprising one or more hardware processors and memory and configured to:
receive, from a client of a batch-oriented data processing service implementing a MapReduce programming model at a provider network, an indication of an input data stream comprising a plurality of data records that are to be batched for at least a first computation at the data processing service, wherein the plurality of data records are received from a plurality of data producers and retained during a time window by a multi-tenant stream management service of the provider network;
retrieve from the stream management service, based at least in part on respective sequence numbers associated with data records of the input data stream by the stream management service, a set of data records of the input data stream on which the first computation is to be performed during a particular processing iteration at the batch-oriented data processing service, wherein the set of data records comprises a plurality of retrieved data records of the input data stream;
save, in a persistent repository, metadata that corresponds to the set of data records of the input data stream, the metadata for the set of data records of the input data stream comprising an iteration identifier that uniquely identifies the particular processing iteration for the plurality of retrieved data records of the input data stream with respect to at least one of previous processing iterations completed prior to the particular processing iteration for sets of data records of input data streams and subsequent processing iterations to be performed subsequent to the particular processing iteration for other sets of data records of input data streams at the batch-oriented data processing service, and wherein the metadata for the set of data records of the input data stream further comprises two or more sequence numbers of respective records of the set of data records that indicate a range of sequence numbers of the set of data records of the input data stream on which the first computation is to be performed during the particular processing iteration for the plurality of retrieved records of the input data stream at the batch-oriented data processing service;
generate a batch representation of the set of data records in accordance with a data input format supported at the batch-oriented processing service;
schedule the particular processing iteration at selected nodes of the batch-oriented data processing service; and
execute the scheduled particular processing iteration at the selected nodes based on the saved metadata.

US Pat. No. 10,338,957

PROVISIONING KEYS FOR VIRTUAL MACHINE SECURE ENCLAVES

Intel Corporation, Santa...

1. At least one non-transitory machine accessible storage medium having code stored thereon, the code, when executed on a machine, causes the machine to:identify a launch of a particular virtual machine on a host computing system, wherein the particular virtual machine is launched to comprise a secure quoting enclave to perform an attestation of one or more aspects of the virtual machine;
generate, using a secure migration enclave hosted on the host computing system, a root key for the particular virtual machine, wherein the root key is to be used in association with provisioning the secure quoting enclave with an attestation key to be used in the attestation; and
register the root key with a virtual machine registration service.

US Pat. No. 10,338,956

APPLICATION PROFILING JOB MANAGEMENT SYSTEM, PROGRAM, AND METHOD

FUJITSU LIMITED, Kawasak...

1. An application profiling job management system, configured to compose and initiate one or more application profiling tasks for profiling a software application, the application profiling job management system comprising:processing hardware coupled to memory hardware, the memory hardware storing processing instructions which, when executed by the processing hardware, cause the processing hardware to perform a process comprising:
receiving user input information, wherein the user input information specifies a profiling target and profiling execution requirements, the profiling target including the software application;
storing, in a profiler specification storage area of the memory hardware, a profiler specification of each of a plurality of application profilers accessible to the application profiling job management system;
determining which of the plurality of application profilers satisfy the profiling execution requirements, based on one of respective profiler specifications, and for each of the application profilers determined to satisfy the profiling execution requirements, generating one or more application profiling tasks, each application profiling task specifying an application profiler from among the plurality of application profilers and the profiling target;
selecting one or more systems of hardware resources to perform each of the application profiling tasks; and
initiating execution of each of one or more application profiling tasks with the respective selected one or more systems of hardware resources.

US Pat. No. 10,338,955

SYSTEMS AND METHODS THAT EFFECTUATE TRANSMISSION OF WORKFLOW BETWEEN COMPUTING PLATFORMS

GoPro, Inc., San Mateo, ...

1. A system configured to effectuate transmission of workflow between computing platforms, the system comprising:one or more physical computer processors configured by computer readable instructions to:
receive, from a client computing platform, a first command, the first command including a proxy image representing a lower resolution version of an image stored on the client computing platform;
associate an identifier with the proxy image;
effectuate transmission of the identifier to the client computing platform, the identifier to be associated with the image stored on the client computing platform;
determine edits, at a remote computing platform, to the image based on the proxy image;
effectuate transmission of instructions from the remote computing platform to the client computing platform, the instructions including the identifier and causing the client computing platform to process the edits on the image; and
determine classifications of the image based on one or more objects recognized within the proxy image.

US Pat. No. 10,338,954

METHOD OF SWITCHING APPLICATION AND ELECTRONIC DEVICE THEREFOR

Samsung Electronics Co., ...

1. An electronic device, comprising:a display; and
at least one processor that is configured to:
control the display to display an execution screen of an application,
control the display to display a reduced size object corresponding to the application based on a reducing event generated for the execution screen,
control the display to display the execution screen of the application in an area of the display if a hovering input is detected on the reduced size object corresponding to the application, and
control the display to display the reduced size object corresponding to the application if the hovering input is released.

US Pat. No. 10,338,953

FACILITATING EXECUTION-AWARE HYBRID PREEMPTION FOR EXECUTION OF TASKS IN COMPUTING ENVIRONMENTS

Intel Corporation, Santa...

1. An apparatus comprising:detection/reception logic to detect a software application being hosted by a computing device, wherein the software application is to facilitate one or more tasks that are capable of being executed by a graphics processor of the computing device;
preemption selection logic to select one of a fine grain preemption or a coarse grain preemption based on comparison of a first time estimation and a second time estimation relating to the one or more tasks at thread level execution and work group level execution, respectively, the preemption selection logic to select the one of the fine grain preemption or the coarse grain preemption in response to the detection/reception logic detecting a preemption request while the fine grain preemption and the coarse grain preemption are not being performed;
preemption initiation and application logic to initiate performance of the selected one of the fine grain preemption and the coarse grain preemption; and
watermark time logic to set a timer to pause the work group level execution to wait to receive a refined set of the second time estimation, wherein the preemption selection logic is operable to select the fine grain preemption if the timer expires prior to receiving the refined second time estimation set, wherein the preemption selection logic is further operable to select the coarse grain preemption based on the refined second time estimation set.

US Pat. No. 10,338,952

PROGRAM EXECUTION WITHOUT THE USE OF BYTECODE MODIFICATION OR INJECTION

International Business Ma...

1. A processor-implemented method for registering a plurality of callbacks, the method comprising:registering each of a plurality of callback functions in a virtual machine tool interface within a virtual machine to a list of callback functions for an event based on a plurality of event context elements associated with each callback function;
in response to the event occurring, generating a local frame for each registered callback function within the list of callback functions for the determined event; and
executing each registered callback function, concurrently, based on each generated local frame associated with each at least one registered callback function.

US Pat. No. 10,338,951

VIRTUAL MACHINE EXIT SUPPORT BY A VIRTUAL MACHINE FUNCTION

Red Hat, Inc., Raleigh, ...

1. A method of securing a state of a guest, comprising:determining, by a virtual machine function within a guest running on a virtual machine, a guest central processing unit (CPU) state that is stored in one or more registers of a CPU and associated with the guest;
encrypting, by the virtual machine function, a first portion of the guest CPU state that is not used to execute a privileged instruction being attempted by the guest;
sending, by the virtual machine function, one or more requests based on the privileged instruction to a hypervisor, the virtual machine and the hypervisor running on a common host machine; and
after execution of the privileged instruction is completed, decrypting, by the virtual machine function, the first portion of the guest CPU state.

US Pat. No. 10,338,950

SYSTEM AND METHOD FOR PROVIDING PREFERENTIAL I/O TREATMENT TO DEVICES THAT HOST A CRITICAL VIRTUAL MACHINE

Veritas Technologies LLC,...

1. A computer-implemented method comprising:generating a mapping of a group of virtual machine disk blocks to a group of corresponding offsets in a logical unit number (LUN) of a storage unit, wherein the LUN is one of a plurality of LUNs of the storage unit,
each corresponding offset of the group of corresponding offsets corresponds to a corresponding virtual machine disk block of the group of virtual machine disk blocks,
the mapping identifies a plurality of universally unique identifiers (UUIDs),
each UUID of the plurality of UUIDs uniquely identifies the corresponding virtual machine disk block that begins at the corresponding offset, and
each UUID is stored at a fixed offset in a related LUN of the plurality of LUNs;
detecting that an input/output (I/O) operation is directed to a specific LUN among the plurality of LUNs in the storage unit;
based on the mapping, determining a specific virtual machine from which the I/O operation originated, wherein
the specific virtual machine is one of a plurality of virtual machines, and
the determining comprises identifying a specific UUID of the plurality of UUIDs that is associated with the related LUN, wherein
the identifying comprises reading data at the fixed offset in the related LUN;
identifying a priority level of the specific virtual machine, wherein
the priority level is identified based on the mapping; and
assigning the I/O operation a matching quality rating based on the priority level, wherein
the matching quality rating represents a quality of one or more shared computing resources available to the specific virtual machine.

US Pat. No. 10,338,949

VIRTUAL TRUSTED PLATFORM MODULE FUNCTION IMPLEMENTATION METHOD AND MANAGEMENT DEVICE

Huawei Technologies Co., ...

1. A virtual trusted platform module (vTPM) function implementation method for use at an exception level EL3 of a processor that uses an ARM V8 architecture, the method comprising:generating, according to requirements of one or more virtual machines (VMs), one or more vTPM instances corresponding to each VM, and storing the generated one or more vTPM instances in preset secure space, wherein each vTPM instance has a dedicated instance communication queue for a VM corresponding to itself to use, and a physical address is allocated to each instance communication queue; and
interacting with a virtual machine monitor (VMM) and the VM, for causing the VM to acquire a VM communication queue virtual address, in VM virtual address space, corresponding to a communication queue physical address of the vTPM instance, by:
sending a first query request to an EL2, wherein the first query request comprises the communication queue physical address of the vTPM instance, for causing the EL2 to determine, according to the first query request and a mapping table that is between a physical address and an intermediate physical address and is stored at the EL2, an intermediate physical address corresponding to the communication queue physical address of the vTPM instance, and send the intermediate physical address to the EL3;
receiving the intermediate physical address sent by the EL2; and
sending a second query request to an EL1, wherein the second query request comprises the intermediate physical address, for causing the EL1 to determine, according to the second query request and a mapping table that is between an intermediate physical address and a virtual address and is stored at the EL1, a virtual address corresponding to the intermediate physical address,
wherein the determined virtual address is the VM communication queue virtual address, and
wherein the VM communicates with a vTPM instance communication queue by using the VM communication queue virtual address.

US Pat. No. 10,338,948

METHOD AND DEVICE FOR MANAGING EXECUTION OF SCRIPTS BY A VIRTUAL COMPUTING UNIT

Wipro Limited, Bangalore...

1. A method for managing execution of scripts by a virtual computing unit, on a host computing device, comprising:configuring, by a host computing device, one or more ports for establishing a communication interface between the host computing device and a virtual computing unit, wherein the virtual computing unit is configured in the host computing device;
providing, by the host computing device, one or more scripts to be executed by the virtual computing unit and one or more parameters associated with the one or more scripts to the virtual computing unit via the communication interface, wherein the virtual computing unit executes the one or more scripts upon locating the one or more scripts from an associated memory location;
receiving, by the host computing device, during the execution of the one or more scripts, real time status of the execution of the one or more scripts from the virtual computing unit via the communication interface, wherein the real time status comprises information of successfully executed scripts, information of unsuccessfully executed scripts, a number of exceptions, type of exceptions, a number of errors, and type of errors; and
instructing, by the host computing device, the virtual computing unit to complete execution of unsuccessfully executed scripts upon handling each of the exceptions and errors, wherein the exceptions and the errors are handled based on the one or more parameters, and wherein the exceptions and the errors are handled based on at least one of priority, availability of data and severity associated with each of the scripts.

US Pat. No. 10,338,947

EXTENT VIRTUALIZATION

Microsoft Technology Lice...

1. A method, comprising:employing at least one processor configured to execute computer-executable instructions stored in memory to perform the following acts:
identifying a first set of one or more contiguous storage blocks to be allocated for storage of a master-image virtual hard disk;
extending the first set of one or more contiguous storage blocks by one or more additional storage blocks reserved for patches to the master-image virtual hard disk different from updates to the master-image virtual hard disk that are represented by one or more differencing virtual hard disks, wherein the one or more differencing virtual hard disks are dependent on the master-image virtual hard disk;
allocating space in a physical file system for the extended first set of contiguous storage blocks for the master-image virtual hard disk and for the patches to the master-image virtual hard disk; and
allocating additional space in the physical file system for a second set of contiguous storage blocks for the one or more differencing virtual hard disks, wherein the additional space in the physical file system is physically contiguous with and after the space in the physical file system.

US Pat. No. 10,338,946

COMPOSABLE MACHINE IMAGE

Amazon Technologies, Inc....

1. A method for executing a computer system image on a computing node, comprising:receiving from a user data indicative of a selection of a specification file from a plurality of specification files, wherein the plurality of specification files are defined by a plurality of other users, wherein the user selects one of the plurality of specification files;
obtaining, based on the data indicative of the selection, the specification file, wherein the specification file comprises references to components of the computer system image, the components including a base system image and a resource, the specification file also comprising at least a signature associated with the resource for validating the specification file;
preparing the computer system image based on the components specified by the specification file by at least ensuring that the resource is incorporated into the computer system image; and
executing the computer system image on the computing node.

US Pat. No. 10,338,945

HETEROGENEOUS FIELD DEVICES CONTROL MANAGEMENT SYSTEM BASED ON INDUSTRIAL INTERNET OPERATING SYSTEM

KYLAND TECHNOLOGY CO., LT...

1. A heterogeneous field devices control management system based on an industrial internet operating system, wherein the heterogeneous field devices control management system comprises a plurality of servers, each of the plurality of servers comprises a memory storing first instructions, a physical communication interface and at least one processor; wherein a virtual machine management layer, a real-time virtual machine, and a non-real-time virtual machine are operated on each of the plurality of servers, and each real-time virtual machine and each non-real-time virtual machine are respectively installed with a plurality of service instances; and wherein the at least one processor is configured to read and execute the first instructions to:control the virtual machine management layer to perform a configuration, operating scheduling and hardware access management of the real-time virtual machine and the non-real-time virtual machine;
control the real-time virtual machine to communicate with heterogeneous field devices, and to control the heterogeneous field devices to perform corresponding operations;
control the non-real-time virtual machine to communicate with an off-site device and process a specified service without a real-time requirement; and
control the real-time virtual machine and the non-real-time virtual machine to:
for any service instance, ascertain whether or not the service instance has a bound physical communication interface, according to a one-to-one binding relationship between a service instance and a physical communication interface;
when the service instance has a bound physical communication interface, transmit information of the service instance to a destination service instance via the bound physical communication interface; and
when the service instance does not have a bound physical communication interface, ascertain a server where the destination service instance is located by means of logical addressing, upon sending the information of the service instance to the destination service instance; submit the information of the service instance to an internal transmission queue when the service instance and the destination service instance are in a same server, and send the information of the service instance to the destination service instance via the internal transmission queue; or call an interface driver to transmit the information of the service instance to the destination service instance, when the service instance and the destination service instance are in different servers.

US Pat. No. 10,338,944

AUTOMATIC DISCOVERY AND CLASSIFICATION OF JAVA VIRTUAL MACHINES RUNNING ON A LOGICAL PARTITION OF A COMPUTER

INTERNATIONAL BUSINESS MA...

1. A method of automatic discovery and classification of Java virtual machines on a logical partition (LPAR) of a computing system, the computing system comprising a main storage memory comprising at least volatile memory, wherein the volatile memory includes a common collector and a plurality of address space control blocks (ASCB), the plurality of ASCBs comprising at least one ASCB for each address space of a plurality of address spaces of the LPAR, wherein the common collector comprises a data space in system memory, the method comprising:constructing a Service Request Block (SRB) routine in the common collector along with a parameter list, wherein the SRB routine is independent from at least one system service, wherein the at least one system service includes a network service;
examining, via the SRB routine, each ASCB of the plurality of ASCBs to identify one or more address spaces of the plurality of address spaces of the LPAR that are eligible to operate a Java virtual machine (JVM), wherein examining each ASCB includes examining each ASCB for flags that indicate that a corresponding address space is dubbed into UNIX System Services (USS) and that the corresponding address space of the ASCB is dispatchable;
retrieving CSVINFO, by a JVM management system via a CSVINFO macro call to each of the plurality of ASCBs on the LPAR of the computing system, in a predetermined interval;
automatically discovering, through the CSVINFO retrieved, one or more JVMs running on the LPAR of the computing system; and
automatically classifying, through a plurality of Content Directory Entries examined using the CSVINFO macro call, the one or more JVMs discovered;
wherein the retrieving and the discovering comprises:
for each ASCB of the plurality of ASCBs:
calling the SRB routine to retrieve CSVINFO from the ASCB, wherein the CSVINFO from the ASCB includes a list of modules loaded on the ASCB; and
discovering a JVM when the CSVINFO from the ASCB includes one or more JVM modules by at least detecting by the SRB routine, whether a JVM module named libjvm.so is present in the list of modules, wherein the SRB routine is configured to return a path name to the libjvm.so module in response to detecting that the libjvm.so module is present.
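
The discovery step reduces to scanning each eligible address space's loaded-module list for libjvm.so and recording its path. The sketch below approximates CSVINFO as a plain list of module records; the field names (dubbed_into_uss, dispatchable, module.path) are assumptions.

    # Approximation of the per-ASCB module-list check described in the claim.
    def discover_jvms(ascbs, retrieve_csvinfo):
        jvms = []
        for ascb in ascbs:
            if not (ascb.dubbed_into_uss and ascb.dispatchable):       # eligibility flags
                continue
            for module in retrieve_csvinfo(ascb):                      # modules loaded in the address space
                if module.name == "libjvm.so":
                    jvms.append((ascb.address_space_id, module.path))  # path name to the JVM module
                    break
        return jvms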

US Pat. No. 10,338,943

TECHNIQUES FOR EMULATING MICROPROCESSOR INSTRUCTIONS

SYMANTEC CORPORATION, Mo...

1. A computer-implemented method for emulating microprocessor instructions, the method comprising:identifying, in a computing device, an instruction of a first software application using a second software application that emulates instructions of a type of microprocessor, wherein the instruction includes an instruction prefix and an operation code;
adding, in the computing device, an additional bit to a length of the operation code of the instruction to create an extended operation code, wherein the additional bit accounts for a program state set by the instruction prefix and wherein the extended operation code, including the additional bit, is represented in an operation code table of the second software application; and
emulating, in the computing device, execution of the instruction using the second software application and the extended operation code.
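
The extra bit effectively widens the dispatch index so that a prefixed and an unprefixed form of the same operation code resolve to different table entries. A minimal, assumed layout (prefix state as the high bit of a 9-bit index) is sketched below; the attribute names are illustrative.

    # Hypothetical dispatch using an extended opcode: (prefix_state << 8) | opcode.
    def emulate(instruction, opcode_table, cpu_state):
        prefix_bit = 1 if instruction.prefix_sets_state else 0
        extended_opcode = (prefix_bit << 8) | instruction.opcode   # additional bit added to the opcode length
        handler = opcode_table[extended_opcode]                    # table covers both prefixed and plain variants
        handler(cpu_state, instruction.operands)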

US Pat. No. 10,338,942

PARALLEL PROCESSING OF DATA

Google LLC, Mountain Vie...

1. A computer-implemented method, comprising:executing a deferred, combined parallel operation, which is included in a dataflow graph that comprises deferred parallel data objects and deferred, combined parallel operations corresponding to a data parallel pipeline, to produce materialized parallel data objects corresponding to deferred parallel data objects, wherein the executing comprises:
determining an estimated size of data associated with the deferred, combined parallel operation being executed;
determining that the estimated size of data associated with the deferred, combined parallel operation does not exceed a threshold size based at least on accessing annotations in the dataflow graph that represent an estimate of the size of the data associated with the deferred, combined parallel operation; and
in response to determining that the estimated size does not exceed the threshold size, executing the deferred, combined parallel operation as a local, sequential operation.
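
The execution choice is a size test against an annotation carried on the dataflow-graph node: small inputs run in-process as a sequential loop, everything else goes to the parallel runtime. The threshold value, node attributes, and helper names in this Python sketch are assumptions.

    # Illustrative executor for a deferred, combined parallel operation.
    LOCAL_THRESHOLD_BYTES = 64 * 1024 * 1024      # assumed threshold size

    def execute(op):
        estimated_size = op.annotations["estimated_input_bytes"]   # size estimate stored in the graph
        if estimated_size <= LOCAL_THRESHOLD_BYTES:
            return run_locally_sequential(op)     # small data: skip the overhead of a distributed job
        return run_as_distributed_job(op)

    def run_locally_sequential(op):
        results = []
        for record in op.read_input():            # materialize the deferred objects in-process
            results.extend(op.apply(record))
        return results

    def run_as_distributed_job(op):
        raise NotImplementedError("hand off to the distributed runtime (outside this sketch)")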

US Pat. No. 10,338,941

ADJUSTING ADMINISTRATIVE ACCESS BASED ON WORKLOAD MIGRATION

International Business Ma...

1. A computer-implemented method of migrating a workload from a source system to a target system, the method comprising:detecting migration of the workload from the source system to the target system, wherein the workload includes one or more virtual machines, and wherein the source system is an unallocated server and the target system is a server allocated to a system pool;
scanning a hardware management console (HMC) on the source system to determine an identity of an administrator associated with the workload;
adjusting access rights of the identified administrator to an HMC on the target system to provide access to the migrated workload based, at least in part, on access rights of the identified administrator to the source system;
adjusting access rights of the identified administrator to the HMC on the source system based on the migration of the workload from the source system to the target system, wherein adjusting access rights of the identified administrator to the HMC on the source system comprises:
determining whether to revoke access rights of the identified administrator to the source system based on whether or not the identified administrator owns workloads other than the migrated workload executing on the source system,
granting the administrator access rights to the server allocated to the system pool consistent with access rights of the administrator to the unallocated server,
scanning a management console of the system pool to determine categories of policies available in the system pool,
granting the administrator access rights with respect to policies within the categories that are analogous to policies applicable to the workload on the unallocated server,
revoking access rights of the administrator to tasks that conflict with the active policies defined for the system pool within the categories, and
upon determining that the administrator no longer owns a workload on the unallocated server subsequent to the migration, revoking access rights of the administrator to the unallocated server; and
executing the migrated workload on the target system based on the adjusted access rights of the identified administrator.
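
Read as pseudocode, the source-side adjustment grants matching rights on the pool server, aligns policy and task rights per category, and revokes source access only when the administrator owns no other workload there. The console API below is entirely hypothetical.

    # Illustrative access-rights adjustment when a workload moves into a system pool.
    def adjust_access(admin, workload, source_hmc, pool_console):
        pool_console.grant(admin, rights=source_hmc.rights_of(admin))     # mirror rights from the unallocated server
        for category in pool_console.policy_categories():
            if category.analogous_to(workload.policies_on(source_hmc)):
                pool_console.grant_policy_rights(admin, category)         # analogous policy rights
            pool_console.revoke_conflicting_tasks(admin, category)        # tasks conflicting with active pool policies
        remaining = [w for w in source_hmc.workloads_of(admin) if w != workload]
        if not remaining:
            source_hmc.revoke_all(admin)          # admin no longer owns anything on the source system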

US Pat. No. 10,338,940

ADJUSTING ADMINISTRATIVE ACCESS BASED ON WORKLOAD MIGRATION

International Business Ma...

1. A non-transitory computer-readable storage medium storing an application, which, when executed on a processor, performs an operation of migrating a workload from a source system to a target system, the operation comprising:detecting migration of the workload from the source system to the target system, wherein the workload includes one or more virtual machines, and wherein the source system is an unallocated server and the target system is a server allocated to a system pool;
scanning a hardware management console (HMC) on the source system to determine an identity of an administrator associated with the workload;
adjusting access rights of the identified administrator to an HMC on the target system to provide access to the migrated workload based, at least in part, on access rights of the identified administrator to the source system;
adjusting access rights of the identified administrator to the HMC on the source system based on the migration of the workload from the source system to the target system, wherein adjusting access rights of the identified administrator to the HMC on the source system comprises:
determining whether to revoke access rights of the identified administrator to the source system based on whether or not the identified administrator owns workloads other than the migrated workload executing on the source system,
granting the administrator access rights to the server allocated to the system pool consistent with access rights of the administrator to the unallocated server,
scanning a management console of the system pool to determine categories of policies available in the system pool,
granting the administrator access rights with respect to policies within the categories that are analogous to policies applicable to the workload on the unallocated server,
revoking access rights of the administrator to tasks that conflict with the active policies defined for the system pool within the categories, and
upon determining that the administrator no longer owns a workload on the unallocated server subsequent to the migration, revoking access rights of the administrator to the unallocated server; and
executing the migrated workload on the target system based on the adjusted access rights of the identified administrator.

US Pat. No. 10,338,939

SENSOR-ENABLED FEEDBACK ON SOCIAL INTERACTIONS

Bose Corporation, Framin...

1. A computer-implemented method comprising:receiving location information associated with a location of a user-device, the location information comprising information representing a number of individuals interacting with a user of the user-device at the location;
receiving, from one or more sensor devices, user-specific information about the user associated with the user device;
estimating, by one or more processors, based on (i) the user-specific information and (ii) the location information, a set of one or more parameters indicative of social interactions of the user, the one or more parameters indicating, at least in part,
(i) a participation metric indicative of a relative amount of time the user speaks in a conversation with the number of individuals, and
(ii) a social outing metric indicative of a measure of spread of the user's physical locations;
determining that the relative amount of time the user speaks in the conversation with the number of individuals satisfies a first threshold condition;
determining that the measure of spread of the user's physical locations satisfies a second threshold condition;
responsive to determining that the relative amount of time the user speaks in the conversation with the number of individuals satisfies the first threshold condition, and determining that the measure of spread of the user's physical locations satisfies the second threshold condition, generating a signal representing informational output regarding the user's participation in the conversation and participation in social outings; and
presenting the informational output on a user-interface displayed on an output device, the user-interface configured to provide feedback to the user regarding the user's participation in the conversation and participation in social outings.
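
One plausible reading of the two metrics: a participation ratio of user speaking time to total conversation time, and a spread computed as the mean distance of visited positions from their centroid. The Python sketch below uses those definitions with arbitrary thresholds; none of these specifics are fixed by the claim.

    # Illustrative computation of the participation and social-outing metrics.
    import statistics

    def participation_metric(user_speaking_seconds, total_conversation_seconds):
        return user_speaking_seconds / total_conversation_seconds

    def outing_metric(locations):
        # Spread of visited (x, y) positions: mean distance from the centroid (assumed measure).
        cx = statistics.mean(x for x, _ in locations)
        cy = statistics.mean(y for _, y in locations)
        return statistics.mean(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in locations)

    def feedback(speaking_s, total_s, locations, min_participation=0.2, min_spread=500.0):
        if (participation_metric(speaking_s, total_s) >= min_participation
                and outing_metric(locations) >= min_spread):
            return "You have been active in conversations and in social outings."   # informational output
        return None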

US Pat. No. 10,338,938

PRESENTING ELEMENTS BASED ON CONFIGURATION OF DEVICE

Lenovo (Singapore) Pte. L...

1. An apparatus, comprising:a touch-enabled display;
a processor; and
storage accessible to the processor and bearing instructions executable by the processor to:
make a first determination that a device is being or has been transitioned between a laptop configuration and a tablet configuration; and
at least in part based on the first determination, make a second determination pertaining to at least one change in presentation of an element presented on the touch-enabled display relative to its presentation prior to the first determination, the element associated with an application, the change in presentation being from a first presentation to a second presentation;
wherein the instructions are executable by the processor to make the second determination at least in part based on a third determination that the application has launched a threshold number of launches each following a transition to one of the laptop configuration and tablet configuration.

US Pat. No. 10,338,937

MULTI-PANE GRAPHICAL USER INTERFACE WITH DYNAMIC PANES TO PRESENT WEB DATA

Red Hat, Inc., Raleigh, ...

1. A method comprising:receiving a selection of one or more system administration data items in a data item pane in a web application graphical user interface (GUI), the web application GUI comprising the data item pane and a multi-selection pane in a window of the web application GUI, the data item pane comprising a set of system administration data items of the web application;
determining a number of the one or more system administration data items from the set of system administration data items of the web application that have been selected in the data item pane;
responsive to determining that a single system administration data item of the set of system administration data items has been selected in the data item pane, sliding, by a processing device, another pane with a display of permission information for the selected system administration data item in a horizontal direction away from the data item pane within the window of the web application GUI to cover the multi-selection pane in the window, wherein content of the data item pane is fully visible in the window of the web application GUI;
responsive to determining that a plurality of the system administration data items of the set of system administration data items have been selected in the data item pane, providing one or more actions to remove permissions from each system administration data item of the selected plurality of system administration data items, the one or more actions being provided in the multi-selection pane;
receiving a request to perform an action of the one or more actions for the selected plurality of system administration data items in the data item pane to remove a permission from each of the plurality of system administration data items selected in the data item pane from the window of the web application GUI, the request corresponding to a selection of the action at the multi-selection pane; and
responsive to receiving the request, removing, by the processing device, the permission from each of the plurality of system administration data items selected in the data item pane from the window of the web application GUI.

US Pat. No. 10,338,936

METHOD FOR CONTROLLING SCHEDULE OF EXECUTING APPLICATION IN TERMINAL DEVICE AND TERMINAL DEVICE IMPLEMENTING THE METHOD

Sony Corporation, Tokyo ...

1. A terminal device comprising:a memory configured to store a first application and a second application, the first application and the second application being applications that have data related with the first application and the second application respectively preserved while in a standby mode; and
circuitry, comprising a processor, configured to
implement a first timer associated with the first application;
implement a second timer different from the first timer;
determine whether the first timer wakes up the processor;
in a case where the circuitry determines the first timer wakes up the processor, wake up the processor from the standby mode when the first timer measures a first predetermined amount of elapsed time and, after the processor is woken up, execute the first application and the second application on the processor, wherein the second application does not have a function to wake up the processor from the standby mode in the case where the circuitry determines the first timer wakes up the processor; and
in a case where the circuitry determines the first timer does not wake up the processor, wake up the processor from the standby mode when the second timer measures a second predetermined amount of elapsed time and, after the processor is woken up, execute the first application and the second application on the processor, wherein neither the first application nor the second application has a function to wake up the processor from the standby mode in the case where the circuitry determines the first timer does not wake up the processor.
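
The arbitration boils down to choosing which timer is allowed to wake the processor and then running both preserved applications on that single wake-up. The timer and processor interfaces in this sketch are assumptions.

    # Illustrative standby wake-up scheduling for two applications preserved in standby.
    def schedule_wakeup(first_timer_wakes_processor, first_timer, second_timer, processor, app1, app2):
        timer = first_timer if first_timer_wakes_processor else second_timer
        def on_expiry():
            processor.wake_from_standby()     # only the selected timer wakes the processor
            app1.run()
            app2.run()                        # the other application runs on the same wake-up
        timer.schedule(timer.interval, on_expiry)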

US Pat. No. 10,338,935

CUSTOMIZING PROGRAM LOGIC FOR BOOTING A SYSTEM

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for generating a customized program logic, the method comprising:determining, by a reporting unit of a target system, one or more hardware devices operatively connected with the target system;
sending, by the reporting unit, a first list of identifiers of the determined hardware devices to a server system;
receiving, by the server system, the first list of device identifiers;
automatically selecting from a set of drivers, by the server system, for each of the device identifiers in the received first list, at least one driver operable to control the identified device, thereby generating a sub-set of said set of drivers;
retrieving, by the server system, a core program logic being free of any drivers of the target system;
automatically complementing the core program logic with said driver sub-set to generate the customized program logic; and
deploying the customized program logic to the target system for loading of the customized program logic into a memory of the target system, the customized program logic configured to use the sub-set of drivers for downloading an operating system to the target system.
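
On the server side this is a lookup-and-merge: match each reported device identifier against a driver catalog, then splice the resulting sub-set into a driver-free core program logic before deployment. The data structures below are assumptions.

    # Illustrative driver selection and customization of the boot program logic.
    def build_customized_program(device_ids, driver_catalog, core_program_logic):
        selected = []
        for device_id in device_ids:                      # first list sent by the reporting unit
            driver = driver_catalog.get(device_id)
            if driver is not None:
                selected.append(driver)                   # sub-set of drivers for this target system
        customized = dict(core_program_logic)             # core logic contains no drivers
        customized["drivers"] = selected                  # complement it with the driver sub-set
        return customized                                 # deployed to the target for OS download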

US Pat. No. 10,338,933

METHOD FOR GENERATING CUSTOM BIOS SETUP INTERFACE AND SYSTEM THEREFOR

Dell Products, LP, Round...

1. A computer implemented method comprising:actuating a predetermined key during boot initialization of an information handling system to display a basic input/output system (BIOS) setup interface;
determining that a first configuration option is not available at the BIOS setup interface;
exiting the BIOS setup interface to allow the information handling system to complete the boot initialization and to load an operating system;
invoking, by a user of the information handling system, a runtime application, the runtime application identifying a plurality of configuration options, including the first configuration option; and
selecting, by the user at an interface of the runtime application, the first configuration option from the plurality of configuration options, the selecting causing a software agent to update BIOS firmware to include the first configuration option at the BIOS setup interface.

US Pat. No. 10,338,932

BOOTSTRAPPING PROFILE-GUIDED COMPILATION AND VERIFICATION

Google LLC, Mountain Vie...

1. A method, comprising:receiving, at a server computing device, a request to provide a software package for a particular software application;
determining composite application execution information (AEI) for at least the particular software application using the server computing device, the composite AEI comprising a composite list of software for at least the particular software application, wherein the composite list of software comprises data about software methods of the particular software application executed by at least one computing device other than the server computing device, the software methods of the particular software application including a frequently-executed software method and an initialization software method;
extracting particular AEI related to the particular software application from the composite AEI using the server computing device, the particular AEI providing a compiler hint for indicating to compile the frequently-executed software method before runtime of the particular software application and for indicating to compile the initialization software method during runtime of the particular software application;
generating the software package using the server computing device, wherein the software package includes the particular software application and the particular AEI; and
providing the software package using the server computing device.
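
The extraction step can be pictured as filtering the composite execution information down to one application and turning the method classifications into hints: frequently executed methods are compiled before runtime, initialization methods are deferred to runtime. The record layout here is assumed.

    # Illustrative extraction of per-application compiler hints from composite AEI.
    def extract_hints(composite_aei, app_name):
        hints = {"compile_before_runtime": [], "compile_during_runtime": []}
        for record in composite_aei:                      # aggregated from other devices' executions
            if record["app"] != app_name:
                continue
            if record["kind"] == "frequently_executed":
                hints["compile_before_runtime"].append(record["method"])
            elif record["kind"] == "initialization":
                hints["compile_during_runtime"].append(record["method"])
        return hints                                      # packaged alongside the application itself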

US Pat. No. 10,338,931

APPROXIMATE SYNCHRONIZATION FOR PARALLEL DEEP LEARNING

INTERNATIONAL BUSINESS MA...

1. A computer program product for deep learning, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processing component to cause the processing component to:generate, by the processing component, first output data based on first input data associated with machine learning and received by the processing component and one or more other processing components;
transmit, by the processing component, the first output data to a first processing component from the one or more other processing components, wherein the first processing component is determined by the processing component or the one or more other processing components, wherein the processing component and the first processing component are operated in synchronization for the deep learning, and wherein the first output data is stored in a memory operatively coupled to the processing component;
generate, by the processing component, second output data based on the first output data and communication data that is generated by the first processing component; and
transmit, by the processing component, the second output data to a second processing component from the processing component, wherein the second processing component is determined by the processing component or the one or more other processing components, and wherein the processing component and the second processing component are operated in synchronization for the deep learning.
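
The exchange reads like one step of a ring-style reduction: each component computes from the shared input, hands its output to one peer, then combines its stored output with that peer's communication data and forwards the result to a further peer. The ring topology and method names below are assumptions layered on top of the claim.

    # Illustrative two-step exchange for one processing component among its peers.
    def exchange_step(component, peers, first_input):
        first_output = component.compute(first_input)            # from the shared machine-learning input
        first_peer = peers[(component.rank + 1) % len(peers)]    # first processing component, kept in sync
        first_peer.receive(first_output)
        component.store(first_output)                            # also kept in the component's own memory
        second_output = component.compute(component.stored(), first_peer.communication_data())
        second_peer = peers[(component.rank + 2) % len(peers)]   # second processing component
        second_peer.receive(second_output)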

US Pat. No. 10,338,929

METHOD FOR HANDLING EXCEPTIONS IN EXCEPTION-DRIVEN SYSTEM

Imagination Technologies ...

1. A method of processing exceptions in an exception-driven computing-based system, the method comprising:executing, using a processor of the system, a main program which causes the system to operate first in an initialisation mode and then in an exception-driven mode;
detecting, using the processor, that an exception has occurred;
in response to detecting that an exception has occurred, executing, using the processor, one of one or more sets of exception handling instructions using a main register set; and
wherein when the system is operating in the initialisation mode the set of exception handling instructions that are executed invoke a first exception handler that causes the processor to save the main register set prior to processing the exception and restore the main register set after processing the exception, and when the system is operating in the exception-driven mode the set of exception handling instructions that are executed invoke a second exception handler that does not cause the processor to save and restore the main register set.
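
The mode split keeps the expensive register save/restore out of the steady state: only the initialisation-mode handler wraps exception processing with a save and restore of the main register set. A hedged sketch, with system.mode and the helper methods assumed:

    # Illustrative exception dispatch that picks a handler based on the system mode.
    def on_exception(system, exception):
        if system.mode == "initialisation":
            saved = system.save_main_registers()    # first handler: preserve the main register set
            system.process(exception)
            system.restore_main_registers(saved)
        else:                                       # exception-driven mode
            system.process(exception)               # second handler: no save/restore of the main register set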

US Pat. No. 10,338,928

UTILIZING A STACK HEAD REGISTER WITH A CALL RETURN STACK FOR EACH INSTRUCTION FETCH

Oracle International Corp...

1. A method comprising:fetching a first instruction; and
in response to detecting the first instruction is a call instruction based on decode information generated from an opcode fetched with the first instruction:
determining a first return address corresponding to the call instruction;
storing the first return address in a stack head register; and
pushing the first return address onto a call return stack that is separate from the stack head register; and
on every instruction fetch:
reading the stack head register to obtain a speculative return address stored therein; and
storing the speculative return address in a temporary storage location prior to determining whether a corresponding fetched instruction comprises a return instruction.
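
The point of the stack head register is that every fetch can read a speculative return address without touching the call return stack itself; the stack is only consulted to refill the head after a return. A simple software model, with class and method names assumed:

    # Illustrative model of a stack head register paired with a call return stack.
    class ReturnPredictor:
        def __init__(self):
            self.stack_head = None     # single register mirroring the top of the call return stack
            self.call_stack = []       # the call return stack proper
            self.temp = None           # temporary storage read on every fetch

        def on_call(self, return_address):
            self.stack_head = return_address        # store in the stack head register
            self.call_stack.append(return_address)  # and push onto the separate call return stack

        def on_fetch(self):
            self.temp = self.stack_head             # read speculatively, before decode says "return" or not

        def on_return(self):
            predicted = self.temp                   # already available from the fetch
            self.call_stack.pop()
            self.stack_head = self.call_stack[-1] if self.call_stack else None
            return predicted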

US Pat. No. 10,338,927

METHOD AND APPARATUS FOR IMPLEMENTING A DYNAMIC OUT-OF-ORDER PROCESSOR PIPELINE

Intel Corporation, Santa...

1. An apparatus comprising:an instruction fetch unit to fetch Very Long Instruction Words (VLIWs) in program order from memory, each of the VLIWs comprising a plurality of reduced instruction set computing (RISC) instruction syllables grouped into the VLIWs in an order which removes data-flow dependencies and false output dependencies between the syllables, and wherein the plurality of RISC instruction syllables in the VLIWs include one or more false anti-dependencies;
a decode unit to decode the VLIWs in program order and output the syllables of each decoded VLIW in parallel; and
an out-of-order execution engine to execute at least some of the syllables in parallel with other syllables, wherein at least some of the syllables are to be executed in a different order than the order in which they are received from the decode unit.

US Pat. No. 10,338,925

TENSOR REGISTER FILES

Microsoft Technology Lice...

1. An apparatus comprising:a plurality of tensor operation calculators each configured to perform a type of tensor operation of a plurality of types of tensor operations, the plurality of tensor operation calculators comprising multiple instances of tensor operation calculators configured to perform a first type of the tensor operations;
a plurality of tensor register files, each of the tensor register files being associated with one of the plurality of tensor operation calculators; and
logic configured to store tensors in the plurality of tensor register files in accordance with the type of tensor operation to be performed on the respective tensors;
wherein the logic is further configured to store multiple separate copies of tensors, for which the apparatus is to perform the first type of tensor operation, in each of the dedicated tensor register files that are associated with the multiple instances of the tensor operation calculators configured to perform the first type of tensor operation, to the exclusion of others of the plurality of tensor register files.

US Pat. No. 10,338,924

CONFIGURABLE EVENT SELECTION FOR MICROCONTROLLER TIMER/COUNTER UNIT CONTROL

RENESAS ELECTRONICS CORPO...

1. A microcontroller comprising:a central processing unit (CPU);
a memory for storing instructions executable by the CPU;
first and second input pins for receiving first and second external event signals, respectively, from one or more devices that are external to the microcontroller;
a first timer/counter (T/C) channel coupled to receive control values generated by the CPU in response to executing the instructions, and further coupled to receive the first external event signal, and the second external event signal;
wherein each of the first and second external event signals is a binary signal that can transition between a first state and a second state;
wherein the first T/C channel is configured to generate a plurality of event signals based on the first external event signal or the second external event signal;
wherein the first T/C channel is configured to select one of the plurality of event signals based on one or more of the control values;
wherein the first T/C channel is configured to generate a first control signal as a function of the selected event signal;
wherein a first function of the first T/C channel can be controlled by the first control signal.

US Pat. No. 10,338,923

BRANCH PREDICTION PATH WRONG GUESS INSTRUCTION

INTERNATIONAL BUSINESS MA...

1. A method comprising:receiving, with a processor, a branch wrong guess instruction located at a branch wrong guess instruction address;
determining, by the processor, whether any branch address in a branch prediction mechanism matches the branch wrong guess instruction address;
subsequent to the determining whether any branch address in the branch prediction mechanism matches the branch wrong guess instruction address, receiving, by the processor, an end branch wrong guess instruction that includes the branch wrong guess instruction address, wherein the end branch wrong guess instruction is distinct and separate from the branch wrong guess instruction;
responsive to determining that the branch wrong guess instruction address does not match any branch address in the branch prediction mechanism:
inducing a branch prediction error by prefetching an instruction immediately sequentially following the branch wrong guess instruction address; and
decoding and executing instructions in a state invariant region, wherein the state invariant region is a two-instruction state invariant region comprising decode wrong stream instructions, and the state invariant region immediately sequentially follows the branch wrong guess instruction and immediately sequentially precedes the end branch wrong guess instruction; and
prefetching, by the processor, an instruction at a branch target address in response to the end branch wrong guess instruction, even if the branch wrong guess instruction has not yet been executed.

US Pat. No. 10,338,921

ASYNCHRONOUS INSTRUCTION EXECUTION APPARATUS WITH EXECUTION MODULES INVOKING EXTERNAL CALCULATION RESOURCES

Huawei Technologies Co., ...

13. An asynchronous instruction execution method, wherein the method is executed by an asynchronous instruction execution apparatus, the asynchronous instruction execution apparatus comprises a vector execution unit control (VXUC) module and n vector execution unit data (VXUD) modules, n is a positive integer, the n VXUD modules are cascaded and separately connected to the VXUC module, a bit width of data processed by the asynchronous instruction execution apparatus is M, a bit width of each VXUD module is N, n=M/N, and the method comprises:decoding, by the VXUC module, an instruction from a vector instruction fetcher (VIF), to obtain decoded instruction information;
managing, by the VXUC module according to the decoded instruction information obtained by a decoding submodule, token transfer between the asynchronous instruction execution apparatus and another asynchronous instruction execution apparatus and token transfer inside the asynchronous instruction execution apparatus;
when the decoded instruction information obtained by the decoding submodule indicates that an external calculation resource needs to be invoked, generating, by the VXUC module, a clock pulse signal corresponding to the external calculation resource, and sending control information comprised in the decoded instruction information and the clock pulse signal to a first VXUD module of the n VXUD modules;
sending, by the first VXUD module, the clock pulse signal and the control information to the external calculation resource, to enable the external calculation resource to perform a data calculation according to the clock pulse signal and the control information; and
receiving, by the first VXUD module, a data calculation result from the external calculation resource.