US Pat. No. 10,169,336

TRANSLATING STRUCTURED LANGUAGES TO NATURAL LANGUAGE USING DOMAIN-SPECIFIC ONTOLOGY

International Business Ma...

1. A computer-implemented method, comprising steps of:
determining one or more similarities among multiple natural language query interpretations derived from an input query;
determining one or more differences among the multiple natural language query interpretations derived from the input query;
generating one or more natural language descriptions of each of the multiple natural language query interpretations based on analysis of (i) the one or more determined similarities, (ii) the one or more determined differences, and (iii) the input query;
producing, for each of the multiple natural language query interpretations, a natural language string that represents one or more unambiguous interpretations of the input query, wherein said producing comprises consolidating the generated natural language descriptions; and
outputting each of the produced natural language strings to a user;
wherein the steps are carried out by at least one computing device.

US Pat. No. 10,169,335

CONTEXTUAL VALIDATION OF SYNONYMS IN ONTOLOGY DRIVEN NATURAL LANGUAGE PROCESSING

International Business Ma...

1. A method for providing contextual validation of synonyms in ontology driven natural language processing, the method comprising the computer-implemented steps of:
receiving, via at least one computing device, a user input of electronic text structured as a linear sequence of symbols;
determining, via at least one computing device, based on the linear sequence of symbols, a token that identifies a linguistic unit of the electronic text, the linguistic unit comprising at least one of a word, a punctuation symbol, a number, or a letter;
structuring, via at least one computing device, the user input into a semantic model comprising a set of classes each containing a set of related permutations of the token, wherein the semantic model is stored as data in memory of the at least one computing device;
quantifying, via at least one computing device, a linear distance between the token and a contextual token within the user input, wherein the linear distance is a quantity of additional tokens in between the token and the contextual token;
comparing, via at least one computing device, the linear distance to a pre-specified linear distance limit;
designating, via at least one computing device, the token as a synonym of one of the set of related permutations based on a result of the comparison;
annotating, via at least one computing device, the token with a class from the set of classes corresponding to the one of the set of related permutations;
when, based on the comparing, the quantified linear distance is within the pre-specified linear distance limit to the number of words,
assigning a high confidence level to the annotation, and
validating the annotation based on the high confidence level,
wherein the validating the annotation comprises restructuring, via at least one computing device, the semantic model to include a knowledge structure containing the contextual token, the linear distance, the pre-specified linear distance limit, and the designation of the token as a synonym of the one of the set of related permutations; and
when, based on the comparing, the quantified linear distance is not within the pre-specified linear distance limit to the number of words, assigning a low confidence level to the annotation.
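For illustration only, the distance comparison and confidence assignment in the claim above can be sketched in Python; the function names, the token positions, and the limit value are assumptions for the example, not the patented implementation.

```python
def linear_distance(token_idx, context_idx):
    """Quantity of additional tokens strictly between the two positions."""
    return abs(token_idx - context_idx) - 1

def validate_annotation(token_idx, context_idx, limit):
    """Assign a high confidence level and validate when the quantified
    linear distance is within the pre-specified limit; low otherwise."""
    distance = linear_distance(token_idx, context_idx)
    if distance <= limit:
        return "high", True     # annotation validated
    return "low", False         # annotation kept at low confidence

tokens = "the bank by the river was steep".split()
# "bank" at position 1, contextual token "river" at position 4:
# two tokens ("by", "the") lie between them.
confidence, validated = validate_annotation(1, 4, limit=3)
```

With a limit of 3, the two-token separation validates the annotation at high confidence; a wider separation would instead yield low confidence.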

US Pat. No. 10,169,334

SYSTEMATIC TUNING OF TEXT ANALYTIC ANNOTATORS WITH SPECIALIZED INFORMATION

International Business Ma...

1. A computer-implemented method of tuning one or more text analytic annotators comprising:
generating, via a processor, a data structure including domain information and one or more information extraction rules, wherein the domain information includes one or more enumerators associated with one or more data types defining respective information categories of the domain, one or more text forms associated with one or more of the enumerators representing forms of the enumerators appearing in text, and one or more context patterns associated with one or more of the text forms, wherein the one or more extraction rules are associated with the enumerators, and wherein the domain information is generic with respect to requirements of more than one organization;
tuning, via the processor, the one or more extraction rules to a specified set of unannotated documents with specialized information including domain specific terminology of a particular organization, wherein the tuning includes:
identifying one or more additional new context patterns within the set of unannotated documents for the enumerators of the generic domain information of the data structure in a first iteration through the set of unannotated documents, wherein the first iteration:
determines exact matches between tokens within the set of unannotated documents and the one or more text forms associated with the enumerators of the generic domain information;
identifies enumerators of the generic domain information in the set of unannotated documents in response to context patterns of tokens of exact matches within the set of unannotated documents matching context patterns associated with the enumerators of the generic domain information; and
extracts the new context patterns from the set of unannotated documents for enumerators of the specialized information in response to context patterns of tokens of exact matches within the set of unannotated documents not matching context patterns associated with the enumerators of the generic domain information;
identifying one or more additional new context patterns and new text forms within the set of unannotated documents for enumerators of the specialized information in a second iteration through the set of unannotated documents, wherein the second iteration:
determines partial matches between tokens within the set of unannotated documents and the one or more text forms associated with enumerators of the generic domain information, wherein the partial matches are based on matching n-grams having a length less than the tokens and text forms;
extracts the new context patterns for tokens of the partial matches within the set of unannotated documents; and
identifies the additional new context patterns and text forms in response to the extracted context patterns for the partial matches matching one of: the context patterns of the enumerators of the generic domain information and the context patterns of the specialized information from the first iteration;
updating the data structure with the additional new context patterns and text forms from the first and second iterations without user intervention to expand the generic domain information to cover the specialized information; and
analyzing the set of unannotated documents based on the updated data structure and generating one or more additional extraction rules based on the analysis;
configuring, via the processor, one or more text analytic annotators for the specialized information based at least on the additional extraction rules, identified enumerators of the generic domain information, and enumerators of the specialized information; and
processing documents with the specialized information via the configured text analytic annotators.
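The second iteration's partial matching on n-grams shorter than the full token can be illustrated as follows; the n-gram length of 4 and the sample strings are assumptions for the sketch, not the patent's parameters.

```python
def ngrams(s, n):
    """All character n-grams of s."""
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def partial_match(token, text_form, n=4):
    """True when the token and a known text form share an n-gram whose
    length is less than both strings, per the second-iteration rule."""
    if n >= min(len(token), len(text_form)):
        return False
    return bool(ngrams(token.lower(), n) & ngrams(text_form.lower(), n))
```

For example, "hypertensive" partially matches the text form "hypertension" through shared 4-grams such as "hype", while unrelated short tokens do not match at all.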

US Pat. No. 10,169,333

SYSTEM, METHOD, AND RECORDING MEDIUM FOR REGULAR RULE LEARNING

INTERNATIONAL BUSINESS MA...

1. A regular rule learning system, comprising:
a processor; and
a memory, the memory storing instructions to cause the processor to execute:
an analyzing circuit configured to analyze a corpus of sentences to discover lexical features of the sentences and to find semantic relationships between sentence constituents that are responsible for specific senses of words in that sentence by describing the semantic relationships and grammatical relations that are actuated in the sentence,
wherein the lexical features are unknown prior to the analyzing circuit discovering the lexical features.

US Pat. No. 10,169,332

DATA ANALYSIS FOR AUTOMATED COUPLING OF SIMULATION MODELS

International Business Ma...

1. A method linking variables within disparate simulation models, the method comprising:
extracting, with a distributed processor, a first variable description associated with a first variable within a simulation input data structure comprising input data that is to be read and operated upon by a first simulation model;
extracting, with the distributed processor, a plurality of variable descriptions within an output data structure comprising output data that has been operated upon and written by a second simulation model;
determining, with the distributed processor, character strings within an information corpus that are similar to the first variable description;
ranking, with the distributed processor, the character strings in order of confidence levels, wherein each confidence level indicates a degree of similarity between an associated character string and the first variable description;
determining, with the distributed processor, that a particular variable description of the plurality of variable descriptions within the output data structure is equal to a character string, wherein the particular variable description is associated with a second variable; and
linking, with the distributed processor, the first variable to the second variable if the rank of the equal character string is greater than a confidence level threshold.
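A minimal sketch of this linking step, assuming difflib's character-level ratio as a stand-in for the unspecified similarity and confidence scoring; the variable descriptions and threshold are invented for the example.

```python
from difflib import SequenceMatcher

def link_variables(input_desc, output_descs, threshold):
    """Rank output-side descriptions by similarity to the input-side
    description; link only when the best score clears the threshold."""
    ranked = sorted(
        output_descs,
        key=lambda d: SequenceMatcher(None, input_desc, d).ratio(),
        reverse=True,
    )
    best = ranked[0]
    score = SequenceMatcher(None, input_desc, best).ratio()
    return best if score >= threshold else None

linked = link_variables(
    "ambient air temperature (K)",
    ["surface pressure (Pa)", "air temperature ambient (K)", "wind speed"],
    threshold=0.6,
)
```

Here the reordered-but-equivalent description is ranked first and clears the threshold, so the two variables are linked.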

US Pat. No. 10,169,331

TEXT MINING FOR AUTOMATICALLY DETERMINING SEMANTIC RELATEDNESS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method, comprising:
by a text mining server that autonomously determines semantic relatedness of a plurality of semantic concepts:
establishing a network connection to one or more social networking platform servers;
identifying, by programmatically navigating at least one application programming interface (API) of at least one social networking application hosted by the one or more social networking platform servers, multiple reference documents and multiple test documents published by the one or more social networking platform servers;
applying a first text mining analysis on the reference documents and extracting a non-redundant set of reference concepts from the reference documents;
for each reference concept of the set of reference concepts, computing a reference co-occurrence frequency (RCCF), the RCCF indicating the frequency of co-occurrence of the reference concept with all other reference concepts within the reference documents;
applying a second text mining analysis on the test documents and extracting a non-redundant set of test concepts, the test concepts comprising one or more new concepts that are not elements of the set of reference concepts and the test concepts comprising one or more of the reference concepts;
computing an extended co-occurrence matrix indicating the frequency of co-occurrence of each new concept and each reference concept with all other new concepts and reference concepts within the test documents;
for each of the new concepts, computing a new concept relatedness score (NCRS) as a function of the co-occurrences of the new concept in the extended co-occurrence matrix, the NCRS representing the semantic relatedness of the new concept to a totality of the reference concepts;
for each of the test documents, computing a document similarity score (DSS) by aggregating the NCRS of each new concept and the RCCF of each reference concept contained in the test document, the DSS representing the semantic relatedness of the test document to the totality of the reference concepts;
automatically identifying any of the test documents with a computed DSS below a DSS threshold value; and
one or more of marking, blocking and removing any identified test documents with the computed DSS below the DSS threshold value.
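A toy sketch of the co-occurrence counting and document scoring above, with concept extraction stubbed out as pre-tokenized concept lists, the NCRS simplified to the raw extended co-occurrence count, and "aggregating" taken to be a plain sum; all data values are invented.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """For each concept, count its co-occurrences with every other
    concept across the documents (a flattened co-occurrence matrix)."""
    counts = Counter()
    for doc in docs:
        for a, b in combinations(sorted(set(doc)), 2):
            counts[a] += 1
            counts[b] += 1
    return counts

reference_docs = [["cat", "dog"], ["cat", "dog", "fish"]]
test_docs = [["cat", "dog", "wolf"], ["car"]]

rccf = cooccurrence(reference_docs)          # reference co-occurrence freq.
extended = cooccurrence(test_docs)           # extended co-occurrence counts
reference_concepts = set().union(*map(set, reference_docs))

def dss(doc):
    """Aggregate (here: sum) the NCRS of new concepts and the RCCF of
    reference concepts contained in the document."""
    return sum(extended[c] if c not in reference_concepts else rccf[c]
               for c in set(doc))
```

A test document sharing concepts with the reference set scores high; a document of only unrelated concepts scores near zero and would be marked, blocked, or removed under the threshold rule.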

US Pat. No. 10,169,329

EXEMPLAR-BASED NATURAL LANGUAGE PROCESSING

Apple Inc., Cupertino, C...

1. A non-transitory computer-readable storage medium for natural language processing comprising computer-executable instructions for causing a processor to:
receive a speech input representing a user request;
generate a first text phrase corresponding to the speech input;
determine, with respect to a semantic space, a plurality of semantic edit distances between the first text phrase and a plurality of exemplar text phrases, wherein each exemplar text phrase of the plurality of exemplar text phrases is associated with a respective predetermined intent of a plurality of predetermined intents;
determine a plurality of centroid distances between a centroid position of the first text phrase in the semantic space and a plurality of centroid positions of the plurality of exemplar text phrases in the semantic space;
determine a plurality of degrees of semantic similarity between the first text phrase and the plurality of exemplar text phrases, wherein each degree of semantic similarity of the plurality of degrees of semantic similarity is determined based on a linear combination of a respective semantic edit distance of the plurality of semantic edit distances and a respective centroid distance of the plurality of centroid distances;
identify, based on the plurality of degrees of semantic similarity, a first exemplar text phrase of the plurality of exemplar text phrases, wherein the first exemplar text phrase is most semantically similar to the first text phrase among the plurality of exemplar text phrases, and wherein the first exemplar text phrase is associated with a first predetermined intent of the plurality of predetermined intents;
determine, based on the identified first exemplar text phrase, a user intent corresponding to the first text phrase, wherein the determined user intent corresponds to the first predetermined intent; and
in accordance with the determined user intent, perform one or more tasks responsive to the user request.
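The linear-combination scoring can be sketched as follows; token-level Levenshtein distance stands in for the "semantic edit distance", the two-dimensional embeddings are invented for the example, and the equal weights are arbitrary assumptions.

```python
import math

def edit_distance(a, b):
    """Levenshtein distance between two token sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def centroid(vectors):
    """Mean position of a set of embedding vectors."""
    return [sum(c) / len(vectors) for c in zip(*vectors)]

def similarity_score(phrase, exemplar, emb, w_edit=0.5, w_cent=0.5):
    """Linear combination of the edit distance and the distance between
    centroid positions; lower means more semantically similar."""
    d_edit = edit_distance(phrase, exemplar)
    d_cent = math.dist(centroid([emb[t] for t in phrase]),
                       centroid([emb[t] for t in exemplar]))
    return w_edit * d_edit + w_cent * d_cent

# Toy two-dimensional "semantic space" for the example.
emb = {"book": [1.0, 0.0], "a": [0.0, 1.0],
       "flight": [1.0, 1.0], "hotel": [0.0, 0.0]}
same = similarity_score(["book", "a", "flight"], ["book", "a", "flight"], emb)
diff = similarity_score(["book", "a", "flight"], ["book", "a", "hotel"], emb)
```

The exemplar with the lowest combined score would be identified as most similar, and its predetermined intent adopted as the user intent.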

US Pat. No. 10,169,328

POST-PROCESSING FOR IDENTIFYING NONSENSE PASSAGES IN A QUESTION ANSWERING SYSTEM

International Business Ma...

1. A method, in a data processing system, for identifying nonsense passages, the method comprising:
annotating, by an annotator in a nonsense identification component within a natural language processing pipeline configured to execute in the data processing system, an input passage with linguistic features to form an annotated passage;
counting, by a metric counters component in the nonsense identification component, a number of instances of each type of linguistic feature in the annotated passage to form a set of feature counts;
determining, by the metric counters component, a value for a metric based on the set of feature counts;
comparing, by a comparator component of the nonsense identification component, the value for the metric to a predetermined model threshold;
determining, by a filter component of the nonsense identification component, whether the input passage is a nonsense passage based on a result of the comparison;
responsive to the filter component determining the input passage is a nonsense passage, sending, by the filter component of the nonsense identification component, the input passage to a semi-structured data pipeline configured to execute in the data processing system and preventing the input passage from proceeding in the natural language processing pipeline; and
responsive to the filter component not determining that the input passage is a nonsense passage, passing, by the filter component, the input passage to the natural language processing pipeline.
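A minimal sketch of the counting-and-threshold filter, assuming a single illustrative metric (the ratio of verbs to tokens) in place of the claim's unspecified linguistic features and model threshold.

```python
from collections import Counter

def is_nonsense(annotated_passage, threshold=0.05):
    """annotated_passage: list of (token, part-of-speech) pairs.
    Count each feature type, derive a metric (here the verb ratio),
    and compare it to the model threshold."""
    counts = Counter(pos for _, pos in annotated_passage)  # feature counts
    metric = counts["VERB"] / max(len(annotated_passage), 1)
    return metric < threshold

passage = [("colorless", "ADJ"), ("green", "ADJ"), ("ideas", "NOUN"),
           ("sleep", "VERB"), ("furiously", "ADV")]
```

A passage clearing the threshold would proceed down the natural language processing pipeline; one falling below it would be routed to the semi-structured data pipeline instead.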

US Pat. No. 10,169,327

COGNITIVE REMINDER NOTIFICATION MECHANISMS FOR ANSWERS TO QUESTIONS

International Business Ma...

1. A method, in a data processing system comprising a processor and a memory that operate to implement a natural language processing system, the method comprising:
generating, by the natural language processing system implemented by the data processing system, a result of processing a natural language query;
determining, by the data processing system, that at least one of the natural language query or the result comprises a temporal characteristic;
in response to determining that at least one of the natural language query or the result comprises a temporal characteristic, generating a reminder notification data structure having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query;
storing the reminder notification data structure in a data storage device; and
at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, outputting a reminder notification to a client device associated with a user, wherein the reminder notification specifies the result generated for the natural language query and a historical listing that specifies a history of changes to the result occurring between the time that the result was originally generated for the natural language query and the scheduled reminder notification time, wherein the historical listing includes at least one change occurring between the time that the result was originally generated for the natural language query and the scheduled reminder notification time.

US Pat. No. 10,169,326

COGNITIVE REMINDER NOTIFICATION MECHANISMS FOR ANSWERS TO QUESTIONS

International Business Ma...

1. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device implementing a natural language processing system, causes the computing device to:
generate, by the natural language processing system, a result of processing a natural language query;
determine that at least one of the natural language query or the result comprises a temporal characteristic;
generate, in response to determining that at least one of the natural language query or the result comprises a temporal characteristic, a reminder notification data structure having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query;
store the reminder notification data structure in a data storage device; and
output, at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, a reminder notification to a client device associated with a user, wherein the reminder notification specifies the result generated for the natural language query and a historical listing that specifies a history of changes to the result occurring between the time that the result was originally generated for the natural language query and the scheduled reminder notification time, wherein the historical listing includes at least one change occurring between the time that the result was originally generated for the natural language query and the scheduled reminder notification time.

US Pat. No. 10,169,325

SEGMENTING AND INTERPRETING A DOCUMENT, AND RELOCATING DOCUMENT FRAGMENTS TO CORRESPONDING SECTIONS

INTERNATIONAL BUSINESS MA...

13. A computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:
receive a first item and a second item;
determine that the first item is a fragment matching a lexicon;
place the fragment in a first section of a document, the first section selected based on the matching lexicon;
segment the document into multiple sections, wherein each of the multiple sections corresponds to a respective section type of multiple section types;
segment items in a first section of multiple sections of the document into multiple fragments, wherein the first section corresponds to a first section type;
determine a section type of each of the multiple fragments in the first section;
determine whether the multiple fragments include fragments that correspond to different section types and that are interspersed among each other in even proportions; and
based on the multiple fragments in the first section including fragments that correspond to different section types and that are interspersed among each other in even proportions:
determine that the fragments that correspond to different section types and that are interspersed among each other in even proportions do not belong in the first section;
generate a new section corresponding to a section type that is different from the multiple section types; and
re-locate the fragments that correspond to different section types and that are interspersed among each other in even proportions to the new section.

US Pat. No. 10,169,323

DIAGNOSING AUTISM SPECTRUM DISORDER USING NATURAL LANGUAGE PROCESSING

International Business Ma...

1. A method for evaluating a textual conversation, comprising:
generating a machine learning (ML) model using training data comprising a first plurality of training examples, each example being a text of a conversation labeled as exhibiting at least one characteristic of autism;
receiving text of an ongoing conversation between a plurality of participants;
annotating the text of the conversation using natural language processing;
identifying features in the conversation using the annotations;
evaluating the features using the ML model to determine a measure of probability that a first one of the plurality of participants in the conversation falls on the autism spectrum; and
based on the measure of probability, outputting for display, during the conversation, a notice indicating that the first participant has exhibited a characteristic of autism.

US Pat. No. 10,169,322

PERSONAL DICTIONARY

Dinky Labs, LLC, Bartlet...

1. A method for controlling a user device including a processor and a display, the display including a graphical user interface, the method comprising steps:
receiving, at the processor, a request from a user to construct a word entry of a word, wherein the processor is in data communication with one or more non-transitory memory mediums;
retrieving, by the processor, a user profile of the user from a non-transitory memory medium, wherein the user profile includes the user's native language and age;
compiling, by the processor, a list of one or more definition databases according to the user's native language and age, wherein the list is stored in a partition of a non-transitory memory medium by the processor;
ranking, by the processor, definitions of the word provided by the definition databases according to the user's native language and age, wherein the definition databases include a first, a second, and a third definition database,
if the first definition database is appropriate for both the native language and the age of the user profile, the first definition database ranks first,
if the second definition database is appropriate for the native language but not the age of the user profile, the second definition database ranks second,
if the third definition database is appropriate for the age but not the native language of the user profile, the third definition database ranks third;
retrieving, by the processor, a top ranked definition of the word from its corresponding definition database;
formatting, by the processor, the definitions retrieved from the definition databases according to a resolution of the display of the user device; and
displaying, by the processor, the formatted definitions on the graphical user interface of the display.
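The three-tier ranking above can be sketched as follows; the database records and the profile fields are invented for the example.

```python
def rank_databases(databases, profile):
    """Rank databases: both language- and age-appropriate first, then
    language-only, then age-only, mirroring the claim's three tiers."""
    def tier(db):
        lang_ok = profile["language"] in db["languages"]
        age_ok = db["min_age"] <= profile["age"] <= db["max_age"]
        if lang_ok and age_ok:
            return 0
        if lang_ok:
            return 1
        if age_ok:
            return 2
        return 3
    return sorted(databases, key=tier)

dbs = [
    {"name": "adult-en", "languages": ["en"], "min_age": 18, "max_age": 120},
    {"name": "kids-en", "languages": ["en"], "min_age": 5, "max_age": 12},
    {"name": "kids-fr", "languages": ["fr"], "min_age": 5, "max_age": 12},
]
top = rank_databases(dbs, {"language": "en", "age": 9})[0]
```

For a nine-year-old English speaker, the children's English database ranks first, so its definition would be the one retrieved and displayed at the top.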

US Pat. No. 10,169,321

BROWSER EXTENSION FOR FIELD DETECTION AND AUTOMATIC POPULATION

CAPITAL ONE SERVICES, LLC...

1. A browser extension system comprising:
a communication device configured to communicate with a computing device executing a browser extension application;
a memory storing instructions; and
a processor configured to execute the instructions to perform operations comprising:
generating a regular expression configured to detect a plurality of fields in a web page, wherein the regular expression is a sequence of characters defining a search pattern and the web page includes a merchant-provided payment process, and wherein the payment process includes a message;
providing the regular expression to the browser extension application;
receiving, from the browser extension application, an indication of an unrecognized field in the web page based on an execution of the browser extension application by the computing device, the execution comprising using the regular expression to detect a transaction field in a web page;
in response to the received indication of the at least one unrecognized field, providing, via a pop-up notification, suggested transaction data to the browser extension application based on:
(1) transaction data not used to automatically populate a recognized transaction field, and
(2) a characteristic of the unrecognized field detected by the regular expression, the characteristic comprising a number of characters required to populate the field, and one or more types of characters required to populate the field;
receiving, from the browser extension application, an indication of a selection of the suggested transaction data to populate the unrecognized field;
generating an updated regular expression configured to detect the unrecognized field in the web page based on the selection of the suggested transaction data;
providing the updated regular expression to the browser extension application; and
receiving, from the browser extension application, an indication of an additional transaction field detected in the message.
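Field detection with a regular expression can be illustrated as below; the pattern and the page markup are invented examples, not the patent's actual expressions.

```python
import re

# Search pattern for card-number-like input fields in page markup.
CARD_FIELD = re.compile(r'<input[^>]*name="(card[_-]?number|ccnum)"[^>]*>')

page = '''
<form>
  <input name="ccnum" type="text">
  <input name="mystery_field" type="text" maxlength="5">
</form>
'''

recognized = CARD_FIELD.findall(page)
# Fields the expression does not match are reported as unrecognized,
# triggering the suggested-data flow and an updated expression.
all_fields = re.findall(r'<input[^>]*name="([^"]+)"', page)
unrecognized = [f for f in all_fields if f not in recognized]
```

In the claimed flow, the user's selection of suggested data for `mystery_field` would drive generation of an updated expression that detects it next time.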

US Pat. No. 10,169,319

SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR IMPROVING DIALOG SERVICE QUALITY VIA USER FEEDBACK

INTERNATIONAL BUSINESS MA...

1. A computer-implemented dialog performance improvement method, the method comprising:
computing a plurality of question classes and a confidence score for each of the question classes for a language input of a user;
comparing the confidence score to an upper threshold and a lower threshold for each of the question classes to determine which of at least one action to perform;
receiving a language feedback from the user for the performed action; and
adjusting at least one of the upper threshold and the lower threshold based on the language feedback from the user,
wherein the at least one action to perform includes a different action based on the confidence score being above, below, or in between the upper threshold and the lower threshold,
wherein, if the confidence score is between the upper threshold and the lower threshold, querying the user for a second language feedback, and decreasing the upper threshold to produce an updated upper threshold if the second language feedback includes a positive feedback or increasing the lower threshold to produce an updated lower threshold if the second language feedback includes a negative feedback, thereby to refine a distance between the upper threshold and the lower threshold in order to increase an accuracy of an answer to the plurality of question classes, and
wherein a confidence score of a next language input of the user is compared with the updated upper threshold or the updated lower threshold to determine which of the at least one action to perform.
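The threshold-narrowing feedback loop can be sketched as follows; the step size and the starting thresholds are arbitrary assumptions for the example.

```python
def adjust_thresholds(upper, lower, confidence, feedback, step=0.05):
    """When the confidence score falls between the thresholds, narrow
    the band: positive feedback decreases the upper threshold, negative
    feedback increases the lower threshold."""
    if lower < confidence < upper:
        if feedback == "positive":
            upper = max(upper - step, lower)
        elif feedback == "negative":
            lower = min(lower + step, upper)
    return upper, lower

upper, lower = adjust_thresholds(0.8, 0.4, confidence=0.6, feedback="positive")
```

Each round of feedback shrinks the ambiguous band between the thresholds, so later inputs are more often answered directly rather than queried back to the user.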

US Pat. No. 10,169,318

FILTER AND SORT BY FORMAT

Microsoft Technology Lice...

1. A method executable on a computer system for organizing data cells, the computer system having a graphical user interface including a display device and one or more user interface selection devices, the method comprising:
providing a sheet comprising a plurality of data cells, comprising:
at least a first data cell associated with a first format;
at least a second data cell associated with a second format; and
at least a third data cell associated with a third format;
receiving a selection of data cells within the plurality of data cells, wherein the selected data cells comprise at least the first data cell and the second data cell but not the third data cell, and wherein the selected data cells are not associated with the third format;
analyzing the selected data cells to identify one or more formats associated with the selected data cells, wherein analyzing the selected data cells includes identifying the first format and the second format but not the third format;
providing a menu, wherein the menu includes options for organizing the selected data cells, and wherein the options include the first format and the second format but not the third format;
organizing the selected data cells based on the first format;
displaying the organized data cells, including displaying at least the first data cell having the first format and at least the second data cell having the second format; and
simultaneously displaying at least the third data cell associated with the third format.
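A minimal sketch of format-aware filtering, modeling cells as (value, format) pairs and "organizing" as filtering the selection to one of its own formats; the formats and values are invented.

```python
def formats_in(cells):
    """Formats offered in the menu: only those present in the selection,
    in first-seen order."""
    seen = []
    for _, fmt in cells:
        if fmt not in seen:
            seen.append(fmt)
    return seen

def filter_by_format(cells, fmt):
    """Organize the selected cells by keeping those with the given format."""
    return [cell for cell in cells if cell[1] == fmt]

selection = [("10", "bold-red"), ("20", "plain"), ("30", "bold-red")]
menu = formats_in(selection)          # third format never appears
organized = filter_by_format(selection, "bold-red")
```

A format present elsewhere on the sheet but absent from the selection never appears in the menu, matching the claim's exclusion of the third format.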

US Pat. No. 10,169,317

RENDERING COMMON CELL FORMATTING FOR ADJACENT CELLS

Apple Inc., Cupertino, C...

1. One or more non-transitory, tangible machine-readable media comprising instructions to:
identify a set of adjacent cells in a table of cells that have at least one border edge in common with another cell in the set of adjacent cells and at least one type of cell formatting in common, wherein the at least one type of cell formatting comprises a fill pattern having a particular shape;
identify a contiguous border around the set of adjacent cells;
apply the fill pattern contiguously to an area inside the contiguous border; and
render on a display the set of adjacent cells in the table of cells with the cell formatting applied contiguously within the contiguous border, rendering the at least one type of cell formatting as a single entity instead of individually for each cell, such that the cell formatting is automatically rendered on the display to appear seamless between each cell in the set of adjacent cells by displaying the cell formatting without cell borders within the contiguous border.

US Pat. No. 10,169,316

METHOD AND SYSTEM TO CONVERT DOCUMENT SOURCE DATA TO XML VIA ANNOTATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
providing a defined set of XML tags to a user for annotating a document;
receiving, by one or more computing systems and from the user, the document that includes source data and one or more annotations by the user regarding a subset of the source data, wherein the one or more annotations are selected from the defined set of XML tags and wherein each of the one or more annotations describe and delineate boundaries of the subset of the source data;
parsing, by the one or more computing systems, the document based on the one or more annotations, wherein parsing the document includes:
generating, based at least in part on the one or more annotations, one or more data structures corresponding to the subset of the source data; and
constructing, based at least in part on mapping of the one or more data structures, a target XML document that consists of information extracted from the subset of the source data, wherein the subset includes multiple distinct subsets of the source data, and wherein at least one of the distinct subsets includes a data table having a first number of columns that is distinct from a second number of columns included in a data table associated with at least one other of the distinct subsets.

US Pat. No. 10,169,314

SYSTEM AND METHOD FOR MODIFYING WEB CONTENT

1. A web server for modifying web content, comprising:
a network interface configured to receive, via a network, a body of existing digital text web content from a tag that is embedded within and hosted by a web page having the body of digital text stored therein; and
a processor configured to
identify at least one keyword included within the body of the existing digital text,
determine supplemental digital text web content from a web page of a different website having content to add to the body of the existing digital text web content based on the identified at least one keyword, and
generate additional text to add to the body of the existing digital text web content, the generating comprising retrieving a string of additional words which include empty data fields interspersed therein, and filling-in the empty data fields with auto-detected keywords from the supplemental digital text web content of the web page of the different website to generate a filled-in string of words that comprises a description based on a position between the beginning and the end of the body of the existing digital text web content where the filled-in string of words are to be embedded,
wherein the network interface is further configured to transmit the filled-in string of words to the tag embedded within and hosted by the web page thereby integrating the filled-in string of words into the body of the existing digital text stored within the web page without removing text content from the body of the existing digital text web content.
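The fill-in step of this claim — a stored string of additional words with empty data fields, filled by keywords auto-detected from supplemental content — might look like the following minimal sketch. The frequency-count keyword detector and the `{0}`-style placeholders are assumptions, not the claimed mechanism.

```python
from collections import Counter

def detect_keywords(supplemental: str, n: int) -> list:
    # naive auto-detection: most frequent longer words (assumption for illustration)
    words = [w.lower() for w in supplemental.split() if len(w) > 4]
    return [w for w, _ in Counter(words).most_common(n)]

def fill_in(template: str, supplemental: str) -> str:
    # fill each empty data field interspersed in the string of additional words
    slots = template.count("{")
    return template.format(*detect_keywords(supplemental, slots))

template = "Readers interested in {0} may also enjoy articles about {1}."
supplemental = "gardening tips gardening tools compost compost compost soil"
filled = fill_in(template, supplemental)
```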

US Pat. No. 10,169,313

IN-CONTEXT EDITING OF TEXT FOR ELEMENTS OF A GRAPHICAL USER INTERFACE

SAP SE, Walldorf (DE)

1. A non-transitory computer-readable medium having stored thereon computer-executable instructions for causing a computer system, when programmed thereby, to perform:
receiving user input specifying a graphical user interface (“GUI”);
determining multiple GUI elements associated with the GUI;
retrieving code defining the multiple GUI elements, wherein, for at least a portion of the multiple GUI elements, the code defines multiple textual content items to be displayed in association with a respective GUI element;
rendering a display divided into a first, shell portion, and a second, editing portion, wherein the first and second portions are displayed such that GUI elements of the multiple GUI elements displayed in the first portion do not overlap or obscure, and are not overlapped or obscured by, GUI elements of the multiple GUI elements displayed in the second portion;
rendering for display the GUI in the first portion of the display, the GUI comprising the multiple GUI elements, based at least in part on the retrieved code, at least a portion of the multiple GUI elements being operable by a user;
concurrently rendering for display in the second portion of the display a first textual content item and at least a second textual content item associated with at least one GUI element, at least one of the first textual content item and the at least a second textual content item being concurrently displayed in the first portion of the display;
receiving user input from the user in the second portion of the display for the first textual content item of the at least one GUI element;
based at least in part on the user input, updating at least some of the text of the first textual content item specified by the at least one GUI element in the displayed GUI; and
updating the code defining the at least one GUI element with the updated text, wherein the updated text is displayed when the at least one GUI element is rendered in another display.

US Pat. No. 10,169,310

RICH TEXT HANDLING FOR A WEB APPLICATION

INTERNATIONAL BUSINESS MA...

1. A computer program product comprising a computer readable hardware storage memory device having readable program code embodied in the hardware storage memory device, the computer program product includes at least one component, the at least one component executed by a processor to:
initialize a dictionary containing words;
create at least one signature for each dictionary word;
add each dictionary word to at least one list keyed by each of the at least one signatures for each dictionary word;
determine that a word is misspelled by checking the dictionary for the misspelled word resulting in a null value, the checking the dictionary comprising determining whether the misspelled word is present in the at least one list for a primary signature of the misspelled word, and when the misspelled word is not present in the at least one list, then the misspelled word is not spelled correctly resulting in the null value;
create a substitution list for the misspelled word, when the misspelled word is not spelled correctly, which includes:
creating at least one signature associated with the misspelled word;
finding all the dictionary words in the at least one list keyed by the at least one signature associated with the misspelled word; and
selecting best matches to the misspelled word;
provide from the selected best matches at least one replacement word for the misspelled word in the documents having rich text,
wherein the dictionary is initialized from wordlists and then instantiated and serialized as a serialized hashtable,
the serialized hashtable of the dictionary comprises property files in Java code for a dictionary class,
the dictionary is stored on a server side in a hardware server memory for fast retrieval, and
the at least one component is executed in a servlet environment which includes server-side Java programs that are loaded and run within a framework of a web server.
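The signature-keyed lookup this claim describes — bucket dictionary words by signature, test membership for the word's primary signature, and draw substitution candidates from the matching buckets — can be sketched as below. The sorted-letters signature is a stand-in assumption; the claim does not fix a particular signature function.

```python
from collections import defaultdict
import difflib

def signature(word: str) -> str:
    # assumed signature: the word's letters in sorted order
    return "".join(sorted(word.lower()))

def build_index(wordlist):
    # each dictionary word is added to a list keyed by its signature
    index = defaultdict(list)
    for w in wordlist:
        index[signature(w)].append(w)
    return index

def suggest(word: str, index, wordlist):
    bucket = index.get(signature(word), [])
    if word in bucket:
        return [word]  # present in the list for its primary signature: spelled correctly
    # otherwise build a substitution list: same-signature words, then a fuzzy fallback
    return bucket or difflib.get_close_matches(word, wordlist, n=3)

words = ["there", "three", "their", "apple"]
idx = build_index(words)
```

For example, `suggest("trehe", idx, words)` finds `there` and `three`, which share the transposed word's signature. The claim's serialized-hashtable persistence on the server side is omitted here.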

US Pat. No. 10,169,309

GENERATION OF COMBINED DOCUMENTS FROM CONTENT AND LAYOUT DOCUMENTS BASED ON SEMANTICALLY NEUTRAL ELEMENTS

International Business Ma...

1. A method for managing markup documents each having a definition conforming to a pre-defined specification, the method comprising:
receiving, by a software application executing on a web server, a request to access a web site hosted on the web server;
dispatching, by the software application, the request to a web application executing on the web server;
retrieving, by the web application, a definition of a content markup document from a web page repository that maintains web pages for the web site, the definition of the content markup document comprising a set of one or more content portions each enclosed within a content element of semantically neutral type and having a corresponding content identifier;
returning, by the web application, the definition of the content markup document to a combiner application executing on the web server;
retrieving, by the combiner application interacting with the web application, a definition of a layout markup document, the definition of the layout markup document comprising a set of one or more layout elements of semantically neutral type each having a corresponding layout identifier;
generating, by the combiner application, a definition of a combined markup document from the definition of the layout markup document and the definition of the content markup document, wherein generating the definition of the combined markup document comprises inserting the content portion enclosed within each content element of the content markup document into the definition of the layout markup document in correspondence to each layout element with the corresponding layout identifier matching the corresponding content identifier of the content element;
returning, by the combiner application, the definition of the combined markup document to the software application;
wherein each content element comprises a start metadata element for a start thereof and having a first attribute with a pre-defined start value and a second attribute whose value defines the content identifier, and an end metadata element for an end thereof and having a second attribute with a pre-defined end value, and wherein each layout element is a metadata element having a third attribute with a pre-defined layout value and a fourth attribute whose value defines the layout identifier;
determining the layout elements comprised in the definition of the layout markup document; and
storing an indication of the layout elements, wherein the step of generating the definition of the combined markup document comprises:
retrieving the stored indication of the layout elements; and
for the retrieved stored indication of each layout element of the layout elements, searching for each content element in the content markup document with the corresponding content identifier that matches the corresponding layout identifier of the layout element;
updating a placeholder repository comprising the indication of the layout elements if the definition of the layout markup document has been updated since a last use thereof, wherein updating the placeholder repository comprises:
scanning the layout markup document to search for potential placeholders; and
responsive to finding a placeholder in the layout markup document, adding a corresponding layout element to an entry for the layout markup document maintained in the placeholder repository;
wherein the definition of the layout markup document and the definition of the content markup document each comprise a header and a body, and wherein metadata elements are used to enclose the content portions in the header of the content page, and to define placeholders in the header of the layout page.
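The core combining step — insert each content portion into the layout document wherever a semantically neutral layout element carries a matching identifier — can be sketched as follows. The `<span data-layout=…>` markup is an assumption standing in for the claim's metadata elements, and the content is passed as a plain mapping rather than parsed from a content document.

```python
import re

layout = ('<html><body><div><span data-layout="title"></span></div>'
          '<div><span data-layout="body"></span></div></body></html>')
content = {"title": "Quarterly Report", "body": "Sales rose 4%."}

def combine(layout_doc: str, content_portions: dict) -> str:
    """Replace each placeholder layout element with the content portion
    whose identifier matches the placeholder's layout identifier."""
    def substitute(match):
        return content_portions.get(match.group(1), "")
    return re.sub(r'<span data-layout="([^"]+)"></span>', substitute, layout_doc)

combined = combine(layout, content)
```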

US Pat. No. 10,169,306

ENHANCED FAVORITES SERVICE FOR WEB BROWSERS AND WEB APPLICATIONS

Oath Inc., Dulles, VA (U...

1. A computer-implemented method for providing persistent access to a data feed listing in a web page, the method comprising the following operations performed by at least one processor:
displaying, on a user device, a window of a web browser;
displaying, in the window of the web browser, a list linking to one or more feed-enabled pages;
receiving, via the user device, input specifying a user operation associated with at least one of the one or more feed-enabled pages;
providing, to at least one server, data associated with the user operation;
caching, by the at least one server, the data associated with the user operation;
analyzing, by the at least one server, the data associated with the user operation for URL information;
matching, by the at least one server, the URL information with taxonomy path data in a database; and
in response to the user operation associated with at least one of the one or more feed-enabled pages, displaying a tearoff object configured to automatically provide updated feed information in a persistent window separate from the window of the web browser.

US Pat. No. 10,169,304

PROVIDING DIFFERENT FONT HINTS BASED ON DEVICE, TEXT AND FONT CONTEXT

Amazon Technologies, Inc....

1. A method for a user device, the method comprising:
receiving, at the user device, an electronic document comprising text in a first font of a plurality of fonts and a hint tag set comprising a suggested order of hint types for the first font, wherein a first hint type is arranged in the suggested order according to a corresponding quality score for the first hint type, the quality score indicating how closely characters from a generated simulated presentation of the text match a predefined presentation of the characters;
determining, by a processing device, a hint type for the first font from the suggested order of hint types and according to a capability of the user device; and
utilizing the hint type for a presentation of the text of the electronic document.
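The determining step reduces to walking the suggested order (already sorted by quality score) and taking the first hint type the device is capable of. A minimal sketch, with invented hint-type names:

```python
def choose_hint(suggested_order, device_capabilities):
    """Pick the highest-quality hint type the device supports."""
    for hint in suggested_order:
        if hint in device_capabilities:
            return hint
    return None  # no supported hint type: fall back to unhinted rendering

# hypothetical hint tag set, best quality first
order = ["truetype-instructed", "autohint-full", "autohint-light"]
supported = {"autohint-light", "autohint-full"}  # device capability
chosen = choose_hint(order, supported)
```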

US Pat. No. 10,169,302

METHOD AND SYSTEM FOR PAGE DISPLAY, SERVER-END DEVICE, CLIENT DEVICE AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A method for page display, wherein the method is operable in a server-end device and comprises:
obtaining a first page content corresponding to a page to be displayed at a client device;
allocating a page identity information to the first page content, wherein the page identity information has a one-to-one correspondence relationship with the first page content, comprising:
obtaining a page identity information category corresponding to the first page content, the page identity information category is divided as size ranges, colors and background brightness of the page;
obtaining a current value of a visual business to customer tag (vb2ctag) corresponding to the first page content based on the page identity information category, wherein the vb2ctag is a 5-digit number representing different page identity information; and
increasing the current value of the vb2ctag by one as the page identity information of the first page content;
generating a first page representation information from the first page content and the page identity information allocated to the first page content, and storing the generated first page representation information in the server-end device;
generating page invoking information from the page identity information, wherein the page invoking information contains the page identity information and is used to invoke the first page representation information;
obtaining a second page representation information; and
obtaining a second page content and the page identity information corresponding to the second page content from the second page representation information,
wherein a process of obtaining the second page content and the page identity information from the second page representation information is inverse to a process of generating the first page representation information from the first page content and the page identity information, and
the first page content and the second page content correspond to the same page identity information.

US Pat. No. 10,169,301

INTEGRATED DOCUMENT EDITOR

8. The computing device according to claim 6 wherein said region is further defined based on at least one of said plurality of document locations as represented on the touch screen being within said region.

US Pat. No. 10,169,299

ANALYZING DOCUMENT CONTENT AND GENERATING AN APPENDIX

International Business Ma...

1. A method for generating an appendix from document content comprising:
analyzing a document to identify a structure of the document;
in response to identifying the structure of the document, extracting semantic relationships, wherein the extracting semantic relationships further comprises:
extracting a semantic relationship from each identified sentence using example statistical modeling, wherein the semantic relationship comprises a subject, a predicate, and an object;
responsive to extracting and identifying the semantic relationship, applying statistical distribution analysis to record a position where the subject and the object appear in the document;
identifying and eliminating semantic relationships that are trivial relations, wherein trivial relations are semantic relationships that do not have content relevant to a main topic of the document, and wherein the trivial relations are at least one of:
the subject being evenly distributed within a section of the document, wherein a section is selected from a group consisting of a sentence, a paragraph, a page, and a chapter; and
the object being evenly distributed within the section of the document; and
storing a relation as a candidate appendix topic in persistent storage for further evaluation for inclusion in the appendix, wherein the relation is a semantic relationship that is either the main topic of the document or does not have a subject or an object evenly distributed in the document;
in response to extracting semantic relationships, determining candidate appendix topics based on a degree of interdependency;
in response to determining candidate appendix topics, executing a web mining operation, wherein the web mining operation calculates a measure of relevance of the mined web page to the determined candidate appendix topics, and wherein the determining further comprises:
detecting at least one interdependency between two or more relations stored in the persistent storage;
graphing the at least one interdependency between the two or more relations, wherein the two or more relations comprise nodes of a graph, and the at least one interdependency comprise an edge of the graph; and
identifying candidate appendix topics, based on a degree of relatedness between the nodes of the graph, centrality, between-ness, and connected-ness;
storing the identified candidate appendix topics in persistent storage; and
formatting the appendix based on the mined intermediate results.
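The graphing step of this claim — stored relations become nodes, detected interdependencies become edges, and candidates are ranked by relatedness measures such as centrality — can be sketched with degree centrality as the measure. The relations and the "shared subject or object" interdependency test are invented examples, not the claimed analysis.

```python
from collections import defaultdict
from itertools import combinations

# invented (subject, predicate, object) relations
relations = [
    ("compiler", "emits", "bytecode"),
    ("interpreter", "executes", "bytecode"),
    ("compiler", "reads", "source"),
]

def rank_by_degree(rels):
    """Edge between two relations that share a subject or object;
    rank nodes by degree centrality."""
    edges = defaultdict(set)
    for a, b in combinations(rels, 2):
        if {a[0], a[2]} & {b[0], b[2]}:
            edges[a].add(b)
            edges[b].add(a)
    return sorted(rels, key=lambda r: len(edges[r]), reverse=True)

ranked = rank_by_degree(relations)
```

Here the first relation touches both other nodes, so it ranks first as the strongest candidate appendix topic.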

US Pat. No. 10,169,297

RESISTIVE MEMORY ARRAYS FOR PERFORMING MULTIPLY-ACCUMULATE OPERATIONS

HEWLETT PACKARD ENTERPRIS...

1. A resistive memory array comprising:
a number of resistive memory elements to receive a common-valued read signal, in which a resistance of a resistive memory element defines a value within a matrix;
a number of multiplication engines to perform a multiply operation by:
receiving a memory element output from a corresponding resistive memory element;
receiving an input signal; and
generating a multiplication output based on a received memory element output and a received input signal;
a conditioning resistor to condition the multiplication outputs; and
an accumulation engine to sum the multiplication outputs from the number of multiplication engines, in which the summed multiplication outputs represent a multiplication of the matrix and a number of input signals.
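Numerically, the array computes a dot product by Ohm's law and current summation: each element's conductance G encodes a matrix value, the common read voltage V excites it, and the accumulated current is the sum of per-element products with the input signals. A small numeric sketch with illustrative values:

```python
def multiply_accumulate(conductances, inputs, read_voltage=1.0):
    """Sum of per-element products: each multiplication engine combines a
    memory-element output (G * V) with its input signal; the accumulation
    engine sums the results, like currents on a shared line."""
    products = [g * read_voltage * x for g, x in zip(conductances, inputs)]
    return sum(products)

row = [0.5, 1.0, 2.0]   # one matrix row stored as conductances
x = [2.0, 3.0, 0.5]     # input signals
result = multiply_accumulate(row, x)  # dot(row, x)
```

The conditioning resistor in the claim compensates analog non-idealities and has no counterpart in this idealized arithmetic.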

US Pat. No. 10,169,296

DISTRIBUTED MATRIX MULTIPLICATION FOR NEURAL NETWORKS

Intel Corporation, Santa...

1. An apparatus, comprising:
a plurality of memory elements to store matrix data, wherein the matrix data comprises a plurality of input matrices; and
a plurality of processing elements to perform a matrix operation associated with the plurality of input matrices, wherein the plurality of processing elements is configured to:
partition the plurality of input matrices into a plurality of input partitions, wherein the plurality of input matrices is partitioned based on a number of available processing elements;
distribute the plurality of input partitions among the plurality of processing elements, wherein each input partition is distributed to a particular processing element of the plurality of processing elements;
perform a plurality of partial matrix operations using the plurality of processing elements;
transmit partial matrix data between the plurality of processing elements while performing the plurality of partial matrix operations; and
determine a result of the matrix operation based on the plurality of partial matrix operations.
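One simple reading of the partition-distribute-combine scheme is a row-wise split of the left matrix across processing elements, each computing a partial product, with the partials reassembled into the result. This is a pure-Python sketch of that general pattern; there is no claim it matches the patented architecture's partitioning or inter-element transfers.

```python
def matmul(a, b):
    # plain reference matrix multiply
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def distributed_matmul(a, b, num_elements):
    """Partition A by rows based on the number of available processing
    elements, run one partial multiply per element, concatenate results."""
    chunk = max(1, len(a) // num_elements)
    partitions = [a[i:i + chunk] for i in range(0, len(a), chunk)]
    partials = [matmul(p, b) for p in partitions]  # one per processing element
    return [row for partial in partials for row in partial]

A = [[1, 2], [3, 4], [5, 6], [7, 8]]
B = [[1, 0], [0, 1]]
C = distributed_matmul(A, B, num_elements=2)
```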

US Pat. No. 10,169,294

CONFIGURABLE FFT ARCHITECTURE

Imagination Technologies ...

1. A device for performing a Fast Fourier Transform (FFT) on an input dataset, the device comprising:
an FFT pipeline comprising a first stage configured to receive the input dataset, a plurality of intermediate stages and a final stage, each stage comprising: a stage input; a computational element; and a stage output;
a controller configured to select a size for the FFT;
a multiplexer configured to: receive data output from one of the intermediate stages and data output from the final stage; select one of the received outputs in dependence on the selected FFT size; and output said selection as a result of the FFT on the input dataset;
a complex multiplier configured to perform multiplication of data at a point in the FFT pipeline, wherein the controller is further configured to select a multiplication factor for performing the multiplication by the complex multiplier in dependence on the FFT size; and
a constant multiplier configured to perform multiplication of data at a point in the FFT pipeline, wherein the controller is further configured to select a value from a precomputed set of values for performing the multiplication by the constant multiplier, the value being selected in dependence on the FFT size.

US Pat. No. 10,169,292

STRING VARIABLES REPRESENTATION IN SOLVERS

International Business Ma...

1. A method for solving a Constraint Satisfaction Problem (CSP) having a constraint associated with at least one string variable, comprising:
defining at least one string variable using a string domain data structure representing a domain of string values for string variables, wherein the string data structure represents the domain of string values as a Deterministic Finite Automaton (DFA), wherein the DFA comprises nodes and edges, wherein the nodes comprise an initial node and one or more accepting nodes, wherein the DFA has no back loops having size greater than 1, wherein the edges of the DFA represent one or more characters, wherein the domain of string values comprises each string for which a path in the DFA, according to characters in the edges of the path, exists beginning in the initial node and ending in one of the one or more accepting nodes;
defining a constraint for the CSP, wherein the constraint involves the at least one variable and wherein the constraint is to be complied with by a solution to the CSP;
invoking a CSP solver adapted to determine a solution to variables including the at least one string variable while complying with the constraint, and to invoke operations performed over the domain of the at least one string variable, wherein the CSP solver is configured to perform at least one of the following:
value propagation into the domain of string values, wherein the value propagation reduces a size of the domain; and
value selection of a value from the domain of string values, wherein the value selection reduces the domain, whereby value propagation to domains of one or more other variables is invoked by the CSP solver.
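A string domain held as a DFA with no back loops of size greater than 1 represents a language the solver can test and enumerate. The sketch below shows membership and enumeration over such a domain; the transition table is an invented example (here fully acyclic, so the domain is finite), and no propagation logic is attempted.

```python
def accepts(dfa, accepting, s, start=0):
    """Is string s in the domain? Follow edges character by character."""
    state = start
    for ch in s:
        state = dfa.get((state, ch))
        if state is None:
            return False
    return state in accepting

def enumerate_domain(dfa, accepting, start=0):
    """DFS over the acyclic automaton, collecting accepted strings."""
    out = []
    def walk(state, prefix):
        if state in accepting:
            out.append(prefix)
        for (st, ch), nxt in dfa.items():
            if st == state:
                walk(nxt, prefix + ch)
    walk(start, "")
    return sorted(out)

# domain {"ab", "ac"}: from state 0 on 'a', then 'b' or 'c' into accepting state 2
dfa = {(0, "a"): 1, (1, "b"): 2, (1, "c"): 2}
domain = enumerate_domain(dfa, accepting={2})
```

Value propagation in the claim would shrink such an automaton (removing edges or states) rather than enumerate it, which matters when the domain is large.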

US Pat. No. 10,169,290

DATA PROCESSING METHOD AND APPARATUS

Huawei Technologies Co., ...

1. A method for processing data by a data processing apparatus, the method comprising:
acquiring historical data, by a network interface of the data processing apparatus, wherein the historical data belongs to a first level and a second level, and data corresponding to the first level comprises data corresponding to the second level;
generating, by a processor of the data processing apparatus, from the historical data, a first-granularity data set according to a first granularity;
generating, by the processor, from the historical data, a second-granularity data set according to a second granularity, wherein the first granularity and the second granularity respectively correspond to the first level and the second level;
performing, by the processor, modeling for a second-granularity forecasting model according to the first-granularity data set and the second-granularity data set;
performing, by the processor, forecasting by using the second-granularity forecasting model; and
obtaining, by the network interface, second-granularity forecast data.
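One way to use both granularities — forecast the coarse (first-granularity) total, then split it by each fine (second-granularity) series' historical share — is sketched below. The naive-mean coarse model and proportional allocation are assumptions; the claim does not fix a particular model.

```python
def forecast_fine(coarse_history, fine_history):
    """Second-granularity forecast built from both data sets:
    a coarse forecast allocated by fine-level historical shares."""
    coarse_forecast = sum(coarse_history) / len(coarse_history)  # naive mean model
    fine_totals = {k: sum(v) for k, v in fine_history.items()}
    grand = sum(fine_totals.values())
    return {k: coarse_forecast * t / grand for k, t in fine_totals.items()}

coarse = [100, 110, 90]                              # first-granularity history
fine = {"east": [60, 66, 54], "west": [40, 44, 36]}  # second-granularity history
forecast = forecast_fine(coarse, fine)
```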

US Pat. No. 10,169,289

MEMORY SYSTEM AND METHOD FOR ACCELERATING BOOT TIME

SK Hynix Inc., Gyeonggi-...

1. A memory system comprising:
a plurality of memory channels, each of the plurality of memory channels includes a plurality of memory dies and a die processor, each of the plurality of memory dies includes a plurality of memory blocks; and
a memory controller including a monarch processor, coupled to the plurality of memory channels,
wherein the die processor on each of the plurality of memory channels is configured in parallel to:
process to find last written data within at least a predetermined block of the plurality of memory dies; and
provide information regarding the last written data to the monarch processor, wherein the monarch processor determines which boot record is to be used to identify firmware images based on the information.

US Pat. No. 10,169,288

NODE INTERCONNECT ARCHITECTURE TO IMPLEMENT HIGH-PERFORMANCE SUPERCOMPUTER

International Business Ma...

1. A computing system, comprising:
a plurality N of computing groups which are optically connected to each other to form a computing system, wherein each computing group comprises:
a local group of N multi-processor modules;
a local group of N optical redistribution boxes;
a plurality (N×N) of local optical bundles which optically connect the local group of N multi-processor modules to the local group of N optical redistribution boxes; and
a plurality (N×N) of global optical bundles, wherein N of the global optical bundles optically connect the local group of N optical redistribution boxes to the local group of N multi-processor modules, and wherein (N×N)−N of the global optical bundles optically connect other local groups of optical redistribution boxes in other computing groups to the local group of N multi-processor modules.
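The bundle counts per computing group follow directly from the claim: N×N local bundles, and of the N×N global bundles, N originate in the group's own redistribution boxes while N×N − N arrive from the other groups. A quick arithmetic check:

```python
def bundle_counts(n):
    """Per-group optical bundle counts implied by the claim for N = n."""
    local = n * n                       # local bundles: modules to redistribution boxes
    global_total = n * n                # global bundles terminating at this group
    global_from_other_groups = n * n - n  # all but N come from other computing groups
    return local, global_total, global_from_other_groups

local, g_total, g_other = bundle_counts(4)  # e.g. a 4-group system
```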

US Pat. No. 10,169,287

IMPLEMENTING MODAL SELECTION OF BIMODAL COHERENT ACCELERATOR

International Business Ma...

1. A system for implementing modal selection of a bimodal coherent accelerator in a computer system comprising:
a system processor;
a Peripheral Component Interconnect Express (PCIE) standard Vendor Specific Extended Capability (VSEC) structure or Coherently Attached Processor Interface (CAPI) VSEC data in the configuration space of a CAPI-capable PCIE adapter;
said system processor using the CAPI VSEC data in the configuration space of a CAPI-capable PCIE adapter and procedures defined in the Coherent Accelerator Interface Architecture (CAIA) to detect, enable and control a coherent coprocessor adapter over PCIE;
said system processor enabling the CAPI-capable PCIE adapter to be bimodal and operate in a conventional PCI-Express (PCIE) transaction mode or a CAPI mode utilizing CAIA coherence and programming interface capabilities;
configuration firmware in the computer system in which the PCIE adapter is installed; and
said CAPI-capable PCIE adapter is enabled to be selectively configured and enabled in either PCIE transaction mode or CAPI mode by the configuration firmware in the computer system in which the PCIE adapter is installed; and
said CAPI mode enabling CAPI coherent accelerator functions over PCIE utilizing a Coherent Accelerator Interface Architecture (CAIA) accelerator including the configuration space, a Processor Service Layer (PSL) and a plurality of Accelerator Function Units (AFUs).

US Pat. No. 10,169,285

PORTABLE COMPUTING SYSTEM AND PORTABLE COMPUTER FOR USE WITH SAME

1. A computing system having a disconnected state and a connected state for operation, the computing system comprising:
a portable computer comprising:
a processor;
a controller;
at least one integrated circuit for storing data; and
at least one connector,
the portable computer being no greater in size than 100 mm by 60 mm and no greater than 6 mm thick, a thermally conductive coating being disposed between the processor and an outer metallic case of the portable computer; and
a reader comprising:
a housing;
at least one input port;
at least one output port; and
at least one connector,
the reader having no user application processing CPU,
the portable computer and the reader comprising, in the connected state:
the at least one connector of the portable computer connected to the at least one connector of the reader;
the at least one input port and the at least one output port of the reader providing access to the portable computer, in order to access the stored data contained on the at least one integrated circuit of the portable computer; and
the reader routing signals between the at least one input port and the at least one output port of the reader and the portable computer, for accessing the stored data contained on the at least one integrated circuit of the portable computer; and
the portable computer and the reader comprising, in the disconnected state:
the at least one connector of the portable computer disconnected from the at least one connector of the reader; and
the at least one input port and the at least one output port of the reader providing no access to the at least one integrated circuit, so there is no ability to access the stored data contained in the at least one integrated circuit of the portable computer via the reader.

US Pat. No. 10,169,283

CUSTOM DATA TRANSFER CONNECTOR AND ADAPTER

ARRIS Enterprises LLC, S...

1. A system for transferring data between a first device and a second device, the system comprising:
an adapter base, wherein the adapter base is attached to the first device, and wherein the adapter base is secured to an enclosure of the first device;
a data transfer adapter, wherein the data transfer adapter comprises a serial AT attachment adapter, wherein the data transfer adapter is attached to the adapter base, and wherein the data transfer adapter comprises a clip at each end of the data transfer adapter, the clips providing for an attachment of the data transfer adapter to the adapter base and for a removal of the data transfer adapter from the adapter base; and
a connector opening coupled to the second device, wherein the connector opening comprises a serial AT attachment connector, wherein the connector opening is mounted to the bottom of an enclosure of the second device, and wherein the connector opening comprises one or more connector ports positioned within the connector opening according to a first orientation;
wherein the data transfer adapter comprises a pinout configuration that mates with the one or more connector ports positioned within the connector opening according to the first orientation, wherein the pinout configuration of the data transfer adapter mates with the one or more connector ports positioned within the connector opening when the enclosure of the first device is temporarily attached to the enclosure of the second device.

US Pat. No. 10,169,282

BUS SERIALIZATION FOR DEVICES WITHOUT MULTI-DEVICE SUPPORT

International Business Ma...

1. A method comprising:
providing, by a first communication hardware set, data communication between a first master device and a set of controlled device(s) through a set of bus communication line(s);
providing, by a second communication hardware set, data communication between the second master device and the set of controlled device(s) through the set of bus communication line(s);
during the pendency of a communication session, between the first master device and the set of controlled device(s) through the first communication hardware set, sending, by a control unit, a first signal to the second master device to cause the second master device to suspend, as a hardware response, any communication with the set of controlled device(s); and
during the pendency of a communication session, between the second master device and the set of controlled device(s) through the second communication hardware set, sending, by the control unit, a second signal to the first master device to cause the first master device to suspend, as a hardware response, any communication with the set of controlled device(s);
wherein:
the set of bus communication line(s) includes a two line Inter-Integrated Circuit (I2C) bus;
the first signal is a clock stretching signal;
the second signal is a clock stretching signal; and
the first hardware communication set includes a first serializer-master communication hardware set, a first switch and a first controlled-bus communication hardware set.

US Pat. No. 10,169,280

DATA PROCESSING APPARATUS AND TERMINAL

HUAWEI TECHNOLOGIES CO., ...

1. An apparatus, comprising:
an input switching module;
a buffer module; and
an output switching module;
wherein the buffer module comprises N buffer units, and N is a positive integer greater than 1;
wherein a first input end to an Nth input end of the input switching module are respectively connected to a first input end to an Nth input end of the apparatus, and a first output end to an Nth output end of the input switching module respectively correspond to a first buffer unit to an Nth buffer unit comprised in the buffer module; and
wherein a first input end to an Nth input end of the output switching module respectively correspond to the first buffer unit to the Nth buffer unit, and a first output end to an Nth output end of the output switching module are respectively connected to a first output end to an Nth output end of the apparatus;
wherein the input switching module is configured to acquire target data transmitted by a target input end of the apparatus, wherein the target input end is one or more input ends of the apparatus;
wherein the apparatus further comprises a write arbiter, a read arbiter, and a rearranger;
wherein a control end of the write arbiter is connected to a control end of the input switching module, and the write arbiter is configured to control the input switching module to store the target data into a target buffer unit, wherein the target buffer unit is one or more buffer units of the N buffer units;
wherein a control end of the read arbiter is connected to a control end of the output switching module, and the read arbiter is configured to control the output switching module to read the target data from the target buffer unit;
wherein the first output end to the Nth output end of the output switching module are respectively connected to a first input end to an Nth input end of the rearranger;
wherein a first output end to an Nth output end of the rearranger are respectively connected to the first output end to the Nth output end of the apparatus;
wherein the read arbiter is further configured to control the output switching module to transmit the target data to a target input end of the rearranger, wherein the target input end is an input end of the rearranger that is used to transmit the data to a target output end of the rearranger, the target output end is an output end of the rearranger that is connected to a destination port of the target data, and a destination end of the target data is one or more output ends of the apparatus; and
wherein the rearranger is configured to, when there are a plurality of pieces of data that are in a storage space of the rearranger and whose destination ports are the same as the destination port of the target data, sort the plurality of pieces of data whose destination ports are the same, and then output the plurality of pieces of data whose destination ports are the same to the destination port according to a result of the sorting.
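The rearranger's behavior in the last limitation can be sketched as below; the tuple layout and the use of a sequence number as the sort key are illustrative assumptions, since the claim only requires that same-destination pieces be sorted before output.

```python
# Sketch of the claim's rearranger: pieces of data sharing a destination
# port are grouped, sorted (here by an assumed sequence number), and then
# emitted to that port in sorted order.
from collections import defaultdict

def rearrange(pieces):
    """pieces: list of (dest_port, seq, payload) tuples. Returns a dict
    mapping each port to its payloads in sorted (seq) order."""
    by_port = defaultdict(list)
    for dest, seq, payload in pieces:
        by_port[dest].append((seq, payload))
    return {port: [p for _, p in sorted(group)] for port, group in by_port.items()}

out = rearrange([(1, 2, "b"), (0, 0, "x"), (1, 1, "a")])
print(out[1])  # ['a', 'b']: same-port pieces output according to the sorting
```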

US Pat. No. 10,169,279

INPUT/OUTPUT CONTROL DEVICE, INPUT/OUTPUT CONTROL SYSTEM, AND INPUT/OUTPUT CONTROL METHOD FOR CONVERSION OF LOGICAL ADDRESS OF INSTRUCTION INTO LOCAL ADDRESS OF DEVICE SPECIFIED IN INSTRUCTION

NEC CORPORATION, Tokyo (...

1. An input/output control device connected to an input/output switch which transfers a received input/output instruction to an input/output device whose local address is specified in the input/output instruction, the input/output control device comprising: a memory that stores specific information about a processor as well as storing a conversion table for converting a logical address of the input/output device into the local address, with the specific information and the conversion table each being associated with a group ID (Identification) of a device group which includes the processor and the input/output device; and
circuitry which has a configuration to identify the group ID of the device group from the specific information about a sender processor, which information is obtained when an input/output instruction is received, convert the logical address included in the input/output instruction into the local address which is obtained from the conversion table for the identified group ID of the device group, and send the input/output instruction to the input/output switch,
wherein the local address is associated with an input/output device ID identifying the input/output device, the input/output device ID including a bus number representing a bus to which the input/output device is connected.
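The two-level lookup this claim describes can be sketched with plain dictionaries; the processor name, group ID, addresses, and bus label below are invented for illustration.

```python
# Sketch of the claim's conversion: sender-specific information identifies
# a group ID, and that group's conversion table maps a logical address to
# a local address (modeled here as a bus number plus device address).
specific_to_group = {"cpu-A": 7}              # processor info -> group ID (assumed)
conversion = {7: {0x1000: ("bus2", 0x40)}}    # group ID -> logical -> local (assumed)

def convert(sender_info, logical_addr):
    gid = specific_to_group[sender_info]      # identify the device group
    return conversion[gid][logical_addr]      # per-group conversion table lookup

print(convert("cpu-A", 0x1000))  # ('bus2', 64)
```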

US Pat. No. 10,169,278

LIN BUS MODULE

INFINEON TECHNOLOGIES AG,...

1. A network node for connecting to a Local Interconnect Network (LIN), the network node comprising: a bus terminal operably coupled to a data line to receive a data signal representing serial data via the data line, the data signal having a high signal level and a low signal level;
a receiver circuit coupled to the bus terminal, the receiver circuit including a comparator having a first input coupled to the bus terminal and a second input configured to receive a reference signal, wherein the comparator is configured to compare the data signal with the reference signal, and the comparator generates a binary output signal representing a result of the comparison;
a measurement circuit having an input coupled to the bus terminal, the measurement circuit configured to measure an amplitude of the high signal level of the data signal and to provide a first voltage signal at an output of the measurement circuit, the first voltage signal having a voltage proportional to the high signal level of the data signal received on the data line via the bus terminal; and
a scaling circuit having an input coupled to the output of the measurement circuit and an output coupled to the second input of the comparator, the scaling circuit configured to generate the reference signal from the first voltage signal, wherein the reference signal is provided at the output of the scaling circuit, and the reference signal is proportional to the first voltage signal.
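Numerically, the ratiometric threshold this claim builds can be sketched as below; the 0.5 scaling factor and the voltage values are assumptions for illustration, since the claim only requires the reference to be proportional to the measured high level.

```python
# Sketch of the claim's receiver threshold: the reference voltage tracks
# the measured high signal level through a fixed scaling factor, so the
# comparator's decision point is ratiometric to the bus level.
SCALE = 0.5  # scaling circuit gain (illustrative assumption)

def binary_output(sample_v, measured_high_v):
    reference = SCALE * measured_high_v     # reference proportional to high level
    return 1 if sample_v > reference else 0 # comparator result

print(binary_output(9.0, 12.0))  # 1: 9.0 V is above the 6.0 V reference
print(binary_output(4.0, 12.0))  # 0: 4.0 V is below it
```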

US Pat. No. 10,169,275

SYSTEM, METHOD, AND RECORDING MEDIUM FOR TOPOLOGY-AWARE PARALLEL REDUCTION IN AN ACCELERATOR

INTERNATIONAL BUSINESS MA...

1. A topology-aware parallel reduction system, comprising: a partitioning device configured to partition data in each accelerator of a plurality of accelerators into partitions that transfer data in first and second directions based on a topology of connections between the plurality of accelerators;
a control device configured to control, based on the topology of connections between the plurality of accelerators, a type of parallel reduction of data to use; and
an intra-root reduction device configured to use a full-duplex configuration of a PCIe bandwidth such that each accelerator of the plurality of connected accelerators selectively transfers data in either direction or in each direction simultaneously.

US Pat. No. 10,169,274

SYSTEM AND METHOD FOR CHANGING A SLAVE IDENTIFICATION OF INTEGRATED CIRCUITS OVER A SHARED BUS

QUALCOMM Incorporated, S...

1. A method for resetting a slave identification (SID) of an integrated circuit (IC) on a computing device, the method comprising: determining that a plurality of ICs in communication with a shared bus operating in a master/slave configuration have the same SID;
identifying a common memory address of the plurality of ICs where data stored in the common memory address of a first of the plurality of ICs is different than data stored in the common memory address of a second of the plurality of ICs;
receiving at each of the plurality of ICs over the shared bus a first new SID value and a second new SID value;
receiving at each of the plurality of ICs over the shared bus a match data;
comparing with logic at each of the plurality of the ICs the received match data with the data stored in the common memory address of the plurality of ICs; and
based on the comparison, when the received match data is the same as the data stored in the common memory address, changing the SID of the IC to the received first new SID value.
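The match-and-reassign protocol of this claim can be sketched as follows; the class, the single-byte memory model, and the concrete SID values are illustrative assumptions, not from the patent.

```python
# Sketch of the claim's SID reset: every IC with a clashing SID compares
# broadcast match data with its own byte at the common memory address and,
# on a match, adopts the first new SID value.
class IC:
    def __init__(self, sid, mem_byte):
        self.sid = sid
        self.mem_byte = mem_byte              # data at the common memory address

    def on_broadcast(self, new_sid_1, new_sid_2, match_data):
        if self.mem_byte == match_data:       # comparison step of the claim
            self.sid = new_sid_1              # change SID to first new value

ics = [IC(0x0A, 0x55), IC(0x0A, 0x66)]        # same SID, different memory data
for ic in ics:
    ic.on_broadcast(0x0B, 0x0C, 0x55)
print([ic.sid for ic in ics])  # [11, 10]: only the matching IC changed SID
```

Because the ICs differ at the chosen common address, repeating the broadcast with different match data can give every IC a unique SID.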

US Pat. No. 10,169,273

FORCED COMPRESSION OF SINGLE I2C WRITES

QUALCOMM Incorporated, S...

1. A method performed at a physical layer interface in a master device coupled to a serial bus, comprising: buffering a first single-byte transaction addressed to a first register at a first address in a slave device coupled to the serial bus in a first-in-first-out buffer of the physical layer interface;
receiving at the physical layer interface a second single-byte transaction addressed to a second register at a second address in the slave device coupled to the serial bus;
determining in the physical layer interface whether the second address is incrementally greater than the first address;
combining the second single-byte transaction with the first single-byte transaction to obtain a multi-byte transaction;
replacing the first single-byte transaction with the multi-byte transaction in the first-in-first-out buffer; and
transmitting a sequence of transactions output by the first-in-first-out buffer over the serial bus.
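The write-combining step of this claim can be sketched as below; the FIFO representation as a list of (start address, byte list) pairs is an assumption for illustration.

```python
# Sketch of the claim's forced compression: a buffered single-byte write is
# merged with an incoming one when the new register address is exactly one
# greater than the end of the buffered transaction, replacing it with a
# multi-byte transaction in the FIFO.
fifo = []  # first-in-first-out buffer of (start_addr, [bytes])

def enqueue_write(addr, byte):
    if fifo:
        start, data = fifo[-1]
        if addr == start + len(data):   # incrementally greater address?
            data.append(byte)           # combine into a multi-byte transaction
            return
    fifo.append((addr, [byte]))         # otherwise buffer a new transaction

enqueue_write(0x10, 0xAA)
enqueue_write(0x11, 0xBB)   # consecutive address: combined with the first
enqueue_write(0x20, 0xCC)   # non-consecutive: starts a new transaction
print(fifo)  # [(16, [170, 187]), (32, [204])]
```

Combining consecutive single-byte writes saves the per-transaction start/address overhead on the serial bus.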

US Pat. No. 10,169,272

DATA PROCESSING APPARATUS AND METHOD

INTERNATIONAL BUSINESS MA...

1. A data processing apparatus comprising: a plurality of processor cores;
a shared processor cache, the shared processor cache being connected to each of the processor cores and to a main memory, wherein in operation, the shared processor cache receives a descriptor sent by one of the processor cores regarding a transfer of requested data indicated by the descriptor from the shared processor cache to an input/output (I/O) device;
a bus controller, the bus controller being connected to the shared processor cache and receiving from the shared processor cache the descriptor sent by the one or more processor cores regarding the transfer of the requested data to trigger the bus controller to perform a data transfer according to the descriptor, wherein the descriptor is passed through the shared processor cache to the bus controller in parallel with the shared processor cache initiating prefetching the requested data from the shared processor cache or main memory by performing a direct memory access, and wherein based on receipt of the descriptor, the bus controller generates a data request to the shared processor cache to transfer the requested data to the I/O device;
a bus unit, the bus unit being connected to the bus controller and facilitating transferring data to or from the I/O device; and
wherein, by operation of the bus controller, the requested data is transferred from the shared processor cache to the bus unit for transfer to the I/O device.

US Pat. No. 10,169,271

DIRECT MEMORY ACCESS DESCRIPTOR

XILINX, INC., San Jose, ...

1. A system comprising: a memory;
a first buffer;
a second buffer; and
a direct memory access circuit coupled to the memory and first and second buffers and configured to:
receive a data transfer request indicating a first descriptor and a second descriptor, wherein the first descriptor indicates a first set of addresses of the first buffer from which a set of data is to be read and the second descriptor indicates a second set of addresses of the second buffer to which the set of data is to be written;
wherein:
the first descriptor references a first linked list of descriptor blocks,
the second descriptor references a second linked list of descriptor blocks, and
each of the descriptor blocks is stored in a contiguous portion of the memory, each descriptor block stores a set of descriptor entries that references a plurality of addresses of the first or second sets of addresses, and each descriptor entry includes a marker;
in response to receiving the data transfer request, transfer the set of data from the first set of addresses in the first buffer to the second set of addresses in the second buffer by traversing the first and second linked lists of descriptor blocks;
in response to the marker in a descriptor entry of the first descriptor or the second descriptor being a pause marker, pausing the transfer of the set of data from the first set of addresses in the first buffer for a period of time until the pause marker is removed; and
in response to the marker in a descriptor entry of the first descriptor or the second descriptor being a stop marker, ending the transfer of the set of data from the first set of addresses in the first buffer.
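The marker handling in this claim can be sketched as a simple traversal; a flat list of (address, marker) entries stands in for the linked descriptor blocks, and modeling "pause until the marker is removed" as skipping the entry is a deliberate simplification, labeled as such below.

```python
# Sketch of the claim's descriptor traversal: a stop marker ends the
# transfer, a pause marker stalls it (simplified here to skipping the
# entry rather than waiting for the marker to be cleared).
def transfer(entries, src, dst):
    """entries: list of (addr, marker) with marker in {None, 'pause', 'stop'}."""
    for addr, marker in entries:
        if marker == "stop":
            break                      # end the transfer of the set of data
        if marker == "pause":
            continue                   # simplified stand-in for waiting
        dst[addr] = src[addr]          # move one unit of data

src = {0: "a", 1: "b", 2: "c"}
dst = {}
transfer([(0, None), (1, "pause"), (2, "stop")], src, dst)
print(dst)  # {0: 'a'}: entry 1 was paused, entry 2 stopped the transfer
```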

US Pat. No. 10,169,270

TECHNIQUES FOR HANDLING INTERRUPT RELATED INFORMATION IN A DATA PROCESSING SYSTEM

International Business Ma...

1. A method of handling queued interrupts, comprising: determining, by an interrupt presentation controller (IPC), whether a received memory mapped input/output (MMIO) store request is associated with preempting a virtual processor (VP) thread;
in response to determining the MMIO store request is associated with preempting the VP thread:
writing, by the IPC, interrupt context information of the VP thread to a specified location in memory;
determining, by the IPC, whether an interrupt context table (ICT) indicates an interrupt is currently pending for the VP thread; and
in response to determining the ICT indicates an interrupt is currently pending for the VP thread, issuing, by the IPC, a redistribute message for the interrupt that is currently pending that causes the interrupt to be reassigned to a different VP thread.

US Pat. No. 10,169,269

ARCHITECTURE AND METHOD FOR MANAGING INTERRUPTS IN A VIRTUALIZED ENVIRONMENT

INTEL CORPORATION, Santa...

1. A system to manage interrupts, comprising: a processor; and
a non-transitory computer readable medium to store a set of instructions for execution by the processor, the set of instructions to cause the processor to:
receive an interrupt for a virtual machine (VM);
determine whether the VM is in a first mode of operation or a second mode of operation;
route the interrupt directly to a processor core for the VM and bypass a hypervisor when the VM is in the first mode of operation;
route the interrupt to the hypervisor for the VM when the VM is in the second mode of operation;
receive an end-of-interrupt (EOI) for the interrupt, the EOI to indicate completion of interrupt processing of the interrupt by the VM; and
determine whether a VM exit event is to occur based on a bit value stored in a register in response to the received EOI, the VM exit event to transition execution of an instruction by the VM to execution by the processor core or the hypervisor.
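The mode-dependent routing decision in this claim reduces to a simple branch; the mode names and return strings below are illustrative assumptions.

```python
# Sketch of the claim's routing: in the first mode the interrupt goes
# directly to the processor core, bypassing the hypervisor; in the second
# mode it is routed through the hypervisor.
def route_interrupt(vm_mode):
    if vm_mode == "direct":           # first mode of operation (assumed name)
        return "processor core"       # bypass the hypervisor
    return "hypervisor"               # second mode of operation

print(route_interrupt("direct"))   # processor core
print(route_interrupt("trap"))     # hypervisor
```

Direct delivery avoids a VM exit on every interrupt, which is the usual motivation for such a bypass mode.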

US Pat. No. 10,169,268

PROVIDING STATE STORAGE IN A PROCESSOR FOR SYSTEM MANAGEMENT MODE

Intel Corporation, Santa...

1. A system comprising: a first processor including a first core to execute instructions and to enter a system management mode (SMM), a first indicator to indicate whether a thread executing on the first core is in a long flow operation, a second indicator to indicate whether the thread is in a system management interrupt (SMI)-inhibited state, and a storage unit, wherein upon entry to the SMM the first core is to store an active state present in a state storage of the first core into the storage unit and to store a SMM execution state into the state storage of the first core, the storage unit dedicated to storage of the active state of the first core during the SMM;
a second processor including a second core to execute instructions and to enter the SMM, a first indicator to indicate whether a second thread executing on the second core is in a long flow operation, a second indicator to indicate whether the second thread is in the SMI-inhibited state, and a second storage unit, wherein upon entry to the SMM the second core is to store an active state present in a state storage of the second core into the second storage unit and to store a SMM execution state into the state storage of the second core, the second storage unit dedicated to storage of the active state of the second core during the SMM; and
a dynamic random access memory (DRAM) coupled to the first and second processors, wherein a portion of the DRAM is a system management random access memory (SMRAM) for the system.

US Pat. No. 10,169,267

TRANSACTIONAL EXECUTION ENABLED SUPERVISOR CALL INTERRUPTION WHILE IN TX MODE

International Business Ma...

1. A computer implemented method for managing an interruption while a processor is executing a transaction in a transactional-execution (TX) mode, the method comprising: executing, by a processor initiated into a TX mode by the executing, a transaction in a program context;
detecting, by the processor in the TX mode, an interruption request for an interruption;
based on the interruption being a TX compatible routine, accepting the interruption by the processor to execute the TX compatible routine in a supervisor context for changing supervisor resources;
executing the TX compatible routine within the TX mode;
executing, by the processor in the TX mode, a supervisor call (SC) instruction for requesting a rescindable operation by a supervisor on a supervisor resource, execution of the SC instruction causing the interruption request; and
returning to the program context to complete execution of the transaction.

US Pat. No. 10,169,264

IMPLEMENTING ROBUST READBACK CAPTURE IN A PROGRAMMABLE INTEGRATED CIRCUIT

XILINX, INC., San Jose, ...

1. A memory circuit in a programmable integrated circuit (IC), the memory circuit comprising: a control port and a clock port;
a configurable random access memory (RAM) having a control input and a clock input;
input multiplexer logic coupled to the control input and the clock input; and
a state machine coupled to the input multiplexer logic and configuration logic of the programmable IC, the state machine configured to:
in response to being enabled by the configuration logic, control the input multiplexer logic to switch a connection of the control input from the control port to the state machine and, subsequently, switch a connection of the clock input from the clock port to a configuration clock source; and
in response to being disabled by the configuration logic, control the input multiplexer logic to switch the connection of the clock input from the configuration clock source to the clock port and, subsequently, switch the connection of the control input from the state machine to the control port.

US Pat. No. 10,169,263

MEMORY SUBSYSTEM AND COMPUTER SYSTEM

International Business Ma...

1. A method comprising: estimating an access request frequency from a CPU to a memory subsystem by counting a number of CPU access requests and a number of requests other than CPU access requests, wherein a CPU access request is counted as plus one (+1) and a request other than a CPU access request and a system bus idle state are each counted as minus one (-1), wherein the CPU is connected to the memory subsystem via a system bus, and the memory subsystem comprises a DDR memory and a memory controller connected to the system bus;
comparing the estimated access request frequency with a predetermined threshold value stored in a register;
generating a clock gate signal to decimate an operating clock of the memory controller in response to a result of comparing the estimated access request frequency with the predetermined threshold value;
generating a dummy cycle signal to delay the timing of signal data output from the memory controller to the system bus in response to the result of comparing the estimated access request frequency with the predetermined threshold value; and
generating a clock enable signal to decimate an operating clock of the DDR memory in response to the result of comparing the estimated access request frequency with the predetermined threshold value.
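The up/down counting estimator of this claim can be sketched numerically; the threshold value and the event labels are assumptions for illustration.

```python
# Sketch of the claim's frequency estimate: CPU access requests count +1,
# other requests and system-bus idle states each count -1, and the running
# sum is compared with a register threshold to decide clock decimation.
THRESHOLD = 2  # predetermined threshold value in the register (assumed)

def estimate(events):
    """events: iterable of 'cpu', 'other', or 'idle'."""
    count = 0
    for e in events:
        count += 1 if e == "cpu" else -1
    return count

freq = estimate(["cpu", "cpu", "cpu", "idle", "other"])
gate_clock = freq < THRESHOLD      # low estimated demand -> decimate clocks
print(freq, gate_clock)  # 1 True
```

When the estimate falls below the threshold, the claim's clock-gate, dummy-cycle, and clock-enable signals all slow the memory path to save power.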

US Pat. No. 10,169,262

LOW-POWER CLOCKING FOR A HIGH-SPEED MEMORY INTERFACE

QUALCOMM Incorporated, S...

1. A method for operating a communication interface coupling a memory device and a memory controller, comprising: transmitting a first clock signal having a first frequency to the memory device;
using the first clock signal to control transmissions of commands to the memory device over a command bus of the communication interface;
using the first clock signal to control transmissions of first data over a data bus of the communication interface in a first mode of operation; and
in a second mode of operation,
transmitting a second clock signal having a second frequency greater than the first frequency to the memory device, and
using the second clock signal to control transmissions of second data over the data bus,
wherein the second clock signal is suppressed in the first mode of operation.

US Pat. No. 10,169,261

ADDRESS LAYOUT OVER PHYSICAL MEMORY

International Business Ma...

1. A computer system including an address translation device (ATD) configured to translate, within a main memory of the computer system, a physical address of a memory line to a storage location of the memory line, the main memory including a plurality of memory devices, each memory device of the plurality of memory devices having a respective memory capacity, each of the respective memory capacities including at least one contiguous memory portion of a uniform size, the memory line being stored in one of the at least one contiguous memory portions, the ATD comprising: a first data table structure having a set of consecutive rows, each row of the set of consecutive rows configured to uniquely identify one of the at least one contiguous memory portions; and
a first index calculation unit configured to calculate, for the physical address, a first row index that identifies a row of the first data table structure that identifies a memory portion, of the at least one contiguous memory portions, that includes the storage location of the memory line.

US Pat. No. 10,169,260

MULTIPROCESSOR CACHE BUFFER MANAGEMENT

International Business Ma...

1. A method comprising: receiving, from a first processor in a set of processors sharing a bus, a request for a first set of data;
receiving, from a second processor in the set of processors sharing the bus, a request for a second set of data;
writing a first portion of the first set of data and a first portion of the second set of data to a buffer;
writing additional portions of the first set of data and additional portions of the second set of data to the buffer as each additional portion is received;
determining that a portion of the first set of data has a higher priority to the bus than a portion of the second set of data based on a priority scheme, wherein the priority scheme specifies priority to the bus based on a sequential order, wherein the sequential order comprises:
(a) one or more sets of data that have not yet returned any portions of data; and
(b) one or more sets of data that have only returned one portion of data; and
granting the portion of the first set of data access to the bus.
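The sequential-order priority scheme of this claim can be sketched with a key function; encoding the two claimed tiers as numeric keys, and the third catch-all tier, are assumptions for illustration.

```python
# Sketch of the claim's priority scheme: sets of data that have not yet
# returned any portions outrank sets that have returned only one portion;
# anything beyond that is a catch-all tier the claim does not order.
def bus_priority(returned_portions):
    """Lower key = higher priority to the bus."""
    if returned_portions == 0:
        return 0      # (a) no portions returned yet
    if returned_portions == 1:
        return 1      # (b) only one portion returned
    return 2          # remaining sets (ordering assumed)

requests = [("set1", 1), ("set2", 0), ("set3", 3)]
winner = min(requests, key=lambda r: bus_priority(r[1]))
print(winner[0])  # set2: it has not returned any portions yet
```

Favoring sets with the fewest returned portions keeps every requesting processor's first data arriving early, rather than letting one transfer monopolize the bus.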

US Pat. No. 10,169,259

PATTERN-BASED SERVICE BUS ARCHITECTURE USING ACTIVITY-ORIENTED SERVICES

Savigent Software, Inc., ...

1. A pattern-based service bus comprising: a plurality of bus endpoints that interacts with bus participants external to the pattern-based service bus, each of the plurality of bus endpoints identified by a unique address and a type of interaction to be provided by the bus endpoint;
a bus-hosted service that implements patterns that define allowed interactions between each of the plurality of bus endpoints and the bus-hosted service, wherein the implemented patterns can be utilized by the plurality of bus endpoints to interact with the bus-hosted service and wherein the bus endpoints interact with the bus-hosted service without knowledge of one another;
a bus storage component that interacts with the bus-hosted service to store information relevant to operation of the pattern-based service bus; and
an access tool that provides an interface to allow an outside system and/or user to interact with the bus-hosted service and/or the bus storage component.

US Pat. No. 10,169,258

MEMORY SYSTEM DESIGN USING BUFFER(S) ON A MOTHER BOARD

Rambus Inc., Sunnyvale, ...

17. A system comprising: a processor coupled to one or more communication channels to communicate commands;
a first communication channel electrically coupling a first set of two or more dual in-line memory modules (DIMMs) and a first primary data buffer on a mother board, wherein at least one DIMM in said first set of two or more DIMMs comprises a first internal data buffer coupled to said first primary data buffer via said first communication channel;
a second communication channel electrically coupling a second set of two or more DIMMs and a second primary data buffer on said mother board, wherein at least one DIMM in said second set of two or more DIMMs comprises a second internal data buffer coupled to said second primary data buffer via said second communication channel; and
a third communication channel electrically coupling said first primary data buffer to said second primary data buffer, and coupling said first primary data buffer and said second primary data buffer to said processor, wherein said first primary data buffer and said second primary data buffer are configured to sample and retransmit data.

US Pat. No. 10,169,257

MODULE BASED DATA TRANSFER

Rambus Inc., Sunnyvale, ...

1. A method for transferring data between memory modules, the method comprising: sending a read request to a first memory module, the first memory module comprising one or more non-volatile memory devices;
sending a write request to a second memory module, the second memory module comprising one or more volatile memory devices, wherein the write request comprises an indicator that the second memory module is to capture data from a data bus, the data sent directly from the first memory module;
in response to the read request, sending data from the first memory module on the data bus, wherein the data bus electrically couples the first memory module, the second memory module, and a processor; and
in response to the write request, storing the data from the data bus into the second memory module.

US Pat. No. 10,169,255

INFORMATION-SHARING DEVICE, METHOD, AND TERMINAL DEVICE FOR SHARING APPLICATION INFORMATION

Sony Corporation, Tokyo ...

1. An information-sharing device, comprising: circuitry configured to:
obtain, from a sound output device, first application information, wherein the first application information indicates at least one first application of the sound output device and types of audio source devices connected to the sound output device,
wherein each audio source device of the audio source devices corresponds to an audio source that outputs audio data to the sound output device;
obtain second application information that indicates at least one second application on the information-sharing device;
generate shared information that is shared between the sound output device and the information-sharing device,
wherein the shared information is generated in a form of a list, wherein the list includes an order of an arrangement of a plurality of icons associated with the at least one first application and the at least one second application, and
wherein the order includes an ordinal position of each of the plurality of icons in the list,
wherein the shared information is generated based on the first application information and the second application information;
transmit the generated shared information to the sound output device;
select a first icon of the plurality of icons based on operation information received from the sound output device, wherein the operation information indicates an ordinal position of the first icon that is displayed on the sound output device; and
control a display screen of the information-sharing device to display the selected first icon.

US Pat. No. 10,169,254

INCREASING VIRTUAL-MEMORY EFFICIENCIES

Intel Corporation, Santa...

1. An apparatus comprising: one or more computer processors; and
a virtual machine monitor to be operated by the one or more computer processors to:
cause a first core and a second core of a computer processor of the one or more computer processors to execute one or more first instructions in accordance with permissions from a first virtual memory page table;
determine the second core should operate with a different set of permissions; and
cause the second core of the computer processor to execute one or more second instructions in accordance with permissions from a second virtual memory page table contemporaneously with execution of the one or more first instructions by the first core.

US Pat. No. 10,169,253

CRYPTOGRAPHIC MULTI-SHADOWING WITH INTEGRITY VERIFICATION

1. In a computer system comprising a virtual machine monitor (VMM) running on system hardware and supporting a virtual machine (VM), a method of controlling access to a cloaked data page stored in a system memory, the method comprising: creating, by a first shim in an address space of an application and in coordination with the VMM, a first shadow context associated with the application;
receiving, by the VMM, a request for access to the cloaked data page;
responsive to determining the cloaked data page is plaintext and the request does not correspond to a first execution context associated with the application:
unmapping, by the VMM, the cloaked data page from any mapped references to the cloaked data page not corresponding to the first execution context associated with the application,
encrypting, by the VMM, data in the cloaked data page, and
mapping, by the VMM, a location of the cloaked data page into a second shadow context associated with an execution context to which the request corresponds; and
responsive to determining the cloaked data page is encrypted and the request does correspond to the first execution context associated with the application:
verifying, by the VMM, integrity of encrypted data in the cloaked data page, and
if the integrity of the encrypted data in the cloaked data page is verified: decrypting, by the VMM, the encrypted data in the cloaked data page and storing the decrypted cloaked data page; and
mapping, by the VMM, a location of the decrypted cloaked data page into the first shadow context associated with the application.

US Pat. No. 10,169,252

CONFIGURING FUNCTIONAL CAPABILITIES OF A COMPUTER SYSTEM

International Business Ma...

1. A computer program product for configuring functional capabilities of a computer system comprising two or more persistent memories and two or more replaceable functional units, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: transferring, in response to a repair action for a first functional unit, first enablement data stored on the first functional unit to a servicer, wherein the first enablement data stored on the first functional unit is in a first persistent memory of the first functional unit, wherein the first enablement data is associated with a first unique identification number that corresponds to the first functional unit, the first enablement data specifying one or more functional capabilities of the first functional unit, and wherein the one or more functional capabilities are enabled in the first functional unit;
erasing, after transferring the first enablement data in the first persistent memory to the servicer, the first enablement data from the first persistent memory;
obtaining, by the servicer, in response to a replacement action for the first functional unit, a second unique identification number that corresponds to a second functional unit;
transferring, from the servicer, the first unique identification number and the second unique identification number to a trusted environment in the computer system;
transforming, in the trusted environment, the first enablement data to second enablement data by replacing the first unique identification number with the second unique identification number; and
transferring the second enablement data to the second functional unit, wherein the second enablement data is stored in a second persistent memory of the second functional unit, wherein the second enablement data specifies one or more functional capabilities of the second functional unit, and wherein the one or more functional capabilities of the second functional unit are the same as the one or more functional capabilities of the first functional unit.
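The enablement-data transform performed in the trusted environment can be sketched as below; the record layout, identification numbers, and capability names are invented for illustration.

```python
# Sketch of the claim's transform: the trusted environment rebinds the
# enablement record from the replaced unit's unique ID to the replacement
# unit's unique ID, leaving the functional capabilities unchanged.
def transform(enablement, old_id, new_id):
    assert enablement["unit_id"] == old_id   # record must belong to the old unit
    moved = dict(enablement)                 # keep the original record intact
    moved["unit_id"] = new_id                # replace first ID with second ID
    return moved

first = {"unit_id": "SN-001", "capabilities": ["fast-io", "crypto"]}
second = transform(first, "SN-001", "SN-042")
print(second["unit_id"], second["capabilities"] == first["capabilities"])
# SN-042 True
```

Binding capabilities to a unit's unique ID (and erasing them from the removed unit) is what prevents the repair action from duplicating licensed features.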

US Pat. No. 10,169,250

METHOD AND APPARATUS FOR CONTROLLING ACCESS TO A HASH-BASED DISK

TENCENT TECHNOLOGY (SHENZ...

1. A method for controlling access to a hash-based disk, the disk comprising a storage object, the storage object comprising a set of records and a hash value, the method comprising: constructing a Bloom filter for the storage object, the Bloom filter comprising an initial bit and a plurality of Bloom filter bits;
determining whether the storage object is accessed for a first time, wherein when the initial bit is a predefined value, the storage object is determined as being accessed for the first time;
when the storage object is determined as not being accessed for the first time, reading the set of records in the storage object;
filtering an access request to the storage object using the Bloom filter;
counting a number of unnecessary accesses and a number of read accesses, wherein the number of unnecessary accesses is a number of access requests filtered out by the Bloom filter, and the number of read accesses is a number of access requests to the storage object;
calculating a ratio of unnecessary accesses to the storage object based on the number of unnecessary accesses and the number of read accesses, wherein the ratio of unnecessary accesses is calculated by dividing the number of unnecessary accesses by the number of read accesses; and
when the ratio of unnecessary accesses is within a threshold range, selecting the storage object and allocating the Bloom filter to a second storage object.
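The filter-and-ratio bookkeeping in the claim above can be sketched in Python. All names, the filter size, and the hash construction are assumptions for illustration, not the patented implementation:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter guarding reads of a storage object (illustrative)."""
    def __init__(self, size=64, hashes=3):
        self.bits = [False] * size
        self.size = size
        self.hashes = hashes

    def _positions(self, key):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = True

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

class StorageObject:
    def __init__(self, records):
        self.records = set(records)
        self.bloom = BloomFilter()
        for r in self.records:
            self.bloom.add(r)
        self.unnecessary = 0   # read requests filtered out by the Bloom filter
        self.reads = 0         # total read requests to the storage object

    def access(self, key):
        self.reads += 1
        if not self.bloom.might_contain(key):
            self.unnecessary += 1   # filtered: the record cannot be present
            return None
        return key if key in self.records else None

    def unnecessary_ratio(self):
        # Divide unnecessary accesses by read accesses, as in the claim.
        return self.unnecessary / self.reads if self.reads else 0.0

obj = StorageObject(["a", "b", "c"])
for k in ["a", "x", "y", "b", "z"]:
    obj.access(k)
ratio = obj.unnecessary_ratio()
```

When the ratio falls within a configured threshold range, the filter's memory would be reallocated to a second storage object, a step elided here.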

US Pat. No. 10,169,249

ADJUSTING ACTIVE CACHE SIZE BASED ON CACHE USAGE

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing a cache in at least one memory device in a computer system to cache tracks stored in a storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:
maintaining an active cache list indicating tracks in an active cache comprising a first portion of the at least one memory device to cache the tracks in the storage during computer system operations;
maintaining an inactive cache list indicating tracks demoted from the active cache;
during caching operations, gathering information on active cache hits comprising access requests to tracks indicated in the active cache list and inactive cache hits comprising access requests to tracks indicated in the inactive cache list;
staging a track requested by an access request indicated in the inactive cache list from the storage to the active cache;
adding indication of the staged track to the active cache list; and
using the gathered information to determine whether to provision a second portion of the at least one memory device unavailable to cache user data to be part of the active cache for use to cache user data during the computer system operations.
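The active/inactive list scheme above resembles a ghost-list cache: the inactive list remembers only the identities of demoted tracks, and hits on it signal that a larger active cache would have helped. A minimal Python sketch, where the capacities, the 25% growth threshold, and all names are assumptions:

```python
from collections import OrderedDict

class AdaptiveCache:
    """Active LRU cache plus an 'inactive' ghost list of demoted track IDs."""
    def __init__(self, active_capacity, inactive_capacity):
        self.active = OrderedDict()     # track -> data actually cached
        self.inactive = OrderedDict()   # demoted track IDs only (no data)
        self.active_capacity = active_capacity
        self.inactive_capacity = inactive_capacity
        self.active_hits = 0
        self.inactive_hits = 0

    def access(self, track):
        if track in self.active:
            self.active_hits += 1
            self.active.move_to_end(track)
            return
        if track in self.inactive:
            # Ghost hit: stage the track back into the active cache.
            self.inactive_hits += 1
            del self.inactive[track]
        self._insert(track)

    def _insert(self, track):
        if len(self.active) >= self.active_capacity:
            demoted, _ = self.active.popitem(last=False)
            self.inactive[demoted] = None          # remember only the ID
            if len(self.inactive) > self.inactive_capacity:
                self.inactive.popitem(last=False)
        self.active[track] = "data"

    def should_grow_active_cache(self):
        # Provision extra memory when ghost hits show the cache is too small.
        total = self.active_hits + self.inactive_hits
        return total > 0 and self.inactive_hits / total > 0.25

cache = AdaptiveCache(active_capacity=2, inactive_capacity=4)
for t in [1, 2, 3, 1, 2, 3]:   # working set of 3 thrashes a 2-track cache
    cache.access(t)
```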

US Pat. No. 10,169,248

DETERMINING CORES TO ASSIGN TO CACHE HOSTILE TASKS

INTERNATIONAL BUSINESS MA...

1. A computer program product for dispatching tasks in a computer system having a plurality of cores, wherein each core is comprised of a plurality of processing units and at least one cache memory shared by the processing units on the core to cache data from a memory, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:
processing a task to determine one of the cores on which to dispatch the task;
determining whether the processed task is classified as cache hostile, wherein a task is classified as cache hostile when the task accesses more than a threshold number of memory address ranges in the memory; and
dispatching the processed task to at least one of the cores assigned to process cache hostile tasks.
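A minimal sketch of the claimed classification and dispatch policy. The threshold of 8 address ranges, the hash-based core selection, and all names are assumptions for illustration:

```python
def classify_cache_hostile(task_address_ranges, threshold=8):
    """A task is cache hostile when it touches more than `threshold` distinct
    memory address ranges (threshold value is an illustrative assumption)."""
    return len(set(task_address_ranges)) > threshold

def dispatch(task_address_ranges, hostile_cores, normal_cores, threshold=8):
    """Route cache-hostile tasks onto cores reserved for them, so they do not
    thrash caches shared with cache-friendly work."""
    pool = (hostile_cores if classify_cache_hostile(task_address_ranges, threshold)
            else normal_cores)
    # Pick a core from the chosen pool deterministically for this task.
    return pool[hash(tuple(sorted(set(task_address_ranges)))) % len(pool)]

hostile_core = dispatch(list(range(20)), hostile_cores=[6, 7], normal_cores=[0, 1, 2, 3])
friendly_core = dispatch([100, 101], hostile_cores=[6, 7], normal_cores=[0, 1, 2, 3])
```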

US Pat. No. 10,169,247

DIRECT MEMORY ACCESS BETWEEN AN ACCELERATOR AND A PROCESSOR USING A COHERENCY ADAPTER

International Business Ma...

8. An adapter for direct memory access (‘DMA’) between a processor and an accelerator, the processor coupled to the accelerator by the adapter, the adapter configured to carry out the steps of:providing, by the adapter, a translation tag (‘XTAG’) to the accelerator; and
responsive to receiving a DMA instruction for a DMA transfer, wherein the DMA instruction comprises the XTAG, generating, by the adapter, a DMA instruction comprising a real address based on the XTAG.

US Pat. No. 10,169,246

REDUCING METADATA SIZE IN COMPRESSED MEMORY SYSTEMS OF PROCESSOR-BASED SYSTEMS

QUALCOMM Incorporated, S...

1. A compressed memory system of a processor-based system, comprising:
a metadata circuit comprising a plurality of metadata entries each having a bit size of N bits omitted from a bit size of a full physical address addressable to a system memory, the system memory comprising a plurality of 2^N compressed data regions each comprising a plurality of memory blocks each associated with a full physical address, and a set of free memory lists of a plurality of 2^N sets of free memory lists, each corresponding to a plurality of free memory blocks of the plurality of memory blocks;
the metadata circuit configured to associate a plurality of virtual addresses to a plurality of abbreviated physical addresses stored in the plurality of metadata entries, each abbreviated physical address among the plurality of abbreviated physical addresses omitting N upper bits from a corresponding full physical address addressable to the system memory; and
a compression circuit configured to:
receive a memory access request comprising a virtual address;
select a compressed data region of the plurality of 2^N compressed data regions in the system memory, and a set of free memory lists of the plurality of 2^N sets of free memory lists based on a modulus of the virtual address and 2^N;
retrieve an abbreviated physical address corresponding to the virtual address from the metadata circuit; and
perform a memory access operation on a memory block of the plurality of memory blocks associated with the abbreviated physical address in the selected compressed data region.
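The address abbreviation can be illustrated with a few bit operations. Here N, the full address width, and the assumption that the region index supplies the omitted upper bits are all illustrative, not values from the patent:

```python
N = 3                       # upper bits omitted; gives 2**N = 8 compressed data regions
FULL_ADDR_BITS = 16         # full physical address width (assumed for the sketch)

def select_region(virtual_addr):
    # Region and free-memory-list set are picked by virtual_addr mod 2**N.
    return virtual_addr % (1 << N)

def abbreviate(full_physical_addr):
    # Store only the low FULL_ADDR_BITS - N bits in the metadata entry.
    return full_physical_addr & ((1 << (FULL_ADDR_BITS - N)) - 1)

def reconstruct(abbreviated_addr, region):
    # Assumption for illustration: the omitted upper bits are implied by the
    # region the memory block lives in, so they need not be stored.
    return (region << (FULL_ADDR_BITS - N)) | abbreviated_addr

full = (5 << (FULL_ADDR_BITS - N)) | 0x123   # a block whose upper bits encode region 5
```

Dropping N bits from every metadata entry is what shrinks the metadata footprint; the saving scales with the number of entries.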

US Pat. No. 10,169,245

LATENCY BY PERSISTING DATA RELATIONSHIPS IN RELATION TO CORRESPONDING DATA IN PERSISTENT MEMORY

Intel Corporation, Santa...

1. An apparatus comprising:
a memory unit for a processor, the memory unit to include a prefetcher to:
detect a relationship between two or more addresses of a byte-addressable random access persistent memory based on an access pattern to the persistent memory by an application executed by the processor;
cause information to be stored in a pre-allocated portion of the persistent memory that indicates the relationship between the two or more addresses; and
retrieve the information stored in the pre-allocated portion when the application subsequently accesses an address from among the two or more addresses to cause data to be prefetched from the two or more addresses of the persistent memory based on the relationship indicated in the information.
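A toy Python sketch of the claimed behavior: record pairs of addresses accessed back to back, and return the partner addresses as a prefetch hint on a later access. The dictionary stands in for the pre-allocated persistent region; all names are assumptions:

```python
class PersistentPrefetcher:
    """Detects address relationships from the access pattern and replays them
    as prefetch hints (illustrative sketch, not the patented design)."""
    def __init__(self):
        self.last_addr = None
        self.relations = {}   # stands in for the pre-allocated persistent region

    def on_access(self, addr):
        # Hints retrieved before updating, as a later access would see them.
        prefetch_hints = self.relations.get(addr, [])
        if self.last_addr is not None and self.last_addr != addr:
            partners = self.relations.setdefault(self.last_addr, [])
            if addr not in partners:
                partners.append(addr)   # remember: last_addr is followed by addr
        self.last_addr = addr
        return prefetch_hints           # addresses to prefetch alongside this access

pf = PersistentPrefetcher()
pf.on_access(0x10)
pf.on_access(0x20)          # records the relationship 0x10 -> 0x20
hint = pf.on_access(0x10)   # later access to 0x10 yields the hint
```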

US Pat. No. 10,169,244

CONTROLLING ACCESS TO PAGES IN A MEMORY IN A COMPUTING DEVICE

ADVANCED MICRO DEVICES, I...

1. A method for handling memory accesses by virtual machines in a computing device, the computing device including a reverse map table (RMT) and a separate guest accessed pages table (GAPT) for each virtual machine, the RMT including a plurality of entries, each entry including information for identifying a virtual machine that is permitted to access an associated page of data in a memory, and each GAPT including a record of pages being accessed by a corresponding virtual machine, the method comprising:
receiving, in a table walker, a request to translate a virtual address to a system physical address, the request originating from a given virtual machine;
acquiring, from a corresponding guest page table, a guest physical address associated with the virtual address, and, from a nested page table, a system physical address associated with the virtual address;
checking, based on the guest physical address and the system physical address, at least one of the RMT and a corresponding GAPT to determine whether the given virtual machine has access to a corresponding page; and
when the given virtual machine does not have access to the corresponding page, terminating translating the virtual address to the system physical address.

US Pat. No. 10,169,243

REDUCING OVER-PURGING OF STRUCTURES ASSOCIATED WITH ADDRESS TRANSLATION

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing purging of structure entries associated with address translation, said computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
determining, by a processor, whether a block of memory of a computing environment for which a purge request has been received is backing an address translation structure, the block of memory having an address to identify the block of memory; and
based on determining the block of memory is not backing the address translation structure, performing an action to selectively purge from a structure associated with address translation entries created from translating the address that are at one level of address translation, wherein other entries at one or more other levels of address translation created from translating the address remain in the structure.

US Pat. No. 10,169,242

HETEROGENEOUS PACKAGE IN DIMM

SK Hynix Inc., Gyeonggi-...

1. A memory system comprising:
a memory module including:
a first memory device including a first memory and a first memory controller controlling the first memory to store data; and
a second memory device including a second memory and a second memory controller controlling the second memory to store data; and
a processor executing an operating system (OS) and an application to access a data storage memory through the first and second memory devices,
wherein the first and second memories are separated from the processor,
wherein the processor accesses the second memory device through the first memory device,
wherein the first memory controller transfers a signal between the processor and the second memory device based on at least one of values of a memory selection field and a handshaking information field included in the signal,
wherein the memory module includes one or more memory stacks,
wherein one or more volatile memories as the first memory, one or more non-volatile memories as the second memory, and the first and second memory controllers are stacked in the memory stacks,
wherein the first and second memory devices stacked in the memory stacks are communicatively coupled to each other through a through-via,
wherein the first and second memory controllers interface with the first and second memories and the processor through the through-via.

US Pat. No. 10,169,241

MANAGING MEMORY ALLOCATION BETWEEN INPUT/OUTPUT ADAPTER CACHES

International Business Ma...

1. A method for managing memory allocation of caching storage input/output adapters (IOAs) in a redundant caching configuration, the method comprising:
detecting a first cache of a first IOA storing a first amount of data that satisfies a memory shortage threshold of the first cache;
transmitting a first request for extra memory for the first cache in response to detecting that the first amount of data satisfying the memory shortage threshold, wherein the first request is transmitted to a plurality of IOAs;
detecting a second cache of a second IOA of the plurality of IOAs storing a second amount of data that satisfies a memory dissemination threshold of the second cache;
allocating memory from the second cache to the first cache in response to both the first request and detecting that the second amount of data satisfies the memory dissemination threshold; and
wherein the first cache has a memory dissemination threshold and the second cache has a memory shortage threshold, further comprising:
detecting the second cache storing a third amount of data that satisfies a memory shortage threshold of the second cache;
identifying an outstanding request for extra memory, wherein the outstanding request for extra memory is a prior request for extra memory from another IOA of the plurality of IOAs, wherein the prior request has not resulted in an allocation of memory to a cache of the requesting IOA; and
deferring a new request for extra memory for the second cache in response to identifying the outstanding request for extra memory from the another IOA.
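The threshold-driven balancing might be sketched as follows. The threshold fractions, the grant size, and all names are assumptions, not the claimed values:

```python
class IOACache:
    """One adapter's cache with shortage and dissemination thresholds (sketch)."""
    def __init__(self, name, capacity, used,
                 shortage_fraction=0.9, dissemination_fraction=0.3):
        self.name = name
        self.capacity = capacity
        self.used = used
        self.shortage_fraction = shortage_fraction
        self.dissemination_fraction = dissemination_fraction

    def in_shortage(self):
        return self.used >= self.shortage_fraction * self.capacity

    def can_disseminate(self):
        return self.used <= self.dissemination_fraction * self.capacity

def balance(requester, peers, outstanding_requests):
    """Grant spare cache memory to a requester in shortage; defer a new request
    while another IOA's request is still outstanding."""
    if not requester.in_shortage():
        return None
    if outstanding_requests:
        return "deferred"                  # another IOA asked first; wait
    for peer in peers:
        if peer.can_disseminate():
            grant = min(requester.capacity // 4, peer.capacity - peer.used)
            peer.capacity -= grant         # memory moves from peer's cache
            requester.capacity += grant    # ...to the requester's cache
            return peer.name
    return None

a = IOACache("A", capacity=100, used=95)   # in shortage (>= 90% full)
b = IOACache("B", capacity=100, used=20)   # can disseminate (<= 30% full)
deferred = balance(a, [b], outstanding_requests=["prior request from another IOA"])
granted_from = balance(a, [b], outstanding_requests=[])
```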

US Pat. No. 10,169,240

REDUCING MEMORY ACCESS BANDWIDTH BASED ON PREDICTION OF MEMORY REQUEST SIZE

QUALCOMM Incorporated, S...

1. A method of managing memory access bandwidth, the method comprising:
determining a size of a used portion of a first cache line stored in a first cache which is accessed by a processor, wherein the first cache is a level-two (L2) cache comprising at least a first accessed bit corresponding to a first half cache line size data of the first cache line and a second accessed bit corresponding to a second half cache line size data of the first cache line, wherein determining the size of the used portion of the first cache line is based on which one or more of the first accessed bit or second accessed bit is/are set;
determining which one or more of the first accessed bit or the second accessed bit is set when the first cache line is evicted from the L2 cache;
for a first memory region in a memory comprising the first cache line, selectively updating a prediction counter for making predictions of sizes of cache lines to be fetched from the first memory region, based on the size of the used portion, wherein selectively updating the prediction counter comprises:
incrementing the prediction counter by a first amount when only one of the first accessed bit or the second accessed bit is set when the first cache line is evicted from the L2 cache;
decrementing the prediction counter by a second amount when both the first accessed bit and the second accessed bit are set when the first cache line is evicted from the L2 cache; or
decrementing the prediction counter by a third amount when a request is received from the processor at the first cache for a portion of the first cache line which was not fetched; and
adjusting a memory access bandwidth between the processor and the memory to correspond to the sizes of the cache lines to be fetched, comprising at least one of:
adjusting the memory access bandwidth for fetching a second cache line from the first memory region to correspond to the half cache line size if the value of the prediction counter is greater than zero; or
adjusting the memory access bandwidth for fetching a second cache line from the first memory region to correspond to the full cache line size if the value of the prediction counter is less than or equal to zero.
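The counter updates follow directly from the claim. In this sketch the increment and decrement amounts are placeholders for the claim's first, second, and third amounts, and the class name is an assumption:

```python
HALF, FULL = "half line", "full line"

class RegionPredictor:
    """Per-memory-region prediction counter for fetch sizes (illustrative)."""
    def __init__(self):
        self.counter = 0

    def on_eviction(self, first_half_used, second_half_used):
        # Evaluated when a cache line is evicted from the L2 cache.
        if first_half_used and second_half_used:
            self.counter -= 1    # both halves used: a half fetch would have missed
        elif first_half_used or second_half_used:
            self.counter += 1    # only one half used: half fetches save bandwidth

    def on_unfetched_half_requested(self):
        self.counter -= 2        # penalty: the processor wanted the skipped half

    def fetch_size(self):
        # Counter > 0 predicts half-line fetches; otherwise full-line fetches.
        return HALF if self.counter > 0 else FULL

pred = RegionPredictor()
pred.on_eviction(first_half_used=True, second_half_used=False)
size_after_narrow_use = pred.fetch_size()
pred.on_unfetched_half_requested()
size_after_penalty = pred.fetch_size()
```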

US Pat. No. 10,169,239

MANAGING A PREFETCH QUEUE BASED ON PRIORITY INDICATIONS OF PREFETCH REQUESTS

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing prefetch queues, said computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
obtaining a prefetch request based on executing a prefetch instruction included within a program and determining that data at a memory location designated by the prefetch instruction is not located in a selected cache, the prefetch request having a priority assigned thereto, the priority indicating confidence by the program in whether the prefetch request will be used;
determining, based on obtaining the prefetch request, whether the prefetch request may be placed on a prefetch queue, the determining comprising:
determining whether the prefetch queue is full;
checking, based on determining the prefetch queue is full, whether the priority of the prefetch request is considered a high priority;
determining, based on the checking indicating the priority of the prefetch request is considered a high priority, whether another prefetch request on the prefetch queue may be removed;
removing the other prefetch request from the prefetch queue, based on determining the other prefetch request may be removed; and
adding the prefetch request to the prefetch queue, based on removing the other prefetch request.
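A sketch of the claimed queue-admission policy in Python. The capacity and the two-level priority model are simplifying assumptions:

```python
class PrefetchQueue:
    """Bounded prefetch queue; a high-priority request may displace a
    low-priority one when the queue is full (illustrative sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []                    # (address, high_priority) pairs

    def add(self, address, high_priority):
        if len(self.entries) < self.capacity:
            self.entries.append((address, high_priority))
            return True
        if not high_priority:
            return False                     # full and not high priority: drop
        # Full: try to remove a lower-priority request to make room.
        for i, (_, prio) in enumerate(self.entries):
            if not prio:
                del self.entries[i]
                self.entries.append((address, high_priority))
                return True
        return False                         # everything queued is high priority

q = PrefetchQueue(capacity=2)
q.add(0x100, high_priority=False)
q.add(0x200, high_priority=True)
low_added = q.add(0x300, high_priority=False)   # full, low priority: rejected
high_added = q.add(0x400, high_priority=True)   # full, high: displaces 0x100
```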

US Pat. No. 10,169,238

MEMORY ACCESS FOR EXACTLY-ONCE MESSAGING

International Business Ma...

1. A computer-implemented method executed for enabling exactly-once messaging, the method comprising:
transmitting a plurality of messages from a first location to a second location via read requests and write requests made to a memory;
controlling the read and write requests by a memory controller including a read queue, a write queue, and a lock address list, each slot of the lock address list associated with a lock bit;
initiating the read requests from the memory via the memory controller when associated lock bits are enabled;
initiating the write requests from the memory via the memory controller when associated lock bits are disabled; and
enabling and disabling the lock bits after the initiation of the write and read requests, respectively.
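The lock-bit handshake can be illustrated for a single slot of the claimed lock address list: a write is allowed only while the bit is disabled and a read only while it is enabled, so each message is written once and read once. Names are assumptions:

```python
class LockedSlot:
    """One slot guarded by a lock bit; the bit flips after each write or read,
    giving each message exactly one producer and one consumer (sketch)."""
    def __init__(self):
        self.value = None
        self.locked = False   # disabled: writer's turn

    def write(self, value):
        if self.locked:
            return False      # previous message not yet consumed
        self.value = value
        self.locked = True    # enable the lock bit: reader's turn
        return True

    def read(self):
        if not self.locked:
            return None       # nothing new to read
        value = self.value
        self.locked = False   # disable the lock bit: writer's turn again
        return value

slot = LockedSlot()
first_write = slot.write("m1")
second_write = slot.write("m2")   # blocked until m1 is read
received = slot.read()
empty_read = slot.read()
```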

US Pat. No. 10,169,237

IDENTIFICATION OF A COMPUTING DEVICE ACCESSING A SHARED MEMORY

International Business Ma...

1. A method for identifying, in a system including two or more computing devices that are able to communicate with each other, with each computing device having a cache and connected to a corresponding memory, the computing device accessing one of the memories, the method comprising:
monitoring memory access to any of the memories, wherein monitoring memory access comprises identifying respective computing devices that access respective memories, wherein monitoring memory access to any of the memories further comprises:
collecting an access time, a type of command, and a first memory address from a first memory read access to a first memory based on information acquired from a first probe attached to a first bus connecting the first memory to a first computing device, wherein the first memory is remote from the first computing device, and wherein a cache line in the first computing device is in an invalid state;
collecting a third access time, a third type of command, and a third memory address from a first memory write access to the first memory based on information acquired from the first probe, wherein the cache line in the first computing device enters a modified state;
monitoring cache coherency commands between computing devices, wherein monitoring cache coherency commands further comprises:
monitoring a first cache coherency command sent from the first computing device to at least a second computing device at a first cache coherency time based on information acquired from a second probe attached to a second interconnect connecting the first computing device and the second computing device, wherein the second probe collects the cache coherency time, a type of command, a second memory address, and an identification of the computing device issuing the first cache coherency command; and
identifying the computing device accessing one of the memories by using information related to the memory access and cache coherency commands, wherein identifying the computing device further comprises:
identifying the first computing device as the device that accessed the first memory based on the first memory address being equivalent to the second memory address, and further based on a difference between the access time and the first cache coherency time being smaller than a difference between the access time and any other cache coherency time;
wherein the first computing device is identified as the device performing the first memory write access based on the third memory address being identical to the first memory address and further based on the first computing device performing the first memory read access to the first memory;
wherein the system is configured in a non-uniform memory access (NUMA) design, wherein each memory comprises at least one dual in-line memory module (DIMM);
wherein the access time, the type of command, the first memory address, the cache coherency time, the second memory address, and the identification of the computing device issuing the first cache coherency command are stored in a hard disk drive (HDD) accessible by the system; and
wherein the system utilizes a modified, exclusive, shared, invalid (MESI) cache coherence protocol.

US Pat. No. 10,169,236

CACHE COHERENCY

ARM Limited, Cambridge (...

1. A cache coherency controller comprising:
a directory indicating, for memory addresses cached by one or more of a group of one or more cache memories connectable in a coherent cache structure, which of the cache memories are caching those memory addresses; and
control circuitry configured to detect a directory entry relating to a memory address to be accessed so as to coordinate, amongst the cache memories, an access to a memory address by one of the cache memories or a coherent agent in instances when the directory entry indicates that another of the cache memories is caching that memory address;
the control circuitry being responsive to status data indicating whether each cache memory in the group is currently subject to cache coherency control so as to take into account, in the detection of the directory entry relating to the memory address to be accessed, only those cache memories in the group which are currently subject to cache coherency control.

US Pat. No. 10,169,235

METHODS OF OVERRIDING A RESOURCE RETRY

Apple Inc., Cupertino, C...

1. An apparatus, comprising:
a memory configured to implement a first queue and a second queue, wherein the first queue has a plurality of entries, each configured to store a memory access instruction having one of a set of priority levels, wherein the second queue has fewer entries than the first queue, and wherein each entry in the second queue corresponds to one of the set of priority levels; and
a control circuit configured to:
determine an availability of a memory resource associated with a given memory access instruction, wherein the memory resource associated with the given memory access instruction is included in a plurality of memory resources;
determine a particular priority level of the given memory access instruction in response to a determination that the memory resource associated with the given memory access instruction is unavailable; and
add the given memory access instruction to the second queue in response to a determination that an entry in the second queue corresponding to the particular priority level is available, and that the particular priority level is greater than a respective priority level of each memory access instruction currently in the second queue.

US Pat. No. 10,169,234

TRANSLATION LOOKASIDE BUFFER PURGING WITH CONCURRENT CACHE UPDATES

International Business Ma...

1. A method for purging a translation lookaside buffer concurrently with cache updates in a computer system with a translation lookaside buffer and a primary cache memory having a first cache line that contains a virtual address field and a data field, the method comprising:
initiating a translation lookaside buffer purge process;
initiating a cache update process;
determining that the translation lookaside buffer purge process and the cache update process each perform a write operation to the first cache line concurrently;
in response to the determining:
overwriting, by the cache update process, the data field of the first cache line of the primary cache memory,
restoring the translation lookaside buffer purge process from a current state to an earlier state, and
restarting the translation lookaside buffer process from the earlier state.

US Pat. No. 10,169,233

TRANSLATION LOOKASIDE BUFFER PURGING WITH CONCURRENT CACHE UPDATES

International Business Ma...

1. A computer program product for purging a translation lookaside buffer concurrently with cache updates in a computer system with a translation lookaside buffer and a primary cache memory having a first cache line that contains a virtual address field and a data field, comprising a computer readable storage medium having stored thereon program instructions programmed to perform:
initiating a translation lookaside buffer purge process;
initiating a cache update process;
determining that the translation lookaside buffer purge process and the cache update process each perform a write operation to the first cache line concurrently;
in response to determining that the translation lookaside buffer purge process and the cache update process each perform a write operation to the first cache line concurrently:
overwriting, by the cache update process, the data field of the first cache line of the primary cache memory,
restoring the translation lookaside buffer purge process from a current state to an earlier state, and
restarting the translation lookaside buffer process from the earlier state.

US Pat. No. 10,169,232

ASSOCIATIVE AND ATOMIC WRITE-BACK CACHING SYSTEM AND METHOD FOR STORAGE SUBSYSTEM

Seagate Technology LLC, ...

1. A method for caching in a data storage subsystem, comprising:
receiving a write request indicating one or more logical addresses and one or more data blocks to be written correspondingly to the one or more logical addresses;
in response to the write request, allocating one or more physical locations in a cache memory from a free list;
storing the one or more data blocks in the one or more physical locations;
determining a hash table slot in response to a logical address;
determining whether any of a plurality of entries in the hash table slot identifies the logical address;
storing identification information identifying the one or more physical locations in one or more data structures;
updating an entry in a hash table to include a pointer to the one or more data structures;
maintaining a count of data access requests, including read requests, pending against each physical location in the cache memory having valid data; and
returning a physical location to the free list when the count indicates no data access requests are pending against the physical location.
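A compact sketch of the claimed allocation and hash-slot update path: a write allocates a physical location from the free list, the hash slot maps the logical address to it, and a per-location pending count gates return of superseded locations to the free list. Slot count, data structures, and the release policy are illustrative assumptions:

```python
class WriteBackCache:
    """Associative write-back cache over a fixed pool of physical locations
    (illustrative sketch; eviction when the free list empties is elided)."""
    def __init__(self, num_locations, num_slots=8):
        self.free_list = list(range(num_locations))
        self.slots = [dict() for _ in range(num_slots)]   # logical -> physical
        self.pending = [0] * num_locations                # outstanding accesses
        self.data = [None] * num_locations
        self.num_slots = num_slots

    def _slot(self, logical):
        return self.slots[hash(logical) % self.num_slots]

    def write(self, logical, block):
        phys = self.free_list.pop()          # allocate a physical location
        self.data[phys] = block
        slot = self._slot(logical)
        if logical in slot:                  # supersede the old copy atomically
            self._release(slot[logical])
        slot[logical] = phys

    def read(self, logical):
        phys = self._slot(logical).get(logical)
        return None if phys is None else self.data[phys]

    def _release(self, phys):
        if self.pending[phys] == 0:          # no requests pending: reusable
            self.free_list.append(phys)

cache = WriteBackCache(num_locations=4)
cache.write(100, b"v1")
cache.write(100, b"v2")   # supersedes v1; the old location rejoins the free list
```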

US Pat. No. 10,169,231

EFFICIENT AND SECURE DIRECT STORAGE DEVICE SHARING IN VIRTUALIZED ENVIRONMENTS

International Business Ma...

1. A method of providing direct storage device sharing in a virtualized environment, comprising:
a storage controller assigning each of a plurality of virtual functions of a plurality of guests an associated memory area of a physical memory, including at a first boot of one of the guests, the storage controller receiving a request from the one of the guests, said request including an authentication key, and in response to the request, triggering an interrupt of a physical function to a hypervisor;
the storage controller providing the guests with direct access, via the authentication key and without intervention of the hypervisor, to a specified storage area in the storage device, including the storage controller receiving from the hypervisor a configuration command over the physical function, the configuration command setting up hardware in the storage controller to allocate storage in the storage device for said one of the guests and to provide a mapping function for the authentication key to provide the one of the guests with access to the specified storage area in the storage device; and
the guests directly accessing the specified storage area over the authentication key, including
said one of the guests directly accessing the storage device over the authentication key, including said one guest sending to the storage controller, over one of the virtual functions, the authentication key in a command block requesting access to the storage area allocated to said one guest; and
the storage controller receiving the command block requesting access from the one of the guests, and setting up a mapping for the authentication key to provide said one of the guests with direct access, without intervention of the hypervisor, only to the specified storage area in the storage device set up for the authentication key.

US Pat. No. 10,169,230

METHOD FOR ACCESS TO ALL THE CELLS OF A MEMORY AREA FOR PURPOSES OF WRITING OR READING DATA BLOCKS IN SAID CELLS

MORPHO, Issy-les-Mouline...

1. An access method for accessing cells in a memory area of a card for purposes of writing or reading data blocks in said cells, the memory area comprising N+1 separately addressable, physically contiguous cells, where N is an integer greater than or equal to 0;
the address of each cell being between 0 and N, where each address is unique;
wherein said method comprises:
for each access time to said cells in said memory area to be accessed, the total number of access times being N+1 per memory area, performing a process of determining an address of a cell of the memory area to be accessed at said access time, the address determined for an access time not being once again determined for another access time, the address determination process comprising:
pseudorandomly determining a pseudorandom bit which can thus take either a first value or a second value;
and testing the value taken by said pseudorandom bit that switches said process either to:
(1) in the event of the first value, determining an index as being equal to the value of a first index, followed by incrementing a unit of said first index modulo N+1, or
(2) in the event of the second value, determining said index as being equal to the value of a second index, followed by decrementing a unit of said second index modulo N+1, the value of said index being the value of said address of the cell of the memory area to be accessed at said access time.
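The two-index address determination can be sketched in a few lines; the seed is an assumption added for reproducibility. Every address in 0..N comes out exactly once because the ascending and descending indices sweep disjoint ends of the range until they meet:

```python
import random

def access_order(num_cells, seed=42):
    """Return the order in which the N+1 cells are accessed. A pseudorandom bit
    picks either the ascending index (then increment mod N+1) or the descending
    index (then decrement mod N+1), as in the claimed process (sketch)."""
    rng = random.Random(seed)            # seed chosen only for reproducibility
    first, second = 0, num_cells - 1     # ascending and descending indices
    order = []
    for _ in range(num_cells):           # total number of access times is N+1
        if rng.getrandbits(1):
            order.append(first)          # first value: use and increment first
            first = (first + 1) % num_cells
        else:
            order.append(second)         # second value: use and decrement second
            second = (second - 1) % num_cells
    return order

order = access_order(8)
```

The randomized order obscures the physical access pattern while still guaranteeing full coverage, since the two indices jointly partition 0..N between them.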

US Pat. No. 10,169,229

PROTOCOLS FOR EXPANDING EXISTING SITES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a computing device within a dispersed storage and task network (DSTN) including at least one site housing a plurality of current distributed storage and task (DST) execution units, the method comprising:
determining that a plurality of new DST execution units are to be added to the at least one site;
in response to determining that the plurality of new DST execution units are to be added to the at least one site, assigning the new DST execution units to positions within the at least one site to limit a number of DST execution units through which data must be moved during migration of data to the new DST execution units to a maximum number, the maximum number being less than the number of current DST execution units included in the at least one site, wherein assigning the new DST execution units to positions within the at least one site includes:
obtaining first address ranges assigned to the plurality of current DST execution units;
determining a common magnitude of second address ranges to be assigned to the plurality of new DST execution units and the plurality of current DST execution units;
determining insertion points for each of the plurality of new DST execution units, wherein the insertion points are selected to intersperse the plurality of new DST execution units among the current DST execution units in a pattern arranged so that each current DST execution unit is no more than a predetermined number of current DST execution units distant from one of the plurality of new DST execution units;
determining transfer address ranges, where transfer address ranges correspond to at least a portion of the first address ranges to be transferred to the plurality of new DST execution units in accordance with the insertion points; and
facilitating transfer of address range assignments from particular current DST execution units to particular new DST execution units.

US Pat. No. 10,169,228

MULTI-SECTION GARBAGE COLLECTION

International Business Ma...

1. A computer program product for facilitating garbage collection within a computing environment, the computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
obtaining processing control by a handler executing within a processor of the computer environment based on execution of a load instruction and a determination that an object pointer to be loaded indicates a location within a selected portion of memory undergoing a garbage collection process;
obtaining by the handler an image of the load instruction and calculating an object pointer address from the image, the object pointer address specifying a location of the object pointer, the object pointer indicating a location of an object pointed to by the object pointer;
determining by the handler whether the object pointer is to be modified;
modifying by the handler, based on determining the object pointer is to be modified, the object pointer to provide a modified object pointer; and
storing the modified object pointer in a selected location.
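The handler's decision path can be sketched as below; the forwarding table and the range check are illustrative stand-ins for the claimed mechanism of determining whether and how the object pointer is modified:

```python
def handle_guarded_load(pointer, collected_range, forwarding_table):
    """pointer: object pointer produced by the load; collected_range:
    (lo, hi) bounds of the selected portion of memory undergoing garbage
    collection; forwarding_table: old address -> new address for objects
    the collector has already moved (all names illustrative)."""
    lo, hi = collected_range
    if not (lo <= pointer < hi):
        return pointer                          # outside the collected region
    # Pointer targets the region under collection: modify it if the
    # object has been relocated; the caller stores the result.
    return forwarding_table.get(pointer, pointer)
```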

US Pat. No. 10,169,226

PERSISTENT CONTENT IN NONVOLATILE MEMORY

Micron Technology, Inc., ...

1. A method comprising:initiating an operation to freeze an application;
responsive to initiating the operation to freeze the application, migrating one or more pages of persistent storage from a volatile memory to a nonvolatile memory;
locking one or more page tables such that entries in the page tables refer only to pages of persistent storage in the nonvolatile memory; and
migrating the one or more locked page tables to the nonvolatile memory.
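The freeze sequence can be sketched with toy in-memory structures; the table layout and the convention that the returned table acts as the locked table are assumptions for illustration:

```python
def freeze_application(page_table, volatile_mem, nonvolatile_mem):
    """page_table: {virtual_page: (location, frame)} with location
    'volatile' or 'nonvolatile'.  Migrates every persistent page still
    in volatile memory to nonvolatile memory, then returns a table whose
    entries refer only to nonvolatile storage."""
    frozen = {}
    for vpage, (location, frame) in page_table.items():
        if location == 'volatile':
            nonvolatile_mem[frame] = volatile_mem.pop(frame)  # copy page out
            location = 'nonvolatile'
        frozen[vpage] = (location, frame)
    return frozen  # treated as locked: no entry points at volatile memory
```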

US Pat. No. 10,169,225

MEMORY SYSTEM AND MEMORY-CONTROL METHOD WITH A PROGRAMMING STATUS

Silicon Motion, Inc., Jh...

1. A memory system with a programming status, comprising:at least one first memory, wherein each of the at least one first memory comprises a plurality of memory regions to store data;
at least one second memory, wherein each of the at least one second memory comprises a plurality of memory regions for programming the data from the at least one first memory, and the at least one second memory is a flash memory; and
a controller, coupled to the second memory and utilized to record a programming status of the data, wherein the controller checks whether the programming is successful by inquiring the programming status in response to the at least one first memory or the at least one second memory being about to be accessed, and the at least one first memory stores the data until the programming is checked to be successful.

US Pat. No. 10,169,222

APPARATUS AND METHOD FOR EXPANDING THE SCOPE OF SYSTEMS MANAGEMENT APPLICATIONS BY RUNTIME INDEPENDENCE

INTERNATIONAL BUSINESS MA...

1. A computer program product for automatic conversion of existing systems management software applications to run in multiple middleware runtime frameworks, the computer program product comprising:a non-transitory readable storage medium having stored thereon program instructions executable by a processor to cause the processor to:
scan the frameworks of system management components to form individual function modules;
map application program interface calls to a generic application program interface call layer by creating an association of the individual function modules;
perform runtime dependency analysis by generating ontology alignment mechanisms and outputting a mapping table of ontologies;
perform model unification by mapping runtime dependent functions to semantic counterparts using the ontology alignment mechanisms;
generate multiple runtime independent proxy components for the system management components; and
automatically refactor each of the system management components into two modules: a runtime independent module and a runtime dependent proxy module, wherein the runtime independent module replaces runtime dependent code with runtime independent code counterparts, wherein the model unification is dictionary-based, structure-based, or a combination thereof.

US Pat. No. 10,169,221

METHOD AND SYSTEM FOR WEB-SITE TESTING

ACCELERATE GROUP LIMITED,...

1. A testing service comprising:one or more testing-service computer systems connected to the Internet that
execute testing-service routines,
maintain one or more databases,
receive requests for modifications to a data-object-model representation of a web page under test from user computers, and
respond to a received request by selecting a web-page variant using a probability-based weight associated with the web-page variant and transferring, to the user computer from which the request was received, modifications to the data-object-model representation of the web page under test that direct a browser on the user computer to display the selected web-page variant; and
a client web server that serves web pages to users, the client web server storing a library of routines downloaded to the client web server by the testing service and storing encodings of web pages, the encoding of each web page tested by the testing service including modifications that direct a user's web browser to download the library of routines from the client web server and to request modifications to a data-object-model representation of the web page by calling a script-library routine.

US Pat. No. 10,169,220

PRIORITIZING RESILIENCY TESTS OF MICROSERVICES

INTERNATIONAL BUSINESS MA...

1. A system, comprising:a memory that stores computer executable components; and
a processor that executes the computer executable components stored in the memory, wherein the computer executable components comprise:
a test execution component that:
traverses an application program interface call subgraph of a microservices-based application in a depth first traversal pattern; and
during the traversal, performs resiliency testing of parent application program interfaces of the application program interface call subgraph according to a systematic resilience testing algorithm that reduces redundant resiliency testing of parent application program interfaces, the systematic resilience testing algorithm comprising:
during the traversal at a stop at a parent application program interface of the application program interface call subgraph:
in response to the parent application program interface having multiple dependent application program interfaces, calls to all direct and indirect dependent application program interfaces of the parent application program interface annotated as having been bounded retry pattern tested and circuit breaker pattern tested, and the parent application program interface not being annotated as having been bulkhead pattern tested, perform a bulkhead pattern test on the parent application program interface and annotate the parent application program interface as bulkhead pattern tested.
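The bulkhead rule in this claim can be sketched as a depth-first walk over a toy call subgraph; the annotation names ('retry', 'breaker', 'bulkhead') and the data structures are illustrative, not the patented interfaces:

```python
def run_resilience_tests(graph, annotations, root):
    """graph: api -> list of directly dependent apis; annotations:
    api -> set of completed test names.  During a depth-first traversal,
    perform a bulkhead test on a parent only when it has multiple direct
    dependents, every direct and indirect dependent is already retry-
    and breaker-tested, and the parent is not yet bulkhead-tested."""
    performed = []

    def descendants(api):
        seen, stack = set(), list(graph.get(api, []))
        while stack:
            d = stack.pop()
            if d not in seen:
                seen.add(d)
                stack.extend(graph.get(d, []))
        return seen

    def visit(api):
        for dep in graph.get(api, []):
            visit(dep)                          # depth-first descent
        deps = descendants(api)
        if (len(graph.get(api, [])) > 1
                and all({'retry', 'breaker'} <= annotations.get(d, set())
                        for d in deps)
                and 'bulkhead' not in annotations.get(api, set())):
            performed.append(api)               # perform the bulkhead test
            annotations.setdefault(api, set()).add('bulkhead')

    visit(root)
    return performed
```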

US Pat. No. 10,169,219

SYSTEM AND METHOD TO INFER CALL STACKS FROM MINIMAL SAMPLED PROFILE DATA

Nintendo Co., Ltd., Kyot...

1. A system for inferring call stacks from a software profile, comprising:a processing system having at least one processor, the processing system configured to:
monitor an executing program to create a sample profile of one or more executing functions to form a call stack database, wherein each function in the sample profile forms part of a respective call stack and each call stack having a call stack depth size;
after creating the call stack database:
for each function, attempt to match, under a first inference strategy, the function and associated call stack depth size to an entry in the call stack database;
infer a call stack for each function that matches to the entry in the call stack database based on the first inference strategy;
if the function does not match with any entry in the call stack database or if the function matches to more than one entry in the call stack database, determine, under a second inference strategy, one or more functions in a temporally adjacent sample; and
infer the call stack, under the second inference strategy, based on at least the temporally adjacent sample.
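The two inference strategies can be sketched as follows, with the database keyed on (function, stack depth) and a deliberately simple "shares a frame with the temporally adjacent sample" disambiguation rule assumed for the second strategy:

```python
def infer_call_stack(samples, index, call_stack_db):
    """samples: list of (function, depth) profile samples in time order;
    call_stack_db: {(function, depth): [candidate stacks]} built from
    the sample profile (illustrative structures)."""
    func, depth = samples[index]
    matches = call_stack_db.get((func, depth), [])
    if len(matches) == 1:
        return matches[0]                       # first inference strategy
    # Second strategy: zero or multiple matches, so consult a function
    # from the temporally adjacent sample to pick a candidate.
    neighbour = samples[index - 1] if index > 0 else samples[index + 1]
    n_func = neighbour[0]
    for stack in matches:
        if n_func in stack:
            return stack
    return None                                 # no inference possible
```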

US Pat. No. 10,169,217

SYSTEM AND METHOD FOR TEST GENERATION FROM SOFTWARE SPECIFICATION MODELS THAT CONTAIN NONLINEAR ARITHMETIC CONSTRAINTS OVER REAL NUMBER RANGES

GENERAL ELECTRIC COMPANY,...

1. A method comprising:receiving, at processing circuitry of a software test generation system, software specification models of software including at least one nonlinear arithmetic constraint over a Real number range;
generating, via the processing circuitry, satisfiable modulo theories (SMT) formulas that are semantically equivalent to the software specification models of the software including the at least one nonlinear arithmetic constraint over a Real number range;
analyzing, via the processing circuitry, the SMT formulas using at least one SMT solver of an analytical engine pool to generate test case data for each of the SMT formulas; and
post-processing, via the processing circuitry, the test case data to automatically generate one or more tests comprising inputs and expected outputs for testing the software including the at least one nonlinear arithmetic constraint over a Real number range;
wherein the generating the SMT formulas comprises flattening one or more state-based operations in the software specification models into SMT formulas that are stateless and capable of being analyzed by the at least one SMT solver;
wherein the post-processing comprises converting ranges of values indicated in the test case data into particular values for one or more input variables and one or more output variables of the software to be verified and the converting comprises truncating the ranges of values indicated in the test case data at a particular precision to yield the particular values.
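The range-truncation step of the post-processing can be sketched for a single variable; the handling below is one reasonable reading of "truncating the ranges at a particular precision", not the patented procedure:

```python
import math

def pick_value(low, high, precision):
    """Convert a satisfying Real range [low, high] from the SMT solver
    into one concrete test value by truncating at `precision` decimal
    places (illustrative helper)."""
    scale = 10 ** precision
    candidate = math.floor(low * scale) / scale   # truncate the lower bound
    if candidate < low:                           # truncation left the range
        candidate = math.ceil(low * scale) / scale
    return candidate if candidate <= high else None  # None: range too narrow
```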

US Pat. No. 10,169,214

TESTING OF COMBINED CODE CHANGESETS IN A SOFTWARE PRODUCT

International Business Ma...

1. A method for testing changesets in a software product, the method comprising:determining, by one or more processors, whether there is sufficient building and testing capacity to test a single changeset individually, wherein a changeset is a set of changes to a software product; and
in response to determining that there is not sufficient building and testing capacity to test the single changeset individually:
selecting, by one or more processors, a first combination of changesets from multiple changesets;
calculating, by one or more processors and for each combination of two or more changesets from the multiple changesets, an interaction between changesets in said each combination, wherein the interaction is an overlapping of code found in two or more changesets;
determining, by one or more processors, that the first combination of changesets has a lower amount of overlapping of code than any other combination of changesets from the multiple changesets; and
selecting, by one or more processors, the first combination of changesets for building and testing, wherein said first combination of changesets has the lower amount of overlapping of code than any other combination of changesets from the multiple changesets; and
building and testing, by one or more processors, the first combination of changesets.
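The selection step can be sketched with overlap measured as shared changed files, a simplification of the claim's "overlapping of code found in two or more changesets":

```python
from itertools import combinations

def least_overlapping_pair(changesets):
    """changesets: {name: set of changed file paths}.  Scores every
    combination of two changesets by shared files and returns the
    combination with the least overlap (illustrative metric)."""
    best, best_overlap = None, None
    for a, b in combinations(sorted(changesets), 2):
        overlap = len(changesets[a] & changesets[b])
        if best_overlap is None or overlap < best_overlap:
            best, best_overlap = (a, b), overlap
    return best, best_overlap
```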

US Pat. No. 10,169,213

PROCESSING OF AN APPLICATION AND A CORRESPONDING TEST FILE IN A CONTENT REPOSITORY

Red Hat, Inc., Raleigh, ...

1. A method, comprising:retrieving, by a continuous integration module, a first application code and a test code corresponding to the first application code from an archive of a computing system, wherein a host operating system of the computing system comprises the continuous integration module and the host operating system provides a graphical user interface, wherein the continuous integration module is interfaced by a continuous integration application programming interface (API) to access the archive, wherein the archive is coupled with a software development tool that comprises a first application corresponding to the first application code and a test file corresponding to the test code;
installing, by the continuous integration module, the first application code and the test code in a content repository of a hardware platform of the computing system, wherein the test code is installed as metadata for the first application code in the content repository;
executing, by the continuous integration module, the test file corresponding to the test code to generate test results for the application code;
storing, by the continuous integration module, the test results in the metadata for the first application code in the content repository;
determining, by the continuous integration module, additional data related to the test file, wherein the additional data is at least one type of data selected from a group consisting of archive checksum, test success ratio, test fail ratio, build creator, and current date and time;
storing, by the continuous integration module, the additional data in the metadata for the first application code in the content repository, wherein the host operating system is to provide, via the graphical user interface, access to search content comprising the test results and the additional data independent of the first application corresponding to the first application code; and
integrating, by the continuous integration module, a second application code with the first application code in the content repository, wherein the second application code is a different version of the first application code or a different application code from the first application code, wherein the integrating comprises monitoring, by the continuous integration module, the archive for the second application code and updating the content repository in view of the second application code.

US Pat. No. 10,169,159

AUTOMATED DATA RECOVERY FROM REMOTE DATA OBJECT REPLICAS

International Business Ma...

1. A method for recovering data objects in a distributed data storage system, the method comprising:storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network, wherein a first data center of the one or more data centers includes one or more clusters and each cluster of the one or more clusters includes a respective plurality of compute nodes and the each cluster further includes a respective database that stores metadata specifying list of candidate clusters from which the one or more replicas can be recovered;
recording health information metadata that is within the database about said one or more replicas, wherein the health information comprises data about availability of a replica to participate in a restoration process;
in response to determining that the first data object is to be recovered, calculating a query-priority for the first data object;
querying, based on the calculated query-priority, the health information metadata that is within the database for the one or more replicas to determine which of the one or more replicas is available for restoration of the first data object;
calculating a restoration-priority for the first data object based on the health information metadata that is within the database for the one or more replicas; and
restoring the first data object from the one or more of the available replicas, based on querying the list of candidate clusters and further based on the calculated restoration-priority, wherein the query-priority is calculated based on a priority function P(D)=Func(R(D),C(D),n), where:
D represents a data object with multiple replicas in multiple clusters;
R(D)i, i=1 . . . n, where “i” and “n” are natural numbers, represents a remote replica indexed i of D out of n remote replicas;
C(D) represents the cost of losing n replicas of D;
P(D) represents priority given by the system for the query operation of D; and
Func( ) represents some function.
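Since the claim leaves Func( ) open, any weighting of replica availability and loss cost qualifies. One purely illustrative instantiation:

```python
def query_priority(available_replicas, loss_cost, n):
    """One possible Func(R(D), C(D), n): the fewer of the n replicas
    remain available, and the higher the cost of losing them, the
    higher the query priority.  The patent does not prescribe this
    (or any) formula."""
    missing_fraction = 1 - available_replicas / n
    return loss_cost * (1 + missing_fraction)
```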

US Pat. No. 10,169,158

APPARATUS, SYSTEM AND METHOD FOR DATA COLLECTION, IMPORT AND MODELING

International Business Ma...

1. A method for data analysis of a backup system, the method comprising:extracting predetermined configuration and state information from respective dump files of a plurality of different computer systems, the predetermined configuration and state information is in different native formats based on the respective dump file from which it was extracted;
translating the predetermined configuration and state information from a native format used by each of the plurality of different computer systems into a normalized format, wherein the translated configuration and state information comprises configuration and state information irrespective of which of the plurality of different computer systems from which it was generated; and
determining what components are in the backup system, how the backup system works, how data is stored in the backup system, how efficiently data is stored in the backup system, a total capacity of the backup system, a remaining capacity of the backup system, and an operating cost of the backup system by analyzing the normalized predetermined configuration and state information.

US Pat. No. 10,169,157

EFFICIENT STATE TRACKING FOR CLUSTERS

INTERNATIONAL BUSINESS MA...

1. A method for efficient state tracking for clusters by a processor device in a distributed shared memory architecture, the method comprising:performing an asynchronous calculation of deltas while concurrently receiving client requests and concurrently tracking client requests times;
responding to each of the client requests for data of the same concurrency during a certain period with currently executing client requests with updated views based upon results of the asynchronous calculation; concurrently executing each of the client requests occurring after the certain period on the updated views, wherein all deltas and views are updated; and
bounding a latency for the client requests by a time necessitated for the asynchronous calculation of at least two of the deltas; wherein a first state snapshot is atomically taken while simultaneously calculating the at least two of the deltas, and each of the client requests received during the certain period are served with the updated views of the asynchronously calculated at least two of the deltas.

US Pat. No. 10,169,156

AUTOMATIC RESTARTING OF CONTAINERS

International Business Ma...

1. A method for automatically restarting a container, comprising:reading, by a computing device, custom predefined policy information including one or more condition categories, each of which having a respective reference to a respective log file and defining at least one respective condition for restarting the container;
monitoring, by an agent included in a container engine executed by the computing device, one or more respective log files, each of which corresponding to the respective reference to the respective log file of a corresponding condition category of the one or more condition categories, to detect an occurrence of any one of the at least one condition defined by any one of the one or more condition categories of the custom predefined policy information;
detecting, by the agent, the occurrence of the any one of the at least one condition based on a presence of a string of characters, corresponding to the any one of the at least one condition, in a log file of a corresponding condition category of the any one of the at least one condition to which the custom predefined policy information refers, the string of characters being generated to the log file of the corresponding condition category on behalf of the container;
in response to the detecting of the occurrence of the any one of the at least one condition, saving, by the agent, a state of the container including a state of one or more applications within the container;
automatically restarting the container, by the agent, after detecting the occurrence and saving the state of the container; and
after the automatic restarting of the container, restoring, by the agent, the state of the container, including the state of the one or more applications, wherein the one or more applications continue executing from where the one or more applications left off, thereby improving performance of the computing device.
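The agent's detect-save-restart-restore loop can be sketched with toy structures; the Container class and the policy format below are inventions for illustration only, not IBM's API:

```python
class Container:
    """Toy stand-in for a container managed by the agent."""
    def __init__(self):
        self.state, self.restarts = None, 0
    def save_state(self):
        return {'apps': 'running'}
    def restart(self):
        self.restarts += 1
    def restore_state(self, state):
        self.state = state

def check_and_restart(policy, log_files, container):
    """policy: list of condition categories, each {'log': name,
    'trigger': string}; log_files: {name: text}.  On a trigger string
    appearing in the referenced log: save state, restart, restore."""
    for category in policy:
        if category['trigger'] in log_files.get(category['log'], ''):
            saved = container.save_state()      # save app state first
            container.restart()                 # then restart the container
            container.restore_state(saved)      # resume where apps left off
            return True
    return False
```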

US Pat. No. 10,169,155

SYSTEM AND METHOD FOR SYNCHRONIZATION IN A CLUSTER ENVIRONMENT

EMC IP Holding Company LL...

1. A computer-implemented method comprising:performing, via a first computing device, a copy sweep operation to a first range of data on a source storage device;
determining that the copy sweep operation has failed;
sending a message to a second computing device to suspend I/O operations to the first range of data; and
retrying the copy sweep operation based upon, at least in part, determining that the copy sweep operation has failed, wherein the copy sweep operation is retried without the first computing device receiving acknowledgement that the I/O operations to the first range of data are suspended by the second computing device.

US Pat. No. 10,169,154

DATA STORAGE SYSTEM AND METHOD BY SHREDDING AND DESHREDDING

International Business Ma...

1. A method for encoding data for storage in a plurality of storage units by use of at least one processor comprising:dividing data into a set of separate pieces of data;
performing a redundancy function and a plurality of transformations on a separate piece of data of the set of separate pieces of data to generate a plurality of encoded data elements, wherein a threshold number of encoded data elements of the plurality of encoded data elements is needed to recover the separate piece of data, in which the threshold number of encoded data elements is less than all of the plurality of encoded data elements, wherein the plurality of transformations includes first transformations performed before performing the redundancy function and second transformations performed after performing the redundancy function;
generating metadata regarding the plurality of encoded data elements, wherein the metadata includes identification for each encoded data element and sequencing information regarding an order in which the redundancy function and the plurality of transformations were performed;
sending the plurality of encoded data elements to the plurality of storage units; and
sending the metadata to one of the storage units of the plurality of storage units or to another storage unit separately from sending the plurality of encoded data elements to the plurality of storage units.
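A toy 2-of-3 instance of the claimed pipeline (first transformation, redundancy function, sequencing metadata) can be written as a split plus XOR parity; the actual patent covers arbitrary transformations and redundancy functions:

```python
def shred(piece):
    """Shred one piece of data (bytes) into three encoded elements, any
    two of which recover it: split into halves (first transformation),
    add an XOR parity element (redundancy function), and record per-
    element identification plus the order of operations (metadata)."""
    if len(piece) % 2:
        piece += b'\x00'                       # pad to even length (toy)
    half = len(piece) // 2
    d1, d2 = piece[:half], piece[half:]
    parity = bytes(a ^ b for a, b in zip(d1, d2))
    elements = [d1, d2, parity]
    metadata = {i: {'id': i, 'order': ['split', 'xor-parity']}
                for i in range(3)}
    return elements, metadata

def deshred(elements):
    """Recover the piece from any two elements ({index: element}).
    The rstrip assumes the payload has no trailing NUL bytes (toy)."""
    have = dict(elements)
    if 0 in have and 1 in have:
        d1, d2 = have[0], have[1]
    elif 0 in have:
        d1 = have[0]
        d2 = bytes(a ^ b for a, b in zip(d1, have[2]))
    else:
        d2 = have[1]
        d1 = bytes(a ^ b for a, b in zip(d2, have[2]))
    return (d1 + d2).rstrip(b'\x00')
```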

US Pat. No. 10,169,153

REALLOCATION IN A DISPERSED STORAGE NETWORK (DSN)

INTERNATIONAL BUSINESS MA...

1. A computing device comprising:an interface configured to interface and communicate with a dispersed storage network (DSN);
memory that stores operational instructions; and
a processing module operably coupled to the interface and to the memory, wherein the processing module, when operable within the computing device based on the operational instructions, is configured to:
within a dispersed or distributed storage network (DSN) that includes a plurality of storage units (SUs) that distributedly store a set of encoded data slices (EDSs) associated with a data object, during a transition from a first system configuration of a Decentralized, or Distributed, Agreement Protocol (DAP) to a second system configuration of the DAP, direct at least one SU of the plurality of SUs to service a data access request based on at least one EDS of the set of EDSs based on a DAP transition mapping between the first system configuration of the DAP to the second system configuration of the DAP, wherein the data object is segmented into a plurality of data segments, wherein a data segment of the plurality of data segments is dispersed error encoded in accordance with dispersed error encoding parameters to produce the set of EDSs.

US Pat. No. 10,169,152

RESILIENT DATA STORAGE AND RETRIEVAL

International Business Ma...

1. A computer-implemented method for data recovery following loss of a volume manager, the method comprising:determining that the volume manager, and an associated volume manager index, for a distributed storage have been lost;
installing, in response to determining that the volume manager has been lost, a new volume manager, the new volume manager lacking an associated volume manager index;
receiving location information and credentials to access the distributed storage;
receiving a command to recover data from the distributed storage, the data to be recovered comprising one or more data files, each data file stored as two or more data portions, each data portion comprising metadata, the metadata comprising a file ID tag;
attempting to retrieve each data portion from the distributed storage;
retrieving a first data portion and recording a first location in the distributed storage that the first data portion was retrieved from;
reading the first file ID tag attached to the first data portion; and
constructing, in response to determining that the associated volume manager index has been lost, a new volume manager index by storing the first file ID tag and the first location associated with the first data portion in the distributed storage in the new volume manager index such that the new volume manager index provides a reference, to the new volume manager, for the first location and the first file ID tag, the reference associated with the first data portion.
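The index reconstruction can be sketched as a single pass over the retrieved portions; the dictionary shapes are assumptions for illustration:

```python
def rebuild_index(distributed_storage):
    """distributed_storage: {location: data_portion}, where each portion
    carries a 'file_id' metadata tag.  Reconstructs a new volume manager
    index mapping each file ID to the locations of its portions."""
    index = {}
    for location, portion in distributed_storage.items():
        # Record (file ID tag, location) so the new volume manager can
        # reference every retrieved portion.
        index.setdefault(portion['file_id'], []).append(location)
    return index
```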

US Pat. No. 10,169,151

UTILIZING REQUEST DEADLINES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a dispersed storage and task (DST) processing unit that includes a processor, the method comprises:generating a first plurality of access requests that include a first execution deadline time, the first plurality of access requests for transmission via a network to a corresponding first subset of a plurality of storage units;
receiving a first deadline error notification via the network from a first storage unit of the first subset;
calculating a missed deadline cost value in response to receiving the first deadline error notification;
comparing the missed deadline cost value to a new request cost threshold;
selecting a new one of the plurality of storage units not included in the first subset in response to receiving the first deadline error notification;
generating a new access request for transmission to the new one of the plurality of storage units via the network that includes an updated execution deadline time, wherein the new access request is based on a one of the first plurality of access requests sent to the first storage unit of the first subset, wherein the new one of the plurality of storage units is selected and the new access request is generated for transmission to the new one of the plurality of storage units when the missed deadline cost value compares favorably to the new request cost threshold; and
generating a proceed with execution notification for transmission via the network to the first storage unit of the first subset indicating a request to continue executing the access request when the missed deadline cost value compares unfavorably to the new request cost threshold.
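The cost comparison can be sketched as a single decision function. The claim does not spell out which direction counts as "favorable"; here a cost at or above the threshold is taken to justify the new request:

```python
def on_deadline_error(missed_deadline_cost, new_request_cost_threshold):
    """Decide whether to redirect the request to a new storage unit or
    tell the original unit to proceed (illustrative reading)."""
    if missed_deadline_cost >= new_request_cost_threshold:
        return 'send-new-request'      # favorable: worth retrying elsewhere
    return 'proceed-with-execution'    # unfavorable: let the unit finish
```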

US Pat. No. 10,169,150

CONCATENATING DATA OBJECTS FOR STORAGE IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a computing device of a dispersed storage network (DSN), the method comprises:identifying an independent data object of a plurality of independent data objects for retrieval from DSN memory of the DSN, wherein the plurality of independent data objects is combined to produce a concatenated data object and wherein the concatenated data object is encoded in accordance with a dispersed storage error encoding function to produce a set of encoded data slices;
identifying an encoded data slice of the set of encoded data slices corresponding to the independent data object based on a mapping of the plurality of independent data objects in a data matrix;
sending a retrieval request to a storage unit of the DSN memory regarding the encoded data slice; and
when the encoded data slice is received, decoding the encoded data slice in accordance with the dispersed storage error encoding function and the mapping to reproduce the independent data object.

US Pat. No. 10,169,149

STANDARD AND NON-STANDARD DISPERSED STORAGE NETWORK DATA ACCESS

International Business Ma...

1. A method comprises:determining, by a computing device of a dispersed storage network (DSN), whether to utilize a non-standard DSN data accessing protocol or a standard DSN data accessing protocol to access data from the DSN, wherein the data is dispersed storage error encoded into one or more sets of encoded data slices and wherein the one or more sets of encoded data slices are stored in a set of storage units of the DSN;
when the computing device determines to use the non-standard DSN data accessing protocol:
generating, by the computing device, a set of non-standard data access requests regarding the data, wherein a non-standard data access request of the set of non-standard data access requests includes a network identifier of a storage unit of the set of storage units, a data identifier corresponding to the data, and a data access function;
sending, by the computing device, the set of non-standard data access requests to at least some storage units of the set of storage units, which includes the storage unit;
converting, by the storage unit, the non-standard data access request into one or more DSN slice names;
determining, by the storage unit, that the one or more DSN slice names are within a slice name range allocated to the storage unit; and
when the one or more DSN slice names are within the slice name range, executing, by the storage unit, the data access function regarding one or more encoded data slices corresponding to the one or more DSN slice names.

US Pat. No. 10,169,148

APPORTIONING STORAGE UNITS AMONGST STORAGE SITES IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method of apportioning storage units in a dispersed storage network (DSN), the method comprising:generating storage unit apportioning data indicating a mapping of a plurality of desired numbers of storage units, represented as a plurality of numerical values in accordance with the plurality of desired numbers, to a plurality of storage sites based on site reliability data, wherein the mapping includes a first desired number of storage units corresponding to a first one of the plurality of storage sites that is greater than a second desired number of storage units corresponding to a second one of the plurality of storage sites in response to the site reliability data indicating that a first reliability score corresponding to the first one of the plurality of storage sites is more favorable than a second reliability score corresponding to the second one of the plurality of storage sites; and
allocating a plurality of storage units to the plurality of storage sites based on the storage unit apportioning data, wherein each of the plurality of storage units includes at least one processor and at least one memory device.
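A simple proportional rule satisfies the claim's ordering constraint (the more reliable site receives at least as many units); the formula itself is an assumption, since the claim only constrains the ordering:

```python
def apportion_units(total_units, site_reliability):
    """site_reliability: {site: reliability score}.  Maps a desired
    number of units to each site in proportion to its score, then gives
    any remainder to the most reliable sites first."""
    total_score = sum(site_reliability.values())
    plan = {site: int(total_units * score / total_score)
            for site, score in site_reliability.items()}
    leftover = total_units - sum(plan.values())
    for site in sorted(site_reliability, key=site_reliability.get,
                       reverse=True):
        if leftover == 0:
            break
        plan[site] += 1
        leftover -= 1
    return plan
```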

US Pat. No. 10,169,147

END-TO-END SECURE DATA STORAGE IN A DISPERSED STORAGE NETWORK

International Business Ma...

1. A method comprises:generating, by a first computing device of a dispersed storage network (DSN), a set of encryption keys;
encrypting, by the first computing device, a data matrix based on the set of encryption keys to produce an encrypted data matrix, wherein the data matrix includes data blocks of a data segment of a data object;
sending, by the first computing device, the encrypted data matrix to a second computing device of the DSN;
dispersed storage error encoding, by the second computing device, the data matrix to produce a set of encrypted encoded data slices; and
sending, by the second computing device, the set of encrypted encoded data slices to a set of storage units of the DSN for storage therein.

US Pat. No. 10,169,146

REPRODUCING DATA FROM OBFUSCATED DATA RETRIEVED FROM A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a computing device in a dispersed storage network (DSN), the method comprises:first encoding first data into a first plurality of sets of encoded data slices, wherein the first encoding is in accordance with a first dispersed error encoding function such that, for a set of encoded data slices of the first plurality of sets of encoded data slices, a first decode threshold number of encoded data slices is required to recover a corresponding first data segment of the first data;
second encoding second data into a second plurality of sets of encoded data slices, wherein the second encoding is in accordance with a second dispersed error encoding function such that, for a set of encoded data slices of the second plurality of sets of encoded data slices, a second decode threshold number of encoded data slices is required to recover a corresponding second data segment of the second data, wherein the second data segment is different from the first data segment;
creating a plurality of mixed sets of encoded data slices from the first and second plurality of sets of encoded data slices in accordance with a mixing pattern; and
outputting the plurality of mixed sets of encoded data slices to storage units of the DSN for storage therein.
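The mixing step above can be illustrated as interleaving two slice streams under a pattern. The claim leaves the mixing pattern open; the 0/1 pattern semantics below are an assumption for illustration.

```python
def mix_slices(first_sets, second_sets, pattern):
    """Create mixed sets by drawing, at each output position, from the
    first plurality (pattern value 0) or the second plurality
    (pattern value 1). Hypothetical pattern encoding."""
    mixed = []
    i = j = 0
    for src in pattern:
        if src == 0:
            mixed.append(first_sets[i])
            i += 1
        else:
            mixed.append(second_sets[j])
            j += 1
    return mixed
```

An observer of the stored slices cannot tell which data object a given set belongs to without the pattern, which is the obfuscation the title refers to.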

US Pat. No. 10,169,145

READ BUFFER ARCHITECTURE SUPPORTING INTEGRATED XOR-RECONSTRUCTED AND READ-RETRY FOR NON-VOLATILE RANDOM ACCESS MEMORY (NVRAM) SYSTEMS

INTERNATIONAL BUSINESS MA...

1. A system, comprising:a read buffer memory configured to store data to support integrated XOR reconstructed data and read-retry data, the read buffer memory comprising a plurality of read buffers, each read buffer being configured to store at least one data unit; and
a processor and logic integrated with and/or executable by the processor, the logic being configured to cause the processor to:
receive one or more data units and read command parameters used to read the one or more data units from at least one non-volatile random access memory (NVRAM) device;
determine an error status for each of the one or more data units, wherein the error status indicates whether each data unit comprises errored data or error-free data; and
store error-free data units and the read command parameters used to read the error-free data units to a read buffer of the read buffer memory.
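The buffering logic above (retain only error-free units, together with the read-command parameters used to fetch them) can be modeled minimally as follows. Class and method names, and the full-buffer behavior, are assumptions, not the patent's design.

```python
class ReadBufferMemory:
    """Toy model of the claimed read buffer memory: only error-free
    data units are stored, alongside the read-command parameters used
    to read them, so later read-retry / XOR-reconstruction paths can
    reuse them."""

    def __init__(self, num_buffers):
        self.buffers = [None] * num_buffers
        self.next_free = 0

    def receive(self, data_unit, read_params, has_error):
        if has_error:                      # errored data is not buffered
            return False
        if self.next_free >= len(self.buffers):
            return False                   # buffer memory full
        self.buffers[self.next_free] = (data_unit, read_params)
        self.next_free += 1
        return True
```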

US Pat. No. 10,169,144

NON-VOLATILE MEMORY INCLUDING SELECTIVE ERROR CORRECTION

Micron Technology, Inc., ...

1. An apparatus comprising:a first memory area included in a memory device and a second memory area included in the memory device, the first and second memory area selectively coupled to each other through a conductive path in the memory device; and
control circuitry included in the memory device to communicate with a memory controller, the memory controller including an error correction engine, the control circuitry of the memory device configured to retrieve first information stored in the first memory area and store the first information after the error correction engine performs an error detection operation on the first information, and to retrieve second information stored in the first memory area and store the second information in the second memory area without an additional error detection operation performed on the second information such that the error correction engine skips performing an additional error detection operation on the second information if a result from the error detection operation performed by the error correction engine on the first information meets a threshold condition.

US Pat. No. 10,169,143

PREFERRED STATE ENCODING IN NON-VOLATILE MEMORIES

Invensas Corporation, Sa...

1. An electronic system, comprising:a processor;
a main memory coupled to the processor;
a memory interface coupled to the processor and the main memory;
a memory controller coupled to the memory interface; and
a first non-volatile memory (NVM) integrated circuit coupled to the memory controller, the NVM integrated circuit further comprising:
a memory array having a plurality of pages, a page buffer coupled to the memory array, and a data input/output interface coupled to the page buffer,
wherein:
a first page in the memory array is selected for programming,
first user write data is stored in the page buffer,
preferred state encoding (PSE) is applied to the first user write data to generate first PSE encoded write data which is stored in the page buffer according to a first allocation map,
error correction code (ECC) encoding is applied to the first PSE encoded write data to generate first ECC encoded write data which is stored in the page buffer according to the first allocation map, and
the contents of the page buffer are programmed into the selected first page in the memory array.
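One classic form of preferred state encoding is to store the complement of a data word, plus a flag bit, whenever the complement leaves more cells in the preferred (e.g., erased) state. The sketch below shows that idea only; the claim's allocation maps and the subsequent ECC stage are omitted, and the specific scheme is an assumption.

```python
def pse_encode(data):
    """If the data has fewer 1-bits than 0-bits, store the complement
    plus a flag, on the assumption that 1 is the preferred (erased)
    cell state. One common PSE scheme, used here for illustration."""
    ones = sum(bin(b).count("1") for b in data)
    total = 8 * len(data)
    if ones < total // 2:          # complement is closer to all-1s
        return bytes(b ^ 0xFF for b in data), 1
    return bytes(data), 0

def pse_decode(encoded, flag):
    """Undo the optional inversion using the stored flag bit."""
    return bytes(b ^ 0xFF for b in encoded) if flag else bytes(encoded)
```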

US Pat. No. 10,169,142

GENERATING PARITY FOR STORAGE DEVICE

Futurewei Technologies, I...

1. A method performed by a solid state device (SSD) controller to generate a parity, the method comprising:receiving, by the SSD controller, input data to be stored to first and second pages of a storage device, wherein the first page is allocated with N codewords and at least one non-integer number of codewords, the second page is allocated with M codewords, N and M are integers, each non-integer number of codewords corresponding to a part of a codeword, and wherein a total number of codewords in the first page is different from a total number of codewords in the second page;
determining, by the SSD controller, a max impact number (MIN) of the storage device dynamically, wherein the MIN is an integer no less than N+1 and no less than M;
configuring, by the SSD controller, codewords of the first and second pages into multiple groups, wherein each group has an integer number of codewords, and wherein the integer number of codewords in each group is no less than the MIN;
generating, by the SSD controller, parities for the multiple groups; and
storing, by the SSD controller, the parities to reserved spaces of the storage device.

US Pat. No. 10,169,141

MODIFIABLE STRIPE LENGTH IN FLASH MEMORY DEVICES

SK Hynix Inc., Gyeonggi-...

1. A memory device comprising:a memory comprising a plurality of memory cells for storing data; and
a controller communicatively coupled to the memory and configured to organize the data as a plurality of stripes, wherein each individual stripe of the plurality of stripes comprises:
a plurality of data groups, each of the plurality of data groups stored in the memory using a subset of the plurality of memory cells, wherein:
a stripe length for the individual stripe is determined by the controller based on detecting a condition associated with one or more data groups of the plurality of data groups, and
the stripe length for the individual stripe is a number of the plurality of data groups included in the individual stripe; and
at least one data group of the plurality of data groups for each of the individual stripes comprising parity data for correcting bit errors associated with the subset of the plurality of memory cells for the individual stripe.
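A stripe of the kind claimed, whose length is however many data groups the controller chose plus at least one parity group, can be sketched with bytewise XOR parity. The function names and the single-parity design are illustrative assumptions.

```python
def build_stripe(data_groups):
    """Form a stripe from a variable number of equally sized data
    groups plus one XOR parity group, so the stripe length adapts to
    the controller's choice of group count."""
    size = len(data_groups[0])
    parity = bytearray(size)
    for group in data_groups:
        for i, b in enumerate(group):
            parity[i] ^= b
    return data_groups + [bytes(parity)]

def rebuild_group(stripe, lost_index):
    """Recover a single lost group by XOR-ing all remaining groups,
    including the parity group."""
    size = len(stripe[0])
    out = bytearray(size)
    for idx, group in enumerate(stripe):
        if idx != lost_index:
            for i, b in enumerate(group):
                out[i] ^= b
    return bytes(out)
```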

US Pat. No. 10,169,140

LOADING A PHASE-LOCKED LOOP (PLL) CONFIGURATION USING FLASH MEMORY

International Business Ma...

1. A method for loading a phase-locked loop (PLL) configuration into a PLL module of an Application Specific Integrated Circuit (ASIC) using Flash memory, the method comprising:responsive to the PLL module in the ASIC locking a current PLL configuration from a set of current configuration registers in the ASIC, loading, by reset logic in the ASIC, a Flash data image configuration from the Flash memory into a set of holding registers in the ASIC;
responsive to the Flash data image configuration failing to be corrupted, comparing, by comparison logic in the ASIC, the Flash data image configuration in the set of holding registers to the current PLL configuration in the set of current configuration registers;
responsive to the Flash data image configuration differing from the current PLL configuration, loading, by the reset logic, the Flash data image configuration onto a PLL module input; and
responsive to the PLL module locking the Flash data image configuration, loading by the reset logic, the Flash data image configuration in the set of holding registers into the set of current configuration registers.

US Pat. No. 10,169,139

USING PREDICTIVE ANALYTICS OF NATURAL DISASTER TO COST AND PROACTIVELY INVOKE HIGH-AVAILABILITY PREPAREDNESS FUNCTIONS IN A COMPUTING ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method for proactive natural disaster preparedness, comprising:receiving, at a computer of a computing environment, a notification of an impending natural disaster;
determining, by the computer, a threshold corresponding to a likelihood of the predicted disaster, further comprising:
using the likelihood to index into a data structure that stores, for each of a plurality of likelihood values, a corresponding threshold, wherein:
each of the stored thresholds represents a different cost tier;
each successively-higher cost tier corresponds to a successively-higher weighted business cost value; and
the determined threshold is the stored threshold that corresponds, in the data structure, to a likelihood value that is less than or equal to the likelihood of the predicted disaster;
determining, by the computer for the threshold, at least one proactive measure corresponding thereto for enabling the computing environment to maintain high availability, further comprising:
using the threshold to index into a data store that stores, for each of a plurality of successively-higher weighted business cost values, a corresponding proactive measure and executable functionality to invoke the corresponding proactive measure; and
selecting, as the determined at least one proactive measure, each of the corresponding proactive measures that corresponds, in the data store, to a weighted business cost value that is less than or equal to the threshold; and
automatically causing the computing environment to carry out the executable functionality to invoke each of the at least one determined proactive measure and thereby maintain the high availability of the computing environment without manual invocation.
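The two table lookups described in the claim, likelihood to cost-tier threshold, then threshold to proactive measures, can be sketched as follows. The data-structure shapes and the "largest value not exceeding" rule are read directly from the claim; everything else is an assumption.

```python
def pick_threshold(likelihood, tiers):
    """tiers: list of (likelihood_value, threshold) pairs sorted
    ascending by likelihood_value, each threshold a successively
    higher cost tier. Returns the stored threshold whose likelihood
    value is the largest one <= the predicted likelihood."""
    chosen = None
    for lv, threshold in tiers:
        if lv <= likelihood:
            chosen = threshold
    return chosen

def select_measures(threshold, measures):
    """measures: list of (weighted_business_cost, action) pairs.
    Select every measure whose cost value is <= the threshold."""
    return [action for cost, action in measures if cost <= threshold]
```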

US Pat. No. 10,169,137

DYNAMICALLY DETECTING AND INTERRUPTING EXCESSIVE EXECUTION TIME

International Business Ma...

1. A method, comprising:executing, by a first function of a first process executing on a processor, a plurality of calls to a second function of a second process;
programmatically generating, based on a respective amount of time required for each of the plurality of calls to complete, a time threshold for calls from the first function to the second function; and
subsequent to the plurality of calls completing:
storing, by an operating system (OS) kernel executing on the processor and in a queue of the OS kernel, an indication that the first function of the first process executing on the processor has made an additional call to the second function of the second process;
collecting process data for at least one of the first process and the second process;
determining, by the OS kernel, that an amount of time that has elapsed since the first function of the first process made the additional call to the second function of the second process exceeds the programmatically defined time threshold;
storing the queue and the process data as part of a failure data capture; and
performing a predefined operation on at least one of the first process and the second process.
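Programmatically generating a time threshold from the observed durations of earlier calls might look like the sketch below. The claim does not specify a formula; mean plus k standard deviations is an assumed choice.

```python
import statistics

def call_time_threshold(durations, k=3.0):
    """Derive a timeout for a caller/callee function pair from the
    durations of its completed calls: mean plus k population standard
    deviations. The formula is an assumption for illustration."""
    mean = statistics.mean(durations)
    stdev = statistics.pstdev(durations)
    return mean + k * stdev

def call_exceeded(elapsed, threshold):
    """Kernel-side check: has the in-flight call outlived the
    programmatically defined threshold?"""
    return elapsed > threshold
```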

US Pat. No. 10,169,136

DYNAMIC MONITORING AND PROBLEM RESOLUTION

International Business Ma...

1. A method comprising:determining, by one or more processors, that a monitoring tier of a first component, of a plurality of components, that is a cause of a malfunction is activated;
in response to determining the monitoring tier of the first component is activated, determining, by one or more processors, a plurality of measurements for the plurality of components;
identifying, by one or more processors, a component of the plurality of components with the greatest number of activated monitoring tiers, based on the plurality of measurements; and
determining, by one or more processors, whether the component with the greatest number of activated monitoring tiers is the first component.

US Pat. No. 10,169,134

ASYNCHRONOUS MIRROR CONSISTENCY AUDIT

International Business Ma...

1. A method for auditing data consistency in an asynchronous data replication environment, the method comprising:executing, at a primary storage system, a “write with no data” command that performs functions associated with a conventional write command but writes no data to a source volume, the “write with no data” command performing the following:
serializing a source data track; and
creating a record set that contains a copy of data in the source data track, a timestamp indicating when the source data track was copied, and a command type indicating that the copy in the record set is not to be applied to a corresponding target data track of a target volume;
replicating the record set from the primary storage system, hosting the source volume, to a secondary storage system, hosting the target volume;
applying, to the target data track, all updates received for the target data track with timing prior to the timestamp;
reading the target data track at the secondary storage system after the updates have been applied;
comparing the target data track to the source data track; and
recording an error if the target data track does not match the source data track.

US Pat. No. 10,169,133

METHOD, SYSTEM, AND APPARATUS FOR DEBUGGING NETWORKING MALFUNCTIONS WITHIN NETWORK NODES

Juniper Networks, Inc., ...

1. A method comprising:building a collection of debugging templates that comprises a first debugging template that corresponds to a first potential cause of a certain networking malfunction and a second debugging template that corresponds to a second potential cause of the certain networking malfunction by:
receiving user input from a user of a network;
creating, based at least in part on the user input, the first debugging template that defines a first set of debugging steps that, when performed by a computing system, enable the computing system to determine whether the first potential cause led to the certain networking malfunction; and
creating, based at least in part on the user input, the second debugging template that defines a second set of debugging steps that, when performed by the computing system, enable the computing system to determine whether the second potential cause led to the certain networking malfunction;
detecting a computing event that is indicative of the certain networking malfunction within a network node included in the network;
determining, based at least in part on the computing event, potential causes of the certain networking malfunction, wherein the potential causes comprise the first potential cause of the certain networking malfunction and the second potential cause of the certain network malfunction;
performing the first set of debugging steps defined by the first debugging template that corresponds to the first potential cause, wherein the first debugging template comprises a generic debugging template that enables the computing system to determine that the certain networking malfunction resulted from the first potential cause irrespective of a software configuration of the network node; and
determining, based at least in part on the first set of debugging steps defined by the first debugging template, that the certain networking malfunction resulted from the first potential cause.

US Pat. No. 10,169,132

PREDICTING A LIKELIHOOD OF A CRITICAL STORAGE PROBLEM

International Business Ma...

1. A method for predicting, by a computerized storage-management system, a likelihood of a critical storage problem, the method comprising:a processor of a computerized storage controller of the computerized storage-management system receiving a sample set, where a sample size identifies a number of samples in the sample set, and where a first sample of the sample set identifies a first amount of storage space that was available to a storage device at a time when the first sample was recorded;
the processor deriving a mean of the sample set and a standard deviation of the sample set;
the processor further deriving a Chi-square statistic of the sample set as a function of the mean and the standard deviation, where the Chi-square statistic identifies whether the sample size is large enough to ensure that the sample set is statistically valid;
the processor, as a function of the deriving and of the further deriving, determining whether the sample set is statistically valid;
the processor, as a function of the determining, identifying a likelihood of a critical storage problem occurring within a threshold time;
the processor, as a further function of the determining, directing the computerized storage-management system to select an adjusted sample-set size,
where the duration of the threshold time is selected to allow the processor to perform the identifying in real time, and
where the adjusted sample-set size identifies a size of a future sample set that the computerized storage controller will request and receive at a future time in order to identify a future available storage space of the storage device at the future time.
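The Chi-square validity check in the claim can be illustrated with the standard statistic for a sample variance. The population variance and the acceptance bounds below are assumptions; the claim only says the statistic gauges whether the sample size is large enough for statistical validity.

```python
import statistics

def chi_square_statistic(samples, population_variance):
    """Chi-square statistic for the sample variance:
    chi2 = (n - 1) * s^2 / sigma0^2, derived from the sample set's
    mean and standard deviation as in the claim."""
    n = len(samples)
    s2 = statistics.variance(samples)   # sample variance, n - 1 divisor
    return (n - 1) * s2 / population_variance

def sample_set_valid(samples, population_variance, lo, hi):
    """Deem the sample set statistically valid when the statistic
    falls inside an assumed chi-square acceptance interval [lo, hi]."""
    return lo <= chi_square_statistic(samples, population_variance) <= hi
```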

US Pat. No. 10,169,131

DETERMINING A TRACE OF A SYSTEM DUMP

International Business Ma...

1. A method for improving system analytics by determining an extra trace of a system dump after an event triggering the system dump, the method comprising:receiving, by one or more computer processors, a system dump request, wherein the system dump request includes performing a system dump utilizing a dumping tool, wherein the system dump includes a trace wherein the trace comprises one or more trace entries collected in a trace table;
determining, by one or more computer processors, an initial trace of the system dump;
determining, by one or more computer processors, the extra trace, wherein determining the extra trace includes determining a time period subsequent to the initial trace of the system dump to collect trace entries, and wherein the extra trace refers to a plurality of trace data entries collected during the time period subsequent to the initial trace of the system dump and subsequent to an event triggering the system dump;
determining, by one or more computer processors, an updated trace table, wherein determining the updated trace table includes collecting the plurality of trace entries during the time period subsequent to the initial trace of the system dump and subsequent to an event triggering the system dump, appending the trace table with the plurality of trace entries, and wrapping the one or more trace entries collected in the initial trace of the system dump in the event the updated trace table cannot store all of the plurality of trace entries; and
displaying, by one or more computer processors, the extra trace at the end of the initial trace.

US Pat. No. 10,169,130

TAILORING DIAGNOSTIC INFORMATION IN A MULTITHREADED ENVIRONMENT

International Business Ma...

1. A computer-implemented method for tailoring diagnostic information specific to current activity of multiple threads within a computer system, the method comprising:creating, by one or more processors, a system dump, including main memory and system state information;
storing, by one or more processors, the system dump to a database;
executing, by one or more processors, a program to provide tailored diagnostic information;
creating, by one or more processors, a virtual memory image of a system state, based on the system dump, in the address space of the program, by creating a second hardware memory mapping of the hardware memory addresses of the address space of the program to the virtual memory addresses of the virtual memory image of the system state;
scanning, by one or more processors, the virtual memory image and system state information, using the second hardware memory mapping, to identify tasks that were running, tasks that have failed due to an error, and tasks that were suspended when the system dump was made;
collecting and collating based on task number, by one or more processors, from the system dump, using the second hardware memory mapping, state information and control blocks associated with the identified tasks; and
storing, by one or more processors, to the database, a formatted system dump, including the collected and collated state information and control blocks for the identified tasks.

US Pat. No. 10,169,127

COMMAND EXECUTION RESULTS VERIFICATION

International Business Ma...

1. A computer program product comprising a computer readable storage medium that is not a transitory signal per se, the computer readable storage medium having computer readable codes stored thereon that cause one or more devices to conduct a method comprising:receiving, by a processor, a file including a plurality of commands and an expected result related to the plurality of commands from a command line interface, the command line interface operating in a script mode that allows a user, with a single login to the command line interface, to define a list of commands to be executed in order by the command line interface;
executing the plurality of commands to create one or more processes for performing one or more tasks corresponding to the plurality of commands;
performing the one or more tasks;
generating one or more result codes corresponding to performance of the one or more tasks, the one or more result codes comprising a first indication of successful command execution or a second indication of errors;
determining whether the one or more result codes satisfy the expected result based on the first indication or the second indication in the one or more result codes matching the expected result; and
sending a response to the command line interface in response to determining whether the one or more result codes satisfy the expected result,
wherein:
the response includes one of an error message and a success code,
the error message comprises an error code indicating one of an unexpected error and an unexpected success in the one or more result codes,
determining whether the one or more result codes satisfy the expected results comprises determining whether the first indication of successful command execution or the second indication of errors matches at least a subset of the expected results,
sending the response to the command line interface comprises:
sending the success code to the command line interface in response to determining a match, and
sending the error code to the command line interface in response to determining a non-match,
the error code comprises one of:
a first error indicating an unexpected error in the one or more result codes in response to the subset of expected results including a successful result, and
a second error indicating an unexpected success in the one or more result codes in response to the subset of expected results including an error result.

US Pat. No. 10,169,125

RE-ENCODING DATA IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method comprises:determining to create a new set of encoded data slices based on an unfavorable storage performance level associated with one or more storage units (SUs) within a dispersed storage network (DSN);
partially decoding, by a storage unit (SU) of the DSN, a first encoded data slice of a set of encoded data slices in accordance with previous dispersed storage error encoding parameters having a previous threshold number to produce a partially decoded first encoded data slice, wherein the first encoded data slice is stored by another SU of the DSN and is transmitted from the another SU via the DSN and received via an interface of the SU that is configured to interface and communicate with the DSN, and wherein a data segment of a data object is encoded into the set of encoded data slices in accordance with the previous dispersed storage error encoding parameters;
partially re-encoding, by the SU, the partially decoded first encoded data slice in accordance with updated dispersed storage error encoding parameters having an updated threshold number to produce a first partially re-encoded data slice, wherein the first partially re-encoded data slice is used to create a new first encoded data slice of the new set of encoded data slices that corresponds to the data segment being dispersed storage error encoded in accordance with the updated dispersed storage error encoding parameters, wherein
the partially re-encoding comprises:
obtaining a new encoding matrix corresponding to the updated dispersed storage error encoding parameters;
reducing the new encoding matrix based on a matrix position corresponding to the new first encoded data slice of the new set of encoded data slices that corresponds to the data segment being dispersed storage error encoded in accordance with the updated dispersed storage error encoding parameters; and
matrix multiplying the reduced new encoding matrix with the partially decoded first encoded data slice to produce the first partially re-encoded data slice;
receiving, by the SU via the DSN and via the interface of the SU, a plurality of second partially re-encoded data slices from a sub-set of other SUs of the DSN, wherein the plurality of second partially re-encoded data slices is created in accordance with the updated dispersed storage error encoding parameters based on partially re-encoding by the sub-set of other SUs of the DSN; and
generating, by the SU, a new second encoded data slice of the new set of encoded data slices from the plurality of second partially re-encoded data slices.

US Pat. No. 10,169,123

DISTRIBUTED DATA REBUILDING

INTERNATIONAL BUSINESS MA...

1. A method for use in a distributed storage network (DSN) storing sets of encoded data slices in sets of storage units, the method comprising:identifying, by a first storage unit included in a set of storage units, a storage error associated with an encoded data slice of a set of encoded data slices, the encoded data slice assigned to be stored in the first storage unit;
selecting a second storage unit to generate a rebuilt encoded data slice to replace the encoded data slice assigned to be stored in the first storage unit;
transmitting, from the first storage unit to the second storage unit, a rebuild request associated with the storage error;
generating, by the second storage unit, the rebuilt encoded data slice in response to the rebuild request;
transmitting the rebuilt encoded data slice from the second storage unit to the first storage unit; and
storing the rebuilt encoded data slice in the first storage unit.

US Pat. No. 10,169,122

METHODS FOR DECOMPOSING EVENTS FROM MANAGED INFRASTRUCTURES

Moogsoft, Inc., San Fran...

1. A system for clustering events, comprising:a first engine that receives message data from a managed infrastructure that includes managed infrastructure physical hardware which supports the flow and processing of information;
a second engine that determines common characteristics of events and produces clusters of events relating to failures or errors in the managed infrastructure, where membership in a cluster indicates a common factor of the events that is a failure or an actionable problem in the physical hardware managed infrastructure directed to supporting the flow and processing of information, and producing events that relate to the managed infrastructure while converting the events into words and subsets used to group the events that relate to failures or errors in the managed infrastructure, including the managed infrastructure physical hardware;
a compare and merge engine that receives outputs from the second engine, the compare and merge engine communicating with one or more user interfaces in a situation room; and
wherein the second engine or a third engine uses a source address for each event to make a change to at least a portion of the managed infrastructure, and in response to producing events that relate to the managed infrastructure while converting the events into words and subsets, physical changes are made to the managed infrastructure physical hardware.

US Pat. No. 10,169,121

WORK FLOW MANAGEMENT FOR AN INFORMATION MANAGEMENT SYSTEM

Commvault Systems, Inc., ...

1. A computer-implemented method for distributing tasks, in an information management system, using first and second work queues managed by a storage manager, the method comprising:receiving tasks to be performed in the information management system at the storage manager, which facilitates a transfer of data between primary storage devices and secondary storage devices in the information management system, and which schedules and manages the tasks for multiple, different client computing devices in the information management system;
scheduling information management policy tasks for a client computing device using the storage manager,
wherein scheduling the information management policy tasks includes populating the first work queue with the information management policy tasks,
wherein the information management policy tasks include data storage operations that are defined by a data storage policy and include creating secondary copies of data on secondary storage devices from primary copies of data stored on primary storage devices, restoring the secondary copies of data from the secondary storage devices to the primary storage devices, or retaining the secondary copies of data on the secondary storage devices, and
wherein the secondary storage devices are located remotely from the primary storage devices;
transmitting the information management policy tasks from the storage manager to the client computing device, in accordance with the first work queue;
scheduling information management system tasks for the client computing device using the storage manager,
wherein scheduling the information management system tasks includes populating the second work queue with the information management system tasks,
wherein the information management system tasks include tasks that are related to maintenance of software or hardware components of the information management system and that do not read or write data to the secondary storage devices;
transmitting the information management system tasks from the storage manager to the client computing device in accordance with the second work queue and based on an availability of the client computing device;
executing, at the client computing device, the transmitted information management policy tasks, in accordance with the first work queue, and the transmitted information management system tasks, in accordance with the second work queue;
determining parameters of an information management system operation failure; and
providing an alert of failure if at least one of the parameters exceeds a predetermined threshold.

US Pat. No. 10,169,120

REDUNDANT SOFTWARE STACK

International Business Ma...

1. A method for creating redundant software stacks, the method comprising:identifying, by one or more computer processors, a first container with a set of rules, a first software stack, and a valid multipath configuration, wherein the first software stack is a first path of the valid multipath configuration;
creating, by one or more computer processors, a second container, wherein the second container has the same set of rules as the first container;
creating, by one or more computer processors, a second software stack in the second container, wherein the second software stack is a redundant software stack of the first software stack;
creating, by one or more computer processors, a second path from the first container to the second software stack, wherein the second path bypasses the first software stack;
identifying, by one or more computer processors, a data load on the first path that is creating latency; and
sending, by one or more computer processors, at least a portion of the data load on the first path to the second path to reduce the latency on the first path.

US Pat. No. 10,169,119

METHOD AND APPARATUS FOR IMPROVING RELIABILITY OF DIGITAL COMMUNICATIONS

1. A method performed by a radio comprising:receiving a network identifier comprising a data unit identifier, the data unit identifier configured to identify a type of a data unit being communicated;
checking validity of the data unit identifier;
combinatorially processing a data unit for which the data unit identifier is uncertain according to a plurality of possible data unit identifier values;
selecting a most likely data unit identifier value according to results of the combinatorial processing of the data unit; and
performing subsequent processing of the data unit in accordance with the most likely data unit identifier value.
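The combinatorial step above can be illustrated by decoding the unit under every candidate identifier and scoring each result with a validity check. In this sketch the CRC-8 check, the framing (identifier prepended to the body, CRC byte appended), and the candidate set are all assumptions chosen for illustration:

```python
# Hypothetical sketch of combinatorial recovery of an uncertain data unit
# identifier: try every candidate value and keep the one whose decode
# passes a validity check. The CRC-8 framing is an illustrative assumption.

def crc8(data: bytes) -> int:
    """Bitwise CRC-8 (polynomial 0x07), used here as a stand-in validity check."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def score_candidate(unit: bytes, identifier: int) -> int:
    """Score 1 if the unit's trailing CRC byte matches when decoded under
    this candidate identifier, else 0."""
    body, received_crc = unit[:-1], unit[-1]
    return 1 if crc8(bytes([identifier]) + body) == received_crc else 0

def select_identifier(unit: bytes, candidates: list[int]) -> int:
    """Combinatorially process the unit under each candidate identifier and
    return the most likely value (highest validity score)."""
    return max(candidates, key=lambda ident: score_candidate(unit, ident))
```

A real radio would score with soft decoding metrics rather than a single CRC, but the shape of the search is the same.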

US Pat. No. 10,169,118

REMOTE PRODUCT INVOCATION FRAMEWORK

INTERNATIONAL BUSINESS MA...

1. A method for remote product invocation comprising:configuring an invocation framework, the invocation framework comprising an integration module and an endpoint/handler module;
wherein the integration module is configured to:
receive a source object;
format data from the source object based on requirements of a target machine supporting an external service that performs a desired operation;
utilize the endpoint/handler module, which comprises two distinct subcomponents, an endpoint and a handler, the endpoint to contain information for making a connection to an external service, and the handler to use the information from the endpoint to make connection to the external service and execute the desired operation using the data from the source object; and
with a logical management operation of the invocation framework, defining an action to be executed in response to receiving the source object so as to provide an interface between an entity submitting the source object to the integration module and the integration module.

US Pat. No. 10,169,117

INTERFACING BETWEEN A CALLER APPLICATION AND A SERVICE MODULE

International Business Ma...

1. A method for interfacing between a caller application and a service module, said method comprising:receiving, by a processor of a computer system, a request for performing a transaction from the caller application, wherein the request comprises at least one caller application attribute describing the request;
subsequent to said receiving the request, said processor building a service module data structure pursuant to the received request, wherein the service module data structure comprises a generic service document and at least one service module attribute, and wherein said building the service module data structure comprises:
creating one or more containers in the generic service document, wherein each container of the one or more containers is respectively associated with each service module attribute of the at least one service module attribute in each mapping of the at least one mapping in a mapping table of the service module data structure, wherein each container comprises a data value for each service module attribute of the at least one service module attribute; and
subsequent to said creating the one or more containers in the generic service document, naming each container of said at least one container in the generic service document after each mapping of said at least one mapping in the mapping table;
subsequent to said building the service module data structure, said processor storing each service module attribute in a relational table of the service module data structure;
subsequent to said storing each service module attribute, said processor servicing the request within the service module data structure, wherein said servicing results in instantiating the generic service document, and wherein said servicing comprises: performing the transaction, retrieving each mapping of at least one mapping in the mapping table of the service module data structure, and reloading each container of at least one container from the relational table into respective containers of the generic service document according to each retrieved mapping; and
subsequent to said servicing, said processor returning the generic service document to the caller application.

US Pat. No. 10,169,116

IMPLEMENTING TEMPORARY MESSAGE QUEUES USING A SHARED MEDIUM

International Business Ma...

1. A method for implementing temporary message queues using a shared medium at a coupling facility shared between multiple systems each having a queue manager handling messages from the system's applications, the method carried out at the coupling facility comprising:defining a list structure on the shared medium wherein the list structure has multiple lists;
providing a list which is allocated to a single queue manager in which message entries are located which belong to multiple shared temporary dynamic queues (STDQs) created by the single queue manager, wherein the message entries are located by reference to a key which determines a message entry's position in the list, the list including:
a list header which can be partitioned for multiple current STDQs by assignment of key ranges to message entries belonging to each current STDQ; and
a list control entry which holds information about the assignment of key ranges to the multiple current STDQs and shares the information with other queue managers using the STDQs, wherein the list control entry is updated by the single queue manager when an STDQ is created or deleted, wherein the list control entry includes a name of an STDQ which is in accordance with a queue naming convention and includes: an indication of the list structure, an indication of the single queue manager, a list header number, a start key on the list header, and a unique identifier for the STDQ; and
sending a message from a first system of the multiple systems to a second system of the multiple systems based on the list.
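The key-range partitioning in this claim can be sketched as a control entry that hands each new STDQ a contiguous key range on the shared list header, with a message's key computed from its queue's start key plus its position. The fixed range width and the control-entry layout below are illustrative assumptions:

```python
# Sketch of key-range partitioning of one list header among multiple shared
# temporary dynamic queues (STDQs). Range width and layout are assumptions.

RANGE_WIDTH = 1000  # assumed number of keys reserved per STDQ partition

class ListControlEntry:
    """Tracks the assignment of key ranges to current STDQs on one list."""
    def __init__(self):
        self.ranges: dict[str, tuple[int, int]] = {}  # queue name -> (start, end)
        self.next_start = 0

    def create_stdq(self, name: str) -> tuple[int, int]:
        """Assign the next free key range to a newly created STDQ."""
        start = self.next_start
        self.ranges[name] = (start, start + RANGE_WIDTH - 1)
        self.next_start += RANGE_WIDTH
        return self.ranges[name]

    def key_for(self, name: str, position: int) -> int:
        """Locate a message entry: its key is the queue's start key plus the
        entry's position within that queue."""
        start, end = self.ranges[name]
        key = start + position
        assert key <= end, "partition exhausted"
        return key
```

Because every queue manager can read the control entry's range table, any system can compute where another queue manager's STDQ entries live on the list.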

US Pat. No. 10,169,115

PREDICTING EXHAUSTED STORAGE FOR A BLOCKING API

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for operating a blocking application program interface (API) of an application, the method comprising:receiving, from a requestor, a request for data from the application;
creating, by the blocking API of the application, a buffer for the data;
receiving, by the application, a data record corresponding to the request;
storing, by the blocking API, the data record in the buffer;
based on a determination that the buffer is full, providing, by the blocking API, data records in the buffer to the requestor; and
based on a determination that the buffer is not full, determining by the blocking API, based at least in part on an amount of available storage in the buffer, whether to provide the data records in the buffer to the requestor or to wait for another data record before providing the data records in the buffer to the requestor.

US Pat. No. 10,169,114

PREDICTING EXHAUSTED STORAGE FOR A BLOCKING API

INTERNATIONAL BUSINESS MA...

8. A computer program product, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to:receive, from a requestor, a request for data from an application;
create, by a blocking API of the application, a buffer for the data;
receive, by the application, a data record corresponding to the request;
store, by the blocking API, the data record in the buffer;
based on a determination that the buffer is full, provide, by the blocking API, data records in the buffer to the requestor; and
based on a determination that the buffer is not full, determine by the blocking API, based at least in part on an amount of available storage in the buffer, whether to provide the data records in the buffer to the requestor or to wait for another data record before providing the data records in the buffer to the requestor.

US Pat. No. 10,169,113

STORAGE AND APPLICATION INTERCOMMUNICATION USING ACPI

International Business Ma...

1. A method for event-driven intercommunication, the method comprising:issuing an interrupt based on a first event from a first kernel-mode module to a second kernel-mode module via an interface,
wherein the first event corresponds to an operational parameter of a first node based, at least in part, on a shared namespace accessible by the first kernel-mode module and the second kernel-mode module, wherein the operational parameter of the first node is an anticipated status of the first node, based on one or more non-consecutive operations scheduled to be executed by the first node,
wherein the first node is a storage subsystem of a computing device in communication with the second node, the second node comprising a user-level application stored externally and accessed through a communication network by the computing device, and
issuing, by the second kernel-mode module, a second event to a second node, wherein the second event corresponds to an object of the shared namespace.

US Pat. No. 10,169,112

EVENT SEQUENCE MANAGEMENT

International Business Ma...

1. A computer-implemented method comprising:obtaining, by one or more processors, an action sequence that includes a plurality of actions executed on behalf of a plurality of users for achieving at least one goal;
generating, by one or more processors, from the obtained action sequence an event sequence that includes a plurality of events, wherein the event sequence includes respective time points at which respective actions associated with respective events are executed, and wherein each of the plurality of events is associated with a unique type of action from the plurality of actions;
determining, by one or more processors, an association model based on the generated event sequence, wherein the association model defines a chronological relationship among events associated with the at least one goal;
building, by one or more processors, a plurality of sub-models from a plurality of sub-sequences that are extracted from the event sequence, wherein at least one of the plurality of sub-models defines a chronological relationship among events associated with a portion of the at least one goal;
combining, by one or more processors, the plurality of sub-models into the association model;
for a specific event from the plurality of events included in the event sequence, extracting from the event sequence, by one or more processors, a group of sub-sequences that include the specific event;
determining, by one or more processors, a sub-model from the extracted group of sub-sequences based on respective time points included in the sub-sequences; and
selecting from the event sequence, by one or more processors, a sub-sequence that ends at the specific event for inclusion in the sub-model.

US Pat. No. 10,169,111

FLEXIBLE ARCHITECTURE FOR NOTIFYING APPLICATIONS OF STATE CHANGES

MICROSOFT TECHNOLOGY LICE...

1. A method for providing notifications to clients in response to state property changes, comprising:receiving a notification request at an Application Program Interface (API) from a client application on the computing device to receive a notification in response to an event that originates on the computing device; wherein the event is associated with a change in a state property of the computing device; wherein the Application Program Interface (API) is utilized by the client application to register the notification request;
ensuring that the state property is registered via the API, wherein the API is useable to register for notifications regarding state properties that are updated by different components within the computing device;
determining when the state property changes, wherein determining when the state property changes comprises using the API to specify a batching operation on changes to the state property that occur within a predetermined time period; wherein a call to the API batching operation specifies a time period for which a value of the state property is to remain constant before notifying the client application of a change to the state property;
determining when the client should receive notification of the state property change; and
notifying the client of the state property change on the computing device when determined that the client should receive notification of the state property change;
wherein the call to the API batching operation reduces a number of instances of notifying the client of the state property change during the time period.

US Pat. No. 10,169,108

SPECULATIVE EXECUTION MANAGEMENT IN A COHERENT ACCELERATOR ARCHITECTURE

International Business Ma...

1. A computer-implemented method for speculative execution management in a coherent accelerator architecture, the method comprising:detecting, with respect to a set of cache lines of a single shared memory in the coherent accelerator architecture, a first access request from a first Accelerator Functional Unit (AFU);
detecting, with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, a second access request from a second AFU; and
processing, by a speculative execution management engine using a speculative execution technique, the first and second access requests with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, wherein the speculative execution technique is configured to allow both the first and second AFUs to simultaneously access the set of cache lines without a locking mechanism;
determining a number of data entries in the set of cache lines of the single shared memory;
comparing the number of data entries in the set of cache lines of the single shared memory to a threshold number of data entries;
determining an elapsed period since a previous capture on the set of cache lines of the single shared memory;
comparing the elapsed period since the previous capture to a time threshold;
capturing, in response to the number of data entries exceeding the threshold number of data entries and in response to the elapsed period exceeding the time threshold, by the speculative execution management engine, a set of checkpoint roll-back data, wherein the set of checkpoint roll-back data includes an image of the first AFU at a first point in time, an image of the second AFU at the first point in time, and an image of the set of cache lines of the single shared memory at the first point in time;
evaluating the first and second access requests, wherein evaluating the first and second access requests further comprises:
identifying a first subset of target cache lines of the set of cache lines for the first access request;
identifying a second subset of target cache lines of the set of cache lines for the second access request, wherein the first and second subsets of target cache lines indicate read and write operations by the first and second access requests; and
determining, based on the identified first and second subset of target cache lines, whether a conflict exists;
in response to a determination that a conflict does not exist:
updating, in response to processing the first and second access requests with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture, a host memory directory in a batch fashion which includes a set of update data for both the first and second access requests in a single set of data traffic; and
in response to a determination that a conflict exists:
identifying a subset of cache lines of the set of cache lines where the conflict exists;
determining a number of cache lines in the subset of cache lines;
comparing the number of cache lines to a severity threshold;
rolling-back, in response to the number of cache lines exceeding the severity threshold, based on the set of checkpoint roll-back data, the coherent accelerator architecture to a prior state, wherein the rolling-back includes rolling back only the subset of cache lines where the conflict exists;
retrying, without using the speculative execution technique and in a separate fashion in relation to the second access request, the first access request with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture; and
retrying, without using the speculative execution technique and in the separate fashion in relation to the first access request, the second access request with respect to the set of cache lines of the single shared memory in the coherent accelerator architecture.
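The conflict-evaluation step of this claim reduces to a set operation: a conflict exists on the cache lines shared by both requests when at least one side writes them, and rollback triggers only when the conflicting subset exceeds the severity threshold. A minimal sketch, where the request representation (line index mapped to 'r'/'w') is an assumption:

```python
# Sketch of the conflict check between two AFU access requests: each request
# maps cache-line indices to 'r' (read) or 'w' (write). The data model is
# illustrative, not the patented hardware mechanism.

def find_conflicts(req_a: dict[int, str], req_b: dict[int, str]) -> set[int]:
    """Return the subset of cache lines where a conflict exists: lines
    targeted by both requests with at least one writer."""
    shared = req_a.keys() & req_b.keys()
    return {line for line in shared
            if req_a[line] == 'w' or req_b[line] == 'w'}

def should_roll_back(conflicts: set[int], severity_threshold: int) -> bool:
    """Roll back (only the conflicting lines) when the number of conflicting
    cache lines exceeds the severity threshold."""
    return len(conflicts) > severity_threshold
```

Note that concurrent reads of a shared line never conflict, which is why the speculative technique can dispense with a locking mechanism in the common case.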

US Pat. No. 10,169,107

ALMOST FAIR BUSY LOCK

International Business Ma...

1. A method comprising:publishing a current state of a lock and a claim non-atomically to the lock by a next owning thread, in an ordered set of threads, that has requested to own the lock, the claim comprising a structure capable of being read and written only in a single memory access,
obtaining, by each thread in the ordered set of threads, a ticket,
wherein the claim comprises an identifier of a ticket obtained by the next owning thread, and an indication that the next owning thread is claiming the lock;
comparing the ticket obtained by the next owning thread with a current ticket;
responsive to a match between the ticket obtained by the next owning thread and the current ticket, preventing thread monitoring preemptions; and
responsive to a match between the ticket obtained by the next owning thread and the current ticket, non-atomically acquiring the lock.
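The ticket mechanism in this claim follows the classic ticket-lock pattern: each thread obtains a ticket in arrival order and acquires the lock only when its ticket matches the current ticket. The sketch below uses a `Condition` instead of the patent's non-atomic publish-and-claim structure, so it shows the fairness property but not the claimed lock-free bookkeeping:

```python
# Minimal ticket-lock sketch: threads acquire in strict ticket order.
# This simplifies away the patent's non-atomic claim publication; it is an
# illustration of the ticket/current-ticket match, not the claimed design.

import itertools
import threading

class TicketLock:
    def __init__(self):
        self._tickets = itertools.count()   # hands out tickets in arrival order
        self._current = 0                   # ticket currently allowed to own the lock
        self._cond = threading.Condition()

    def acquire(self) -> int:
        with self._cond:
            my_ticket = next(self._tickets)
            while my_ticket != self._current:   # wait until our ticket matches
                self._cond.wait()
            return my_ticket                    # lock is now owned by this thread

    def release(self):
        with self._cond:
            self._current += 1                  # pass ownership to the next ticket
            self._cond.notify_all()
```

Because ownership passes in ticket order, the lock is fair; the "almost fair" variant in the patent trades a little of that strictness for cheaper, non-atomic state publication.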

US Pat. No. 10,169,106

METHOD FOR MANAGING CONTROL-LOSS PROCESSING DURING CRITICAL PROCESSING SECTIONS WHILE MAINTAINING TRANSACTION SCOPE INTEGRITY

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for managing critical section processing, the method comprising:generating, using a processor, a transaction scope for a process in response to processing in a critical section;
collecting data related to the process;
generating, using the processor, a plurality of requests using the collected data;
storing the plurality of requests and data as a plurality of pending items chained together to form an ordered list in a private storage during critical section processing; and
processing, using the processor, the request based on the transaction scope, the processing comprising implementing a check of the process for any pending items in response to a transaction scope application programming interface being called or other processing relating to the pending items,
wherein the pending items are processed in the order they are created by using the ordered list, and
wherein one of the plurality of requests is a rollback request that includes at least one from a group consisting of removing the pending items from the private storage, releasing the private storage for all pending items, and resuming normal processing.

US Pat. No. 10,169,105

METHOD FOR SIMPLIFIED TASK-BASED RUNTIME FOR EFFICIENT PARALLEL COMPUTING

QUALCOMM Incorporated, S...

1. A method of scheduling and executing lightweight computational procedures in a computing device, comprising:determining whether a first task pointer in a task queue is a simple task pointer for a lightweight computational procedure;
scheduling a first simple task for the lightweight computational procedure for execution by a first thread in response to determining that the first task pointer is a simple task pointer;
retrieving a kernel pointer for the lightweight computational procedure from an entry of a simple task table, wherein the entry is associated with the simple task pointer; and
directly executing the lightweight computational procedure as the first simple task.
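The dispatch path in this claim can be sketched with a tagged pointer: a bit on the task pointer marks it as a simple task, and the remaining bits index a simple task table whose entry holds the kernel to run directly. The low-bit tagging scheme and the table layout below are assumptions for illustration:

```python
# Hypothetical sketch of simple-task dispatch: a tag bit marks a pointer as
# a simple task pointer; the rest of the pointer indexes a simple task table
# holding the kernel to execute directly. Tagging scheme is an assumption.

SIMPLE_TAG = 0x1

simple_task_table: dict[int, object] = {}   # entry id -> kernel (callable)

def make_simple_pointer(entry_id: int) -> int:
    """Encode a simple task pointer: entry id shifted left, tag bit set."""
    return (entry_id << 1) | SIMPLE_TAG

def is_simple_pointer(ptr: int) -> bool:
    return bool(ptr & SIMPLE_TAG)

def dispatch(ptr: int):
    """If ptr is a simple task pointer, retrieve its kernel from the simple
    task table and execute the lightweight procedure directly; otherwise the
    full task scheduler (not sketched) would handle it."""
    if is_simple_pointer(ptr):
        kernel = simple_task_table[ptr >> 1]
        return kernel()
    raise NotImplementedError("full task scheduling not sketched")
```

The point of the fast path is that a lightweight kernel skips the heavyweight task object entirely: one table lookup, one call.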

US Pat. No. 10,169,104

VIRTUAL COMPUTING POWER MANAGEMENT

International Business Ma...

1. A computer-implemented method comprising:receiving a request for a computing resource for a computing task, wherein the computing task is active in a computing system, and wherein the computing system includes a current power consumption profile for the computing task and a historical power consumption profile for the computing task;
determining whether a peak in a current power consumption profile is expected based on the historical power consumption profile for the computing task; and
responsive to determining a peak is expected, delaying the request for the computing resource by: (i) initiating an allocation timeout, (ii) determining whether the allocation timeout is effective in reducing the current power consumption profile, (iii) responsive to determining the allocation timeout is not effective in reducing the current power consumption profile, granting the request for the computing resource, and (iv) updating the historical power consumption profile.

US Pat. No. 10,169,103

MANAGING SPECULATIVE MEMORY ACCESS REQUESTS IN THE PRESENCE OF TRANSACTIONAL STORAGE ACCESSES

International Business Ma...

1. A processing unit, comprising:a processor core;
a cache memory coupled to the processor core; and
transactional memory logic that, responsive to receipt of a speculative memory access request at the cache memory that includes a target address of data speculatively requested for the processor core, determines whether the target address of the speculative memory access request matches an address in a set of addresses forming a store footprint of a memory transaction and, responsive to determining that the target address of the speculative memory access request matches an address in the set of addresses forming the store footprint of a memory transaction, causes the cache memory to reject servicing the speculative memory access request;
wherein:
the transactional memory logic determines whether the speculative memory access request is a transactional speculative memory access request or a non-transactional speculative memory access request;
the transactional memory logic causes the cache memory to reject servicing the speculative memory access request in response to determining the memory access request is a transactional speculative memory access request; and
the transactional memory logic fails the memory transaction in response to determining the speculative memory access request is a non-transactional speculative memory access request.

US Pat. No. 10,169,102

LOAD CALCULATION METHOD, LOAD CALCULATION PROGRAM, AND LOAD CALCULATION APPARATUS

FUJITSU LIMITED, Kawasak...

7. A load calculation method comprising:acquiring processor usage information including usage of a processor of a managed physical machine, and usage of the processor for each of a plurality of virtual machines generated by a hypervisor executed on the managed physical machine;
acquiring data transmission information including a data transmission amount for each of a plurality of virtual network interfaces used by the plurality of virtual machines;
determining a virtual network interface having a second correlation between overhead processor usage, obtained by subtracting a sum of processor usage of the plurality of virtual machines from processor usage of the managed physical machine, and a data transmission amount of the virtual network interface, to be a second virtual network interface that performs data transmission via the hypervisor among the plurality of virtual network interfaces;
determining a virtual network interface having a first correlation that is smaller than the second correlation between the overhead processor usage and the data transmission amount of the virtual network interface, to be a first virtual network interface that performs data transmission without routing through the hypervisor, among the plurality of virtual network interfaces;
calculating load information including an amount of increase or decrease of the processor usage for data transmission in the managed physical machine to which a virtual machine is to be added or from which it is to be deleted, based on whether each of the virtual machines uses the first virtual network interface or the second virtual network interface; and
adding or deleting the virtual machine to be added or deleted to or from a managed physical machine that is selected based on the calculated amount of increase or decrease of the processor usage for data transmission.

US Pat. No. 10,169,101

SOFTWARE BASED COLLECTION OF PERFORMANCE METRICS FOR ALLOCATION ADJUSTMENT OF VIRTUAL RESOURCES

International Business Ma...

1. A method for collecting and processing performance metrics, the method comprising:assigning, by one or more computer processors, an identifier corresponding to a first workload, wherein the first workload includes inbound input-output transactions from input-output devices and accelerators associated with a first virtual machine, wherein the first virtual machine is a container;
recording, by the one or more computer processors, resource consumption data, wherein the resource consumption data is selected from a group consisting of: one or more time stamps, one or more identified workloads, and one or more resource consumption estimates associated with the one or more time stamps, of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
creating, by the one or more computer processors, a relational association of the first workload and the first virtual machine to the resource consumption data of the at least one processor, wherein creating a relational association between the first workload and the first virtual machine further comprises using the calculated difference in resource consumption between the performance monitoring interrupt and a previous interrupt to track a change in resource consumption of the at least one processor over time;
determining, by the one or more computer processors, if the first workload is complete; responsive to determining that the first workload is not complete, calculating, by the one or more computer processors, a difference in recorded resource consumption data between the performance monitoring interrupt and a previous performance monitoring interrupt;
assigning, by the one or more computer processors, an identifier corresponding to a second workload associated with the first virtual machine;
recording, by the one or more computer processors, resource consumption data of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
creating, by the one or more computer processors, a relational association of the second workload and the first virtual machine to the resource consumption data of the at least one processor;
determining, by the one or more computer processors, if the second workload is complete;
responsive to determining that the second workload is complete, switching, by the one or more computer processors, the first virtual machine to a third workload;
aggregating, by the one or more computer processors, the recorded resource consumption data to provide one or more resource consumption estimates; and
notifying, by the one or more computer processors, a resource manager, wherein the resource manager is a hardware component, of a workload switch between the second workload and the third workload and data regarding changes in resource consumption of the at least one processor over time.

US Pat. No. 10,169,100

SOFTWARE-DEFINED STORAGE CLUSTER UNIFIED FRONTEND

INTERNATIONAL BUSINESS MA...

1. A method, comprising:initializing a plurality of first layer software defined storage (SDS) clusters, each of the first layer SDS clusters comprising multiple storage nodes, each of the multiple storage nodes executing in separate independent virtual machines on respective separate independent servers;
defining a second layer SDS cluster comprising a combination of the first layer SDS clusters; and
managing, using a distributed management application, the second layer SDS cluster, the distributed management application comprising multiple management nodes executing on all of the servers; wherein each of the separate independent virtual machines comprises a first virtual machine, and wherein each server comprises a second virtual machine that executes a given management node; and wherein the distributed management application comprising the multiple management nodes executing on all of the servers provides a unified front-end interface for accessing each of the first layer SDS clusters and the second layer SDS cluster.

US Pat. No. 10,169,099

REDUCING REDUNDANT VALIDATIONS FOR LIVE OPERATING SYSTEM MIGRATION

International Business Ma...

1. A method to reduce redundant validations for live operating system migration to increase performance of a previous mobility event, the method comprising:monitoring, by a virtualization manager, for configuration changes in a validation inventory, wherein the validation inventory comprises data selected from a group consisting of: physical hardware data related to a previous mobility event, and virtual hardware data related to the previous mobility event, and wherein the live operating system migration is performed by a control point and the virtualization manager in combination, and wherein monitoring for the configuration changes in the validation inventory is based on determining configuration changes in one or more of a Virtual Fiber Channel (VFC), a Storage Area Network (SAN), and an external storage subsystem;
receiving a request to perform the live operating system migration of a logical partition (LPAR) from a source LPAR on a source computer to a target LPAR on a target computer, wherein the target LPAR is created using the validation inventory corresponding to the source LPAR,
receiving from a repository of validation inventory and based on the received request the validation inventory corresponding to the source LPAR;
in response to determining that the monitored validation inventory has changed, re-validating the received validation inventory prior to beginning the live operating system migration of the source LPAR to the target LPAR, re-caching the repository of validation inventory with the re-validated validation inventory, and performing the live operating system migration of the source LPAR to the target LPAR;
and
in response to determining that the monitored validation inventory has not changed, performing the live operating system migration of the source LPAR to the target LPAR by using contents of the unchanged validation inventory, allowing the source LPAR to continually run during the live operating system migration, and without performing the re-validation of the received validation inventory.

US Pat. No. 10,169,098

DATA RELOCATION IN GLOBAL STORAGE CLOUD ENVIRONMENTS

INTERNATIONAL BUSINESS MA...

1. A method of data relocation in global storage cloud environments, comprising:providing a computer system, being operable to:
mapping a user device to a home data server to store data of a user;
locating a data server near a travel location of the user based on one or more travel plans of the user, the one or more travel plans include one or more final travel locations and one or more intermediate travel locations including temporary locations the user travels prior to reaching the one or more final travel locations including a stopover or a layover;
locating the one or more intermediate travel locations during a user's travels using online travel web sites;
indexing and sorting one or more user-defined policies based on an owner and class of each policy of the one or more policies;
accessing the one or more user-defined policies by a primary key which includes an owner and a class of a desired policy out of the one or more user-defined policies;
filtering data from the stored data based on the one or more user-defined policies to determine which stored data is to be transferred; and
transferring the filtered data from the home data server near a home location of the user to the data server near the travel location.

US Pat. No. 10,169,097

DYNAMIC QUORUM FOR DISTRIBUTED SYSTEMS

Microsoft Technology Lice...

1. In a distributed computing system in which performance of a computing task within the distributed system is based at least in part upon each of a minimum number of nodes or devices providing authorization for performance of the computing task, a method of dynamically managing the minimum number of nodes or devices required to enable performance of the computing task, the method comprising:instantiating a dynamic quorum daemon in the distributed system, the dynamic quorum daemon running as a background task in the distributed system and the dynamic quorum daemon managing a set of nodes within the distributed system that are enabled to authorize performance of a computing task in the distributed system;
establishing that each of one or more nodes and zero or more devices in the distributed system is designated as an authorizing entity enabled to authorize performance of a computing task in the distributed system;
establishing a minimum number of the authorizing entities which are required to authorize performance of the computing task in order to allow performance of the computing task in the distributed system;
the dynamic quorum daemon determining that a state of a node or device in the distributed system has changed;
based on the determined change in the state of a node or device, the dynamic quorum daemon changing a designation of whether the node or device is an authorizing entity that is enabled to authorize performance of a computing task in the distributed system; and
based on the change of the designation of the node or device, the dynamic quorum daemon adjusting the minimum number of authorizing entities which are required to authorize performance of a computing task in order to allow performance of the computing task in the distributed system, the adjustment of the minimum number of authorizing entities being based at least in part upon a quorum policy which comprises one of node-majority with disk witness or node-majority with file share witness.
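The quorum arithmetic behind a node-majority-with-witness policy can be sketched as follows. This is an illustrative model only, not Microsoft's implementation; the class and names are hypothetical, and the witness is modeled simply as one extra vote:

```python
def minimum_authorizers(node_count, has_witness):
    """Majority of voters; a disk or file-share witness adds one vote."""
    voters = node_count + (1 if has_witness else 0)
    return voters // 2 + 1

class DynamicQuorum:
    """Toy daemon state: which nodes may authorize, and the adjusted minimum."""
    def __init__(self, nodes, has_witness=True):
        self.authorizers = set(nodes)
        self.has_witness = has_witness

    def on_state_change(self, node, healthy):
        # Change the node's authorizing designation, then recompute the minimum.
        (self.authorizers.add if healthy else self.authorizers.discard)(node)
        return minimum_authorizers(len(self.authorizers), self.has_witness)

q = DynamicQuorum(["n1", "n2", "n3"])
print(q.on_state_change("n3", healthy=False))  # majority of 2 nodes + witness = 2
```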

US Pat. No. 10,169,096

IDENTIFYING MULTIPLE RESOURCES TO PERFORM A SERVICE REQUEST

Hewlett-Packard Developme...

1. A method for scheduling a service request, the method comprising:
receiving the service request including a latency associated with a publication of a result of the service request;
retrieving heterogeneous data upon the receipt of the service request;
filtering the heterogeneous data to obtain data relevant to the service request;
determining an amount of relevant data prior to computing a workload for the service request;
computing the workload for the service request;
identifying multiple resources, based on the latency and the workload, to perform the service request;
distributing the workload to the identified multiple resources; and
publishing the results of the service request in accordance with the latency.
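The filter-size-distribute flow in this claim can be sketched in a few lines. This is a hedged illustration under assumed units (a fixed per-item cost and a latency budget in seconds), not HP's implementation:

```python
import math

def schedule(heterogeneous_data, is_relevant, latency_s, seconds_per_item):
    """Filter the data, size the workload, pick enough resources to meet latency."""
    relevant = [d for d in heterogeneous_data if is_relevant(d)]
    workload_s = len(relevant) * seconds_per_item            # total compute time
    n_resources = max(1, math.ceil(workload_s / latency_s))  # parallel workers
    # Round-robin distribution of relevant items to the identified resources.
    shards = [relevant[i::n_resources] for i in range(n_resources)]
    return n_resources, shards

n, shards = schedule(range(100), lambda d: d % 2 == 0, latency_s=5, seconds_per_item=0.5)
print(n)  # 50 relevant items * 0.5 s = 25 s of work / 5 s budget -> 5 resources
```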

US Pat. No. 10,169,092

SYSTEM, METHOD, PROGRAM, AND CODE GENERATION UNIT

International Business Ma...

1. A system for performing exclusive control among tasks, the system comprising:
a lock status storage unit for storing update information that is updated in response to acquisition and release of an exclusive lock by one task and for storing task identification information for identifying a task that has acquired an exclusive lock;
an exclusive execution unit for causing processing in a critical section included in a first task to be performed by acquiring the exclusive lock, wherein the exclusive execution unit releases the exclusive lock and updates the update information after the processing in the critical section included in the first task; and
a nonexclusive execution unit for causing processing in a critical section included in a second task to be performed, the processing in the critical section included in the second task excluding acquiring the exclusive lock;
wherein, when the processing in the critical section by the second task has not been successfully completed when a predetermined condition has been reached and the update information has been changed, the nonexclusive execution unit acquires the exclusive lock and the processing in the critical section included in the second task is performed.
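The claimed scheme resembles optimistic (sequence-lock-style) execution with a locked fallback. The sketch below is a simplified, hypothetical model: the update information is a version counter bumped on lock release, and the nonexclusive path retries under the lock when that counter has changed:

```python
import threading

class OptimisticSection:
    """Sketch: a second task runs its critical section without the lock,
    then redoes it under the lock if the update information changed."""
    def __init__(self):
        self._lock = threading.Lock()
        self.version = 0              # update information, bumped on release

    def run_exclusive(self, critical):
        with self._lock:              # acquire, run, release + update
            critical()
            self.version += 1

    def run_nonexclusive(self, critical):
        seen = self.version           # snapshot the update information
        critical()                    # optimistic attempt, no lock taken
        if self.version != seen:      # update information changed: redo locked
            self.run_exclusive(critical)
```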

US Pat. No. 10,169,091

EFFICIENT MEMORY VIRTUALIZATION IN MULTI-THREADED PROCESSING UNITS

NVIDIA CORPORATION, Sant...

1. A method for scheduling tasks for execution in a parallel processor comprising two or more streaming multiprocessors, the method comprising:
receiving a set of tasks associated with a first processing context related to a first page table included in a plurality of page tables;
selecting a first task that is associated with a first address space identifier (ASID) from the set of tasks and associated with the first processing context;
determining a minimum number of streaming multiprocessors included in the two or more streaming multiprocessors able to execute the tasks included in the set of tasks based on a number of tasks each streaming multiprocessor is able to execute concurrently, wherein the minimum number of streaming multiprocessors includes at least a first streaming multiprocessor;
assigning the tasks included in the set of tasks to the minimum number of streaming multiprocessors;
selecting the first streaming multiprocessor from the two or more streaming multiprocessors to execute the first task;
scheduling the first task to execute on the first streaming multiprocessor;
selecting a second task that is associated with a second ASID from the set of tasks and associated with the first processing context; and
scheduling the second task to execute on the first streaming multiprocessor, wherein scheduling the second task occurs prior to scheduling any other task from the set of tasks to execute on a second streaming multiprocessor included in the two or more streaming multiprocessors.

US Pat. No. 10,169,089

COMPUTER AND QUALITY OF SERVICE CONTROL METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A computer, comprising:
a system bus comprising a bus management device;
a processor coupled to the system bus;
a storage coupled to the system bus, the storage comprising an operating system, and the operating system comprising a scheduling subsystem; and
at least one other device coupled to the system bus,
the processor being configured to invoke the scheduling subsystem to allocate, to at least one container of the computer, a container identity (ID) corresponding one-to-one to the at least one container,
the processor or the at least one other device being configured to send a bus request carrying the container ID and a hardware device ID of a hardware device used by the at least one container indicated by the container ID to the system bus, the hardware device comprising a memory in the storage or the processor, and the bus request being sent to the system bus comprising sending, to the system bus using a first memory management (MM) subsystem in the operating system, the bus request carrying the container ID and the hardware device ID, and
the bus management device being configured to:
search, according to the bus request, for a quality of service (QoS) parameter corresponding to both the container ID and the hardware device ID, the QoS parameter being stored in the bus management device; and
configure, according to the found QoS parameter, a resource required when the at least one container corresponding to the QoS parameter uses the hardware device corresponding to the QoS parameter, the resource comprising at least one of bandwidth, a delay, or a priority.

US Pat. No. 10,169,088

LOCKLESS FREE MEMORY BALLOONING FOR VIRTUAL MACHINES

1. A method of managing memory, comprising:
receiving, by a hypervisor, an inflate notification from a guest running on a virtual machine, the virtual machine and the hypervisor running on a host machine, the inflate notification including a first identifier corresponding to a first time, and the inflate notification indicating that a set of guest memory pages is unused by the guest at the first time;
determining whether the first identifier precedes a last identifier corresponding to a second time and included in a previously sent inflate request to the guest;
if the first identifier does not precede the last identifier:
for a first subset of the set of guest memory pages modified since the first time, determining, by the hypervisor, to not reclaim a first set of host memory pages corresponding to the first subset of guest memory pages, and
for a second subset of the set of guest memory pages not modified since the first time, reclaiming, by the hypervisor, a second set of host memory pages corresponding to the second subset of guest memory pages; and
if the first identifier precedes the last identifier, discarding the inflate notification, wherein discarding the inflate notification includes determining to not reclaim a set of host memory pages corresponding to the set of guest memory pages specified in the inflate notification.
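The identifier comparison at the heart of this claim can be sketched as follows. This is an illustrative model, not the patented hypervisor logic; identifiers are modeled as monotonically increasing integers and page modification as a set membership test:

```python
def handle_inflate(notification, last_sent_id, modified_since):
    """Sketch: a stale notification (id precedes the last request's id) is
    discarded; otherwise only pages unmodified since the snapshot are reclaimed."""
    if notification["id"] < last_sent_id:
        return []                                  # discard: reclaim nothing
    return [p for p in notification["pages"] if p not in modified_since]

print(handle_inflate({"id": 7, "pages": [1, 2, 3]}, last_sent_id=5, modified_since={2}))
# pages 1 and 3 are reclaimable; page 2 was modified after the snapshot
```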

US Pat. No. 10,169,087

TECHNIQUE FOR PRESERVING MEMORY AFFINITY IN A NON-UNIFORM MEMORY ACCESS DATA PROCESSING SYSTEM

International Business Ma...

1. A non-transitory computer readable device having a computer program product for preserving memory affinity in a non-uniform memory access data processing system, said non-transitory computer readable device comprising:
program code for, in response to a request for memory access to a page within a first memory affinity domain, determining whether or not said request is initiated by a remote processor associated with a second memory affinity domain;
program code for, in response to a determination that said request is initiated by a remote processor associated with a second memory affinity domain, determining whether or not a page migration tracking module associated with said first memory affinity domain includes an entry for said remote processor;
program code for, in response to a determination that said first page migration tracking module includes an entry for said remote processor, incrementing an access counter associated with said entry within said page migration tracking module;
program code for determining whether or not there is a page ID match with an entry within said page migration tracking module;
program code for, in response to a determination that there is no page ID match with any entry within said page migration tracking module, selecting an entry within said page migration tracking module and providing said entry with a new page ID and a new memory affinity ID;
program code for, in response to the determination that there is a page ID match with an entry within said page migration tracking module, determining whether or not there is a memory affinity ID match with said entry having the page ID field match;
program code for, in response to a determination that there is no memory affinity ID match, updating said entry with the page ID field match with a new memory affinity ID;
program code for, in response to a determination that there is a memory affinity ID match, incrementing an access counter of said entry having the page ID field match;
program code for determining whether or not said access counter has reached a predetermined threshold; and
program code for, in response to a determination that said access counter has reached a predetermined threshold, migrating said page from said first memory affinity domain to said second memory affinity domain.
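The tracking-table behavior the claim walks through (page ID match, affinity ID match, counter increment, threshold-triggered migration) can be condensed into a small sketch. The table shape and the threshold value are hypothetical, not IBM's implementation:

```python
MIGRATE_THRESHOLD = 4  # hypothetical predetermined threshold

class MigrationTracker:
    """Per-domain tracking module: page_id -> (affinity_id, access_count)."""
    def __init__(self):
        self.entries = {}

    def record_remote_access(self, page_id, remote_affinity_id):
        """Return True when the page should migrate to the remote domain."""
        entry = self.entries.get(page_id)
        if entry is None or entry[0] != remote_affinity_id:
            # No page-ID match, or affinity ID changed: (re)start an entry.
            self.entries[page_id] = (remote_affinity_id, 1)
            return False
        affinity, count = entry
        self.entries[page_id] = (affinity, count + 1)
        return count + 1 >= MIGRATE_THRESHOLD

t = MigrationTracker()
hits = [t.record_remote_access(0x10, "domain2") for _ in range(4)]
print(hits)  # [False, False, False, True]: the fourth access hits the threshold
```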

US Pat. No. 10,169,086

CONFIGURATION MANAGEMENT FOR A SHARED POOL OF CONFIGURABLE COMPUTING RESOURCES

International Business Ma...

1. A computer-implemented method of managing a shared pool of configurable computing resources, the method comprising:
collecting a set of scaling factor data related to an active workload on a configuration of the shared pool of configurable computing resources, wherein the set of scaling factor data includes an actual number of transactions per time period being processed;
ascertaining a set of workload resource data associated with the active workload, wherein the set of workload resource data includes a hardware configuration template specifying a processor requirement and a memory requirement for an expected number of transactions per time period to be processed;
computing, by subtracting the actual number of transactions per time period from the expected number of transactions per time period, a transaction per time period difference value;
comparing the transaction per time period difference value to a transaction per time period difference threshold to determine whether the transaction per time period difference value exceeds the transaction per time period difference threshold;
detecting, in response to a determination that the transaction per time period difference value exceeds the transaction per time period difference threshold, a triggering event; and
performing, in response to detecting the triggering event, a configuration action with respect to the configuration of the shared pool of configurable computing resources, wherein the configuration action includes:
reconfiguring the configuration of the shared pool of configurable computing resources.
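The subtract-compare-trigger logic is simple enough to state directly. A minimal sketch, with hypothetical throughput numbers and a callback standing in for the configuration action:

```python
def maybe_reconfigure(expected_tps, actual_tps, threshold, reconfigure):
    """Trigger the configuration action when expected - actual exceeds the threshold."""
    difference = expected_tps - actual_tps
    if difference > threshold:
        reconfigure(difference)   # the triggering event fires the action
        return True
    return False

events = []
maybe_reconfigure(1000, 400, threshold=500, reconfigure=events.append)
print(events)  # [600]: the 600-tps shortfall exceeded the threshold
```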

US Pat. No. 10,169,085

DISTRIBUTED COMPUTING OF A TASK UTILIZING A COPY OF AN ORIGINAL FILE STORED ON A RECOVERY SITE AND BASED ON FILE MODIFICATION TIMES

International Business Ma...

1. A computer-implemented method comprising:
receiving a request to perform a task indicating at least a first file used to perform the task, wherein the first file is modified at a first update time and is stored on a production site comprising hardware resources configured to store original data and process tasks associated with the original data, and wherein a first copy of the first file is created at a first copy time and stored on a recovery site comprising hardware resources configured to store copies of the original data and process tasks associated with the copies of the original data;
determining the task is a candidate for processing on the recovery site by:
determining that processing the task comprises reading the first file and creating a result file based on the first file;
determining that processing the task can be completed without user input;
determining that the task does not define a physical location for processing the task; and
determining that the task does not alter the first file as a result of performing the task;
determining the first file and the first copy of the first file match by determining the first update time is earlier than the first copy time, wherein determining the first file and the first copy of the first file match further comprises:
determining a first difference between a current time and the first copy time is above a time threshold, wherein the first copy time indicates a time the first copy of the first file began to be created, wherein the time threshold comprises an amount of time greater than an amount of time used to create the first copy of the first file;
performing the task using resources of the recovery site and using the first copy of the first file stored in the recovery site in response to determining that the first file and the first copy of the first file match, wherein performing the task further comprises:
selecting a first resource on the recovery site for processing the task based on the first resource having a first processing utilization below a processing utilization threshold, wherein the first processing utilization comprises an ongoing processing amount in the first resource divided by a processing capacity amount of the first resource, wherein the processing utilization threshold comprises 80%;
selecting the first resource on the recovery site for processing the task further based on the first resource having a first network speed above a network speed threshold, wherein the first network speed is calculated by sending a test file from the first resource to a second resource connected to the first resource via a network and measuring a transfer speed of the test file, wherein the network speed threshold comprises five megabytes per second; and
storing a result file in the recovery site in response to performing the task; and
outputting a file path in response to performing the task, wherein the file path indicates a location of the result file stored in the recovery site.
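The two gating decisions in this claim, candidacy for the recovery site and freshness of the copy, reduce to a handful of predicates. A hedged sketch with hypothetical field names and times:

```python
def is_recovery_candidate(task):
    """All four claimed conditions must hold to off-load the task."""
    return (task["read_only_result"]          # reads input, creates a result file
            and not task["needs_user_input"]
            and not task["location_bound"]    # no physical location defined
            and not task["modifies_input"])

def copy_matches(update_time, copy_time, now, copy_duration):
    """The copy is usable if it started after the last update and enough time
    has passed since the copy began that its creation must have completed."""
    return update_time < copy_time and (now - copy_time) > copy_duration

task = {"read_only_result": True, "needs_user_input": False,
        "location_bound": False, "modifies_input": False}
print(is_recovery_candidate(task) and copy_matches(100, 150, 300, 60))  # True
```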

US Pat. No. 10,169,084

DEEP LEARNING VIA DYNAMIC ROOT SOLVERS

International Business Ma...

1. A computer implemented method comprising:
identifying, by a host computer processor, graphic processor units (GPUs) that are available (available GPUs);
identifying, by the host computer processor, GPUs that are idle (initially idle GPUs) among the available GPUs for an initial iteration of deep learning;
choosing, by the host computer processor, one of the initially idle GPUs as an initial root solver GPU for the initial iteration;
initializing, by the host computer processor, weight data for an initial set of multidimensional data;
transmitting, by the host computer processor, the initial set of multidimensional data to the available GPUs;
forming, by the host computer processor, an initial set of GPUs into an initial binary tree architecture, wherein the initial set comprises the initially idle GPUs and the initial root solver GPU, wherein the initial root solver GPU is the root of the initial binary tree architecture;
calculating, by the initial set of GPUs, initial gradients and a set of initial adjusted weight data with respect to the weight data and the initial set of multidimensional data via the initial binary tree architecture;
in response to the calculating the initial gradients and the initial adjusted weight data, identifying, by the host computer processor, a first GPU among the available GPUs to become idle (first currently idle GPU) for a current iteration of deep learning;
choosing, by the host computer processor, the first currently idle GPU as a current root solver GPU for the current iteration;
transmitting, by the host computer processor, a current set of multidimensional data to the current root solver GPU;
in response to the identifying the first currently idle GPU, identifying, by the host computer processor, additional GPUs that are currently idle (additional currently idle GPUs) among the available GPUs;
transmitting, by the host computer processor, the current set of multidimensional data to the additional currently idle GPUs;
forming, by the host computer processor, a current set of GPUs into a current binary tree architecture, wherein the current set comprises the additional currently idle GPUs and the current root solver GPU, wherein the current root solver GPU is the root of the current binary tree architecture;
calculating, by the current set of GPUs, current gradients and a set of current adjusted weight data with respect to at least the weight data and the current set of multidimensional data via the current binary tree architecture;
in response to the initial root solver GPU receiving a set of calculated initial adjusted weight data, transmitting, by the initial root solver GPU, an initial update to the weight data to the available GPUs;
in response to the current root solver GPU receiving a set of current adjusted weight data, transmitting, by the current root solver GPU, a current update to the weight data to the available GPUs; and
repeating the identifying, the choosing, the transmitting, the forming, and the calculating with respect to the weight data, updates to the weight data, and subsequent sets of multidimensional data.

US Pat. No. 10,169,083

SCALABLE METHOD FOR OPTIMIZING INFORMATION PATHWAY

EMC IP Holding Company LL...

1. An apparatus comprising:
a receiving module configured to receive a request for task execution at a central processing node for worldwide data; wherein the central processing node is connected to sub-processing network nodes; wherein the sub-processing network nodes are grouped into clusters; wherein each cluster has a distributed file system mapping out network nodes for each respective cluster; wherein each cluster stores a subset of the worldwide data; and wherein each cluster is enabled to use the network nodes of the cluster to perform parallel processing; wherein the central processing node is communicatively coupled to a global distributed file system that maps over each of the cluster's distributed file systems to enable orchestration between the clusters;
a dividing module configured to divide by a worldwide job tracker the request for task execution into worldwide task trackers to be distributed to sub-processing network nodes of the clusters; wherein the network sub-nodes manage a portion of the worldwide data for each respective cluster; wherein each worldwide task tracker maintains records of sub-activities executed as part of the worldwide job;
a transmitting module configured to transmit to each of the sub-processing network nodes for each respective cluster the respective portion of the divided task execution by assigning each worldwide task tracker corresponding to the respective portion to each respective cluster; and
a leveraging module configured to generate a graph layout of data pathways, the pathways calculated based upon physical distance between the processing nodes and bandwidth constraints, the leveraging module further configured to distribute task execution based upon the processing power of the processing nodes, graph layout, and the size of data processed by the sub-processing network nodes to reduce data movement between the central processing node and the sub-processing nodes.

US Pat. No. 10,169,082

ACCESSING DATA IN ACCORDANCE WITH AN EXECUTION DEADLINE

INTERNATIONAL BUSINESS MA...

1. A method for execution by a processing module of a dispersed storage and task (DST) execution unit that includes a processor, the method comprises:
receiving a data request for execution by the DST execution unit, the data request including an execution deadline;
comparing the execution deadline to a current time, which includes:
determining an estimated un-accelerated processing duration and an estimated accelerated processing duration;
determining that the execution deadline compares favorably to the current time when an addition of the estimated un-accelerated processing duration to the current time does not exceed the execution deadline; and
determining that the execution deadline compares favorably to the current time when an addition of the estimated accelerated processing duration to the current time does not exceed the execution deadline and the addition of the estimated un-accelerated processing duration to the current time exceeds the execution deadline;
generating an error response when the execution deadline compares unfavorably to the current time; and
when the execution deadline compares favorably to the current time:
determining a priority level based on the execution deadline; and
executing the data request in accordance with the priority level, wherein executing the data request includes accelerating the executing of the data request when the addition of the estimated accelerated processing duration to the current time does not exceed the execution deadline and the addition of the estimated un-accelerated processing duration to the current time exceeds the execution deadline.
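The favorable/unfavorable deadline comparison reduces to two additions and two comparisons. A minimal sketch with hypothetical times in seconds, not the DST unit's actual logic:

```python
def plan_execution(now, deadline, normal_s, accelerated_s):
    """Sketch of the claimed check: normal execution if it fits the deadline;
    accelerated execution if only that fits; otherwise an error response."""
    if now + normal_s <= deadline:
        return "normal"
    if now + accelerated_s <= deadline:
        return "accelerated"
    return "error"

print(plan_execution(now=100, deadline=110, normal_s=15, accelerated_s=8))
# normal (100+15) misses the deadline, accelerated (100+8) makes it
```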

US Pat. No. 10,169,081

USE OF CONCURRENT TIME BUCKET GENERATIONS FOR SCALABLE SCHEDULING OF OPERATIONS IN A COMPUTER SYSTEM

Oracle International Corp...

1. A non-transitory computer readable medium comprising instructions, which when executed by one or more hardware processors, cause performance of operations comprising:
determining a time for performing an action on a first object stored in a data repository, wherein the action comprises one of:
deleting the first object from the data repository,
modifying content of the first object, or
transferring the first object from one location in the repository to another location in the repository;
responsive to determining, at runtime, that a first time bucket generation of a plurality of time bucket generations is a time bucket generation last-configured for storing references included in an object processing index: selecting the first time bucket generation of the plurality of time bucket generations for storing a first reference to the first object, wherein each time bucket generation comprises time buckets that are (a) of a same interval size and (b) correspond to different time periods;
wherein the object processing index comprises references to objects that are to be processed at a particular time;
responsive to selecting the first time bucket generation: selecting a first time bucket of the first time bucket generation based on the time for performing the action on the first object;
storing the first reference to the first object in the first time bucket of the first time bucket generation;
adding a second time bucket generation to the plurality of time bucket generations by configuring the second time bucket generation for the object processing index;
wherein the first time bucket generation and the second time bucket generation are concurrently configured for the object processing index on a temporary basis while the object processing index is transitioned from using the first time bucket generation to using the second time bucket generation;
determining a time for performing an action on a second object stored in the data repository, wherein the action comprises one of:
deleting the second object from the data repository,
modifying the content of the second object, or
transferring the second object from one location in the repository to another location in the repository;
responsive to determining, at runtime, that the second time bucket generation of the plurality of time bucket generations is the time bucket generation last-configured for storing references included in the object processing index: selecting the second time bucket generation of the plurality of time bucket generations for storing a second reference to the second object;
responsive to selecting the second time bucket generation: selecting a second time bucket of the second time bucket generation based on the time for performing the action on the second object;
storing the second reference to the second object in the second time bucket of the second time bucket generation,
wherein the first object corresponding to the first time bucket in the first time bucket generation and the second object corresponding to the second time bucket in the second time bucket generation are processed in accordance with the object processing index.
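The generation/bucket structure can be sketched compactly: a generation is a set of fixed-interval buckets, and new references always land in the last-configured generation while older generations remain readable during the transition. The classes and interval below are hypothetical illustrations, not Oracle's implementation:

```python
class BucketGeneration:
    """Time buckets of one fixed interval; a reference lands in the bucket
    whose time period covers its scheduled processing time."""
    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.buckets = {}          # bucket index -> list of object references

    def store(self, ref, when_s):
        self.buckets.setdefault(when_s // self.interval_s, []).append(ref)

class ProcessingIndex:
    """The last-configured generation receives new references; prior
    generations stay readable while the index transitions to the new one."""
    def __init__(self, interval_s):
        self.generations = [BucketGeneration(interval_s)]

    def add_generation(self, interval_s):
        self.generations.append(BucketGeneration(interval_s))

    def store(self, ref, when_s):
        self.generations[-1].store(ref, when_s)  # last-configured generation

idx = ProcessingIndex(interval_s=60)
idx.store("obj1", when_s=125)       # first generation, bucket 125 // 60 == 2
idx.add_generation(interval_s=60)   # both generations briefly coexist
idx.store("obj2", when_s=250)       # second generation, bucket 250 // 60 == 4
```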

US Pat. No. 10,169,080

METHOD FOR WORK SCHEDULING IN A MULTI-CHIP SYSTEM

Cavium, LLC, Santa Clara...

1. A method of processing work items in a multi-chip system, the method comprising:
designating, by a work source component associated with a source chip device, a work item to a scheduler processor for scheduling, the source chip device being one of multiple chip devices of the multi-chip system, the work source component comprising a core processor or a coprocessor configured to create work items;
assigning, by the scheduler processor, the work item to a destination chip device of the multiple chip devices for processing, the scheduler processor being one of one or more scheduler processors each associated with a corresponding chip device of the multiple chip devices.

US Pat. No. 10,169,079

TASK STATUS TRACKING AND UPDATE SYSTEM

INTERNATIONAL BUSINESS MA...

1. A method for providing status updates while collaboratively resolving an issue, the method comprising:
receiving, using a processing device, an electronic text-based message from a user;
identifying, using the processing device, one or more key phrases in the electronic text-based message, wherein the one or more key phrases are identified based at least in part on training a neural network using training data and applying the neural network to the electronic text-based message, wherein the training data includes key phrases manually indicated by a user;
in response to identifying the one or more key phrases in the received electronic text-based message, automatically displaying, by the processing device, the one or more key phrases to the user with highlighted text;
receiving, by the processing device, a selection from a user of a displayed key phrase from the one or more key phrases that were displayed with highlighted text; and
in response to the user selecting the displayed key phrase from the one or more key phrases displayed with highlighted text, providing at least one status-based suggestion to the user to change a status milestone associated with a problem resolution based on the user selected key phrase;
wherein the providing of the at least one status-based suggestion to the user based on the user selected key phrase comprises:
building a table to map a key phrase to one or more status identifiers;
mapping the key phrase to one or more status identifiers to associate the key phrase with the at least one status-based suggestion;
in response to the user selecting the displayed key phrase having highlighted text, matching the highlighted text to the key phrase of the table to identify the at least one status-based suggestion that is associated with the matching key phrase in the table and then displaying the at least one status-based suggestion to the user for selection; and
displaying a corresponding status milestone based on the user selecting from the at least one status-based suggestion.
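The table that maps a key phrase to status-based suggestions is, at its core, a dictionary lookup. A hedged sketch with invented phrases and milestone names (nothing here comes from the patent itself):

```python
# Hypothetical mapping table: key phrase -> status-based suggestions.
status_table = {
    "root cause found": ["Set milestone: Diagnosed"],
    "fix deployed": ["Set milestone: Resolved", "Set milestone: Verifying"],
}

def suggestions_for(selected_phrase):
    """Match the user-selected highlighted phrase against the table."""
    return status_table.get(selected_phrase.lower(), [])

print(suggestions_for("Fix deployed"))
# ['Set milestone: Resolved', 'Set milestone: Verifying']
```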

US Pat. No. 10,169,078

MANAGING THREAD EXECUTION IN A MULTITASKING COMPUTING ENVIRONMENT

International Business Ma...

1. A method for managing thread execution, the method comprising:
predicting, by one or more computer processors, an amount of processor usage that would be used by a thread in a computing system for execution of a critical section of code, where the critical section of code is defined by a starting marker and an ending marker in a program code that contains the critical section of code;
determining that the thread has a sufficient processor usage allowance to execute the critical section of code to completion; and
in response to determining that the thread has sufficient processor usage allowance to execute the critical section of code to completion:
scheduling, by one or more computer processors, the thread for execution of the critical section of code;
receiving, by one or more computer processors, a request to deschedule the thread, wherein the request is made in response to determining that the thread has insufficient processor usage allowance to continue execution;
responsive to receiving a request to deschedule the thread, scheduling, by one or more computer processors, the thread to complete execution of the critical section of code;
responsive to scheduling the thread to complete execution, determining, by one or more computer processors, processor usage debt accumulated by the thread;
determining that the thread has completed execution of the critical section of code;
responsive to determining that the thread has completed execution of the critical section of code, suspending the thread; and
preventing further execution of the thread until after the processor has executed one or more other threads for an amount of time equal to the amount of processor usage debt accumulated by the thread;
wherein:
the predicted amount of processor usage is a percentage of total execution capacity of the processor that the thread is predicted to use during execution of the critical section of code;
the processor usage debt comprises an amount of time for which the thread is executing while the thread has both insufficient processor usage allowance to continue execution and is executing the critical section of code; and
the one or more computer processors are one or more field programmable gate arrays.
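The usage-debt repayment rule (suspend the thread until other threads have run for as long as the debt it accumulated) can be sketched over a toy timeline of scheduling slices. This is an illustrative model with hypothetical thread names and durations, not the patented scheduler:

```python
def debt_repayment_schedule(slices, debtor, debt_s):
    """Sketch: the debtor thread stays suspended until other threads have
    run for time equal to its accumulated processor usage debt."""
    timeline, repaid = [], 0.0
    for name, slice_s in slices:
        if name == debtor and repaid < debt_s:
            continue                 # debtor is suspended while in debt
        timeline.append((name, slice_s))
        if name != debtor:
            repaid += slice_s        # other threads' runtime repays the debt
    return timeline

print(debt_repayment_schedule(
    [("t1", 2.0), ("debtor", 1.0), ("t2", 3.0), ("debtor", 1.0)],
    debtor="debtor", debt_s=4.0))
# the debtor's first slice is skipped; it runs again only after t1 + t2 repay 4.0 s
```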

US Pat. No. 10,169,077

SYSTEMS, DEVICES, AND METHODS FOR MAINFRAME DATA MANAGEMENT

United Services Automobil...

1. A method comprising:
loading, by a processor, a utility program onto an operating system of a mainframe, wherein the operating system hosts an application that includes a plurality of running jobs, wherein the utility program includes a set of configuration metadata for the application;
configuring, by the processor, the utility program such that the utility program is configured to receive a user input from a workstation that is in communication with the mainframe and the utility program is configured to interface with the application based on the set of configuration metadata responsive to the user input, wherein the mainframe is in communication with the workstation;
creating, by the processor, a job via the utility program interfacing with the application based on the set of configuration metadata responsive to the user input;
submitting, by the processor, the job to the application via the utility program interfacing with the application based on the set of configuration metadata responsive to the user input;
querying, by the processor, the job at the application before completion for an error via the utility program based on the set of configuration metadata;
triggering, by the processor, an alert based on the set of configuration metadata via the utility program responsive to the error; and
outputting, by the processor, the alert to the workstation in communication with the utility program.
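The monitoring flow in this claim can be outlined as follows. This is a minimal sketch under assumed interfaces: the utility class, the metadata schema, and the alert sink are all invented for illustration, not taken from the patent.

```python
class MainframeUtility:
    """Toy utility driven by per-application configuration metadata:
    it submits a job, polls it for errors before completion, and
    raises an alert destined for the workstation."""

    def __init__(self, config_metadata, alert_sink):
        self.config = config_metadata    # e.g. {"error_codes": {"ABEND"}}
        self.alert_sink = alert_sink     # callable delivering the alert

    def submit_job(self, application, job):
        # Interface with the application as the metadata dictates.
        application.submit(job)

    def poll(self, application, job_id):
        # Query the job before completion; trigger an alert on error.
        status = application.status(job_id)
        if status in self.config["error_codes"]:
            self.alert_sink(f"job {job_id} failed with {status}")
        return status
```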

US Pat. No. 10,169,076

DISTRIBUTED BATCH JOB PROMOTION WITHIN ENTERPRISE COMPUTING ENVIRONMENTS

International Business Ma...

1. A computer-implemented method for batch code promotion between enterprise scheduling system environments, the method comprising the steps of:connecting, by one or more processors, a graphical interface of an entity to one or more enterprise scheduling environments for promoting changes of batch code of the entity between the one or more enterprise scheduling environments, the batch code is processed during a batch job, the batch job is a low priority job, wherein the low priority batch job is processed by the one or more enterprise scheduling environments;
mapping, by the one or more processors, parameters to batch code fields of the batch code that change from a first scheduling level of the one or more enterprise scheduling environments to a second scheduling level of the one or more enterprise scheduling environments to create a mapping table to the batch code fields that change from the first scheduling level and the second scheduling level, wherein the parameters include at least one batch job scheduling object identification, wherein the scheduling object identification further includes a container for all low priority batch jobs, an identification of the batch code, and an identification of the network workstations of the one or more enterprise scheduling environments for promoting the batch code between the first scheduling level and the second scheduling level;
generating a backup, in memory, of the mapping table to the batch code fields;
in response to an action on the graphical interface to promote the changes of the batch code fields between the mapped parameters of the first scheduling level and the second scheduling level, assigning, by the one or more processors, identification to the changes of the batch code fields;
in response to a request to promote the identified changes of the batch code fields, promoting, by the one or more processors, the requested identified changes from the first scheduling level to the second scheduling level using the mapped parameters of the first scheduling level and the second scheduling level; and
correlating, by the one or more processors, the mapping table of changed batch code fields of the first scheduling level with the mapping table of changed batch code fields of the second scheduling level, wherein the correlated mapping table of the batch code fields that change between the first scheduling level and the second scheduling level includes metadata of batch code for each one of the first and the second scheduling levels, and wherein the metadata of the batch code for each one of the first and the second scheduling levels is identified for promoting changes of the batch code fields from the first scheduling level to the second scheduling level, further includes the steps of:
creating the batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
verifying the created batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
operating the verified batch job of batch code fields of the metadata during the change of the first scheduling level and the second scheduling level between the first and the second scheduling levels;
generating a second mapping table based on the mapped parameters and the created, verified, and operated batch job of batch code fields; and
promoting the operated batch job between the second scheduling level and a third scheduling level based on the second mapping table.
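The mapping-table mechanics at the heart of this claim can be illustrated with a small sketch. Field names, level labels, and the dictionary representation below are invented for the example; only the idea of mapping changed fields between levels and applying them on promotion comes from the claim.

```python
def build_mapping_table(level_a_fields, level_b_fields):
    """Map each batch-code field name to its (level_a, level_b) values,
    keeping only the fields that differ between the two levels."""
    return {
        name: (level_a_fields[name], level_b_fields.get(name))
        for name in level_a_fields
        if level_a_fields[name] != level_b_fields.get(name)
    }

def promote(target_fields, mapping_table):
    """Promote the changed fields: apply the source-level values of every
    mapped field onto a copy of the target level's fields."""
    promoted = dict(target_fields)
    for name, (source_value, _old_value) in mapping_table.items():
        promoted[name] = source_value
    return promoted
```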

US Pat. No. 10,169,075

METHOD FOR PROCESSING INTERRUPT BY VIRTUALIZATION PLATFORM, AND RELATED DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A method for processing an interrupt by a virtualization platform, wherein the method is applied to a computing node, wherein the computing node comprises a physical hardware layer, a host running at the physical hardware layer, at least one virtual machine (VM) running on the host, and virtual hardware that is virtualized on the at least one VM, wherein the physical hardware layer comprises X physical central processing units (pCPUs) and Y physical input/output devices, wherein the virtual hardware comprises Z virtual central processing units (vCPUs), wherein the Y physical input/output devices comprise a jth physical input/output device, wherein the at least one VM comprises a kth VM, wherein the jth physical input/output device directs to the kth VM, wherein the method is executed by the host, and wherein the method comprises:determining an nth pCPU from U target pCPUs when an ith physical interrupt occurs in the jth physical input/output device, wherein the U target pCPUs are pCPUs that comprise an affinity relationship with both the ith physical interrupt and V target vCPUs, wherein the V target vCPUs are vCPUs that are virtualized on the kth VM and comprise an affinity relationship with an ith virtual interrupt, wherein the ith virtual interrupt corresponds to the ith physical interrupt, wherein the X pCPUs comprise the U target pCPUs, and wherein the Z vCPUs comprise the V target vCPUs;
setting the nth pCPU to process the ith physical interrupt;
determining the ith virtual interrupt according to the ith physical interrupt; and
determining an mth vCPU from the V target vCPUs such that the kth VM uses the mth vCPU to execute the ith virtual interrupt,
wherein X, Y, and Z are positive integers greater than 1, wherein U is a positive integer greater than or equal to 1 and less than or equal to X, wherein V is a positive integer greater than or equal to 1 and less than or equal to Z, and wherein i, j, k, m, and n are positive integers.
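The affinity-based selection this claim walks through can be sketched compactly. One plausible reading (an assumption, since the claim does not pin down the set operation) is that the U target pCPUs are those affine to the physical interrupt and to at least one of the V target vCPUs; all identifiers and affinity sets below are illustrative.

```python
def target_pcpus(irq_affinity, vcpu_affinities):
    """pCPUs with an affinity relationship to both the physical interrupt
    and the target vCPUs (here: union of the per-vCPU affinity sets)."""
    vcpu_union = set().union(*vcpu_affinities)
    return sorted(set(irq_affinity) & vcpu_union)

def pick_pcpu(irq_affinity, vcpu_affinities):
    """Select one pCPU (here simply the lowest-numbered candidate) to be
    set to process the physical interrupt."""
    candidates = target_pcpus(irq_affinity, vcpu_affinities)
    if not candidates:
        raise LookupError("no pCPU satisfies both affinity constraints")
    return candidates[0]
```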

US Pat. No. 10,169,074

MODEL DRIVEN OPTIMIZATION OF ANNOTATOR EXECUTION IN QUESTION ANSWERING SYSTEM

International Business Ma...

1. A method, in a data processing system comprising a processor and a memory, for scheduling execution of pre-execution operations of an annotator of a question and answer (QA) system pipeline, the method comprising:using, by the data processing system, a model to represent a system of annotators of the QA system pipeline, wherein the model represents each annotator in the system of annotators as a node having one or more performance parameters for indicating a performance of an execution of an annotator corresponding to the node, wherein each annotator in the system of annotators is a program that takes a portion of unstructured input text, extracts structured information from the portion of the unstructured input text, and generates annotations or metadata that are attached by the annotator to a source of the unstructured input text, wherein, for each node in the model, the one or more performance parameters corresponding to the node comprise an arrival rate parameter and a service rate parameter of the annotator associated with the node, wherein the arrival rate parameter indicates a number of jobs arriving in the node per second, and wherein the service rate parameter indicates a number of jobs being serviced by the node per second;
determining, by the data processing system, for each annotator in a set of annotators of the system of annotators, an effective response time for the annotator based on the one or more performance parameters;
calculating, by the data processing system, a pre-execution start interval for a first annotator based on an effective response time of a second annotator, wherein execution of the first annotator is sequentially after execution of the second annotator; and
scheduling, by the data processing system, execution of pre-execution operations associated with the first annotator based on the calculated pre-execution start interval for the first annotator.
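The arithmetic implied by the arrival-rate and service-rate parameters can be made concrete with a simple queueing sketch. Modeling each annotator as an M/M/1 queue is an assumption the claim does not mandate; under it, the effective response time is 1 / (service_rate − arrival_rate), and a downstream annotator's pre-execution operations can be started that long before its predecessor finishes.

```python
def effective_response_time(arrival_rate, service_rate):
    """Mean time (seconds) a job spends at an annotator node, under an
    M/M/1 assumption with rates in jobs per second."""
    if service_rate <= arrival_rate:
        raise ValueError("annotator overloaded: service rate <= arrival rate")
    return 1.0 / (service_rate - arrival_rate)

def pre_execution_start_interval(upstream_arrival, upstream_service):
    """Start the downstream annotator's pre-execution this many seconds
    ahead, so its setup overlaps the upstream annotator's response time."""
    return effective_response_time(upstream_arrival, upstream_service)
```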

US Pat. No. 10,169,073

HARDWARE ACCELERATORS AND METHODS FOR STATEFUL COMPRESSION AND DECOMPRESSION OPERATIONS

Intel Corporation, Santa...

1. A hardware processor comprising:a core to execute a thread and offload at least one of a compression thread and a decompression thread; and
a hardware compression and decompression accelerator to execute the at least one of the compression thread and the decompression thread to consume input data and generate output data, wherein the hardware compression and decompression accelerator is coupled to a plurality of input buffers to store the input data, a plurality of output buffers to store the output data, an input buffer descriptor array with an entry for each respective input buffer, an input buffer response descriptor array with a corresponding response entry for each respective input buffer, an output buffer descriptor array with an entry for each respective output buffer, and an output buffer response descriptor array with a corresponding response entry for each respective output buffer.
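The paired descriptor arrays in this claim can be modeled in software to show the bookkeeping: each buffer has an entry in a descriptor array (work submitted to the accelerator) and a matching entry in a parallel response descriptor array (completion status written back by the device). The field names and the class below are invented; real hardware descriptor layouts differ.

```python
from dataclasses import dataclass

@dataclass
class BufferDescriptor:
    address: int      # buffer location
    length: int       # valid bytes in the buffer

@dataclass
class ResponseDescriptor:
    consumed: int = 0    # bytes the accelerator consumed or produced
    done: bool = False   # completion flag written back by the device

class DescriptorRing:
    """One descriptor array plus its parallel response-descriptor array,
    one entry per buffer, as the claim lays out for input and output."""

    def __init__(self, buffers):
        self.descriptors = [BufferDescriptor(addr, ln) for addr, ln in buffers]
        self.responses = [ResponseDescriptor() for _ in buffers]

    def complete(self, index, consumed):
        # In hardware the accelerator writes the response entry;
        # here we emulate that write-back.
        self.responses[index] = ResponseDescriptor(consumed, True)
```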

US Pat. No. 10,169,072

HARDWARE FOR PARALLEL COMMAND LIST GENERATION

NVIDIA CORPORATION, Sant...

1. A method for providing an initial default state for a multi-threaded processing environment, the method comprising:receiving, from an application program, a plurality of separate command lists corresponding to a plurality of parallel threads associated with the application program, wherein each thread in the plurality of parallel threads generates a separate command list in the plurality of command lists;
causing a first command list associated with a first thread included in the plurality of parallel threads to be executed by a processing unit based on a first processing state, wherein the first processing state includes a set of graphics parameters;
after the processing unit executes the first command list, causing a second command list associated with a second thread included in the plurality of parallel threads to be executed by the processing unit based on the first processing state inherited from the first command list;
causing a single unbind method to be executed by the processing unit, wherein the unbind method resets one or more parameters included in the set of graphics parameters to an initial processing state; and
causing commands included in a third command list to be executed by the processing unit after the unbind method is executed.
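The state-inheritance behavior in this claim can be sketched as follows: the second command list executes with whatever processing state the first left behind, and a single "unbind" restores the initial default state before a third list runs. The parameter names and command encoding are invented for illustration.

```python
INITIAL_STATE = {"blend": "off", "depth_test": "on"}

class ProcessingUnit:
    """Toy processing unit executing command lists against shared state."""

    def __init__(self):
        self.state = dict(INITIAL_STATE)

    def execute(self, command_list):
        # Each command either sets a graphics parameter or draws using
        # the current state (possibly inherited from a prior list).
        for op, *args in command_list:
            if op == "set":
                key, value = args
                self.state[key] = value
            # "draw" commands consume the current state; no state change.

    def unbind(self):
        # The single unbind method: reset parameters to the initial state.
        self.state = dict(INITIAL_STATE)
```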

US Pat. No. 10,169,071

HYPERVISOR-HOSTED VIRTUAL MACHINE FORENSICS

MICROSOFT TECHNOLOGY LICE...

1. A computing system comprising:a processor; and
memory storing instructions executable by the processor, wherein the instructions, when executed, provide a hypervisor configured to:
host a virtualization environment that includes a set of virtual machine (VM) partitions that each include an isolated execution environment managed by the hypervisor, the set of VM partitions comprising:
a root virtual machine (VM) partition,
a first child VM partition that is hypervisor-aware,
a second child VM partition that is non-hypervisor-aware, and
a forensics VM partition that:
includes a forensics service application programming interface (API),
is configured to directly access hardware resources associated with the computing system, and
is separate from, and more privileged than, the first child VM partition; and
create, in the virtualization environment:
a first inter-partition communication mechanism configured to provide a communication channel between the forensics VM partition and the first child VM partition, and
a second inter-partition communication mechanism;
wherein the forensics VM partition is configured to:
acquire, by the first inter-partition communication mechanism, forensics data from a VM running in the first child VM partition; and
provide the forensics data to a forensics service using the forensics service API.
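The acquisition path this claim describes can be outlined in a short sketch: the privileged forensics partition pulls data from a hypervisor-aware child partition over an inter-partition channel and hands it to a forensics service through an API. The channel and service shapes below are invented stand-ins, not the actual Microsoft interfaces.

```python
class InterPartitionChannel:
    """Toy stand-in for a hypervisor-provided communication channel
    between the forensics partition and a child VM partition."""

    def __init__(self, peer_vm):
        self.peer_vm = peer_vm   # mapping of artifact name -> data

    def request(self, what):
        return self.peer_vm.get(what)

class ForensicsPartition:
    def __init__(self, channel, forensics_service):
        self.channel = channel
        self.service = forensics_service   # stands in for the service API

    def acquire_and_report(self, what):
        # Acquire forensics data over the channel, then provide it
        # to the forensics service via the API.
        data = self.channel.request(what)
        self.service(what, data)
        return data
```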

US Pat. No. 10,169,070

MANAGING DEDICATED AND FLOATING POOL OF VIRTUAL MACHINES BASED ON DEMAND

United Services Automobil...

1. A computer-implemented method for managing demand of a pool of virtual machines, the method comprising:determining a demand for a use of virtual machines in a pool of virtual machines, wherein, for a pool that is managed as a dedicated pool, the demand is determined based on resource usage per virtual machine in the pool, and, for a pool that is managed as a floating pool, the demand is determined based on times that one or more virtual machines in the pool are unassigned to users of the pool;
identifying that the determined demand is outside a threshold resource usage of the pool; and
provisioning one or more additional resources to the pool.
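The two demand signals this claim distinguishes can be sketched with made-up metrics: a dedicated pool measures per-VM resource usage, while a floating pool measures how long VMs sit unassigned to users. The threshold and formulas below are illustrative assumptions.

```python
def dedicated_pool_demand(cpu_usage_per_vm):
    """Dedicated pool: demand as mean per-VM resource usage (0.0 to 1.0)."""
    return sum(cpu_usage_per_vm) / len(cpu_usage_per_vm)

def floating_pool_demand(unassigned_minutes, window_minutes):
    """Floating pool: demand as the fraction of the window that VMs
    spent assigned (i.e. 1 minus the unassigned fraction)."""
    idle = sum(unassigned_minutes) / (len(unassigned_minutes) * window_minutes)
    return 1.0 - idle

def needs_provisioning(demand, threshold=0.8):
    """Provision additional resources when demand exceeds the threshold."""
    return demand > threshold
```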

US Pat. No. 10,169,069

SYSTEM LEVEL UPDATE PROTECTION BASED ON VM PRIORITY IN A MULTI-TENANT CLOUD ENVIRONMENT

International Business Ma...

1. A computer-implemented method for managing system activities in a cloud computing environment, comprising:determining a type of system activity to perform on one or more servers in the cloud computing environment;
identifying a set of locking parameters at a plurality of hierarchal levels of the cloud computing environment available for restricting system activity on the one or more servers, wherein each locking parameter corresponds to a different type of system activity and is associated with a particular hierarchal level of the plurality of hierarchal levels;
determining whether to perform the type of system activity based on a value of a locking parameter of the set of locking parameters that is associated with the type of system activity, the particular hierarchal level of the plurality of hierarchal levels associated with the locking parameter of the set of locking parameters, and a priority associated with the type of system activity; and
performing the type of system activity after determining to perform the type of system activity.
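The hierarchical lock check this claim describes can be sketched as a table lookup: each locking parameter guards one activity type at one hierarchal level, and the activity proceeds only if its priority clears every applicable lock. The level names, activity names, and numeric priorities are invented for the example.

```python
def may_perform(activity, priority, locks):
    """Decide whether to perform `activity` given hierarchical locks.

    `locks` maps (hierarchal_level, activity_type) to the minimum
    priority allowed to override that lock. The activity runs only if,
    at every level where a lock for it is set, its priority meets the
    lock's threshold."""
    for (level, locked_activity), min_priority in locks.items():
        if locked_activity == activity and priority < min_priority:
            return False
    return True
```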

US Pat. No. 10,169,068

LIVE MIGRATION FOR VIRTUAL COMPUTING RESOURCES UTILIZING NETWORK-BASED STORAGE

Amazon Technologies, Inc....

1. A system, comprising:a plurality of compute nodes comprising one or more processors and memory, configured to implement:
a plurality of hosts for virtual compute instances;
a control plane; and
the control plane, configured to:
for a virtual compute instance that is identified for migration from a source host to a destination host and is a client of a network-based storage resource that stores data for which access is enforced according to a lease state for hosts connected to the network-based resource:
direct the destination host to establish a connection with the network-based storage resource with a standby lease state; and
direct that a request be sent to the network-based storage resource to promote the standby lease state for the destination host to a primary lease state and to change a primary lease state for the source host to another lease state.
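The lease handoff in this claim is a small state machine: the destination host first connects with a "standby" lease, then a single request promotes it to "primary" while the source host's primary lease is changed to another state. The storage class below is an illustrative model; the lease-state names follow the claim.

```python
class NetworkStorageResource:
    """Toy network-based storage resource enforcing access by lease state."""

    def __init__(self):
        self.leases = {}   # host -> "primary" | "standby" | "released"

    def connect(self, host, lease_state):
        self.leases[host] = lease_state

    def promote(self, destination, source):
        """Promote the destination's standby lease to primary and change
        the source's primary lease to another state, in one request."""
        if self.leases.get(destination) != "standby":
            raise ValueError("destination must hold a standby lease")
        self.leases[destination] = "primary"
        self.leases[source] = "released"
```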

US Pat. No. 10,169,067

ASSIGNMENT OF PROXIES FOR VIRTUAL-MACHINE SECONDARY COPY OPERATIONS INCLUDING STREAMING BACKUP JOB

Commvault Systems, Inc., ...

1. A computer-readable medium, excluding transitory propagating signals, storing instructions that, when executed by a computing device having one or more processors and non-transitory computer-readable memory, cause the computing device to perform a method comprising:identifying, by a first data agent executing on the computing device, one or more proxies in a storage management system that are eligible to back up a given virtual machine in a first set of virtual machines in the storage management system,
wherein any one proxy among the one or more proxies is one of:
(a) a first virtual machine that executes on a first computing device, wherein the first virtual machine executes a second data agent for virtual-machine backup, and
(b) a second computing device that executes a second data agent for virtual-machine backup;
wherein the identifying comprises:
(i) determining (A) a set of candidate proxies for backing up the given virtual machine, and (B) a mode of access available to each respective candidate proxy for accessing the given virtual machine's data as a source for backup,
wherein the mode of access has a predefined tier of preference,
wherein the determining is based on analyzing, by the first data agent, data from a database that is associated with a storage manager component that manages the storage management system, and
wherein the storage manager component designates the first data agent as a coordinator data agent for a first backup job for the first set of virtual machines,
(ii) classifying each candidate proxy in the set of candidate proxies based on the predefined tier of preference for the respective candidate proxy's mode of access to the given virtual machine's data as the source for backup, and
(iii) defining one or more candidate proxies that are classified in a highest tier of preference as being eligible to back up the given virtual machine; and
wherein if the defining results in the given virtual machine being stranded without an eligible proxy, subsequently defining one or more candidate proxies, which are classified in a next highest tier of preference that is less than the highest tier of preference, as being eligible to back up the given virtual machine.
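The tiered selection logic in this claim can be sketched briefly: classify each candidate proxy by the preference tier of its access mode to the VM's data, mark the best tier eligible, and fall back to the next tier when the VM would otherwise be stranded. The mode names and tier ordering below are illustrative assumptions (lower number = more preferred).

```python
TIER_OF_MODE = {"hotadd": 1, "san": 2, "nbd": 3}   # example preference tiers

def eligible_proxies(candidates):
    """candidates: list of (proxy_name, access_mode, available).

    Returns the proxies in the most-preferred tier that has any available
    proxy; skipping tiers with no usable proxy is the fallback that keeps
    the VM from being stranded without an eligible proxy."""
    usable = [(name, TIER_OF_MODE[mode])
              for name, mode, available in candidates if available]
    if not usable:
        return []
    best_tier = min(tier for _, tier in usable)
    return [name for name, tier in usable if tier == best_tier]
```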