US Pat. No. 10,216,721

SPECIALIZED LANGUAGE IDENTIFICATION

Hewlett-Packard Developme...

1. A system comprising:
multiple engines that are each to produce output representative of a summary of the document, wherein each one of the multiple engines applies a different type of engine selected from a group of engines comprising an extractive type of engine, an abstractive type of engine, and a frequency type of engine, wherein the output from each of the multiple engines varies between the multiple engines in accordance with a respective type of engine;
a composite engine to generate a filtered set of content in a single output to reduce a size of the output produced by the multiple engines, wherein the filtered set of content comprises different combinations of the output from the multiple engines that have different densities of specialized word usage;
an identification engine to:
apply a weighting mechanism to the different combinations of the output in the filtered set of content;
obtain a value corresponding to the different combinations of the output in the filtered set of content;
identify specialized language from the different combinations of the output in the filtered set of content, wherein the value corresponding to the different combinations of the output in the filtered set of content reaching at least a particular threshold indicates specialized language within that output; and
index the document based on the specialized language that is identified to identify other documents salient to the document based on the specialized language.
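The identification engine's weighting-and-threshold step might be sketched as follows; the specialized-term set, the uniform default weight, and the 0.2 threshold are illustrative assumptions, not values from the claim.

```python
def specialized_density(text, specialized_terms, weights=None):
    """Score a text span by its density of specialized word usage.

    Each specialized term found contributes its weight (default 1.0);
    the value is the weighted hit count normalized by word count.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    weights = weights or {}
    hits = sum(weights.get(w, 1.0) for w in words if w in specialized_terms)
    return hits / len(words)

def identify_specialized(combinations, specialized_terms, threshold=0.2):
    """Keep the output combinations whose value reaches at least the
    threshold, mirroring the claim's indication of specialized language."""
    return [c for c in combinations
            if specialized_density(c, specialized_terms) >= threshold]
```

A combination dense in domain terms clears the threshold while ordinary prose does not, which is the filtering behavior the claim describes.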

US Pat. No. 10,216,720

COMPARATOR ASSOCIATED WITH DICTIONARY ENTRY

Hewlett Packard Enterpris...

1. A circuit comprising:
a dictionary entry storing a dictionary word;
a register storing an input word; and
a hardware comparator associated with the dictionary entry to compare the dictionary word and the input word, based on a bit-by-bit comparison, the comparator having an output line on which to output a signal as the bit-by-bit comparison occurs, the signal indicating if the dictionary word is, based on a number of bits that the hardware comparator has thus far compared, less than the input word, equal to the input word, greater than the input word, or indeterminate, wherein indeterminate means the comparison is not yet complete,
wherein the circuit is a sorting circuit to sort a plurality of dictionary words including the dictionary word stored by the dictionary entry by utilizing a content-addressable memory that indicates whether the dictionary word is greater than or less than the input word when adding the input word to a sorted order of the dictionary words, including determining a location of the input word stored by the register within the sorted order of the dictionary words in a length of time having an upper-bounded limit irrespective of a number of the dictionary words.
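As a software analogy to the claimed hardware comparator, the output signal during a bit-by-bit comparison might behave as sketched below; the most-significant-bit-first order and equal word lengths are assumptions made for illustration.

```python
def bitwise_compare(dictionary_word, input_word):
    """Compare two equal-length bit strings most-significant bit first,
    yielding the comparator's output signal after each bit:
    'indeterminate' while the comparison is not yet complete, then
    'less', 'greater', or (after the final bit) 'equal'."""
    assert len(dictionary_word) == len(input_word)
    for i, (d, w) in enumerate(zip(dictionary_word, input_word)):
        if d < w:
            yield "less"
            return
        if d > w:
            yield "greater"
            return
        # bits equal so far: the result is known only once all bits compare
        yield "equal" if i == len(input_word) - 1 else "indeterminate"
```

The early return on the first differing bit is what gives the claimed behavior of a signal that can resolve before the full comparison completes.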

US Pat. No. 10,216,719

RELATION EXTRACTION USING QANDA

International Business Ma...

1. A computer-implemented method of extracting entity relations, the method comprising:
associating, by a computer, one or more preprogrammed questions with one or more first entity types;
associating, by the computer, one or more second entity types with one or more answers to the one or more preprogrammed questions;
identifying, by the computer, an entity annotated within a document;
extracting, by the computer, a portion of content in a proximity to the entity;
determining, by the computer, whether the entity corresponds to at least one of the one or more first entity types;
based on determining that the entity corresponds to the at least one of the one or more first entity types, determining, by the computer, the one or more answers to the one or more questions based on the extracted portion of content, wherein the determined one or more answers describe a relation between the identified entity and one or more other entities included within the portion of content;
weighting, by the computer, the determined one or more answers;
ranking, by the computer, the determined one or more answers based on the weighting;
determining, by the computer, whether a first ranked answer of the determined one or more answers is correct by comparing an entity type corresponding to the first ranked answer to the one or more second entity types associated with the determined one or more answers to the one or more questions;
based on determining that the first ranked answer is incorrect, rewording, by the computer, the one or more questions;
determining, by the computer, one or more second answers to the one or more reworded questions based on the extracted portion of content; and
associating, by the computer, the one or more second answers to the one or more reworded questions with the entity.

US Pat. No. 10,216,718

MAINTAINING CONVERSATIONAL CADENCE IN AN ONLINE SOCIAL RELATIONSHIP

International Business Ma...

1. A method for maintaining conversational cadence, comprising:
determining, by a processor, a conversational cadence associated with a user in a social network;
detecting, by the processor, a reduction in the conversational cadence of the user, wherein detecting the reduction in the conversational cadence of the user comprises at least one of detecting a reduction in an average number of messages transmitted by the user over a preset time period during a selected time duration being less than a predetermined limit and detecting an absence of messages from the user for more than a predetermined time period;
providing, by the processor, in response to detecting the reduction in the conversational cadence of the user, a set of fill-in messages to a communications device of another user in the social network that creates an appearance to the other user in the social network of no reduction in the conversational cadence, wherein providing the set of fill-in messages comprises retroactively automatically distributing a portion of the set of fill-in messages to the other user over at least an earlier preset time period corresponding to when the reduction in the conversational cadence occurs, wherein the portion of the set of fill-in messages are retroactively automatically distributed over the earlier preset time period that corresponds to the reduction in conversational cadence of the user by automatically predating the set of fill-in messages by the processor to correlate with the conversational cadence associated with the user and each message of the portion of the set of fill-in messages indicating a time separation that correlates to the conversational cadence associated with the user; and
identifying the fill-in messages as being provided by a system on behalf of the user by including an indication or obvious notification in the fill-in messages that the fill-in messages are provided by a system or machine and not the actual user.

US Pat. No. 10,216,717

ACTIONABLE EMAIL DOCUMENTS

Microsoft Technology Lice...

1. A storage media comprising instructions that, when executed, cause a computing device to modify a spreadsheet document, comprising:
sending an email document comprising one or more table-to-email linkage identifiers to a recipient, wherein the email document is operable to collect data from the recipient as collected data using one or more data fields associated with the one or more table-to-email linkage identifiers, wherein the email document further comprises one or more table-to-email linkage identifiers for automatically mapping the collected data to the spreadsheet document;
receiving the collected data from the recipient;
automatically mapping the collected data to at least one field of the spreadsheet document using the one or more table-to-email linkage identifiers, wherein the at least one field is associated with a syntactic constraint; and
automatically inserting the collected data into the spreadsheet document based on the mapping, wherein the collected data is validated prior to insertion into the at least one field of the spreadsheet document using one or more syntactic checks to verify that the collected data is the expected data type for the at least one field.
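A minimal sketch of the mapping-and-validation steps, assuming a linkage identifier is a key into a field map and the syntactic constraint is an expected Python type; the identifier format and the type table are invented for illustration.

```python
def validate_and_insert(sheet, mappings, collected):
    """Map collected email data to spreadsheet fields via linkage
    identifiers, running a syntactic check (expected data type) on each
    value before insertion, as the claim's validation step requires."""
    expected = {"amount": float, "quantity": int, "name": str}  # per-field constraint (assumed)
    for linkage_id, value in collected.items():
        field = mappings[linkage_id]  # table-to-email linkage lookup
        if not isinstance(value, expected[field]):
            raise TypeError(f"{field}: expected {expected[field].__name__}")
        sheet[field] = value          # insert only after validation passes
    return sheet
```

Rejecting a value whose type fails the check, rather than coercing it, keeps bad data out of the spreadsheet entirely.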

US Pat. No. 10,216,716

METHOD AND SYSTEM FOR ELECTRONIC RESOURCE ANNOTATION INCLUDING PROPOSING TAGS

BRITISH TELECOMMUNICATION...

1. A method of electronic resource annotation comprising operating a computer system to:
arrange a plurality of tags applied by a plurality of users into at least two groups of tags favored by respective groups of users;
store the arrangement of the plurality of tags into the at least two groups of tags favored by respective groups of users;
store a tagging history for a user which aggregates tags used by said user in tagging a plurality of electronic resources;
establish a degree to which each of the groups of tags favored by respective groups of users is represented in the tags included in the user's tagging history by comparing said user's tagging history with each of said plurality of groups of tags favored by respective groups of users to thereby provide a plurality of comparisons;
based on the comparisons, identify one or more of said groups of tags favored by respective groups of users as being under-represented in the user's tagging history;
based on the identification, propose tags from said identified under-represented group or groups of tags to said user as said user applies tags to a resource; and
as a result of selection by the user of at least one of the proposed tags from said identified under-represented group or groups of tags, update the respective degrees to which each of the groups of tags favored by respective groups of users is represented in the tags included in the user's tagging history toward respective target values.
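The comparison and proposal steps might look like the following sketch, where a tagging history is a tag-to-count mapping and the 0.25 representation floor stands in for the claim's target values; both the floor and the group layout are assumptions.

```python
def group_representation(history, tag_groups):
    """For each favored tag group, the fraction of the user's tagging
    history (a tag -> use-count mapping) that falls inside the group."""
    total = sum(history.values()) or 1
    return {name: sum(n for t, n in history.items() if t in tags) / total
            for name, tags in tag_groups.items()}

def propose_tags(history, tag_groups, floor=0.25):
    """Propose tags from groups whose representation falls below the
    floor (i.e. groups under-represented in the history), excluding
    tags the user already uses."""
    rep = group_representation(history, tag_groups)
    proposals = []
    for name, tags in tag_groups.items():
        if rep[name] < floor:
            proposals.extend(sorted(t for t in tags if t not in history))
    return proposals
```

Selecting a proposed tag increments its count in the history, which nudges that group's representation toward the target on the next comparison.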

US Pat. No. 10,216,715

METHOD AND SYSTEM FOR SUGGESTING REVISIONS TO AN ELECTRONIC DOCUMENT

BLACKBOILER LLC, Arlingt...

1. A method for suggesting revisions to a document-under-analysis (“DUA”) from a seed database, the seed database comprising a plurality of original texts each respectively associated with one of a plurality of final texts, the method for suggesting revisions comprising:
tokenizing the DUA into a plurality of statements-under-analysis (“SUAs”);
selecting a first SUA of the plurality of SUAs;
generating a first similarity score for each of the plurality of the original texts, the similarity score representing a degree of similarity between the first SUA and each of the original texts, respectively;
selecting a first candidate original text of the plurality of the original texts;
aligning the first SUA with the first candidate original text according to a first alignment;
aligning the first candidate original text with a first candidate final text associated with the first candidate original text according to the first alignment;
determining a first set of one or more edit operations that convert the first candidate original text to the first candidate final text according to the first alignment; and
creating a first edited SUA (first “ESUA”) by applying to the SUA the determined first set of one or more edit operations according to the first alignment.
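A rough stand-in for the claimed pipeline, using Python's difflib both for the similarity score and for deriving the edit operations, and assuming the SUA aligns position-for-position with the chosen original text; the claim's alignment step is more general than this.

```python
import difflib

def suggest_revision(sua, seed_pairs):
    """Pick the seed original text most similar to the SUA, compute the
    edit operations converting that original to its final text, and
    apply those operations to the SUA to form the edited SUA (ESUA)."""
    # similarity score for each original text; keep the best pair
    best_orig, best_final = max(
        seed_pairs,
        key=lambda p: difflib.SequenceMatcher(None, sua, p[0]).ratio())
    # edit operations converting original -> final
    ops = difflib.SequenceMatcher(None, best_orig, best_final).get_opcodes()
    out = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == "equal":
            out.append(sua[i1:i2])         # keep the aligned SUA span
        elif tag in ("replace", "insert"):
            out.append(best_final[j1:j2])  # splice in the final text
        # 'delete': drop the span entirely
    return "".join(out)
```

With a seed pair that rewrites "30 days" to "60 days", the same substitution is carried over to a matching statement-under-analysis.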

US Pat. No. 10,216,714

TEXT CHARACTER AND FORMATTING CHANGES IN COLLABORATIVE CONTEXTS

APPLE INC., Cupertino, C...

1. A processor-implemented method for processing collaborative data inputs, comprising:
on a local electronic device, receiving an initial input to trigger generation of a forward action, a detail action, and an inverse detail action generated by a remote electronic device, wherein the detail action comprises one or more steps for implementing the forward action on the local electronic device and the inverse detail action comprises an opposite action of the detail action;
updating a first field of a data structure with the forward action;
updating a second field of the data structure with the detail action;
changing a local version of a collaborative document by executing the detail action;
updating a third field of the data structure with the inverse detail action;
updating the first field of the data structure based on the inverse detail action; and
updating a fourth field of the data structure with the forward action.
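One way to picture the four-field data structure is the sketch below, which records a text insertion; the field names and the string encoding of actions are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class CollabAction:
    """Mirror of the claim's data structure: a forward action, the
    detail steps that implement it locally, the inverse detail action
    that undoes it, and a copy of the forward action for redo."""
    forward: str = ""          # first field
    detail: str = ""           # second field
    inverse_detail: str = ""   # third field
    redo_forward: str = ""     # fourth field

def record_input(doc, insert_text):
    """Apply a text insertion to a local document copy, filling the
    four fields in the order the claim recites them."""
    action = CollabAction()
    action.forward = f"insert {insert_text!r}"
    action.detail = f"append {insert_text!r} at end"
    new_doc = doc + insert_text  # change the local version by executing the detail action
    action.inverse_detail = f"delete last {len(insert_text)} chars"
    action.redo_forward = action.forward
    return new_doc, action
```

Keeping both the forward and inverse detail actions in one record is what lets a collaborative editor replay or roll back the same change on other devices.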

US Pat. No. 10,216,713

GENERATING DOCUMENTS USING TEMPLATES

MICROSOFT TECHNOLOGY LICE...

1. A method performed by a computing device, the method comprising:
detecting an input that is associated with a user;
based on the input, selecting a template associated with a first computing program;
based on a programmatic search of the template, identifying a scripting language node in the template that corresponds to a document data field and includes a programmatic script defining a data retrieval operation associated with the document data field;
based on execution of the programmatic script to perform the data retrieval operation, retrieving data from a data store associated with a second computing program that is different than the first computing program; and
generating, by the first computing program, a document according to the template, the document including the document data field with the retrieved data.

US Pat. No. 10,216,712

WEB PAGE DISPLAY METHOD AND DEVICE

UC Mobile Limited, Beiji...

1. A web page display method, comprising:
determining a reference region in a display region of a first web page, the first web page being a web page displayed in a first display state of a display screen, wherein the area of the reference region is less than or equal to a preset threshold;
determining a first non-full-screen web page element located in the reference region in the first web page;
according to coordinates of the first non-full-screen web page element in the first web page and in a second web page with respect to edges of the display screen, calculating a moving displacement of the first non-full-screen web page element in the second web page, wherein the coordinates of the first non-full-screen web page element in the second web page are coordinates of the first non-full-screen web page element displayed in the second web page in a second display state, obtained after the display screen is switched to the second display state from the first display state;
according to the moving displacement of the first non-full-screen web page element in the second web page, moving web page elements in the second web page; and
displaying the second web page after the web page elements are moved.
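The displacement calculation at the heart of this claim reduces to a coordinate difference for the anchor element, applied to the remaining elements; coordinates here are assumed to be (x, y) pixel pairs relative to the screen edges.

```python
def reflow(anchor_first, anchor_second, elements):
    """Moving displacement of the non-full-screen anchor element between
    the first and second display states, applied to the other page
    elements before the second web page is displayed."""
    dx = anchor_second[0] - anchor_first[0]
    dy = anchor_second[1] - anchor_first[1]
    return [(x + dx, y + dy) for x, y in elements]
```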

US Pat. No. 10,216,711

INFORMATION COLLECTION METHOD AND APPARATUS

Xiaomi Inc., Beijing (CN...

1. A method for collecting information, comprising:
receiving, at a terminal device having a user account in a social group that is established by a communication service, a trigger message that is sent by a specific user account of the social group to members of the social group, the trigger message comprising a prompt text and a jump instruction to a page for collecting specific information;
generating a text link associated with the jump instruction based on the prompt text;
displaying the text link on an interface page for the communication service;
detecting a trigger event with respect to the text link;
executing the jump instruction to display the page for collecting the specific information in response to detecting the trigger event;
providing, via the page, a first option to access stored information within a data storage at the terminal device, wherein the stored information includes the specific information;
receiving a user selection of a subset of the stored information from within the data storage;
transmitting the subset of the stored information to a server that is configured to compile a plurality of information including the subset of the stored information and provide the compiled plurality of information for access to each one of a plurality of user accounts that transmitted to the server the respective specific information of the plurality of information,
wherein the compiled plurality of information including the subset of the stored information further includes a plurality of subsets of stored information acquired from the plurality of user accounts,
wherein contents of the plurality of subsets of stored information are accessible via a target page that is configured to display the plurality of compiled information in a form of a plurality of links to a plurality of subpages that are each associated with a corresponding one of the plurality of subsets of stored information, and
wherein the contents of the plurality of subsets of stored information are viewable by each one of the plurality of user accounts via the plurality of subpages that are accessible via the target page.

US Pat. No. 10,216,710

COMBINING AND DISPLAYING MULTIPLE DOCUMENT AREAS

International Business Ma...

1. A device for combining and displaying a plurality of areas of a document, the device comprising computer hardware components configured to perform a method comprising:
storing, in response to a user marking an area of the document, information on the marked area, wherein the area of the document is marked by the user selecting a start point and an end point using a pointer of a pointing device;
displaying an icon representing the marked area, wherein a shape of the icon is determined according to the content of the area, and wherein the icon continues to be displayed when the area is not within a window displaying the document, and wherein a connecting line connects the icon to an upper side or lower side of the window when the area is not within the window;
conducting the storing operation and the displaying operation for a different area of the document; and
forming, in response to an operation by the user for arranging two or more icons to be in contact with each other, a joined icon by joining the icons together, wherein the joined icon is created according to the relative orientation and position of the two or more icons as arranged by the user;
combining marked areas represented by the two or more respective icons, according to a state of contact; and
changing the joined icon and changing a combining state of the marked areas in response to an operation by the user for rotating the joined icon.

US Pat. No. 10,216,709

UNIFIED MESSAGING PLATFORM AND INTERFACE FOR PROVIDING INLINE REPLIES

Microsoft Technology Lice...

1. A system comprising:
at least one hardware processing unit; and
at least one memory storing computer executable instructions that, when executed by the at least one processing unit, cause the system to:
receive a message, wherein the message includes first text content;
scan the message to identify a structure of the first text content, the structure identifying at least one element of the first text content, the one element having a location within the first text content;
receive an indication of a selection of the one element identified by the structure within the first text content of the message;
identify the location of the indication within the structure of the first text content based on the selection of the one element;
launch a reply interface for receiving reply text content via a new message input field at the identified location within the first text content based on the selection;
receive the reply text content into the reply interface at the location via the new message input field; and
send a reply to the message, wherein a reply comprises the reply text content integrated into the message at the location.

US Pat. No. 10,216,708

PAGINATED VIEWPORT NAVIGATION OVER A FIXED DOCUMENT LAYOUT

Adobe Systems Incorporate...

1. A computer-implemented method comprising:
defining a logical flow of multiple content regions in a web page according to a hierarchy that organizes the content regions in the web page, the defining comprising associating authored indications of viewability with respective ones of the multiple content regions, wherein:
the hierarchy organizes a first content region and a second content region of the web page in a first level of the hierarchy,
the hierarchy further organizes content sub-regions of the first content region in a deeper level of the hierarchy, the deeper level starting from the first content region in the first level,
the logical flow specifies a navigation from the first content region to the second content region based on a first association between the first level of the hierarchy and a first navigation input from a computing device, and
the logical flow further specifies a navigation between the content sub-regions based on a second association between the deeper level of the hierarchy and a second navigation input from the computing device;
initiating, in response to a request for the web page, display of the web page in a window on a display screen of the computing device, wherein the first content region and the second content region are presented in a single view within the window based on the first level of the hierarchy, and wherein the content sub-regions are presented in the single view within the first content region based on the deeper level of the hierarchy;
initiating, in response to a navigation input, display of the first content region in the window, wherein the first content region is resized to fit the window according to a first zoom level; and
in response to receiving an additional navigation input to display a next content region following the display of the first content region:
identifying the second content region and an authored indication of viewability associated therewith, the second content region identified by at least determining that (i) the additional navigation input matches the first navigation input associated with the first level of the hierarchy and that (ii) the logical flow specifies a display of the second region following the display of the first content region based on the first association;
analyzing parameters of the second content region, parameters of the authored indication of viewability, and parameters of the window on the display screen, the analyzing including determining a second zoom level for displaying the second content region in the window by resizing the second content region to fit the window, wherein the second zoom level is different from the first zoom level; and
initiating the display of the second content region in the window according to the second zoom level.
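The zoom-level determination in the analyzing step comes down to fitting a region's dimensions into the window; a minimal sketch, assuming the parameters are pixel dimensions and that fit-to-window means the limiting dimension wins.

```python
def zoom_to_fit(region_w, region_h, window_w, window_h):
    """Zoom level that resizes a content region to fit the window while
    preserving its aspect ratio: take the smaller of the two scale
    factors so neither dimension overflows the window."""
    return min(window_w / region_w, window_h / region_h)
```

Two regions of different sizes naturally yield different zoom levels, which is the claim's point about the second zoom level differing from the first.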

US Pat. No. 10,216,707

METHODS AND SYSTEMS FOR CALCULATING UNCERTAINTY

Clarkson University, Pot...

1. A computing system for calculating uncertainty without a fatal division-by-zero error or a square-root-of-an-imaginary-number error, the computing system comprising:
a memory including computer executable instructions stored therein that are configured to calculate a number result and an associated resultant error;
a user interface device configured for inputting a numeric value and an error value associated with said numeric value;
a processor in communication with said memory and said user interface device, wherein said processor utilizes said computer executable instructions to perform the steps:
a) converting said numeric value and said error value into a trans-imaginary input dual, wherein said trans-imaginary input dual is a hybrid of numeric and geometric information having a real number input component representing said numeric value and a complex number input component representing said error value;
b) performing a dual calculation operation using said trans-imaginary input dual to generate a trans-imaginary output dual having a real number output component representing said number result and a complex number output component representing said resultant error; and
c) rendering said trans-imaginary output dual, wherein said real number output component generates said number result as a real number and said complex number output component generates said associated resultant error as a real number error range,
wherein steps a)-c) avoid the fatal division-by-zero error or the square-root-of-an-imaginary-number error through first converting said numeric value and said error value to said trans-imaginary input dual and then rendering said output dual to generate said number result and said associated resultant error.
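Pairing a numeric value with its error and propagating both through each operation can be illustrated with standard first-order dual-number arithmetic; this is offered as an analogy to the claim's "trans-imaginary input dual", not the patented encoding itself.

```python
class Dual:
    """First-order dual number (value, error coefficient) propagating an
    error bar through arithmetic via f(a + b*eps) = f(a) + f'(a)*b*eps."""
    def __init__(self, value, err=0.0):
        self.value, self.err = value, err
    def __add__(self, o):
        return Dual(self.value + o.value, self.err + o.err)
    def __mul__(self, o):
        # product rule: d(xy) = x dy + y dx
        return Dual(self.value * o.value,
                    self.value * o.err + o.value * self.err)
    def __truediv__(self, o):
        # quotient rule, written to reuse the computed quotient v
        v = self.value / o.value
        return Dual(v, (self.err - v * o.err) / o.value)
```

Rendering the result means reading off the value component as the number result and the error component as the resultant error range.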

US Pat. No. 10,216,705

PERMUTING IN A MATRIX-VECTOR PROCESSOR

Google LLC, Mountain Vie...

1. A circuit comprising:
an input register configured to receive an input vector of input elements;
a control register configured to receive a control vector of control elements, wherein each control element of the control vector corresponds to a respective input element of the input vector, and wherein each control element of the control vector specifies a permutation of a corresponding input element of the input vector, the permutation specifying a number of positions to rotate the corresponding input element of the input vector; and
a permute execution circuit configured to generate an output vector of output elements corresponding to a permutation of the input vector, wherein generating each output element of the output vector comprises:
accessing, at a particular position of the input register, a particular input element of the input vector;
accessing, at the control register, a particular control element of the control vector corresponding to the particular input element of the input vector;
selecting a particular position of the output vector based on (i) the particular position of the particular input element in the input register and (ii) a number of positions to rotate the particular element of the input vector specified by the particular control element of the control vector; and
outputting the particular input element of the input vector as an output element at the particular position of the output vector.
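The permute execution can be modeled directly: each control element gives a rotation count for its corresponding input element, and the output position is the input position plus that rotation, modulo the vector length. A software sketch of the claimed circuit behavior:

```python
def permute(input_vec, control_vec):
    """For each input element at position i, the control element
    specifies a number of positions to rotate; the element is written
    to output position (i + rotation) mod n."""
    n = len(input_vec)
    assert len(control_vec) == n
    out = [None] * n
    for i, (x, rot) in enumerate(zip(input_vec, control_vec)):
        out[(i + rot) % n] = x
    return out
```

A uniform control vector gives a plain rotation of the whole input vector; per-element control values give arbitrary rotations element by element.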

US Pat. No. 10,216,704

NATIVE TENSOR PROCESSOR, AND SYSTEMS USING NATIVE TENSOR PROCESSORS

NOVUMIND LIMITED, Grand ...

1. A computer system comprising:
a processor subsystem having at least one processor; and
a native tensor subsystem having at least one native tensor processor implemented on a single integrated circuit, the native tensor processor comprising a contraction engine that calculates a contraction of tensors TX and TY by executing calculations that effect a matrix multiplication X×Y=Z, where X is an unfolded matrix for tensor TX and Y is an unfolded matrix for tensor TY, the contraction engine comprising:
a plurality of outer product units (OPUs) that calculate matrix multiplications by a sum of outer products;
a distribution section coupled to the plurality of OPUs, the distribution section partitioning the X×Y matrix multiplication with respect to a contraction index k into a plurality of Xk×Yk outer products and directing the Xk×Yk outer products to the OPUs; and
a collection section coupled to the plurality of OPUs, the collection section summing the outer products calculated by the OPUs into a product for the matrix multiplication.
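The distribution/collection structure rests on the identity that a matrix product is the sum, over the contraction index k, of outer products of X's k-th column with Y's k-th row. A pure-Python sketch of that decomposition:

```python
def outer(col, row):
    """Outer product of a column vector of X and a row vector of Y."""
    return [[c * r for r in row] for c in col]

def matmul_by_outer_products(X, Y):
    """Compute X x Y = Z as a sum of outer products: distribute the
    multiplication over the contraction index k (one outer product per
    OPU in the claim), then collect by summing the partial products."""
    m, K, n = len(X), len(Y), len(Y[0])
    Z = [[0] * n for _ in range(m)]
    for k in range(K):                    # distribution section: partition over k
        Xk = [X[i][k] for i in range(m)]  # k-th column of X
        Yk = Y[k]                         # k-th row of Y
        P = outer(Xk, Yk)
        for i in range(m):                # collection section: sum the outer products
            for j in range(n):
                Z[i][j] += P[i][j]
    return Z
```

Because each outer product is independent, the k-partitions can run in parallel across OPUs, which is what the distribution and collection sections exploit.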

US Pat. No. 10,216,703

ANALOG CO-PROCESSOR

Spero Devices, Inc., Act...

1. A co-processor circuit comprising:
at least one vector matrix multiplication (VMM) core configured to perform a VMM operation, each VMM core comprising:
at least one array of VMM circuits, each of the VMM circuits being configured to compute a respective product on T-bit subsets of an N-bit total for the VMM operation, each of the VMM circuits comprising:
a signal generator configured to generate a programming signal based on at least one coefficient for the VMM operation;
a memristor network having an array of analog memristor devices arranged in a crossbar configuration;
a read/write control circuit configured to selectively enable read and write operations at the memristor network;
a memristor control circuit configured to selectively enable a selection of the analog memristor devices, the memristor control circuit including a column switch multiplexor, a row switch multiplexor, and an address encoder;
a write circuit configured to set at least one resistance value within the network based on the programming signal, the write circuit including a voltage driver;
a read input circuit configured to apply at least one input signal to the memristor network, the input signal corresponding to a vector, the read input circuit including a voltage driver; and
a readout circuit configured to read at least one current value at the memristor network and generate an output signal based on the at least one current value;
a read circuit array to convert at least one input vector into an analog signal to be applied to the memristor network;
a write circuit array to convert at least one set signal, based on a multiplicative coefficient, to an analog set signal to be applied to the memristor network;
an ADC array to convert at least one VMM analog output from the memristor network into digital values;
a shift register array configured to format the digital values of the ADC array;
an adder array configured to add outputs from the memristor network arrays, each of the adders performing a subset of a VMM operation associated with the multiplicative coefficient; and
a combiner configured to combine the output signal of each of the adder arrays to generate a combined output signal, the output signal of each adder array representing one of the respective products, the combiner being configured to aggregate the respective products into a combined output representing a solution to the VMM operation at floating point precision.

US Pat. No. 10,216,702

MACHINE FOR DIGITAL IMPACT MATRIX DEVELOPMENT

Accenture Global Solution...

1. A machine comprising:
a processor, the processor configured to:
determine a set of organizational processes executed by an organization;
determine a set of digital technologies that is utilized by the organization; and
generate a first matrix of the set of organizational processes against the set of digital technologies, wherein the first matrix stores a plurality of impacts of individual ones of the digital technologies on individual ones of the set of organizational processes;
user interface circuitry coupled to the processor, the user interface circuitry configured to:
provide a user interface to assign a plurality of impact categorizations to the plurality of impacts of the first matrix; and
provide the plurality of impact categorizations to the processor;
the processor further configured to incorporate the plurality of impact categorizations into the first matrix; and
the user interface circuitry further configured to effect display of a graphical representation of the first matrix incorporating the plurality of impact categorizations.

US Pat. No. 10,216,701

IMAGE-BASED POINT-SPREAD-FUNCTION MODELLING IN TIME-OF-FLIGHT POSITRON-EMISSION-TOMOGRAPHY ITERATIVE LIST-MODE RECONSTRUCTION

The Regents of the Univer...

1. A method of performing time-of-flight (TOF) list-mode reconstruction of a positron-emission tomography (PET) image, the method comprising:
detecting gamma rays by a PET detector;
generating count data based on the detected gamma rays;
determining a TOF geometric projection matrix G including effects of object attenuation;
estimating an image-blurring matrix R in image space;
obtaining a diagonal matrix D that includes TOF-based normalization factors;
calculating a system matrix H as H=DGR; and
reconstructing the PET image from the count data using the calculated system matrix.
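The factorization H = D G R composes three effects in sequence: image-space blurring (R), geometric projection with attenuation (G), and TOF normalization (diagonal D). A toy sketch with 2×2 matrices, purely to show the composition order; real PET system matrices are vastly larger and sparse.

```python
def matmul(A, B):
    """Plain dense matrix product."""
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def system_matrix(D, G, R):
    """Claimed factorization H = D G R: normalization, then projection,
    then image-space blurring, applied right to left to an image."""
    return matmul(matmul(D, G), R)
```

Because R acts in image space, the point-spread blurring is applied before projection, which is the image-based PSF modelling the title refers to.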

US Pat. No. 10,216,700

METHOD AND APPARATUS FOR PULSE WIDTH MODULATION

Tempo Semiconductor, Inc....

1. A ternary pulse width modulation (“PWM”) method adapted for use with a PWM signal chain, the method comprising using the PWM signal chain to perform steps of:
[1.1] receiving a first input sample during a reference frame;
[1.2] developing a first compensated composite waveform as a function of at least the first input sample;
[1.3] receiving a second input sample during a current frame; and
[1.4] developing a second compensated composite waveform as a function of the second input sample and a selected one of the first compensated composite waveform and a boundary of the reference frame that is not a boundary of the current frame.

US Pat. No. 10,216,699

METHOD AND SYSTEM FOR SETTING PARAMETERS OF A DISCRETE OPTIMIZATION PROBLEM EMBEDDED TO AN OPTIMIZATION SOLVER AND SOLVING THE EMBEDDED DISCRETE OPTIMIZATION PROBLEM

1QB Information Technolog...

1. A method for setting parameters of a discrete optimization problem embedded to an optimization solver hardware and solving the embedded discrete optimization problem using the optimization solver hardware, the method comprising:use of a processing unit for:
receiving an indication of a discrete optimization problem and a corresponding embedded graph G_emb into an optimization solver hardware graph;
converting the discrete optimization problem to a K-spin problem, wherein the K-spin problem is defined as:
minimize over s: sum_j h_j s_j + sum_{k=2}^{K} sum_{j1<j2< . . . <jk} J_{j1 j2 . . . jk} s_{j1} s_{j2} . . . s_{jk}
wherein parameter K is the order of the discrete optimization problem, parameter J is a coupling value between two vertices, parameter h is a local field value, s_j is the jth variable of the K-spin problem, which can take a value from {−1, +1}, J_{j1 j2 . . . jk} denotes the magnitude of a kth order interaction and s_{j1} s_{j2} . . . s_{jk} represents a kth order interaction between variables j1, j2, . . . and jk;
for each variable j of the K-spin problem associated with a corresponding node:
computing a parameter Cj associated with the local field and the coupling values of each adjacent edge to the corresponding node,
evaluating if a variable selection criterion is met with the computed parameter Cj,
if the variable selection criterion is met with the computed parameter Cj:
setting a value of a selected variable j to a given fixed value,
adding the selected variable j with the given fixed value to a partial solution list, and
removing the selected variable from the K-spin problem and from the corresponding embedded graph to thereby provide a reduced K-spin problem and a corresponding reduced embedded graph, the corresponding reduced embedded graph comprising a plurality of edges and vertices;
setting the parameter J of each edge of the reduced embedded graph corresponding to an existing edge in the corresponding reduced K-spin problem by distributing the parameter J in the reduced K-spin problem according to a defined distributing strategy;
setting the parameter h of each given vertex of the reduced embedded graph by distributing the corresponding parameter h in the reduced K-spin problem using a linear combination of a corresponding parameter C for the given vertex and the parameter J of each edge of the corresponding variable in the reduced K-spin problem adjacent to the given vertex;
setting the parameter J of each edge of the reduced embedded graph connecting two vertices representing the same corresponding variable in the reduced K-spin problem using a distribution of the parameter C of the corresponding variable in the reduced K-spin problem calculated previously;
solving the reduced K-spin problem with the optimization solver hardware using the corresponding reduced embedded graph and its corresponding h and J parameters to provide at least one solution;
combining the at least one solution obtained from the optimization solver hardware with the partial solution list to thereby provide a solution to the discrete optimization problem; and
wherein the optimization solver hardware is a quantum annealer.

US Pat. No. 10,216,698

ANALYSIS DEVICE INCLUDING A MEMS AND/OR NEMS NETWORK

California Institute of T...

1. A device for analyzing a fluid, comprising:only one sensor layer including a plurality of sensors of MEMS or NEMS to generate information associated with a chemical composition of the fluid, each sensor of the plurality of sensors including at least one mobile component that reacts to one or more characteristic stimuli of the fluid, the mobile component being suspended relatively to a fixed component, and each mobile component configured to move independently from mobile components of other sensors of the plurality of sensors;
a processing circuitry layer including processing circuitry configured to process the information transmitted by the sensors, the processing circuitry being electrically connected to the sensors; and
a distribution layer positioned on the only one sensor layer on a side of a face including the sensors, the distribution layer including a distributor to spatially and temporally distribute stimulus or stimuli to the sensors, the distributor comprising a plurality of channels to bring onto each sensor or group of sensors independently the stimulus or the stimuli simultaneously or quasi simultaneously or one channel to bring onto each sensor or group of sensors the stimulus or the stimuli successively, the only one sensor layer, the processing circuitry layer, and the distribution layer being arranged in a stacked fashion with the only one sensor layer being between the distribution layer and the processing circuitry layer.

US Pat. No. 10,216,697

MANAGEMENT SYSTEM FOR SKIN CONDITION MEASUREMENT ANALYSIS INFORMATION AND MANAGEMENT METHOD FOR SKIN CONDITION MEASUREMENT ANALYSIS INFORMATION

MAXELL HOLDINGS, LTD., K...

1. A management system for managing skin condition measurement analysis information, comprising:a user client used by a user of a skin condition measuring device, connected to the skin condition measuring device so as to be able to transmit data to and receive data from the skin condition measuring device, and also connected to a network to transmit and receive data;
a data management server configured to transmit data to and receive data from the user client via the network, to store information related to the user, and to provide information to the user as a primary user of the information and a secondary user of the information who is different from the primary user;
an analysis result outputting unit configured to receive analysis result data of a skin condition obtained by analyzing measurement data measured by the skin measuring device, and to output the analysis result data to be displayable on the user client; and
a secondary user client used by the secondary user, configured to transmit data to and receive data from the data management server, and to be provided with information related to the user,
wherein the user client includes:
a user data transmitting unit configured to transmit user data to the data management server for each user in correlation to a unique and non-duplicative client ID, the user data being input by the user and including personal information leading to identification of the individual user and accompanying information excluding the personal information of the user;
a measurement data transmitting unit configured to transmit the input measurement data correlated to the client ID to the data management server when the measurement data of the user measured by the skin condition measuring device is received from the skin condition measuring device; and
a display unit configured to display the analysis result data when the analysis result data with respect to the measurement data is received from the analysis result outputting unit,
wherein the data management server includes:
(a) a database in which:
i) the personal information received from each of a plurality of user clients is registered, correlated to the client ID;
ii) the accompanying information received from each of a plurality of user clients is registered in correlation to the client ID, and acquisition time of the accompanying information is also registered; and
iii) the measurement data received from each of a plurality of user clients is registered in correlation to the client ID, and acquisition time of the measurement data is also registered,
(b) a data providing unit which, when acquisition of the data registered in the database is requested by the secondary user client, can extract a group of data consisting of the measurement data and the acquisition time of the measurement data, and/or a group of data consisting of the accompanying information and the acquisition time of the accompanying information, all designated by the secondary user client from the database, and transmit the extracted data to the secondary user client, and
wherein the secondary user client includes:
a data requesting unit configured to transmit the secondary user ID set for each secondary user, and to request data registered in the database from the data management server;
a receiving/storing unit configured to receive and store a group of data consisting of the accompanying information and the acquisition time of the accompanying information, and/or a group of data consisting of measurement data and the acquisition time of the measurement data, all in accordance with the request of the data requesting unit.

US Pat. No. 10,216,696

DATA PROCESSING SYSTEM FOR ADAPTIVE VISUALIZATION OF FACETED SEARCH RESULTS

ONTOFORCE NV, Ghent (BE)...

1. A data processing system for adaptive visualisation of faceted search results comprising:an input configured to receive a search query;
a retriever connected to said input and configured to receive from said input said search query, and retrieve a plurality of search results in function of said search query, each of said search results comprising a plurality of search result properties of which at least one of the search result properties is a search result facet;
a data type determiner connected to said retriever and configured to receive one or more of said search result facets from said retriever and determine the data type of one or more of said search result facets;
a visualisation type associator connected to said data type determiner and configured to receive said data type from said data type determiner, and associate a visualisation type with said data type in function of a predetermined visualisation correlation between said data type and said visualisation type;
a visualizer connected to said visualisation type associator and said retriever and configured to receive said one or more search result facets from said retriever and said visualisation types from said visualisation type associator, present said one or more search result facets by a visualisation in function of said visualisation types to one or more users, and present a visualisation modifier user interface to said one or more users configured to request a visualisation type modification by said one or more users of the visualisation type of said presented visualisation;
a modification aggregator connected to said visualizer and configured to receive said visualisation type modifications from said visualizer, and aggregate said visualisation type modifications;
a correlation adaptor connected to said modification aggregator and said visualisation type associator, and being configured to exchange said aggregated visualisation type modifications with said modification aggregator and said predetermined visualisation correlation with said visualisation type associator, and adapt said predetermined visualisation correlation between said data types of said search result facets and said visualisation types in function of said aggregated visualisation type modifications.
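The adaptation loop of this claim — a predetermined data-type-to-visualisation correlation that shifts once enough users' modifications are aggregated — can be sketched briefly. The type names, the vote threshold, and the majority-style adaptation rule are illustrative assumptions, not the patent's exact scheme.

```python
from collections import Counter

# Predetermined visualisation correlation: data type -> visualisation type.
correlation = {"date": "timeline", "number": "histogram", "category": "pie"}
modifications = Counter()            # aggregated visualisation-type changes

def visualisation_for(data_type):
    """Visualisation type associator: look up the current correlation."""
    return correlation[data_type]

def record_modification(data_type, new_vis, threshold=3):
    """Modification aggregator + correlation adaptor in one step."""
    modifications[(data_type, new_vis)] += 1
    # Adapt the predetermined correlation once enough users agree.
    if modifications[(data_type, new_vis)] >= threshold:
        correlation[data_type] = new_vis

for _ in range(3):                   # three users switch numbers to boxplots
    record_modification("number", "boxplot")
```

After the threshold is reached, new searches render numeric facets with the user-preferred visualisation, while unmodified data types keep their original mapping.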

US Pat. No. 10,216,695

DATABASE SYSTEM FOR TIME SERIES DATA STORAGE, PROCESSING, AND ANALYSIS

PALANTIR TECHNOLOGIES INC...

1. A system comprising:a communications interface configured to receive time series data including measurements captured by one or more data measurement sensors;
one or more storage devices configured to store:
a first database storing a plurality of sets of time series data including at least a first set of time series data received via the communications interface stored on a first storage device and a second set of time series data received via the communications interface stored on a second storage device;
a second database storing metadata related to the plurality of sets of time series data, the metadata including information for locating and accessing particular sets of time series data from the first database and also including at least one of:
indications of types of measurements included in the respective sets of time series data,
indications of locations of data measurement sensors used to generate the measurements included in the sets of time series data,
indications of properties of devices associated with the data measurement sensors, or
timing information indicating when the measurements included in the respective sets of time series data were generated; and
one or more processors configured to:
receive an indication of metadata filter criteria, wherein the metadata filter criteria includes at least one of: a type of measurement, a location of a data measurement sensor, a property of a device, or timing information;
in response to receiving the indication of the metadata filter criteria, access the second database to identify, based on the stored metadata, one or more sets of time series data that satisfy the metadata filter criteria;
transmit an indication of a quantity of sets of time series data included in the one or more sets of time series data satisfying the metadata filter criteria;
receive an instruction including an indication of a computation to perform on the one or more sets of time series data in the first database that satisfy the metadata filter criteria;
access the second database to determine, from the metadata, information for locating and accessing the one or more sets of time series data from the first database;
locate and access, from the first database and using the information for locating and accessing the one or more sets of time series data from the first database, at least a portion of the one or more sets of time series data; and
perform the computation using at least the portion of the one or more sets of time series data accessed from the first database;
whereby the one or more sets of time series data that satisfy the metadata filter criteria are identified via the metadata stored in the second database without accessing each of the plurality of sets of time series data from the first database.
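The two-database arrangement above — a metadata catalogue consulted first, so matching series are identified without scanning the time series store — can be sketched as follows. The dictionary layout, field names, and reduction function are assumptions for illustration.

```python
# "First database": series id -> stored samples (from measurement sensors).
data_store = {
    "s1": [1.0, 2.0, 3.0],
    "s2": [10.0, 20.0],
    "s3": [5.0],
}
# "Second database": metadata per series, including how to locate the data.
catalogue = [
    {"id": "s1", "type": "temperature", "location": "plant-A"},
    {"id": "s2", "type": "pressure", "location": "plant-A"},
    {"id": "s3", "type": "temperature", "location": "plant-B"},
]

def find_sets(**criteria):
    """Identify matching series via metadata only -- no data access yet."""
    return [m["id"] for m in catalogue
            if all(m.get(k) == v for k, v in criteria.items())]

def compute(ids, fn):
    """Locate, access and reduce only the selected series."""
    return {i: fn(data_store[i]) for i in ids}

ids = find_sets(type="temperature")   # metadata filter criteria
result = compute(ids, max)            # computation on matching sets only
```

Only the series that pass the metadata filter are ever touched in the data store, which is the access pattern the "whereby" clause claims.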

US Pat. No. 10,216,694

GENERIC SCHEDULING

GOOGLE LLC, Mountain Vie...

1. A method for setting a schedule of a crawl of a content from a social network, the method comprising:parsing, by a processor, the content from the social network into a first portion and a second portion, the first portion categorized as a post category, the second portion being categorized as an engagement category, the post category being associated with a content in a post to the social network, the engagement category being associated with a content produced in response to the post;
causing, by the processor, a first thread to obtain a first endpoint object from a data source object, the data source object related to the post to the social network;
causing, by the processor, a second thread to obtain a second endpoint object from an active engagement object, the active engagement object related to the content produced in response to the post;
determining, by the processor, whether the social network is a first type of social network or a second type of social network;
updating, by the processor and in response to a determination that the social network is the first type of social network, a data source endpoint record for the first thread with a next fetch time;
updating, by the processor and in response to the determination that the social network is the first type of social network, an active engagement table for the second thread with a value that causes the processor to refrain from fetching the content produced in response to the post until the processor updates the next fetch time in response to a determination that a new content produced in response to the post is available;
rescheduling, by the processor and in response to a determination that the social network is the second type of social network, the crawl of the content from the social network in accordance with a check rate; and
setting, by the processor, the schedule of the crawl of the content from the social network according to a type of the social network, the type being the first type or the second type.
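The per-network-type scheduling above can be condensed into a small sketch. The record/table layout and the infinity sentinel for "refrain from fetching until new content appears" are assumptions; the claim only fixes the behaviour per type.

```python
# Illustrative crawl scheduler for the two claimed social-network types.
REFRAIN = float("inf")                 # don't fetch until new engagement appears

def schedule_crawl(network_type, now, check_rate=300.0, post_interval=600.0):
    data_source_endpoint = {}          # record for the first (post) thread
    active_engagement = {}             # table for the second (engagement) thread
    if network_type == "first":
        data_source_endpoint["next_fetch"] = now + post_interval
        active_engagement["next_fetch"] = REFRAIN
    else:                              # second type: reschedule by check rate
        data_source_endpoint["next_fetch"] = now + check_rate
        active_engagement["next_fetch"] = now + check_rate
    return data_source_endpoint, active_engagement

ds, ae = schedule_crawl("first", now=1000.0)
ds2, ae2 = schedule_crawl("second", now=1000.0)
```

For the first type, the engagement thread stays idle until the scheduler later replaces the sentinel with a real next-fetch time, mirroring the "refrain from fetching" clause.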

US Pat. No. 10,216,693

COMPUTER WITH HYBRID VON-NEUMANN/DATAFLOW EXECUTION ARCHITECTURE

Wisconsin Alumni Research...

1. A computer with improved function comprising:a general computer processor providing:
(a) a memory interface for exchanging data and instructions with an electronic memory;
(b) an arithmetic logic unit receiving input data and instructions from the memory interface to process the same and to provide output data to the memory interface; and
(c) a program counter identifying instructions for execution by the arithmetic logic unit;
a dataflow computer processor providing:
(a) a memory interface for exchanging data and instructions with electronic memory;
(b) multiple functional units interconnected to receive input data from the memory interface or other functional units and provide output data to the memory interface or other functional units, including interconnections between functional units allowing conditional branches to either of two functional units, wherein the functional units execute in a sequence determined by the availability of data, and
(c) an interconnection control circuit controlling the interconnection of the multiple functional units to exchange data according to the dataflow description; and
a transfer interface operating to transfer the execution of an application program between the general computer processor and the dataflow computer processor:
(a) at a beginning of a set of instructions of the application program identified as executable on the dataflow computer processor, switching execution from the general computer processor to the dataflow computer processor and providing to the dataflow computer processor a dataflow description of the set of instructions; and
(b) at a completion of execution of the set of instructions by the dataflow computer processor, returning execution to the general computer processor.
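The dataflow half of this architecture executes functional units "in a sequence determined by the availability of data" rather than by a program counter. That firing rule can be illustrated in a few lines; the graph encoding below is an assumption for illustration only.

```python
# Minimal dataflow-firing sketch: a node executes as soon as all of its
# operands exist, regardless of the order nodes were written down in.
def run_dataflow(nodes, inputs):
    """nodes: {name: (fn, [operand names])}. Returns all produced values."""
    values = dict(inputs)
    pending = dict(nodes)
    while pending:
        fired = [n for n, (fn, deps) in pending.items()
                 if all(d in values for d in deps)]
        if not fired:
            raise RuntimeError("deadlock: no node can fire")
        for n in fired:
            fn, deps = pending.pop(n)
            values[n] = fn(*[values[d] for d in deps])
    return values

out = run_dataflow(
    {"sq": (lambda s: s * s, ["sum"]),       # listed first, fires second
     "sum": (lambda a, b: a + b, ["x", "y"])},
    {"x": 2, "y": 3},
)
```

Note that `sq` is listed before `sum` yet fires after it, because its operand only becomes available once `sum` has produced a value.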

US Pat. No. 10,216,692

MULTI-CORE PARALLEL PROCESSING SYSTEM

Massively Parallel Techno...

1. A multiprocessor system on a chip (MPSoC) for implementing parallel processing comprising:a plurality of cores, each comprising a system on a chip, wherein each of the cores functions as a node in a parallel processing system operating as a structured cascade; and
an on-chip switch fabric directly connected to each of the cores;
wherein the on-chip switch fabric is configurable for coordinated simultaneous direct communication between multiple pairs of the cores to form the structured cascade based upon position of each of the plurality of cores within the structured cascade, wherein, for each direct communication, data transfer from/to each core of the pair is directly coupled in time.

US Pat. No. 10,216,690

SINGLE-WIRE INTERFACE BUS TRANSCEIVER SYSTEM BASED ON I2C-BUS, AND ASSOCIATED METHOD FOR COMMUNICATION OF SINGLE-WIRE INTERFACE BUS

NXP B.V., Eindhoven (NL)...

1. A single-wire interface bus transceiver system comprising:an I2C master, a master transceiver, a signal wire, a slave transceiver and an I2C slave, wherein:
the master transceiver is adapted to encode master data SDA and master clock SCL received from the I2C master using Manchester code, generate a Manchester coded master single wire signal and transfer the Manchester encoded master data SDA and master clock SCL of the single wire signal to the slave transceiver through the signal wire;
the master transceiver is also adapted to decode Manchester-encoded slave signal received from the signal wire and transfer the decoded slave data to I2C master;
the slave transceiver is adapted to encode slave data received from I2C slave using Manchester code, generate slave single wire signal and transfer it to the master transceiver through the signal wire; and
the slave transceiver is also adapted to decode Manchester-encoded master signal received from the signal wire, generate the recovered master clock and transfer the decoded master data and recovered master clock to I2C slave.
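Manchester coding, as used by both transceivers above, carries a transition in every bit period, which is what lets the slave recover the master clock from the single wire. A minimal encode/decode sketch (assuming the IEEE 802.3 convention, 0 as high-then-low and 1 as low-then-high):

```python
# Manchester coding sketch; the SDA/SCL framing on top of this symbol stream
# is not modelled here.
def manchester_encode(bits):
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]   # every bit yields a mid-bit transition
    return out

def manchester_decode(symbols):
    bits = []
    for first, second in zip(symbols[::2], symbols[1::2]):
        if (first, second) == (0, 1):
            bits.append(1)
        elif (first, second) == (1, 0):
            bits.append(0)
        else:
            raise ValueError("invalid Manchester symbol pair")
    return bits

frame = [1, 0, 1, 1, 0]
wire = manchester_encode(frame)
```

Because each bit occupies exactly one transition-bearing symbol pair, the receiving transceiver can regenerate the clock edge per bit while decoding the data, matching the "recovered master clock" in the claim.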

US Pat. No. 10,216,689

COORDINATING MULTIPLE REAL-TIME FUNCTIONS OF A PERIPHERAL OVER A SYNCHRONOUS SERIAL BUS

Intel Corporation, Santa...

1. A system controller, comprising:a microcontroller to generate a first message, the first message to comprise a flag to indicate a type of a first action, and a payload including an action code for the first action, with a first deadline time when the first action is of a timing-critical type, or without a deadline time when the first action is of a timing-noncritical type, the first deadline time is expressed as a system time, and wherein when the first action is of a timing-critical type, the first action is to be performed based on the first deadline time by a peripheral device coupled with the microcontroller via a serial bus; and
a time protocol engine coupled to the microcontroller to convert the first deadline time from the system time to a first number of bus-clock cycles.
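The time protocol engine's conversion is simple arithmetic: a deadline expressed in system time becomes a count of bus-clock cycles. The function and parameter names below are illustrative assumptions.

```python
# Sketch of the claimed system-time -> bus-clock-cycle conversion.
def deadline_to_bus_cycles(deadline_s, now_s, bus_clock_hz):
    """Convert a system-time deadline into bus-clock cycles from now."""
    remaining = deadline_s - now_s
    if remaining < 0:
        raise ValueError("deadline already passed")
    return round(remaining * bus_clock_hz)

# 500 microseconds ahead on a 1 MHz bus clock -> 500 cycles.
cycles = deadline_to_bus_cycles(deadline_s=1.0005, now_s=1.0,
                                bus_clock_hz=1_000_000)
```

Expressing the deadline in bus-clock cycles lets the peripheral enforce it with its own counter, without sharing the host's notion of system time.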

US Pat. No. 10,216,688

SYSTEMS AND METHODS FOR ACCURATE TRANSFER MARGIN COMMUNICATION

AVAGO TECHNOLOGIES INTERN...

1. A data processing system, the system comprising:a sampling latch operable to sample a received serial data input and provide a corresponding serial data output;
a first duration margin determination circuit operable to: define a first contour of a data signal eye corresponding to the serial data output over a first number of bit periods, and determine a first margin characteristic based upon the first contour;
a second duration margin determination circuit operable to: define a second contour of the data signal eye corresponding to the serial data output over a second number of bit periods, and determine a second margin characteristic based upon the second contour;
a margin normalization circuit operable to calculate a normalized value based upon a combination of the first margin characteristic and the second margin characteristic; and
a transmission circuit operable to transmit an output to a requesting device, wherein the output transmitted by the transmission circuit is indicative of the normalized value.

US Pat. No. 10,216,687

SUBSCRIBER STATION FOR A BUS SYSTEM, AND METHOD FOR INCREASING THE DATA RATE OF A BUS SYSTEM

Robert Bosch GmbH, Stutt...

1. A subscriber station, the subscriber station comprising:a transmit/receive device configured to be directly connected to a communication bus for communication with a plurality of additional subscriber stations connected directly to the communication bus, the transmit/receive device being configured to:
receive a message transmitted from one subscriber station in the plurality of additional subscriber stations;
identify a bit pattern corresponding to an identifier contained in the message;
hide the message in response to the bit pattern not corresponding to a predetermined bit pattern associated with an identifier of the subscriber station;
check the message for errors based on a cyclical redundancy check (CRC) in response to the bit pattern corresponding to the predetermined bit pattern associated with the identifier of the subscriber station; and
transmit an error message through the communication bus to the plurality of additional subscriber stations indicating an error in response to the message containing an error.
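The receive path above — hide messages whose identifier does not match, CRC-check the ones that do, and signal an error on failure — can be sketched directly. The CRC-8 polynomial 0x07 and the message layout are illustrative choices; the claim does not fix a particular CRC.

```python
# CRC-8 (polynomial 0x07, illustrative) over a byte payload.
def crc8(data, poly=0x07):
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def handle_message(message, own_id):
    """Return 'hidden', 'ok' or 'error' following the claimed receive steps."""
    if message["id"] != own_id:        # bit pattern does not match: hide it
        return "hidden"
    if crc8(message["payload"]) != message["crc"]:
        return "error"                 # would trigger the bus error message
    return "ok"

msg = {"id": 0x12, "payload": b"\x01\x02", "crc": crc8(b"\x01\x02")}
```

Filtering on the identifier before running the CRC is what lets the subscriber station skip error checking for traffic addressed elsewhere, which is how the scheme raises the usable data rate.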

US Pat. No. 10,216,686

METHOD AND APPARATUS FOR FULL DUPLEX TRANSMISSION BETWEEN ELECTRONIC DEVICES

Samsung Electronics Co., ...

1. An electronic device, comprising:an interface for supporting a connection with another electronic device;
a plurality of communication paths operating according to different standards; and
a controller configured to:
control first data communication for first data based on a first communication path according to a first standard between electronic devices connected through the interface,
control second data communication for second data based on a second communication path according to a second standard during the first data communication,
compare first data information on the first data and second data information on the second data, and
control full duplex communication for the first data and the second data based on a result of the comparison.

US Pat. No. 10,216,685

MEMORY MODULES WITH NONVOLATILE STORAGE AND RAPID, SUSTAINED TRANSFER RATES

AgigA Tech Inc., San Die...

1. A memory module, comprising:a data bus;
a plurality of slice sections, each slice section configured to input and output a slice of a data for a different section of the data bus;
each slice section comprising:
at least one nonvolatile memory (NVM);
a memory element to store the slice of the data for the slice section during operations that transfer the slice of the data between the section of the data bus for the slice section and the NVM of the slice section; and
a slice controller configured to translate an address for the slice of the data for the section of the data bus into at least a physical address of the NVM of the slice section; and
the memory element comprising a multi-port random access memory having at least a first address port coupled to receive address data from an address bus common to the plurality of slice sections and at least a second address port coupled to receive an address from the slice controller of the slice section;
wherein the slice controller is configured to simultaneously:
transfer data between the corresponding NVM and the corresponding memory element of the slice section, and transfer data between the corresponding memory element of the slice section and the address bus.

US Pat. No. 10,216,684

OPERATING SYSTEM CARD FOR MULTIPLE DEVICES

Google LLC, Mountain Vie...

1. A system comprising:a main printed circuit board (PCB) card configured to be interchangeably interfaced with multiple types of shell computing devices, wherein a width of the main PCB card is less than forty millimeters, the main PCB card including:
a card connector;
a System on a Chip (SoC) configured to run an operating system on the main PCB card; and
an antenna; and
a shell computing device of the multiple types of shell computing devices, the shell computing device included in a dashboard of an automobile, the shell computing device including:
a slot configured to accommodate the main PCB card allowing the main PCB card to be included inside of the shell computing device; and
a mating connector, the card connector configured to be plugged into the mating connector.

US Pat. No. 10,216,683

MULTIMEDIA COMMUNICATION APPARATUS AND CONTROL METHOD FOR MULTIMEDIA DATA TRANSMISSION OVER STANDARD CABLE

MSTAR SEMICONDUCTOR, INC....

1. A multimedia communication apparatus, suitable for a first multimedia apparatus, electrically connectable to a standard connector, the standard connector adapted to be non-reversibly or reversibly connected to a plug of a standard cable and comprising a plurality of pins, the pins comprising a plurality of differential signal pins, a power pin, a first polarity pin, a second polarity pin, a first data pin and a ground pin, the differential signal pins serving as a plurality of multimedia channels, the power pin serving as a power line, the multimedia communication apparatus comprising:a control logic, checking a first connection polarity of the standard cable through the first polarity pin and the second polarity pin to identify whether the standard cable is non-reversibly or reversibly connected to the standard connector; and
a multimedia signal processor, electrically connectable to the standard connector, transmitting or receiving multimedia data to/from a second multimedia apparatus through the multimedia channels, and power handshaking or exchanging information with the second multimedia apparatus through the first data pin, wherein the information is for controlling a multiplexer to switch the multimedia channels.

US Pat. No. 10,216,682

CONFIGURATION DISTRIBUTION

epro GmbH, Gronau (DE)

1. A method of provisioning cards in a rack mount system, the method comprising the steps of:placing a desired selection of unprovisioned cards in a rack,
selecting desired configuration files for the cards in the rack from a library of configuration files,
copying the configuration files into a memory device,
inserting the memory device on the rack, and
powering up the rack mount system,
wherein the configuration files in the memory device automatically and without any further user intervention provision the cards in the rack mount system upon power-up of the rack mount system.
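The provisioning flow of this claim can be sketched as two steps: copying selected configuration files onto the memory device, then applying them to the rack's cards at power-up. The file names, slot keys, and configuration fields are illustrative assumptions.

```python
# Library of configuration files to choose from (contents illustrative).
library = {
    "vibration.cfg": {"rate_hz": 1000},
    "temperature.cfg": {"rate_hz": 1},
}

def build_memory_device(selection):
    """Copy the selected configuration files onto the removable memory device."""
    return {slot: library[name] for slot, name in selection.items()}

def power_up(rack_slots, memory_device):
    """At power-up each unprovisioned card takes its configuration, with no
    further user intervention."""
    return {slot: {"provisioned": True, **memory_device[slot]}
            for slot in rack_slots}

device = build_memory_device({1: "vibration.cfg", 2: "temperature.cfg"})
rack = power_up([1, 2], device)
```

The key property is that the operator's only decisions happen before power-up; the mapping from slot to configuration travels with the memory device.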

US Pat. No. 10,216,681

SYSTEM AND METHOD FOR MANAGING WORKLOADS AND HOT-SWAPPING A CO-PROCESSOR OF AN INFORMATION HANDLING SYSTEM

Dell Products, LP, Round...

1. An information handling system, comprising:a host processing complex to instantiate a hosted processing environment and including a first general-purpose processing unit (GPU) and a GPU hot-plug module that enables a hot-plug operation to replace the first GPU with a second GPU while power is provided to the host processing complex, wherein the hosted processing environment includes a plurality of workloads that can be instantiated on the first GPU, the plurality of workloads including a first workload and a second workload, and wherein the hosted processing environment instantiates the first workload on the first GPU; and
a wireless management system that operates out of band from the hosted processing environment, that directs the hosted processing environment to halt the first workload, and that directs the GPU hot-plug module to perform the hot-plug operation, and directs the hosted processing environment to launch the second workload on the second GPU after the GPU hot-plug module performs the hot-plug operation;
wherein the hosted processing environment provides a list of the workloads to the wireless management system.

US Pat. No. 10,216,680

RECONFIGURABLE TRANSMITTER

Intel Corporation, Santa...

1. An apparatus comprising:first and second single-ended transmitters; and
a differential driver coupled to the first and second single-ended transmitters, wherein the differential driver is a fully n-type device based push-pull voltage mode driver, wherein the differential driver comprises eight n-type devices such that for a given electrical path from a power supply node to a ground node there are at most three transistors coupled in series between the power supply node and the ground node.

US Pat. No. 10,216,679

SEMICONDUCTOR DEVICE AND CONTROL METHOD THEREOF

Renesas Electronics Corpo...

1. A semiconductor device comprising:a plurality of processors, each of the plurality of processors being configured to execute a program; and
an external register disposed outside the processors, the external register being connected to each of the plurality of processors, wherein
each of the plurality of processors comprises:
a control circuit that controls execution of the program;
an arithmetic circuit that performs an operation related to the program by using the external register; and
at least one internal storage circuit, the at least one internal storage circuit being disposed inside of a respective one of the plurality of processors,
the internal storage circuit stores execution state data regarding a state of the execution of the program, the execution state data being data that is transferred from a transfer-origin processor to a transfer-destination processor when a program executing entity is changed from one of the plurality of processors to another of the plurality of processors halfway through the execution of the program,
before the program executing entity is changed from the one of the plurality of processors to the another of the plurality of processors, the external register stores operation data related to the operation performed in the arithmetic circuit of the one of the plurality of processors, and
after the program executing entity is changed from the one of the plurality of processors to the another of the plurality of processors, the arithmetic circuit of the another of the plurality of processors performs the operation by using the operation data stored in the external register and the external register stores operation data related to the operation performed in the arithmetic circuit of the another of the plurality of processors.

US Pat. No. 10,216,678

SERIAL PERIPHERAL INTERFACE DAISY CHAIN COMMUNICATION WITH AN IN-FRAME RESPONSE

Infineon Technologies AG,...

1. A master device, wherein the master device is configured to:output a master data output to a first servant device of a plurality of servant devices, wherein the plurality of servant devices is connected in a serial-peripheral interface (SPI) daisy chain configuration with the master device, wherein the SPI comprises a chip select signal, a serial data in signal, a serial data out signal and a clock signal; and
receive a master data input directly from a last servant device of the plurality of servant devices, wherein the master data input comprises an in-frame response of the plurality of servant devices, wherein the in-frame response is received by the master device in a single SPI communication frame, and wherein respective responses from the plurality of servant devices are arranged within the in-frame response so that the respective responses from the plurality of servant devices are received by the master device in an order inverse to the SPI daisy chain configuration.
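The inverse ordering of the in-frame response can be sketched with a minimal model (an assumption of this sketch, not the claim's implementation: each servant is treated as placing its own reply ahead of the replies already circulating, so the last servant in the chain reaches the master first):

```python
def in_frame_response(servant_responses):
    """Assemble the single-frame response a master would receive from an
    SPI daisy chain (hypothetical model: each servant puts its own reply
    ahead of the replies passing through it, so the master sees them in
    order inverse to the chain)."""
    frame = []
    for response in servant_responses:   # chain order: first -> last servant
        frame.insert(0, response)        # each hop places its reply first
    return frame

# Master -> S1 -> S2 -> S3 -> master data input
print(in_frame_response(["S1", "S2", "S3"]))  # -> ['S3', 'S2', 'S1']
```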

US Pat. No. 10,216,677

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF WITH IDENTIFICATION OF SENSOR USING HISTORY

Samsung Electronics Co., ...

1. An electronic apparatus comprising:an interface comprising interface circuitry configured to be connectable with at least one of a plurality of sensor modules each sensor module comprising at least one sensor for sensing an object;
a programmable circuit configured to process a sensing signal obtained by sensing the object through each sensor module; and
a controller configured to identify at least one hardware image corresponding to the sensor module connected to the interface from among a plurality of hardware images, to load the at least one identified hardware image to the programmable circuit, and to control the programmable circuit to process a sensing signal corresponding to the at least one hardware image,
wherein the controller is further configured to, in response to the sensor module transmitting the sensing signal through the interface being identified, identify whether a history of using the identified sensor module is present, and to retrieve the hardware image corresponding to the identified sensor module from a storage in response to the history of using the identified sensor module being present.

US Pat. No. 10,216,676

SYSTEM AND METHOD FOR EXTENDED PERIPHERAL COMPONENT INTERCONNECT EXPRESS FABRICS

FutureWei Technologies, I...

1. A system for extending peripheral component interconnect express (PCIe) fabric comprising:a host root complex for a host PCIe fabric associated with a first set of bus numbers and a first memory mapped input/output (MMIO) space;
at least one endpoints connected with the host root complex; and
a root complex end point (RCEP) for an extended PCIe fabric associated with a second set of bus numbers and a second MMIO space separate from the first set of bus numbers and the first MMIO space, respectively, wherein the RCEP is an endpoint of the at least one endpoints connected with the host root complex, wherein the RCEP is a bridge between the extended PCIe fabric and the host PCIe fabric, and wherein the second set of bus numbers allows additional endpoints to be connected to the RCEP beyond a capacity of the host PCIe fabric as provided by resources of the host root complex.

US Pat. No. 10,216,675

TECHNIQUES FOR ESTABLISHING AN EXTERNAL INTERFACE AND A NETWORK INTERFACE WITHIN A CONNECTOR

LENOVO (SINGAPORE) PTE LT...

1. An electronic device comprising:a host system;
a device controller includes a first data channel for communicating with a peripheral device and a second data channel for communicating with a network device;
a first receptacle for simultaneously providing a peripheral interface for said first data channel and a network interface for said second data channel;
a crossbar switch, connected between said device controller and said first receptacle, switches between said first and second data channels of said device controller to establish said peripheral interface and said network interface in said first receptacle; and
a power delivery controller connected to said host system via a first serial bus, and connected to said crossbar switch via a second serial bus.

US Pat. No. 10,216,674

HIGH PERFORMANCE INTERCONNECT PHYSICAL LAYER

Intel Corporation, Santa...

1. An apparatus comprising:physical layer logic, link layer logic, and protocol layer logic, wherein the physical layer logic is to:
generate a supersequence comprising a sequence comprising an electrical ordered set (EOS) and a plurality of training sequences, the plurality of training sequences comprises a predefined number of training sequences corresponding to a respective one of a plurality of training states with which the supersequence is to be associated, each training sequence in the plurality of training sequences is to include a respective training sequence header and a training sequence payload, the training sequence payloads of the plurality of training sequences are to be sent scrambled and the training sequence headers of the plurality of training sequences are to be sent unscrambled.

US Pat. No. 10,216,673

USB DEVICE FIRMWARE SANITIZATION

International Business Ma...

1. A method, comprising:intercepting communications between a universal serial bus (USB) device and a host, at least by implementing first device firmware of the USB device, wherein the second device firmware is implemented in the USB device; and
sanitizing, using at least the implemented first device firmware, intercepted communications from the USB device toward the host, the sanitizing performed so that no communication from the USB device is directly forwarded to the host and instead only sanitized communications are forwarded to the host, wherein:
sanitizing is performed by a sanitizer having a host side and a device side and further comprises:
converting requests from the host to the USB device from USB-level semantics used by the device side to application-level semantics used by the host side, processing the request at an application level to determine first USB-level semantics to use to communicate the request to the USB device, and lowering the application-level semantics to the determined first USB-level semantics for sending to the USB device; and
converting replies from the USB device to the host from the USB-level semantics to the application-level semantics, processing the replies at the application level to determine second USB-level semantics to use to communicate the replies to the host, and lowering the application-level semantics to the determined second USB-level semantics for sending to the host; and
performing the sanitizing based at least on analysis of one or both of the application-level semantics and the USB-level semantics.

US Pat. No. 10,216,672

SYSTEM AND METHOD FOR PREVENTING TIME OUT IN INPUT/OUTPUT SYSTEMS

International Business Ma...

8. A system for preventing time out from occurring during transfer of data to an input/output device, comprising:one or more processors including memory for storing a quantity of data to be transferred to an input/output device in a data transfer;
a data prober, for probing the quantity of data and forwarding the quantity of data to an input/output controller;
the input/output controller, for breaking the quantity of data into data packets and for transferring the data packets in a data stream;
the input/output device, for receiving the data stream transferred by the input/output controller; and
a data dummy generator, for generating dummy data and inserting the dummy data in the data stream, the data dummy generator being configured to generate dummy data and insert same into the data stream at a selected time in order to avoid a time out condition from occurring during the data transfer.

US Pat. No. 10,216,671

POWER AWARE ARBITRATION FOR BUS ACCESS

QUALCOMM Incorporated, S...

1. A method of operating a bus interface unit, the method comprising:receiving three or more words from one or more agents for transmission on to a data bus;
storing the three or more words in three or more respective queues, wherein the three or more respective queues are indexed to a predetermined sequential order;
selecting a subset of the three or more respective queues based on a position of a round robin pointer (RRP) having a RRP value that traverses the three or more queues in accordance with the predetermined sequential order, wherein the subset:
excludes queues having an index value lower than the RRP value; and
includes queues having an index value higher than the RRP value;
identifying a plurality of pending words stored in the selected subset of the three or more queues;
determining which of the plurality of pending words stored in the selected subset will consume the least switching power;
selecting, based on the determining, a next word from the plurality of pending words stored in the selected subset of the three or more queues; and
transmitting the selected next word on to the data bus.
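The claimed selection steps can be sketched as follows (a minimal model, not the patented implementation: switching power is approximated as the Hamming distance to the word currently on the bus, queues with index equal to the RRP value are assumed included, and each queue's head word is the pending candidate):

```python
def hamming(a, b):
    # Number of bus lines that would toggle relative to the last word.
    return bin(a ^ b).count("1")

def select_next_word(queues, rrp, last_word):
    """Pick the next word to transmit (sketch).

    queues    : list of word lists, indexed in the predetermined order
    rrp       : round-robin pointer value
    last_word : word most recently driven on the data bus
    """
    # Subset selection: exclude indices below the RRP value.
    subset = [(i, q[0]) for i, q in enumerate(queues) if i >= rrp and q]
    if not subset:
        return None
    # Choose the pending word consuming the least (modeled) switching power.
    idx, word = min(subset, key=lambda iw: hamming(iw[1], last_word))
    queues[idx].pop(0)
    return word

queues = [[0b1111], [0b1001], [0b1000]]
print(bin(select_next_word(queues, rrp=1, last_word=0b1000)))  # -> 0b1000
```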

US Pat. No. 10,216,670

SYNCHRONIZATION OF A NETWORK OF SENSORS

STMICROELECTRONICS (GRENO...

3. A system comprising:a plurality of slave boards, each slave board comprising a sensor and a control processor;
a master board comprising a sensor and a control processor, the master board being configured to access measurements of the plurality of slave boards; and
a serial bus connecting the master board and the slave boards;
wherein the control processor of the master board is programmed to calibrate acquisitions of information from each of the slave board, the control processor of the master board being configured to:
receive a count from each slave board, each count representing a time separating an instant of reception of a measurement acquisition start command transmitted by the master board to that slave board and a measurement acquisition end instant for that slave board;
for each slave board, calculate a delay to be applied to an acquisition start command of that slave board, the delay being calculated as a function of the count received from that slave board; and
transmit each acquisition start command to the respective slave board so that the respective slave board delays a start of acquisition by a value of the delay calculated for that board so that acquisitions of all slave boards end at a same instant of time, wherein a delay Ri to be applied to the acquisition start command of a slave board i is calculated according to the following formula:
Ri=(N?i)*C+CN?Ci whereindex i denotes a board and index N denotes the board for which acquisition ends last, where i=0 . . . N,
Ri=(N−i)*C+CN−Ci where index i denotes a board and index N denotes the board for which acquisition ends last, where i=0 . . . N,
C is a count, fixed by the master board, between the transmissions, by the master board, of commands to two successive slave boards, and
Ci is a count performed for a board i between the instant of reception of a measurement acquisition start command and the measurement acquisition end instant.

US Pat. No. 10,216,669

BUS BRIDGE FOR TRANSLATING REQUESTS BETWEEN A MODULE BUS AND AN AXI BUS

Honeywell International I...

1. A method for bus bridging comprising:providing a bus interface device communicatively coupled between at least one module bus and at least one advanced extensible interface (AXI) bus for translating bus requests between said module bus and said AXI bus, said bus interface device including logic, wherein said logic is configured to:
receive a read/write (R/W) request that is one of a module bus protocol (module bus protocol R/W request) and an AXI bus protocol (AXI bus protocol R/W request);
buffer said R/W request to provide a buffered R/W request;
translate via said finite state machine (FSM) said buffered R/W request to a first AXI protocol conforming request if said buffered R/W request is said module bus protocol R/W request and translate via said finite state machine (FSM) said buffered R/W request to a first module bus protocol conforming request if said buffered R/W request is said AXI bus protocol R/W request, wherein the FSM is implemented as sequential logic circuits and is defined by a list of its states and triggering conditions for each transition; and
transmit said first AXI protocol conforming request to said AXI bus or said first module bus protocol conforming request to said module bus.

US Pat. No. 10,216,668

TECHNOLOGIES FOR A DISTRIBUTED HARDWARE QUEUE MANAGER

Intel Corporation, Santa...

1. A processor comprising:a plurality of processor cores;
a plurality of hardware queue managers;
interconnect circuitry to connect each hardware queue manager of the plurality of hardware queue managers to each processor core of the plurality of processor cores; and
a plurality of queue mapping units, wherein each of the plurality of processor cores is associated with a different queue mapping unit of the plurality of queue mapping units and each of the plurality of queue mapping units is associated with a different processor core of the plurality of processor cores,
wherein each hardware queue manager of the plurality of hardware queue managers comprises:
enqueue circuitry to store data received from a processor core of the plurality of processor cores in a data queue associated with the respective hardware queue manager in response to an enqueue command generated by the processor core, wherein the enqueue command identifies the respective hardware queue manager;
dequeue circuitry to retrieve the data from the data queue associated with the respective hardware queue manager in response to a dequeue command generated by a processor core of the plurality of processor cores, wherein the dequeue command identifies the respective hardware queue manager; and
wherein each queue mapping unit of the plurality of queue mapping units is configured to:
receive a virtual queue address from the corresponding processor core;
translate the virtual queue address to a physical queue address; and
provide the physical queue address to the corresponding processor core.
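The queue mapping unit's translate step can be sketched with a lookup table (the table contents and address widths are assumptions; the claim specifies only receive, translate, and provide):

```python
class QueueMappingUnit:
    """Per-core virtual-to-physical queue address translation (sketch)."""

    def __init__(self, mapping):
        # virtual queue address -> physical queue address
        self._map = dict(mapping)

    def translate(self, virtual_addr):
        # Result is provided back to the corresponding processor core.
        return self._map[virtual_addr]

qmu = QueueMappingUnit({0x10: 0x80, 0x11: 0x84})
print(hex(qmu.translate(0x10)))  # -> 0x80
```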

US Pat. No. 10,216,667

IMAGE FORMING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image forming apparatus, comprising:a main system;
a sub system that communicates with the main system; and
a device that communicates with the sub system,
wherein:
the main system includes a transfer unit configured to transfer, to a memory of the sub system, a boot program of the sub system and device information that is necessary for the device to perform an activation process of the device,
the sub system includes;
the sub system includes:
a control unit configured to perform, based on the boot program that has been transferred by the transfer unit and is in the memory of the sub system, an establishment process for establishing communication between the main system and the control unit, and
a transmission unit configured to transmit, to the device, the device information that has been transferred by the transfer unit and is in the memory of the sub system, and
the device includes an execution unit configured to execute the activation process of the device using the device information transmitted by the transmission unit,
wherein the transfer unit included in the main system is configured to transfer, to the memory of the sub system, the boot program of the sub system and the device information that is necessary for the device to perform the activation process of the device, before the establishment process establishes the communication between the main system and the control unit included in the sub system.

US Pat. No. 10,216,666

CACHING METHODS AND SYSTEMS USING A NETWORK INTERFACE CARD

Cavium, LLC, Santa Clara...

1. A machine implemented method, comprising:maintaining a cache entry data structure for storing a sync word associated with a cache entry that points to a storage location at a storage device accessible to a network interface card (NIC) via a peripheral link, the peripheral link couples the NIC, the storage device and a processor of a computing device; wherein the sync word is associated with a plurality of states that are used by the NIC and a caching module executed by the processor of the computing device for processing requests to transmit data cached at the storage device by the NIC using a network link; wherein the plurality of states are an add state, a remove state and a valid state that are updated by the NIC by setting bits associated with each of the plurality of states;
using the cache entry data structure by the NIC to determine that there is a cache hit indicating that data for a read request is cached at the storage device;
posting a first message for the storage device by the NIC via the peripheral link, at a storage device queue located at a host memory of the computing device, the message requesting the data for the read request from the storage device;
in response to the first message, placing the data for the read request for the NIC by the storage device at the host memory via the peripheral link;
posting a second message for the NIC by the storage device at the host memory via the peripheral link for notifying the NIC that the data for the read request has been placed at the host memory;
retrieving the data placed by the storage device at the host memory by the NIC via the peripheral link;
transmitting the data for the read request by the NIC to via the network link; and
transmitting the data for the read request by the NIC via the network link; and
updating by the NIC, a state of a cache entry associated with the read request at the cache entry data structure.

US Pat. No. 10,216,665

MEMORY DEVICE, MEMORY CONTROLLER, AND CONTROL METHOD THEREOF

REALTEK SEMICONDUCTOR COR...

1. A control method, comprising:detecting an operational command to a first memory unit;
interrupting an operational status of a second memory unit performing with a write operation or a read operation;
asserting the operational command corresponding to the first memory unit; and
recovering the operational status of the second memory unit,
wherein the first memory unit and the second memory unit are different memory units corresponding to the same channel.

US Pat. No. 10,216,664

REMOTE RESOURCE ACCESS METHOD AND SWITCHING DEVICE

Huawei Technologies Co., ...

1. A remote resource access method, used to access a physical resource device separate from a computer system, the computer system comprising at least one computing node, the computer system and the physical resource device being coupled using a switching device, and the method comprising:obtaining, by the switching device, a first access message from a first computing node in the at least one computing node, the first access message accessing a virtual resource device, and a destination address in the first access message being a virtual address of the virtual resource device;
converting, by the switching device, the first access message into a second access message based on a physical address of the physical resource device corresponding to the virtual address of the virtual resource device, a destination address in the second access message being the physical address of the physical resource device, and the virtual resource device being a virtualized device of the physical resource device;
sending, by the switching device, the second access message to the physical resource device using a network, the physical resource device comprising at least one physical resource;
selecting, by the switching device, a device driver according to physical resource information, the physical resource information being received from a management platform and corresponding to the physical resource device; and
running, by the switching device, the device driver to simulate insertion of the physical resource device into the switching device.

US Pat. No. 10,216,663

SYSTEM AND METHOD FOR AUTONOMOUS TIME-BASED DEBUGGING

NXP USA, INC., Austin, T...

1. A processing system, comprising:a general purpose instruction based data processor;
an input configured to receive a command written by the data processor;
a timer manager controller configured to receive the command, and to execute the command; and
a debug interrupt timer controller (DITC) configured to determine that the command is directed to the DITC, and to store configuration information that associates the command with an element of the processing system that is a source of the command, wherein the configuration information is included in the command.

US Pat. No. 10,216,662

HARDWARE MECHANISM FOR PERFORMING ATOMIC ACTIONS ON REMOTE PROCESSORS

Intel Corporation, Santa...

1. A hardware apparatus comprising:a first register in a processor core to store a memory address of a payload corresponding to an action to be performed associated with a remote action request (RAR) interrupt;
a second register in a processor core to store a memory address of an action list accessible by a plurality of processors;
a remote action handler circuit to:
identify a received RAR interrupt,
access the action list to identify an action to be performed and access the payload associated with the identified action,
perform the action of the received RAR interrupt, and
signal acknowledgment to an initiating processor upon completion of the action.

US Pat. No. 10,216,661

HIGH PERFORMANCE INTERCONNECT PHYSICAL LAYER

Intel Corporation, Santa...

1. An apparatus comprising:a receiver processor comprising an agent to support a layered protocol stack comprising physical layer logic, link layer logic, and protocol layer logic, wherein the agent is to:
receive a link layer data stream within an active link state (L0), wherein the link layer data comprises a set of flits;
intermittently enter a coordination link state (L0c), wherein the coordination link state defines a L0c interval in which physical layer control;
receive a control code within the L0c interval; and
initiate a reset of the link based on a control code mismatch, wherein the control code mismatch is based on an identification that the control code fails to match one of a set of specified codes.

US Pat. No. 10,216,660

METHOD AND SYSTEM FOR INPUT/OUTPUT (IO) SCHEDULING IN A STORAGE SYSTEM

EMC IP Holding Company LL...

1. A computer-implemented method for input/output (IO) scheduling for a storage system, the method comprising:receiving a plurality of input/output (IO) requests at the storage system, the IO requests including random IO requests and sequential IO requests;
determining whether there is a pending random IO request from the plurality of IO requests;
in response to determining that there is a pending random IO request, determining whether a total latency of the sequential IO requests exceeds a predicted latency of the pending random IO request; and
servicing the pending random IO request in response to determining that the total latency of the sequential IO requests exceeds the predicted latency of the pending random IO request.
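The claimed scheduling decision can be sketched as below (a minimal model: the latency estimators are caller-supplied functions, since the claim does not specify how latencies are predicted):

```python
def next_request(random_q, sequential_q, predict_latency, seq_latency):
    """Return the next IO request to service (sketch of the claimed policy).

    random_q, sequential_q : FIFO lists of pending requests
    predict_latency(req)   : predicted latency of a random request
    seq_latency(req)       : latency contribution of a sequential request
    """
    if random_q:                                     # pending random IO?
        pending = random_q[0]
        total_seq = sum(seq_latency(r) for r in sequential_q)
        if total_seq > predict_latency(pending):     # sequential backlog costs more
            return random_q.pop(0)                   # service the random request
    return sequential_q.pop(0) if sequential_q else None
```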

US Pat. No. 10,216,659

MEMORY ACCESS SIGNAL DETECTION UTILIZING A TRACER DIMM

HEWLETT PACKARD ENTERPRIS...

1. A system, comprising;a memory controller;
1. A system, comprising:a memory controller;
a memory bus coupled to the memory controller; and
a dual inline memory module (DIMM) coupled to the memory controller through the memory bus, the DIMM comprising:
a dynamic random access memory (DRAM) portion;
a storage portion comprising one or more storage devices; and
a gate array portion coupled to the memory bus to detect memory access signals and to store information related to the memory access signals on the storage portion,
wherein the gate array portion is configured to detect memory access signals to any of one or more DIMMs coupled to the memory bus and store information related to the memory access signals directed to any of the one or more DIMMs on the storage portion;
a second DIMM coupled to the memory controller through the memory bus, the second DIMM comprising:
a second DRAM portion;
a second storage portion; and
a second gate array portion coupled to the memory bus to detect memory access signals and to store information related to the memory access signals on the second storage portion,
wherein detection of memory access signals by the gate array portion and detection of memory access signals by the second gate array portion are synchronized;
wherein at least one of the DRAM portion or the second DRAM portion includes a selected memory address, wherein an access command to the selected memory address is used to synchronize detection of memory access signals by the gate array portion and detection of memory access signals by the second gate array portion.

US Pat. No. 10,216,658

REFRESHING OF DYNAMIC RANDOM ACCESS MEMORY

VIA ALLIANCE SEMICONDUCTO...

8. A control method for dynamic random access memory, comprising:providing a command queue with access commands queued therein, wherein the access commands are queued in the command queue waiting to be transmitted to a dynamic random access memory;
using a counter to count how many times a rank of the dynamic random access memory is entirely refreshed;
repeatedly performing a per-rank refresh operation on the rank when the counter has not reached an upper limit and no access command corresponding to the rank is waiting in the command queue;
decreasing the counter by 1 every refresh inspection interval;
when there are access commands corresponding to the rank waiting in the command queue and the counter is 0, refreshing the rank bank-by-bank by per-bank refresh operations;
corresponding to a per-bank refresh operation to be performed on a single bank within the rank, priority of access commands queued in the command queue corresponding to remaining banks of the rank except for the single bank is raised; and
when finishing the per-bank refresh operation on the single bank, the priority of the access commands queued in the command queue corresponding to the remaining banks of the rank except for the single bank is restored.
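The branch structure of the claimed refresh policy can be sketched as a decision function (a simplification: refresh timing, the priority adjustment, and the per-interval counter decrement are handled elsewhere and are assumptions of this sketch):

```python
def refresh_action(counter, rank_has_pending_cmds, upper_limit):
    """Choose the refresh action for one rank (sketch of the claimed policy).

    counter counts completed whole-rank refreshes; it is decremented by 1
    every refresh inspection interval outside this function."""
    if not rank_has_pending_cmds and counter < upper_limit:
        return "per-rank refresh"    # rank idle: refresh ahead of schedule
    if rank_has_pending_cmds and counter == 0:
        return "per-bank refresh"    # rank busy, no credit left: bank-by-bank
    return "defer"                   # rank busy but refresh credit remains

print(refresh_action(0, rank_has_pending_cmds=True, upper_limit=4))
# -> per-bank refresh
```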

US Pat. No. 10,216,657

EXTENDED PLATFORM WITH ADDITIONAL MEMORY MODULE SLOTS PER CPU SOCKET AND CONFIGURED FOR INCREASED PERFORMANCE

INTEL CORPORATION, Santa...

1. An apparatus comprising:a printed circuit board (PCB) defining a length and a width, the length being greater than the width;
a first row of elements on the printed circuit board, including a first memory region configured to receive at least one memory module;
a second row of elements on the PCB including a first central processing unit (CPU) socket configured to receive a first CPU, and a second CPU socket configured to receive a second CPU, the first CPU socket and the second CPU socket positioned side by side along the width of the PCB; and
a third row of elements on the PCB, including a second memory region configured to receive at least one memory module;
wherein the second row of elements is positioned between the first row of elements and the third row of elements.

US Pat. No. 10,216,656

CUT-THROUGH BUFFER WITH VARIABLE FREQUENCIES

INTERNATIONAL BUSINESS MA...

1. A system comprising:a header cut-through buffer operable to be asynchronously read while being written at different clock frequencies, wherein the header cut-through buffer is operable to buffer values from a header portion of a packet;
a data cut-through buffer operable to buffer values from a payload portion of the packet in parallel with the header cut-through buffer; and
a controller operatively connected to the header cut-through buffer and the data cut-through buffer, the controller operable to perform:
writing one or more values into the header cut-through buffer in a first clock domain, wherein the data cut-through buffer is written in the first clock domain;
comparing a number of values written into the header cut-through buffer to a notification threshold;
passing a notification indicator from the first clock domain to a second clock domain based on determining that the number of values written into the header cut-through buffer meets the notification threshold; and
based on receiving the notification indicator, reading the header cut-through buffer from the second clock domain continuously without pausing until the one or more values are retrieved and any additional values written to the header cut-through buffer during the reading of the one or more values are retrieved, wherein the data cut-through buffer is read in the second clock domain, and the notification threshold delays reading of the header cut-through buffer without delaying reading of the data cut-through buffer.

US Pat. No. 10,216,655

MEMORY EXPANSION APPARATUS INCLUDES CPU-SIDE PROTOCOL PROCESSOR CONNECTED THROUGH PARALLEL INTERFACE TO MEMORY-SIDE PROTOCOL PROCESSOR CONNECTED THROUGH SERIAL LINK

ELECTRONICS AND TELECOMMU...

1. A memory interface apparatus, comprising:a central processing unit (CPU)-side protocol processor connected to a CPU through a parallel interface; and
a memory-side protocol processor connected to a memory through a parallel interface,
wherein the CPU-side protocol processor and the memory-side protocol processor are connected through a serial link, and
wherein the CPU-side protocol processor includes:
a front-end bus controller configured to generate a header packet for header processing and a write data payload packet for a data payload;
a header buffer configured to store the header packet; and
a write data buffer configured to store the write data payload packet.

US Pat. No. 10,216,654

DATA SERVICE-AWARE INPUT/OUTPUT SCHEDULING

EMC IP Holding Company LL...

1. A method of request scheduling in a computing environment, comprising:obtaining a segment size for which one or more data services in the computing environment are configured to process data;
obtaining, from a host device in the computing environment, one or more requests to at least one of read data from and write data to one or more storage devices in the computing environment, wherein the one or more requests originate from one or more application threads of the host device;
aligning the one or more requests into one or more segments having the obtained segment size to generate one or more aligned segments, wherein the one or more requests are respectively provided to one or more local request queues corresponding to the one or more application threads, the one or more local request queues performing request merging, based on the obtained segment size, to generate the one or more aligned segments; and
dispatching the one or more aligned segments to the one or more data services prior to sending the one or more requests to the one or more storage devices;
wherein the computing environment is implemented via one or more processing devices operatively coupled via a communication network.
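The alignment step can be sketched as rounding each request out to segment boundaries and merging overlaps (an assumption of this sketch; the claim leaves the merge policy to the local request queues):

```python
def align_requests(requests, segment_size):
    """Merge (offset, length) byte-range requests into segment-aligned
    ranges (sketch). Each request is rounded out to segment boundaries,
    then overlapping or adjacent aligned ranges are coalesced."""
    aligned = sorted(
        ((off // segment_size) * segment_size,            # round start down
         -(-(off + length) // segment_size) * segment_size)  # round end up
        for off, length in requests)
    merged = []
    for start, end in aligned:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(align_requests([(100, 50), (140, 100)], segment_size=64))
# -> [(64, 256)]
```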

US Pat. No. 10,216,653

PRE-TRANSMISSION DATA REORDERING FOR A SERIAL INTERFACE

International Busiess Mac...
International Business Mac...

1. A serial communication system, comprising:a transmitting circuit for serially transmitting data via a serial communication link including N channels where N is an integer greater than 1, the transmitting circuit including:
an input buffer having storage for input data frames each including M bytes forming N segments of M/N contiguous bytes;
a reordering circuit coupled to the input buffer, wherein the reordering circuit includes a reorder buffer, and wherein the reordering circuit buffers, in each of multiple entries of the reorder buffer, a byte in a common byte position in each of the N segments of an input data frame, and wherein the reordering circuit sequentially outputs the contents of the entries of the reorder buffer via the N channels of the serial communication link.
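The reorder buffer's byte gathering can be sketched directly from the claim: an M-byte frame forms N contiguous segments of M/N bytes, and entry j collects the byte at position j of every segment, so each entry feeds the N channels in parallel:

```python
def reorder_frame(frame, n_channels):
    """Build the reorder-buffer entries for an M-byte input frame
    (sketch; assumes n_channels divides the frame length evenly)."""
    seg = len(frame) // n_channels   # M/N bytes per contiguous segment
    return [[frame[s * seg + j] for s in range(n_channels)]  # common position j
            for j in range(seg)]                             # across N segments

# 8-byte frame over 4 channels: segments are AB, CD, EF, GH.
print(reorder_frame(list("ABCDEFGH"), 4))
# -> [['A', 'C', 'E', 'G'], ['B', 'D', 'F', 'H']]
```

Outputting the entries sequentially then sends one byte per channel per step, matching the claim's sequential output over the N channels.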

US Pat. No. 10,216,652

SPLIT TARGET DATA TRANSFER

EMC IP Holding Company LL...

1. A method of transferring data to an initiator, comprising:providing a first target that exchanges commands and status with the initiator, the first target including a data storage array having a cache memory, at least one storage device, and a host adaptor that is coupled to the initiator and communicates with the cache memory;
providing a second target, coupled to the host adaptor, that exchanges commands and data with the first target and exchanges data with the initiator, the second target including a fast memory unit having memory that is accessible faster than the at least one storage device of the data storage array, wherein at least some data is stored at only one of: the first target or the second target;
the initiator providing a first transfer command to the first target, the first transfer command including a request for requested data;
transferring the requested data from cache memory of the first target through the host adaptor to the initiator in response to the requested data being stored in the cache memory;
determining whether the requested data is stored in the faster memory unit of the second target in response to the requested data not being stored in the cache memory of the data storage array of the first target;
the first target providing a second transfer command to the second target in response to the requested data not being stored in the cache memory of the data storage array of the first target and being stored in the faster memory unit of the second target; and
in response to the second transfer command received from the first target, the second target transferring the requested data to the initiator through the host adaptor.
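The read path the claim describes has three outcomes: serve from the first target's cache, redirect to the second (faster) target, or fall back to the array's storage devices. A minimal sketch of that decision, with dictionaries standing in for the cache, fast memory unit, and storage devices (all names are hypothetical):

```python
def serve_read(lba, cache, fast_unit, backing_store):
    """Split-target read path: first target's cache, then the second
    target's fast memory (via a second transfer command), then the
    storage devices of the data storage array."""
    if lba in cache:
        return cache[lba]          # first target: cache hit
    if lba in fast_unit:
        return fast_unit[lba]      # second target answers the initiator
    return backing_store[lba]      # slow path: at least one storage device
```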

US Pat. No. 10,216,651

PRIMARY DATA STORAGE SYSTEM WITH DATA TIERING

NexGen Storage, Inc., Lo...

1. A primary data storage system for use in a computer network and having tiering functionality, the system comprising:an input/output port for receiving a block command packet that embodies one of a read block command and a write block command and transmitting a block result packet in reply to a block command packet;
a data store system having at least a first tier and a second tier;
wherein the first tier has a first set of characteristics;
wherein the second tier has a second set of characteristics;
a statistics database configured to receive, store, and provide data for use in making a decision related to tiering of a data block;
a tiering processor for performing tiering functionality to cause a data block associated with a block command packet to be stored in whichever of the first tier and second tier has characteristics that are most compatible with the access pattern of the data block if, based on data obtained from the statistics database, there are sufficient resources for performing the tiering functionality and a calculated weight associated with a future performance of the tiering functionality at a first point in time is dominant relative to a calculated weight associated with a future performance of each of one or more other operations associated with one or more other block command packets that are simultaneously being considered for performance at the first point in time, and if there are insufficient resources for performing the tiering functionality or the calculated weight associated with the future performance of the tiering functionality is not dominant relative to a calculated weight associated with the future performance of each of the one or more other operations associated with one or more other block command packets simultaneously being considered for performance at the first point in time, forgoing any tiering functionality with respect to the data block until at a second point in time that is later than the first point in time, data obtained from the statistics database indicates that there are sufficient resources for performing the tiering functionality and a calculated weight associated with a future performance of the tiering functionality at the second point in time is dominant relative to a calculated weight associated with a future performance of each of whatever one or more other operations associated with one or more other block command packets are simultaneously being considered for future performance at the second point in time;
wherein the tiering processor is adapted for:
copying a first plurality of data blocks from the first tier to the second tier so that the second tier has a second plurality of data blocks that is identical to the first plurality of data blocks; and
after a copying, identifying a retained portion of the space occupied by the second plurality of data blocks on the second tier as being most compatible with the second tier than with the first tier, identifying an available portion of the space occupied by the first plurality of data blocks on the first tier that corresponds to the retained portion of space on the second tier as available, and thereby retaining on the first tier a third data block or third plurality of data blocks that is a subset of the second plurality of data blocks on the second tier.

US Pat. No. 10,216,650

METHOD AND APPARATUS FOR BUS LOCK ASSISTANCE

Intel Corporation, Santa...

1. A method comprising:detecting that a first instruction and a second instruction are locked instructions;
determining that execution of the first instruction and the second instruction each include imposing an initial bus lock; and
executing a bus lock assistance function in response to the determining, wherein the bus lock assistance function comprises:
preventing the initial bus lock from being imposed for the first instruction by raising a flag to cause execution of software including the first instruction to stop, and
permitting the initial bus lock to be imposed for the second instruction.

US Pat. No. 10,216,649

KERNEL TRANSITIONING IN A PROTECTED KERNEL ENVIRONMENT

1. A method for providing multiple kernels in a protected kernel environment, the method comprising:providing, by a hypervisor, a virtual machine that includes a first kernel and a second kernel;
allocating a first portion of memory for the first kernel and a second portion of memory for the second kernel;
executing the first kernel that is stored in the first portion of memory;
disabling, by the hypervisor, access privileges corresponding to the second portion of memory; and
transitioning from executing the first kernel to executing the second kernel, wherein the transitioning from the first kernel to the second kernel occurs while the virtual machine is running and without shutting down or rebooting the virtual machine, the transitioning comprising:
clearing, by the hypervisor, at least some of the first portion of memory;
enabling, by the hypervisor, access privileges corresponding to the second portion of the memory; and
after the enabling, executing the second kernel on the virtual machine.

US Pat. No. 10,216,648

MAINTAINING A SECURE PROCESSING ENVIRONMENT ACROSS POWER CYCLES

Intel Corporation, Santa...

1. A processor comprising:an instruction unit to receive a first instruction, wherein the first instruction is to evict a root version array page entry from a secure cache; and
an execution unit to execute the first instruction, wherein execution of the first instruction includes generating a blob to contain information to maintain a secure processing environment across a power cycle and storing the blob in a non-volatile memory, wherein generating the blob is to include encrypting a combination of inputs reflecting context of the secure cache, the combination of inputs to include a version number of the root version array page.

US Pat. No. 10,216,647

COMPACTING DISPERSED STORAGE SPACE

International Business Ma...

1. A method for execution by a storage unit, the method comprises:receiving an encoded data slice for storage in physical memory of the storage unit, wherein the physical memory includes a plurality of storage locations, wherein the physical memory is virtually divided into a plurality of log files, and wherein each log file of the plurality of log files is associated with a unique set of storage locations of the plurality of storage locations, wherein a data object is partitioned into a plurality of data segments, wherein a data segment of the plurality of data segments is error encoded and sliced in accordance with distributed data storage parameters to produce a plurality of encoded data slices for distributed storage in a plurality of storage units that includes the storage unit, and wherein the plurality of encoded data slices includes the encoded data slice that is received for storage in physical memory of the storage unit;
determining a storage location of the plurality of storage locations for storing the encoded data slice by:
identifying a log file of the plurality of log files based on information regarding the encoded data slice corresponding to information regarding the log file to produce an identified log file, wherein the information regarding the encoded data slice includes at least one of: a data identifier (ID) of a file associated with the encoded data slice, a user ID associated with the encoded data slice, and an indication of the log file contained in a message accompanying the encoded data slice, and wherein the identified log file is storing at least one other encoded data slice;
comparing storage parameters of the identified log file with desired storage parameters associated with the encoded data slice; and
when the storage parameters of the identified log file include at least one of:
the log file is identified as a most recently compacted log file;
the log file is identified in a slice location table lookup; and
the log file is identified based on a slice name associated with the encoded data slice,
identifying a storage location within the unique set of storage locations associated with the identified log file.

US Pat. No. 10,216,646

EVICTING APPROPRIATE CACHE LINE USING A REPLACEMENT POLICY UTILIZING BELADY'S OPTIMAL ALGORITHM

Board of Regents, The Uni...

1. A method for cache replacement, the method comprising:tracking, by a processor, an occupied cache capacity of a simulated cache at every time interval using an occupancy vector, wherein said occupancy vector contains a number of cache lines contending for said simulated cache, wherein said cache capacity corresponds to a number of cache lines of said simulated cache;
retroactively assigning said cache capacity to cache lines of said simulated cache in order of their reuse, wherein a cache line is considered to be a cache hit utilizing Belady's optimal algorithm if said cache capacity is available at all times between two subsequent accesses, wherein a cache line is considered to be a cache miss utilizing said Belady's optimal algorithm if said cache capacity is not available at all times between said two subsequent accesses;
updating said occupancy vector using a last touch timestamp of a current memory address;
determining if said current memory address results in a cache hit or a cache miss utilizing said Belady's optimal algorithm based on said updated occupancy vector; and
storing a replacement state used for evicting a cache line of a cache using results of said determination.
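The occupancy-vector mechanism in this claim can be sketched directly: a reuse is a hit under Belady's optimal policy only if cache capacity was available at every time step between the two subsequent accesses, and capacity is assigned retroactively when a hit is granted. The sketch below models optimal replacement with bypass; names are illustrative, not the patented implementation.

```python
def optgen_hits(accesses, cache_lines):
    """Replay a trace and decide, per access, whether Belady's optimal
    policy would hit, using an occupancy vector of per-time-step demand."""
    last_seen = {}                    # address -> time of previous access
    occupancy = [0] * len(accesses)   # lines contending for the cache
    hits = []
    for t, addr in enumerate(accesses):
        if addr in last_seen:
            start = last_seen[addr]
            # Hit iff capacity is available at all times in [start, t).
            if all(occupancy[i] < cache_lines for i in range(start, t)):
                for i in range(start, t):   # retroactively claim capacity
                    occupancy[i] += 1
                hits.append(True)
            else:
                hits.append(False)
        else:
            hits.append(False)        # cold miss: no prior access
        last_seen[addr] = t           # last touch timestamp update
    return hits
```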

US Pat. No. 10,216,645

MEMORY DATA TRANSFER METHOD AND SYSTEM

Synopsys, Inc., Mountain...

1. A method using one hardware implemented DMA (Direct Memory Access) processor having DMA capability integrated therein, the method comprising:the one hardware implemented DMA processor moving a first data from a plurality of first locations to an internal memory within the one hardware DMA processor in response to an initial command;

retrieving a subset of the first data from the internal memory within the one hardware implemented DMA processor, the internal memory within the one hardware implemented DMA processor for temporary storage of the retrieved data;
storing the retrieved subset from the internal memory within the one hardware implemented DMA processor to a corresponding one of the plurality of second locations; and
storing the retrieved data from the internal memory to a location of the at least a third location simultaneously and by the same DMA process performed by the one hardware implemented DMA processor,
wherein the plurality of second locations forms a memory buffer having the first data duplicated therein and the at least a third location forms one of a memory buffer having the first data duplicated therein and a memory supporting inline processing of data provided therein.

US Pat. No. 10,216,644

MEMORY SYSTEM AND METHOD

Toshiba Memory Corporatio...

1. A memory system comprising:a first memory that is nonvolatile;
a second memory that includes a buffer; and
a memory controller configured to:
manage a logical address space by dividing the logical address space into a plurality of regions, each region including a fixed number of continuous logical addresses, the fixed number being an integer larger than one; and
manage a plurality of pieces of translation information, each piece of translation information correlating a physical address indicating a location in the first memory with a logical address,
wherein
the plurality of pieces of translation information includes a first plurality of pieces of translation information correlating the fixed number of physical addresses with the fixed number of continuous logical addresses included in one region of the plurality of regions,
in a case where the first plurality of pieces of translation information correspond to a second plurality of pieces of translation information, the second plurality of pieces of translation information linearly correlating a plurality of continuous physical addresses with a plurality of continuous logical addresses,
the memory controller caches first translation information correlating a first physical address with a first logical address among the first plurality of pieces of translation information in the buffer and does not cache second translation information correlating a second physical address with a second logical address among the first plurality of pieces of translation information in the buffer, and
in a case where the first plurality of pieces of translation information do not correspond to the second plurality of pieces of translation information,
the memory controller caches the first plurality of pieces of translation information in the buffer.
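The space optimization in this claim is a linearity test: if a region's translation entries map its continuous logical addresses to continuous physical addresses, one cached entry suffices to reconstruct the rest; otherwise every entry must be cached. A minimal sketch under that reading (names are hypothetical):

```python
def entries_to_cache(region_map, fixed_number):
    """region_map lists the physical addresses for a region's
    consecutive logical addresses. Return how many translation
    entries the buffer must cache for this region."""
    linear = all(region_map[i] + 1 == region_map[i + 1]
                 for i in range(len(region_map) - 1))
    # Linear region: cache one entry and derive the others by offset.
    # Non-linear region: cache all fixed_number entries.
    return 1 if linear else fixed_number
```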

US Pat. No. 10,216,643

OPTIMIZING PAGE TABLE MANIPULATIONS

INTERNATIONAL BUSINESS MA...

1. A computer program product for optimizing page table manipulations, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions being readable and executable by a processing circuit to cause the processing circuit to:create and maintain a translation table for translating direct memory access (DMA) addresses to real addresses with a translation look-aside buffer (TLB) disposed to cache priority translations;
update the translation table upon de-registration of a DMA address without issuance of a corresponding TLB invalidation instruction;
allocate entries in the translation table from low to high memory addresses during memory registration;
maintain a cursor for identifying where to search for available entries upon performance of a new registration;
advance the cursor from entry-to-entry in the translation table and wrap the cursor from an end of the translation table to a beginning of the translation table; and
issue a synchronous TLB invalidation instruction to invalidate an entirety of the TLB upon at least one wrapping of the cursor and an entry being identified and updated.
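The allocation discipline in this claim (low-to-high cursor, wrap to the beginning, deferred TLB invalidation amortized into one full flush per wrap) can be sketched as follows. This is an illustrative model, and it simplifies the claim's flush condition to "flush once at each wrap"; class and method names are mine, not IBM's.

```python
class TranslationTable:
    """Cursor-based entry allocator with wrap-triggered TLB flush.

    De-registration skips per-entry TLB invalidation; stale cached
    translations become unsafe only once the cursor wraps and freed
    entries may be reused, so one synchronous full flush per wrap
    suffices.
    """
    def __init__(self, size):
        self.entries = [None] * size
        self.cursor = 0
        self.tlb_flushes = 0

    def allocate(self, translation):
        for _ in range(len(self.entries)):
            i = self.cursor
            self.cursor += 1
            if self.cursor == len(self.entries):
                self.cursor = 0
                self.tlb_flushes += 1   # invalidate the entire TLB on wrap
            if self.entries[i] is None:
                self.entries[i] = translation
                return i
        raise MemoryError("translation table full")

    def deregister(self, i):
        # No TLB invalidation instruction issued here (deferred to wrap).
        self.entries[i] = None
```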

US Pat. No. 10,216,621

AUTOMATED DIAGNOSTIC TESTING OF DATABASES AND CONFIGURATIONS FOR PERFORMANCE ANALYTICS VISUALIZATION SOFTWARE

ServiceNow, Inc., Santa ...

1. A system for diagnostic testing of a performance analytics software application, wherein the system is disposed within a computational instance of a remote network management platform that remotely manages a managed network, the system comprising:a performance analytics database containing performance analytics data that define key performance indicators (KPIs) associated with the managed network and that define dashboards that are configured to specify, on a performance analytics graphical user interface (GUI) within the managed network, graphical representations of the KPIs;
a diagnostic database containing representations of a plurality of tests, the tests configured to determine whether the KPIs and the dashboards comply with pre-defined consistency, configuration, and performance rules; and
a computing device operational to execute a diagnostic software program, wherein the diagnostic software program is configured to:
obtain, from the diagnostic database, a representation of a particular test of the plurality of tests, wherein the particular test includes a plurality of the pre-defined consistency, configuration, and performance rules,
apply each of the plurality of the pre-defined consistency, configuration, and performance rules to the KPIs and the dashboards stored in the performance analytics database, and
write, when applying at least one of the plurality of the pre-defined consistency, configuration, and performance rules indicates a problem, an associated severity, problem description, and solution description to the diagnostic database as output of the particular test.

US Pat. No. 10,216,599

COMPREHENSIVE TESTING OF COMPUTER HARDWARE CONFIGURATIONS

International Business Ma...

1. A method for testing a computer, wherein the computer includes a plurality of hardware components, wherein the plurality of hardware components includes a plurality of processors, and wherein the method comprises:receiving a signal to determine resources of the computer to allocate to a program, wherein the program is executed by at least one processor included in the plurality of processors;
determining, in response to the signal, to allocate to the program an at least one first hardware component included in the plurality of hardware components;
detecting, in response to the signal, that the computer is operating in a test mode;
selecting, in response to the signal and based at least in part on the computer operating in the test mode, a subset of the plurality of hardware components included in the computer, wherein the subset includes hardware components associated with the at least one first hardware component, wherein the subset includes hardware components not presently allocated to the program, and wherein the subset comprises a number of hardware components no greater than a program limit; and
swapping an at least one second hardware component for an at least one third hardware component, wherein the at least one second hardware component is included in the subset of the plurality of hardware components included in the computer, wherein the at least one third hardware component is included in the plurality of hardware components included in the computer, the at least one third hardware component presently allocated to the program, and wherein the swapping of the at least one second hardware component for the at least one third hardware component comprises de-allocating, from the program, the at least one third hardware component and allocating, to the program, the at least one second hardware component.

US Pat. No. 10,216,598

METHOD FOR DIRTY-PAGE TRACKING AND FULL MEMORY MIRRORING REDUNDANCY IN A FAULT-TOLERANT SERVER

Stratus Technologies Berm...

1. A method of transferring memory from an active to a standby memory in an FT (Fault Tolerant) Server system comprising the steps of:Reserving a portion of memory using BIOS of the FT Server system;
Loading and initializing an FT Kernel Mode Driver into memory;
Loading and initializing an FT virtual machine Manager (FTVMM) including the Second Level Address Table (SLAT) into the reserved portion of memory and synchronizing all processors in the FTVMM;
Tracking the OS (Operating System), driver, software and Hypervisor memory accesses using the FTVMM's SLAT in Reserved Memory;
Tracking Guest VM (Virtual Machine) memory accesses by tracking all pages of the SLAT associated with the guest and intercepting the Hypervisor writes to memory pages that constitute the SLAT;
Entering Brownout—level 0, by executing a full memory copy while keeping track of the dirty bits (D-Bits) in the SLAT to track memory writes by all software in the FT Server;
Clearing all of the D-Bits in the FTVMM SLAT and each Guest's current SLAT;
Entering Brownout of phases 1-4 tracking all D-Bits by:
Collecting the D-Bits;
Invalidating all processors cached translations for the FTVMM SLAT and each current Guest's SLAT;
Copying the data of the modified memory pages from the active memory to the second Subsystem memory;
Entering Blackout by pausing all the processors in the FTVMM, except Processor #0, and disabling interrupts in the FT Driver for processor #0;
Copying the collected data from active to the mirror memory;
Collecting the final set of recently dirtied pages including stack and volatile pages;
Transferring the set of final pages to the mirror memory using an FPGA (Field Programmable Gate Array);
Using the FT Server FPGA and Firmware SMM (System Management Module) to enter the state of Mirrored Execution;
Completing the Blackout portion of the operation by terminating and unloading the FTVMM;
Returning control to the FT Kernel Mode Driver; and
Resuming normal FT System operation.
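The brownout phases above repeatedly copy only pages whose SLAT dirty bits (D-Bits) were set since the last pass, clearing the bits so the next pass captures only fresh writes. A minimal sketch of one such pass, with lists standing in for active memory, mirror memory, and the D-Bit array (names are hypothetical):

```python
def brownout_pass(active, mirror, dirty_bits):
    """Copy pages marked dirty from active to mirror memory, then
    clear their D-Bits so the following pass sees only new writes.
    Returns the list of page numbers copied in this pass."""
    copied = []
    for page, dirty in enumerate(dirty_bits):
        if dirty:
            mirror[page] = active[page]   # copy modified page data
            dirty_bits[page] = False      # clear the collected D-Bit
            copied.append(page)
    return copied
```

Successive passes shrink the dirty set until the short blackout phase can transfer the final stack and volatile pages with all processors paused.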

US Pat. No. 10,216,597

RECOVERING UNREADABLE DATA FOR A VAULTED VOLUME

NETAPP, INC., Sunnyvale,...

1. A method comprising:identifying a sector from a plurality of sectors in a physical memory of a storage system as an unreadable sector;
determining that a logical block address range of the unreadable sector matches a logical block address range of a copy of the sector identified as the unreadable sector that was previously uploaded to a cloud storage, wherein the copy of the sector stores readable data and a match indicates that the logical block address of the unreadable sector has not changed since the copy of the sector was previously uploaded to the cloud storage;
based on the determining, receiving the copy of the sector from the cloud storage; and
replacing the unreadable sector with the copy of the sector at a same location in the physical memory occupied by the unreadable sector.
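The recovery condition in this claim is that an unreadable sector may be overwritten in place with its cloud copy only when its logical block address range still matches the range recorded at upload time (i.e. the sector was not remapped since). A minimal sketch, with dictionaries keyed by LBA standing in for physical memory and cloud storage (names are illustrative):

```python
def recover_sectors(sectors, cloud_copies, is_readable):
    """Replace each unreadable sector with its previously uploaded
    cloud copy, at the same location, when an LBA match exists.
    Returns the LBAs that were recovered."""
    recovered = []
    for lba, data in sectors.items():
        if is_readable(data):
            continue                       # sector is fine; skip
        copy = cloud_copies.get(lba)       # LBA-range match check
        if copy is not None:
            sectors[lba] = copy            # same location in physical memory
            recovered.append(lba)
    return recovered
```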

US Pat. No. 10,216,596

FAST CONSISTENT WRITE IN A DISTRIBUTED SYSTEM

BiTMICRO Networks, Inc., ...

1. A system, comprising:a local server comprising a first peripheral component that is attached to a PCIe (Peripheral Component Interconnect Express) and that is configured to complete a write operation and a consistency operation by transmitting a derivative mirror write or log operation;
wherein the local server transmits the derivative mirror write or log operation in response to a request from a node to the local server;
wherein the request comprises a write from the node to the local server;
wherein the node comprises a host;
a remote server comprising a second peripheral component; and
wherein the write operation and consistency operation are completed without external software intervention in the remote server that is remote from the local server and the first peripheral component;
wherein the local server comprises a first network interface card (NIC);
wherein the first peripheral component comprises a first PCIe drive in the local server;
wherein the remote server comprises a second NIC;
wherein the second peripheral component comprises a second PCIe drive in the remote server;
wherein the first PCIe drive in the local server and the second PCIe drive in the remote server communicate with each other via the first NIC and the second NIC and via a network that is coupled to the first NIC and the second NIC;
wherein the local server transmits a final acknowledgement to the node to indicate a successful completion of the write operation and consistency operation;
wherein the local server transmits the final acknowledgement to the node in response to a derivative mirror write acknowledgement from the remote server to the local server or a log operation acknowledgement from the remote server to the local server and without the local server waiting for a CPU (central processing unit) acknowledgement from the remote server wherein the CPU acknowledgement indicates a completion of a remote write in the remote server;
wherein the remote server immediately transmits the derivative mirror write acknowledgement to the local server in response to the remote server receiving the derivative mirror write that is transmitted from the local server;
wherein the remote server immediately transmits the log write acknowledgement to the local server in response to the remote server receiving the log write that is transmitted from the local server;
wherein the local server comprises a first compute element and a first networking software that runs on the first compute element;
wherein the first compute element is configured to process requests from the node;
wherein the first networking software configures the local server to transmit the derivative mirror write or log operation to the remote server in response to the request from the node to the local server;
wherein the remote server comprises a second compute element and a second networking software that runs on the second compute element; and
wherein the second networking software configures the remote server for mirroring remote writes or for logging remote writes.

US Pat. No. 10,216,595

INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR THE INFORMATION PROCESSING APPARATUS, AND RECORDING MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An information processing apparatus that performs mirroring to store same data in a plurality of storage units, the information processing apparatus comprising:a memory of a main board, the memory being configured to store mirroring information including configuration information about the mirroring in the plurality of storage units;
a mirroring configuration unit configured to configure mirroring in the plurality of storage units based on the configuration information about the mirroring stored in the memory of a main board;
a memory of a sub board, the memory being configured to store the mirroring information;
a detection unit configured to detect replacement of the main board; and
a restoration unit configured to restore the mirroring information stored in the memory of the sub board in a memory of a replaced main board in accordance with detection of the replacement of the main board by the detection unit,
wherein the mirroring configuration unit is configured to configure mirroring in a mirroring state in accordance with the configuration information about the mirroring restored in the memory of the replaced main board being information indicating a mirror, and configure mirroring in a degraded state in accordance with the configuration information about the mirroring being information indicating degradation.

US Pat. No. 10,216,594

AUTOMATED STALLED PROCESS DETECTION AND RECOVERY

INTERNATIONAL BUSINESS MA...

1. A method for use in a dispersed storage network (DSN) including a plurality of storage units, the method comprising:detecting, by a processing unit included in the DSN, a failing storage unit;
issuing, by the processing unit, an error indicator to a recovery unit, to indicate the failing storage unit;
issuing, by the recovery unit, a test request to the failing storage unit;
determining, by the recovery unit, to implement a corrective action; and
facilitating, by the recovery unit, execution of the corrective action.

US Pat. No. 10,216,593

DISTRIBUTED PROCESSING SYSTEM FOR USE IN APPLICATION MIGRATION

HITACHI, LTD., Tokyo (JP...

1. A distributed processing system, comprising:a plurality of application servers; and
a management device,
wherein the application server includes an application portion and a distributed execution platform portion,
the application portion includes
an adapter unit that receives an execution request of an application from a client terminal, and transmits a message of a processing request of the application to the distributed execution platform portion, and
a processor unit that performs a process of the application in response to a request from the distributed execution platform,
the distributed execution platform portion includes
a dispatcher unit that holds the message transmitted from the adapter unit, and selects an application server that performs the process of the application requested through the message according to a routing strategy, and
a statistical information storage unit that holds statistical information of each process of the application by the application server, and
the management device includes
a migration management unit that manages a migration status of the application for every two or more application servers, and
a migration evaluating unit that decides a migration target server group based on performance information of the application server, statistical information of each process of the application, and the number of non-completed processes of each process calculated based on the number of messages held in the dispatcher unit for the application server in which the migration status is an old application operation state.

US Pat. No. 10,216,592

STORAGE SYSTEM AND A METHOD USED BY THE STORAGE SYSTEM

International Business Ma...

1. A computer program product for performing failover processing between a production host and a backup host, a storage system is connected to the production host and the backup host, the computer program product comprising:a computer readable non-transitory article of manufacture tangibly embodying computer readable instructions which, when executed, cause a computer to carry out a method comprising:
in response to a failure of the production host, obtaining metadata of data blocks that have already been cached from an elastic space located in a fast disk of the storage system, and expanding a storage capacity of the elastic space;
in response to a maximum storage capacity of the elastic space being less than a storage capacity of the data blocks to which the metadata corresponds, expanding the elastic space to its maximum capacity;
in response to the maximum storage capacity of the elastic space being more than the storage capacity of the data blocks to which the metadata corresponds, expanding the elastic space until the storage capacity of the elastic space at least is capable of storing the data blocks to which the metadata corresponds;
obtaining data blocks to which the metadata corresponds according to the metadata and the storage capacity of the expanded elastic space, and storing the same in the expanded elastic space; and
in response to the backup host requesting the data blocks to which the metadata corresponds and the data blocks to which the metadata corresponds have already been stored in the expanded elastic space, obtaining the data blocks to which the metadata corresponds from the expanded elastic space and transmitting the same to the backup host.

US Pat. No. 10,216,591

METHOD AND APPARATUS OF A PROFILING ALGORITHM TO QUICKLY DETECT FAULTY DISKS/HBA TO AVOID APPLICATION DISRUPTIONS AND HIGHER LATENCIES

EMC IP Holding Company LL...

1. A method for determining a faulty hardware component within a data storage system, comprising:collecting, by a processor, data relating to a plurality of input/output (IO) errors associated with a first storage processor within the data storage system, wherein the data storage system includes a plurality of disk array enclosures (DAEs), each DAE having one or more disk drives;
compiling, by the processor, IO error statistics based on the data relating to the plurality of IO errors, the IO error statistics being related to a first one of the DAEs of the data storage system; and
determining, by the processor, a faulty hardware component based on the IO error statistics, wherein the determining of the faulty hardware component comprises utilizing a second storage processor of the data storage system independent from the first storage processor, including examining IO access statistics of the second storage processor for accessing the first DAE through a different path, and
wherein the plurality of DAEs are connected to the first storage processor and the second storage processor through an independent first path and an independent second path, and each of the one or more disk drives has a first port connected to the first storage processor through the first path and a second port connected to the second storage processor through the second path.
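The dual-path idea above lends itself to a simple decision rule: if both storage processors see IO errors on a DAE over their independent paths, the shared component (drive/DAE) is suspect; if only one does, its path (HBA, cable) is. The threshold and return labels below are assumptions, not the patent's method.

```python
from collections import Counter

def classify_fault(errors_sp1, errors_sp2, dae, threshold=10):
    """Hypothetical decision rule in the spirit of the claim: compare IO
    error counts for one DAE as seen from two storage processors that
    reach it over independent paths."""
    e1, e2 = errors_sp1[dae], errors_sp2[dae]
    if e1 >= threshold and e2 >= threshold:
        return "disk/DAE"                # both independent paths fail
    if e1 >= threshold:
        return "path-1 (HBA/cable)"      # only the first path fails
    if e2 >= threshold:
        return "path-2 (HBA/cable)"
    return "healthy"

sp1 = Counter({"DAE0": 42})   # first SP logs many IO errors on DAE0
sp2 = Counter({"DAE0": 0})    # second SP reaches DAE0 cleanly via its own path
assert classify_fault(sp1, sp2, "DAE0") == "path-1 (HBA/cable)"
```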

US Pat. No. 10,216,590

COMMUNICATION CONTROL DETERMINATION OF STORING STATE BASED ON A REQUESTED DATA OPERATION AND A SCHEMA OF A TABLE THAT STORES THEREIN DATA TO BE OPERATED BY THE DATA OPERATION

KABUSHIKI KAISHA TOSHIBA,...

1. A communication control device to be connected to a plurality of server devices configured to store therein data in a distributed manner, the communication control device comprising:a determining unit configured to determine, based on a requested data operation and a schema of a table that stores therein data to be operated by the data operation, any one of a first operation method of storing state information indicating a state of the data operation and a second operation method of avoiding storing the state information as a method for the data operation;
a request unit configured to request the server devices to execute the data operation;
a storage control unit configured to store, when the first operation method is determined, the state information in the storage unit upon execution of the data operation; and
a recovery unit configured to recover, when a failure occurs upon the execution of the data operation, from the failure by a recovery method suited to the operation method determined by the determining unit.

US Pat. No. 10,216,589

SMART DATA REPLICATION RECOVERER

International Business Ma...

1. A computer system for data replication recovery in a heterogeneous environment comprising:one or more processors, one or more computer-readable storage devices, and a plurality of program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors, the plurality of program instructions comprising:
program instructions to receive, by a data replication recoverer (DRR) agent, one or more committed transaction records from a source agent, wherein the source agent is configured to receive the one or more committed transaction records from a source database;
program instructions to create, by the DRR agent, data and metadata records from the received one or more committed transaction records, wherein the metadata comprises a transaction identifier, a timestamp associated with a transaction, transaction statistics wherein the transaction statistics include a number of operations performed on the source database, and a number of rows processed on the source database, and program instructions to save the data and the metadata records in a data replication repository; and
in response to receiving a request to recover a target database, program instructions to selectively recover, by the DRR agent, the target database wherein the target database is recovered using either one or more individual transactions or a bookmark; and
wherein the program instructions to selectively recover the target database further comprises:
program instructions to locate the selected bookmark within the data replication repository associated with the target database;
program instructions to locate an earliest log position entry recorded in the bookmark and a last log position entry recorded in the bookmark in the metadata within the data replication repository associated with the target database;
program instructions to create a plurality of database operations to reverse the transactions recorded within the earliest log position entry and the last log position entry in the selected bookmark;
program instructions to send, by the DRR agent, the plurality of created database operations to a target agent on the target database, wherein the target agent executes the plurality of created database operations on the target database;
based on the created database operations completing on the target database, program instructions to notify the DRR agent, by the target agent; and
in response to receiving the notification, program instructions for the DRR agent to mark the data and metadata in the data replication repository as being recovered.
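The bookmark-driven reversal step can be sketched as below: select the metadata records between the bookmark's earliest and last log positions and emit compensating operations in reverse order. The record shape (`log_pos`, `op`, `row`, `before`) is an assumption for illustration.

```python
def reverse_operations(records, bookmark):
    """Sketch (assumed record shape) of building compensating database
    operations for all transactions recorded between a bookmark's earliest
    and last log positions, applied in reverse order."""
    lo, hi = bookmark["earliest"], bookmark["last"]
    inverse = {"INSERT": "DELETE", "DELETE": "INSERT", "UPDATE": "UPDATE"}
    ops = []
    for r in sorted((r for r in records if lo <= r["log_pos"] <= hi),
                    key=lambda r: r["log_pos"], reverse=True):
        # For UPDATE, reapply the before-image; for INSERT/DELETE, invert the verb.
        ops.append({"op": inverse[r["op"]],
                    "row": r["before"] if r["op"] == "UPDATE" else r["row"]})
    return ops

log = [
    {"log_pos": 5, "op": "INSERT", "row": {"id": 1}},
    {"log_pos": 7, "op": "UPDATE", "row": {"id": 1, "v": 2}, "before": {"id": 1, "v": 1}},
    {"log_pos": 9, "op": "DELETE", "row": {"id": 2}},
]
ops = reverse_operations(log, {"earliest": 5, "last": 9})
assert [o["op"] for o in ops] == ["INSERT", "UPDATE", "DELETE"]
assert ops[-1] == {"op": "DELETE", "row": {"id": 1}}
```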

US Pat. No. 10,216,588

DATABASE SYSTEM RECOVERY USING PRELIMINARY AND FINAL SLAVE NODE REPLAY POSITIONS

SAP SE, Walldorf (DE)

1. One or more non-transitory computer-readable storage media storing computer-executable instructions for causing a computing system to perform processing to carry out a database recovery at a slave database system node, the slave node in communication with a master node, the processing comprising:receiving a preliminary slave log backup position from a backup manager;
replaying at least a portion of one or more log backups until the preliminary slave log backup position is reached;
receiving a final slave log backup position; and
replaying at least a portion of one or more log backups until the final slave log backup position is reached.
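The two-phase replay above reduces to one replay primitive invoked twice, once to the preliminary position and once to the final position. Integer log positions here stand in for log-backup offsets; the function name is illustrative.

```python
def replay_to(log, position, target):
    """Replay log entries with positions in (position, target]; return the
    new replay position and the entries applied (positions are plain ints
    here, standing in for slave log-backup offsets)."""
    applied = [e for e in log if position < e <= target]
    return (max(applied) if applied else position), applied

# Two-phase recovery at the slave: replay to the preliminary position from
# the backup manager first, then continue once the final position arrives.
log = [1, 2, 3, 4, 5, 6]
pos, first = replay_to(log, 0, 4)     # preliminary slave log backup position
pos, second = replay_to(log, pos, 6)  # final slave log backup position
assert first == [1, 2, 3, 4] and second == [5, 6] and pos == 6
```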

US Pat. No. 10,216,587

SCALABLE FAULT TOLERANT SUPPORT IN A CONTAINERIZED ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method for providing failure tolerance to containerized applications by one or more processors, comprising:initializing a layered filesystem to maintain checkpoint information of stateful processes in separate and exclusive layers on individual containers;
transferring a most recent checkpoint layer from a main container exclusively to an additional node to maintain an additional, shadow container;
implementing a maintenance schedule for the main and shadow containers, including transferring additional checkpoint layers at regular intervals; and
organizing the most recent checkpoint layer and additional layers such that the most recent checkpoint layer is a topmost layer.

US Pat. No. 10,216,586

UNIFIED DATA LAYER BACKUP SYSTEM

International Business Ma...

1. A method of managing disaster recovery with a unified data layer, the method comprising:establishing a primary system at a first site, wherein the primary system includes both a primary instance of an application being hosted by the primary system and a primary database for data of the application, wherein the application is hosted for remote users that use the application to manage data of the primary database;
establishing a unified data layer for the primary system at a second site, wherein the unified data layer provides access to data of the primary database without providing access to the primary database, wherein the second site is remote from the first site;
detecting a triggering event of the primary system, wherein the triggering event impairs the ability of the primary system to host the application;
instantiating a recovery system in response to detecting the triggering event, wherein the recovery system includes both a recovery instance of the application and a recovery database for the data of the application, wherein the recovery system is not located at the first site;
populating the recovery database using the unified data layer;
activating the recovery system, wherein the activating the recovery system includes allowing the remote users to access the recovery instance of the application to manage data of the recovery database;
detecting availability of the primary system to host the application without impairment;
updating the primary database using the unified data layer in response to detecting the availability of the primary system;
deactivating the recovery system at a second point in time, wherein the second point in time occurs immediately before the activating of the primary system and is prior to a first point in time; and
activating the primary system, wherein activating the primary system includes allowing the remote users to access the primary instance of the application to manage data of the primary database.

US Pat. No. 10,216,585

ENABLING DISK IMAGE OPERATIONS IN CONJUNCTION WITH SNAPSHOT LOCKING

Red Hat Israel, Ltd., Ra...

1. A method comprising:attaching a first snapshot to a first virtual machine, the first snapshot being stored within a disk image, wherein the first snapshot is a copy of a virtual disk at a first point in time;
generating, in view of the first snapshot, a second snapshot while the first snapshot is attached to the first virtual machine, wherein the second snapshot is a copy of the virtual disk at a second point in time;
attaching the second snapshot to a second virtual machine while the first snapshot is attached to the first virtual machine; and
causing, by a processing device, the second snapshot to be locked in view of the second virtual machine performing one or more operations on the second snapshot,
wherein the first virtual machine performs one or more operations on the first snapshot concurrent with the second virtual machine performing one or more operations on the second snapshot while the second snapshot is locked.

US Pat. No. 10,216,584

RECOVERY LOG ANALYTICS WITH A BIG DATA MANAGEMENT PLATFORM

International Business Ma...

1. A computer-implemented method for replicating relational transactional log data to a big data platform, comprising:fetching, using a processor of a computer, change records contained in change data tables;
rebuilding a relational change history with transaction snapshot consistency to generate consistent change records by joining the change data tables and a unit of work table based on a commit sequence identifier, wherein the rebuilding is performed by one of a relational database management system and the big data platform; and
storing the consistent change records on the big data platform, wherein queries are answered on the big data platform using the consistent change records.
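The join step of this claim can be sketched in a few lines: keep only change records whose commit sequence identifier appears in the unit-of-work (committed transactions) table, ordered by commit sequence. The column names are assumptions for illustration.

```python
def consistent_changes(change_rows, unit_of_work):
    """Sketch of rebuilding a transaction-consistent change history: join
    change-data rows with the unit-of-work table on the commit sequence
    identifier, dropping uncommitted changes and tagging each survivor
    with its transaction id."""
    committed = {u["commit_seq"]: u["tx_id"] for u in unit_of_work}
    joined = [dict(c, tx_id=committed[c["commit_seq"]])
              for c in change_rows if c["commit_seq"] in committed]
    return sorted(joined, key=lambda r: r["commit_seq"])

changes = [{"commit_seq": 2, "row": "b"}, {"commit_seq": 1, "row": "a"},
           {"commit_seq": 9, "row": "uncommitted"}]
uow = [{"commit_seq": 1, "tx_id": "T1"}, {"commit_seq": 2, "tx_id": "T2"}]
out = consistent_changes(changes, uow)
assert [r["row"] for r in out] == ["a", "b"]   # in-flight change dropped
assert out[0]["tx_id"] == "T1"
```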

US Pat. No. 10,216,583

SYSTEMS AND METHODS FOR DATA PROTECTION USING CLOUD-BASED SNAPSHOTS

Veritas Technologies LLC,...

1. A computer-implemented method for data protection using cloud-based snapshots, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:identifying a request to back up an information asset hosted by a cloud-based platform;
discovering, in response to the request, a plurality of snapshots taken at the cloud-based platform, wherein at least some of the plurality of snapshots store data underlying the information asset but do not provide a consistent image of the information asset;
determining that a snapshot subset of the plurality of snapshots provides data sufficient to produce the consistent image of the information asset by iteratively attempting to recover, within a rehearsal environment, the consistent image of the information asset from each snapshot within the plurality of snapshots until encountering at least one snapshot that is sufficient to recover the consistent image;
performing a backup that provides the consistent image of the information asset from the snapshot subset based on a successful attempt to recover the consistent image of the information asset from the snapshot subset within the rehearsal environment.
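The iterative rehearsal step above is essentially a first-match search over the discovered snapshots. In this sketch, `try_recover` is a stand-in for spinning up the rehearsal environment and testing whether a snapshot yields a consistent image.

```python
def find_consistent_subset(snapshots, try_recover):
    """Sketch of the rehearsal loop: attempt recovery from each snapshot in
    turn inside an isolated rehearsal environment until one yields a
    consistent image of the information asset."""
    for snap in snapshots:
        if try_recover(snap):   # recovery rehearsal succeeded
            return snap
    return None                 # no snapshot gave a consistent image

# Only the third snapshot captures a consistent image of the asset.
snaps = ["s1", "s2", "s3"]
assert find_consistent_subset(snaps, lambda s: s == "s3") == "s3"
```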

US Pat. No. 10,216,582

RECOVERY LOG ANALYTICS WITH A BIG DATA MANAGEMENT PLATFORM

International Business Ma...

1. A computer program product for replicating relational transactional log data to a big data platform, the computer program product comprising a non-transitory computer readable storage medium having program code embodied therewith, the program code executable by at least one processor to perform operations comprising:fetching change records contained in change data tables;
rebuilding a relational change history with transaction snapshot consistency to generate consistent change records by joining the change data tables and a unit of work table based on a commit sequence identifier, wherein the rebuilding is performed by one of a relational database management system and the big data platform; and
storing the consistent change records on the big data platform, wherein queries are answered on the big data platform using the consistent change records.

US Pat. No. 10,216,581

AUTOMATED DATA RECOVERY FROM REMOTE DATA OBJECT REPLICAS

International Business Ma...

1. A method for recovering data objects in a distributed data storage system, the method comprising:storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network, wherein a first data center of the one or more data centers includes a first cluster of the one or more clusters and the first cluster includes a plurality of compute nodes, the first cluster further includes a database that stores metadata concerning a replica of each data object of each of the plurality of compute nodes of the first cluster, a first compute node of the plurality of compute nodes includes the first data object, and wherein an availability status of the one or more replicas is maintained in the database at the first cluster per replica, and wherein the availability status is not maintained at the first compute node;
recording health information metadata about said one or more replicas within the database, wherein the health information comprises data about availability of a replica to participate in a restoration process;
determining that the first data object is faulty when at least one replica of the first data object is determined to be lost or damaged;
in response to determining that the first data object is faulty, determining that the first data object is to be recovered when a number of replicas of the first data object that are damaged or lost exceeds a threshold number of replicas;
in response to determining that the first data object is to be recovered, calculating a query-priority for the first data object;
querying, based on the calculated query-priority, the health information metadata within the database for the one or more replicas to determine which of the one or more replicas is available for restoration of the first data object;
calculating a restoration-priority for the first data object based on the health information metadata for the one or more replicas; and
restoring the first data object from the one or more of the available replicas, based on the calculated restoration-priority and based on the availability status, wherein the restoration-priority is calculated based on a priority function P(D)=Func(N(D),C(D),n), where:
D represents a data object with multiple replicas in multiple clusters;
N(D) represents number of remote replicas for which H(D)i is available;
C(D) represents cost of losing N replicas of D;
P(D) represents priority given by the system for the restoration operation of D; and
Func( ) represents some function.
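The claim leaves Func( ) open, requiring only that restoration priority be some function of the replica count, the cost of loss, and n. One plausible instance, purely an assumption for illustration, weights the cost of losing the object by the fraction of its replicas that are missing:

```python
def restoration_priority(n_available, cost_of_loss, n_total):
    """One plausible instance of P(D) = Func(N(D), C(D), n): priority grows
    with the cost of losing D and with how few healthy replicas remain.
    The concrete formula is an assumption; the claim only requires *some*
    function of these inputs."""
    missing_fraction = 1 - n_available / n_total
    return cost_of_loss * missing_fraction

# An object with 1 of 3 replicas healthy outranks one with 2 of 3 healthy.
assert restoration_priority(1, 10.0, 3) > restoration_priority(2, 10.0, 3)
```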

US Pat. No. 10,216,580

SYSTEM AND METHOD FOR MAINFRAME COMPUTERS BACKUP AND RESTORE ON OBJECT STORAGE SYSTEMS

MODEL9 SOFTWARE LTD., Ki...

1. A computer-implemented method comprising:receiving a request for backing up a data set from a mainframe onto an object storage;
splitting the data set into a multiplicity of chunks, each chunk of the multiplicity of chunks having a predetermined size;
creating a mapping object;
repeating for each chunk:
allocating a sender thread to the chunk;
transmitting, by the sender thread using an object storage application programming interface (API), the chunk having the predetermined size to the object storage, to be stored as an object; and
updating the mapping object with details of the chunk;
subject to the data set being fully split and no more chunks to be transmitted, transmitting the mapping object to the object storage by the sender thread; and
writing an identifier of the data set and meta data including an object name of the mapping object, to a database stored in a storage device of the mainframe.
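The split/map/write flow of this claim can be sketched as below. Chunk naming by content hash, the JSON mapping object, and the tiny chunk size are all assumptions for illustration; the per-chunk sender threads of the claim are omitted for brevity.

```python
import hashlib
import json

CHUNK = 4  # predetermined chunk size (bytes), deliberately tiny here

def backup_data_set(name, payload, object_store):
    """Sketch of the split-and-map flow: each fixed-size chunk becomes one
    object, a mapping object records chunk order, and the mapping object's
    name is what the mainframe-side database would remember."""
    mapping = {"data_set": name, "chunks": []}
    for i in range(0, len(payload), CHUNK):
        chunk = payload[i:i + CHUNK]
        key = hashlib.sha256(chunk).hexdigest()[:16]
        object_store[key] = chunk             # PUT via object-storage API
        mapping["chunks"].append(key)         # update mapping with chunk details
    map_name = f"{name}.map"                  # transmitted once splitting is done
    object_store[map_name] = json.dumps(mapping).encode()
    return map_name                           # stored in the mainframe database

store = {}
map_obj = backup_data_set("PROD.DATA", b"ABCDEFGHIJ", store)
mapping = json.loads(store[map_obj])
restored = b"".join(store[k] for k in mapping["chunks"])
assert restored == b"ABCDEFGHIJ"
```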

US Pat. No. 10,216,579

APPARATUS, SYSTEM AND METHOD FOR DATA COLLECTION, IMPORT AND MODELING

International Business Ma...

1. A computer program product for data analysis of a backup system, the computer program product comprising:one or more non-transitory computer-readable storage media and program instructions stored on the one or more non-transitory computer-readable storage media, the program instructions comprising:
program instructions to generate a dump file for each of a plurality of backup servers, each dump file comprising configuration and state information about each of the plurality of backup servers in a native format used by each of the plurality of backup servers on which data is stored, wherein the backup servers backup the data from a primary storage layer to a common media layer;
program instructions to extract a first predetermined configuration and state information from the respective dump files of the plurality of backup servers, the first predetermined configuration and state information being in different formats based on the dump file from which it was extracted;
program instructions to translate the first predetermined configuration and state information from the format used by each of the plurality of backup servers into a normalized format, wherein the translated first configuration and state information comprises configuration and state information irrespective of which of the plurality of backup servers from which it was generated;
program instructions to store the translated first configuration and state information in a single database;
program instructions to generate a dump file for each of a plurality of different computer systems of the primary storage layer, each dump file comprising configuration and state information about each of the plurality of computer systems in a format used by each of the plurality of computer systems on which the data is stored, wherein the plurality of computer systems include server computers, desktop computers, and laptop computers which are physically located across various sites and use hardware and software from different vendors;
program instructions to extract a second predetermined configuration and state information from the respective dump files of the plurality of different computer systems, the second predetermined configuration and state information being in different formats based on the dump file from which it was extracted;
program instructions to translate the second predetermined configuration and state information from the format used by each of the plurality of computer systems into a normalized format, wherein the translated second configuration and state information comprises configuration and state information irrespective of which of the plurality of computer systems from which it was generated;
program instructions to store the translated second configuration and state information in the single database; and
program instructions to determine what components are in the backup system, how the backup system works, how data is stored in the backup system, how efficiently data is stored in the backup system, a total capacity of the backup system, a remaining capacity of the backup system, and an operating cost of the backup system by analyzing the normalized first and second configuration and state information stored in the single database.

US Pat. No. 10,216,578

DATA STORAGE DEVICE FOR INCREASING LIFETIME AND RAID SYSTEM INCLUDING THE SAME

Samsung Electronics Co., ...

1. A data storage device comprising:a nonvolatile memory arranged according to drives and stripes;
a buffer configured to store state information of each of the stripes; and
a memory controller comprising a redundant array of independent disks (RAID) controller configured to operate in a spare region mode and perform data recovery using garbage collection based on the state information,
wherein the state information includes at least one of a first state indicating that none of the drives has malfunctioned, a second state indicating that a first drive among the drives has malfunctioned, and a third state indicating that data/parity stored in the first drive has been recovered,
wherein upon detecting malfunction of the first drive, the RAID controller is further configured to change the state information from the first state to the second state, and
wherein the RAID controller is further configured to recover the data/parity stored in the first drive to a spare region of the nonvolatile memory and change state information of a stripe among the stripes including the spare region from the second state to the third state, and to move the recovered data/parity while performing the garbage collection from the spare region to a predetermined drive, and change the state information of the stripe including the spare region from the third state to the first state.
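The three stripe states and their transitions form a small cycle, which this sketch models directly. The state names and method names are illustrative; the claim defines only the states and the events that move between them.

```python
FIRST, SECOND, THIRD = "no-fault", "drive-failed", "recovered-to-spare"

class StripeState:
    """Sketch of the claim's per-stripe state machine: a drive fault moves
    the stripe to the second state, rebuilding data/parity into the spare
    region moves it to the third, and garbage collection that migrates the
    recovered data to a predetermined drive returns it to the first."""
    def __init__(self):
        self.state = FIRST
    def drive_failed(self):
        assert self.state == FIRST
        self.state = SECOND
    def rebuilt_to_spare(self):
        assert self.state == SECOND
        self.state = THIRD
    def gc_moved_to_drive(self):
        assert self.state == THIRD
        self.state = FIRST

s = StripeState()
s.drive_failed()
s.rebuilt_to_spare()
s.gc_moved_to_drive()
assert s.state == FIRST
```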

US Pat. No. 10,216,577

MULTICAST RAID: DISTRIBUTED PARITY PROTECTION

Nexenta Systems, Inc., S...

1. A method for a storage server to create a parity protection conglomerate protecting a received chunk, comprising:generating a manifest within a parity protection conglomerate, wherein the manifest enumerates: a set of chunks protected by the parity protection conglomerate, including the received chunk; a previously-generated unique chunk identifier for each chunk in the set of chunks; and a failure domain where the primary whole replica of that chunk should be stored, wherein a selection of the parity protection conglomerate by the storage server is constrained such that the created parity protection conglomerate references only chunks contained in failure domains enumerated in an eligibility set specified with a put message;
generating a payload portion of a parity protection conglomerate, wherein the payload portion comprises a Galois transformation of a payload portion of each chunk within the set of chunks, thereby protecting the received chunk as a protected chunk;
updating a local index to map a parity protection conglomerate identifier (PPCID) to the previously-generated unique chunk identifier of the parity protection conglomerate;
generating a protection index entry to map a chunk identifier of the received chunk to the PPCID; and
reducing an eligibility set associated with the PPCID to exclude all failure domains that were not contained in the eligibility set specified for the received chunk.
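The payload step above admits a simple concrete instance: XOR is addition in GF(2), the simplest Galois-field transformation the claim's language covers (the patent permits richer GF arithmetic). In this sketch, any one missing chunk can be rebuilt from the parity payload plus the surviving chunks; the manifest ids are illustrative.

```python
from functools import reduce

def parity_conglomerate(chunks):
    """Sketch of the payload portion using XOR, i.e. GF(2) addition: the
    parity payload is the column-wise XOR of the protected chunks, and the
    manifest enumerates the chunks it protects (ids are illustrative)."""
    width = max(len(c) for c in chunks)
    padded = [c.ljust(width, b"\0") for c in chunks]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*padded))
    manifest = [f"chunk-{i}" for i in range(len(chunks))]
    return {"manifest": manifest, "payload": parity}

chunks = [b"alpha", b"beta\0", b"gamma"]
ppc = parity_conglomerate(chunks)
# Rebuild chunk 1 by XORing the parity with the other protected chunks.
rebuilt = bytes(p ^ a ^ c for p, a, c in zip(ppc["payload"], b"alpha", b"gamma"))
assert rebuilt == b"beta\0"
```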

US Pat. No. 10,216,576

ENCODING DATA UTILIZING A ZERO INFORMATION GAIN FUNCTION

International Business Ma...

1. A method for execution by a processing module of a computing device of a dispersed storage network (DSN), the method comprises:dispersed storage error encoding, by the processing module and in accordance with distributed data storage parameters, a data segment to produce a set of encoded data slices and a set of zero information gain (ZIG) encoded data slices, wherein the set of encoded data slices is encoded in accordance with a dispersed storage error encoding scheme and wherein the set of ZIG encoded data slices is encoded using a ZIG function and further wherein a first ZIG encoded data slice of the set of ZIG encoded data slices is generated by matrix multiplying a first decoding matrix and a first encoded data slice of the set of encoded data slices, wherein generating the first ZIG encoded data slice includes generating a first partial encoded data slice based on the first decoding matrix, the first encoded data slice and a row of the encoding matrix corresponding to the first encoded data slice, generating a second partial encoded data slice based on a second decoding matrix, a second encoded data slice and a row of the encoding matrix corresponding to the second encoded data slice, wherein the second encoded data slice is an encoded data slice of the set of encoded data slices and not included in the subset of encoded data slices, and combining the first and second partial encoded data slices to produce the first ZIG encoded data slice;
selecting, by the processing module, a first subset of encoded data slices from the set of encoded data slices, wherein the first subset of encoded data slices includes less than a threshold number of encoded data slices, and further wherein the threshold number of encoded data slices is required to recreate the data segment;
sending, by the processing module via an interface of the computing device, the first subset of encoded data slices to a first memory within the DSN for storage therein; and
sending, by the processing module via the interface, the set of ZIG encoded data slices to a second memory within the DSN for storage therein.

US Pat. No. 10,216,575

DATA CODING

SanDisk Technologies LLC,...

1. A data storage device comprising:an encoder coupled to a memory controller and configured to receive input data and to map, based on a frequency of occurrence of groups of bits included in the input data, at least one input group of bits of the input data to generate output data including at least one output group of bits, wherein each input group of bits of the at least one input group of bits has the same number of bits as each corresponding output group of bits of the at least one output group of bits, wherein a first input group of bits occurs more frequently in the input data than a second input group of bits; and
a memory including multiple storage elements, each storage element of the multiple storage elements configured to be programmed to a voltage state corresponding to an output group of bits of the at least one output group of bits associated with the storage element, wherein a first voltage state corresponding to a first output group of bits mapped from the first input group of bits is lower than a second voltage state corresponding to a second output group of bits mapped from the second input group of bits.
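The encoder's core idea, mapping frequent bit-groups to lower voltage states while preserving group width, can be sketched with a frequency-ranked substitution table. Treating state "00" as the lowest voltage state is an assumption for illustration.

```python
from collections import Counter

def build_mapping(data, group_bits=2):
    """Sketch of the claim's encoder idea: count how often each bit-group
    occurs in the input and map more frequent groups to lower voltage
    states, keeping the group width unchanged."""
    groups = [data[i:i + group_bits] for i in range(0, len(data), group_bits)]
    by_freq = [g for g, _ in Counter(groups).most_common()]
    # All possible groups: observed ones first (most frequent first),
    # then any groups that never occurred in the input.
    universe = sorted({format(v, f"0{group_bits}b")
                       for v in range(2 ** group_bits)})
    order = by_freq + [g for g in universe if g not in by_freq]
    low_voltage_codes = [format(v, f"0{group_bits}b")
                         for v in range(2 ** group_bits)]
    return dict(zip(order, low_voltage_codes))

bits = "11" * 6 + "01" * 3 + "10"   # "11" dominates the input
m = build_mapping(bits)
assert m["11"] == "00"              # most frequent group -> lowest state
```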

US Pat. No. 10,216,574

ADAPTIVE ERROR CORRECTION CODES FOR DATA STORAGE SYSTEMS

WESTERN DIGITAL TECHNOLOG...

1. A data storage system, comprising:a non-volatile memory array comprising a plurality of memory pages; and
a controller configured to:
access coding parameters used to encode user data and parity data to be stored in the plurality of memory pages;
encode, using the coding parameters, first user data and first parity data as a first data unit;
store the first data unit in the plurality of memory pages;
decode, using the coding parameters, the first data unit retrieved from the plurality of memory pages;
detect a first number of bit errors encountered during decoding the first data unit retrieved from the plurality of memory pages;
in response to determining that the first number of bit errors exceeds a first threshold, adjust the coding parameters to increase an amount of parity data per total data that is included in data units subsequently encoded and stored in the plurality of memory pages; and
encode, using the adjusted coding parameters, second user data and second parity data as a second data unit to be stored in the plurality of memory pages.

US Pat. No. 10,216,573

METHOD OF OPERATING A MEMORY DEVICE

Infineon Technologies AG,...

1. A method of correcting and/or detecting an error in a memory device, the method comprising:in a first operations mode, applying a first code to detect and/or correct an error; and
in a second operations mode after an inactive mode and before entering the first operations mode, applying a second code for correcting and/or detecting an error, wherein the first code and the second code have different code words.

US Pat. No. 10,216,572

FLASH CHANNEL CALIBRATION WITH MULTIPLE LOOKUP TABLES

NGD Systems, Inc., Invin...

1. A method, comprising:calibrating a flash memory; and
performing an adaptive multi-read operation based on the calibration,
the calibrating comprising:
performing a first read operation on a first plurality of flash memory cells, at a first word line voltage, to form a first raw data word;
performing a second read operation on the first plurality of flash memory cells, at a second word line voltage, to form a second raw data word;
executing a first error correction code decoding attempt with a first set of one or more raw data words including the first raw data word;
determining that the first error correction code decoding attempt has succeeded; and
based on the determining that the first error correction code decoding attempt has succeeded:
generating, from bit differences between the first set of one or more raw data words and one or more corresponding decoded data words generated by the first error correction code decoding attempt, a first lookup table including: a first log likelihood ratio corresponding to a first range of word line voltages and a second log likelihood ratio corresponding to a second range of word line voltages; and
generating, from bit differences between a second set of two or more raw data words and two or more corresponding decoded data words generated by the first error correction code decoding attempt, a second lookup table including: a third log likelihood ratio corresponding to a third range of word line voltages; a fourth log likelihood ratio corresponding to a fourth range of word line voltages; and a fifth log likelihood ratio corresponding to a fifth range of word line voltages.
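Turning calibration bit differences into a log likelihood ratio can be sketched as follows: estimate the raw-read flip probability p from disagreements between a raw data word and its ECC-decoded counterpart, then take LLR = ln((1-p)/p). The half-count smoothing is an assumption to avoid log(0); the claim itself does not prescribe a formula.

```python
import math

def llr_from_bit_diffs(raw_bits, decoded_bits):
    """Sketch of one LLR-table entry: estimate the flip probability from
    bit differences between the raw read and the ECC-decoded truth, then
    convert it to a log likelihood ratio."""
    n = len(raw_bits)
    flips = sum(r != d for r, d in zip(raw_bits, decoded_bits))
    p = (flips + 0.5) / (n + 1.0)       # smoothed flip-probability estimate
    return math.log((1 - p) / p)

clean = llr_from_bit_diffs([0, 1, 1, 0] * 25, [0, 1, 1, 0] * 25)
noisy = llr_from_bit_diffs([0, 1, 1, 0] * 25, [1, 1, 1, 0] * 25)
assert clean > noisy > 0               # fewer flips -> higher-confidence LLR
```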

US Pat. No. 10,216,571

SYSTEM AND METHODOLOGY FOR ERROR MANAGEMENT WITHIN A SHARED NON-VOLATILE MEMORY ARCHITECTURE USING BLOOM FILTERS

WESTERN DIGITAL TECHNOLOG...

1. A system, comprising:a non-volatile memory (NVM) component configured to store data in an NVM array;
an error tracking table (ETT) component configured to store error correction vector (ECV) information associated with the NVM array, wherein the ETT component is within one of a dynamic random access memory (DRAM) or a second NVM component;
a controller configured to perform a parallel query of the NVM array and the ETT component, wherein the parallel query includes a query of the NVM array that yields a readout of the NVM array and a query of the ETT component that yields a construction of an ECV corresponding to the readout of the NVM array; and
at least one Bloom filter configured to predict at least one subset of ETT component entries in which at least one of the ETT component entries corresponds to a reporting of an error in the NVM array.
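A minimal Bloom filter in the role this claim assigns it: before consulting the error tracking table, ask the filter whether an NVM address might have a recorded error. False positives are possible (a wasted ETT lookup), false negatives are not. The parameters and hashing scheme below are illustrative.

```python
import hashlib

class Bloom:
    """Minimal Bloom filter sketch: k hash positions per key over an m-bit
    array, with membership meaning 'possibly present'."""
    def __init__(self, m=256, k=3):
        self.bits, self.m, self.k = 0, m, k
    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m
    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p
    def might_contain(self, key):
        # All k bits set -> the key *may* have an ETT entry; any bit
        # clear -> it definitely does not (no false negatives).
        return all(self.bits >> p & 1 for p in self._positions(key))

bf = Bloom()
bf.add("addr:0x1f00")                  # an address with a tracked error
assert bf.might_contain("addr:0x1f00")
```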

US Pat. No. 10,216,570

MEMORY DEVICE AND CONTROL METHOD THEREOF

Winbond Electronics Corpo...

1. A memory device, comprising:a memory block including a plurality of sectors and a plurality of refresh units, each refresh unit including at least one of the plurality of sectors; and
a control unit configured to:
pre-store a plurality of first indicators in a storage unit, the plurality of first indicators respectively corresponding to the plurality of refresh units in the memory block, and each one of the plurality of first indicators being generated based on data obtained by reading a corresponding one of the plurality of refresh units with a first reference voltage level; and
in an erase cycle for erasing a target sector of the plurality of sectors in the memory block:
select one of the plurality of refresh units;
read data from the selected refresh unit with a second reference voltage level different from the first reference voltage level;
generate a second indicator for the selected refresh unit based on the data read from the selected refresh unit with the second reference voltage level;
compare one of the plurality of first indicators that corresponds to the selected refresh unit with the second indicator of the selected refresh unit;
if the second indicator of the selected refresh unit is not equal to the one of the plurality of first indicators that corresponds to the selected refresh unit, refresh data in the selected refresh unit; and
if the second indicator of the selected refresh unit is equal to the one of the plurality of first indicators that corresponds to the selected refresh unit, select a next one of the plurality of refresh units.
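The per-unit check in this claim reduces to: recompute the unit's indicator at a second reference voltage and refresh only when it no longer matches the pre-stored first indicator. Using a simple byte-sum checksum as the indicator is an assumption; the claim only requires an indicator derived from the data read at each reference voltage.

```python
def maybe_refresh(first_indicators, unit_id, read_with_shifted_vref, refresh):
    """Sketch of the erase-cycle check: compare a second indicator, built
    from a read at the second reference voltage, against the pre-stored
    first indicator, refreshing the unit only on mismatch."""
    second = sum(read_with_shifted_vref(unit_id)) & 0xFF
    if second != first_indicators[unit_id]:
        refresh(unit_id)
        return True
    return False    # indicators match: move on to the next refresh unit

refreshed = []
first = {0: sum([1, 2, 3]) & 0xFF}     # indicator pre-stored at first vref
# Drifted cells read back differently at the second reference voltage.
assert maybe_refresh(first, 0, lambda u: [1, 2, 4], refreshed.append) is True
assert refreshed == [0]
```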

US Pat. No. 10,216,569

SYSTEMS AND METHODS FOR ADAPTIVE DATA STORAGE

FIO Semiconductor Technol...

1. A method, comprising:managing, via a storage module, storage operations for a solid-state storage array;
queuing storage requests for the solid-state storage array in an ordered request buffer;
reordering the storage requests in the ordered request buffer, wherein the storage module comprises:
a logical-to-physical translation layer; and
the ordered request buffer, wherein the ordered request buffer is configured to receive storage requests from one or more store clients, and to buffer storage requests received via a bus;
generating, via an error-correcting code write module, an error-correcting code codeword comprising data for storage on the solid-state storage array, wherein the error-correcting code codeword is used to detect errors in data read from the solid-state storage array, correct errors in data read from the solid-state storage array, or a combination thereof;
generating, via a write module, data rows for storage within columns of the solid-state storage array, wherein each of the data rows comprises data of two or more different error-correcting code codewords;
generating, via a parity module, respective parity data for each of the data rows; and
reconstructing, via a data reconstruction module, an uncorrectable error-correcting code codeword of the two or more different error-correcting code codewords by accessing data rows and the respective parity data comprising the two or more different error-correcting code codewords.
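The row/parity layout in the claim, where each row interleaves chunks of two or more codewords and carries its own parity, permits RAID-style reconstruction of a lost column. A simplified sketch (one parity chunk per row, single column failure assumed):

```python
def xor_bytes(chunks):
    """Bytewise XOR of equal-length byte strings."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def reconstruct_column(rows, parities, lost_col):
    """Recover the chunk at `lost_col` in every row from the surviving
    chunks and the per-row parity. Because a codeword spans one column
    across rows, this rebuilds the uncorrectable codeword (a
    simplification of the claim's data reconstruction module)."""
    recovered = []
    for row, p in zip(rows, parities):
        survivors = [c for j, c in enumerate(row) if j != lost_col]
        recovered.append(xor_bytes(survivors + [p]))
    return recovered

# Two rows, three columns; column 1 holds pieces of one codeword.
rows = [[b'\x01', b'\x02', b'\x03'], [b'\x04', b'\x05', b'\x06']]
parities = [xor_bytes(r) for r in rows]
assert reconstruct_column(rows, parities, 1) == [b'\x02', b'\x05']
```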

US Pat. No. 10,216,568

LIVE PARTITION MOBILITY ENABLED HARDWARE ACCELERATOR ADDRESS TRANSLATION FAULT RESOLUTION

International Business Ma...

1. A method for memory translation fault resolution between a processing core and a hardware accelerator, the method comprising:configuring a first table having an entity identifier associated with an effective address for an entity, the first table operatively coupled to an operating system;
forwarding an operation from the processing core to a first buffer associated with the hardware accelerator;
determining at least one memory address translation related to the operation having a fault;
flushing the operation and the fault memory address translation from the hardware accelerator, including augmenting the operation with the entity identifier;
forwarding the operation with the fault memory address translation, including the entity identifier, from the hardware accelerator to a second buffer, the second buffer operatively coupled to an element selected from the group consisting of: a hypervisor and the operating system;
repairing the fault memory address translation, including an interruption of execution of the operating system;
sending the operation with the repaired memory address translation to the processing core utilizing the effective address for the entity based on the first table and the entity identifier within the fault memory address translation;
forwarding the operation with the repaired memory address translation from the second buffer to the first buffer supported by the processing core; and
executing the operation with the repaired memory address translation.

US Pat. No. 10,216,567

DIRECT PARITY ENCODER

Avago Technologies Intern...

1. An encoder for wireless local area networking (WLAN) communication, comprising:a processor configured to divide a generator matrix into a first portion and a second portion, the second portion of the generator matrix including an array of sub-blocks, the array of sub-blocks arranged in rows and columns, each row including M number of sub-blocks and each column including J number of sub-blocks, wherein M and J are integers, each sub-block including Z number of rows and Z number of columns, wherein Z is an integer, a sub-block of the array of sub-blocks including (i) a first set of elements circularly shifted from an identity matrix by a first amount, and (ii) a second set of elements circularly shifted from the identity matrix by a second amount; and
parity bit generation circuitry coupled to the processor, the parity bit generation circuitry configured to generate parity bits according to the array of sub-blocks, the parity bit generation circuitry including:
bit permutation circuitry,
Z number of XOR devices coupled to the bit permutation circuitry,
M sets of storage registers, each set of the M sets of storage registers including Z number of storage registers, each of the Z number of storage registers coupled to a corresponding XOR device of the Z number of XOR devices, and
control circuitry coupled to the bit permutation circuitry and the M sets of storage registers, the control circuitry configured to:
cause the bit permutation circuitry to generate Z number of first bits according to Z number of input bits and the first amount, the Z number of first bits equal to the Z number of input bits when circularly shifted according to the first amount, each of the Z number of first bits provided as input to a corresponding XOR device of the Z number of XOR devices,
cause each storage register of a first set of the M sets of storage registers to store an output of the corresponding XOR device of the Z number of XOR devices,
cause the bit permutation circuitry to generate Z number of second bits according to the Z number of input bits and the second amount, the Z number of second bits equal to the Z number of input bits when circularly shifted according to the second amount, and
cause the Z number of XOR devices to perform bit-wise XOR operations on the stored Z number of outputs from the first set of the M sets of storage registers and the generated Z number of second bits from the bit permutation circuitry, to provide a portion of the parity bits.
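The register/XOR accumulation path above can be modeled in software: for each circulant shift amount, the bit-permutation stage produces a rotated copy of the Z input bits, and the XOR stage folds it into the running register contents. A minimal sketch, with the shift direction assumed:

```python
def rotate(bits, amount):
    """Circularly shift a bit vector by `amount` positions
    (direction is an assumption; hardware may shift the other way)."""
    amount %= len(bits)
    return bits[amount:] + bits[:amount]

def accumulate_parity(input_bits, shift_amounts):
    """Software model of the claim's parity path: each shift amount
    yields a permuted copy of the Z input bits, which the Z XOR
    devices fold into the stored register values."""
    acc = [0] * len(input_bits)
    for amount in shift_amounts:
        permuted = rotate(input_bits, amount)
        acc = [a ^ b for a, b in zip(acc, permuted)]
    return acc

# Z = 4 input bits, one sub-block with two circulant shift amounts.
assert accumulate_parity([1, 0, 0, 0], [1, 2]) == [0, 0, 1, 1]
```

This is the standard quasi-cyclic encoding pattern: the circulant structure lets Z parity bits be updated in parallel with only a barrel shifter and Z XOR gates.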

US Pat. No. 10,216,566

FIELD PROGRAMMABLE GATE ARRAY

Hitachi, Ltd., Tokyo (JP...

1. A field programmable gate array, comprising:a hard macro CPU in which a circuit structure is fixed;
a programmable logic in which a circuit structure is changeable;
a diagnosis circuit which diagnoses an abnormality of the programmable logic;
a fail-safe interface circuit which is able to control an external output from the programmable logic to a safe side; and
a function in which the hard macro CPU is instructed to output a fail-safe signal which is an output to a safe side to the fail-safe interface circuit when an error is detected by the diagnosis circuit;
wherein the fail-safe interface circuit is provided in the programmable logic, and
wherein an instruction from the hard macro CPU to the fail-safe interface circuit is issued through a communication path in which data is able to be transmitted only from the hard macro to the programmable logic.

US Pat. No. 10,216,565

ROOT CAUSE ANALYSIS

International Business Ma...

1. A method for performing a root cause analysis, said method comprising:opening, by a central processing unit (CPU), a file comprising event data;
recording, by the CPU, recordation data of a user's observable behavior while viewing the event data of the file, wherein the user's observable behavior includes the user's eye gaze;
identifying, by the CPU, a presence of one or more events of interest as a function of the user's observable behavior while viewing the event data of the file;
calculating, by the CPU, an interest score for each of the identified events of interest, wherein the interest score is a probability of each of the identified events of interest being a root cause of a defect; and
tagging, by the CPU, each of the events of interest within the file with a tag as a function of each calculated interest score;
wherein said identifying comprises:
tracking, by the CPU, a focal point of the user's eye gaze;
correlating, by the CPU, the focal point of the user's eye gaze to a viewing position of a display device displaying the file;
identifying, by the CPU, as a function of the viewing position, the event data being viewed and an amount of time that the event data is viewed by the user; and
further identifying, by the CPU, an emotive expression of the user during an amount of time focused on the viewing position, and
wherein said calculating comprises:
assigning, by the CPU, a numerical value to the viewing position, amount of time, emotive expression and event data viewed by the user; and
inserting, by the CPU, the numerical value assigned to the viewing position, amount of time, emotive expression and text of the event data, into a linear regression model; and
outputting, by the CPU, as a function of the linear regression model, a value of the interest score.
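The scoring step above assigns numerical values to the gaze features and feeds them to a linear regression model. A minimal sketch with hypothetical placeholder weights; the logistic squash that maps the linear output onto a probability-like score is an addition, not stated in the claim:

```python
import math

def interest_score(features, weights, bias=0.0):
    """Score a candidate root-cause event from numerical values for
    viewing position, dwell time, emotive expression, and viewed
    event text, per the claim's linear model. Weights are
    hypothetical; the sigmoid maps the output into (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

weights = [0.2, 0.5, 0.8, 0.1]   # position, dwell, emotion, text
bored = interest_score([0.1, 0.2, 0.0, 0.3], weights)
alarmed = interest_score([0.9, 3.0, 1.0, 0.7], weights)
assert 0.0 < bored < alarmed < 1.0
```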

US Pat. No. 10,216,564

HIGH VOLTAGE FAILURE RECOVERY FOR EMULATED ELECTRICALLY ERASABLE (EEE) MEMORY SYSTEM

NXP USA, Inc., Austin, T...

1. A method for managing failing sectors in a semiconductor memory device that includes a volatile memory, a non-volatile memory, and a memory controller coupling the volatile memory and the non-volatile memory, the method comprising:detecting that a failure to program (FTP) error occurred during a sector identifier (ID) update action for a sector in the non-volatile memory, wherein
the sector is associated with a failure status indicator that indicates healthy status;
in response to determining the FTP error occurred while attempting to program a sector ID of the sector to one of a READY, READYQ, FULL, and FULLQ erase status, updating the failure status indicator to indicate a dead status; and
in response to determining the FTP error occurred while attempting to program the sector ID to one of a FULLE, FULLEQ, FULLC, and FULLCQ erase status, updating the failure status indicator to indicate a read-only status.
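The two FTP cases above amount to a status-mapping rule: which erase status was being programmed when the failure occurred determines whether the sector becomes dead or read-only. A sketch with the status names taken from the claim and the string encoding assumed:

```python
# Erase statuses from the claim; the string encoding is an assumption.
DEAD_ON_FTP = {"READY", "READYQ", "FULL", "FULLQ"}
READ_ONLY_ON_FTP = {"FULLE", "FULLEQ", "FULLC", "FULLCQ"}

def on_ftp_error(attempted_status, current_failure_status):
    """Update a sector's failure status after a failure-to-program
    (FTP) error during a sector ID update, per the claim's two cases.
    The sector is assumed healthy going in, as the claim requires."""
    assert current_failure_status == "healthy"
    if attempted_status in DEAD_ON_FTP:
        return "dead"
    if attempted_status in READ_ONLY_ON_FTP:
        return "read-only"
    return current_failure_status   # statuses outside the claim's cases

assert on_ftp_error("READYQ", "healthy") == "dead"
assert on_ftp_error("FULLC", "healthy") == "read-only"
```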

US Pat. No. 10,216,563

SAFETY FILTER IN A VEHICLE NETWORK

TRW Limited, Solihull, W...

1. An electrical subsystem for a vehicle comprising an electronic control module adapted to generate one or more output messages suitable for transmission by a communication network, the subsystem further comprising:a message filter which is arranged in an event of a fault of the electronic control module to filter the messages generated by the electronic control module so that only messages that meet predefined criteria are transmitted by the communication network and to block messages that do not meet the predefined criteria, wherein the predefined criteria is that the message is non-safety-critical for the vehicle.

US Pat. No. 10,216,562

GENERATING DIAGNOSTIC DATA

International Business Ma...

1. An apparatus comprising:a processor;
a memory storing code executable by the processor to:
detect that a first address space of an application references one or more second address spaces during execution of the application, the first address space comprising a main address space for the execution of the application and the one or more second address spaces comprising address spaces that comprise information that the application references during execution;
dynamically create an entry in a data structure for mapping the first address space of an application executing in the first address space to the one or more second address spaces in response to detecting the application referencing the one or more second address spaces during execution of the application;
if, during execution of the application, a diagnostic trigger for the first address space is detected:
check the data structure for one or more second address spaces mapped to the first address space; and
generate one or more dump files comprising diagnostic data for the first address space and the one or more second address spaces; and
if, during execution of the application, a diagnostic trigger for the first address space is not detected and a second address space of the one or more second address spaces is no longer referenced by the first address space, dynamically remove the entry in the data structure of the mapping of the first address space to the second address space that is no longer referenced by the first address space.


US Pat. No. 10,216,561

MONITOR PERFORMANCE ANALYSIS

C SERIES AIRCRAFT LIMITED...

1. A system, comprising:a processor;
a memory system in communication with the processor, the memory system storing instructions that when executed by the processor result in the system being operable to:
identify a system hazard boundary of a monitored system and a system nuisance boundary of the monitored system;
determine a must-trip condition based on the system hazard boundary and a must-not-trip condition based on the system nuisance boundary;
conduct a tolerance stack-up at the must-trip condition to calculate a first estimation error; and
output a protection margin for the monitored system based on the system hazard boundary, the first estimation error and a difference between the must-trip condition and the must-not-trip condition; and
a monitor of the monitored system, the monitor being configured to receive a monitored input from the monitored system, and, based on the monitored input, trip before the monitored system exceeds the system hazard boundary.

US Pat. No. 10,216,560

INTEGRATION BASED ANOMALY DETECTION SERVICE

Amazon Technologies, Inc....

1. A system comprising:a memory storing data regarding operating parameters related to performance of a computing system; and
a computer processor in communication with the memory, the computer processor programmed by computer-executable instructions to at least:
receive, from a monitored source, a first set of input data for an operating parameter at a first time;
determine, based at least in part on the first set of input data, a predicted value for the operating parameter that is expected at a second time;
determine a permitted relationship between the predicted value and a second set of input data for the operating parameter that is expected at the second time;
receive the second set of input data for the operating parameter at the second time;
determine that the second set of input data for the operating parameter at the second time does not satisfy the permitted relationship;
in response to determining that the second set of input data for the operating parameter at the second time does not satisfy the permitted relationship, identify an anomaly detection; and
cause display of a graphical interface presenting an anomaly notification, wherein the graphical interface enables receipt of an indication that the anomaly notification is erroneous.
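The predict-then-check loop above can be sketched simply: forecast the second-time value from the first sample, define a permitted relationship, and flag an anomaly when the observed second sample falls outside it. The naive mean forecast and relative tolerance band below are assumptions, not the patent's method:

```python
def detect_anomaly(first_sample, second_sample, tolerance=0.2):
    """Flag an anomaly when the second set of input data does not
    satisfy the permitted relationship with the value predicted from
    the first set. Forecast and band are illustrative assumptions."""
    predicted = sum(first_sample) / len(first_sample)   # naive forecast
    observed = sum(second_sample) / len(second_sample)
    lo = predicted * (1 - tolerance)
    hi = predicted * (1 + tolerance)
    permitted = lo <= observed <= hi
    return not permitted            # True -> anomaly detected

assert detect_anomaly([10, 10, 10], [10, 11, 9]) is False
assert detect_anomaly([10, 10, 10], [30, 28, 32]) is True
```

A detected anomaly would then drive the notification interface, which per the claim can also accept feedback that the alert was erroneous.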

US Pat. No. 10,216,559

DIAGNOSTIC FAULT COMMUNICATION

Allegro MicroSystems, LLC...

1. An integrated circuit comprising:a fault detector configured to detect a fault condition of the integrated circuit;
a controller configured to generate a controller output data signal; and
an output generator configured to generate an output signal of the integrated circuit and to:
generate the output signal at a first set of output levels based upon the controller output data signal when the fault detector does not detect the fault condition; and
generate the output signal at a second set of output levels based upon the controller output data signal when the fault detector detects the fault condition, wherein the second set of output levels is different than the first set of output levels and comprises a level of the output signal caused by an open circuit or a short circuit of the output signal.

US Pat. No. 10,216,558

PREDICTING DRIVE FAILURES

EMC IP Holding Company LL...

1. A computer-implemented method for predicting drive failures, the method comprising:collecting any one or more samples of drive health indicators from a drive over a specified time period, wherein the samples of drive health indicators include one or more Self-Monitoring, Analysis and Reporting Technology (SMART) attributes obtained from the drive;
performing a first feature selection modeling of a last collected sample of SMART drive health indicators to generate a drive feature for the drive, the drive feature for modeling a drive health at a time of the last collected sample;
performing a second feature engineering modeling of collected samples of SMART drive health indicators over the specified time period to generate one or more drive behavior history features for the drive, the drive behavior history features for modeling the drive health over the specified time period; and
classifying the drive as more likely to experience failure than other drives, the classifying based on predicted drive failure probabilities representing the drive health, including:
the drive health at the time of the last collected sample as modeled by the drive feature, and
the drive health over the specified time period as modeled by the drive behavior history features.
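The claim's two feature groups can be sketched from a time-ordered list of SMART samples: a snapshot feature set from the last sample, and behavior-history features spanning the collection window. The attribute name and the toy scoring rule below are illustrative, not the patent's models:

```python
def drive_features(smart_samples):
    """Split a time-ordered list of SMART attribute dicts into the
    claim's two groups: a last-sample snapshot (drive health now) and
    history features (drift over the window), here simple deltas."""
    last, first = smart_samples[-1], smart_samples[0]
    snapshot = dict(last)
    history = {k: last[k] - first[k] for k in last}
    return snapshot, history

def classify(snapshot, history, attr="reallocated_sectors"):
    """Toy failure-probability score: drives accumulating reallocated
    sectors over the window score higher (assumed rule, capped at 1)."""
    score = 0.1 + 0.05 * snapshot[attr] + 0.2 * history[attr]
    return min(score, 1.0)

healthy = [{"reallocated_sectors": 0}, {"reallocated_sectors": 0}]
failing = [{"reallocated_sectors": 0}, {"reallocated_sectors": 4}]
assert classify(*drive_features(failing)) > classify(*drive_features(healthy))
```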

US Pat. No. 10,216,557

METHOD AND APPARATUS FOR MONITORING AND ENHANCING ON-CHIP MICROPROCESSOR RELIABILITY

International Business Ma...

1. A system for projecting reliability to manage system functions, comprising:an activity module which determines activity in the system that occurs during operation of the system;
a reliability module interacting with the activity module to determine a reliability measurement for regions for a current period within the system in real-time based upon the activity and measured operational quantities of the system, wherein the reliability measurement characterizes one or more potential physical failure mechanisms; and
a management module comprising a processor comparing the reliability measurement within the system to a locally stored reliability target and increasing activity of the system during operation of the system based on whether the reliability measurement is determined to be above or below the stored reliability target;

wherein increasing the activity of the system includes one of increasing a clock rate, reallocating resources, and increasing current or voltage.

US Pat. No. 10,216,556

MASTER DATABASE SYNCHRONIZATION FOR MULTIPLE APPLICATIONS

SAP SE, Walldorf (DE)

1. An apparatus comprising:a hardware processor; and
a memory having stored therein instructions that, when executed by the hardware processor, cause the apparatus to perform operations for reducing resources consumed by a first application of a plurality of applications when accessing a master data store, the operations comprising:
accessing master data from a master data source, the master data to be employed by the plurality of applications;
accessing schema of the master data from the master data source, the schema of the master data to be employed by the plurality of applications;
generating one or more publication requests to store the master data and the schema of the master data to a master data store accessible by the plurality of applications; and
causing the schema of the master data stored in the master data store to be stored in a local cache of a first application of the plurality of applications for access by the first application, the causing of the schema to be stored in the local cache allowing the application to access the schema without consuming resources of the master data store.

US Pat. No. 10,216,555

PARTIALLY RECONFIGURING ACCELERATION COMPONENTS

Microsoft Technology Lice...

1. A method for partially reconfiguring an acceleration component programmed with a role, the role linked via an area network to one or more of: a downstream role at a downstream neighbor acceleration component and an upstream role at an upstream neighbor acceleration component to compose a graph providing an instance of service acceleration, the method comprising:detecting a reason for changing the role;
halting the role, including instructing at least one of: the downstream role and the upstream role to stop receiving data from the role;
partially reconfiguring the acceleration component by writing an image for the role to the acceleration component;
maintaining a network interface programmed into the acceleration component and a second role programmed into the acceleration component as operational during partially reconfiguring the acceleration component, maintaining the network interface permitting the second role to exchange network communication via the area network with one or more other roles at other acceleration components, the second role linked to the one or more other roles to compose another graph providing another instance of service acceleration, wherein the graph provides service acceleration for a service selected from among: document ranking, data encryption, data compression, speech translation, computer vision, or machine learning; and
activating the role at the acceleration component after partially reconfiguring the acceleration component is complete, including notifying the at least one of: the downstream role and the upstream role that the role is operational.

US Pat. No. 10,216,554

API NOTEBOOK TOOL

Mulesoft, Inc., San Fran...

1. A system, comprising:a processor configured to:
dynamically generate a client for calling an API for a service using a library for an API specification and an application programming interface (API) notebook tool, wherein the API specification includes a description of one or more APIs in an API modeling language including the API for the service, and wherein the client for calling the API for the service is dynamically generated based on the API specification;
convert the API into an object model stored as a note in a data store, wherein the note includes a coded implementation of the client and documentation for a documented usage scenario of the API implemented by the client, wherein user credentials for authenticating with the service are cached locally and are not stored in the data store, and wherein confidential content is removed from a results cell;
load the note from the data store using the API notebook tool, wherein the note was previously saved in the data store; and
save a modified version of the note in the data store using the API notebook tool, wherein the data store is an open collaboration repository, wherein the note is shared with a plurality of users, wherein each of the plurality of users can execute and/or edit the note to provide for a modified usage scenario of the API, and wherein user credentials of each of the plurality of users for authenticating with the service are cached locally and are not stored with the note in the open collaboration repository in the data store; and
a memory coupled to the processor and configured to provide the processor with instructions.

US Pat. No. 10,216,553

MESSAGE ORIENTED MIDDLEWARE WITH INTEGRATED RULES ENGINE

International Business Ma...

1. A message processing data processing system for managing a messaging component in message oriented middleware, the system comprising:a host computer including:
a processor set including at least one processor,
memory,
a messaging engine, and
a rules engine coupled to the messaging engine;
wherein:
the rules engine and messaging engine are programmed to establish working memory in shared memory of message oriented middleware executing by the processor set of the host computer for use by the messaging engine, to detect a change in the messaging component, to determine if the change corresponds to an addition of an object to the messaging component and, on condition the change corresponds to an addition of a new object to the messaging component, to create a token in the working memory, but on condition the change corresponds to a deletion of an existing object from the messaging component, to delete a token from the working memory, and on condition the change corresponds to a change to an existing object of the messaging component that is not a deletion of the existing object, to apply a change to an existing token in the working memory, to observe the working memory to detect changes in one or more tokens in the working memory and, in response to detecting a change to one or more of the tokens in the working memory, to apply, by the rules engine and the messaging engine, management rules to the tokens in the working memory in order to direct management actions in the messaging component, wherein the rules engine and messaging engine further ensure that tokens in the memory correspond to but are separate from objects in the messaging engine by placing a message on a queue, inserting a token corresponding to the placed message in memory, and linking the token to the corresponding message.

US Pat. No. 10,216,552

ROBUST AND ADAPTABLE MANAGEMENT OF EVENT COUNTERS

INTERNATIONAL BUSINESS MA...

1. A method for improved accuracy of a counter design implemented in computer hardware to ensure that the counter design captures a design event and avoids a race condition between a context event and the design event, the method comprising:receiving a plurality of events within the counter design, the plurality of events including the context event and the design event;
dynamically determining, by the computer hardware, a tolerance window defined around the context event, the tolerance window comprising a first window portion before the context event and a second window portion after the context event;
statically determining, by the computer hardware, a width of the tolerance window based on maximum effective path delays of the context event and the design event; and
performing a verification algorithm to determine that the design event is captured within the tolerance window and is accounted for by a design model counter of the counter design to avoid the race condition between the context event and the design event.

US Pat. No. 10,216,551

USER INFORMATION DETERMINATION SYSTEMS AND METHODS

Intertrust Technologies C...

1. A method performed by a system comprising a processor and a non-transitory computer-readable storage medium storing instructions that, when executed, cause the system to perform the method, the method comprising:receiving, at an interface of the system, application usage information from an electronic device associated with a user, the application usage information being associated with an application installed on the electronic device;
mapping the application usage information to one or more interest taxonomies to identify one or more interests associated with the user;
determining one or more relative adjusted weights associated with the identified one or more interests based on the application usage information, wherein determining the one or more relative adjusted weights comprises:
determining one or more decay rates based on an indication of a momentum associated with the application, the momentum being determined based on a first density of use of the application over a first time period and a second density of use of the application over a second time period, the second time period being longer than the first time period; and
adjusting one or more initial relative weights based on the one or more decay rates to generate the one or more relative adjusted weights;
associating the one or more relative adjusted weights with the identified one or more interests to generate one or more weighted interests;
identifying one or more content items based on the one or more weighted interests; and
transmitting the one or more content items to the electronic device.
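The momentum-based weighting above compares density of use over a short recent window against a longer one, then derives a decay rate from the result. The ratio form of the momentum and the decay mapping below are hypothetical; the two-window structure follows the claim:

```python
def momentum(usage_events, now, short_window, long_window):
    """Momentum per the claim: density of use (events per unit time)
    over a short recent window versus a longer window; the longer
    window matches the claim's 'second time period'. The ratio form
    is an assumption."""
    short = sum(1 for t in usage_events if 0 <= now - t <= short_window)
    long_ = sum(1 for t in usage_events if 0 <= now - t <= long_window)
    short_density = short / short_window
    long_density = long_ / long_window
    return short_density / long_density if long_density else 0.0

def adjusted_weight(initial_weight, mom, base_decay=0.5):
    """Adjust an initial relative weight by a decay rate derived from
    momentum: high momentum slows decay (hypothetical mapping)."""
    decay = base_decay / (1.0 + mom)
    return initial_weight * (1.0 - decay)

# App used 3 times in the last day, 4 times over the last week.
events = [6.9, 6.4, 6.1, 1.0]          # timestamps in days
m = momentum(events, now=7.0, short_window=1.0, long_window=7.0)
assert m > 1.0                          # recent use outpaces the average
assert adjusted_weight(1.0, m) > adjusted_weight(1.0, 0.0)
```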

US Pat. No. 10,216,550

TECHNOLOGIES FOR FAST BOOT WITH ADAPTIVE MEMORY PRE-TRAINING

Intel Corporation, Santa...

1. A computing device for memory parameter pre-training, the computing device comprising:a processor, a memory controller, and a non-volatile storage device; and
a boot loader to (i) determine whether a pre-trained memory parameter data set is inconsistent in response to a reset of the processor, wherein the pre-trained memory parameter data is stored by the non-volatile storage device, (ii) send a message that requests full memory training to a safety microcontroller via a serial link in response to a determination that the pre-trained memory parameter data set is inconsistent, (iii) determine whether a full memory training signal is raised via a general-purpose I/O link with the safety microcontroller in response to a determination that the pre-trained memory parameter data set is consistent, (iv) execute a fast boot path to initialize the memory controller with the pre-trained memory parameter data set in response to a determination that the full memory training signal is not raised, and (v) execute a slow boot path to generate the pre-trained memory parameter data set in response to a determination that the full memory training signal is raised.

US Pat. No. 10,216,549

METHODS AND SYSTEMS FOR PROVIDING APPLICATION PROGRAMMING INTERFACES AND APPLICATION PROGRAMMING INTERFACE EXTENSIONS TO THIRD PARTY APPLICATIONS FOR OPTIMIZING AND MINIMIZING APPLICATION TRAFFIC

SEVEN NETWORKS, LLC, Mar...

1. A method for optimizing and minimizing application traffic in a wireless network, the method comprising:defining an application programming interface (API) for controlling application traffic between an application client residing on a mobile device that operates within a wireless network and an application server not residing on the mobile device; and
using the API to optimize application traffic in the wireless network including controlling, by the mobile device, traffic sent by the application server to the mobile device, wherein using the API to optimize application traffic includes using the API for:
providing a subscriber tiering and reporting service having a premium subscriber tier;
providing delivery notification to a sending entity subscribing to the premium subscriber tier;
sending a plurality of data packets together as a batch within a defined window of time, wherein the defined window of time is determined by a time criticality of the plurality of data packets;
adjusting message priority for entities subscribing to the premium subscriber tier; and
providing special traffic reporting to a reporting server based on a reporting policy received from a policy management server.

US Pat. No. 10,216,547

HYPER-THREADED PROCESSOR ALLOCATION TO NODES IN MULTI-TENANT DISTRIBUTED SOFTWARE SYSTEMS

International Business Ma...

1. A computer program product comprising a non-transitory computer readable storage medium having a computer readable program for allocating a hyper-threaded processor to nodes of multi-tenant distributed software systems stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:responsive to receiving a request to provision a node of the multi-tenant distributed software system on the host data processing system, identify a cluster of nodes to which the node belongs;
determine whether the node is a first type of node or a second type of node;
responsive to the node being the second type of node, determine whether another second type of node in the same cluster has been provisioned on the host data processing system;
responsive to determining that another second type of node in the same cluster has been provisioned on the host data processing system, determine whether a number of unallocated virtual processors (VPs) on different physical processors from that of the other second type of node is greater than or equal to a requested number of VPs for the second type of node;
responsive to the number of unallocated VPs on different physical processors from that of the other second type of node being greater than or equal to the requested number of VPs for the second type of node, allocate the requested number of VPs for the second type of node each to a different physical processor from that of the other second type of node; and
responsive to the number of unallocated VPs on different physical processors from that of the other second type of node being less than the requested number of VPs for the second type of node, allocate up to the requested number of VPs for the second type of node to as many different physical processors as supported by the different physical processors from that of the other second type of node; and
allocate any remaining unallocated VPs from the requested number of VPs for the second type of node to other physical processors.
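The allocation cases above reduce to a spreading rule: place the requested virtual processors on physical processors other than the one hosting the cluster's other second-type node, then spill any remainder wherever capacity exists. A sketch under an assumed representation of free capacity:

```python
def allocate_vps(free_vps_by_cpu, avoid_cpu, requested):
    """Spread `requested` VPs across physical processors, preferring
    processors other than `avoid_cpu` (the one hosting the cluster's
    other second-type node), per the claim's responsive clauses.
    `free_vps_by_cpu` maps processor id -> unallocated VP count; this
    representation and the one-VP-per-processor preference are
    assumptions."""
    allocation = {}
    remaining = requested
    # First pass: one VP per distinct processor other than avoid_cpu.
    for cpu, free in sorted(free_vps_by_cpu.items()):
        if cpu == avoid_cpu or free <= 0 or remaining == 0:
            continue
        allocation[cpu] = 1
        remaining -= 1
    # Spill any remainder onto whatever capacity is left.
    for cpu, free in sorted(free_vps_by_cpu.items()):
        if remaining == 0:
            break
        take = min(free - allocation.get(cpu, 0), remaining)
        if take > 0:
            allocation[cpu] = allocation.get(cpu, 0) + take
            remaining -= take
    return allocation

free = {0: 2, 1: 2, 2: 1}
assert allocate_vps(free, avoid_cpu=0, requested=3) == {1: 1, 2: 1, 0: 1}
```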

US Pat. No. 10,216,546

COMPUTATIONALLY-EFFICIENT RESOURCE ALLOCATION

Insitu Software Limited, ...

1. A method operative in a computing system to associate a set of first entities to a set of second entities, wherein a second entity corresponds to a vertex in a network graph, comprising:associating a grouping of first entities with a particle of a set of particles, each particle having an attribute set;
configuring the set of particles into a force directed graph, wherein the particles are configured with respect to one another according to attractions or repulsions derived from the particle attribute sets;
bringing the force directed graph into an equilibrium state;
thereafter mapping the particles of the force directed graph onto the network graph; and
executing a simulation against the network graph that has been mapped with the particles of the force directed graph to associate the set of first entities to the set of second entities;
wherein mapping the network graph with the particles of the force directed graph improves efficiency of the computing system executing the simulation by obviating random mapping of the network graph, and by avoiding local neighbor searching with respect to one or more regions of the network graph that otherwise provide substantially equally-fit solutions.
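The force-directed step in the claim can be illustrated with a toy one-dimensional simulation: particles attract or repel according to pairwise affinities derived from their attribute sets, and iteration continues until the layout settles (the claimed "equilibrium state"). The 1-D layout, the `affinity` matrix, and all names here are assumptions for illustration only.

```python
# Toy force-directed relaxation: affinity[i][j] > 0 attracts particles
# i and j toward each other; iterating shrinks the gap geometrically.
def force_directed_1d(positions, affinity, steps=500, lr=0.01):
    pos = list(positions)
    n = len(pos)
    for _ in range(steps):
        # Compute all forces from the current positions...
        forces = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i != j:
                    forces[i] += affinity[i][j] * (pos[j] - pos[i])
        # ...then apply them simultaneously.
        for i in range(n):
            pos[i] += lr * forces[i]
    return pos
```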

US Pat. No. 10,216,545

METHOD AND SYSTEM FOR MODELING AND ANALYZING COMPUTING RESOURCE REQUIREMENTS OF SOFTWARE APPLICATIONS IN A SHARED AND DISTRIBUTED COMPUTING ENVIRONMENT

SERVICENOW, INC., Santa ...

1. A system for managing a plurality of applications in a shared computing environment, each application comprising a plurality of application components, the system comprising:a processor;
an application manager executable by the processor to receive a service specification for a first application of the plurality of applications in the shared computing environment that defines a set of computing resources that are used to run each application component of the plurality of application components of the first application; and
a resource supply manager in communication with the application manager and operable to manage a plurality of computing resources in the shared computing environment;
wherein the application manager is operable to request the set of computing resources from the resource supply manager, and wherein the resource supply manager determines the availability of the computing resources within the shared computing environment according to resource allocation policies and allocates computing resources to the application manager, and wherein the application manager is operable to manage allocation of the computing resources to the first application, the application manager operable to deploy and manage instances of each application component of the first application on the allocated computing resources.

US Pat. No. 10,216,544

OUTCOME-BASED SOFTWARE-DEFINED INFRASTRUCTURE

International Business Ma...

1. A method for outcome-based adjustment of a software-defined environment (SDE), the method comprising:dividing a business operation into a set of prioritized tasks including high priority tasks and low priority tasks, each task having a corresponding set of key performance indicators (KPIs);
establishing a set of outcome links between the set of prioritized tasks and a first resource configuration for the SDE, the set of outcome links favoring the high priority tasks over the low priority tasks with respect to the first resource configuration;
establishing a monitoring mechanism for continuously measuring a current state of the SDE while performing each of the prioritized tasks;
predicting a triggering event based on a first outcome of a behavior model of the SDE;
responsive to predicting the triggering event, determining to change from the first resource configuration to a second resource configuration for the SDE according to the set of outcome links for performing the business operation based on a second outcome of the behavior model;
wherein:
the set of outcome links include at least one of a utility of services for the business operation, a cost of a set of resources consumed by the first resource configuration, and a risk of the set of resources becoming unavailable; and
at least the steps using the behavior model are performed by computer software running on computer hardware.

US Pat. No. 10,216,543

REAL-TIME ANALYTICS BASED MONITORING AND CLASSIFICATION OF JOBS FOR A DATA PROCESSING PLATFORM

Mitylytics Inc., Alameda...

1. A method comprising:selecting, by a computing device, a new job to schedule for execution on a data processing system, the new job including a classification in a plurality of classifications, wherein the classification is determined by:
using a process to analyze a set of operations to determine which operation is to be used to classify the current job, wherein a first operation in the set of operations is selected; and
classifying the first operation in a first classification based on resource usage for the first operation, wherein the first classification is determined based on a resource being used by the first operation in a highest percentage usage in the data processing platform compared to other resources used by the first operation;
retrieving, by the computing device, performance information for a set of current jobs that are being executed in the data processing system, wherein the set of jobs are assigned to a plurality of queues and currently classified with a current classification in the plurality of classifications;
analyzing, by the computing device, the performance information to determine when one or more current jobs in the set of current jobs should be re-classified due to resource usage of a respective current job when being executed in the data processing system, wherein analyzing comprises:
determining a second operation in the set of operations; and
determining that the first classification should be changed to the second classification when a resource being used by the second operation has a higher percentage usage in the data processing platform compared to the highest percentage usage for the first operation;
re-classifying, by the computing device, the classifications for the one or more current jobs in the plurality of queues, wherein the first classification is re-classified to the second classification for the first operation; and
assigning, by the computing device, the new job to one of the queues based on the classification of the new job and the classifications of jobs in the plurality of queues including the re-classified classifications for the one or more current jobs.
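The classification rule in the claim, i.e. labeling a job by whichever resource it uses at the highest percentage and re-classifying when another operation's dominant resource overtakes that peak, can be sketched as below. The usage dictionaries and resource labels are illustrative, not from the patent.

```python
# Classify a job by its dominant resource and re-classify when a later
# operation's dominant resource exceeds the current peak usage.
def classify(usage):
    """usage: mapping of resource name -> percentage used, e.g. {'cpu': 80}."""
    return max(usage, key=usage.get)

def maybe_reclassify(current_class, current_peak, new_usage):
    new_class = classify(new_usage)
    new_peak = new_usage[new_class]
    if new_peak > current_peak:
        return new_class, new_peak
    return current_class, current_peak
```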

US Pat. No. 10,216,542

RESOURCE COMPARISON BASED TASK SCHEDULING METHOD, APPARATUS, AND DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A task scheduling method for scheduling tasks to be executed by a data warehouse system, the method comprising:scheduling, at a task scheduling system, a set of configured tasks to be executed by the data warehouse system, wherein scheduling the set of configured tasks includes:
managing a preset task scheduling condition;
acquiring real-time available resource information about one or more computing resources available for task execution in the data warehouse system;
receiving, from a task deployment system, instructions to schedule the set of configured tasks;
determining resource consumption information regarding each configured task in the set of the configured tasks;
comparing the resource consumption information regarding each configured task in the set with the available resource information to obtain a comparison result for the configured task; and
identifying a target task from the set of configured tasks by virtue of the target task having a corresponding comparison result that meets the preset task scheduling condition, the preset task scheduling condition specifying that the resource consumption of the target task is less than the one or more available computing resources in the data warehouse system; and
delivering, at the task scheduling system, the target task to the task deployment system for the task deployment system to deploy the target task on the data warehouse for execution; and, wherein
comparing the resource consumption information of each configured task with the available resource information comprises:
determining, from the set according to a task cluster type indicated by the resource consumption information of each configured task in the set and an available-cluster type indicated by the information about the computing resource available for task execution, a task subset whose task cluster type matches the available-cluster type;
comparing a resource consumption amount indicated by resource consumption information of a configured task in the task subset with an available-resource amount indicated by the information about the computing resource available for task execution; and
when a comparison result indicates that the resource consumption amount of the task is less than the available-resource amount, recording that the comparison result corresponding to the task meets the preset task scheduling condition; and, wherein identifying the target task from the set of the configured tasks comprises:
using at least one task in the task subset having a recorded comparison result that meets the task scheduling condition as a target task in the current scheduling period.
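The comparison step above reduces to a two-part filter: the task's cluster type must match the available-cluster type, and its resource demand must be less than the available amount. A hedged sketch, with field names (`cluster_type`, `consumption`, `name`) assumed for illustration:

```python
# Keep tasks whose cluster type matches the available cluster and whose
# resource consumption fits within the currently available amount.
def pick_targets(tasks, available_type, available_amount):
    targets = []
    for t in tasks:
        if t['cluster_type'] != available_type:
            continue  # task subset: cluster types must match first
        if t['consumption'] < available_amount:
            targets.append(t['name'])  # meets the scheduling condition
    return targets
```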

US Pat. No. 10,216,541

SCHEDULER OF PROCESSES HAVING TIMED PREDICTIONS OF COMPUTING LOADS

Harmonic, Inc., San Jose...

1. A non-transitory computer-readable storage medium storing one or more sequences of instructions for a scheduler of computer processes to be executed upon a cluster of processing capabilities, wherein execution of the one or more sequences of instructions cause:obtaining predictions of a computing load of at least one computer process to allocate, wherein said predictions are associated with a period of time;
retrieving predictions of available computing capacities of the cluster of processing capabilities for the period of time;
determining, based on the predictions of the computing load for the period of time and the predictions of the available computing capacities for the period of time, a processing capability to allocate said at least one computer process during said period of time;
creating at least one Operating-System-Level virtual environment for said at least one computer process, said at least one Operating-System-Level virtual environment having a computing capacity equal to or higher than at least one of said predictions of the computing load of said at least one computer process to allocate at a start of the period of time; and
adapting the computing capacity of said at least one Operating-System-Level virtual environment to the predictions of the computing load of said at least one computer process during said period of time.
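The placement decision in the claim pairs two predictions for the same period: the process's load and each node's available capacity. One possible reading, using a tightest-fit choice that the claim does not itself mandate, is sketched below; all names are assumptions.

```python
# Place a process on a node whose predicted free capacity covers its
# predicted load for the period; the OS-level container is then sized
# to at least the predicted load.
def place(predicted_load, predicted_free):
    """predicted_free: {node: capacity predicted available for the period}."""
    candidates = {n: c for n, c in predicted_free.items() if c >= predicted_load}
    if not candidates:
        return None, 0.0
    node = min(candidates, key=candidates.get)  # tightest fit (illustrative)
    return node, predicted_load  # container capacity >= predicted load
```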

US Pat. No. 10,216,540

LOCALIZED DEVICE COORDINATOR WITH ON-DEMAND CODE EXECUTION CAPABILITIES

Amazon Technologies, Inc....

1. A system to remotely configure a coordinator computing device managing operation of coordinated devices, the system comprising:a non-transitory data store including a device shadow for the coordinator computing device, the device shadow indicating a version identifier for a desired configuration of the coordinator computing device;
a deployment device in communication with the non-transitory data store, the deployment device comprising a processor configured with computer-executable instructions to:
obtain configuration information for the coordinator computing device, the configuration information indicating one or more coordinated devices to be managed by the coordinator computing device and one or more tasks to be executed by the coordinator computing device to manage the one or more coordinated devices, wherein individual tasks of the one or more tasks correspond to code executable by the coordinator computing device, and wherein the configuration information further specifies an event flow table indicating criteria for determining an action to be taken by the coordinator computing device in response to a message obtained from an execution of the one or more tasks;
generate a configuration package including the configuration information, wherein the configuration package is associated with an additional version identifier;
modify the device shadow to indicate that the desired configuration corresponds to the additional version identifier;
notify the coordinator computing device of the modified device shadow;
obtain a request from the coordinator computing device for the configuration package; and
transmit the configuration package to the coordinator computing device, wherein the coordinator computing device is configured to utilize the configuration package to retrieve the one or more tasks to be executed by the coordinator computing device to manage the one or more coordinated devices indicated within the configuration package.

US Pat. No. 10,216,539

LIVE UPDATES FOR VIRTUAL MACHINE MONITOR

AMAZON TECHNOLOGIES, INC....

1. A computing system comprising:an offload computing device comprising one or more processors and memory, wherein the offload computing device is configured to electronically communicate with a physical computing device, wherein the one or more processors are configured to execute instructions that, upon execution, configure the offload computing device to:
receive, from a remote update manager, an update notification for a virtual machine monitor executing on the physical computing device;
store an update data package in the memory of the offload computing device, the update data package comprising an update to the virtual machine monitor;
send an interrupt to the physical computing device;
transmit, to the physical computing device, an indication of the update to the virtual machine monitor, wherein the virtual machine monitor is configured to suspend operation of one or more virtual machine instances in a first state of operation based, at least in part, on the indication; and
provide the update data package to the virtual machine monitor, wherein the virtual machine monitor is configured to execute the update data package in first memory to update the virtual machine monitor, wherein the execution of the update data package implements an updated virtual machine monitor within the first memory,
wherein the updated virtual machine monitor is configured to retrieve state information associated with the first state of operation of the one or more virtual machine instances, and cause the one or more virtual machine instances to resume operation in the first state of operation based, at least in part, on the state information.

US Pat. No. 10,216,538

AUTOMATED EXPLOITATION OF VIRTUAL MACHINE RESOURCE MODIFICATIONS

International Business Ma...

1. A method for automated exploitation of virtual machine resource modifications, the method comprising:deploying, by one or more computer processors, at least one application in a distributed computing environment;
providing, by one or more computer processors, at least one resource of a virtual machine to the at least one application in the distributed computing environment, wherein the at least one resource of the virtual machine provided is recorded in metadata and the at least one application receives the metadata and using the metadata, the at least one application determines how much of the at least one resource of the virtual machine to utilize;
determining, by one or more computer processors, a change to the at least one resource of the virtual machine using a metalayer, wherein the metalayer includes, in the metadata, a factor, and wherein the factor is a level of utilization not to be exceeded for any resource of the at least one resource to protect against overusing the at least one resource of the virtual machine; and
responsive to determining the change to the at least one resource of the virtual machine, modifying, by one or more computer processors, the metadata, wherein the at least one application uses the modified metadata to determine how much of the changed at least one resource of the virtual machine to utilize.
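The metalayer "factor" can be read as a utilization cap recorded alongside the resource amounts in metadata, from which the application derives how much of each resource it may use; after a resource change the metadata is modified and re-read. A minimal sketch under that assumption, with the metadata shape invented for illustration:

```python
# Derive per-resource usable amounts from metadata carrying a utilization
# cap ("factor") that must not be exceeded for any resource.
def usable(metadata):
    """metadata: {'resources': {name: amount}, 'factor': cap in [0, 1]}."""
    cap = metadata['factor']
    return {name: amount * cap for name, amount in metadata['resources'].items()}
```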

US Pat. No. 10,216,537

DISPERSIVE STORAGE AREA NETWORKS

DISPERSIVE NETWORKS, INC....

1. A method for storing data from a first electronic device at a plurality of storage devices of a dispersive storage area network comprising:(a) spawning, at the first electronic device, a first virtual machine that virtualizes network capabilities of the first electronic device such that a first virtual network connection is provided;
(b) spawning, at a first storage server, a second virtual machine that virtualizes network capabilities of the first storage server such that a second virtual network connection is provided;
(c) spawning, at a second storage server, a third virtual machine that virtualizes network capabilities of the second storage server such that a third virtual network connection is provided;
(d) spawning, at a third storage server, a fourth virtual machine that virtualizes network capabilities of the third storage server such that a fourth virtual network connection is provided;
(e) spawning, at a first splitting server, a fifth virtual machine that virtualizes network capabilities of the first splitting server such that a fifth virtual network connection is provided;
(f) generating, at the first electronic device, a first hash for first data to be stored on the dispersive storage area network, and storing the generated first hash at the first electronic device;
(g) communicating, from the first electronic device via the first virtual network connection, one or more packets collectively containing the first data for communication to the first splitting server for storage of the first data on the dispersive storage area network;
(h) receiving, at the first splitting server via the fifth virtual network connection, the one or more packets containing data for storage on the dispersive storage area network;
(i) spawning, at the first splitting server, a sixth virtual machine that virtualizes network capabilities of the first splitting server such that a sixth virtual network connection is provided;
(j) spawning, at the first splitting server, a seventh virtual machine that virtualizes network capabilities of the first splitting server such that a seventh virtual network connection is provided;
(k) spawning, at the first splitting server, an eighth virtual machine that virtualizes network capabilities of the first splitting server such that an eighth virtual network connection is provided;
(l) splitting, at the first splitting server, the first data for storage on the dispersive storage area network;
(m) communicating, from the first splitting server via the sixth virtual network connection, one or more packets for communication to the first storage server representing a first portion of the split data;
(n) receiving, at the first storage server via the second virtual network connection, the one or more packets representing a first portion of the split data;
(o) storing, at the first storage server, the first portion of the split data;
(p) communicating, from the first splitting server via the seventh virtual network connection, one or more packets for communication to the second storage server representing a second portion of the split data;
(q) receiving, at the second storage server via the third virtual network connection, the one or more packets representing a second portion of the split data;
(r) storing, at the second storage server, the second portion of the split data;
(s) communicating, from the first splitting server via the eighth virtual network connection, one or more packets for communication to the third storage server representing a third portion of the split data;
(t) receiving, at the third storage server via the fourth virtual network connection, the one or more packets representing a third portion of the split data;
(u) storing, at the third storage server, the third portion of the split data;
(v) effecting, by the first electronic device, retrieval of the first data;
(w) retrieving, at a second splitting server in response to effecting retrieval of the first data, a plurality of data portions stored on a plurality of storage servers, including retrieving the first portion from the first storage server, retrieving the second portion from the second storage server, and retrieving the third portion from the third storage server;
(x) combining, at the second splitting server, the retrieved plurality of data portions into second data;
(y) communicating, from the second splitting server to the first electronic device, the second data;
(z) generating, at the first electronic device, a second hash for the second data;
(aa) determining, at the first electronic device, whether the stored first data was corrupted by comparing the stored first hash to the generated second hash.
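The split-store-retrieve-verify flow of steps (f) through (aa) can be condensed into a short sketch: split the data into portions, recombine them on retrieval, and compare the hash computed before storage with the hash of the recombined data. SHA-256 is an assumption; the claim does not name a hash function, and the servers here are simulated as plain byte strings.

```python
# Split data into n near-equal portions, recombine, and verify via hashes.
import hashlib

def split(data, n):
    k, m = divmod(len(data), n)
    parts, i = [], 0
    for j in range(n):
        size = k + (1 if j < m else 0)  # first m parts get one extra byte
        parts.append(data[i:i + size])
        i += size
    return parts

def store_and_verify(data, n=3):
    first_hash = hashlib.sha256(data).hexdigest()      # kept at the device
    portions = split(data, n)                          # one per storage server
    combined = b"".join(portions)                      # recombined on retrieval
    second_hash = hashlib.sha256(combined).hexdigest()
    return first_hash == second_hash                   # corruption check
```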

US Pat. No. 10,216,536

SWAP FILE DEFRAGMENTATION IN A HYPERVISOR

VMware, Inc., Palo Alto,...

1. A method, comprising:creating a swap file for storing memory data of a virtual machine executing on a first host, wherein the swap file comprises a plurality of storage blocks including a first storage block;
executing a defragmentation procedure on the swap file while the virtual machine is powered on, the defragmentation procedure comprising:
selecting a first memory page frame of the virtual machine having first memory data that has been swapped out to the first storage block of the swap file;
determining an overall density of the swap file based on a first ratio of a first number of memory page frames stored in the swap file to a second number of memory pages for which space is allocated in the swap file;
determining a density of the first storage block based on a second ratio of a third number of memory page frames stored in the first storage block to a fourth number of memory pages for which space is allocated in the first storage block;
responsive to determining that the density of the first storage block is less than the overall density of the swap file, moving the first memory data from the first storage block to a second storage block; and
updating the first memory page frame with a location of the first memory data in the second storage block.
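The move decision in the defragmentation procedure is a direct comparison of two ratios, and can be sketched as:

```python
# Move a page out of a storage block when the block's density (pages stored /
# pages allocated) is below the swap file's overall density.
def should_move(block_pages, block_alloc, file_pages, file_alloc):
    overall_density = file_pages / file_alloc
    block_density = block_pages / block_alloc
    return block_density < overall_density
```

For example, a block holding 1 of 8 allocated pages (density 0.125) in a file holding 40 of 64 (density 0.625) is a candidate for consolidation.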

US Pat. No. 10,216,535

EFFICIENT MAC ADDRESS STORAGE FOR VIRTUAL MACHINE APPLICATIONS

MEDIATEK INC., Hsin-Chu ...

1. A method, comprising:determining at least one common property each having a respective value commonly shared by a plurality of addresses associated with one or more virtual machines executed on a computing apparatus, each address of the plurality of addresses being different from one another;
generating at least one first field each containing the respective value of a corresponding property of the at least one common property;
generating at least one second field each containing a respective value distinguishably identifying each virtual machine of the one or more virtual machines;
storing, in a memory, the at least one first field and the at least one second field as an address entry representative of the plurality of addresses associated with the one or more virtual machines; and
utilizing a mapping table, which stores an organizationally unique identifier (OUI) of each virtual machine of the one or more virtual machines, along with the memory such that an amount of memory required to store an address associated with a respective one of the one or more virtual machines is reduced,
wherein each address of the plurality of addresses includes an index pointing to a corresponding entry in the mapping table,
wherein the at least one first field comprises a first field that includes a first number of bits indicative of the OUI of each virtual machine of the one or more virtual machines,
wherein the at least one second field comprises one or more second fields each of which corresponds to a respective virtual machine of the one or more virtual machines,
wherein each second field of the one or more second fields comprises an index pointing to the first field and a second number of least significant bits distinguishably identifying the respective virtual machine, and
wherein the first number is greater than the second number.
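The storage scheme above amounts to factoring out the shared 24-bit OUI into a table stored once, while each VM's entry keeps only a table index plus its distinguishing low-order bits. A minimal sketch under the simplifying assumption of a single shared OUI and full 24-bit suffixes (the claim stores only as many least-significant bits as needed to distinguish VMs):

```python
# Compress a set of MAC addresses sharing one OUI: store the OUI once,
# and per-VM keep only (table index, low-order bits).
def compress(macs):
    """macs: list of 48-bit integers sharing one OUI (top 24 bits)."""
    oui_table = [macs[0] >> 24]                  # mapping-table entry for the OUI
    entries = [(0, m & 0xFFFFFF) for m in macs]  # (index, distinguishing bits)
    return oui_table, entries

def expand(oui_table, entry):
    idx, low = entry
    return (oui_table[idx] << 24) | low
```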

US Pat. No. 10,216,534

MOVING STORAGE VOLUMES FOR IMPROVED PERFORMANCE

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:analyzing capacity for a first network locality in a multi-tenant environment, the first network locality including a first subset of resources sharing state information and network interconnection, the first subset of resources including at least one server hosting a virtual machine for serving input and output (I/O) operations for at least one data volume;
identifying a detached data volume hosted by the first subset of resources, the detached data volume disconnected from a corresponding virtual machine for serving I/O operations;
determining that a probability of the data volume being reattached to the corresponding virtual machine is above a specified probability threshold;
determining sufficient capacity for the detached data volume in a second subset of resources corresponding to a second network locality;
causing, by a placement management service, the detached data volume to be hosted by the second subset of resources;
receiving, by the placement management service, a request to place a new data volume in the multi-tenant environment;
causing, by the placement management service, the new data volume to be hosted by the first subset of resources where the corresponding virtual machine for serving I/O operations for the new data volume is provided by the first subset of resources; and
attaching the new data volume to the corresponding virtual machine in the first subset of resources, wherein the new data volume is capable of serving I/O operations for the corresponding virtual machine within the first subset of resources corresponding to the first network locality.

US Pat. No. 10,216,533

EFFICIENT VIRTUAL I/O ADDRESS TRANSLATION

Altera Corporation, San ...

1. A method, comprising:using a network interface controller to monitor a transmit ring, wherein the transmit ring comprises a circular ring data structure that stores descriptors, wherein a descriptor describes a fragment of a packet of data and comprises a guest bus address that provides a virtual memory location of the fragment of the packet of data;
using the network interface controller to determine that a first descriptor describing a first fragment of a first packet of data has been written to the transmit ring based on monitoring the transmit ring;
using the network interface controller to attempt to retrieve a first translation for a first guest bus address of the first descriptor in response to determining that the first descriptor has been written to the transmit ring while a second descriptor describing a second fragment of the first packet of data is written to the transmit ring;
using the network interface controller to determine that the second descriptor has been written to the transmit ring;
using the network interface controller to attempt to retrieve a second translation for a second guest bus address of the second descriptor in response to determining that the second descriptor has been written to the transmit ring; and
using the network interface controller to read the first descriptor and the second descriptor from the transmit ring.

US Pat. No. 10,216,532

MEMORY AND RESOURCE MANAGEMENT IN A VIRTUAL COMPUTING ENVIRONMENT

Intel Corporation, Santa...

1. An apparatus for memory management in a virtual computing environment, comprising:a storage device including a first memory area for a host machine, a second memory area for a guest machine hosted by the host machine, and a cache memory to selectively cache content of the first and second memory areas;
a hardware processor;
memory page comparison logic executed by the hardware processor coupled to the storage device to determine that a first memory page of instructions, stored in the second memory area of the storage device, for the guest machine in the virtual computing environment is identical to a second memory page of instructions, stored in the first memory area of the storage device, for a host machine in the virtual computing environment; and
merge logic executed by the hardware processor to, in response to a determination that the first memory page of instructions is identical to the second memory page of instructions, map the first memory page of instructions to the second memory page of instructions to cause a copy of the second memory page of instructions cached in the cache memory to also serve as a cache copy of the first memory page of instructions.
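The comparison-and-merge step resembles same-page merging: identical guest pages are remapped onto host pages so a single cached copy serves both. A toy sketch with pages modeled as byte strings (the actual claim operates on hardware page mappings):

```python
# Map guest pages that are byte-identical to host pages onto those host
# pages, so one cached copy can serve both machines.
def merge_pages(host_pages, guest_pages):
    """Return {guest_index: host_index} for identical pages."""
    index = {bytes(p): i for i, p in enumerate(host_pages)}
    mapping = {}
    for gi, page in enumerate(guest_pages):
        hi = index.get(bytes(page))
        if hi is not None:
            mapping[gi] = hi  # guest page shares the host page's cache copy
    return mapping
```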

US Pat. No. 10,216,531

TECHNIQUES FOR VIRTUAL MACHINE SHIFTING

NETAPP, INC., Sunnyvale,...

1. A computer-implemented method, comprising:validating by a universal application programming interface (API) a source virtual machine (VM) of a source hypervisor having a first platform, for migrating the source VM to a destination hypervisor with a second platform different from the first platform;
generating a clone of the source VM, prior to migration, by:
creating an empty data object in a destination logical storage unit of the destination hypervisor; and
mapping a source block range used by the source VM to store data to a destination block range of the empty data object without having to create a physical copy of source VM data;
migrating the source VM to the destination hypervisor using the clone;
reconfiguring by the API a network interface of the source VM for use by a destination VM at the destination hypervisor; and converting by the API, prior to initializing the destination VM, a virtual disk used by the source VM from a source format to a destination format with same storage blocks used to store VM data before and after migration of the source VM;
wherein the source VM is migrated to the destination hypervisor by reading meta-data of the source VM; creating an empty destination VM and meta-data on the destination hypervisor according to specification of the destination hypervisor; and creating at the empty destination VM, the clone of the source VM on the hypervisor.

US Pat. No. 10,216,530

METHOD FOR MAPPING BETWEEN VIRTUAL CPU AND PHYSICAL CPU AND ELECTRONIC DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A method for mapping between a virtual central processing unit (CPU) and a physical CPU, the method being applied to a multi-core system, the multi-core system comprising at least two physical CPUs, a virtual machine manager, and at least one virtual machine, the at least one virtual machine comprising at least two virtual CPUs, and the method comprising:obtaining, by the virtual machine manager, a set of to-be-mapped first virtual CPUs from the at least two virtual CPUs in a current time period;
obtaining, from the at least two physical CPUs, a first physical CPU that has the fewest to-be-run tasks;
obtaining, by the virtual machine manager, a first attribute value of each first virtual CPU in the set of first virtual CPUs and a second attribute value of the first physical CPU, the first attribute value of each first virtual CPU representing an attribute of a physical CPU to which the first virtual CPU is mapped in a previous time period, and the second attribute value representing an attribute of the first physical CPU in the previous time period;
obtaining, by the virtual machine manager from all the first attribute values, a target attribute value that matches the second attribute value by:
obtaining, according to the second attribute value and the first attribute value of each first virtual CPU, a similarity value between the second attribute value and the first attribute value of each first virtual CPU;
obtaining a first attribute value corresponding to a similarity value that is in a specified value range in all similarity values; and
using the first attribute value as the target attribute value that matches the second attribute value; and
mapping a target virtual CPU corresponding to the target attribute value to the first physical CPU for running.
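The mapping steps above can be sketched as: pick the physical CPU with the fewest pending tasks, score each to-be-mapped virtual CPU by how closely its previous-period attribute value matches that CPU's attribute value, and map the best match. "Attribute" is modeled here as a single number and the similarity metric is an illustrative assumption.

```python
# Hedged sketch of the claimed vCPU-to-pCPU mapping step.
def map_vcpu(vcpus, pcpus):
    # first physical CPU that has the fewest to-be-run tasks
    target_pcpu = min(pcpus, key=lambda p: p["tasks"])
    # similarity between each vCPU's first attribute value and the
    # pCPU's second attribute value (smaller distance = more similar)
    def similarity(v):
        return -abs(v["attr"] - target_pcpu["attr"])
    target_vcpu = max(vcpus, key=similarity)   # attribute value that best matches
    return target_vcpu["id"], target_pcpu["id"]

vcpus = [{"id": "v0", "attr": 3.0}, {"id": "v1", "attr": 9.5}]
pcpus = [{"id": "p0", "attr": 9.0, "tasks": 1},
         {"id": "p1", "attr": 3.0, "tasks": 4}]
pair = map_vcpu(vcpus, pcpus)   # v1's previous attribute best matches p0
```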

US Pat. No. 10,216,529

METHOD AND SYSTEM FOR SHARING DRIVER PAGES

Virtuozzo International G...

1. A computer-implemented method for sharing driver pages among Containers, the method comprising:on a computer system having a processor, a single operating system (OS) and a first instance of a dedicated system driver installed and performing dedicated system services, instantiating a plurality of Containers that virtualize the OS, wherein code and data of the first instance of the dedicated system driver are loaded from an image into a plurality of pages arranged in a virtual memory, and
instantiating a second instance of the dedicated system driver upon a first request from one of the Containers for dedicated system services by:
(a) loading, from the image, pages of the second instance into a physical memory and allocating virtual memory pages for the second instance;
(b) associating the second instance with the first instance and acquiring virtual addresses of identical pages of the first instance compared to the second instance;
(c) mapping the virtual addresses of the identical pages of the second instance to physical pages to which virtual addresses of the corresponding identical pages of the first instance are mapped, while protecting the physical pages from modification;
(d) wherein virtual addresses of non-identical pages of the second instance remain mapped to the physical pages of the second instance;
(e) releasing the physical memory occupied by the identical physical pages of the second instance; and
(f) starting the second instance for responding to requests for the dedicated system services from the one of the Containers.
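Steps (a) through (e) can be modeled in a few lines: load the second driver instance, find pages whose contents are identical to the first instance's, remap those virtual pages onto the first instance's physical frames, and release the duplicate frames. Physical memory is a dict here and all names are hypothetical.

```python
# Minimal model of driver-page sharing between two driver instances.
def share_pages(phys, first_map, second_map):
    """Remap identical pages of the second instance onto the first's frames."""
    first_by_content = {phys[f]: f for f in first_map.values()}
    for vaddr, frame in list(second_map.items()):
        content = phys[frame]
        if content in first_by_content:                   # identical page found
            second_map[vaddr] = first_by_content[content] # remap, copy-on-write
            del phys[frame]                               # release duplicate frame
    return phys, second_map

phys = {0: "code-A", 1: "data-X", 10: "code-A", 11: "data-Y"}
first = {"0x1000": 0, "0x2000": 1}      # first driver instance's page table
second = {"0x1000": 10, "0x2000": 11}   # second instance, loaded from same image
phys, second = share_pages(phys, first, second)
```

Non-identical pages (frame 11 above) keep their own physical backing, matching step (d).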

US Pat. No. 10,216,528

DYNAMICALLY LOADED PLUGIN ARCHITECTURE

Bitvore Corp., Los Angel...

1. A non-transitory computer-readable storage medium storing instructions which, when executed by one or more processors, provides an architecture for dynamically loading plugins, the architecture comprising:a parent context comprising data to configure one or more reusable software components;
a plugin repository operable to store a first plugin and a second plugin;
a first child context produced dynamically when the first plugin is loaded, the first child context being associated with a first version of the first plugin, the first child context inheriting the one or more reusable software components from the parent context; and
a second child context produced dynamically when the first plugin is loaded a second time, the second child context being associated with a second version of the first plugin, the second child context inheriting the one or more reusable software components from the first child context, wherein a violation is indicated if the first plugin returns a plugin object that belongs to the second plugin.

US Pat. No. 10,216,527

AUTOMATED SOFTWARE CONFIGURATION MANAGEMENT

Cisco Technology, Inc., ...

1. A method, comprising:detecting, by an agent at runtime, loading of a compiled file in an application, the application being one of a plurality of applications that provide a distributed business transaction;
responsive to the detecting, identifying, by the agent, parts of the compiled file;
performing, by the agent, a hash of the parts of the compiled file to generate corresponding hash values;
constructing, by the agent, a hash tree from the generated hash values;
determining, by the agent, whether a previously constructed hash tree from a previously detected load of the file is available to perform a comparison;
comparing, by the agent, the constructed hash tree against the previously constructed hash tree to identify changes to blocks of code inside the compiled file, wherein the identified changes indicate a change in the distributed business transaction by tracking one or more changes to the blocks of code inside the compiled file; and
reporting, by the agent, results of the comparison.
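The agent's comparison can be sketched as: hash fixed-size parts of the compiled file, build a simple two-level hash tree, and diff it against the tree from the previous load to locate changed blocks. The block size and tree shape are illustrative assumptions.

```python
import hashlib

# Build a simple hash tree over fixed-size parts of a compiled file.
def hash_tree(data, block=4):
    leaves = [hashlib.sha256(data[i:i + block]).hexdigest()
              for i in range(0, len(data), block)]
    root = hashlib.sha256("".join(leaves).encode()).hexdigest()
    return {"root": root, "leaves": leaves}

# Compare against the previously constructed tree to find changed blocks.
def changed_blocks(old, new):
    if old["root"] == new["root"]:
        return []                                 # file unchanged
    return [i for i, (a, b) in enumerate(zip(old["leaves"], new["leaves"]))
            if a != b]

v1 = hash_tree(b"classfile-v1")
v2 = hash_tree(b"classfile-v2")
diff = changed_blocks(v1, v2)    # only the last block differs
```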

US Pat. No. 10,216,526

CONTROLLING METHOD FOR OPTIMIZING A PROCESSOR AND CONTROLLING SYSTEM

MEDIATEK INC., Hsin-Chu ...

1. A controlling method for optimizing a processor, comprising:determining an actual utilization state of the processor in a first period;
extracting an actual utilization value from the determined actual utilization state to evaluate the overall utilization of the processor after the step of determining the actual utilization state;
determining an integral parameter and a derivative parameter by a PID (Proportional Integral Derivative) governor to obtain a dynamic adjustment value based on the actual utilization state, wherein the integral parameter corresponds to a low frequency, and the derivative parameter corresponds to a high frequency;
determining a proportional parameter by the PID governor to obtain the dynamic adjustment value based on the actual utilization state; and
adjusting performance and/or power of the processor in a second period by the PID governor based on the actual utilization state in the first period, wherein the second period is after the first period, the proportional parameter is determined from recent error values of the first period, the integral parameter is determined from error values in a long time of the first period, and the derivative parameter is determined from error values in a short time of the first period, and ten times of the short time is less than the long time.
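The PID relationship in the claim can be illustrated as follows: the proportional term uses the most recent error, the integral term averages over a long window (the low-frequency trend), the derivative term looks at a short window (the high-frequency change), and ten times the short window must still be less than the long window. The gains and window lengths are assumptions.

```python
# Illustrative PID governor over a history of utilization errors.
SHORT, LONG = 2, 30            # 10 * SHORT < LONG, per the claim
KP, KI, KD = 0.5, 0.1, 0.2     # hypothetical gains

def pid_adjust(errors):
    """errors: utilization error samples from the first period, newest last."""
    p = KP * errors[-1]                                     # recent error
    i = KI * sum(errors[-LONG:]) / min(len(errors), LONG)   # long-window trend
    d = KD * (errors[-1] - errors[-SHORT])                  # short-window change
    return p + i + d                                        # dynamic adjustment

assert 10 * SHORT < LONG
history = [0.1] * 28 + [0.2, 0.4]   # utilization rising near end of period
adj = pid_adjust(history)
```

The sign and magnitude of `adj` would then drive the performance/power adjustment applied in the second period.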

US Pat. No. 10,216,525

VIRTUAL DISK CAROUSEL

American Megatrends, Inc....

1. A computer-implemented method for providing an automated installation of a plurality of operating systems during a boot of at least one computer, the method comprising performing computer-implemented operations for:receiving, by a bridge device, a request to expose one of a plurality of operating systems stored on a virtual disk carousel to at least one computer during a boot of the at least one computer, wherein the at least one computer is connected to the bridge device by a single first USB port and the bridge device is connected to the virtual disk carousel by a single second USB port; and
in response to the bridge device receiving the request for the selected operating system, the bridge device requesting the selected operating system from the virtual disk carousel through the single second USB port, the bridge device receiving the selected operating system from the virtual disk carousel in a standard disk image format through the single second USB port, the bridge device translating the selected operating system received from the virtual disk carousel in the standard disk image format to one of a plurality of standard mass storage device formats prior to transmission to the computer, wherein the selected standard mass storage device format is identified to the bridge device in a header in a disk image of the selected operating system, and the bridge device responding to the request with the selected operating system received from the virtual disk carousel by way of the selected standard mass storage device format exposed to the computer by the bridge device through the single first USB port.

US Pat. No. 10,216,524

SYSTEM AND METHOD FOR PROVIDING FINE-GRAINED MEMORY CACHEABILITY DURING A PRE-OS OPERATING ENVIRONMENT

Dell Products, LP, Round...

1. An information handling system, comprising:a memory including a cache; and
a processor to execute pre-operating system (pre-OS) code before the processor executes boot loader code, the pre-OS code to:
set up a Memory Type Range Register (MTRR) to define a first memory type for a memory region of the memory, wherein the first memory type specifies a first cacheability setting on the processor for data from the memory region;
set up a page attribute table (PAT) with an entry to define a second memory type for the memory region, wherein the second memory type specifies a second cacheability setting on the processor for data from the memory region;
disable the PAT; and
pass execution by the processor to the boot loader code.

US Pat. No. 10,216,523

SYSTEMS AND METHODS FOR IMPLEMENTING CONTROL LOGIC

General Electric Company,...

1. A system comprising:one or more hardware processors configured to implement a control-logic-agnostic virtual control engine to control a controlled system by executing control logic defined in attributed data, the control logic comprising a plurality of control nodes, and the attributed data comprising, for each of the control nodes, an attributed data item comprising a sample-data class structure and an attributes class structure, the attributes class structure comprising metadata specifying an output variable, one or more input variables, and a control operator generating the output variable from the one or more input variables; and
an attributed-data dictionary stored in non-transitory computer memory and configured to interpret the attributed data in response to a service call from the virtual control engine and to return a control-engine-specific interpretation to the virtual control engine,
wherein the control-engine-specific interpretation of each attributed data item comprises program code that, when instantiated and executed, implements the control operator specified in that data item, and
wherein the virtual control engine, upon execution of the control logic to control a controlled system, writes values of the output variables generated by the control operators of the plurality of control nodes to the sample-data class structures of the respective attributed data items.

US Pat. No. 10,216,522

TECHNOLOGIES FOR INDIRECT BRANCH TARGET SECURITY

Intel Corporation, Santa...

1. A computing device for executing an indirect branch instruction, the computing device comprising:a processor comprising:
an activation record key register; and
an indirect branch target module to: (i) load an encrypted indirect branch target, (ii) decrypt the encrypted indirect branch target using an activation record key stored in the activation record key register to generate an indirect branch target, and (iii) perform a jump to the indirect branch target.
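The security property can be seen in a toy model: the branch target is decrypted with the activation-record key before the jump, so a target forged without the key decrypts to garbage. XOR stands in for the (unspecified) cipher and all values are illustrative.

```python
# Toy model of key-protected indirect branch targets.
def decrypt_target(encrypted, key):
    return encrypted ^ key                     # (ii) decrypt with AR key

ACTIVATION_RECORD_KEY = 0x5A5A
valid_target = 0x4000
encrypted = valid_target ^ ACTIVATION_RECORD_KEY   # produced at call set-up

decrypted = decrypt_target(encrypted, ACTIVATION_RECORD_KEY)
# an attacker-supplied plain address does not survive decryption:
forged = decrypt_target(valid_target, ACTIVATION_RECORD_KEY)
```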

US Pat. No. 10,216,521

ERROR MITIGATION FOR RESILIENT ALGORITHMS

NVIDIA Corporation, Sant...

1. A computer-implemented method, comprising:receiving, by a processing unit, a set of program instructions including a first program instruction that is responsive to error detection, wherein the first program instruction includes an opcode;
detecting an error in a value of a first operand of the first program instruction;
determining that error coping execution is selectively enabled for the first program instruction;
replacing the value for the first operand with a substitute value; and
executing, by the processing unit, the first program instruction including the opcode and the substitute value.
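The error-coping behavior can be sketched as: if the instruction opts in and an error is detected in an operand's value, execution proceeds with a substitute value instead of faulting. The substitute (zero) and the opcode set are illustrative assumptions.

```python
# Sketch of selective operand error coping for resilient algorithms.
SUBSTITUTE = 0   # hypothetical substitute value

def execute(opcode, a, b, a_error=False, coping_enabled=False):
    if a_error:
        if not coping_enabled:
            raise RuntimeError("uncorrectable operand error")
        a = SUBSTITUTE                       # replace the corrupt operand value
    return {"add": a + b, "mul": a * b}[opcode]

# Coping enabled: the corrupt operand is zeroed and execution continues.
result = execute("add", a=7, b=5, a_error=True, coping_enabled=True)
```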

US Pat. No. 10,216,520

COMPRESSING INSTRUCTION QUEUE FOR A MICROPROCESSOR

VIA TECHNOLOGIES, INC., ...

1. A compressing instruction queue for a microprocessor, the microprocessor including an instruction translator having Q outputs providing up to P microinstructions per clock cycle in any one of multiple combinations of the Q outputs while maintaining program order in which Q is greater than or equal to P, wherein said compressing instruction queue comprises:a storage queue comprising a matrix of storage locations including N rows and M columns for storing microinstructions of the microprocessor in sequential order, wherein said sequential order comprises a zigzag pattern from a first column to a last column of each row and in only one direction from each row to a next adjacent row of said storage queue, and wherein N and M are each greater than one; and
a redirect logic circuit that is configured to receive and write said up to P microinstructions per cycle of a clock signal into sequential storage locations of said storage queue in said sequential order beginning with a next available sequential storage location that is next to a last storage location that was written in said storage queue in a last cycle;
wherein P>M, and wherein said redirect logic circuit writes a first of said up to P microinstructions into any of said M columns of said storage queue in which said next available sequential storage location is located, and to write any remaining ones of said up to P microinstructions following said zigzag pattern in each cycle; and
wherein said redirect logic circuit comprises:
a first select logic circuit that is configured to select said up to P microinstructions from among the Q outputs of the instruction translator, wherein said first select logic circuit comprises a first set of P multiplexers including a multiplexer in each of P positions in which each multiplexer has inputs receiving only those microinstructions that are allowed in a corresponding position of said each multiplexer; and
a second select logic circuit that reorders said up to P microinstructions according to sequential storage locations of said storage queue beginning with a column position of said next available sequential storage location in said storage queue;
wherein said second select logic circuit comprises a second set of P multiplexers, each having inputs coupled to outputs of a selected subset of said first set of P multiplexers; and
wherein said redirect logic circuit comprises a redirect controller that controls said second set of P multiplexers based on which of said M columns of said storage queue includes said next available sequential storage location.
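The zigzag write order can be modeled as filling an N x M storage queue left-to-right within each row, wrapping to the next row, starting at the next free slot after the last write. N, M, and P below are illustrative; the claim additionally requires P > M.

```python
# Model of the zigzag fill order of the compressing instruction queue.
N, M, P = 4, 2, 3                              # P > M, per the claim

def write_zigzag(queue, start, uops):
    """Write uops into flat slots start, start+1, ... in zigzag order."""
    for k, uop in enumerate(uops):
        row, col = divmod(start + k, M)        # column fast, row slow
        queue[row][col] = uop
    return start + len(uops)                   # next available sequential slot

queue = [[None] * M for _ in range(N)]
nxt = write_zigzag(queue, 0, ["u0", "u1", "u2"])   # spans rows 0 and 1
nxt = write_zigzag(queue, nxt, ["u3"])
```

A write of up to P microinstructions may thus begin in any column and wrap across rows, which is what the two levels of multiplexing in the claim arrange in hardware.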

US Pat. No. 10,216,519

MULTICOPY ATOMIC STORE OPERATION IN A DATA PROCESSING SYSTEM

International Business Ma...

1. A method of data processing in a data processing system implementing a weak memory model, wherein the data processing system includes a plurality of processing units coupled to an interconnect fabric, the method comprising:in response to executing a multicopy atomic store instruction in a processor core, an initiating processing unit broadcasting a store request on the interconnect fabric to a plurality of processing units to obtain coherence ownership of a target cache line of the multicopy atomic store instruction;
the initiating processing unit posting a kill request to at least one of the plurality of processing units to request invalidation of a copy of the target cache line held by said at least one of the plurality of processing units;
in response to successful posting of the kill request, the initiating processing unit broadcasting a store complete request on the interconnect fabric to enforce completion of the invalidation of the copy of the target cache line by said at least one of the plurality of processing units; and
in response to the store complete request receiving a coherence response indicating success, permitting an update to the target cache line requested by the multicopy atomic store instruction to be visible to all of the plurality of processing units.

US Pat. No. 10,216,518

CLEARING SPECIFIED BLOCKS OF MAIN STORAGE

International Business Ma...

1. A computer system for a data processing system comprising:one or more computer processors;
one or more non-transitory computer readable storage media;
program instructions stored on the one or more non-transitory computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising:
program instructions to determine, from an instruction stream, an extended asynchronous data mover (EADM) start subchannel instruction, wherein the EADM start subchannel instruction comprises: a subsystem identification operand and an EADM operation request block operand both configured to designate a location of an EADM operation request block;
program instructions to execute the EADM start subchannel instruction, wherein executing the EADM start subchannel instruction comprises notifying a system assist processor (SAP) that includes:
an architecture that yields a same performance capability as a CPU on which operating systems and application programs execute; and
wherein the SAP includes program instructions to perform memory clearing operations in a same manner as the CPU on which operating systems and application programs execute;
program instructions to receive, by the SAP, an asynchronous data mover (ADM) request block;
program instructions to determine, by the SAP, whether the ADM request block specifies a main-storage-clearing operation command, wherein the main-storage-clearing operation command operates asynchronously from a program on a CPU;
responsive to determining the ADM request block specifies the main-storage-clearing operation command, program instructions to obtain one or more move specification blocks (MSBs), wherein an address associated with the one or more MSBs is designated by the ADM request block;
program instructions to determine, by the SAP, based on the one or more MSBs, an address and a size of a main storage block to clear;
responsive to determining the address and the size of the main storage block, program instructions to clear, by the SAP, the main storage block, wherein if the SAP is associated with a predetermined time period for clearing the main storage block, then a set of partially completed instructions are placed on a queue by the SAP, to continue the clearing of the main storage block at a later time;
responsive to clearing, by the SAP, the main storage block, program instructions to notify asynchronously, the CPU, when the SAP successfully completes the main-storage-clearing operation command;
responsive to determining that the main-storage-clearing operation command did not complete successfully, program instructions to provide an indication, in an ADM response block, of an error associated with at least one of: a request block, the one or more MSBs, and a memory access;
responsive to executing the EADM start subchannel instruction and notifying the SAP, program instructions to monitor the main storage clearing operations;
responsive to monitoring the main storage clearing operations, program instructions to receive a set of frequency statistics associated with the main-storage clearing operations, wherein the set of frequency statistics comprises:
a quantity of blocks cleared;
a size of the blocks cleared; and
a reason for clearing the specified blocks;
responsive to receiving the set of frequency statistics, program instructions to determine whether it is more efficient to use a combination of both the CPU memory cleaning operation and the SAP main storage cleaning operation, to clear the main storage block, wherein determining whether the combination of both the CPU and the SAP is more efficient comprises:
program instructions to analyze the set of frequency statistics and a current workload in the CPU; and
responsive to determining it is more efficient to use a combination of both the CPU and the SAP to clear the main storage block, continuously, at predetermined intervals, program instructions to analyze the set of frequency statistics to identify a breakpoint by comparing if it is more effective to use the CPU to when it is more effective to use the SAP for main storage clearing operations.

US Pat. No. 10,216,517

CLEARING SPECIFIED BLOCKS OF MAIN STORAGE

International Business Ma...

1. A non-transitory computer readable storage medium and program instructions stored on the non-transitory computer readable storage medium, the program instructions comprising:program instructions to determine, from an instruction stream, an extended asynchronous data mover (EADM) start subchannel instruction, wherein the EADM start subchannel instruction comprises: a subsystem identification operand and an EADM operation request block operand both configured to designate a location of an EADM operation request block;
program instructions to execute the EADM start subchannel instruction, wherein executing the EADM start subchannel instruction comprises notifying a system assist processor (SAP) that includes:
an architecture that yields a same performance capability as a CPU on which operating systems and application programs execute; and
wherein the SAP includes program instructions to perform memory clearing operations in a same manner as the CPU on which operating systems and application programs execute;
program instructions to receive, by the SAP, an asynchronous data mover (ADM) request block;
program instructions to determine, by the SAP, whether the ADM request block specifies a main-storage-clearing operation command, wherein the main-storage-clearing operation command operates asynchronously from a program on a CPU;
responsive to determining the ADM request block specifies the main-storage-clearing operation command, program instructions to obtain one or more move specification blocks (MSBs), wherein an address associated with the one or more MSBs is designated by the ADM request block;
program instructions to determine, by the SAP, based on the one or more MSBs, an address and a size of a main storage block to clear;
responsive to determining the address and the size of the main storage block, program instructions to clear, by the SAP, the main storage block, wherein if the SAP is associated with a predetermined time period for clearing the main storage block, then a set of partially completed instructions are placed on a queue by the SAP, to continue the clearing of the main storage block at a later time;
responsive to clearing, by the SAP, the main storage block, program instructions to notify asynchronously, the CPU, when the SAP successfully completes the main-storage-clearing operation command;
responsive to determining that the main-storage-clearing operation command did not complete successfully, program instructions to provide an indication, in an ADM response block, of an error associated with at least one of: a request block, the one or more MSBs, and a memory access;
responsive to executing the EADM start subchannel instruction and notifying the SAP, program instructions to monitor the main storage clearing operations;
responsive to monitoring the main storage clearing operations, program instructions to receive a set of frequency statistics associated with the main-storage clearing operations, wherein the set of frequency statistics comprises:
a quantity of blocks cleared;
a size of the blocks cleared; and
a reason for clearing the specified blocks;
responsive to receiving the set of frequency statistics, program instructions to determine whether it is more efficient to use a combination of both the CPU memory cleaning operation and the SAP main storage cleaning operation, to clear the main storage block, wherein determining whether the combination of both the CPU and the SAP is more efficient comprises:
program instructions to analyze the set of frequency statistics and a current workload in the CPU; and
responsive to determining it is more efficient to use a combination of both the CPU and the SAP to clear the main storage block, continuously, at predetermined intervals, program instructions to analyze the set of frequency statistics to identify a breakpoint by comparing if it is more effective to use the CPU to when it is more effective to use the SAP for main storage clearing operations.

US Pat. No. 10,216,516

FUSED ADJACENT MEMORY STORES

Intel Corporation, Santa...

15. A method comprising:identifying a pair of store instructions among a plurality of instructions in an instruction queue, wherein the pair of store instructions comprise a first store instruction and a second store instruction, wherein a first data of the first store instruction corresponds to a first memory region of a memory, the first memory region adjacent to a second memory region of the memory, and wherein a second data of the second store instruction corresponds to the second memory region;
responsive to determining that the first store instruction and the second store instruction correspond to adjacent memory regions, fusing the first store instruction with the second store instruction resulting in a fused store instruction; and
determining whether the first data and the second data is to be stored in one of an ascending storage order or a descending storage order based on a first operand and a second operand of the first instruction and a third operand and a fourth operand of the second instruction, wherein the first, second, third, and fourth operands are different than the first and second data.
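The fusion decision can be sketched as: two stores whose target regions are adjacent are fused, and the combined write order (ascending vs. descending) is inferred from the instructions' address operands rather than from the data. The operand encoding below is an assumption.

```python
# Sketch of fusing two stores to adjacent memory regions.
def try_fuse(store1, store2, width=4):
    base1, base2 = store1["addr"], store2["addr"]
    if base2 == base1 + width:                 # second region just above first
        return {"addr": base1, "data": store1["data"] + store2["data"],
                "order": "ascending"}
    if base1 == base2 + width:                 # second region just below first
        return {"addr": base2, "data": store2["data"] + store1["data"],
                "order": "descending"}
    return None                                # not adjacent: no fusion

fused = try_fuse({"addr": 0x100, "data": b"\x01\x02\x03\x04"},
                 {"addr": 0x104, "data": b"\x05\x06\x07\x08"})
```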

US Pat. No. 10,216,515

PROCESSOR LOAD USING A BIT VECTOR TO CALCULATE EFFECTIVE ADDRESS

Oracle International Corp...

1. An apparatus, comprising:a register configured to store a bit vector, wherein the bit vector includes a plurality of elements that occupy N ordered element positions, wherein N is a positive integer; and
circuitry configured to:
identify a particular element position of the bit vector, wherein a value of an element occupying the particular element position matches a first value;
determine an address value using the particular element position of the bit vector and a base address; and
store a data value in the particular element position of the bit vector based on results of a comparison between a second value and data loaded from a location in a memory specified by the address value.
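The claimed load can be sketched as: find an element position in the bit vector whose value matches a first value (a set bit here), derive an effective address from that position and a base address, load from memory, compare against a second value, and store the comparison result back into that position. The element width and memory model are assumptions.

```python
# Sketch of a load whose effective address is derived from a bit vector.
def bitvector_load(bits, base, memory, match=1, second_value=42, width=4):
    pos = bits.index(match)                        # matching element position
    addr = base + pos * width                      # effective address
    loaded = memory[addr]                          # load from that location
    bits[pos] = 1 if loaded == second_value else 0 # store comparison result
    return addr, loaded

bits = [0, 0, 1, 0]
memory = {0x1000: 7, 0x1004: 7, 0x1008: 42, 0x100C: 7}
addr, loaded = bitvector_load(bits, 0x1000, memory)
```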

US Pat. No. 10,216,514

IDENTIFICATION OF A COMPONENT FOR UPGRADE

Hewlett Packard Enterpris...

1. A method comprising:receiving a first topology map that describes a desired software configuration for multiple components in a system, wherein the desired software configuration includes a desired software version;
accessing a second topology map that describes a current software configuration for the multiple components in the system, wherein the current software configuration includes a current software version;
determining based on the first topology map and the second topology map that the desired software configuration differs from the current software configuration;
responsive to the determination that the desired software configuration differs from the current software configuration, identifying which of the multiple components to upgrade, including:
identifying redundant components according to a common functionality from among the multiple components to upgrade; and
identifying an order of upgrade for each of the redundant components by prioritizing the redundant components that have dependencies on other components;
automating an upgrade at the identified component among the multiple components by upgrading a current software configuration of the identified component from the current software version to the desired software version.
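The identification steps can be sketched as: diff the desired topology map against the current one, keep only components whose versions differ, and order the upgrade so that components that depend on others go first. The map shapes and the priority rule are illustrative assumptions.

```python
# Sketch of diffing two topology maps and ordering the upgrade.
def plan_upgrade(desired, current, depends_on):
    # components whose current version differs from the desired version
    stale = [c for c, v in desired.items() if current.get(c) != v]
    # prioritize components that have dependencies on other components
    return sorted(stale, key=lambda c: len(depends_on.get(c, ())), reverse=True)

desired = {"web-1": "2.0", "web-2": "2.0", "db": "2.0"}   # first topology map
current = {"web-1": "1.0", "web-2": "2.0", "db": "1.0"}   # second topology map
deps = {"web-1": ["db"]}                                  # web-1 depends on db
order = plan_upgrade(desired, current, deps)
```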

US Pat. No. 10,216,513

PLUGIN FOR MULTI-MODULE WEB APPLICATIONS

Oracle International Corp...

1. A non-transitory computer-readable storage medium carrying program instructions thereon, the instructions when executed by one or more processors cause the one or more processors to perform operations comprising:determining, at a server, dependencies associated with each software module of a process defined by software modules, wherein each of the software modules is associated with a respective JavaScript Object Notation (JSON) file that lists a unique set of the dependencies specific to each of the software modules;
aggregating the dependencies associated with the software modules;
storing the aggregated dependencies in one or more configuration files, wherein a configuration file includes one or more dependency paths associated with each of the dependencies and includes at least one internal dependency unique to each of the software modules; and
updating one or more of the dependency paths in the configuration files based on one or more changes to one or more of the dependency paths.
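The server-side steps above can be sketched as: read each module's JSON descriptor, take the union of their dependency lists, and emit one configuration mapping each dependency to a path that can later be updated. The descriptor fields and path scheme are illustrative assumptions.

```python
import json

# Aggregate per-module JSON dependency lists into one configuration.
def build_config(module_descriptors):
    deps = {}
    for text in module_descriptors:
        for name in json.loads(text)["dependencies"]:
            deps.setdefault(name, f"libs/{name}.js")   # default dependency path
    return {"paths": deps}

# Update a dependency path when it changes upstream.
def update_path(config, name, new_path):
    config["paths"][name] = new_path
    return config

mods = ['{"dependencies": ["jquery", "ojs"]}', '{"dependencies": ["ojs"]}']
config = build_config(mods)
config = update_path(config, "ojs", "libs/v2/ojs.js")
```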

US Pat. No. 10,216,512

MANAGED MULTI-CONTAINER BUILDS

Amazon Technologies, Inc....

1. A computer-implemented method for managing multi-container builds, comprising:under control of one or more computer systems configured with executable instructions,
receiving, at a software build management service, a software build task description, the software build task description specifying a software object to build, the software build task description including a set of environments, each environment of the set of environments specifying a corresponding set of parameters usable to build a corresponding version of the software object;
instantiating, for each environment of the set of environments, a corresponding container of a set of containers on a build instance of one or more build instances, each build instance of the one or more build instances associated with the software build task, the corresponding container based at least in part on one or more parameters of the set of parameters;
for a selected build state of a set of build states of the software build management service:
sending, for each environment of the set of environments, a command to the corresponding container of the set of containers, the command based at least in part on the build state and the environment;
receiving, from the corresponding container of each environment of the set of environments, a corresponding response to the command;
waiting until the corresponding response to the command is received from the corresponding container for all environments of the set of environments; and
determining the next build state of the set of build states; and
providing a build status of the software build task, the build status indicating whether the software build task completed.
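The build-state loop can be sketched as: for each state, send the state-specific command to every environment's container, wait until all responses arrive, then advance to the next state. Containers are modeled as callables; the states and response format are illustrative assumptions.

```python
# Sketch of driving one container per environment through build states.
STATES = ["fetch", "compile", "package"]

def run_build(environments):
    # one container per environment (modeled as a callable)
    containers = {env: (lambda e: lambda cmd: f"{e}:{cmd}:ok")(env)
                  for env in environments}
    for state in STATES:                       # selected build state
        # send the command to every container and collect all responses
        responses = {env: c(state) for env, c in containers.items()}
        if not all(r.endswith(":ok") for r in responses.values()):
            return "FAILED"                    # a container rejected the command
    return "SUCCEEDED"                         # build status of the task

status = run_build(["linux-x86", "linux-arm"])
```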

US Pat. No. 10,216,511

METHOD APPARATUS AND SYSTEMS FOR ENABLING DELIVERY AND ACCESS OF APPLICATIONS AND SERVICES

1. A non-transitory computer-readable storage medium having at least a computer-readable program stored therein, said computer-readable program comprising a first set of instructions, at least one or more of accessing and executing said first set of instructions by a processor associated with a generator device enables said generator device to at least:a. enabling a determination of an audio-visual content comprising at least one or more of a sample of an audio content and a sample of a visual content, wherein said determination is enabled due to at least a capture of a portion of a one or more of the sample of the audio content and the sample of the visual content, by at least a one or more sensors associated with said generator device;
b. determining a tag related information based on at least a portion of one or more of:
the sample of the audio content;
the sample of the visual content;
c. enabling transmission of at least said tag related information on a communication interface associated with said generator device, wherein said transmission enables a one or more computing devices to at least:
i. determining a first contextual tag, wherein said first contextual tag comprises information determined based on at least a portion of said tag related information;
ii. determining an application identification information based on at least
a portion of said first contextual tag, said application identification information
identifying an application, wherein at least a portion of at least one of said application and said application identification information can be one or
more of identified, determined and selected based on at least a portion of information in an application repository, said application repository allows
data associated with at least one or more of an application and an application identification information to be:
i. added to said application repository;
ii. updated in said application repository;
iii. modified in said application repository;
iv. deleted from said application repository;
iii. enabling an activation of said application, wherein said activation
comprises enabling a first execution of a second set of instructions associated with said application;
d. receiving a first plurality of information on said communication interface associated with said generator device;
e. determining, based on at least a portion of said first plurality of information, at least a one or more of:
i. a second contextual tag; and
ii. a context value;
f. determining a third set of instructions based on at least a portion of said second contextual tag; and
g. enabling a second execution of at least said third set of instructions on said processor associated with said generator device, wherein said second execution enables said processor to at least one or more of processing and accessing at least a portion of said context value.
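The repository and tag-resolution flow in the claim can be sketched as follows. This is a minimal illustration under assumed names (`ApplicationRepository`, `derive_tag_info`, `handle_tag` are all hypothetical), modeling the repository's add/update/modify/delete operations and the resolution of a contextual tag to an application identification:

```python
class ApplicationRepository:
    """Claim element: repository allowing application records to be
    added, updated/modified, and deleted."""
    def __init__(self):
        self._apps = {}

    def add(self, app_id, record):
        self._apps[app_id] = record

    def update(self, app_id, record):   # covers "updated" / "modified"
        self._apps[app_id] = {**self._apps.get(app_id, {}), **record}

    def delete(self, app_id):
        self._apps.pop(app_id, None)

    def select_by_tag(self, contextual_tag):
        # "identified, determined and selected based on ... information
        # in an application repository"
        for app_id, record in self._apps.items():
            if contextual_tag in record.get("tags", ()):
                return app_id
        return None


def derive_tag_info(sample: bytes) -> str:
    """Stand-in for deriving tag-related information from a captured
    audio/visual sample (here: a trivial checksum fingerprint)."""
    return f"tag-{sum(sample) % 997}"


def handle_tag(repo, tag_info):
    """Computing-device side: determine a first contextual tag, look up
    the application, and return the second contextual tag plus a
    context value for the generator device."""
    first_tag = tag_info.upper()        # hypothetical tag normalization
    app_id = repo.select_by_tag(first_tag)
    return app_id, f"{first_tag}-FOLLOWUP", {"app": app_id}
```

The checksum fingerprint stands in for real audio/visual matching; only the repository CRUD surface and the tag-to-application lookup track the claim.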

US Pat. No. 10,216,510

SILENT UPGRADE OF SOFTWARE WITH DEPENDENCIES

AIRWATCH LLC, Atlanta, G...

1. A non-transitory computer-readable medium embodying program instructions executable in a client device for performing a silent upgrade of a first client application on the client device that, when executed, cause the client device to:identify, by a second client application, that a new version of the first client application is available that upgrades a current version of the first client application to the new version, wherein the new version is required for a state of the client device to be in compliance with at least one compliance rule;
download, by the second client application, an installation package file for the new version of the first client application;
search, by the second client application, a registry of an operating system installed on the client device using a unique identifier identified for the first client application to locate information associated with the current version of the first client application in the registry;
identify, by the second client application, a file path for the current version of the first client application from the registry;
modify, by the second client application, the installation package file using the information in the registry, wherein the installation package file is modified by performing:
renaming a file name of the installation package file to be the same as a name of an initial installation package file used to install the first client application; and
moving the installation package file to a directory in the file path of the current version of the first client application; and
generate and execute, by the second client application, a command line query that causes a default installer application executable on the client device to perform a silent upgrade of the first client application, wherein the silent upgrade comprises replacing the current version of the first client application with the new version of the first client application without user interaction.

US Pat. No. 10,216,509

CONTINUOUS AND AUTOMATIC APPLICATION DEVELOPMENT AND DEPLOYMENT

TUPL, INC., Bellevue, WA...

1. A system, comprising:one or more processors; and
memory including a plurality of computer-executable components, the plurality of computer-executable components comprising:
a continuous integration component that generates a completed version of a deployment project in a development environment by concurrently:
generating an updated second version of a first project element via a first pipeline, and
integrating, via an integration pipeline, a first version of the first project element with a first version of a second project element to generate the completed version,
wherein the integration pipeline performs the integrating under command of a version controller that tracks versions of applications, application components, and infrastructures to enable concurrent compilation and testing of multiple versions of the applications, application components, and infrastructures;
an orchestration component that configures a production environment that includes at least one computing node to execute a development image that is created from the completed version of the deployment project, the production environment being mirrored by the development environment; and
an automatic deployment component that deploys a production image that is a copy of the development image into the production environment for execution,
wherein the first project element or the second project element is one of an application component, at least one portion of an application, or at least one portion of an infrastructure that supports the execution of the application.
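The concurrency at the heart of this claim can be sketched as follows: a version controller tracks element versions so one pipeline builds an updated second version of an element while the integration pipeline combines the pinned first versions. All names (`VersionController`, the element names) are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

class VersionController:
    """Tracks versions of project elements to enable concurrent
    compilation of multiple versions."""
    def __init__(self):
        self.versions = {}          # element -> list of known versions

    def register(self, element, version):
        self.versions.setdefault(element, []).append(version)

def build_update(vc, element):
    """First pipeline: produce the next version of one project element."""
    new_version = max(vc.versions[element]) + 1
    vc.register(element, new_version)
    return new_version

def integrate(vc, elements, pinned):
    """Integration pipeline: combine the pinned (first) versions into
    the completed deployment project."""
    assert all(pinned[e] in vc.versions[e] for e in elements)
    return {e: pinned[e] for e in elements}

def continuous_integration(vc):
    # both pipelines run concurrently, under the version controller
    with ThreadPoolExecutor() as pool:
        upd = pool.submit(build_update, vc, "frontend")
        done = pool.submit(integrate, vc, ["frontend", "backend"],
                           {"frontend": 1, "backend": 1})
        return done.result(), upd.result()
```

Pinning the integration to version 1 while the update pipeline registers version 2 is what lets the completed project stay reproducible despite concurrent builds.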

US Pat. No. 10,216,508

SYSTEM AND METHOD FOR CONFIGURABLE SERVICES PLATFORM

Bank of America Corporati...

1. A system for installing and managing service software, comprising:one or more computers, each comprising a respective processor, the processors collectively configured to execute program instructions to implement a plurality of services on behalf of users in one or more client domains;
a configurable services platform communicatively coupled to the one or more computers, the configurable services platform including:
a memory storing respective configuration information associated with each of the one or more client domains, the configuration information including, for each client domain, information defining one or more of an indexing key, a configuration attribute, or filtering criteria associated with service requests submitted on behalf of users in the client domain and targeting at least one of the plurality of services; and
a service request processor configured to process service requests received on behalf of the users in the one or more client domains and targeting respective ones of the plurality of services based on the configuration information stored in the memory and associated with each of the client domains, the processing including routing the service requests to the targeted services;
a platform administration portal communicatively coupled to the configurable services platform and configured to:
present a user interface through which the configuration information associated with each of the one or more client domains is input to the system by a platform administrator;
receive, through the user interface, input indicating a requested change in the configuration information associated with a given one of the client domains;
the configurable services platform further including a configuration object builder configured to create a configuration object including updated configuration information associated with the given client domain, the updated configuration information reflecting the requested change;
the configurable services platform further configured to push the configuration object to an application cache accessible by a given one of the plurality of services targeted by service requests submitted on behalf of users in the given client domain; and
the given one of the plurality of services configured to apply the configuration object in fulfilling the service requests submitted on behalf of users in the given client domain without modification of the program instructions executable to implement the given service.
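The claim's key property, changing a service's behavior without modifying its program instructions, can be sketched as below: a builder produces a configuration object, the platform pushes it to an application cache, and the request processor consults the cache per client domain. The config keys (`blocked_types`, `target_service`) are hypothetical:

```python
class ConfigObjectBuilder:
    """Creates a configuration object reflecting a requested change."""
    def build(self, domain, current, change):
        return {"domain": domain, "config": {**current, **change}}

class ApplicationCache:
    """Cache the targeted service reads its domain config from."""
    def __init__(self):
        self._by_domain = {}
    def push(self, obj):
        self._by_domain[obj["domain"]] = obj["config"]
    def get(self, domain):
        return self._by_domain.get(domain, {})

def route_request(cache, domain, request):
    """Service request processor: apply the cached domain configuration
    (e.g. filtering criteria, routing target) when processing a
    request; the service code itself never changes."""
    cfg = cache.get(domain)
    if request.get("type") in cfg.get("blocked_types", []):
        return "rejected"
    return f"routed:{cfg.get('target_service', 'default')}"
```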

US Pat. No. 10,216,507

CUSTOMIZED APPLICATION PACKAGE WITH CONTEXT SPECIFIC TOKEN

Twitter, Inc., San Franc...

1. A method comprising:receiving, at an application distribution platform, an initial application package comprising one or more files of an application;
obtaining, at the application distribution platform, an application token specific to a user account, the application token providing a context-specific functionality to the application, wherein the context of the application token includes information specific to the user account;
assembling, by an assembler module of the application distribution platform, a customized application package comprising the initial application package and the application token, wherein assembling the customized application package includes incorporating the token into a directory structure of the initial application package; and
providing, to a client device, the customized application package in response to a user request, wherein the customized application package is configured to configure the application during installation according to the context specified by the application token, including incorporating the information specific to the user account to preconfigure the installed application.
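The assembly step, incorporating the token into the package's directory structure, can be sketched with an in-memory zip archive. The file name `assets/account_token.json` and the JSON token format are assumptions for illustration:

```python
import io
import json
import zipfile

def make_initial_package() -> bytes:
    """Stand-in for the initial application package (a zip archive)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("app/main.bin", b"binary")
    return buf.getvalue()

def assemble_customized_package(initial_pkg: bytes, token: dict) -> bytes:
    """Assembler module: copy the initial package and add the
    account-specific token into its directory structure, so the
    installer can preconfigure the app from it."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(initial_pkg)) as src, \
         zipfile.ZipFile(out, "w") as dst:
        for name in src.namelist():
            dst.writestr(name, src.read(name))
        dst.writestr("assets/account_token.json", json.dumps(token))
    return out.getvalue()
```

Because the token travels inside the package rather than over a separate channel, the installer can configure the app for the user account before first launch.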

US Pat. No. 10,216,506

LOCATION-BASED AUTOMATIC SOFTWARE APPLICATION INSTALLATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:collecting device data of a mobile device of a user, the device data comprising information indicative of a location at which the user will be present at a future time;
identifying, based on the collecting the device data, a software application associated with that location;
downloading an installer for the software application to the mobile device of the user;
automatically installing the software application on the mobile device based on a triggering event, the installing being prior to arrival of the user at the location at the future time; and
automatically authorizing, during the automatic installation of the software application, at least one application permission required for the software application based on a received grant of one or more application permissions to a sandbox application.
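The predictive flow in this claim can be sketched as follows: device data (e.g. a calendar entry) yields a future location, which maps to an application; installation fires on a trigger before arrival, and only permissions already granted to a sandbox application are auto-authorized. The location-to-app mapping and the data shapes are illustrative:

```python
# hypothetical mapping from predicted venues to associated applications
LOCATION_APPS = {"airport": "boarding-pass-app", "stadium": "ticket-app"}

def predict_location(device_data):
    """E.g. parse an upcoming calendar entry for its venue."""
    return device_data.get("next_event", {}).get("venue")

def install_for_location(device_data, sandbox_grants, trigger_fired):
    """Install the venue's app on a triggering event, prior to arrival,
    auto-authorizing only permissions the user already granted to the
    sandbox application."""
    venue = predict_location(device_data)
    app = LOCATION_APPS.get(venue)
    if app is None or not trigger_fired:
        return None
    return {"app": app, "permissions": sorted(sandbox_grants)}
```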

US Pat. No. 10,216,505

USING MACHINE LEARNING TO OPTIMIZE MINIMAL SETS OF AN APPLICATION

VMware, Inc., Palo Alto,...

1. A method, comprising:deploying, by a management server, an initial minimal set of application components stored on the management server to each endpoint device in a plurality of endpoint devices, wherein the initial minimal set comprises a subset of components of the application that enables a portion of functionality of the application to be executed without including all application components;
on each endpoint device in the plurality of endpoint devices,
detecting execution of the application from the initial minimal set;
during execution from the initial minimal set, recording by an agent operating on the endpoint device accesses made by the application to application components located in the initial minimal set on the endpoint device and to application components missing from the initial minimal set on the endpoint device, and storing the recordings;
selecting one or more endpoint devices in the plurality of endpoint devices;
retrieving the recordings of application accesses for the selected endpoint devices; and
processing the retrieved recordings to produce an optimized minimal set on the management server based on recorded accesses to application components located in the initial minimal set on the endpoint device and to application components missing from the initial minimal set on the endpoint device in the retrieved recordings, wherein the optimized minimal set is produced by at least one of: removing one or more application components from the initial minimal set or adding one or more application components to the initial minimal set.
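The optimization step can be sketched as a simple aggregation over the retrieved recordings: components in the initial set that were never accessed are removed, and components frequently accessed while missing are added. The `add_threshold` policy is an assumption; the claim only requires removal and/or addition based on recorded accesses:

```python
from collections import Counter

def optimize_minimal_set(initial_set, recordings, add_threshold=2):
    """recordings: one list of accessed component names per selected
    endpoint device. Returns the optimized minimal set."""
    hits, misses = Counter(), Counter()
    for rec in recordings:
        for comp in rec:
            # access to a component located in vs. missing from the set
            (hits if comp in initial_set else misses)[comp] += 1
    optimized = {c for c in initial_set if hits[c] > 0}   # remove unused
    optimized |= {c for c, n in misses.items() if n >= add_threshold}
    return optimized
```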

US Pat. No. 10,216,504

SYSTEM AND METHOD FOR INSULATING A WEB USER INTERFACE APPLICATION FROM UNDERLYING TECHNOLOGIES IN AN INTEGRATION CLOUD SERVICE

ORACLE INTERNATIONAL CORP...

1. A system for insulating a web user interface application from runtime engines in a cloud service runtime, the system comprising:a computer comprising one or more microprocessors;
a cloud service, executing on the computer, the cloud service comprising:
a web interface application for creating an integration flow between a source application and a target application; and
a runtime for executing the integration flow, the runtime comprising a plurality of runtime engines; and
an abstraction application programming interface that exposes a plurality of services to the web interface application, for use by the web interface application in designing, deploying and monitoring an integration project,
wherein the abstraction application programming interface operates to:
persist the integration project into a persistence store in a runtime-engine-neutral format, wherein the persistence store uses a pluggable persistence framework that insulates the integration project from the plurality of services exposed to the web interface application used by an operation manager to perform create, read, update, and delete operations on the integration project insulated within the persistence store during a development of the integration project with agnostic knowledge by the web interface application of the plurality of runtime engines;
use a template particular to a first runtime engine of the plurality of runtime engines to transform the integration project developed within the persistence store to a format specific to the first runtime engine of the plurality of runtime engines at deployment time, and
deploy the integration project transformed to the format specific to the first runtime engine on the first runtime engine for execution.
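The insulation pattern of this claim can be sketched as below: the UI performs CRUD on an engine-neutral project representation in a persistence store, and a per-engine template transforms it only at deployment time. The engine names and output formats are purely illustrative:

```python
class PersistenceStore:
    """Pluggable, runtime-engine-neutral store (dict-backed here)."""
    def __init__(self):
        self._projects = {}
    def create(self, name, project): self._projects[name] = project
    def read(self, name): return self._projects[name]
    def update(self, name, **fields): self._projects[name].update(fields)
    def delete(self, name): self._projects.pop(name)

# template particular to each runtime engine: neutral model -> engine format
ENGINE_TEMPLATES = {
    "bpel": lambda p: f"<process src='{p['source']}' dst='{p['target']}'/>",
    "camel": lambda p: f"from('{p['source']}').to('{p['target']}')",
}

def deploy(store, name, engine):
    """Transform the neutral project with the chosen engine's template
    at deployment time; the web UI never sees engine specifics."""
    return ENGINE_TEMPLATES[engine](store.read(name))
```

Because nothing engine-specific exists until `deploy` runs, the same stored project can target any engine for which a template is registered.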

US Pat. No. 10,216,503

DEPLOYING, MONITORING, AND CONTROLLING MULTIPLE COMPONENTS OF AN APPLICATION

ElasticBox Inc., Broomfi...

1. A method for deploying an application, the method comprising:electronically receiving a request to deploy a cloud-based application, the request comprising information about the application but not including the application or portions thereof, wherein the request further includes information indicating for the application at least one of minimum, maximum, or average memory requirements, processing requirements, storage requirements, or bandwidth requirements;
assigning a unique identifier to the received request;
selecting a server from a plurality of servers upon which to deploy the application;
sending a message to the server to allocate for the application at least one of memory, processing power, storage, or throughput on the server before the application is deployed;
causing installation of an agent program on the selected server;
storing a plurality of commands in a script queue in a computer memory, the commands comprising computer instructions for the installation and configuration of the application; and
automatically sending the unique identifier that identifies the received request to deploy the cloud-based application to the agent program and sending the commands to the agent program for execution of the commands on the server, the execution of the commands causing installation and configuration of the application on the server.
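The sequence in this claim can be sketched end to end: a metadata-only request with resource requirements gets a unique identifier, a server is selected, resources are allocated before deployment, and queued commands are sent with the identifier to an agent for execution. The selection policy and command names are illustrative:

```python
import uuid
from collections import deque

def select_server(servers, required_mem):
    """Pick the first server with enough free memory (simple policy)."""
    return next(s for s in servers if s["free_mem"] >= required_mem)

def deploy(request, servers):
    """request carries information about the application (including a
    minimum memory requirement) but not the application itself."""
    request_id = str(uuid.uuid4())          # unique id for the request
    server = select_server(servers, request["min_mem"])
    server["free_mem"] -= request["min_mem"]  # allocate before deploying
    # commands for installation and configuration, held in a script queue
    script_queue = deque(["install_deps", "install_app", "configure_app"])
    agent_log = []
    while script_queue:                     # agent executes in order
        agent_log.append((request_id, server["name"],
                          script_queue.popleft()))
    return request_id, agent_log
```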

US Pat. No. 10,216,502

SYSTEM MODULE DEPLOYMENT OPTIMIZATION

INTERNATIONAL BUSINESS MA...

1. A method for optimizing deployment of a modular application in a runtime environment, the method comprising:deploying application modules of the modular application, each application module having a module manifest and at least one application module having parts for execution, one or more module manifests comprising one or more references to parts of another application module, and parts required for execution of the application, the deploying being according to the module manifest;
executing the modular application in a runtime environment on a representative workload;
based on the modular application operating on the representative workload, determining that at least one deployed application module has no parts executing in the runtime environment, the determining including checking a heap of the runtime environment to determine whether at least one deployed application module has no parts executing in the heap;
based on determining that at least one deployed application module has no parts executing in the heap, adapting the module manifests so that the determined at least one deployed application module with no parts executing in the heap will not be deployed as part of the modular application in future deployments; and
wherein adapting the module manifests comprises creating an overlay file that operates on the module manifests so that the determined at least one deployed application module will not be deployed in future deployments, wherein the overlay file is referenced in a module manifest and used during execution to selectively deploy listed parts required for execution in the module manifest rather than deploying all listed parts in the module manifest.
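The overlay mechanism of this final claim can be sketched as follows: after a run on a representative workload, modules with no parts observed in the heap are excluded via an overlay, and future deployments deploy only the parts the overlay keeps. The heap is modeled as the set of live part names; all identifiers are illustrative:

```python
def unused_modules(manifests, heap_parts):
    """manifests: module -> list of its parts; heap_parts: parts seen
    executing in the heap. Returns modules with no executing parts."""
    return {m for m, parts in manifests.items()
            if not any(p in heap_parts for p in parts)}

def build_overlay(manifests, heap_parts):
    """Overlay file operating on the manifests: keep only modules that
    had at least one part executing in the heap."""
    skip = unused_modules(manifests, heap_parts)
    return {m: parts for m, parts in manifests.items() if m not in skip}

def deploy_with_overlay(manifests, overlay):
    # deploy only the parts listed via the overlay, rather than all
    # parts listed in the module manifests
    return sorted(p for parts in overlay.values() for p in parts)
```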