US Pat. No. 10,114,847

CHANGE CAPTURE PRIOR TO SHUTDOWN FOR LATER BACKUP

CA, Inc., New York, NY (...

1. A computer implemented method comprising:
monitoring, using an application, blocks of data on a storage device that are changing as a computer operates;
creating a plurality of incremental backups, wherein each incremental backup includes only blocks of data of the monitored blocks of data that have changed since a previous incremental backup;
merging, using the application, two oldest incremental backups of the plurality of incremental backups in response to a number of incremental backups exceeding a specified number;
detecting that the computer is being shut down;
in response to detecting that the computer is being shut down, saving a copy of a shutdown incremental backup to the storage device before the computer is shut down, wherein the shutdown incremental backup includes blocks of data of the monitored blocks of data that have changed since the most recent incremental backup of the plurality of incremental backups; and
upon startup of the computer, transmitting, using the application, the blocks of data included in the shutdown incremental backup to a backup storage device.
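
A minimal Python sketch of the claimed flow, assuming an in-memory block map and caller-supplied save/transmit callbacks; the names, the merge policy details, and the storage format are illustrative assumptions, not the patented code:

```python
# Hypothetical sketch of the claimed backup flow; block tracking, the merge
# policy, and the transport are assumptions for illustration only.
from typing import Dict, List

class IncrementalBackupAgent:
    def __init__(self, max_incrementals: int = 5):
        self.max_incrementals = max_incrementals
        self.changed_blocks: Dict[int, bytes] = {}   # blocks changed since the last incremental
        self.incrementals: List[Dict[int, bytes]] = []

    def record_write(self, block_no: int, data: bytes) -> None:
        """Monitor blocks that change as the computer operates."""
        self.changed_blocks[block_no] = data

    def create_incremental(self) -> None:
        """Snapshot only the blocks changed since the previous incremental."""
        self.incrementals.append(self.changed_blocks)
        self.changed_blocks = {}
        if len(self.incrementals) > self.max_incrementals:
            # Merge the two oldest incrementals; the newer block wins on overlap.
            oldest, second = self.incrementals[0], self.incrementals[1]
            self.incrementals[:2] = [{**oldest, **second}]

    def on_shutdown(self, save_locally) -> None:
        """Persist a 'shutdown incremental' of not-yet-backed-up changes."""
        save_locally(self.changed_blocks)

    def on_startup(self, load_locally, transmit) -> None:
        """Send the shutdown incremental to the backup storage device after boot."""
        transmit(load_locally())
```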

US Pat. No. 10,114,846

BALANCED DISTRIBUTION OF SORT ORDER VALUES FOR A MULTI-COLUMN SORT ORDER OF A RELATIONAL DATABASE

Amazon Technologies, Inc....

1. A system, comprising:
one or more storage nodes, respectively comprising at least one processor and a memory, that implement a data store;
the data store, configured to:
identify a plurality of columns in a database table for a multi-column sort order of the database table;
evaluate data values in the plurality of columns to determine buckets for respective depth-balanced histograms for individual ones of the columns, wherein the buckets of the respective histograms represent ranges of the data values in the columns, wherein the buckets are assigned respective bucket values;
identify those buckets representing ranges of the data values in the respective histograms for the columns that include the data values of the plurality of columns in the entries;
generate multi-column sort order values for the entries of the database table according to interleaved bits of the assigned bucket values for each of the identified buckets for the entries; and
store the entries of the database table according to a sorted order of the multi-column sort order values for the entries.
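
The sort-value construction can be pictured with a short sketch; the equi-depth bucketing and the Z-order-style bit interleaving below are assumptions about how the claimed multi-column sort order values might be formed, not the patented implementation:

```python
# Illustrative: equi-depth buckets per column, then bit-interleaved bucket
# numbers as the multi-column sort key.
import bisect

def equi_depth_boundaries(values, num_buckets):
    """Bucket boundaries so each bucket holds roughly the same number of rows."""
    ordered = sorted(values)
    step = len(ordered) / num_buckets
    return [ordered[int(i * step)] for i in range(1, num_buckets)]

def bucket_value(boundaries, value):
    return bisect.bisect_right(boundaries, value)

def interleave_bits(bucket_values, bits_per_value):
    """Interleave the bits of each column's bucket number (Z-order style)."""
    key = 0
    for bit in range(bits_per_value):
        for bv in bucket_values:
            key = (key << 1) | ((bv >> (bits_per_value - 1 - bit)) & 1)
    return key

# Usage: sort table entries by the interleaved multi-column key.
rows = [(3, 40), (7, 10), (1, 25), (9, 33)]
bounds = [equi_depth_boundaries([r[c] for r in rows], 4) for c in range(2)]
def sort_key(row):
    return interleave_bits([bucket_value(bounds[c], row[c]) for c in range(2)], 2)
rows.sort(key=sort_key)
```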

US Pat. No. 10,114,845

EFFICIENTLY ESTIMATING COMPRESSION RATIO IN A DEDUPLICATING FILE SYSTEM

EMC IP Holding Company LL...

1. A deduplicating storage system, comprising:
a processor configured to:
for each of k times: associate a bin of an ordered set of bins with each received identifier, wherein each bin in the ordered set of bins has a bin number and each received identifier comprises a fingerprint of a segment of a set of segments stored on a file system of the deduplicating storage system;
determine a minimum bin number associated with each received identifier, the minimum bin number being the bin number that is minimum among the bins associated with the each received identifier;
repeat the k times of associating a bin with a received identifier for n trials, where n is greater than two;
determine an estimate of a quantity of unique identifiers based at least in part on an average of the minimum associated bin number;
determine a data compression ratio of the segments stored in the file system of the deduplicating storage system based on the estimated quantity of the unique identifiers without having to record a list of the unique identifiers and check the list of the unique identifiers for the each received identifier;
determine a capacity of the deduplicating storage system; and
back up data to the system of the deduplicating storage system based on the determined capacity of the deduplicating storage system and the determined data compression ratio of the segments stored therein; and
a memory coupled to the processor and configured to provide the processor with instructions.
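
A hedged sketch of a minimum-bin cardinality estimate in the spirit of the claim; the bin count, the salted hash, and the exact estimator below are assumptions, and the patent's estimator may differ:

```python
# Hash each fingerprint into one of NUM_BINS bins, track the minimum bin
# number per trial, and infer the unique count from the average minimum,
# without keeping a list of unique fingerprints.
import hashlib

NUM_BINS = 1 << 20

def bin_number(fingerprint: bytes, trial: int) -> int:
    digest = hashlib.sha256(trial.to_bytes(4, "big") + fingerprint).digest()
    return int.from_bytes(digest[:8], "big") % NUM_BINS

def estimate_unique(fingerprints, trials: int = 8) -> float:
    minima = [min(bin_number(fp, t) for fp in fingerprints) for t in range(trials)]
    avg_min = sum(minima) / trials
    # For U uniques drawn uniformly over NUM_BINS bins, E[min] ~= NUM_BINS / (U + 1).
    return NUM_BINS / max(avg_min, 1) - 1

# The deduplication (compression) ratio can then be approximated as
# total segments received / estimated unique segments.
```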

US Pat. No. 10,114,844

READINESS CHECKER FOR CONTENT OBJECT MOVEMENT

International Business Ma...

1. A method, comprising:
determining, outside a movement time window, whether each content object in a set of content objects is ready for movement by checking whether each content object is on hold, wherein any content object that is on hold is not to be any of migrated, deleted, and archived;
for each content object in the set of content objects that is not on hold, setting, outside the movement time window, an associated indicator to indicate that the content object is ready for movement;
for each content object in the set of content objects that is on hold, resetting, outside the movement time window, the associated indicator to indicate that the content object is not ready for movement; and
moving, within the movement time window, each content object in the set of content objects that has the associated indicator set to indicate that the content object is ready for movement, wherein the movement time window is set to a period of time that allows completing movement of each content object in the set of content objects that has the associated indicator set.
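
A minimal sketch of the two-phase behavior with hypothetical names: readiness is decided outside the movement window, so the window itself only performs moves of flagged objects:

```python
# Illustrative readiness-checker sketch; object fields and callbacks are assumptions.
from dataclasses import dataclass

@dataclass
class ContentObject:
    name: str
    on_hold: bool
    ready_for_movement: bool = False

def check_readiness(objects):            # runs outside the movement time window
    for obj in objects:
        obj.ready_for_movement = not obj.on_hold

def move_ready_objects(objects, move):   # runs within the movement time window
    for obj in objects:
        if obj.ready_for_movement:
            move(obj)
```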

US Pat. No. 10,114,843

CONTENT MIGRATION FRAMEWORK

SAP SE, Walldorf (DE)

1. A computer-implemented method for supporting migration of unstructured data stored in enterprise content management systems, comprising:
generating a search for the unstructured data matching at least one content search rule, the unstructured data comprising information that does not have a predefined data model, the at least one content search rule comprising an enterprise identifier and a document identifier and the at least one content search rule being retrieved from a migration engine server configured to provide support for migration of the unstructured data and structured data stored in an enterprise resource planning system;
receiving a list of matched documents, wherein each document in the list of matched documents comprises at least a portion of the unstructured data and is associated with at least a source repository identifier and a unique document identifier;
calculating a target repository identifier and at least one metadata change instruction for each unique document identifier using at least one migration rule defining how the unstructured data is to be migrated to maintain an integrity of the unstructured data, the at least one migration rule being compliant with international regulations; and
modifying metadata for the document associated with the document identifier using the calculated at least one metadata change instruction to generate a modified copy of the document, the at least one metadata change instruction specifying that any matched content with a particular source repository identifier is to be modified to reflect a particular target repository identifier, the particular target repository identifier comprising at least one of a prefix and a suffix different from the particular source repository identifier and the at least one metadata change instruction specifying deletion of the metadata matching the particular source repository identifier from a source repository associated with the particular source repository identifier upon migration to a target repository associated with the particular target repository identifier.

US Pat. No. 10,114,842

MEDIA COMPRESSION IN A DIGITAL DEVICE

Red Hat, Inc., Raleigh, ...

1. A method, comprising:
monitoring available data storage space in a digital device;
in response to the available data storage space falling below a threshold, generating, by a processing device of the digital device, a user interface to prompt a user of the digital device to allow compression of content stored on the digital device;
receiving, by the processing device via the user interface, an indication from the user to allow the compression of the content; and
in response to receipt of the indication from the user:
determining one or more types of the content that are indicated as allowed for compression; and
performing, by the processing device, the compression on the determined one or more types of the content.
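
A small illustrative sketch of the prompt-then-compress flow; the threshold value, the prompt callback, and the compress callback are assumptions made for the example:

```python
# When free space drops below a threshold, ask the user, then compress only
# the content types the user allowed.
import shutil

THRESHOLD_BYTES = 500 * 1024 * 1024

def maybe_compress(content_by_type, prompt_user, compress):
    free = shutil.disk_usage("/").free
    if free >= THRESHOLD_BYTES:
        return
    allowed_types = prompt_user()          # e.g. {"photos", "videos"}
    for content_type, items in content_by_type.items():
        if content_type in allowed_types:
            for item in items:
                compress(item)
```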

US Pat. No. 10,114,841

SYSTEM FOR GENERATING A TABLE

1. A system for generating a table, comprising:
a computer, wherein the computer includes:
a table generator for generating a table which contains:
at least a column or line depicting a plurality of first categories and at least a column or line depicting first values associated with said plurality of first categories;
a sparkline generator for generating a sparkline, wherein each one of the depicted plurality of first categories is provided with a sparkline within a confine of the table and each respective first category, the sparkline showing a number of second categories into which the respective first category is sub-divided and the relation of values associated with said second categories, the second categories being positioned within the table and directly adjacent to the respective first category, and the sparkline provides information related to data of the second categories;
a category selector for selecting one of said first categories by a user; and
an adder for enlarging the table upon selection of a category by said category selector, said adder being adapted to enlarge the table by adding a new column or line which comprises second categories into which said selected first category is subdivided and second values associated with said second categories,
wherein the computer is configured to allow display in the new column or line only the second categories associated with the selected first category, to update the column or line depicting first values with the second values associated with the second categories, disallow display in the new column or line of any second categories associated with non-selected first categories, and continue to display the column or line depicting first values associated with the non-selected first categories,
wherein the first category is subdivided into different types of second categories and wherein the adder is adapted to automatically choose the type of second categories based upon a criterion,
wherein said criterion is the spread of values associated with the second categories, and
wherein the adder is adapted to add types of categories in an order so that categories having a higher spread of values are chosen prior to those categories having a lower spread of values.

US Pat. No. 10,114,840

CUSTOMER DATA SEPARATION IN A SERVICE PROVIDER SCENARIO

SAP SE, Walldorf (DE)

1. A computer-implemented method for controlling customer access to documents, the method comprising:
storing, by a computing system, a group of documents for a plurality of customers, wherein:
(i) each of the documents in the group is either a parent document or a node document,
(ii) the documents in the group include multiple parent documents and multiple node documents,
(iii) each of the node documents is directly linked to one or more of the parent documents or is indirectly linked to one or more of the parent documents through other of the node documents, and
(iv) the plurality of customers are associated with a plurality of customer records such that each of the customers is associated with a customer record from the plurality of customer records;
authorizing, by the computing system, each customer in the plurality of customers to access only a respective subset of the documents from the group of documents by assigning each document in the group of documents to a respective customer record from the plurality of customer records, wherein:
(i) the assigning of each document to the respective customer record involves including, for each of the parent documents, a reference to a customer attribute that identifies the respective customer record, the customer attribute being included in a plurality of customer attributes, and
(ii) only the parent documents in the group of documents are assigned to the customer records such that the node documents do not include references to customer attributes;
receiving, by the computing system, a request to access documents for a particular customer, and in response:
(i) authorizing the particular customer to access each parent document that includes a particular customer attribute that identifies the respective customer record associated with the particular customer, wherein the computing system authorizes the access to the parent document by referencing the particular customer attribute;
(ii) authorizing the particular customer to access each node document that is linked to each of the parent documents that include the customer attribute, wherein the computing system authorizes the access to the node document by referencing the parent document that includes the particular customer attribute without first accessing the node document; and
managing, by the computing system, the group of documents for the plurality of customer records, wherein the group of documents and the plurality of customer attributes are stored in a database.
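
An illustrative access check for the parent/node model described above; the data model and the recursive traversal are assumptions made for the sketch, not the claimed implementation:

```python
# Only parent documents carry a customer attribute; a node document is
# authorized by walking its links up to a parent owned by the requesting
# customer, without tagging node documents themselves.
class Document:
    def __init__(self, doc_id, customer_id=None, parents=()):
        self.doc_id = doc_id
        self.customer_id = customer_id   # set only on parent documents
        self.parents = list(parents)     # direct links toward parent documents

def authorized(doc, customer_id):
    if doc.customer_id is not None:                  # parent document
        return doc.customer_id == customer_id
    return any(authorized(p, customer_id) for p in doc.parents)

def accessible_documents(documents, customer_id):
    return [d for d in documents if authorized(d, customer_id)]
```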

US Pat. No. 10,114,839

FORMAT IDENTIFICATION FOR FRAGMENTED IMAGE DATA

EMC IP Holding Company LL...

1. A system for storing information, comprising:
an interface that receives an input stream of information, wherein the input stream of information comprises a plurality of fragments;
a data model generator that is configured to determine fragment boundaries of the plurality of fragments and to determine a data format for each of the plurality of fragments based on continuity properties;
wherein the data model generator operates to
divide the input stream of information into a plurality of windows, wherein each window has a fixed size and includes a same number of bytes, and wherein each of the plurality of fragments contains no more than a single window;
for each of the plurality of windows:
determine whether the window has a known or unknown format based on a value of a scoring function, wherein a high value of the scoring function indicates the window has a known format whereas a low value of the scoring function indicates the window has an unknown format;
compare portions of the window having an unknown format with neighboring windows to determine fragment boundaries;
calculate statistics of bits in the window based on formats of the neighboring windows; and
identify a breakpoint based on the statistics of bits in the window;
a data compressor that compresses the plurality of fragments into a compressed stream using a compression technique selected based on the data format for each of the fragments; and
a memory that stores the compressed stream.

US Pat. No. 10,114,838

REFERENCE CARD FOR SCENE REFERRED METADATA CAPTURE

Dolby Laboratories Licens...

1. A method comprising:
receiving one or more reference source images and one or more corresponding non-reference source images, the one or more reference source images comprising image data for one or more reference cards comprising a plurality of reference gray levels and primary colors used to digitally encode the reference source images;
generating one or more output images for the one or more corresponding non-reference source images;
deriving, based on the image data for the primary colors and gray levels of the one or more reference cards, scene-referred metadata comprising a set of reference values and a corresponding set of coded values, the corresponding set of coded values comprising coded values in the one or more output images;
generating a constructive set of scene-referred metadata by interpolating data from one of two or more existing reference source images and two or more existing scene-referred metadata; and
outputting the one or more output images with the scene-referred metadata as a part of image metadata for the one or more output images.

US Pat. No. 10,114,837

DISTRIBUTED TRANSACTION MANAGEMENT

Microsoft Technology Lice...

1. A system that processes a transaction on behalf of a node set, the system comprising:
a processing unit; and
a memory storing instructions that, when executed by the processing unit, cause the distributed system to:
initiate a transaction involving data locally stored by a plurality of nodes of the node set,
send a transaction request to respective participating nodes of the node set to participate in the transaction,
receive, from the respective participating nodes, a commit time vote for the transaction according to a local clock maintained by the participating node,
based on the commit time votes of the respective participating nodes, determine a commit time for the transaction having a maximum count of votes among the plurality of participating nodes, and
initiate commit processing of the transaction by sending a commit request to the participating nodes of the node set, wherein the commit request includes the commit time for the transaction, wherein the commit request enables the respective participating nodes that did not vote for the commit time selected for the transaction to advance the local clock to synchronize with the participating nodes that voted for the commit time.
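
A sketch of the coordinator-side tally implied by the claim; the data shapes and the send_commit_request callback are assumptions for illustration:

```python
# Pick the commit time that received the most votes, then include it in the
# commit request so nodes that voted differently can advance their local clocks.
from collections import Counter

def choose_commit_time(votes):
    """votes: mapping of node -> commit time proposed from its local clock."""
    counts = Counter(votes.values())
    commit_time, _ = counts.most_common(1)[0]
    return commit_time

def commit(nodes, votes, send_commit_request):
    commit_time = choose_commit_time(votes)
    for node in nodes:
        # Nodes whose vote differs synchronize by advancing to commit_time.
        send_commit_request(node, commit_time)
    return commit_time
```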

US Pat. No. 10,114,836

SYSTEMS AND METHODS FOR PRIORITIZING FILE DOWNLOADS

Google LLC, Mountain Vie...

1. A method comprising:
evaluating a first respective score for each file in a plurality of first files by applying a first ranking scheme to first metadata associated with the first files to generate a first ranking of the first files, wherein the first ranking scheme is based on weights of at least two features of the first metadata;
initiating a first download process for at least some files in the first files from a cloud system to a client system based on the first ranking of the first files;
refining, by at least one processor, the first ranking scheme based on training data to generate a second ranking scheme, wherein the training data comprises one or more changes to the first files, wherein the changes comprise at least an addition, a replacement, a deletion, a modification, or an access of at least one first file in the first files, wherein refining the first ranking scheme comprises generating a predicted access frequency of the first file and comparing the predicted access frequency to an actual access frequency of the first file, and wherein the second ranking scheme is generated in response to a result of the comparison exceeding a threshold;
evaluating a second respective score for each file in a plurality of second files by applying the second ranking scheme to second metadata associated with the second files to generate a second ranking of the second files; and
initiating a second download process for each file in the second files from the cloud system to the client system based on the second ranking of the second files.
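
A compact sketch of a weighted-feature ranking scheme and a threshold-triggered refinement; the feature names and the refinement rule below are illustrative assumptions, not the patented training procedure:

```python
# Score files by weighted metadata features; refine the weights when the
# predicted access frequency diverges from the observed one.
def score(metadata, weights):
    return sum(weights[f] * metadata.get(f, 0.0) for f in weights)

def rank_files(files, weights):
    return sorted(files, key=lambda f: score(f["metadata"], weights), reverse=True)

def maybe_refine(weights, predicted_freq, actual_freq, threshold=0.25, step=0.9):
    """Return a second ranking scheme if the prediction error exceeds the threshold."""
    if abs(predicted_freq - actual_freq) <= threshold:
        return weights
    # Illustrative refinement: damp all weights that drove the misprediction.
    return {feature: weight * step for feature, weight in weights.items()}

weights = {"recency": 0.6, "edit_count": 0.3, "is_shared": 0.1}
files = [{"name": "a.doc", "metadata": {"recency": 0.9, "edit_count": 2}},
         {"name": "b.doc", "metadata": {"recency": 0.2, "edit_count": 7}}]
first_ranking = rank_files(files, weights)
```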

US Pat. No. 10,114,835

VIRTUAL FILE SYSTEM FOR CLOUD-BASED SHARED CONTENT

Box, Inc., Redwood City,...

1. A method for accessing a file system in a cloud-based storage platform that stores shared content accessible over a network by two or more user devices, the method comprising:
implementing a file system interface between the cloud-based storage platform and a virtual file system, in which the file system interface directs file system calls from an application running on one of the user devices to the virtual file system;
processing at least some of the file system calls received at the file system interface through a first operation pipeline comprising a local data manager that issues one or more of the file system calls to a file system executor that performs local processing to produce a series of file events;
receiving a file event from the first pipeline and initiating processing of the file event through a second pipeline comprising at least a first operation to access local metadata corresponding to the series of file events and a second operation to access a local cache to identify a portion of a file within the virtual file system; and
providing at least an identification of contents of the local cache to a remote storage application programming interface to initiate a change in the file system of the cloud-based storage platform, wherein the change to the file system of the cloud-based storage platform corresponds to the file event that was processed by accessing to the local metadata.

US Pat. No. 10,114,834

EXOGENOUS VIRTUAL MACHINE SYNCHRONIZATION AND REPLICATION

CA, Inc., New York, NY (...

1. A system to provide a virtualized replication and availability environment, the system comprising:
a production server having hardware and configured to host at least a first part of a virtualization architecture, wherein:
the virtualization architecture comprises a parent partition that contains a virtualization stack having access to the hardware associated with the production server and one or more child partitions configured to execute one or more virtual machines, and
the one or more virtual machines include respective operating systems having respective file systems with virtual machine files accessed by respective applications configured to be executed in the respective operating systems;
a replica server having hardware and configured to host at least a second part of the virtualization architecture; and
a replication and availability engine installed in the production server, and external to the one or more virtual machines, wherein the replication and availability engine is configured to:
initially synchronize and then continuously or periodically replicate, while the one or more virtual machines execute on the production server, the virtual machine files associated with the one or more virtual machines executed in the one or more child partitions of the production server to the replica server by forming or updating replicated instances of the virtual machine files in association with data defining one or more replica virtual machines of the replica server, the one or more replica virtual machines having instances of the respective applications and being configured to execute a workload of the production server in place of the production server, wherein the replication and availability engine is configured to perform replication while the one or more replica virtual machines are not executing, wherein:
the second part of the virtualization architecture is configured to cause end users or workloads to be automatically redirected from the one or more virtual machines executed in the one or more child partitions on the production server to one or more on-demand virtual machines that are associated with corresponding replicated instances of virtual machine files and that are started in the one or more child partitions on the replica server, and
the second part of the virtualization architecture further includes a hypervisor configured to isolate the parent partition on the replica server from the one or more child partitions on the replica server.

US Pat. No. 10,114,833

DISTRIBUTED CODE REPOSITORY WITH LIMITED SYNCHRONIZATION LOCKING

GitHub, Inc., San Franci...

1. A system, comprising:
a hardware processor configured to:
receive a request to change an existing portion of code;
determine whether the request to change the existing portion of code is valid;
in response to determining that the request to change the existing portion of code is valid, distribute the request to a plurality of repositories, wherein the request comprises a new portion of the code;
request the plurality of repositories to approve the request to change the existing portion of code to the new portion;
receive a corresponding vote to approve swapping references from each repository of the plurality of repositories; determine whether swapping references is approved by a majority of the plurality of repositories based on the corresponding votes; and
in response to determining that swapping references is approved by the majority of the plurality of repositories, modify the code by swapping an existing reference associated with the code that points to the existing portion with a change reference associated with the code that points to the new portion and unlock the plurality of repositories; and
a memory coupled to the hardware processor and configured to provide the hardware processor with instructions.
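
A simplified sketch of the majority-vote reference swap; the repository object and its vote_on/swap_reference/unlock methods are hypothetical names introduced only for this example:

```python
# Distribute a validated change request, count votes, and swap the code
# reference only when a strict majority of repositories approves.
def change_code(repositories, existing_ref, change_ref, request):
    if not request.get("valid"):
        return False
    votes = [repo.vote_on(request) for repo in repositories]    # distribute the request
    approved = sum(votes) > len(repositories) / 2               # strict majority
    if approved:
        for repo in repositories:
            repo.swap_reference(existing_ref, change_ref)       # point at the new portion
            repo.unlock()
    return approved
```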

US Pat. No. 10,114,832

GENERATING A DATA STREAM WITH A PREDICTABLE CHANGE RATE

EMC IP Holding Company LL...

1. A system, comprising:
a processor configured to:
store an unmodified non-deduplicatable data stream, wherein the non-deduplicatable data stream comprises a non-compressible data stream and wherein the processor is further configured to generate the non-compressible data stream at least in part by:
receive an initialization parameter;
determine a first constrained prime number, wherein the first constrained prime number comprises a plurality of component values, wherein each of the plurality of component values comprises a prime number wherein each of the plurality of component values is different;
generate a first non-compressible sequence based at least in part on the initialization parameter and the first constrained prime number;
obtain a second non-compressible sequence, wherein the second non-compressible sequence is associated with the initialization parameter and a second constrained prime number; and
generate the non-compressible data stream including by merging the first non-compressible sequence and the second non-compressible sequence;
receive a change rate parameter, wherein the change rate parameter indicates an amount by which the unmodified non-deduplicatable data stream is to be modified; and
generate a modified data stream that differs from the unmodified non-deduplicatable data stream by the amount indicated by the change rate parameter wherein to generate the modified data stream, the processor is further configured to modify at least a portion of a plurality of data blocks associated with the non-deduplicatable data stream to obtain a corresponding portion of the modified data stream, wherein a data block of the plurality of data blocks is associated with a block size that is based on a segmenting attribute associated with a storage destination;
identify a set of new data blocks, wherein the set of new data blocks are identified based on comparing the unmodified non-deduplicatable data stream with the modified data stream;
determine a percentage of the modified data stream to store;
determine a deduplication result based on comparing the determined percentage to the change rate parameter; and
in response to determining that the determined percentage does not match the change rate parameter, reconfigure a deduplication technique; and
a memory coupled to the processor and configured to store the change rate parameter.

US Pat. No. 10,114,831

DELTA VERSION CLUSTERING AND RE-ANCHORING

Exagrid Systems, Inc., W...

1. A computer-implemented method comprising:
generating a first anchor in a plurality of anchors having a plurality of delta-compressed versions of data dependent on the first anchor, wherein the first anchor being at least one of the following: a version of data and a delta-compressed version of the data,
wherein the plurality of delta-compressed versions includes delta-compressed versions that do not linearly depend on one another, each delta-compressed version in at least a portion of the delta-compressed versions in the plurality of delta-compressed versions is computed against the first anchor, the first anchor and the plurality of delta-compressed versions form a cluster;
generating a decompressed second anchor in the plurality of anchors, wherein the decompressed second anchor includes at least another version of the data; and
replacing the first anchor with the generated decompressed second anchor,
wherein the replacing includes
decompressing the first anchor to generate a decompressed first anchor;
determining a difference between the decompressed first anchor and the generated decompressed second anchor;
generating a first reverse delta-compressed version representative of the determined difference between the decompressed first anchor and the generated decompressed second anchor, wherein the first reverse delta-compressed version is dependent on the generated decompressed second anchor, wherein each delta-compressed version in the at least a portion of the delta-compressed versions being previously dependent on the first anchor is computed to be dependent on the first reversed delta-compressed version;
re-computing, using the determined difference between the decompressed first anchor and the generated decompressed second anchor, at least one delta-compressed version in the plurality of delta-compressed versions to be dependent on the generated decompressed second anchor, wherein the re-computed at least one delta-compressed version is delta-compressed against the generated decompressed second anchor; and
compressing the generated decompressed second anchor, wherein the compressed second anchor replaces the first anchor as an anchor of the cluster;
wherein at least one of the generating the first anchor, the generating the decompressed second anchor, and the replacing is performed on at least one processor.

US Pat. No. 10,114,829

MANAGING DATA CACHE FOR FILE SYSTEM REALIZED WITHIN A FILE

EMC IP Holding Company LL...

1. A method for storing data in a data storage system, the method comprising:
receiving, from a requestor, a request specifying a set of data to be written to a logical address in a first file system, the first file system realized as a file within a second file system;
creating a first log entry for the set of data in a first data log, the first data log (i) logging data to be written to the first file system, (ii) having a head and a tail, and (iii) arranged as a circular buffer;
creating a second log entry for the set of data in a second data log, the second data log logging data to be written to the second file system, the first log entry providing a reference to the second log entry;
storing the set of data in a cache page; and
acknowledging the requestor that the request has been completed.

US Pat. No. 10,114,828

LOCAL CONTENT SHARING THROUGH EDGE CACHING AND TIME-SHIFTED UPLOADS

International Business Ma...

1. A method for time-shifted uploading of a data file through a backhaul network to a backend provider, the method comprising:
intercepting an upload request to the backend provider, wherein the intercepted upload request is associated with the data file from an originating user located at a network edge within a local network, and wherein the backhaul network connects the local network to the backend provider;
caching the data file associated with the upload request in the local network upstream of the backhaul network, wherein caching the data file associated with the upload request comprises storing a cached copy of the data file on a storage device located at the network edge of the local network;
uploading a placeholder file to the backend provider based on the intercepted upload request;
receiving a file identifier (ID) from the backend provider based on the uploaded placeholder file;
mapping the received file ID to the cached data file;
intercepting a request to access the data file at the backend provider by a requesting user located within the local network;
sending the requesting user the cached data file within the local network based on the mapping and the intercepted access request; and
uploading a copy of the data file to the backend provider through the backhaul network based on a backhaul utilization policy, wherein the backhaul utilization policy instructs the uploading to occur based on customer experience management rules, wherein uploading the copy of the data file to the backend provider comprises uploading a low quality version of the data file and then iteratively uploading a plurality of data overlays derived from a full quality version of the data file to increase a data quality of the copy stored at the backend service provider with each successive data overlay until the data quality of the copy stored at the backend service provider matches the full quality version of the data file, and wherein a data overlay is combined with the copy of the data file to generate a progressively higher quality data file stored at the backend service provider.

US Pat. No. 10,114,827

SYSTEM AND METHOD FOR AN INTELLIGENT E-MAIL AND CONTENT REPOSITORY

DELL PRODUCTS, LP, Round...

1. An information handling system, comprising:
a storage device; and
a repository that:
receives a first file;
modifies the first file to include first metadata that indicates that the first file is a first entry of a first thread;
stores the modified first file including the first metadata to the storage device;
receives a second file, wherein the second file is different from the first file;
determines that the second file includes the first metadata; and
modifies the second file to include second metadata that indicates that the second file is a second entry of the first thread and that the first file is a parent entry to the second file in response to determining that the second file includes the first metadata;
stores the modified second file including the first metadata and the second metadata to the storage device.

US Pat. No. 10,114,826

AUTONOMIC REGULATION OF A VOLATILE DATABASE TABLE ATTRIBUTE

International Business Ma...

1. A computer program product, comprising a plurality of computer-executable instructions recorded in a non-transitory computer-readable media, wherein said instructions, when executed by at least one computer system, cause the computer system to perform:
monitoring at least one parameter of a database table of said computerized database over at least one time interval and saving monitored parameter data with respect to said database table;
determining a database table volatility state of said database table using the saved monitored parameter data, said database table volatility state being a property of said database table that is a function of changes to data recorded in said database table with respect to time, said database table volatility state being independent of any queries against data in said database table; and
responsive to determining a database table volatility state of said database table using the saved monitored parameter data, generating and saving at least one database table volatility attribute expressing the database table volatility state of said database table, said at least one volatility attribute being used by a process executing on the at least one computer system to manage access to data in said database table, wherein said at least one volatility attribute is used to manage access to data in said database table by at least one of: (a) using said at least one volatility attribute to determine an optimum query execution strategy for a query against data in said database table, (b) using said at least one volatility attribute to determine whether to re-optimize a previously saved query execution strategy for a query against data in said database table, (c) using said at least one volatility attribute to determine whether to collect statistical data regarding said database table, and (d) using said at least one volatility attribute to manage storage and/or retrieval of data in said at least one database table.

US Pat. No. 10,114,825

DYNAMIC RESOURCE-BASED PARALLELIZATION IN DISTRIBUTED QUERY EXECUTION FRAMEWORKS

SAP SE, Walldorf (DE)

1. A method comprising:
receiving, at runtime, by a database server from a remote application server, a query associated with a calculation scenario defining a data flow model, the data flow model comprising a plurality of calculation nodes defining a plurality of operations to be executed at runtime by a calculation engine on the database server, the plurality of calculation nodes comprising a dynamic split operator, the dynamic split operator identifying at least one operation of the plurality of operations as available for parallelizing, the dynamic split operator further comprising a criterion to be evaluated to determine, based at least on an input of the at least one operation, a quantity of partitions for splitting the input, the at least one operation being performed on each partition of the input in parallel, the criterion being evaluated during an execution of an execution plan associated with the plurality of operations instead of during a generation of the execution plan, and the criterion quantifying an amount of available database resources necessary to allow parallelization of the at least one operation without negatively affecting other processes running on the database server;
instantiating, by the database server, a runtime model of the calculation scenario based on the plurality of calculation nodes;
generating, at runtime from the runtime model of the calculation scenario, the execution plan specifying how the plurality of operations are to be executed against a database managed by the database server, the runtime model comprising the dynamic split operator;
executing, by the database server, the execution plan, the execution of the execution plan comprising:
evaluating the criterion to at least determine, during the execution of the execution plan, the quantity of partitions for splitting the input, the criterion being evaluated based at least on the input to the at least one operation; and
splitting, during the execution of the execution plan, the input into a first partition and a second partition, the splitting being based at least on the quantity of partitions, the first partition and the second partition being operated upon by two or more parallel processor threads comprising the at least one operation; and
providing, by the database server to the application server, the data set.

US Pat. No. 10,114,824

TECHNIQUES FOR PROVIDING A USER WITH CONTENT RECOMMENDATIONS

Verizon Patent and Licens...

1. A method performed by a server device, the method comprising:
determining, by the server device, recommendations for media content for a user based on user profile information and a viewing history of media content of the user, the recommendations being further based on features associated with each item of the media content,
wherein each feature is associated with a category and, on a per-user basis:
a weight for the category,
a raw score for the feature, and
a weighted score that is based on the weight for the category and the score for the feature,
wherein the determining of the recommendations for the media content includes using a multi-tiered utility matrix for determining the recommendations for the media content;
communicating, by the server device and to a user device associated with the user,
the recommendations for the media content, and
one or more reasons for which the server device determined the recommendations, wherein the one or more reasons indicate a set of features, of the recommended media content, that were used in the determination of the recommendations,
wherein communicating the recommendations and the one or more reasons allows the user device to present at least one of the recommendations and at least one of the features, of the set of features that were used in the determination of the recommendations;
receiving, by the server device, feedback, from the user device, regarding:
the recommendations for the media content, and
the at least one feature, of the set of features used by the server device to determine the recommendations;
modifying, by the server device, the weighted score for the at least one feature, the modifying including at least one of:
modifying the weight for the category associated with the at least one feature, or
modifying the raw score for the at least one feature;
determining, by the server device, recommendations for different media content for the user based on:
the feedback regarding the recommendations for the media content, and
the modified weighted score, as determined based on the feedback regarding the at least one feature, of the set of features that were used by the server device to determine the recommendations,
wherein the determining of the recommendations for the different media content includes:
updating the multi-tiered utility matrix based on the feedback regarding the recommendations for the media content and the modified weighted score, as determined based on the feedback regarding the at least one feature, of the set of features, that were used to determine the recommendations, and
determining the recommendations for the different media content further based on the updated multi-tiered utility matrix; and
communicating, by the server device and to the user device, the recommendations for the different media content.
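
A compact sketch of the per-user scoring structure described above (category weight times raw feature score) together with a simple feedback nudge; the update rule is an assumption, not the patented method:

```python
# Weighted scores combine a category weight with a raw feature score; feedback
# adjusts the weight of the category that drove a recommendation.
def weighted_scores(raw_scores, feature_category, category_weights):
    return {feature: category_weights[feature_category[feature]] * raw
            for feature, raw in raw_scores.items()}

def recommend(items, raw_scores_by_item, feature_category, category_weights, top_n=3):
    def item_score(item):
        ws = weighted_scores(raw_scores_by_item[item], feature_category, category_weights)
        return sum(ws.values())
    return sorted(items, key=item_score, reverse=True)[:top_n]

def apply_feedback(category_weights, feature_category, feature, liked, step=0.1):
    """Nudge the weight of the feature's category up or down based on user feedback."""
    category = feature_category[feature]
    category_weights[category] += step if liked else -step
    return category_weights
```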

US Pat. No. 10,114,823

SYSTEMS AND METHODS FOR METRIC DATA SMOOTHING

Ayasdi, Inc., Menlo Park...

1. A computer implemented method comprising:
a processing device storing executable instructions for receiving a matrix for a set of documents, each row of the matrix corresponding to each document of the set of documents and each column of the matrix corresponding to a text segment that is in at least one document of the set of documents, each cell of the matrix including a frequency value indicating a number of instances of a corresponding text segment in a corresponding document;
receiving an indication of a relationship between two text segments, each of the two text segments associated with a first column and a second column, respectively, of the matrix;
adjusting, for each document, a frequency value of the second column based on a frequency value of the first column;
projecting each frequency value of the matrix into a reference space to generate a set of projection values in the reference space;
identifying a plurality of subsets of the reference space, at least some of the plurality of subsets including at least some of the projection values of the set of projection values in the reference space;
clustering, for each subset of the plurality of subsets, at least some documents of the set of documents that correspond to a subset of the set of projection values to generate clusters of one or more documents; and
generating a graph of nodes, each of the nodes identifying one or more of the documents corresponding to each cluster.
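
A very small sketch of the smoothing, projection, and subset-identification steps; the averaging projection and the overlapping-interval cover are assumptions chosen to keep the example self-contained, and clustering within each subset is omitted:

```python
# Adjust a related column's frequencies, project rows to a reference value,
# and group documents into overlapping subsets of the reference space.
def smooth(matrix, related_col, source_col, factor=0.5):
    """Adjust the related term's frequency using the source term's frequency."""
    for row in matrix:
        row[related_col] += factor * row[source_col]
    return matrix

def project(matrix):
    """Project each document (row) to a single reference value."""
    return [sum(row) / len(row) for row in matrix]

def cover(projections, num_intervals=4, overlap=0.1):
    lo, hi = min(projections), max(projections)
    width = (hi - lo) / num_intervals
    subsets = []
    for i in range(num_intervals):
        start, end = lo + i * width - overlap, lo + (i + 1) * width + overlap
        subsets.append([d for d, p in enumerate(projections) if start <= p <= end])
    return subsets

matrix = [[2.0, 0.0, 1.0], [0.0, 3.0, 0.0], [1.0, 1.0, 4.0]]
smooth(matrix, related_col=1, source_col=0)
subsets = cover(project(matrix))   # documents grouped into overlapping subsets
```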

US Pat. No. 10,114,822

ENHANCED REPORTING SYSTEM

SAP SE, Walldorf (DE)

1. A system comprising:
a computer processor;
a first database system coupled to the computer processor, the first database system comprising a plurality of tables; and
a second database system coupled to the computer processor, the second database system comprising a subset of the tables in the first database system;
wherein the computer processor is operable to identify a plurality of reports that are currently generated using the second database system;
wherein the computer processor is operable to automatically identify without user input a plurality of reports that are not currently generated using the second database system but that are capable of being generated using the second database system;
wherein the computer processor is operable to display on a computer display device a list of the plurality of reports that are not currently generated using the second database system but that are capable of being generated using the second database system;
wherein the computer processor is operable to receive input from a user selecting one or more of the plurality of reports, generate the selected reports, and display the generated reports on a computer display device;
wherein the computer processor is operable to receive input identifying an additional report to be generated, the additional report to be generated requiring one or more tables that are not on the second database system;
wherein the computer processor is operable to identify the one or more tables that are not on the second database system and that are needed to generate the additional report;
wherein the computer processor is operable to determine the size of each of the one or more tables that are not on the second database system and that are needed to generate the additional report;
wherein the computer processor is operable to display on the computer display device a list of the one or more tables and the size of the one or more tables that are not on the second database system and that are needed to generate the additional report; and
wherein the computer processor is operable to display a list of a plurality of additional reports that are currently not generated using the second database system sorted by number of additional tables that are required to generate the additional report and size of additional tables needed to generate the additional report.

US Pat. No. 10,114,821

METHOD AND SYSTEM TO ACCESS TO ELECTRONIC BUSINESS DOCUMENTS

TractManager, Inc., Chat...

1. A method in a computing system for controlling access to document records and document summary pages associated with digital versions of paper contracts associated with a corporate entity, the method comprising:
maintaining, by a computing system, an organization-specific database having corporate entities and authorized users;
receiving, by the computing system, scanned images of paper contracts;
processing, by the computing system, each scanned image of a paper contract to generate a searchable text file corresponding to text contained in the scanned image;
using the organization-specific database, generating, by the computing system, a graphical interface allowing for the selection of a corporate entity and an authorized user from a plurality of presented corporate entities and authorized users;
receiving, by the computing system, an indication of at least one corporate entity and at least one authorized user associated with each paper contract;
generating, by the computing system, a document record for each paper contract, the document record associating the scanned image of a paper contract with the searchable text file associated with the scanned image of the paper contract, the indication of the at least one corporate entity and at least one authorized user associated with the paper contract;
generating, by the computing system, a document summary page for each paper contract having fields populated with summarized terms of the paper contract;
setting, by the computing system, an access level associated with each paper contract; and
limiting, by the computing system, the access of the at least one authorized user to information associated with a document record and to a document summary page based on the access level of the corresponding paper contract.

US Pat. No. 10,114,820

DISPLAYING ORIGINAL TEXT IN A USER INTERFACE WITH TRANSLATED TEXT

Google LLC, Mountain Vie...

1. A method comprising:
receiving, using one or more processors, a user request to translate a source document from a first language text to a second language text;
obtaining, using the one or more processors, a translated document containing a translation of the source document into the second language text;
formatting, using the one or more processors, the translated document based on the second language;
presenting, using the one or more processors, the translated document, wherein the formatting is used to present the translated document; and
presenting, using the one or more processors, the first language text that corresponds to a portion of the translated document in a graphical element overlaying the translated document in response to a user input pointing to the portion of the translated document.

US Pat. No. 10,114,818

SYSTEM AND METHOD FOR LOCATING BILINGUAL WEB SITES

1. A method comprising:
performing a generic web crawl to identify a first webpage in a first language having a link thereon which points to a second webpage in a second language, wherein the first webpage and the second webpage comprise a bilingual website;
based on an analysis of parameters on the first webpage comprising at least two of: the link pointing to the second webpage, a title, a link neighborhood, a link context and data indicating a separate version of the first webpage, classifying the first webpage as a root page and as an entry point for the bilingual website via the link to the second webpage;
performing a bidirectional web crawl between the first webpage and the second webpage to identify the first webpage and the second webpage as the bilingual website, the bidirectional web crawl utilizing classifications of links to avoid links having a low respective relevance;
extracting information pairs from the first webpage and the second webpage for use in a language translation model, the information pairs comprising at least one of a word pair, a paragraph pair and a sentence pair; and
updating a statistical model with domain representative data using the information pairs.

US Pat. No. 10,114,817

DATA MINING MULTILINGUAL AND CONTEXTUAL COGNATES FROM USER PROFILES

Microsoft Technology Lice...

1. A method comprising:
storing a plurality of multi-language profiles of a plurality of users;
identifying one or more multilingual cognates in each profile of the plurality of multi-language profiles;
based on the one or more multilingual cognates identified in each profile of the plurality of multi-language profiles, generating one or more translation models;
receiving input that indicates a selection, by a second user, of data that is associated with a first user that is different than the second user, wherein the plurality of users includes users other than the second user and the first user;
determining a first language that is associated with the first user;
determining a second language that is different than the first language and that is associated with the second user;
wherein a plurality of data items in a profile of the first user are in the first language;
translating the plurality of data items into the second language using the one or more translation models;
in response to receiving the input, causing a translated version of the plurality of data items to be displayed to the second user, wherein the translated version is in the second language;
wherein the method is performed by one or more computing devices.

US Pat. No. 10,114,816

ASSESSING COMPLEXITY OF DIALOGS TO STREAMLINE HANDLING OF SERVICE REQUESTS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented dialogue complexity assessment method, the method comprising:
calculating a complexity utilizing domain-dependent terms and domain-independent terms of a dialogue in stages by:
computing a complexity of the dialogue at an utterance level;
computing a complexity of the dialogue at a turn level; and
computing a complexity of the dialogue based on the complexity of the constituent turns and utterances,
wherein the complexity calculation distinguishes between any of a domain-dependent term, a common language term, and a stop word in the dialogue.
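
A sketch of the staged utterance-to-turn-to-dialogue computation; the term-weighted count standing in for the complexity measure, and the toy vocabularies, are assumptions made for the example:

```python
# Utterance-level complexity rolls up into turn-level and dialogue-level
# complexity, distinguishing domain terms, common terms, and stop words.
DOMAIN_TERMS = {"latency", "throughput"}          # illustrative vocabularies
STOP_WORDS = {"the", "a", "is", "my"}

def utterance_complexity(utterance):
    score = 0.0
    for word in utterance.lower().split():
        if word in STOP_WORDS:
            continue
        score += 2.0 if word in DOMAIN_TERMS else 1.0   # domain terms weigh more
    return score

def turn_complexity(turn):
    return sum(utterance_complexity(u) for u in turn)

def dialogue_complexity(dialogue):
    return sum(turn_complexity(t) for t in dialogue)

dialogue = [["My latency is high", "It started today"], ["Is throughput affected?"]]
print(dialogue_complexity(dialogue))
```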

US Pat. No. 10,114,815

CORE POINTS ASSOCIATIONS SENTIMENT ANALYSIS IN LARGE DOCUMENTS

INTERNATIONAL BUSINESS MA...

1. A method comprising:
aggregating, from a set of points extracted from a large document, a set of core points, a point and a core point each being a topic covered in the document;
constructing, for the core point in the set of core points, a network of associations, wherein an association in the network comprises an entity that has a relationship with the core point by virtue of having contributed data in the document that relates to the core point;
computing, from the contributed data a sentiment value of the contributed data, the sentiment value being indicative of a sentiment of the entity towards the core point;
analyzing a data stream from a data source other than the document to isolate a portion of data stream that relates to the core point;
identifying a second entity in the portion of the data stream, wherein the second entity contributes the portion of the data stream that relates to the core point;
computing, from the portion of the data stream that relates to the core point, a second sentiment value of the second entity towards the core point;
determining that the second entity does not exist in the network of associations;
adding, responsive to determining that the second entity does not exist in the network of associations, the second entity and the second sentiment value to the network of associations;
computing from a set of sentiment values corresponding to the associations in the network of associations, an overall sentiment value for the core point; and
reporting overall sentiment values for each core point in the document.

US Pat. No. 10,114,814

SYSTEM AND METHOD FOR ACTIONIZING PATIENT COMMENTS

NARRATIVEDX, INC., Austi...

1. A system for processing and actionizing patient experience data, the system comprising:
a server comprising a natural language processing (NLP) engine; and
a relational database;
wherein a plurality of communications is received at the server, each of the plurality of communications comprises comment data collected from publicly available data of an Internet web site or from one or more surveys, wherein the comment data comprises structured or unstructured patient experience data;
wherein the comment data from each of the plurality of communications is transformed to structured patient experience data and stored at the relational database in a response table that includes one or more records, wherein each record corresponds to the comment data from a communication, and wherein each record comprises the corresponding comment data and a timestamp;
wherein the comment data from each of the plurality of communications is parsed for individual phrases to generate a plurality of phrases;
wherein one or more phrases are selected from the plurality of phrases based on a predetermined parameter;
wherein the NLP engine is to predict one or more annotations for the one or more phrases based upon a score, wherein to predict the one or more annotations for the one or more phrases based upon the score comprises to (i) predict one or more annotations for the one or more phrases based upon a machine learning score, wherein the one or more annotations comprise a sentiment, a theme, or any named entity of the one or more phrases, (ii) determine whether the machine learning score is less than a predetermined threshold score, and (iii) predict the one or more annotations for the one or more phrases based upon a reference score in response to a determination that the machine learning score is less than the predetermined threshold score;
wherein the one or more annotations are stored at the relational database in an annotation table that includes one or more records in response to prediction of the one or more annotations, wherein each record corresponds to an annotation, and wherein each record includes the sentiment, the named entity, a primary tag indicative of a subject matter, or a secondary tag indicative of the theme; and
wherein the server is to generate a dashboard web page for a user that includes the one or more annotations in response to prediction of the one or more annotations.

US Pat. No. 10,114,813

MOBILE TERMINAL AND CONTROL METHOD THEREOF

LG ELECTRONICS INC., Seo...

1. A display device, comprising:
a display;
a sound sensor;
a network interface for communicating with a voice recognition server;
a database; and
a controller configured to:
display, on the display, a content including text,
extract the text from the displayed content,
store the extracted text and a weight of the extracted text in the database for a predetermined time period,
update the database by increasing the weight of the stored text when a user's voice corresponding to the stored text is received within the predetermined time period through the sound sensor,
update the database by decreasing the weight of the stored text when the user's voice corresponding to the stored text is not received within the predetermined time period through the sound sensor,
calculate a first matching score representing a degree of matching between the user's voice and the stored text and a second matching score between the user's voice and text stored in the voice recognition server,
select the text corresponding to a higher score among the first matching score reflecting the weight and the second matching score, and
display, on the display, the selected text,
wherein the controller is configured to communicate with a search server via the network interface, and transmit text selected from among at least one displayed text to the search server and to receive a search result corresponding to the transmitted text from the search server and to display the received search result.
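
A rough sketch of the local-versus-server selection; the similarity measure, the weight updates, and the score comparison are assumptions for illustration, not the claimed matching function:

```python
# The locally stored text competes with the recognition server's candidate;
# its recency weight boosts the local matching score.
from difflib import SequenceMatcher

local_db = {}   # text -> weight

def store_text(text, weight=1.0):
    local_db[text] = weight

def update_weight(text, heard_within_period, delta=0.2):
    local_db[text] += delta if heard_within_period else -delta

def match_score(voice_text, candidate):
    return SequenceMatcher(None, voice_text.lower(), candidate.lower()).ratio()

def select_text(voice_text, server_candidate, server_score):
    best_local, best_score = None, 0.0
    for text, weight in local_db.items():
        score = match_score(voice_text, text) * weight   # weight reflects recency
        if score > best_score:
            best_local, best_score = text, score
    return best_local if best_score >= server_score else server_candidate
```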

US Pat. No. 10,114,812

METHOD, APPARATUS, AND COMPUTER PROGRAM PRODUCT FOR SOLVING AN EQUATION SYSTEM USING PURE SPREADSHEET FUNCTIONS

1. A method for computing a value of a root formula expression stored in a spreadsheet software application, the method comprising:A) displaying and operating one or more tabular datasheets by executing the spreadsheet software application on a computer device, each tabular datasheet having a plurality of cells each being designated with a column identifier and a row identifier, the cells being configured to receive values or formula expressions input, evaluate formula expressions, and display output; and
B) providing a programming interface to the spreadsheet software application, the programming interface being configured to:
B1) receive the identifiers for at least one of the cells and retrieve a value or a formula expression from the one of the cells;
B2) receive at least an evaluable formula expression, and evaluate its equivalent value, wherein the evaluable formula expression represents an independent textual expression that is evaluated to an equivalent numerical value by said programming interface; and
C) providing a graph constructor algorithm for representing the root formula and its interdependence on additional nested formulas and the variables on a tree-structured graph of relational nodes, the graph constructor algorithm comprising:
C1) receiving the identifiers for the root formula's cell and the variables' cells and communicating with said programming interface to retrieve the root formula expression;
C2) identifying and retrieving the expressions for the additional nested formula expressions on which the root formula depends by traversing the dependency of any retrieved expression on additional formulas;
C3) constructing a tree-structured graph of relational nodes for representing the interdependence of the root formula expression on the additional nested formula expressions and the variables;
C4) transforming the tree-structured graph into an evaluation graph of relational nodes containing an equivalent sequence of formula expressions; and
D) providing a graph evaluator algorithm for evaluating said evaluation graph, the graph evaluator algorithm comprising:
D1) receiving said evaluation graph and a supply of values for the variables;
D2) traversing the relational nodes of said evaluation graph in an order of their interdependence and transforming the formula expression in each relational node into an evaluable formula expression by replacing the variables in said formula expression in the relational node by the supplied values of the variables, and any reference to a traversed child relational node;
D3) obtaining a value of the relational node by evaluating the value of the evaluable formula expression in the relational node via said programming interface;
D4) aggregating the obtained values of the relational nodes in an order of their interdependence to obtain the value of the root formula expression; and
E) providing a computer software application communicable with the spreadsheet software application via said programming interface, the computer software application being configured to execute said graph constructor algorithm followed by said graph evaluator algorithm obtaining the value of the root formula expression in accordance with the supplied values of the one or more variables for the root formula, the root formula depending on the variables explicitly or implicitly through interdependence on the additional nested formulas, not storing or modifying any stored values in the spreadsheet application.
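
The graph constructor / graph evaluator pair above amounts to: pull formula text out of the cells, build a dependency tree, then evaluate it bottom-up with supplied variable values, without writing anything back to the sheet. A small Python sketch follows, under assumed representations (a dict standing in for the sheet, eval standing in for the programming interface).

    import re

    sheet = {                      # assumed toy sheet: cell -> formula text, or None for a variable
        "A1": "B1 + C1 * 2",
        "B1": "C1 + 3",
        "C1": None,                # variable cell, value supplied at evaluation time
    }

    CELL = re.compile(r"[A-Z]+[0-9]+")

    def build_graph(cell):
        """Recursively collect the cells a formula depends on (the tree-structured graph)."""
        expr = sheet[cell]
        if expr is None:
            return {"cell": cell, "expr": None, "children": []}
        children = [build_graph(ref) for ref in sorted(set(CELL.findall(expr)))]
        return {"cell": cell, "expr": expr, "children": children}

    def evaluate(node, variables):
        """Traverse nodes in dependency order, replacing references with evaluated values."""
        if node["expr"] is None:
            return variables[node["cell"]]
        env = {child["cell"]: evaluate(child, variables) for child in node["children"]}
        return eval(node["expr"], {"__builtins__": {}}, env)   # stands in for the programming interface

    graph = build_graph("A1")
    print(evaluate(graph, {"C1": 4.0}))   # (4 + 3) + 4 * 2 = 15.0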

US Pat. No. 10,114,811

METHODS AND SYSTEMS FOR VALIDATING MULTIPLE METHODS OF INPUT USING A UNIFIED RULE SET

Citrix Systems, Inc., Fo...

1. A server for validating input using a unified rule set, the server comprising:one or more processors configured to execute one or more modules of an application programming interface (API); and
a memory storing the one or more modules of the API, the API comprising a definition of a unified rule set of input validation rules that are executable by different components of a system;
wherein the unified rule set comprises two or more sets of input validation rules in which set membership is based in part on where an input validation rule is to be executed, the unified rule set comprising:
a first set of input validation rules including rules that are to be executed by the API, and
a second set of input validation rules including rules that are to be executed by a first component downward from the API, wherein the downward component is on a device separate from the server;
wherein input is received from the downward component, wherein the input has been validated using the second set of validation rules; and
whether the input is valid is determined using the first set of validation rules.
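
One possible (assumed) Python representation of a unified rule set whose membership is keyed by where each validation rule runs, mirroring the first and second rule sets recited above:

    UNIFIED_RULES = {
        "api": [                                   # first set: executed by the API itself
            lambda v: isinstance(v, str) and len(v) <= 64,
        ],
        "client": [                                # second set: executed by the downward component
            lambda v: bool(v) and not v.isspace(),
        ],
    }

    def validate(value, where):
        """Run only the rules assigned to the given execution location."""
        return all(rule(value) for rule in UNIFIED_RULES[where])

    # The server re-checks input that was already validated by the downward component:
    incoming = "user@example.com"
    assert validate(incoming, "client")            # what the downward component would have run
    print("valid at API:", validate(incoming, "api"))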

US Pat. No. 10,114,809

METHOD AND APPARATUS FOR PHONETICALLY ANNOTATING TEXT

TENCENT TECHNOLOGY (SHENZ...

1. A method, comprising:at a computing device having one or more processors and memory:
receiving a text input from a user, including receiving copied or scanned text for which context-appropriate phonetic annotation is to be performed at the computing device;
identifying a first polyphonic word segment and a first monophonic word segment in the text input, the first polyphonic word segment having at least a first pronunciation and a second pronunciation that is distinct from the first pronunciation, and the first monophonic word segment having a single pronunciation;
determining at least a first probability corresponding to the first pronunciation being a correct pronunciation for the first polyphonic word segment and a second probability corresponding to the second pronunciation being the correct pronunciation for the first polyphonic word segment, wherein the first probability is greater than the second probability;
determining a predetermined threshold difference based on: (1) a comparison of the first probability and the second probability with a preset threshold probability value, respectively, and (2) a magnitude of a difference between the first probability and the second probability;
comparing the difference between the first probability and the second probability with the predetermined threshold difference; and
selecting the first pronunciation as a current pronunciation for the first polyphonic word segment in accordance with a determination that the difference between the first probability and the second probability exceeds the predetermined threshold difference; and
in a text presentation user interface, displaying the input text concurrently with context-appropriate pronunciation annotations to facilitate a user's reading the input text aloud, including:
phonetically annotating the first monophonic word segment in the displayed input text with the single pronunciation of the first monophonic word segment;
phonetically annotating the first polyphonic word segment in the displayed input text with the first pronunciation of the first polyphonic word segment; and
forgoing phonetically annotating the first polyphonic word segment in the displayed input text with the second pronunciation of the first polyphonic word segment.
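
A toy Python sketch of the pronunciation-selection step above; the claim does not spell out how the predetermined threshold is derived from the preset probability value and the probability gap, so the rule below is purely an assumption.

    def choose_pronunciation(p_first, p_second, preset=0.5, scale=0.1):
        """Pick the more probable pronunciation only when the gap clears a threshold
        that depends on how both probabilities compare to a preset value."""
        gap = abs(p_first - p_second)
        # Assumed rule: demand a wider margin when neither reading is individually confident.
        threshold = scale if max(p_first, p_second) >= preset else 2 * scale
        if gap > threshold:
            return "first" if p_first > p_second else "second"
        return None   # too close to call; leave the polyphonic segment to other heuristics

    print(choose_pronunciation(0.72, 0.28))   # -> 'first'
    print(choose_pronunciation(0.52, 0.48))   # -> None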

US Pat. No. 10,114,808

CONFLICT RESOLUTION OF ORIGINALLY PAPER BASED DATA ENTRY

International Business Ma...

1. A method for updating automated annotations for a paper based document, the method comprising:identifying, by an automated system utilizing optical character recognition, ambiguous content within a paper-based medical record, wherein the ambiguous content includes one or more terms that each correspond to a same acronym, wherein the automated system is located on one or more client devices and on a server that is communicatively coupled to the one or more client devices over a communications network;
highlighting, by the automated system, the ambiguous content, wherein the highlighted ambiguous content within the paper-based document comprises information corresponding to automated entity extraction;
including, by the automated system, an area of text that links to a choice box within a margin of the paper-based document;
transmitting, by the automated system and to the SME, an electronically transmitted image of the paper-based document;
receiving, by the automated system, an edited image of the paper-based document that includes:
at least one edited annotation made by the SME that clarifies the ambiguous content, wherein the edited at least one annotation comprises a manual edit using a pen on the paper-based document including the plurality of highlighted ambiguous content, wherein the manual edit using the pen comprises at least one of an approval of an annotation, a disapproval of an annotation, a clarification of an annotation, and an addition of an annotation, wherein the disapproval of the annotation comprises placing an “X” through the annotation, and wherein the approval of the annotation comprises placing a “check mark” through the annotation, and wherein the clarification of the annotation comprises placing a “check mark” in an appropriate check box, wherein the addition of the annotation comprises circling or underlining a non-annotated area of the paper-based document; and
an approval, by the SME filling in the choice box, of the at least one edited annotation;
extracting the at least one edited annotation from the received image of the paper-based document, wherein only the at least one edited annotation is identified while the remainder of the paper-based document is subtracted;
in response to determining the extracted at least one edited annotation is approved by the SME, adding the extracted at least one edited annotation of the paper-based document to a data retention system, wherein the data retention system is a structured data system located on the server and includes a plurality of electronic medical records (EMR), wherein the plurality of electronic medical records are an electronic representation of the medical record;
retrieving the approved added extracted at least one edited annotation to identify and update the plurality of EMR within the data retention system that include the highlighted ambiguous content by replacing the highlighted ambiguous content with the approved at least one edited annotation;
updating previously stored ambiguous content that was highlighted within the data retention system with the approved at least one edited annotation, and wherein updating the previously stored highlighted ambiguous content within the data retention system includes updating the previously stored highlighted ambiguous content within the one or more client devices; and
rescanning the paper-based document back into the automated system;
removing original text from the rescanned paper-based document; and
utilizing acronym token boundaries to identify any acronym tokens within the paper-based document that need to be updated for future use in previously saved EMRs.

US Pat. No. 10,114,807

AUTOMATICALLY EVALUATING LIKELY ACCURACY OF EVENT ANNOTATIONS IN FIELD DATA

PHYSIO-CONTROL, INC., Re...

1. A device comprising:a processor; and
a non-transitory storage medium communicatively coupled to the processor, the storage medium configured to store one or more programs which, when executed by the processor, cause the device to:
receive field data, derived from a Cardio Pulmonary Resuscitation (CPR) session, including events of at least two different types occurring to a patient over time, the event types including at least one of chest compressions and ventilations within the CPR session;
receive annotations that have been previously generated from field data, the annotations identifying at least some of the events;
calculate at least one of a relative timing of at least two events of the field data based on the annotations identifying the at least two events;
obtain at least one accuracy criterion, the accuracy criterion indicating an expected event sequence and the relative timing of the at least two events;
compute at least one accuracy score for the annotations based on the accuracy criterion;
assign, out of a plurality of possible grades, at least one grade based on the accuracy score, the assigned grade indicating an accuracy with which the annotations identify the events identified by the annotations; and
output a user signal that includes the at least one grade for the annotations.
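
An illustrative Python sketch of scoring annotations against a relative-timing criterion and mapping the score to a grade; the plausibility criterion and the grade bands are assumptions, not the patent's rules.

    def accuracy_score(annotations, max_gap_s=6.0):
        """annotations: list of (time_s, event_type). Score the fraction of consecutive
        annotated events whose spacing looks plausible for a CPR session."""
        annotated = sorted(annotations)
        if len(annotated) < 2:
            return 0.0
        ok = sum(1 for (t0, _), (t1, _) in zip(annotated, annotated[1:])
                 if 0.0 < t1 - t0 <= max_gap_s)
        return ok / (len(annotated) - 1)

    def grade(score):
        return "A" if score >= 0.9 else "B" if score >= 0.75 else "C" if score >= 0.5 else "D"

    anns = [(0.0, "compression"), (0.5, "compression"), (1.1, "ventilation"), (9.0, "compression")]
    s = accuracy_score(anns)
    print(round(s, 2), grade(s))   # 2 of 3 gaps plausible -> 0.67, grade 'C'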

US Pat. No. 10,114,806

MANAGEMENT OF BUILDING PLAN DOCUMENTS UTILIZING COMMENTS AND A CORRECTION LIST

E-PLAN, INC., Irvine, CA...

1. A system for managing building plan documents, comprising:one or more computing devices;
non-transitory computer readable memory storing program code that when executed by the one or more computing devices is configured to cause the system to perform operations comprising:
receiving an electronic building plan document including a plurality of plan sheets;
providing a first of the plurality of plan sheets for display, the first plan sheet comprising a first plurality of layers including at least a document layer and a comments layer;
providing a layer selection user interface via which a user can select which of the first plurality of layers are to be displayed;
receiving, via the layer selection user interface, user layer selections;
inhibiting the display of unselected layers and enabling the display of selected layers;
providing a user interface via which a user can select, via a predefined standard comments library, a first comment, the first comment comprising text, and associate a document to the first comment;
storing a first plurality of comments;
providing for display a comments list in association with the first plan sheet, the comments list including a second plurality of comments;
at least partly in response to the user selecting a second comment with a specified plan sheet coordinate in the comments list, providing the second comment in a comment layer for display over the first plan sheet at the plan sheet coordinate in a sheet review display pane;
enabling the user to search for comments by specifying one or more search terms;
generating and providing comments search results in response to a search query received via the search user interface;
providing a user interface via which the user can select one or more comments to be included in a plan correction list;
generating a correction list of items that need to be corrected in order for at least one approval document to be issued, the correction list including a plurality of comments specified by a plurality of users wherein the correction list includes:
associated comments, and
respective sheet identifiers for comments included in the correction list;
transmitting the correction list to at least one user;
tracking approval status of one or more building-related approval documents and providing the approval status to one or more users.
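
A small Python sketch of assembling a correction list from user-selected comments, keeping each comment's sheet identifier; the record fields are assumed for illustration.

    def build_correction_list(comments, selected_ids):
        """comments: list of dicts with 'id', 'sheet', 'author', 'text'. Returns the
        correction-list items (comment text plus its sheet identifier)."""
        return [
            {"sheet": c["sheet"], "author": c["author"], "item": c["text"]}
            for c in comments if c["id"] in selected_ids
        ]

    comments = [
        {"id": 1, "sheet": "A-101", "author": "plan.checker", "text": "Add egress width dimension."},
        {"id": 2, "sheet": "E-201", "author": "fire.review",  "text": "Show panel schedule."},
        {"id": 3, "sheet": "A-101", "author": "plan.checker", "text": "Clarify door swing."},
    ]
    print(build_correction_list(comments, {1, 3}))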

US Pat. No. 10,114,805

INLINE ADDRESS COMMANDS FOR CONTENT CUSTOMIZATION

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:presenting, by a computing system, a network document associated with a uniform resource locator, the network document having a predetermined structure and being presented in a browser;
upon detecting a change in the uniform resource locator, and upon determining that the change in the uniform resource locator does not impact a domain name in the uniform resource locator associated with the network document, processing the change as a request from a user, the request comprising:
a first selector which indicates a first portion of the network document, the first portion comprising less than the entire network document;
a first action indicator which indicates a first change to be made to the first portion of the network document indicated by the first selector such that portions of the network document other than the indicated first portion remain unaffected by the first change, the first action indicator comprising a first method to be performed with respect to the indicated first portion that causes the first portion to be resized from its position in the predetermined structure;
a second selector which indicates a second portion of the network document that is distinct from the first portion of the network document, the second portion comprising less than the entire network document; and
a second action indicator which indicates a second change to be made to the second portion of the network document indicated by the second selector such that portions of the network document other than the indicated second portion remain unaffected by the second change, the second action indicator comprising a second method to be performed with respect to the indicated second portion that causes the second portion to be resized from its position in the predetermined structure;
asynchronously updating, by the computing system, the first portion of the network document such that the first portion is updated based on the first method, based at least in part on information contained within the request, with portions of the network document other than the indicated first portion of the network document remaining unchanged by the first change;
asynchronously updating, by the computing system, the second portion of the network document such that the second portion is updated based on the second method, based at least in part on the second selector and the second action contained within the request, with portions of the network document other than the indicated second portion of the network document remaining unchanged by the second change; and
causing the network document with the updated first portion and the updated second portion to be presented in the browser.
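
A hypothetical Python sketch of recognizing such a request from a URL change that leaves the domain untouched; the fragment syntax used here to carry the selectors and action indicators is invented purely for illustration.

    from urllib.parse import urlparse, parse_qs

    def parse_inline_commands(old_url, new_url):
        old, new = urlparse(old_url), urlparse(new_url)
        if old.netloc != new.netloc:
            return None        # domain changed: ordinary navigation, not a customization request
        # Assumed fragment format: #sel1=...&act1=...&sel2=...&act2=...
        params = parse_qs(new.fragment)
        return [
            {"selector": params[f"sel{i}"][0], "action": params[f"act{i}"][0]}
            for i in (1, 2) if f"sel{i}" in params
        ]

    print(parse_inline_commands(
        "https://example.com/page",
        "https://example.com/page#sel1=.sidebar&act1=shrink&sel2=.main&act2=expand",
    ))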

US Pat. No. 10,114,804

REPRESENTATION OF AN ELEMENT IN A PAGE VIA AN IDENTIFIER

INTERNATIONAL BUSINESS MA...

1. A computer program product comprising:a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code readable by a processor to cause the processor to perform:
encountering a first element in a web page for a first time;
computing a first identifier for the first element using an algorithm responsive to the encountering of the first element,
wherein the first identifier represents a subtree;
causing the first identifier to be stored in a storage device using a hash representation of the first identifier to minimize a density of the storage device that is used when a history is unavailable for the first element in the storage device,
wherein the first identifier is available without re-computing the first identifier when the first identifier exists as the hash representation in the history, and
wherein a re-computing of the first identifier for the first element using the algorithm responsive to the encountering of the first element is performed and the re-computed first identifier is stored as the hash representation in the history for future reference when the first identifier does not exist as the hash representation in the history;
subsequently encountering the first element two or more times in a different location of the web page and in a second web page;
representing the first element two or more times at the time of the subsequent encounters by retrieving the first identifier from the storage device,
wherein the algorithm computes the first identifier in a bottom-up fashion starting with leaf nodes and continuing to a root node, in a deterministic way to yield the first identifier when computing the first identifier, and
wherein the algorithm computes the first identifier in a screening operation that comprises a first stripping out of the first element when it is deemed unimportant, a grouping of similar leaf nodes under the root node, a second stripping out of the first element when the first element is repeated two or more times within the similar leaf nodes under the root node, and repeating the grouping and the second stripping until there are no more duplicates to group; and
recursively computing respective unique identifiers for each of one or more subsequent elements derived from the first element,
wherein each of the one or more subsequent elements derived are from the first identifier and represented in the subtree by the respective unique identifiers,
wherein the recursive computing includes:
encountering a second element in a web page for a second time,
wherein the second element is derived from the first identifier;
computing a second identifier for the second element using the algorithm,
wherein the second identifier is based on having encountered the second element, is different than the first identifier, and is stored within the subtree represented by the first identifier;
causing the second identifier to be stored in the storage device using a hash representation of the second identifier to minimize the density of the storage device that is used;
subsequently encountering the second element two or more times in a second different location of the web page and in the second web page; and
representing the second element by the second identifier at the time of the subsequent encounters by retrieving the second identifier from the storage device.
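
A compact Python sketch of a deterministic bottom-up subtree identifier that is cached so repeated encounters reuse the stored value; the hash choice, the "unimportant" tag set, and the dedup rule are assumptions rather than the patented algorithm.

    import hashlib

    ID_STORE = {}                                   # hash representation -> stored identifier ("history")
    UNIMPORTANT = {"script", "style"}               # assumed set of elements to strip out

    def element_id(tag, children=()):
        """Deterministic, bottom-up identifier for a (tag, children) subtree."""
        if tag in UNIMPORTANT:                      # first stripping: drop unimportant elements
            return None
        child_ids = [cid for cid in (element_id(*c) for c in children) if cid is not None]
        deduped = sorted(set(child_ids))            # group similar children, drop repeats
        digest = hashlib.sha256((tag + "|" + ",".join(deduped)).encode()).hexdigest()
        return ID_STORE.setdefault(digest, digest[:16])   # reuse the stored identifier if already seen

    tree = ("div", (("p", ()), ("p", ()), ("script", ())))
    print(element_id(*tree))
    print(element_id(*tree) == element_id(*tree))   # True: same identifier on re-encounter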

US Pat. No. 10,114,802

METHOD, DEVICE, AND SYSTEM FOR ACCESSING THIRD PARTY PLATFORMS VIA A MESSAGING APPLICATION

TENCENT TECHNOLOGY (SHENZ...

1. A method for providing a first client with access to information of a second client managed by an instant messaging server system, wherein the first client and the second client each have respective user accounts of the instant messaging server system, wherein the user account of the first client is a personal account that is a first type of user account of the instant messaging server system through which the first client exchanges instant messages with other user accounts of the instant messaging server system, and the user account of the second client is a Public Number that is a second type of user account of the instant messaging server system distinct from the first type of user account, through which the second client posts one or more documents to be broadcast to other user accounts of the instant messaging server system, the method comprising: the instant messaging server systemreceiving an information access request from the first client to subscribe to the Public Number of the second client by requesting information of the Public Number of the second client, wherein the information access request is generated by scanning a 2D barcode using a first user interface of an instant messaging application associated with the instant messaging server system, wherein the first user interface is displayed on a display of a client terminal through which the first client has logged into its user account at the instant messaging system, and the 2D barcode includes information identifying the Public Number of the second client;
in response to receiving the information access request generated by scanning the 2D barcode:
connecting the user account of the first client to the Public Number of the second client such that the user account of the first client is a follower of the Public Number;
generating an icon linked to the requested information of the Public Number of the second client; and
forwarding the icon to the first client, wherein the icon comprises the information identifying the Public Number of the second client, and wherein the icon is one of a plurality of icons to be arranged within a second user interface of the instant messaging application, displayed on the display of the client terminal in response to receiving the icon, each of the plurality of icons corresponding to a distinct Public Number to which the user account of the first client is subscribed, each Public Number being associated with a distinct client of the instant messaging server system;
receiving an information download request from the first client, wherein the information download request is generated by a user selection of the icon;
in response to the information download request:
generating one or more information snippets, each information snippet including a link to a respective document posted by the second client to the Public Number; and
returning the one or more information snippets to the client terminal, wherein the one or more information snippets correspond to the one or more documents posted by the second client and are to be arranged within a third user interface of the instant messaging application displayed on the display of the client terminal in replacement of the second user interface;
receiving, from the first client, a content viewing request for a respective information snippet displayed on the display of the client terminal, wherein the content viewing request is generated by a user selection of the respective information snippet within the third user interface; and
in response to the content viewing request, returning, to the client terminal, a respective document posted by the second client and corresponding to the respective information snippet, wherein the respective document is to be arranged within a fourth user interface of the instant messaging application displayed on the display of the client terminal in replacement of the third user interface.

US Pat. No. 10,114,801

TREEMAP OPTIMIZATION

ENTIT SOFTWARE LLC, Sunn...

1. A method executed by a system comprising a processor, comprising:accessing a plurality of data files in storage;
assigning a size value to each of the plurality of data files, wherein each size value is determined based on a size of a corresponding data file of the plurality of data files;
in ascending order of size value of the size values assigned to the plurality of data files, iteratively merging each of the data files of a first subset of the plurality of data files into a merge file until a threshold is reached, the plurality of data files comprising a second subset of data files in addition to the first subset of data files;
displaying the second subset of data files as a plurality of boxes on a treemap on a display screen, wherein the size of each box of the plurality of boxes correlates to the size value of a corresponding data file of the second subset of data files, and displaying the merge file as a merge box on the treemap on the display screen; and
in response to user selection of the merge box in the treemap, presenting information of each individual data file of the first subset of data files merged into the merge file, the presented information enabling user access to the first subset of data files.
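
The merge step above can be sketched in a few lines of Python: walk the files in ascending size order, fold them into a merge entry until a threshold is hit, and show everything else individually. The threshold semantics below are an assumption.

    def build_treemap_entries(files, threshold_bytes=4096):
        """files: dict name -> size in bytes. Returns (visible_entries, merged_names, merge_size)."""
        ordered = sorted(files.items(), key=lambda kv: kv[1])        # ascending size value
        merged, merge_size = [], 0
        for name, size in ordered:
            if merge_size + size > threshold_bytes:
                break
            merged.append(name)                                      # first subset: merged away
            merge_size += size
        visible = {n: s for n, s in files.items() if n not in merged}   # second subset, shown as boxes
        return visible, merged, merge_size

    files = {"a.log": 100, "b.log": 300, "readme": 900, "data.bin": 8000, "movie.mp4": 50000}
    print(build_treemap_entries(files))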

US Pat. No. 10,114,800

LAYOUT RECONSTRUCTION USING SPATIAL AND GRAMMATICAL CONSTRAINTS

INTUIT INC., Mountain Vi...

1. A computer-implemented method for determining a layout of information in a document, the method comprising:receiving an image of the document;
receiving an identifier of a user;
prior to performing an image analysis on the image, selecting an initial layout of the document based on the identifier;
performing the image analysis on the image to calculate features, wherein the features include binary robust invariant scalable keypoints or fast retina keypoints;
determining a layout of the document based on the calculated features, as well as spatial constraints and grammatical constraints associated with the initial layout, wherein the layout specifies locations of content in the document;
sending information specifying the determined layout to the user;
requesting feedback regarding the determined layout;
receiving the requested feedback, wherein the feedback confirms that the determined layout matches the document;
populating fields in a form with the content, wherein the form matches the determined layout such that the content appears in the specified locations in the form; and
sending the form for presentation on a display in the determined layout to the user.

US Pat. No. 10,114,799

METHOD FOR ARRANGING IMAGES IN ELECTRONIC DOCUMENTS ON SMALL DEVICES

Canon Kabushiki Kaisha, ...

1. A method of displaying digital images using an image display device, the method comprising the steps of:displaying a plurality of digital images on the display device;
dragging, across an electronic document on the display device, a digital image from the plurality of displayed digital images in response to a user's designation;
making the plurality of displayed digital images partly transparent except for the dragged digital image while the digital image is dragged;
changing a display position of the dragged digital image according to a user operation; and
changing opacity properties of each of the plurality of displayed digital images other than the dragged digital image so that an opacity property of each of the plurality of displayed digital images other than the dragged digital image is restored to an opacity property possessed before the dragging commenced, the changing being performed once the dragging of the digital image across the electronic document is completed.

US Pat. No. 10,114,798

AUTOMATED CONTEXT-BASED UNIQUE LETTER GENERATION

Progrexion IP, Inc., Nor...

1. A computer-implemented method that causes a computing system to generate a letter, the computing system comprising one or more processors and a memory storing the following: (a) a first plurality of text strings, each suitable for use in at least a first portion of a salutation in the letter, (b) a second plurality of text strings, each suitable for use in at least a first portion of a request for action in the letter, and (c) a third plurality of text strings, each suitable for use in at least a first portion of a signature block in the letter, the computing system also storing computer-executable instructions that are executed by the one or more processors of the computing system for causing the computing system to perform the method, the method comprising:generating a first random number;
selecting, based on the first random number, one text string from the first plurality of text strings;
generating a second random number;
selecting, based on the second random number, one text string from the second plurality of text strings;
generating a third random number;
selecting, based on the third random number, one text string from the third plurality of text strings;
obtaining a first set of rules that defines a mandatory ordering for each portion and each sub-portion of the following letter portions: (a) the salutation of the letter, (b) the request for action in the letter, and (c) the signature block of the letter, the first set of rules governing a recursive construction of the letter such that construction of subsequent portions of the letter are at least partially determined by one or more earlier portions of the letter; and
generating the letter using at least the one text string selected from the first plurality of text strings, the one text string selected from the second plurality of text strings, and the one text string selected from the third plurality of text strings, wherein generating the letter is performed by evaluating the first set of rules against each of the following: (a) the one text string selected from the first plurality of text strings, (b) the one text string selected from the second plurality of text strings, and (c) the one text string selected from the third plurality of text strings, whereby the letter is recursively constructed by evaluating the first set of rules against various different selected portions of text strings.
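
A minimal Python sketch of the three random selections recited above; the pools, the fixed ordering rule standing in for the first set of rules, and the output format are illustrative assumptions.

    import random

    SALUTATIONS = ["To whom it may concern,", "Dear Sir or Madam,", "Dear Credit Department,"]
    REQUESTS    = ["Please investigate the item listed below.",
                   "I am writing to dispute the following entry.",
                   "Kindly review and correct the record referenced here."]
    SIGNOFFS    = ["Sincerely,", "Respectfully,", "Best regards,"]

    def generate_letter(rng=random):
        # One independent random draw per pool of text strings.
        parts = [rng.choice(SALUTATIONS), rng.choice(REQUESTS), rng.choice(SIGNOFFS)]
        # A fixed ordering (salutation -> request -> signature) stands in for the rule set.
        return "\n\n".join(parts)

    print(generate_letter(random.Random(42)))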

US Pat. No. 10,114,797

METHODS AND APPARATUS FOR PROVIDING A PROGRAMMABLE MIXED-RADIX DFT/IDFT PROCESSOR USING VECTOR ENGINES

Cavium, LLC, Santa Clara...

1. An apparatus, comprising:a memory bank;
a vector data path pipeline coupled to the memory bank, wherein the vector data path pipeline includes a vector scaling unit that receives vector data from the memory bank and outputs scaled vector data; and
a configurable mixed radix engine coupled to the vector data path pipeline, wherein the configurable mixed radix engine is configurable to perform a radix computation selected from a plurality of radix computations, and wherein the configurable mixed radix engine performs the selected radix computation on data received from the memory through the pipeline to generate a radix result.

US Pat. No. 10,114,796

EFFICIENT IMPLEMENTATION OF CASCADED BIQUADS

TEXAS INSTRUMENTS INCORPO...

1. An infinite impulse response (IIR) filter comprising:an input terminal to receive an input signal;
an output terminal to output an output signal;
a first multiplier circuit to multiply the input signal by the sum of a first filter coefficient and a second filter coefficient to produce a first signal, the first multiplier circuit having an input to receive the input signal and an output to output the first signal;
a second multiplier circuit to multiply the input signal by the sum of a third filter coefficient and a fourth filter coefficient to produce a second signal, the second multiplier circuit having an input to receive the input signal and an output to output the second signal;
a third multiplier circuit to multiply a feedback signal by the first filter coefficient to produce a third signal, the third multiplier circuit having an input to receive the feedback signal and an output to output the third signal;
a fourth multiplier circuit to multiply the feedback signal by the third filter coefficient to produce a fourth signal, the fourth multiplier circuit having an input to receive the feedback signal and an output to output the fourth signal;
a first summing circuit to sum the second and fourth signals to produce a fifth signal, the first summing circuit having first and second inputs to receive the second and fourth signals, respectively, and an output to output the fifth signal;
a first delay circuit to receive the fifth signal and apply a first delay thereto to produce a sixth signal, the first delay circuit having an input to receive the fifth signal and an output to output the sixth signal;
a second summing circuit to sum the first, third, and sixth signals to produce a seventh signal, the second summing circuit having first, second, and third inputs to receive the first, third, and sixth signals, respectively, and an output to output the seventh signal;
a second delay circuit to receive the seventh signal and apply a second delay thereto to produce an eighth signal, the second delay circuit having an input to receive the seventh signal and an output to output the eighth signal; and
a third summing circuit to sum the input signal and the eighth signal to produce the output signal;
wherein the eighth signal is the feedback signal.
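
The multiplier, summer, and delay arrangement above translates directly into a per-sample routine. A Python sketch follows; the coefficient names c1..c4 simply mirror the first through fourth filter coefficients and are not mapped to a conventional biquad's a/b coefficients.

    class RearrangedBiquad:
        def __init__(self, c1, c2, c3, c4):
            self.c1, self.c2, self.c3, self.c4 = c1, c2, c3, c4
            self.d1 = 0.0   # output of the first delay element (sixth signal)
            self.d2 = 0.0   # output of the second delay element (eighth signal / feedback)

        def step(self, x):
            fb = self.d2
            s1 = x * (self.c1 + self.c2)        # first multiplier
            s2 = x * (self.c3 + self.c4)        # second multiplier
            s3 = fb * self.c1                   # third multiplier
            s4 = fb * self.c3                   # fourth multiplier
            s5 = s2 + s4                        # first summing circuit
            s7 = s1 + s3 + self.d1              # second summing circuit (uses delayed fifth signal)
            y = x + self.d2                     # third summing circuit -> output
            self.d1, self.d2 = s5, s7           # advance both delay elements
            return y

    bq = RearrangedBiquad(0.2, 0.1, -0.3, 0.05)
    print([round(bq.step(x), 4) for x in (1.0, 0.0, 0.0, 0.0)])   # impulse response sample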

US Pat. No. 10,114,795

PROCESSOR IN NON-VOLATILE STORAGE MEMORY

WESTERN DIGITAL TECHNOLOG...

1. A computing system comprising a device, the device comprising:a non-volatile memory divided into a plurality of selectable locations, each bit in the non-volatile memory configured to have corresponding data independently programmed and erased, wherein the selectable locations are grouped into a plurality of data lines;
one or more processing units coupled to the non-volatile memory, each of the processing units associated with a data line of the plurality of data lines, the one or more processing units comprising one or more reconfigurable processing units, the one or more processing units configured to:
manipulate, based on one or more instruction sets, data in an associated data line of the plurality of data lines to generate results that are stored in selectable locations of the associated data line reserved to store results of the manipulation;
determine which of the instruction sets are most frequently used by the one or more processing units to manipulate data; and
reconfigure the one or more reconfigurable processing units to manipulate data using the determined most frequently used instruction sets.

US Pat. No. 10,114,794

PROGRAMMABLE LOAD REPLAY PRECLUDING MECHANISM

VIA ALLIANCE SEMICONDUCTO...

1. An apparatus for reducing replays in an out-of-order processor, the apparatus comprising:a first reservation station, coupled to a hold bus, configured to dispatch a first load micro instruction, and configured to detect and indicate on the hold bus if said first load micro instruction is one of a plurality of specified load instructions directed to one of a plurality of non-core resources which are shared by a plurality of cores of the out-of-order processor;
replay reducer circuitry configured to evaluate an unmodified opcode portion of a load micro instruction that implicates a non-core resource, in order to detect said specified load micro instruction directed to non-core resources;
a second reservation station, coupled to said hold bus, configured to dispatch one or more younger micro instructions therein that depend on said first load micro instruction for execution after a first number of clock cycles following dispatch of said first load micro instruction, and if it is indicated, in response to the detection by the replay reducer circuitry, on said hold bus that said first load micro instruction is said one of said plurality of specified load instructions, said second reservation station is configured to stall dispatch of said one or more younger micro instructions until said first load micro instruction has retrieved an operand;
load execution logic, coupled to said first reservation station, configured to receive and execute said first load micro instruction, wherein, if said first load micro instruction is not said specified load micro instruction, said load execution logic indicates on a miss bus if said first load micro instruction fails to successfully execute in said first number of clock cycles, thus initiating a replay of said one or more younger micro instructions, and wherein, if said first load micro instruction is said specified load micro instruction, said load execution logic does not indicate that said first load micro instruction fails to successfully execute if more than said first number of clock cycles are required to successfully execute, thus precluding a replay of said one or more younger micro instructions; and
said plurality of non-core resources which are located outside of said plurality of cores, comprising:
a fuse array, configured to store said plurality of specified load instructions corresponding to the out-of-order processor, wherein the out-of-order processor, upon initialization, accesses said fuse array to determine said plurality of specified load instructions.

US Pat. No. 10,114,793

METHOD AND APPARATUS FOR DETERMINING A WORK-GROUP SIZE

Samsung Electronics Co., ...

1. A method of determining a work-group size comprising:analyzing, using at least one processor, kernel code including a work-group;
calculating, using the at least one processor, a first value denoting spatial locality of a memory that is shared by one or more work items included in the work-group;
calculating, using the at least one processor, a second value denoting footprints of the one or more work items included in the work-group based on the first value;
determining, using the at least one processor, the work-group size based on the first and second values; and
executing, using the at least one processor, the kernel code using the determined work-group size, wherein
the calculating the first value includes,
converting a memory reference included in the kernel code into a symbol expression including at least one desired symbol,
partially evaluating the symbol expression by substituting a value corresponding to the at least one desired symbol included in the symbol expression into the symbol expression,
calculating a reuse distance by substituting a zero vector and unit vectors into the partially evaluated symbol expression, the zero vector based on a number of dimensions of a work-space of the kernel code, and each of the unit vectors corresponding to a separate dimension of the work-space; and
calculating the first value based on the calculated reuse distance and a memory line size of the memory.

US Pat. No. 10,114,792

LOW LATENCY REMOTE DIRECT MEMORY ACCESS FOR MICROSERVERS

CISCO TECHNOLOGY, INC, S...

1. A method, comprising:generating queue pairs (QPs) in a memory of an input/output (I/O) adapter of a microserver chassis having a plurality of compute nodes executing thereon, the QPs being associated with a remote direct memory access (RDMA) connection, based on an RDMA protocol, between a first compute node and a second compute node in the microserver chassis;
setting a flag in the QPs to indicate that the RDMA connection is local to the microserver chassis;
performing, in response to the flag, a loopback of RDMA packets within the I/O adapter from one memory region in the I/O adapter associated with the first compute node of the RDMA connection to another memory region in the I/O adapter associated with the second compute node of the RDMA connection, without sending the RDMA packets external to the microserver chassis;
wherein an egress direct memory access (DMA) engine performs a lookup of the memory region in the I/O adapter associated with the first compute node of the RDMA connection according to a first microcode routine associated with an I/O operation as loaded by the second compute node and retrieves data according to the I/O operation; and
wherein an ingress DMA engine writes the data in the RDMA packets into the another memory region in the I/O adapter associated with the second compute node of the RDMA connection according to a second microcode routine loaded by an ingress packet classifier.

US Pat. No. 10,114,791

ELECTRONIC APPARATUS, CALCULATION PROCESSING METHOD, AND RECORDING MEDIUM STORING CALCULATION PROCESSING PROGRAM

CASIO COMPUTER CO., LTD.,...

1. An electronic apparatus comprising:a display;
a memory; and
a processor,
wherein the processor is configured to:
accept inputs of a plurality of equations including numerical data in response to user operations;
display, on the display, numerical data of a grand total of calculation results of the plurality of equations;
store, in the memory, the calculation results corresponding to the plurality of equations;
search at least one of the calculation results in response to at least one user operation, without searching contents of the plurality of equations;
display, on the display, the searched at least one of the calculation results;
compare whether a value of newly input numerical data for recalculation is equal to a value of corresponding input numerical data of the searched at least one of the calculation results; and
substitute, in the searched at least one of the calculation results, the newly input numerical data for the corresponding input numerical data without inputting any numerical value key again for substitution of the newly input numerical data in response to the comparison determining that the value of the newly input numerical data is not equal to the value of the corresponding input numerical data, and
wherein the substitution is performed by operating at least one key other than a numerical value key.

US Pat. No. 10,114,788

ADJUSTING AN OPTIMIZATION PARAMETER TO CUSTOMIZE A SIGNAL EYE FOR A TARGET CHIP ON A SHARED BUS

International Business Ma...

1. A system, comprising:a shared bus;
a plurality of chips coupled to respective locations along the shared bus;
a driver coupled to a first end of the shared bus;
a dynamic termination resistor coupled to a second end of the shared bus; and
configuration logic configured to:
evaluate received data to identify at least one target chip of the plurality of chips, wherein the target chip is an intended recipient of the received data,
adjust a resistance value of the termination resistor based upon a location of the target chip on the shared bus before the driver transmits the received data on the shared bus; and
a communication link coupled to the configuration logic and the termination resistor, wherein the communication link is separate and disconnected from the shared bus, and wherein the configuration logic is configured to transmit control signals on the communication link to change the resistance value of the termination resistor.

US Pat. No. 10,114,787

DEVICE IDENTIFICATION GENERATION IN ELECTRONIC DEVICES TO ALLOW EXTERNAL CONTROL OF DEVICE IDENTIFICATION FOR BUS COMMUNICATIONS IDENTIFICATION, AND RELATED SYSTEMS AND METHODS

QUALCOMM Incorporated, S...

1. A bus communications system for allowing a slave device coupled to a communications bus to reprogram its device identification, comprising:the communications bus comprised of a data line and a clock line;
a master device comprised of a master data port and a master clock port, the master device coupled to the communications bus by the master data port coupled to the data line and the master clock port coupled to the clock line;
the slave device comprising:
a device identification port coupled to the communications bus, the device identification port comprising one of a data pin and a clock pin, the data pin coupled to the data line of the communications bus, and the clock pin coupled to the clock line of the communications bus; and
a device identification generation circuit coupled to the device identification port, the device identification generation circuit configured to:
detect an electrical characteristic of a communications bus signal received from the communications bus on the device identification port through the one of the data pin and the clock pin;
generate a device identification based on the detected electrical characteristic selected from a plurality of device identifications provided in the slave device; and
store the generated device identification in a device identification memory;
an alternating current (AC) couple coupled to the device identification port;
an external resistor coupled to the device identification port;
a switch configured to switchably couple the external resistor to the device identification port; and
a switch control line configured to control an opening or closing of the switch,
wherein the master device is configured to generate a control signal on the switch control line to cause the switch to open or close to control coupling of the external resistor to the device identification port.

US Pat. No. 10,114,786

BACK CHANNEL SUPPORT FOR SYSTEMS WITH SPLIT LANE SWAP

CISCO TECHNOLOGY, INC., ...

1. A method comprising:configuring a back channel layer to form a back channel path to carry a first message to a first transmitter from a first receiver, wherein the first transmitter is in a first serializer/deserializer (SERDES) slice of a plurality of SERDES slices, and the first SERDES slice is assigned a lane ID;
receiving, at the first receiver, a second message, wherein the first receiver is in a second SERDES slice of a plurality of SERDES slices, and the back channel layer inserts a recipient ID in the second message to produce the first message; and
determining, with an interface layer, whether the recipient ID in the first message matches the lane ID, wherein the interface layer interfaces to the first SERDES slice.

US Pat. No. 10,114,785

STORAGE DEVICE AND SERVER DEVICE

Toshiba Memory Corporatio...

1. A storage device comprising:a memory configured to store data;
a control circuit configured to control writing of data to the memory and reading of data from the memory;
an interface circuit that includes a first terminal, a second terminal, and a third terminal, and configured to be connected to a first device or a second device,
the first terminal having a first status when the storage device and the first device are connected, and a second status when the storage device and the second device are connected,
the second terminal having a third status where a first power of first voltage is supplied from the first device to the storage device when the first terminal is in the first status, and a fourth status where a control signal is input from the second device to the storage device when the first terminal is in the second status,
the third terminal being a terminal through which a second power of second voltage is supplied to the storage device; and
a switch control circuit configured to control switching a connection status and a disconnection status based on statuses of the first terminal and the second terminal, the connection status representing that the third terminal and the control circuit are electrically connected, the disconnection status representing that the third terminal and the control circuit are electrically disconnected.

US Pat. No. 10,114,784

STATISTICAL POWER HANDLING IN A SCALABLE STORAGE SYSTEM

Liqid Inc., Broomfield, ...

1. A data storage assembly, comprising:a plurality of storage drives each comprising a drive Peripheral Component Interconnect Express (PCIe) interface and solid state storage media, with each of the plurality of storage drives configured to store and retrieve data responsive to storage operations received over the associated drive PCIe interface;
a PCIe switch circuit coupled to the drive PCIe interfaces of the plurality of storage drives and configured to receive the storage operations issued by one or more host systems over a shared PCIe interface and transfer the storage operations for delivery to the plurality of storage drives over selected ones of the drive PCIe interfaces;
a control processor configured to monitor activity levels of the drive PCIe interfaces of the plurality of storage drives;
the control processor configured to alter properties of the drive PCIe interfaces for one or more of the plurality of storage drives based at least on the activity levels of the one or more of the plurality of storage drives, wherein the properties of the drive PCIe interfaces comprise one or more of a quantity of active PCIe lanes and a PCIe throughput; and
the control processor configured to instruct power control circuitry to provide power to the one or more of the plurality of storage drives based at least on activity levels of the drive PCIe interfaces of the one or more of the plurality of storage drives being above a threshold activity level, and instruct the power control circuitry to remove the power from the one or more of the plurality of storage drives based at least on the activity levels indicating the one or more of the plurality of storage drives are dormant.
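
A short Python sketch of the activity-threshold power policy described above; the activity metric and threshold value are assumptions.

    ACTIVITY_THRESHOLD = 0.05   # e.g. assumed fraction of PCIe lane bandwidth in use

    def plan_power(activity_by_drive, threshold=ACTIVITY_THRESHOLD):
        """Return {drive: 'on' | 'off'} from monitored per-drive PCIe interface activity levels."""
        return {
            drive: "on" if activity >= threshold else "off"   # dormant drives are powered down
            for drive, activity in activity_by_drive.items()
        }

    print(plan_power({"nvme0": 0.40, "nvme1": 0.00, "nvme2": 0.07}))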

US Pat. No. 10,114,783

CONFIGURABLE INPUT/OUTPUT UNIT

CAE Inc., St-Laurent, QC...

1. A configurable input/output unit comprising:a plurality of configurable inputs and outputs comprising:
several outputs capable of sending a broadcast message to a configuration device and several inputs capable of receiving a broadcast response message from the configuration device;
a predefined output for sending the broadcast message to the configuration device; and
a predefined input for receiving the broadcast response message from the configuration device,
wherein the broadcast message comprises a configuration request and the broadcast response message comprises configuration data for configuring the plurality of configurable inputs and outputs;
wherein the broadcast message allows a determination by the configuration device that the configuration device is an intended recipient of the configuration request; and
wherein the configuration request comprises an identifier, the identifier comprising an identification of the predefined input of the configurable input/output unit for allowing transmission of the broadcast response message by the configuration device to the predefined input of the configurable input/output unit.

US Pat. No. 10,114,782

USB TYPE C DUAL-ROLE-PORT UNATTACHED DUTY CYCLE RANDOMIZATION

NXP B.V., Eindhoven (NL)...

1. A universal serial bus (USB) circuit, comprising:a USB interface configured to transmit and receive power and data;
a random number generator circuit configured to generate a random number; and
a controller configured to receive the random number and to select a dual role port (DRP) duty cycle and to select a DRP duration based upon the random number, wherein the DRP duty cycle and DRP duration are used when connecting the USB interface of a first USB type-C DRP device to a USB interface of a second USB type-C DRP device.
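
A tiny Python sketch of randomizing the unattached DRP toggle; the candidate duty cycles and the period range are illustrative values, not taken from the USB Type-C specification.

    import random

    def pick_drp_timing(rng=random):
        duty_cycle = rng.choice([0.30, 0.40, 0.50, 0.60, 0.70])   # fraction of time presenting as source
        period_ms  = rng.uniform(50, 100)                          # DRP toggle period ("duration")
        return duty_cycle, period_ms

    dc, period = pick_drp_timing(random.Random(7))
    print(f"advertise as source for {dc * period:.1f} ms of every {period:.1f} ms")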

US Pat. No. 10,114,781

CONTACT CORROSION MITIGATION

Apple Inc., Cupertino, C...

1. A method of operation of a dedicated downward-facing port for a USB Type-C™ interface, the method comprising:applying a first voltage to a first end of a first resistor, where a second end of the first resistor is connected to a CC1 contact of the dedicated downward-facing port and applying the first voltage to a first end of a second resistor, where a second end of the second resistor is connected to a CC2 contact of the dedicated downward-facing port for a first duration; then
providing an open circuit at the CC1 contact of the dedicated downward-facing port and an open circuit at the CC2 contact of the dedicated downward-facing port for a second duration;
detecting a connection to the dedicated downward-facing port;
detecting a disconnection from the dedicated downward-facing port; then
waiting a third duration; then
applying the first voltage to the first end of the first resistor and applying the first voltage to the first end of the second resistor.

US Pat. No. 10,114,780

INFORMATION PROCESSING APPARATUS THAT PERMITS USE OF A USB DEVICE BY AN APPLICATION BEING DISPLAYED, METHOD OF CONTROLLING THE SAME AND NON-TRANSITORY COMPUTER READABLE MEDIUM

Canon Kabushiki Kaisha, ...

1. An information processing apparatus in which a plurality of applications operate, the apparatus comprising:an identification unit configured to identify, from among the plurality of applications, an application that is being displayed in a display unit; and
a control unit configured to perform control so that while one application among the plurality of applications is occupying a USB device USB-connected to the information processing apparatus, another application cannot use the USB device;
wherein the control unit performs control so as to forcibly close usage of the USB device by an application for which usage of the USB device is permitted, and permit usage of the USB device by the application identified as being displayed in the display unit.

US Pat. No. 10,114,779

ISOLATING A REDIRECTED USB DEVICE TO A SET OF APPLICATIONS

Dell Products L.P., Roun...

1. A method, implemented by a virtual bus driver, for isolating a redirected USB device to a set of applications, the method comprising:receiving, at a virtual bus driver executing on a server with which a client terminal has established a remote session, a first USB request block (URB) that is associated with a first IO request packet (IRP) and directed to a USB device that is redirected from the client terminal to the server;
evaluating the first IRP to determine an application that originated the first IRP;
determining whether the application that originated the first IRP is allowed to access the redirected USB device;
upon determining that the application that originated the first IRP is not allowed to access the redirected USB device, preventing the first URB and the associated first IRP from being routed over the remote session to the redirected USB device;
receiving, at the virtual bus driver, a second URB that is associated with a second IRP and directed to the redirected USB device;
evaluating the second IRP to determine an application that originated the second IRP;
determining whether the application that originated the second IRP is allowed to access the redirected USB device; and
upon determining that the application that originated the second IRP is allowed to access the redirected USB device, allowing the second URB and the associated second IRP to be routed over the remote session to the redirected USB device.
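
A Python sketch of the per-application gate above; how an IRP is traced back to its originating application is assumed away here (a field on a toy request object stands in for it).

    ALLOWED_APPS = {"scanner_service", "badge_reader_ui"}   # assumed set of permitted applications

    def route_urb(urb):
        """Forward the URB over the remote session only if its originating app is allowed."""
        app = urb["originating_app"]            # stand-in for walking the IRP back to a process
        if app not in ALLOWED_APPS:
            return ("blocked", app)
        return ("routed", app)

    print(route_urb({"urb_id": 1, "originating_app": "random_game"}))
    print(route_urb({"urb_id": 2, "originating_app": "scanner_service"}))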

US Pat. No. 10,114,778

MULTI-PROTOCOL IO INFRASTRUCTURE FOR A FLEXIBLE STORAGE PLATFORM

Samsung Electronics Co., ...

1. A storage system, comprising:a storage motherboard, comprising:
a first plurality of storage interface connectors;
a cable connector; and
a storage adapter circuit having a host side interface connected to the cable connector and a first plurality of storage side interfaces, each connected to a respective storage interface connector of the storage interface connectors, the storage adapter circuit comprising:
a first protocol translator, configured to translate communications from a host interface protocol to a first storage interface protocol;
a second protocol translator, configured to translate communications from a host interface protocol to a second storage interface protocol;
a first consolidation device configured to connect the first protocol translator to a plurality of storage devices configured to use the first storage interface protocol;
a second consolidation device configured to connect the second protocol translator to a plurality of storage devices configured to use the second storage interface protocol; and
a storage adapter circuit controller configured to:
detect a protocol of a mass storage device connected to a connector of the first plurality of storage interface connectors,
connect the first protocol translator to the host side interface, and connect the first consolidation device between the first protocol translator and the first plurality of storage side interfaces, when the detected protocol is the first protocol, and
connect the second protocol translator to the host side interface, and connect the second consolidation device between the second protocol translator and the first plurality of storage side interfaces, when the detected protocol is the second protocol.

US Pat. No. 10,114,777

I/O SYNCHRONIZATION FOR HIGH INTEGRITY MULTICORE PROCESSING

Rockwell Collins, Inc., ...

1. A system for input/output (I/O) synchronization in a high-integrity multicore processing environment (MCPE), comprising:at least one logical processing unit (LPU) comprising a plurality of homogeneous processing cores, each homogeneous processing core associated with at least one of a guest operating system (GOS) and a user application configured to execute on the homogeneous processing core, each GOS configured to concurrently:
forward at least one input dataset to the at least one user application;
receive at least one output dataset from the at least one user application; and
load the received output dataset into an output I/O synchronization (IOS) channel;
at least one I/O synchronization engine (IOSE) connected to the at least one LPU and comprising the at least one output IOS channel and at least one input IOS channel, the IOSE configured to:
execute at least one verification of the at least one loaded output dataset;
return at least one successful verification by selecting a final output dataset from the at least one loaded output dataset;
route the final output dataset to at least one of an external processing core and a network connected to the MCPE;
receive the at least one input dataset from the at least one of the external processing core and the network;
generate at least one synchronous input by atomically replicating the at least one input dataset into the at least one input IOS channel; and
concurrently transfer the at least one synchronous input to the at least one GOS via the at least one input IOS channel; and
at least one hypervisor coupled to the at least one LPU and configured to synchronize the receipt of the at least one output dataset by the at least one GOS.

US Pat. No. 10,114,776

SYSTEM ARBITER WITH PROGRAMMABLE PRIORITY LEVELS

MICROCHIP TECHNOLOGY INCO...

1. An embedded controller comprising:a system bus;
a central processing unit (“CPU”) communicatively coupled to the system bus, the central processing unit comprising a CPU priority register, and the central processing unit operable to access the system bus according to a plurality of operating modes;
a plurality of arbiter clients communicatively coupled to the system bus, each of the plurality of arbiter clients comprising a programmable priority register; and
a programmable system arbiter for granting access to the system bus among the plurality of arbiter clients and the central processing unit, the programmable system arbiter communicatively coupled to the plurality of arbiter clients and the system bus and the central processing unit, wherein:
the programmable system arbiter comprises one or more interrupt priority registers, each of the one or more interrupt priority registers associated with an interrupt type; and
the programmable system arbiter is operable to arbitrate access to the system bus among the plurality of arbiter clients and the CPU based at least on an analysis of a programmed priority order, the programmed priority order comprising a priority order for each of the plurality of arbiter clients, each of the plurality of operating modes, and each of the one or more interrupt types.

US Pat. No. 10,114,775

STACKED SEMICONDUCTOR DEVICE ASSEMBLY IN COMPUTER SYSTEM

RAMBUS INC., Sunnyvale, ...

1. A stacked semiconductor device assembly, comprising:first and second integrated circuit (IC) devices,
wherein the first IC device includes:
a first master interface;
a first channel master circuit coupled to the first master interface and configured to receive read/write data using the first master interface;
a first slave interface;
a first channel slave circuit coupled to the first slave interface and configured to receive read/write data using the first slave interface;
a first memory core coupled to the first channel slave circuit via a first core interface; and
a first modal pad;
wherein the second IC device includes:
a second master interface;
a second channel master circuit coupled to the second master interface and configured to receive read/write data using the second master interface;
a second slave interface;
a second channel slave circuit coupled to the second slave interface and configured to receive read/write data using the second slave interface;
a second memory core coupled to the second channel slave circuit via a second core interface; and
a second modal pad; and
wherein the first and second IC devices are configured such that in response to at least a modal selection signal received at one of the modal pads of the first and second IC devices, one of the first and second IC devices is configured to receive read/write data using its respective channel master circuit, and the other of the first and second IC devices is configured to receive read/write data using its respective channel slave circuit.

US Pat. No. 10,114,773

TECHNIQUES FOR HANDLING INTERRUPTS IN A PROCESSING UNIT USING VIRTUAL PROCESSOR THREAD GROUPS AND SOFTWARE STACK LEVELS

International Business Ma...

1. A method of handling interrupts in a data processing system, the method comprising:receiving, at an interrupt presentation controller (IPC), an event notification message (ENM), wherein the ENM specifies a level, an event target number, and a number of bits to ignore;
determining, by the IPC, a group of virtual processor threads that may be potentially interrupted based on the event target number, the number of bits to ignore, and a process identifier (ID) when the level specified in the ENM corresponds to a user level, wherein the event target number identifies a specific virtual processor thread and the number of bits to ignore identifies the number of lower-order bits to ignore with respect to the specific virtual processor thread when determining the group of virtual processor threads that may be potentially interrupted; and
selecting a single virtual processor thread from the group of virtual processor threads to service the interrupt.
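
A minimal sketch of how the claimed "number of bits to ignore" could widen a single target thread into a group of candidate virtual processor threads; treating the field as a low-order bit mask is an assumption drawn from the claim text.

def candidate_threads(event_target_number: int, bits_to_ignore: int) -> list[int]:
    # Ignoring the low-order bits maps the specific thread onto a group of
    # 2**bits_to_ignore virtual processor threads sharing the same high-order bits.
    base = event_target_number & ~((1 << bits_to_ignore) - 1)
    return list(range(base, base + (1 << bits_to_ignore)))

# Example: target thread 11 (0b1011) with 2 low-order bits ignored yields the
# candidates [8, 9, 10, 11]; one of these is then selected to service the interrupt.
print(candidate_threads(0b1011, 2))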

US Pat. No. 10,114,772

ADDRESS LAYOUT OVER PHYSICAL MEMORY

International Business Ma...

1. A method for translating, within a main memory of a computer system, a physical address of a memory line to a storage location of the memory line, the main memory including a plurality of memory devices, each memory device of the plurality memory devices having a respective memory capacity, each of the respective memory capacities including at least one contiguous memory portion of a uniform size, the memory line being stored in one of the at least one contiguous memory portions, the method comprising:calculating, with a first index calculation unit, for the physical address, a first row index that identifies, within a first data table structure having a set of consecutive rows, wherein each row of the set of consecutive rows is configured to uniquely identify one of the at least one contiguous memory portions, a row of the first data table structure that identifies a memory portion, of the at least one contiguous memory portions, that includes the storage location of the memory line.

US Pat. No. 10,114,771

INTERCONNECTION OF PERIPHERAL DEVICES ON DIFFERENT ELECTRONIC DEVICES

Open Invention Network LL...

1. A method, comprising:creating a generic virtual device object via a processor of an electronic device, the generic virtual device object representing an image of a peripheral device attached to the electronic device and comprising properties of the peripheral device;
installing the generic virtual device object on a remote electronic device using existent setup information of the electronic device;
receiving data at the electronic device from a remote peripheral device attached to the remote electronic device;
generating a setup file via the electronic device responsive to determining a device class of the remote peripheral device attached to the remote electronic device is a same one as a device class of the peripheral device attached to the electronic device;
installing a remote virtual device object at the remote peripheral device via the setup file; and
emulating the remote peripheral device from the electronic device via an emulation driver loaded by the remote virtual device object;
wherein the remote virtual device object is created for the emulation driver.

US Pat. No. 10,114,770

HOMOGENOUS DEVICE ACCESS METHOD WHICH REMOVES PHYSICAL DEVICE DRIVERS IN A COMPUTER OPERATING SYSTEM

Universiti Teknologi Mala...

1. A uniform and homogenous access method of devices, comprising:providing a first device that comprises an operating system (OS) kernel and a logical device driver,
providing a second device that comprises a physical device driver and physical device driver codes that can be executed by the physical device driver, wherein the second device is an input/output (I/O) device,
receiving, by a control port of the second device, a configuration control sent by the first device, and
receiving, by a data port of the second device, a synchronous burst transfer of data sent by the first device, wherein a local addressing scheme and bytecodes over the control port and the data port are within a homogenous system,
thereby enabling the first device to access and control the second device, wherein device-specific codes of the second device are not stored in the first device.

US Pat. No. 10,114,768

ENHANCE MEMORY ACCESS PERMISSION BASED ON PER-PAGE CURRENT PRIVILEGE LEVEL

Intel Corporation, Santa...

1. A processing system comprising:a processing core; and
a memory management unit, communicatively coupled to the processing core, comprising:
a storage device to store a page table entry (PTE) comprising:
a mapping from a virtual memory page referenced by an application running on the processing core to an identifier of a memory frame of a memory,
a first plurality of access permission flags associated with accessing the memory frame under a first privilege mode, and
a second plurality of access permission flags associated with accessing the memory under a second privilege mode,
wherein the memory management unit is to allow:
accessing the memory frame under the first privilege mode based on the first plurality of access permission flags when a privilege configuration flag is set to enabled;
accessing the memory frame under the second privilege mode based on the second plurality of access permission flags when the privilege configuration flag is enabled; and
accessing the memory frame under the first privilege mode or the second privilege mode based on commonly shared access permission flags from the first plurality of access permission flags and the second plurality of access permission flags when the privilege configuration flag is disabled.

US Pat. No. 10,114,767

VIRTUALIZING PHYSICAL MEMORY IN A VIRTUAL MACHINE SYSTEM USING A HIERARCHY OF EXTENDED PAGE TABLES TO TRANSLATE GUEST-PHYSICAL ADDRESSES TO HOST-PHYSICAL ADDRESSES

Intel Corporation, Santa...

1. A processor, comprising:memory management hardware to translate a guest-virtual address, the memory management hardware including:
extended page table (EPT) access logic to access an EPT hierarchy while guest software operates on the processor, the EPT hierarchy including
a first table input, the first table input coupled downstream from a first node, the first node to receive a first segment of a guest-physical address;
a first table output, the first table output to provide a base host-physical address for a second table;
a second table input, the second table input coupled downstream from a second node, the second node to receive a second segment of the guest-physical address, the second segment contiguous with and of lower order than the first segment;
a second table output, the second table output to provide a base host-physical address for a third table;
a third table input, the third table input coupled downstream from a third node, the third node to receive a third segment of the guest-physical address, the third segment contiguous with and of lower order than the second segment;
a third table output, the third table output to provide a base host-physical address for a physical page;
a fourth node to present an input for the physical page, the fourth node to present the input for the physical page coupled downstream from a fifth node, the fifth node to receive a fourth segment of the guest-physical address, the fourth segment contiguous with and of lower order than the third segment; and
a register to store a base host-physical address of a highest-level table of the EPT hierarchy;
wherein the EPT hierarchy is to translate each of a plurality of guest-physical addresses to one of a plurality of host-physical addresses, each of the plurality of guest-physical addresses to be formed by the memory management hardware based on one of a plurality of portions of the guest-virtual address, wherein for each translation the processor is to validate access permission according to controls in EPT tables, and wherein the register is to be loaded from a field in a virtual machine control structure in connection with a virtual machine entry.

US Pat. No. 10,114,766

MULTI-LEVEL INDEPENDENT SECURITY ARCHITECTURE

Secturion Systems, Inc., ...

1. A system, comprising:a plurality of data input ports, each port corresponding to one of a plurality of different levels of security classification;
a plurality of computing devices coupled to receive incoming data from the plurality of input ports, wherein the incoming data includes a first data packet having a first classification level, the first data packet comprises a tag that identifies one of the levels of security classification, and wherein each computing device is configured to perform, by at least one processor, security processing for at least one of the different levels of security classification;
wherein a first computing device of the plurality of computing devices is further configured to:
encrypt, using a first set of keys, the first data packet for sending to a data storage;
read the first data packet from the data storage;
after reading the first data packet from the data storage, detect that the first data packet is stored at the first classification level;
generate, based on detecting that the first data packet is stored at the first classification level, a key address to select a second set of keys; and
decrypt the first data packet using the second set of keys;
a multiplexer configured to route, based on the tag, the first data packet from one of the data input ports to the first computing device; and
a key manager configured to select the first set of keys from a plurality of key sets stored in at least one memory, each of the key sets corresponding to one of the different levels of security classification.

US Pat. No. 10,114,765

AUTOMATIC RECOVERY OF APPLICATION CACHE WARMTH

Microsoft Technology Lice...

1. A computer system, comprising:one or more processors; and
one or more hardware storage devices having stored thereon computer-executable instructions that are executable by the one or more processors to cause the computer system to perform at least the following:
during operation of an application in a first state,
identify one or more cache portions in an application cache associated with the application;
capture a cache portion identifier associated with each of the one or more cache portions, each cache portion identifier comprising less than an entirety of the associated cache portion; and
store the one or more cache portions in a first data store that is external to the application cache and each captured cache portion identifier in a second data store that is separate from the first external store;
after storing the one or more cache portions in the first data store, detect a first change in state of the application to a second state;
after detecting the change in state of the application to the second state, detect a second change in state of the application back to the first state; and
based at least on detecting the second change in state of the application back to the first state,
retrieve the one or more cache portions from the first data store using one or more of the captured cache portion identifiers stored in the second data store; and
store each retrieved one or more cache portions in the application cache, such that the application cache is warmed for the first state.
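
A minimal sketch of the capture-and-rewarm flow the claim recites, with both external data stores modeled as in-memory containers; the class and method names are illustrative assumptions.

class CacheWarmthRecovery:
    def __init__(self):
        self.app_cache = {}       # live application cache: identifier -> cache portion
        self.portion_store = {}   # first (external) data store: identifier -> cache portion
        self.id_store = set()     # second data store: captured portion identifiers only

    def snapshot(self):
        # During operation in the first state: capture identifiers and copy portions out.
        for portion_id, portion in self.app_cache.items():
            self.id_store.add(portion_id)             # identifier, smaller than the portion
            self.portion_store[portion_id] = portion  # full portion, kept externally

    def on_return_to_first_state(self):
        # Rewarm the application cache from the external store using the saved identifiers.
        for portion_id in self.id_store:
            self.app_cache[portion_id] = self.portion_store[portion_id]

recovery = CacheWarmthRecovery()
recovery.app_cache = {"idx:42": b"hot index page"}
recovery.snapshot()
recovery.app_cache.clear()            # e.g. the application left the first state
recovery.on_return_to_first_state()
print(recovery.app_cache)             # cache warmed again for the first state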

US Pat. No. 10,114,764

MULTI-LEVEL PAGING AND ADDRESS TRANSLATION IN A NETWORK ENVIRONMENT

CISCO TECHNOLOGY, INC, S...

1. A method executed at a network element having a processor comprising:receiving a request from a device for space in a physical memory of the network element;
determining if any allocated remap windows in a remap address space can be reused to satisfy the request;
allocating a remap window in the remap address space to the device if none of the allocated remap windows satisfy the request; and
returning a remapped address of a physical offset in an allocated remap window if one of the allocated remap windows satisfies the request.

US Pat. No. 10,114,763

FORK-SAFE MEMORY ALLOCATION FROM MEMORY-MAPPED FILES WITH ANONYMOUS MEMORY BEHAVIOR

KOVE IP, LLC, Chicago, I...

1. A system comprising:a memory comprising a mapping of a first portion of a memory-mapped file to a virtual address for a first process, wherein the memory-mapped file comprises virtual memory backed by a file; and
a processor configured to:
map a second portion of the memory-mapped file to the virtual address for a second process in response to a forking of the second process from the first process, wherein the first and second portions of the memory-mapped file are backed by the file; and
write data from the first and second portions of the memory-mapped file to corresponding first and second portions of the file that backs the memory-mapped file.

US Pat. No. 10,114,762

METHOD AND APPARATUS FOR QUERYING PHYSICAL MEMORY ADDRESS

Huawei Technologies Co., ...

1. A method for querying a physical address to access a memory, the method comprising:deleting data stored in a prefetch buffer;
storing, after deleting the data, page table entries of a second thread into the prefetch buffer when determining that a memory addressing operation will be switched from a first thread to the second thread within a future set time, wherein the page table entries of the second thread were previously stored in a standby buffer corresponding to only the second thread, and wherein the standby buffer stores page table entries of the second thread that are not queried within a set time in a translation lookaside buffer (TLB);
receiving a memory addressing request message corresponding to the second thread, wherein the memory addressing request message comprises a virtual address; and
querying, in parallel, page table entries stored in the TLB and the page table entries stored in the prefetch buffer for the virtual address.
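
A minimal sketch modeling the TLB and the prefetch buffer as page-number maps; the page size is an assumption, and the parallel query is approximated here by checking both structures for the same virtual page number.

PAGE_SIZE = 4096   # assumed page size

def query_physical_address(virtual_address: int, tlb: dict, prefetch_buffer: dict) -> int:
    vpn = virtual_address // PAGE_SIZE
    # The claim queries both structures in parallel; checking both sequentially
    # gives the same result in this sketch.
    frame = tlb.get(vpn, prefetch_buffer.get(vpn))
    if frame is None:
        raise LookupError("miss in both TLB and prefetch buffer: page walk needed")
    return frame * PAGE_SIZE + virtual_address % PAGE_SIZE

tlb = {0x10: 0x200}
prefetch_buffer = {0x11: 0x300}   # second thread's entries moved in before the switch
print(hex(query_physical_address(0x11004, tlb, prefetch_buffer)))   # 0x300004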

US Pat. No. 10,114,760

METHOD AND SYSTEM FOR IMPLEMENTING MULTI-STAGE TRANSLATION OF VIRTUAL ADDRESSES

NVIDIA CORPORATION, Sant...

1. A method comprising:receiving, at a first memory management unit, a memory request including a virtual address in a first address space;
translating the virtual address to generate a second virtual address in a second address space using a first page table including entries configured for a first page size, wherein the first page table entries include a first attribute that is consistent for pages of the first page size including a first physical page in a memory; and
transmitting a modified memory request including the second virtual address to a second memory management unit,
wherein the second memory management unit is configured to translate the second virtual address to generate a physical address in a third address space using a second page table including entries configured for a second page size that is different compared with the first page size, wherein the second page table entries include a second attribute that is consistent for pages of the second page size and varies for pages of the first page size including the first physical page,
wherein the physical address is associated with a location in the first physical page, and
wherein the first memory management unit and the second memory management unit are implemented in hardware.
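
A minimal sketch of the two-stage translation the claim describes, assuming 64 KiB first-stage pages and 4 KiB second-stage pages; the toy page-table entries are illustrative only.

PAGE1 = 64 * 1024    # first page size (assumption for illustration)
PAGE2 = 4 * 1024     # second, smaller page size (assumption for illustration)

stage1 = {0: 0x10000}     # first MMU: first-address-space page number -> second-space base
stage2 = {0x10: 0x7F000}  # second MMU: second-address-space page number -> physical base

def translate(va1: int) -> int:
    va2 = stage1[va1 // PAGE1] + (va1 % PAGE1)   # first memory management unit
    return stage2[va2 // PAGE2] + (va2 % PAGE2)  # second memory management unit

print(hex(translate(0x0123)))   # 0x7f123 with the toy tables above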

US Pat. No. 10,114,759

TRAPLESS SHADOW PAGE TABLES

VMWARE, INC., Palo Alto,...

1. A method for implementing trapless shadow page tables, the method comprising:intercepting, by a shadow page tables (SPT) accelerator device of a host system, a memory write operation originating from a virtual machine (VM) and directed to a guest OS page table of the VM, the guest OS page table being stored in a device memory of the SPT accelerator device, the device memory being physically separate from a system memory of the host system;
extracting, by the SPT accelerator device, a guest virtual address (GVA)-to-guest physical address (GPA) mapping in the memory write instruction;
translating, by the SPT accelerator device, the GVA-to-GPA mapping into a GVA-to-host physical address (HPA) mapping; and
writing, by the SPT accelerator device, the GVA-to-HPA mapping to a shadow page table of the host system.

US Pat. No. 10,114,758

TECHNIQUES FOR SUPPORTING DEMAND PAGING

NVIDIA CORPORATION, Sant...

1. A computer-implemented method for supporting demand paging, the method comprising:receiving one or more requests for a copy engine within a processing subsystem to perform one or more operations, wherein the one or more requests comprise one or more virtual memory addresses that correspond to the one or more operations;
prior to transmitting any request included in the one or more requests to the copy engine, ensuring that the processing subsystem includes a memory mapping for each of the one or more virtual memory addresses; and
transmitting the one or more requests to the copy engine for processing.

US Pat. No. 10,114,756

EXTERNALLY PROGRAMMABLE MEMORY MANAGEMENT UNIT

QUALCOMM Incorporated, S...

1. An apparatus comprising:a first processor configured to store, at a first memory that is external to and accessible to a second processor, addresses of address translation tables, the addresses stored in configuration blocks of the first memory; and
the second processor configured to:
store, at a memory of the second processor, a table of pointers to the configuration blocks of the first memory;
identify, in the table of pointers, a first pointer to a first configuration block based on an index operand of an instruction;
read, from the first configuration block, an address of a first address translation table based on the first pointer; and
load the address, from the first configuration block of the first memory, into a register of the second processor, wherein the register of the second processor is configured to be exclusively writeable responsive to execution of the instruction based on a value of a write enable signal received from the first processor, wherein the instruction includes an override operand to indicate whether to override contents of the register of the second processor.

US Pat. No. 10,114,755

SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR WARMING A CACHE FOR A TASK LAUNCH

NVIDIA CORPORATION, Sant...

1. A method comprising:receiving, by a task management unit within a parallel processor, a task data structure that defines a processing task;
extracting information stored in a cache warming field of the task data structure; and
generating, by the task management unit prior to execution of the processing task by a processing core within the parallel processor, a cache warming instruction that is configured to load one or more entries of a cache storage with data fetched from a memory when executed by the processing core.

US Pat. No. 10,114,754

TECHNIQUES FOR SPACE RESERVATION IN A STORAGE ENVIRONMENT

Veritas Technologies LLC,...

1. A method comprising:caching allocating writes on a cache storage, wherein the cached allocating writes are writes not having a previously allocated physical storage space that are later to be stored on a physical storage separate from the cache storage that stores the cached allocating writes;
in response to caching allocating writes, reserving a block of the physical storage separate from the cache storage that stores the cached allocating writes;
comparing a total cumulative size of the cached allocating writes to an upper threshold; and
when the total cumulative size of the cached allocating writes exceeds the upper threshold, taking a cache occupancy reduction action;
wherein the reservation of the block of the physical storage, the caching allocating writes, and the cache reduction action are managed to avoid failures to commit to write to the physical storage.
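
A minimal sketch of the threshold check the claim recites: cached allocating writes accumulate, backing blocks are reserved as they are cached, and exceeding the upper threshold triggers an occupancy-reduction action. The one-block-per-write reservation and the flush action are assumptions.

class AllocatingWriteCache:
    def __init__(self, upper_threshold: int):
        self.upper_threshold = upper_threshold
        self.cached_writes = []     # allocating writes held on the cache storage
        self.reserved_blocks = 0    # blocks reserved on the separate physical storage

    def cache_write(self, data: bytes):
        self.cached_writes.append(data)
        self.reserved_blocks += 1   # reserve backing space as each write is cached
        if sum(len(w) for w in self.cached_writes) > self.upper_threshold:
            self.reduce_occupancy()

    def reduce_occupancy(self):
        # One possible occupancy-reduction action: commit everything to the reserved blocks.
        print(f"flushing {len(self.cached_writes)} writes to reserved physical blocks")
        self.cached_writes.clear()

cache = AllocatingWriteCache(upper_threshold=8)
cache.cache_write(b"abcd")
cache.cache_write(b"efghij")   # cumulative size 10 > 8 triggers the reduction action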

US Pat. No. 10,114,753

USING CACHE LISTS FOR MULTIPLE PROCESSORS TO CACHE AND DEMOTE TRACKS IN A STORAGE SYSTEM

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing tracks in a storage in a cache accessed by a plurality of processors, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:indicating tracks in the storage stored in the cache in lists, wherein there is one list for each of the plurality of processors, wherein each of the processors is dedicated to processing tracks indicated in one of the lists, wherein each of the processors processes the list for which that processor is dedicated to process the tracks in the cache indicated on the list, wherein each of the lists includes track identifiers of tracks in the cache, wherein a track identifier comprises at least one of a track address or a cache control block index for the track;
determining one of the lists from which to select a track of the tracks in the cache indicated in the determined list to demote; and
demoting a track identified by a track identifier of the selected track from the cache.
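
A minimal sketch of one cache list per processor with LRU demotion; choosing the longest list as the one to demote from is an assumption, since the claim only requires determining one of the lists.

from collections import OrderedDict

class PerProcessorCacheLists:
    def __init__(self, num_processors: int):
        # One ordered list per processor; keys are track identifiers, LRU first.
        self.lists = [OrderedDict() for _ in range(num_processors)]

    def cache_track(self, processor: int, track_id: str, control_block_index: int):
        self.lists[processor][track_id] = control_block_index
        self.lists[processor].move_to_end(track_id)   # most recently used at the end

    def demote_one(self) -> str:
        # Determine a list to demote from (here: the longest one, an assumption)
        # and demote its least recently used track from the cache.
        target = max(self.lists, key=len)
        track_id, _ = target.popitem(last=False)
        return track_id

cache = PerProcessorCacheLists(num_processors=2)
cache.cache_track(0, "track-A", 10)
cache.cache_track(0, "track-B", 11)
cache.cache_track(1, "track-C", 12)
print(cache.demote_one())   # "track-A": LRU track of the longest list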

US Pat. No. 10,114,752

DETECTING CACHE CONFLICTS BY UTILIZING LOGICAL ADDRESS COMPARISONS IN A TRANSACTIONAL MEMORY

International Business Ma...

1. A computer system for determining whether to abort or continue a transaction based on logical addresses that map to a same real memory addresses, comprising:a memory; and
a first processor in communication with the memory, the first processor having a dynamic address translation mechanism for translating logical addresses into real memory addresses, wherein the computer system is configured to perform a method comprising:
assigning one or more transactions spawning from one or multiple programs executing on one or multiple processors to a shared transactional space, wherein the shared transactional space comprises a plurality of real addresses mapped to a plurality of logical addresses from the one or more transactions;
in response to receiving, by the first processor, a first memory access operation of a first transaction executing in a first logical address space, comparing a logical address of the first memory access operation to a logical address of a second memory access operation of a currently executing transaction of a second logical address space, wherein the logical address of the first memory access operation and the logical address of the second memory access operation map to a real address in the shared transactional space;
performing cross interrogate conflict detection for a logical address based on a translation table entry comprising:
referencing a real address corresponding to the logical address from the translation table entry based on the translation table entry indicating that a common logical to real translation exists for the shared transactional space;
transmitting the logical address and the corresponding real address of the XI conflict request to transaction tables of all threads of the first processor by an XI bus;
recognizing a conflict when the logical address of the XI request matches a logical address in a logical address space of an executing thread of the first processor; and
using the transmitted real address of the XI conflict request and cache coherency to determine the conflict if the logical address of the XI request is not within the range of the shared transactional space; and
based on the logical address of the first memory access operation matching the logical address of the second memory access operation of the currently executing transaction, aborting the first transaction and continuing the second memory access operation of the currently executing transaction.

US Pat. No. 10,114,751

METHOD AND SYSTEM FOR IMPLEMENTING CACHE SIZE ESTIMATIONS

Nutanix, Inc., San Jose,...

1. A method implemented with a processor for performing cache estimation, comprising:generating a list of cache sizes, the list of cache sizes corresponding to different sizes of caches, the caches comprising one or more storage components;
initializing a hyperloglog (HLL) for each cache size on the list of cache sizes, wherein a first HLL is initialized for a first cache having a first cache size and a second HLL is initialized for a second cache having a second cache size, wherein the first cache size is different than the second cache size;
performing cache estimation using the HLL by representing a change of state of the HLL as a cache miss and a non-change of state of the HLL as a cache hit;
computing, using the HLL, a miss rate curve (MRC) from a count of the cache miss and the cache hit; and
changing a size of a cache based at least in part on a MRC value determined from the MRC computed by the HLL.
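
A minimal sketch of the hit/miss accounting the claim describes: an access that changes a HyperLogLog register is counted as a miss, and an access that leaves the registers unchanged as a hit. The 64-register layout, the SHA-1 hash, and the omission of any cache-size-dependent aging are simplifications, not the patented method.

import hashlib

class HLLMissEstimator:
    """Toy HyperLogLog used here only for hit/miss accounting, not cardinality math."""
    def __init__(self, cache_size: int, num_registers: int = 64):
        self.cache_size = cache_size      # carried along; size-dependent behavior omitted
        self.m = num_registers
        self.registers = [0] * num_registers
        self.hits = 0
        self.misses = 0

    def access(self, key) -> None:
        h = int.from_bytes(hashlib.sha1(str(key).encode()).digest()[:8], "big")
        idx = h % self.m                   # low bits pick the register
        w = h >> 6                         # remaining bits determine the rank
        rank = 58 - w.bit_length() + 1     # leading zeros of the 58-bit remainder, plus 1
        if rank > self.registers[idx]:
            self.registers[idx] = rank     # state change -> counted as a cache miss
            self.misses += 1
        else:
            self.hits += 1                 # no state change -> counted as a cache hit

    def miss_rate(self) -> float:
        total = self.hits + self.misses
        return self.misses / total if total else 0.0

est = HLLMissEstimator(cache_size=1 << 20)
for key in ["a", "b", "a", "c", "a", "b"]:
    est.access(key)
print(round(est.miss_rate(), 2))   # repeated keys mostly register as hits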

US Pat. No. 10,114,750

PREVENTING THE DISPLACEMENT OF HIGH TEMPORAL LOCALITY OF REFERENCE DATA FILL BUFFERS

QUALCOMM Incorporated, S...

1. A method of accessing memory content with a high temporal locality of reference, comprising:storing the memory content in a data buffer of a processor, the data buffer coupled to a cache of the processor so that data may be written from the data buffer to the cache and from the cache to the data buffer;
initializing a counter associated with the data buffer;
incrementing the counter when the memory content in the data buffer is accessed within a predetermined number of clock cycles;
decrementing the counter when the memory content in the data buffer is not accessed within the predetermined number of clock cycles;
determining that the memory content of the data buffer has the high temporal locality of reference when the counter is above a threshold; and
accessing the data buffer, instead of the cache, for each operation targeting the memory content when the counter is above the threshold.
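
A minimal sketch of the counter the claim describes; the window length and threshold values are assumptions, and cycle counts are passed in explicitly rather than read from hardware.

class FillBufferLocalityTracker:
    def __init__(self, window_cycles: int = 16, threshold: int = 3):
        self.window_cycles = window_cycles   # the "predetermined number of clock cycles"
        self.threshold = threshold
        self.counter = 0                     # counter associated with the data buffer
        self.last_access_cycle = 0

    def on_access(self, current_cycle: int) -> None:
        if current_cycle - self.last_access_cycle <= self.window_cycles:
            self.counter += 1                           # accessed within the window
        else:
            self.counter = max(0, self.counter - 1)     # not accessed within the window
        self.last_access_cycle = current_cycle

    def use_buffer_instead_of_cache(self) -> bool:
        # High temporal locality: keep serving the memory content from the data buffer.
        return self.counter > self.threshold

tracker = FillBufferLocalityTracker()
for cycle in (4, 8, 12, 16, 20):
    tracker.on_access(cycle)
print(tracker.use_buffer_instead_of_cache())   # True: five closely spaced accesses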

US Pat. No. 10,114,749

CACHE MEMORY SYSTEM AND METHOD FOR ACCESSING CACHE LINE

HUAWEI TECHNOLOGIES CO., ...

1. A cache memory system, comprising:multiple upper level caches, wherein each upper level cache comprises multiple cache lines; and
a current level cache comprising an exclusive tag random access memory (Exclusive Tag RAM) and an inclusive tag random access memory (Inclusive Tag RAM), wherein the Exclusive Tag RAM is configured to preferentially store an index address of a first cache line that is in each upper level cache and whose status is unique dirty (UD), wherein the Inclusive Tag RAM is configured to store an index address of a second cache line that is in each upper level cache and whose status is unique clean (UC), shared clean (SC), or shared dirty (SD), and wherein data of the second cache line is backed up and stored in the current level cache.

US Pat. No. 10,114,748

DISTRIBUTED RESERVATION BASED COHERENCY PROTOCOL

NXP USA, Inc., Austin, T...

1. A method of operating a cache-coherent computing system comprising:storing first state information corresponding to a first reservation for a first exclusive access to a first memory address requested by a first thread executing on a first processor of a first plurality of processors, the first state information including a proxy monitor indicator and an exclusive-write-ready indicator;
maintaining a set state of the proxy monitor indicator and a reset state of the exclusive-write-ready indicator until receiving an atomic response transaction associated with a successful colliding access of the first memory address or until detecting selection for issuance of the first exclusive access; and
transmitting an output atomic response transaction indicating a status of the first reservation to a coherency interconnection in response to issuance of the first exclusive access to the coherency interconnection,
wherein the output atomic response transaction is based on the first state information.

US Pat. No. 10,114,747

SYSTEMS AND METHODS FOR PERFORMING OPERATIONS ON MEMORY OF A COMPUTING DEVICE

Lenovo Enterprise Solutio...

1. A method comprising:storing an update image data on a first memory of a computing device, wherein the update image data comprises data for updating a second memory on a firmware of the computing device;
triggering an update mode of a serial peripheral interface (SPI) memory of the firmware based on an input/output (I/O) write operation at the second memory;
trapping the I/O write at a first register of the computing device when the update mode is triggered;
in response to trapping the input/output write, stopping the I/O write, switching to a system management mode (SMM), and invoking a system management interrupt (SMI) handler to determine the update mode based on the I/O write data trapped in the first register of the computing device;
retrieving, via the SMI handler of the computing device, the update image data from the first memory;
determining, via the SMI handler, whether the update image data is valid; and
in response to determining that the update image data is valid, updating the second memory on the firmware of the computing device based on the retrieved update image data, and
wherein the update image data comprises operations data for performing one or more operations on the second memory on the firmware of the computing device, the second memory being read only memory (ROM) of a baseboard management controller (BMC).

US Pat. No. 10,114,746

NONVOLATILE STORAGE USING LOW LATENCY AND HIGH LATENCY MEMORY

Micron Technology, Inc., ...

1. An apparatus comprising:a first memory having a first read latency, wherein a first portion of a sequence of instructions and a size of the first portion are stored at the first memory; and
a second memory having a second read latency larger than the first read latency, wherein a second portion of the sequence of instructions is stored at the second memory and the size of the first portion and a size of the second portion are periodically adjusted by a memory controller based, at least in part, on the size of the first portion stored in the first memory, wherein the sequence of instructions follows a branch instruction stored at the second memory, wherein a request for each of the first portion of the sequence of instructions and the second portion of the sequence of instructions is sent without delay responsive to a branch of the branch instruction being taken, wherein the size of the first portion of the sequence of instructions is based, at least in part, on a ratio between latency in accessing the second memory and a latency in accessing the first memory, wherein the first and second portions of the sequence of instructions are written to the first and second memory, respectively, in an interleaving manner based on a write latency of the first memory.

US Pat. No. 10,114,745

ASSISTED GARBAGE COLLECTION IN A VIRTUAL MACHINE

Red Hat, Inc., Raleigh, ...

1. A method comprising:receiving, by a processing device of virtual machine, bytecode comprising a bytecode object and a garbage collection descriptor associated with the bytecode object, the garbage collection descriptor comprising an identifier of a garbage collection rule stored in a garbage collection rule database, wherein the garbage collection rule database stores a plurality of garbage collection rules specifying garbage collections to be performed on bytecode objects;
responsive to determining that the garbage collection descriptor indicates a first type of garbage collection in view of the garbage collection rule associated with the identifier in the garbage collection descriptor, storing the bytecode object in a first region of a memory associated with the processing device, wherein the first region is utilized for storing a first set of bytecode objects that have persisted for fewer than a determined number of rounds of garbage collection operations;
responsive to determining that the garbage collection descriptor indicates a second type of garbage collection in view of the garbage collection rule associated with the identifier in the garbage collection descriptor, storing the bytecode object directly in a second region of the memory associated with the processing device bypassing the first region, wherein the second region is utilized for storing a second set of bytecode objects that have persisted for at least the determined number of rounds of garbage collection operations in the first region;
responsive to determining that the garbage collection descriptor indicates a third type of garbage collection in view of the garbage collection rule associated with the identifier in the garbage collection descriptor, storing the bytecode object directly in a third region of the memory associated with the processing device bypassing the first region and second region, wherein the third region is to store a third set of bytecode objects until the third set of bytecode objects are destructed responsive to a class, associated with each of the third set of bytecode objects, being unloaded from the virtual machine; and
performing the garbage collection operation on the bytecode object in accordance with the garbage collection descriptor.
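
A minimal sketch of the descriptor-driven placement the claim recites: the descriptor identifier selects a rule, and the rule decides which memory region receives the bytecode object. The rule database contents and region names are illustrative assumptions.

# Hypothetical rule database: descriptor identifier -> type of garbage collection.
GC_RULES = {"short_lived": 1, "long_lived": 2, "class_scoped": 3}

REGIONS = {1: [], 2: [], 3: []}   # young, tenured, and class-lifetime regions (toy lists)

def store_bytecode_object(obj, descriptor_id: str) -> int:
    gc_type = GC_RULES[descriptor_id]
    REGIONS[gc_type].append(obj)   # types 2 and 3 bypass the earlier regions entirely
    return gc_type

store_bytecode_object("request_buffer", "short_lived")    # region 1
store_bytecode_object("connection_pool", "long_lived")    # region 2, bypasses region 1
store_bytecode_object("class_constants", "class_scoped")  # region 3, bypasses 1 and 2
print({k: len(v) for k, v in REGIONS.items()})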

US Pat. No. 10,114,744

MEMORY UNIT ASSIGNMENT AND SELECTION FOR INTERNAL MEMORY OPERATIONS IN DATA STORAGE SYSTEMS

Western Digital Technolog...

1. A data storage system comprising:a non-volatile solid-state memory array comprising a plurality of memory units; and
a controller configured to:
during a garbage collection operation, garbage collect the plurality of memory units, based at least on ages of respective data stored in the plurality of memory units; and
write, in connection with the garbage collection operation, the respective data of the plurality of memory units to one or more available memory units selected from at least one of multiple memory unit availability lists based on the ages of the respective data;
classify data of a first source memory unit designated to be garbage collected into one of a predetermined number of data classifications;
determine that a first destination memory unit storing data of a same data classification as the classified data has reached its capacity; and
suspend, based on determining that the first destination memory unit reached its capacity, the garbage collection operation and perform a wear-leveling operation.

US Pat. No. 10,114,743

MEMORY ERASE MANAGEMENT

SANDISK TECHNOLOGIES INC....

1. A data storage device with reduced latency to complete an erase command, comprising:a memory; and
a controller coupled to the memory, wherein the controller is configured to:
store an indicator of an active address translation table within the controller of the data storage device,
maintain a first address translation table associated with the memory and a second address translation table associated with the memory,
receive a command to erase the memory, and
switch the indicator from the first address translation table to the second address translation table in response to receiving the command.

US Pat. No. 10,114,742

EFFICIENT BUFFER ALLOCATION FOR NAND WRITE-PATH SYSTEM

SK Hynix Inc., Gyeonggi-...

1. A system, comprising:a non-volatile memory comprising a group of solid state storage cells;
a memory controller coupled with the non-volatile memory, wherein the memory controller is configured to:
receive a first write data destined for a first solid state storage channel and a second write data destined for a second solid state storage channel, wherein the first solid state storage channel is different than the second solid state storage channel;
chop the first write data using at least a chopping factor in order to obtain (1) a first piece of chopped write data destined for the first solid state storage channel and (2) a second piece of chopped write data destined for the first solid state storage channel, wherein the first piece of chopped write data is addressed prior to the second piece of chopped write data;
chop the second write data using at least the chopping factor in order to obtain (1) a third piece of chopped write data destined for the second solid state storage channel and (2) a fourth piece of chopped write data destined for the second solid state storage channel, wherein the third piece of chopped write data is addressed prior to the fourth piece of chopped write data;
transfer the first piece of chopped write data to a write-path system (“WRP”);
store, in a first channel buffer in the WRP, the first piece of chopped write data, wherein the first channel buffer is a same size as the first piece of chopped write data;
after transferring the first piece of chopped write data, transfer the third piece of chopped write data to the WRP;
store, in a second channel buffer in the WRP, the third piece of chopped write data, wherein the second channel buffer is a same size as the third piece of chopped write data;
after transferring the third piece of chopped write data, transfer the second piece of chopped write data to the WRP;
store, in the first channel buffer in the WRP, the second piece of chopped write data;
after transferring the second piece of chopped write data, transfer the fourth piece of chopped write data to the WRP; and
store, in the second channel buffer in the WRP, the fourth piece of chopped write data.
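
A minimal sketch of the chopping and interleaved transfer order the claim recites, with the per-channel buffers simplified to a dict; the chopping_factor value and the transfer log are illustrative.

def chop(data: bytes, chopping_factor: int) -> list[bytes]:
    # Split write data into equally sized pieces (the last piece may be shorter).
    return [data[i:i + chopping_factor] for i in range(0, len(data), chopping_factor)]

def interleave_to_wrp(first_write: bytes, second_write: bytes, chopping_factor: int):
    ch0 = chop(first_write, chopping_factor)    # pieces destined for channel 0
    ch1 = chop(second_write, chopping_factor)   # pieces destined for channel 1
    channel_buffers = {0: None, 1: None}        # one small buffer per channel in the WRP
    transfer_log = []
    for piece0, piece1 in zip(ch0, ch1):
        channel_buffers[0] = piece0             # first piece, then third piece, ...
        transfer_log.append((0, piece0))
        channel_buffers[1] = piece1             # alternating with channel 1 pieces
        transfer_log.append((1, piece1))
    return transfer_log

log = interleave_to_wrp(b"AAAABBBB", b"CCCCDDDD", chopping_factor=4)
print(log)   # [(0, b'AAAA'), (1, b'CCCC'), (0, b'BBBB'), (1, b'DDDD')]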

US Pat. No. 10,114,741

DATA TRAFFIC RESERVATION SYSTEMS AND METHODS

LEVYX, INC., Irvine, CA ...

1. A computer-implemented method for transmitting data using a shared buffer having a head memory address and a tail memory address, comprising:receiving a first request to transmit a first set of data before receiving a second request to transmit at least a second set of data;
allocating a first memory location in the shared buffer to the first set of data and a second memory location in the shared buffer to the second set of data, wherein the first memory location is closer to the head memory address than the second memory location;
saving the first set of data to the first memory location at a first speed and the second set of data to the second memory location at a second speed, wherein the first speed is slower than the second speed;
asynchronously saving the first set of data in to the first memory location and the second set of data to the second memory location by initiating saving the first set of data before initiating saving the second set of data;
sequentially transmitting data from the shared buffer to a target device from the head memory address to the tail memory address.
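
A minimal sketch of the reservation scheme the claim recites: slots are reserved in arrival order, payloads may finish saving out of order, and transmission still proceeds from head to tail. The slot representation and the "save speed" modeling are assumptions.

class SharedBuffer:
    def __init__(self):
        self.slots = []            # ordered from head to tail

    def reserve(self, size: int) -> int:
        # Earlier requests get locations closer to the head, regardless of data size.
        self.slots.append(None)
        return len(self.slots) - 1

    def save(self, slot: int, data: bytes) -> None:
        self.slots[slot] = data    # saves may complete asynchronously, in any order

    def drain(self):
        # Transmit sequentially from the head memory address to the tail.
        for data in self.slots:
            if data is not None:
                yield data

buf = SharedBuffer()
first_slot = buf.reserve(8)     # first request, nearer the head
second_slot = buf.reserve(4)    # second request, further toward the tail
buf.save(second_slot, b"fast")  # the faster save completes first
buf.save(first_slot, b"slowdata")
print(list(buf.drain()))        # [b'slowdata', b'fast']: head-to-tail order preserved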

US Pat. No. 10,114,740

MEMORY MANAGEMENT TECHNIQUES

Microsoft Technology Lice...

1. A computing device, the computing device comprising:at least one memory and at least one processor, the at least one memory and the at least one processor being respectively configured to store and execute instructions, including instructions for performing operations, the operations including:
allocating space in the at least one memory to a process executing on the computing device;
receiving an instruction from the process that:
data stored in at least a portion of the allocated space is to be discarded,
the data stored in at least the portion of the allocated space is to be retained, or
the data stored in at least the portion of the allocated space is available to be potentially discarded;
retaining or discarding the data based at least in part on the instruction and on memory usage for the computing device, wherein the instruction is based at least in part on whether the process is capable of regenerating the data.

US Pat. No. 10,114,739

REAL TIME ANALYSIS AND CONTROL FOR A MULTIPROCESSOR SYSTEM

Coherent Logix, Incorpora...

1. A method, comprising:analyzing application software;
developing test software based, at least in part, on results of analyzing the application software;
deploying the application software on a first hardware resource of a multi-processor array (MPA), wherein the MPA includes a plurality of processing elements, a plurality of memories, and an interconnection network communicatively coupling the plurality of processing elements to the plurality of memories, wherein the first hardware resource includes at least a first subset of the plurality of processing elements;
deploying the test software on a second hardware resource of the MPA, wherein the second hardware resource includes at least a second subset of the plurality of processing elements different than the first subset of the plurality of processing elements;
executing the application software on the first hardware resource; and
executing the test software on the second hardware resource, wherein executing the test software includes:
polling, by a first processing element included in the second hardware resource, one or more registers associated with a direct memory access (DMA) transfer in the first hardware resource resulting from executing one or more program commands included in the application software; and
sending, by the first processing element, auxiliary data retrieved from the one or more registers to a storage location for analysis, wherein an amount of auxiliary data is less than an amount of data generated by the application software; and
rebuilding the application software based on the auxiliary data.

US Pat. No. 10,114,738

METHOD AND SYSTEM FOR AUTOMATIC GENERATION OF TEST SCRIPT

WIPRO LIMITED, Bangalore...

1. A method for automatic generation of a test script and a validation script for an application under test, the method comprising:acquiring, by a system, a plurality of test steps from a database, the plurality of test steps being associated with a test case and including one or more words in natural language;
identifying, by the system, one or more actions to be performed in a testing process based on the plurality of test steps by using natural language processing;
generating, by the system, the test script based on the identified one or more actions, wherein the test script is executable by one or more processors to perform the plurality of test steps;
identifying, by the system, an expected test result associated with each of the plurality of test steps by performing the natural language processing on the plurality of test steps; and
generating, by the system, a validation script based on the expected test result associated with each of the plurality of test steps, wherein the validation script is executable by the one or more processors to validate whether the expected test result occurred, and wherein the natural language processing provides at least one of a context or a domain with respect to the application under test.

US Pat. No. 10,114,737

METHODS AND SYSTEMS FOR COMPUTING CODE COVERAGE USING GROUPED/FILTERED SOURCE CLASSES DURING TESTING OF AN APPLICATION

salesforce.com, inc., Sa...

1. A system, comprising:a user system, comprising:
an input system configured to receive input parameters specified by a user of the user system, wherein the input parameters comprise: one or more regular expressions specified by the user of the user system;
a processing system; and
memory configured to store: a source class filter module executable by the processing system, wherein the source class filter module, upon being executed by the processing system, is configured to: group and filter source class identifiers, based on one or more of the input parameters, to generate a unique source class identifier array of filtered source class identifiers that correspond to a particular subset of source classes that targeted code coverage metrics are to be computed for during code coverage computations when testing an application, wherein each regular expression is a pattern that is used to perform a pattern match to identify a particular group of one or more source class identifiers associated with that regular expression that each correspond to a source class name of a source class, and wherein the source class filter module comprises: a first source class fetcher comprising: a regular expression processor configured to receive the regular expressions input by the user of the user system; and a query builder configured to: generate a query associated with each regular expression; and
a cloud-based computing platform, communicatively coupled to the user system, and comprising:
an interface that interfaces with the source class filter module, wherein the query builder is configured to: send each query associated with each regular expression to the interface, wherein the interface is configured to receive each query, contact a data store of the cloud-based computing platform and find matching source class names for each regular expression, and return a source class identifier for each of the matching source class names to the query builder, and wherein the query builder is further configured to receive the source class identifiers for each of the matching source class names, and group the source class identifiers corresponding to each of the regular expressions into a first source class identifier array of the unique source class identifier array; and
a code coverage computation unit that is configured to compute the targeted code coverage metrics for the particular subset of source classes corresponding to the filtered source class identifiers of the unique source class identifier array, wherein the particular subset of source classes correspond to a particular subset of source code of the application.

US Pat. No. 10,114,736

VIRTUAL SERVICE DATA SET GENERATION

CA, Inc., Islandia, NY (...

1. A method comprising:instantiating a virtual service from a service model, wherein the virtual service is operable to receive requests intended for a particular one of a plurality of software components in a system and generate simulated responses of the particular software component based on a service model modeling responses of the particular software component;
receiving, at the virtual service, a particular request from another software component intended for the particular software component, wherein the particular request is a particular type and is redirected to the virtual service;
identifying a size request by a testing system, wherein the size request corresponds to the particular request, the size request indicates a first number of records to be included in a data set for inclusion in a simulated response of the virtual service to the particular request, the first number of records is different than a second number of records defined in the service model to be included in responses to requests of the particular type;
generating the simulated response at the virtual service based on the size request and the service model, wherein generating the simulated response comprises generating the data set to include the first number of records; and
sending the simulated response to the other software component in response to the particular request, wherein the simulated response comprises the data set.

US Pat. No. 10,114,734

END USER REMOTE ENTERPRISE APPLICATION SOFTWARE TESTING

DevFactory FZ-LLC, Dubai...

1. A method comprising:receiving an electronically transmitted virtual desktop infrastructure template with a first computer system;
utilizing the virtual desktop infrastructure template as a management layer to provision the first computer system and transform the first computer into an enterprise software application (ESA) testing platform computer system in accordance with the virtual desktop infrastructure template interacting with virtualization software executing on the first computer, wherein the ESA testing platform computer system hosts an enterprise software application;
receiving a request from an end user using a second computer to access an ESA hosted by the ESA testing platform computer system that is remote from the second computer used by the end user;
receiving data with the ESA testing platform computer system from the second computer to test the ESA, wherein the remote ESA testing platform computer system is provisioned to emulate an actual operating environment for which the ESA is being tested; and
testing the ESA in the ESA testing platform computer system.

US Pat. No. 10,114,733

SYSTEM AND METHOD FOR AUTOMATED TESTING OF USER INTERFACE SOFTWARE FOR VISUAL RESPONSIVENESS

Cadence Design Systems, I...

1. A system for testing user interface software for time lag in actuating a visual prompt responsive to user manipulation of a user input device, the system comprising:a display unit defining a canvas for displaying image frames;
a root capturing unit executable to capture user actuation of the user input device as at least one root event at a root software level, an operating system operating at the root software level, each root event being captured as a series of time-displaced samples of input device actuation;
a canvas capturing unit executable to capture processing of the root event by the user interface software as a canvas response at a canvas software level, the user interface software operating at the canvas software level for user interaction with an application, the canvas response being captured as a series of time-displaced image frames; and,
a test analysis unit coupled to said root and canvas capturing units, said test analysis unit executable to determine a parametric difference between corresponding ones of the root events and canvas responses, and to determine a degree of visual responsiveness for the user interface software based thereon, said test analysis unit thereby discriminating portions of the time lag introduced at the canvas software level from portions of the time lag introduced at the root software level.

US Pat. No. 10,114,732

DEBUGGING IN-CLOUD DISTRIBUTED CODE IN LIVE LOAD ENVIRONMENT

CA, INC., New York, NY (...

1. A remote debugging method comprising:configuring a transactions traffic distributor in a networked system to distinguish between production traffic and development traffic and to route at least a first portion of production traffic received by the distributor to an associated first production server;
configuring the transactions traffic distributor to route at least a respective portion of development traffic received by the distributor to an associated second production server, the second production server being configured to perform remotely controlled debugging on code of the at least respective portion of development traffic routed thereto;
configuring a development server remote from the first and second production servers and operatively coupled to the networked system to route transaction requests of at least one under-development process to the transactions traffic distributor in the form of development traffic; and
configuring the development server to provide debugging instructions to, and receive debugging results from the second production server.

US Pat. No. 10,114,731

INCLUDING KERNEL OBJECT INFORMATION IN A USER DUMP

EMC IP Holding Company LL...

1. A method of identifying a software issue in a data storage system, comprising:storing selected operating system kernel data in a memory location of the data storage system;
analyzing the stored operating system kernel data in the memory location;
identifying a root cause of the software issue from the analyzed operating system kernel data; and
transmitting an alert to a user after the root cause is identified, the alert identifying the root cause to the user;
wherein the method further comprises:
performing, after storing the selected operating system kernel data in the memory location, a memory dump operation configured to output memory dump data of a software process associated with the data storage system;
storing memory dump data in another memory location; and
analyzing the memory dump data, analysis of the memory dump data and the operating system kernel data providing the identification of the root cause of the software issue;
wherein the selected operating system kernel data is stored at a time selected based on a number of available handles in a thread of the data storage system; and
wherein the method further comprises:
comparing the number of available handles to a selected minimum handle threshold level of available handles for the thread;
when the number of available handles is below the selected minimum handle threshold level, generating a collection thread to collect file name data for each file handle from the operating system kernel;
storing the file name data in the memory location; and
initiating the memory dump operation.
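
A minimal sketch of the handle-threshold trigger in the claim above; the kernel object and its available_handles, file_handles, file_name_for_handle, and user_dump methods are hypothetical stand-ins for the data storage system's kernel interfaces, not part of the patent:

    import threading

    MIN_HANDLE_THRESHOLD = 64   # illustrative minimum handle threshold for the thread

    def check_handles_and_dump(thread_id, kernel, memory_store):
        """When the thread's free handle count drops below the threshold, spawn a
        collection thread that gathers the file name for each file handle from the
        kernel, store those names in one memory location, then run the memory dump
        into another location."""
        if kernel.available_handles(thread_id) < MIN_HANDLE_THRESHOLD:
            def collect():
                names = [kernel.file_name_for_handle(h)
                         for h in kernel.file_handles(thread_id)]
                memory_store["kernel_data"] = names                 # first memory location
                memory_store["dump"] = kernel.user_dump(thread_id)  # another memory location
            threading.Thread(target=collect).start()                # the collection thread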

US Pat. No. 10,114,730

DYNAMIC INSTRUMENTATION BASED ON DETECTED ERRORS

International Business Ma...

1. A method for dynamically instrumenting a program at runtime, the method comprising:identifying, by a processor, a sequence of memory related operations from an instruction stream, wherein the sequence comprises: a plurality of memory related operations that each reference a first address in memory, including a first memory related operation and a second memory related operation, wherein the second memory related operation is only able to execute subsequent to the first memory related operation;
instrumenting, by the processor, the first memory related operation;
detecting, by the processor, an error at the first memory related operation based on the instrumentation of the first memory related operation;
responsive to detecting the error at the first memory related operation, instrumenting, by the processor, at least the second memory related operation based on the presence of the second memory related operation within the identified sequence.
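
A minimal sketch of the error-driven widening of instrumentation described above; the op.address attribute and the instrument and detect_error callbacks are assumed interfaces for illustration:

    def instrument_on_error(instruction_stream, instrument, detect_error):
        """Group memory-related operations by the address they reference, instrument
        only the first operation of each sequence, and widen instrumentation to the
        later operations in the same sequence once the first one reports an error."""
        sequences = {}
        for op in instruction_stream:
            sequences.setdefault(op.address, []).append(op)
        for address, ops in sequences.items():
            instrument(ops[0])               # instrument the first memory operation
            if detect_error(ops[0]):         # error detected at the first operation
                for later_op in ops[1:]:     # so instrument the rest of the sequence
                    instrument(later_op)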

US Pat. No. 10,114,729

PERFORMANCE ANALYSIS USING PERFORMANCE COUNTERS AND TRACE LOGIC

QUALCOMM Incorporated, S...

1. A method of analyzing performance of a processing system, the method comprising:identifying a first transaction as a transaction to be monitored, at a first trace point of the processing system, based on detecting the first transaction at least a threshold number of times at the first trace point, wherein detecting the first transaction at least the threshold number of times at the first trace point comprises counting, in a performance counter provided at the first trace point, a number of times the first transaction is detected at the first trace point and comparing the number of times the first transaction is detected, to a threshold;
associating a first trace tag identifier with the first transaction, at the first trace point;
identifying the first transaction at one or more other trace points of the processing system based on the first trace tag identifier;
determining time stamps at which the first transaction is identified at the first trace point and the one or more other trace points; and
determining trace information for the first transaction from the time stamps.
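
A minimal sketch of threshold-based transaction tagging across trace points; the class layout and the choice to reuse the transaction id as the trace tag identifier are assumptions made for illustration:

    THRESHOLD = 100  # illustrative number of detections before a transaction is monitored

    class TracePoint:
        def __init__(self):
            self.counts = {}        # transaction id -> detection count (performance counter)
            self.timestamps = {}    # trace tag id -> time the tagged transaction was seen

    def observe_at_first_point(point, txn_id, now):
        point.counts[txn_id] = point.counts.get(txn_id, 0) + 1
        if point.counts[txn_id] >= THRESHOLD:     # threshold reached: monitor it
            point.timestamps[txn_id] = now
            return txn_id                         # first trace tag identifier
        return None

    def observe_at_other_point(point, tag_id, now):
        point.timestamps[tag_id] = now

    def trace_info(points, tag_id):
        # Trace information: the timestamps of the tagged transaction at each trace point.
        return [p.timestamps[tag_id] for p in points if tag_id in p.timestamps]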

US Pat. No. 10,114,728

DYNAMIC FUNCTION-LEVEL HARDWARE PERFORMANCE PROFILING FOR APPLICATION PERFORMANCE ANALYSIS

NEC Corporation, (JP)

1. A system with a computer implementation of performance profiling for performance analysis, the system comprising:a processor coupled to a non-transitory computer-readable storage medium, the processor being configured for:
inserting probe points, using an application instrumentation, into a target application program so that at run-time, performance profiling can be done by enabling those probe points;
profiling, using an application dynamic tracing, with selected targets and overhead budget, the target application performance during its execution; and
analyzing, using a performance data analyzer, the application performance data output by the application dynamic tracing;
wherein the application instrumentation, application dynamic tracing and performance data analyzer are configured to cooperate to selectively enable and disable dynamic function-level hardware performance profiling of hardware performance events and association of the hardware performance events with function calls for application performance analysis at a plurality of times on any subset of application functions and any subset of the hardware performance events;
wherein a profiling scope is specified by inputting the selected targets and the profiling is configured to begin upon execution of the target application or on demand by a user or an external process at any selected time during the execution of the target application, the selected targets including interested hardware performance events and interested application functions;
wherein the overhead budget is specified by a target overhead limit;
wherein the profiling ends after a specified time interval or upon termination of the target application;
wherein the application dynamic tracing comprises a function tracing for running the target application processes and threads through the probe points of the application instrumentation;
wherein the tracing function generates an index to a shared data table using a process or thread identification and a function identification; and
wherein the tracing function, if the probe point is for a beginning of the application function, uses available hardware performance counters to read current values of selected hardware performance events, and stores those values, and wherein the tracing function, if the probe point is for an ending of the application function, uses available hardware performance counters to read current values of selected hardware performance events, subtracting them by corresponding beginning values stored earlier for the same function of the same thread, and updating an event value attribute for each selected hardware event with a calculated value.

US Pat. No. 10,114,727

DISPLAY WINDOW CONTEXTUAL VISUALIZATION FOR APPLICATION PERFORMANCE MONITORING

CA, Inc., New York, NY (...

1. A method for displaying application performance data, said method comprising:receiving, by a processor, performance data collected from an application during display of a first display window by the application;
receiving, by the processor, performance data collected from the application during display of a second display window by the application;
generating, by the processor based, at least in part, on the performance data collected from the application during display of the first and second display windows by the application, a record in a non-transitory machine-readable storage medium, the record including a performance data field containing a portion of the performance data collected from the application during display of the first and second display windows by the application and a display window identifier (ID) field containing a display window ID value; and
simultaneously, providing, by the processor, for display on a display device,
an image of the first display window that includes a first displayable performance indicator that is visually modifiable to correlate to variations in the performance data collected from the application during display of the first display window by the application; and
an image of the second display window that includes a second displayable performance indicator that is visually modifiable to correlate to variations in the performance data collected from the application during display of the second display window by the application.

US Pat. No. 10,114,726

AUTOMATED ROOT CAUSE ANALYSIS OF SINGLE OR N-TIERED APPLICATION

Virsec Systems, Inc., Sa...

1. A method for facilitating a root cause analysis associated with one or more computer applications, the method executed by a physical computer comprising a processor within a system, the method comprising, by the processor:receiving a global time reference at the one or more computer applications, each computer application of the one or more computer applications having a corresponding local time reference;
synchronizing each local time reference with the global time reference;
monitoring at least one computer instruction of the one or more computer applications with respect to the corresponding local time reference;
retrieving information associated with the at least one computer instruction; and
forwarding at least a portion of the retrieved computer instruction information to a validation engine, wherein the at least a portion facilitates the root cause analysis at the validation engine.

US Pat. No. 10,114,725

INFORMATION PROCESSING APPARATUS, METHOD, AND COMPUTER READABLE MEDIUM

FUJITSU LIMITED, Kawasak...

1. An information processing apparatus configured to execute a program to generate events, the information processing apparatus comprising:a counter configured to count a number of events generated when a program is executed by the information processing apparatus, the counter not having a function of outputting an interruption signal;
a memory; and
a processor coupled to the memory and configured to:
acquire, at a regular time interval, an integrated value of count values and first information, the integrated value of count values being acquired by the counter by counting the number of the events, and the first information being operation information that includes identification information of the program at a timing of acquiring the integrated value or information of hardware that is used to execute the program,
store the integrated value and the first information in a first area of the memory,
when the integrated value acquired at a first timing of the regular time interval is lower than a threshold and the integrated value acquired at a second timing which is a next timing of the first timing of the regular time interval is higher than the threshold, determine a first difference between the threshold and the integrated value acquired at the first timing and a second difference between the threshold and the integrated value acquired at the second timing,
select, when the first difference is less than the second difference, the first information acquired at the first timing, and select, when the second difference is less than the first difference, the first information acquired at the second timing, and
store the selected first information in a second area of the memory.
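
A minimal sketch of the threshold-crossing selection rule in the claim above: between two consecutive samples of the integrated count, keep the first information from whichever timing lies closer to the threshold (tie handling is an assumption, since the claim does not specify it):

    def select_operation_info(samples, threshold):
        """samples: list of (integrated_count, first_information) pairs taken at a
        regular interval. Returns the information acquired at the timing whose
        integrated count is nearer to the threshold when the threshold is crossed."""
        for (count_a, info_a), (count_b, info_b) in zip(samples, samples[1:]):
            if count_a < threshold and count_b > threshold:   # crossing detected
                first_diff = threshold - count_a
                second_diff = count_b - threshold
                return info_a if first_diff < second_diff else info_b
        return None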

US Pat. No. 10,114,724

TECHNIQUES FOR REAL TIME SERVER TESTING IN A PRODUCTION ENVIRONMENT

A9.com, Inc., Palo Alto,...

1. A computer-implemented method, comprising:receiving a plurality of requests for content by a content management server, the content management server deployed to a production environment and the plurality of requests received from a plurality of content client servers, each content client server being in communication with a plurality of client devices;
receiving, by a test controller executing on the content management server, a request to initiate a test, the test associated with a plurality of message attributes including at least one of a device identifier, content client identifier, and content identifier;
identifying, by the test controller, a subset of requests for content that match the plurality of message attributes, the subset of requests for content being identified as the plurality of requests are received;
processing, by the content management server, each request from the subset of requests;
responsive to processing of at least one of the subset of messages, initiating the test;
retrieving a plurality of request traces, each request trace associated with a request from the subset of requests;
comparing at least one test condition associated with a request test to each request trace to determine a plurality of test results;
determining that a threshold number of requests from the subset of requests have been processed, the threshold number of requests associated with the request test; and
aggregating the plurality of test results to determine an aggregate test result for the request test.
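
A minimal sketch of the test controller's matching and aggregation logic; requests are modeled as dicts, and trace_request is a hypothetical helper that returns the trace recorded for a processed request:

    def run_request_test(requests, attributes, test_conditions, threshold_count):
        """Filter live requests that match the test's message attributes, collect a
        trace for each, evaluate every test condition against every trace, and
        aggregate once enough matching requests have been processed."""
        matching = [r for r in requests
                    if all(r.get(k) == v for k, v in attributes.items())]
        if len(matching) < threshold_count:
            return None                                   # not enough matching traffic yet
        traces = [trace_request(r) for r in matching]     # assumed helper
        results = [condition(t) for t in traces for condition in test_conditions]
        return {"passed": sum(results), "failed": len(results) - sum(results)}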

US Pat. No. 10,114,723

SYNCHRONOUS INPUT/OUTPUT MEASUREMENT DATA

INTERNATIONAL BUSINESS MA...

1. A method for acquiring measurement data of a synchronous input/output (I/O) link between an operating system and a recipient, the operating system and the recipient executing on a processor coupled to a memory, the method comprising:monitoring operating system usage of synchronous I/O commands on the synchronous I/O link including identifying a plurality of operating systems using the synchronous I/O link,
wherein the plurality of operating systems includes the operating system,
wherein each of the plurality of operating systems is communicatively coupled to a peripheral component interconnect function measurement block and corresponds to a separate logical partition of a synchronous system;
storing the operating system usage in a measurement block as the measurement data,
wherein the measurement block is accessed by the operating system to determine that the measurement data is acquired; and
aggregating the operating system usage from the measurement block into a link measurement block,
wherein the link measurement block comprises a peripheral component interconnect function measurement block for the synchronous I/O link that directly measures credentials of the synchronous I/O link by accumulating the measurement data related to each virtual function of each of the plurality of operating systems and the measurement data includes successful commands, processing time, local rejects, remote errors, bytes written, and bytes read for the synchronous I/O link.

US Pat. No. 10,114,722

TEST OF THE EXECUTION OF WORKLOADS IN A COMPUTING SYSTEM

INTERNATIONAL BUSINESS MA...

1. A method for servicing workloads, the method comprising:providing a computing environment for servicing the workloads, the computing environment including a production computing environment and a common staging computing environment, the common staging computing environment comprising multiple staging computing machines shared by a plurality of users of the computing environment;
providing a definition of one or more workloads for each one of the plurality of users of the computing environment, the definition of each workload comprising a plurality of indications, the plurality of indications comprising an indication of one or more work units to be executed, an indication of the production computing machine of a production computing environment of the corresponding user for executing each work unit, and an indication of an execution mode of the workload setting the workload as a production workload to be executed in a production mode or as a test workload to be executed in a test mode;
identifying from the plurality of indications of the definition of a workload, the production computing machine for executing each work unit of the workload, and based on the plurality of indications of the definition of the workload indicating that the workload is to be executed as a test workload in the test mode, processing each work unit of the test workload by:
automatically mapping, by a transposer of the computing environment, the production computing machine of the work unit of the test workload to a staging computing machine of the multiple staging computing machines shared by the plurality of users, the mapping considering one or more computing resource characteristics of the production computing machine, and the transposer controlling the multiple staging computing machines shared by the plurality of users, the executing including the mapped staging computing machine accessing in read-only mode a production database used by the production computing machine of the production computing environment when the workload is set as the production workload;
executing the work unit of the test workload on the mapped staging computing machine of the multiple staging computing machines shared by the plurality of users; and
determining a test result of the executing of the test workload, and based thereon, verifying whether the executing of the test workload behaves correctly on the mapped staging computing machine of the multiple staging computing machines shared by the plurality of users; and
based on the test workload behaving correctly on the mapped staging computing machine of the multiple staging computing machines shared by the plurality of users, resetting, by a scheduler of the computing environment, the indication of the execution mode of the workload to the production workload, and executing the workload as the production workload in the production mode on the production computing machine of the production computing environment, wherein the test workload behaving correctly means execution of the test workload is successful for a predetermined number of consecutive times.

US Pat. No. 10,114,721

POWER CONSUMPTION ASSESSMENT OF AN HVAC SYSTEM

SENSIBO LTD., Tel Aviv (...

1. An apparatus having a processing unit and a storage device, comprising:an information receiving module for receiving, directly or indirectly, information related to a Heating Ventilation or Air-Conditioning (HVAC) unit from at least one sensor external to the HVAC unit;
a long term obtaining and analysis module for determining a behavioral model of the HVAC based on observations provided by the at least one sensor, wherein the behavioral model is determined independently of factory data of a model of the HVAC unit; and
a power consumption determination component for indirectly assessing power consumption of the HVAC unit from the behavioral model and from further observations received from the at least one sensor.

US Pat. No. 10,114,720

ESTIMATING POWER USAGE IN A COMPUTING ENVIRONMENT

International Business Ma...

1. A method for estimating power usage in a computing environment using differing manufacturers of hardware and software executed on the hardware, by a processor device, comprising:automatically, and with no input from a user, detecting hardware configuration information by comparing detected device characteristics and checking the detected characteristics against a library of device power models by use of a software agent in lieu of using an external power measurement device, the library of device power models comprising known minimum and maximum electrical power consumption rates for each device in the library, and including as input to a device power model for a given device in the library at least: an amount of memory currently in use, a number of physical processors and associated current utilization levels, an amount of storage contained or attached to the given device, and a current level of read and write activity on any of the contained or attached storage; wherein the detected hardware configuration information is translated into power consumption information for implementing one of a plurality of power estimation models for measuring electrical power consumption and utilization, the one of the plurality of power estimation models implemented by transmitting the power consumption information to a management application which reads the power consumption information provided by the software agent as if the power consumption information were an actual measurement of a particular data metric;
storing the power consumption information within a storage device in the computing environment, wherein the power consumption information is transmitted from the storage device to the management application;
optimizing, through the management application, the electrical power consumption and utilization of detected hardware within the computing environment associated with the detected hardware configuration information to increase an efficiency of the electrical power consumption and utilization; and
integrating a minimum and a maximum power usage and utilization of a plurality of hardware components into a linear power model.

US Pat. No. 10,114,719

ESTIMATING POWER USAGE IN A COMPUTING ENVIRONMENT

International Business Ma...

1. A system for estimating power usage in a computing environment using differing manufacturers of hardware and software executed on the hardware, comprising:a processor device, operable in the computing environment, wherein the at least one processor device:
automatically, and with no input from a user, detects hardware configuration information by comparing detected device characteristics and checking the detected characteristics against a library of device power models by use of a software agent in lieu of using an external power measurement device, the library of device power models comprising known minimum and maximum electrical power consumption rates for each device in the library, and including as input to a device power model for a given device in the library at least: an amount of memory currently in use, a number of physical processors and associated current utilization levels, an amount of storage contained or attached to the given device, and a current level of read and write activity on any of the contained or attached storage; wherein the detected hardware configuration information is translated into power consumption information for implementing one of a plurality of power estimation models for measuring electrical power consumption and utilization, the one of the plurality of power estimation models implemented by transmitting the power consumption information to a management application which reads the power consumption information provided by the software agent as if the power consumption information were an actual measurement of a particular data metric;
stores the power consumption information within a storage device in the computing environment, wherein the power consumption information is transmitted from the storage device to the management application;
optimizes, through the management application, the electrical power consumption and utilization of detected hardware within the computing environment associated with the detected hardware configuration information to increase an efficiency of the electrical power consumption and utilization; and
integrates a minimum and a maximum power usage and utilization of a plurality of hardware components into a linear power model.

US Pat. No. 10,114,718

PREVENTION OF EVENT FLOODING

International Business Ma...

9. A non-transitory computer-readable storage medium storing program code executed by a plurality of processors to perform a method comprising:responsive to receiving monitored activity data, analysing the monitored activity data to identify an event value corresponding to an event;
responsive to identifying the event value, identifying a set of threshold values and determining whether the event value has met a first threshold value of the set of threshold values;
responsive to determining that the event value has met the first threshold value, determining if the first threshold value is equal to a second threshold value met by a previous event; and
responsive to determining that the first threshold value is equal to the second threshold value, disregarding the event.
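
A minimal sketch of the flood-suppression rule in this claim: an event is disregarded when it meets the same threshold value that the previous event met (interpreting "met" as reaching the threshold, which is an assumption):

    def should_disregard(event_value, thresholds, last_threshold_met):
        """Return (disregard, threshold_met) for the current event."""
        met = None
        for t in sorted(thresholds):
            if event_value >= t:        # highest threshold reached by this event
                met = t
        if met is None:
            return False, last_threshold_met
        return met == last_threshold_met, met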

US Pat. No. 10,114,717

SYSTEM AND METHOD FOR UTILIZING MACHINE-READABLE CODES FOR TESTING A COMMUNICATION NETWORK

Fluke Corporation, Evere...

1. A testing device comprising:a testing unit to perform test procedures on network elements of a communication network;
a machine-readable code reader to read a machine-readable code associated with a first network element of the communication network; and
a computer configured to:
determine a component type associated with the machine-readable code read by the machine-readable code reader;
select, from a plurality of configuration files, a configuration file based on the component type determined;
configure the testing unit for a test procedure using the configuration file selected;
instruct the testing unit to perform the test procedure on the first network element;
determine a location of a second network element relative to the first network element based on data associated with the machine-readable code for the first network element; and
provide the location of the second network element to a user of the testing device.
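
A minimal sketch of the configuration-file selection flow; the payload layout of the machine-readable code (component type plus neighbor location separated by ';'), the mapping table, and the testing_unit API are all assumptions made for illustration:

    CONFIG_BY_TYPE = {                        # illustrative component-type mapping
        "sfp-transceiver": "sfp_loopback.cfg",
        "patch-panel":     "panel_continuity.cfg",
    }

    def configure_and_test(code_payload, testing_unit):
        """Decode a scanned code, pick the matching configuration file, configure the
        testing unit, run the procedure, and report the neighbor element's location."""
        component_type, neighbor_location = code_payload.split(";", 1)
        config_file = CONFIG_BY_TYPE[component_type]
        testing_unit.configure(config_file)   # assumed testing-unit API
        testing_unit.run()
        return neighbor_location              # location of the second network element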

US Pat. No. 10,114,716

VIRTUAL FAILURE DOMAINS FOR STORAGE SYSTEMS

International Business Ma...

1. A method comprising:collecting, by one or more processors, information that indicates one or more failure correlations for disks in a storage system;
defining, by one or more processors, each disk as a vector of parameters associated with the respective disk, wherein the parameters are based on the information that indicates one or more failure correlations, and wherein the parameters for each respective disk include the respective disk's physical location inside the storage system, the respective disk's manufacture data, and the respective disk's performance/usage parameters, wherein the performance/usage parameters include a number of head crashes and a number of bad sectors;
separating, by one or more processors, the disks into a plurality of virtual failure domains based on the parameters of their corresponding vectors;
determining, by one or more processors, that all data objects of a set of redundant data objects are included in a first virtual failure domain; and
responsive to determining that all data objects of the set of redundant data objects are included in the first virtual failure domain, migrating, by one or more processors, at least one data object of the set of redundant data objects from a first disk in the first virtual failure domain to a second disk in a second virtual failure domain.
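
A minimal sketch of grouping disks into virtual failure domains from their parameter vectors and checking whether a redundant set needs migration; the grouping-by-hashed-vector step and the disk field names are only illustrative stand-ins, not the patented separation method:

    def assign_failure_domains(disks, num_domains):
        """Place disks with similar failure-correlation vectors (location, manufacture
        batch, head crashes, bad sectors) into the same virtual failure domain."""
        domains = {}
        for disk in disks:
            vector = (disk["shelf"], disk["manufacture_batch"],
                      disk["head_crashes"] // 2, disk["bad_sectors"] // 100)
            domains.setdefault(hash(vector) % num_domains, []).append(disk["id"])
        return domains

    def needs_migration(replica_domains):
        # Redundant copies must not all reside in a single virtual failure domain.
        return len(set(replica_domains)) == 1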

US Pat. No. 10,114,714

REDUNDANT, FAULT-TOLERANT, DISTRIBUTED REMOTE PROCEDURE CALL CACHE IN A STORAGE SYSTEM

Pure Storage, Inc., Moun...

1. A storage cluster, comprising:a plurality of storage nodes configurable to cooperate as a storage cluster and to support a plurality of filesystems, each storage node of the plurality of storage nodes having solid-state storage;
a first remote procedure call cache in a first one of the plurality of storage nodes, the first remote procedure call cache configurable to receive a remote procedure call under a first one of the plurality of filesystems; and
a first mirrored remote procedure call cache in a second one of the plurality of storage nodes, configurable to mirror the first remote procedure call cache.

US Pat. No. 10,114,713

SYSTEMS AND METHODS FOR PREVENTING SPLIT-BRAIN SCENARIOS IN HIGH-AVAILABILITY CLUSTERS

Juniper Networks, Inc., ...

1. A computer-implemented method comprising:detecting, at a standby node of a high-availability cluster, a partitioning event that isolates the standby node from an active node of the high-availability cluster;
after the partitioning event has occurred:
broadcasting, from a health-status server, a cluster-health message to at least the standby node, wherein:
the health-status server is separate and distinct from the standby node and the active node;
the cluster-health message comprises at least a health status of the active node;
the health status of the active node is based at least in part on whether the health-status server received a node-health message from the active node after the partitioning event occurred;
reacting, at the standby node, to the partitioning event such that the partitioning event does not result in a split-brain scenario within the high-availability cluster by performing, based at least in part on whether the standby node received the cluster-health message from the health-status server, at least one of:
leaving the high-availability cluster;
assuming at least one computing task assigned to the active node.

US Pat. No. 10,114,712

FAILURE DETECTION VIA IMPLICIT LEASES IN DISTRIBUTED COMPUTING SYSTEMS

Microsoft Technology Lice...

1. A computing device, comprising:a processor; and
a memory containing instructions executable by the processor to cause the processor to perform a process including:
receiving an arbitration request from a first node in a computing system having a plurality of nodes interconnected by a computer network, each of the nodes having a logic relationship with another node in the computing system, the arbitration request indicating that the first node is unable to establish a lease with a second node for a predetermined threshold period, wherein the second node is logically related to the first node according to the logic relationship and is a default monitor for the first node for the lease; and
in response to receiving the arbitration request from the first node, providing a neutral arbitration result to the first node within an arbitration timeout period, the neutral arbitration result allowing the first node to continue to operate without causing the second node to terminate, thereby allowing both the first and second nodes to continue to operate despite that the first node is unable to establish the lease with the second node.

US Pat. No. 10,114,709

BLOCK STORAGE BY DECOUPLING ORDERING FROM DURABILITY

Microsoft Technology Lice...

1. A method comprising:receiving multiple write commands having corresponding write data;
receiving multiple flush commands, the multiple flush commands defining corresponding flush epochs;
issuing the write data to a persistent log on a physical storage device with consistency data;
acknowledging an individual flush command that defines an individual flush epoch before confirming that at least some write data for the individual flush epoch has committed on the physical storage device; and
after a crash, recovering the write data on the physical storage device to a consistent state using the consistency data.
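
A minimal sketch of logging writes with flush-epoch numbers as consistency data, plus one plausible crash-recovery rule (keep the longest contiguous prefix of epochs seen in the log); the claim does not spell out the recovery rule, so that part is an assumption:

    def append_writes(log, commands):
        """commands is an ordered mix of ("write", data) and ("flush",) entries.
        Each write is appended to the persistent log tagged with its flush epoch;
        a flush is acknowledged immediately, before its epoch's writes commit."""
        epoch = 0
        for cmd in commands:
            if cmd[0] == "write":
                log.append({"epoch": epoch, "data": cmd[1]})
            else:
                epoch += 1          # the flush defines the next epoch

    def recover(log_entries):
        """Keep log entries up to the first missing epoch so the recovered state
        respects the flush ordering."""
        present = {e["epoch"] for e in log_entries}
        last_complete = -1
        for epoch in range(max(present, default=-1) + 1):
            if epoch not in present:
                break
            last_complete = epoch
        return [e for e in log_entries if e["epoch"] <= last_complete]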

US Pat. No. 10,114,708

AUTOMATIC LOG COLLECTION FOR AN AUTOMATED DATA STORAGE LIBRARY

INTERNATIONAL BUSINESS MA...

1. A method, by one or more processors, for automatic log collection of an automated data storage library, comprising:detecting an occurrence of a triggering event associated with the automated data storage library, wherein the triggering event includes at least detecting an opening of one or more doors of the automated data storage library;
capturing a snapshot of one or more logs associated with the automated data storage library upon detection of the triggering event, the one or more logs including at least error logs, service-related logs, and accessor logs at an appliance-level of the automated data storage library, and logs associated with data storage media stored within the automated data storage library; and
storing the snapshot of the one or more logs by the automated data storage library.

US Pat. No. 10,114,707

CROSS SITE RECOVERY OF A VM

EMC IP Holding Company LL...

1. A system for restoring a virtual machine, comprising:a communication interface configured to:
receive an indication to restore the virtual machine of a primary site at a remote site, wherein the virtual machine of the primary site is one of one or more virtual machines of the primary site available to be restored at the remote site, wherein the virtual machine of the primary site is to be restored using a backup copy of the primary site virtual machine that is stored in a backup storage at the remote site, wherein the backup copy of the primary site virtual machine was generated using a virtual appliance of the primary site directly accessing data of the virtual machine of the primary site from a data storage of the primary site without accessing the virtual machine itself, wherein the backup copy of the primary site virtual machine was replicated from the backup storage of the primary site to the backup storage of the remote site, wherein the backup storage of the primary site is configured to store one or more backup images of the one or more virtual machines of the primary site, wherein the data storage of the primary site is configured to store data of the one or more virtual machines of the primary site; and
a processor coupled with the communication interface and configured to:
determine a type of restoration site of the remote site; and
restore the primary site virtual machine to the remote site using a backup application of the remote site, wherein in response to receiving a communication from a central backup application, the backup application of the remote site is configured to restore the primary site virtual machine at the remote site using the backup copy of the primary site virtual machine that is stored in the backup storage at the remote site.

US Pat. No. 10,114,706

BACKUP AND RECOVERY OF RAW DISKS [RDM] IN VIRTUAL ENVIRONMENT USING SNAPSHOT TECHNOLOGY

EMC IP Holding Company LL...

1. A computer-implemented method comprising:receiving, by a virtual machine manager on a first computing device, a request for backup of a virtual disk of a virtual machine on the first computing device to a target storage device, the request originating from a second computing device, wherein the virtual machine is identified by a unique virtual machine identifier within the request, and wherein the virtual machine manager includes a backup application programming interface (API);
in response to receiving the request, determining, by the virtual machine manager using the backup API therein, an identifier for the virtual disk of the virtual machine to be backed up to the target storage device;
determining, by the virtual machine manager, a mapping of the identifier for the virtual disk to one or more portions of disk storage on a source storage device, wherein the virtual machine manager uses the backup API therein to communicate with a backup agent on the source storage device to determine the mapping of the identifier for the virtual disk to the one or more portions of disk storage on the source storage device, wherein determining the mapping of the identifier for the virtual disk to the one or more portions of disk storage includes requesting, by the virtual machine manager, a mapping of a raw virtual disk in the virtual disk to the one or more portions of disk storage from the source storage device, wherein the mapping of the raw virtual disk to the one or more portions of storage is defined by a raw disk mapping file, wherein the raw disk mapping is requested by the virtual machine manager from the source storage device;
triggering, by the virtual machine manager, transmission to the target storage device, the identifier for the virtual disk, the mapping of the identifier for the virtual disk to the one or more portions of disk storage, and data stored in the one or more portions of disk storage on the source storage device.

US Pat. No. 10,114,705

PRESENTING VIRTUAL MACHINE BACKUP FILES FOR BLOCK AND FILE LEVEL RESTORE

EMC IP Holding Company LL...

1. A computer-implemented method of providing virtual machine backup files for instant system restoration, comprising:setting up a kernel mode interceptor hook system object and a user mode redirector process;
presenting a backed up save set comprising a virtual disk in a backup system including a backup server computer and virtual machine targets;
creating a temporary container in a storage medium coupled to the backup server computer;
formatting the temporary container with a directory structure similar to that created for the virtual disk during backup;
receiving read operations (reads) to the backup save set intercepted by the kernel interceptor hook system object of the backup server computer,
redirecting, using the kernel mode redirector component, reads to the backup save set residing either on one of a deduplication backup platform or a network file system; and
servicing the read operations through a user mode read thread to use file transfer protocol (FTP) libraries to mount arbitrary disk images on defined operating system platforms to ensure availability of the backup save set without requiring file share protocols to access remote virtual machine hard disk files and without changing the original backup save set format.

US Pat. No. 10,114,704

UPDATING DATABASE RECORDS WHILE MAINTAINING ACCESSIBLE TEMPORAL HISTORY

INTUIT INC., Mountain Vi...

1. A method for updating database records while maintaining accessible temporal history, comprising:receiving a request, at a database, to select a specific instance of a record from the database at a specific point in time;
in response to the request:
reading an instance of the record from a snapshot of the database, wherein the snapshot of the database was made prior to the specific point in time;
loading one or more deltas associated with the record from the database, wherein each delta in the one or more deltas comprises a difference between a new state of the record and a prior state of the record;
chronologically applying the one or more deltas to the instance of the record to create the specific instance of the record; and
returning the specific instance of the record; and
if the request causes a percentage of recent requests to exceed a predetermined percentage of recent requests for most-current data, then creating a new snapshot of the database by:
loading a most recent snapshot of the database;
loading a complete set of deltas associated with the database from a time of the most recent snapshot to a current time; and
applying to the database the complete set of deltas associated with the database from the time of the most recent snapshot to the current time.
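
A minimal sketch of the snapshot-plus-deltas read path in the claim above, where each delta is assumed to be a (timestamp, changed_fields) pair:

    def record_as_of(snapshot_record, deltas, point_in_time):
        """Rebuild a record at a point in time: start from the snapshot instance and
        chronologically apply the deltas recorded up to that time."""
        record = dict(snapshot_record)
        for ts, changed_fields in sorted(deltas, key=lambda d: d[0]):
            if ts > point_in_time:
                break
            record.update(changed_fields)
        return record

    # Example: record_as_of({"balance": 10},
    #                       [(2, {"balance": 15}), (5, {"balance": 7})], 3)
    # returns {"balance": 15}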

US Pat. No. 10,114,703

FLASH COPY FOR DISASTER RECOVERY (DR) TESTING

INTERNATIONAL BUSINESS MA...

1. A computer program product for disaster recovery (DR) testing, the computer program product comprising a computer readable storage device having program code embodied therewith, wherein the computer readable storage device is not a transitory signal per se, the program code being readable and/or executable by a hardware processor to:define, by the processor, a DR family, the DR family comprising one or more DR clusters accessible to a DR host and one or more production clusters accessible to a production host, wherein the DR host is configured to replicate data from the one or more production clusters to the one or more DR clusters;
create, by the processor, a backup copy of data stored to the one or more production clusters;
store, by the processor, the backup copy to the one or more DR clusters;
establish, by the processor, a time-zero in the DR family in response to a user enabling snapshot within the DR family;
create, by the processor, a snapshot of each backup copy stored to the one or more DR clusters, wherein each snapshot represents data stored to the one or more DR clusters at the time-zero;
share, by the processor, a point-in-time data consistency at the time-zero among all clusters within the DR family; and
perform, by the processor, DR testing using, in descending order of preference: a snapshot from a first cluster within the DR family, a snapshot accessed via the first cluster from a second cluster within the DR family when the first cluster does not include an up-to-date consistent snapshot, a backup copy from the first cluster, and a backup copy from the second cluster when the first cluster does not include an up-to-date consistent backup copy.

US Pat. No. 10,114,702

METHOD AND SYSTEM TO DISCOVER AND MANAGE DISTRIBUTED APPLICATIONS IN VIRTUALIZATION ENVIRONMENTS

International Business Ma...

1. A method for managing a plurality of computing machines, the method comprising:accessing a catalogue memory structure storing a plurality of component signatures for discovering corresponding software components and, for each of a plurality of software applications, an indication of one or more of the software components belonging to a software application and one or more connection signatures for detecting corresponding connections, each connection being a connection between at least two of the software components of the software application;
discovering one or more of the software components being instantiated in a software image of each computing machine according to corresponding component signatures;
detecting one or more of the connections each connection being established between at least two instantiated software components of different computing machines according to corresponding connection signatures;
associating each software image with a software deployment of the software application of each established connection of the software image;
receiving a restore command for restoring a target recovery point selected among a plurality of recovery points, each recovery point comprising snapshots of respective software images of one or more of the software deployments being directly or indirectly overlapped; and
restoring the target recovery point in response to the restore command by restoring the snapshots associated with the target recovery point on corresponding computing machines.

US Pat. No. 10,114,701

SPACE EFFICIENT CASCADING POINT IN TIME COPYING

INTERNATIONAL BUSINESS MA...

1. A computer program product stored on a non-transitory tangible computer readable storage medium for a space efficient cascading point-in-time copying of source data by creating a plurality of cascading point-in-time target copies, the target copies being created at different points in time, the computer program including code for physically copying data from the source data to a repository to create a physical copy;
creating a data mapping that associates the physical copy with a most recent target copy of said plurality of cascading target point-in-time copies, the data mapping indicating shared mapping and non-shared mapping, the shared mapping indicating that an address of the physical copy in the repository, the repository not comprising a cache, is shared with at least one previously created target copy of said plurality of cascading point-in-time copies; wherein the source data includes a time stamp of when a data block of the source data was last modified or overwritten, and if the time stamp indicates the source data is older than a time of creation of a previous target volume which was created prior to the most recent target copy of a most recent target volume, the source data corresponds to logical copies in both the most recent target volume and the previous target volume and the data mapping indicates the source data as shared; otherwise, if the time stamp indicates the source data was modified more recently than the time of creation of the previous target volume which was created prior to the most recent target copy of the most recent target volume, the data mapping indicates the source data as non-shared;
receiving a request to perform a read operation on a logical copy of one target copy of said plurality of cascading target point-in-time copies;
in response to the request, directing the read operation to a corresponding address of the repository if the step of physically copying data from the source data was performed prior to the creation of the one target copy of said plurality of cascading target point-in-time copies, and directing the read operation to the logical copy of the one target copy of said plurality of cascading target point-in-time copies if the step of physically copying data from the source data was performed after the creation of the one target copy of said plurality of cascading target point-in-time copies; and
updating the data mapping of one or more target copies of said plurality of cascading target point-in-time copies that are older than a designated target copy of said plurality of cascading target point-in-time copies, so as to include information relating to the designated target copy;
wherein the data mapping comprises a leaf of a B-tree structure, and the B-tree structure includes inner nodes and one or more leaves, the inner nodes include information for assisting in searching the B-tree and the one or more leaves indicate the shared mapping and the non-shared mapping.
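
A minimal sketch of the timestamp rule the claim uses to mark a physical copy as shared or non-shared between the most recent target copy and the previously created target copy:

    def classify_mapping(source_block_mtime, previous_target_created_at):
        """Shared when the source block was last modified before the older target copy
        was created (so both logical copies point at the same physical copy),
        non-shared otherwise."""
        if source_block_mtime < previous_target_created_at:
            return "shared"
        return "non-shared"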

US Pat. No. 10,114,699

RAID CONSISTENCY INITIALIZATION METHOD

Infortrend Technology, In...

1. A method for initializing a plurality of physical storage devices (PSD) in a redundant array of independent disks (RAID) subsystem, the method comprising:creating a RAID, including setting a RAID configuration of the RAID and creating a consistency initialization progress table for storing a plurality of consistency initialization states of a plurality of consistency initialization regions of the physical storage devices of the RAID subsystem; wherein the consistency initialization progress table includes a plurality of fields, and wherein one of the plurality of fields corresponds to one of the plurality of consistency initialization regions and said one of the plurality of fields includes a value for indicating the consistency initialization state of said one of the plurality of consistency initialization regions of the physical storage devices of the RAID subsystem, in which the consistency initialization state is a first consistency initialization state meaning that said one of the plurality of consistency initialization regions is completed with an induced regional consistency initialization, or is a second consistency initialization state meaning that said one of the plurality of consistency initialization regions is not completed with the induced regional consistency initialization, wherein after the consistency initialization progress table is created, the RAID subsystem is allowed to be accessed by a host I/O when there are still some of the consistency initialization regions not completed with the induced regional consistency initialization, wherein said induced regional consistency initialization makes data consistent with one another in the consistency initialization region of the physical storage devices of the RAID subsystem;
receiving the host I/O;
determining, after referring to the consistency initialization progress table, whether the consistency initialization region that is associated with the host I/O has been completed with the induced regional consistency initialization, so as to determine whether or not to trigger the induced regional consistency initialization on the consistency initialization region that is associated with the host I/O to define data of the consistency initialization region to a predetermined consistent state;
triggering by the host I/O, before executing an I/O access associated with the host I/O and after referring to the consistency initialization progress table where the consistency initialization progress table shows that the consistency initialization region which is associated with the host I/O has not been completed with the induced regional consistency initialization, is not being performed with the induced regional consistency initialization, and has not been started with the induced regional consistency initialization, the induced regional consistency initialization on the consistency initialization region which is associated with the host I/O to define data of the consistency initialization region to the predetermined consistent state, executing, after triggering the induced regional consistency initialization, the I/O access associated with the host I/O, and updating, after triggering the induced regional consistency initialization, a consistency initialization state of the consistency initialization region that is associated with the host I/O, into the consistency initialization progress table; and
executing the I/O access associated with the host I/O on the consistency initialization region that is associated with the host I/O without first triggering the induced regional consistency initialization on the consistency initialization region that is associated with the host I/O, if the consistency initialization progress table shows that the consistency initialization region that is associated with the host I/O has been completed with the induced regional consistency initialization, after referring to the consistency initialization progress table.

US Pat. No. 10,114,698

DETECTING AND RESPONDING TO DATA LOSS EVENTS IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:scanning a plurality of distributed storage units to identify one or more compromised encoded data slices (EDSs) of a set of EDSs, wherein the set of EDSs represents a first data segment;
when one or more compromised EDSs of the set of EDSs is found, determining whether a decode threshold number of EDSs of the set of EDSs is available to recover the first data segment;
when a decode threshold number of EDSs of the set of EDSs is determined not to be available to recover the first data segment, determining whether the first data segment is involved in an indeterminate state of processing a storage function;
when the first data segment is involved in an indeterminate state of processing a storage function, waiting until the processing a storage function is complete; and
when the first data segment is not involved in an indeterminate state of processing a storage function, initiating a process to recover at least a portion of the first data segment.

US Pat. No. 10,114,697

LARGE OBJECT PARALLEL WRITING

International Business Ma...

1. A method comprises:partitioning, by a computing device of a dispersed storage network (DSN), a data object into a first partition and a second partition;
dispersed storage error encoding, by the computing device, the first partition into a first plurality of sets of encoded data slices and the second partition into a second plurality of sets of encoded data slices;
generating, by the computing device, a first segment allocation table (SAT) regarding storage of the first plurality of sets of encoded data slices in a first set of storage units of the DSN and a second SAT regarding storage of the second plurality of sets of encoded data slices in a second set of storage units of the DSN;
dispersed storage error encoding, by the computing device, the first SAT to produce a first set of SAT slices and the second SAT to produce a second set of SAT slices;
sending, by the computing device, the first plurality of sets of encoded data slices and the first set of SAT slices to the first set of storage units;
sending, by the computing device, the second plurality of sets of encoded data slices and the second set of SAT slices to the second set of storage units; and
generating, by the computing device, a third SAT regarding storage of the first and second sets of SAT slices in the first and second set of storage units.
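
A minimal sketch of the partition-and-SAT flow in the claim above; the encode and store callables are assumed DSN helpers and the SAT layout is illustrative only:

    def write_large_object(data, encode, store):
        """Split the object into two partitions, encode each into sets of slices,
        build a per-partition segment allocation table (SAT), encode the SATs, and
        send each partition's data slices plus SAT slices to its own storage-unit set."""
        mid = len(data) // 2
        partitions = [data[:mid], data[mid:]]
        sat_locations = []
        for unit_set, partition in enumerate(partitions):
            data_slices = encode(partition)
            sat = {"unit_set": unit_set, "slices": len(data_slices)}
            sat_slices = encode(sat)
            store(unit_set, data_slices + sat_slices)
            sat_locations.append(unit_set)
        # Third SAT: records where the first and second SAT slice sets were stored.
        return encode({"sat_slice_locations": sat_locations})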

US Pat. No. 10,114,696

TRACKING DATA ACCESS IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a dispersed storage and task (DST) processing unit that includes a processor, the method comprises:receiving an access request from a first requesting entity via a network indicating a first original data object;
generating a first at least one read request for transmission to at least one storage unit to retrieve a plurality of encoded original data slices associated with the first original data object;
generating a first regenerated original data object by utilizing a decoding scheme on the plurality of encoded original data slices; and
generating a first transformed data object for transmission to the first requesting entity via the network by utilizing a transformation function on the first regenerated original data object based on a first entity identifier associated with the first requesting entity.

US Pat. No. 10,114,695

INFORMATION PROCESSING DEVICE, SEMICONDUCTOR DEVICE, AND MEMORY INSPECTION METHOD

FUJITSU LIMITED, Kawasak...

1. An information processing device comprising:a processor that executes processing of data; and
a memory module that includes a first memory in which a plurality of memory chips, each of which stores the data, are mounted in layers, and a memory controller that controls the first memory, the memory controller:
inspects the data;
executes correction processing of the data when a single bit error is detected;
determines, when a single bit error is detected in a memory chip which is included in the plurality of memory chips and is mounted in a first layer of the layers, a first inspection area provided in another memory chip which is included in the plurality of memory chips and is mounted in another layer of the layers, based on a first location at which the single bit error occurs; and
executes first inspection of data in the first inspection area.

US Pat. No. 10,114,694

METHOD AND CONTROLLER FOR RECOVERING DATA IN EVENT OF PROGRAM FAILURE AND STORAGE SYSTEM USING THE SAME

Storart Technology Co. Lt...

1. A method for recovering data in event of a program failure, comprising the steps of:A. receiving a write data to be programmed into a plurality of non-volatile memory units;
B. generating a parity from the write data and separating the write data into a plurality of sub-data;
C. storing the parity in a volatile memory;
D. programming each of the plurality of sub-data into some of the plurality of non-volatile memory units;
E. determining if step D is successful; and
F. if the result of step E is no, recovering the sub-data in at least one program-failed non-volatile memory unit with the parity in the volatile memory and other sub-data successfully programmed, wherein the recovered sub-data is stored in a non-volatile memory unit that is not among the ones which the sub-data are programmed into.
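
A minimal sketch of steps B and F using byte-wise XOR as the parity scheme; the claim does not name a specific parity function, so XOR is an illustrative choice:

    def xor_parity(blocks):
        """Byte-wise XOR parity over equally sized sub-data blocks."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, b in enumerate(block):
                parity[i] ^= b
        return bytes(parity)

    def recover(surviving_blocks, parity):
        """Rebuild the sub-data that failed to program by XORing the parity held in
        volatile memory with every successfully programmed sub-data block."""
        return xor_parity(surviving_blocks + [parity])

    sub_data = [b"\x01\x02", b"\x0f\x0f", b"\xa0\x0a"]
    p = xor_parity(sub_data)
    assert recover([sub_data[0], sub_data[2]], p) == sub_data[1]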

US Pat. No. 10,114,693

MEMORY SYSTEMS AND ELECTRONIC SYSTEMS PERFORMING AN ADAPTIVE ERROR CORRECTION OPERATION WITH PRE-CHECKED ERROR RATE, AND METHODS OF OPERATING THE MEMORY SYSTEMS

SK hynix Inc., Icheon-si...

1. A memory system comprising:a test vector generator configured to generate a test vector to be written into a memory device;
a data discrepancy checker configured to compare read data outputted from the memory device with the test vector to generate an information signal corresponding to a comparison between the read data and the test vector;
an error correction code (ECC) controller configured to perform an ECC encoding operation and an ECC decoding operation according to any one among a plurality of ECC levels based on a control signal; and
a memory controller configured to control the test vector generator, the data discrepancy checker and the ECC controller,
wherein the memory controller transmits the control signal corresponding to an error rate of the memory device to the ECC controller, based on the information signal generated by the data discrepancy checker.
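
A minimal sketch of pre-checking the error rate against a written test vector and picking an ECC level from it; the rate boundaries and level names are illustrative assumptions:

    ECC_LEVELS = [(0.001, "ecc-light"), (0.01, "ecc-medium"), (1.0, "ecc-strong")]

    def pick_ecc_level(test_vector, read_back):
        """Compare read data with the test vector that was written, derive an error
        rate, and return the ECC level to use for subsequent encode/decode operations."""
        errors = sum(1 for a, b in zip(test_vector, read_back) if a != b)
        rate = errors / len(test_vector)
        for limit, level in ECC_LEVELS:
            if rate <= limit:
                return level
        return ECC_LEVELS[-1][1]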

US Pat. No. 10,114,691

INFORMATION STORAGE SYSTEM

HITACHI, LTD., Tokyo (JP...

1. An information storage system for improving reliability, comprising:a first storage apparatus that includes a storage device and a hardware processor operatively coupled with the storage device configured to provide a first logical volume;
a second storage apparatus that includes a storage device and a hardware processor operatively coupled with the storage device configured to provide a second logical volume; and
a quorum accessed from the first storage apparatus and the second storage apparatus and including information regarding states of the first storage apparatus and the second storage apparatus,
wherein the hardware processor of the first storage apparatus and the hardware processor of the second storage apparatus are configured to make a host recognize the first logical volume of the storage device in the first storage apparatus and the second logical volume of the storage device in the second storage apparatus as a single volume,
wherein the first storage apparatus is configured to read, by the hardware processor, first data from the first logical volume and send the first data to the host upon receiving a read command to the single volume sent from the host,
wherein the second storage apparatus is configured to read, by the hardware processor, second data from the second logical volume and send the second data to the host upon receiving a read command to the single volume sent from the host,
wherein the hardware processor of the first storage apparatus is configured to, upon receiving a write command to the single volume sent from the host, write data to the first logical volume and the second logical volume,
wherein the hardware processor of the second storage apparatus is configured to, upon receiving a write command to the single volume sent from the host, write data to the first logical volume and the second logical volume, and
wherein, after detecting communication failure with the quorum, the hardware processors of the first storage apparatus and the second storage apparatus are configured to halt use of the quorum and change a state from Pair state to RC-pair state,
wherein, the hardware processor of the first storage apparatus is configured to, upon receiving a write command to the single volume sent from the host, write data to the first logical volume and the second logical volume in the RC-pair state,
wherein, after detecting communication failure between the first storage apparatus and the second storage apparatus in the Pair state, either the hardware processor of the first storage apparatus or the hardware processor of the second storage apparatus is configured to continue I/O processing according to the information in the quorum, and
wherein, after detecting communication failure between the first storage apparatus and the second storage apparatus in the RC-Pair state, the hardware processor of the first storage apparatus is configured to continue I/O processing if possible, and the hardware processor of the second storage apparatus is configured to halt I/O processing.

US Pat. No. 10,114,690

MULTI-DIE STATUS MODE FOR NON-VOLATILE STORAGE

SanDisk Technologies LLC,...

1. An apparatus comprising:a plurality of memory die; and
a memory controller configured to:
broadcast a first status command to the plurality of memory die;
receive a first status response concurrently from the plurality of memory die based on the first status command;
broadcast a second status command to the plurality of memory die;
receive a second status response concurrently from the plurality of memory die based on the second status command;
compare responses to the first status command to responses to the second status command to detect one or more memory die that no longer function properly and failed to respond to the status commands; and
send a reset command to the one or more memory die that failed to respond to the status commands to restore functionality to the one or more memory die.
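
A minimal Python sketch of the detection step: a die that answers neither broadcast status command is treated as no longer functioning and is sent a reset. The die names and response dictionaries are hypothetical stand-ins for the controller's actual status interface:

def find_hung_dies(dies, first_responses, second_responses):
    # A die with no entry for either broadcast is treated as non-functional.
    return [d for d in dies
            if first_responses.get(d) is None and second_responses.get(d) is None]

dies = ["die0", "die1", "die2", "die3"]
first = {"die0": "READY", "die1": "READY", "die3": "BUSY"}    # die2 is silent
second = {"die0": "READY", "die1": "READY", "die3": "READY"}  # die2 still silent
for die in find_hung_dies(dies, first, second):
    print(f"RESET -> {die}")       # reset command to restore functionality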

US Pat. No. 10,114,689

DYNAMIC PLAYLIST GENERATION

Amazon Technologies, Inc....

12. A non-transitory computer-readable storage medium having stored thereon executable instructions that, as a result of being executed by one or more processors of a computer system, cause the computer system to:detect that a first media stream has ceased being received for a threshold amount of time, the first media stream corresponding to a first playlist comprising a first sequence of references, the first sequence of references corresponding to respective segments of the first media stream, the first playlist associated with an identifier usable at least in part to obtain a copy of the first playlist;
determine a second playlist associated with the identifier that comprises a second sequence of references, wherein the second sequence of references:
switches from a first reference corresponding to a segment of the first media stream to a second reference corresponding to a segment of a second media stream; and
includes a reference to a default segment that precedes the second reference of the second media stream, the default segment having a duration determined based at least in part on a time gap between the segment of the first media stream and the segment of the second media stream; and
allow a copy of the second playlist to be obtained at least in part by using the identifier.
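
A minimal Python sketch of building the second playlist: keep the references of the stalled first stream, insert a default segment sized to the time gap, then continue with the second stream. Segment records are simplified (uri, start, duration) tuples invented for the example:

def build_failover_playlist(playlist_a, playlist_b, stall_time, default_uri):
    # Keep stream-A segments that completed before the stall was detected.
    kept = [s for s in playlist_a if s[1] + s[2] <= stall_time]
    upcoming = [s for s in playlist_b if s[1] >= stall_time]
    if not kept or not upcoming:
        return kept + upcoming
    last_a_end = kept[-1][1] + kept[-1][2]
    gap = upcoming[0][1] - last_a_end          # time gap between the two streams
    slate = [(default_uri, last_a_end, gap)] if gap > 0 else []
    return kept + slate + upcoming             # same identifier serves this playlist

a = [("a1.ts", 0.0, 6.0), ("a2.ts", 6.0, 6.0)]      # stream A stops after a2
b = [("b3.ts", 18.0, 6.0), ("b4.ts", 24.0, 6.0)]
print(build_failover_playlist(a, b, stall_time=12.0, default_uri="slate.ts"))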

US Pat. No. 10,114,688

SYSTEM AND METHOD FOR PERIPHERAL BUS DEVICE FAILURE MANAGEMENT

Dell Products L.P., Roun...

1. A method for managing peripheral device failures, comprising:defining, at a management module, a first cluster of redundant bus devices and a second cluster of redundant bus devices, wherein each cluster contains an active bus device and one or more passive bus devices, wherein the bus devices in the first cluster are different types of bus devices than those in the second cluster;
detecting, at a processor of a peripheral bus, a failure of a first bus device at a downstream port from the processor, wherein the downstream port is populated by the first bus device, wherein the processor is communicatively coupled at an upstream port to a root complex and the processor is configured to isolate the failure of the first bus device from the root complex, wherein the processor further includes an arbitration entity communicatively coupled to and located downstream from the root complex, the arbitration entity configured to present the first bus device upstream to the root complex, the arbitration entity further associated with the management module, and wherein the first bus device is a member of the first cluster; and
responsive to detecting the failure at the management module:
suspending communication of data to the first bus device;
receiving information regarding a second bus device, wherein the second bus device is a member of the first cluster and redundant of the first bus device;
transitioning the second bus device from a passive state to an active state; and
assigning the second bus device to the downstream port.

US Pat. No. 10,114,687

SYSTEM FOR CHECKING THE INTEGRITY OF A COMMUNICATION BETWEEN TWO CIRCUITS

STMICROELECTRONICS (GRENO...

1. A method, comprising:sending transactions from a master circuit to a slave circuit;
verifying an integrity of communications between the master circuit and the slave circuit, the verifying including:
updating a first cyclic multibit signature based on each transaction sent by the master circuit to the slave circuit;
updating a second cyclic multibit signature based on each transaction received by the slave circuit; and
comparing one or more bits based on the second cyclic multibit signature with corresponding bits based on the first cyclic multibit signature, wherein a number of the one or more bits based on the second cyclic multibit signature being compared is less than a total number of bits of the second cyclic multibit signature; and
detecting and responding to an error condition based on the comparing, wherein the method includes:
sending the transactions on a bus coupled between the master circuit and the slave circuit;
storing each transaction in a first-in-first-out (FIFO) memory having a depth corresponding to a delay introduced by the bus;
updating the first cyclic multibit signature based on an output of the FIFO memory; and
transmitting from the slave circuit to the master circuit the one or more bits based on the second cyclic multibit signature on a line of the bus.
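
A minimal Python sketch of the integrity check, using CRC-32 as the cyclic multibit signature (an assumed choice; the claim only requires a cyclic signature). The master delays its own signature update through a FIFO matching the bus latency, and only a few low-order bits of the slave signature are compared:

import binascii
from collections import deque

BUS_DELAY = 2        # FIFO depth corresponding to the delay introduced by the bus
CHECK_BITS = 4       # compare fewer bits than the full 32-bit signature

def update_sig(sig, txn):
    return binascii.crc32(txn, sig)

master_sig = slave_sig = 0
fifo = deque()

for txn in [b"write 0x10", b"read 0x20", b"write 0x30", b"read 0x40", None, None]:
    if txn is not None:
        fifo.append(txn)                              # master: transaction sent on the bus
    if len(fifo) > BUS_DELAY or (txn is None and fifo):
        arrived = fifo.popleft()                      # transaction now reaches the slave
        slave_sig = update_sig(slave_sig, arrived)    # slave updates on receipt
        master_sig = update_sig(master_sig, arrived)  # master updates from the FIFO output

mask = (1 << CHECK_BITS) - 1
assert (slave_sig & mask) == (master_sig & mask)      # partial comparison succeeds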

US Pat. No. 10,114,686

REDUCING SIZE OF DIAGNOSTIC DATA DOWNLOADS

International Business Ma...

1. A method for reducing size of diagnostic data downloads, comprising:reading a format and a content of a diagnostic data file;
applying pre-defined priority rules to the diagnostic data file utilizing the format and the content;
assigning a priority level to the diagnostic data file based on an ability of the diagnostic data file to diagnose a failure as determined by the pre-defined priority rules;
wherein the assigning of the priority level to the diagnostic data file includes utilizing multiple levels of granularity based on one or more data elements, wherein the data elements include a structure of a file type, wherein at least a first failure data capture (FFDC) file has a higher priority level than a log file;
wherein the pre-defined priority rules prioritize the diagnostic data file by a file type, a format of the diagnostic data file, a content of the diagnostic data file, and a closeness of data elements in the diagnostic data file to a failure point with respect to a time, wherein data elements within 10 seconds of the failure point have a highest priority level;
wherein data elements of the diagnostic data file are prioritized by the closeness of data elements to a failure point and data elements further from a failure point are prioritized with decreasing priority levels;
ordering the diagnostic data file into a file stream, wherein the file stream is in a Tape Archive (TAR) format;
streaming the file stream to a remote diagnostic system, wherein the streaming of the file stream to the remote diagnostic system includes:
compiling a second diagnostic data file having a highest level of priority not yet sent to a stream file and obtaining a difference file of a difference between the second diagnostic data file and previous data of the second diagnostic data file with a higher level of priority; and
writing the difference file to a stream file, wherein the remote diagnostic system reconstructs an end file from the difference file;
receiving a notification from the remote diagnostic system to stop the streaming in response to sufficient diagnostic data to diagnose the failure being received by the remote diagnostic system; and
stopping the streaming in response to the notification, wherein at least a portion of the diagnostic data file is not streamed to the remote diagnostic system.
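
A minimal Python sketch of the prioritization step: FFDC files outrank logs, and data closer to the failure time gets a higher priority, with anything inside 10 seconds of the failure in the top band. The file records and the numeric priority scale are assumptions made for the example:

from dataclasses import dataclass

@dataclass
class DiagFile:
    name: str
    file_type: str               # "ffdc" or "log"
    seconds_from_failure: float

def priority(f: DiagFile) -> int:
    base = 100 if f.file_type == "ffdc" else 50   # FFDC outranks plain logs
    if f.seconds_from_failure <= 10:
        return base + 40                          # highest band near the failure point
    return base + max(0, 30 - int(f.seconds_from_failure // 60))

files = [
    DiagFile("core.ffdc", "ffdc", 3),
    DiagFile("trace.log", "log", 8),
    DiagFile("old.log", "log", 600),
]
stream_order = sorted(files, key=priority, reverse=True)
print([f.name for f in stream_order])    # ['core.ffdc', 'trace.log', 'old.log']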

US Pat. No. 10,114,685

SYSTEM AND METHOD FOR ERROR DETECTION OF EXECUTED PROGRAM CODE EMPLOYING COMPRESSED INSTRUCTION SIGNATURES

Infineon Technologies AG,...

1. A system, comprising:a first processor configured to:
load from a first memory an instruction block comprising a plurality of opcodes and a stored error code;
for each opcode of the plurality of opcodes of the instruction block, determine a first determined signature depending on said opcode;
determine, for the instruction block, a determined error code which depends on each opcode and on the first determined signature of each opcode of the plurality of opcodes of the instruction block; and
determine that a first error occurred, if the determined error code is different from the stored error code; and
a second processor configured to:
determine a second determined signature for a current opcode of the plurality of opcodes of the instruction block depending on said current opcode; and
determine that a second error occurred, if the second determined signature for the current opcode is different from the first determined signature for the current opcode.
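
A minimal Python sketch of the two checks, with CRC-32 standing in for the signature and error-code functions (which the claim leaves open): the first processor recomputes a block-level error code from the opcodes and their signatures, while the second processor re-derives each opcode's signature and compares it against the first result:

import binascii

def opcode_signature(opcode: int) -> int:
    return binascii.crc32(opcode.to_bytes(4, "little"))

def block_error_code(opcodes) -> int:
    # Depends on every opcode and on every opcode's signature, per the claim.
    code = 0
    for op in opcodes:
        code = binascii.crc32(op.to_bytes(4, "little"), code)
        code = binascii.crc32(opcode_signature(op).to_bytes(4, "little"), code)
    return code

opcodes = [0x91000421, 0xD65F03C0, 0xAA0003E1]
stored_error_code = block_error_code(opcodes)       # stored with the instruction block

# First processor: block-level check against the stored error code.
first_error = block_error_code(opcodes) != stored_error_code

# Second processor: per-opcode check against the first processor's signatures.
first_signatures = [opcode_signature(op) for op in opcodes]
second_error = any(opcode_signature(op) != sig
                   for op, sig in zip(opcodes, first_signatures))
print(first_error, second_error)                    # both False when nothing is corrupted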

US Pat. No. 10,114,683

MANAGING A VIRTUAL OBJECT

INTERNATIONAL BUSINESS MA...

1. A method of managing a virtual object, said method comprising:storing said virtual object in a database accessible to a server device, said database comprising a number of avatars and a number of virtual objects distinct from said avatars; and
in response to a non-subscriber user performing, with one of said avatars, a first action on said virtual object, sending a message from said server device to at least one user that subscribes to said virtual object;
wherein said at least one user that subscribes to said virtual object comprises at least one user subscribing to said virtual object that is not a member of a community to which said non-subscriber user belongs.

US Pat. No. 10,114,682

METHOD AND SYSTEM FOR OPERATING A DATA CENTER BY REDUCING AN AMOUNT OF DATA TO BE PROCESSED

INTERNATIONAL BUSINESS MA...

6. A method for reducing data by a hardware-implemented reduce task tracker in a data center, the method comprising:in response to a reduce task distributed by a job tracker:
acquiring one or more map outputs for key names having given version information assigned by map task trackers;
wherein the hardware-implemented reduce task tracker comprises a special-purpose integrated circuit;
wherein the acquired one or more map outputs comprise one or more current map outputs with the given version information and one or more historical map outputs with historical version information indicating a time prior to the version information;
wherein the given version information was assigned by a map task tracker and indicates when a map task from which the one or more current map outputs originated was added;
wherein acquiring the one or more map outputs for key names comprises:
receiving from the job tracker the reduce task, the reduce task specifying a given version value;
requesting from the map task tracker the one or more current map outputs having the given version value;
receiving from the map task trackers the one or more current map outputs that have the given version value and are associated with the reduce task tracker;
extracting key names from the received one or more current map outputs;
requesting from the map task trackers the one or more historical map outputs for the extracted key names having historical version values prior to the given version value; and
receiving from the map task trackers the one or more historical map outputs for the key names having the historical version values prior to the given version value; and
executing the reduce task on the acquired one or more map outputs.

US Pat. No. 10,114,681

IDENTIFYING ENHANCED SYNCHRONIZATION OPERATION OUTCOMES TO IMPROVE RUNTIME OPERATIONS

QUALCOMM Incorporated, S...

1. A method of identifying enhanced synchronization operation outcomes in a computing device, comprising:receiving a plurality of resource access requests for a first resource of the computing device from a plurality of computing elements of the computing device including a first resource access request having a first requester identifier from a first computing element of the plurality of computing elements and a second resource access request having a second requester identifier from a second computing element of the plurality of computing elements;
granting the first computing element access to the first resource based on the first resource access request;
returning a response to the second computing element including the first requester identifier as a winner computing element identifier;
determining whether the second computing element has a task to execute;
sending a signal to steal a task from the first computing element in response to determining that the second computing element does not have a task to execute, wherein the signal includes the second requester identifier;
receiving a response to the signal to steal a task including a task winner computing element identifier;
comparing the second requester identifier to the task winner computing element identifier;
determining whether the second computing element is a task winner computing element by the second requester identifier matching the task winner computing element identifier; and
adjusting a task stealing list of the second computing element in response to determining that the second computing element is not the task winner computing element.

US Pat. No. 10,114,677

METHOD AND SYSTEM FOR WORKLOAD RECOMMENDATIONS ON INFORMATION HANDLING SYSTEMS

Dell Products L.P., Roun...

1. A method of deploying a recommended workload on an information handling system comprising:extracting solution details from a solution template queried from a solution template repository, wherein the solution details include infrastructure information and workload information from the solution template;
transforming the solution details into a document;
storing the document in a data store;
obtaining existing components of the information handling system;
accessing the data store to extract components required by the document corresponding to at least one of: the infrastructure information or the workload information;
determining whether the existing components of the information handling system correlate to the components required by the document;
displaying the recommended workload based on the determination that the existing components of the information handling system correlate to the components required by the document, wherein the recommended workload includes the infrastructure information and workload information from the document;
deploying the recommended workload based on a selection of the recommended workload;
obtaining monitoring data of the existing components of the information handling system; and
displaying a resource utilization for the selected workload based on the monitoring data.

US Pat. No. 10,114,676

BUILDING MULTIMODAL COLLABORATIVE DIALOGS WITH TASK FRAMES

Microsoft Technology Lice...

1. A system comprising:at least one processor; and
memory communicatively coupled to the at least one processor, encoding computer executable instructions that, when executed by the at least one processor perform a method, the method comprising:
receiving initial input at a client, wherein the input requests a digital assistant application to perform a task;
sending the initial input to a remote service;
receiving, by the client from the remote service, a predefined task frame to serve as a master reference for completing the task, wherein the task frame is a non-graphical-user-interface (GUI) data structure including a value for a status of the task and two or more required parameters to complete the task, wherein the task frame includes a name and a value for each of the two or more required parameters;
based on the task frame, determining, by the client, a next action to complete the task;
performing, by the client, the next action to complete the task;
based on performance of the task, updating, by the client, the task frame by updating one or more values for at least one of the two or more required parameters to create an updated task frame;
sending, by the client, the updated task frame to the remote service; and
receiving, by the client, a further updated version of the task frame from the remote service, wherein at least one of a value for a task frame parameter and a status of the task frame in the further updated task frame has been updated by the remote service.

US Pat. No. 10,114,675

APPARATUS AND METHOD OF MANAGING SHARED RESOURCES IN ACHIEVING IO VIRTUALIZATION IN A STORAGE DEVICE

Toshiba Memory Corporatio...

1. A data storage device comprising:a non-volatile semiconductor memory device; and
a controller for the non-volatile semiconductor memory device, the controller including a virtual function mapping unit that maintains a function mapping table which stores values that are programmable by a host system to which the data storage device is connected and associate virtual functions with portions of shared resources of the controller, the controller configured to:
receive a command from the host system to read data from or write data in the non-volatile semiconductor memory device, the command including a virtual function identifier and a transaction identifier;
identify, via the virtual function mapping unit, a portion of a shared resource of the controller based on the virtual function identifier and the transaction identifier, wherein the portion of the shared resource comprises a time-slice of a shared processing resource; and
access the identified portion of the shared resource based on the received command.

US Pat. No. 10,114,674

SORTING DATABASE COLLECTIONS FOR PARALLEL PROCESSING

International Business Ma...

1. A method comprising:determining, by at least one processor of a computing device, an order of collections of records by sorting the collections of records according to a byte length of an index key for each collection of records of the collections of records, wherein the byte length of the index key for each collection of records of the collections of records corresponds to a sum of byte lengths of one or more columns in the respective collection of records;
identifying, by the at least one processor, subsets of the collections of records as having index keys of equal byte length;
modifying, by the at least one processor, the order of the collections of records by sorting each subset of the collections of records of the subsets of the collections of records identified as having index keys of equal byte length according to a number of records per collection;
and assigning, by the at least one processor, the collections of records to a plurality of parallel processing tasks based on the modified order of the collections of records, including assigning a respective two or more consecutive collections of records of the collections of records according to the modified order of the collections of records to each parallel processing task of the plurality of parallel processing tasks,
wherein assigning the collections of records to the plurality of parallel processing tasks comprises:
determining an accumulated total number of the records assigned to a particular parallel processing task among the plurality of parallel processing tasks based at least in part on a number of records among the collections of records currently assigned to the particular parallel processing task and any previously addressed parallel processing tasks among the plurality of parallel processing tasks;
determining an accumulated average number of the records assigned to the particular parallel processing task based at least in part on determining an average number of records among the collections of records per parallel processing task among the plurality of parallel processing tasks and accumulating the average number of records over the previously addressed parallel processing tasks and the particular parallel processing task;
assigning an initial portion of the collections of records to the particular parallel processing task in the order of the collections of records while the accumulated total number of the records assigned to the particular parallel processing task is less than the accumulated average number of records assigned to the particular parallel processing task;
and sending, by the computing device, the collections of records to a plurality of processing units in accordance with the assigning of the collections of records to the plurality of parallel processing tasks in the order of the collections of records, wherein each of the processing units is available to execute a respective one of the parallel processing tasks.
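
A minimal Python sketch of the ordering and assignment logic: sort collections by index-key byte length, break ties by record count, then hand out consecutive collections to each task while its running total stays below the accumulated per-task average. Collection tuples of (name, key_bytes, record_count) are invented for the example:

def order_collections(collections):
    # Primary key: index-key byte length; tie-break: records per collection.
    return sorted(collections, key=lambda c: (c[1], c[2]))

def assign_to_tasks(ordered, num_tasks):
    total_records = sum(c[2] for c in ordered)
    avg_per_task = total_records / num_tasks
    tasks, assigned_total, idx = [], 0, 0
    for t in range(1, num_tasks + 1):
        task, accumulated_avg = [], avg_per_task * t
        # Assign consecutive collections while the running total stays under
        # the accumulated average for this task.
        while idx < len(ordered) and (assigned_total < accumulated_avg or not task):
            task.append(ordered[idx])
            assigned_total += ordered[idx][2]
            idx += 1
        tasks.append(task)
    return tasks

cols = [("C1", 8, 500), ("C2", 4, 900), ("C3", 8, 500), ("C4", 12, 100), ("C5", 4, 200)]
for i, task in enumerate(assign_to_tasks(order_collections(cols), num_tasks=2)):
    print(f"task {i}:", [c[0] for c in task])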

US Pat. No. 10,114,673

HONORING HARDWARE ENTITLEMENT OF A HARDWARE THREAD

International Business Ma...

1. A method comprising:managing an entitlement requirement of a logical partition entirely in processor hardware such that preemption by a hypervisor is excluded by:
associating a logical partition in a set of logical partitions to an entitlement special purpose register in a set of entitlement special purpose registers, wherein the logical partition does not time-share the entitlement special purpose register;
generating, by computer software running on computer hardware, an entitlement processor resource percentage for the logical partition, wherein the entitlement processor resource percentage holds an enumeration that represents a minimum percentage of instruction dispatches for the logical partition;
setting an entitlement special purpose register percentage for the entitlement special purpose register equal to the entitlement processor resource percentage; and
dispatching, by an instruction dispatcher of a computer processor, instructions from one or more hardware threads to the logical partition based, at least in part, on the entitlement special purpose register percentage.

US Pat. No. 10,114,672

USER-CENTERED TASK SCHEDULING FOR MULTI-SCREEN VIEWING IN CLOUD COMPUTING ENVIRONMENT

Thomson Licensing, Issy ...

1. A method performed by one or more processors, comprising:receiving a plurality of requests from users, wherein each user has a plurality of associated viewing screens;
determining a priority score for each one of said requests based on at least one factor related to said plurality of associated viewing screens; and
processing one of said requests based on said priority score, wherein:
the at least one factor related to said plurality of associated viewing screens includes at least one of whether said request is from a user device associated with a dominant screen that is most frequently used by said user, and whether none of said plurality of associated viewing screens for said user is active;
the priority score is further based on a deadline associated with said request;
the deadline is based on an arrival time of said request and a user latency tolerance associated with said request; and
the user latency tolerance is based on an applicable user device.

US Pat. No. 10,114,671

INTERRUPTING A DEVICE BASED ON SENSOR INPUT

Lenovo (Singapore) PTE. L...

1. An apparatus comprising:an information handling device comprising one or more sensors, the information handling device in use by a first individual;
a processor of the information handling device; and
a memory that stores code executable by the processor to:
detect an interrupt cue from a second individual in response to input received from the one or more sensors, the interrupt cue comprising a voice command;
determine an identity of the second individual based on one or more characteristics of the second individual as determined by the input received from the one or more sensors, the one or more characteristics comprising one or more of a visual characteristic of the second individual and a wireless signal of the second individual;
determine that the second individual has a predefined relationship to the first individual;
determine one or more actively executing applications that the first user is using on the information handling device; and
interrupt the one or more actively executing applications that the first individual is using in response to the interrupt cue, determining that the voice command is associated with the determined identity of the second individual, and determining that the second individual has a predefined relationship to the first individual.

US Pat. No. 10,114,670

POINT-OF-USE-TOOLKIT

The Boeing Company, Chic...

1. An apparatus for implementation of a back-end system for providing a point-of-use toolkit, the apparatus comprising a processor and a memory storing executable instructions that, in response to execution by the processor, cause the apparatus to implement at least:a loading and synchronization system configured to:
receive an assignment of one or more work tasks of a work plan composed of a plurality of work tasks for manufacture of a tangible product, the one or more work tasks being assigned to a technician and received from a manufacturing scheduling system based on a schedule for performance of the plurality of work tasks; and in response thereto,
compile a point-of-use toolkit including comprehensive information regarding the one or more work tasks, including the loading and synchronization system being configured to encode the point-of-use toolkit in a format that is viewable on any of a plurality of front-end systems having different operating systems; and
transmit the point-of-use toolkit to a front-end system of the plurality of front-end systems associated with the technician;
a device and compatibility system configured to:
associate the front-end system with a viewer of a plurality of viewers, the viewer or the point-of-use toolkit being developed according to an architecture that is operating system independent and thus configured for compatibility with any of the different operating systems; and
a schedule interruption system operatively coupled to the loading and synchronization system, and configured to:
determine an occurrence of a delay associated with the schedule after the initiation of the one or more work tasks by the technician, wherein the delay impacts the assignment of the one or more work tasks, and wherein in at least one instance the occurrence of the delay is determined independent of user-input and based on information automatically retrieved from one or more of the manufacturing scheduling system, a manufacturing instructions library, a parts distribution database, or a tools distribution database; and
transmit information associated with the delay to the manufacturing scheduling system that in at least one instance causes the manufacturing scheduling system to update the assignment of the one or more tasks during execution of the one or more work tasks by the technician and based at least in part on the information associated with the delay,
wherein the loading and synchronization system is configured to receive the update to the assignment of the one or more tasks, and in response thereto, (i) compile a corresponding update of the point-of-use toolkit, and (ii) transmit the update of the point-of-use toolkit to the front-end system in real-time during the execution of the one or more work tasks by the technician.

US Pat. No. 10,114,669

DYNAMIC SMT

International Business Ma...

1. A method for simultaneous multithreading in a processor, the method comprising:measuring SMT-performance value of a software code, wherein the software code is executed in a simultaneous multithreading mode by the processor;
measuring non-SMT-performance value of the software code, wherein the software code is executed in a non-simultaneous multithreading mode by the processor;
comparing the SMT-performance value with the non-SMT-performance value and validating performance values, in a performance table, based on comparing a version of the software code with a version stored in the performance table with regard to the use of performance values in a decision to dispatch in SMT mode or non-SMT mode; and
dispatching the software code for an execution mode by the processor, depending on the comparison, wherein the execution mode is at least one of SMT-mode or non-SMT-mode, and wherein different portions of the software code can be executed in different execution modes based on resource requirements of the different portions.
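
A minimal Python sketch of the dispatch decision: measured SMT and non-SMT performance for a code section are cached in a performance table keyed by code version, stale entries trigger re-measurement, and otherwise the better-performing mode wins. The table layout and numbers are illustrative assumptions:

perf_table = {}   # code_id -> {"version": ..., "smt": ..., "non_smt": ...}

def record(code_id, version, smt_perf, non_smt_perf):
    perf_table[code_id] = {"version": version, "smt": smt_perf, "non_smt": non_smt_perf}

def choose_mode(code_id, current_version):
    entry = perf_table.get(code_id)
    if entry is None or entry["version"] != current_version:
        return "MEASURE"        # table entry is missing or stale for this code version
    return "SMT" if entry["smt"] >= entry["non_smt"] else "NON_SMT"

record("hot_loop", version="1.4", smt_perf=180.0, non_smt_perf=150.0)
print(choose_mode("hot_loop", "1.4"))    # SMT: measured throughput favours SMT mode
print(choose_mode("hot_loop", "1.5"))    # MEASURE: version changed, values not validated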

US Pat. No. 10,114,668

MANAGING PRIVATE USE OF PROGRAM EXECUTION CAPACITY

Amazon Technologies, Inc....

1. A computer-implemented method for managing execution of programs for users, the method comprising:receiving, by one or more programmed computing systems of a program execution service, instructions from a first user to execute a first program using indicated non-reserved program execution capacity that is not reserved by the program execution service for use of the first user;
determining, by the one or more programmed computing systems, and in response to the received instructions to execute the first program using the indicated non-reserved program execution capacity, to instead execute the first program by using reserved program execution capacity that is reserved by the program execution service for use of the first user and that is separate from the indicated non-reserved program execution capacity;
executing, based at least in part on the determining and by the one or more programmed computing systems, the first program on behalf of the first user by using the reserved program execution capacity;
receiving, by the one or more programmed computing systems and during the executing of the first program using the reserved program execution capacity, a request from the first user to use the reserved program execution capacity of the first user to execute one or more second programs on behalf of the first user; and
determining, by the one or more programmed computing systems and based at least in part on the received request, to terminate the use by the first program of the reserved program execution capacity, and to use the reserved program execution capacity to execute the one or more second programs on behalf of the first user.

US Pat. No. 10,114,667

METHOD OF CONTROLLING COMMUNICATION PATH BETWEEN VIRTUAL MACHINES AND COMPUTER SYSTEM

Hitachi, Ltd., Tokyo (JP...

1. A computer system, comprising:one or more virtual machines operating on one or more physical machines each of which includes one or more CPUs, one or more memories, and one or more I/O devices;
wherein the physical machine has a hypervisor that controls the virtual machine;
wherein the hypervisor includes (1) a buffer control unit that controls an actual buffer and a virtual buffer allocated to the virtual machine and the I/O device, (2) a connection control unit that controls a communication process on the I/O device mounted in the physical machine, and (3) a memory management unit that holds an inter-LPAR communication table in which the communication path between the virtual machines is recorded, and communication port information is included and an operation position information table in which position information of the virtual machine is recorded, and memory address information is included;
wherein (A) when communication between a transmission side virtual machine and a reception side virtual machine is switched to communication between the transmission side virtual machine and another virtual machine, (A1) the buffer control unit switches an allocation of the virtual buffer serving as an alias of the actual buffer allocated to a communication port used by the transmission side virtual machine from a communication port with which the reception side virtual machine is able to perform communication to a communication port with which the other virtual machine is able to perform communication, and (A2) the buffer control unit performs memory address translation on a region of the memory referred to by the virtual buffer, and establishes a communication path between the transmission side virtual machine and the other virtual machine by associating a region of the memory referred to by the transmission side virtual machine and a region of the memory referred to by the other virtual machine, based on the memory address translation on the region of the memory referred to by the virtual buffer serving as the alias of the actual buffer allocated to the communication port used by the transmission side virtual machine;
wherein (B) when the communication path is changed to a communication path in which communication is performed between the transmission side virtual machine operating in a first physical machine and the other virtual machine operating in the first physical machine by a communication path change instruction of a management server, (B1) the buffer control unit acquires memory address information of the transmission side virtual machine and the other virtual machine from the operation position information table of the memory management unit and acquires communication port information of the transmission side virtual machine and the other virtual machine from the inter-LPAR communication table, (B2) the buffer control unit performs an allocation of the virtual buffer to a communication port of the other virtual machines and address changes based on a descriptor, and (B3) the buffer control unit uses a shared memory for the communication path between the transmission side virtual machine and the other virtual machine; and
wherein (C) when the communication path is changed to a communication path in which communication is performed between the transmission side virtual machine operating in a first physical machine and the other virtual machine operating in a second physical machine other than the first physical machine by a communication path change instruction of a management server, (C1) the buffer control unit acquires memory address information of the transmission side virtual machine and the connection control unit from the operation position information table of the memory management unit and acquires communication port information of the transmission side virtual machine and the connection control unit from the inter-LPAR communication table, (C2) the buffer control unit performs an allocation of the virtual buffer to a communication port of the connection control unit and address changes based on a descriptor, and (C3) the buffer control unit generates a communication path in which a region of the memory used by the transmission side virtual machine is communicated to the other virtual machine of the second physical machine through the connection control unit.

US Pat. No. 10,114,666

LOADING SOFTWARE COMPONENTS

EMC IP Holding Company LL...

1. A method executed by a processor for use in loading a plurality of Java components in connection with executing a first program, the plurality of Java components corresponding to one or more JAR files, the method comprising:receiving an index of indexed Java components that is at least a subset of the plurality of Java components, the index associating each of the indexed Java components with a respective one of the plurality of JAR files;
generating from the index a hash table that maps each indexed Java component to a respective one of the plurality of JAR files;
generating from the index a dynamic loading list of non-indexed Java components that are not included in the index;
loading the non-indexed Java components using the dynamic loading list; and
loading the indexed Java components using the hash table.
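
A minimal Python sketch of the two loading paths: indexed components resolve their JAR through a hash table built from the index, while everything else lands on a dynamic loading list handled the slow way. Component and JAR names are placeholders, and the loader functions only print what a real class loader would do:

index = {                                   # index shipped alongside the first program
    "com.example.Parser": "parser.jar",
    "com.example.Codec":  "codec.jar",
}
all_components = ["com.example.Parser", "com.example.Codec", "com.example.Util"]

jar_by_component = dict(index)              # hash table generated from the index
dynamic_loading_list = [c for c in all_components if c not in jar_by_component]

def load_from_jar(component, jar):
    print(f"loading {component} directly from {jar}")

def load_by_search(component):
    print(f"searching every JAR on the classpath for {component}")

for comp in dynamic_loading_list:           # non-indexed components: dynamic list
    load_by_search(comp)
for comp, jar in jar_by_component.items():  # indexed components: O(1) JAR lookup
    load_from_jar(comp, jar)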

US Pat. No. 10,114,665

COMMUNICATION NODE UPGRADE SYSTEM AND METHOD FOR A COMMUNICATION NETWORK

Level 3 Communications, L...

1. A communication node upgrade system comprising:a computing system comprising at least one processing system and at least one memory for storing instructions that are executed by the at least one processing system to:
identify an existing virtual machine (VM) to be upgraded, the existing VM comprising at least one communication node that provides one or more communication services for a communication network, wherein the existing VM is executed in a virtualized computing environment;
obtain upgraded software for the existing VM;
create a new VM in the virtualized computing environment using the upgraded software;
copy configuration information from the existing VM to the new VM, the configuration information including information associated with configuration of the existing VM to provide the communication services by the existing VM; and
replace operation of the existing VM with the new VM in the communication network.

US Pat. No. 10,114,664

SYSTEMS AND METHODS FOR AUTOMATED DELIVERY AND IDENTIFICATION OF VIRTUAL DRIVES

Veritas Technologies LLC,...

1. A computer-implemented method for automated delivery and identification of virtual drives, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:creating a drive-template archive that comprises a plurality of virtual-drive templates, each virtual-drive template comprising a burned-in configuration identifier that:
includes information describing the configuration of the virtual-drive template;
is encoded directly into the virtual-drive template such that systems external to the virtual-drive template are able to read the information describing the configuration of the virtual-drive template from the virtual-drive template; and
is not shared by any other virtual-drive template in the plurality of virtual-drive templates;
receiving, from a requesting application executing on a requesting virtual machine, a provision request to provision a virtual drive for the requesting virtual machine, the provision request identifying at least one configuration specification of the virtual drive;
in response to receiving the provision request, fulfilling the provision request by:
creating, from the plurality of virtual-drive templates, a copy of an appropriate virtual-drive template that matches the configuration specification of the provision request;
converting the copy of the appropriate virtual-drive template into a fulfilling virtual-drive template by modifying, based at least in part on identification information of the requesting application included in the provision request, the burned-in configuration identifier of the copy of the appropriate virtual-drive template to include the identification information of the requesting application, thereby indicating that the fulfilling virtual-drive template is intended to be utilized by the requesting application executing on the requesting virtual machine; and
providing the fulfilling virtual-drive template to the requesting virtual machine; and
utilizing, by the requesting application executing on the requesting virtual machine, the fulfilling virtual-drive template in response to determining, at the requesting virtual machine, that the information included within the burned-in configuration identifier of the fulfilling virtual-drive template indicates that the fulfilling virtual-drive template is intended to be utilized by the requesting application.

US Pat. No. 10,114,663

DISPLAYING STATE INFORMATION FOR COMPUTING NODES IN A HIERARCHICAL COMPUTING ENVIRONMENT

Splunk Inc., San Francis...

1. A computer-implemented method for dynamically monitoring performance of a virtual machine environment, comprising:causing display of an arrangement of user-selectable nodes corresponding to components in a virtual machine architecture operating in a virtual machine environment, wherein the user-selectable nodes are connected to represent hierarchical relationships between components in the virtual machine environment;
receiving input selecting one or more performance factors of interest from a plurality of different performance factors for assessing performance of the virtual machine environment;
computing performance data corresponding to the one or more selected performance factors, wherein the computed performance data includes a performance state determined by applying performance state criteria;
upon receiving a user selection of a node in the displayed architecture, causing display of the performance data for the corresponding component in the virtual machine environment, according to the selected one or more performance factors, wherein the nodes are displayed with an appearance characteristic that represents the performance state of the corresponding component, wherein the display includes a time graph indicating the performance data for the corresponding component according to one or more selected performance factors across a period of time as compared with other components to enable a user to detect changes in performance for the corresponding component;
when the display indicates that a change is warranted in the virtual machine environment, receiving input to add, modify, or delete the node connections; and
according to the received input, adding or deleting components in the virtual machine architecture or modifying the relationships between components in the virtual machine architecture.

US Pat. No. 10,114,662

UPDATING PROCESSOR TOPOLOGY INFORMATION FOR VIRTUAL MACHINES

Red Hat Israel, Ltd., Ra...

1. A method, comprising:identifying a plurality of virtual processors of a Non-Uniform Memory Access (NUMA) system;
associating, by a first physical processor, a first virtual processor of the plurality of virtual processors with a first proximity domain of the NUMA system, a second virtual processor of the plurality of virtual processors with a second proximity domain, a first memory block on a first physical processor of a plurality of physical processors with a third proximity domain and a second memory block on a second physical processor of the plurality of physical processors with a fourth proximity domain, wherein the first proximity domain, the second proximity domain, the third proximity domain and the fourth proximity domain are different from each other, and wherein the first virtual processor and the second virtual processor are associated with the first physical processor of the plurality of physical processors;
determining, by the first physical processor, that the first virtual processor has been moved to the second physical processor of the plurality of physical processors;
determining memory access latency values for the first virtual processor residing on the second physical processor;
updating a first element of a data structure storing memory access latency information between proximity domains, the first element associated with the first proximity domain of the first virtual processor and the third proximity domain of the first memory block; and
updating a second element of the data structure, the second element associated with the first proximity domain of the first virtual processor and the fourth proximity domain of the second memory block.
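
A minimal Python sketch of the table update: when a virtual processor migrates to another physical processor, the latency entries between its proximity domain and the two memory-block domains are rewritten. The domain numbers and latency values are illustrative:

# latency[(cpu_domain, memory_domain)] -> relative memory access latency
latency = {
    (0, 2): 10, (0, 3): 20,    # vCPU0's domain vs memory on CPU0 (domain 2) / CPU1 (domain 3)
    (1, 2): 10, (1, 3): 20,    # vCPU1's domain
}

def on_vcpu_migrated(vcpu_domain, old_local_mem_domain, new_local_mem_domain, table):
    # The vCPU moved: its old local memory is now remote and vice versa.
    table[(vcpu_domain, old_local_mem_domain)] = 20   # first element updated
    table[(vcpu_domain, new_local_mem_domain)] = 10   # second element updated

on_vcpu_migrated(0, old_local_mem_domain=2, new_local_mem_domain=3, table=latency)
print(latency[(0, 2)], latency[(0, 3)])    # 20 10 after vCPU0 moves to CPU1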

US Pat. No. 10,114,661

SYSTEM AND METHOD FOR FAST STARTING AN APPLICATION

ROKU, INC., Los Gatos, C...

1. A method for fast starting an application in an operating system, the method comprising:starting a plurality of channel applications during a boot up sequence of the operating system;
placing a first channel application of the plurality of channel applications into a suspend mode after an initial time has passed, wherein the initial time allows the plurality of channel applications to finish the boot up sequence and load resources;
adding the first channel application to a suspended list of channel applications that have booted up, wherein a second channel application of the plurality of channel applications is not on the suspended list;
receiving an application programming interface (API) call from the first channel application on the suspended list;
preventing, by a processor, the API call from being executed based upon a determination that the first channel application from which the API call is received is on the suspended list, such that processing resources associated with an execution of the API call are made available for processing functions other than executing the API call from the first channel application on the suspended list based upon the prevention of the execution;
incrementing a block count indicating a number of times that the API call to the processor is determined to be received from the first channel application while on the suspended list;
terminating the first channel application based on the block count exceeding a certain level, wherein the terminating includes removing the first channel application from the suspended list;
receiving a search term;
identifying a group of channels based on the search term;
determining that a particular channel from the group of channels is associated with the second channel application that is not on the suspended list; and
starting the second channel application associated with the particular channel in a background mode prior to receiving a selection of the particular channel from a user.
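
A minimal Python sketch of the suspend policy: API calls from a suspended channel application are blocked and counted, and once the block count passes a threshold the application is terminated and removed from the suspended list. The threshold and application names are made up for the example:

BLOCK_LIMIT = 3

suspended = {"ChannelA"}            # channel apps that booted and were suspended
block_counts = {"ChannelA": 0}

def handle_api_call(app):
    # Returns True if the call is executed, False if it is blocked.
    if app not in suspended:
        return True
    block_counts[app] += 1                   # blocked call: bump the count
    if block_counts[app] > BLOCK_LIMIT:
        suspended.discard(app)               # terminate and drop from the list
        print(f"terminating {app} after {block_counts[app]} blocked calls")
    return False

for _ in range(5):
    handle_api_call("ChannelA")
print(handle_api_call("ChannelB"))           # not suspended, so the call runs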

US Pat. No. 10,114,660

SOFTWARE APPLICATION DELIVERY AND LAUNCHING SYSTEM

1. A method, comprising:allocating, by a first computing device, a first virtual memory;
receiving, by the first computing device, executable code of a first software from a disparate second computing system via a communication network;
writing, by the first computing device, the executable code of the first software directly into the first virtual memory;
marking, by the first computing device, the first virtual memory as executable;
executing, by the first computing device, the executable code of the first software directly from the first virtual memory;
determining, by the first computing device via the execution of the first software, a specific version of a second software to be downloaded by identifying an operating system of the first computing device and identifying the specific version of the second software most compatible with the operating system;
allocating, by the first computing device via the execution of the first software, a second virtual memory for loading and executing the second software, the second virtual memory being distinct from the first virtual memory;
downloading, by the first computing device, executable code of the specific version of the second software for directly writing into the second virtual memory as facilitated by the downloaded executable code of the first software, the second software is disparate from the first software;
executing, by the first computing device, the executable code of the second software; and
de-allocating, by the first computing device, the first virtual memory distinct from the second virtual memory after the execution of the executable code of the first software is completed so that memory locations of the first virtual memory are released.

US Pat. No. 10,114,659

REMOTE PROVISIONING OF HOSTS IN PUBLIC CLOUDS

VMware, Inc., Palo Alto,...

1. A method for automatic provisioning of cloud instances of a host, the method comprising:performing a first boot of a cloud instance of a host by a master boot image executing on at least one server of a cloud environment, the master boot image is a stateless boot image;
on determining the first boot is complete, retrieving a cloud host-state configuration associated with the cloud instance of the host from cloud metadata storage, the cloud host-state configuration comprising host-specific configuration data;
installing the cloud host-state configuration onto the master boot image to generate a self-configured boot image, by a cluster manager; and
performing a second boot of the cloud instance of the host by executing the self-configured boot image to automatically provision the cloud instance of the host with the host-specific configuration data in the cloud environment.

US Pat. No. 10,114,658

CONCURRENT TESTING OF PCI EXPRESS DEVICES ON A SERVER PLATFORM

Baidu USA LLC, Sunnyvale...

1. A computer-implemented method for testing peripheral component interconnect express (PCIe) devices, the method comprising:detecting that a plurality of PCIe devices have been inserted into one or more PCIe buses of a data processing system;
in response to the detection, scanning all PCIe buses of the data processing system to discover the plurality of PCIe devices;
for each of the PCIe devices discovered,
repairing and retraining a PCIe link associated with the PCIe device, without rebooting the data processing system, and
loading a device driver instance for the PCIe device to be hosted by an operating system;
executing a test routine to concurrently test the plurality of PCIe devices via respective device driver instances;
in response to a signal indicating that the execution of the test routine has been completed, unloading the device driver instances of the PCIe devices; and
communicating with the operating system to remove the PCIe devices from a namespace of the operating system, without rebooting the data processing system.

US Pat. No. 10,114,657

MEMORY INTERFACE INITIALIZATION WITH PROCESSOR IN RESET

Western Digital Technolog...

1. A device comprising:control circuitry comprising:
a processor;
a memory interface;
memory interface initialization circuitry; and
non-volatile storage storing initialization parameters for initializing the memory interface;
wherein the control circuitry is configured to:
while the processor is held in reset, initialize the memory interface using the initialization parameters and the memory interface initialization circuitry;
after the memory interface has been initialized, receive instructions from a non-volatile memory module over the memory interface; and
after the processor has been released from reset, execute the instructions using the processor.

US Pat. No. 10,114,656

ELECTRONIC DEVICE SUPPORTING DIFFERENT FIRMWARE FUNCTIONS AND OPERATION METHOD THEREOF

ASMedia Technology Inc., ...

1. An electronic device comprising:a mainboard including a first storage circuit, a CPU circuit and a data transmission interface circuit, wherein the first storage circuit is configured to store a first firmware code of a basic input/output system, the CPU circuit is coupled to the first storage circuit, the CPU circuit is configured to execute the first firmware code to run the basic input/output system, and the data transmission interface circuit is coupled to the CPU circuit; and
an equipment coupled to the data transmission interface circuit of the mainboard and providing functions to the CPU circuit via the data transmission interface circuit, wherein the equipment includes a controller, the controller includes a second storage circuit, a microcontroller and a suspend power register, the microcontroller is coupled to the second storage circuit and the suspend power register, the second storage circuit is configured to store a second firmware code of the device, the suspend power register is configured to store an option data of the second firmware code, and the microcontroller executes the second firmware code to provide the device function to the CPU circuit according to the option data.

US Pat. No. 10,114,655

RAPID START UP METHOD FOR ELECTRONIC EQUIPMENT

Amlogic (Shanghai) Co., L...

1. A rapid start-up method for an electronic equipment comprising:providing the electronic equipment comprising:
i) a first storage device in communication with a central processing unit (CPU), wherein the first storage device stores a memory image; and
ii) a memory in communication with the CPU and the first storage device;
generating the memory image after processing of memory data in the memory and running state data of a related device in the electronic equipment, when a normal start-up of the electronic equipment is finished;
storing the memory image in the first storage device;
calling the memory image saved in the first storage device by the CPU, and restarting the electronic equipment according to the memory image;
detecting whether the memory image saved in the first storage device is corrupted by a CRC checking method;
wherein if the memory image saved in the first storage device is corrupted during the electronic equipment restarting, generating a newly generated memory image when the electronic equipment is restarted in a common way.
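
A minimal Python sketch of the restart decision, with zlib.crc32 standing in for the claimed CRC check: restore from the saved memory image only when the checksum matches, otherwise fall back to a normal start-up that regenerates the image:

import zlib

def save_image(image: bytes):
    return image, zlib.crc32(image)              # image plus its checksum

def restart(stored: bytes, stored_crc: int) -> str:
    if zlib.crc32(stored) == stored_crc:
        return "fast restart from the saved memory image"
    # Corrupted image: start up normally and regenerate the image afterwards.
    return "normal start-up, new memory image generated"

image, crc = save_image(b"memory data + running state of related devices")
print(restart(image, crc))                       # fast path
print(restart(image[:-1] + b"\x00", crc))        # corrupted image -> normal boot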

US Pat. No. 10,114,653

MULTIPLE-STAGE BOOTLOADER AND FIRMWARE FOR BASEBOARD MANAGER CONTROLLER AND PRIMARY PROCESSING SUBSYSTEM OF COMPUTING DEVICE

Lenovo Enterprise Solutio...

1. A method comprising:at power on of a computing device, executing, by a baseboard management controller (BMC) of the computing device, a first-stage bootloader program to download a second-stage bootloader program from a first server over a network;
after downloading the second-stage bootloader program, executing, by the BMC, the second-stage bootloader program that causes the BMC to:
determine a plurality of attributes of the computing device;
send a request to a second server over the network for third-stage firmware of the BMC, the request including the attributes;
download the third-stage firmware from the second server over the network, the third-stage firmware selected by the second server based on at least the attributes provided by the BMC in the request;
after downloading the third-stage firmware, executing, by the BMC, the third-stage firmware that causes the BMC to:
send a request to a third server for firmware of a primary processing subsystem of the computing device, the request including the attributes;
download the firmware of the primary processing subsystem from the third server over the network, the firmware selected by the third server based on at least the attributes provided by the BMC in the request; and
after downloading the firmware, starting, by the BMC executing the third-stage firmware, the primary processing subsystem by causing the primary processing subsystem to execute the firmware.

US Pat. No. 10,114,652

PROCESSOR WITH HYBRID PIPELINE CAPABLE OF OPERATING IN OUT-OF-ORDER AND IN-ORDER MODES

International Business Ma...

1. A circuit arrangement, comprising:a hybrid pipeline including a plurality of pipeline stages configured to execute at least one instruction stream, wherein the plurality of pipeline stages includes a dispatch stage, wherein the dispatch stage is configured to dispatch instructions to an issue queue when the hybrid pipeline is in an out-of-order mode, and wherein the dispatch stage is configured to bypass the issue queue when the hybrid pipeline is in an in-order mode; and
control logic coupled to the hybrid pipeline and configured to dynamically switch the hybrid pipeline between the out-of-order and in-order modes to selectively execute instructions from the at least one instruction stream using out-of-order and in-order pipeline processing, wherein the control logic includes a mode selector configured to dynamically switch the hybrid pipeline between the out-of-order and in-order modes based on power requirements of one or more upcoming instructions in the at least one instruction stream, and wherein the dispatch stage is further configured to:
maintain a reorder buffer active when the hybrid pipeline is in the out-of-order mode and when the hybrid pipeline is in the in-order mode;
allocate each in-order instruction and each out-of-order instruction that is included in incorrect load speculations to the reorder buffer;
squash an execution of each in-flight instruction associated with each thread that includes each in-order instruction and each out-of-order instruction included in the incorrect load speculations; and
replay each in-order data instruction and each out-of-order data instruction that were included in the incorrect load speculations from the reorder buffer after each incorrect load speculation is returned.
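
A minimal Python sketch of one plausible mode-selection policy: weigh the power demand of the next few instructions and switch the pipeline to in-order mode (bypassing the issue queue) when the window is power-hungry, otherwise stay out-of-order. The power weights, window size, and threshold are invented for illustration:

POWER = {"load": 2, "store": 2, "fma": 4, "add": 1, "branch": 1}
WINDOW, THRESHOLD = 4, 10

def select_mode(upcoming):
    demand = sum(POWER.get(op, 1) for op in upcoming[:WINDOW])
    return "IN_ORDER" if demand >= THRESHOLD else "OUT_OF_ORDER"

def dispatch(instr, mode):
    # Out-of-order mode goes through the issue queue; in-order mode bypasses it.
    return f"{instr} -> {'issue queue' if mode == 'OUT_OF_ORDER' else 'bypass to execute'}"

stream = ["fma", "fma", "load", "store", "add", "branch"]
mode = select_mode(stream)
print(mode, [dispatch(i, mode) for i in stream[:2]])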