US Pat. No. 10,891,205

GEOGRAPHICALLY DISPERSED DISASTER RESTART MULTITENANCY SYSTEMS

EMC IP Holding Company LL...

1. A geographically dispersed disaster restart (“GDDR”) multitenancy system comprising a first GDDR complex and a second GDDR complex, wherein:
the first GDDR complex comprises:
a first primary data storage device and a first secondary data storage device, wherein the first primary storage and the first secondary storage devices are geographically separated from one another; and
a first primary control instance and a first secondary control instance; and
the second GDDR complex comprises:
a second primary data storage device and a second secondary data storage device, wherein the second primary storage and the second secondary storage devices are geographically separated from one another; and
a second primary control instance and a second secondary control instance; the first GDDR complex and the second GDDR complex are run as a single instance of an operating system;
a third GDDR complex communicatively coupled to the first and second GDDR complexes, wherein the third GDDR further comprises:
a first tertiary data storage device; and
a first tertiary control instance, wherein the first tertiary control instance is located on the single logical partition.

US Pat. No. 10,891,204

MEMORY SYSTEM, A METHOD OF DETERMINING AN ERROR OF THE MEMORY SYSTEM AND AN ELECTRONIC APPARATUS HAVING THE MEMORY SYSTEM

SAMSUNG ELECTRONICS CO., ...

1. A memory system, comprising:
a memory apparatus comprising a buffer die, a plurality of core dies disposed on the buffer die, a plurality of channels and a through silicon via configured to transmit a signal between the buffer die and at least one of the core dies;
a memory controller configured to output a command signal and an address signal to the memory apparatus, to output a data signal to the memory apparatus and to receive the data signal from the memory apparatus; and
an interposer comprising a plurality of channel paths for connecting the memory controller and the channels,
wherein the memory apparatus further comprises a path selector for changing a connection state between the channels and channel paths, wherein the path selector includes a multiplexer connected to at least two of the channels and at least two of the channel paths, and
wherein when an error of the memory system is detected in a first connection state between the channels and the channel paths, the path selector changes the first connection state to a second connection state between the channels and the channel paths,
wherein the buffer die is connected to the channel paths,
wherein the buffer die comprises a plurality of buffers, and
wherein the buffers are configured to output the data signal transmitted through at least one of the channel paths to at least one of the channels.

US Pat. No. 10,891,203

PREDICTIVE ANALYSIS, SCHEDULING AND OBSERVATION SYSTEM FOR USE WITH LOADING MULTIPLE FILES

Bank of America Corporati...

1. An apparatus for monitoring loading of multiple files, the apparatus comprising:
a server comprising:
a receiver, said receiver being configured to receive a plurality of files, each of said files being associated with a specific data load job;
an observing engine configured to analyze the files, the analysis comprising deriving and/or otherwise determining the size and type of the files and deriving a timestamp reflecting the arrival time of each of the files at the server;
a comparison engine configured to compare the derived information with expected or anticipated information, said expected or anticipated information corresponding to information assigned to the files prior to receipt thereof;
a processor configured to:
use the comparison to create a profile for each of the plurality of files; and
gather a list of problematic file names and store the problematic file names and a problematic status code in the associated profile; and
a predictive analysis module configured to:
receive each profile; and
use the received profiles to provide to the server duration times for future file load jobs.
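
For orientation only, here is a minimal Python sketch of the kind of observe/compare/predict flow the claim recites; the field names, the EXPECTED table, and the throughput figure are assumptions for illustration, not Bank of America's implementation.

import os, time

EXPECTED = {"trades.csv": {"size": 1_000_000, "type": ".csv"}}  # info assigned before receipt

def observe(path):
    # Derive size, type, and an arrival timestamp for a received file.
    return {"name": os.path.basename(path),
            "size": os.path.getsize(path),
            "type": os.path.splitext(path)[1],
            "arrived": time.time()}

def build_profile(observed, expected):
    # Compare derived information with the expected information and flag problems.
    exp = expected.get(observed["name"], {})
    problematic = bool(exp) and (observed["type"] != exp["type"]
                                 or observed["size"] > 2 * exp["size"])
    return {**observed, "status": "PROBLEM" if problematic else "OK"}

def predict_duration(profiles, throughput_bytes_per_s=50_000_000):
    # Naive predictive step: estimate future load duration from observed sizes.
    return {p["name"]: p["size"] / throughput_bytes_per_s
            for p in profiles if p["status"] == "OK"}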

US Pat. No. 10,891,202

RECOVERY OF IN-MEMORY DATABASES USING A BACKWARD SCAN OF THE DATABASE TRANSACTION LOG

SAP SE, Walldorf (DE)

1. A method for data recovery in a database, the method comprising:
accessing a transaction log having stored therein a plurality of transaction blocks, each transaction block associated with a database transaction and corresponding operations that comprise the database transaction, the plurality of transaction blocks ordered according to when their corresponding database transactions were completed, the plurality of log records in each transaction block ordered according to when their corresponding operations were performed on the database;
accessing a range of transaction blocks in the transaction log in reverse chronological order, starting from a latest transaction block and ending with an earliest transaction block that occurs earlier in time than the latest transaction block; and
recovering data in the database from each of the transaction blocks accessed in reverse chronological order by recovering database rows in the database that were acted on by database transactions associated with the accessed transaction blocks,
wherein the transaction log is a first transaction log segment, the method further comprising processing a plurality of subsequent transaction log segments to recover data in the database, including for each of the subsequent transaction log segments:
accessing a range of transaction blocks in the subsequent transaction log segment in reverse chronological order, starting from a latest transaction block and ending with an earliest transaction block that occurs earlier in time than the latest transaction block; and
recovering data in the database from each of the transaction blocks accessed in reverse chronological order by recovering database rows in the database that were acted on by database transactions associated with the accessed transaction blocks.
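
The backward scan lends itself to a compact illustration. The sketch below is a simplified model, not SAP's code: segments and transaction blocks are visited newest-first, and a row already recovered from a newer block is never overwritten by an older version.

def recover(transaction_log_segments):
    recovered = {}                                   # row id -> recovered row image
    # Segments and blocks are visited in reverse chronological order.
    for segment in reversed(transaction_log_segments):
        for block in reversed(segment):              # one block per committed transaction
            for record in reversed(block["records"]):
                row_id, row_image = record["row"], record["after_image"]
                # An older version never overwrites a newer one already recovered.
                recovered.setdefault(row_id, row_image)
    return recovered

log = [[{"records": [{"row": 1, "after_image": "v1"}]}],
       [{"records": [{"row": 1, "after_image": "v2"}, {"row": 2, "after_image": "v1"}]}]]
print(recover(log))   # row 1 recovers as 'v2', row 2 as 'v1'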

US Pat. No. 10,891,201

DYNAMIC RULE BASED MODEL FOR LONG TERM RETENTION

EMC IP Holding Company LL...

1. A method for restoring data on a multi-tiered storage system comprising:
providing a server, which uses the data, coupled to a backup storage device and the multi-tiered storage system having a first storage device and a second storage device, wherein the first storage device has a higher performance than the second storage device;
storing a first group of the data on the first storage device and a second group of the data on the second storage device, wherein the first group of the data is accessed by the server more frequently than the second group of data;
writing by the multi-tiered storage system, storage device information as metadata for each of the first group of the data and the second group of the data;
backing up the first group of the data and the second group of the data on the backup storage device;
creating a data block map which includes a storage tier categorization that is based upon an access frequency for each of the first group of data and the second group of data, wherein the data block map comprises an identifier, an access frequency, a tiering policy, and a last categorization for each data chunk for each of the first group of data and the second group of data, wherein the last categorization is used to optimize migration of the data in case of loss of present categorization information for each data chunk;
recovering the first group of the data and the second group of the data on the backup storage device when a data restore is required;
identifying physical locations of the data, corresponding to a backup time, for each of the first group of the data and the second group of the data based upon the storage tier categorization for each of the first group of data and the second group of data recorded on the data block map; and
restoring a first group of the data on the first storage device and a second group of the data on the second storage device.
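
As a rough illustration of the claimed data block map and tier-aware restore (entry fields and chunk names are invented for the example; this is not EMC's format), each recovered chunk is placed back on the tier recorded for it, falling back to the last categorization when the present one is lost.

block_map = [
    {"id": "chunk-001", "access_freq": 120, "tiering_policy": "hot-to-ssd",
     "last_categorization": "tier-1"},
    {"id": "chunk-002", "access_freq": 3,   "tiering_policy": "cold-to-hdd",
     "last_categorization": "tier-2"},
]

def restore(block_map, backup, hot_device, cold_device):
    for entry in block_map:
        # Use the present categorization if available, else the last known one.
        tier = entry.get("categorization") or entry["last_categorization"]
        target = hot_device if tier == "tier-1" else cold_device
        target[entry["id"]] = backup[entry["id"]]

backup = {"chunk-001": b"hot data", "chunk-002": b"cold data"}
ssd, hdd = {}, {}
restore(block_map, backup, ssd, hdd)   # chunk-001 lands on ssd, chunk-002 on hdd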

US Pat. No. 10,891,200

DATA PROTECTION AUTOMATIC OPTIMIZATION SYSTEM AND METHOD

Cobalt Iron, Inc., Lawr...

1. A system comprising:
a memory; and
at least one processor to:
continually analyze at least one of metrics, events, and conditions in a computer network;
under normal operating conditions in the computer network, obtain a first level of data from at least one hardware device in the computer network;
detect that one of a condition and an event has occurred in the computer network;
automatically transmit an instruction to modify the first level of data obtained from the at least one hardware device to a second level of data more robust than the first level of data when one of the condition and the event has occurred, the second level of data collected from the at least one hardware device at a higher frequency than the first level of data and a higher fidelity than the first level of data;
collect the second level of data from the at least one hardware device for a data custody policy determined length period of time; and
store the second level of data obtained from the at least one hardware device.

US Pat. No. 10,891,199

OBJECT-LEVEL DATABASE RESTORE

Commvault Systems, Inc., ...

1. A system for backing up and restoring database data, the system comprising:
a computing device comprising computer hardware, the computing device having a data agent executing thereon configured to:
intercept a first request from a database application executing on the computing device to read a portion of a database file,
wherein a secondary copy of the database file resides on one or more secondary storage devices in a secondary storage subsystem and is organized on the one or more secondary storage devices as a plurality of first blocks,
wherein the database file is organized by the database application as a plurality of application-level blocks, and each block of the plurality of first blocks includes multiple application-level blocks,
wherein the portion corresponds to a subset of one or more database objects of a plurality of database objects represented by the database file;
determine a subset of first blocks of the plurality of first blocks corresponding to the portion of the database file included in the first request; and
issue a second request to restore the subset of first blocks from the one or more secondary storage devices; and
one or more secondary storage controller computers comprising computer hardware configured to:
in response to the second request:
access a table that maps the plurality of first blocks to one or more storage locations on the one or more secondary storage devices;
using the table, locate the subset of first blocks on the one or more secondary storage devices identified by the second request and retrieve the subset of first blocks from the one or more secondary storage devices;
forward the retrieved subset of first blocks for storage in one or more primary storage devices associated with the computing device;
extract application-level blocks corresponding to the requested portion from the retrieved subset of first blocks; and
forward the extracted application-level blocks to the database application.
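
To make the block-granularity arithmetic concrete, here is a small sketch assuming fixed 8 KiB application-level blocks packed into 64 KiB secondary-copy "first blocks"; the sizes and helper names are assumptions, not Commvault's on-media layout.

APP_BLOCK = 8 * 1024            # application-level block size (assumption)
FIRST_BLOCK = 64 * 1024         # secondary-copy block size; holds 8 app-level blocks

def first_blocks_for(app_block_numbers):
    # Which secondary-copy blocks must be restored to satisfy the request?
    ratio = FIRST_BLOCK // APP_BLOCK
    return sorted({n // ratio for n in app_block_numbers})

def extract(first_block_data, first_block_no, app_block_no):
    # Pull one application-level block out of a retrieved first block.
    ratio = FIRST_BLOCK // APP_BLOCK
    offset = (app_block_no - first_block_no * ratio) * APP_BLOCK
    return first_block_data[offset:offset + APP_BLOCK]

print(first_blocks_for([3, 9, 10]))   # [0, 1]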

US Pat. No. 10,891,198

STORING DATA TO CLOUD LIBRARIES IN CLOUD NATIVE FORMATS

Commvault Systems, Inc., ...

1. A tangible computer-readable storage medium excluding transitory signals, and which contains instructions for performing a method of storing a set of data within an information management system via one or more data storage operations of the information management system, the method comprising:
creating a storage policy for the set of data in a storage manager of the information management system,
wherein the storage policy identifies a cloud service subscription associated with a cloud storage library to which the set of data is to be stored via the one or more data storage operations of the information management system, and
wherein the storage policy further identifies at least three of:
data that will be associated with the storage policy,
datapath information specifying how the data will be communicated to the destination,
a type of secondary copy operation to be performed, or
retention information specifying how long the data will be retained;
performing a snapshot on the set of data via a snapshot engine within a media agent of the information management system,
wherein the media agent is configured to transfer the set of data to the cloud storage library, and
wherein the media agent is associated with the cloud storage library and transfers data to the cloud storage library based on the storage policy;
transferring data blocks of the set of data from the media agent to a cloud storage SDK of the cloud storage library,
wherein the data blocks are identified by the media agent using the performed snapshot, and
wherein the media agent transfers the data blocks to the cloud storage SDK of the cloud storage library in order for the cloud storage SDK to add metadata associated with the cloud storage library to the data blocks before the data blocks are stored in the cloud storage library via the cloud storage SDK, wherein the added metadata includes metadata that identifies a native format of the cloud storage library; and
transferring an incremental copy of the set of data from the media agent to the cloud storage SDK of the cloud storage library,
wherein the incremental copy includes data blocks associated with the set of data that have changed since a previous transfer of data blocks.

US Pat. No. 10,891,197

CONSOLIDATED PROCESSING OF STORAGE-ARRAY COMMANDS USING A FORWARDER MEDIA AGENT IN CONJUNCTION WITH A SNAPSHOT-CONTROL MEDIA AGENT

Commvault Systems, Inc., ...

1. A method comprising:
receiving, by a first media agent executing on a first computing device, a command instructing a storage array to perform a snapshot-related operation within the storage array,
wherein an application that executes on the first computing device reads and writes primary data residing on the storage array, and
wherein the command is received by the first media agent from at least one of: a storage manager, and a data agent associated with the application;
forwarding the command by the first media agent to a snapshot-control media agent that executes on a second computing device, wherein the snapshot-control media agent is configured with a command device for directly communicating the command to the storage array;
wherein the first media agent on the first computing device is configured without a command device for directly communicating the command to the storage array;
executing, by the storage array, the snapshot-related operation in response to the command received via the command device configured on the snapshot-control media agent;
receiving, by the first media agent from the snapshot-control media agent, a response from the storage array based on the snapshot-related operation within the storage array; and
wherein by being configured without a command device and by forwarding the command to the snapshot-control media agent, the first media agent protects the first computing device from directly communicating the command to the storage array.

US Pat. No. 10,891,196

APPARATUS AND METHOD FOR CONTENTS BACK-UP IN HOME NETWORK SYSTEM

Samsung Electronics Co., ...

1. A content backup method by a home gateway in a home network system, the method comprising:
receiving, by the home gateway, from a mobile terminal, a control command requesting backup of content generated in the mobile terminal;
backing up, by the home gateway, first content among the content generated in the mobile terminal to a storage of the home gateway in response to the control command;
transmitting, by the home gateway, to the mobile terminal, a backup complete message indicating completion of the backing up of the first content to the storage of the home gateway;
upon receiving a reproduction request message for reproducing the first content backed up to the storage of the home gateway from the mobile terminal,
searching, by the home gateway, a plurality of home devices which are set as synchronized devices, and
detecting a home device which is set as synchronized with the mobile terminal from the plurality of home devices which are set as the synchronized devices; and
in response to detecting the home device set as synchronized with the mobile terminal, transmitting, by the home gateway, the first content backed up to the storage of the home gateway to the home device belonging to the home network system for reproduction of the first content by the home device,
wherein second content among the content generated in the mobile terminal is excluded from backup, by the mobile terminal, based on information related to the second content being generated at a specific time or a specific place as a backup exception which is set through a user interface of the mobile terminal.

US Pat. No. 10,891,195

STORAGE SYSTEM WITH DIFFERENTIAL SCANNING OF NON-ANCESTOR SNAPSHOT PAIRS IN ASYNCHRONOUS REPLICATION

EMC IP Holding Company LL...

1. An apparatus comprising:
at least one processing device comprising a processor coupled to a memory;
said at least one processing device being configured:
to generate a current snapshot set for a consistency group comprising a plurality of storage volumes subject to replication from a source storage system to a target storage system;
to schedule a differential scan of the current snapshot set relative to a previous snapshot set generated for the consistency group;
for each of one or more snapshot trees maintained for the consistency group:
to determine if a first node corresponding to the previous snapshot set is an ancestor of a second node corresponding to the current snapshot set; and
to alter a manner in which an instance of the differential scan is performed for the snapshot tree responsive to a result of the determination.
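
The decision the claim hinges on is whether the previous snapshot set's node is an ancestor of the current one in a snapshot tree. A minimal parent-pointer sketch (illustrative only, not EMC's data structures) follows.

def is_ancestor(tree_parent, prev_node, curr_node):
    # tree_parent maps each node to its parent (None at the root).
    node = tree_parent.get(curr_node)
    while node is not None:
        if node == prev_node:
            return True
        node = tree_parent.get(node)
    return False

parents = {"snap3": "snap2", "snap2": "snap1", "snap1": None, "clone1": "snap1"}
print(is_ancestor(parents, "snap1", "snap3"))   # True  -> ancestor pair
print(is_ancestor(parents, "snap2", "clone1"))  # False -> non-ancestor pair, scan differs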

US Pat. No. 10,891,194

VERSIONED FILE SYSTEM USING STRUCTURED DATA REPRESENTATIONS

Nasuni Corporation, Bost...

1. A storage-as-a-service system to provide storage for an enterprise, comprising:
a management console to provision and manage a scalable file system across one or more cloud-based storage service providers;
one or more file system interfaces associated with the enterprise, wherein at least one file system interface executes either as a virtual machine or on physical hardware and is configured to represent, to the enterprise, a local file system whose data is stored in one or more cloud-based storage service providers;
wherein the one or more file system interfaces export their local file system data as a structured data representation, wherein the structured data representation associated with the at least one file system interface comprises a Uniform Resource Identifier (URI)-addressable cloud node that contains information passed by that file system interface about its associated local file system, together with an access control;
wherein the structured data representation associated with the at least one file system interface is self-contained in that it includes or points to all data structures and data needed to reconstruct the associated local file system at a point-in-time.

US Pat. No. 10,891,193

APPLICATION HEALTH MONITORING AND AUTOMATIC REMEDIATION

ACCENTURE GLOBAL SOLUTION...

1. An application health monitoring system comprising:
at least one processor;
a non-transitory processor-readable medium storing machine-readable instructions that cause the at least one processor to:
 receive data regarding one or more messaging server clients (MSCs) serviced by a messaging server,
the messaging server supplying the MSCs with messages,
the MSCs including a plurality of applications, and
the applications comprising one or more services;
 obtain a MSC comparison report,
the MSC comparison report providing comparisons of current states of each of the MSCs with a standard state;
 identify one or more MSC anomalies from the comparison report,
the MSC anomalies signifying disconnected sessions with at least one of the MSCs;
 restart at least one of the services associated with the disconnected sessions;
 generate another MSC comparison report,
the another MSC comparison report providing comparisons of the current states of each of the MSCs with the standard state after the at least one service is restarted;
 determine if a MSC validation stop condition is present,
the MSC validation stop condition including validation of the at least one MSC, and
the validating of the at least one MSC performed via establishing connections with the at least one MSC;
 prevent, when the consumer validation stop condition is present, performing the steps of:
accessing the MSC comparison report,
identifying the one or more MSC anomalies,
restarting at least one of the services, and
generating another MSC comparison report; and
 validate one or more process starters included in the services upon the validation of the at least one MSC, wherein the validation of the one or more process starters includes an automatic comparison of current attribute values of the one or more process starters with respective standard states that include attribute values to be maintained for a stable environment at the MSC.

US Pat. No. 10,891,192

UPDATING RAID STRIPE PARITY CALCULATIONS

Pure Storage, Inc., Moun...

1. A method comprising:
receiving, at a first set of solid state drives, a last portion of a redundant array of independent disks (RAID) stripe among multiple portions of the RAID stripe, wherein the RAID stripe includes multiple shards, and wherein each previous portion of the RAID stripe is written to the first set of solid state drives;
calculating a current parity value based on the last portion of the RAID stripe and a previous parity value updated after receiving each previous portion of the RAID stripe; and
responsive to receiving all portions of a shard of the RAID stripe, copying the shard of the RAID stripe from the first set of solid state drives to a second set of solid state drives.
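
A toy illustration of maintaining a running parity as stripe portions arrive, using simple XOR parity; the claim does not limit itself to this parity scheme, so treat the snippet as a sketch only.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

parity = bytes(4)                       # running parity, updated after each portion
for portion in [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xff\x00\xff\x00"]:
    # Each newly received portion of the stripe folds into the previous parity value.
    parity = xor_bytes(parity, portion)
print(parity.hex())                     # ee22cc44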

US Pat. No. 10,891,191

APPARATUSES AND METHODS FOR GENERATING PROBABILISTIC INFORMATION WITH CURRENT INTEGRATION SENSING

Micron Technology, Inc., ...

1. A method comprising:
sensing a first set of memory cells from a plurality of memory cells at a first sense threshold;
responsive to sensing the first set of memory cells of the plurality of memory cells, identifying the first set of memory cells as having a voltage stored thereon within a first range of voltages;
sensing a second set of memory cells from the plurality of memory cells at a second sense threshold, wherein the first set of memory cells are undetected by the sensing of the second set of memory cells; and
performing an error correction operation on the first set of memory cells based on the first range of voltages.

US Pat. No. 10,891,190

FLASH MEMORY AND OPERATION METHOD THEREOF

GigaDevice Semiconductor ...

1. A method for operating a flash memory, comprising:
reading out raw data from a plurality of memory cells;
correcting the raw data by using error correction code (ECC) data to obtain corrected data;
determining an address of a memory cell having a data loss error in the plurality of memory cells; and
programming the memory cell having the data loss error,
wherein the step of determining the address of the memory cell having the data loss error in the plurality of memory cells comprises: comparing the raw data and the corrected data, wherein a bit in the raw data corresponding to the memory cell having the data loss error is 1, and a bit in the corrected data corresponding to the memory cell having the data loss error is 0.
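
The detection rule in the last limitation (raw bit is 1, corrected bit is 0) maps directly to a bitwise test. A sketch with invented byte values, not the vendor's firmware:

def data_loss_addresses(raw, corrected):
    loss = []
    for byte_addr, (r, c) in enumerate(zip(raw, corrected)):
        # Bits set in the raw data but cleared after ECC correction: r AND (NOT c).
        mask = r & ~c & 0xFF
        for bit in range(8):
            if mask & (1 << bit):
                loss.append((byte_addr, bit))
    return loss

print(data_loss_addresses(b"\xfa", b"\xf8"))   # [(0, 1)] -> reprogram that cell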

US Pat. No. 10,891,189

CUSTOMIZED PARAMETERIZATION OF READ PARAMETERS AFTER A DECODING FAILURE FOR SOLID STATE STORAGE DEVICES

Seagate Technology LLC, ...

1. A method, comprising:
determining a correlation between log likelihood ratio (LLR) values and signal count metrics associated with different page types of a first memory of a solid-state storage device, the correlation based on linear fitted curves to difference values of signal count metrics;
storing a plurality of parameters defining the determined correlation in a second memory of the storage device associated with each of the different page types;
in response to a decoding failure of a codeword read from the first memory of the solid-state storage device, obtaining at least three read values of the codeword;
calculating a difference value between signal count metrics from the at least three reads;
retrieving the plurality of parameters defining the correlation for the page type of the first memory where the codeword is stored;
calculating a plurality of dynamic LLR values based on the difference value and the correlation; and
decoding the codeword following the decoding failure based at least in part upon the calculated plurality of dynamic LLR values.
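
A hedged sketch of the parameterized LLR step: stored slope/intercept pairs per page type are applied to the spread of signal-count metrics taken from the re-reads after the decode failure. All numeric values here are invented for illustration.

FIT_PARAMS = {"lower_page": (0.031, -1.8), "upper_page": (0.027, -1.5)}  # (slope, intercept)

def dynamic_llr(counts, page_type):
    # counts: signal-count metrics from at least three reads of the failed codeword.
    diff = max(counts) - min(counts)
    slope, intercept = FIT_PARAMS[page_type]
    magnitude = slope * diff + intercept
    # Symmetric LLR values handed to the soft decoder for the retry decode.
    return [-magnitude, magnitude]

print(dynamic_llr([5120, 5075, 4990], "lower_page"))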

US Pat. No. 10,891,188

MEMORY DEVICES HAVING DIFFERENTLY CONFIGURED BLOCKS OF MEMORY CELLS

Micron Technology, Inc., ...

1. A memory device, comprising:
a plurality of blocks of memory cells, each of the plurality of blocks of memory cells being individually erasable; and
a controller configured to configure a first block of memory cells of the plurality of blocks of memory cells in a first configuration comprising one or more groups of overhead data memory cells, to configure a second block of memory cells of the plurality of blocks of memory cells in a second configuration comprising a group of user data memory cells and a group of overhead data memory cells, and to configure a third block of memory cells of the plurality of blocks of memory cells in a third configuration comprising only a group of user data memory cells;
wherein the first configuration is different than the second configuration and the third configuration is different than the first and second configurations;
wherein the group of overhead data memory cells of the second block of memory cells comprises a different storage capacity than at least one group of overhead data memory cells of the one or more groups of overhead data memory cells of the first block of memory cells; and
wherein the controller is further configured in response to each read operation on the second block of memory cells, to read user data from the group of user data memory cells of the second block of memory cells, to read a first portion of error correction code (ECC) data specific to the read user data from the group of overhead data memory cells of the second block of memory cells, and to read a second portion of the ECC data from a group of overhead data memory cells of the one or more groups of overhead data of the first block of memory cells corresponding to the read user data.

US Pat. No. 10,891,187

MEMORY DEVICES HAVING DIFFERENTLY CONFIGURED BLOCKS OF MEMORY CELLS

Micron Technology, Inc., ...

1. A memory device, comprising:
a plurality of blocks of memory cells, each of the plurality of blocks of memory cells being individually erasable; and
a controller configured to configure a first block of the plurality of blocks of memory cells in a first configuration that stores only ECC data and comprising one or more groups of overhead data memory cells, to configure a second block of the plurality of blocks of memory cells in a second configuration comprising a group of user data memory cells and a group of overhead data memory cells, and to configure a third block of the plurality of blocks of memory cells in a third configuration comprising only a group of user data memory cells that stores only user data;
wherein the first configuration is different than the second configuration and the third configuration is different than the first and second configurations;
wherein the group of overhead data memory cells of the second block comprises a different storage capacity than at least one group of the one or more groups of overhead data memory cells of the first block;
wherein the group of overhead data memory cells of the second block is configured to store a portion of ECC data that is specific to user data stored in the group of user data memory cells of the second block; and
wherein a first group of the one or more groups of overhead data memory cells of the first block is configured to store a remaining portion of the ECC data that is specific to the user data stored in the group of user data memory cells of the second block.

US Pat. No. 10,891,186

SEMICONDUCTOR DEVICE AND SEMICONDUCTOR SYSTEM INCLUDING THE SAME

RENESAS ELECTRONICS CORPO...

1. A semiconductor device comprising:
an ECC (Error Correction Code) decoder which diagnoses whether or not an error occurs in data transmitted from a transmitting circuit, using an error detection code for the data;
an ECC encoder which generates a first error detection code for a first divided piece of data equivalent to a bit range accounting for a part of plural bits configuring the data and generates a second error detection code for a second divided piece of data equivalent to a bit range accounting for a remaining part of the bits configuring the data; and
a diagnosis circuit which, when no error in the data transmitted from the transmitting circuit has been detected by the ECC decoder, compares a part of the data corresponding to the first divided piece of data with the first divided piece of data used in generating the first error detection code in the ECC encoder and compares a part of the data corresponding to the second divided piece of data with the second divided piece of data used in generating the second error detection code in the ECC encoder, and generates an error detection signal to detect an error in the first and second divided piece of data used in generating the first and second error detection code respectively when there is a data mismatch,
wherein the diagnosis circuit does not perform the comparison when the ECC decoder detects the data having a correctable error.

US Pat. No. 10,891,185

ERROR COUNTERS ON A MEMORY DEVICE

Hewlett Packard Enterpris...

1. A machine-readable storage medium encoded with instructions executable by a processor, the machine-readable storage medium comprising:
instructions to determine whether a value of one of a plurality of error counters on a memory device equals a threshold value, wherein:
the memory device comprises on-die error-correcting code (ECC);
the one of the plurality of error counters is associated with a memory unit on the memory device; and
the one of the plurality of error counters is to be incremented in response to an error being detected, by the on-die ECC, in the memory unit; and
instructions to initiate, in response to a determination that the value of the one of the plurality of error counters equals the threshold value, a post package repair (PPR), wherein the PPR comprises replacing the memory unit with a repair unit.
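
The counter-and-threshold policy can be summarized in a few lines; the threshold value and the issue_ppr hook below are assumptions for illustration, not HPE's interface.

THRESHOLD = 16
error_counters = {}                      # memory unit (e.g. a row address) -> error count

def on_ecc_error(memory_unit, issue_ppr):
    # Incremented each time the on-die ECC reports an error in this memory unit.
    error_counters[memory_unit] = error_counters.get(memory_unit, 0) + 1
    if error_counters[memory_unit] == THRESHOLD:
        # Post package repair: remap the failing unit onto a spare repair unit.
        issue_ppr(memory_unit)

for _ in range(16):
    on_ecc_error(0x3F2A, issue_ppr=lambda unit: print(f"PPR issued for row {unit:#x}"))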

US Pat. No. 10,891,184

CONFIGURABLE DATA INTEGRITY MODE, AND MEMORY DEVICE INCLUDING SAME

MACRONIX INTERNATIONAL CO...

1. A method for data integrity checking, comprising:
receiving on a device, a data stream having a reference address, and including a plurality of data chunks with data integrity codes;
parsing the data chunks and data integrity codes from the data stream; and
computing data integrity codes of data chunks in the data stream, and comparing the computed data integrity codes with received data integrity codes to test for data errors, wherein said parsing includes:
responding to configuration data stored on the device, and if the configuration data indicates floating boundary data integrity mode, then identifying chunk boundaries based upon an offset from a reference address of the data stream, and if the configuration data indicates fixed boundary data integrity mode, then identifying chunk boundaries based upon fixed addresses.
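
The difference between the two modes is where the chunk boundaries fall. A minimal sketch, assuming a 16-byte chunk size (the size and helper name are illustrative):

CHUNK_SIZE = 16

def first_chunk_boundary(reference_address, floating_boundary):
    if floating_boundary:
        # Floating mode: boundaries are offsets from the stream's reference address,
        # so the first boundary is the reference address itself.
        return reference_address
    # Fixed mode: boundaries sit on absolute addresses aligned to the chunk size,
    # regardless of where the stream happens to start.
    return ((reference_address + CHUNK_SIZE - 1) // CHUNK_SIZE) * CHUNK_SIZE

print(first_chunk_boundary(0x1009, floating_boundary=True))    # 4105 (0x1009)
print(first_chunk_boundary(0x1009, floating_boundary=False))   # 4112 (0x1010)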

US Pat. No. 10,891,183

SYSTEMS AND METHODS FOR SELF CORRECTING SECURE COMPUTER SYSTEMS

Keep Security LLC, Kansa...

1. A system comprising:
a self-correcting secure computer system comprising:
a read-only memory (ROM) device;
a random access memory (RAM) device; and
at least one processor in communication with the ROM device and the RAM device, the at least one processor programmed to:
receive an activation signal;
retrieve, from the ROM device, data to execute an operating system;
execute, on the RAM device, the operating system based on the data from the ROM device;
execute a network connection;
receive a switch signal from a user;
deactivate the network connection;
adjust one or more network settings including at least one of a device name and a media access control address; and
reactivate the network connection using the one or more adjusted network settings.

US Pat. No. 10,891,182

PROACTIVE FAILURE HANDLING IN DATA PROCESSING SYSTEMS

MICROSOFT TECHNOLOGY LICE...

1. A computer system configured to optimize predictions regarding future health statuses of a computing node, the computer system comprising one or more processors executing computer executable instructions that cause the computer system to at least:
monitor one or more health indicators for a node;
access one or more stored health indicators that provide a health history for the node;
based at least on both the monitored one or more health indicators and the health history, predict a future health status for the node;
present the predicted future health status, wherein presentation of the predicted future health status includes presenting a likelihood that the node will keep functioning correctly within a specified future time period;
re-evaluate a health status of the node to modify an accuracy of the node's predicted future health status; and
based on a determination that the predicted future health status is below a threshold level, prevent new data from being placed on the node.

US Pat. No. 10,891,181

SMART SYSTEM DUMP

International Business Ma...

1. A computer-implemented method for performing dump collection on a computing system, comprising:
detecting an error event within the computing system;
after detecting the error event, determining a subset of hardware registers of a plurality of hardware registers associated with the error event;
determining one or more hardware units within the computing system based on a set of rules that specify an association between the one or more hardware units and the subset of hardware registers associated with the error event; and
capturing data from each of the one or more hardware units, comprising:
identifying one or more commands in a hardware dump content table (HDCT) corresponding to the one or more hardware units; and
executing the one or more commands in the HDCT, wherein at least one of the set of rules and the HDCT is generated via a machine learning model.

US Pat. No. 10,891,180

MULTIPLE-PROCESSOR ERROR DETECTION SYSTEM AND METHOD THEREOF

HYUNDAI AUTRON CO., LTD.,...

1. An error detection system comprising:
an input unit for setting a system operation request time based on an external input;
a plurality of processors for performing a predetermined operation; and
an error detection processor connected to each of the plurality of the processors and configured for detecting an error of each of the plurality of processors to generate an error detection signal,
wherein the error detection processor transmits the error detection signal to a predetermined first processor among the plurality of processors, the predetermined first processor updating the error detection signal received from the error detection processor to generate a first updated error detection signal and transmitting the first updated error detection signal to remaining processors among the plurality of processors,
wherein the error detection processor receives an updated error detection signal which is updated from the first updated error detection signal by a predetermined second processor which is one of the remaining processors among the plurality of processors, and
wherein the error detection processor determines whether an operation processing time of the plurality of processors is processed within the system operation request time based on the updated error detection signal.

US Pat. No. 10,891,179

DATA STORAGE DEVICE WITH DEADLOCK RECOVERY CAPABILITIES

WESTERN DIGITAL TECHNOLOG...

1. A data storage apparatus comprising:
a non-volatile memory (NVM); and
a controller coupled to the NVM, the controller comprising:
a memory;
a processor coupled to the memory, the processor configured to:
determine whether there is a deadlock in a communication link between the data storage apparatus and a host; and
transmit, when there is a deadlock in the communication link between the data storage apparatus and the host, a recovery command to the host to re-establish a link layer connection between the data storage apparatus and the host, wherein the recovery command is constructed using a transport layer of the communication link.

US Pat. No. 10,891,178

METHOD AND DEVICE FOR IDENTIFYING PROBLEMATIC COMPONENT IN STORAGE SYSTEM

EMC IP HOLDING COMPANY LL...

1. A method for identifying a problematic component in a storage system, comprising:
determining, based on history error logs of components of the storage system, a graph indicating error information of the components, nodes in the graph indicating the components, and edges in the graph indicating connections between the components, wherein the error information of the components from the history error logs is filtered with a predetermined time segment, such that only error logs associated with a latest state of the storage system are retained, and wherein the error information comprises at least one of:
a non-contagious error,
a contagious error related to a topology structure, and
a contagious error unrelated to the topology structure; and
identifying, based on the graph, an error source in the components of the storage system to be the problematic component, wherein identifying the error source in the components of the storage system comprises:
determining, from the graph, the error information of the components; and
determining the error source in the components of the graph based on the error information.
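
A compact sketch of the graph step under stated assumptions: plain dictionaries stand in for a graph structure, a one-hour window keeps only errors reflecting the latest state, and the scoring heuristic for contagious errors is invented for the example.

import time

RECENT_SECONDS = 3600     # keep only error logs that reflect the latest system state

def identify_error_source(edges, error_logs, now=None):
    # edges: (component_a, component_b) connections; error_logs: (component, timestamp).
    now = time.time() if now is None else now
    neighbours, errors = {}, {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    for component, ts in error_logs:
        if now - ts <= RECENT_SECONDS:
            errors[component] = errors.get(component, 0) + 1
    # Heuristic: a component erroring itself whose neighbours also error is the
    # most plausible source of errors that spread along the topology.
    def score(c):
        return 2 * errors.get(c, 0) + sum(errors.get(n, 0) for n in neighbours.get(c, ()))
    return max(neighbours, key=score)

now = 1_000_000
edges = [("enclosure0", "disk1"), ("enclosure0", "disk2"), ("disk3", "enclosure1")]
logs = [("enclosure0", now - 10), ("disk1", now - 20), ("disk2", now - 30), ("disk3", now - 9000)]
print(identify_error_source(edges, logs, now))   # enclosure0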

US Pat. No. 10,891,177

MESSAGE MANAGEMENT METHOD AND DEVICE, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A message management method, executed by a computing device, the method comprising:
storing received messages into a plurality of cache queues according to priorities of the received messages;
extracting messages from the plurality of cache queues, and storing the extracted messages into a uniform cache queue, wherein the uniform cache queue includes multiple entries, each entry corresponding to a respective one of the plurality of cache queues;
scheduling the stored messages in the uniform cache queue to a plurality of outputting scheduling queues according to their respective priorities; and
transmitting the stored messages from the scheduling queues to respective terminals by using a transmit channel corresponding to the scheduling queues.
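
A small queue-level sketch of the claimed flow; the three priority labels and the use of one uniform-queue entry per cache queue are illustrative assumptions, not Tencent's implementation.

from collections import deque

PRIORITIES = ("high", "medium", "low")
cache_queues = {p: deque() for p in PRIORITIES}
uniform_queue = {p: deque() for p in PRIORITIES}       # one entry per cache queue
scheduling_queues = {p: deque() for p in PRIORITIES}

def store(message, priority):
    cache_queues[priority].append(message)

def fill_uniform_queue():
    # Extract messages from the per-priority cache queues into the uniform queue.
    for p in PRIORITIES:
        while cache_queues[p]:
            uniform_queue[p].append(cache_queues[p].popleft())

def schedule():
    # Drain the uniform queue into the output scheduling queues by priority.
    for p in PRIORITIES:
        while uniform_queue[p]:
            scheduling_queues[p].append(uniform_queue[p].popleft())

store("quota alert", "high")
store("newsletter", "low")
fill_uniform_queue()
schedule()
print({p: list(q) for p, q in scheduling_queues.items()})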

US Pat. No. 10,891,176

OPTIMIZING MESSAGING FLOWS IN A MICROSERVICE ARCHITECTURE

Ciena Corporation, Hanov...

19. A computer-implemented method comprising:
in a distributed system with a microservice architecture having a plurality of services and a messaging layer between an application layer and a transport layer for communication between the plurality of services, receiving messages from a first service to a second service in the messaging layer;
queuing responses from the messages; and
utilizing one or more bulk messaging techniques to send the responses back to the first service from the second service,
wherein the messaging layer is configured to perform the queuing the responses and the utilizing the one or more bulk messaging techniques independent of the first service, the second service, and transport protocol of the transport layer.

US Pat. No. 10,891,175

SYSTEM HAVING IN-MEMORY BUFFER SERVICE, TEMPORARY EVENTS FILE STORAGE SYSTEM AND EVENTS FILE UPLOADER SERVICE

salesforce.com, inc., Sa...

18. A non-transitory computer-readable medium including instructions, which when executed by a processing system at an application server having an in-memory buffer service, are configurable to cause the processing system to perform a method, comprising:
instantiating, at an indirect events writer, at least one event capture thread comprising an events file writer of a first application server configured to generate, via a client application executed at the first application server, a plurality of events;
generating, at an events file writer of the first application server including at least one event capture thread, a particular events file, wherein the particular events file comprises the plurality of events flushed from an in-memory buffer service which are then received from the in-memory buffer service when those events are unable to be directly written by the indirect events file writer to a data store comprising memory that is configured to store events; and
writing, via the indirect events file writer, the particular events file to a temporary events file storage system (TEFSS), wherein the TEFSS comprises memory that is configured to temporarily store each events file from the events file writer for subsequent writing to the data store, wherein the data store further comprises: an events uploader job detail table that comprises a plurality of rows; and
executing, at an events file uploader service of a second application server, to: determine that the first application server is inactive; read at least one events file from the TEFSS; and write the events from each of the events files that was read to the data store; and
wherein the events file uploader service comprises: an events uploader manager that is configured to perform steps of:
creating an uploader job record for each events file stored at the TEFSS, wherein each uploader job record includes job detail information that points to that particular events file; and
writing each uploader job record to one row of the events uploader job detail table maintained at the data store.

US Pat. No. 10,891,174

PERFORMING HIERARCHICAL PROVENANCE COLLECTION

International Business Ma...

1. A computer-implemented method, comprising:
identifying an event within a system;
identifying a plurality of models associated with an event source that implements the event, where each of the plurality of models has a granularity different from the other models;
selecting one of the plurality of models associated with the event source;
applying the selected model to the event to create an aggregated event; and
storing the aggregated event.

US Pat. No. 10,891,173

DISTRIBUTED ALERT SYSTEM USING ACTOR MODELS

salesforce.com, inc., Sa...

1. A method for monitoring computer resource usage, comprising:
obtaining, by a manager actor executing on one or more computing devices and via an HTTP service, a request to create a first alarm, wherein the first alarm is associated with a first alert action and usage of a first computing resource;
creating, by the manager actor and in response to the request, a first alert actor configured to run the first alert action associated with the first alarm and usage of the first computing resource;
creating, by the manager actor, a second alert actor configured to run a second alert action associated with a second alarm and usage of a second computing resource;
notifying, by the manager actor, a routing module of a first subscription for the first alert actor and of a second subscription for the second alert actor;
obtaining, by the HTTP service, a first datapoint related to the first alert action and a second datapoint related to the second alert action;
streaming, by the routing module executing on the one or more computing devices, the first datapoint related to the first alert action to the first alert actor based on the first subscription;
streaming, by the routing module, the second datapoint related to the second alert action to the second alert actor based on the second subscription,
wherein the first datapoint and the second datapoint relate to usage of the first computer resource and the second computer resource, respectively;
determining, by the first alert actor, a new status of a first alert by processing the first datapoint against the first alert action and providing the new status of the first alert to a notification actor;
determining, by the second alert actor, a new status of a second alert by processing the second datapoint against the second alert action and providing the new status of the second alert to the notification actor;
providing, by the notification actor, a notification of the first alert to a system administrator based on a change between a current status of the first alert and the new status of the first alert; and
providing, by the notification actor, a notification of the second alert based on a change between a current status of the second alert and the new status of the second alert.

US Pat. No. 10,891,172

MODIFYING AN OPERATING SYSTEM

Intel Corporation, Santa...

1. A system for modifying operating systems comprising:
logic to:
modify a basic input/output system (BIOS) to load a virtual general purpose input/output (GPIO) driver, the virtual GPIO driver comprising at least one control method corresponding to a system control interrupt (SCI);
detect the system control interrupt invoking the virtual GPIO driver;
execute the control method corresponding to the system control interrupt, the control method to be identified in the modified BIOS;
detect an error from the execution of the control method; and
modify an operating system to prevent the error.

US Pat. No. 10,891,171

METHOD, APPARATUS AND DEVICE FOR TRANSITIONING BETWEEN DATA AND CONTROL CORE AND MIGRATING CLOCK TASK FROM DATA CORE TO CONTROL CORE

HUAWEI TECHNOLOGIES CO., ...

1. A clock task processing method, wherein the clock task processing method is used in a multi-core computer operating system, wherein the multi-core computer operating system runs on a physical host comprising multiple data cores and multiple control cores, and wherein the method comprises:
running at least one service process of a to-be-processed service using at least one data core of the multiple data cores;
detecting, by the operating system, a quantity of data packets of the to-be-processed service;
in response to the quantity of data packets being greater than a first threshold, changing a control core of the multiple control cores into a data core so that the data core runs the at least one service process;
in response to the quantity of data packets being less than a second threshold, changing a data core of the at least one data core into a control core, wherein the first threshold is greater than the second threshold;
obtaining at least one first clock task associated with the at least one service process in response to running the at least one service process using the at least one data core;
disabling a clock interrupt of the at least one data core;
migrating the at least one first clock task associated with the at least one service process in the at least one data core to a specified task queue associated with at least one control core of the multiple control cores after the clock interrupt of the at least one data core is disabled, wherein the at least one first clock task is a task completed using the clock interrupt; and
processing the at least one first clock task using the at least one control core of the multiple control cores.
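
A user-space sketch can only gesture at the switching rule, since the claim operates on OS cores and clock interrupts; the thresholds and the dictionary core model below are invented for illustration.

FIRST_THRESHOLD, SECOND_THRESHOLD = 10_000, 1_000     # packets per interval (invented)

def rebalance(packet_count, data_cores, control_cores):
    # Grow the data plane under heavy traffic, shrink it when traffic drops.
    if packet_count > FIRST_THRESHOLD and control_cores:
        data_cores.append(control_cores.pop())
    elif packet_count < SECOND_THRESHOLD and len(data_cores) > 1:
        control_cores.append(data_cores.pop())

def offload_clock_tasks(data_core, control_core):
    # Disable the data core's clock interrupt and move its clock tasks to the
    # control core's task queue, so the data core runs its service undisturbed.
    data_core["clock_interrupt"] = False
    control_core.setdefault("task_queue", []).extend(data_core.pop("clock_tasks", []))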

US Pat. No. 10,891,170

TASK GROUPING BY CONTEXT

International Business Ma...

1. A method, the method comprising:
receiving, by one or more computer processors, a first task initialization by a first user, wherein a task represents a work unit that belongs to a work assignment to be performed by the first user;
determining, by the one or more computer processors, whether one or more additional tasks contained in one or more task groups are in use by the first user;
responsive to determining the one or more additional tasks contained in the one or more task groups are in use, determining, by the one or more computer processors, whether the first task is related to at least one task of the one or more additional tasks, wherein the first task and the at least one task are related by sharing one or more software resources;
responsive to determining the first task is not related to the at least one task of the one or more additional tasks, determining, by the one or more computer processors, whether an event is detected; and
responsive to determining an event is not detected, adding, by the one or more computer processors, the first task to a first task group of the one or more task groups that contains a second task, wherein the second task is in focus immediately prior to the first task initialization, and wherein a task is in focus while the user is actively utilizing the task.

US Pat. No. 10,891,169

METHODS AND APPARATUS TO DISTRIBUTE A WORKLOAD FOR EXECUTION

Intel Corporation, Santa...

1. An apparatus to distribute a workload for execution, the apparatus comprising:
a workload container interface to access the workload for execution at a remote device, the workload including workload instructions and a specified capability to be met by the remote device, the remote device including at least one of a type of input or a type of output that is not available to the apparatus;
a runtime selector to select the remote device for execution of the workload based on the specified capability being present in a list of capabilities discovered for the remote device, the runtime selector further to, in response to determining that no remote device is available for execution of the workload, determine whether the workload can be executed by the apparatus;
a workload transmitter to transmit, in response to the selection of the remote device for execution of the workload, the workload to the remote device for execution; and
a workload executor to execute the workload and, in response to the determining that the workload can be executed by the apparatus, to provide a result of the execution of the workload to device functionality of the apparatus, wherein at least one of the workload container interface, the runtime selector, the workload transmitter, or the workload executor is implemented by hardware.

US Pat. No. 10,891,168

AUTOMATICALLY SCALING UP PHYSICAL RESOURCES IN A COMPUTING INFRASTRUCTURE

Red Hat, Inc., Raleigh, ...

1. A method comprising:
determining that no physical resource in a plurality of resources of a cluster has available capacity in view of utilization of individual virtual resources and individual physical resources in the cluster;
determining a change to implement in a physical configuration of the cluster in view of the utilization of the individual virtual resources and the individual physical resources in the cluster, the change indicating one or more actions to be performed to modify a non-provisioned physical resource in view of a cluster type of the cluster; and
performing, by a processing device without user interaction, an action to implement the change, wherein the change comprises adding the non-provisioned physical resource to the cluster.

US Pat. No. 10,891,167

MEMORY FRACTIONATION SOFTWARE PROTECTION

Siege Technologies, LLC, ...

1. A method of protecting software in a computer system, the method comprising:
defining a memory fractionation configuration for an application software program in the computer system, wherein:
the application software includes two or more pages of code blocks, and
the memory fractionation configuration represents how the code blocks should be assigned to different code block fractions based, at least in part, on a first frequency with which one or more code blocks of a first code block fraction are transferred to a second code block fraction;
fractionating at least one page of the application software program into the code block fractions according to the memory fractionation configuration;
running the application in such a manner that, at any particular point in time when the application is running, at least one of the first and second code block fractions is stored in a manner that is not accessible from a user space or a kernel space of the computer system, according to the memory fractionation configuration;
determining whether the first frequency is greater than or equal to a predetermined value;
in accordance with a determination that the first frequency is greater than or equal to the predetermined value, reducing the first frequency to a second frequency with which one or more code blocks of the first and second code block fractions are transferred between the first and second code block fractions; and
defining the memory fractionation configuration to represent how the code blocks should be assigned to different code block fractions based, at least in part, on the second frequency.

US Pat. No. 10,891,166

STORAGE SYSTEM AND INFORMATION MANAGEMENT METHOD HAVING A PLURALITY OF REPRESENTATIVE NODES AND A PLURALITY OF GENERAL NODES INCLUDING A PLURALITY OF RESOURCES

HITACHI, LTD., Tokyo (JP...

1. A storage system comprising a plurality of representative nodes and a plurality of general nodes including a plurality of resources,
wherein each of the plurality of representative nodes and the plurality of general nodes comprises:
a processor;
wherein each of the general nodes stores resource status information indicating respective statuses of the plurality of resources in a first storage unit thereof,
the plurality of representative nodes stores resource status information collected from the plurality of general nodes in a second storage unit thereof, decides whether to acquire the resource status information from the first storage unit of one of the general nodes or to acquire the resource status information from the second storage unit based on a received request, and transmits the resource status information acquired from a decided acquisition destination to an issuing source of the request,
the plurality of representative nodes includes a main representative node,
the main representative node specifies a general node that is a target of the received request, specifies a representative node, which manages the specified general node, based on management information in which a management relation between the specified representative node and the specified general node has been defined, and allows the specified representative node to acquire the request, and
the specified representative node acquires the request, decides whether to acquire the resource status information from the specified general node or to acquire the resource status information from the second storage unit of the specified representative node, and transmits the resource status information acquired from a decided acquisition destination to an issuing source of the request.

US Pat. No. 10,891,165

FROZEN INDICES

Elasticsearch B.V., Moun...

1. A computer-implemented method for searching a frozen index comprising:
receiving an initial search and a subsequent search;
loading the initial search and the subsequent search into a throttled thread pool, the throttled thread pool including a first-in first-out queue;
getting the initial search from the throttled thread pool;
storing a first shard from a mass storage in a memory in response to the initial search;
performing the initial search on the first shard;
providing first top search result scores from the initial search;
removing the first shard from the memory when the initial search is completed;
getting the subsequent search from the throttled thread pool;
storing a second shard from the mass storage in the memory in response to the subsequent search;
performing the subsequent search on the second shard;
providing second top search result scores from the subsequent search; and
removing the second shard from memory when the subsequent search is completed.
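
As a rough analogy (not Elasticsearch's implementation), a single-worker thread pool gives the throttled, first-in first-out behaviour, and each search loads its shard into memory, scores it, and releases it when finished. The in-memory MASS_STORAGE dict and scoring are assumptions for the example.

from concurrent.futures import ThreadPoolExecutor

MASS_STORAGE = {"shard-1": ["alpha", "beta"], "shard-2": ["beta", "gamma"]}

def frozen_search(shard_name, term, top_n=1):
    shard = list(MASS_STORAGE[shard_name])          # load the shard into memory
    try:
        hits = [(1.0, doc) for doc in shard if term in doc]
        return sorted(hits, reverse=True)[:top_n]   # top search result scores
    finally:
        del shard                                   # drop the shard when the search completes

# One worker means searches queue first-in first-out and run one at a time.
with ThreadPoolExecutor(max_workers=1) as pool:
    initial = pool.submit(frozen_search, "shard-1", "beta")
    subsequent = pool.submit(frozen_search, "shard-2", "gamma")
    print(initial.result(), subsequent.result())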

US Pat. No. 10,891,164

RESOURCE SETTING CONTROL DEVICE, RESOURCE SETTING CONTROL SYSTEM, RESOURCE SETTING CONTROL METHOD, AND COMPUTER-READABLE RECORDING MEDIUM

NEC CORPORATION, Tokyo (...

1. A resource setting control device comprising:
a memory storing instructions; and
at least one processor configured to process the instructions for:
extracting a load pattern of a service request in a specific period of time from a) a request history, of a service request requesting a service of one of a plurality of versions, including information of a group, the requested version, and a resource usage during execution of the service, and b) the request history in operation information storage, storing a reference pattern of a load for the service request, and updating the reference pattern when detecting that change from the reference pattern is beyond a specific range; and
determining, for each of the versions, a resource request amount, based on a peak value of the resource usage in the specific period of time and a number of server devices providing the version, when detecting change of the load pattern, and outputting the determined resource request amount.
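
As a rough illustration of the final determining step, the sketch below computes a per-version resource request amount from the window's peak usage and the number of servers providing that version; the ceiling-division rule and the variable names are assumptions, since the claim does not fix the exact formula.

    # Hypothetical calculation: resource request amount per version, derived from the
    # peak usage in the analysis window divided across the servers for that version.
    import math

    def resource_requests(usage_by_version, servers_by_version):
        requests = {}
        for version, usage_samples in usage_by_version.items():
            peak = max(usage_samples)                       # peak value in the period
            servers = servers_by_version[version]           # servers providing the version
            requests[version] = math.ceil(peak / servers)
        return requests

    print(resource_requests({"v1": [40, 75, 60], "v2": [10, 12]},
                            {"v1": 3, "v2": 1}))
    # -> {'v1': 25, 'v2': 12}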

US Pat. No. 10,891,163

ATTRIBUTE DRIVEN MEMORY ALLOCATION

International Business Ma...

1. A method for memory allocation of a computer system, the method comprising:collecting a plurality of computer system architecture specifications, a computer system configuration, and a plurality of user requirements;
identifying a plurality of memory intervals to be allocated, based on the collected plurality of computer system architecture specifications, the collected computer system configuration, and the collected plurality of user requirements;
grouping the identified plurality of memory intervals into a plurality of color groups, wherein each memory interval within each of the plurality of color groups comprises a plurality of identical memory attributes;
dividing memory into sets of memory segments, wherein each set of memory segments is assigned a color of the plurality of color groups, wherein dividing memory into sets of memory segments comprises calculating a weighed score for each memory interval within the plurality of memory intervals, sorting each memory interval within the identified plurality of memory intervals, by decreasing weighed score and dividing the memory into sets of memory segments, depending on the memory interval sorting, wherein a size of each memory segment of the set of memory segments is smaller than a maximum page size of the memory;
allocating a memory interval of the identified plurality of memory intervals within the set of memory segments of the corresponding color; and
selecting a page size for a translation of the memory interval of the plurality of memory intervals, depending upon the allocation of the memory interval and the sets of memory segments.
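
A hedged Python sketch of the coloring and segmentation steps: intervals with identical attributes share a color, each color group is sorted by decreasing weighted score, and segment sizes stay below a maximum page size. The attribute set, the scoring formula, and MAX_PAGE_SIZE are invented for illustration only.

    # Group memory intervals by identical attributes ("colors"), sort each group by a
    # weighted score, and size each color's segments below the maximum page size.
    from collections import defaultdict

    MAX_PAGE_SIZE = 4096

    def color_of(interval):
        # Intervals with identical attributes share a color group.
        return tuple(sorted(interval["attributes"].items()))

    def weighted_score(interval):
        return interval["size"] * interval["attributes"].get("hotness", 1)

    def plan(intervals):
        groups = defaultdict(list)
        for iv in intervals:
            groups[color_of(iv)].append(iv)
        segments = {}
        for color, ivs in groups.items():
            ivs.sort(key=weighted_score, reverse=True)          # decreasing weighted score
            seg_size = min(MAX_PAGE_SIZE - 1, max(iv["size"] for iv in ivs))
            segments[color] = {"segment_size": seg_size, "intervals": ivs}
        return segments

    intervals = [
        {"size": 1024, "attributes": {"hotness": 3, "pinned": True}},
        {"size": 512,  "attributes": {"hotness": 3, "pinned": True}},
        {"size": 2048, "attributes": {"hotness": 1, "pinned": False}},
    ]
    for color, seg in plan(intervals).items():
        print(color, seg["segment_size"], [iv["size"] for iv in seg["intervals"]])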

US Pat. No. 10,891,162

METHODS AND APPARATUS TO IMPROVE EXTERNAL RESOURCE ALLOCATION FOR HYPER-CONVERGED INFRASTRUCTURES BASED ON COSTS ANALYSIS

VMWARE, INC, Palo Alto, ...

1. An apparatus comprising:memory; and
at least one processor to execute instructions to improve network communications associated with a virtual server rack, the at least one processor to:
identify a set of hyper-converged infrastructure (HCI) storage resources associated with the virtual server rack that executes a workload domain;
identify a set of external storage resources in response to determining a first latency between the virtual server rack and an HCI storage resource of the set of the HCI storage resources exceeding a latency match parameter;
calculate a storage capacity cost for ones of external storage resources in the external storage resource set based on a comparison of storage capacity of the ones of the external storage resources and a storage requirement of the workload domain;
calculate second latencies between the virtual server rack and the ones of the external storage resources in the external storage resource set;
determine whether a storage capacity cost threshold is satisfied based on the storage capacity cost and a storage network cost threshold is satisfied based on the second latencies;
identify a first external storage resource from the external storage resource set as a storage solution; and
allocate the first external storage resource to the workload domain.
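
The cost-and-latency selection can be sketched as below, purely as an illustration; the capacity-cost formula, the threshold semantics, and the field names are assumptions rather than the claimed apparatus's actual calculations.

    # Hypothetical selection rule: score each external storage resource by how closely
    # its capacity matches the workload domain's requirement, then pick the first one
    # that also satisfies the latency threshold.
    def pick_external_storage(externals, required_capacity,
                              capacity_cost_threshold, latency_threshold):
        for ext in externals:
            capacity_cost = abs(ext["capacity"] - required_capacity) / required_capacity
            if capacity_cost <= capacity_cost_threshold and ext["latency_ms"] <= latency_threshold:
                return ext["name"]
        return None

    externals = [
        {"name": "array-a", "capacity": 40, "latency_ms": 9.0},
        {"name": "array-b", "capacity": 22, "latency_ms": 2.5},
    ]
    print(pick_external_storage(externals, required_capacity=20,
                                capacity_cost_threshold=0.25, latency_threshold=5.0))
    # -> "array-b"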

US Pat. No. 10,891,161

METHOD AND DEVICE FOR VIRTUAL RESOURCE ALLOCATION, MODELING, AND DATA PREDICTION

Advanced New Technologies...

1. A computer-implemented method, comprising:receiving, from a plurality of data providers, a plurality of user evaluation results of a plurality of users, generated by a plurality of user evaluation models, respectively, wherein each user evaluation model is trained on a corresponding training sample set by
generating, for a corresponding user data sample, a respective data feature vector comprising data feature values, wherein the data feature values correspond to data features of a plurality of dimensions that are extracted from the user data sample, and
constructing a target matrix based on the data feature vectors generated for the user data sample;
constructing a plurality of risk evaluation model training samples from the user evaluation results, wherein each risk evaluation model training sample of the plurality of risk evaluation model training samples comprises a respective subset of the user evaluation results corresponding to a first user of the plurality of users;
generating a label for each risk evaluation model training sample of the plurality of risk evaluation model training samples based on an actual service execution status of the first user to provide a plurality of labels;
training a risk evaluation model based on the plurality of risk evaluation model training samples and the plurality of labels, wherein training the risk evaluation model comprises setting a plurality of variable coefficients, each variable coefficient specifying a contribution level of a corresponding data provider of the plurality of data providers; and
allocating virtual resources to the plurality of data providers based on the plurality of variable coefficients.

US Pat. No. 10,891,160

RESOURCE PROVISIONING

INTERNATIONAL BUSINESS MA...

1. A method, comprising:generating, by a computer device, a resource provisioning policy for a resource;
receiving, by the computer device, a request for an allocation of the resource from an account; and
applying, by the computer device, the resource provisioning policy to the request based on receiving the request;
wherein the resource is a database in a remote resource device and the request is for allocating an amount of storage space in the database, and
wherein the generating comprises analyzing approved requests for the allocation of the resource and denied requests for the allocation of the resource contained in a historical request database,
further comprising filtering the denied requests stored in the historical request database, prior to executing the analyzing on the denied requests, to determine whether the denied request is related to the generation of the resource provisioning policy to eliminate from analysis requests which were denied for reasons not related to the generating of the resource provisioning policy.

US Pat. No. 10,891,159

ACTIVATION POLICIES FOR WORKFLOWS

salesforce.com, inc., Sa...

1. An article of manufacture comprising:a non-transitory machine-readable storage medium that provides instructions that, if
executed by a machine, will cause the machine to perform operations comprising:
receiving a definition of a workflow, wherein the receiving includes:
receiving data defining an input set for the workflow;
receiving data defining activities and a flow of the activities for the
workflow, wherein the data includes an activation policy for at least a particular activity of the activities;
executing the workflow including the particular activity, wherein executing the
particular activity includes:
repetitively performing the following:
receiving items from the input set, individually or in a set of more than one, for processing by the particular activity;
grouping into a current subset the items received that have not been previously grouped into a subset according to the activation policy;
performing an action of the activity on the current subset; and
sending each of the items in the current subset to a next activity in the workflow.
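
A small Python sketch of the activation-policy loop, assuming a hypothetical policy of "activate once at least batch_size ungrouped items have arrived"; the claim covers other policies, so the rule and names here are illustrative only.

    # Items arrive individually or in sets; items not yet grouped are gathered into a
    # current subset per the activation policy, the activity's action runs on that
    # subset, and each item is then sent to the next activity in the workflow.
    def run_activity(arrivals, batch_size, action, next_activity):
        pending = []
        for batch in arrivals:                  # items received individually or in sets
            pending.extend(batch)
            while len(pending) >= batch_size:   # activation policy: enough ungrouped items
                current_subset = pending[:batch_size]
                pending = pending[batch_size:]
                action(current_subset)
                for item in current_subset:
                    next_activity(item)

    run_activity(
        arrivals=[[1], [2, 3], [4, 5, 6]],
        batch_size=2,
        action=lambda subset: print("processing", subset),
        next_activity=lambda item: print("  forwarding", item),
    )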

US Pat. No. 10,891,158

TASK SCHEDULING METHOD AND APPARATUS

Huawei Technologies Co., ...

1. A task scheduling method, comprising:adding, by one or more processors, according to a correspondence between a plurality of tasks and M data blocks accessed by the plurality of tasks, each task of the plurality of tasks to one of M task queues associated with a data block that corresponds to the task being added, wherein the M data blocks correspond one-to-one to the M task queues; and
executing, by the one or more processors, N threads to concurrently perform tasks in N task queues of the M task queues, wherein each of the N threads executes a task in one task queue of the N task queues, different threads of the N threads execute tasks in different task queues, and 2≤N≤M.
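
A minimal Python sketch of this scheduling scheme under stated assumptions: one queue per data block, each of N threads bound to a distinct queue, so tasks touching the same block never run in parallel. Only N of the M queues are drained in one pass, mirroring the claim language.

    # Tasks are appended to the queue of the data block they access (M queues, one per
    # block); N worker threads (2 <= N <= M) each drain a different queue.
    import threading
    from collections import defaultdict

    def schedule(tasks, n_threads):
        queues = defaultdict(list)
        for task, block in tasks:                 # correspondence: task -> data block
            queues[block].append(task)
        blocks = list(queues)                     # M task queues, one per data block
        assert 2 <= n_threads <= len(blocks)      # 2 <= N <= M

        def worker(block):
            for task in queues[block]:            # one thread drains one queue
                task(block)

        # N threads, each bound to a different one of N queues.
        threads = [threading.Thread(target=worker, args=(b,)) for b in blocks[:n_threads]]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    schedule([(lambda b: print("task on", b), blk) for blk in "ABCA"], n_threads=2)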

US Pat. No. 10,891,157

PERFORMANCE TOWARDS COMPLETION OF A TASK LIST THROUGH EMPLOYMENT OF A SWARM

1. A system comprising:a communication component that establishes a communication link with at least one element of a swarm, where a first element and a second element are part of the swarm and where the swarm at least partially accomplishes a task list collaboratively and autonomously; and
a management component that causes removal of the second element from the swarm, where the removal of the second element is forwarded through use of the communication link,
where the communication component, the management component, or a combination thereof are embodied, at least in part, by way of non-software.

US Pat. No. 10,891,156

INTELLIGENT DATA COORDINATION FOR ACCELERATED COMPUTING IN CLOUD ENVIRONMENT

EMC IP Holding Company LL...

1. A method, comprising:performing an offline process which comprises:
executing a task;
determining data flow patterns which occur between resources as a result of data flow operations that are performed during the execution of the task; and
storing the determined data flow patterns in a knowledge base;
executing the task on a first computing node;
monitoring requests issued by the executing task;
intercepting requests issued by the executing task which correspond to data flow operations to be performed as part of the task execution, wherein the intercepted requests comprise at least one of requests for prefetching data from a memory, requests for loading data into a memory, requests for copying data from a first memory to a second memory, and requests for communicating data to a second computing node; and
asynchronously executing the intercepted requests at scheduled times to coordinate intra-node data flow between resources on the first computing node and inter-node data flow between resources on the first computing node and the second computing node, wherein asynchronously executing the intercepted requests at scheduled times comprises:
enqueuing the intercepted requests;
utilizing the determined data flow patterns in the knowledge base to schedule times for executing the enqueued requests in a manner which coordinates intra-node data flow between resources on the first computing node and inter-node data flow between resources on the first computing node and the second computing node; and
dispatching a given enqueued request for execution by an asynchronous background thread according to a scheduled time for the given enqueued request.

US Pat. No. 10,891,155

WEARABLE DEVICE TASK OFFLOADING TO CONSERVE PROCESSING RESOURCES

McAfee, LLC, San Jose, C...

1. A wearable device, comprising:a memory element operable to store electronic code; and
a processor operable to execute instructions associated with the electronic code, said instructions for leveraging a companion device when in proximity to the wearable device to conserve resources of the wearable device, such that the wearable device is configured to
receive a token from the companion device via a message channel between the wearable device and the companion device, wherein the message channel is Bluetooth or Near Field Communication (NFC);
output the token via a display, a haptic output, or a speaker, the token having a limited time to live;
configure a push message channel mapped to the token;
receive a configuration of the wearable device over the push message channel;
receive a request to perform a task;
receive metadata with the request;
determine a priority level associated with the request, based on the metadata;
perform the task, if the priority level is high;
queue the task, if the priority level is low;
determine that the companion device is in proximity to the wearable device;
configure the message channel between the wearable device and the companion device when the companion device is in proximity to the wearable device; and
perform the task using the message channel and one or more resources of the companion device.
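
An illustrative sketch of the priority handling and deferred offload, with invented priority values and callback names (run_local, run_on_companion); the token, push-channel, and Bluetooth/NFC setup steps are omitted.

    # High-priority tasks run on the wearable immediately; low-priority tasks are
    # queued and performed over the message channel with the companion device's
    # resources once the companion is in proximity.
    queued = []

    def receive_task(task, metadata, companion_nearby, run_local, run_on_companion):
        priority = metadata.get("priority", "low")
        if priority == "high":
            run_local(task)                       # perform the task now
        else:
            queued.append(task)                   # queue the task for offload
        if companion_nearby:
            while queued:
                run_on_companion(queued.pop(0))   # offload over the message channel

    receive_task("sync-contacts", {"priority": "low"}, companion_nearby=False,
                 run_local=print, run_on_companion=lambda t: print("offloaded:", t))
    receive_task("show-alert", {"priority": "high"}, companion_nearby=True,
                 run_local=print, run_on_companion=lambda t: print("offloaded:", t))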

US Pat. No. 10,891,154

HOSTING VIRTUAL MACHINES ON A SECONDARY STORAGE SYSTEM

Cohesity, Inc., San Jose...

1. A method comprising:hosting at least a portion of a virtual machine on at least one node of a first subset of a plurality of nodes of a secondary storage system, wherein the secondary storage system stores data associated with one or more primary storages, and wherein, in a first state of a plurality of states, the virtual machine comprises a plurality of portions that are distributed between the first subset of the plurality of nodes of the secondary storage system;
moving to one or more selected nodes of a second subset of the plurality of nodes of the secondary storage system to host the plurality of portions of the virtual machine in a second state of the plurality of states the plurality of portions of the virtual machine hosted on the first subset of the plurality of nodes of the secondary storage system, wherein the plurality of portions of the virtual machine include one or more executable portions and one or more data portions; and
running the virtual machine in the second state of the plurality of states on the one or more selected nodes of the second subset of the plurality of nodes of the secondary storage system, wherein, in the second state of the plurality of states, a first node of the second subset of the plurality of nodes of the secondary storage system is selected to be configured to store one executable portion of the one or more executable portions of the virtual machine and a second node of the second subset of the plurality of nodes of the secondary storage system is selected to be configured to store one data portion of the one or more data portions of the virtual machine.

US Pat. No. 10,891,153

SYSTEM AND METHOD FOR SWITCHING FILE SYSTEMS UNDERNEATH WORKING PROCESSES

Virtuozzo International G...

1. A method for switching file systems underneath working processes, the method comprising:identifying at least one process that is using a first file on a first file system, the at least one process running on an operating system of a computing device;
temporarily suspending an execution of the at least one process;
identifying an existing reference of the at least one process to the first file on the first file system, the existing reference used by the at least one process to access the first file on the first file system during execution of the at least one process;
replacing, when the execution of the at least one process is suspended, the existing reference of the at least one process with a new reference for the at least one process to a second file on a second file system, the second file on the second file system corresponding to the first file on the first file system, wherein the replacing the existing reference comprises replacing one of a file descriptor, a file mapping, a reference to a current working directory, a reference to a root directory, and a reference to an executable file; and
resuming the execution of the at least one process, wherein, after resuming the execution of the at least one process, the new reference is used by the at least one process to access the second file on the second file system during execution of the at least one process.

US Pat. No. 10,891,152

BACK-END TASK FULFILLMENT FOR DIALOG-DRIVEN APPLICATIONS

Amazon Technologies, Inc....

1. A system, comprising:one or more processors; and
memory storing program instructions that, if executed, cause the one or more processors to perform a method comprising:
determining whether a value of a first parameter of a first application is to be obtained using a natural language interaction;
identifying, based at least in part on received input, a first service of a plurality of services, wherein the first service is to be used to perform a first task associated with the first parameter; and
based, at least in part, on determining that the value of the first parameter of the first application is to be obtained using the natural language interaction, generating one or more portions of application code for the first application, wherein the one or more generated portions of application code are associated with obtaining the value of the first parameter and invoking the first service.

US Pat. No. 10,891,151

DEPLOYMENT AND MANAGEMENT PLATFORM FOR MODEL EXECUTION ENGINE CONTAINERS

ModelOp, Inc., Chicago, ...

1. A system, comprising:a processor configured to:
generate a connect container, wherein the connect container provides a discovery service for a plurality of analytic engines for determining an IP address of each of the plurality of analytic engines for communication via an interface, wherein the plurality of analytic engines includes a first analytic engine and a second analytic engine, wherein the connect container is in communication with a model manager data store for storing descriptions of each of a plurality of analytic models, wherein the plurality of analytic models includes a first analytic model for processing data and a second analytic model for processing data;
generate a fleet controller container, wherein the fleet controller container binds the descriptions of the plurality of analytic models in the model manager data store to run-time abstractions in each of the plurality of analytic engines and orchestrates communication between each of the plurality of analytic engines that can be configured in a pipeline of engines;
receive at the interface the first analytic model for processing data and the second analytic model for processing data, wherein the first and second analytic models comprise at least one of the following: scientific computing, numerical computing, and analytical computing;
generate a first virtualized execution environment for the first analytic engine that includes executable code to implement the first analytic model for processing a first input data stream and a first output port comprising a first schema and a first operating configuration, wherein the first analytic engine is in communication with the connect container;
generate a second virtualized execution environment for the second analytic engine that includes executable code to implement the second analytic model for processing a second input data stream and a first input port comprising a second schema and a second operating configuration;
deploy the first virtualized execution environment for the first analytic engine in a first infrastructure system and the second virtualized execution environment for the second analytic engine in the first infrastructure system; and
route a first output data stream from the first virtualized execution environment for the first analytic engine to the second input data stream based at least in part on checking the first schema and the second schema; and
a memory coupled to the processor and configured to provide the processor with instructions.

US Pat. No. 10,891,150

STORAGE CONTROL METHOD AND STORAGE CONTROLLER FOR USER INDIVIDUAL SERVICE ENVIRONMENT

GLUESYS CO., LTD., Anyan...

1. A storage control method for a virtualization environment, the storage control method comprising:analyzing I/O workload patterns by using block I/O information for virtual machines;
adjusting an over-provisioning proportion for a virtual storage device allotted to each virtual machine according to the I/O workload pattern for each of the virtual machines; and
allotting an over-provisioning space for each of the virtual storage devices according to the over-provisioning proportion,
wherein the block I/O information includes at least one of a block size, access method, and a read/write command,
wherein the analyzing of the I/O workload patterns comprises analyzing an average block size, a ratio of random accesses to sequential accesses, and a ratio of write commands to read commands, for block I/O within a preset section for each of the virtual machines,
wherein the adjusting of the over-provisioning proportion comprises increasing the over-provisioning proportion if the ratio of write commands to read commands increases.
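
A hedged sketch of the workload analysis and proportion adjustment: the three claimed ratios are computed over a window of block I/O records, and the over-provisioning proportion grows with the write-to-read ratio. The specific scaling constants are assumptions; the claim only requires the increase.

    # Analyze a VM's block I/O window (average block size, random:sequential ratio,
    # write:read ratio) and raise the over-provisioning proportion for write-heavy VMs.
    def analyze(block_ios):
        sizes = [io["size"] for io in block_ios]
        writes = sum(io["op"] == "write" for io in block_ios)
        reads = len(block_ios) - writes
        randoms = sum(io["access"] == "random" for io in block_ios)
        return {
            "avg_block_size": sum(sizes) / len(sizes),
            "random_to_seq": randoms / max(1, len(block_ios) - randoms),
            "write_to_read": writes / max(1, reads),
        }

    def over_provisioning_proportion(pattern, base=0.10):
        # Hypothetical rule: more write-heavy workloads get more over-provisioning space.
        return min(0.5, base + 0.05 * pattern["write_to_read"])

    ios = [{"size": 4096, "op": "write", "access": "random"},
           {"size": 8192, "op": "read", "access": "sequential"},
           {"size": 4096, "op": "write", "access": "random"}]
    pattern = analyze(ios)
    print(pattern, over_provisioning_proportion(pattern))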

US Pat. No. 10,891,149

AUTHENTICATION AND INFORMATION SYSTEM FOR REUSABLE SURGICAL INSTRUMENTS

Covidien LP, Mansfield, ...

1. A surgical system, comprising:an actuation assembly including a controller having at least one program and a memory; and
a loading unit releasably securable to the actuation assembly, the loading unit including: a tool assembly mounted for articulation relative to the actuation assembly, and
an articulation member for articulation of the tool assembly relative to the actuation assembly,
the tool assembly including:
a removable and replaceable staple cartridge assembly, and
at least one chip assembly disposed within the staple cartridge assembly and including a chip configured to receive data from and transmit data to the controller regarding a position of the articulation member.

US Pat. No. 10,891,148

METHODS AND SYSTEMS FOR IDENTIFYING APPLICATION COMPONENTS IN DISTRIBUTED COMPUTING FACILITIES

VMware, Inc., Palo Alto,...

1. A distributed-application-discovery system comprising multiple information-collection and information-processing components that run within physical computer systems to discover the distributed applications currently executing within a distributed computing facility, the multiple components including:agent processes that each
runs in an application-execution-environment node within a computer system within the distributed computing facility; and
periodically
collects information about the application-execution-environment node, and
transmits the collected information to information-processing components;
the information-processing components that
maintain state information for each application-execution-environment node,
extract state-related information from the collected information received from the agent processes,
generate application-execution-environment-node state changes for the application-execution-environment nodes, and
compile application-execution-environment node characteristics from the state changes; and
an application monitor that
periodically
uses the compiled application-execution-environment node characteristics to discover distributed applications currently executing within the application-execution-environment nodes,
generates application changes, and
persistently stores the application changes.

US Pat. No. 10,891,147

EXTENDING APPLICATION TIERS ACROSS A LARGE-SCALE HETEROGENEOUS DATA CENTER

Cisco Technology, Inc., ...

1. A method of extending application tiers across virtual machine management (VMM) domains, the method comprising:defining a first VMM domain associated with a first VMM system type;
defining a second VMM domain associated with a second VMM system type, the second VMM system type different from the first VMM system type;
defining a first endpoint group associated with a first application tier of an application;
defining a second endpoint group associated with a second application tier of the application;
associating the first endpoint group with each of the first and second VMM domains;
associating the second endpoint group with each of the first and second VMM domains;
defining a first tenant profile, the first tenant profile associated with the first endpoint group and at least one third endpoint group forming a first tenant that is mapped to the first and second VMM domains;
defining a second tenant profile, the second tenant profile associated with the second endpoint group and at least one fourth endpoint group forming a second tenant that is associated with the first and second VMM domains; and
attaching a network interface controller to each of the first and second endpoint groups to direct traffic of the first and second application tiers to any appropriate endpoint group from among the first endpoint group and the second endpoint group within any of the first VMM domain and the second VMM domain, to run the application,
wherein the second VMM domain is a different type from the first VMM domain.

US Pat. No. 10,891,146

ACCESS CONTROL AND CODE SCHEDULING

ARM IP Limited, Cambridg...

1. A method of processing data using a data processing apparatus having a plurality of privilege modes including a first privilege mode and a second privilege mode, said first privilege mode giving rights of access that are not available in said second privilege mode, said method comprising the steps of:executing application code in said second privilege mode to generate a function call to hypervisor code to perform a secure function using said rights of access;
upon generation of said function call, executing hypervisor code in said first privilege mode to at least control execution of said secure function; and
executing scheduling code in said second privilege mode to control scheduling of execution of said application code in said second privilege mode by said data apparatus and executing scheduling code in said second privilege mode to control scheduling of execution of said hypervisor code in said first privilege mode by said data processing apparatus by determining, in the second privilege mode, which of a plurality of sections of said hypervisor code is to execute in the first privilege mode after a scheduling event,
wherein said hypervisor code calls delegated code executing in said second privilege mode as part of servicing said function call.

US Pat. No. 10,891,145

PROCESSING PRE-EXISTING DATA SETS AT AN ON DEMAND CODE EXECUTION ENVIRONMENT

Amazon Technologies, Inc....

1. A system for processing a plurality of data items within a data source via an on-demand code execution environment, the system comprising:a non-transitory data store configured to implement:
an in-process data cache configured to store indications of which data items, from the plurality of data items, are awaiting processing at the on-demand code execution environment via execution of computer-executable code representing a task; and
a results data cache configured to store an indication of which data items, from the plurality of data items, have been processed at the on-demand code execution environment by successful executions of the computer-executable code;
one or more first processors configured to implement a user interface subsystem that obtains, from a user computing device, information identifying the data source and the task, on the on-demand code execution environment, to utilize in processing the plurality of data items;
one or more second processors configured to implement a data retrieval subsystem that:
retrieves a first set of data items, from the plurality of data items, from the data source; and
for individual data items of the first set of data items retrieved from the data source:
generates identifiers for the individual data items retrieved from the data source;
determines, from the identifiers, that the individual data items are not identified within the in-process data cache or the results data cache; and
enqueues the individual data items in the in-process data cache; and
one or more third processors configured to implement a call generation subsystem that, in response to detecting that the in-process data cache is not empty:
submits one or more calls to the on-demand code execution environment to process each data item enqueued in the in-process data cache by execution of the computer-executable code;
determines that the task successfully processed one or more data items in the in-process data cache; and
responsive to determining that the task successfully processed the one or more data items:
removes the one or more data items from the in-process data cache; and
places the one or more data items in the results data cache;
wherein the user interface subsystem further transmits a notification to the user computing device when the plurality of data items have been processed at the on-demand code execution environment.
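
An illustrative sketch of the two-cache bookkeeping, with hash() standing in for identifier generation and a callable standing in for the on-demand task; failure handling and the completion notification are intentionally left out, and all names are assumptions.

    # Items pulled from the data source are enqueued in an in-process cache unless
    # already present there or in the results cache; items the task processes
    # successfully move from the in-process cache to the results cache.
    def process_data_source(data_items, task):
        in_process, results = {}, {}

        # Data retrieval subsystem: skip items already identified in either cache.
        for item in data_items:
            item_id = hash(item)                      # generated identifier
            if item_id not in in_process and item_id not in results:
                in_process[item_id] = item            # enqueue in the in-process cache

        # Call generation subsystem: submit each enqueued item to the task.
        for item_id in list(in_process):
            if task(in_process[item_id]):             # task processed the item successfully
                results[item_id] = in_process.pop(item_id)
        return list(results.values()), list(in_process.values())

    print(process_data_source(["a", "b", "a"], task=lambda x: True))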

US Pat. No. 10,891,144

PERFORMING LOGICAL NETWORK FUNCTIONALITY WITHIN DATA COMPUTE NODES

NICIRA, INC., Palo Alto,...

1. A method comprising:at a first managed forwarding element executing within a first data compute node (DCN) of a plurality of DCNs that operate on virtualization software executing on a first host computer:
from an application also executing within the first DCN, receiving a packet destined for a second DCN (i) that is logically connected to the first DCN through a set of logical forwarding elements of a logical network and (ii) that operates on virtualization software executing on a second host computer;
performing forwarding processing on the packet (i) to identify a particular logical forwarding element in the set of logical forwarding elements, a logical port of which is coupled to the second DCN, and (ii) to identify a second managed forwarding element that implements the logical port of the particular logical forwarding element; and
forwarding the packet to the second managed forwarding element.

US Pat. No. 10,891,143

SYSTEM, METHOD AND INTERACTIVE GUI FOR CREATING ON-DEMAND USER-CUSTOMIZED INSTRUMENTS

Raisin Technology Europe,...

1. A system comprising:at least one computer server configured to communicate with one or more entity systems and at least one user device, the at least one computer server comprising a non-transitory memory and a processor, the at least one computer server configured to:
receive, via one or more data feed interfaces, one or more baseline data structures from among the one or more entity systems;
generate an interactive graphical user interface (GUI) on a display of the at least one user device, the interactive GUI comprising one or more screens configured to display the one or more baseline data structures and one or more user adjustment tools for customizing characteristics of a displayed structure among the one or more baseline data structures;
generate, by a dynamic filtering component of the interactive GUI, a list of baseline data structures among the one or more baseline data structures, said list being generated to be particular to the at least one user device according to preprogrammed rules, said list including said displayed structure;
display, by the interactive GUI, said list of baseline data structures, including the displayed structure, on said one or more screens;
receive, from the at least one user device via the interactive GUI, at least one adjustment indication associated with the displayed structure via the one or more user adjustment tools;
adjust, responsive to the at least one adjustment indication, at least one characteristic among the characteristics of the displayed structure;
update, responsive to the adjusting of the at least one characteristic, at least one other characteristic among the characteristic of the displayed structure, thereby reflecting an impact of said adjusting on said at least one other characteristic;
dynamically display, via the interactive GUI on at least one among the one or more screens, the adjusting of the at least one characteristic and the impact of said adjusting on said at least one other characteristic as said adjusting occurs;
receive, from the at least one user device via the interactive GUI, input comprising a confirmation indication; and
create a user-customized data structure responsive to the confirmation indication received from the at least one user device via the interactive GUI.

US Pat. No. 10,891,142

METHOD AND DEVICE FOR PRELOADING APPLICATION, STORAGE MEDIUM, AND TERMINAL DEVICE

GUANGDONG OPPO MOBILE TEL...

1. A method for preloading an application, comprising:collecting, in a preset collection period, historical state feature information of a terminal device at each time point at which a target application is closed, as samples of the target application;
monitoring whether the target application is launched within a preset time period starting from the each time point at which the target application is closed;
recording monitoring results as sample labels of the samples;
acquiring current state feature information of the terminal device, in response to the target application being detected to be closed;
comparing the current state feature information with the historical state feature information of the terminal device when the target application was closed, the historical state feature information corresponding to historical usage regularities of the target application;
determining, from within the historical state feature information, target historical state feature information closest to the current state feature information according to a comparison result; and
preloading the target application, in response to determining that the target application is about to be launched again according to a historical usage regularity corresponding to the target historical state feature information;
wherein comparing the current state feature information with the historical state feature information of the terminal device when the target application was closed comprises calculating distances between the current state feature information and each of the historical state feature information;
wherein determining, from within the historical state feature information, the target historical state feature information closest to the current state feature information according to the comparison result comprises determining historical state feature information corresponding to the smallest distance as target historical state feature information; and
wherein preloading the target application, in response to determining that the target application is about to be launched again comprises preloading the target application, in response to a sample label corresponding to the target historical state feature information indicating “launch”.
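
A minimal sketch of the comparison and preload decision, assuming Euclidean distance over a hypothetical three-feature state vector and a stored per-sample launch label; the real feature set and distance metric are not specified here.

    # Compare the current device state with historical states recorded when the target
    # app was closed, pick the nearest sample, and preload if its label says "launch".
    import math

    samples = [  # (historical state features, sample label)
        ({"hour": 8.0, "battery": 0.9, "wifi": 1.0}, "launch"),
        ({"hour": 23.0, "battery": 0.2, "wifi": 0.0}, "no-launch"),
    ]

    def distance(a, b):
        return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

    def should_preload(current_state):
        nearest = min(samples, key=lambda s: distance(current_state, s[0]))
        return nearest[1] == "launch"

    print(should_preload({"hour": 9.0, "battery": 0.8, "wifi": 1.0}))   # True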

US Pat. No. 10,891,141

PLUGIN LOADING METHOD AND APPARATUS, TERMINAL, AND STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. A method for plugin loading, the method comprising:when a plugin is started, obtaining, by a device comprising a memory and a processor in communication with the memory, an identifier of a plugin component of the plugin from a threading module, the threading module being a parent class component of a host module;
recording, by the device, the identifier of the plugin component, and replacing the identifier of the plugin component with an identifier of a host component of an application program, the application program being a host program of the plugin;
sending, by the device, the identifier of the host component to the threading module, to perform system permission verification;
receiving, by the device, runnable notification information from the threading module when the identifier of the host component passes the system permission verification, wherein the host component of the application program is pre-registered with the host program and has a permission to run from the host program;
in response to the received runnable notification information, replacing, by the device, the identifier of the host component with the identifier of the plugin component according to the recorded identifier of the plugin component; and
sending, by the device, the identifier of the plugin component to the threading module, to load the plugin.

US Pat. No. 10,891,140

DYNAMIC CONFIGURATION MANAGEMENT

AMAZON TECHNOLOGIES, INC....

1. A computer-implemented method, comprising:sending, from a monitoring service to a host machine having a hardware offload card installed thereon, a request for configuration information for the hardware offload card;
receiving, to the monitoring service, a configuration snapshot for the hardware offload card, the configuration snapshot including current configuration values for a processor and memory of the hardware offload card;
determining, based upon a type of the hardware offload card and an intended function of the hardware offload card, a configuration model including a set of expected configuration values;
determining a discrepancy between the set of expected configuration values of the configuration model and the current configuration values from the configuration snapshot; and
causing the expected configuration values corresponding to the discrepancy to be automatically applied to the hardware offload card to eliminate the discrepancy.
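
An illustrative sketch of the snapshot-versus-model reconciliation; the configuration model table, its keys, and the apply callback are hypothetical, not the monitoring service's actual interface.

    # Compare the current values reported in a hardware offload card's configuration
    # snapshot against the expected values of a model chosen by card type and intended
    # function, and apply every expected value that differs.
    CONFIG_MODELS = {
        ("network-offload", "packet-forwarding"): {"mtu": 9000, "sriov": "on"},
    }

    def reconcile(card_type, intended_function, snapshot, apply):
        expected = CONFIG_MODELS[(card_type, intended_function)]
        discrepancy = {k: v for k, v in expected.items() if snapshot.get(k) != v}
        for key, value in discrepancy.items():
            apply(key, value)                 # push the expected value to the card
        return discrepancy

    print(reconcile("network-offload", "packet-forwarding",
                    snapshot={"mtu": 1500, "sriov": "on"},
                    apply=lambda k, v: print("setting", k, "=", v)))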

US Pat. No. 10,891,139

PROVIDING FIRMWARE SPECIFIC INFORMATION VIA ACPI TABLES

American Megatrends Inter...

1. A computer-implemented method, comprising:executing a firmware of a computing system to perform a boot process for the computing system;
during execution of the firmware, constructing a table in a memory of the computing system, the table comprising firmware specific information defining attributes of the firmware;
following performance of the boot process, receiving a request to execute an application;
responsive to the request, retrieving the firmware specific information from the table; and
restricting functionality of the application based upon the firmware specific information.

US Pat. No. 10,891,138

SECURE START SYSTEM FOR AN AUTONOMOUS VEHICLE

UATC, LLC, San Francisco...

1. A secure start system for an autonomous vehicle, the secure start system comprising:a plurality of encrypted drives that, when decrypted, enable one or more functions of the autonomous vehicle; and
a communications router comprising an input interface to receive a boot-loader to enable network communications with a backend system;
wherein the secure start system (i) utilizes a tunnel key from the backend system to establish a private communications session with a backend data vault, and (ii) retrieves a set of decryption keys from the backend data vault, via the private communications session, to decrypt the plurality of encrypted drives and enable the one or more functions of the autonomous vehicle.

US Pat. No. 10,891,137

MAKING AVAILABLE INPUT/OUTPUT STATISTICS FOR DATA SETS OPENED DURING INITIAL PROGRAM LOAD

INTERNATIONAL BUSINESS MA...

7. A system comprising:a memory;
a processor communicatively coupled to the memory, the processor operable to execute instructions stored in the memory, the instructions causing the processor to:
receive during system initialization a data extent block associated with a data set opened during system initialization;
build a data set block for the data extent block, the building comprising adding the data set block to a chained list of data set blocks;
create during system initialization a data set statistics block for the data extent block, the data set statistics block linked to the data extent block via the data set block; and
store input/output (I/O) statistics of the data set in the data set statistics block, wherein the data set statistics block is accessed via the data set block and the accessing includes traversing the chained list to locate the data set block for the data extent block.

US Pat. No. 10,891,136

DATA TRANSMISSION BETWEEN MEMORY AND ON CHIP MEMORY OF INFERENCE ENGINE FOR MACHINE LEARNING VIA A SINGLE DATA GATHERING INSTRUCTION

Marvell Asia Pte, Ltd., ...

1. A system to support data gathering for a machine learning (ML) operation, comprising:a memory unit configured to maintain data for the ML operation, wherein the memory unit includes a plurality of memory blocks each accessible via a memory address;
an inference engine comprising a plurality of processing tiles, wherein each processing tile comprises at least:
an on-chip memory (OCM) configured to load and maintain data for local access by components in the processing tile; and
one or more processing units configured to perform one or more computation tasks of the ML operation on the data in the OCM;
a core configured to
program components of the plurality of processing tiles of the inference engine by translating one or more commands from a host into a set of programming instructions for the ML operation according to an instruction set architecture (ISA) designed for data processing in a data-path; and
specify one or more processing tiles via a programming instruction, wherein the programming instruction identifies the one or more OCMs of the one or more processing tiles to have data written into;
a programmable processor configured to stream data between the memory unit and the OCMs of the plurality of processing tiles of the inference engine wherein the programmable processor is configured to perform a data gathering operation via a single data gathering instruction of the ISA to
gather data from one or more memory blocks of the plurality of memory blocks of the memory unit for the ML operation at the same time; and
write the gathered data into the OCM of each of the specified one or more processing tiles for the one or more processing units of the specified one or more processing tiles to perform the one or more computation tasks of the ML operation.

US Pat. No. 10,891,135

REGISTER RENAMING OF A SHAREABLE INSTRUCTION OPERAND CACHE

SAMSUNG ELECTRONICS CO., ...

1. A processing unit, comprising:a physical register file (PRF) that stores operands;
an instruction execution unit (EU) including an operand cache (OC) that stores a copy of at least one frequently used operand stored in the PRF, the EU to process instructions of a software program using operands obtained from the PRF, or to process instructions using operands obtained from the OC;
an OC renaming unit (OC REN) that operates in a first mode or a second mode, in the first mode the OC REN indicates to the EU to process instructions using operands obtained from the OC, and in the second mode the OC REN indicates to the EU to process instructions using operands obtained from the PRF; and
an OC control unit (OC CTL) that determines an estimated power usage and controls the OC REN to operate, based on an evaluation by the OC CTL of a difference between an amount of power used to read a register in the PRF and an amount of power used to read a register in the OC in which the difference is multiplied by a number of currently executed instructions that read from the OC compared to an amount of power used to write a register in the OC multiplied by a number of currently executed instructions that write to the OC, in the first mode as a result of the estimated power usage indicating that processing instructions using operands obtained from the OC uses less power than using operands obtained from the PRF, and in the second mode as a result of the estimated power usage indicating that processing the instructions using operands obtained from the PRF uses less power than using operands obtained in the OC.
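
The power comparison described in the claim can be restated as a short calculation; the numeric values below are made up, and the function is only a sketch of the inequality the OC CTL evaluates.

    # The read-energy saved by serving OC reads instead of PRF reads is weighed against
    # the energy spent writing registers into the OC; the result selects the mode.
    def choose_mode(prf_read_power, oc_read_power, oc_write_power, oc_reads, oc_writes):
        savings = (prf_read_power - oc_read_power) * oc_reads
        cost = oc_write_power * oc_writes
        return "first mode (use OC)" if savings > cost else "second mode (use PRF)"

    print(choose_mode(prf_read_power=5.0, oc_read_power=1.0, oc_write_power=2.0,
                      oc_reads=100, oc_writes=40))
    # savings = 400, cost = 80 -> "first mode (use OC)"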

US Pat. No. 10,891,134

METHOD AND APPARATUS FOR EXECUTING INSTRUCTION FOR ARTIFICIAL INTELLIGENCE CHIP

BEIJING BAIDU NETCOM SCIE...

1. A method for executing an instruction for an artificial intelligence chip, the artificial intelligence chip comprising at least one general-purpose execution component and at least one special-purpose execution component, the method comprising:receiving descriptive information for describing a neural network model sent by a central processing unit, the descriptive information including at least one operation instruction;
analyzing the descriptive information to acquire the at least one operation instruction; and
determining, for an operation instruction of the at least one operation instruction, a special-purpose execution component executing the operation instruction, and locking the determined special-purpose execution component; sending the operation instruction to the determined special-purpose execution component; and unlocking the determined special-purpose execution component in response to receiving a notification for instructing the operation instruction being completely executed.

US Pat. No. 10,891,133

CODE-SPECIFIC AFFILIATED REGISTER PREDICTION

INTERNATIONAL BUSINESS MA...

1. A computer system for facilitating processing within a computing environment, the computer system comprising:a memory; and
a processor in communication with the memory, wherein the computer system is configured to perform a method, said method comprising:
determining for a unit of code whether the unit of code is a candidate for affiliated register prediction, wherein the determining employs a code specific indicator specific to the unit of code, wherein the code specific indicator specific to the unit of code is part of configuration information associated with the unit of code;
loading into a selected location a location identifier of an affiliated register, based on determining the unit of code is a candidate for affiliated register prediction; and
employing, based on the loading, the affiliated register in speculative processing.

US Pat. No. 10,891,132

FLOW CONVERGENCE DURING HARDWARE-SOFTWARE DESIGN FOR HETEROGENEOUS AND PROGRAMMABLE DEVICES

Xilinx, Inc., San Jose, ...

1. A method, comprising:for an application having a software portion for implementation in a data processing engine (DPE) array of a device and a hardware portion for implementation in programmable logic of the device, performing, using a processor executing a hardware compiler, an implementation flow on the hardware portion based on an interface block solution that maps logical resources used by the software portion to hardware of an interface block coupling the DPE array to the programmable logic;
in response to not meeting a design metric during the implementation flow, providing, using the processor executing the hardware compiler, an interface block constraint to a DPE compiler;
in response to receiving the interface block constraint, generating, using the processor executing the DPE compiler, an updated interface block solution; and
providing the updated interface block solution from the DPE compiler to the hardware compiler.

US Pat. No. 10,891,131

PROCESSORS, METHODS, SYSTEMS, AND INSTRUCTIONS TO CONSOLIDATE DATA ELEMENTS AND GENERATE INDEX UPDATES

Intel Corporation, Santa...

1. A processor comprising:a decode unit to decode an instruction, the instruction to indicate a source packed data that is to include data elements, the instruction to indicate a source mask that is to include mask elements, each of the mask elements to correspond to a different one of the data elements, each of the mask elements to be one of a masked mask element and an unmasked mask element, and the instruction to indicate a general-purpose register which is to have a scalar source value; and
an execution unit coupled with the decode unit, in response to the instruction, to:
store a result packed data in a first destination storage location, wherein in cases where the source packed data is to include one or more masked data elements that are to correspond to one or more masked mask elements disposed within unmasked data elements that are to correspond to unmasked mask elements, the result packed data is to include the unmasked data elements consolidated together without the one or more masked data elements disposed within them; and
store a scalar result value in a second destination storage location that is to reflect a number of the unmasked data elements consolidated together in the result packed data, wherein the scalar result value is to be a sum of the scalar source value and the number of the unmasked data elements consolidated together in the result packed data, wherein the second destination storage location is to be the general-purpose register, and wherein it is implicit to the instruction that the scalar source value is to be overwritten by the scalar result value in the general-purpose register.
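
An illustrative model of the instruction's architectural effect, written in Python rather than as real ISA semantics; zero-padding of the result and the example values are assumptions the claim does not dictate.

    # Unmasked source elements are packed together (masked elements in between are
    # dropped), and the general-purpose register is overwritten with the scalar source
    # value plus the count of packed elements.
    def consolidate(src_elements, mask, scalar_source):
        packed = [e for e, m in zip(src_elements, mask) if m]     # unmasked elements only
        result_packed = packed + [0] * (len(src_elements) - len(packed))
        scalar_result = scalar_source + len(packed)               # running element count
        return result_packed, scalar_result

    print(consolidate([10, 20, 30, 40], mask=[1, 0, 1, 1], scalar_source=5))
    # -> ([10, 30, 40, 0], 8)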

US Pat. No. 10,891,130

IMPLEMENTING A RECEIVED ADD PROGRAM COUNTER IMMEDIATE SHIFT (ADDPCIS) INSTRUCTION USING A MICRO-CODED OR CRACKED SEQUENCE

INTERNATIONAL BUSINESS MA...

1. A computer program product for implementing a received add program counter immediate shift (ADDPCIS) instruction using a micro-coded or cracked sequence, the computer program product comprising a non-transitory computer readable storage medium having program instructions embodied therewith, the program instructions being readable and executable by a processing circuit to cause the processing circuit to:recognize register operand and integer terms associated with the ADDPCIS instruction; and
set a value of a target register associated with the ADDPCIS instruction in accordance with the integer term summed with another term by:
obtaining a next instruction address (NIA),
moving the NIA to a first temporary register of a register file and storing the NIA in the first temporary register,
determining whether register files for function return addresses correspond to a general purpose register or do not correspond to the general purpose register,
moving the NIA from the first temporary register to the general purpose register in an event the function return addresses correspond to the general purpose register,
moving the NIA from the first temporary register to a second temporary register in an event the function return addresses do not correspond to the general purpose register, and
adding a shifted immediate constant to a value stored in a second temporary register.

US Pat. No. 10,891,129

DECENTRALIZED DEVELOPMENT OPERATIONS BLOCKCHAIN SYSTEM

Accenture Global Solution...

1. A method, comprising:detecting, by a first participant node of a distributed ledger network, a tool event token stored on a blockchain, the tool event token generated by a second participant node of the distributed ledger network, the tool event token representative of execution of a devops tool in a toolchain for an integrated devops environment, the tool event token comprising runtime information generated by the devops tool;
generating, in response to detection of the tool event token, a new tool path, the new tool path comprising a starting tool identifier, an ending tool identifier, and an execution time duration, the starting tool identifier corresponding to a preceding devops tool and the ending tool identifier corresponding to a subsequent devops tool in the toolchain, wherein generating the new tool path comprises:
accessing a natural language processing model previously trained based on annotated runtime information;
determining a classification tag based on the natural language processing model and the runtime information; and
including the classification tag in the new tool path;
aggregating the new tool path with a previously generated tool path by:
identifying a sequence of tool paths based on respective starting tool identifiers and respective ending tool identifiers of the tool paths, the sequence of tool paths comprising the new tool path and the previously generated tool path,
determining a total execution time based on a combination of the respective execution time durations of the sequence of tool paths, and
generating an aggregated tool path comprising the total execution time;
storing the aggregated tool path in a repository;
executing a fitness logic to generate a fitness metric based on the aggregated tool path;
storing, in the repository, a mapping between the fitness metric and the aggregated tool path;
accessing a plurality of unique tool paths from the repository that are mapped to respective fitness metrics previously generated based on the fitness logic;
prioritizing the unique tool paths based on the respective fitness metrics;
selecting an optimal tool path based on the prioritized unique tool paths; and
communicating a devops deployment instruction, the devops deployment instruction comprising an instruction to configure an integrated devops environment to communicate with devops tools identified in the optimal tool path.

US Pat. No. 10,891,128

SOFTWARE REGRESSION DETECTION IN COMPUTING SYSTEMS

Microsoft Technology Lice...

1. A method for software regression detection in a computing system, comprising:accessing a dataset having multiple entries each containing data representing an identification of multiple payloads included in a build of a software product executed at a computing device and a corresponding value of a performance metric of executing the build at the computing device, the payloads individually representing a source code change, a feature enablement, or a configuration modification of the software product; and
upon accessing the dataset, at the computing system,
estimating a set of coefficients individually corresponding to one of the multiple payloads using a multiple variable model with the performance metric as a dependent variable and the multiple payloads as independent variables;
identifying at least one of the multiple payloads with a corresponding estimated coefficient whose absolute value is greater than a preset threshold; and
generating and transmitting, from the computing system, a message to a development team associated with submitting the at least one of the multiple payloads, the message indicating to the development team that a software defect that impacts the performance metric of the software product is likely present in the at least one of the payloads.
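
A hedged sketch of the estimation step using an ordinary least-squares fit (requires numpy): builds are encoded as payload indicator vectors, the metric is the dependent variable, and payloads whose coefficient magnitude exceeds a threshold are flagged. The data, the threshold, and the use of plain least squares are assumptions.

    # Fit a multiple-variable model of the performance metric over payload indicators
    # and flag payloads whose estimated coefficient exceeds the threshold in absolute value.
    import numpy as np

    builds = [                       # payloads included in the build -> metric value
        ({"p1"}, 100.0),
        ({"p2"}, 101.0),
        ({"p1", "p2"}, 102.0),
        ({"p1", "p3"}, 130.0),       # p3 appears to regress the metric
        ({"p2", "p3"}, 131.0),
    ]
    payloads = sorted({p for included, _ in builds for p in included})

    X = np.array([[1.0] + [1.0 if p in included else 0.0 for p in payloads]
                  for included, _ in builds])       # intercept + payload indicators
    y = np.array([metric for _, metric in builds])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

    threshold = 10.0
    suspects = [p for p, c in zip(payloads, coeffs[1:]) if abs(c) > threshold]
    print(dict(zip(payloads, np.round(coeffs[1:], 2))), "->", suspects)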

US Pat. No. 10,891,127

CONFIGURING DATA COLLECTION

BANMA ZHIXING NETWORK (HO...

1. A system, comprising:a processor configured to:
receive a configuration file at a device, wherein the configuration file is modifiable and extensible;
detect, with respect to the device, a data collection event, wherein the data collection event is specified by the configuration file, wherein the configuration file further specifies a set of target data information and a corresponding set of target data information providers from which to collect target data in response to the detection of the data collection event;
in response to the detection of the data collection event, collect the target data based at least in part on the set of target data information and the corresponding set of target data information providers, including to:
determine a file path associated with a target data item value corresponding to a target data item identifier included in the set of target data information;
query a target application included in the corresponding set of target data information providers for the target data item value using the file path; and
establish and store a mapping between the target data item identifier and the target data item value; and
control the device based at least in part on the collected target data; and
a memory coupled to the processor and configured to provide the processor with instructions.

US Pat. No. 10,891,126

ON-DEVICE FEATURE AND PERFORMANCE TESTING AND ADJUSTMENT

MX TECHNOLOGIES, INC., L...

1. An apparatus, comprising:an audit module configured to determine one or more capabilities of a mobile device;
an accessibility module configured to determine one or more accessibility capabilities of the mobile device from the determined one or more capabilities of the mobile device, wherein the determined one or more accessibility capabilities of the mobile device comprise one or more of hardware and software features for altering default functionality of the mobile device to assist a user experience for persons with disabilities;
a feature module configured to determine one or more potential features that are executable on the mobile device; and
an adjustment module configured to:
selectively configure, during runtime, the one or more potential features that are executable on the mobile device in response to execution of the one or more potential features on the mobile device being affected by the determined one or more capabilities of the mobile device;
determine that a replacement accessibility feature is compatible and executes properly with the one or more potential features that are executable on the mobile device than a predefined and/or user selected accessibility feature that is in use on the mobile device; and
replace the predefined and/or user selected accessibility feature that is in use on the mobile device with the replacement accessibility feature in response to determining that the replacement accessibility feature is compatible and executes properly with the one or more potential features that are executable on the mobile device than the predefined and/or user selected accessibility feature that is in use on the mobile device.

US Pat. No. 10,891,125

VIDEOGAME PATCH DATA COMPILATION SYSTEM

Electronic Arts Inc., Re...

1. A method for executing a game application, on a client computing device, the method comprising:by one or more processors configured with computer-readable instructions,
executing a game application on a client computing device using application code comprising a function store, wherein the function store comprises one or more function assets, the one or more function assets including at least a first function asset associated with a first game function, wherein the first function asset is compiled machine code and includes a first version identifier; and
during runtime of the game application:
receiving, over a network, a data packet that includes a second function asset that is associated with the first game function, wherein the second function asset is non-compiled code and includes a second version identifier,
updating the function store to include the second function asset as one of the one or more function assets of the function store, wherein said updating occurs without recompiling the game application, and
executing the first game function, wherein said executing the first game function comprises:
identifying the first function asset and the second function asset as being associated with the first game function,
selecting the second function asset instead of the first function asset based, at least in part, on a comparison of the second version identifier to the first version identifier, and
executing the first game function using the second function asset based, at least in part, on said selecting.
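
The runtime behaviour this claim describes, keeping both the shipped compiled function asset and a later hot-loaded one in a function store and picking between them by version identifier at call time, can be illustrated with a small sketch. The `FunctionStore`/`FunctionAsset` names and integer version identifiers are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class FunctionAsset:
    game_function: str        # which game function this asset implements
    version: int              # version identifier
    compiled: bool            # True for shipped machine code, False for hot-loaded code
    impl: Callable[[], str]

class FunctionStore:
    def __init__(self) -> None:
        self._assets: Dict[str, List[FunctionAsset]] = {}

    def add(self, asset: FunctionAsset) -> None:
        # Updating the store does not require recompiling the application.
        self._assets.setdefault(asset.game_function, []).append(asset)

    def execute(self, game_function: str) -> str:
        # Identify every asset associated with the function, then select by
        # comparing version identifiers.
        candidates = self._assets[game_function]
        chosen = max(candidates, key=lambda a: a.version)
        return chosen.impl()

store = FunctionStore()
store.add(FunctionAsset("spawn_loot", 1, True, lambda: "compiled v1 behaviour"))
# Received over the network in a data packet during runtime:
store.add(FunctionAsset("spawn_loot", 2, False, lambda: "patched v2 behaviour"))
print(store.execute("spawn_loot"))   # -> patched v2 behaviour
```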

US Pat. No. 10,891,124

DEPLOYMENT OF PATCHES IN SOFTWARE APPLICATIONS

Oracle International Corp...

1. A method of facilitating deployment of patches in computing systems, the method comprising:identifying specific objects of a plurality of objects of a software application that have been used for processing of commands by examining a usage data indicating usage of each object, wherein said specific objects comprises a first object and said plurality of objects comprises a second object that is not comprised in said specific objects;
checking whether there exists a corresponding patch for each of said specific objects, but not for said second object, wherein objects of said specific objects having said corresponding patch comprises a set of objects;
retrieving patches available for the set of objects; and
applying the patches to the software application.
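
The core of the claim is a filter: consult usage data, keep only the objects that were actually used, and retrieve patches for just those. A toy sketch under that reading; the data shapes below are assumed, not from the patent:

```python
def select_patches(usage_data: dict, available_patches: dict) -> dict:
    """Keep only objects that have actually been used, then return the
    patches that exist for those objects.

    usage_data: object name -> observed use count
    available_patches: object name -> patch identifier
    """
    used_objects = {name for name, uses in usage_data.items() if uses > 0}
    return {name: available_patches[name]
            for name in used_objects if name in available_patches}

usage = {"billing.view": 42, "reports.job": 0}          # reports.job was never used
patches = {"billing.view": "PATCH-1018", "reports.job": "PATCH-2044"}
print(select_patches(usage, patches))                    # only the used object is patched
```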

US Pat. No. 10,891,123

IN-VEHICLE SOFTWARE DISTRIBUTION SYSTEM, IN-VEHICLE SOFTWARE DISTRIBUTION SERVER, AND IN-VEHICLE SOFTWARE DISTRIBUTION METHOD

HITACHI, LTD., Tokyo (JP...

1. An in-vehicle software distribution system which controls updates to an identical function for in-vehicle systems of a plurality of vehicles, comprising:an in-vehicle software distribution server which manages updates to the identical function according to a main campaign and distributes software remotely to the plurality of vehicles that are the distribution destinations of the main campaign;
a terminal which performs input/output (I/O) to/from the in-vehicle software distribution server; and
a software update apparatus which is mounted in each of the plurality of vehicles that downloads the software that has been distributed by the in-vehicle software distribution server, and installs the software in target in-vehicle systems,
wherein the in-vehicle software distribution server includes a processor coupled to a memory storing instructions that when executed configure the processor to:
categorize the plurality of vehicles that are the distribution destinations of the main campaign into groups based on a predetermined criterion and create a plurality of sub-campaigns which are subordinate to the campaign for each of the categorized groups; and
distribute, for each of the sub-campaigns, software remotely to vehicles targeted by the sub-campaigns based on the sub-campaigns,
wherein the processor is further configured to:
create a first test sub-campaign that is subordinate to the main campaign which distributes verification software to a group of specific vehicles used for verification of an update software distribution test among the plurality of vehicles that are the distribution destinations of the main campaign; and
create a second distribution sub-campaign which distributes software to a group of vehicles among the plurality of vehicles that are the distribution destinations of the main campaign other than the specific vehicles,
wherein, upon verification of a successful result of the update software distribution test of the first test sub-campaign, the processor creates the second sub-campaign which is subordinate to the main campaign, and
wherein the processor is further configured to perform the software distribution to the group of vehicles targeted by the second sub-campaign only upon the successful result of the update software distribution test of the specific vehicles of the first sub-campaign.
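
The claimed control flow gates the wider distribution sub-campaign on the outcome of a test sub-campaign run against a small verification group. A hedged sketch of that gating logic; the `SubCampaign` structure and the `run_test` callback are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SubCampaign:
    name: str
    vehicles: List[str]

def run_main_campaign(vehicles: List[str], test_group: List[str],
                      run_test: Callable[[List[str]], bool]) -> List[SubCampaign]:
    """Create the test sub-campaign first; create the wider distribution
    sub-campaign only upon a successful test result."""
    test = SubCampaign("first test sub-campaign",
                       [v for v in vehicles if v in test_group])
    # ... distribute verification software to the test group, then verify:
    if not run_test(test.vehicles):
        return [test]                       # second sub-campaign is never created
    rest = SubCampaign("second distribution sub-campaign",
                       [v for v in vehicles if v not in test_group])
    return [test, rest]

campaigns = run_main_campaign(["car-A", "car-B", "car-C"], test_group=["car-A"],
                              run_test=lambda group: True)
print([(c.name, c.vehicles) for c in campaigns])
```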

US Pat. No. 10,891,122

ROLLING UPGRADE OF A DISTRIBUTED APPLICATION

International Business Ma...

1. A method comprising:upon receiving a stop command at a first node of a plurality of nodes executing a runtime environment, suspending the runtime environment on the first node, wherein the stop command is received from a master node of the plurality of nodes;
upon receiving a restart command from the master node, restarting the runtime environment on the first node, wherein subsequent to transmitting the stop command and prior to transmitting the restart command, the master node is upgraded by an administrator;
upon restarting the runtime environment on the first node, determining, by the first node, a second version of the runtime environment via a registry maintained by a configuration server;
upon determining, by the first node, that the second version is a more recent version than the first version, automatically:
transmitting, by the first node, to the configuration server, a request for a list of nodes of the plurality of computing nodes that are executing the second version of the runtime environment and are able to provide an install package for the second version, wherein the registry maintains records indicating which install packages are available on each of the plurality of computing nodes;
receiving, by the first node, from the configuration server, the list of nodes that are executing the second version and have the install package for the second version, wherein the list of nodes includes at least a second node and a third node of the plurality of nodes, and does not include the configuration server;
requesting, by the first node, the install package for the second version from the second node, wherein the second node was randomly selected from the list of nodes by the first node;
receiving, by the first node, from the second node, the install package for the second version; and
installing the second version on the first node using the install package;
upon successfully installing the second version on the first node, restarting the runtime environment on the first node for a second time;
upon restarting for a second time, advertising, by the first node, to the configuration server, an indication that the first node is executing the second version of the runtime environment and that the install package for the second version is available for download from the first node, wherein, responsive to the advertising, the configuration server updates the registry by adding the first node to the list of nodes that are available to provide the second version of the runtime environment;
advertising, to the configuration server, availability of a down-level version of the runtime environment on the first node;
receiving, at the first node, from a fourth node of the plurality of computing nodes, a request to obtain the install package for the second version of the runtime environment, wherein the fourth node transmitted the request responsive to receiving an indication of the first node from the configuration server;
sending, by the first node, the install package of the second version to the fourth node;
receiving a request from a fifth node of the plurality of computing nodes to obtain the install package for the second version of the runtime environment; and
upon determining that the first node is not available to transmit the install package, refraining from sending the install package of the second version to the fifth node.
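
The claim describes a peer-to-peer package fetch during a rolling upgrade: a restarted node consults the configuration server's registry, randomly picks a peer that advertises the newer install package, installs it, and then advertises its own copy. A minimal sketch of one node's side of that exchange, with a plain dict standing in for the registry (names are invented):

```python
import random

def upgrade_node(node: str, registry: dict, installed_version: int) -> int:
    """One node's side of the rolling upgrade after a restart command.

    registry maps a runtime version to the list of nodes that advertise a
    matching install package (a stand-in for the configuration server's view).
    """
    latest = max(registry)
    if latest <= installed_version:
        return installed_version                          # already up to date
    peers = [p for p in registry[latest] if p != node]    # list never includes the config server
    source = random.choice(peers)                         # random peer selection from the list
    print(f"{node}: fetching v{latest} install package from {source}")
    # ... install the package, restart the runtime, then advertise availability:
    registry[latest].append(node)
    return latest

registry = {2: ["node-2", "node-3"]}
print(upgrade_node("node-1", registry, installed_version=1))
print(registry)   # node-1 is now listed as a source for version 2
```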

US Pat. No. 10,891,121

APPLICATION BLUEPRINTS BASED ON SERVICE TEMPLATES TO DEPLOY APPLICATIONS IN DIFFERENT CLOUD ENVIRONMENTS

VMWare, Inc., Palo Alto,...

1. An apparatus to configure an application blueprint, the apparatus comprising:one or more processors; and
memory including machine readable instructions that, when executed, cause the one or more processors to at least, during a design phase:
bind a service template to a node of the application blueprint, the application blueprint corresponding to an application to be deployed, the service template mapped to a plurality of services, ones of the services from the plurality of services selectable during a runtime phase to implement the node; and
store the application blueprint, the application blueprint accessible during the runtime phase to generate a first deployment profile and a second deployment profile, the first deployment profile to deploy a first instance of the application based on a first service selected from the service template to implement the node, and the second deployment profile to deploy a second instance of the application based on a second service, different from the first service, the second service selected from the service template to implement the node.

US Pat. No. 10,891,120

PERFORMING A COMPILER OPTIMIZATION PASS AS A TRANSACTION

International Business Ma...

1. A method for optimizing a compiling of program code, comprising:adding a proposed state pointer corresponding to a current state pointer to a current state node that represents a section of the program code in an intermediate language (IL) representation of the program code;
creating, in response to a determination by an optimizing compiler to make an optimization to the section of code, a proposed state node that is referenced by the proposed state pointer, the proposed state node being a copy of the current state node;
editing the proposed state node to include the optimization, wherein the current state node remains unchanged;
evaluating whether the optimization is successful; and
removing, based on the evaluating, references to nodes that are no longer in the IL representation to get an updated IL representation, the removing further comprising:
removing references to the current state node in response to an evaluation that indicates that the optimization has succeeded; and
removing the current state node in response to an evaluation that indicates that the optimization has succeeded.
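
The transactional pattern in this claim, copying the current state node into a proposed state node, editing only the copy, and keeping whichever node the evaluation favours, is essentially copy-on-write. A small illustrative sketch; the `StateNode` shape and the success check are assumptions:

```python
import copy

class StateNode:
    """A node representing one section of code in the IL representation."""
    def __init__(self, instructions):
        self.instructions = list(instructions)

def optimize_as_transaction(current: StateNode, optimize, is_successful) -> StateNode:
    """Edit a copy (the 'proposed state node') and commit or roll back."""
    proposed = copy.deepcopy(current)   # proposed state node, reached via its own pointer
    optimize(proposed)                  # the current state node remains unchanged
    if is_successful(proposed):
        return proposed                 # drop references to the old current state node
    return current                      # evaluation failed: discard the proposal instead

node = StateNode(["load a", "load a", "add"])
result = optimize_as_transaction(
    node,
    optimize=lambda n: n.instructions.remove("load a"),   # e.g. redundant-load elimination
    is_successful=lambda n: len(n.instructions) < 3)
print(result.instructions)   # ['load a', 'add']
```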

US Pat. No. 10,891,119

INTEGRATING AND SHARING SOFTWARE BUILD COMPONENT TARGETS

International Business Ma...

1. A method comprising:identifying, by at least one computer processor, a first set of initial software component builds and a second set of initial software component builds that respectively have dependencies on a plurality of software targets, wherein the first set of initial software component builds are dependent on a first sub-set of the plurality of software targets and the second set of initial software component builds are dependent on a second sub-set of the plurality of software targets;
building, by the at least one computer processor, a first software target for the first set of initial software component builds;
building, by the at least one computer processor, a second software target of the plurality of software targets for the second set of initial software component builds to generate an intermediate output, wherein the intermediate output is associated with a location property that specifies a location of the intermediate output from a first software component of the second set of initial software component builds to a second software component of the second set of initial software component builds to include content from the intermediate output in an integrated software target; and
building, by the at least one computer processor, the second software component of the second set of initial software component builds using the intermediate output, the intermediate output located by the location property.

US Pat. No. 10,891,118

OBJECT GRAPH TRAVERSAL AND PROCESSING

INTUIT INC., Mountain Vi...

1. A method for processing objects while traversing an object graph, comprising:receiving an instruction to perform a processing operation comprising a processing operation target;
traversing an object graph with a traversal agent, wherein the traversing comprises:
receiving a first object of the object graph;
receiving a data field of the first object, the data field comprising a reference list that identifies one or more objects referenced by the first object, the reference list comprising a reference to a second object of the object graph;
adding the second object to a working graph container comprising the first object;
adding a marker object to the working graph container after a last one of the one or more objects referenced by the reference list has been added to the working graph;
adding the second object to a visited object registry; and
determining that there are no more objects of the reference list that can be added to the visited object registry;
determining with an object processing agent that the first object matches the processing operation target;
transferring control of the first object from the traversal agent to a visitor agent of the object processing agent;
performing, with the visitor agent, a first processing action corresponding to the processing operation, using a data element of the first object;
transferring control of the first object to the traversal agent when the first processing action is complete; and
traversing, using the traversal agent, to the second object.
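
The traversal this claim recites interleaves three structures: a working graph container, a marker object appended after an object's last reference, and a visited object registry, with matching objects handed off to a visitor for processing. A compact sketch of that interplay; the names and the breadth-style ordering are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Obj:
    name: str
    refs: List["Obj"] = field(default_factory=list)   # the object's reference list

MARKER = object()   # marker object appended after an object's last reference

def traverse(root: Obj, target: str, process) -> None:
    """Walk the object graph, handing matching objects to a visitor callback."""
    working = [root]             # working graph container
    visited = {id(root)}         # visited object registry
    while working:
        obj = working.pop(0)
        if obj is MARKER:
            continue             # no more objects from that reference list to add
        if obj.name == target:
            process(obj)         # control passes to the visitor, then back to traversal
        for ref in obj.refs:
            if id(ref) not in visited:
                visited.add(id(ref))
                working.append(ref)
        working.append(MARKER)   # all of this object's references have been added

second = Obj("b")
first = Obj("a", refs=[second])
traverse(first, target="b", process=lambda o: print("processed", o.name))
```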

US Pat. No. 10,891,117

METHOD AND SYSTEM FOR USING SUBROUTINE GRAPHS FOR FORMAL LANGUAGE PROCESSING

1. A method for processing subroutine-structured graph-based intermediate representations during formal language processing implemented by a computing device, the method comprising:classifying a set of subroutines identified in an intermediate representation of code from a source file or set of object files according to mutually recursive relationships between subroutines in the set of subroutines;
recording the mutually recursive relationships in a set of subroutine record data structures;
labeling relevant nodes in the intermediate representation or tokens from the intermediate representation to track the mutually recursive relationships;
constructing a set of graph representations including a graph representation for each subroutine in the set of subroutines;
analyzing a graph of the intermediate representation that is decomposed into subroutine graphs from the set of graph representations by serialization through depth-first pre-order graph traversal or path walks through the graph of the intermediate representation, collecting partial positions that distinguish points of action in generated code, where partial positions are tracked in a partial position list of nodes in the intermediate representation that identify points in the graph of the intermediate representation, where each of the nodes is taken from a separate subroutine graph and the list is a backtrace of the invocations that led to a terminal node, and where each node, except a terminal node in the list, references one subroutine from the set of subroutines;
labeling nodes of the graph of the intermediate representation or tokens of the intermediate representation from the partial position lists to enable runtime tracking in the generated code so that associated actions are executed at associated places in the generated code;
generating a subsequent intermediate representation by serialization of the graph of the intermediate representation through pre-order depth-first traversal; and
creating the generated code from the intermediate representation.

US Pat. No. 10,891,115

MODEL CONFIGURATION USING PARTIAL MODEL DATA

MODEL N, INC., Redwood C...

1. A computer-implemented method comprising:receiving, via a product configurator user interface, a configuration input configuring one or more aspects of a configurable model;
sending a difference model to a configuration engine, the difference model based on the configuration input changing the configurable model;
generating a first partial structured data set for evaluation by the configuration engine based on the difference model from the configuration input and a segmented data model associated with the configurable model, the first partial structured data set including a first subset of segments from the segmented data model;
sending the first partial structured data set to the configuration engine for evaluation;
evaluating the first partial structured data set to determine changes to the configurable model based on the first partial structured data set; and
receiving an evaluated instance of the first partial structured data set from the configuration engine, the evaluated instance of the first partial structured data set reflecting an outcome of a configuration of the one or more aspects of the configurable model.

US Pat. No. 10,891,114

INTERPRETER FOR INTERPRETING A DATA MODEL ALGORITHM AND CREATING A DATA SCHEMA

TIBCO SOFTWARE INC., Pal...

1. A computing device for interpreting a data model algorithm, the computing device comprising:an object searcher configured by a processor to search for attributes within at least one dataset generated from at least one method of an instantiation of the data model algorithm in a development mode workflow, wherein the data model algorithm develops data models for a clustered computing environment;
an interpreter configured by a processor to evaluate the attributes, identify attributes having a use type, identify type information of the identified attribute, and create data schema using the identified attributes and type information; and
a translator configured by a processor to:
retrieve production level data schema from the clustered computing environment in response to selecting the data model algorithm for inclusion in a production mode workflow;
translate the data model algorithm from a development state to a production ready state by validating the data schema by comparing the data schema with the production level data schema for at least one node in the clustered computing environment.

US Pat. No. 10,891,113

SOURCE CODE REWRITING DURING RECORDING TO PROVIDE BOTH DIRECT FEEDBACK AND OPTIMAL CODE

Apple Inc., Cupertino, C...

1. A computer-implemented method, comprising:receiving, by an application development system, a first event generated by user interaction with an executing application and a snapshot of a first state of user interface elements of the executing application, the first state including the first event;
automatically generating first source code that corresponds to the first event;
receiving, by the application development system, a second event generated by user interaction with the executing application and a snapshot of a second state of user interface elements of the executing application, the second state including the second event;
synthesizing the first and second events and automatically generating second source code that is optimized based at least in part on the synthesis of both the first and second events and the snapshots of the first and second states of the user interface elements of the executing application; and
automatically replacing the first source code with the optimized second source code.

US Pat. No. 10,891,112

SYSTEMS AND METHODS FOR DISCOVERING AUTOMATABLE TASKS

Soroco Private Limited, ...

1. A system, comprising:at least one hardware processor; and
at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform:
receiving a plurality of events, each of at least some of the plurality of events being indicative of an action performed by a user on a computing device and contextual information associated with the action performed by the user;
identifying a task being performed by the user at least in part by clustering events in the plurality of events; and
generating a score for the task indicative of a difficulty of automating the task, wherein the score is generated using at least one value selected from the group consisting of: information identifying applications used to perform the task, a number of applications used to perform the task, a number of user interface elements used in the task, an amount of natural language input provided to the applications during the task, a number of keystrokes performed in the task, a number of clicks performed in the task, a ratio between keystrokes and clicks performed in the task, a percentage of time spent performing typing actions in the task, a percentage of time spent performing clicking actions in the task, and a frequency of copy and paste actions performed in the task.
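
The difficulty score in this claim is built from observable features of the clustered events (applications used, keystroke and click counts and their ratio, natural-language input, copy-and-paste frequency, and so on). A toy weighted-sum sketch over a few of those features; the weights and field names are invented for illustration:

```python
def automation_difficulty(task_features: dict) -> float:
    """Toy weighted score over a few of the claimed feature types.
    Lower values suggest an easier-to-automate task; all weights are invented."""
    keystrokes = task_features.get("keystrokes", 0)
    clicks = task_features.get("clicks", 0)
    return (
        2.0 * task_features.get("applications_used", 0)          # more apps -> harder
        + 0.5 * task_features.get("ui_elements_used", 0)
        + 5.0 * task_features.get("natural_language_chars", 0) / 100
        + 1.0 * (keystrokes / max(clicks, 1))                    # keystroke/click ratio
        + 3.0 * task_features.get("copy_paste_frequency", 0)
    )

task = {"applications_used": 2, "ui_elements_used": 14, "natural_language_chars": 250,
        "keystrokes": 120, "clicks": 30, "copy_paste_frequency": 0.2}
print(round(automation_difficulty(task), 2))
```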

US Pat. No. 10,891,111

DISTRIBUTED KEY-VALUE CONSISTENCY AND MAPPING

International Business Ma...

1. A computer-implemented method, comprising:generating, by a processor within a networked distributed drafting platform, a public key-value context file that comprises initial default key-value mappings between keywords in a first language and values in a second language for use in a distributed drafting project, the generating of the public key-value file including:
retrieving the keywords from initial drafting project files into a public keys list,
assigning a default value to each of the retrieved keywords within the public keys list, and
storing the public keys list along with the initial default key-value mappings to the public key-value context file;
electing refined project-level key-value mappings by considering differences between the initial default key-value mappings and personal key-value mappings within a plurality of distributed personal key-value context files each maintained by different drafters of the distributed drafting project; and
updating, within the networked distributed drafting platform, the initial default key-value mappings of the public key-value context file with the elected refined project-level key-value mappings.
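
The election step reconciles the public defaults with each drafter's personal key-value context file. One plausible reading is a majority vote per keyword that falls back to the public default on a tie; the sketch below encodes that assumption rather than the patent's actual election rule:

```python
from collections import Counter

def elect_project_mappings(default_map: dict, personal_maps: list) -> dict:
    """Refine the public default key-value mappings by majority vote over
    the drafters' personal context files; ties keep the public default."""
    elected = {}
    for keyword, default_value in default_map.items():
        votes = Counter(pm.get(keyword, default_value) for pm in personal_maps)
        value, count = votes.most_common(1)[0]
        elected[keyword] = value if count > len(personal_maps) / 2 else default_value
    return elected

defaults = {"cancel": "annuler", "save": "enregistrer"}   # keywords -> second-language values
personal = [{"cancel": "abandonner"},
            {"cancel": "abandonner"},
            {"cancel": "annuler", "save": "sauvegarder"}]
print(elect_project_mappings(defaults, personal))
# {'cancel': 'abandonner', 'save': 'enregistrer'}
```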

US Pat. No. 10,891,110

AES/CRC ENGINE BASED ON RESOURCE SHARED GALOIS FIELD COMPUTATION

The Board of Regents of T...

1. Apparatus comprising:computation circuitry adapted to perform Galois Field computations;
control circuitry adapted to control the computation circuitry so as to selectively compute either an Advanced Encryption Standard cipher or a Cyclic Redundancy Check, wherein the control circuitry comprises:
memory interface circuitry adapted to request a plurality of externally stored predetermined constant values;
selection circuitry adapted to select a predetermined constant value for input to the computation circuitry;
memory circuitry adapted to store a plurality of input and output data; and
control circuitry sequencing circuitry adapted to output control signals to the selection circuitry, the memory circuitry, and the computation circuitry in a plurality of sequences, each sequence adapted to perform a computation.

US Pat. No. 10,891,109

ARITHMETIC PROCESSOR, ARITHMETIC PROCESSING APPARATUS INCLUDING ARITHMETIC PROCESSOR, INFORMATION PROCESSING APPARATUS INCLUDING ARITHMETIC PROCESSING APPARATUS, AND CONTROL METHOD FOR ARITHMETIC PROCESSING APPARATUS

FUJITSU LIMITED, Kawasak...

1. An arithmetic processing apparatus coupled to a main storage apparatus, comprising:a plurality of arithmetic processors each including:
a plurality of arithmetic circuits that individually execute an arithmetic operation instruction for fixed point number data, and
a statistical information acquisition circuit that acquires at least one of first statistical information and second statistical information, with regard to a plurality of fixed point number data that are results of arithmetic operations executed by the plurality of arithmetic circuits, the first statistical information is obtained by accumulating first bit patterns, each of the first bit patterns is obtained by setting a flag bit to each of bit positions corresponding to a range from a least significant bit position of the fixed point number data to a highest-order bit position from among bit positions having a bit value different from a sign bit, for each of the digits corresponding to the bit positions, and the second statistical information is obtained by accumulating second bit patterns, each of the second bit patterns is obtained by setting a flag bit to each of bit positions corresponding to a range from the position of the sign bit to a lowest-order bit position from among bit positions having a bit value different from the sign bit, for each of the digits corresponding to the bit positions.

US Pat. No. 10,891,108

CALCULATION DEVICE

KABUSHIKI KAISHA TOSHIBA,...

1. A calculation device performing a product sum calculation of M input values (M is an integer of 2 or more) and M coefficients corresponding to the M input values on a one-to-one basis, comprising:M coefficient storage units provided corresponding to the M coefficients, each of the M coefficient storage units including a positive-side coefficient and a negative-side coefficient, a corresponding coefficient being represented by a sign of a difference between the positive-side coefficient and the negative-side coefficient;
M multiplication units provided corresponding to the M input values, each of the M multiplication units being configured to calculate a positive-side multiplication value obtained by multiplying the positive-side coefficient included in the corresponding coefficient storage unit by a sign inverted according to the corresponding input value and a negative-side multiplication value obtained by multiplying the negative-side coefficient included in the corresponding coefficient storage unit by a sign inverted according to the corresponding input value; and
an output unit configured to output an output value according to a difference between a positive-side product sum calculation value obtained by summing the M positive-side multiplication values and a negative-side product sum calculation value obtained by summing the M negative-side multiplication values.
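
Arithmetically, the claimed output reduces to a signed product sum: each coefficient is held as a positive-side and a negative-side value, and the output follows the difference of the two partial product sums, which equals the product sum over the signed coefficient differences. A short numeric check of that identity (a sketch, not the hardware datapath):

```python
def product_sum(inputs, pos_coeffs, neg_coeffs):
    """Positive-side and negative-side product sums; the output follows
    their difference, i.e. sum(x_i * (p_i - n_i))."""
    pos = sum(x * p for x, p in zip(inputs, pos_coeffs))   # positive-side product sum
    neg = sum(x * n for x, n in zip(inputs, neg_coeffs))   # negative-side product sum
    return pos - neg

# 1*(3-1) + (-1)*(1-2) + 1*(2-0) = 2 + 1 + 2 = 5
print(product_sum([+1, -1, +1], pos_coeffs=[3, 1, 2], neg_coeffs=[1, 2, 0]))
```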

US Pat. No. 10,891,107

PROCESSING MULTIPLE AUDIO SIGNALS ON A DEVICE

Open Invention Network LL...

1. A method, comprising:determining at least one directionality of at least one audio source from at least two audio signals received by a device;
determining at least one timing of the at least one audio source from the at least two audio signals;
generating at least one context for the at least two audio signals based on the at least one directionality and the at least one timing of the at least two audio signals;
identifying at least one second user that has an interest in the at least one context; and
providing a connection between the at least one second user and the at least one user interface.

US Pat. No. 10,891,106

AUTOMATIC BATCH VOICE COMMANDS

Google LLC, Mountain Vie...

1. A computer-implemented method, comprising:receiving input data from a computing device;
determining an intended task based on the received input data;
obtaining contextual information related to the intended task;
determining a plurality of tabs to open associated with websites to be accessed at the computing device, wherein each of the plurality of tabs accesses a website that is associated with the intended task and the obtained contextual information;
providing instructions associated with opening the determined plurality of tabs for transmission to the computing device and for execution at the computing device; and
ranking the websites based on the contextual information,
wherein the instructions associated with the opening of the determined plurality of tabs comprise instructions to arrange a display of the plurality of tabs on the computing device based on the ranking.

US Pat. No. 10,891,105

SYSTEMS AND METHODS FOR DISPLAYING A TRANSITIONAL GRAPHICAL USER INTERFACE WHILE LOADING MEDIA INFORMATION FOR A NETWORKED MEDIA PLAYBACK SYSTEM

Sonos, Inc., Santa Barba...

1. A control device for a networked media playback system, comprising:a processor;
a display screen;
non-volatile memory comprising instructions, which when executed configure the processor to perform the process of:
determining a starting screen of a media playback system controller application;
requesting media playback information corresponding to a media playback system and a user account to display on the starting screen;
selecting a placeholder template for the starting screen, where the placeholder template includes placeholder locations for placeholders;
randomly selecting, for each location in the placeholder template, a placeholder block from a set of placeholder blocks, where within each set of placeholder blocks each placeholder block includes:
at least one graphical block of the same size and shape, and
at least one textual block that has the same length in a first dimension as other placeholder blocks in the set but varies in length in a second dimension from other placeholder blocks in the set;
displaying on the display screen the placeholder template populated with the selected placeholder blocks, where the placeholder blocks display a loading animation until they are replaced;
receiving the media playback information;
displaying, when sufficient media playback information is received to replace the textual blocks of the placeholder blocks with informational text, a partially loading screen to replace the placeholder template with informational text and placeholder graphical blocks;
displaying, on the partially loading screen, the loading animation on the placeholder graphical blocks while receiving media playback information; and
replacing placeholder graphical blocks with informational graphics when sufficient media playback information is received to replace the placeholder graphical blocks with informational graphics.
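
The transitional screen is assembled by randomly drawing a placeholder block for each location of the placeholder template, animating the blocks, and then swapping in real text and graphics as media playback information arrives. A minimal sketch of the random selection step; the block-set shapes and slot names are assumed:

```python
import random

def build_placeholder_screen(template_slots: list, block_sets: dict) -> dict:
    """Randomly pick a placeholder block for each slot of the placeholder
    template; the blocks are animated until real content replaces them."""
    return {slot: random.choice(block_sets[slot]) for slot in template_slots}

block_sets = {
    "hero_art": [{"kind": "graphic", "w": 320, "h": 320}],        # graphics share size/shape
    "track_title": [{"kind": "text", "w": 120, "h": 24},          # text blocks share height,
                    {"kind": "text", "w": 180, "h": 24},          # vary in width
                    {"kind": "text", "w": 240, "h": 24}],
}
print(build_placeholder_screen(["hero_art", "track_title"], block_sets))
```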

US Pat. No. 10,891,104

PRIORITIZING MEDIA CONTENT REQUESTS

Sonos, Inc., Santa Barba...

1. A method to be performed by a computing system, the method comprising:receiving, via a network interface, data representing a request to play back a playlist on a group of one or more playback devices of a media playback system, wherein the request to playback the playlist is an explicit playback command;
based on the request, sending, via the network interface to a gateway device of the media playback system, instructions that cause the group of one or more playback devices to play back a given audio track of the playlist, wherein the group of one or more playback devices stream audio tracks of the playlist from one or more remote servers, and wherein the group of one or more playback devices excludes the gateway device;
while the group of one or more playback devices are playing back one or more first tracks of the playlist, receiving, via the network interface, one or more requests for second audio tracks in the playlist, wherein the one or more requests are implicit playback commands;
while the group of one or more playback devices are playing back the second audio tracks of the playlist, receiving, via the network interface, data representing a request to play back audio content on a mobile device, wherein the request to play back the audio content is an explicit playback command;
determining that the request to play back the audio content on the mobile device is a higher priority than the requests for second audio tracks; and
based on the determining, switching playback from the group of one or more playback devices to the mobile device, wherein switching the playback comprises:
sending, via the network interface to the gateway device, instructions to cause playback on the group of one or more playback devices to be stopped, wherein the gateway device is separate from the mobile device; and
causing, via the network interface, the mobile device to play back the audio content, wherein the mobile device streams the audio content from the one or more remote servers.

US Pat. No. 10,891,103

MUSIC-BASED SOCIAL NETWORKING MULTI-MEDIA APPLICATION AND RELATED METHODS

LOOK SHARP LABS, INC., L...

1. An application for use on a communications device for creating and sharing a music-based social media post comprising digital visual media content paired with digital music content between at least two users each having a communications device with computer-readable media, the application including computer executable instructions that, when executed by one or more processors, are configured to cause a computer system to perform the steps of:initiate, using a graphic user interface on a communications device of a first user, a selection by the first user of a digital visual media content file to be uploaded to a host server in digital communication with the communications device of the first user, wherein the selected digital visual media content file is assigned a first uniform resource locator (URL) stored in the host server;
initiate, using the graphic user interface on the communications device of the first user, a selection by the first user of at least one digital music content file to be streamed from an external audio content provider in digital communication with the host server or uploaded to the host server from the communications device of the first user, wherein the selected digital music content file has a second URL stored in the host server, and wherein the second URL is separate from the first URL;
initiate, using the graphic user interface on the communications device of the first user, at least one additional media content file to be paired with the selected at least one digital music content file and the selected visual media content file, wherein the at least one additional selected media content file has at least one third URL, wherein the third URL is separate from the first and second URLs;
initiate, using the graphic user interface on the communications device of the first user, a request by the first user received by the host server to pair at least a portion of the selected digital visual media content file with at least a portion of the selected digital music content file, and at least a portion of the at least one additional media content file to create paired digital content, whereby digital visual media content of the selected digital visual media content file, digital music content of the selected digital music content file, and media content of the at least one additional media content file, are configured to be presented in parallel while remaining separate and without creation of a single combined file;
share, upon a request by the first user received by the host server, the paired digital content with at least one additional user on a communications device of the at least one additional user; and
play, on the communications device of the at least one additional user, and upon a request received by the host server, the paired digital visual media content file, the selected digital music content file, and the at least one additional digital media content file by executing the first URL, second URL, and the at least one third URL simultaneously.

US Pat. No. 10,891,102

SCENE SOUND EFFECT CONTROL METHOD, AND ELECTRONIC DEVICE

GUANGDONG OPPO MOBILE TEL...

1. A method for controlling a scene sound effect, performed by electronic equipment, comprising:determining whether an audiotrack of the electronic equipment has audio output, the audiotrack having a mapping relationship with an application in the electronic equipment;
in the case that it is determined that the audiotrack has audio output, establishing a communication connection with a server located at a network side and sending a query request to the server at the network side through the communication connection, the query request comprising a name of a client or a name of the application, and classification information of the client or classification information about classifying clients by names of applications being stored in the server at the network side;
receiving a type of the application returned by the server, the type of the application being determined by the server at the network side according to the classification information of the client or the classification information about classifying clients by names of applications;
acquiring a type of a scene sound effect corresponding to the type of the application; and
setting a sound effect of the electronic equipment to be the scene sound effect corresponding to the type of the scene sound effect.

US Pat. No. 10,891,101

METHOD AND DEVICE FOR ADJUSTING THE DISPLAYING MANNER OF A SLIDER AND A SLIDE CHANNEL CORRESPONDING TO AUDIO SIGNAL AMPLIFYING VALUE INDICATED BY A POSITION OF THE SLIDER

TENCENT TECHNOLOGY (SHENZ...

1. A method for displaying a control, comprising:displaying at least one adjustment control on an equalizer displaying interface, wherein the adjustment control includes a slide channel and a slider, the adjustment control is used for adjusting an audio signal amplifying value at a frequency band according to a position of the slider on the slide channel;
receiving a sliding signal of the slider when the slider slides along the slide channel;
obtaining a position of the slider on the slide channel;
adjusting a displaying manner of the adjustment control according to the position of the slider, wherein the displaying manner of the adjustment control comprises a size of the slider and a uniform width of the slide channel, the position of the slider on the slide channel being represented by a location the slider is at from a bottom of the slide channel, and
wherein the adjusting the size of the slider and the uniform width of the slide channel according to the position of the slider comprises:
determining a current signal amplifying value indicated by the slider based on an audio signal amplifying range indicated by the slide channel and the position of the slider;
searching a first correspondence for a slider size corresponding to the current signal amplifying value;
searching a second correspondence for a slide channel width corresponding to the current signal amplifying value;
adjusting the size of the slider according to the slider size and simultaneously adjusting the uniform width of the slide channel according to the slide channel width;
generating image frame data corresponding to the adjusted displaying manner of the adjustment control; and
rendering the image frame data by calling a system application program interface (API), to display the adjustment control on the equalizer displaying interface according to the adjusted displaying manner.

US Pat. No. 10,891,100

SYSTEM AND METHOD FOR CAPTURING AND ACCESSING REAL-TIME AUDIO AND ASSOCIATED METADATA

1. An electronic device, comprising:a processor;
a memory coupled to the processor, the memory containing instructions, which when executed by the processor, perform the steps of:
sending a query to an audio archiving server, wherein the query comprises a start time, a duration, and a plurality of content descriptor types;
receiving a manifest file in response to the query, wherein the manifest file includes a list of a plurality of audio files;
requesting the plurality of audio files listed in the manifest file from the audio archiving server;
receiving the plurality of audio files from the audio archiving server, wherein the plurality of audio files corresponds to the plurality of the one or more content descriptor types; and
further comprising an electronic display coupled to the processor, and wherein the memory further comprises instructions, that when executed by the processor, perform the steps of:
displaying a list of content descriptor types of the plurality of content descriptor types;
receiving a selection of a first content descriptor type from the list of content descriptor types;
playing audio corresponding to the first content descriptor type and having a first time-of-day start time;
rendering an indication of the first content descriptor type on the electronic display;
responsive to a seek back event, playing audio corresponding to a second content descriptor type, wherein the audio corresponding to the second content descriptor type has a second time-of-day start time; and
rendering an indication of the second content descriptor type on the electronic display.

US Pat. No. 10,891,099

CAUSING MOVEMENT OF AN INTERACTION WINDOW WITH A TABLET COMPUTING DEVICE

Hewlett-Packard Developme...

13. A method, comprising:wirelessly transmitting a video stream from a first computing device that includes a first display having a moveable interaction window that is a subset of an entire display area of the first display, wherein the video stream represents content of the interaction window;
wirelessly receiving the video stream with a tablet computing device that includes a second display;
displaying the content of the interaction window on the second display based on the received video stream;
wirelessly transmitting tablet motion information representative of movement of the tablet computing device;
moving the interaction window on the first display based on the transmitted tablet motion information, wherein the tablet motion information includes sliding motion information representative of sliding movement of the tablet computing device against a surface, wherein moving the interaction window includes sliding the interaction window on the first display based on the transmitted tablet motion information, wherein the tablet motion information includes rotational motion information representative of any rotational movement of the tablet computing device including small rotations of less than 90 degrees and large rotations of 90 degrees or more, and wherein moving the interaction window includes rotating the interaction window on the first display based on the transmitted tablet motion information; and
updating content of the video stream being transmitted based on the sliding and rotating of the interaction window.

US Pat. No. 10,891,098

DISPLAY DEVICE AND METHOD FOR CONTROLLING DISPLAY DEVICE

SEIKO EPSON CORPORATION, ...

1. A display device comprising:a display unit which displays an image;
a detection unit which detects an information processing device connectable via wireless communication, wherein the detection unit detects the information processing device prior to connection of the display device to the information processing device; and
a control unit which
makes a determination whether a display prohibition condition is met,
causes the display unit to display a guide image showing an operation that is necessary to achieve connection of the display device to the information processing device, in response to (i) the detection unit detecting the information processing device and (ii) the determination indicating that the display prohibition condition is not met, and
prohibits the display unit from displaying the guide image in response to (i) the detection unit detecting the information processing device and (ii) the determination indicating that the display prohibition condition is met, wherein
the control unit
determines whether the display prohibition condition is met by determining whether the information processing device detected by the detection unit and the display device have ever been connected to each other or not,
determines that the display prohibition condition is not met, and causes the display unit to display the guide image, if the display device has ever been connected to the information processing device, and
determines that the display prohibition condition is met, and prohibits the display unit from displaying the guide image, if the display device has never been connected to the information processing device.

US Pat. No. 10,891,097

RECEIVING DEVICE AND IMAGE FORMING APPARATUS

FUJI XEROX CO., LTD., To...

1. A receiving device comprising:a display that displays an image and that receives an operation corresponding to the image when a user comes into contact with a surface of the display;
a transceiver that communicates with a wireless communication apparatus performing near-field wireless communication; and
a protrusion portion that protrudes with respect to the surface of the display, the protrusion portion being disposed at least between the display and transceiver, and extending to at least the transceiver,
wherein the transceiver is positioned at a top surface of the protrusion portion.

US Pat. No. 10,891,096

COMMUNICATION DEVICE, NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING COMPUTER-READABLE INSTRUCTIONS FOR COMMUNICATION DEVICE, AND METHOD PERFORMED BY COMMUNICATION DEVICE

Brother Kogyo Kabushiki K...

1. A communication device comprising:a processor; and
a memory storing computer-readable instructions therein, the computer-readable instructions, when executed by the processor, causing the communication device to:
receive, from a specific server, first authentication information generated by the specific server;
store the first authentication information in the memory in a case where the first authentication information is received from the specific server;
store selection information in the memory in a case where one or more service providing servers are selected by a user, the selection information indicating that the one or more service providing servers have been selected;
send the first authentication information in the memory and first related information being related to the communication device to the specific server in a case where the selection information is stored in the memory, wherein in a case where the first authentication information and the first related information are sent to the specific server, the first related information is received by a first service providing server among the one or more service providing servers, and the first service providing server provides a first service to the user by using the first related information;
determine whether the selection information is stored in the memory in a case where a deletion instruction for instructing deletion of information in the memory is acquired; and
delete the first authentication information in the memory in a case where it is determined that the selection information is not stored in the memory, wherein the first authentication information is not deleted in a case where it is determined that the selection information is stored in the memory.

US Pat. No. 10,891,095

IMAGE FORMING APPARATUS, PRINTING SYSTEM, AND JOB CONTROL METHOD

Ricoh Company, Ltd., Tok...

1. An image forming apparatus comprising:a first memory configured to store print job cache information transmitted from an information processing apparatus, with the print job cache information being associated with user cache information that includes identification information of a user corresponding to a print job; and
one or more processors configured to
in response to an operation performed by a logged-in user logged in to the image forming apparatus, determine whether the first memory stores the print job cache information associated with identification information of the logged-in user logged in to the image forming apparatus,
based on a result of the determination, acquire the print job cache information associated with the identification information of the logged-in user from the first memory, and
display the acquired print job cache information on a display of the image forming apparatus, wherein
the print job cache information includes only bibliographic information of the print job for a predetermined total number of jobs,
the user cache information includes the identification information of a plurality of users including the logged-in user, and the identification information of the plurality of users is registered in the user cache information in descending order of log-in time of the users, and
in a case where registration of jobs associated with the logged-in user would cause the total number of jobs to exceed the predetermined total number of jobs, the job cache information of the plurality of users is deleted in ascending order of log in time of the users until the total number of jobs, including the jobs of the logged-in user, is equal to or below the predetermined total number of jobs.

US Pat. No. 10,891,094

GANGED IMPOSITION SORT SYSTEM

PTI Marketing Technologie...

1. A method, comprising:generating, using a composition engine, a first intermediate file from a first data file and a first template;
generating, using the composition engine, a second intermediate file from a second data file and a second template;
receiving, at the composition engine, a sorting data file; and
composing, using the composition engine, a third intermediate file, the third intermediate file including contents from the first intermediate file and the second intermediate file merged based in part on the sorting data file.

US Pat. No. 10,891,093

SYSTEM FOR OPERATING TEXTILE PRINTING MACHINES INCLUDING DATA-PROCESSING MODULE

1. A system for operating textile printing machines, with a plurality of textile printing machines including a first textile printing machine and at least one second textile printing machine, which are provided in a first entity or a second entity, which are independent of each other, the first entity and the second entity each being a buyer of a respective one of the plurality of textile printing machines, each of the first entity and the second entity being capable of accepting local print jobs from direct customers of the respective entity, wherein
each textile printing machine is provided, by a machine manufacturer of each respective textile printing machine, with a data-processing module, which is configured to detect data about the state, set-up, and/or utilization of the textile printing machine and a textile and a printer ink provided on the textile printing machine,
each data-processing module can be activated on a request of the buyer of the textile printing machine, the buyer of the textile printing machine being one of the first entity and the second entity,
by the activated data-processing module, the textile printing machines transmit, via internet connection, the detected data including data on the textile and the printer ink to a data center of the machine manufacturer independently of the first entity, the second entity, and/or location, and
the data center is configured to
accept a print job from a third party, the third party being a customer that is not a direct customer of one of the first entity and the second entity,
determine, based on the transmitted detected data, that the accepted print job from the third party fits the detected data of at least one of the plurality of textile printing machines belonging to one of the first entity and/or the second entity, and
transmit the accepted print job to at least one of the respective determined textile printing machines, so far as sufficient free print time is available on the respective textile printing machine.

US Pat. No. 10,891,092

IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM TO PRESENT A SETTING CONTENT

Canon Kabushiki Kaisha, ...

1. An image processing apparatus that includes a plurality of functions of the image processing apparatus and arranges a first software key for activating one of the plurality of the functions in a first area displayed on a display of the image processing apparatus, the display being configured to display a second area in which at least a second software key generated in response to executing a job using one of the functions is arranged,the image processing apparatus comprising:
at least one processor configured to cause the image processing apparatus to function as:
a display control unit configured to display a part of a setting content of the second software key on the second software key; and
a setting unit configured to set a mode for executing, by pressing the second software key, a function based on the setting content of the pressed second software key, without displaying a first screen with which the setting content of the pressed second software key is changeable,
wherein in a case where the mode is set by the setting unit, and the part of the setting content displayed on the pressed second software key is identical with a part of a setting content displayed on another second software key, a second screen is displayed to indicate the setting content corresponding to the pressed second software key.

US Pat. No. 10,891,091

IMAGE-FORMING APPARATUS AND IMAGE-FORMING METHOD

SHARP KABUSHIKI KAISHA, ...

1. An image-forming apparatus comprising:a printer that performs printing; and
a controller that determines, on a basis of settings related to a print job, whether to execute the print job as a print job related to test printing or execute the print job as a normal print job, and causes the printer to perform printing,
wherein the settings related to the print job include a plurality of items, and one or more combinations of a setting of another item with respect to a setting of one item are set as recommended settings, and
when the print job includes a setting of another item other than the recommended settings with respect to the setting of the one item, the controller determines that the print job is a print job related to test printing.

US Pat. No. 10,891,090

IMAGE FORMING SYSTEM

KYOCERA DOCUMENT SOLUTION...

1. An image forming system comprising:an image forming apparatus installed at a customer site;
a management terminal device installed at the customer site; and
a mobile terminal device of a guest user,
wherein the mobile terminal device (a) downloads and installs an application program including identification information, (b) presents the identification information to the management terminal device according to the application program, and (c) accepts a temporal ID corresponding to the identification information from the management terminal device,
wherein the management terminal device, (a) when the management terminal device accepts the identification information from the mobile terminal device, (a1) generates the temporal ID corresponding to the identification information and (a2) presents the mobile terminal device with the temporal ID generated and transmits the temporal ID to a user authentication device for the image forming apparatus to register the temporal ID as a user ID of the guest user and, (b) when payment of a usage fee for the image forming apparatus by the guest user is completed, (b1) deletes registration of the temporal ID of the guest user, and
wherein the image forming apparatus (a) performs user authentication with the user authentication device when the guest user logs in to the image forming apparatus with the temporal ID and (b) issues an invoice containing the temporal ID and the usage fee when the guest user logs out of the image forming apparatus.

US Pat. No. 10,891,089

SYSTEM AND METHODS FOR USING AN AUTHENTICATION TOKEN WITH A CLOUD-BASED SERVER

KYOCERA DOCUMENT SOLUTION...

1. A method for authenticating user access to a print job at a cloud-based server, the method comprising:uploading job data for the print job to the cloud-based server using a port monitor;
generating a claim code by the cloud-based server;
providing the claim code to the port monitor;
launching a browser with a uniform resource locator (URL) address indicating the cloud-based server, wherein the URL address includes the claim code;
receiving an authentication token at the browser from the cloud-based server;
authenticating user information at the browser;
assigning the authentication token to the claim code; and
submitting the claim code and the authentication token to the cloud-based server to access the print job.
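
The claimed flow ties three artifacts together: the uploaded job, a server-generated claim code handed back to the port monitor (and embedded in the browser URL), and an authentication token that the server associates with that claim code once the user authenticates. A toy end-to-end sketch of that binding; the class and method names are hypothetical, not from the patent:

```python
import secrets

class CloudPrintServer:
    """Toy stand-in for the cloud-based server in the claimed flow."""
    def __init__(self) -> None:
        self.jobs = {}      # claim code -> uploaded job data
        self.tokens = {}    # claim code -> authentication token

    def upload(self, job_data: bytes) -> str:
        claim_code = secrets.token_urlsafe(8)   # claim code returned to the port monitor
        self.jobs[claim_code] = job_data
        return claim_code

    def authenticate(self, claim_code: str, user: str) -> str:
        token = secrets.token_urlsafe(16)       # token issued after user authentication
        self.tokens[claim_code] = token         # authentication token assigned to the claim code
        return token

    def fetch_job(self, claim_code: str, token: str) -> bytes:
        if self.tokens.get(claim_code) != token:
            raise PermissionError("token does not match claim code")
        return self.jobs[claim_code]

server = CloudPrintServer()
code = server.upload(b"%PDF-1.7 ...")           # port monitor uploads the print job
# Browser is launched with a URL that embeds the claim code, then authenticates:
token = server.authenticate(code, user="alice")
print(server.fetch_job(code, token)[:8])
```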

US Pat. No. 10,891,088

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR TRANSMITTING A REQUEST INITIATED IN A FIRST NETWORK TO A SECOND NETWORK

FUJI XEROX CO., LTD., To...

1. An information processing system, comprising:a first information processing apparatus connected to a first network;
a second information processing apparatus connected to a second network different from the first network; and
a storage apparatus communicable with the first information processing apparatus and the second information processing apparatus, wherein
the first information processing apparatus receives, from a user terminal, a processing request associated with the user terminal via the first network, and then transmits the processing request to the storage apparatus when connection of the user terminal to the first network is terminated,
the second information processing apparatus transmits a request to transfer the processing request associated with the user terminal to the storage apparatus when the user terminal approaches the second information processing apparatus, and
the storage apparatus transfers the processing request received from the first information processing apparatus to the second information processing apparatus in response to the transfer request.

US Pat. No. 10,891,087

PRINT SYSTEM, PRINTER AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING INSTRUCTIONS THEREFOR

Brother Kogyo Kabushiki K...

1. A print system comprising an information processing apparatus, a printer and a server which are interconnected through a network,wherein the server has an image processing function of applying image processing to print data, the image processing being a process of changing a mode of a print image represented by the print data, the print image represented by the print data before the image processing is applied having a first mode, the print image represented by the print data after the image processing is applied having a second mode, the first mode being different from the second mode,
wherein the server stores first identifying information and second identifying information in an associated manner, the first identifying information being information identifying at least one of the printer and the information processing apparatus, the second identifying information being information identifying an image processing to be applied,
wherein the information processing apparatus connected to the network is configured to transmit unprocessed print data to the printer when a print instruction is received through a printing program, the printing program being a program implemented in an operating system of the information processing apparatus, the print instruction being an instruction causing the printer connected with the information processing apparatus to perform printing, the unprocessed print data being print data to which the server has not yet applied the image processing,
wherein, when the printer connected to the network receives the unprocessed print data from the information processing apparatus, the printer transmits the unprocessed print data to the server,
wherein, when the server connected to the network receives the unprocessed print data from the printer, the server performs, based on the received unprocessed print data:
obtaining a transmission source identifying information which identifies a transmission source device, the transmission source device being one of the transmission source printer and the transmission source apparatus, the transmission source printer being a printer which transmitted the unprocessed print data to the server, the transmission source apparatus being an information processing apparatus which transmitted the unprocessed print data to the transmission source printer;
applying the image processing to the unprocessed print data, the image processing corresponding to the second identifying information, the second identifying information being information stored in the server, the second identifying information being information associated with the first identifying information, the first identifying information being information identifying a device same as the transmission source device identified by the transmission source identifying information obtained by the server; and
transmitting processed print data to the printer, the processed print data being the print data to which the image processing has been applied, and
wherein, when the printer connected to the network receives the processed print data from the server, the printer performs printing based on the processed print data.

US Pat. No. 10,891,086

JOB TICKET CONFLICT RESOLUTION FOR PRINT JOBS

Ricoh Company, Ltd., Tok...

1. A system, comprising:a print controller, comprising:
a Raster Image Processor (RIP); and
a controller configured to identify a logical page comprising a Page Description Language (PDL) and having an unmodifiable media attribute, wherein the unmodifiable media attribute for the logical page prevents a modification of a presentation of the logical page prior to rasterizing the logical page,
the controller is configured to process a job ticket to determine that the logical page has a media exception, wherein the media exception comprises a conflict between the unmodifiable media attribute for the logical page and a media attribute specified in the job ticket for the logical page,
the controller is configured, responsive to the logical page having the media exception and the unmodifiable media attribute, to direct the RIP to rasterize the logical page based on the unmodifiable media attribute by interpreting the PDL, and to modify raster data generated by the RIP for the logical page based on the media attribute specified in a job ticket for the logical page to resolve the media exception for the logical page.

US Pat. No. 10,891,085

SYSTEM, NETWORK ARCHITECTURE AND METHOD FOR ACCESSING AND CONTROLLING AN ELECTRONIC DEVICE

gabi Solutions, Inc., Fa...

1. A system for enabling a target electronic device to be controlled by a user electronic device having at least one function, the system comprising:a smart box connectable to at least one network and to the target electronic device, the smart box having a central processing unit comprising a processor and memory having stored therein general purpose software, and having storable therein smart box special purpose software; and
a server connectable to the at least one network and to the smart box, the server having a processor and memory configured for storing therein server special purpose software,
wherein the smart box is connectable to the user electronic device via the at least one network and is configured to permit a user of the user electronic device to access at least one preset function of the target electronic device, whereby the target electronic device is enabled to at least one of perform or respond to the at least one function of the user electronic device, and
wherein the at least one function of the user electronic device is native to the user electronic device.

US Pat. No. 10,891,084

APPARATUS AND METHOD FOR PROVIDING DATA TO A MASTER DEVICE

Arm Limited, Cambridge (...

1. An interconnect comprising:an interface to couple a master device to the interconnect, the interface comprising buffer storage, wherein the interface is configured to receive, from the master device, a request for data comprising a plurality of data blocks, the master device requiring the data blocks of the plurality to be provided in a defined order; and
a data collator configured, at least for one operation mode of the interconnect, to:
receive the request from the interface;
issue a data pull request to the interface, to cause the interface to allocate buffer space in the buffer storage for buffering the requested data; and
responsive to receiving a confirmation from the interface that the buffer space is allocated, provide the requested data to the buffer storage,
the interface being configured to:
employ the buffer storage to enable re-ordering of the plurality of data blocks of the requested data as received by the interface, prior to outputting the plurality of data blocks to the master device; and
output the plurality of data blocks to the master device in the defined order.

US Pat. No. 10,891,083

SYSTEM AND METHOD FOR RANDOMIZING DATA

MICROSEMI SOLUTIONS (US),...

1. A method for randomizing data in a memory storage device, the method comprising:receiving, at a memory controller, a plurality of data bytes to be randomized and written to a page of a memory storage device coupled to the memory controller, wherein the page comprises a plurality of data sectors and wherein each of the plurality of data sectors are configured to store a plurality of data bytes;
selecting an initial seed value from an initial seed table based upon an initial seed index;
selecting a level one alteration value from a level one alteration value table based upon a level one alteration index;
performing an operation between the initial seed value and the level one alteration value to generate a level one altered seed value;
selecting a first level two alteration value from a first level two alteration value table based upon a level two alteration select signal and a level two inverse alteration select signal;
performing an operation between the level one altered seed value and the first level two alteration value to generate a first seed;
selecting a second level two alteration value from a second level two alteration value table based upon the level two alteration select signal;
performing an operation between the level one altered seed value and the second level two alteration value to generate a second seed;
randomizing a first portion of the plurality of data bytes using a first randomizer initialized by the first seed to generate a first portion of randomized data bytes to be written to a data sector of the plurality of data sectors;
randomizing a second portion of the plurality of data bytes using a second randomizer initialized by the second seed to generate a second portion of randomized data bytes to be written to the data sector of the plurality of data sectors, wherein the first seed is uncorrelated with the second seed.
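
One way to read the two-level seed construction is as successive alterations of a table-selected seed, after which each resulting seed initializes an independent randomizer. The sketch below uses XOR for the unspecified "operation" and a toy LFSR as the randomizer; the table contents, widths, and indexing are assumptions, not the claimed design:

```python
# Illustrative tables (contents are arbitrary for the example).
INITIAL_SEEDS = [0x1D, 0x3A, 0x57, 0x6C]
LEVEL1_ALTERATIONS = [0x0F, 0xA5, 0x5A, 0xF0]
LEVEL2_ALTERATIONS_A = [0x33, 0xCC]
LEVEL2_ALTERATIONS_B = [0x66, 0x99]


def lfsr_stream(seed, length, mask=0xB8):
    """Toy 8-bit Fibonacci LFSR used as the 'randomizer'."""
    state = seed or 1
    out = []
    for _ in range(length):
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= mask
        out.append(state & 0xFF)
    return out


def randomize_sector(data, seed_idx, l1_idx, l2_sel, l2_inv_sel):
    seed = INITIAL_SEEDS[seed_idx]
    level1 = seed ^ LEVEL1_ALTERATIONS[l1_idx]               # level one altered seed
    first_seed = level1 ^ LEVEL2_ALTERATIONS_A[l2_sel ^ l2_inv_sel]
    second_seed = level1 ^ LEVEL2_ALTERATIONS_B[l2_sel]      # second, independent seed
    half = len(data) // 2
    ks1 = lfsr_stream(first_seed, half)
    ks2 = lfsr_stream(second_seed, len(data) - half)
    return bytes(b ^ k for b, k in zip(data[:half], ks1)) + \
           bytes(b ^ k for b, k in zip(data[half:], ks2))


print(randomize_sector(b"\x00" * 16, seed_idx=1, l1_idx=2, l2_sel=1, l2_inv_sel=0).hex())
```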

US Pat. No. 10,891,082

METHODS FOR ACCELERATING COMPRESSION AND APPARATUSES USING THE SAME

SHANGHAI ZHAOXIN SEMICOND...

1. A method for accelerating compression, performed by a configuration logic of a compression accelerator, comprising:obtaining an input parameter from a processor core;
obtaining a configuration setting from a compression parameter table according to the input parameter;
configuring components coupled between a first buffer and a second buffer to form a data transmission path according to the input parameter, wherein the first buffer stores raw data; and
transmitting the configuration setting to the components on the data transmission path for processing the raw data to generate compressed data and storing the compressed data in the second buffer;
wherein the compression parameter table is comprised within the configuration logic, and wherein the compression parameter table comprises a plurality of records and each record stores configuration settings associated with an algorithm type with a compression level;
wherein the configuration settings comprise information indicating a dictionary size, a hash table size, an output format, a minimum-matched length, a maximum-matched length, a checksum type, and a hash algorithm.

US Pat. No. 10,891,081

SYSTEMS AND METHODS FOR ASYNCHRONOUS WRITING OF SYNCHRONOUS WRITE REQUESTS BASED ON A DYNAMIC WRITE THRESHOLD

Open Drives LLC, Culver ...

1. A method comprising:determining first parameters of a first storage device in a storage system, and different second parameters of a second storage device in the storage system;
configuring a first write threshold for the first storage device based on the first parameters, and a second write threshold for the second storage device based on the second parameters, wherein the first write threshold is different than the second write threshold;
receiving a plurality of write requests containing data for different files;
providing a first asynchronous request with the data from a first non-consecutive set of the plurality of write requests to the first storage device in response to the data from the first non-consecutive set of write requests satisfying the first write threshold; and
providing a second asynchronous request with the data from a second non-consecutive set of the plurality of write requests to the second storage device in response to the data from the second non-consecutive set of write requests satisfying the second write threshold, wherein a number of write requests forming the first non-consecutive set of write requests is different than a number of write requests forming the second non-consecutive set of write requests.
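
As a rough illustration, per-device write thresholds can be pictured as buffering synchronous write requests per device and flushing them as one asynchronous request once the accumulated (possibly non-consecutive) data meets that device's threshold. The device parameters, the byte-based threshold heuristic, and the flush callback below are assumptions:

```python
from collections import defaultdict


def configure_threshold(params):
    """Derive a per-device write threshold from device parameters (assumed heuristic)."""
    # e.g. wider stripes and higher latency -> batch more data per flush
    return params["stripe_kib"] * 1024 * max(1, params["latency_us"] // 100)


class AsyncBatcher:
    def __init__(self, devices, flush):
        self.thresholds = {d: configure_threshold(p) for d, p in devices.items()}
        self.pending = defaultdict(list)   # device -> buffered (file, data) writes
        self.flush = flush

    def write(self, device, file_id, data):
        self.pending[device].append((file_id, data))
        if sum(len(d) for _, d in self.pending[device]) >= self.thresholds[device]:
            batch = self.pending.pop(device)
            self.flush(device, batch)      # one asynchronous request for the whole batch


devices = {"ssd0": {"stripe_kib": 4, "latency_us": 100},
           "hdd0": {"stripe_kib": 64, "latency_us": 800}}
batcher = AsyncBatcher(devices, lambda dev, batch: print(dev, "flush", len(batch), "writes"))
for i in range(8):
    batcher.write("ssd0", f"file{i % 3}", b"x" * 1024)   # flushes every 4 KiB on ssd0
```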

US Pat. No. 10,891,080

MANAGEMENT OF NON-VOLATILE MEMORY ARRAYS

MENTIUM TECHNOLOGIES INC....

1. A system comprising:a memory array comprising a plurality of devices;
a digital-to-analog converter configured to convert a digital signal to an analog signal as a voltage or current signal;
a plurality of sample and hold circuits configured to receive the analog signal and store the analog signal as a charge;
an address controller configured to regulate which sample and hold circuits of the plurality of sample and hold circuits propagate the analog signal, the sample and hold circuits configured to feed the analog signal to the plurality of devices;
an output circuit configured to program the plurality of devices by comparing currents of the plurality of devices to their corresponding target currents, in response to one or more of the currents of the plurality of devices being within a threshold range of the corresponding target currents, the output circuit discontinues programming the corresponding devices of the plurality of devices, in response to one or more of the currents of the plurality of devices not being within the threshold range of the target currents, the output circuit continues programming the corresponding devices of the plurality of devices;
a counter configured to generate a counter signal;
a plurality of output digital-to-analog converters configured to generate a plurality of current signals based on the counter signal;
a plurality of comparators configured to receive and compare the plurality of current signals based on the counter signal and the currents of the plurality of devices, in response to one or more of the currents of the plurality of devices being within a threshold of the current signals based on the counter signal, the plurality of comparators configured to change a sign of one or more corresponding signals of a plurality of signals generated by the plurality of comparators; and
a plurality of registers configured to receive the plurality of signals output by the plurality of comparators and in response to the sign of one or more corresponding signals of the plurality of signals output by the plurality of comparators changing, the corresponding registers of the plurality of registers store a corresponding signal of the plurality of signals output by the counter.

US Pat. No. 10,891,079

INFORMATION PROCESSING APPARATUS

FUJI XEROX CO., LTD., To...

1. An information processing apparatus comprising:a non-volatile semiconductor storage device including a flash memory;
a write buffer memory that stores data that is to be written onto the non-volatile semiconductor storage device;
a controller that performs control to write, in a batch and onto the non-volatile semiconductor storage device, the data stored on the write buffer memory when an amount of the data stored on the write buffer memory reaches a predetermined amount of data; and
a read buffer memory that stores data read from the non-volatile semiconductor storage device,
wherein the controller performs control to read in a batch the predetermined amount of data from the non-volatile semiconductor storage device, to store the read data on the read buffer memory, and to successively transfer the data stored on the read buffer memory to a succeeding stage in steps of a data unit.
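
A toy sketch of the batched write path described above: data accumulates in a write buffer and is written to the flash stand-in in one batch once a predetermined amount is reached, and reads pull back a batch that is handed out in smaller data units. The 4 KiB batch size and the in-memory flash are assumptions:

```python
BATCH_SIZE = 4096  # assumed "predetermined amount of data"


class BufferedFlash:
    def __init__(self):
        self.flash = bytearray()        # stand-in for the non-volatile device
        self.write_buffer = bytearray()

    def write(self, data: bytes):
        self.write_buffer += data
        while len(self.write_buffer) >= BATCH_SIZE:
            # write one full batch onto the flash, keep the remainder buffered
            self.flash += self.write_buffer[:BATCH_SIZE]
            del self.write_buffer[:BATCH_SIZE]

    def read(self, offset: int, unit: int = 512):
        # read a whole batch, then hand it out in smaller data units
        batch = self.flash[offset:offset + BATCH_SIZE]
        return [bytes(batch[i:i + unit]) for i in range(0, len(batch), unit)]


dev = BufferedFlash()
dev.write(b"a" * 5000)
print(len(dev.flash), len(dev.write_buffer))   # 4096 bytes flushed, 904 still buffered
print(len(dev.read(0)))                        # 8 data units of 512 bytes each
```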

US Pat. No. 10,891,078

STORAGE DEVICE WITH A CALLBACK RESPONSE

Western Digital Technolog...

1. A method of operating a storage device in a storage system including a host, comprising:receiving a start stop unit (SSU) command with a power condition field of sleep or powerdown from the host;
sending a callback response containing a requested read buffer to the host;
receiving the requested read buffer command from the host;
executing the requested read buffer command by the storage device; and
executing the SSU command by the storage device.

US Pat. No. 10,891,077

FLASH MEMORY DEVICE AND CONTROLLING METHOD THEREOF

MACRONIX INTERNATIONAL CO...

1. A flash memory device, comprising:a memory array;
an in-place update module, used for performing a program procedure or a garbage collection procedure via a bit erase operation or a page erase operation on the memory array;
an out-of-place update module, used for performing the program procedure or the garbage collection procedure via a block erase operation or a migration operation on the memory array; and
a latency-aware module, used for determining a relationship between a first overhead of the in-place update module and a second overhead of the out-of-place update module.

US Pat. No. 10,891,076

RESULTS PROCESSING CIRCUITS AND METHODS ASSOCIATED WITH COMPUTATIONAL MEMORY CELLS

GSI Technology, Inc., Su...

1. A processing array device, comprising:a plurality of memory cells arranged in an array having a plurality of columns and a plurality of rows, each memory cell having a storage element wherein the array has a plurality of sections and each section has a plurality of bit line sections and a plurality of bit lines with one bit line per bit line section, wherein the memory cells in each bit line section are all connected to a single read bit line that generates a computation result and the plurality of bit lines in each section are distinct from the plurality of bit lines included in the other sections of the array;
each bit line section having a read storage device that captures the computation result;
each section having circuitry to logically combine the computation results captured by the bit line sections in the section;
each bit line section having circuitry that stores the combined computation results in one or more memory cells in the bit line section;
an RSP data line, connected to each bit line section, that communicates the combined computation result outside of the processing array device; and
wherein each section has an RSP data line that communicates the computation results for the plurality of bit line sections in the section, and wherein each bit line section has circuitry that generates a logical OR on the RSP data line of the computation results captured in all of the read storage devices in each of the bit line sections in the section and each bit line section stores the combined computation result.

US Pat. No. 10,891,075

MEMORY SYSTEM AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A memory system comprising:a plurality of resources; and
a frequency adjuster configured to adjust operating frequencies of the plurality of resources at a predetermined adjustment timing during operations of the plurality of resources,
wherein the adjustment timing comprises at least one timing for dividing partial operation periods of at least one resource among the plurality of resources.

US Pat. No. 10,891,074

KEY-VALUE STORAGE DEVICE SUPPORTING SNAPSHOT FUNCTION AND OPERATING METHOD THEREOF

Samsung Electronics Co., ...

1. A method, comprising:a key-value storage device receiving a first command from a host, the first command including a first key, a first value, and a first snapshot identification (ID);
in response to the received first command, the key-value storage device generating a first snapshot entry;
the key-value storage device receiving a second command from the host, the second command including the first key, a second value, and a second snapshot ID; and
in response to the received second command, the key-value storage device generating a second snapshot entry,
wherein the key-value storage device further comprises a mapping table memory into which a mapping table storing the first and second snapshot entries is loaded,
wherein the first snapshot entry includes the first snapshot ID, the first key, a first physical address in a non-volatile memory of the key-value storage device at which the first value is written, and one of a first flag and a first link region, and
wherein the second snapshot entry includes the second snapshot ID, the first key, a second physical address in the non-volatile memory of the key-value storage device at which the second value is written, and one of a second flag and a second link region,
wherein when the first snapshot entry includes the first flag and the second snapshot entry includes the second flag, the first flag has a first flag value indicating that the first snapshot entry is not the latest snapshot entry, and the second flag has a second flag value indicating that the second snapshot entry is the latest snapshot entry, and
wherein when the first snapshot entry includes the first link region and the second snapshot entry includes the second link region, the first and second snapshot entries are implemented as a linked list in an order of the second snapshot entry to the first snapshot entry and the second link region is configured to store therein a memory address in the mapping table memory at which the first snapshot entry is stored.
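
An illustrative Python model of the link-region variant of the mapping table: the newer snapshot entry records the mapping-table location of the older entry for the same key, forming a newest-first linked list. The dataclass fields and dictionary "memory addresses" are assumptions for the example:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SnapshotEntry:
    snapshot_id: int
    key: str
    physical_address: int          # where the value is written in non-volatile memory
    link: Optional[int] = None     # mapping-table address of the previous entry


class MappingTable:
    def __init__(self):
        self.entries = {}          # mapping-table memory address -> entry
        self.latest = {}           # key -> address of the latest snapshot entry
        self._next_addr = 0

    def put(self, snapshot_id, key, physical_address):
        addr = self._next_addr
        self._next_addr += 1
        entry = SnapshotEntry(snapshot_id, key, physical_address,
                              link=self.latest.get(key))  # linked list, newest first
        self.entries[addr] = entry
        self.latest[key] = addr
        return addr

    def history(self, key):
        addr = self.latest.get(key)
        while addr is not None:
            entry = self.entries[addr]
            yield entry
            addr = entry.link


table = MappingTable()
table.put(snapshot_id=1, key="k1", physical_address=0x1000)   # first command
table.put(snapshot_id=2, key="k1", physical_address=0x2000)   # second command
print([(e.snapshot_id, hex(e.physical_address)) for e in table.history("k1")])
# [(2, '0x2000'), (1, '0x1000')]  -- the second entry links back to the first
```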

US Pat. No. 10,891,073

STORAGE APPARATUSES FOR VIRTUALIZED SYSTEM AND METHODS FOR OPERATING THE SAME

1. A method for operating a storage apparatus having a write buffer, the method comprising:receiving a write request from a virtual machine;
determining a remaining capacity in the write buffer;
identifying, based on a result of the determining of the remaining capacity, a write pattern corresponding to the received write request by comparing a write data size indicated by the write request with a predetermined threshold; and
determining whether to allow the write request to bypass the write buffer based on the identified write pattern.
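
A small sketch of the bypass decision: the requested write size is compared against a threshold (and the write buffer's remaining headroom) to classify the write pattern and decide whether it may bypass the buffer. The threshold value and the classification labels are assumptions:

```python
LARGE_WRITE_THRESHOLD = 64 * 1024   # assumed size threshold, in bytes


def classify_write(size: int, remaining_capacity: int) -> str:
    """Identify a write pattern from the request size and buffer headroom."""
    if size >= LARGE_WRITE_THRESHOLD or size > remaining_capacity:
        return "sequential-large"
    return "random-small"


def should_bypass_buffer(size: int, remaining_capacity: int) -> bool:
    # Large/sequential writes gain little from buffering and may bypass it;
    # small/random writes are absorbed by the write buffer.
    return classify_write(size, remaining_capacity) == "sequential-large"


print(should_bypass_buffer(4 * 1024, remaining_capacity=256 * 1024))     # False
print(should_bypass_buffer(1024 * 1024, remaining_capacity=256 * 1024))  # True
```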

US Pat. No. 10,891,072

NAND FLASH THERMAL ALERTING

Micron Technology, Inc., ...

1. A controller to communicate a thermal condition of a memory device, the controller comprising:a memory cell die interface;
a host interface; and
circuitry configured to:
retrieve, via the memory cell die interface, a temperature from a memory cell die in response to an operation on the memory cell die, the memory cell die including memory cells;
store a representation of the temperature in the controller to create a stored representation of the temperature;
communicate, via the host interface, the stored representation of the temperature as part of a response to an operation received at the controller that corresponded to the operation on the memory cell die;
receive, via the host interface, a request for the thermal condition of the memory device; and
respond, via the host interface, to the request with the stored representation of the temperature without communicating with the memory cell die.

US Pat. No. 10,891,071

HARDWARE, SOFTWARE AND ALGORITHM TO PRECISELY PREDICT PERFORMANCE OF SOC WHEN A PROCESSOR AND OTHER MASTERS ACCESS SINGLE-PORT MEMORY SIMULTANEOUSLY

NXP USA, Inc., Austin, T...

1. A method for predicting performance of an integrated circuit device comprising a processor and one or more master devices having shared access to a single-port memory, the method comprising:activating a timer in a performance monitoring hardware unit of the integrated circuit device to measure a specified number of cycles of the processor in a defined measure instance;
activating a memory access counter in the performance monitoring hardware unit of the integrated circuit device to measure a first count of memory access requests to the single-port memory by the processor in the defined measure instance and to measure a second count of memory access requests to the single-port memory by the master device in the defined measure instance;
storing the first count and second count in memory, where activating the timer comprises enabling a first n-bit counter to count processor clock input pulses in response to an enable pulse signal and to stop counting processor input clock pulses in response to a disable pulse signal, thereby measuring the specified number of cycles of the processor in the defined measure instance;
enabling a second n-bit counter to count read acknowledgement pulses issued by a single-port instruction RAM (I-RAM) in response to memory read requests from the processor during the defined measure instance; and
enabling a third n-bit counter to count read acknowledgement pulses issued by the single-port I-RAM in response to memory read requests from the master device during the defined measure instance.

US Pat. No. 10,891,070

MANAGING GARBAGE COLLECTION IN A MEMORY SUBSYSTEM BASED ON CHARACTERISTICS OF DATA STREAMS

MICRON TECHNOLOGY, INC., ...

1. A method comprising:writing data units from a stream of data into an allocated portion of memory, the allocated portion of memory composed of a plurality of blocks;
evaluating a behavior of the stream of data, the behavior including amounts of valid data units from the stream of data in the allocated portion of memory;
estimating a number of block stripe fills until an amount of valid data units is predicted to be within a predetermined range of a threshold value of valid data units in a block using the evaluated behavior;
performing the estimated number of block stripe fills; and
in response to performance of the estimated number of block stripe fills, performing garbage collection of a first block of the plurality of blocks.
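
One way to picture this is a loop that models how much valid data remains in a block as the stream keeps writing, predicts how many more block-stripe fills are needed before the valid count comes within range of the threshold, and only then garbage-collects. The constant-decay model and the numbers below are assumptions for illustration:

```python
def estimate_fills(valid_units: float, decay_per_fill: float,
                   threshold: float, tolerance: float, max_fills: int = 1000) -> int:
    """Estimate block-stripe fills until valid data is within `tolerance` of `threshold`.

    The observed stream behavior is summarized here as a constant fraction of data
    invalidated per fill (`decay_per_fill`), which is an illustrative model only.
    """
    fills = 0
    while abs(valid_units - threshold) > tolerance and fills < max_fills:
        valid_units *= (1.0 - decay_per_fill)
        fills += 1
    return fills


# A block starts with 1000 valid units; the stream invalidates ~20% per stripe fill.
n = estimate_fills(valid_units=1000, decay_per_fill=0.2, threshold=100, tolerance=25)
print(n)   # number of fills to perform before garbage-collecting the block
```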

US Pat. No. 10,891,069

CREATING LOCAL COPIES OF DATA STORED IN ONLINE DATA REPOSITORIES

Commvault Systems, Inc., ...

1. A method, performed by a data agent of an information management system, for performing on-premises storage of data contained in an online repository, the method comprising:retrieving data stored in an online repository by performing a representational state transfer-conforming (RESTful) application programming interface (API) call to obtain the data stored in the online repository,
wherein performing the RESTful API call results in a compressed copy of the data being transferred to an on-premises site,
wherein the compressed copy of the data includes one or more compressed files for each data library of the online repository and a metadata manifest file for each data library;
parsing each of the one or more compressed files to extract a current version of a data file for every data object within each of the data libraries, and a corresponding metadata file for every data object within each of the data libraries;
retrieving data stored in the online repository by performing a client-side object model (CSOM) API call to obtain any previous versions of the data objects within each of the data libraries;
for each data object having at least a current version and one previous version, normalizing a format of the corresponding metadata files to a format associated with the current version of the data object;
generating a combined metadata file that includes metadata from each of the metadata files associated with the data objects within each of the data libraries; and
transferring the data files for the data objects within each of the data libraries and the combined metadata file to a media agent for storage to an on-premises secondary storage device of the information management system.

US Pat. No. 10,891,068

TEMPORARY RELOCATION OF DATA WITHIN LOCAL STORAGE OF A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a storage unit that includes a processor, the method comprises:receiving a write request, via a network, to write a first data slice;
executing the write request by:
identifying, from a plurality of memory devices of the storage unit, a first memory device that is designated for storage of the first data slice based on determining a slice name of the first data slice compares favorably to a namespace assigned to the first memory device;
determining that the first memory device is unavailable;
performing a function on the slice name of the first data slice to identify a second memory device from the plurality of memory devices of the storage unit for temporary storage of the first data slice in response to determining that the first memory device is unavailable; and
storing the first data slice in the second memory device in response to identifying the second memory device;
determining that the first memory device is available at a time after execution of the write request; and
migrating the first data slice from storage in the second memory device to storage in the first memory device in response to determining that the first memory device is available.
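
A compact sketch of the fallback selection: the slice name is first mapped to its designated memory device by namespace, and if that device is unavailable a deterministic function of the slice name (a hash here, as an assumption) picks an alternate device for temporary storage until the data can be migrated back. The namespace scheme and class layout are illustrative:

```python
import hashlib


class StorageUnit:
    def __init__(self, num_devices: int):
        self.num_devices = num_devices
        self.available = [True] * num_devices
        self.devices = [dict() for _ in range(num_devices)]   # device -> {slice_name: data}

    def primary_device(self, slice_name: str) -> int:
        # Namespace assignment: leading hex digits of the name select a device (assumed scheme).
        return int(slice_name[:2], 16) % self.num_devices

    def fallback_device(self, slice_name: str) -> int:
        # "performing a function on the slice name" to pick a temporary device.
        digest = hashlib.sha256(slice_name.encode()).digest()
        candidate = digest[0] % self.num_devices
        if candidate == self.primary_device(slice_name):
            candidate = (candidate + 1) % self.num_devices
        return candidate

    def write(self, slice_name: str, data: bytes) -> int:
        device = self.primary_device(slice_name)
        if not self.available[device]:
            device = self.fallback_device(slice_name)
        self.devices[device][slice_name] = data
        return device

    def migrate_back(self, slice_name: str) -> None:
        primary = self.primary_device(slice_name)
        for device in self.devices:
            if slice_name in device and device is not self.devices[primary]:
                self.devices[primary][slice_name] = device.pop(slice_name)


unit = StorageUnit(num_devices=4)
unit.available[unit.primary_device("a3f0slice")] = False   # designated device is down
temp = unit.write("a3f0slice", b"slice bytes")
unit.available[unit.primary_device("a3f0slice")] = True    # designated device recovers
unit.migrate_back("a3f0slice")
print(temp, "a3f0slice" in unit.devices[unit.primary_device("a3f0slice")])
```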

US Pat. No. 10,891,067

FAST MIGRATION OF METADATA

Cohesity, Inc., San Jose...

1. A method, comprising:updating a key-value store of a storage node based on one or more underlying database files received from one or more other storage nodes;
receiving one or more logs associated with the one or more underlying database files, wherein the one or more received logs include updates to the one or more underlying database files; and
updating the key-value store based on the updates to the one or more underlying database files included in the one or more received logs.

US Pat. No. 10,891,066

DATA REDUNDANCY RECONFIGURATION USING LOGICAL SUBUNITS

INTELLIFLASH BY DDN, INC....

1. A system, comprising:a processor;
a memory;
a plurality of storage devices configured as a storage group in a first data redundancy configuration, wherein at least one logical data unit is stored in the plurality of storage devices with the first data redundancy configuration; and
a reconfiguration initiator stored in the memory and executable by the processor to perform operations comprising:
accessing a request to migrate data stored at a plurality of storage devices from storage in a first volume having a first data redundancy configuration to storage in a second volume having a second data redundancy configuration;
determining a quantity of available data blocks in the plurality of storage devices;
selecting at least one logical data subunit from the data;
migrating the at least one logical data subunit to one or more of the available data blocks in accordance with the second data redundancy configuration;
during migration of the at least one logical data subunit, receiving a user write operation indicating a change to a portion of the data;
in response to receiving the user write operation:
processing the user write operation to the first volume, implementing the change to the portion of the data at the first volume in accordance with the first data redundancy configuration; and
queuing a duplicate user write operation;
migrating a final data subunit from the first volume to one or more other available data blocks at the plurality of storage devices;
activating the second volume subsequent to migrating the final data subunit; and
processing the queued duplicate user write operation to the activated second volume, implementing the change to the portion of the data at the activated second volume in accordance with the second data redundancy configuration.

US Pat. No. 10,891,065

METHOD AND SYSTEM FOR ONLINE CONVERSION OF BAD BLOCKS FOR IMPROVEMENT OF PERFORMANCE AND LONGEVITY IN A SOLID STATE DRIVE

Alibaba Group Holding Lim...

1. A computer-implemented method for facilitating data placement, the method comprising:monitoring a condition of a plurality of blocks of a non-volatile memory;
determining that a condition of a first block falls below a first predetermined threshold, wherein the first block has a first capacity;
formatting the first block to obtain a second block which has a second capacity, wherein the second capacity is less than the first capacity;
determining that a condition of the second block falls below a second predetermined threshold;
formatting the second block to obtain a third block which has a third capacity, wherein the third capacity is less than the second capacity; and
responsive to determining that a condition of the third block falls below a third predetermined threshold, formatting the third block to obtain a fourth block which has the third capacity and comprises a non-volatile cache of the non-volatile memory.
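
This claim reads as a staged downgrade of a degrading block: full capacity, then reduced capacity, then further reduced capacity, then reuse as non-volatile cache. A tiny state-machine sketch; the thresholds, capacities, and health metric are assumptions:

```python
# Assumed (condition_threshold, resulting_capacity_bytes, role) stages.
STAGES = [
    (0.80, 16384, "data"),    # healthy block, full capacity
    (0.60, 8192,  "data"),    # first reformat: reduced capacity
    (0.40, 4096,  "data"),    # second reformat: further reduced capacity
    (0.00, 4096,  "cache"),   # third stage: same capacity, used as non-volatile cache
]


class Block:
    def __init__(self):
        self.stage = 0
        self.capacity, self.role = STAGES[0][1], STAGES[0][2]

    def monitor(self, condition: float) -> None:
        """Downgrade the block whenever its condition falls below the current threshold."""
        while self.stage < len(STAGES) - 1 and condition < STAGES[self.stage][0]:
            self.stage += 1
            _, self.capacity, self.role = STAGES[self.stage]


blk = Block()
for health in (0.9, 0.7, 0.5, 0.3):
    blk.monitor(health)
    print(f"condition={health:.1f} -> capacity={blk.capacity} role={blk.role}")
```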

US Pat. No. 10,891,064

OPTIMIZING CONNECTIVITY IN A STORAGE SYSTEM DATA

INTERNATIONAL BUSINESS MA...

1. A method for optimizing connectivity in a storage system by a processor, comprising:storing a mapping of connectivity between a host and one or more logical unit numbers (LUNs) exposed by a storage controller in the storage system;
upon receiving one or more Input/Output (I/O) operations by the storage controller from the host to access one of the LUNs, determining, by the storage controller, whether a current connectivity path between the host and the storage controller is a preferred connectivity path to the accessed LUN; wherein the determination as to whether the current connectivity path is the preferred connectivity path is performed, at least in part, by identifying a cache miss during the one or more I/O operations including detecting a number of one or more selected nodes the one or more I/O operations pass through resulting from the cache miss; and wherein the preferred connectivity path between the host and the storage controller is determined by examining the mapping of connectivity from the host to the accessed LUN via the one or more selected nodes and one or more storage virtualization systems; and
responsive to determining the current connectivity path is not the preferred connectivity path, triggering the host to reconnect to the storage controller via the preferred connectivity path to enhance connectivity between the host and the storage controller, wherein the triggering further includes sending an asynchronous message from the storage controller to the host with a list of Internet protocol addresses (IPs) to login to for connecting to the preferred connectivity path.

US Pat. No. 10,891,063

APPARATUS AND METHODS FOR MANAGING DATA STORAGE AMONG GROUPS OF MEMORY CELLS OF MULTIPLE RELIABILITY RANKS

Micron Technology, Inc., ...

1. A method of operating an electronic system, comprising:allocating a group of memory cells of a particular plurality of groups of memory cells having a particular rank of a plurality of ranks for storing data of a particular data level of a plurality of data levels, wherein each rank of the plurality of ranks is indicative of characteristics of a respective plurality of groups of memory cells regarding its determined ability to retain data at a plurality of different storage densities;
determining a need for an additional group of memory cells for storing data of the particular data level;
moving or discarding data from a different group of memory cells storing data of a different data level of the plurality of data levels in response to determining the need for the additional group of memory cells for storing data of the particular data level; and
allocating the different group of memory cells for storing data of the particular data level;
wherein each data level of the plurality of data levels corresponds to a respective target reliability level,
wherein the respective target reliability level for any data level of the plurality of data levels is different than the respective target reliability level of each remaining data level of the plurality of data levels; and
wherein, for each rank of the plurality of ranks whose respective plurality of groups of memory cells has a determined ability to retain data of two or more data levels of the plurality of data levels, the determined ability of its respective plurality of groups of memory cells to retain data for each data level of its two or more data levels is greater than or equal to the respective target reliability level for that data level of its two or more data levels for at least one storage density of the plurality of different storage densities.

US Pat. No. 10,891,062

MANAGING HOST COMMUNICATION WITH A REGULATOR IN A LOW POWER MODE

Micron Technology, Inc., ...

1. A device, comprising:a data line arranged to communicate with a host device, wherein the data line is configured to send an indication that a low power mode should not be entered; and
a regulator coupled to the data line, the regulator configured to:
receive an instruction to enter a low power mode;
monitor the data line to determine whether the indication is being sent; and
enter the low power mode in response to receiving the instruction, if the indication that the low power mode should not be entered is not being sent.

US Pat. No. 10,891,061

ELECTRONIC DEVICE, COMPUTER SYSTEM, AND CONTROL METHOD

Toshiba Memory Corporatio...

1. An electronic device comprising:a memory; and
a processor configured to execute a program stored in the memory, wherein
the processor is configured to:
determine, when issuing a first command to a storage device is requested, a deadline time by which the first command is to be processed, wherein the storage device is capable of processing commands issued by the processor; and
issue the first command for which the deadline time is designated, to the storage device,
wherein the processor is configured to:
determine, when at least a portion of data constituting a file is written into the storage device while the file is being downloaded to the electronic device via a network, a first deadline time by which a write command for writing at least the portion of data is to be processed based on a speed at which the file is downloaded; and
issue the write command for which the first deadline time is designated to the storage device.

US Pat. No. 10,891,060

DATA STORAGE SYSTEM BINDING VIRTUAL VOLUMES TO HOST-SPECIFIC PROTOCOL ENDPOINTS

EMC IP Holding Company LL...

1. A method of operating a data storage system in a cluster of storage systems to provide virtual-volume data storage to a plurality of virtual-computing (VC) hosts, the data storage system including first and second processing nodes paired in an active-active manner to provide for (a) shared processing of a workload in a non-failure operating condition, and (b) single-node processing of the workload in a failover operating condition, the method comprising:organizing physical storage as a plurality of virtual volumes (VVols) each being a virtualized unit of storage for a corresponding virtual machine hosted by a respective VC host;
creating protocol endpoints (PEs) and organizing the PEs into host-specific initiator groups (IGs), each PE being a conglomerate storage device to which a respective set of VVols of the plurality of VVols are to be bound for access by a respective VC host, each IG containing a pair of the PEs for a corresponding VC host, one PE of the pair being advertised to the corresponding VC host as optimized on the first processing node and being advertised to the corresponding VC host as non-optimized on the second processing node, the other PE of the pair being advertised to the corresponding VC host as optimized on the second processing node and being advertised to the corresponding VC host as non-optimized on the first processing node;
binding the sets of VVols to the respective PEs, each VVol of a given set being bound to a corresponding one of the pair of PEs of the corresponding host-specific IG; and
subsequently providing data access to the plurality of VVols from each of the given VC hosts via the respective PEs,
wherein the providing of the data access via the respective PEs includes use of two asymmetric logical unit access (ALUA) paths from each VC host to the PEs of the respective IG, a first ALUA path being a primary access path to the optimized PE on the first processing node, a second ALUA path being a secondary access path to the non-optimized PE on the second processing node, the secondary access path being used when the first processing node for the optimized PE has failed.

US Pat. No. 10,891,059

OBJECT SYNCHRONIZATION IN A CLUSTERED SYSTEM

International Business Ma...

1. A computer-implemented method comprising: receiving, by a storage system in a clustered system, a first input/output (I/O) request, wherein the storage system includes one or more storage nodes, each of the one or more storage nodes having a copy of a particular object stored thereon;
executing the first I/O request, wherein executing the first I/O request modifies data of a first object in a first storage node, the first object being a copy of the particular object;
calculating a cyclic redundancy check (CRC) sum, a starting point, and a length of a modified data area of the first object;
storing the CRC sum, the starting point, and the length of the modified data area of the first object in a first object descriptor list, wherein the CRC sum, the starting point, and the length of the modified data area are stored by adding one or more identified changes of the first object to the first object descriptor list; and
not deleting the one or more identified changes of the first object; and
transferring the modified data of the first object with the CRC sum, the starting point, and the length of the modified data area of the first object to a master storage node, wherein the master storage node includes a master object update descriptor list.

US Pat. No. 10,891,058

ENCODING SLICE VERIFICATION INFORMATION TO SUPPORT VERIFIABLE REBUILDING

PURE STORAGE, INC., Moun...

1. A method comprises:storing, by a set of storage units of a distributed storage network (DSN), a set of appended encoded data slices, wherein an appended encoded data slice of the set of appended encoded data slices includes an encoded data slice of a set of encoded data slices and slice integrity check value information regarding the set of encoded data slices, wherein a data segment of a data object is dispersed error encoded to produce the set of encoded data slices, wherein the slice integrity check value information includes a slice integrity check value for the encoded data slice, and wherein the encoded data slice is hashed to produce the slice integrity check value;
identifying, by a rebuilding agent of the DSN, one of the set of appended encoded data slices for rebuilding;
rebuilding, by the rebuilding agent, the encoded data slice of the one of the set of appended encoded data slices based on a decode threshold number of appended encoded data slices of the set of appended encoded data slices;
generating, by the rebuilding agent, a current slice integrity check value information for the rebuilt encoded data slice;
sending, by the rebuilding agent, an appended rebuilt encoded data slice that includes the rebuilt encoded data slice and the current slice integrity check value information to a storage unit of the set of storage units;
verifying the current slice integrity check value information corresponds to the slice integrity check value information; and
when the current slice integrity check value information corresponds to the slice integrity check value information, storing, by the storage unit, the appended rebuilt encoded data slice as a trusted rebuilt encoded data slice.
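
A small sketch of the append-and-verify idea: each stored slice carries a hash-based integrity check value, and a rebuilt slice is accepted only if its freshly computed check value matches the stored one. The hash choice and the per-slice tuple layout are simplifying assumptions:

```python
import hashlib


def integrity_check_value(slice_bytes: bytes) -> str:
    # the encoded data slice is hashed to produce its integrity check value
    return hashlib.sha256(slice_bytes).hexdigest()


def append_slice(slice_bytes: bytes) -> tuple:
    """Bundle an encoded data slice with its integrity check value."""
    return slice_bytes, integrity_check_value(slice_bytes)


def store_rebuilt(stored: tuple, rebuilt: tuple) -> bool:
    """Storage-unit check: accept the rebuilt slice only if its check value matches."""
    _, expected = stored
    rebuilt_bytes, claimed = rebuilt
    return claimed == expected and integrity_check_value(rebuilt_bytes) == expected


original = append_slice(b"encoded data slice #3")
rebuilt_ok = append_slice(b"encoded data slice #3")       # faithful rebuild
rebuilt_bad = append_slice(b"corrupted rebuild")
print(store_rebuilt(original, rebuilt_ok))    # True  -> stored as trusted rebuilt slice
print(store_rebuilt(original, rebuilt_bad))   # False -> rejected
```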

US Pat. No. 10,891,057

OPTIMIZING FLASH DEVICE WRITE OPERATIONS

EMC IP Holding Company LL...

1. A method for use in managing data storage, the method comprising:receiving, at a system having a plurality of NAND flash memory based solid state drives, a write request to overwrite existing data stored on the solid state drives with new data, wherein the write request is formatted using a first write granularity size corresponding to a hard disk drive write granularity and the solid state drives are configured with a write granularity having a second write granularity size, wherein the first write granularity size is larger than the second write granularity size;
avoiding overwriting the existing data with the new data using the first write granularity size by:
reading existing parity data from cache;
comparing parity data of the new data with the existing parity data from cache to identify which new data subunits in the write request include modified data; and
writing the new data subunits identified as having modified data to corresponding locations on the solid state drives.
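
A toy illustration of the comparison step: the large-granularity write is split into drive-granularity subunits, a lightweight per-subunit parity value is compared against cached parity for the existing data, and only subunits whose parity differs are rewritten. The XOR parity and all sizes are assumptions, not the claimed parity scheme:

```python
HDD_GRANULARITY = 4096   # assumed first (host) write granularity
SSD_GRANULARITY = 512    # assumed second (drive) write granularity


def subunit_parity(subunit: bytes) -> int:
    """Cheap per-subunit parity (XOR of all bytes) standing in for cached parity data."""
    p = 0
    for b in subunit:
        p ^= b
    return p


def modified_subunits(new_block: bytes, cached_parity: list) -> list:
    """Return indices of drive-granularity subunits whose parity differs from the cache."""
    changed = []
    for i in range(0, HDD_GRANULARITY, SSD_GRANULARITY):
        idx = i // SSD_GRANULARITY
        if subunit_parity(new_block[i:i + SSD_GRANULARITY]) != cached_parity[idx]:
            changed.append(idx)
    return changed


existing = bytes(HDD_GRANULARITY)                        # all zeros on the drive
cache = [subunit_parity(existing[i:i + SSD_GRANULARITY])
         for i in range(0, HDD_GRANULARITY, SSD_GRANULARITY)]
new = bytearray(existing)
new[600] = 0xFF                                          # host modifies one byte
print(modified_subunits(bytes(new), cache))              # only subunit 1 is rewritten
```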

US Pat. No. 10,891,056

VIRTUALIZATION OF MEMORY COMPUTE FUNCTIONALITY

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for managing allocation of memory compute functionality, the method comprising:receiving, from a first system component of a plurality of system components, a request for free memory pages that are not currently being used by any of the plurality of system components, wherein the plurality of system components further comprises a second system component and a third system component, wherein a priority associated with the first system component is higher than a priority associated with the second system component, and the priority associated with the second system component is higher than a priority associated with the third system component;
allocating, by a virtualized hypervisor, resources of a memory function controller to the first system component;
determining that one or more first criteria are not met for obtaining a group of free memory pages from a group of active memory pages, wherein at least a portion of the group of active memory pages are currently being utilized by the second system component, and the one or more first criteria include a predicted impact to system performance of allocating the group of free memory pages to the first system component;
receiving, from the third system component, a second request for the free memory pages, wherein the second request is received after receiving the first request;
allocating, by the virtualized hypervisor, the resources of the memory function controller to the third system component;
determining that the one or more second criteria are met for obtaining the group of free memory pages from the group of active memory pages, the one or more second criteria including a predicted impact to system performance of allocating the group of free memory pages to the third system component;
obtaining the group of free memory pages from the group of active memory pages, the obtaining comprising moving memory pages from the group of active memory pages into the group of free memory pages;
converting the group of free memory pages into a group of memory compute pages; and
allocating, by the virtualized hypervisor, the group of memory compute pages for exclusive use by a memory compute function of the memory function controller to execute one or more operations on behalf of the third system component.

US Pat. No. 10,891,055

METHODS, SYSTEMS AND DEVICES RELATING TO DATA STORAGE INTERFACES FOR MANAGING DATA ADDRESS SPACES IN DATA STORAGE DEVICES

OPEN INVENTION NETWORK LL...

1. A data storage system, comprising:a storage medium component comprising a plurality of physical storage locations, each of the storage locations having a unique storage location indicator associated therewith;
a translation layer module defining a data address space defining a plurality of data addresses, each of the data addresses being associable with the unique storage location indicator; and
a controller configured to:
map associations in the translation layer module between data addresses and the storage location indicators,
remap a given data set stored in the storage medium component from a plurality of non-contiguous data addresses to a contiguous set of data addresses when a largest contiguous block of the data addresses reaches a predetermined threshold, wherein the remap continues until another predetermined threshold is met.
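
A minimal model of one reading of the remapping step: a logical-to-physical map is checked for a data set whose data addresses have become fragmented, and when the threshold condition is met (interpreted here, as an assumption, as the set's largest contiguous run of addresses being too small) the set is remapped to a contiguous range of data addresses. The data structures and threshold are illustrative:

```python
def largest_contiguous_run(addresses: list) -> int:
    """Length of the longest run of consecutive data addresses in the list."""
    best = run = 1
    addrs = sorted(addresses)
    for prev, cur in zip(addrs, addrs[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best


def maybe_remap(mapping: dict, addresses: list, threshold: int, free_base: int) -> list:
    """Remap a data set to contiguous data addresses when the (assumed) threshold
    condition is met; `mapping` associates data addresses with storage locations."""
    if largest_contiguous_run(addresses) >= threshold:
        return addresses                       # already contiguous enough, leave it
    new_addresses = list(range(free_base, free_base + len(addresses)))
    for old, new in zip(sorted(addresses), new_addresses):
        mapping[new] = mapping.pop(old)        # move the association; the data stays put
    return new_addresses


table = {10: 0xA000, 37: 0xA200, 90: 0xA400, 91: 0xA600}   # fragmented data set
addrs = maybe_remap(table, [10, 37, 90, 91], threshold=3, free_base=200)
print(addrs)          # [200, 201, 202, 203]
print(sorted(table))  # the set now occupies a contiguous range of data addresses
```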

US Pat. No. 10,891,054

PRIMARY DATA STORAGE SYSTEM WITH QUALITY OF SERVICE

NEXGEN STORAGE, INC., Lo...

1. A data storage system having a quality of service capability, the system comprising:an input/output port configured to receive a block command packet that embodies one of a read block command and a write block command and to transmit a block result packet in reply to a block command packet;
a data store system having at least first and second data stores each configured to receive and store data in response to a write block command and retrieve and provide data in response to a read block command;
wherein the first data store has first data storage characteristics;
wherein the second data store has second data storage characteristics;
wherein the data store system has a data store system quality of service goal;
a statistics database configured to receive, store, and provide data for use in making decisions related to the pursuit of the data store system quality of service goal; and
a sorting processor configured to sort an input string comprised of multiple read/write block commands, wherein the sorting processor is configured to order the multiple read/write block commands in an output string based on:
(a) the first and second data storage characteristics of the first and second data stores, (b) the data store system quality of service goal, and (c) statistical data provided by the statistics database, wherein the sorting processor is also configured, in connection with the sorting of read/write block commands, to determine which of the first and second data stores should receive the read/write block command such that processing of the command via the selected one of the first and second data stores is unlikely to violate a time constraint.

US Pat. No. 10,891,053

PREDICTING GLUCOSE TRENDS FOR POPULATION MANAGEMENT

CERNER INNOVATION, INC, ...

1. One or more computer storage media storing computer-useable instructions, the instructions when executed by one or more computing devices, cause the one or more computing devices to perform operations comprising:receiving, by an integrated home device associated with a patient, one or more interventions based on a determined real-time prediction, wherein the determined real-time prediction is based on analysis of a first set of glucose data corresponding to the patient by a predictive model trained by a second set of glucose data from a plurality of sources including electronic medical records associated with a plurality of patients, the determined real-time prediction indicating whether the patient is likely to have blood glucose levels corresponding to a predetermined threshold; and
automatically adjusting a frequency or dosage of medication dispensed by the integrated home device associated with the patient based on the received one or more interventions,
wherein training the predictive model includes logistic regression analysis of a plurality of data elements associated with the plurality of patients that are relevant to forecasting blood glucose levels, the plurality of data elements comprising at least one of medication data, clinical event data, surgical data, or demographic data, and wherein the first set of glucose data and the second set of glucose data includes blood glucose values and values for each of the plurality of data elements identified by the logistic regression analysis as relevant to forecasting blood glucose levels.

US Pat. No. 10,891,052

ADAPTIVE SYSTEM FOR OPTIMIZATION OF NON-VOLATILE STORAGE OPERATIONAL PARAMETERS

Western Digital Technolog...

1. A method for adjusting performance of a solid state drive, the method comprising:a controller of the solid state drive logging a predetermined set of performance measurements of non-volatile memories of the solid state drive, wherein the solid state drive comprises the controller;
the controller automatically transmitting to a non-volatile memory optimization server, via a host of the solid state drive, data logged for the predetermined set of performance measurements;
detecting a self-calibration trigger; and
in response to detecting the self-calibration trigger:
the solid state drive transmitting, via the host, a calibration update request to the non-volatile memory optimization server, wherein the calibration update request causes the non-volatile memory optimization server to:
identify a non-volatile memory operating parameter from a first plurality of predetermined non-volatile memory operating parameters previously stored in the non-volatile memory optimization server, based on an assessment by the non-volatile memory optimization server of the data logged for the predetermined set of performance measurements and on crowdsourced information received from other solid state drives by the non-volatile memory optimization server, wherein the crowdsourced information is data of performance measurements of non-volatile memories of the other solid state drives; and
transmit a calibration update command to the solid state drive without transmitting the identified non-volatile memory operating parameter of the first plurality of predetermined non-volatile memory operating parameters;
the solid state drive receiving, in response to the calibration update request, the calibration update command from the non-volatile memory optimization server, the calibration update command comprising a non-volatile memory operating parameter identifier identified by the non-volatile memory optimization server, wherein the non-volatile memory operating parameter identifier is associated with the non-volatile memory operating parameter; and
the solid state drive retrieving, using the non-volatile memory operating parameter identifier, one non-volatile memory operating parameter identified by the non-volatile memory operating parameter identifier, from a second plurality of predetermined non-volatile memory operating parameters previously stored in the solid state drive, wherein the non-volatile memory operating parameter identifier is based on the data logged for the predetermined set of performance measurements of non-volatile memories of the solid state drive and the data of performance measurements of non-volatile memories of the other solid state drives,
wherein the solid state drive, the other solid state drives, the host, and the non-volatile memory optimization server are separate and distinct from one another, wherein the non-volatile memory optimization server is outside the solid state drive, wherein the first and second pluralities of predetermined non-volatile memory operating parameters are for non-volatile memories and comprise non-volatile memory read thresholds, a sequence of steps for error recovery, and error correction code parameters.

US Pat. No. 10,891,051

SYSTEM AND METHOD FOR DISABLED USER ASSISTANCE

Poynt Co., Palo Alto, CA...

1. A system for assisting a disabled user with a point of sale (POS) transaction, the system comprising:a first touch display configured to receive a first touch input from the disabled user;
a secure processing system configured to:
store an assistance map and a non-assistance map,
operate between:
an assistance mode, wherein the secure processing system maps the first touch input to a first digital input associated with the POS transaction based on the assistance map, and
a non-assistance mode, wherein the secure processing system maps a second touch input from a non-disabled user to a second digital input based on the non-assistance map, and
encrypt the first digital input; and
a main processing system coupled to, and distinct from, the secure processing system, wherein the main processing system is configured to:
receive the encrypted first digital input from the secure processing system, and
transmit the encrypted first digital input to a remote entity associated with the POS transaction.

US Pat. No. 10,891,050

METHOD AND APPARATUS FOR VARIABLE IMPEDANCE TOUCH SENSOR ARRAYS IN NON-PLANAR CONTROLS

SENSEL, INC., Sunnyvale,...

1. A method for receiving an adjustment gesture formed on or about a plurality of sensor panels on a plurality of faces of a device comprising:detecting two or more touches at a first time at the sensor panels;
determining that the two or more touches are arranged in a pattern corresponding to a predetermined gesture;
determining a relative pressure between the two or more touches;
associating the predetermined gesture with a user interface (UI) element, the UI element accepting an adjustment input based on the relative pressure;
determining that the pattern and the relative pressure correspond to a predetermined see-saw gesture; and
providing an input to the UI element based on the predetermined gesture and the relative pressure.

US Pat. No. 10,891,049

SYSTEMS AND METHODS FOR CONTENT PREFERENCE DETERMINATION BASED ON SWIPE ANALYSIS

ROVI GUIDES, INC., San J...

1. A method for determining a preference for content based on swipe characteristics, the method comprising:generating for display a content identifier on a touchscreen;
detecting user contact at a first point on the touchscreen displaying the content identifier;
while the user contact is maintained on the touchscreen:
detecting an initiation of a swipe gesture to a second point on the touchscreen, wherein completion of the swipe gesture occurs upon release of the user contact; and
determining whether the swipe gesture has been temporarily halted for at least a threshold period of time before completion of the swipe gesture at the second point on the touchscreen; and
in response to determining that the swipe gesture has been temporarily halted for at least the threshold period of time before completion of the swipe gesture at the second point on the touchscreen, assigning to the content identifier a preference level that is a function of an amount of time the swipe gesture is temporarily halted.
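A minimal Python sketch of the halt-based preference assignment described above, under assumed values for the halt threshold and the mapping from halt duration to preference level; both are illustrative, not taken from the patent.

```python
# Illustrative sketch: derive a preference level for a content identifier from
# how long a swipe gesture was halted before completion. The threshold and the
# mapping from halt time to preference level are hypothetical.

HALT_THRESHOLD_S = 0.5  # minimum halt duration that triggers preference assignment

def preference_from_halt(halt_duration_s: float, max_level: int = 5) -> int | None:
    """Return a preference level (1..max_level) as a function of halt time,
    or None if the halt was shorter than the threshold."""
    if halt_duration_s < HALT_THRESHOLD_S:
        return None  # uninterrupted swipe: no preference recorded
    # One extra level per additional half second of halting, capped at max_level.
    level = 1 + int((halt_duration_s - HALT_THRESHOLD_S) / 0.5)
    return min(level, max_level)

if __name__ == "__main__":
    for halt in (0.2, 0.6, 1.4, 3.0):
        print(halt, preference_from_halt(halt))
```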

US Pat. No. 10,891,048

METHOD AND SYSTEM FOR USER INTERFACE LAYER INVOCATION

NIO USA, Inc., San Jose,...

1. A system, comprising:a processor, comprising memory for storing instructions for execution by the processor;
an output component that presents a representation of an output of the processor;
an input component that receives a gesture provided by a user and provides a logical representation of the gesture to the processor; and
wherein the processor, receives the logical representation of the gesture, and compares the logical representation of the gesture to a representation of a model gesture, wherein the representation of a model gesture is associated with a first application;
wherein, upon the logical representation of the gesture being determined to end outside a portion of an output device displaying an interface element of a second application, the processor further determines whether the logical representation of the gesture matches the representation of the model gesture associated with the first application, and if a match is determined, outputs a representation of the first application to the output component for presentation by the output component;
wherein, upon the logical representation of the gesture being determined to end inside the portion of the output device displaying an interface element of the second application, the processor outputs a representation of the second application, in response to the logical representation of the gesture, to the output component for presentation by the output component; and
wherein the logical representation of the gesture is received while the output device is displaying a second application and while the first application is not active.

US Pat. No. 10,891,047

METHOD AND APPARATUS FOR UNLOCKING TERMINAL

LG CNS CO., LTD., Seoul ...

1. A method of unlocking a locked mode of a terminal, the method comprising:setting, by a controller, an unlocking pattern for unlocking the terminal, including
receiving, on a touch screen, a first input to assign a starting point of the unlocking pattern on the touch screen, and receiving, on the touch screen, a second input to assign an ending point of the unlocking pattern on the touch screen,
setting a lattice having a preset range from the starting point of the unlocking pattern, wherein the preset range of the lattice is smaller than the touch screen, and
after receiving the first input on the touch screen and receiving the second input on the touch screen, receiving a first touch-and-drag input to provide a moving path of the unlocking pattern inside the lattice that extends from the starting point to the ending point, wherein the moving path of the unlocking pattern has a prescribed shape that corresponds to a continuous touch-and-drag input on the touch screen, and
storing the unlocking pattern including the starting point, the ending point, and the moving path corresponding to the first touch-and-drag input; and
unlocking, by the controller, the terminal, including:
displaying, on the touch screen, a home screen in a locked mode that includes a first icon displayed at the starting point on the home screen, wherein the first icon is assigned to the starting point of the unlocking pattern;
inputting, on the touch screen, a second touch-and-drag input on the home screen to provide a moving path, wherein the moving path of the second touch-and-drag input has a prescribed shape that starts from the first icon at the starting point displayed on the home screen, extends in a first direction to a predetermined point on the home screen, extends in a second direction from the predetermined point on the home screen, and then extends to an ending point, wherein the predetermined point is positioned at a location on the home screen not occupied by an icon or object displayed on the home screen;
in response to the second touch-and-drag input on the home screen, comparing the starting point of the second touch-and-drag input to the starting point of the stored unlocking pattern, comparing the ending point of the second touch-and-drag input to the ending point of the stored unlocking pattern, and comparing the moving path having the prescribed shape of the second touch-and-drag input to the moving path having the prescribed shape of the stored unlocking pattern;
displaying, on the touch screen, a notification indicating a result of the comparison when the starting point, the ending point, and the moving path having the prescribed shape of the second touch-and-drag input do not correspond to the set unlocking pattern; and
displaying the home screen in an unlocked mode when the starting point, the ending point, and the moving path having the prescribed shape of the second touch-and-drag input correspond to the starting point, the ending point, and the moving path having the prescribed shape of the stored unlocking pattern,
wherein an appearance of the home screen is the same in the locked mode as in the unlocked mode.
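A minimal Python sketch of the comparison step: the attempted touch-and-drag input is checked against the stored unlocking pattern by starting point, ending point, and moving-path shape. The tolerances and the index-based path resampling are assumptions made for illustration.

```python
# Illustrative sketch: compare a second touch-and-drag input against a stored
# unlocking pattern by starting point, ending point, and path shape.

import math

Point = tuple[float, float]

def _sample(path: list[Point], n: int = 16) -> list[Point]:
    """Pick n points spread evenly by index; crude, but enough for a sketch."""
    if len(path) < 2:
        return list(path) * n
    return [path[round(i * (len(path) - 1) / (n - 1))] for i in range(n)]

def _dist(a: Point, b: Point) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def matches_stored_pattern(attempt: list[Point], stored: list[Point],
                           endpoint_tol: float = 0.05, path_tol: float = 0.10) -> bool:
    """True when start, end, and overall path shape of the attempt correspond
    to the stored unlocking pattern within the given tolerances."""
    if _dist(attempt[0], stored[0]) > endpoint_tol:    # starting point check
        return False
    if _dist(attempt[-1], stored[-1]) > endpoint_tol:  # ending point check
        return False
    a, s = _sample(attempt), _sample(stored)
    mean_err = sum(_dist(p, q) for p, q in zip(a, s)) / len(a)
    return mean_err <= path_tol                        # moving-path shape check

if __name__ == "__main__":
    stored = [(0.1, 0.1), (0.5, 0.1), (0.5, 0.5), (0.9, 0.5)]
    attempt = [(0.12, 0.1), (0.5, 0.12), (0.52, 0.5), (0.88, 0.5)]
    print(matches_stored_pattern(attempt, stored))  # -> True
```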

US Pat. No. 10,891,046

WIRELESS DEVICE HAVING A REAR PANEL CONTROL TO PROVIDE ADVANCED TOUCH SCREEN CONTROL

TracFone Wireless, Inc., ...

1. A wireless device comprising:a housing including a front panel, a rear panel, and side edges, the front panel arranged in a front side of the housing and the front panel being associated with a display device configured to display a graphical user interface, the rear panel arranged on a rear side of the housing and the rear side including at least a rear input, and the side edges being arranged between the rear panel and the front panel;
a processor configured to execute instructions stored in a memory;
a touchscreen associated with the display device and the touchscreen configured to receive a user input on the front panel;
the rear input arranged on the rear panel of the housing configured to receive a user input in conjunction with the processor to provide advanced user controls on the graphical user interface displayed on the front panel;
the display device in response at least in part to the processor being further configured to display a rear input customization user interface on the graphical user interface to request a user's designation, the rear input customization user interface displays graphical user interface elements to request enablement functionality of the rear input to enable the rear input and a plurality of enablement functionalities of advanced user controls to enable the advanced user controls on the graphical user interface of the front panel, one or more of the advanced user controls being configured to receive a user's designation of a different type of user input applied to the rear input for a corresponding type of one of the advanced user controls and thereafter set the user's designation of the different type of user input of the rear input to the corresponding type of one of the advanced user controls; and
the processor configured to display a corresponding type of one of the advanced user controls on the graphical user interface displayed on the display device to provide additional functionality as a part of the graphical user interface in response to receiving a type of user input applied to the rear input that has been set for the corresponding type of one of the advanced user controls,
wherein the advanced user controls comprise at least one of the following: a user menu functionality, a content peek functionality, a pop functionality, and a trackpad functionality;
wherein the type of user input applied to the rear input comprises one of the following: a soft touch, a hard touch, a single click, a double-click, a triple click, and a press and hold, and
wherein the user menu functionality is generated by the processor and displayed on the graphical user interface to provide a user menu with a plurality of possible actions for a user to choose.

US Pat. No. 10,891,045

APPLICATION INSPECTOR

eBay Inc., San Jose, CA ...

1. A system comprising:a memory having stored processor-executable instructions; and
a processor configured to execute the processor-executable instructions to implement an inspector tool to perform operations comprising:
communicating with an executing application to identify an element in a user interface of the executing application, the user interface comprising multiple elements organized in a hierarchy;
superimposing a transparent layer over the user interface, the transparent layer including one or more graphical objects that indicate locations of one or more corresponding elements of the user interface; and
presenting an expandable information bar configured to permit visual manipulation of the transparent layer, the expandable information bar being further configured to be expanded to provide an information region contained within the expandable information bar, the information region providing information on the hierarchy of multiple elements and enabling interaction with the transparent layer, the presenting facilitating access to debugging functionality to debug a layout of the user interface.

US Pat. No. 10,891,044

AUTOMATIC POSITIONING OF CONTENT ITEMS IN A SCROLLING DISPLAY FOR OPTIMAL VIEWING OF THE ITEMS

Twitter, Inc., San Franc...

1. A computer-implemented method comprising:providing, on a touchscreen display of an electronic device, a stream of content items, at least some of the content items being associated with a corresponding display anchor;
detecting a plurality of inputs consecutively provided to the touchscreen display, the plurality of inputs causing the stream of content items to scroll, each input being associated with at least one point of contact on the touchscreen and at least one time at which the input is made;
determining a scroll speed associated with the plurality of inputs, the scroll speed being based on an elapsed time between an end of a first input and a beginning of a second input from at least two consecutive inputs in the plurality of inputs;
selecting, based on the determined scroll speed, a scroll mode from a first scroll mode and a second scroll mode, wherein the first scroll mode is a snap to location scroll mode that selects a display anchor as a pause location for the stream of content items when the determined scroll speed is at or below a predefined speed threshold and the second scroll mode is a free scroll mode that allows free scrolling without snapping the stream of content items when the determined scroll speed is above the predefined speed threshold;
in response to selecting the first scroll mode and determining a lack of input for a predefined time period, determining and selecting, during scrolling and based on the determined scroll speed, the display anchor as the pause location for the stream, the display anchor corresponding to at least one content item in the stream of content items; and
pausing, according to the selected first scroll mode, the scrolling of the stream of content items at the pause location, such that the at least one content item corresponding to the selected display anchor is displayed in a top viewable portion of the display.
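A minimal Python sketch of the scroll-mode selection and pause-location logic described above, with an assumed speed threshold, an assumed model of scroll speed as the inverse of the gap between consecutive inputs, and a hypothetical ContentItem type; none of these specifics come from the claim.

```python
# Illustrative sketch: choose between a "snap to anchor" scroll mode and a
# "free scroll" mode from the elapsed time between two consecutive inputs,
# then pick a display anchor as the pause location when snapping.

from dataclasses import dataclass

SPEED_THRESHOLD = 2.0  # "inputs per second" above which free scrolling is used

@dataclass
class ContentItem:
    item_id: str
    anchor_y: float | None  # display anchor position, or None if the item has no anchor

def select_scroll_mode(first_input_end_t: float, second_input_start_t: float) -> str:
    """Scroll speed is modeled as the inverse of the gap between consecutive inputs."""
    gap = max(second_input_start_t - first_input_end_t, 1e-6)
    speed = 1.0 / gap
    return "free" if speed > SPEED_THRESHOLD else "snap"

def pause_location(items: list[ContentItem], viewport_top_y: float) -> ContentItem | None:
    """In snap mode, pick the anchored item closest to the top of the viewport."""
    anchored = [i for i in items if i.anchor_y is not None]
    if not anchored:
        return None
    return min(anchored, key=lambda i: abs(i.anchor_y - viewport_top_y))

if __name__ == "__main__":
    print(select_scroll_mode(0.00, 0.10))   # quick successive flicks -> "free"
    print(select_scroll_mode(0.00, 1.20))   # slow scrolling -> "snap"
    stream = [ContentItem("a", 120.0), ContentItem("b", None), ContentItem("c", 305.0)]
    print(pause_location(stream, viewport_top_y=300.0).item_id)  # -> "c"
```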

US Pat. No. 10,891,043

SLIDE BAR DISPLAY CONTROL DEVICE AND SLIDE BAR DISPLAY CONTROL METHOD

NEC CORPORATION, Tokyo (...

1. An electronic apparatus comprising:a display comprising a touch panel on which a user performs a plurality of touch operations including: a drag operation, a long touch operation, and a release operation; and
a processor and a computer readable memory storing program instructions that when executed by the processor cause the processor to implement:
displaying, on the display, a slider, which shows a value and is configured to overlap and slide along a first bar with a first appearance, wherein the first bar is configured to define a first range within which a position of the slider specifies the value,
changing the position of the slider to a first position based on a drag operation to the first position,
specifying a first value in the first range based on the first position of the slider,
changing, while the slider is in the first position and in response to a long touch operation at the first position for more than a predetermined time, the first bar to a second bar configured to define a second range, wherein the second bar has a second appearance in portions of the second bar that do not overlap with the slider,
changing, after the first bar has changed to the second bar, the position of the slider at the first position to a second position based on a drag operation to the slider to the second position, which specifies a second value in the second range based on the second position of the slider, and
determining the second value in response to a release operation at the second position.
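A minimal Python sketch of the two-range slider behavior: dragging sets a value in the first range, and a long touch swaps the first bar for a second, finer-grained bar. The long-touch threshold and the rule that the second range is a narrower window around the current value are assumptions.

```python
# Illustrative sketch: a slider whose bar is replaced by a second bar with a
# second range after a long touch, allowing finer adjustment around the
# currently specified value. Ranges and thresholds are hypothetical.

LONG_TOUCH_S = 0.8  # press duration that triggers the bar change

class Slider:
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high   # first range
        self.value = low

    def drag_to(self, position: float) -> float:
        """position is 0..1 along the current bar; returns the specified value."""
        position = min(max(position, 0.0), 1.0)
        self.value = self.low + position * (self.high - self.low)
        return self.value

    def long_touch(self, duration_s: float) -> None:
        """After a long touch, replace the first bar with a second bar covering
        a narrower range centered on the current value (finer control)."""
        if duration_s <= LONG_TOUCH_S:
            return
        span = (self.high - self.low) * 0.1
        self.low = self.value - span / 2
        self.high = self.value + span / 2

if __name__ == "__main__":
    s = Slider(0, 100)
    print(s.drag_to(0.50))        # first range: -> 50.0
    s.long_touch(duration_s=1.2)  # bar changes to a 10-unit range around 50
    print(s.drag_to(1.00))        # second range: -> 55.0
```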

US Pat. No. 10,891,042

ADAPTIVE GRAPHICAL USER INTERFACE FOR APPLIANCE

Electrolux Appliances Akt...

1. A method for controlling a household appliance using a graphical user interface, the user interface comprising a touch-sensitive display and a display control unit for controlling the touch-sensitive display, the method comprising the steps of:displaying a first graphical representation on the touch-sensitive display, the first graphical representation comprising one or more symbols, each symbol representing an appliance subunit of the household appliance;
when one of the symbols is selected based on a touch at the one of the symbols by a touching means, changing the first graphical representation into a second graphical representation, the second graphical representation comprising a value range indication that allows the user to change a power setting value of the appliance subunit represented by the selected symbol by a dragging that begins from a starting point within an area of the selected symbol and proceeds across the second graphical representation, wherein the value range indication displays the power setting value in the area of the selected symbol fixed at a position as the dragging proceeds, the power setting value being changed in proportion to a dragging distance from the starting point within the area of the selected symbol; and
when the touching means is lifted from the touch-sensitive display, changing the second graphical representation to a third graphical representation, the third graphical representation comprising a display of the changed power setting value selected by the dragging across the second graphical representation.
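A minimal Python sketch of changing the power setting value in proportion to the dragging distance from the starting point, assuming an illustrative pixels-per-step scale and a 0..9 power range.

```python
# Illustrative sketch: change an appliance subunit's power setting in
# proportion to the drag distance from the starting point inside the selected
# symbol. The scale (one power step per 40 px) and the limits are assumptions.

PIXELS_PER_STEP = 40
MIN_POWER, MAX_POWER = 0, 9

def power_from_drag(initial_power: int, start_x: float, current_x: float) -> int:
    """Power setting value displayed in the symbol area as the dragging proceeds."""
    steps = int((current_x - start_x) / PIXELS_PER_STEP)
    return max(MIN_POWER, min(MAX_POWER, initial_power + steps))

if __name__ == "__main__":
    # Dragging 120 px to the right from power 3 raises the setting by 3 steps.
    print(power_from_drag(3, start_x=100, current_x=220))   # -> 6
    # Dragging left lowers it, clamped at the minimum.
    print(power_from_drag(3, start_x=100, current_x=-100))  # -> 0
```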

US Pat. No. 10,891,041

DATA PREPARATION USER INTERFACE FOR AGGREGATE COMPARISON OF DATASETS AT DIFFERENT NODES IN A PROCESS FLOW

Tableau Software, Inc., ...

1. A method for comparing data sets in a data preparation application, comprising:at a computer system having one or more processors and memory storing one or more programs configured for execution by the one or more processors:
displaying a user interface that includes a plurality of panes, including a data flow pane and a profile pane, wherein the data flow pane displays a flow diagram having a plurality of nodes, each node corresponding to a respective data set having a respective plurality of data fields;
in response to receiving a first user input selecting a first node in the flow diagram, displaying, in the profile pane, information about a first data set corresponding to the first node, including displaying distributions of data values for one or more of the data fields from the first data set;
receiving a second user input to concurrently select a second node in the flow diagram; and
in response to the second user input:
forming a composite data set comprising a union of (i) the first data set and (ii) a second data set corresponding to the second node;
grouping data values for each of a plurality of data fields in the composite data set to form a respective set of bins; and
displaying, in the profile pane, distributions of data values for the plurality of data fields in the composite data set, each distribution comprising the respective set of bins for a respective data field, wherein each displayed bin depicts counts of data values in the respective bin originating from each of the first and second data sets.
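A minimal Python sketch of the union-and-bin step: values of one shared field from two data sets are combined, grouped into bins, and counted per originating node so each displayed bin can depict both counts. The field, bin width, and dictionary-based output shape are assumptions.

```python
# Illustrative sketch: form the union of two data sets, bin the values of a
# shared field, and count, per bin, how many values came from each node.

from collections import Counter
from typing import Iterable

def binned_counts_by_origin(first: Iterable[float], second: Iterable[float],
                            bin_width: float = 10.0) -> dict[float, dict[str, int]]:
    """Return {bin_lower_edge: {"first": n1, "second": n2}} for the union."""
    bins: dict[float, Counter] = {}
    for origin, values in (("first", first), ("second", second)):
        for v in values:
            edge = (v // bin_width) * bin_width
            bins.setdefault(edge, Counter())[origin] += 1
    return {edge: dict(c) for edge, c in sorted(bins.items())}

if __name__ == "__main__":
    sales_node_1 = [12, 17, 23, 8, 41]
    sales_node_2 = [15, 22, 44, 47]
    for edge, counts in binned_counts_by_origin(sales_node_1, sales_node_2).items():
        print(f"[{edge:.0f}, {edge + 10:.0f}): {counts}")
```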

US Pat. No. 10,891,040

SYSTEMS AND METHODS INCLUDING BAR-TYPE PARAMETER ADJUSTMENT ELEMENTS

Gambro Lundia AB, Lund (...

1. An extracorporeal blood treatment system comprising:extracorporeal blood treatment apparatus comprising one or more pumps, one or more sensors, and one or more disposable elements for use in performing an extracorporeal blood treatment;
a display comprising a graphical user interface configured to depict a parameter adjustment region corresponding to one or more parameters related to the extracorporeal blood treatment performable using the extracorporeal blood treatment apparatus; and
a processor operatively coupled to the extracorporeal blood treatment apparatus and the display and configured to:
provide one or more parameters related to the extracorporeal blood treatment performable using the extracorporeal blood treatment apparatus and one or more reference values, wherein each of the one or more reference values is associated with a different parameter of the one or more parameters, wherein each of the one or more reference values represents a selected prescription value, a preset default value, or a saved value for a patient for the associated parameter of the one or more parameters,
display the parameter adjustment region on the graphical user interface, wherein the parameter adjustment region comprises one or more bar-type parameter adjustment elements, wherein each of the one or more bar-type parameter adjustment elements is associated with and configured to adjust a different parameter of the one or more parameters related to the extracorporeal blood treatment performable using the extracorporeal blood treatment apparatus, wherein each of the one or more bar-type parameter adjustment elements comprises:
a bar element extending from a first end representative of a lower value for the associated parameter to a second end representative of an upper value for the associated parameter, and
an indicator element located along the bar element between the first end and the second end indicative of a present value of the associated parameter, wherein the indicator element is configurable in a locked state and an unlocked state, wherein the indicator element is unmovable along the bar element by selecting and dragging the indicator element when in the locked state, wherein the indicator element is movable along the bar element by selecting and dragging when in the unlocked state,
configure the indicator element of the one or more bar-type parameter adjustment elements in the locked state when the indicator element is not selected by a user,
configure the indicator element of the one or more bar-type parameter adjustment elements into the unlocked state in response to a user selecting and maintaining selection of the indicator element for an unlock time period, and
decrease or increase the parameter of the one or more parameters related to the extracorporeal blood treatment performable using the extracorporeal blood treatment apparatus associated with the bar-type parameter adjustment element of the one or more bar-type parameter adjustment elements in response to a user moving the indicator element of the bar-type parameter adjustment element along the bar element towards the first end or the second end, respectively, when the indicator element is in the unlocked state.
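A minimal Python sketch of the locked/unlocked indicator behavior recited above: the indicator only responds to dragging once a maintained selection has exceeded an unlock period, and it re-locks on release. The unlock period and the example parameter bounds are assumptions.

```python
# Illustrative sketch of the locked/unlocked indicator of a bar-type
# parameter adjustment element. Values shown are hypothetical.

UNLOCK_HOLD_S = 1.0  # assumed unlock time period

class BarAdjustmentElement:
    def __init__(self, lower: float, upper: float, value: float):
        self.lower, self.upper, self.value = lower, upper, value
        self.locked = True

    def hold_selection(self, hold_duration_s: float) -> None:
        """Maintaining selection for the unlock period unlocks the indicator."""
        if hold_duration_s >= UNLOCK_HOLD_S:
            self.locked = False

    def release(self) -> None:
        """Deselecting returns the indicator to the locked state."""
        self.locked = True

    def drag_to(self, new_value: float) -> float:
        """Dragging moves the indicator (and the parameter) only when unlocked."""
        if not self.locked:
            self.value = max(self.lower, min(self.upper, new_value))
        return self.value

if __name__ == "__main__":
    blood_flow = BarAdjustmentElement(lower=50, upper=400, value=200)
    print(blood_flow.drag_to(300))      # locked -> stays 200
    blood_flow.hold_selection(1.5)      # held long enough -> unlocked
    print(blood_flow.drag_to(300))      # -> 300
    blood_flow.release()
    print(blood_flow.drag_to(100))      # locked again -> stays 300
```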

US Pat. No. 10,891,039

SHARED REAL-TIME CONTENT EDITING ACTIVATED BY AN IMAGE

Nant Holdings IP, LLC, C...

1. A collaboration system, comprising:a collaboration database storing a plurality of collaboration interface components; and
at least one processor configured to control an object recognition engine communicatively coupled with the collaboration database, and the object recognition engine being configured to:
receive sensor data related to a game object via an input interface, wherein the game object indicates a subject of a collaboration in a game;
identify a set of object characteristics from the sensor data;
select a set of collaboration interface components from the plurality of collaboration interface components, the set of collaboration interface components having selection criteria satisfied by the set of object characteristics;
instantiate a collaboration interface comprising a new game from the set of collaboration interface components on at least a first electronic device, wherein the collaboration interface is instantiated based on the game object indicating the subject of the collaboration in the new game and based on a location and player characteristic; and
configure the first electronic device to generate a first collaboration command via the instantiated collaboration interface.

US Pat. No. 10,891,038

CLOUD-BASED TOOL FOR CREATING VIDEO INTERSTITIALS

GOOGLE LLC, Mountain Vie...

1. A method for a server-based creation of playlist interstitials, the method comprising:providing, by a processing device of a server, an interstitial creation interface for display on a user device, the interstitial creation interface comprising a selectable interstitial indicator visually illustrating a location of an interstitial in a playlist comprising a plurality of media items, wherein the selectable interstitial indicator is positioned between a first media item and a second media item of the plurality of media items presented in the interstitial creation interface;
responsive to a user selection of the selectable interstitial indicator positioned between the first media item and the second media item of the plurality of media items presented in the interstitial creation interface, causing presentation of a plurality of user interface (UI) elements allowing a user of the user device to specify interstitial configuration parameters for the interstitial being added to the playlist;
receiving, through the interstitial creation interface, user input for at least a subset of the plurality of UI elements to specify the interstitial configuration parameters for the interstitial; and
creating, by the processing device of the server, the interstitial based on an interstitial template and the received interstitial configuration parameters, wherein the created interstitial is supplemental content to be added before or after one of the media files of the media items of the playlist.

US Pat. No. 10,891,037

USER INTERFACES AND SYSTEM INCLUDING SAME

The PNC Financial Service...

32. A computer system programmed to execute an electronic user interface to integrate information from accounts held at a financial institution, the computer system comprising:a processor of a financial institution programmed to execute and communicate in association with the user interface:
an information graphic to a client device, the information graphic comprising a continuous geometrical shape comprising a background element comprising a first graphic element and a second graphic element;
the first graphic element adjacent to the second graphic element;
the first graphic element programmed to display, in operative association with the processor of the computer system, first information associated with at least one first account, wherein the first information associated with the at least one first account comprises a balance of the at least one first account;
the second graphic element programmed to display, in operative association with the processor of the computer system, second information associated with at least one second account, wherein the second information associated with the at least one second account comprises a balance of the at least one second account;
a bar element positioned behind the background element, wherein the background element is transparent such that the bar element is partially visible therethrough, wherein a proportional length between the first graphic element and the bar element is indicative of a combined amount in both the first account and the second account to be consumed by bill payments, wherein a length of the background element exceeds a length of the bar element when a combined amount in both the at least one first account and the at least one second account is greater than the combined amount in both the at least one first account and the at least one second account to be consumed by bill payments, and the length of the bar element exceeds the length of the background element when the combined amount in both the at least one first account and the at least one second account to be consumed by bill payments is greater than the combined amount in both the at least one first account and the at least one second account;
a controller operatively associated with the processor to interface with the processor of the financial institution, the controller comprising a slidable element located on the background element so as to demarcate the first graphic element and the second graphic element, the controller programmed to redistribute funds between the at least one first account and the at least one second account via selective positioning of the slidable element along the background element and to graphically integrate and display information pertaining to the at least one first account and the at least one second account, wherein the positioning redistributes funds between the at least one first account and the at least one second account while simultaneously providing a visual indication of the at least one first account balance and the at least one second account balance, the visual indication being a length of the first graphic element and a length of the second graphic element as defined by the relative position of the slidable element;
the information graphic switchable between first and second display states, wherein with the information graphic in the first display state:
a first dimension of the first graphic element is representative of the balance of the at least one first account;
a first dimension of the second graphic element is representative of the balance of the at least one second account; and
relative first dimensions of the first and second graphic elements are in proportion to relative balances of the at least one first account and the at least one second account, respectively;
wherein the second graphic element includes a selectable portion that is selectable, in operative association with the processor of the computer system, to switch the information graphic between the first and second states to alternately virtually hide and display the first dimension of the second graphic element, the second graphic element programmed to display the balance of the at least one second account in the first dimension and the first graphic element programmed to display the balance of the at least one first account in the first dimension when the information graphic is in the first display state;
wherein the first dimension of the second graphic element is programmed to decrease to a second dimension and the first dimension of the first graphic element is programmed to increase to a second dimension when the selectable portion of the second graphic element is selected to switch the information graphic from the first display state to the second display state to virtually hide the balance of the at least one second account in the second graphic element; and
wherein the at least one first account comprises a demand account, the at least one second account comprises a savings account, and the savings account is configured to provide automatic overdraft protection to the demand account, wherein when an overdraft occurs a predetermined amount is automatically transferred from the savings account to the demand account.

US Pat. No. 10,891,036

USER INTERFACES AND SYSTEM INCLUDING SAME

The PNC Financial Service...

32. A method for communicating and executing an information graphic via an electronic user interface to integrate information from accounts held at a financial institution, the method comprising:communicating and executing, with a computer system including a processor of a financial institution, an information graphic comprising a continuous geometric shape comprising a background element comprising a first graphic element and a second graphic element, the first graphic element to display first information associated with at least one first account, wherein the first information associated with the at least one first account comprises a balance of the at least one first account, and the second graphic element to display second information associated with at least one second account, wherein the second information associated with the at least one second account comprises a balance of the at least one second account;
displaying a bar element behind the background element, wherein the background element is transparent such that the bar element is partially visible therethrough, wherein a proportional length between the first graphic element and the bar element is indicative of a combined amount in both the at least one first account and the at least one second account to be consumed by bill payments, wherein a length of the background element exceeds a length of the bar element when a combined amount in both the at least one first account and the at least one second account is greater than the combined amount in both the at least one first account and the at least one second account to be consumed by bill payments, and the length of the bar element exceeds the length of the background element when the combined amount in both the at least one first account and the at least one second account to be consumed by bill payments is greater than the combined amount in both the at least one first account and the at least one second account;
interfacing with the processor of the financial institution, via a controller operatively associated with the processor, the controller comprising a slidable element located on the background element so as to demarcate the first graphic element and the second graphic element, the controller programmed to redistribute funds between the at least one first account and the at least one second account via selective positioning of the slidable element along the background element and to graphically integrate and display information pertaining to the at least one first account and the at least one second account, wherein the positioning redistributes funds between the at least one first account and the at least one second account while simultaneously providing a visual indication of a balance of the at least one first account and of a balance of the at least one second account, the visual indication being a length of the first graphic element and a length of the second graphic element as defined by the relative position of the slidable element;
wherein the information graphic is switchable between first and second display states, wherein with the information graphic in the first display state:
a first dimension of the first graphic element is representative of the balance of the at least one first account;
a first dimension of the second graphic element is representative of the balance of the at least one second account; and
relative first dimensions of the first and second graphic elements are in proportion to relative balances of the at least one first account and the at least one second account, respectively;
wherein the second graphic element includes a selectable portion that is selectable, in operative association with the processor of the computer system, to switch the information graphic between the first and second states to alternately virtually hide and display the first dimension of the second graphic element, the second graphic element programmed to display the balance of the at least one second account in the first dimension and the first graphic element programmed to display the balance of the at least one first account in the first dimension when the information graphic is in the first display state;
wherein the first dimension of the second graphic element is programmed to decrease to a second dimension and the first dimension of the first graphic element is programmed to increase to a second dimension when the selectable portion of the second graphic element is selected to switch the information graphic from the first display state to the second display state to virtually hide the balance of the at least one second account in the second graphic element; and
wherein the at least one first account comprises a demand account, the at least one second account comprises a savings account, and the savings account is configured to provide automatic overdraft protection to the demand account, wherein when an overdraft occurs a predetermined amount is automatically transferred from the savings account to the demand account.

US Pat. No. 10,891,035

LASER FINISHING DESIGN TOOL

1. A method comprising:providing a garment previewing tool that allows previewing on a computer screen of a jeans garment customized by a user with a finishing pattern created using a laser input file by a laser, wherein the garment previewing tool comprises
providing an option for the user to select a jeans garment base and upon the user's selection, showing a first garment preview image on the computer screen comprising a jeans base image for the selected garment base,
providing an option for the user to select a wear pattern from a menu of wear patterns, wherein each wear pattern is associated with a laser input file to be used by a laser to produce that wear pattern onto a jeans garment,
after the wear pattern is selected, showing a second garment preview image on the computer screen comprising the selected wear pattern in combination with the jeans base image, wherein the second garment preview image replaces the first garment preview image,
in the second garment preview image, allowing the user to select the wear pattern and modify a sizing of the wear pattern relative to the jeans base image, wherein as the user makes changes, the modified sizing of the wear pattern is displayed to the user in real time,
in the second garment preview image, allowing the user to select the wear pattern and modify a position of the wear pattern relative to the jeans base image, wherein as the user makes changes, the modified positioning of the wear pattern is displayed to the user in real time, and
showing a third garment preview image on the computer screen comprising the jeans base image and selected wear pattern, with modified sizing or modified positioning, or a combination;
providing a target pair of jeans corresponding to the jeans garment base selected by the user; and
based on a laser input file associated with the third garment preview image comprising the selected wear pattern with modified sizing or modified positioning, or a combination, using a laser to create a finishing pattern on an outer surface of the target jeans,
wherein the second garment preview image comprises a plurality of pixels, each pixel is at a pixel location of the second garment preview image and comprises a color value, and a pixel is displayed on the computer screen at its pixel location as a color corresponding to its color value, and
the second garment preview image is generated by a method comprising
generating an adjusted base image from the jeans base image without the selected wear pattern,
generating a pattern mask based on the laser input file associated with the selected wear pattern,
for each pixel in the second garment preview image, calculating a first contribution based on values from the pattern mask and the jeans base image,
for each pixel in the second garment preview image, calculating a second contribution based on values from the pattern mask and the adjusted base image,
for each pixel in the second garment preview image, combining the first contribution and second contribution to obtain a color value for each pixel of the second garment preview image.
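A minimal Python sketch of the per-pixel preview compositing: each pixel of the second garment preview image mixes the jeans base image with an adjusted (lightened) base image, weighted by the pattern mask derived from the laser input file. The linear blend and the lightening factor are assumptions for illustration.

```python
# Illustrative sketch: combine a first contribution (pattern mask x base image)
# and a second contribution (pattern mask x adjusted base image) per pixel.

def lighten(pixel: tuple[int, int, int], factor: float = 1.6) -> tuple[int, ...]:
    """Adjusted base image: a brighter version of the base color, standing in
    for the faded shade a laser burn would produce on denim."""
    return tuple(min(255, int(c * factor)) for c in pixel)

def preview_pixel(base: tuple[int, int, int], mask: float) -> tuple[int, ...]:
    """mask is 0..1 from the laser input file: 0 = untouched denim, 1 = full burn.
    First contribution: (1 - mask) * base; second contribution: mask * adjusted."""
    adjusted = lighten(base)
    return tuple(int((1.0 - mask) * b + mask * a) for b, a in zip(base, adjusted))

if __name__ == "__main__":
    indigo = (35, 48, 92)                   # a denim-like base color
    print(preview_pixel(indigo, mask=0.0))  # untouched -> (35, 48, 92)
    print(preview_pixel(indigo, mask=0.7))  # strong wear -> noticeably lighter
```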

US Pat. No. 10,891,034

APPARATUS AND METHOD OF OPERATING WEARABLE DEVICE

Samsung Electronics Co., ...

1. A method of operating a wearable device, the method comprising:hierarchically displaying each of a plurality of icon sets arranged along each of a plurality of virtual closed loops on a display of the wearable device, the plurality of virtual closed loops comprising a first virtual closed loop and a second virtual closed loop, and the plurality of icon sets comprising a first icon set arranged along the first virtual closed loop and a second icon set arranged along the second virtual closed loop;
obtaining a first input to a bezel ring of the wearable device;
displaying the first virtual closed loop and the second virtual closed loop on the display of the wearable device by exchanging a position of the first virtual closed loop and a position of the second virtual closed loop based on the first input to the bezel ring;
determining at least one icon included in a virtual closed loop arranged along a boundary of the display based on a second input to the bezel ring; and
executing a preset function corresponding to the determined at least one icon.

US Pat. No. 10,891,033

SYSTEM AND METHOD FOR ENHANCED TOUCH SELECTION OF CONTENT

Microsoft Technology Lice...

1. A user interface (UI) system, comprising:a processing system comprising one or more processors; and
a memory that stores program code to be executed by the processing system, the program code including:
an input detector configured to:
receive an input that is associated with content provided via a UI, and that is applied by a contact instrument via a touch interface; and
determine characterization information of the contact instrument relative to the touch interface, the characterization information of the contact instrument including an orientation;
a parameter generator configured to:
generate a parameter of a selection command based at least in part on the orientation included in the characterization information, the parameter specifying a portion of the content; and
an output manager configured to:
cause the selection command to be executed with the parameter; and
provide an output to the UI based on execution of the selection command, the output including an indication of the portion of the content that was selected.

US Pat. No. 10,891,032

IMAGE REPRODUCTION APPARATUS AND METHOD FOR SIMULTANEOUSLY DISPLAYING MULTIPLE MOVING-IMAGE THUMBNAILS

Samsung Electronics Co., ...

1. An electronic device for displaying video thumbnails, the electronic device comprising:a memory for storing instructions; and
a processor configured to execute the instructions to at least:
store a plurality of video contents including a first video content and a second video content in the memory,
based on receiving a first user request, display a plurality of video thumbnails including a first video thumbnail and a second video thumbnail for the plurality of video contents on a display of the electronic device, wherein the plurality of video thumbnails are respectively generated from the plurality of video contents, and the first video thumbnail and the second video thumbnail respectively include a plurality of image frames to be sequentially displayed,
based on a user input for selecting the first video thumbnail of the displayed plurality of video thumbnails, reproduce the first video content corresponding to the selected first video thumbnail,
receive a user input for displaying a plurality of section-based video thumbnails corresponding to the first video content while reproducing the first video content,
based on the plurality of section-based video thumbnails corresponding to the first video content not existing, divide the first video content into moving-image sections and generate the plurality of section-based video thumbnails corresponding to the moving-image sections by converting each of the moving-image sections into a moving-image content, wherein each of the plurality of section-based video thumbnails includes a plurality of image frames to be sequentially displayed,
display the plurality of section-based video thumbnails corresponding to the first video content on the display, and
based on a user input for selecting one of the displayed plurality of section-based video thumbnails corresponding to the first video content, reproduce a portion of the first video content corresponding to the selected section-based video thumbnail.
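A minimal Python sketch of the section-thumbnail step: the video is divided into equal moving-image sections and, for each section, the frame indices that its sequentially displayed thumbnail would cycle through are computed. Section count and frames per thumbnail are assumptions, and no actual video decoding is shown.

```python
# Illustrative sketch: divide a video into equal moving-image sections and
# record which frames each section's animated thumbnail would cycle through.

def section_thumbnails(total_frames: int, sections: int = 6,
                       frames_per_thumbnail: int = 8) -> list[dict]:
    """Return one entry per section: the section's frame range plus the evenly
    spaced frame indices making up its sequentially displayed thumbnail."""
    out = []
    section_len = total_frames // sections
    for s in range(sections):
        start = s * section_len
        end = total_frames if s == sections - 1 else start + section_len
        step = max((end - start) // frames_per_thumbnail, 1)
        out.append({
            "section": s,
            "range": (start, end),
            "thumbnail_frames": list(range(start, end, step))[:frames_per_thumbnail],
        })
    return out

if __name__ == "__main__":
    for entry in section_thumbnails(total_frames=3000, sections=3):
        print(entry)
```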

US Pat. No. 10,891,031

METHOD AND DEVICE FOR DISPLAYING TASK MANAGEMENT INTERFACE

BEIJING XIAOMI MOBILE SOF...

1. A method for displaying a task management interface, wherein the method is applied in a mobile terminal, and comprises:receiving a first trigger signal for triggering display of the task management interface;
acquiring n application programs running in the mobile terminal triggered by the first trigger signal, where n is an integer greater than 1;
acquiring a preview interface corresponding to each application program; and
displaying the task management interface, the task management interface including preview interfaces respectively corresponding to the n application programs which are arranged in a lattice,
wherein the method further comprises:
determining display positions of the preview interfaces respectively corresponding to the n application programs in the task management interface according to a correlation among the n application programs; and
wherein two application programs with a higher correlation have a closer distance between the preview interfaces corresponding to the two application programs displayed in the task management interface.
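A minimal Python sketch of the correlation-driven placement rule: preview interfaces are ordered so that more strongly correlated application programs sit closer together in the lattice. The greedy chaining heuristic and the example correlation scores are assumptions; a real implementation could use any layout optimization.

```python
# Illustrative sketch: order preview interfaces so that application programs
# with higher pairwise correlation end up adjacent. Assumes at least two apps.

def order_by_correlation(apps: list[str],
                         correlation: dict[frozenset, float]) -> list[str]:
    """Greedy ordering: start from the most correlated pair, then repeatedly
    append the remaining app most correlated with the current chain end."""
    remaining = set(apps)
    a, b = max(((x, y) for x in apps for y in apps if x < y),
               key=lambda pair: correlation.get(frozenset(pair), 0.0))
    chain = [a, b]
    remaining -= {a, b}
    while remaining:
        nxt = max(remaining,
                  key=lambda x: correlation.get(frozenset({chain[-1], x}), 0.0))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

if __name__ == "__main__":
    apps = ["mail", "calendar", "camera", "gallery"]
    corr = {frozenset({"mail", "calendar"}): 0.9,
            frozenset({"camera", "gallery"}): 0.8,
            frozenset({"calendar", "camera"}): 0.2}
    # The strongly correlated pairs stay adjacent in the resulting order.
    print(order_by_correlation(apps, corr))
```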

US Pat. No. 10,891,030

COMPOUND ANIMATION SHOWING USER INTERACTIONS

Facebook, Inc., Menlo Pa...

1. A method comprising:providing, for display to a plurality of users, a content item received from a posting user;
receiving selections of reaction icons by a subset of the plurality of users, the reaction icons representing a reaction of each of the users of the subset to the content item;
generating an animation comprising the reaction icons selected by the subset of users;
providing the content item for display to a viewing user on a client device associated with the viewing user;
receiving selection of a reaction icon from the viewing user;
in response to the viewing user's selection of the reaction icon, ranking users of the subset based at least on the viewing user's affinity to each of the users of the subset, the viewing user's affinity indicating the viewing user's interest in each of the users of the subset;
identifying a predetermined number of users of the subset based on the ranking;
retrieving images associated with the identified users; and
sending the retrieved images and the generated animation to the client device associated with the viewing user to generate a compound reaction animation that includes the reaction icon selected by the viewing user, the animation of the reaction icons selected by the subset of users, and the retrieved images associated with the identified users as additional icons in the compound reaction animation, the client device configured to display the compound reaction animation, at least a portion of the compound reaction animation overlaying the content item.
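A minimal Python sketch of the affinity ranking step: the users who already reacted are ranked by the viewing user's affinity toward them and a predetermined number are kept, whose images would then join the compound reaction animation. The affinity scores and the cutoff of three are assumptions.

```python
# Illustrative sketch: rank the subset of users who reacted by the viewing
# user's affinity and keep a predetermined number of them.

def top_reactors_by_affinity(reactors: list[str],
                             affinity: dict[str, float],
                             limit: int = 3) -> list[str]:
    """Users of the subset ranked by the viewing user's affinity, highest first."""
    ranked = sorted(reactors, key=lambda u: affinity.get(u, 0.0), reverse=True)
    return ranked[:limit]

if __name__ == "__main__":
    reacted = ["alice", "bob", "carol", "dave"]
    viewer_affinity = {"carol": 0.92, "alice": 0.75, "dave": 0.10}
    print(top_reactors_by_affinity(reacted, viewer_affinity))
    # -> ['carol', 'alice', 'dave']
```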