US Pat. No. 10,795,751

CELL-AWARE DIAGNOSTIC PATTERN GENERATION FOR LOGIC DIAGNOSIS

Mentor Graphics Corporati...

1. One or more non-transitory computer-readable media storing computer-executable instructions, the computer-executable instructions, when executed, causing one or more processors to perform a method, the method comprising:performing a first diagnosis process on a failed integrated circuit based on a first fail log, a first set of test patterns and a circuit design based on which the failed integrated circuit is fabricated to generate a first set of defect suspects that define one or more cells in the circuit design, thereby identifying one or more suspect defective cells in the circuit design, wherein the first fail log is generated by applying the first set of test patterns to the failed integrated circuit in a first scan-based test, the first fail log including information describing when, where, and how a test failed and which of the first set of test patterns generate expected test responses;
based on identifying the one or more suspect defective cells, generating a second set of test patterns using fault models for internal defects in the one or more suspect defective cells, the second set of test patterns capable of detecting each of the fault models for internal defects in the one or more suspect defective cells for at least a predetermined number of times;
performing a second diagnosis process on the failed integrated circuit based on a second fail log, the second set of test patterns and the circuit design to generate a second set of defect suspects, the second fail log generated by applying the second set of test patterns to the failed integrated circuit in a second scan-based test; and
reporting the second set of defect suspects.

US Pat. No. 10,795,750

AUTO BUG CAPTURE

Apple Inc., Cupertino, C...

23. A non-transitory machine readable medium storing a program for reporting potential bug events on a device, the program for execution by at least one processing unit of the device, the program comprising sets of instructions for:detecting, by a first module of the device, a potential bug event and generating a description to describe the potential bug event;
responsive to the detecting, directing, by the first module of the device, a set of other modules of the device to gather and store a collection of one or more data sets relevant to the potential bug event, in case the potential bug event has to be further analyzed by a set of analysis servers, the one or more data sets capturing a state of the device at a time that the potential bug event is detected and the set of other modules being exclusive of the first module;
reporting the generated description to the set of analysis servers, which receives potential bug events from a plurality of other devices and selects a subset of the potential bug events that correspond to a same bug event type for further analysis; and
in response to a request from the analysis server set for the stored one or more data sets, sending the stored one or more data sets to the analysis server set.

US Pat. No. 10,795,749

SYSTEMS AND METHODS FOR PROVIDING FAULT ANALYSIS USER INTERFACE

Palantir Technologies Inc...

1. A system comprising:one or more processors;
memory storing instructions that, when executed by the one or more processors, cause the system to perform:
accessing fault information, the fault information identifying faults for one or more machines; and
providing a fault analysis interface, the fault analysis interface including a map view and an organization view, the fault analysis interface displaying correlations of the faults using visuals and spatial locations of the visuals, the map view displaying the correlations of the faults based on geographic locations of the one or more machines during occurrences of the faults, the map view indicating a respective severity of each of the faults within different geographic regions including the geographic locations, wherein each of the respective severities is based on a relative potential cost associated with the corresponding fault, and the organization view providing an analysis of a history of the faults to determine whether one of the faults could have been predicted or prevented based on an occurrence of one or more less severe faults compared to the one of the faults.

US Pat. No. 10,795,748

TAILORING DIAGNOSTIC INFORMATION IN A MULTITHREADED ENVIRONMENT

International Business Ma...

1. A computer-implemented method for tailoring diagnostic information specific to current activity of multiple threads within a computer system, the method comprising:creating, by one or more processors, a system dump, including main memory and system state information, the system dump being a first hardware memory mapping;
storing, by one or more processors, the system dump to a database;
executing, by one or more processors, a program to provide tailored diagnostic information; and
creating, by one or more processors, a virtual memory image of a system state, based on the system dump, in the address space of the program, by creating a second hardware memory mapping of the hardware memory addresses of the address space of the program to the virtual memory addresses of the virtual memory image of the system state.

US Pat. No. 10,795,747

FILE SYNCHRONIZING SERVICE STATUS MONITORING AND ERROR HANDLING

Microsoft Technology Lice...

1. A computing device for providing status messages indicating a status of a client-side sync manager, the computing device comprising:one or more memory devices that store executable program instructions; and
one or more processors operable to access the memory device(s) and to execute the executable program instructions, the executable program instructions comprising:
a local file system access manager that includes
a status interface configured to receive a status message from a client-side sync manager of the computing device that communicates with a server-side sync manager at a server to synchronize data objects stored in first storage of the computing device with data objects stored in second storage at the server, the status message corresponding to a state of the client-side sync manager during a multi-stage start-up process for the client-side sync manager, the status interface further configured to provide status information of the received status message to an entity.

US Pat. No. 10,795,746

AUTOMATED POWER DOWN BASED ON STATE OF FIRMWARE

Micron Technology, Inc., ...

1. A memory device, comprising:a memory array;
a firmware activity tracker; and
a memory controller, wherein the memory controller is programmed to perform operations comprising:
receiving one or more loading inputs from the firmware activity tracker indicating that a first instance of firmware has either not been successfully loaded, or has loaded but is not valid and operable;
after receiving the one or more loading inputs, receiving one or more reloading status inputs from the firmware activity tracker indicating a number of unsuccessful attempts to load the firmware; and
if the one or more reloading status inputs indicates that the number of unsuccessful attempts has reached a programmable threshold, entering the memory device into a reduced-power state.
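
For illustration only, a short Python sketch of the reload accounting described in this claim; the class and attribute names (MemoryController, reload_threshold) are hypothetical and not taken from the patent.

class MemoryController:
    def __init__(self, reload_threshold: int):
        self.reload_threshold = reload_threshold   # programmable threshold
        self.unsuccessful_attempts = 0
        self.reduced_power = False

    def on_loading_input(self, loaded_ok: bool, valid_and_operable: bool) -> None:
        # Loading inputs report that firmware either failed to load, or loaded
        # but is not valid and operable; either case starts the reload accounting.
        if not loaded_ok or not valid_and_operable:
            self.unsuccessful_attempts = 0

    def on_reloading_status(self, unsuccessful_attempts: int) -> None:
        # Reloading status inputs carry the running count of failed attempts.
        self.unsuccessful_attempts = unsuccessful_attempts
        if self.unsuccessful_attempts >= self.reload_threshold:
            self.enter_reduced_power_state()

    def enter_reduced_power_state(self) -> None:
        self.reduced_power = True   # e.g. gate clocks and park the memory array


controller = MemoryController(reload_threshold=3)
controller.on_loading_input(loaded_ok=False, valid_and_operable=False)
for attempt in range(1, 4):
    controller.on_reloading_status(attempt)
print(controller.reduced_power)   # True after the third unsuccessful attempt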

US Pat. No. 10,795,745

DYNAMIC AND ADAPTIVE APPROACH FOR FAILURE DETECTION OF NODE IN A CLUSTER

Hewlett Packard Enterpris...

1. A method comprising:transmitting, by a first network device, data to a second network device over a communication link at a first time;
receiving, by the first network device, an acknowledgment of receipt of the data over the communication link from the second network device at a second time;
determining, by the first network device, a communication Round Trip Time (RTT) between the first network device and the second network device based on the first time and the second time;
determining a first frequency that is larger than the RTT;
transmitting a heartbeat protocol message between the first network device and the second network device at the first frequency;
re-determining the RTT based on current network conditions; and
dynamically updating the first frequency for transmitting the heartbeat protocol message by increasing the first frequency of transmitting the heartbeat protocol message in response to determining a reduction in the RTT or by decreasing the first frequency of transmitting the heartbeat protocol message in response to determining an increase in the RTT, wherein the heartbeat messages represent an active status of the first network device and the acknowledgment represents an active status of the second network device.
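
A minimal Python sketch of the RTT-driven heartbeat adjustment, assuming hypothetical helper names (measure_rtt, heartbeat_interval) and a simple multiple-of-RTT rule that is not specified in the claim.

import time


def measure_rtt(send_data, wait_for_ack) -> float:
    """RTT = time the acknowledgment is received minus time the data was sent."""
    first_time = time.monotonic()
    send_data()
    wait_for_ack()
    second_time = time.monotonic()
    return second_time - first_time


def heartbeat_interval(rtt: float, multiplier: float = 2.0) -> float:
    # Keep the interval a small multiple of the current RTT: a shorter RTT yields
    # a shorter interval (higher frequency), a longer RTT a longer interval.
    return multiplier * rtt


# Example: the re-determined RTT drops from 10 ms to 2 ms, so heartbeats are sent more often.
print(heartbeat_interval(0.010))   # 0.02 s between heartbeats
print(heartbeat_interval(0.002))   # 0.004 s between heartbeats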

US Pat. No. 10,795,744

IDENTIFYING FAILED CUSTOMER EXPERIENCE IN DISTRIBUTED COMPUTER SYSTEMS

Teachers Insurance and An...

1. A method, comprising:receiving, by an application performance management (APM) server associated with a distributed computer system, an application layer message associated with a request originated by a client computer system responsive to an action by a user, wherein the application layer message includes a transaction identifier;
analyzing the application layer message;
preventing multiple error identifications with respect to two or more application layer messages sharing the transaction identifier;
identifying a failed customer experience error; and
causing a first graph to be rendered by a graphical user interface in a visual association with a second graph representing a number of user login events grouped by a pre-defined period of time, wherein the first graph represents a number of identified failed customer experience errors grouped by the pre-defined period of time.
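
A minimal Python sketch of the deduplication and time-bucketing steps, assuming a hypothetical one-hour reporting period and hypothetical names (handle_message, seen_transactions).

from collections import Counter
from datetime import datetime

seen_transactions: set = set()
errors_per_hour: Counter = Counter()


def handle_message(transaction_id: str, is_error: bool, timestamp: datetime) -> None:
    if not is_error:
        return
    if transaction_id in seen_transactions:
        return   # prevent multiple error identifications for the same transaction
    seen_transactions.add(transaction_id)
    errors_per_hour[timestamp.strftime("%Y-%m-%d %H:00")] += 1   # pre-defined period: one hour


handle_message("tx-1", True, datetime(2020, 10, 6, 9, 5))
handle_message("tx-1", True, datetime(2020, 10, 6, 9, 6))    # same transaction, ignored
handle_message("tx-2", True, datetime(2020, 10, 6, 9, 30))
print(errors_per_hour)   # Counter({'2020-10-06 09:00': 2})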

US Pat. No. 10,795,743

COMPUTING DEVICE NOTIFICATION MANAGEMENT SOFTWARE

salesforce.com, inc., Sa...

1. A method comprising:receiving, at a computer system, an indication of an exception condition for a particular one of a plurality of computing devices, wherein a response to the exception condition includes a plurality of steps, including computer-implemented steps in which data objects cause outputting of a plurality of notifications for a user that is associated with the particular one of the plurality of computing devices;
publishing, at the computer system on an event bus, events corresponding to ones of the plurality of notifications for the response;
detecting, by the computer system on the event bus, the events corresponding to the plurality of notifications for the response;
storing, by the computer system to a response-specific instance of a user notification object, notification indications corresponding to detected events; and
outputting, by the computer system to the user, a unified status communication that depicts information corresponding to each of the notification indications currently stored by the instance of the user notification object.

US Pat. No. 10,795,742

ISOLATING UNRESPONSIVE CUSTOMER LOGIC FROM A BUS

Amazon Technologies, Inc....

1. A system, comprising:internetworked computer devices organized into domains, wherein the domains include a client domain and a host domain having a higher priority level than the client domain; and
wherein one of the computer devices includes:
a client configurable logic circuit comprising first programmable logic hardware included in the client domain, the first programmable logic hardware configured according to client configuration data provided by a virtual machine within the client domain;
a shell logic circuit comprising second programmable logic hardware, the shell logic circuit included in the host domain, wherein the shell logic circuit encapsulates the client configurable logic circuit; and
a monitoring circuit coupled to the shell logic circuit, wherein the monitoring circuit is included within the host domain and the monitoring circuit is configured to:
determine that an errant action associated with the client configurable logic circuit occurs, wherein the errant action is capable of interfering with an operation of one of the internetworked computer devices or damaging one of the internetworked computer devices; and
isolate the client configurable logic circuit from the virtual machine in response to determining that the errant action occurs.

US Pat. No. 10,795,741

GENERATING DEEPLINKS FOR APPLICATIONS BASED ON MULTI-LEVEL REFERRER DATA

Google LLC, Mountain Vie...

1. A method comprising:receiving, via a first application of a computing device, a first set of one or more data packets indicating a command to navigate from a first resource to a second resource, the first set of data packets identifying the second resource and secondary referrer data associated with at least one of the first resource or a first content item selected on the first resource to generate the first set of data packets;
rendering, within the first application of the computing device, the second resource and a second content item provided within the second resource;
receiving, via the first application of the computing device, a selection of the second content item;
in response to the selection of the second content item, generating, at the computing device, a second set of one or more data packets comprising the secondary referrer data extracted from the first set of data packets and primary referrer data associated with at least one of the second resource or the second content item;
transmitting, by the computing device, the second set of data packets to an application server;
receiving, from the application server at the computing device, a deeplink generated by the application server using both the primary referrer data and the secondary referrer data;
obtaining, via the second application by the computing device, a third content item from the application server using the primary referrer data and the secondary referrer data included in the deeplink, the third content item to be displayed within the second application on the computing device; and
rendering, within the second application on the computing device indicated by the deeplink, a content interface having the third content item generated using the secondary referrer data and the primary referrer data included in the deeplink.

US Pat. No. 10,795,740

PARAMETER DELEGATION FOR ENCAPSULATED SERVICES

Amazon Technologies, Inc....

1. A method, comprising:performing, by one or more computing devices:
receiving, at a first service of a plurality of services from a client, a first service request according to an application programming interface (API) of the first service, the first service request including one or more input parameters for the first service defined by the API of the first service and a data block that is formatted to be uninterpretable by the first service, wherein the data block includes one or more arguments that are uninterpreted by an implementation of the API by the first service, and wherein the one or more arguments are to be sent to a second service; and
sending, from the first service to the second service of the plurality of services, a second service request including the one or more arguments according to an application programming interface (API) of the second service, wherein the one or more arguments are defined by the API of the second service.
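
A minimal Python sketch of the delegation pattern, with hypothetical services and field names (first_service, second_service, "delegated"); base64-encoded JSON stands in for the opaque data block.

import base64
import json


def first_service(request: dict) -> dict:
    size = request["size"]                 # input parameter defined by the first service's API
    opaque_block = request["delegated"]    # data block uninterpretable by the first service
    # ... perform the first service's own work with `size` ...
    return second_service(opaque_block)    # forward the delegated arguments unchanged


def second_service(opaque_block: str) -> dict:
    # Only the second service knows how to decode its own arguments.
    args = json.loads(base64.b64decode(opaque_block))
    return {"echoed_arguments": args}


block = base64.b64encode(json.dumps({"region": "us-east-1"}).encode()).decode()
print(first_service({"size": 10, "delegated": block}))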

US Pat. No. 10,795,739

PORT CONFIGURATION FOR MICROKERNEL OPERATING SYSTEM

Facebook Technologies, LL...

1. A method comprising, by an operating system executed by a computing device:creating an inter process communication (IPC) channel and a port for a process executed in a user space of the operating system, wherein the IPC channel is associated with a key and the port comprises a port buffer mapped to a first virtual address space of a kernel of the operating system and to a second virtual address space of the process;
writing a message for the process in a message buffer associated with the IPC channel;
determining whether the process is actively consuming messages in the message buffer based on one or more criteria; and
responsive to determining that the process is not actively consuming messages, writing a notification packet in the port buffer, wherein the notification packet comprises an action type and the key, wherein the notification packet is configured to cause the process to consume the message based on the action type and the key.

US Pat. No. 10,795,738

CLOUD SECURITY USING SECURITY ALERT FEEDBACK

Microsoft Technology Lice...

1. A system comprising:processing circuitry; and
a memory device coupled to the processing circuitry, the memory device including instructions stored thereon for execution by the processing circuitry to perform operations for computer security, the operations comprising:
providing an alert to a device of a first cloud user in response to determining an operation performed by a cloud resource is inconsistent with a behavior profile that defines normal operations performed by the cloud resource including a percentage of cloud resources same as the cloud resource that access a port, a percentage of the cloud resources that read from the port, and a percentage of the cloud resources that write to the port;
receiving feedback from the first cloud user regarding the alert; and
providing a second alert for a second, different cloud user, the second alert being prioritized based on the feedback from the first cloud user.

US Pat. No. 10,795,737

GENERIC DISTRIBUTED PROCESSING FOR MULTI-AGENT SYSTEMS

Introspective Power, Inc....

1. A distributed control system comprising:event agents that generate events,
an Agent Adaptor being configured to discover and communicate with distributed processors;
said distributed processors including:
event transformers that transform events from the event agents to generic events,
event reactors that react to the generic events and produce outputs responsive to the generic events, and
event handlers that register to receive and process one or more classes of generic events, the outputs of the event reactors supplying commands to any of the system agents for action and response.

US Pat. No. 10,795,736

CROSS-CLUSTER HOST REASSIGNMENT

VMWARE, INC., Palo Alto,...

1. A system, comprising:a computing device comprising a processor and a memory;
machine readable instructions stored in the memory that, when executed by the processor, cause the computing device to at least:
identify a computing cluster assigned to a first queue, the first queue comprising a first list of identifiers of computing clusters with insufficient resources for a respective workload;
identify a host machine assigned to a second queue, the second queue comprising a second list of identifiers of host machines in an idle state;
send a command to the host machine to join to the computing cluster; and
remove a host identifier for the host machine from the second queue.

US Pat. No. 10,795,735

METHOD AND APPARATUS FOR LOAD BALANCING VIRTUAL DATA MOVERS BETWEEN NODES OF A STORAGE CLUSTER

EMC IP Holding Company LL...

1. A non-transitory tangible computer readable storage medium having stored thereon a computer program for implementing a method of load balancing virtual data movers (VDM) between nodes of a storage cluster, the computer program including a set of instructions which, when executed by a computer, cause the computer to perform a method comprising the steps of:establishing a storage cluster including a plurality of nodes, one of the nodes implementing a cluster manager and each node having a system Virtual Data Mover (VDM) instantiated thereon;
assigning, by the cluster manager, primary responsibility for each of a plurality of data VDMs to corresponding nodes of the storage cluster, each data VDM having responsibility for at least one user file system;
assigning, by the cluster manager, corresponding backup nodes for the plurality of data VDMs;
collecting node statistics by each system VDM on each node in the cluster of nodes, the node statistics including operational parameters of the node and activity levels of the data VDMs on the node;
collecting the node statistics, by the cluster manager, from each of the system VDMs;
using the collected node statistics to assign a respective node score to each node in the storage cluster;
using the node scores to identify possible data VDM movement combinations within the storage cluster;
selecting one of the data VDM movement combinations for implementation within the storage cluster that will reduce disparity between node scores within the storage cluster; and
implementing the selected one of the data VDM movement combinations by moving at least some of the data VDMs between the nodes of the storage cluster.
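
A minimal Python sketch of scoring nodes and picking a data VDM move that reduces score disparity; the scoring formula and names (node_score, best_move) are hypothetical.

from itertools import product


def node_score(stats: dict) -> float:
    # e.g. combine node operational load with the activity levels of its data VDMs
    return stats["cpu"] + sum(stats["vdm_activity"].values())


def best_move(cluster: dict) -> tuple:
    """Return (vdm, source_node, target_node) that most reduces score disparity."""
    def spread(scores: dict) -> float:
        return max(scores.values()) - min(scores.values())

    scores = {node: node_score(stats) for node, stats in cluster.items()}
    best = (None, None, None, spread(scores))
    for src, dst in product(cluster, cluster):
        if src == dst:
            continue
        for vdm, activity in cluster[src]["vdm_activity"].items():
            trial = dict(scores)
            trial[src] -= activity
            trial[dst] += activity
            if spread(trial) < best[3]:
                best = (vdm, src, dst, spread(trial))
    return best[:3]


cluster = {
    "node-a": {"cpu": 10.0, "vdm_activity": {"vdm1": 40.0, "vdm2": 5.0}},
    "node-b": {"cpu": 12.0, "vdm_activity": {"vdm3": 6.0}},
}
print(best_move(cluster))   # ('vdm2', 'node-a', 'node-b') narrows the score gap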

US Pat. No. 10,795,734

PROCESSING ELEMENT RESTART PRECEDENCE IN A JOB OVERLAY ENVIRONMENT

International Business Ma...

1. A system comprising:at least one processor and a computer-readable storage medium having program instructions embodied therewith, the program instructions executable by the at least one processor to cause the at least one processor to perform operations comprising:
determining a job overlay, wherein the job overlay involves updates to a subset of processing elements of a plurality of processing elements of a job;
determining processing requirements of the plurality of processing elements;
determining computation capabilities of computational resources associated with the plurality of processing elements;
determining a processing element restart order based at least in part on processing requirements and computation capabilities;
updating the subset of processing elements;
dropping data based at least in part on a number of buffered tuples exceeding a buffer threshold; and
restarting the subset of processing elements based at least in part on the processing element restart order.
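
A minimal Python sketch of a restart ordering and tuple-dropping rule, assuming hypothetical metrics (processing requirements and compute capability per processing element) and a hypothetical buffer threshold.

from collections import deque

processing_requirements = {"pe-a": 8.0, "pe-b": 2.0, "pe-c": 5.0}   # e.g. tuples/sec needed
compute_capability = {"pe-a": 16.0, "pe-b": 2.0, "pe-c": 20.0}      # resources backing each PE

BUFFER_THRESHOLD = 3
buffer: deque = deque(maxlen=BUFFER_THRESHOLD)   # tuples beyond the threshold are dropped


def restart_order(subset: list) -> list:
    # Restart the elements with the least headroom (capability minus requirement) first.
    return sorted(subset, key=lambda pe: compute_capability[pe] - processing_requirements[pe])


for incoming_tuple in range(5):
    buffer.append(incoming_tuple)        # oldest tuples fall out once the buffer is full

print(restart_order(["pe-a", "pe-b", "pe-c"]))   # ['pe-b', 'pe-a', 'pe-c']
print(list(buffer))                              # [2, 3, 4]: two buffered tuples were dropped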

US Pat. No. 10,795,733

SERVER FARM MANAGEMENT

Microsoft Technology Lice...

1. A computer-implemented method comprising:maintaining a farm goal in a data store, the farm goal specifying a target number of machines for each of one or more machine roles for each of one or more server farms, wherein the one or more machine roles include a first machine role for a first server farm, the first machine role being a server role for an online service;
configuring the farm goal responsive to a change in activity of the first server farm;
creating a virtual hard disk (VHD) having a configuration based on the first machine role; and
automatically deploying the VHD to the first server farm based on the farm goal.

US Pat. No. 10,795,732

GRID COMPUTING SYSTEM

SAP SE, Walldorf (DE)

1. A grid computing management system in communication with a grid consumer device and in communication with a plurality of user devices, the grid computing management system comprising:at least one processor; and
a data storage device comprising instructions thereon that, when executed by the at least one processor, cause the at least one processor to perform operations comprising:
receiving, via a computer network and from the grid consumer device, first task description data describing a first task to be performed using the plurality of user devices;
identifying a plurality of task units for executing the first task using the first task description data;
generating a plurality of task unit modules, wherein a first task unit module of the plurality of task unit modules, when executed by a first user device of the plurality of user devices, causes the first user device to execute a first task unit of the plurality of task units;
receiving, via the computer network, a ready message from the first user device, wherein the ready message describes web content accessed by the first user device;
selecting the first user device to execute the first task unit, the selecting based at least in part on a trust score for the first user device;
sending the first task unit module to the first user device;
sending the first task unit module to a second user device of the plurality of user devices;
comparing a first task unit result received from the first user device with a first task unit result received from the second user device; and
updating the trust score for the first user device based at least in part on the comparing.
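
A minimal Python sketch of the trust-score update: the same task unit runs on two devices and the first device's score moves up or down with agreement. The step size and names (update_trust, trust_scores) are hypothetical.

trust_scores = {"device-1": 0.50, "device-2": 0.80}


def update_trust(first_device: str, first_result, second_result, step: float = 0.05) -> None:
    if first_result == second_result:
        trust_scores[first_device] = min(1.0, trust_scores[first_device] + step)
    else:
        trust_scores[first_device] = max(0.0, trust_scores[first_device] - step)


update_trust("device-1", first_result=42, second_result=42)
print(trust_scores["device-1"])   # 0.55: the results matched, so trust increased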

US Pat. No. 10,795,731

SYSTEMS AND METHODS FOR DISTRIBUTED RESOURCE MANAGEMENT

10X Genomics, Inc., Plea...

1. A computing system comprising one or more processors and a memory, the memory storing one or more programs for execution by the one or more processors, the one or more programs singularly or collectively comprising instructions for executing a method comprising:identifying one or more nodes to satisfy at least a subset of a composite hardware requirement for a plurality of jobs in a queue, wherein each respective job in the plurality of jobs is timestamped to indicate when the respective job was submitted to the queue and specifies one or more node resource requirements, wherein the composite hardware requirement is based upon one or more node resource requirements of each job in the plurality of pending jobs, and wherein the identifying comprises:
(i) determining a current availability score for each respective node class in a plurality of node classes, and
(ii) reserving one or more nodes of a first node class in the plurality of node classes when a demand score for the first node class satisfies the current availability score for the corresponding node class by a first threshold amount; and
granting each respective node in the one or more nodes a draw privilege, wherein the draw privilege permits a distributed computing module of a respective node to draw one or more jobs from the plurality of jobs subject to a constraint that the collective hardware requirements of the one or more jobs collectively drawn by the respective node do not exceed the hardware resources of the respective node, and wherein the respective node identifies the one or more jobs by scanning the plurality of jobs in accordance with the draw privilege.
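
A minimal Python sketch of a node exercising the draw privilege: scan queued jobs in timestamp order and draw those whose collective requirements fit the node's remaining resources. The resource model and names (draw_jobs, cpus, mem_gb) are hypothetical.

def draw_jobs(node_resources: dict, queue: list) -> list:
    remaining = dict(node_resources)
    drawn = []
    for job in sorted(queue, key=lambda j: j["timestamp"]):
        needs = job["requirements"]
        if all(needs.get(r, 0) <= remaining[r] for r in remaining):
            for r in remaining:
                remaining[r] -= needs.get(r, 0)
            drawn.append(job["id"])
    return drawn


queue = [
    {"id": "job-1", "timestamp": 1, "requirements": {"cpus": 4, "mem_gb": 8}},
    {"id": "job-2", "timestamp": 2, "requirements": {"cpus": 8, "mem_gb": 64}},
    {"id": "job-3", "timestamp": 3, "requirements": {"cpus": 2, "mem_gb": 4}},
]
print(draw_jobs({"cpus": 8, "mem_gb": 16}, queue))   # ['job-1', 'job-3']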

US Pat. No. 10,795,730

GRAPHICS HARDWARE DRIVEN PAUSE FOR QUALITY OF SERVICE ADJUSTMENT

Apple Inc., Cupertino, C...

1. A non-transitory program storage device, readable by one or more processors and comprising instructions stored thereon to cause the one or more processors to:generate a priority list for a plurality of data masters for a graphics processor based on a comparison between current utilizations for the data masters and target utilizations for the data masters;
assign, based on the priority list, a first data master of the plurality of data masters as a designated data master, wherein the designated data master has a higher priority to submit work to the graphics processor compared to a second data master of the plurality of data masters;
determine a stall counter value for the designated data master, wherein the stall counter value is indicative of a number of time periods the designated data master has work to submit to the graphics processor, but is unable to submit the work; and
generate a notification to pause work for the second data master based on the stall counter value.

US Pat. No. 10,795,729

DATA ACCELERATED PROCESSING SYSTEM

Cambricon Technologies Co...

1. A data accelerated processing system, comprising:one or more processing devices electrically connected to an interface device,
wherein each of the processing devices includes multiple processors,
wherein the interface device includes a PCIE interface connected to one or more expansion modules,
wherein the one or more expansion modules are respectively electrically connected to the multiple processors of the one or more processing devices such that the multiple processors are configured to process data in parallel to accelerate data processing;
one or more storage devices respectively connected to the multiple processors, wherein each of the one or more storage devices includes one or more storage units configured to store data;
a control device electrically connected to the one or more processing devices, wherein the control device is configured to retrieve parameters of the one or more processing devices and adjust respective status of the one or more processing devices based on the parameters;
a temperature monitoring device electrically connected to the control device, wherein the temperature monitoring device is configured to monitor respective temperatures of the one or more processing devices; and
a reset device electrically connected to the one or more processing devices and configured to reset the one or more processing devices in response to one or more reset signals from the control device or the interface device.

US Pat. No. 10,795,728

SHARING EXPANSION DEVICE, CONTROLLING METHOD AND COMPUTER USING THE SAME

QUANTA COMPUTER INC., (T...

1. A controlling method of a sharing expansion device, wherein a computer has at least one first user account and a second user account, the first user account has been logged in to the computer, and the controlling method comprises:verifying whether the sharing expansion device has an identification code;
determining whether the computer has been installed with a driver if the sharing expansion device has the identification code;
installing the driver on the computer and driving the sharing expansion device if the computer has not been installed with the driver;
providing a shared login interface, through which the second user account logs in to the computer, when the computer is connected to the sharing expansion device;
continuously receiving a plurality of first commands from the first user account and storing the first commands in a first command queue, and continuously receiving a plurality of second commands from the second user account and storing the second commands in a second command queue; and
executing, by turns, the first commands stored in the first command queue and the second commands stored in the second command queue by way of time division multiplexing.
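
A minimal Python sketch of executing the two command queues by turns, one time slice per account (time-division multiplexing); the command strings and the name run_by_turns are hypothetical.

from collections import deque

first_queue = deque(["open file", "type text"])        # commands from the first user account
second_queue = deque(["move mouse", "click button"])   # commands from the second user account


def run_by_turns(queues: list) -> list:
    executed = []
    while any(queues):
        for queue in queues:           # one time slice per account, round-robin
            if queue:
                executed.append(queue.popleft())
    return executed


print(run_by_turns([first_queue, second_queue]))
# ['open file', 'move mouse', 'type text', 'click button']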

US Pat. No. 10,795,727

FLEXIBLE AUTOMATED PROVISIONING OF SINGLE-ROOT INPUT/OUTPUT VIRTUALIZATION (SR-IOV) DEVICES

Nicira, Inc., Palo Alto,...

1. A method for assigning physical resources to a virtual computing instance in a data center, the method comprising:defining a device pool associated with a virtual entity in the data center, the device pool identifying available physical hardware devices of one or more host machines associated with the virtual entity, wherein the physical hardware devices comprise at least one physical network interface (PNIC);
connecting the virtual computing instance to the virtual entity; and
in response to the connecting, automatically selecting and assigning one or more of the physical hardware devices from the device pool to the virtual computing instance based on the association of the device pool to the connected virtual entity without a user further selecting the one or more of the physical hardware devices after the connecting.

US Pat. No. 10,795,726

PROCESSING REQUESTS RECEIVED ONLINE AND DIVIDING PROCESSING REQUESTS FOR BATCH PROCESSING

HITACHI, LTD., Tokyo (JP...

1. A computer system comprising:a plurality of processes configured to execute one or more first processing requests received online and one or more second processing requests for batch processing;
a control unit configured to divide the one or more second processing requests for batch processing in accordance with a division size determined in advance, wherein the control unit does not divide the one or more first processing requests; and
an all-order distribution unit configured to receive the one or more divided second processing requests and the one or more first processing requests that are not divided, determine an execution order of the one or more first processing requests and the one or more divided second processing requests and transmit a proposal which includes the determined execution order to all-order distribution units in other computer systems,
wherein, when the all-order distribution unit of the computer system receives consent to the transmitted proposal from at least a predetermined number of the other computer systems, the all-order distribution unit of the computer system respectively causes the plurality of processes to execute the one or more first processing requests and the one or more divided second processing requests in the determined execution order, and
the all-order distribution unit determines the execution order of the one or more first processing requests and the one or more divided second processing requests such that the one or more first processing requests are executed serially with the one or more divided second processing requests, whereby the computer system processes the first processing requests and the one or more divided second processing requests serially and in the same order as the other computer systems to maintain consistency among the computer system and the other computer systems.

US Pat. No. 10,795,725

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR IMAGE PROCESSING

FUJI XEROX CO., LTD., Mi...

1. An image processing device that executes image processing by each object of an object group in which a plurality of objects are connected to each other in a directed acyclic graph form, the image processing device comprising:a memory storing instructions; and
at least one hardware processor configured to execute the instructions by implementing:
updating processing and imparting processing, the updating processing comprising updating image processing which is executed by each object of the object group to partial processing which performs image processing on division image data representing a division image obtained by dividing an input image represented by input image data into a plurality of partial regions, and the imparting processing comprising imparting a dependency relationship between pieces of the partial processing of the objects connected to each other; and
control comprising causing a plurality of computation devices to execute, in parallel, the updating processing and the imparting processing by the processing unit and the partial processing which becomes executable based on the dependency relationship.

US Pat. No. 10,795,724

CLOUD RESOURCES OPTIMIZATION

Cisco Technology, Inc., ...

1. A system comprising:a plurality of cloud nodes implemented on computing devices comprising one or more processors, said plurality of cloud nodes configured to execute jobs associated with job requests in a cloud computing environment according to a schedule;
a schedule optimizer configured to:
use a machine learning model to determine functional intent for said job requests according to at least job execution metadata, and
generate a schedule recommendation for said jobs, wherein said schedule recommendation is generated based at least in part on said functional intent;
a job executor configured to provide said schedule recommendation as said schedule to said plurality of cloud nodes, wherein said schedule optimizer and said job executor are instantiated in memory and executed by processing circuitry on at least one computing device; and
a schedule portal user interface configured to:
present a representation of said schedule;
present a representation of said schedule recommendation; and
receive a user input indicative of an acceptance or a rejection of said schedule recommendation;
wherein upon determining that said user input is indicative of said acceptance of said schedule recommendation, said schedule optimizer is configured to indicate to said job executor to provide said schedule recommendation as a replacement schedule for said schedule to said plurality of cloud nodes.

US Pat. No. 10,795,723

MOBILE TASKS

PALANTIR TECHNOLOGIES INC...

1. A computer system comprising:one or more processors;
a memory storing one or more data objects and instructions which, when executed by the one or more processors, cause performance of:
generating task objects and causing the task objects to be stored in the memory;
attaching particular data objects stored in the memory to particular generated task objects;
identifying a first field of a first task object, of the task objects, that corresponds to a second field of a first data object, of the data objects, the first data object having been attached to the first task object, the second field storing a particular value;
assigning the first field of the first task object to the particular value of the second field of the first data object;
determining that the particular value in the first field of the first task object has changed and, in response, updating the particular value in the second field of the first data object.

US Pat. No. 10,795,722

COMPUTE TASK STATE ENCAPSULATION

NVIDIA Corporation, Sant...

1. A method of encapsulating and scheduling compute tasks in a streaming multiprocessor, the method comprising:allocating memory for storing a metadata structure for a compute task, wherein the metadata structure includes a software accessible portion and a hardware-only accessible portion, wherein the hardware-only accessible portion is accessible only by one or more hardware units in the streaming multiprocessor;
storing scheduling parameters in the metadata structure that control the scheduling of the compute task, wherein the scheduling parameters specify an execution priority level associated with the compute task;
storing a set of execution parameters in the software accessible portion of the metadata structure, wherein the set of execution parameters controls the execution of the compute task by the streaming multiprocessor;
storing a first pointer in the metadata structure, wherein the first pointer points to a queue that stores data associated with the compute task;
setting a pointer to the metadata structure in a first-in-first-out (FIFO) push buffer;
scheduling, based on the execution priority level and the first pointer, the compute task for execution in the streaming multiprocessor;
determining, based on at least one of the scheduling parameters, that the metadata structure specifies that the set of execution parameters stored in the software accessible portion is to be copied to the hardware-only accessible portion; and
based on the determining, copying, after scheduling the compute task for execution, the set of execution parameters stored in the software accessible portion to the hardware-only accessible portion.
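
A minimal Python sketch of the task metadata layout and the copy-on-schedule step; the field names (copy_params_to_hw, sw_execution_params) and the priority-ordered pull from the push buffer are illustrative assumptions, not the hardware mechanism.

from collections import deque
from dataclasses import dataclass, field


@dataclass
class TaskMetadata:
    priority: int                       # scheduling parameter: execution priority level
    copy_params_to_hw: bool             # scheduling parameter examined at launch time
    sw_execution_params: dict           # software accessible portion
    hw_execution_params: dict = field(default_factory=dict)   # hardware-only accessible portion
    queue_pointer: deque = field(default_factory=deque)       # data associated with the task


push_buffer: deque = deque()            # FIFO push buffer of pointers to metadata
push_buffer.append(TaskMetadata(priority=1, copy_params_to_hw=True,
                                sw_execution_params={"grid": (64, 1, 1)}))
push_buffer.append(TaskMetadata(priority=7, copy_params_to_hw=False,
                                sw_execution_params={"grid": (8, 8, 1)}))

for task in sorted(push_buffer, key=lambda t: t.priority, reverse=True):
    if task.copy_params_to_hw:
        task.hw_execution_params.update(task.sw_execution_params)   # copy SW portion to HW-only portion
    # ... hand the task to the streaming multiprocessor for execution ...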

US Pat. No. 10,795,721

TRANSFERRING TASKS FROM FAILING DEVICES USING IOT

International Business Ma...

1. A method for a transfer of a task from a device in a network, the network comprising a plurality of devices, the method comprising:determining, by a failing device of the plurality of devices, that the failing device will not be able to complete one or more tasks configured on the failing device;
comparing, by the failing device, requirements of a given task of the one or more tasks with sets of device capabilities on a device list, each set of device capabilities being associated with a device of the plurality of devices;
determining, by the failing device, that the requirements of the given task match a given set of device capabilities associated with a given device; and
in response, sending, by the failing device to the given device, a request to transfer the given task.
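
A minimal Python sketch of the capability matching performed by the failing device; the capability model (sensor sets, memory) and names (device_list, find_transfer_target) are hypothetical.

device_list = {
    "thermostat-2": {"sensors": {"temperature"}, "memory_kb": 64},
    "gateway-1": {"sensors": {"temperature", "humidity"}, "memory_kb": 512},
}


def find_transfer_target(task_requirements: dict):
    for device, capabilities in device_list.items():
        if (task_requirements["sensors"] <= capabilities["sensors"]
                and task_requirements["memory_kb"] <= capabilities["memory_kb"]):
            return device
    return None


task = {"sensors": {"temperature", "humidity"}, "memory_kb": 128}
print(find_transfer_target(task))   # 'gateway-1'; the failing device then sends it a transfer request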

US Pat. No. 10,795,720

ELECTRONIC DEVICE FOR CONTROLLING APPLICATION AND OPERATION METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. An electronic device, comprising:a first memory;
a second memory; and
a processor configured to:
identify, based on input for running a first application stored in the first memory, whether resources of the second memory are sufficient for the first application;
obtain a list from the first memory to terminate at least one application among a plurality of applications which are currently running in a background of the electronic device, based on identifying that the resources of the second memory are insufficient, wherein the list is generated based on information related to prior executions and prior terminations of applications stored in the first memory;
identify for termination, based on identifying that the resources are insufficient, the at least one application which is currently running in the background of the electronic device; and
control to terminate running of the at least one application, wherein the identifying of the at least one application for termination comprises:
determining whether all of the plurality of applications currently running in the background of the electronic device are included in the list;
based on determining that all of the plurality of applications are included in the list, applying a first criterion to identify, as the at least one application for termination, at least one of the plurality of applications included in the list; and
based on determining that not all of the plurality of applications are included in the list, applying a second criterion different from the first criterion to identify, as the at least one application for termination, at least one of the plurality of applications not included in the list;
wherein the processor is configured to identify whether a remaining space of the second memory is less than or equal to a designated value.
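
A minimal Python sketch of the two-criterion selection; the concrete criteria (most prior terminations vs. first app missing from the list) are hypothetical placeholders for the claimed first and second criteria.

termination_list = {"maps": 5, "music": 2}      # app -> prior termination count from the first memory
background_apps = ["maps", "music", "camera"]


def pick_app_to_terminate(background: list, listed: dict) -> str:
    if all(app in listed for app in background):
        # First criterion: terminate the listed app terminated most often before.
        return max(background, key=lambda app: listed[app])
    # Second criterion: pick among the apps that are not included in the list.
    return next(app for app in background if app not in listed)


print(pick_app_to_terminate(background_apps, termination_list))   # 'camera'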

US Pat. No. 10,795,719

DYNAMIC STATE-DRIVEN CENTRALIZED PROCESSING

Alegeus Technologies, LLC...

1. A method of dynamic-state-driven centralized processing, comprising:receiving, by a centralized state processing system comprising one or more processors and memory, a data structure constructed by a remote transaction processing server based on processing a plurality of electronic transactions that occurred within a spatiotemporal area, the data structure including a plurality of entries that each have a type identifier;
parsing, by the centralized state processing system, the plurality of entries based on the type identifier to identify a first entry having a first type identifier and a second entry having a second type identifier;
identifying, by the centralized state processing system, using a parameter repository storing a plurality of thresholds, a first threshold for the first entry based on the first type identifier, and a second threshold for the second entry based on the second type identifier;
determining, based on a comparison between the first threshold and a first value of the first entry, a positive delta value;
determining, based on a comparison between the second threshold and a second value of the second entry, a negative delta value;
selecting, by the centralized state processing system, from a script repository, a first script to apply to the first entry based on the positive delta value and the first type identifier, and a second script to apply to the second entry based on the negative delta value and the second type identifier;
determining, by the centralized state processing system, a first output for the first entry using the first script, and a second output for the second entry using the second script;
mapping, by the centralized state processing system, the first output to a first state of a plurality of states of a distributed heterogeneous electronic transaction process;
mapping, by the centralized state processing system, the second output to a second state of the plurality of states of the distributed heterogeneous electronic transaction process;
determining, by the centralized state processing system, a combined state based on the first state and the second state; and
providing, by the centralized state processing system responsive to a request from a client device received via a computer network, via a hierarchical graphical tree structure, an indication of at least one of the combined state, the first state, or the second state.
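
A minimal Python sketch of the per-entry pipeline: compare each typed entry to its threshold, pick a script by the delta sign, and map the output to a state that feeds a combined state. The type identifiers, thresholds, scripts, and state names are all hypothetical.

thresholds = {"contribution": 100.0, "claim": 250.0}        # one threshold per type identifier

scripts = {                                                 # selected by type identifier and delta sign
    ("contribution", "positive"): lambda value: value * 1.02,
    ("claim", "negative"): lambda value: value - 25.0,
}

state_map = {"contribution": "funded", "claim": "pending_review"}


def process(entries: list) -> dict:
    states = {}
    for entry in entries:
        type_id, value = entry["type"], entry["value"]
        delta = value - thresholds[type_id]                 # positive or negative delta value
        sign = "positive" if delta > 0 else "negative"
        output = scripts[(type_id, sign)](value)            # script output for the entry
        states[type_id] = (state_map[type_id], output)      # map the output to a process state
    combined = "complete" if len(states) == len(thresholds) else "partial"
    return {"states": states, "combined_state": combined}


print(process([{"type": "contribution", "value": 150.0},
               {"type": "claim", "value": 200.0}]))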

US Pat. No. 10,795,718

UPDATING HARDWARE WITH REDUCED VIRTUAL MACHINE DOWNTIME

Microsoft Technology Lice...

1. A computing system for updating logic in a hardware accelerator, comprising:hardware resources including the hardware accelerator, the hardware accelerator being reconfigurable;
an emulator having privileged access to the hardware resources; and
a hypervisor for providing a virtual machine,
in a pass-through mode of operation, the computing system enabling the virtual machine to directly interact with the hardware accelerator to pass information to and/or from the hardware accelerator,
the hardware resources including hardware logic circuitry that is configured to perform operations of the computing system, the operations including:
receiving logic for use in updating the hardware accelerator;
disabling the pass-through mode in response to receiving the logic;
transferring state information associated with the hardware accelerator into the emulator, the emulator being configured to emulate performance of the hardware accelerator;
commencing updating the hardware accelerator based on the logic that has been received; and
using the emulator to emulate one or more functions of the hardware accelerator while the hardware accelerator is being updated, instead of using the virtual machine to directly interact with the hardware accelerator in the pass-through mode.

US Pat. No. 10,795,717

HYPERVISOR FLOW STEERING FOR ADDRESS SHARING

MICROSOFT TECHNOLOGY LICE...

1. A computing device comprising:processing hardware;
a physical network interface card (NIC) configured to transmit and receive frames on a physical network, the physical NIC further configured to receive inbound packets routed by the physical network to the physical NIC based on the inbound packets being addressed to an Internet Protocol (IP) address assigned to the physical NIC;
storage hardware storing a hypervisor configured to be executed by the processing hardware to provide hardware isolated virtual environments (HIVEs), the hypervisor further configured to provide each HIVE with virtualized access to the processing hardware, the storage hardware, and the physical NIC;
each HIVE comprising a respective virtual NIC, wherein each virtual NIC virtualizes access to the physical NIC, wherein each virtual NIC comprises a software construct implemented by the processing hardware and the storage hardware, and wherein each HIVE is configured to host respective guest software;
the hypervisor further configured to assign to each virtual NIC the IP address that is assigned to the physical NIC;
the hypervisor further configured to determine, according to the inbound packets, which inbound packets to direct to which virtual NICs and which inbound packets to pass to the hypervisor, wherein the inbound packets directed to the HIVEs are received by the guest software while such inbound packets continue to be addressed to the IP address, and wherein first guest software in a first HIVE receives first inbound packets addressed to the IP address of the physical NIC and second guest software in a second HIVE receives second inbound packets addressed to the IP address of the physical NIC.

US Pat. No. 10,795,716

STATIC ROUTE TYPES FOR LOGICAL ROUTERS

NICIRA, INC., Palo Alto,...

1. A method for defining southbound routes into a logical network from an external network, the logical network comprising first and second logical routers, the method comprising:receiving configuration data for the first logical router, the first logical router comprising a centralized routing component and a distributed routing component and serving as an edge gateway to the external network;
based on the configuration data, adding a static first southbound route with a lower, first priority level to a routing table of the centralized routing component of the first logical router, said first southbound route for forwarding southbound data messages with destination addresses falling within a first subnet to the second logical router through the distributed routing component of the first logical router;
based on management plane deployment rules:
adding a static second southbound route with a higher, second priority level to the routing table of the centralized routing component of the first logical router, said second southbound route for forwarding southbound data messages with destination addresses falling within a second subnet to the second logical router through the distributed routing component of the first logical router; and
adding a connected third southbound route to the routing table of the distributed routing component of the first logical router, the connected third southbound route for forwarding southbound data messages with destination addresses falling within the second subnet to the second logical router.

US Pat. No. 10,795,715

CLOUD OVERSUBSCRIPTION SYSTEM

1. A cloud oversubscription system comprising:an overload detector configured to model a time series of data of at least one virtual machine on a host as a vector-valued stochastic process including at least one model parameter, the overload detector communicating with an inventory database, the overload detector configured to obtain an availability requirement for each of the at least one virtual machine;
a model parameter estimator communicating with the overload detector, the model parameter estimator communicating with a database containing resource measurement data for at least one virtual machine on a host at a selected time interval, the model parameter estimator is configured to estimate the at least one model parameter from the resource measurement data;
a loading assessment module communicating with the model parameter estimator to obtain the at least one model parameter for the host running the at least one virtual machine and determine a probability of overload based on the at least one model parameter, wherein the loading assessment module communicates the probability of overload to the overload detector;
wherein the overload detector compares the probability of overload to the availability requirement to identify a probable overload condition value;
wherein the overload detector communicates the probable overload condition value to a recommender, wherein the recommender generates an alert when the probable overload condition value exceeds service level agreement requirements for any of the at least one virtual machine;
wherein the overload detector is configured to obtain a probability of the host being down from the inventory database, and wherein the overload detector is configured to consider the probability of overload and the probability of the host being down in comparison to the availability requirement; and
wherein the overload detector is configured to consider whether a sum of the probability of overload and the probability of the host being down is less than one minus a maximum availability value among the availability values obtained for each of the at least one virtual machine.
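
A minimal Python sketch of the final availability check, with hypothetical probabilities and VM availability requirements: flag a probable overload when the overload and host-down probabilities together are not less than one minus the strictest availability requirement.

def probable_overload(p_overload: float, p_host_down: float,
                      vm_availability_requirements: list) -> bool:
    allowed_unavailability = 1.0 - max(vm_availability_requirements)
    return (p_overload + p_host_down) >= allowed_unavailability


# Host with VMs requiring 99.9% and 99.0% availability: the unavailability budget is 0.1%.
print(probable_overload(0.0005, 0.0004, [0.999, 0.990]))   # False, within budget
print(probable_overload(0.0020, 0.0004, [0.999, 0.990]))   # True, alert the recommender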

US Pat. No. 10,795,714

SYSTEM FOR MANAGING AND SCHEDULING CONTAINERS

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:receiving, from a customer of a computing resource service provider, one or more application programming interface (API) calls to create a cluster of virtual machines running a process; and
in response to the one or more API calls:
instantiating a plurality of virtual machines;
associating the plurality of virtual machines with a cluster identifier;
obtaining a task definition that indicates a location of a software image associated with the process; and
for each virtual machine of the plurality of virtual machines:
obtaining the software image from the location;
allocating an amount of a computing resource of the virtual machine to a software container;
launching the software container within the virtual machine, the software container running in isolation from other software containers running on the virtual machine; and
executing at least a portion of the process within the software container based at least in part on the software image.

US Pat. No. 10,795,713

LIVE MIGRATION OF A VIRTUALIZED COMPUTE ACCELERATOR WORKLOAD

VMware, Inc., Palo Alto,...

1. A method of implementing shared virtual memory between an application and one or more compute accelerators, the method comprising:(a) launching the application on a central processing unit (CPU) of a host computer, the application including a compute kernel that includes one or more functions;
(b) creating a custom translation lookaside buffer (TLB) by:
obtaining a reference that points to a data item within a portion of memory of the host computer, the portion of memory containing a working set that is an input for the one or more functions;
translating the reference from a virtual address space of the application to a virtual address space of the one or more compute accelerators, wherein the virtual address space of the one or more compute accelerators is associated with at least one local memory of one of the one or more compute accelerators; and
adding a mapping, to the custom TLB, of (a) a virtual address of the data item within the virtual address space of the application to (b) a virtual address of the data item within the virtual address space of the one or more compute accelerators; and
(c) executing, on the one or more compute accelerators, the compute kernel, wherein the compute kernel accesses the custom TLB during the execution of the compute kernel.
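
A minimal Python sketch of the custom TLB as a mapping from application virtual addresses to accelerator virtual addresses; the addresses and names (custom_tlb, add_mapping, translate) are hypothetical.

custom_tlb: dict = {}


def add_mapping(app_virtual_address: int, accelerator_virtual_address: int) -> None:
    custom_tlb[app_virtual_address] = accelerator_virtual_address


def translate(app_virtual_address: int) -> int:
    # Consulted while the compute kernel executes on the accelerator and
    # dereferences pointers into the working set.
    return custom_tlb[app_virtual_address]


add_mapping(0x7F00_0000_1000, 0x0000_2000_0040)   # data item in the working set
print(hex(translate(0x7F00_0000_1000)))           # 0x200000040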

US Pat. No. 10,795,712

METHODS AND SYSTEMS FOR CONVERTING A RELATED GROUP OF PHYSICAL MACHINES TO VIRTUAL MACHINES

VMware, Inc., Palo Alto,...

1. A method comprising:receiving a request to convert a plurality of computers on a first host into virtual computers on a second host;
accessing virtualization operations to be executed on the virtual computers;
accessing relationship data identifying parameters that define relationships between the plurality of computers; and
converting the plurality of computers into the virtual computers by executing an execution sequence of the virtualization operations.

US Pat. No. 10,795,711

PREDICTIVE ALLOCATION OF VIRTUAL DESKTOP INFRASTRUCTURE COMPUTING RESOURCES

VMWARE, INC., Palo Alto,...

1. A system for predictive allocation of computing resources in a virtual desktop infrastructure environment, comprising:at least one computing device;
program instructions stored in memory and executable in the at least one computing device that, when executed by the at least one computing device, cause the at least one computing device to:
generate a predictive usage model that forecasts a usage of a plurality of virtual machines that provide virtual desktop sessions in the virtual desktop infrastructure environment, wherein the predictive usage model is generated by applying a smoothing algorithm to a time series of a number of concurrent virtual desktop users over time, the number of the concurrent virtual desktop users being identified based at least in part on a log-on request or a log-off request generated by a plurality of client devices and accessed from a virtual machine usage log stored in the memory;
determine a number of the plurality of virtual machines that will be operating at a future time using the predictive usage model;
identify at least one computing resource required for at least the number of the plurality of virtual machines to operate at the future time; and
allocate the at least one computing resource such that the at least one computing resource is available at the future time.
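
A minimal Python sketch of forecasting concurrent desktop users with a smoothing algorithm and sizing VMs from the forecast; the sample counts, the single-exponential smoothing choice, and the users-per-VM ratio are hypothetical.

def smooth(series: list, alpha: float = 0.3) -> float:
    level = float(series[0])
    for observed in series[1:]:
        level = alpha * observed + (1 - alpha) * level   # exponential smoothing of the time series
    return level


concurrent_users = [120, 135, 150, 160, 158]   # hourly counts derived from log-on/log-off events
users_per_vm = 10
forecast_users = smooth(concurrent_users)
vms_needed = -(-int(forecast_users) // users_per_vm)     # ceiling division
print(forecast_users, vms_needed)   # ~146 users forecast -> allocate resources for 15 VMs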

US Pat. No. 10,795,710

HYPER-CONVERGED COMPUTING DEVICE

VMWARE, INC., Palo Alto,...

1. A hyper-converged computing device for supporting a computer network virtualization environment comprising:a server comprising at least one central processing unit (CPU), memory, and storage;
a central virtualization switch, wherein the server and the central virtualization switch are pretested, preconfigured, and pre-integrated;
a virtualization application to manage virtual machines hosted by the hyper-converged computing device; and
a hyper-converged application to manage the hyper-converged computing device, wherein the hyper-converged application is configured to:
receive, at the hyper-converged computing device by the hyper-converged application, a data packet from a virtual machine of the virtual machines, the data packet received in a first format compatible with a virtual desktop being run in the virtual machine, wherein a plurality of peripheral devices connects directly to the hyper-converged computing device, wherein the plurality of peripheral devices comprises a keyboard, a video display unit and a mouse;
generate, at the hyper-converged computing device by the hyper-converged application, a peripheral signal from the data packet that is configured to be sent to a peripheral device from the plurality of peripheral devices that corresponds to the virtual desktop, the peripheral signal in a second format compatible with the peripheral device; and
appropriately route data associated with an exclusive communication between the virtual machines and peripheral devices through the central virtualization switch, wherein the virtualization application and the hyper-converged application are configured on the hyper-converged computing device when the hyper-converged computing device is powered on for the first time.

US Pat. No. 10,795,709

SYSTEMS AND METHOD FOR DEPLOYING, SECURING, AND MAINTAINING COMPUTER-BASED ANALYTIC ENVIRONMENTS

The MITRE Corporation, M...

1. A method for provisioning a plurality of secure analytic cells using an automated provisioning framework, the method comprising:receiving one or more specifications of a first analytic cell, wherein an analytic cell includes one or more virtual assets collectively configured to process one or more data sets;
configuring one or more provisioning scripts based on the received one or more specifications of the analytic cell, wherein the one or more provisioning scripts are configured to instantiate the first analytic cell on a cloud computing environment;
executing the one or more provisioning scripts on the cloud computing environment to generate the first analytic cell on the cloud computing environment, wherein executing the one or more provisioning scripts on the cloud computing environment to generate the first analytic cell includes selecting and installing one or more software programs on the first analytic cell based on the received one or more specifications of the analytic cell;
executing one or more security control tests on the provisioned first analytic cell;
generating a security report on the first analytic cell, wherein the security report is based on one or more results of the one or more security control tests;
receiving an approval of the provisioned first analytic cell, wherein the approval is based on the generated security report;
configuring the first analytic cell to be available to a plurality of users based on the received approval;
executing the one or more provisioning scripts on the cloud computing environment to generate a second analytic cell on the cloud computing environment; and
configuring the second analytic cell to be available to a plurality of users based on the received approval of the provisioned first analytic cell.

US Pat. No. 10,795,708

TRANSPARENT DISK CACHING FOR VIRTUAL MACHINES AND APPLICATIONS

PARALLELS INTERNATIONAL G...

1. A method comprising:receiving, by a processing device in a host computer system, a first instruction from a virtual machine executed by a hypervisor on the host computer system to write first data from the virtual machine to an external storage device coupled to the host computer system, wherein the external storage device is coupled to the host computer system by at least one of a disconnectable hardware interface or a network connection;
in response to the first instruction from the virtual machine executed by the hypervisor on the host computer system, storing a copy of the first data in a cache of the host computer system before executing the first instruction to write the first data from the virtual machine executed by the hypervisor on the host computer system to the external storage device, wherein the cache of the host computer system is implemented within the host computer system and is separated from the external storage device by the at least one of the disconnectable hardware interface or the network connection;
after the copy of the first data is stored in the cache of the host computer system, initiating a first write operation to write the first data from the cache of the host computer system to the external storage device;
detecting that the external storage device is disconnected from the host computer system during execution of the write operation;
pausing the first write operation and suspending execution of the virtual machine in response to detecting that the external storage device is disconnected from the host computer system, wherein suspending execution of the virtual machine comprises saving a state of the virtual machine to a file in a memory of the host computer system;
determining that the external storage device is reconnected to the host computer system;
comparing a first portion of the first data written to the external storage device prior to the external storage device being disconnected from the host computer system to the copy of the first data stored in the cache of the host computing system to identify a second portion of the first data not written to the external storage device; and
resuming the first write operation to continue writing the second portion of the first data from the cache of the host computer system to the external storage device and resuming the execution of the virtual machine using the state of the virtual machine from the file in the memory in response to determining that the external storage device is reconnected to the host computer system.
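
The comparison-and-resume step in the claim above can be sketched briefly. The following Python sketch uses in-memory byte buffers to stand in for the host cache and the external device, and the block size is an illustrative assumption; it only shows the claimed idea of comparing what reached the device before the disconnect with the cached copy to find where the interrupted write must resume.

# Minimal sketch: find the first block at which the device copy diverges from
# the cached copy, then write only the remaining tail from the cache.

BLOCK = 4096

def find_resume_offset(cached: bytes, on_device: bytes) -> int:
    """Return the first offset at which the device copy stops matching the cache."""
    limit = min(len(cached), len(on_device))
    for offset in range(0, limit, BLOCK):
        if cached[offset:offset + BLOCK] != on_device[offset:offset + BLOCK]:
            return offset
    return limit

def resume_write(cached: bytes, device: bytearray) -> int:
    """Write the unwritten tail of the cached data to the device copy."""
    offset = find_resume_offset(cached, bytes(device))
    device[offset:] = cached[offset:]
    return offset

if __name__ == "__main__":
    data = bytes(range(256)) * 64                 # data the VM asked to write
    device = bytearray(data[:5000])               # partial write before disconnect
    offset = resume_write(data, device)
    assert bytes(device) == data
    print("resumed at block-aligned offset", offset)    # 4096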

US Pat. No. 10,795,707

SYSTEMS AND METHODS FOR ENSURING COMPUTER SYSTEM SECURITY VIA A VIRTUALIZED LAYER OF APPLICATION ABSTRACTION

1. A host computer processing system comprising:a host processor and an associated host memory system;
a host operating system executed by the host processor;
a virtualization management program running on the host processor for instantiating virtual machines in a manner to create a constantly shifting attack surface to increase the difficulty of penetrating instantiated virtual machines, wherein, in response to a user request made through a user interface to the host operating system to perform a function requiring launching of an application program capable of performing the function, the virtualization management program is operable to:
select pseudo-randomly, transparently and automatically one of a plurality of available virtual machines capable of running the application program, each of the plurality of virtual machines having a different configuration and including a virtual operating system;
instantiate the virtual machine that is selected transparently and automatically;
launch an application session with the application capable of performing the function via the selected and instantiated virtual machine; and
revert, upon closing the application session, the selected and instantiated virtual machine to a known good state.

US Pat. No. 10,795,706

MULTITIER APPLICATION BLUEPRINT REPRESENTATION IN OPEN VIRTUALIZATION FORMAT PACKAGE

VMWARE, INC., Palo Alto,...

1. A method to deploy a multitier application in a virtualized computing environment, comprising:receiving an open virtualization format (OVF) package comprising:
an OVF descriptor;
one or more virtual disk image files of virtual machines; and
an existing multitier application blueprint specifying software components of the multitier application on the virtual machines and dependencies of the software components,
wherein the one or more virtual disk image files are associated with updated states of the software components on the virtual machines;
deploying the virtual machines based on the one or more virtual disk image files, the virtual machines forming nodes of the multitier application; and
executing the existing multitier application blueprint extracted from the OVF package to deploy the software components on the virtual machines based on the dependencies of the software components and the updated states associated with the one or more virtual disk image files.

US Pat. No. 10,795,705

PARALLEL PROCESSING OF DATA

Google LLC, Mountain Vie...

1. A computer-implemented method comprising:obtaining multiple, parallel map operations and multiple, parallel reduce operations;
determining indexes for each of multiple map workers and each of multiple reduce workers;
generating, based on the indexes, a single mapreduce operation that includes a map function that implements the multiple, parallel map operations and a reduce function that implements the multiple, parallel reduce operations,
wherein the map function specifies which map operation of the multiple, parallel map operations to perform by map workers based on the indexes of the map workers and specifies which reduce operation of the multiple, parallel reduce operations to perform by reduce workers based on the indexes of the reduce workers; and
executing the single mapreduce operation based on the map function, the reduce function, and the indexes.
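
To make the index-based dispatch concrete, here is a minimal Python sketch (plain Python, no MapReduce framework, and not the patented implementation): several parallel map and reduce operations are folded into one map function and one reduce function that select an operation from a worker index. The example operations, worker counts, and index assignment are illustrative assumptions.

MAP_OPS = [lambda x: ("square", x * x), lambda x: ("double", 2 * x)]
REDUCE_OPS = {"square": sum, "double": max}
REDUCE_INDEX = {"square": 0, "double": 1}   # key -> reduce worker index

def combined_map(worker_index, record):
    """Single map function: the worker index selects which map operation runs."""
    return MAP_OPS[worker_index % len(MAP_OPS)](record)

def combined_reduce(worker_index, key, values):
    """Single reduce function: the worker index selects which reduce operation runs."""
    assert REDUCE_INDEX[key] == worker_index   # key routed to the matching worker
    return key, REDUCE_OPS[key](values)

if __name__ == "__main__":
    records = [1, 2, 3, 4]
    # Map phase: two "workers" with indexes 0 and 1 each apply their operation.
    mapped = [combined_map(i, r) for i in range(2) for r in records]
    # Shuffle by key, then reduce phase with per-key worker indexes.
    grouped = {}
    for key, value in mapped:
        grouped.setdefault(key, []).append(value)
    results = [combined_reduce(REDUCE_INDEX[k], k, v) for k, v in grouped.items()]
    print(results)   # [('square', 30), ('double', 8)]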

US Pat. No. 10,795,704

SERIALIZATION OF OBJECTS TO JAVA BYTECODE

Red Hat, Inc., Raleigh, ...

1. A system comprising:a memory;
a processor in communication with the memory; and
a serializer configured to:
receive an object that includes at least one field,
initiate serialization of the object according to a rule set,
write a first intermediate representation of a new object based on the object,
write a second intermediate representation to set the at least one field in the new object, and
prior to runtime, output a serialization of the new object based on the first intermediate representation and the second intermediate representation.

US Pat. No. 10,795,703

AUTO-COMPLETION FOR GESTURE-INPUT IN ASSISTANT SYSTEMS

Facebook Technologies, LL...

20. A system comprising: one or more processors; and a non-transitory memory coupled to the processors comprising instructions executable by the processors, the processors operable when executing the instructions to:receive from a client system associated with a first user, a user input detected by one or more cameras of the client system, wherein the user input comprises an incomplete gesture performed by one or more hands of the first user;
calculate, by an intent-understanding module, one or more confidence scores for one or more intents corresponding to the incomplete gesture;
determine that the calculated confidence scores associated with each of the intents are below a threshold score;
select, based on a personalized gesture-recognition model, one or more candidate gestures from a plurality of pre-defined gestures responsive to determining that the calculated confidence scores for each of the intents are below the threshold score, wherein each of the candidate gestures is associated with a confidence score representing a likelihood the first user intended to input the respective candidate gesture; and
send, to the client system, instructions for presenting one or more suggested inputs corresponding to one or more of the candidate gestures.
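
The threshold-and-fallback logic in the claim above is easy to illustrate. The following Python sketch is not the patented system; the threshold, gesture names, and confidence scores are hypothetical, and dictionaries stand in for the intent-understanding module and the personalized gesture-recognition model.

THRESHOLD = 0.6

def suggest_gestures(intent_scores, personalized_scores, top_k=3):
    """intent_scores / personalized_scores: dicts mapping names to confidences."""
    if any(score >= THRESHOLD for score in intent_scores.values()):
        return []   # the intent is clear enough; no suggestions needed
    ranked = sorted(personalized_scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    intents = {"take_photo": 0.35, "start_call": 0.22}          # all below threshold
    candidates = {"thumbs_up": 0.81, "wave": 0.55, "pinch": 0.40, "swipe": 0.12}
    print(suggest_gestures(intents, candidates))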

US Pat. No. 10,795,702

METHOD FOR INSERTING VIRTUAL RESOURCE OBJECT IN APPLICATION, AND TERMINAL

TENCENT TECHNOLOGY (SHENZ...

1. A method, comprising:obtaining, by processing circuitry of a terminal device that executes an application to provide a graphical interface on a display device, a present location of a graphical symbol in the graphical interface, the graphical symbol being indicative of a placement of a resource for use in a specific area in the graphical interface;
determining, by the processing circuitry, whether the present location of the graphical symbol satisfies a preset condition for the placement of the resource in the specific area;
determining, by the processing circuitry, a target location in the specific area for disposing the resource when the present location satisfies the preset condition;
sending, by network interface circuitry of the terminal device, a request message that includes the target location to a server device via a network;
receiving, by the network interface circuitry, an approval message including the target location, the approval message being sent by the server device when the target location passes a consistency check; and
updating, by the processing circuitry, the graphical interface with a graphical icon of the resource being positioned at the target location.

US Pat. No. 10,795,701

SYSTEM AND METHOD FOR GUIDING A USER TO A GOAL IN A USER INTERFACE

Express Scripts Strategic...

21. A system for integrating a telephone system and a computing system, the system comprising:an interactive voice response (IVR) platform configured to:
obtain a computer-readable command based on an audio input from a user, and
in response to obtaining the computer-readable command, (i) determine a web application that corresponds to the computer-readable command, (ii) determine a goal in the web application associated with the computer-readable command, and (iii) obtain information indicating a shortest user interface path to the goal in the web application; and
a cobrowse client configured to receive a document object model (DOM) of a current state of the web application from a cobrowse session for a web server hosting the web application,
wherein the IVR platform is configured to:
based on the DOM from the cobrowse client, determine a next user interface action along the shortest user interface path, and
generate a voice prompt for the user based on the next user interface action, and
wherein the cobrowse client is configured to receive an updated DOM in response to execution by the user of the next user interface action,
wherein the IVR platform is configured to:
obtain a cobrowse session identifier from the user;
transmit the cobrowse session identifier to the cobrowse session; and
receive the DOM of the current state in response to transmitting the cobrowse session identifier, and
wherein obtaining the cobrowse session identifier includes generating a voice instruction for the user that requests the user to (i) initiate the cobrowse session and (ii) provide the cobrowse session identifier to the IVR platform.

US Pat. No. 10,795,700

VIDEO-INTEGRATED USER INTERFACES

Accenture Global Solution...

1. A system, comprising:an electronic device; and
a computer-readable storage medium coupled to the electronic device and having instructions stored thereon which, when executed by the electronic device, cause the electronic device to perform operations comprising:
causing the electronic device to display a dynamic user interface comprising a video interface portion for displaying a video providing guidance relative to a device delivered to a user, the device delivered to the user being different than the electronic device displaying the dynamic user interface;
causing the video to play within the video interface portion of the dynamic user interface, the video depicting a first type of device defined in shipping information in a delivery notification;
providing a video status indicator that is unique to instant content depicted in the video, the video status indicator being automatically provided in response to data encoded in the video indicating that user input is to be provided in response to the instant content;
adjusting, in response to the video status indicator and while the video continues to play within the video interface portion, the dynamic user interface to concurrently display the video interface portion and a user interface portion for receiving user input;
receiving, while the video continues to play, data from a peripheral device provided in the electronic device in response to user input to the user interface portion of the dynamic user interface the peripheral device comprising one or more of a microphone of the electronic device and a camera of the electronic device; and
determining, while the video continues to play, that the data indicates that the device delivered to the user is a second type of device, different from the first type of device, and in response the user interface scene is modified to represent the second type of device instead of the generic device.

US Pat. No. 10,795,699

CENTRAL STORAGE MANAGEMENT INTERFACE SUPPORTING NATIVE USER INTERFACE VERSIONS

Cohesity, Inc., San Jose...

1. A method, comprising:providing, by a processor, a central management interface for a plurality of different storage clusters of different storage domains;
receiving, at the processor, an indication of one of the plurality of different storage clusters;
determining, by the processor, a native user interface version of the indicated storage cluster; and
loading, by the processor, the determined native user interface version to provide a remote native management interface of the indicated storage cluster within a user interface context of the central management interface.

US Pat. No. 10,795,698

USER INTERFACE BASED ON METADATA

Microsoft Technology Lice...

1. A computer implemented method, comprising:calling, by a computing device via an application to cause display of a user interface (UI), an application service associated with a server device, the application service including metadata that defines one or more UI features to be displayed by the UI;
receiving, at the computing device, the metadata that defines the one or more UI features to be displayed by the UI, the metadata further including a reference to an application programming interface (API) call;
causing, by the computing device, the application to render, on a display associated with the computing device, the UI that includes the one or more UI features defined by the metadata;
detecting user interaction with the one or more UI features included in the UI;
in response to detecting the user interaction with the one or more UI features, causing, by the computing device, the application to initiate the API call referenced in the metadata;
receiving, by the computing device, additional metadata in response to the API call, the additional metadata defining at least one additional UI feature, the additional metadata received from the server device; and
adding, by the computing device, the at least one additional UI feature to the UI that includes the one or more UI features defined by the metadata, wherein the at least one additional UI feature is displayed concurrently in the UI with the one or more UI features.

US Pat. No. 10,795,697

METHOD AND DEVICE FOR MANAGING DESKTOP

BEIJING KINGSOFT INTERNET...

1. A method for managing a desktop, comprising:obtaining, by a server, a desktop management request sent by a mobile terminal, wherein the desktop management request carries user information and desktop application information;
selecting, by the server, a management rule corresponding to the desktop management request from a preset rule base including a plurality of management rules, each of the plurality of management rules including preset user interest feature information which includes at least one application type meeting user preferences;
searching target preset information corresponding to the user information or a target application type associated with the desktop application information from the management rule, and obtaining the user interest feature information corresponding to the target preset information or the target application type from the management rule and planning the desktop application information according to the user interest feature information to generate desktop application arrangement information; and
sending the desktop application arrangement information to the mobile terminal, such that the mobile terminal arranges a plurality of application icons on the desktop according to the desktop application arrangement information,
wherein, the user interest feature information includes priority information of the at least one application type, and the application arrangement information includes information for arranging the plurality of application icons on the desktop, and
wherein, the user information includes information that is predicted according to respective application names in the desktop application information,
wherein, obtaining the user interest feature information corresponding to the target preset information or the target application type from the management rule and planning the desktop application information according to the user interest feature information to generate the desktop application arrangement information, comprises:
when the management rule is a gender management rule, extracting user gender information from the user information, wherein the gender management rule comprises preset interest feature information corresponding to two types of preset gender information;
searching for preset gender information that is the same as the user gender information from the gender management rule as target preset gender information;
obtaining preset interest feature information corresponding to the target preset gender information from the gender management rule as the user interest feature information corresponding to the user information; and
planning the desktop application information according to the user interest feature information to generate the desktop application arrangement information, and
wherein, when the management rule is the gender management rule, extracting the user gender information from the user information comprises:
when the management rule is the gender management rule, detecting whether the user gender information is included in the user information;
when the user gender information is included in the user information, extracting the user gender information in the user information;
when the user gender information is not included in the user information, obtaining application name lists corresponding respectively to the two types of preset gender information and determining an application name list to which each application name in the desktop application information belongs; and
predicting the user gender information corresponding to the user information according to the application name list to which each application name in the desktop application information belongs.

US Pat. No. 10,795,696

INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM

FUJI XEROX CO., LTD., Mi...

1. An information processing apparatus comprising:at least one processor configured to execute:
a recording unit that executes operations comprising:
recording an operation procedure of a piece of software, among pieces of software, operated in a region where a relative display position relationship among the pieces of software is set, and
generating control information that controls operation of the piece of software in accordance with the recorded operation procedure,
wherein, in the recording the operation procedure of the piece of software, the piece of software is identified by a relative display position relationship of the piece of software to another one of the pieces of software,
wherein the control information identifies the piece of software by the recorded relative display position relationship of the piece of software to the another one of the pieces of software,
wherein the at least one processor is further configured to execute a display controller that controls display of a confirmation screen displaying the operation procedure recorded by the recording unit, and
wherein the confirmation screen displays the recorded operation procedure as a moving image.

US Pat. No. 10,795,695

CONTROL METHOD AND APPARATUS FOR WINDOW IN APPLICATION PROGRAM

TENCENT TECHNOLOGY (SHENZ...

1. A control method for a window in an application program, the method comprising:creating a child window corresponding to a parent window in the application program;
creating a proxy window corresponding to the child window;
setting a parent window attribute of the child window to the proxy window;
setting a parent window attribute of the proxy window to the parent window, wherein a thread to which the proxy window belongs communicates with a thread to which the child window belongs by using an asynchronous message, the child window being a lower-level window of the parent window; and
determining, by the thread to which the proxy window belongs, a state of the child window, and in response to determining that the child window is in an unresponsive state, setting the parent window attribute of the proxy window to no parent window, and removing the child window from a current display interface by removing the proxy window;
wherein the proxy window and the child window are in different processes, and the method further comprises:
determining, by the thread to which the proxy window belongs, that the child window is in the unresponsive state when a first message sent from the child window is not received within a first predetermined duration; and
in response to determining that the child window is in the unresponsive state:
moving the child window by a distance so as to be outside of a display screen;
creating a ghost window at a position of the child window and displaying an image of the child window being in the unresponsive state in the ghost window;
determining whether the child window restores a normal state by determining whether a second message sent from the child window is received within a second predetermined duration;
in response to determining that no message has been received within the second predetermined duration, deleting a first process to which the child window belongs; and
in response to determining that the second message has been received within the second predetermined duration, removing the ghost window and restoring the child window.

US Pat. No. 10,795,694

SYSTEM AND METHOD FOR AUTOMATING WORKFLOW APPLICATIONS UTILIZING ROUTES

Intuit Inc., Mountain Vi...

1. A method for connecting data sources to an application with reduced disruption to the application, the method comprising:storing, in a routing library in accordance with a browserless runtime environment, a plurality of route files each defining a data communication route between a data services application and a data source;
storing, in accordance with the browserless runtime environment, application source code for the data services application, the application source code including a callout to each route file in the routing library;
gathering, with the data services application, data from the data sources in accordance with the routes defined in the routing library; and
outputting, with the data services application, data from the data sources.

US Pat. No. 10,795,693

GENERATING DYNAMIC LINKS FOR NETWORK-ACCESSIBLE CONTENT

The Narrativ Company, Inc...

1. A method comprising:receiving, from a first device, a request for a dynamic access link, wherein the request comprises data identifying a first network resource;
identifying a subject associated with the first network resource;
determining one or more additional network resources pertaining to the subject;
generating the dynamic access link and providing the dynamic access link to the first device;
receiving, from a second device, a network resource access request, wherein the network resource access request is generated responsive to the dynamic access link being selected;
responsive to receiving the network resource access request, choosing at least one of the first resource or the one or more additional network resources; and
providing a network address of the chosen resource to the second device.

US Pat. No. 10,795,692

AUTOMATIC SETTINGS NEGOTIATION

InterDigital Madison Pate...

1. A method comprising:receiving input from a sensor, detecting physical characteristics of at least two people present in an area;
determining presence in the area based on the physical characteristics of the at least two people in the area;
retrieving profile information of the at least two people present in the area, the profile information including at least one of physical characteristics, age, gender, favorite teams and relationships between the at least two people;
determining a relationship between the at least two people in the area, the at least two people being represented by their respective profiles;
determining that at least one preference setting is to be applied responsive to the determined relationship between the at least two people present in the area; and
adjusting settings of a first device responsive to determining that the at least one preference setting is to be applied.

US Pat. No. 10,795,691

SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR SIMULTANEOUSLY DETERMINING SETTINGS FOR A PLURALITY OF PARAMETER VARIATIONS

NVIDIA CORPORATION, Sant...

1. A method, comprising:storing, by a system, a plurality of first component variations for a first hardware component corresponding to a hardware device and a plurality of second component variations for a second hardware component corresponding to the hardware device;
generating, by the system, a plurality of unique hardware component combinations based on the first component variations and the second component variations;
assigning, by the system, a population value to each unique hardware component combination of the plurality of unique hardware component combinations, the population value determined based on a number of users each having the unique hardware component combination installed within their personal device;
simultaneously determining, by the system for each of two or more of the unique hardware component combinations, optimal settings for one of the first component variations included in the unique hardware component combination and one of the second component variations included in the unique hardware component combination, by:
initializing a first value of a first setting for the first hardware component and a second value of a second setting for the second hardware component,
incrementally adjusting at least one of the first value or the second value, based on the population values assigned to the plurality of the unique hardware component combinations, and
for each incremental adjustment resulting in current potential settings, determining whether the current potential settings are optimal for the hardware component combination.

US Pat. No. 10,795,690

AUTOMATED MECHANISMS FOR ENSURING CORRECTNESS OF EVOLVING DATACENTER CONFIGURATIONS

Oracle International Corp...

1. A method comprising:receiving a current configuration of a datacenter and a target configuration of said datacenter;
generating a plurality of new configurations of said datacenter that are based on said current configuration;
applying a cost function to calculate a cost of each configuration of said plurality of new configurations based on: a) measuring a logical difference between said each configuration and said target configuration and b) at least one factor selected from the group consisting of:
a capital cost of hardware elements that are required by said each configuration,
a count of types of hardware elements that are required by said each configuration,
a count of distinct stock keeping units (SKUs) of hardware elements that are required by said each configuration,
a count of redundant communication paths between hardware elements that are achieved by said each configuration, and
an amount of space that is required by said each configuration;
selecting a particular configuration of said plurality of new configurations that has a least cost;
when the particular configuration satisfies said target configuration, reconfiguring said datacenter based on said particular configuration;
when the particular configuration does not satisfy said target configuration, repeating said method with said particular configuration as said current configuration and with said target configuration;
wherein the method is performed by one or more computers.
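
A minimal Python sketch of the cost-driven search described above follows. It is not the patented method: the configuration encoding (sets of named elements), the cost weights, the single capital-cost factor, and the single-element neighborhood are all illustrative assumptions. It only shows the claimed loop of generating candidates, scoring each against the target plus a hardware factor, keeping the cheapest, and repeating until the target is satisfied.

def logical_difference(config, target):
    """Count elements present in exactly one of the two configurations."""
    return len(set(config) ^ set(target))

def cost(config, target, capital_cost):
    hardware_factor = sum(capital_cost.get(element, 0) for element in config)
    return logical_difference(config, target) * 10 + hardware_factor

def neighbors(config, catalog):
    """Candidate configurations that add or remove a single element."""
    for element in catalog:
        if element in config:
            yield tuple(e for e in config if e != element)
        else:
            yield tuple(config) + (element,)

def plan(current, target, catalog, capital_cost, max_steps=32):
    """Greedy search: keep the least-cost candidate until the target is met."""
    for _ in range(max_steps):
        if logical_difference(current, target) == 0:
            break
        current = min(neighbors(current, catalog),
                      key=lambda c: cost(c, target, capital_cost))
    return current

if __name__ == "__main__":
    catalog = {"switch-a", "switch-b", "rack-1", "rack-2"}
    capital = {"switch-a": 3, "switch-b": 5, "rack-1": 8, "rack-2": 8}
    result = plan(("switch-a", "rack-1"), ("switch-b", "rack-1", "rack-2"), catalog, capital)
    print(sorted(result))   # ['rack-1', 'rack-2', 'switch-b']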

US Pat. No. 10,795,689

RECONFIGURABLE LOGICAL CIRCUIT

FUJI XEROX CO., LTD., Mi...

1. A reconfigurable logical circuit comprising:a data processor configured to perform data processing, the data processor comprising a processing termination detector that determines a status corresponding to termination of the data processing and generate status information based on the determination of the status;
a memory storing a plurality of combinations of configuration control bits;
a reconfiguration controller configured to generate a selector control signal based on a comparison between the status information and a predetermined reconfiguration permission information; and
a selector configured to select one of the plurality of combinations of configuration control bits to the data processor to reconfigure the data processor based on the selector control signal,
wherein the processing termination detector detects termination of a first processing in the data processor and transmits, in response to detecting the termination of the first processing in the data processor, a first setting value to update and store the first setting value in an event holding storage, the first setting value configured for switching to a second processing to the reconfiguration controller, and
wherein the reconfiguration controller compares the first setting value with a second setting value corresponding to the second processing stored in a reconfiguration permission information storage, and generates the selector control signal for the second processing when the reconfiguration controller determines that the first setting value and the second setting value are the same.

US Pat. No. 10,795,688

SYSTEM AND METHOD FOR PERFORMING AN IMAGE-BASED UPDATE

DATTO, INC., Norwalk, CT...

1. A method of updating an operating system on a target device using incremental updates, comprising:storing, in a target device memory a first operating system image according to a first file system on the target device;
storing, in the target device memory, a snapshot of the first operating system image in a target-device-held series of snapshots according to a second file system on the target device that is snapshot capable;
receiving, by the target device and from a remote storage device, data equivalent to a snapshot of a second operating system image, the second operating system image being an image of an updated version of the first operating system image;
forming the snapshot of the second operating system image from the data equivalent to the snapshot of the second operating system image;
storing, in the target device, the snapshot of the second operating system image in the target-device-held series of snapshots according to the second file system on the target device;
exporting, to the target device memory, a second operating system image containing data representative of the snapshot of the second operating system image;
storing, in the target device memory the second operating system image according to the first file system on the target device; and
booting the target device using the second operating system image.

US Pat. No. 10,795,687

INFORMATION PROCESSING SYSTEM FOR SETTING HARDWARE, METHOD FOR SETTING HARDWARE AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM RECORDING PROGRAM FOR SETTING HARDWARE

FUJITSU LIMITED, Kawasak...

1. An information processing system comprising:a first information processing device; and
a second information processing device,
the first information processing device includes:
a first memory configured to store first firmware in which first setting information related to a first setting of first hardware of the first information processing device is recorded; and
a first processor coupled to the first memory and configured to generate, by executing the first firmware, data including the first setting information recorded in the first firmware, and
the second information processing device includes:
a second memory configured to store second firmware for reproducing the first setting in setting processing of second hardware of the second information processing device based on the first setting information included in the generated data; and
a second processor coupled to the second memory and configured to reproduce, by executing the second firmware, the first setting based on the first setting information in the setting processing of the second hardware,
the second processor is configured not to start an operating system (OS) in the setting processing.

US Pat. No. 10,795,686

INTERNATIONALIZATION CONTROLLER

International Business Ma...

7. A system, comprising:a first computer processor;
a computer readable memory in circuit communication with the first processor; and
a computer readable storage medium in circuit communication with the first processor;
wherein the first processor executes program instructions stored on the computer readable storage medium via the computer readable memory and thereby:
in response to receiving a payload request, sends a token generation request comprising language requirements to a language token controller, wherein the language token controller generates a token, associates the token with the language requirements, and sends the token to the first processor;
sends the token received from the language token controller to an agent of a secondary server and to different agents of different servers in a plurality of different servers, wherein in response the agent and each different agent sends a respective request identifier to the first processor, wherein the language token controller returns the language requirements associated with the token to the agent and each different agent in response to receiving the token from the agent and each different agent;
distributes a respective unit of work to the secondary server and each different server through a respective application programming interface (API) of the secondary server and each different server in response to receiving each respective request identifier from the agent and each different agent, wherein each respective request identifier identifies the respective unit of work distributed to the secondary server and each different server and each respective request identifier is identified through the respective API of the secondary server and each different server, and wherein the secondary server and each different server translates the respective data for each respective unit of work into a language identified by the language requirements and sends a respective sub-payload with the translated respective data for each respective unit of work to the first processor;
assembles each respective sub-payload with the translated respective data for each respective unit of work into a payload; and
returns the assembled payload in response to the received payload request;
wherein each respective API remains unchanged through the use of the token; and
wherein each request identifier is selected from the group consisting of: a transaction identifier, a work identifier, a thread identifier, a process identifier, and a socket identifier.

US Pat. No. 10,795,685

OPERATING A PIPELINE FLATTENER IN ORDER TO TRACK INSTRUCTIONS FOR COMPLEX

TEXAS INSTRUMENTS INCORPO...

1. An apparatus comprising:a processor that includes:
a memory to store an instruction;
a processing pipeline to execute the instruction and having pipeline stages arranged in a sequence, the pipeline stages including a first pipeline stage that is an initial pipeline stage of the sequence and a last pipeline stage that is a final pipeline stage in the sequence; and
trigger registers, each trigger register being associated with a respective one of the pipeline stages;
wherein, in response to the instruction being received by the first pipeline stage, the processor is configured to:
select a trigger value as one of a first trigger value and a second trigger value depending on whether the instruction is selected for debug tracking;
cause the selected trigger value to be stored in a first trigger register of the trigger registers, the first trigger register being associated with the first pipeline stage; and
cause the selected trigger value to be forwarded through all remaining trigger registers associated with each pipeline stage subsequent to the first pipeline stage.

US Pat. No. 10,795,684

METHOD AND LOGIC FOR MAINTAINING PERFORMANCE COUNTERS WITH DYNAMIC FREQUENCIES

Intel Corporation, Santa...

1. A processor comprising multiple cores, each core comprising:a front end comprising circuitry to decode an instruction from an instruction stream;
an execution pipeline comprising circuitry to execute the decoded instruction;
a dynamic core frequency logic unit comprising circuitry to squash a core clock frequency of the core to reduce the core clock frequency, wherein the clock is invisible to software; and
a counter compensation logic unit comprising circuitry to:
receive performance counter increments from a plurality of end points of the core connected to the counter compensation logic unit through a shared data bus of the core, each performance counter increment associated with a performance counter and each end point associated with a performance counter domain based on a latency associated with circuitry to report the performance counter increments from the end point to the counter compensation logic unit;
select, for each performance counter increment, a dynamic core frequency ratio based on the performance counter domain of the end point that reported the performance counter increment, wherein the dynamic core frequency ratio represents a ratio of unsquashed clocks to squashed clocks when the performance counter increment was reported; and
generate, for each performance counter increment, an adjusted performance counter increment based on the performance counter increment and the corresponding selected dynamic core frequency ratio, wherein the adjusted performance counter increment is based on multiplying the performance counter increment by the corresponding selected dynamic core frequency ratio.
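
The compensation arithmetic in the final step above reduces to a per-domain multiplication. The Python sketch below uses hypothetical domain names and ratios, and a dictionary in place of hardware counters; it only illustrates scaling each reported increment by the ratio of unsquashed to squashed clocks for the reporting end point's domain.

# ratio of unsquashed clocks to squashed clocks, per counter domain (illustrative)
FREQUENCY_RATIO = {"near": 1.0, "mid": 1.25, "far": 2.0}

def adjusted_increment(increment, domain):
    return increment * FREQUENCY_RATIO[domain]

def accumulate(reports):
    """reports: iterable of (counter_name, increment, domain) tuples."""
    counters = {}
    for name, increment, domain in reports:
        counters[name] = counters.get(name, 0) + adjusted_increment(increment, domain)
    return counters

if __name__ == "__main__":
    reports = [("l1_miss", 4, "near"), ("l1_miss", 4, "far"), ("uops", 10, "mid")]
    print(accumulate(reports))   # {'l1_miss': 12.0, 'uops': 12.5}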

US Pat. No. 10,795,683

PREDICTING INDIRECT BRANCHES USING PROBLEM BRANCH FILTERING AND PATTERN CACHE

International Business Ma...

1. A method of predicting indirect branch instructions of a processor, comprising:predicting, by the processor, a target address for a fetched branch instruction from a count cache;
tracking, by the processor, accuracy of the target address;
flagging, by the processor, the fetched branch instruction responsive to determining based on the tracking that the fetched branch instruction is a problematic branch instruction;
starting training of a pattern cache for predicting a more accurate target address for the fetched branch instruction, the pattern cache structured to include a tag and a corresponding target address; and
next time the fetched branch instruction is fetched, performing a step of predicting from the pattern cache, overriding the count cache's prediction, wherein the predicting from the pattern cache includes searching the pattern cache for a matching tag given a global history vector and a previous target address;
wherein the tracking of the accuracy of the target address comprises storing in a table a count value associated with the fetched branch instruction,
wherein if the predicting from the count cache is determined to be a correct prediction, the count value is determined by:
setting a temporary counter value to a previous count value minus a predetermined reward value;
comparing the temporary counter value to a predetermined minimum value;
if the temporary counter value is less than the predetermined minimum value, setting the count value to zero;
if the temporary counter value is not less than the predetermined minimum value, setting the count value to the temporary counter value;
wherein if the predicting from the count cache is determined to be a misprediction, the count value is determined by:
setting the temporary counter value to the previous counter value plus a predetermined penalty value;
comparing the temporary counter value to a predetermined maximum value;
if the temporary counter value is greater than the predetermined maximum value, setting the count value to the predetermined maximum value divided by 2, and reducing all other count values stored in the table by ½;
if the temporary counter value is not greater than the predetermined maximum value, setting the count value to the temporary counter value,
the method further including maintaining a storage element external to the table, the storage element storing a most mispredicted effective branch address among entries in the table, wherein content of the storage element is identified as the problematic branch instruction for training in the pattern cache, wherein responsive to the count value stored in the table exceeding a threshold value, raising a signal corresponding to an address associated with the count value that exceeds the threshold value, causing the address to be written to the storage element.
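
The count-value bookkeeping recited above (reward on a correct prediction, penalty on a misprediction, clamping at a minimum, halving on saturation, and flagging a problematic branch) can be shown in a few lines. The Python sketch below uses illustrative constants and a plain dictionary for the table; it is not the patented hardware.

REWARD, PENALTY = 1, 4
MIN_VALUE, MAX_VALUE, THRESHOLD = 0, 16, 12

table = {}              # branch address -> count value
problem_branch = None   # storage element external to the table

def update(address, correct_prediction):
    global problem_branch
    previous = table.get(address, 0)
    if correct_prediction:
        tentative = previous - REWARD
        table[address] = 0 if tentative < MIN_VALUE else tentative
    else:
        tentative = previous + PENALTY
        if tentative > MAX_VALUE:
            table[address] = MAX_VALUE // 2
            for other in table:                     # halve every other entry
                if other != address:
                    table[other] //= 2
        else:
            table[address] = tentative
    if table[address] > THRESHOLD:                  # flag the problematic branch
        problem_branch = address

if __name__ == "__main__":
    for outcome in [False, False, False, True, False]:   # mostly mispredicted
        update(0x4000, outcome)
    print(table[0x4000], hex(problem_branch) if problem_branch else None)   # 15 0x4000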

US Pat. No. 10,795,682

GENERATING VECTOR BASED SELECTION CONTROL STATEMENTS

Intel Corporation, Santa...

1. A system for generating vector based selection control statements comprising:a processor to:
determine that a vector cost of a selection control statement is below a scalar cost of the selection control statement according to an execution time of the selection control statement as a vector of values that are evaluated simultaneously and an execution time of the selection control statement via scalar execution techniques;
determine that the selection control statement is to be executed in a sorted order based on dependencies between a plurality of branch instructions of the selection control statement;
in response to the determination of the sorted order, determine that a program ordering of labels of the selection control statement does not match a mathematical ordering of the labels; and
in response to the determination that a program ordering of labels of the selection control statement does not match a mathematical ordering of the labels, execute the selection control statement with a vector of values that are evaluated simultaneously, wherein the selection control statement is to be executed based on a jump table and a sorted unique value technique, wherein the sorted unique value technique comprises selecting at least one of the plurality of branch instructions from the jump table in a non-linear manner, wherein the jump table is generated based on labels of the selection control statement.

US Pat. No. 10,795,681

INSTRUCTION LENGTH DECODING

Intel Corporation, Santa...

1. An apparatus, comprising:a data processor;
memory to store binary translator software code, wherein the binary translator software code is executable by the data processor to:
analyze a stream of variable length instructions to be decoded and executed by a particular processor;
identify a plurality of words by boundary bits in the instructions, wherein each of the words includes at least a portion of one or more of the instructions;
generate a mask for a particular instruction cache line, wherein the particular instruction cache line is to be loaded with a subset of the plurality of words, the subset of the words comprises at least a portion of two or more of the instructions, the mask comprises a respective entry to correspond to each one of the two or more instructions, each of the respective entries comprises a respective portion of mask bits of the mask to identify, for the corresponding instruction, which words in the particular instruction cache line correspond to the instruction, and a number of bits are to be set in the respective portion of mask bits equal to a number of words occupied by the instruction in the particular instruction cache line; and
load the mask and the subset of words into the particular instruction cache line to be accessed by the particular processor; and
wherein decoder hardware of the particular processor is to:
access the mask from the particular instruction cache line;
apply the mask to identify a first one of the two or more instructions from the instruction cache line; and
decode the first instruction.

US Pat. No. 10,795,680

VECTOR FRIENDLY INSTRUCTION FORMAT AND EXECUTION THEREOF

Intel Corporation, Santa...

1. A computing apparatus comprising:a processor configured to execute an instruction set, wherein the instruction set includes a first instruction format, wherein the first instruction format has a plurality of fields including a class field, an alpha field, and a beta field, wherein the first instruction format supports different augmentation operations through placement of different values in the alpha field and the beta field, wherein only one of the different values may be placed in each of the alpha field and the beta field on each occurrence of an instruction in the first instruction format in instruction streams, the processor including,
a decode unit configured to decode the occurrences of the instructions in the first instruction format with the class field's content specifying a first class as follows:
distinguish, for each of the occurrences that does not specify memory access, whether to augment with a round type operation or not based on the alpha field's content in that occurrence, wherein the beta field is interpreted as a suppress all floating point exceptions (SAE) field and a round operation field when the alpha field's content indicates the round type operation;
distinguish, for each of the occurrences that does not specify memory access and that does specify the round type operation through the alpha field's content, whether floating point exceptions will be suppressed or not based on the SAE field's content in that occurrence; and
distinguish, for each of the occurrences that does not specify memory access and that does specify the round type operation through the alpha field's content, which one of a plurality of round operations to apply based on the round operation field's content in that occurrence.

US Pat. No. 10,795,679

MEMORY ACCESS INSTRUCTIONS THAT INCLUDE PERMISSION VALUES FOR ADDITIONAL PROTECTION

Red Hat, Inc., Raleigh, ...

1. A method comprising:receiving, by a central processing unit (CPU), a first executable program instruction referencing a first memory address, wherein the first executable program instruction is a first augmented load instruction comprising a first permission value;
responsive to receiving the first augmented load instruction, comparing, by the CPU, the first permission value to a second permission value, wherein the second permission value is associated with a first page table entry for the first memory address; and
responsive to determining that the first permission value matches the second permission value, loading, into a register, contents of a memory location referenced by the first memory address.
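
The permission comparison above is a simple match-before-load check. The Python sketch below simulates it with a dictionary page table and register file (not real CPU behavior); addresses, permission values, and the fault type are illustrative.

PAGE_SIZE = 4096
page_table = {0x1000 // PAGE_SIZE: {"permission": 0x5, "frame": {0x1000: 42}}}
registers = {}

class PermissionFault(Exception):
    pass

def augmented_load(dest_register, address, permission_value):
    """Load only if the instruction's permission value matches the page table entry."""
    entry = page_table[address // PAGE_SIZE]
    if permission_value != entry["permission"]:
        raise PermissionFault(f"permission {permission_value:#x} rejected")
    registers[dest_register] = entry["frame"][address]

if __name__ == "__main__":
    augmented_load("r1", 0x1000, 0x5)
    print(registers["r1"])          # 42
    try:
        augmented_load("r2", 0x1000, 0x3)
    except PermissionFault as fault:
        print(fault)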

US Pat. No. 10,795,678

MATRIX VECTOR MULTIPLIER WITH A VECTOR REGISTER FILE COMPRISING A MULTI-PORT MEMORY

Microsoft Technology Lice...

1. A processor comprising:a vector register file comprising a multi-port memory; and
a plurality of tiles configured to process an N by N matrix of data elements and an N by 1 vector of data elements, wherein N is an integer equal to or greater than 8, and wherein each of the plurality of tiles is configured to process N data elements, and wherein the vector register file is configured to:
in response to a write instruction, during a single clock cycle store N data elements in the multi-port memory and during each one of P clock cycles provide N data elements to each one of P input interface circuits of the multi-port memory, wherein P is an integer equal to N divided by L, wherein L is an integer equal to or greater than 2, and wherein each of the P input interface circuits comprises an input lane configured to carry L data elements in parallel, and wherein during the each one of the P clock cycles the multi-port memory is configured to receive N data elements via a selected at least one of the P input interface circuits, and
in response to a read instruction, during a single clock cycle retrieve N data elements from the multi-port memory and during each one of Q clock cycles provide L data elements from each one of Q output interface circuits of the multi-port memory, wherein Q is an integer equal to N divided by L, and wherein each of the Q output interface circuits comprises an output lane configured to carry L data elements in parallel, and wherein during the each one of the Q clock cycles the multi-port memory is configured to provide N data elements to a selected at least one of the Q output interface circuits.
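
A short worked example of the lane arithmetic the claim relies on (P = N / L input interfaces, Q = N / L output interfaces, each lane L elements wide) may help; the concrete values below are illustrative, and the cycle count is one reading of the claimed timing rather than a statement of the patented design.

N, L = 8, 2                      # N >= 8 and L >= 2 per the claim
P = N // L                       # input interface circuits
Q = N // L                       # output interface circuits

assert P * L == N and Q * L == N
print(f"N={N}, L={L}: {P} input lanes, {Q} output lanes, "
      f"{N // L} cycles to stream {N} elements through one L-wide lane")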

US Pat. No. 10,795,677

SYSTEMS, APPARATUSES, AND METHODS FOR MULTIPLICATION, NEGATION, AND ACCUMULATION OF VECTOR PACKED SIGNED VALUES

Intel Corporation, Santa...

1. A method, comprising:decoding an instruction by a decode circuit, the instruction having fields for a first and second packed data source operand, and a packed data destination operand;
executing the decoded instruction by an execution circuit by:
multiplying selected data values from a plurality of packed data element positions in the first and second packed data source operands to generate a plurality of first result values;
summing the plurality of first result values to generate one or more second result values;
negating the one or more second result values to generate one or more third result values;
accumulating the one or more third result values with one or more data values from the destination operand to generate one or more fourth result values; and
storing the one or more fourth result values in one or more packed data element positions in the destination operand.
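
The multiply, sum, negate, and accumulate sequence above maps directly onto a few lines of arithmetic. In the Python sketch below, plain lists stand in for packed-data registers and the grouping of two element pairs per destination lane is an illustrative assumption, not taken from the patent.

def multiply_negate_accumulate(src1, src2, dest, group=2):
    """Process `group` source element pairs per destination lane."""
    result = list(dest)
    for lane in range(len(dest)):
        start = lane * group
        products = [a * b for a, b in zip(src1[start:start + group],
                                          src2[start:start + group])]
        result[lane] += -sum(products)      # negate the sum, then accumulate
    return result

if __name__ == "__main__":
    src1 = [1, 2, 3, 4]
    src2 = [5, 6, 7, 8]
    dest = [100, 200]
    # lane 0: -(1*5 + 2*6) + 100 = 83 ; lane 1: -(3*7 + 4*8) + 200 = 147
    print(multiply_negate_accumulate(src1, src2, dest))   # [83, 147]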

US Pat. No. 10,795,676

APPARATUS AND METHOD FOR MULTIPLICATION AND ACCUMULATION OF COMPLEX AND REAL PACKED DATA ELEMENTS

Intel Corporation, Santa...

1. A processor comprising:a decoder to decode a first instruction to generate a decoded instruction;
a first source register to store a first plurality of packed real and imaginary data elements;
a second source register to store a second plurality of packed real and imaginary data elements;
execution circuitry to execute the decoded instruction, the execution circuitry comprising:
multiplier circuitry to select real and imaginary data elements in the first source register and second source register to multiply, the multiplier circuitry to multiply each selected imaginary data element in the first source register with a selected real data element in the second source register, and to multiply each selected real data element in the first source register with a selected imaginary data element in the second source register to generate a plurality of imaginary products,
adder circuitry to add a first subset of the plurality of imaginary products to generate a first temporary result and to add a second subset of the plurality of imaginary products to generate a second temporary result;
accumulation circuitry to combine the first temporary result with first data from a destination register to generate a first final result and to combine the second temporary result with second data from the destination register to generate a second final result and to store the first final result and second final result back in the destination register.
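
The datapath above computes only the imaginary part of a complex multiply-accumulate. The Python sketch below shows that arithmetic with tuples standing in for packed complex elements; element counts and values are illustrative, and the cross-check uses Python's built-in complex type.

def imag_multiply_accumulate(src1, src2, dest):
    """src1, src2: lists of (real, imag) pairs; dest: list of imaginary accumulators."""
    out = []
    for (r1, i1), (r2, i2), acc in zip(src1, src2, dest):
        imag_products = (r1 * i2, i1 * r2)      # the two imaginary cross-products
        out.append(acc + sum(imag_products))    # temporary result + accumulator
    return out

if __name__ == "__main__":
    a = [(1, 2), (3, 4)]
    b = [(5, 6), (7, 8)]
    dest = [10, 20]
    # element 0: 10 + (1*6 + 2*5) = 26 ; element 1: 20 + (3*8 + 4*7) = 72
    print(imag_multiply_accumulate(a, b, dest))   # [26, 72]
    # cross-check against Python's complex arithmetic
    print([(complex(*x) * complex(*y)).imag + d for x, y, d in zip(a, b, dest)])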

US Pat. No. 10,795,675

DETERMINE WHETHER TO FUSE MOVE PREFIX INSTRUCTION AND IMMEDIATELY FOLLOWING INSTRUCTION INDEPENDENTLY OF DETECTING IDENTICAL DESTINATION REGISTERS

ARM Limited, Cambridge (...

1. An apparatus comprising:processing circuitry to perform data processing in response to instructions; and
instruction fusing circuitry to fuse a move prefix instruction and an immediately following instruction fetched from a data store to generate a fused data processing instruction to be processed by the processing circuitry;
wherein:
the move prefix instruction identifies a move destination register and a move source register specifying a data value to be at least partially copied to the move destination register;
in response to detecting said move prefix instruction, the instruction fusing circuitry is configured to determine whether to fuse said move prefix instruction and said immediately following instruction independently of whether the move destination register of the move prefix instruction is the same register as any register specified by said immediately following instruction;
the move prefix instruction indicates that the immediately following instruction is expected to be a destructive data processing instruction for which a destination register is to be set to a result value corresponding to a result of applying a predetermined processing operation to at least two source values specified by at least two source registers, and for which the destination register and one of said at least two source registers are the same as the move destination register of the move prefix instruction; and
when the destination register of the immediately following instruction is not the same as the move destination register of the move prefix instruction, the fused data processing instruction generated by fusing the move prefix instruction and the immediately following instruction is capable of providing a different result from a result that would be generated if the move prefix instruction and the immediately following instruction were executed independently.

US Pat. No. 10,795,674

AUTOMATIC SCALING OF MICROSERVICES APPLICATIONS

Juniper Networks, Inc., ...

1. A device, comprising:a memory; and
one or more processors to:
receive information identifying a set of tasks to be executed,
the set of tasks being associated with a microservices application,
the microservices application being associated with a set of microservices, and
the set of tasks to be executed by the microservices application associated with the set of microservices;
determine an execution time of the set of tasks based on a set of parameters and a model,
the set of parameters including at least:
a first parameter that identifies a first score associated with a first microservice of the set of microservices, and
a second parameter that identifies a second score associated with a second microservice of the set of microservices;
determine that the first microservice is associated with a greater amount of execution time of a first subtask, of a plurality of first subtasks, as compared to an execution time of a second subtask, of a plurality of second subtasks associated with the second microservice; and
execute a greater quantity of the plurality of first subtasks of the first microservice in parallel as compared to a quantity of the plurality of second subtasks of the second microservice.

US Pat. No. 10,795,673

DIAGNOSING PRODUCTION APPLICATIONS

Microsoft Technology Lice...

1. A method, comprising:modifying an application that is executing on a production server by rewriting intermediate language code for the application to include one or more conditional expressions defining one or more conditions for creating one or more snapshots of the application at one or more locations of one or more possible errors;
evaluating the one or more conditional expressions in the application during execution of the application;
creating a snapshot of the application during execution of the application without stopping execution of the application and in a way that avoids downtime while maintaining state when the one or more conditional expressions is evaluated as being true, wherein the snapshot creates an entire copy of all memory pages for the application;
creating one or more additional snapshots of the application when the evaluation subsequently indicates the one or more conditional expression to be true during execution of the application; and
collecting and comparing data from the snapshot and the one or more additional snapshots using a diagnostic tool to identify one or more trends.

US Pat. No. 10,795,672

AUTOMATIC GENERATION OF MULTI-SOURCE BREADTH-FIRST SEARCH FROM HIGH-LEVEL GRAPH LANGUAGE FOR DISTRIBUTED GRAPH PROCESSING SYSTEMS

ORACLE INTERNATIONAL CORP...

1. A method comprising:analyzing a first plurality of software instructions, wherein the first plurality of software instructions is configured to perform a plurality of breadth-first searches to determine a particular result, wherein each breadth-first search of the plurality of breadth-first searches originates at each vertex of a plurality of vertices of a distributed graph, wherein each breadth-first search is encoded for independent execution;
based on said analyzing, generating a second plurality of software instructions configured to perform a multi-source breadth-first search to determine the particular result, wherein each vertex of the plurality of vertices is a source of the multi-source breadth-first search;
wherein the second plurality of software instructions comprises a node iteration loop and a neighbor iteration loop, wherein the plurality of vertices of the distributed graph comprises active vertices and neighbor vertices;
wherein the node iteration loop is configured to iterate once per each active vertex of the plurality of vertices of the distributed graph, wherein the node iteration loop is configured to determine the particular result;
wherein the neighbor iteration loop is configured to iterate once per each active vertex of the plurality of vertices of the distributed graph, wherein each iteration of the neighbor iteration loop is configured to activate one or more neighbor vertices of the plurality of vertices for a following iteration of the neighbor iteration loop.
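For illustration, a minimal Python sketch of the multi-source breadth-first-search pattern the generated code embodies: every vertex is a source, a node loop walks the active vertices, and a neighbor loop activates vertices for the next iteration. The bitmask bookkeeping and in-memory adjacency list are assumptions, not the claimed generated instructions.

def ms_bfs_levels(adj):
    """Return dict[(source, vertex)] -> BFS level, for all sources at once."""
    n = len(adj)
    seen = [1 << v for v in range(n)]      # source v has trivially reached v
    frontier = [1 << v for v in range(n)]  # bit s set => source s is active at v
    levels = {(v, v): 0 for v in range(n)}
    depth = 0
    while any(frontier):
        depth += 1
        nxt = [0] * n
        for v in range(n):                 # node iteration loop over active vertices
            if not frontier[v]:
                continue
            for w in adj[v]:               # neighbor iteration loop
                new_sources = frontier[v] & ~seen[w]
                if new_sources:
                    nxt[w] |= new_sources  # activate w for the following iteration
                    seen[w] |= new_sources
                    s = new_sources
                    while s:
                        bit = s & -s
                        levels[(bit.bit_length() - 1, w)] = depth
                        s ^= bit
        frontier = nxt
    return levels

if __name__ == "__main__":
    graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(ms_bfs_levels(graph)[(0, 3)])  # distance from source 0 to vertex 3 -> 3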

US Pat. No. 10,795,671

AUDIOVISUAL SOURCE CODE DOCUMENTATION

International Business Ma...

1. A method of audiovisually documenting source code in an integrated development environment, the method comprising:initiating by a computing device associated with a first terminal, a knowledge transfer session for discussion of source code and generation of audiovisual source code documentation explaining segments of source code from a code base;
displaying within an integrated development environment executing on the first terminal an audiovisual interface containing a segment of code from the code base;
recording audio with a recording device during the knowledge transfer session;
receiving code tracking indicators and off-code tracking indicators from an optical tracking device operated by a user at the first terminal when the user is reviewing and visually focused on the segment of code and when the user is reviewing and visually focused away from the segment of code, respectively;
determining by the computing device via the code tracking indicators a module of the segment of code visually under review;
determining portions of the recorded audio corresponding to the code tracking indicators when the user is reviewing and visually focused on the segment of code;
determining further portions of the recorded audio corresponding to the off-code tracking indicators between when the user was and has returned to reviewing and being visually focused on the segment of code;
associating the portions and the further portions of the recorded audio with the determined module of the segment of code to generate audiovisual source code documentation; and
terminating the knowledge transfer session.

US Pat. No. 10,795,670

DEVELOPER COLLABORATION CONTROL SYSTEM

Roblox Corporation, San ...

1. A method comprising:transmitting, by a server device, a first copy of a committed version of source code to a first client device and a second copy of the committed version of the source code to a second client device for real-time collaborative editing of the source code on a source control platform;
receiving, from the first client device, a selection of a first presentation type from a plurality of presentation types, wherein each of the plurality of presentation types specify, for one or more other users of the source control platform, corresponding access privileges to source code changes made by a first user associated with the first client device to the first copy of the committed version of the source code, wherein the first presentation type specifies first access privileges to the source code changes and a second presentation type specifies second access privileges to the source code changes, wherein the second access privileges of the second presentation type are different from the first access privileges of the first presentation type;
receiving, from the first client device, first source code changes to a part of the source code of the first copy of the committed version;
transmitting, to the second client device, the first source code changes and instructions for real-time presentation of the first source code changes with the second copy of the committed version at the second client device in accordance with the first presentation type; and
storing the first source code changes in a first record of changes that is associated with the first user of the first client device.

US Pat. No. 10,795,669

SYSTEMS AND METHODS FOR INTEGRATING SOFTWARE SOURCE CONTROL, BUILDING, AND TESTING APPLICATIONS

ServiceNow, Inc., Santa ...

1. A system, comprising:a virtual server configured to host a software application, wherein a plurality of management applications is configured to generate a plurality of change events based on a plurality of client devices generating a plurality of changes to the software application, wherein a first management application of the plurality of management applications is developed by a different third party than a second management application of the plurality of management applications; and
an information technology platform comprising:
an endpoint configured to read the plurality of change events;
a change event queue configured to store the plurality of change events;
a change event processor configured to monitor the change event queue, and, for each change event of the plurality of change events:
determine a respective management application of the plurality of management applications that generated a respective change event; and
send the respective change event to a respective handler of a plurality of handlers corresponding to the respective management application; and
the plurality of handlers configured to process the plurality of change events based on the plurality of management applications to update the software application, wherein the plurality of handlers comprise the respective handler configured to process the respective change event based on the respective management application to update the software application.
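For illustration, a minimal Python sketch of the dispatch pattern described: queued change events are examined to determine which management application produced them and are routed to the matching handler. The handler names and event fields are invented stand-ins.

import queue

class ChangeEvent:
    def __init__(self, source_app, payload):
        self.source_app = source_app   # which management application produced it
        self.payload = payload

def scm_handler(event):       # hypothetical handler for one management application
    print("updating software application from SCM change:", event.payload)

def tracker_handler(event):   # hypothetical handler for a different vendor's app
    print("updating software application from tracker change:", event.payload)

HANDLERS = {"scm": scm_handler, "tracker": tracker_handler}

def process_change_events(change_event_queue):
    # monitor the queue and route each event to the handler for its source app
    while True:
        try:
            event = change_event_queue.get_nowait()
        except queue.Empty:
            return
        handler = HANDLERS.get(event.source_app)
        if handler is not None:
            handler(event)

if __name__ == "__main__":
    q = queue.Queue()
    q.put(ChangeEvent("scm", {"commit": "abc123"}))
    q.put(ChangeEvent("tracker", {"issue": "PROJ-1", "status": "done"}))
    process_change_events(q)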

US Pat. No. 10,795,668

SOFTWARE VERSION SYNCHRONIZATION FOR AVIONICS SYSTEMS

HAMILTON SUNDSTRAND CORPO...

1. An assembly for an aircraft comprising:a control module including a processor and a local memory that stores a first instance of operational software executable by the processor and that relates to functionality of the control module to selectively control a vehicle system;
a backplane memory device coupled to the control module by a common backplane, the backplane memory device including shadow memory that stores a second instance of the operational software; and
a housing and a plurality of line replaceable modules including the control module, the housing at least partially receiving the common backplane and the plurality of line replaceable modules coupled to the common backplane; and
wherein the control module includes a synchronization module that accesses the memory space of the second instance of the operational software in the backplane memory device, compares the first instance and the second instance, and replaces the first instance with a local copy of the second instance in response to at least one predetermined criterion being met.

US Pat. No. 10,795,667

FACILITATING DATA TYPE DETECTION USING EXISTING CODE

MICROSOFT TECHNOLOGY LICE...

1. A computing system comprising:a processor; and
a computer storage memory having computer-executable instructions stored thereon which, when executed by the processor, configure the computing system to:
search existing code to identify a set of functions related to a target data type;
execute one or more functions of the set of functions using positive example values and negative example values;
for each executed function of the set of functions, generate a corresponding logical explanation that represents a distinction in execution of the positive example values from the negative example values;
rank the executed one or more functions of the set of functions based on the extent to which the corresponding logical explanations distinguish execution of the positive example values from the negative example values; and
provide a function suggestion corresponding with at least a highest ranked executed function of the set of functions, wherein the function suggestion indicates a function for use in detecting the target data type.
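For illustration, a minimal Python sketch of the ranking idea: candidate functions are executed on positive and negative example values and ranked by how cleanly their results separate the two sets. The candidate functions, example values, and score are invented stand-ins, not the claimed logical explanations.

def looks_like_date(s):
    parts = s.split("-")
    return len(parts) == 3 and all(p.isdigit() for p in parts)

def is_numeric(s):
    return s.replace(".", "", 1).isdigit()

CANDIDATES = [looks_like_date, is_numeric]
POSITIVES = ["2020-10-06", "1999-01-31"]      # values of the target data type
NEGATIVES = ["hello", "12.5", "2020/10/06"]   # values that are not the type

def separation_score(fn):
    """Fraction of examples the function classifies on the 'right' side."""
    hits = sum(1 for v in POSITIVES if fn(v))
    rejects = sum(1 for v in NEGATIVES if not fn(v))
    return (hits + rejects) / (len(POSITIVES) + len(NEGATIVES))

if __name__ == "__main__":
    ranked = sorted(CANDIDATES, key=separation_score, reverse=True)
    for fn in ranked:
        print(fn.__name__, separation_score(fn))
    print("suggested detector:", ranked[0].__name__)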

US Pat. No. 10,795,666

TECHNIQUES FOR WEB APPLICATION UPDATES

WHATSAPP INC., Menlo Par...

1. A computer-implemented method, comprising:sending a request for a web application from a web browser on a client device to an application server;
receiving a service worker web application, operative to load and maintain the requested web application, from the application server in response to the request for the web application;
executing the service worker web application in the web browser;
sending an application download request from the service worker web application to the application server;
receiving, at the service worker web application from the application server, the web application in response to the application download request;
executing the web application in the web browser;
sending, by the service worker web application, an application update request for the web application to the application server, the application update request comprising a cached version indicator indicating a version of the web application executing on the client device;
receiving, from the application server, a delta update representing a difference between the version of the web application executing on the client device and a most current version of the web application on the application server; and
applying the delta update by the service worker web application to update the version of the web application executing on the client device to the most current version of the web application.

US Pat. No. 10,795,665

RELAY DEVICE AND HOT WATER SUPPLY DEVICE

NORITZ CORPORATION, Hyog...

1. A relay device communicably connected between a hot water supply system and a management device of the hot water supply system, the relay device comprising:a first communication part configured to transmit/receive information to/from the hot water supply system via a communication line;
a second communication part configured to transmit/receive information to/from the management device via a communication network;
a storage part comprising a program storage area; and
a control part controlling operations of the first communication part and the second communication part and writing and reading to/from the program storage area,
wherein the first communication part receives, from each of a plurality of devices which are components of the hot water supply system, identification information and version information of software of the devices,
the second communication part receives an update program for software update of the devices from the management device,
the control part writes the update program received by the second communication part to the program storage area and extracts one or more software update target devices from the devices based on the identification information and the version information of the devices, and
when there are more than one software update target device, the control part sequentially selects one of the software update target devices and transmits the update program stored in the program storage area to the selected one device by the first communication part.

US Pat. No. 10,795,664

SYSTEMS AND METHODS FOR DIFFERENTIAL BUNDLE UPDATES

Walmart Apollo, LLC, Ben...

1. A system, comprising:at least one processor operatively coupled with a data store, the at least one processor configured to:
receive a request message from a client device, wherein the request message identifies a client product version number and a differential update indication;
determine whether the client device supports differential updates based on the differential update indication;
when differential updates are supported:
identify a differential bundle based on a difference between the client product version number and a current product version number, wherein the differential bundle comprises a set of bytewise differences between an executable client product binary file associated with the client product version number and an executable current product binary file associated with the current product version number;
determine whether the differential bundle is available in the data store;
retrieve the differential bundle from the data store in response to determining that the differential bundle is available in the data store;
produce the differential bundle in response to determining that the differential bundle is not available in the data store; and
send the differential bundle to the client device; and
when differential updates are not supported, send a full bundle to the client device, wherein the full bundle comprises the executable current product binary file.
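For illustration, a minimal Python sketch of a bytewise differential bundle under the simplifying assumption that the two binaries have equal length; real products would use a proper binary diff format, so this is illustration only.

def make_differential_bundle(old: bytes, new: bytes):
    """Record only the (offset, new_byte) pairs where the binaries differ."""
    return [(i, b) for i, (a, b) in enumerate(zip(old, new)) if a != b]

def apply_differential_bundle(old: bytes, bundle):
    patched = bytearray(old)
    for offset, value in bundle:
        patched[offset] = value
    return bytes(patched)

if __name__ == "__main__":
    v1 = b"product binary v1.0"
    v2 = b"product binary v2.3"
    bundle = make_differential_bundle(v1, v2)
    print(bundle)                                   # only the changed bytes
    assert apply_differential_bundle(v1, bundle) == v2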

US Pat. No. 10,795,663

ELECTRONIC UPDATE HANDLING BASED ON USER ACTIVITY

International Business Ma...

1. A system for performing a computer program update on a target computer comprising:a memory comprising instructions;
a bus coupled to the memory; and
a processor coupled to the bus that is configured to execute the instructions and causes the system to:
determine a target computer having a location, a user, a computer program, and a computer program update;
determine an expected install duration for installing the computer program update, the install duration being a length of time that would be necessary to install the computer program update;
monitor a social media service associated with the user and for detecting a user location from a social media post on the social media service;
estimate an update time window, the update time window being a length of time that the user will be away from the target computer location that is determined based on a travel distance between the user location and the target computer and an estimated amount of time that the user will spend at the user location; and
begin an installation of the computer program update based on the update time window and the expected install duration.
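For illustration, a minimal Python sketch of the scheduling arithmetic: the update window is estimated from travel time to the posted location plus an assumed dwell time there, and installation starts only if that window covers the expected install duration. The speed and dwell figures are invented assumptions.

def estimated_window_minutes(distance_km, dwell_minutes, avg_speed_kmh=40):
    travel_minutes = distance_km / avg_speed_kmh * 60
    return 2 * travel_minutes + dwell_minutes   # there and back, plus time spent there

def should_start_install(distance_km, dwell_minutes, install_minutes):
    return estimated_window_minutes(distance_km, dwell_minutes) >= install_minutes

if __name__ == "__main__":
    # user posted from a location 10 km away, typically stays ~45 minutes
    print(estimated_window_minutes(10, 45))   # 75.0 minutes away from the target computer
    print(should_start_install(10, 45, 60))   # True: a 60-minute install fits the window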

US Pat. No. 10,795,662

SCALABLE ARTIFACT DISTRIBUTION

salesforce.com, inc., Sa...

1. A cloud computing system, comprising:a processing device; and
a memory device coupled to the processing device, the memory device having instructions stored thereon that, in response to execution by the processing device, cause the processing device to:
determine a nearest upstream computing system in a network of computing systems;
request a list of artifacts received by the nearest upstream computing system since a last synchronization time, the artifacts including files of a continuous integration (CI) process;
when a new artifact is available, get metadata and a chunk list for the new artifact and update a last synchronization time indicator, and write an artifact record for the new artifact into a queue, the artifact record including the metadata and the chunk list;
poll the queue for entries for the artifact records of artifacts;
when an entry exists for the new artifact, get the new artifact from the nearest upstream computing system;
divide the new artifact into a plurality of chunks according to the chunk list;
when the plurality of chunks is available for distribution, store the plurality of chunks into a shared storage accessible by downstream computing systems of the network;
determine one or more of the downstream computing systems to receive the new artifact;
receive requests from the one or more downstream computing systems for chunks of the new artifact;
send the chunks to the one or more downstream computing systems from the shared storage for reconstruction of the new artifact from the chunks.
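For illustration, a minimal Python sketch of the chunking step: a new artifact is divided into fixed-size chunks, stored in shared storage keyed by digest, and reassembled downstream from the chunk list. The chunk size and storage model are assumptions.

import hashlib

CHUNK_SIZE = 4  # tiny on purpose; real systems use much larger chunks

def make_chunk_list(artifact: bytes):
    chunks = [artifact[i:i + CHUNK_SIZE] for i in range(0, len(artifact), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks], chunks

def store_chunks(shared_storage: dict, chunk_ids, chunks):
    for cid, chunk in zip(chunk_ids, chunks):
        shared_storage[cid] = chunk              # shared storage keyed by digest

def reconstruct(shared_storage: dict, chunk_ids):
    return b"".join(shared_storage[cid] for cid in chunk_ids)

if __name__ == "__main__":
    artifact = b"ci-build-artifact-42"
    ids, chunks = make_chunk_list(artifact)
    storage = {}
    store_chunks(storage, ids, chunks)
    assert reconstruct(storage, ids) == artifact   # downstream reassembly
    print(len(ids), "chunks")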

US Pat. No. 10,795,661

VEHICLE CONTROLLER, PROGRAM UPDATING METHOD, AND NON-TRANSITORY STORAGE MEDIUM THAT STORES PROGRAM FOR UPDATING PROGRAM

TOYOTA JIDOSHA KABUSHIKI ...

1. A vehicle controller comprising:a memory storing:
a first program storage area configured to store a control program for controlling a vehicle, and
a second program storage area configured to store an update program that is an updated version of the control program; and
a processor programmed to:
perform a first vehicle control by executing the control program for controlling the vehicle;
acquire update data from a device located outside the vehicle via a network, wherein the update data constitutes the update program when the processor fully downloads the update data from the device,
store the update program in the second program storage area, regardless of whether the processor is executing the control program,
obtain a first execution output that is a result of executing the control program, and
obtain a second execution output that is a result of executing the update program.

US Pat. No. 10,795,660

LIVE CODE UPDATES

Twitter, Inc., San Franc...

1. A method performed by a live code update service to perform a live code update on source code of an application, the method comprising:receiving a request to perform a live code update of an application that is currently running on an application computer and that includes a plurality of built-in reflection API classes that implement a built-in reflection API, wherein the live code update is from an earlier version to a current version of the source code, wherein the earlier version of the source code includes a first user class, and wherein the first user class is updated in the current version of the source code;
compiling a subset of the current version of the source code to produce an updated bytecode, including generating, in the updated bytecode, a first instance representation class that inherits from a superclass of the first user class, the first instance representation class comprising a class identifier identifying the first user class and a reference to a representation object map, wherein the representation object map for the first instance representation class stores, for each instance of the first user class, the respective instance of the first instance representation class corresponding to the instance of the first user class;
replacing, in the updated bytecode, each instance of the first user class with a respective instance of the first instance representation class corresponding to the instance of the first user class and according to the representation object map;
replacing, in the updated bytecode, one or more built-in reflection API classes in the plurality of built-in reflection API classes with one or more replacement reflection API classes, comprising:
identifying, in the source code, a first call to a first method implemented by a first built-in reflection API class, wherein the first method, when executed, returns one or more properties of an instance of the first user class identified in an argument to the first method,
identifying, in the source code, built-in reflection API classes having methods that are called with arguments identifying the instances of the first user class;
replacing the identified built-in reflection API classes with respective replacement reflection API classes implementing the same methods as implemented by the identified built-in reflection API classes;
replacing the first built-in reflection API class with a first replacement reflection API class that implements the first method and handles all calls to the first method in the application; and
updating application bytecode of the application with the updated bytecode on the application computer while the application is running.

US Pat. No. 10,795,659

SYSTEM AND METHOD FOR LIVE PATCHING PROCESSES IN USER SPACE

Virtuozzo International G...

1. A system for applying a patch to a running process in user space comprising:a process executing in user space in an operating system executed by a hardware processor; and
a patcher configured to:
suspend execution of the process, wherein a memory address space of the process contains binary code executed in the process, and wherein the binary code comprises one or more symbols;
map a binary patch to the memory address space of the process, wherein the binary patch contains amendments to the binary code, wherein the binary patch references a portion of the one or more symbols, and wherein the binary patch contains metadata indicating offsets of the portion of the one or more symbols, wherein the patcher is configured to map the binary patch by:
injecting parasite code into the process;
transferring control to the parasite code; and
mapping, using the parasite code, the binary patch to the memory address space by executing instructions in the parasite code;
resolve the portion of the one or more symbols using the offsets in the metadata; and
resume execution of the process.

US Pat. No. 10,795,658

UPDATABLE RANDOM FUNCTIONS

FUJITSU LIMITED, Kawasak...

1. A method comprising:generating public parameters associated with a random updatable function;
generating, based at least in part on the public parameters, an initial randomness output that includes a first random element and a first state, wherein the initial randomness output is used by a plurality of computers in zero-knowledge proofs in building of a blockchain; and
generating, without multiparty computation by a plurality of computers, a new randomness output that includes a third random element and a second state, wherein the third random element is generated based at least in part on the public parameters, the first state of the initial randomness output, and a second random element, and wherein the new randomness output is used by the plurality of computers in the zero-knowledge proofs in building the blockchain.

US Pat. No. 10,795,657

METHOD OF MANAGING APPLICATIONS AND COMPUTING DEVICE USING THE SAME

Samsung Electronics Co., ...

1. A method of managing at least one application installed on a computing device, the method comprising:identifying the at least one application based on usage data of the computing device that includes a location of the computing device at different times of a day;
fetching archive data and user data corresponding to the at least one application, the archive data including executable data for installing the at least one application and the user data including information for installing the at least one application in a state in which the at least one application was last used on the computing device;
creating backup data by correlating the archive data with the user data; and
uninstalling the at least one application.

US Pat. No. 10,795,656

DEPLOYING AN APPLICATION IN A CLOUD COMPUTING ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method for deploying an application in a cloud computing environment, the method comprising:collecting, when a user is deploying an application, metadata and instructions on deploying the application, the metadata comprising service metadata, application metadata and topology metadata, wherein the service metadata comprises metadata on a service required for deploying the application, the application metadata comprises metadata on the application, and the topology metadata comprises metadata indicative of a relationship between the service and the application;
analyzing the collected metadata and instructions to remove redundant metadata and instructions by, in response to there being a delete instruction with respect to a certain service, removing all delete instructions with respect to the service and relevant metadata, and removing all create instructions with respect to the service and relevant metadata before a last delete instruction of the delete instructions with respect to the service; and
storing, according to an operational order in which the user deploys the application, the redundancy-removed metadata and instructions as a model for re-deploying the application.
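For illustration, a minimal Python sketch of the stated redundancy-removal rule: when a service has a delete instruction, all of its delete instructions are dropped along with every create instruction for that service that precedes its last delete. The (op, service) tuple encoding is an assumption for illustration.

def remove_redundant(instructions):
    """instructions: ordered list of (op, service) tuples, op in {'create', 'delete'}."""
    last_delete = {}
    for idx, (op, svc) in enumerate(instructions):
        if op == "delete":
            last_delete[svc] = idx

    kept = []
    for idx, (op, svc) in enumerate(instructions):
        if svc in last_delete:
            if op == "delete":
                continue                          # all deletes for the service go away
            if op == "create" and idx < last_delete[svc]:
                continue                          # created then later deleted: redundant
        kept.append((op, svc))
    return kept

if __name__ == "__main__":
    ops = [("create", "db"), ("create", "cache"), ("delete", "cache"),
           ("create", "cache"), ("delete", "cache"), ("create", "web")]
    print(remove_redundant(ops))   # [('create', 'db'), ('create', 'web')]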

US Pat. No. 10,795,654

MECHANISMS FOR DECLARATIVE EXPRESSION OF DATA TYPES FOR DATA STORAGE

Kaseya International Limi...

1. A system comprising:a processor device;
a memory in communication with the processor device; and
a storage device that stores a program of computing instructions for execution by the processor using the memory, the program comprising instructions configured to cause the processor to:
receive instances expressed in a modeling language;
auto-generate by a modeling language platform pieces of software for processing the received instances by deconstructing the received instances into pieces of code according to a language infrastructure of the modeling language platform, which language infrastructure includes a language grammar, symbols, and user-defined words, with the language grammar specifying order and specific combinations in which the symbols appear in valid instances; and
transform the deconstructed instances into a common set of data types expressed as platform independent code and artifact files using the modeling language platform that includes the language grammar and a set of language processing rules to transform the instances.

US Pat. No. 10,795,653

TARGET ARCHITECTURE DETERMINATION

Micron Technology, Inc., ...

1. A method, comprising:separating a source code from other portions of source code based on a named address space defined in the source code;
determining a type of target architecture based on the named address space defined in source code;
creating a first portion of compiled code that includes instructions for the type of target architecture, wherein the first portion of compiled code includes a number of homogeneous commands associated with homogeneous target architectures and a first number of heterogeneous commands associated with heterogeneous target architectures;
creating a second portion of compiled code that includes instructions for a base processor, wherein the second portion of compiled code includes the number of homogeneous commands and a second number of heterogeneous commands associated with the homogeneous target architectures;
the target architecture only executing the first portion of compiled code; and
the base processor only executing the second portion of compiled code.

US Pat. No. 10,795,652

GENERATING NATIVE CODE FROM INTERMEDIATE LANGUAGE CODE FOR AN APPLICATION

Microsoft Technology Lice...

1. A method comprising:receiving, at a computing device, machine dependent intermediate language code (MDIL code) generated by an online provider for an application;
at the computing device, installing the application on the computing device by generating a native image for the application by binding the MDIL code, the generating the native image comprising:
binding a portion of the MDIL code with one or more libraries on the computing device; and
storing the native image on the computing device for use when loading the application for execution.

US Pat. No. 10,795,651

METHOD AND APPARATUS FOR COMPILING SOURCE CODE OBJECT, AND COMPUTER

Huawei Technologies Co., ...

1. A method, carried out by a compiler, for compiling objects in a source code, the method comprising:first determining, by the compiler, that a to-be-compiled first object in the source code is a first object type that can only be operated by one thread at one moment, wherein a first counter is set for the to-be-compiled first object;
first setting, by the compiler in response to the first determining, a first counter counting rule for the first counter of the to-be-compiled first object;
second determining, by the compiler, that a to-be-compiled second object in the source code is a second type that can be simultaneously operated by multiple threads, wherein a second counter is set for the to-be-compiled second object; and
second setting for the second counter, by the compiler, in response to the second determining:
a second counter counting rule,
a counter locking rule, and
a counter unlocking rule, wherein the second counter counting rule increments the second counter when the second object is operated by a thread, the counter locking rule locks the second counter when the second object is operated by the thread, and the counter unlocking rule unlocks the second counter when the second object is released by the thread.

US Pat. No. 10,795,650

CODE LINEAGE TOOL

Bank of America Corporati...

1. A code lineage tool comprising:a scanner configured to identify a plurality of elements in extract, transform, load (ETL) software code by scanning the ETL software code; and
a hardware processor configured to implement:
a parser configured to:
determine, based on a stored grammar file, that a first element of the plurality of elements is a function whose value is affected by a second element of the plurality of elements;
determine, based on the stored grammar file, that the second element is a variable used when calling the first element;
add the first element to a parse tree;
add the second element to the parse tree as a sub-node of the first element in response to the determination that the second element is used when calling the first element;
determine, based on the stored grammar file, that a value of the second element is affected by a third element of the plurality of elements;
determine, based on the stored grammar file, that the third element is a variable;
determine, based on the stored grammar file, that the third element is used when calling the first element;
add the third element to the parse tree as a sub-node of the second element in response to the determination that the third element is used when calling the first element;
determine, that a fourth element of the plurality of elements affects the value of the second element; and
in response to the determination that the fourth element affects the value of the second element, add the fourth element to the parse tree as a sub-node of the second element; and
an integrator configured to:
determine that the third element is not needed when calling the first element when the second element is provided to the first element;
determine, based on the parse tree, that a change to the fourth element will change the value of the first element;
generate a lineage for the first element, the lineage comprising an identification of the first element and the fourth element and an indication that the first element is based on the fourth element;
exclude the third element from the lineage in response to the determination that the third element is not needed when calling the first element when the second element is provided to the first element;
determine, based on the parse tree, that when a change to a fifth element of the software code occurs, no corresponding change occurs in the first, second, and third elements;
exclude the fifth element from the lineage in response to the determination that no corresponding change occurs when the change to the fifth element occurs; and
a delivery engine configured to email the lineage for presentation to a user who requested that the software code be scanned.
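For illustration, a minimal Python sketch of the dependency walk behind a lineage report: starting from a target element, it collects the upstream elements whose change would change the target. The parse results are hard-coded here, and the claim's finer exclusion rules (elements not needed when another element is supplied) are not modeled.

# element -> elements that directly affect its value (assumed parse results)
AFFECTED_BY = {
    "load_total": ["order_amount"],          # function called with order_amount
    "order_amount": ["unit_price", "qty"],   # variable derived from two others
}

def lineage(element, affected_by):
    """Return the transitive set of elements whose change can change `element`."""
    seen, stack = set(), list(affected_by.get(element, []))
    while stack:
        dep = stack.pop()
        if dep in seen:
            continue
        seen.add(dep)
        stack.extend(affected_by.get(dep, []))
    return seen

if __name__ == "__main__":
    deps = lineage("load_total", AFFECTED_BY)
    print(sorted(deps))  # ['order_amount', 'qty', 'unit_price']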

US Pat. No. 10,795,649

CUSTOM CODE BLOCKS FOR A VISUAL PLAYBOOK EDITOR

Splunk Inc., San Francis...

1. A computer-implemented method comprising:causing display of a graphical user interface (GUI) including a visual playbook editor for editing a playbook, wherein the playbook represents computer program source code including a collection of related function blocks that define a series of operations to be performed in response to identification of an incident in an information technology (IT) environment, and wherein the collection of related function blocks is represented by a graph displayed in the visual playbook editor;
receiving input via the visual playbook editor, the input including:
first input causing addition, to the graph, of a node representing a custom code function block, wherein the custom code function block is represented by a graphical icon that visually indicates that the custom code function block is associated with user-generated source code to be executed as part of execution of the playbook,
second input specifying one or more output variables associated with the custom code function block, and
third input creating a connection between the node representing the custom code function block and at least one other node of the graph representing the playbook; and
causing display of the node representing the custom code function block as part of the graph displayed in the visual playbook editor.

US Pat. No. 10,795,648

SYSTEMS AND METHODS OF DEVELOPMENTS, TESTING, AND DISTRIBUTION OF APPLICATIONS IN A COMPUTER NETWORK

Google LLC, Mountain Vie...

1. A method comprising:receiving, at a server having a mobile application publishing platform, application information including an application having one or more product modules;
prior to publishing the application via the mobile application publishing platform:
determining, at the server and based on the application information, a plurality of related product modules;
dynamically determining, at the server, an affinity score for each product module from the plurality of related product modules;
providing, by the server and based on the affinity score, at least one of the plurality of related product modules for selection;
responsive to receiving a request to test one or more of the at least one of the plurality of related product modules for a performance goal, determining, at the server, respective performance levels for the one or more of the at least one of the plurality of related product modules;
providing, by the server and based on the respective performance levels, an updated indication of the at least one of the plurality of related product modules;
receiving, at the server, a selection of a product module from the at least one of the plurality of related product modules; and
generating, at the server, an updated application that includes the selected product module in place of one of the one or more product modules of the application; and
publishing, by the server, the updated application via the mobile application publishing platform.

US Pat. No. 10,795,647

APPLICATION DIGITAL CONTENT CONTROL USING AN EMBEDDED MACHINE LEARNING MODULE

Adobe, Inc., San Jose, C...

1. In a digital medium environment to control output of digital content in a context of execution of an application, a method implemented by a client device, the method comprising:monitoring, by the client device, user interaction within the context of the application executed by the client device;
training using a machine learning module selected from a software development kit, by the client device, a model embedded as part of the application;
receiving, by the client device, data via a network from a service provider system, the data describing a plurality of items of digital content that are available to the client device;
generating, by the client device, a recommendation by processing the data using machine learning based on the model, the generating is performed without exposing the model outside of the context of execution of the application;
transmitting, by the client device, the recommendation for receipt by the service provider system via the network;
receiving, by the client device, at least one of the plurality of items of digital content via the network from the service provider system in response to the transmitted recommendation; and
outputting, by the client device, the received at least one item of digital content within the context of execution of the application by the client device.

US Pat. No. 10,795,646

METHODS AND SYSTEMS THAT GENERATE PROXY OBJECTS THAT PROVIDE AN INTERFACE TO THIRD-PARTY EXECUTABLES

VMware, Inc., Palo Alto,...

1. An automated system that generates a proxy-class interface for an external executable that is used by a task included in a workflow executed by a workflow-execution engine to access external methods, the automated system comprising:one or more processors;
one or more memories;
one or more mass-storage devices; and
computer instructions, stored in one or more of the one or more memories that, when executed by one or more of the one or more processors, control the automated system to
generate one or more proxy-classes, each proxy class having one or more methods that each delegates execution to a corresponding executable method in the external executable by
searching for an external-executable entrypoint that references the corresponding executable method in the external executable,
when an external-executable entrypoint is found,
calling the corresponding external-executable entrypoint, and
when the external-executable entrypoint is not found,
returning an error, and
store, in one or more of the one or more mass-storage devices, the one or more generated proxy classes as a proxy-class interface.
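For illustration, a minimal Python sketch of the delegation pattern: a generated proxy method looks up the corresponding entrypoint on an external executable (modeled here as a plain Python object) and either calls it or returns an error when the entrypoint is not found. The class and method names are invented.

class ExternalExecutable:          # stand-in for a third-party executable
    def create_vm(self, name):
        return f"created {name}"

def make_proxy_class(external, method_names):
    """Generate a proxy class whose methods delegate to `external`."""
    def make_method(name):
        def method(self, *args, **kwargs):
            entrypoint = getattr(external, name, None)   # search for the entrypoint
            if entrypoint is None:
                return {"error": f"entrypoint {name!r} not found"}
            return entrypoint(*args, **kwargs)           # delegate the call
        method.__name__ = name
        return method

    return type("ExternalProxy", (), {n: make_method(n) for n in method_names})

if __name__ == "__main__":
    Proxy = make_proxy_class(ExternalExecutable(), ["create_vm", "delete_vm"])
    proxy = Proxy()
    print(proxy.create_vm("test-vm"))    # delegated call
    print(proxy.delete_vm("test-vm"))    # missing entrypoint -> error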

US Pat. No. 10,795,645

NEURAL NETWORK FOR PROGRAM SYNTHESIS

Microsoft Technology Lice...

1. A method comprising:for a given domain-specific language that defines a plurality of symbols and a plurality of production rules, providing an input-output encoder and a program-generation model comprising a neural network, the input-output encoder and the neural network having been trained on a plurality of programs within the domain-specific language and a plurality of respective training sets of input-output examples associated with the programs, wherein, for each of the plurality of programs and its associated training set, each input-output example of the training set comprises a pair of an input to the program and a corresponding output produced by the program from the input;
providing a test set of input-output examples for a target program;
using one or more hardware processors to perform operations for generating the target program based on the test set of input-output examples, the operations comprising:
encoding the test set of input-output examples using the input-output encoder;
conditioning the program-generation model on the encoded set of input-output examples; and
using the neural network to generate a program tree representing the target program by iteratively expanding a partial program tree, beginning with a root node and ending when all leaf nodes are terminal, based on a computed probability distribution for a set of valid expansions, wherein leaves in the program tree and the partial program tree represent symbols in the domain-specific language and wherein non-leaf interior nodes in the program tree and the partial program tree represent production rules in the domain-specific language.
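For illustration, a minimal Python sketch of the decoding loop only: a partial program tree is expanded rule by rule from a probability distribution over valid productions until every leaf is terminal. The toy grammar and the uniform distribution stand in for the trained input-output encoder and neural network.

import random

GRAMMAR = {                      # non-terminal -> list of possible expansions
    "EXPR": [["NUM"], ["EXPR", "+", "EXPR"]],
    "NUM": [["1"], ["2"], ["3"]],
}

def predict_distribution(symbol, rules):
    # stand-in for the conditioned model: uniform over the valid expansions
    return [1.0 / len(rules)] * len(rules)

def expand_program_tree(root="EXPR", budget=20):
    tree = [root]
    while budget > 0:
        idx = next((i for i, s in enumerate(tree) if s in GRAMMAR), None)
        if idx is None:
            return tree                      # all leaves are terminal symbols
        rules = GRAMMAR[tree[idx]]
        probs = predict_distribution(tree[idx], rules)
        chosen = random.choices(rules, weights=probs, k=1)[0]
        tree[idx:idx + 1] = chosen           # expand the partial program tree
        budget -= 1
    # budget exhausted: finish remaining non-terminals with their first rule
    while any(s in GRAMMAR for s in tree):
        i = next(i for i, s in enumerate(tree) if s in GRAMMAR)
        tree[i:i + 1] = GRAMMAR[tree[i]][0]
    return tree

if __name__ == "__main__":
    random.seed(7)
    print(" ".join(expand_program_tree()))   # prints a small arithmetic expression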

US Pat. No. 10,795,644

DECENTRALIZED RANDOM NUMBER GENERATOR

1. A method comprising:selecting a first node of a node pool within a large-scale decentralized network based on a block generation order, where the first node is selected to generate a current block;
adding a first signature share of the first node to the current block;
adding at least a second signature share from a previously selected node based on the block generation order to the current block;
generating a random sequence based on the first signature share and the second signature share;
adding the random sequence to the current block; and
publishing the current block to a blockchain maintained by the node pool,
wherein the block generation order indicates a pre-determined order of block generation such that each node within the node pool is selected to produce a block once per cycle.
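For illustration, a minimal Python sketch of the combination step, with hashing standing in for real threshold-signature shares: the block's random sequence is derived deterministically from the current node's share and a previously selected node's share.

import hashlib

def sign_share(node_secret: bytes, block_height: int) -> bytes:
    """Stand-in for a real signature share over the block being produced."""
    return hashlib.sha256(node_secret + block_height.to_bytes(8, "big")).digest()

def random_sequence(first_share: bytes, second_share: bytes) -> bytes:
    """Derive the block's random sequence from the collected shares."""
    return hashlib.sha256(first_share + second_share).digest()

if __name__ == "__main__":
    height = 1001
    share_a = sign_share(b"node-a-secret", height)      # current node's share
    share_b = sign_share(b"node-b-secret", height - 1)  # previously selected node
    print(random_sequence(share_a, share_b).hex())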

US Pat. No. 10,795,643

SYSTEM AND METHOD FOR RESOURCE RECONCILIATION IN AN ENTERPRISE MANAGEMENT SYSTEM

BMC Software, Inc., Hous...

1. A method to reconcile multiple instances of a single resource object using a reconciliation engine of a configuration management database (CMDB), the method comprising:receiving, via an application programming interface (API), a plurality of unreconciled resource objects from one or more data sources, each of the plurality of unreconciled resource objects representing a component of a computer system, the component of the computer system including a device, switch, router, memory, software application, or operating system;
selecting, by the reconciliation engine, an unreconciled resource object from the plurality of unreconciled resource objects;
querying, by the reconciliation engine, the CMDB to determine whether the unreconciled resource object matches with at least one resource object stored in the CMDB according to at least one of a plurality of identification rules, each of the plurality of identification rules specifying which attributes are considered when determining a match during a reconciliation process, the plurality of identification rules including a first identification rule and a second identification rule, wherein the matching includes applying the first identification rule and the second identification rule in a defined order such that, when the first identification rule does not result in a match during the reconciliation process, the second identification rule is applied during the reconciliation process;
creating, by the reconciliation engine, a new reconciled resource object in the CMDB;
merging, by the reconciliation engine, the unreconciled resource object and the at least one resource object into the new reconciled resource object according to at least one merging rule, the unreconciled resource object and the at least one resource object being different instances of a common resource object; and
storing, by the reconciliation engine, the new reconciled resource object in a reconciled dataset of the CMDB.

US Pat. No. 10,795,642

PRESERVING TEMPORAL RELEVANCE IN A RESPONSE TO A QUERY

International Business Ma...

1. A computer-implemented method for preserving temporal relevance in a response to a query, comprising:receiving a query from a user, the query containing first temporal features;
extracting the first temporal features from the query;
using the first temporal features to identify first documents within a corpus;
processing the first documents to generate first metadata and mined content;
processing the first temporal features and the first metadata and mined content to generate second documents having second temporal features corresponding to the first temporal features; and,
using the second documents to provide an answer to the query, the answer matching the second temporal features corresponding to the first temporal features, the using the second documents preserving the temporal relevance of the answer, the temporal relevance providing a measure of importance assigned to an aspect of time, characterized by a unit of temporal granularity relevant to a particular condition, context or event, the temporal relevance being based upon a measure of how granularly a temporal feature of the answer matches a temporal feature of the query.

US Pat. No. 10,795,641

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD

SONY CORPORATION, Tokyo ...

1. An information processing device, comprising:a central processing unit (CPU) configured to:
acquire a first collected speech;
execute a voice recognition process based on the first collected speech and a plurality of display objects in a first display range, wherein
the first display range includes a current display range, a second display range, and a third display range, and
the second display range is consecutively displayed before the current display range and the third display range is to be consecutively displayed after the current display range;
determine that a voice recognition result of the first collected speech satisfies a specific condition, wherein the voice recognition result of the first collected speech corresponds to the voice recognition process;
control a display screen to display a first set of display objects of the plurality of display objects in the current display range;
determine a priority of selection of the plurality of display objects for each of the current display range, the second display range, and the third display range, wherein a priority of selection of a first display object, of the first set of display objects, in the current display range is higher than a priority of selection of a second display object, of the plurality of display objects, in the third display range;
acquire a second collected speech;
rearrange, based on a voice recognition result of the second collected speech, the first set of display objects in an ascending order of a character string corresponding to the first set of display objects;
acquire a third collected speech;
determine that a voice recognition result of the third collected speech dissatisfies the specific condition, wherein the second collected speech is acquired before the third collected speech; and
select the first display object from the first set of display objects based on:
the determination that the voice recognition result of the first collected speech satisfies the specific condition,
the determined priority of the selection of the plurality of display objects, and
the determination that the voice recognition result of the third collected speech dissatisfies the specific condition, wherein the first display object and the second display object correspond to the voice recognition result of the first collected speech.

US Pat. No. 10,795,640

CONVERSATIONAL VIRTUAL ASSISTANT

UNITED SERVICES AUTOMOBIL...

1. A system comprising:a first computing device communicatively coupled to a second computing device, wherein:
the first computing device includes: a first memory configured to store data and a first set of instructions, and a first processor configured to read the first set of instructions from the first memory and to initiate a method for providing assistance, the first set of instructions comprising:
code for transmitting a request for the assistance from the first computing device to the second computing device;
code for displaying, via a conversational virtual assistant, a contact menu with options for obtaining the assistance on the first computing device; and
code for transmitting a selection of one of the options for obtaining the assistance to the second computing device;
the second computing device includes: a second memory configured to store data and a second set of instructions, and a second processor configured to read the second set of instructions from the second memory and to execute the method for providing the assistance, the second set of instructions comprising:
code for receiving the request for assistance from the first computing device;
code for launching the conversational virtual assistant on the first computing device;
code for determining at least one potential subject of the request based on information associated with a user and a tab of a mobile application or webpage from which the conversational virtual assistant was launched;
code for receiving a verification of the at least one potential subject of the request;
code for creating the contact menu with the options including timing for receiving assistance to call a representative, hours of operation for receiving assistance to chat with a representative, and timing for receiving assistance to take an action via the conversational virtual assistant;
code for recommending one of the options based on the timing for receiving the assistance, contact center information, and the at least one potential subject of the request;
code for receiving, from the first computing device, the selection of one of the options; and
code for recommending a different option in response to receiving a selection of the hours of operation.

US Pat. No. 10,795,639

SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD

Sony Corporation, Tokyo ...

1. A signal processing device comprising:a sound collection signal input unit configured to input a sound collection signal of a sound collection unit that collects a sound produced by a user at a user's location with more than two microphones disposed at more than two positions and in a circular arrangement to surround the user;
a first acoustic signal processing unit configured to perform a first acoustic signal process on the signal input by the sound collection signal input unit for reproducing a first sound field, in which the sound produced by the user is sensed as if the sound were echoing in a place specified from designated position information, based on a first transfer function that is measured in the place specified from the designated position information to indicate how a sound emitted on an acoustic closed surface side inside the place echoes in the place and then is transferred to the acoustic closed surface side, wherein the place specified from the designated position information is different from the user's location;
a second acoustic signal processing unit configured to perform a second acoustic signal process on a signal input from a rendering unit for reproducing a second sound field, in which the sound emitted from outside of the acoustic closed surface inside the place is transferred to the acoustic closed surface side in the place specified from designated position information as if the sound were the sound being emitted in the place that is a sound field reproduction target, based on a second transfer function that is measured in the place specified from the designated position information to indicate how a sound emitted from the outside of the acoustic closed surface inside the place is transferred to the acoustic closed surface side and to add an acoustic signal that is based on a sound source, wherein the sound source is object-decomposed sound source and is recorded in the place specified from the designated position information, to the signal that has undergone the first acoustic signal process; and
a sound emission control unit configured to cause a sound that is based on the signal that has undergone the first acoustic signal process by the acoustic signal processing unit to be emitted from more than two speakers disposed at more than two positions and in a circular arrangement to surround the user.

US Pat. No. 10,795,638

CONVERSATION ASSISTANCE AUDIO DEVICE PERSONALIZATION

BOSE CORPORATION, Framin...

1. A computer-implemented method of personalizing a conversation assistance audio device, the method comprising:presenting a user of the conversation assistance audio device with a set of simulated audio environments played back at the conversation assistance audio device,
wherein each simulated audio environment comprises playback at the conversation assistance audio device of a person speaking along with playback at the conversation assistance audio device of background audio;
wherein each simulated audio environment in the set of simulated audio environments comprises background audio playback at a background noise level, wherein the background noise level in all of the simulated audio environments comprises audio playback at a signal-to-noise (SNR) variation of approximately 5 decibels (dB) SNR or less, wherein the set of simulated audio environments comprises at least two simulated audio environments;
receiving feedback from the user about each simulated audio environment in the set of simulated audio environments; and
adjusting at least one audio setting at the conversation assistance audio device based upon the feedback from the user and known audio characteristics of the set of simulated audio environments and the conversation assistance audio device;
wherein adjusting the at least one audio setting comprises selecting a best-fit audio setting for the conversation assistance audio device based upon the feedback received from the user about all of the simulated audio environments in the set of simulated audio environments;
wherein each simulated audio environment in the set of simulated audio environments comprises audio playback at a signal-to-noise (SNR) range in which audibility limits intelligibility.

US Pat. No. 10,795,637

ADJUSTING VOLUME LEVELS OF SPEAKERS

DTS, Inc., Calabasas, CA...

1. A method for adjusting a volume level of a first speaker, the first speaker having a non-standardized relationship between logical volume level that is input to the speaker and sound pressure level that is produced by the speaker, the method comprising:receiving, via a user interface, a selected volume level corresponding to a sound pressure level;
accessing a software application that allows access to a stored lookup table to convert the sound pressure level to a first product-specific logical volume level for the first speaker, the relationship between logical volume level and sound pressure level for the first speaker being independent of audio content sent to the first speaker, the stored lookup table tabulating non-standardized relationships between logical volume level and sound pressure level for a plurality of product-specific speakers including the first speaker, wherein a source of data of the stored lookup table is a manufacturer of the software application; and
transmitting data corresponding to the first product-specific logical volume level to the first speaker.
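For illustration, a minimal Python sketch of the lookup: a requested sound pressure level is mapped through a per-product table to that speaker's logical volume level. The calibration values are invented assumptions.

SPL_TO_LOGICAL = {                 # per-product table; values are assumptions
    "speaker_model_a": {60: 10, 70: 18, 80: 27, 90: 40},
    "speaker_model_b": {60: 25, 70: 40, 80: 60, 90: 85},
}

def logical_volume(product: str, target_spl_db: float) -> int:
    table = SPL_TO_LOGICAL[product]
    # pick the calibrated SPL closest to the requested level
    nearest_spl = min(table, key=lambda spl: abs(spl - target_spl_db))
    return table[nearest_spl]

if __name__ == "__main__":
    # the same requested loudness maps to different logical levels per product
    print(logical_volume("speaker_model_a", 76))   # 27
    print(logical_volume("speaker_model_b", 76))   # 60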

US Pat. No. 10,795,636

INFORMATION DISPLAY REGARDING PLAYBACK QUEUE SUBSCRIPTIONS

Sonos, Inc., Santa Barba...

1. A method to be performed by one or more servers of a computing system, the method comprising:receiving, via a network interface from a first computing device, an instruction to enable subscription to a first playback queue associated with a first media playback system, wherein the first media playback system comprises a first playback device, and wherein one or more first accounts are registered with the first media playback system;
in response to the instruction, enabling one or more second user accounts to subscribe to the first playback queue via one or more wide area networks, wherein the one or more second user accounts are registered with respective second media playback systems in respective second households;
after enabling the one or more second user accounts associated with one or more second playback devices to subscribe to the first playback queue, receiving, via the network interface from a particular second media playback system, a request to subscribe to the first playback queue; and
in response to the request to subscribe to the first playback queue: (i) sending, via the network interface, one or more messages that update a control interface of the first computing device to display a subscriber indication showing that a particular second user account registered with the particular second media playback system has subscribed to the first playback queue and (ii) sending, via the network interface, one or more messages that populate a second playback queue associated with a second playback device of the particular second media playback system with audio tracks of the first playback queue.

US Pat. No. 10,795,635

SYSTEMS AND METHODS FOR SPATIAL AND TEMPORAL HIGHLIGHTING OF CHANGES TO STREAMING IMAGE DATA

Forcepoint LLC, Austin, ...

1. A computer-implementable method comprising:receiving a video stream of image frames;
accessing information indicative of changes in one or more portions of the video stream;
generating a video stream display for a video display device, wherein the video stream display includes:
the video stream; and
an overlay indicating the one or more portions of the video stream wherein the changes occur; and
generating a temporal change indicator to the video display device, indicating temporal portions of the video stream in which changes occur within the video stream;
wherein the information indicative of changes comprises pixel information received from a browser plugin and wherein the pixel information is indicative of changes occurring in one or more pixels between two successive image frames of the video stream display;
wherein the temporal change indicator indicates a percentage of pixels that change between the two successive image frames of the video stream versus a time associated with the video stream;
wherein the percentage of pixels that change comprises a percentage of pixels within a particular area of the video stream that changed during a particular period of time.
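The temporal change indicator boils down to the fraction of pixels, within a given area, that differ between two successive frames. A short illustrative sketch in plain Python, not a claim about the patented implementation:

    # Illustrative sketch only: percentage of changed pixels inside a region of interest.
    def percent_changed(frame_a, frame_b, region):
        """frame_a, frame_b: 2-D lists of pixel values; region: (x0, y0, x1, y1)."""
        x0, y0, x1, y1 = region
        total = changed = 0
        for y in range(y0, y1):
            for x in range(x0, x1):
                total += 1
                if frame_a[y][x] != frame_b[y][x]:
                    changed += 1
        return 100.0 * changed / total if total else 0.0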

US Pat. No. 10,795,634

METHOD, APPARATUS, AND MOBILE TERMINAL FOR SCREEN MIRRORING

ZHEJIANG GEELY HOLDING GR...

1. A screen mirroring method applied to a mobile terminal, comprising:establishing a connection to at least one second screen terminal;
receiving an operation command;
transmitting one of multimedia files and image signals of the mobile terminal to a corresponding second screen terminal according to the operation command, to make the multimedia files or the image signals be instantly displayed on the corresponding second screen terminal;
wherein the step of transmitting one of multimedia files and image signals of the mobile terminal to a corresponding second screen terminal according to the operation command comprises:
receiving character information sent by the second screen terminal, the character information comprising calculation data and storage data of the second screen terminal;
determining whether the calculation data of the second screen terminal exceeds a first predetermined value or not, and whether the storage data of the second screen terminal exceeds a second predetermined value or not;
transmitting the multimedia files to the corresponding second screen terminal according to the operation command when the calculation data of the second screen terminal does not exceed the first predetermined value and the storage data of the second screen terminal does not exceed the second predetermined value;
transmitting the image signals to the corresponding second screen terminal according to the operation command when the calculation data of the second screen terminal exceeds the first predetermined value; and
transmitting the image signals to the corresponding second screen terminal according to the operation command when the storage data of the second screen terminal exceeds the second predetermined value.
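The transmission choice is a two-threshold test on the second screen terminal's reported computation and storage load. Illustrative sketch; the parameter names are assumptions:

    # Illustrative sketch only: pick file transfer vs. image streaming per the thresholds.
    def choose_transfer(calc_data, storage_data, first_limit, second_limit):
        if calc_data <= first_limit and storage_data <= second_limit:
            return "transmit_multimedia_files"   # terminal can decode and store the files itself
        return "transmit_image_signals"          # either resource exceeds its predetermined value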

US Pat. No. 10,795,633

DESKTOP SHARING METHOD AND MOBILE TERMINAL

Huawei Technologies Co., ...

1. A method, comprising:receiving, by a first terminal, a first operation of a user, wherein the first operation of the user indicates the user requests to share a plurality of screen interfaces of a desktop of the first terminal;
in response to determining that the first operation meets a first preconfigured condition, determining, by the first terminal, a desktop drawing file according to the desktop of the first terminal, wherein the desktop drawing file comprises a desktop description file and a file package, the desktop description file comprises a location of each of a plurality of application interface elements on the plurality of screen interfaces on the desktop of the first terminal, the desktop description file further comprises a file name, a package name, and a downloading link of each of the plurality of application interface elements, the file package comprises a thumbnail of each of the plurality of application interface elements and thumbnails of the plurality of screen interfaces, the desktop drawing file further comprises a desktop wallpaper and a desktop theme, and the plurality of application interface elements comprise a desktop application and a desktop widget; and
sharing, by the first terminal, the determined desktop drawing file, causing a second terminal to replace a desktop of the second terminal with the shared desktop drawing file.

US Pat. No. 10,795,632

DISPLAY SYSTEM AND METHODS

Nanolumens Acquisition, I...

1. A light emitting display system on a support structure, the light emitting display system comprising:a) a plurality of light emitting display sub-assemblies attached to said support structure, the plurality of light emitting display sub-assemblies collectively creating a visual display having no perceivable gaps or overlaps between adjacent light emitting display sub-assemblies;
b) each of said light emitting display sub-assemblies comprising:
i) a plurality of light emitting elements affixed to a substrate, the edges of the substrate defining a contiguous perimeter;
ii) said plurality of light emitting elements arranged in a matrix formation;
iii) a plurality of pixel gaps defined, one of said plurality of pixel gaps defined between each pair of adjacent light emitting elements;
iv) each of said plurality of pixel gaps spaced so that all of said plurality of pixel gaps are substantially the same size;
v) said plurality of light emitting elements encapsulated by a transparent optical material, the transparent optical material providing a uniform rigid surface above said plurality of light emitting elements, said transparent optical material allowing at least partial transmission of light emitted by said plurality of light emitting elements;
vi) a portion of said plurality of light emitting elements configured so that each light emitter of said portion is spaced from said contiguous perimeter by a distance of one half the size of said pixel gap or less;
c) said plurality of light emitting display sub-assemblies disposed on said support structure so that a portion of said contiguous perimeter of each of said plurality of light emitting display sub-assemblies abuts a portion of said contiguous perimeter of an adjacent light emitting display sub-assembly.

US Pat. No. 10,795,631

FLEXIBLE DISPLAY SYSTEM AND METHODS

Nanolumens Acquisition, I...

1. A flexible display comprising:(a) a plurality of light emitting rigid chixels affixed to a flexible substrate, said plurality of light emitting rigid chixels collectively providing a visible display, said flexible display having both vertical and horizontal directions defined in the plane of said visible display;
(b) each of said plurality of light emitting rigid chixels consisting of:
i. a plurality of light emitting sub-pixels comprising a single pixel, said plurality of light emitting sub-pixels coupled to a rigid substrate, said rigid substrate having a non-functional edge and an adjacent region devoid of any conductor;
(c) said plurality of light emitting rigid chixels arranged upon said flexible substrate to provide a spaced array of chixels, said spaced array of chixels providing:
i. a plurality of substantially equal first pixel gaps disposed across said non-functional edges between adjacent light emitting rigid chixels in said vertical direction;
ii. a plurality of substantially equal second pixel gaps disposed across said non-functional edges between adjacent light emitting rigid chixels in said horizontal direction;
(d) said spaced array of chixels further characterized in that each of said plurality of first pixel gaps is substantially equal to each of said plurality of second pixel gaps across said flexible display.

US Pat. No. 10,795,630

CONFIGURING COMPUTING DEVICE TO UTILIZE A MULTIPLE DISPLAY ARRANGEMENT BY TRACKING EYE MOVEMENT

International Business Ma...

1. A method for configuring a computing device to utilize a multiple display arrangement, the method comprising:detecting a user adding a second display unit to said computing device comprising a first display unit based on detecting said second display unit being connected to said computing device via a port or based on receiving manual input from said user by said computing device indicating that there are multiple displays connected to said computing device;
tracking eye movement of said user using a head-mounted eye tracker connected to said computing device via a network in response to detecting said user adding said second display unit to said computing device;
determining a logical display arrangement of said first and second display units based on said tracked eye movement of said user;
in response to the determining the logical display arrangement, verifying the determined logical display arrangement of said first and second display units matches a physical display arrangement of said first and second display units based on a number of positive repetitive eye movements from said user; and
configuring settings on said computing device to utilize said first and second display units in a particular display arrangement in response to verifying said logical display arrangement of said first and second display units matches said physical display arrangement of said first and second display units, wherein said configuration is determined based on a data structure storing settings to allow ease of movement between said first and second display units having specific relative locations to one another which are identified based on tracking said eye movement of said user.

US Pat. No. 10,795,629

TEXT AND CUSTOM FORMAT INFORMATION PROCESSING METHOD, CLIENT, SERVER, AND COMPUTER-READABLE STORAGE MEDIUM

TENCENT TECHNOLOGY (SHENZ...

1. An information processing method, comprising:displaying, by a client, a chat page between a user and one or more other users in response to a first user operation;
sending, by the client in the chat page, a request for sending a gift packet to a plurality of users in the chat page;
displaying, by the client in the chat page, an operational control in response to the request for sending the gift packet, the operational control comprises data about the gift packet;
receiving, by the client in the chat page from a server, a system message sent by the server, the system message being an indication of details of receiving the gift packet by at least one user of the plurality of users in the chat page through the operational control, wherein the system message comprises text-format information and custom-format information, wherein the custom-format information comprises a user-operable interactive message and a scene-identifying image, wherein the scene-identifying image identifies a type of the system message;
respectively extracting, by the client, the text-format information and the custom-format information in the system message;
obtaining, by the client, a first parsing result by parsing the text-format information in a first parsing mode, the first parsing result being content of the text-format information;
extracting, by the client from the client, a pre-set parsing rule corresponding to a pre-set coding rule of the server, and obtaining a second parsing result by parsing the custom-format information in a second parsing mode by matching the custom-format information according to the pre-set parsing rule, the second parsing result being content of the custom-format information;
displaying, by the client, the first parsing result and the second parsing result on the chat page;
receiving, by the client, a second operation on a content of the user-operable interactive message;
jumping, by the client in response to the second operation, to another page indicated by or linked to by the content of the user-operable interactive message, wherein the another page comprises the details of responding to the gift packet by each user of the plurality of users in the chat page.
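The client-side handling splits each system message into a text-format part and a custom-format part and parses them with different rules. A minimal sketch, assuming the custom part is JSON (the claim only requires a pre-set parsing rule matching the server's coding rule, not JSON specifically):

    # Illustrative sketch only; field names and the JSON encoding are assumptions.
    import json

    def parse_system_message(message):
        text_part = message["text"]               # text-format information
        custom_part = message["custom"]           # custom-format information from the server
        first_result = text_part                  # first parsing mode: content as-is
        second_result = json.loads(custom_part)   # second parsing mode: pre-set rule
        return first_result, second_result

    msg = {"text": "A gift packet was opened",
           "custom": '{"action_url": "app://gift/123", "scene_image": "gift.png"}'}
    text, custom = parse_system_message(msg)      # custom["action_url"] drives the page jump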

US Pat. No. 10,795,628

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND LOG INFORMATION MANAGEMENT METHOD

Ricoh Company, Ltd., Tok...

1. An information processing system comprising:a first information processing apparatus on a first network, the first information processing apparatus including at least one first processor and at least one first memory storing a log information management program and a first job execution program, the log information management program causing the first processor to execute a log information management process, and the first job execution program causing the first processor to execute a first job; and
a second information processing apparatus on a second network different from the first network, the second information processing apparatus including a second processor and a second memory storing a second job execution program causing the second processor to execute a second job; wherein the information processing system is configured to perform a process including
issuing, by the log information management process, first identification information to the first job, in response to receiving an identification information issuance request from the first job;
issuing, by the log information management process, second identification information to the second job, in response to receiving an identification information issuance request from the second job, the second identification information being different from the first identification information;
generating, by the first job, first log information;
adding, by the first job, the first identification information to the first log information;
transmitting, by the first job, the first log information to which the first identification information is added, to the log information management process;
generating, by the second job, second log information;
adding, by the second job, the second identification information to the second log information; and
transmitting, by the second job, the second log information to which the second identification information is added, to the log information management process.

US Pat. No. 10,795,627

IMAGE FORMING SYSTEM, PORTABLE TERMINAL, AND IMAGE FORMING METHOD THAT STORES OR TRANSMITS BROWSING INFORMATION BASED ON STORAGE CAPACITY OF A STORAGE PART

KYOCERA Document Solution...

5. An image forming method executed by an image forming system having a portable terminal and an image forming apparatus, comprising the steps of:by the portable terminal, selecting browsing information that can be printed on the image forming apparatus and that corresponds to a specified condition, from the information the user is browsing, so that the information can be browsed when the portable terminal is unusable;
by the portable terminal, transmitting the selected browsing information to the image forming apparatus;
by the image forming apparatus, accumulating the browsing information received from the portable terminal;
by the image forming apparatus, printing the accumulated browsing information;
by the image forming apparatus, storing the browsing information in a storage part of the image forming apparatus when a storage capacity of the storage part is greater than or equal to a specific value, and
by the image forming apparatus, transmitting the browsing information to an external server when the storage capacity is less than the specific value.
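The final two steps are a store-or-forward decision keyed on the storage part's remaining capacity. Illustrative sketch; the threshold value and callback names are assumptions:

    # Illustrative sketch only: keep browsing information locally while capacity allows.
    SPECIFIC_VALUE = 50 * 1024 * 1024             # hypothetical 50 MB threshold

    def handle_browsing_info(info, free_bytes, store_locally, send_to_external_server):
        if free_bytes >= SPECIFIC_VALUE:
            store_locally(info)                   # storage capacity >= specific value
        else:
            send_to_external_server(info)         # otherwise off-load to the external server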

US Pat. No. 10,795,626

HIGH PRIORITY PRINTING USING EXTERNAL INTERPRETER AND PAGE DESCRIPTION LANGUAGE

KYOCERA Document Solution...

1. A method of prioritizing printing jobs, comprising:pausing a current print job on a printer, wherein the printer includes a print engine, and the current print job includes a plurality of pages;
pinging an external interpreter on an external device for a print job status comprising an existence of a pending high priority print job for the printer, wherein the external interpreter includes first logic to generate high priority raster data from a page description language (PDL);
applying the print job status to a priority determination unit, wherein the priority determination unit includes second logic to determine if a high priority print job exists or does not exist and to generate executable printing instructions for the printer and the external interpreter, wherein the executable printing instructions comprise:
on condition that a high priority print job exists:
receiving the high priority raster data for the high priority print job from the external interpreter;
operating the print engine on the high priority raster data to generate at least one or more high priority pages; and
operating the printer to print the at least one or more high priority pages; and
on condition that a high priority print job does not exist:
operating the printer to print one page from the plurality of pages of the current print job;
and
executing the printing instructions.
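The priority determination amounts to a per-page loop: after each page of the paused current job, the printer pings the external interpreter and services a pending high-priority job first. A minimal sketch with assumed callback names:

    # Illustrative sketch only; ping_external, fetch_raster and print_engine are hypothetical.
    def print_with_priority(current_job_pages, ping_external, fetch_raster, print_engine):
        for page in current_job_pages:
            if ping_external():                   # does a high-priority job exist?
                raster = fetch_raster()           # raster data from the external interpreter
                print_engine(raster)              # print the high-priority pages first
            print_engine(page)                    # then resume with one page of the current job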

US Pat. No. 10,795,625

IMAGE FORMING APPARATUS, RESERVATION JOB MANAGING AND CONTROL PERFORMANCE RESTORATION

Kyocera Document Solution...

1. An image forming apparatus, comprising:a controller configured to perform a print job or a transmission job using a printing device or a communication device; and
a reservation job managing unit configured to (a) register schedule data and job data of a reservation job that is a print job or a transmission job in a predetermined storage device, (b) determine whether the job data is stored in the storage device or not when a reservation time has come on the basis of the schedule data, and (c) notify a user that the job data is not stored in the storage device if the job data is not stored in the storage device, and afterward cause the controller to perform the reservation job if the job data is restored in the storage device.

US Pat. No. 10,795,624

PRINT WORKFLOW VISUALIZATION AND COMPARISON

Ricoh Company, Ltd., Tok...

1. An apparatus, comprising:a display device configured to present a Graphical User Interface (GUI) to a user; and
a controller coupled to the display device, the controller configured to receive a first print workflow and at least one second print workflow, to generate a first signal indicative of a first graphical representation of sequences of linked steps of the first print workflow and the at least one second print workflow, and to transmit the first signal to the display device,
wherein the display device is configured to receive the first signal from the controller, and to display the first graphical representation to the user with the GUI as a function of the first signal,
wherein the controller is configured to determine differences between the first print workflow and the at least one second print workflow by identifying a first value of an invisible property in a step in the first print workflow that does not match a second value of the invisible property in a corresponding step in the at least one second print workflow,
wherein the controller is configured to generate a second signal indicative of a second graphical representation of the differences, wherein the second graphical representation visually displays simultaneously the previously invisible first value of the invisible property of the step in the first print workflow and the previously invisible second value of the invisible property of the corresponding step in the at least one second print workflow, and to transmit the second signal to the display device,
wherein the display device is configured to receive the second signal from the controller, and to display simultaneously to the user with the GUI, both of: the first graphical representation of sequences of linked steps of the first print workflow and the at least one second print workflow as a function of the first signal, and the second graphical representation of the differences between the first print workflow and the at least one second print workflow as a function of the second signal, the second graphical representation being visually different than the first graphical representation.

US Pat. No. 10,795,623

IMAGE FORMING APPARATUS AND CONTROL METHOD FOR THE IMAGE FORMING APPARATUS FOR READING OUT DATA AND PERFORMING INITIALIZATION PROCESSING USING THE DATA

CANON KABUSHIKI KAISHA, ...

1. An image forming apparatus comprising:a first storage unit configured to store first data for activating a first module that interprets a first type of PDL data and second data for activating a second module that interprets a second type of PDL data;
a second storage unit having a data reading speed higher than that of the first storage unit;
at least one processor configured to:
activate the first module by reading the first data from the first storage unit and placing the read first data in the second storage unit so that the image forming apparatus becomes able to process the first type of PDL data;
activate the second module by reading the second data from the first storage unit and placing the read second data in the second storage unit while the image forming apparatus is able to process the first type of PDL data;
receive the first type of PDL data; and
process the received first type of PDL data.

US Pat. No. 10,795,622

INFORMATION PROCESSING APPARATUS, PRINTING METHOD, AND COMPUTER-READABLE MEDIUM

Ricoh Company, Ltd., Tok...

1. An information processing apparatus configured to perform communication with an apparatus connected via a network, the information processing apparatus comprising:a print setting information memory to store a print setting in association with a setting value; and
circuitry configured to
first allow, by a print dialog program component, a setting value for a particular print setting to be specified;
second allow, by a print dialog extension program component, different from the print dialog program component, the setting value for the particular print setting to be specified;
specify a particular setting value for the particular print setting through one of the first allowing or the second allowing;
generate print data;
determine whether the particular setting value for the print setting is specified through the first allowing by the print dialog program component or through the second allowing by the print dialog extension program component;
write the particular print setting and the particular setting value specified through one of the first allowing and the second allowing in the print setting information memory in an associated manner;
read the particular setting value associated with the particular print setting from the print setting information memory; and
transmit, to the apparatus, the print data to which a print command corresponding to the particular print setting including the particular setting value is added.

US Pat. No. 10,795,621

IMAGE FORMING APPARATUS DETECTS HUMAN BODY TO LIFT SUSPENSION ON PRINTING PROCESS FOR EXECUTING A PRINT JOB

KYOCERA Document Solution...

1. An image forming apparatus, comprising:an image forming section that forms an image on a recording medium;
a sheet output tray onto which the recording medium having the image formed by the image forming section is discharged;
an operating section through which an instruction is input from a user;
a human body detection sensor that is provided in the image forming apparatus and detects a human body present in front of the image forming apparatus;
a communication section that communicates data with an external device;
a display section; and
a control unit including a processor,
wherein when the processor executes a control program, the control unit functions as:
an operation acceptance section that accepts the instruction input through the operating section;
a command acceptance section that accepts a print job command from the external device through the communication section; and
a control section that controls an operation of the image forming section to execute, upon acceptance of a print job instruction by the operation acceptance section, a print job based on the print job instruction, and execute, upon acceptance of the print job command from the external device by the command acceptance section, a print job based on the print job command, and
when the command acceptance section accepts the print job command while the human body is being detected by the human body detection sensor, the control section performs a suspension of the print job based on the print job command and allows the display section to provide a display for calling attention to a possible mistake of taking away a wrong recording medium discharged onto the sheet output tray.

US Pat. No. 10,795,620

IMAGE PROCESSING APPARATUS AND LAYOUT METHOD

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising:an obtaining unit configured to obtain a plurality of images;
a division unit configured to divide the obtained plurality of images into a plurality of groups;
a determination unit configured to determine a processing target group and a template to be used for the processing target group; and
an arranging unit configured to arrange an image included in the processing target group into a slot of the template determined by the determination unit,
wherein in a case where the number of images in the processing target group is one, (a) the determination unit determines a first template as the template to be used for the processing target group, the first template including a first slot and a second slot overlapping the first slot, and (b) the arranging unit arranges a first image based on the one image into the first slot and a second image based on the one image into the second slot.

US Pat. No. 10,795,619

NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING COMPUTER-EXECUTABLE INSTRUCTIONS FOR INFORMATION PROCESSING DEVICE, AND METHOD OF CONTROLLING INFORMATION PROCESSING DEVICE

Brother Kogyo Kabushiki K...

1. A non-transitory computer-readable recording medium for an information processing device having a communication interface and a controller, the information processing device being connected to a cloud server through the communication interface,a particular program being installed in the information processing device,
the recording medium storing computer-executable instructions realizing an application program,
the application program being added to the particular program by a plugin function implemented in the particular program,
the information processing device being configured to receive, through the particular program, a print instruction to print a content stored in the cloud server,
the application program causing, when executed by the controller, the information processing device to perform:
a downloading process of downloading the content from the cloud server;
a selection process of receiving a selection between a cloud printing and a local printing, the cloud printing being a printing process performed by causing the cloud server to transmit print data to a cloud printer which is a printer registered with the cloud server, the local printing being a printing process performed by transmitting print data to a local printer which is a printer connected to the information processing device through the communication interface;
when the cloud printing is selected in the selection process, a cloud printing instruction outputting process of outputting an instruction to perform the cloud printing to the cloud server; and
when the local printing is selected in the selection process, a print data transmitting process of generating print data based on the content downloaded in the downloading process and transmitting the print data as generated to the local printer.

US Pat. No. 10,795,618

METHODS, APPARATUSES, AND SYSTEMS FOR VERIFYING PRINTED IMAGE AND IMPROVING PRINT QUALITY

1. A method for printing an image on print media with a printer, the method comprising:receiving at least part of print data used to generate the image;
receiving a reference image or generating the reference image from at least part of the print data;
storing the reference image in a memory of the printer; printing the image to obtain a printed image;
capturing a representation of the printed image to obtain a captured image;
determining, by a processor, if the captured image conforms to the reference image by comparing a horizontal position of the printed image of the captured image with a horizontal position of the reference image; and
modifying, by the processor, at least part of the print data used to generate the image prior to generating a succeeding image when the captured image does not conform to the reference image due to an offset in the horizontal position of the printed image relative to the horizontal position of the reference image, wherein modifying at least part of the print data comprises shifting at least part of the print data by a value of an offset to reposition the succeeding image on the print media.
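The correction step compares horizontal positions and shifts the succeeding image's print data by the measured offset. Illustrative sketch operating on rows of pixel values; the tolerance and helper names are assumptions:

    # Illustrative sketch only: shift print data horizontally by the detected offset.
    def shift_row(row, offset):
        # positive offset moves pixels right, negative moves them left; blanks fill in
        if offset > 0:
            return ([0] * offset + row)[:len(row)]
        if offset < 0:
            return (row + [0] * (-offset))[-len(row):]
        return row

    def correct_horizontal_offset(captured_x, reference_x, print_rows, tolerance=2):
        offset = reference_x - captured_x         # how far the printed image drifted
        if abs(offset) <= tolerance:
            return print_rows                     # captured image conforms; nothing to modify
        return [shift_row(row, offset) for row in print_rows]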

US Pat. No. 10,795,617

INFORMATION PROCESSING APPARATUS AND CONTROL METHOD

Canon Kabushiki Kaisha, ...

1. An information processing apparatus comprising:at least one processor; and
at least one memory storing print data generating software that is acquired at first timing and an extension application that is acquired at second timing different from the first timing, that is different from the print data generating software, and that, when executed by the at least one processor, causes the information processing apparatus to perform operations including
generating settings information based on information input on a settings screen provided by the extension application, and
performing an extended function provided by the extension application based on the generated settings information,
wherein information regarding the extended function is processed so as not to be edited by an operating system (OS), and
wherein print data is generated based on the settings information by the print data generating software.

US Pat. No. 10,795,616

LOCAL PRINTING OF PRINT DATA GENERATED DURING NESTED REMOTE DESKTOP SESSIONS

VMware, Inc., Palo Alto,...

1. A method of processing print data to be printed at a printer attached locally to a client computing device that has established a first remote desktop session with a first virtual machine that has established a second, nested, remote desktop session with a second virtual machine, wherein the print data is generated by the second virtual machine and transmitted to the first virtual machine, said method comprising:upon receipt of the print data by the first virtual machine, determining whether or not the print data can be handled by the first virtual machine; and
upon determining that the print data cannot be handled by the first virtual machine, transmitting the print data to the client computing device without issuing a print instruction to print the print data locally at the first virtual machine.

US Pat. No. 10,795,615

METHOD AND DEVICE FOR STORAGE MANAGEMENT IN A HIERARCHICAL STORAGE SYSTEM

EMC IP Holding Company, L...

1. A storage management method, comprising:obtaining an attribute and access information of a file stored in storage at a first level in a hierarchical storage system, the attribute of the file indicating a size of the file, and the access information indicating an access frequency of the file;
determining necessity of migrating the file based on the attribute of the file and the access information, wherein determining necessity of migrating the file based on the attribute of the file and the access information includes determining a necessity factor for the file based upon, at least in part, the number of users accessing the file, wherein the necessity factor indicates a level that is more suitable for the file to be stored, the level that is more suitable being a determination of greater necessity of migrating the file for storage at the first level and at a second level, respectively, wherein when a storage level corresponding to the necessity factor for the file is the same as a storage level at which the file is currently stored, the necessity of migrating the file is determined to be low; and
in response to the necessity exceeding a predetermined threshold, migrating the file to storage at the second level in the hierarchical storage system, the second level being different from the first level.
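The migration decision hinges on a necessity factor computed from file size, access frequency and the number of accessing users, compared against the file's current tier. The weights and cutoff below are invented purely for illustration; the claim does not specify them:

    # Illustrative sketch only; the scoring weights and cutoff are assumptions.
    def necessity_factor(size_bytes, accesses_per_day, n_users):
        score = 0.5 * accesses_per_day + 0.5 * n_users - size_bytes / 1e9
        return 1 if score > 10 else 2             # 1 = fast tier, 2 = capacity tier

    def should_migrate(current_level, size_bytes, accesses_per_day, n_users):
        suitable_level = necessity_factor(size_bytes, accesses_per_day, n_users)
        # same level as where the file already lives -> necessity is low
        return suitable_level != current_level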

US Pat. No. 10,795,614

MEMORY CONTROLLER AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A memory controller for controlling an operation of a memory device, the memory controller comprising:a buffer memory including an input buffer for storing input data received from a host and an output buffer for storing output data received from the memory device; and
a buffer management circuit allocating capacities of the input buffer and the output buffer, based on a use state of at least one of the input buffer and the output buffer,
wherein the buffer management circuit includes:
a buffer monitoring circuit generating buffer analysis data according to a usage of the buffer memory;
a threshold value storing circuit storing a threshold value for changing the capacities of the input buffer and the output buffer; and
a buffer capacity determining circuit determining whether the allocated capacities of the input buffer and the output buffer are to be changed by comparing the buffer analysis data and the threshold value.
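The buffer management logic monitors usage, compares it against a stored threshold, and shifts capacity between the input and output buffers. A minimal sketch with assumed threshold and step values:

    # Illustrative sketch only; the threshold and step size are assumptions.
    def rebalance(input_used, output_used, input_cap, output_cap,
                  threshold=0.9, step=4096):
        if input_used / input_cap > threshold and output_used / output_cap < threshold:
            return input_cap + step, output_cap - step    # grow the input buffer
        if output_used / output_cap > threshold and input_used / input_cap < threshold:
            return input_cap - step, output_cap + step    # grow the output buffer
        return input_cap, output_cap                      # usage below threshold: no change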

US Pat. No. 10,795,613

CONVERGENCE MEMORY DEVICE AND OPERATION METHOD THEREOF

SK Hynix Inc., Icheon (K...

1. A convergence memory device, comprising:a plurality of memories; and
a controller configured to control the plurality of memories,
wherein when an access request for accessing a storage region included in one or more of the memories is received, the controller determines whether the access request has been received a preset number of times or more within a refresh cycle,
wherein when the controller determines that the access request has been received the preset number of times or more, the controller postpones processing of the received access request, and
wherein the controller further comprises a buffer configured to buffer, when the access request is a write request, an address for the storage region, a write command for the storage region, write data for the storage region, and information on the delayed processing of the received access request.

US Pat. No. 10,795,612

OFFLOAD PROCESSING USING STORAGE DEVICE SLOTS

EMC IP Holding Company LL...

1. A system comprising:one or more primary processors;
one or more storage slots arranged to receive storage drives;
a switch fabric communicatively coupling the one or more storage slots to the one or more primary processors to perform I/O operations between the one or more primary processors and one or more storage drives installed in the one or more storage slots;
a processing device including one or more offload processors, the processing device disposed within the one or more storage slots, and the one or more offload processors communicatively coupled to the one or more primary processors; and
a memory comprising code stored thereon that, when executed, performs a method comprising:
a first primary processor of the one or more primary processors sending first data and one or more first instructions over the switch fabric to a first offload processor of the one or more offload processors,
the first offload processor executing the one or more first instructions on the first data,
a second primary processor of the one or more primary processors sending second data and one or more second instructions over the switch fabric to the first offload processor of the one or more offload processors, and
the first offload processor executing the one or more second instructions on the second data,
wherein at least one of the one or more offload processors includes a graphical processing unit.

US Pat. No. 10,795,611

EMPLOYING MULTIPLE QUEUEING STRUCTURES WITHIN A USERSPACE STORAGE DRIVER TO INCREASE SPEED

EMC IP Holding Company LL...

1. A method of processing storage requests directed to a storage device of a computing device having a plurality of processing cores, the method comprising:enqueuing, by a first storage driver operating within userspace of the computing device, storage requests initiated by a first core of the computing device onto a first userspace queue, the first userspace queue being dedicated to storage requests from the first core;
enqueuing, by the first storage driver operating within userspace, storage requests initiated by a second core of the computing device onto a second userspace queue, the second userspace queue being dedicated to storage requests from the second core;
transferring, by the first storage driver operating within userspace, storage requests from the first userspace queue and the second userspace queue to a set of userspace dispatch queues, the first userspace queue and the second userspace queue not belonging to the set of userspace dispatch queues;
sending, by the first storage driver operating within userspace, storage requests from the set of userspace dispatch queues to a second storage driver operating within userspace of the computing device; and
sending, by the second storage driver operating within userspace, by way of a kernel helper function, the storage requests received from the first storage driver to a hardware device driver for the storage device for performance by the storage device, the hardware device driver for the storage device operating within a kernel of the computing device.
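The claimed driver keeps one dedicated submission queue per core, drains them into a shared set of dispatch queues, and only then hands requests to the lower driver. Illustrative sketch in Python; the queue counts and the lower_driver callback are assumptions:

    # Illustrative sketch only: per-core queues feeding shared dispatch queues.
    from collections import deque

    class UserspaceDriver:
        def __init__(self, n_cores, n_dispatch):
            self.per_core = [deque() for _ in range(n_cores)]     # one queue per core
            self.dispatch = [deque() for _ in range(n_dispatch)]  # shared dispatch queues

        def enqueue(self, core_id, request):
            self.per_core[core_id].append(request)    # no contention between cores

        def transfer(self):
            for i, q in enumerate(self.per_core):
                while q:
                    self.dispatch[i % len(self.dispatch)].append(q.popleft())

        def send(self, lower_driver):
            for q in self.dispatch:
                while q:
                    lower_driver(q.popleft())         # e.g. the second userspace driver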

US Pat. No. 10,795,610

READ LOOK AHEAD DATA SIZE DETERMINATION

Micron Technology, Inc., ...

7. A system comprising:a memory system;
a processing device, operatively coupled with the memory system, to:
receive a read request from a host system;
detect that the read request is associated with a pattern of read requests;
identify a requested transfer size associated with the read request;
determine a size of data to retrieve that corresponds to a multi-plane operation on the memory system in response to detecting that the read request is associated with the pattern of read requests, the size of the data being based on the requested transfer size and a die-level transfer size utilizing a total number of planes comprised on a die of the memory system; and
provide an indication for the memory system to retrieve the data at the determined size.
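The read look-ahead size is the requested transfer size rounded up to a whole die-level transfer, i.e. a multiple of the per-plane size times the number of planes on the die. Illustrative sketch:

    # Illustrative sketch only: round the request up to a full multi-plane transfer.
    def read_ahead_size(requested_bytes, plane_size_bytes, planes_per_die):
        die_transfer = plane_size_bytes * planes_per_die   # one multi-plane read unit
        units = -(-requested_bytes // die_transfer)        # ceiling division
        return units * die_transfer

    # read_ahead_size(20_000, 16_384, 4) -> 65_536 (one full 4-plane read)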

US Pat. No. 10,795,609

MEMORY SYSTEM AND OPERATING METHOD OF THE SAME

SK hynix Inc., Gyeonggi-...

1. A memory system comprising:a memory device including a plurality of memory blocks;
a write operation management circuit configured to update write operation counts for the plurality of memory blocks;
a first block detector configured to detect a hot memory block based on a first operation count value corresponding to the write operation count of a first memory block on which a write operation has been performed among the plurality of memory blocks;
a second detector configured to detect a cold memory block from second memory blocks adjacent to the first memory block that is detected as the hot memory block, based on a second operation count value corresponding to the write operation count of each of the second memory blocks; and
a controller configured to copy, if the hot memory block and the cold memory block are detected by the first and second detectors, data of the detected hot memory block or data of the detected cold memory block.

US Pat. No. 10,795,608

COMPUTER, COMMUNICATION DRIVER, AND COMMUNICATION CONTROL METHOD

HITACHI, LTD., Tokyo (JP...

1. A computer, comprising:a storage apparatus configured to writably and readably store data;
a memory configured to store a communication driver that is a software program configured to run in an operating system and communicate with a host; and a storage service program that is a software program configured to run on the operating system and control retention of data by the storage apparatus; and
a processor configured to execute the software programs, wherein
the processor is configured to execute the communication driver and the storage service program and configure a plurality of queue pairs including a queue in each of both directions which transmits information in inter-process communication between the communication driver and the storage service program, and the processor is further configured to: configure command distribution information which associates the queue pairs and logical volumes with each other; specify a first queue pair corresponding to a first logical volume that is an access destination of a command requested by the host by referring to the command distribution information; and enqueue a command request of the command to the specified first queue pair, and
the command distribution information is created by acquiring a port and a logical unit number (LUN) for each queue pair and associating the port and the logical unit with the queue pair, wherein each logical volume is defined by the pair of the port and the logical unit.

US Pat. No. 10,795,607

MEMORY DEVICE, A MEMORY CONTROLLER, A STORAGE DEVICE INCLUDING THE MEMORY DEVICE AND THE MEMORY CONTROLLER AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A memory controller controlling a semiconductor memory device, the memory controller comprising:a host interface configured to receive a write request, from a host, to store data in the semiconductor memory device;
a processor configured to generate a program command according to a type of the write request;
a memory interface configured to provide the program command to the semiconductor memory device,
wherein the type of the write request includes a first type write request and a second type write request,
wherein the write request includes information indicating whether the write request is the first type write request or the second type write request,
wherein the first type write request requires faster write completion response than the second type write request, and
wherein the processor, if the type of the write request is the first type write request, generates the program command instructing the semiconductor memory device to perform a program operation using one verify voltage for at least two program states at the same time.

US Pat. No. 10,795,606

BUFFER-BASED UPDATE OF STATE DATA

Hewlett Packard Enterpris...

1. A system comprising:a processor including one or more electronic circuits to:
initialize a first memory buffer and a second memory buffer with initial state data;
determine a next state data based on non-state data inputs and a portion of current state data, the next state data relating to a time period after a current point in time reflected by the current state data; and
store in the second memory buffer new difference data representing differences between the next state data and the current state data.

US Pat. No. 10,795,605

STORAGE DEVICE BUFFER IN SYSTEM MEMORY SPACE

Dell Products L.P., Roun...

1. An information handling system, comprising:a processor;
a system main memory unit; and
a storage device, comprising:
a buffer;
a storage unit;
a first interface module; and
a controller configured to control read/write operations of the storage device, wherein the first interface module couples the processor to the controller, and wherein the controller is configured to:
write data from the storage unit into the buffer to buffer data from the storage unit;
wherein the processor is configured to map the buffer and the system main memory unit into the system memory address space, and
wherein the processor is configured to communicate with the buffer and the storage unit through the first interface module.

US Pat. No. 10,795,604

REPORTING AVAILABLE PHYSICAL STORAGE SPACE OF NON-VOLATILE MEMORY ARRAY

Western Digital Technolog...

1. A data storage apparatus, comprising:a non-volatile memory array;
an interface; and
a processor coupled to the non-volatile memory array and the interface and configured to:
determine an amount of available physical storage space in the non-volatile memory array, wherein the amount of available physical storage space is based on a total amount of physical storage space, a first amount of storage space comprising used blocks, and a second amount of storage space comprising worn-out blocks,
determine an amount of write data associated with a pending write command in a command queue of the apparatus,
compare the amount of available physical storage space with the amount of write data,
generate an indication based on the comparison, wherein the indication indicates that the apparatus does not have sufficient data storage space for servicing the pending write command, and
send the indication to a host device via the interface thereby allowing the processor to take action to prevent the data storage apparatus from switching to a read-only mode of operation.
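The processor's bookkeeping is a comparison of available physical space (total minus used minus worn-out blocks) against the pending write, with a notification sent to the host when it falls short. Illustrative sketch; the notification payload is an assumption:

    # Illustrative sketch only: warn the host before the drive would go read-only.
    def check_capacity(total_blocks, used_blocks, worn_out_blocks,
                       pending_write_blocks, notify_host):
        available = total_blocks - used_blocks - worn_out_blocks
        if pending_write_blocks > available:
            notify_host({"event": "insufficient_space",
                         "available_blocks": available,
                         "needed_blocks": pending_write_blocks})
            return False                              # host can now take corrective action
        return True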

US Pat. No. 10,795,603

SYSTEMS AND METHODS FOR WRITING ZEROS TO A MEMORY ARRAY

Micron Technology, Inc., ...

1. A method comprising:activating a first wordline of a memory device, wherein activating the first wordline comprises accessing a memory address associated with the first wordline;
writing logical zero to each memory cell of a first plurality of memory cells of the memory device, wherein each memory cell of the first plurality of memory cells corresponds to the first wordline of the memory device; and
copying the first wordline to a second wordline, wherein copying the first wordline to the second wordline comprises activating the second wordline of the memory device such that the first wordline and the second wordline are simultaneously active and each memory cell of the first plurality of memory cells drives, at least partially, each corresponding memory cell of a second plurality of memory cells to logical zero, wherein each memory cell of the second plurality of memory cells corresponds to the second wordline.

US Pat. No. 10,795,602

SELECTIVELY DESTAGING DATA UPDATES FROM WRITE CACHES ACROSS DATA STORAGE LOCATIONS

International Business Ma...

1. A computer-implemented method, comprising:performing an iterative process for each portion of data in a write cache, wherein the iterative process includes:
determining whether a given portion of data was added to the write cache prior to completion of a most recent flash copy operation;
in response to determining that the given portion of data was not added to the write cache prior to completion of a most recent flash copy operation, determining whether the given portion of data has a clock bit value corresponding thereto;
in response to determining that the given portion of data does not have a clock bit value corresponding thereto, calculating a clock bit value for the given portion of data, wherein the clock bit value is calculated based on a current amount of unused storage capacity in the write cache; and
in response to determining that the given portion of data has a clock bit value corresponding thereto, decrementing the clock bit value by a predetermined amount.
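The iterative process above only manages clock-bit bookkeeping; a sketch of that bookkeeping follows (the scaling of the initial clock bit to unused capacity is an assumption, and the destage step itself is omitted):

    # Illustrative sketch only: clock-bit bookkeeping for entries in the write cache.
    def update_clock_bits(cache_entries, last_flash_copy_time, free_fraction):
        for entry in cache_entries:
            if entry["added_at"] < last_flash_copy_time:
                continue                              # added before the last flash copy completed
            if entry.get("clock_bit") is None:
                # less unused capacity -> smaller initial clock bit (assumed scaling)
                entry["clock_bit"] = max(1, int(free_fraction * 8))
            else:
                entry["clock_bit"] -= 1               # decrement by a predetermined amount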

US Pat. No. 10,795,600

INFORMATION PROCESSING APPARATUS, METHOD, AND STORAGE MEDIUM FOR AVOIDING ACCIDENTAL DATA DELETION DURING DATA MIGRATION

FUJITSU LIMITED, Kawasak...

1. A method for an information process, the method comprising:executing a reception process that includes receiving a request, the request including any of a first request and a second request; and
executing a control process that includes
performing a first process when the first request is received by the reception process, the first request being a request for executing a first migration process configured to migrate data from a first storage device to a second storage device having a higher access speed than the first storage device, the first process including recording state information indicating that the first migration process is being executed and starting the execution of the first migration process, and
performing a second process when the second request is received by the reception process, the second request being a request for executing a second migration process configured to migrate the data from the second storage device to the first storage device, the second process including stopping the execution of the first migration process and starting the execution of the second migration process, in a case where the state information indicates that the first migration process is being executed.

US Pat. No. 10,795,599

DATA MIGRATION METHOD, HOST AND SOLID STATE DISK

HUAWEI TECHNOLOGIES CO., ...

42. A source solid state disk (SSD) comprising:a memory; and
a processor coupled to the memory and configured to:
receive a data migration instruction carrying access information of a register of a controller of a target SSD;
generate a read instruction according to the access information and migrated data information of to-be-migrated data in a source flash memory of the source SSD;
execute the read instruction to read a data block indicated in the read instruction from the source flash memory into the register; and
send a write request to the target SSD to instruct the target SSD to write the data block in the register to a target flash memory of the target SSD,
wherein the write request carries the access information.

US Pat. No. 10,795,598

VOLUME MIGRATION FOR STORAGE SYSTEMS SYNCHRONOUSLY REPLICATING A DATASET

PURE STORAGE, INC., Moun...

1. A method comprising:determining that a performance metric for accessing a volume stored on a first storage system would improve if transferred to a second storage system based on a comparison of state information of a plurality of storage systems and one or more performance metrics for the first storage system, wherein the first storage system is included in a set of storage systems synchronously replicating received I/O operations and the second storage system is not included in the set of storage systems synchronously replicating received I/O operations;
initiating, based on the determination, a transfer of the volume from the first storage system to the second storage system; and
during the transfer of the volume: determining status information for the transfer;
receiving an I/O operation directed to the volume; and directing, based on the status information, the I/O operation to either the first storage system or the second storage system.

US Pat. No. 10,795,597

THINLY PROVISIONED DISK DRIVES WITH ZONE PROVISIONING AND COMPRESSION IN RELATION TO ZONE GRANULARITY

SEAGATE TECHNOLOGY LLC, ...

1. A method for data storage comprising:receiving a command to write data to a disk drive;
determining a maximum amount of space on the disk drive that would be used to write the data in an uncompressed state to the disk drive;
selecting a compression level for the data;
provisioning a zone of a physical space of the disk drive based at least in part on the determined maximum amount of space;
compressing the data at the selected compression level; and
storing the compressed data in the zone.
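The zone is provisioned for the worst case (the uncompressed size) and then filled with the compressed data. Minimal sketch using zlib as a stand-in for whatever compressor the drive actually uses:

    # Illustrative sketch only; zlib and the callback names are assumptions.
    import zlib

    def write_compressed(data, provision_zone, store, level=6):
        max_needed = len(data)                        # maximum space if left uncompressed
        zone = provision_zone(max_needed)             # zone sized to that maximum
        compressed = zlib.compress(data, level)       # selected compression level
        store(zone, compressed)
        return zone, len(compressed)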

US Pat. No. 10,795,596

DELAYED DEDUPLICATION USING PRECALCULATED HASHES

EMC IP Holding Company LL...

1. A method of performing deduplication by a computing device, the method comprising:as data is received by the computing device into blocks as part of write requests, creating an entry in a log for each of the blocks, each entry including information about that respective block and a digest computed from that respective block; and
after accumulating multiple entries in the log, processing the log for delayed deduplication, the processing including (i) retrieving digests from the log, (ii) performing lookups within a deduplication table of the retrieved digests, and (iii) performing deduplication operations based on the lookups using the information about blocks included within the log.
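Digests are computed once at write time and logged; the deduplication-table lookups happen later, in bulk, when the log is processed. A minimal sketch with SHA-256 standing in for whatever digest the product uses:

    # Illustrative sketch only: log digests at ingest, deduplicate later in bulk.
    import hashlib

    log = []                                          # entries accumulated at write time
    dedup_table = {}                                  # digest -> existing block location

    def on_write(block, location):
        digest = hashlib.sha256(block).hexdigest()    # precalculated digest
        log.append({"digest": digest, "location": location})

    def process_log(deduplicate):
        while log:
            entry = log.pop(0)
            existing = dedup_table.get(entry["digest"])
            if existing is not None:
                deduplicate(entry["location"], existing)   # point duplicate at the existing copy
            else:
                dedup_table[entry["digest"]] = entry["location"]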

US Pat. No. 10,795,595

TECHNOLOGIES FOR LIFECYCLE MANAGEMENT WITH REMOTE FIRMWARE

Intel Corporation, Santa...

1. A computing device for device lifecycle management, the computing device comprising:a first controller coupled to a first controller memory;
one or more processors; and
one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the computing device to:
load a boot environment in response to a start of a boot process of the computing device;
connect, by the boot environment, to a lifecycle management server via a network connection in response to loading of the boot environment;
download, by the boot environment, a first firmware image from the lifecycle management server via the network connection; and
install, by the boot environment, the first firmware image to the first controller memory;
wherein the first controller is to access the first firmware image in the first controller memory in response to installation of the first firmware image; and
wherein the one or more memory devices have stored therein a plurality of instructions that, when executed by the one or more processors, further cause the computing device to continue the boot process of the computing device in response to the installation of the first firmware image.

US Pat. No. 10,795,594

STORAGE DEVICE

SAMSUNG ELECTRONICS CO., ...

1. A method of operating a storage device comprising a non-volatile memory, the method comprising:receiving, by the storage device, a write command from a host in a first state of the storage device;
deciding, by the storage device, a write mode of the storage device based on an operation mode of the storage device and write information included in the write command, the operation mode indicating whether a cache function is activated and the write information indicating whether the write command requires programming on the non-volatile memory;
transitioning, by the storage device, an operation state of the storage device from the first state to a second state;
receiving, by the storage device, write data from the host in the second state of the storage device; and
transitioning, by the storage device, the operation state of the storage device from the second state to one of the first state and a third state based on the write mode,
wherein the first state is operative to receive the write command, the second state is operative to receive the write data, and the third state is operative to program the received write data to the non-volatile memory.

US Pat. No. 10,795,593

TECHNOLOGIES FOR ADJUSTING THE PERFORMANCE OF DATA STORAGE DEVICES BASED ON TELEMETRY DATA

Intel Corporation, Santa...

1. A compute device comprising:a compute engine to:
receive, with communication circuitry and through a network, telemetry data indicative of a present configuration and performance of each of multiple data storage devices, including a topology that indicates that certain functions that are frequently used together are closer together in memory;
determine, as a function of the received telemetry data, a replacement configuration to improve performance of one or more of the data storage devices; and
send, with the communication circuitry, responsive data that is usable by the one or more of the data storage devices to improve the performance of the one or more data storage devices.

US Pat. No. 10,795,592

SYSTEM AND METHOD FOR SETTING COMMUNICATION CHANNEL EQUALIZATION OF A COMMUNICATION CHANNEL BETWEEN A PROCESSING UNIT AND A MEMORY

Dell Products, L.P., Rou...

1. An information handling system comprising:a control processing unit including a first set of processor cores and a second set of processor cores that are coupled to a first memory controller and a second memory controller, respectively, of the control processing unit, wherein the control processing unit is configured to host a basic input output system (BIOS); and
a first set of memory devices and a second set of memory devices that are coupled to the first memory controller and the second memory controller, respectively,
wherein a first set of communication channels couples the first set of memory devices to the first memory controller while a second set of communication channels couples the second set of memory devices to the second memory controller,
wherein the first set of communication channels includes a first communication channel to couple a first memory device to the first memory controller, and a second communication channel to couple a second memory device to the first memory controller,
wherein the BIOS initially sets settings of transmission and reception components of the first memory controller and the second memory controller based upon channel parameters pre-fed into the BIOS from models performed on the first and second communication channels,
wherein the BIOS resets the settings of the transmission and reception components of the first memory controller and the second memory controller based upon an in situ testing that extracts relevant parameters of respective communication channels in the first set of communication channels and the second set of communication channels,
wherein the BIOS again resets the settings of the transmission and reception components of the first memory controller and the second memory controller based upon data loss measurements of corresponding communication channels in the first set of communication channels and the second set of communication channels, the BIOS to determine a performance of the first set of memory devices, to characterize the performance of the first set of memory devices into one of a plurality of performance levels, wherein the plurality of performance levels includes a strong performance level, a medium performance level, and a weak performance level, and to set an equalization of the first set of communication channels to one of a plurality of equalization levels based on the one of the plurality of performance levels characterized for the first set of memory devices,
wherein the first communication channel is set to a first equalization to save power based on the first memory device being characterized into the strong performance level and the second communication channel is set to a second equalization based on the second memory device being characterized into the weak performance level,
wherein the first equalization is lower than the second equalization.
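
The final wherein clauses reduce to mapping a characterized performance level to an equalization level, with stronger channels receiving a lower, power-saving equalization. A minimal sketch, with placeholder numeric levels not taken from the patent:

def equalization_for(performance_level):
    # Lower equalization for stronger channels saves power; the numeric
    # levels are placeholders, not values from the patent.
    levels = {"strong": 1, "medium": 2, "weak": 3}
    return levels[performance_level]

# First memory device characterized as strong, second as weak:
first_eq = equalization_for("strong")
second_eq = equalization_for("weak")
assert first_eq < second_eq
print(first_eq, second_eq)   # 1 3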

US Pat. No. 10,795,591

SAFE USERSPACE DEVICE ACCESS FOR NETWORK FUNCTION VIRTUALIZATION USING AN IOMMU TO MAP SUPERVISOR MEMORY TO A RESERVED RANGE OF APPLICATION VIRTUAL ADDRESSES

Red Hat, Inc., Raleigh, ...

7. A method comprising:allocating a first portion of memory, wherein the memory also has a second portion;
reserving a range of application virtual addresses;
programming an input output memory management unit (IOMMU) to map the first portion of memory to the range of application virtual addresses in the IOMMU, wherein an application is restricted from modifying data within the range of application virtual addresses;
configuring a device to use the first portion of memory;
receiving a request including at least one virtual address and length from the application to use the device;
verifying that the at least one virtual address and length in the request do not overlap the reserved range of application virtual addresses; and
submitting the request to the device.
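
The verification step is essentially an interval-overlap test between the requested virtual address range and the reserved range mapped by the IOMMU. A small sketch, with illustrative reserved-range values:

RESERVED_START = 0xFFFF_0000_0000          # assumed reserved range of application
RESERVED_END   = 0xFFFF_0000_0000 + 2**30  # virtual addresses (illustrative values)

def request_is_safe(virt_addr, length):
    # The request must not overlap the reserved range that the IOMMU maps
    # to supervisor memory; half-open intervals [start, end).
    req_start, req_end = virt_addr, virt_addr + length
    return req_end <= RESERVED_START or req_start >= RESERVED_END

print(request_is_safe(0x1000, 4096))                 # True  -> submit to device
print(request_is_safe(RESERVED_START + 4096, 4096))  # False -> reject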

US Pat. No. 10,795,590

METHOD AND APPARATUS FOR FLEXIBLE RAID IN SSD

Futurewei Technologies, I...

1. A device comprising:an interface configured to be coupled to a data store comprising a plurality of storage pages; and
a storage controller coupled to the interface, the storage controller configured to organize a subset of the plurality of storage pages into a RAID group, each storage page of the RAID group including membership information identifying the subset of the plurality of storage pages in the RAID group, wherein the membership information is a plurality of bits, each bit of the plurality of bits corresponding to a respective storage page of the RAID group, each bit indicating whether the respective storage page is in the RAID group.
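
The membership information amounts to a per-page bit vector. A minimal sketch, assuming the bits are packed into a single integer with bit i corresponding to storage page i of the RAID group:

def in_raid_group(membership_bits, page_index):
    # Each bit of the membership word corresponds to one storage page of
    # the RAID group; a set bit means that page is a member.
    return bool((membership_bits >> page_index) & 1)

membership = 0b1011   # pages 0, 1 and 3 belong to the group (illustrative)
print([in_raid_group(membership, i) for i in range(4)])   # [True, True, False, True]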

US Pat. No. 10,795,589

MEMORY SYSTEM AND MEMORY CONTROL METHOD

TOSHIBA MEMORY CORPORATIO...

1. A memory system comprising:a nonvolatile memory device having a physical storage area that includes a first bank and a second bank, each of the first and second banks including a first plane and a second plane, each of the first and second planes including a plurality of physical blocks, and each of the physical blocks including a plurality of pages, wherein one physical block is a unit of erasing and one page is a unit of reading and writing; and
a controller circuit configured to execute a patrol process on each of the plurality of pages of each of the plurality of physical blocks of each of the first and second planes of each of the first and second banks in the nonvolatile memory device by repeatedly performing a first process on one page across each of the plurality of physical blocks of each of the first and second planes of each of the first and second banks sequentially for each of the plurality of pages, a sequence of the first processes including a second process of reading and a third process of data verification on a first page across each of the plurality of physical blocks of each of the first and second planes of each of the first and second banks and then the second process of reading and the third process of data verification on a second page across each of the plurality of physical blocks of each of the first and second planes of each of the first and second banks, wherein
during the first process, the controller circuit performs the third process of data verification through a plurality of channels wherein data verification is performed on a plural number of physical blocks in parallel.
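
The claimed patrol order can be visualized as nested loops with the page index outermost, so that one page is read and verified across every block, plane, and bank before the next page is visited. A sketch with illustrative counts:

def patrol_order(num_pages, num_banks=2, num_planes=2, num_blocks=2):
    # Yield (page, bank, plane, block) in the claimed patrol order: one
    # page across every block of every plane of every bank, then the next
    # page. The counts are illustrative, not from the patent.
    for page in range(num_pages):
        for bank in range(num_banks):
            for plane in range(num_planes):
                for block in range(num_blocks):
                    yield page, bank, plane, block

for step in list(patrol_order(num_pages=2))[:4]:
    print(step)   # (0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 0, 1, 1)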

US Pat. No. 10,795,588

CHECK POINT RECOVERY BASED ON IDENTIFYING USED BLOCKS FOR BLOCK-BASED BACKUP FILES

EMC IP HOLDING COMPANY LL...

1. A system for check point recovery based on identifying used blocks for block-based backup files of a volume, the system comprising:a processor; and
a processor-based application, which when executed on a computer, will cause the processor to:
receive a request to restore the volume that was backed up at a user specified time;
initialize a vector for payload blocks to be recovered;
identify, in response to the receiving the request to restore the volume, data blocks within a plurality of backup files of the volume that were being used at the user specified time and data blocks that were unused at the user specified time, the identifying being performed using a disk layout file that lists used and unused data blocks within the corresponding backup files, the unused blocks having been deleted before the user specified time, the plurality of backup files includes a full backup file and a plurality of corresponding incremental backup files;
assign to the initialized vector the payload blocks that store the identified used data blocks in the plurality of backup files;
assign each of the backup files that store the payload blocks to a backup list;
recover only the data blocks that were being used at the user specified time as identified from the disk layout file by: (i) identifying the used data blocks stored in each of the incremental backup files, (ii) appending the payload blocks that store identified used data blocks to the initialized vector, and (iii) creating a merged file based on the backup files in the backup list for each payload block in the vector, the recovering being performed without reading each data block backed up via the at least one backup file for the system; and
restore the volume by restoring only the used data blocks within the merged file that were being used at the user specified time as identified from the disk layout file.
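
A rough sketch of the recovery step: only the blocks listed as used in the disk layout file are merged from the full backup and its incrementals, with later incrementals taking precedence. The dictionary-based representation of backup files is an assumption for illustration only.

def restore_used_blocks(full_backup, incrementals, used_blocks):
    # `full_backup` and each incremental map block number -> payload;
    # later incrementals override earlier ones. Only blocks marked used
    # in the disk layout are merged, so unused blocks are never read.
    merged = {}
    for blk in used_blocks:
        if blk in full_backup:
            merged[blk] = full_backup[blk]
    for inc in incrementals:               # oldest to newest
        for blk, payload in inc.items():
            if blk in used_blocks:
                merged[blk] = payload
    return merged

full = {0: b"A", 1: b"B", 2: b"C"}
incs = [{1: b"B1"}, {2: b"C2", 3: b"D"}]
print(restore_used_blocks(full, incs, used_blocks={0, 1, 2}))
# {0: b'A', 1: b'B1', 2: b'C2'}  -- block 3 was unused and is skipped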

US Pat. No. 10,795,587

STORAGE SYSTEM AND CLUSTER CONFIGURATION CONTROL METHOD

HITACHI, LTD., Tokyo (JP...

1. A storage system including N storage nodes that are members of a storage cluster (N being an integer equal to or larger than 3), wherein a first storage node that is any one of the N storage nodes
determines whether importance of a second storage node is equal to or larger than a predetermined importance and reliability of the second storage node is equal to or larger than a predetermined reliability, the second storage node being a storage node set as an object among storage nodes other than the first storage node,
when the determination result is true, performs reintegration which is processing for causing the second storage node to leave the storage cluster and causing the second storage node to become a member of the storage cluster again,
the importance of the second storage node depends on highness of availability when assuming that the second storage node has left the storage cluster,
the reliability of the second storage node depends on a tendency of operation of the second storage node,
each of the N storage nodes comprises a processor unit,
the N storage nodes comprise Q cluster control programs (Q being an integer equal to or larger than 2 and equal to or smaller than N),
the Q cluster control programs are respectively arranged in Q storage nodes,
the Q cluster control programs include:
a primary cluster control program; and
one or more secondary cluster control programs, the one or more secondary cluster control programs each being a cluster control program other than the primary cluster control program, and arranged in one or more storage nodes other than a storage node where the primary cluster control program is arranged,
the primary cluster control program, when executed by the processor unit in a storage node where the primary cluster control program is arranged, manages a cluster serving as the storage system,
when the primary cluster control program is stopped, any one of the one or more secondary cluster control programs becomes primary instead of the primary cluster control program,
the importance of the second storage node depends on a remaining node number that is Q when the second storage node is not included in Q storage nodes where the Q cluster control programs are arranged and is a value obtained by subtracting the number of the second storage nodes from Q when the second storage node is included in the Q storage nodes,
the importance of the second storage node depends on whether the remaining node number is equal to or less than a threshold of the remaining node number, and
the threshold of the remaining node number is a value obtained by adding the number of storage nodes having a possibility to simultaneously be targeted for the reintegration to the majority of Q.
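
A loose sketch of the claimed determination follows: the remaining node number and both threshold comparisons drive the reintegration decision. The numeric importance and reliability scores and the specific threshold values are invented placeholders, not the patent's method.

def remaining_node_number(q, second_node_hosts_control_program, targeted=1):
    # Q if the second storage node hosts no cluster control program,
    # otherwise Q minus the number of targeted second storage nodes.
    return q if not second_node_hosts_control_program else q - targeted

def node_threshold(q, possibly_targeted_together=1):
    # Majority of Q plus the number of nodes that might be targeted for
    # reintegration at the same time (values here are illustrative).
    return (q // 2 + 1) + possibly_targeted_together

def may_reintegrate(importance, reliability, imp_threshold, rel_threshold):
    # Per the claim, reintegration proceeds only when both comparisons hold.
    return importance >= imp_threshold and reliability >= rel_threshold

q = 5
remaining = remaining_node_number(q, second_node_hosts_control_program=True)
print(remaining, remaining <= node_threshold(q))        # 4 True -> feeds importance
print(may_reintegrate(0.9, 0.8, 0.5, 0.5))              # True -> reintegrate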

US Pat. No. 10,795,586

SYSTEM AND METHOD FOR OPTIMIZATION OF GLOBAL DATA PLACEMENT TO MITIGATE WEAR-OUT OF WRITE CACHE AND NAND FLASH

Alibaba Group Holding Lim...

1. A computer-implemented method for facilitating global data placement in a storage device, the method comprising:receiving a request to write first data to the storage device;
selecting, based on at least one factor, at least one of a plurality of physical media of the storage device to which to write the first data,
wherein the at least one factor includes a block size associated with the first data and a latency requirement for the first data,
wherein the plurality of physical media includes at least a fast cache medium and a solid state drive, and
wherein selecting the at least one physical medium involves:
in response to determining that the block size associated with the first data is greater than a predetermined size, and determining that the latency requirement is greater than a predetermined level, selecting the solid state drive; and
writing the first data to the at least one selected physical medium.
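
The selection step can be sketched as a simple predicate on block size and latency requirement. The claim recites only the solid-state-drive branch; routing everything else to the fast cache medium, and the threshold values, are assumptions added for illustration.

def select_medium(block_size, latency_requirement,
                  size_threshold=4096, latency_threshold=1.0):
    # Per the claim, a block larger than a predetermined size whose
    # latency requirement exceeds a predetermined level goes to the
    # solid state drive; the fallback and thresholds are placeholders.
    if block_size > size_threshold and latency_requirement > latency_threshold:
        return "solid_state_drive"
    return "fast_cache_medium"

print(select_medium(block_size=65536, latency_requirement=5.0))  # solid_state_drive
print(select_medium(block_size=512, latency_requirement=0.1))    # fast_cache_medium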

US Pat. No. 10,795,585

NONVOLATILE MEMORY STORE SUPPRESSION

Intel Corporation, Santa...

1. An electronic processing system, comprising:memory; and
a memory controller communicatively coupled to the memory, the memory controller including logic to:
determine if a memory operation on the memory is avoidable,
suppress the memory operation if the memory operation is determined to be avoidable,
in response to one or more silent store operations, collect information related to the one or more silent store operations comprising collection of a clock duration and a data address related to the one or more silent store operations, and
report the information related to the one or more silent store operations comprising storage of the clock duration and the data address related to the one or more silent store operations.
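
A silent store writes a value identical to the one already stored, so the memory operation is avoidable. A minimal sketch of suppressing such a store while collecting a clock duration and data address, using a Python dictionary and perf_counter as stand-ins for the memory and the hardware clock:

import time

memory = {}            # address -> value (stand-in for the physical memory)
silent_store_log = []  # collected (clock_duration, address) records

def store(address, value):
    start = time.perf_counter()
    if memory.get(address) == value:
        # Silent store: the write is avoidable, so suppress it and record
        # the clock duration and data address for later reporting.
        silent_store_log.append((time.perf_counter() - start, address))
        return False
    memory[address] = value
    return True

store(0x100, 42)       # real store
store(0x100, 42)       # suppressed silent store
print(silent_store_log)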

US Pat. No. 10,795,584

DATA STORAGE AMONG A PLURALITY OF STORAGE DRIVES

Liqid Inc., Broomfield, ...

1. A data storage device, comprising:a plurality of storage drives each comprising an associated drive Peripheral Component Interconnect Express (PCIe) interface; and
a control system configured to:
receive, over a host PCIe link, write operations for storage of data by the data storage device;
process the write operations against storage allocation information to apportion the data for storage among more than one of the storage drives based at least on processing the data against the storage allocation information to determine target storage drives to store the data; and
transfer corresponding portions of the data to associated storage drives over corresponding drive PCIe interfaces.

US Pat. No. 10,795,583

AUTOMATIC DATA PLACEMENT MANAGER IN MULTI-TIER ALL-FLASH DATACENTER

SAMSUNG ELECTRONICS CO., ...

1. A system, comprising:a plurality of storage devices offering a plurality of resources, the plurality of storage devices organized into a plurality of storage tiers and storing first data for a first virtual machine and second data for a second virtual machine;
a receiver to receive a first Input/Output (I/O) command from the first virtual machine, a second I/O command from the second virtual machine, first performance data modelling the performance of the first virtual machine in the plurality of storage tiers, and second performance data modelling the performance of the second virtual machine in the plurality of storage tiers;
a transmitter to transmit a first response to the first I/O command to the first virtual machine and a second response to the second I/O command to the second virtual machine; and
an auto-tiering controller to select a first storage tier to store the first data for the first virtual machine, to select a second storage tier to store the second data for the second virtual machine, and to migrate at least one of the first data for the first virtual machine to the first storage tier or the second data for the second virtual machine to the second storage tier responsive to the first performance data and the second performance data,
wherein the auto-tiering controller is operative to select the first storage tier to store the first data for the first virtual machine and to select the second storage tier to store the second data for the second virtual machine to optimize the performance of all virtual machines across the plurality of storage tiers,
wherein the auto-tiering controller is operative to factor in a change in performance resulting from migrating the at least one of the first data for the first virtual machine to the first storage tier or the second data for the second virtual machine to the second storage tier and a migration cost of migrating at least one of the first data for the first virtual machine to the first storage tier or the second data for the second virtual machine to the second storage tier.
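
A loose sketch of weighing the modelled performance change against the migration cost when picking a tier for one virtual machine; the net-benefit formula and the example numbers are assumptions, not the patented optimization.

def choose_tier(current_tier, perf_by_tier, migration_cost):
    # `perf_by_tier` maps tier -> modelled performance score for this VM;
    # `migration_cost` maps tier -> cost of moving the VM's data there.
    best_tier, best_net = current_tier, 0.0
    current_perf = perf_by_tier[current_tier]
    for tier, perf in perf_by_tier.items():
        net = (perf - current_perf) - (0 if tier == current_tier else migration_cost[tier])
        if net > best_net:
            best_tier, best_net = tier, net
    return best_tier

perf = {"nvme": 100.0, "sata_ssd": 60.0}
cost = {"nvme": 25.0, "sata_ssd": 5.0}
print(choose_tier("sata_ssd", perf, cost))   # 'nvme' (gain 40 outweighs cost 25)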

US Pat. No. 10,795,582

APPARATUSES AND METHODS FOR SIMULTANEOUS IN DATA PATH COMPUTE OPERATIONS

Micron Technology, Inc., ...

1. An apparatus, comprising:an array of memory cells;
sensing circuitry selectably coupled to the array of memory cells;
a controller associated with the array; and
a plurality of input/output (I/O) lines shared as a data path for in data path compute operations associated with the array, wherein:
the plurality of shared I/O lines are selectably coupled to a plurality of logic stripes in the data path; and
wherein the controller is configured to cause a first portion of logic stripes to perform a first number of operations on a first portion of data moved from the array of memory cells to the first portion of logic stripes while a second portion of logic stripes perform a second number of operations on a second portion of data moved from the array of memory cells to the second portion of logic stripes.

US Pat. No. 10,795,581

GPT-BASED DATA STORAGE PARTITION SECURING SYSTEM

Dell Products L.P., Roun...

1. A Globally Unique Identifier (GUID) Partition Table (GPT) partition locking key system, comprising:a key management system; and
a server device that is coupled to the key management system through a network, wherein the server device includes:
a storage system including a Globally Unique Identifier (GUID) Partition Table (GPT) identifying a plurality of data storage partitions provided on the storage system and a hidden partition provided on the storage system;
a remote access controller device that is configured to retrieve a partition locking key through the network from the key management system and provide the partition locking key for storage in the hidden partition identified by the GPT;
an operating system engine that is configured to provide an operating system application; and
a Basic Input/Output System (BIOS) that includes a runtime service that is configured to:
receive a request to provide access for the operating system application to a first data storage partition of the plurality of data storage partitions that is provided on the storage system and identified by the GPT;
access the partition locking key in the hidden partition that is provided on the storage system and identified by the GPT; and
unlock, using the partition locking key, the first data storage partition to allow the operating system application to access data stored on the first data storage partition.

US Pat. No. 10,795,580

CONTENT ADDRESSABLE MEMORY SYSTEM

XILINX, INC., San Jose, ...

1. A method for configuring a hash content addressable memory system that includes a first hash content addressable memory block (HCB) that is a physical subsystem of the hash content addressable memory system, the method comprising:configuring, by a processor executing software, bus select logic within the first HCB to respond to only a first client from a plurality of clients;
configuring, by the processor, the first HCB to support either one table or multiple tables used by the first client; and,
writing, by the processor, a first key mask into the first HCB, the first key mask specifying a masked and an unmasked portion of a search key and the first key mask also specifying a size of the search key;
wherein each client from the plurality of clients is connected to a dedicated operation bus from a plurality of operation buses, the dedicated operation bus not being connected to any other client from the plurality of clients;
wherein each client from the plurality of clients is connected to a dedicated key bus from a plurality of key buses, the dedicated key bus not being connected to any other client from the plurality of clients;
wherein the bus select logic is connected to all the operation buses in the plurality of operation buses; and,
wherein the bus select logic is connected to all the key buses in the plurality of key buses; and,
wherein when the bus select logic is configured to respond to the first client as configured by the processor, the first HCB reads input from only the dedicated operation bus and the dedicated key bus connected to the first client and ignores input from operation buses and dedicated key buses connected to all other clients in the plurality of clients.
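
The key mask behavior can be sketched as masking the search key before a hash lookup; a Python dict stands in for the HCB hardware, and the mask and key-size values are illustrative only.

def masked_lookup(table, search_key, key_mask, key_size_bits):
    # Keep only the unmasked bit positions of the search key, limited to
    # the key size implied by the mask, then look it up in the hash table.
    size_mask = (1 << key_size_bits) - 1
    masked_key = (search_key & key_mask) & size_mask
    return table.get(masked_key)

table = {0x00AB00: "entry-1"}          # keys stored already masked
key_mask = 0x00FF00                    # only the middle byte participates
print(masked_lookup(table, 0x12AB34, key_mask, key_size_bits=24))   # entry-1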

US Pat. No. 10,795,579

METHODS, APPARATUSES, SYSTEM AND COMPUTER PROGRAM PRODUCTS FOR RECLAIMING STORAGE UNITS

EMC IP Holding Company LL...

1. A method of reclaiming storage units, comprising:receiving, by a system comprising a processor, a request to perform an operation from an application;
in response to the operation being performed, recording the operation in a journal recording of a sequence of operations that have been performed, wherein the operation is part of the sequence of operations;
determining that a first storage unit, allocated to store one or more objects at a first node, is reclaimable to store second data representative of a second object created at the first node, wherein first data representative of a first object in the first storage unit is backed up to a second storage unit at a second node;
in response to the determining that the first storage unit is reclaimable to store the second data representative of the second object created at the first node, determining a condition to be satisfied for reclaiming the second storage unit;
sending a command indicating the condition to the second node, such that the second node reclaims the second storage unit in response to the condition being satisfied; and
in response to the command being sent, reclaiming the first storage unit to store the second data representative of the second object created at the first node.

US Pat. No. 10,795,578

DEDUPLICATING DATA BASED ON BOUNDARY IDENTIFICATION

Red Hat, Inc., Raleigh, ...

1. A system comprising:a file storage system configured to store data in the form of storage system blocks on a non-volatile storage device, each storage system block being a same fixed size, wherein the file storage system is configured to store storage system blocks of only the fixed size;
a receiver module configured to receive write requests and read requests, wherein each write request comprises an indication to write a particular data item to the file storage system, and wherein each read request comprises an indication to read a particular data item from the file storage system; and
a data alignment module configured to:
identify, in response to the receiver module receiving a write request, a plurality of boundaries between portions of data within a data item of the write request, and provide to the file storage system an indication to allocate storage system blocks to the data item of the write request based on the plurality of boundaries, at least some of the portions of data being a different size than others of the portions of data, each storage system block being allocated to a single one of the portions of data; and
retrieve, in response to the receiver module receiving a read request, storage system blocks of the file storage system allocated to a data item of the read request.
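
A sketch of the allocation step: boundaries split the data item into variable-size portions, and whole fixed-size blocks are allocated per portion so that no block spans two portions. The block size and the return format are assumptions for illustration.

import math

BLOCK_SIZE = 4096   # the file system's single fixed block size (illustrative)

def allocate_blocks(boundaries, item_length):
    # Return (portion_index, blocks_needed) per portion delimited by the
    # boundaries; allocating whole blocks per portion keeps every block
    # assigned to a single portion of the data item.
    edges = [0] + sorted(boundaries) + [item_length]
    allocation = []
    for i in range(len(edges) - 1):
        portion_len = edges[i + 1] - edges[i]
        allocation.append((i, math.ceil(portion_len / BLOCK_SIZE)))
    return allocation

# A 10 KiB item with boundaries at 1 KiB and 6 KiB -> portions of 1, 5 and 4 KiB.
print(allocate_blocks([1024, 6144], 10240))   # [(0, 1), (1, 2), (2, 1)]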

US Pat. No. 10,795,577

DE-DUPLICATION OF CLIENT-SIDE DATA CACHE FOR VIRTUAL DISKS

Commvault Systems, Inc., ...

1. A method of writing a block of data to a virtual disk on a remote storage platform, said method comprising:receiving, at a first computer server of a plurality of computer servers of a compute farm, a write request to write said block of data to said remote storage platform, said write request including an offset within said virtual disk and originating at an other second computer server of said compute farm different from said first computer server,
wherein said write request originated at a first software application of said compute farm that writes to said remote storage platform, and
wherein said first computer server receives all write requests to write data to said remote storage platform from all software applications, including the first software application, executing on any computer server in said compute farm that enable client-side caching for a virtual disk in said remote storage platform;
writing said block of data to a storage node of said remote storage platform;
calculating a hash value of said block of data using a hash function;
determining whether said hash value exists in a first metadata table of a block cache of said first computer server, wherein said block cache is a global cache for all said software applications that enable client-side caching in said compute farm;
when said first computer server determines that said hash value exists in said first metadata table, adding an entry in a second metadata table of said block cache including said virtual disk offset and said hash value as a key/value pair; and
wherein said calculating, said determining, and said adding are performed only when client-side caching is enabled for said virtual disk.
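
A minimal sketch of the two metadata tables: the first is keyed by content hash, and an offset-to-hash entry is added to the second only when the hash is already present in the first. The dictionary layout and the choice of SHA-256 are assumptions; the write to the storage node itself is omitted.

import hashlib

hash_table = {}      # first metadata table: hash -> True once seen in the block cache
offset_table = {}    # second metadata table: (virtual disk, offset) -> hash

def write_block(vdisk, offset, block, client_side_caching=True):
    # The block is always written to the remote storage platform first
    # (omitted); the cache bookkeeping runs only when client-side caching
    # is enabled for the virtual disk.
    if not client_side_caching:
        return
    digest = hashlib.sha256(block).hexdigest()
    if digest in hash_table:
        # Duplicate content: record only the offset -> hash key/value pair.
        offset_table[(vdisk, offset)] = digest
    else:
        hash_table[digest] = True

write_block("vd1", 0, b"hello")
write_block("vd1", 4096, b"hello")     # same content at a second offset
print(offset_table)                    # {('vd1', 4096): '2cf24...'}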

US Pat. No. 10,795,576

DATA RELOCATION IN MEMORY

Micron Technology, Inc., ...

1. An apparatus, comprising:a memory having a plurality of physical units of memory cells, wherein:
each of the physical units has a different sequential physical address associated therewith;
a first number of the physical units have data stored therein; and
a second number of the physical units do not have data stored therein, wherein the physical address associated with each respective one of the second number of physical units is a different consecutive physical address in the sequence; and
circuitry configured to relocate the data stored in the physical unit of the first number of physical units, whose physical address in the sequence is immediately before the first of the consecutive physical addresses associated with the second number of physical units, to the last of the consecutive physical addresses associated with the second number of physical units.
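
The relocation rule can be sketched on a list where None marks a physical unit with no data: the data at the address immediately before the first of the consecutive empty addresses moves to the last of those empty addresses. The list representation is an illustration, not the claimed circuitry.

def relocate(units):
    # Find the consecutive empty addresses, then move the data stored at
    # the address immediately before the first empty address to the last
    # empty address in the sequence.
    empties = [i for i, u in enumerate(units) if u is None]
    first_empty, last_empty = empties[0], empties[-1]
    units[last_empty] = units[first_empty - 1]
    units[first_empty - 1] = None
    return units

# Addresses 0-5; addresses 3 and 4 are empty, so the data at 2 moves to 4.
print(relocate(["a", "b", "c", None, None, "d"]))
# ['a', 'b', None, None, 'c', 'd']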

US Pat. No. 10,795,575

DYNAMICALLY REACTING TO EVENTS WITHIN A DATA STORAGE SYSTEM

International Business Ma...

1. A computer-implemented method, comprising:identifying an event associated with data stored in a data storage system, utilizing one or more hooks;
comparing a plurality of policy rules to metadata identified for the event in order to determine a matching policy rule, where the metadata includes an indication of data read, written, and altered during the event; and
synchronously or asynchronously implementing one or more actions according to the matching policy rule, the one or more actions including enqueueing an event onto an external queue.
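
A loose sketch of comparing policy rules to event metadata and enqueueing an event onto an external queue as one of the resulting actions; the rule structure, action names, and use of queue.Queue are assumptions for illustration.

import queue

external_queue = queue.Queue()   # stand-in for the external queue in the claim

def handle_event(event_metadata, policy_rules):
    # Find the first rule whose conditions match the event metadata and
    # carry out its actions; enqueueing is the only action modelled here.
    for rule in policy_rules:
        if all(event_metadata.get(k) == v for k, v in rule["match"].items()):
            for action in rule["actions"]:
                if action == "enqueue":
                    external_queue.put(event_metadata)
            return rule
    return None

rules = [{"match": {"operation": "write"}, "actions": ["enqueue"]}]
handle_event({"operation": "write", "data_altered": True}, rules)
print(external_queue.qsize())   # 1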