US Pat. No. 10,691,582

CODE COVERAGE

Sony Interactive Entertai...

1. A method for identifying applicability of tests to new code sets, the method comprising:
receiving new code to be merged into a master code branch of an operational software application;
receiving a plurality of tests, each test applicable to a respective different type of code portion;
evaluating applicability of the tests to the new code, wherein evaluating the applicability of the tests comprises identifying one or more portions of the new code to which at least one of the tests is applicable and at least one other portion of the new code to which none of the tests is applicable;
generating a visual map of the new code, wherein the generated visual map:
illustrates which of the portions of the new code have been identified as the portions to which the at least one of the tests is applicable and which of the portions of the new code have been identified as the at least one other portion to which none of the tests is applicable, and
indicates an outcome of the at least one test applied to the respective identified portion of the new code;
storing the outcome of each of the tests applied to the identified portions of the new code in memory; and
merging the new code into the master code branch based on the outcome of each of the applied tests.
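
A minimal sketch, in Python, of the kind of applicability evaluation and merge gating the claim above describes. The portion types, test names, and the text rendering that stands in for the claimed visual map are illustrative assumptions, not the patented implementation.

# Sketch: classify portions of new code, find applicable tests, run them,
# and gate the merge on the outcomes. All names here are illustrative.

def evaluate_applicability(portions, tests):
    """Map each code portion to the tests that apply to its type."""
    coverage = {}
    for portion_id, portion_type in portions.items():
        coverage[portion_id] = [t for t, t_type in tests.items() if t_type == portion_type]
    return coverage

def render_map(coverage, outcomes):
    """Text stand-in for the claimed visual map: covered vs. uncovered portions."""
    lines = []
    for portion_id, applicable in coverage.items():
        if not applicable:
            lines.append(f"{portion_id}: NO APPLICABLE TESTS")
        else:
            results = ", ".join(f"{t}={'PASS' if outcomes[t] else 'FAIL'}" for t in applicable)
            lines.append(f"{portion_id}: {results}")
    return "\n".join(lines)

def merge_if_clean(coverage, outcomes):
    """Merge only when every applied test passed."""
    applied = {t for tests in coverage.values() for t in tests}
    return all(outcomes[t] for t in applied)

portions = {"parser.c:10-40": "parsing", "net.c:5-80": "network", "ui.c:1-30": "ui"}
tests = {"test_parse_headers": "parsing", "test_socket_retry": "network"}
coverage = evaluate_applicability(portions, tests)
outcomes = {"test_parse_headers": True, "test_socket_retry": True}
print(render_map(coverage, outcomes))
print("merge into master:", merge_if_clean(coverage, outcomes))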

US Pat. No. 10,691,581

DISTRIBUTED SOFTWARE DEBUGGING SYSTEM

SAS INSTITUTE INC., Cary...

1. A non-transitory computer-readable medium having stored thereon computer-readable instructions that when executed by a first computing device cause the first computing device to:
receive a connection request from a debug user interface (UI) by a middle tier instance executing on the first computing device through a first predefined computer port of the first computing device, wherein the connection request includes authentication data;
connect to the debug UI when an authentication is successful using the included authentication data;
receive a connection data request from the debug UI by the middle tier instance, wherein the connection data request indicates the connection is for a debug engine executing on a second computing device different from the first computing device;
send connection data for the debug engine from the middle tier instance to the debug UI, wherein the connection data is provided to the debug engine from the debug UI, wherein the connection data includes a second predefined computer port of the first computing device and a security token, wherein the second predefined computer port is different than the first predefined computer port;
(A) receive an initial packet by the middle tier instance from the debug engine through the second predefined computer port of the first computing device, wherein the initial packet includes the security token;
(B) evaluate, by the middle tier instance, the security token included in the received initial packet;
(C) when the evaluated security token matches the security token included in the sent connection data,
define, by the middle tier instance, a unique engine identifier for the debug engine;
send a notification from the middle tier instance to the debug engine confirming a successful connection; and
send an engine connection confirmation from the middle tier instance to the debug UI indicating that a connection with the debug engine has been established and including the unique engine identifier for the debug engine;
(D) repeat (A) to (C) with each debug engine of a plurality of debug engines as the debug engine;
(E) receive a debug command from the debug UI by the middle tier instance, wherein the debug command includes a debug action to perform by the debug engine; and
(F) send the debug command to the debug engine to debug a software application as it is executing on the second computing device.
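
A minimal sketch of the middle-tier bookkeeping described above: hand out a port and security token to the debug UI, then admit debug engines that present the same token and assign each a unique identifier. Transport details (sockets, the actual ports) are stubbed out; the class and method names are assumptions for illustration only.

import secrets

class MiddleTier:
    def __init__(self):
        self.token = None
        self.engines = {}          # unique engine id -> engine name
        self.next_engine_id = 1

    def connection_data(self):
        """Data sent to the debug UI: second port plus a fresh security token."""
        self.token = secrets.token_hex(16)
        return {"port": 5551, "token": self.token}

    def register_engine(self, initial_packet):
        """(A)-(C): evaluate the token and, on a match, assign a unique engine id."""
        if initial_packet.get("token") != self.token:
            return None                      # token mismatch: reject the engine
        engine_id = self.next_engine_id
        self.next_engine_id += 1
        self.engines[engine_id] = initial_packet["engine"]
        return engine_id                     # confirmation carries the unique id

    def forward_command(self, engine_id, debug_action):
        """(E)-(F): relay a debug command from the UI to a registered engine."""
        if engine_id not in self.engines:
            raise KeyError("unknown engine")
        return f"sent '{debug_action}' to {self.engines[engine_id]}"

tier = MiddleTier()
data = tier.connection_data()                       # given to the debug UI
eid = tier.register_engine({"engine": "engine-on-host-2", "token": data["token"]})
print(tier.forward_command(eid, "step-over"))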

US Pat. No. 10,691,580

DIAGNOSING APPLICATIONS THAT USE HARDWARE ACCELERATION THROUGH EMULATION

XILINX, INC., San Jose, ...

1. A method, comprising:
emulating, using a processor, a kernel designated for hardware acceleration by executing a device program binary that, when executed, implements a register transfer level simulator for a register transfer level file specifying the kernel for implementation in a dynamically reconfigurable region of programmable circuitry of a device;
wherein the device program binary is executed in coordination with a host binary that, when executed, emulates a host, and a static circuitry binary implemented as object code;
wherein the static circuitry binary is compiled from high-level language models of static circuitry implemented in a static region of the programmable circuitry of the device, wherein the high-level language models of the static circuitry include a device memory model for a memory, an interface model for an interface to the host, and one or more interconnect models;
wherein the static circuitry binary, when executed, emulates operation of the static circuitry coupled to the kernel and the memory and wherein the one or more interconnect models are configured to translate between high level language transactions used by the device memory model and the interface model and register transfer level signals used by the register transfer level simulator;
during the emulating, the register transfer level simulator calling one or more functions of the static circuitry binary and detecting, using diagnostic program code of the static circuitry binary, an error condition caused by the device program binary and relating to a memory access violation or a kernel deadlock; and
outputting a notification of the error condition.

US Pat. No. 10,691,579

SYSTEMS INCLUDING DEVICE AND NETWORK SIMULATION FOR MOBILE APPLICATION DEVELOPMENT

WAPP TECH CORP., Alberta...

1. A non-transitory, computer-readable medium comprising software instructions for developing an application to be run on a mobile device, wherein the software instructions, when executed, cause a computer to:
display a list of one or more mobile device types from which a user can select;
simulate one or more characteristics of a selected mobile device type;
initiate loading of at least one of the selected characteristics from at least one of a remote server and a computer-readable medium;
monitor utilization of one or more resources of the selected mobile device type over time as an application is running;
display a representation of one or more of the monitored resources.
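
A minimal sketch of the simulate-and-monitor loop described above: a device type is chosen from a list, a few of its characteristics are loaded, and resource utilization is sampled over time while a pretend application runs. The device profiles, fake workload, and text display are invented assumptions.

import random

DEVICE_TYPES = {
    "phone-basic":    {"cpu_mhz": 800,  "memory_mb": 512,  "network_kbps": 500},
    "phone-flagship": {"cpu_mhz": 2400, "memory_mb": 4096, "network_kbps": 20000},
}

def simulate(device_type, seconds=5, seed=1):
    random.seed(seed)
    profile = DEVICE_TYPES[device_type]                 # loaded characteristics
    samples = []
    for t in range(seconds):                            # monitor utilization over time
        samples.append({
            "t": t,
            "cpu_pct": round(random.uniform(10, 90), 1),
            "memory_mb": round(random.uniform(0.2, 0.8) * profile["memory_mb"]),
        })
    return samples

print("available device types:", list(DEVICE_TYPES))
for sample in simulate("phone-basic"):
    print(sample)                                       # representation of monitored resources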

US Pat. No. 10,691,578

DERIVING CONTEXTUAL INFORMATION FOR AN EXECUTION CONSTRAINED MODEL

The MathWorks, Inc., Nat...

1. A method comprising:
storing in a memory a graphical model having executable semantics, the graphical model including model elements;
receiving a constraint on execution of the graphical model, where the constraint restricts the execution of the graphical model;
storing the constraint in the memory;
receiving a scope of analysis for a given model element of the graphical model identified for analysis, the scope of analysis defining a region of the graphical model whose execution depends on execution of the given model element;
executing the graphical model while the constraint on the execution is imposed;
automatically deriving, by a processor coupled to the memory, contextual information, where the contextual information includes a group of the model elements of the graphical model that are contained within the region of the graphical model defined by the scope of analysis, and are active during the executing while the constraint is imposed; and
outputting the automatically derived contextual information to an output device coupled to the processor, wherein at least one of:
i) a first model element of the graphical model is a state-based element, and the constraint includes execution of the graphical model while the first model element either remains in a first state or is precluded from entering a second state;
ii) the constraint is derived from one or more inputs for a dynamic simulation of the graphical model, a start time, and an end time;
iii) one or more of the model elements have one or more executable modes where the one or more executable modes are implemented through different states of the one or more of the model elements, and the constraint restricts the one or more of the model elements to a given executable mode that is active during a specified time epoch; or
iv) the graphical model includes a subsystem having a plurality of blocks and a state chart having a plurality of states, the constraint holds the state chart in one of the plurality of states, and the automatically derived contextual information includes an identification of which of the plurality of blocks of the subsystem are active during the executing of the graphical model while the constraint is applied to the state chart.

US Pat. No. 10,691,577

IDENTIFYING FLAWED DEPENDENCIES IN DEPLOYED APPLICATIONS

SNYK LIMITED, London (GB...

1. A method performed by a processing apparatus external to a deployment platform, wherein said method comprises:
obtaining a list of dependencies used by an application that is deployed on the deployment platform; said obtaining the list of dependencies comprising:
retrieving a package specification of the application from the deployment platform, wherein said retrieving is performed in response to a retrieval query sent to the deployment platform;
determining a time of deployment of the application, wherein the time of deployment is a time when the application was deployed on the deployment platform; and
resolving the package specification based on the time of deployment, said resolving comprising determining a set of dependencies that were obtained by the deployment platform at the time of deployment in order to satisfy the package specification, by mimicking the resolution performed by the deployment platform;
mapping each dependency of the list of dependencies with a flaws database, the flaws database comprising indications of known flaws for different dependencies and different versions thereof; and
based on said mapping, determining one or more flaws in the application, wherein said determining the one or more flaws is performed externally to the deployment platform and without executing a monitoring process thereon.
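
A minimal sketch of resolving a package specification "as of" deployment time and mapping the result against a flaws database, as described above. The specification format, version history, and database layout are simplified assumptions.

from datetime import date

# version history per package: (release_date, version)
RELEASES = {
    "liba": [(date(2019, 1, 1), "1.0"), (date(2019, 6, 1), "1.1"), (date(2020, 1, 1), "2.0")],
    "libb": [(date(2019, 3, 1), "0.9"), (date(2019, 9, 1), "1.0")],
}

FLAWS_DB = {("liba", "1.1"): ["CVE-XXXX-0001"], ("libb", "1.0"): []}

def resolve(spec, deployed_on):
    """Mimic the platform's resolution: latest release available at deployment time."""
    deps = {}
    for name in spec:
        available = [v for d, v in RELEASES[name] if d <= deployed_on]
        deps[name] = available[-1]           # newest version that existed then
    return deps

def find_flaws(deps):
    """Map each resolved dependency/version pair against the flaws database."""
    return {f"{n}@{v}": FLAWS_DB.get((n, v), []) for n, v in deps.items()}

deps = resolve(["liba", "libb"], deployed_on=date(2019, 10, 1))
print(deps)                 # {'liba': '1.1', 'libb': '1.0'}
print(find_flaws(deps))     # flaws found without touching the deployment platform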

US Pat. No. 10,691,576

MULTIPLE RESET TYPES IN A SYSTEM

Amazon Technologies, Inc....

1. A system, comprising:
a processor;
a memory coupled to the processor;
a plurality of functional units, including a Peripheral Component Interconnect Express (PCIe) unit coupled to the processor and a plurality of network interface units; and
a plurality of debug units, each debug unit associated with a respective functional unit of the plurality of functional units, wherein a given debug unit of the plurality of debug units is configured to track events and monitor performance of its corresponding respective functional unit, the given debug unit having a trace buffer that includes a static random access memory (SRAM), the trace buffer configured to store processing state of the corresponding respective functional unit;
wherein, upon detection of a PCIe unit error condition, the processor is configured to issue a debug reset command to reset each of the plurality of functional units, without issuing the debug reset command to the plurality of debug units, thereby retaining information stored in the trace buffer SRAM to be read by the processor for debugging the PCIe unit error condition;
wherein the given debug unit is configured to, upon receiving a power-up reset command from the processor, clear information stored in the trace buffer SRAM.

US Pat. No. 10,691,575

METHOD AND SYSTEM FOR SELF-OPTIMIZING PATH-BASED OBJECT ALLOCATION TRACKING

Dynatrace LLC, Waltham, ...

1. A computer-implemented method for monitoring memory allocation during execution of an application in a distributed computing environment, comprising:
identifying execution paths in an application using a control flow graph, where each node in the control flow graph belongs to only one identified execution path and the application resides on a computing device;
for each identified execution path in the application, determining memory allocations which occur during execution of a given execution path;
for each identified execution path in the application, determining only one location for an allocation counter in the given execution path, where the allocation counter in the given execution path increments in response to execution of the given execution path;
for each identified execution path in the application, instrumenting the given execution path with the allocation counter at the location, where the allocation counter is placed at the location in the given execution path and the allocation counter reports a count value for the allocation counter; and
determining, for the given execution path, semantics of the increment of the allocation counter of the given execution path as sum of memory allocation performed by the given execution path and corrections for memory allocations reported by another allocation counter in another execution path different than the given execution path.
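
A minimal sketch of the per-path counter semantics described above: each identified execution path gets exactly one counter, and the bytes attributed to a path are the allocations on that path plus or minus corrections for allocations already reported by another path's counter. The path names, sizes, and correction values are invented.

# allocations (in bytes) performed on each identified execution path
PATH_ALLOCS = {"path_A": [64, 128], "path_B": [32]}

# corrections: allocations that execute on this path but are reported by the
# counter of another path (negative), or vice versa (positive)
CORRECTIONS = {"path_A": -32, "path_B": +32}

def bytes_per_increment(path):
    """Semantics of one increment of the path's allocation counter."""
    return sum(PATH_ALLOCS[path]) + CORRECTIONS[path]

counters = {"path_A": 10, "path_B": 4}          # observed counter values
estimated = {p: counters[p] * bytes_per_increment(p) for p in counters}
print(estimated)    # {'path_A': 1600, 'path_B': 256}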

US Pat. No. 10,691,574

COMPATIBILITY CHECK FOR CONTINUOUS GLUCOSE MONITORING APPLICATION

DexCom, Inc., San Diego,...

1. A method comprising:
receiving, by at least one processor, one or more data values from a user equipment having a glucose monitoring application installed on the user equipment, the one or more data values representing results from one or more self-tests performed on the user equipment, the one or more self-tests validating proper operation of one or more features of one or more of the glucose monitoring application and the user equipment;
determining, by the at least one processor, whether the glucose monitoring application is compatible with an operating environment based at least on a comparison of the one or more data values with respective values from a predetermined list of results of self-tests; and
sending, by the at least one processor, a message to the user equipment based on the determining, the message causing the glucose monitoring application to operate in one or more of a normal mode, a safe mode, and a non-operational mode,
wherein the message causes the user equipment to display a user interface view on the user equipment while the glucose monitoring application is in the safe mode, the user interface view indicating that one or more ancillary functions are disabled; and
wherein the message causes the user equipment to display a user interface view on the user equipment while the glucose monitoring application is in the non-operational mode, the user interface view indicating that one or more core functions are disabled, wherein the one or more core functions include one or more modules that are essential to the operation of the glucose monitoring application and wherein the one or more ancillary functions include one or more modules that are not essential to the operation of the glucose monitoring application, wherein the core functions includes one or more of generating an alert if a glucose level of a user is outside of a target range, displaying a glucose level, or prompting calibration of a glucose sensor assembly.
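
A minimal sketch of the compatibility decision described above: compare self-test results reported by the device against a predetermined list and pick an operating mode. The test names, expected values, and mode rules are assumptions, not DexCom's actual checks.

EXPECTED = {"render_latency_ms": 50, "rng_ok": 1, "storage_ok": 1}

CORE_TESTS = {"rng_ok", "storage_ok"}           # failures here disable core functions

def choose_mode(reported):
    failed = {k for k, v in EXPECTED.items() if reported.get(k) != v}
    if not failed:
        return "normal"
    if failed & CORE_TESTS:
        return "non-operational"    # core functions (alerts, display, calibration) disabled
    return "safe"                   # only ancillary functions disabled

print(choose_mode({"render_latency_ms": 50, "rng_ok": 1, "storage_ok": 1}))  # normal
print(choose_mode({"render_latency_ms": 80, "rng_ok": 1, "storage_ok": 1}))  # safe
print(choose_mode({"render_latency_ms": 50, "rng_ok": 0, "storage_ok": 1}))  # non-operational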

US Pat. No. 10,691,573

BUS DATA MONITOR

THE BOEING COMPANY, Chic...

1. An apparatus comprising a bus data monitor (BDM) in signal communication with a MIL-STD-1553 data bus, the BDM comprising:
one or more processing units; and
a computer-readable medium storing instructions that when executed cause the one or more processing units to initiate or perform operations comprising:
receiving data from the MIL-STD-1553 data bus;
accessing a rule set from a computer file stored on the computer-readable medium, wherein the rule set includes a plurality of defined sub-rules that define normal bus behavior of the MIL-STD-1553 data bus;
comparing the received data against the rule set; and
determining that the received data indicates abnormal bus behavior of the MIL-STD-1553 data bus in response to the received data violating any of the sub-rules,
the operations further comprising:
in response to a determination that data of the received data violates a sub-rule of the sub-rules, attributing a risk level based on the sub-rule and a sub-system for which the data is intended; and
after attributing the risk level, performing packet inspection based on the risk level and the sub-system.
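
A minimal sketch of the rule-set check described above: a message that violates any sub-rule is flagged as abnormal, a risk level is attributed from the violated sub-rule and the intended sub-system, and the risk level drives how deeply the packet is inspected. The sub-rules, risk table, and message fields are illustrative only.

SUB_RULES = {
    "known_remote_terminal": lambda msg: msg["rt_address"] in range(0, 31),
    "word_count_in_range":   lambda msg: 1 <= msg["word_count"] <= 32,
}

RISK = {("word_count_in_range", "flight_control"): "high",
        ("word_count_in_range", "cabin_lights"): "low",
        ("known_remote_terminal", "flight_control"): "high"}

def check(msg):
    for name, rule in SUB_RULES.items():
        if not rule(msg):
            level = RISK.get((name, msg["subsystem"]), "medium")
            inspect = "deep packet inspection" if level == "high" else "header-only inspection"
            return {"abnormal": True, "violated": name, "risk": level, "action": inspect}
    return {"abnormal": False}

print(check({"rt_address": 5, "word_count": 40, "subsystem": "flight_control"}))
print(check({"rt_address": 5, "word_count": 8, "subsystem": "cabin_lights"}))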

US Pat. No. 10,691,572

LIVENESS AS A FACTOR TO EVALUATE MEMORY VULNERABILITY TO SOFT ERRORS

NVIDIA Corporation, Sant...

1. A method, comprising:
setting a counter for each entry of a plurality of entries in a memory;
executing a simulation for the memory over a preconfigured window of time;
during the simulation, manipulating each of the counters to record each residency period for the corresponding entry, the residency period defined by:
a first time that the corresponding entry is written with data, and
a second time of a last read of the data from the corresponding entry;
after completion of the simulation, processing the counters to determine a first liveness factor for the memory, the first liveness factor representing a vulnerability of the memory to soft errors.
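
A minimal sketch of the liveness bookkeeping described above: during a simulated window, each entry accumulates its residency periods (first write of a value until the last read of that value), and the liveness factor is the resident fraction of entry-time over the window. The access trace and window length are invented.

WINDOW = 100   # simulated cycles

# per-entry access trace: list of (time, op) with op in {"write", "read"}
TRACE = {
    0: [(5, "write"), (10, "read"), (40, "read")],                  # resident 5..40
    1: [(20, "write"), (25, "read"), (60, "write"), (90, "read")],  # resident 20..25 and 60..90
}

def residency(events):
    """Sum of residency periods: write time of data until its last read."""
    total, write_time, last_read = 0, None, None
    for t, op in events:
        if op == "write":
            if write_time is not None and last_read is not None:
                total += last_read - write_time
            write_time, last_read = t, None
        else:
            last_read = t
    if write_time is not None and last_read is not None:
        total += last_read - write_time
    return total

liveness_factor = sum(residency(ev) for ev in TRACE.values()) / (len(TRACE) * WINDOW)
print(liveness_factor)   # 0.35 here: higher means more exposure to soft errors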

US Pat. No. 10,691,571

OBTAINING APPLICATION PERFORMANCE DATA FOR DIFFERENT PERFORMANCE EVENTS VIA A UNIFIED CHANNEL

Red Hat, Inc., Raleigh, ...

1. A method comprising:
identifying, by a first application executed by a processing device of a computing system, an event type of event to be measured with respect to performance of a second application being monitored;
issuing, by the first application, a first virtual file system call comprising the identified event type as a parameter, wherein the identified event type corresponds to a cumulative counter, and wherein the cumulative counter is a data field to aggregate a value of a first hardware counter for a first processor, and a value of a second hardware counter for a second processor;
subsequent to receiving a file descriptor of the cumulative counter of the identified event type, causing, by the first application, the second application to begin the execution, wherein the first hardware counter is to perform measurements for the first processor during the execution of the second application based on performance characteristics of the second application for the identified event type, and the second hardware counter is to perform measurements for the second processor during the execution of the second application based on the performance characteristics of the second application for the identified event type;
after the execution of the second application is completed, issuing, by the first application, a second virtual file system call including the file descriptor for the cumulative counter of the identified event type as a parameter, wherein a value of the cumulative counter is an aggregation of the value of the first hardware counter and the value of the second hardware counter; and
receiving, by the first application, the value of the cumulative counter of the identified event type from an operating system in response to the second virtual file system call.

US Pat. No. 10,691,570

SOFTWARE ASSISTANT FOR POWER-ON-SELF-TEST (POST) AND BUILT-IN SELF-TEST (BIST) IN IN-VEHICLE NETWORK (IVN) OF CONNECTED VEHICLES

Cisco Technology, Inc., ...

1. A method, comprising:
retrieving, by a device in communication with an in-vehicle network (IVN) of a vehicle, a memory sector address of a memory of a component connected to the IVN when a first startup of the vehicle begins, the memory sector address stored in a non-volatile memory;
performing, by the device, a memory test on a first part of the memory starting at the memory sector address for a predetermined increment during the first startup of the vehicle; and
replacing, by the device, the memory sector address with an incremented memory sector address in the non-volatile memory, the incremented memory sector address indicative of the memory sector address incremented by the predetermined increment.
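
A minimal sketch of the rolling power-on memory test described above: on each startup, test one increment of the component's memory starting at the sector address remembered in non-volatile memory, then advance that address by the predetermined increment. The sizes and the dictionary standing in for non-volatile memory are assumptions.

MEMORY_SIZE = 4096        # bytes of the component's memory
INCREMENT = 512           # bytes tested per startup

nvm = {"sector_address": 0}          # stand-in for the device's non-volatile memory

def memory_test(start, length):
    """Placeholder test: pretend to walk the region and report success."""
    print(f"testing bytes {start}..{start + length - 1}")
    return True

def on_startup():
    start = nvm["sector_address"]
    memory_test(start, INCREMENT)
    # advance by the predetermined increment, wrapping at the end of memory
    nvm["sector_address"] = (start + INCREMENT) % MEMORY_SIZE

for _ in range(3):       # three simulated vehicle startups
    on_startup()
print("next startup begins at", nvm["sector_address"])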

US Pat. No. 10,691,569

SYSTEM AND METHOD FOR TESTING A DATA STORAGE DEVICE

Silicon Motion, Inc., Hs...

1. A system for testing a data storage device, comprising:
the data storage device;
an electronic device, comprising a host device coupled to the data storage device and configured to communicate with the data storage device via an interface logic; and
a computer device, coupled to the electronic device and configured to issue a plurality of commands to test operation stability of the data storage device when being coupled to the electronic device in a test procedure,
wherein when the electronic device has been successfully started up, the computer device issues a first command to the electronic device to trigger the electronic device to enter a hibernate mode, and after waiting for a first predetermined period of time, the computer device issues a second command to the electronic device, so as to wake up the electronic device.

US Pat. No. 10,691,568

CONTAINER REPLICATION AND FAILOVER ORCHESTRATION IN DISTRIBUTED COMPUTING ENVIRONMENTS

INTERNATIONAL BUSINESS MA...

1. A method for managing volume replication and disaster recovery in a containerized storage environment, by a processor, comprising:
establishing a mapping between a PersistentVolumeClaim (PVC) having a correlated Persistent Volume (PV), and a source storage World Wide Name (WWN) and a target storage WWN; and
replicating the mapping as part of a replication operation between the source storage and the target storage thereby maintaining consistency of the PV associated with one or more application containers among the source storage and the target storage.

US Pat. No. 10,691,567

DYNAMICALLY FORMING A FAILURE DOMAIN IN A STORAGE SYSTEM THAT INCLUDES A PLURALITY OF BLADES

Pure Storage, Inc., Moun...

1. A method, the method comprising:
identifying a plurality of possible configurations for failure domains in a storage system;
identifying from among the plurality of possible configurations, in dependence upon a failure domain formation policy, an available configuration for a multi-chassis failure domain that includes a first blade mounted within a first chassis and a second blade mounted within a second chassis, wherein each chassis is configured to support multiple types of blades;
creating the multi-chassis failure domain to identify and store data associated with the multi-chassis failure domain at the storage system in accordance with the available configuration;
receiving data to be stored at the storage system;
determining whether the data is associated with the multi-chassis failure domain; and
in response to determining that the data is associated with the multi-chassis failure domain, storing the data at the storage system in accordance with the available configuration.

US Pat. No. 10,691,566

USING A TRACK FORMAT CODE IN A CACHE CONTROL BLOCK FOR A TRACK IN A CACHE TO PROCESS READ AND WRITE REQUESTS TO THE TRACK IN THE CACHE

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing read and write requests from a host to tracks in storage cached in a cache, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations, the operations comprising:
maintaining a track format table associating track format codes with track format metadata, wherein each of the track format metadata indicates a layout of data in a track;
staging a track from the storage to the cache;
processing track format metadata for the track staged into the cache;
determining whether the track format table has track format metadata matching the track format metadata of the track staged to the cache;
determining a track format code from the track format table for the track format metadata in the track format table matching the track format metadata of the track staged into the cache in response to the track format table having the matching track format metadata;
generating a cache control block for the track being added to the cache including the determined track format code when the track format table has the matching track format metadata;
receiving a read or write request to a target track from the host on a first channel connecting to the host;
determining whether the target track is in the cache;
determining whether the cache control block for the target track includes a valid track format code from the track format table in response to determining that the target track is in the cache; and
failing the read or write request in response to determining that the target track is not in the cache or determining that the cache control block does not include a valid track format code, wherein the failing the read or write request causes the host to resend the read or write request to the target track on a second channel connecting to the host.
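
A minimal sketch of the cache control block logic described above: when a track is staged, its format metadata is matched against a track format table and the matching code is stored in the control block; a read or write is failed (so the host resends on a second channel) when the track is absent or has no valid code. The table contents and metadata strings are illustrative.

TRACK_FORMAT_TABLE = {"16x4KB-records": 0x01, "8x8KB-records": 0x02}

cache = {}   # track id -> cache control block

def stage_track(track_id, track_format_metadata):
    code = TRACK_FORMAT_TABLE.get(track_format_metadata)   # None if no match
    cache[track_id] = {"track_format_code": code, "metadata": track_format_metadata}

def handle_request(track_id):
    block = cache.get(track_id)
    if block is None or block["track_format_code"] is None:
        return "FAIL: host should resend on second channel"
    return f"serviced using format code {block['track_format_code']:#x}"

stage_track("vol1:trk42", "16x4KB-records")
stage_track("vol1:trk43", "unrecognized-layout")
print(handle_request("vol1:trk42"))   # serviced from the cache control block
print(handle_request("vol1:trk43"))   # no valid code -> fail, resend on channel 2
print(handle_request("vol1:trk99"))   # not in cache -> fail, resend on channel 2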

US Pat. No. 10,691,565

STORAGE CONTROL DEVICE AND STORAGE CONTROL METHOD

FUJITSU LIMITED, Kawasak...

1. A storage control device, comprising:
a first memory configured to
store therein a first startup program for starting up the storage control device;
a second memory different from the first memory, the second memory configured to
store therein a second startup program for starting up the storage control device; and
a processor coupled to the first memory and the second memory;
a main memory coupled to the processor, the main memory being different from the first memory and the second memory, the processor being configured to:
perform a startup process of starting up the storage control device by executing the first startup program stored in the first memory;
perform diagnosis for the first memory during the startup process;
restore, in a case where an abnormality is detected in a first portion of a first area of the first memory, the first portion being less than the entire first area, first data stored in the first portion by overwriting the first data with data of a part of the second startup program stored in the second memory, the part of the second startup program being less than the entire second startup program, the first area being a storage area in which the first startup program is stored; and
restart the storage control device after switching an active startup program to be used in next startup of the storage control device in a case where data of a second part of the first startup program is stored in the first portion, the second part of the first startup program being executed in a period during which the main memory is unavailable and CAR is not yet enabled, the CAR being a function to make use of a cache of the processor as the main memory.

US Pat. No. 10,691,564

STORAGE SYSTEM AND STORAGE CONTROL METHOD

HITACHI, LTD., Tokyo (JP...

1. A storage control method for a storage system that includes a plurality of storage nodes each of which includes one or more processors, and includes one or more storage devices each of which stores data, the method comprising:
detecting that any one of a plurality of passive control programs included in two or more program clusters operating on a plurality of the processors included in the plurality of storage nodes has been switched to active; and
changing an operation status of a different passive control program operating in the storage node that includes a passive control program switched to active;
generating a first storage node list and a second storage node list;
wherein:
each of the control programs is a program for performing input and output to and from a storage area associated with the corresponding control program;
each of the two or more program clusters includes
an active control program, and
a passive control program that becomes active in place of the active control program;
a processing resource of at least one of the one or more processors is more used by the passive control programs in an active state than in a passive state; and
the active control program and the passive control program included in a same program cluster are each arranged in the storage nodes different from each other, each of the plurality of storage nodes being configured to include a plurality of the active control program or the plurality of the passive control programs;
the first storage node list includes a list of all storage nodes with zero active control programs in an ascending order of a number of passive control programs;
the second storage node list includes a list of all storage nodes in an ascending order of use resources amount; and
the passive control program switched to active is selected from the first storage node list or the second storage node list.
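
A minimal sketch of the two node lists described above: the first list holds only nodes with zero active control programs, ordered by how few passive programs they run; the second holds every node ordered by resource use. The node data is invented, and the selection of which passive program to activate is not shown.

nodes = {
    "node1": {"active": 2, "passive": 1, "used_resources": 0.70},
    "node2": {"active": 0, "passive": 3, "used_resources": 0.20},
    "node3": {"active": 0, "passive": 1, "used_resources": 0.55},
    "node4": {"active": 1, "passive": 2, "used_resources": 0.10},
}

first_list = sorted((n for n, d in nodes.items() if d["active"] == 0),
                    key=lambda n: nodes[n]["passive"])                 # ascending passive count
second_list = sorted(nodes, key=lambda n: nodes[n]["used_resources"])  # ascending resource use

print("first list :", first_list)    # ['node3', 'node2']
print("second list:", second_list)   # ['node4', 'node2', 'node3', 'node1']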

US Pat. No. 10,691,563

MANAGING SERVICE AVAILABILITY IN A MEGA VIRTUAL MACHINE

Telefonaktiebolaget LM Er...

1. A virtual machine manager managing a plurality of hardware appliances, the virtual machine manager being operative to:
select a subset of the plurality of hardware appliances for running a virtual machine (VM);
allocate processor, memory, and other physical hardware resources of the subset of hardware appliances to the VM as virtual hardware resources;
launch the VM on the selected subset of hardware appliances, the VM comprising the virtual hardware resources and spanning the subset of hardware appliances;
replicate and synchronize data associated with the VM within the subset of hardware appliances.

US Pat. No. 10,691,562

MANAGEMENT NODE FAILOVER FOR HIGH RELIABILITY SYSTEMS

AMERICAN MEGATRENDS INTER...

1. A system, comprising:
two management devices, each comprising a processor and a non-volatile memory storing computer executable code, wherein one of the two management devices functions as an active node, and the other one of the two management devices functions as a passive node; and
a detection and reversal device respectively connected to the two management devices, and configured to determine status of the active node and when the active node fails, send an activation signal to the passive node;
wherein the computer executable code, when executed at the processor of the active node, is configured to: establish a periodic communication directly between the active node and the passive node to indicate that the active node is currently functioning; and
wherein the computer executable code, when executed at the processor of the passive node, is configured to:
in response to not receiving the periodic communication from the active node for a predetermined time, send a probe signal from the passive node to the detection and reversal device to confirm the status of the active node; and
in response to receiving the activation signal, switch the passive node to the active node.
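
A minimal sketch of the failover decision described above: the passive node expects a periodic heartbeat from the active node; when it has been silent too long, the passive node probes the detection and reversal device and switches over only after receiving the activation signal. The timing values and device objects are simulated assumptions.

HEARTBEAT_TIMEOUT = 3     # seconds without a heartbeat before probing

class DetectionAndReversal:
    def __init__(self, active_alive):
        self.active_alive = active_alive

    def probe(self):
        """Return an activation signal only when the active node really failed."""
        return None if self.active_alive else "ACTIVATE"

def passive_node_step(seconds_since_heartbeat, detector):
    if seconds_since_heartbeat < HEARTBEAT_TIMEOUT:
        return "remain passive"
    signal = detector.probe()
    return "switch to active" if signal == "ACTIVATE" else "remain passive"

print(passive_node_step(1, DetectionAndReversal(active_alive=True)))    # remain passive
print(passive_node_step(5, DetectionAndReversal(active_alive=True)))    # probe, no signal
print(passive_node_step(5, DetectionAndReversal(active_alive=False)))   # switch to active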

US Pat. No. 10,691,561

FAILOVER OF A VIRTUAL FUNCTION EXPOSED BY AN SR-IOV ADAPTER

International Business Ma...

1. A method of failover of a virtual function exposed by an SR-IOV (‘Single Root I/O Virtualization’) adapter of a computing system, the method comprising:
placing an active virtual function and a standby virtual function in an error state;
remapping a logical partition from the active virtual function to the standby virtual function; and
placing the logical partition and standby virtual function in an error recovery state.

US Pat. No. 10,691,560

REPLACEMENT OF STORAGE DEVICE WITHIN IOV REPLICATION CLUSTER CONNECTED TO PCI-E SWITCH

LENOVO ENTERPRISE SOLUTIO...

1. A non-transitory computer-readable data storage medium storing program code executable by a storage device to:
determine that the storage device, connected to a Peripheral Component Interconnect Express (PCIe) switch, is part of an input/output virtualization (IOV) replication cluster, wherein the storage device is replacing another storage device in the IOV replication cluster connected to the PCIe switch and wherein the IOV replication cluster comprises a plurality of storage devices connected to the PCIe switch;
in response to determining that the storage device was part of the IOV replication cluster, initiate a virtual root complex on the storage device;
initiate, by the virtual root complex, a connection through the PCIe switch with each other storage device connected to the PCIe switch and containing data to be replicated on the storage device, wherein each other storage device is an endpoint to the virtual root complex;
receive and store, by the virtual root complex, the data to be replicated on the storage device from each other storage device containing the data, over the connection;
terminate, by the virtual root complex, the connection with each other storage device containing the data; and
in response to terminating the connection with each other storage device containing the data, disable the virtual root complex on the storage device and enable the storage device as another endpoint.

US Pat. No. 10,691,559

PERSISTENT MEMORY TRANSACTIONS WITH UNDO LOGGING

Oracle International Corp...

1. A method, comprising: performing, by one or more computing devices comprising persistent memory:
writing, by a transaction, a new value to a persistent object, wherein the persistent object comprises a location in the persistent memory;
appending an undo log record to an undo log configured to store information regarding previous values of locations in the persistent memory, wherein the undo log record comprises an original value of the persistent object, wherein the original value comprises a value for the location in persistent memory prior to said writing; and
issuing a single persist barrier prior to updating a tail pointer of the undo log to point to the appended undo log record, wherein the single persist barrier persists the appended undo log record and a previous value for the tail pointer to the persistent memory, wherein the previous value for the tail pointer points to another undo log record previously appended to the undo log; and
updating, subsequent to issuing the single persist barrier, the tail pointer of the undo log to point to the appended undo log record.
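
A minimal sketch of the undo-log ordering described above: append the undo record, issue a single persist barrier that covers both the new record and the old tail pointer value, and only then move the tail pointer. Persistence is simulated with an explicit "persisted" set; real persistent-memory flush and fence instructions are outside the scope of this sketch.

class UndoLog:
    def __init__(self):
        self.records = []        # appended undo records
        self.tail = -1           # index of last valid record
        self.persisted = set()   # what a crash would preserve (simulated)

    def persist_barrier(self, *items):
        """Single barrier: everything named here becomes durable together."""
        self.persisted.update(items)

    def transactional_write(self, obj, field, new_value):
        old_value = obj[field]
        obj[field] = new_value                                   # write the new value
        self.records.append((id(obj), field, old_value))         # append undo record
        record_index = len(self.records) - 1
        # one persist barrier covers the appended record AND the previous tail value
        self.persist_barrier(("record", record_index), ("tail", self.tail))
        self.tail = record_index                                 # update tail afterwards

log = UndoLog()
account = {"balance": 100}
log.transactional_write(account, "balance", 250)
print(account, log.records, log.tail, sep="\n")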

US Pat. No. 10,691,558

FAULT TOLERANT DATA EXPORT USING SNAPSHOTS

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:
receiving a request to export customer log data generated over a specified period of time;
determining the customer log data corresponding to the request, the customer log data being stored to a plurality of storage locations across a resource provider environment;
generating a snapshot indicating a state of the customer log data at a time of the request, wherein the snapshot includes at least one status identifier to identify a partial completion of the request;
assigning portions of the customer log data to a set of tasks, each task of the set of tasks responsible for writing a respective portion of the customer log data to a specified repository, wherein each respective portion is an independent and discrete portion of the customer log data;
causing the tasks to be executed by one or more computing resources in the resource provider environment;
updating the at least one status identifier with respect to the snapshot in response to one or more completed tasks, of the set of tasks, completing successfully in writing the respective portion of the customer log data to the specified repository;
determining that at least one task, of the set of tasks, failed to complete successfully;
determining, based at least in part upon the at least one status identifier and the snapshot, the at least one task that failed to complete successfully;
causing the at least one task to be executed at least a second time by the one or more computing resources without re-executing the completed task of the set of tasks;
causing the at least one task to be re-executed until the at least one task completes successfully or a maximum number of retries is reached for the request; and
providing less than the requested log data if the at least one task fails to complete, wherein the provided log data includes log data retrieved by the one or more completed tasks.
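
A minimal sketch of the export bookkeeping described above: a snapshot records which portions of the log data each task owns, status identifiers track which tasks completed, and only the failed tasks are re-run up to a retry limit. Task failure is simulated; the repository writes are stubbed out.

import random

MAX_RETRIES = 3
random.seed(7)

def run_task(portion):
    """Pretend to write one discrete portion to the repository; may fail."""
    return random.random() > 0.4

def export(portions):
    snapshot = {p: "pending" for p in portions}     # status identifiers
    for _ in range(1 + MAX_RETRIES):
        pending = [p for p, s in snapshot.items() if s != "done"]
        if not pending:
            break
        for p in pending:                           # only re-run failed/pending tasks
            if run_task(p):
                snapshot[p] = "done"
    exported = [p for p, s in snapshot.items() if s == "done"]
    return exported, snapshot

exported, snapshot = export(["logs/part-0", "logs/part-1", "logs/part-2"])
print(snapshot)
print("delivered portions:", exported)   # may be fewer than requested if a task kept failing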

US Pat. No. 10,691,557

BACKUP FILE RECOVERY FROM MULTIPLE DATA SOURCES

EMC IP HOLDING COMPANY LL...

1. A system for backup file recovery from multiple data sources, the system comprising:
a processor-based application, which, when executed on a computer, will cause a processor to:
determine, in response to receiving a request to recover a backup file associated with a data object, whether each data source of a plurality of data sources stores an entire copy of the backup file associated with the data object;
allocate, in response to a determination that each data source of the plurality of data sources stores the entire copy of the backup file associated with the data object, an amount of the backup file requested to be recovered to each data stream of a plurality of data streams, the plurality of data streams equaling the plurality of data sources, each of the data streams transmitting the corresponding allocated amount of the backup file from a different one of the plurality of data sources;
recover the allocated backup file by concurrently recovering the corresponding plurality of data streams from the corresponding plurality of data sources; and
recover the backup file from a single data source of the plurality of data sources in response to a determination that each of the data sources of the plurality of data sources stores less than the entire copy of the backup file associated with the data object.
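
A minimal sketch of the stream allocation described above: when every source holds a full copy, the requested backup file is split across one stream per source and recovered concurrently; otherwise it is recovered from a single source. The sources and sizes are invented, and concurrency is only implied by the plan.

def plan_recovery(file_size, sources):
    full_copies = [s for s, has_full in sources.items() if has_full]
    if len(full_copies) == len(sources):
        # one stream per source, each carrying an equal share of the file
        share = file_size // len(sources)
        plan = {s: share for s in sources}
        plan[full_copies[-1]] += file_size - share * len(sources)   # remainder to last stream
        return plan
    # not every source has a full copy: recover from a single source
    return {next(iter(sources)): file_size}

sources = {"siteA": True, "siteB": True, "siteC": True}
print(plan_recovery(10_000, sources))                          # ~3333 bytes per stream
print(plan_recovery(10_000, {"siteA": True, "siteB": False}))  # single-source recovery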

US Pat. No. 10,691,556

RECOVERING A SPECIFIED SET OF DOCUMENTS FROM A DATABASE BACKUP

QUEST SOFTWARE INC., Ali...

1. A computer-implemented method, comprising:
selecting a backup file that includes multiple tables and object metadata corresponding to multiple backed-up objects, wherein each table of the multiple tables corresponds to a different object of the multiple backed-up objects;
determining multiple object identifiers stored in the multiple tables, wherein each object identifier of the multiple object identifiers is stored in each table of the multiple tables and identifies a corresponding object of the multiple backed-up objects;
receiving a selection of one or more object identifiers of the multiple object identifiers;
determining one or more objects to be restored based on the one or more object identifiers;
responsive to receiving a request to restore object content of the selected one or more object identifiers, without restoring the object content, restoring, to a staging database, one or more tables of the multiple tables and object metadata, different than the selected one or more object identifiers, associated with the selected one or more objects, wherein the one or more tables and the object metadata are restored based on the selected one or more object identifiers;
importing, to a production database different from the staging database, the restored one or more tables and the restored object metadata associated with the selected one or more objects; and
restoring the object content to the production database based on the imported one or more tables and the imported object metadata.

US Pat. No. 10,691,555

ELECTRONIC DEVICE

Panasonic Intellectual Pr...

1. An electronic device configured to execute automatic backup of automatically transmitting, to a predetermined recording medium, data stored in the electronic device, comprising:
a battery configured to supply electricity to drive the electronic device, the electricity having been charged in the battery;
an electricity input part to be connected to an external device, the electricity input part being configured to receive the electricity supplied from the external device;
a charging controller configured to charge the battery with the electricity having been supplied via the electricity input part; and
a controller configured to control execution of the automatic backup,
wherein in a case where the electricity input part is connected to the external device and there is data to be subjected to the automatic backup,
the controller compares a remaining capacity of the battery with a threshold,
the controller executes the automatic backup when the remaining capacity of the battery is higher than the threshold, and
the controller does not execute the automatic backup when the remaining capacity of the battery is equal to or lower than the threshold,
in a case where the remaining capacity of the battery is equal to or lower than the threshold, the controller is configured to transmit a predetermined notification to the charging controller and to stop operating, and
in a case where the charging controller has received the predetermined notification, the charging controller is configured to activate the controller upon completion of charging of the battery.
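
A minimal sketch of the charge-gated backup described above: backup runs only when the remaining battery capacity is above a threshold; otherwise the backup controller notifies the charging controller, stops, and is reactivated once charging completes. The percentages and the two controller objects are invented assumptions.

THRESHOLD_PCT = 30

class ChargingController:
    def __init__(self):
        self.pending_wakeup = False

    def notify_low_battery(self):
        self.pending_wakeup = True           # remember to reactivate the controller

    def charging_complete(self, controller):
        if self.pending_wakeup:
            self.pending_wakeup = False
            controller.try_backup()          # activate the controller again

class BackupController:
    def __init__(self, charger):
        self.charger = charger
        self.battery_pct = 20
        self.has_unbacked_data = True

    def try_backup(self):
        if not self.has_unbacked_data:
            return
        if self.battery_pct > THRESHOLD_PCT:
            print("running automatic backup")
            self.has_unbacked_data = False
        else:
            print("battery too low: notifying charging controller and stopping")
            self.charger.notify_low_battery()

charger = ChargingController()
controller = BackupController(charger)
controller.try_backup()                      # too low -> defer
controller.battery_pct = 80                  # charging finished
charger.charging_complete(controller)        # controller reactivated, backup runs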

US Pat. No. 10,691,554

PROVIDING ACCESS TO STORED COMPUTING SNAPSHOTS

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:
providing, by one or more computing systems that implement one or more network-accessible services for use by a plurality of users, one or more computing nodes for use by a first user;
providing, by a block data storage service of the one or more network-accessible services, a storage volume to be attached to the one or more computing nodes, wherein the storage volume is configured with access rights indicating one or more authorized users of the storage volume;
executing, on the one or more computing nodes, one or more software programs on behalf of the first user, and storing data associated with the executing of the one or more software programs on the storage volume;
receiving, by the one or more computing systems, instructions from the first user specifying one or more criteria for other users to obtain access to the storage volume;
separating, by the one or more computing systems, the storage volume into a plurality of chunks, wherein individual ones of the chunks include respective groups of blocks of the storage volume and are formatted as data objects for storage in an archival storage system distinct from the block data storage service;
creating, by the one or more computing systems, a stored snapshot copy of the stored data of the storage volume in the archival storage system, wherein the stored snapshot copy includes the respective data objects for the plurality of chunks, wherein the data objects are configured to restore the plurality of chunks in another storage volume;
creating, by the one or more computing systems, a second snapshot copy of the storage volume in the archival storage system, wherein the second snapshot copy includes additional data objects for a subset of the plurality of chunks that have been created or modified since a prior snapshot copy of the storage volume and is configured to restore the subset of chunks in the other storage volume;
receiving, by the block data storage service and from one or more second users separate from the first user, a request to access the storage volume;
determining that the one or more second users are authorized to receive the access based at least in part on the one or more second users satisfying the one or more criteria; and
responsive to the determining:
creating a second storage volume configured for use by the one or more second users based at least in part on the stored snapshot copy and the second snapshot copy; and
providing the access to the second storage volume to the one or more second users.

US Pat. No. 10,691,553

PERSISTENT MEMORY BASED DISTRIBUTED-JOURNAL FILE SYSTEM

NETAPP, INC., Sunnyvale,...

1. A method, comprising:
creating by a processor, a resources dataset with a plurality of structures for a persistent memory based, distributed-journal file system having a plurality of files, each file associated with a metadata and a self-journal record stored at a mapped persistent memory, wherein the mapped persistent memory hosts at least a portion of the file system to access the metadata and the self-journal record of each file by the file system from the persistent memory;
wherein the self-journal record tracks non-atomic alteration of an associated file and maintains an indicator only for incomplete non-atomic alterations, and the resources dataset provides an arrangement of the plurality of structures at the mapped persistent memory, and maps characteristics of the file system and the mapped persistent memory, based on a state of metadata and self-journal record during a first mount sequence for the file system;
determining by the processor, based on an alteration request to alter a first file that the alteration is based on a non-interruptible atomic operation or an interruptible non-atomic operation;
executing the non-interruptible atomic operation without updating a first self-journal record of the first file, upon determining that the alteration is based on the non-interruptible atomic operation;
recording by the processor, an indication of alteration in the first self-journal record of the first file, when the alteration is executed by the interruptible non-atomic operation, upon determining that the alteration is based on the interruptible non-atomic operation;
applying by the processor, the alteration in the file system;
removing by the processor, the indication from the first self-journal record upon successful completion of the interruptible non-atomic operation; and
recreating by the processor, the resources dataset during a second mount sequence of the file system using metadata and the first self-journal record of the altered first file.

US Pat. No. 10,691,552

DATA PROTECTION AND RECOVERY SYSTEM

International Business Ma...

1. A method of data recovery, the method comprising:
determining an amount of time to transfer a first file from a first location to a second location, wherein the first location is on a first device and a second location is on a second device;
receiving historical operational information associated with the first device and historical operational information associated with the second device, wherein operational information comprises information detailing processor usage, information detailing memory allocation, and information detailing one or more operated processes;
creating a transfer model correlating the amount of time to transfer the first file from the first location to the second location based on the historical operational information associated with the first device and the historical operational information associated with the second device, wherein the transfer model is a linear regression model relating historical information for transfer time to historical information for file size, historical information for network speed between the first location and the second location, and historical information for operational information;
determining a reliability of the system to meet backup requirements of transferring from the first location to the second location, wherein the backup requirements are stored on the first device and include an acceptable length of time for a backup file to be transferred, an acceptable amount of downtime for a computing device during an operational failure, and an acceptable rate of failure to back up a collection of files, and wherein the reliability of the system is determined based on periodically:
determining, for the first device and the second device, systematic differences between uploading and downloading files and incorporating the systematic differences in the transfer model;
determining an estimated amount of time to transfer a second file based on the transfer model, size of the second file, current operational information about the first device, and current operational information about the second device; and
determining the reliability of the system based on the percentage of time the estimated amount of time to transfer the second file is above a first threshold amount; and
based on the reliability of the system being below a threshold percentage, providing improvement solutions.
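
A minimal sketch of the transfer-time model and reliability check described above: a linear regression over historical transfers estimates how long a new file would take under current conditions, and reliability is judged from how often that estimate exceeds an acceptable transfer time. The coefficients are fitted with numpy least squares; the history, features, and thresholds are invented.

import numpy as np

# historical rows: file size (MB), network speed (MB/s), CPU usage (%), observed seconds
history = np.array([
    [100, 10, 20, 11.0],
    [200, 10, 50, 24.0],
    [100, 20, 30,  6.0],
    [400,  5, 40, 85.0],
])
X = np.column_stack([history[:, :3], np.ones(len(history))])   # features + intercept
coeffs, *_ = np.linalg.lstsq(X, history[:, 3], rcond=None)     # fit the transfer model

def estimate_seconds(size_mb, speed, cpu):
    return float(np.array([size_mb, speed, cpu, 1.0]) @ coeffs)

ACCEPTABLE_SECONDS = 30.0
samples = [estimate_seconds(250, s, c) for s, c in [(10, 30), (8, 60), (15, 20), (5, 70)]]
over = sum(t > ACCEPTABLE_SECONDS for t in samples) / len(samples)
print("estimates:", [round(t, 1) for t in samples])
print("share of periodic checks over the acceptable time:", over)
print("suggest improvement solutions:", over > 0.25)   # e.g., act past a chosen threshold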

US Pat. No. 10,691,551

STORAGE SYSTEM WITH SNAPSHOT GENERATION CONTROL UTILIZING MONITORED DIFFERENTIALS OF RESPECTIVE STORAGE VOLUMES

EMC IP Holding Company LL...

1. An apparatus comprising:
a storage system comprising a plurality of storage devices and a storage controller;
the storage controller being configured:
to monitor a differential between a storage volume of the storage system and a previous snapshot generated for that storage volume; and
responsive to the monitored differential satisfying one or more specified conditions, to generate a subsequent snapshot for the storage volume;
wherein monitoring the differential between the storage volume of the storage system and the previous snapshot generated for that storage volume comprises:
maintaining a first counter indicative of a total amount of data in the storage volume;
maintaining a second counter indicative of an amount of data in the storage volume that has been written since generation of the previous snapshot; and
monitoring values of the first and second counters;
wherein generating the subsequent snapshot for the storage volume responsive to the monitored differential satisfying one or more specified conditions comprises generating the subsequent snapshot responsive to the value of the second counter satisfying a specified condition relative to the value of the first counter; and
wherein the storage controller comprises at least one processing device comprising a processor coupled to a memory.
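
A minimal sketch of the counter-based trigger described above: one counter tracks the total data in the volume, a second tracks data written since the last snapshot, and a new snapshot is generated once the second counter reaches a chosen fraction of the first. The 10% ratio and the write sizes are assumptions.

RATIO = 0.10     # generate a snapshot when >=10% of the volume's data has been written

class Volume:
    def __init__(self, total_bytes):
        self.total = total_bytes            # first counter
        self.written_since_snap = 0         # second counter
        self.snapshots = 0

    def write(self, nbytes):
        self.total += nbytes
        self.written_since_snap += nbytes
        if self.written_since_snap >= RATIO * self.total:   # monitored differential
            self.snapshots += 1
            self.written_since_snap = 0                      # reset after the snapshot

vol = Volume(total_bytes=1_000_000)
for _ in range(30):
    vol.write(20_000)
print("snapshots generated:", vol.snapshots)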

US Pat. No. 10,691,550

STORAGE CONTROL APPARATUS AND STORAGE CONTROL METHOD

FUJITSU LIMITED, Kawasak...

1. A storage control apparatus comprising:
a memory including a logical area and a physical area and configured to store meta-information for associating addresses of the logical area and the physical area with each other; and
a processor coupled to the memory and configured to:
read out first meta-information corresponding to a first logical area that is set as a copy source of data in the logical area from the memory,
specify a first address of the physical area corresponding to a copy source address of the data based on the first meta-information,
read out second meta-information corresponding to a second logical area that is set as a copy destination of the data in the logical area from the memory,
specify a second address of the physical area corresponding to a copy destination address of the data based on the second meta-information,
associate the first address and the second address with each other as storage areas of the data to indicate a storage of the data without copying the data to the second address of the physical area,
assign information related to the first logical area to the data when the data is written into the first logical area,
further assign information related to the second logical area to the data when the data is copied from the first logical area to the second logical area,
write the meta-information stored in the memory into the physical area when a data size of the meta-information reaches a predetermined size, and
generate position information indicating a position set as a write destination of the meta-information in the physical area.

US Pat. No. 10,691,549

SYSTEM MANAGED FACILITATION OF BACKUP OF DATASET BEFORE DELETION

INTERNATIONAL BUSINESS MA...

1. A method, comprising:
receiving, by a storage controller, a command to delete a dataset stored in a first set of storage volumes controlled by the storage controller;
in response to receiving the command, determining whether an indicator has been enabled to protect the dataset against an accidental deletion;
in response to determining that the indicator has been enabled, copying the dataset from the first set of storage volumes to a second set of storage volumes controlled by the storage controller within 500 milliseconds of the receiving of the command to delete the dataset by maintaining both the first and the second set of storage volumes within the storage controller and not outside of the storage controller;
in response to completion of the copying of the dataset from the first set of storage volumes to the second set of storage volumes, executing the command to delete the dataset stored in the first set of storage volumes, wherein the copying of the dataset from the first set of storage volumes to the second set of storage volumes occurs subsequent to the receiving of the command to delete the dataset in the first set of storage volumes but prior to the executing of the command to delete the dataset in the first set of storage volumes; and
subsequent to deletion of the dataset from the first set of storage volumes, performing a backup of the dataset from the second set of storage volumes that is maintained within the storage controller to a storage device located outside of the storage controller.

US Pat. No. 10,691,548

TRACKING FILES EXCLUDED FROM BACKUP

EMC IP Holding Company LL...

1. A computer-implemented method of excluding files from backup, comprising:
accessing, by one or more processors, a database that stores data associated with one or more files identified to be excluded from backup in connection with an incremental backup; and
using, by the one or more processors, at least the data stored in the database for one or more files excluded from a previous backup to exclude one or more of the one or more files from the incremental backup, comprising:
locating the one or more files identified to be excluded from the incremental backup, comprising in response to a determination that at least one of the one or more files identified to be excluded from the incremental backup cannot be found, determining whether the at least one of the one or more files identified to be excluded from the incremental backup has moved to a new path; and
ensuring that data blocks associated with the located one or more files excluded from the previous backup are excluded from the incremental backup.

US Pat. No. 10,691,547

BACKUP AND RECOVERY FOR END-USER COMPUTING IN VIRTUAL DESKTOP ENVIRONMENTS

EMC IP Holding Company LL...

1. A method of backing up a virtual desktop environment in a backup system having a backup server, comprising:
determining, in a first component of the backup server, whether the virtual desktop environment is persistent or non-persistent and implements clone technology to copy an existing virtual machine (VM) to create a separate unique VM that is one of a full clone or a linked clone;
if the virtual desktop environment is non-persistent, backing up, in a backup component of the backup server, a master image that is used to create non-persistent desktops, and not directly backing up the virtual desktop environment;
reprovisioning the virtual desktop environment upon recovery of the master image;
if the virtual desktop environment is persistent, backing up the master image that is used to create non-persistent desktops and virtual storage objects that maintain persistence of an identity of the virtual desktop environment by decoupling user settings and local installed applications from the virtual desktop environment so that only user customization changes require backup in order to minimize the backup size and window to improve performance of the backup system;
performing the backing up by a sequential ordered process of firstly backing up a database server hosting a database providing data of the master image; secondly backing up a virtual center server serving the virtual desktop environment; thirdly backing up a connection server configured to assign virtual desktops of the virtual desktop environment to respective users; and lastly backing up a composer server configured to manage and configure the virtual desktops based on a provisioning of the composer server, wherein the composer server creates linked clones provisioned as floating user assignment clones using non-persistent desktops requiring no backup, dedicated user assignment clones using assigned persistent desktops requiring backup, or full clones using instantiations of a master VM template requiring backup; and
enabling a restore in a fixed order of: restoring the composer server, restoring the connection server, restoring the virtual center server, and restoring the database server.

US Pat. No. 10,691,546

STORAGE MANAGEMENT SYSTEM AND METHOD

EMC IP Holding Company LL...

1. A computer-implemented method, executed on a computing device, comprising:
identifying a failing virtualized object within a virtualized computing environment, wherein the failing virtualized object executes one or more server objects;
analyzing other virtualized objects included within the virtualized computing environment to identify one or more target virtualized objects, wherein analyzing other virtualized objects included within the virtualized computing environment includes:
determining a workload experienced by each of the other virtualized objects included within the virtualized computing environment, wherein identifying the one or more target virtualized objects includes disqualifying virtualized objects which have too high of a workload from identification as target virtualized objects;
determining a quantity of server objects being executed by each of the other virtualized objects included within the virtualized computing environment, wherein identifying the one or more target virtualized objects includes disqualifying virtualized objects which exceed a threshold amount of server objects from identification as target virtualized objects; and
reassigning the one or more server objects from the failing virtualized object to the one or more target virtualized objects.
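
A rough Python sketch, under stated assumptions, of filtering candidate virtualized objects by workload and server-object count and then reassigning the failing object's server objects; the VirtualizedObject class, the thresholds, and the round-robin reassignment are illustrative, not taken from the patent.

from dataclasses import dataclass, field

@dataclass
class VirtualizedObject:
    name: str
    workload: float                 # e.g. CPU utilisation fraction
    server_objects: list = field(default_factory=list)

# Hypothetical thresholds; the claim only requires that over-loaded or
# over-populated candidates be disqualified.
MAX_WORKLOAD = 0.75
MAX_SERVER_OBJECTS = 10

def pick_targets(candidates):
    # Disqualify candidates whose workload or server-object count is too high.
    return [v for v in candidates
            if v.workload <= MAX_WORKLOAD
            and len(v.server_objects) <= MAX_SERVER_OBJECTS]

def reassign(failing, targets):
    # Round-robin the failing object's server objects onto the targets.
    for i, server_object in enumerate(failing.server_objects):
        targets[i % len(targets)].server_objects.append(server_object)
    failing.server_objects.clear()

failing = VirtualizedObject("vm-failing", 0.9, ["srv1", "srv2", "srv3"])
candidates = [VirtualizedObject("vm-a", 0.5), VirtualizedObject("vm-b", 0.9)]
targets = pick_targets(candidates)
reassign(failing, targets)
print([(t.name, t.server_objects) for t in targets])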

US Pat. No. 10,691,545

MODIFYING A CONTAINER INSTANCE NETWORK

International Business Ma...

1. A computer-implemented method, comprising:progressively recording, by one or more processors, data modifications to a virtual workload, in a shared computing environment, in an in-memory resource of the shared computing environment; and
based on receiving an indication of a system failure or a system reboot, writing, by the one or more processors, the data modifications to a non-volatile storage resource, wherein the non-volatile storage resource is readable by a hypervisor communicatively coupled to the non-volatile storage resource, and wherein the hypervisor utilizes the data modifications to recover a most recent version of the virtual workload at reboot following the system failure, based on the progressively recording.

US Pat. No. 10,691,544

MODIFYING A CONTAINER INSTANCE NETWORK

International Business Ma...

1. A computer program product comprising:a computer readable storage medium readable by one or more processors and storing instructions for execution by the one or more processors for performing a method comprising:
progressively recording, by the one or more processors, data modifications to a virtual workload, in a shared computing environment, in an in-memory resource of the shared computing environment; and
based on receiving an indication of a system failure or a system reboot, writing, by the one or more processors, the data modifications to a non-volatile storage resource, wherein the non-volatile storage resource is readable by a hypervisor communicatively coupled to the non-volatile storage resource, and wherein the hypervisor utilizes the data modifications to recover a most recent version of the virtual workload at reboot following the system failure, based on the progressively recording.

US Pat. No. 10,691,543

MACHINE LEARNING TO ENHANCE REDUNDANT ARRAY OF INDEPENDENT DISKS REBUILDS

International Business Ma...

1. A computer-implemented method comprising:storing a plurality of blocks of data in a striped redundant array of independent devices (RAID) storage system that includes a plurality of storage devices;
storing parity data for the plurality of blocks of data;
updating, by machine logic, a priority assignment algorithm for assigning the priority data values based, at least in part, on a review of relative weights assigned to a set of factors considered when assigning priority data values and what priority data values were assigned to each block of data of the plurality of blocks of data;
for each given block of data of the plurality of blocks of data, assigning a priority data value to the given block of data based on the priority assignment algorithm; and
responsive to a failure of a first storage device of the plurality of storage devices, rebuilding, using the parity data and data of the blocks of data stored on the plurality of storage devices other than the first storage device, blocks of data that were stored on the first storage device in an order determined by priority values of the blocks of data that were stored on the first storage device.
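
A short Python sketch of a priority-ordered rebuild in the spirit of the claim above; the factor names and weights are illustrative stand-ins, since the claim leaves them to the machine-logic update of the priority assignment algorithm.

def assign_priority(block, weights):
    # Weighted sum of per-block factors (e.g. access frequency, recency).
    return sum(weights[f] * block[f] for f in weights)

def rebuild_order(blocks_on_failed_device, weights):
    # Rebuild highest-priority blocks first.
    return sorted(blocks_on_failed_device,
                  key=lambda b: assign_priority(b, weights),
                  reverse=True)

weights = {"access_frequency": 0.7, "recency": 0.3}
blocks = [{"id": 1, "access_frequency": 0.2, "recency": 0.9},
          {"id": 2, "access_frequency": 0.8, "recency": 0.1}]
for block in rebuild_order(blocks, weights):
    print("rebuild block", block["id"])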

US Pat. No. 10,691,542

STORAGE DEVICE AND STORAGE METHOD

Toshiba Memory Corporatio...

1. A storage device comprising:a plurality of memory nodes, each of which includes a storage unit including a plurality of storage areas, each of the plurality of storage areas having a predetermined size, each of the plurality of memory nodes being arranged at a lattice point of a lattice, each of the plurality of memory nodes including input ports and output ports, and each of the plurality of memory nodes being connected, through an input port and output port, to each of one or more adjacent memory nodes among the plurality of memory nodes,
the plurality of memory nodes constituting three or more groups, each of the three or more groups including two or more memory nodes, each of the plurality of memory nodes being included in any one group among the three or more groups, and each of the plurality of memory nodes being connected to all other memory nodes in the same group directly or via one or more memory nodes in the same group; and
a control unit that is connected to a first memory node that is one of the plurality of memory nodes, the control unit being configured to
divide data received from an external computer to generate three or more data pieces each having a predetermined size, the data being indicated by a logical address,
generate parity from the three or more data pieces,
allocate each of writing destinations of the three or more data pieces and the parity in a different group among the three or more groups,
generate packets each addressed to a different destination among the writing destinations, the packets each including a corresponding data piece among the three or more data pieces and the parity, and
transmit the packets to the first memory node, wherein
when a memory node receives a packet among the transmitted packets through an input port of the memory node,
in a case where the received packet is not addressed to the memory node itself, the memory node transmits the received packet to one of memory nodes that are adjacent to the memory node through an output port of the memory node, and
in a case where the received packet is addressed to the memory node itself, the memory node performs storing the data piece included in the received packet into a storage unit included in the memory node.

US Pat. No. 10,691,540

SOFT CHIP-KILL RECOVERY FOR MULTIPLE WORDLINES FAILURE

SK Hynix Inc., Gyeonggi-...

1. A method implemented on a computer system to output data from superblocks of a memory that comprises a plurality of memory dies, the method comprising:decoding, in a decoding iteration, codewords from a superblock of the memory, wherein the superblock comprises a first block on a first memory die, a second block on a second memory die, and a third block on a third memory die, wherein the first block stores a first codeword of the codewords, wherein the second block stores a second codeword of the codewords, and wherein the third block stores XOR parity bits for the codewords;
determining that the decoding of at least the first codeword and the second codeword failed in the decoding iteration based on a first number of error bits associated with the first codeword and on a second number of error bits associated with the second codeword, wherein the decoding of the first codeword in the decoding iteration is based on first soft information associated with the first codeword, and wherein the decoding of the second codeword in the decoding iteration is based on second soft information associated with the second codeword;
selecting to decode, in a next decoding iteration, the first codeword prior to decoding the second codeword in the next decoding iteration based on the first number of error bits and the second number of error bits;
generating, based on the first codeword being selected, updated first soft information associated with the first codeword, wherein the updated first soft information is generated by updating the first soft information based on the second soft information and the XOR parity bits; and
decoding, in the next decoding iteration, the first codeword based on the updated first soft information.

US Pat. No. 10,691,539

GROWN DEFECT DETECTION AND MITIGATION USING ECC IN MEMORY SYSTEMS

WESTERN DIGITAL TECHNOLOG...

1. A circuit comprising:a memory array comprising a plurality of memory cells; and
a controller configured to:
receive a bit group of data stored in the memory array;
generate an empirical distribution of numbers of unsatisfied checks for the bit group based on an error correction code;
compare the empirical distribution for the bit group with an expected distribution; and
in response to the comparison, identify that the bit group is unreliable.
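
A minimal Python sketch of comparing an empirical distribution of unsatisfied-check counts against an expected distribution; the total-variation distance and the threshold are assumptions chosen for illustration, since the claim only requires that the two distributions be compared.

from collections import Counter

def empirical_distribution(unsatisfied_check_counts):
    # Histogram of "number of unsatisfied parity checks" per read,
    # normalised into a probability distribution.
    total = len(unsatisfied_check_counts)
    counts = Counter(unsatisfied_check_counts)
    return {k: v / total for k, v in counts.items()}

def is_unreliable(empirical, expected, threshold=0.2):
    # Total-variation distance is one possible comparison metric.
    support = set(empirical) | set(expected)
    tv = 0.5 * sum(abs(empirical.get(k, 0.0) - expected.get(k, 0.0))
                   for k in support)
    return tv > threshold

expected = {0: 0.90, 1: 0.08, 2: 0.02}
observed = [0, 1, 2, 3, 3, 2, 1, 0, 3, 2]
print(is_unreliable(empirical_distribution(observed), expected))  # True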

US Pat. No. 10,691,538

METHODS AND APPARATUSES FOR ERROR CORRECTION

Micron Technology, Inc., ...

1. An apparatus, comprising:a memory device having a plurality of data bits stored in a plurality of multi-level memory cells, and further having parity check bits stored with a set of data bits including a portion of the plurality of data bits, the parity check bits stored with the set of data bits in a block of multi-level cells of the plurality of multi-level memory cells; and
a controller coupled to the memory device and configured to convert a flash channel associated with the plurality of multi-level memory cells from an errors channel to an erasures channel, and to perform low density parity check decoding.

US Pat. No. 10,691,537

STORING DEEP NEURAL NETWORK WEIGHTS IN NON-VOLATILE STORAGE SYSTEMS USING VERTICAL ERROR CORRECTION CODES

Western Digital Technolog...

1. An apparatus, comprising:a non-volatile memory controller, including:
a host interface configured to receive a data stream and a corresponding specification of strength of error correction code (ECC) for use in encoding the data stream;
one or more error correction code (ECC) engines configured to encode the data stream into a plurality of ECC codewords encoded with one of a plurality of strengths of ECC in response to the corresponding specification of the strength of ECC; and
a memory die interface configured to transmit the plurality of ECC codewords to one or more memory die; and
a data processing apparatus, comprising a logic circuit configured to execute software to:
receive a data set having a plurality of elements, the elements of the data set each having a plurality of n bits, each of the bits of a corresponding level of significance;
receive the data set and generate therefrom the data stream from bits of a common level of significance from the elements of the data set, and
transmit the data stream to the non-volatile memory controller, wherein the logic circuit is further configured to execute software to:
associate with the data stream a tag specifying one of a plurality of error correction codes levels for use in encoding the data stream into ECC codewords, and
transmit the associated tag to a memory device.

US Pat. No. 10,691,536

METHOD TO SELECT FLASH MEMORY BLOCKS FOR REFRESH AFTER READ OPERATIONS

SK Hynix Inc., Gyeonggi-...

1. A non-volatile data storage device, comprising:memory cells arranged in blocks; and
a memory controller coupled to the memory cells for controlling program and read operations of the memory cells;
wherein each memory cell is programmed to a data state corresponding to one of multiple cell programmed voltages (PVs) from PV0 to PVN, where PV0 < PV1 < . . . < PVN; wherein the memory controller is configured to perform a read reclaim operation as follows:
select a block of memory cells;
read multiple memory cells in the block to determine a programmed data state of each memory cell;
perform error correction decoding of the multiple memory cells to determine a corrected data state of each memory cell;
for each memory cell, determine a read programmed voltage (PV-r) corresponding to the programmed data state determined by the read operation, and determine a corrected programmed voltage (PV-c) corresponding to the data state determined by the error correction decoding;
identify a plurality of error cells that have errors and determine a total number of error cells;
determine a first error count e+ that represents a total number of error cells that have a higher read programmed voltage (PV-r) than corrected programmed voltage (PV-c); and
determine a second error count e− that represents a total number of error cells that have a lower read programmed voltage (PV-r) than corrected programmed voltage (PV-c);
determine if the first error count is higher than the second error count;
determine if the total number of error cells is higher than a threshold error count; and
upon determining that the first error count is higher than the second error count and the total number of error cells is higher than a threshold error count, perform a read reclaim operation to the block of memory cells.
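
A compact Python sketch of the read-reclaim decision described above: count error cells whose read state is above (e+) or below (e−) the ECC-corrected state and reclaim when upward-shifted errors dominate and the total exceeds a threshold. The cell representation and threshold value are hypothetical.

def should_reclaim(cells, error_threshold):
    # Each cell is a (PV_read, PV_corrected) pair of programmed-voltage
    # state indices from the read operation and from ECC decoding.
    errors = [(pv_r, pv_c) for pv_r, pv_c in cells if pv_r != pv_c]
    e_plus = sum(1 for pv_r, pv_c in errors if pv_r > pv_c)
    e_minus = sum(1 for pv_r, pv_c in errors if pv_r < pv_c)
    # Reclaim when upward-shifted errors dominate and the block is noisy.
    return e_plus > e_minus and len(errors) > error_threshold

cells = [(1, 1), (2, 1), (3, 2), (0, 0), (2, 1), (1, 2)]
print(should_reclaim(cells, error_threshold=2))  # True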

US Pat. No. 10,691,535

FLASH MEMORY ERROR CORRECTION METHOD AND APPARATUS

HUAWEI TECHNOLOGIES CO., ...

1. A flash memory error correction method, comprising:determining that a first error correction decoding operation fails, wherein the first error correction decoding operation is performed on data of a flash memory page that is read using an nth read voltage threshold, and wherein n is a positive integer not less than two;
reading the data of the flash memory page using an (n+1)th read voltage threshold to obtain (n+1)th data, wherein the (n+1)th read voltage threshold is different from the nth read voltage threshold;
reading the data of the flash memory page using an mth read voltage threshold to obtain mth data, wherein the mth read voltage threshold is different from the (n+1)th read voltage threshold or the nth read voltage threshold, and wherein m is a positive integer less than n;
comparing the (n+1)th data to the mth data to determine a first data bit that is different between the (n+1)th data and the mth data;
reducing a first confidence level of the first data bit in response to determining that the first data bit is different between the (n+1)th data and the mth data, wherein the first confidence level is an absolute value of confidence corresponding to the first data bit; and
performing, according to the first confidence level of the first data bit, a second error correction decoding operation on the (n+1)th data.
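
A small Python sketch of the confidence adjustment described above: bits that differ between the two reads have their absolute confidence reduced before the next decoding attempt. The integer confidence scale and the fixed penalty are assumptions for illustration.

def reduce_confidence(data_n_plus_1, data_m, confidences, penalty=1):
    # Lower the absolute confidence of bit positions that differ between
    # the two reads, since those bits are less trustworthy.
    updated = list(confidences)
    for i, (a, b) in enumerate(zip(data_n_plus_1, data_m)):
        if a != b:
            updated[i] = max(0, updated[i] - penalty)
    return updated

read_a = [1, 0, 1, 1, 0]
read_b = [1, 1, 1, 0, 0]
print(reduce_confidence(read_a, read_b, confidences=[3, 3, 3, 3, 3]))
# -> [3, 2, 3, 2, 3]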

US Pat. No. 10,691,534

DATA ENCODING METHOD, DATA DECODING METHOD AND STORAGE CONTROLLER

Shenzhen EpoStar Electron...

1. A data encoding method for encoding a raw data to be stored to a rewritable non-volatile memory module, wherein the rewritable non-volatile memory module has a plurality of physical units, and each of the physical units comprises a plurality of physical sub-units, wherein a plurality of physical addresses are assigned to the physical sub-units, the method comprising:executing a write command, wherein the write command instructs writing the raw data to one or more target physical addresses among the physical addresses;
obtaining a verification data corresponding to the raw data from the write command, wherein the verification data is directly read from data of the write command, and the verification data is a part of the data of the write command, wherein obtaining the verification data corresponding to the raw data from the write command comprises:
identifying a plurality of first system data corresponding to the raw data in the write command;
determining a length of the verification data according to a predetermined checking ability; and
selecting one or more second system data among the first system data according to the length of the verification data to form the selected one or more second system data as the verification data,
wherein a total data length of the selected one or more second system data is equal to the length of the verification data,
wherein the first system data comprise one or more target logical addresses configured to store the raw data, one or more target physical addresses configured to store the raw data, and physical unit information of a target physical unit configured to store the raw data;
adding the verification data to the raw data to form a pre-scrambling data;
performing a scramble operation on the pre-scrambling data to obtain a scrambled data;
performing an encoding operation on the scrambled data to obtain a codeword data; and
writing the codeword data to the one or more target physical addresses after obtaining the codeword data, so as to complete execution of the write command.

US Pat. No. 10,691,533

ERROR CORRECTION CODE SCRUB SCHEME

Micron Technology, Inc., ...

1. A method, comprising:reading a plurality of data bits and a plurality of parity bits from a memory array;
decoding the plurality of data bits and the plurality of parity bits;
determining an error associated with a first parity bit of the plurality of parity bits based at least in part on decoding the plurality of data bits and the plurality of parity bits, wherein the error is determined based at least in part on identifying an incorrect bit value of the first parity bit;
correcting the error associated with the first parity bit based at least in part on determining the error; and
writing the corrected first parity bit to the memory array based at least in part on correcting the error associated with the first parity bit.

US Pat. No. 10,691,531

SYSTEMS AND METHODS FOR MULTI-ZONE DATA TIERING FOR ENDURANCE EXTENSION IN SOLID STATE DRIVES

Western Digital Technolog...

1. A method, comprising:providing a plurality of error correction mechanisms, each having a plurality of corresponding error correction levels;
associating a first plurality of blocks of a solid state drive with a first zone, wherein the first zone comprises a first logical accumulation of blocks;
associating a second plurality of blocks of the solid state drive with a second zone, wherein the second zone comprises a second logical accumulation of blocks, and wherein the first zone is configured to support a larger number of program/erase (PE) cycles compared to the second zone by limiting programming to lower pages in the first zone;
assigning a first error correction mechanism and a first corresponding error correction level to the first zone;
assigning a second error correction mechanism and a second corresponding error correction level to the second zone;
directing a first plurality of write requests to the solid state drive into the first zone and a second plurality of write requests into the second zone, wherein the first plurality of write requests is for data that is overwritten more frequently than data for the second plurality of write requests; and
re-directing at least one write request from the first plurality of write requests into the second zone.
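
A toy Python sketch of the two-zone write direction described above; the zone names, the "hot" flag standing in for overwrite frequency, and the capacity-based redirect condition are placeholders, not the claimed mechanism.

class Zone:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.writes = name, capacity, []

    def full(self):
        return len(self.writes) >= self.capacity

def direct_write(request, hot_zone, cold_zone):
    # Frequently-overwritten ("hot") data goes to the endurance zone;
    # a write is re-directed to the second zone if the first cannot take it.
    target = hot_zone if request["hot"] else cold_zone
    if target is hot_zone and hot_zone.full():
        target = cold_zone                   # re-direct at least one write
    target.writes.append(request)
    return target.name

hot, cold = Zone("zone1", capacity=1), Zone("zone2", capacity=8)
for req in [{"id": 1, "hot": True}, {"id": 2, "hot": True}, {"id": 3, "hot": False}]:
    print(req["id"], "->", direct_write(req, hot, cold))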

US Pat. No. 10,691,530

APPARATUSES AND METHODS FOR CORRECTING ERRORS AND MEMORY CONTROLLERS INCLUDING THE APPARATUSES FOR CORRECTING ERRORS

SK hynix Inc., Icheon-si...

1. An error correction apparatus comprising:a scrambler configured to randomize original data to generate scrambled data;
an error correction code (ECC) encoder configured to perform an ECC encoding operation of the scrambled data to generate encoded data to be outputted from the error correction apparatus;
an ECC decoder configured to perform an ECC decoding operation of the encoded data, which are received by the error correction apparatus externally from the error correction apparatus, to generate decoded data corresponding to corrected data of the encoded data;
a descrambler configured to descramble the decoded data having a bit array sequence randomized by the scrambler to generate descrambled data that are restored to have the same bit array sequence as the original data;
a scrambling discriminator configured to discriminate whether the original data have to be scrambled to supply the original data to the scrambler or the ECC encoder; and
a descrambling discriminator configured to discriminate whether the encoded data is the scrambled data or the original data to output the decoded data to the descrambler or an external device out of the error correction apparatus.

US Pat. No. 10,691,529

SUPPORTING RANDOM ACCESS OF COMPRESSED DATA

INTEL CORPORATION, Santa...

1. A processing device comprising:compression circuitry to:
determine a compression configuration to compress source data;
generate a checksum of the source data in an uncompressed state; and
compress the source data into at least one block based on the compression configuration to generate compressed source data, wherein the at least one block comprises:
a plurality of sub-blocks, wherein each sub-block of the plurality of sub-blocks is to include a corresponding predetermined amount of data; and
a block header corresponding to the plurality of sub-blocks; and
decompression circuitry coupled to the compression circuitry, wherein the decompression circuitry is to:
while not outputting a decompressed data stream of the source data:
generate index information corresponding to the plurality of sub-blocks;
in response to generating the index information, generate a checksum of the compressed source data associated with the plurality of sub-blocks; and
determine whether the checksum of the source data in the uncompressed state matches the checksum of the compressed source data.
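
A hedged Python sketch of per-sub-block compression with an index for random access, plus a checksum check; it interprets the comparison step as recomputing the source checksum from the compressed stream (decompressing sub-block by sub-block without emitting an output stream). The sub-block size and use of zlib/CRC-32 are assumptions.

import zlib

SUB_BLOCK_SIZE = 4096  # hypothetical "predetermined amount of data"

def compress_with_index(source: bytes):
    # Compress each fixed-size sub-block independently and record byte
    # offsets so any sub-block can be located without decompressing the
    # whole stream (the random-access property targeted above).
    source_checksum = zlib.crc32(source)
    compressed, index, offset = [], [], 0
    for start in range(0, len(source), SUB_BLOCK_SIZE):
        chunk = zlib.compress(source[start:start + SUB_BLOCK_SIZE])
        index.append((offset, len(chunk)))
        compressed.append(chunk)
        offset += len(chunk)
    return b"".join(compressed), index, source_checksum

def verify(compressed: bytes, index, source_checksum) -> bool:
    # Recompute the checksum from the compressed stream, sub-block by
    # sub-block, and compare it with the stored source checksum.
    crc = 0
    for offset, length in index:
        crc = zlib.crc32(zlib.decompress(compressed[offset:offset + length]), crc)
    return crc == source_checksum

data = b"example data " * 1000
blob, idx, crc = compress_with_index(data)
print(verify(blob, idx, crc))  # True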

US Pat. No. 10,691,527

SYSTEM INTERCONNECT AND SYSTEM ON CHIP HAVING THE SAME

Samsung Electronics Co., ...

1. A system on chip (SoC) comprising:a bus matrix configured to connect a plurality of functional blocks;
a monitoring unit configured to monitor whether a transaction between the functional blocks has a hang or stall and distinguish a functional block that caused the hang or stall from among the functional blocks;
a recovery signal generation unit configured to provide a recovery signal, which releases the hang or stall, to at least one of the functional blocks based on the distinguished functional block; and
a multiplexer having one end connected to the bus matrix and the other end connected to a first functional block, among the functional blocks, and the recovery signal generation unit, wherein
the multiplexer is configured to output one of an output signal of the first functional block and a recovery signal output from the recovery signal generation unit to the bus matrix according to the distinguished functional block.

US Pat. No. 10,691,526

AUTOMATIC ERROR FIXES FOR HIGH-AVAILABILITY APPLICATIONS

International Business Ma...

1. A method comprising:obtaining output from a remote computer function on a first set of arguments;
responsive to determining that said output exhibits an error, applying a fixer routine, other than a retry, to said arguments to produce new arguments;
obtaining output from said remote computer function on said new arguments; and
in a case where said output from said remote computer function on said new arguments is acceptable, using said output from said remote computer function on said new arguments as a corresponding output from said remote computer function on said first set of arguments;
the method further comprising selecting said fixer routine from a plurality of fixer routines, wherein said plurality of fixer routines comprise an identity function, an input validator, a database storing a most representative group of inputs, and a statistical model.
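
A minimal Python sketch of applying fixer routines (other than a plain retry) to the arguments of a failing remote call; the two routines shown are illustrative stand-ins for the claim's identity function, input validator, representative-input database, and statistical model.

def identity_fixer(args):
    return args

def whitespace_validator(args):
    # A toy "input validator": strip stray whitespace from string arguments.
    return [a.strip() if isinstance(a, str) else a for a in args]

FIXERS = [whitespace_validator, identity_fixer]

def call_with_fixers(remote_fn, args, is_error):
    output = remote_fn(*args)
    if not is_error(output):
        return output
    for fixer in FIXERS:
        new_args = fixer(args)
        output = remote_fn(*new_args)
        if not is_error(output):
            # Acceptable output on the fixed arguments is used as the
            # output for the original arguments.
            return output
    raise RuntimeError("no fixer produced acceptable output")

# Usage: a remote function that rejects padded strings.
remote = lambda name: None if name != name.strip() else f"hello {name}"
print(call_with_fixers(remote, [" alice "], is_error=lambda o: o is None))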

US Pat. No. 10,691,525

OPPORTUNISTIC SOFTWARE UPDATES DURING SELECT OPERATIONAL MODES

Aurora Labs Ltd., Tel Av...

1. A non-transitory computer readable medium including instructions that, when executed by at least one processor, cause the at least one processor to perform operations for opportunistically updating controller software, comprising:receiving, at a controller, a wireless transmission indicating a need to update software running on the controller, the controller controlling at least a portion of a device;
monitoring an operational status of the device at a first time to determine whether the device is in a first mode of operation in which a controller software update is prohibited;
delaying the controller software update when the operational status is prohibited;
continuing to monitor the operational status of the device at a second time to determine whether the device is in a second mode of operation in which the controller software update is permitted;
determining the device to be in the second mode of operation;
sending a message to a remote server when it is determined that the device is in the second mode of operation;
enabling updating of the controller with the delayed controller software update when it is determined that the device is in the second mode of operation, wherein the delayed controller software update is maintained on the remote server or on the device when the device is in the first mode of operation;
receiving the controller software update in response to the message; and
installing the controller software update on the controller when the device is in the second mode of operation;
wherein the controller software is mapped to a plurality of functional units, and the controller is configured to utilize a virtual file system (VFS) to manage and track one or more symbols representing versions of the plurality of functional units.

US Pat. No. 10,691,524

DIAGNOSTIC SYSTEM AND METHOD

Bugreplay Inc., New Roch...

1. A computer-implemented method, executed on a computing device, comprising:recording video information on the computing device during a monitored event, wherein the monitored event includes one or more user interactions with a web browser and wherein the video information includes a screen recording of the one or more user interactions with the web browser content rendered within the web browser by the computing device;
recording execution information associated with the web browser on the computing device during the monitored event; and
temporally synchronizing the video information and the execution information to form temporally-synchronized diagnostic content that is configured to allow a user to monitor a status of the recorded execution information while playing back synchronized recorded video information.

US Pat. No. 10,691,523

GENERATING NOTIFICATION VISUALIZATIONS BASED ON EVENT PATTERN MATCHING

Splunk Inc., San Francis...

1. A method, comprising:creating a plurality of time stamped events from data received from one or more information technology systems;
analyzing the plurality of time stamped events to identify whether an event pattern that occurs in the plurality of time stamped events is the same or similar to one or more registered event patterns, the one or more registered event patterns indicative of performance aspects of the one or more information technology systems;
generating, based upon identification of one or more registered event patterns of the one or more registered event patterns that are the same or similar to the event pattern, a visualization representing one or more information technology systems of the one or more information technology systems that generated events associated with the event pattern;
generating within the visualization a control tool, wherein interaction with the control tool triggers:
retrieving events by searching the plurality of time stamped events for events surrounding the event pattern; and
replaying the retrieved events; and
generating within the visualization a representation of the replaying of the retrieved events surrounding the event pattern;
wherein the method is performed by one or more computing devices.

US Pat. No. 10,691,521

USING TELEMETRY TO DRIVE HIGH IMPACT ACCESSIBILITY ISSUES

MICROSOFT TECHNOLOGY LICE...

1. A computer device, comprising:memory configured to store data and instructions;
at least one processor configured to communicate with the memory;
an operating system configured to communicate with the memory and the processor, wherein the operating system is operable to:
automatically detect at least one accessibility error for assistive technology operating on the computer device by scanning application information associated with one or more applications executing on the computer device and identifying missing application information for the one or more applications;
identify element information where the at least one accessibility error occurred;
generate error data for the at least one accessibility error with the application information and the element information; and
transmit the error data.

US Pat. No. 10,691,520

LIVE ERROR RECOVERY

Intel Corporation, Santa...

1. An apparatus comprising:a capability structure associated with a downstream port error containment mode; and
a downstream port comprising:
input/output (I/O) circuitry to support communication with another device over a serial data link; and
error logic comprising hardware circuitry, wherein the error logic is to:
determine an uncorrectable error associated with a packet;
determine that a particular bit is set within the capability structure to indicate that the downstream port error containment mode is enabled for the downstream port, wherein the port error containment mode is to contain uncorrectable errors at the downstream port;
set a downstream port error containment status bit in a status register of the capability structure to trigger the downstream port error containment mode based at least in part on the particular bit set to indicate that the downstream port error containment mode is enabled;
halt traffic downstream from the downstream port in the downstream port error containment mode to avoid spread of data corruption associated with the uncorrectable error and to permit error recovery; and
detect that the downstream port error containment status bit is cleared;
wherein the I/O logic is to attempt to retrain the link based on clearing of the downstream port error containment status bit.

US Pat. No. 10,691,519

HANG DETECTION AND RECOVERY

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for hang detection and recovery, the method comprising:sending, by a processor, a read request to a controller;
detecting, by a data hang detection circuit, the read request being sent to the controller and detecting an acknowledgement sent by the controller that receipt of the read request is acknowledged by the controller;
responsive to detecting the read request being sent to the controller and detecting the acknowledgment, initiating, by the data hang detection circuit, a counter when the read request is first detected and acknowledged by the controller;
monitoring, by the data hang detection circuit, to receive a read response from the controller; and
responsive to the counter reaching a timeout threshold before receiving the read response, sending, by the data hang detection circuit, a timeout error to the processor via a multiplexer in the data hang detection circuit.
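
A brief Python sketch, as a software analogue of the claimed hang-detection circuit: a counter is started when an acknowledged read request is observed, and a timeout error is raised if no read response arrives before the threshold. The polling loop and timing values are assumptions.

import time

def monitor_read(send_request, poll_response, timeout_s=1.0, poll_s=0.01):
    send_request()                   # read request sent and acknowledged
    started = time.monotonic()       # counter initiated on acknowledgement
    while time.monotonic() - started < timeout_s:
        response = poll_response()
        if response is not None:
            return response
        time.sleep(poll_s)
    raise TimeoutError("read response not received before timeout threshold")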

US Pat. No. 10,691,518

HANDLING ZERO FAULT TOLERANCE EVENTS IN MACHINES WHERE FAILURE LIKELY RESULTS IN UNACCEPTABLE LOSS

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing I/O requests to a storage array of storage devices in a machine having a processor node and device adaptor, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that is executable to perform operations, the operations comprising:in response to the device adaptor initiating a rebuild of data at the storage devices in the storage array, determining whether a remaining fault tolerance at the storage array comprises a non-zero fault tolerance that permits at least one further storage device of the storage devices to fail and still allow recovery of data stored in the storage array; and
determining, by the device adaptor, whether processor utilization at the device adaptor exceeds a utilization threshold after determining that the remaining fault tolerance is not a zero fault tolerance;
initiating, by the device adaptor, an operation to reduce a rate at which I/O requests to the storage array are processed at the device adaptor in response to determining that the processor utilization at the device adaptor exceeds the utilization threshold.

US Pat. No. 10,691,517

OPERATING FREQUENCY DETERMINATION BASED ON A WARRANTY PERIOD

Hewlett Packard Enterpris...

1. A method for determining operating frequencies, the method comprising:receiving a warranty period for a computer component;
determining durability information for the computer component based on data received from a stress sensor on the computer component;
determining, based on the durability information, an operating frequency that causes a predicted life cycle of the computer component operating at the operating frequency to fall within the warranty period; and
setting the computer component to operate at the operating frequency, thereby allowing the computer component to operate at an enhanced performance level while maintaining a life cycle within the warranty period.
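
A toy Python sketch of choosing the highest operating frequency whose predicted life cycle still covers the warranty period; the life-cycle model (life shrinking with frequency, scaled by a durability factor) and the candidate frequencies are illustrative assumptions.

def predicted_life_years(frequency_ghz, durability_factor):
    # Toy model: higher frequency and lower durability shorten life.
    return durability_factor / frequency_ghz

def choose_frequency(warranty_years, durability_factor,
                     candidate_ghz=(3.8, 3.6, 3.4, 3.2, 3.0)):
    for freq in candidate_ghz:                     # highest first
        if predicted_life_years(freq, durability_factor) >= warranty_years:
            return freq
    return min(candidate_ghz)

print(choose_frequency(warranty_years=3, durability_factor=10.5))  # 3.4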

US Pat. No. 10,691,516

MEASUREMENT AND VISUALIZATION OF RESILIENCY IN A HYBRID IT INFRASTRUCTURE ENVIRONMENT

International Business Ma...

1. A method of evaluating and measuring resilience of a multi-site, multi-vendor hybrid information technology infrastructure environment using a resilience measurement module comprising the steps of:the resilience measurement module constructing a service level to component level mapping structure;
the resilience measurement module assigning a business component criticality index to each business component of the hybrid information technology infrastructure environment to construct a business component to technical component mapping structure;
the resilience measurement module assigning a technical component criticality index to technical components of the business component to technical component mapping structure based on importance and impact of failure of said components to identify critical technical components and critical business components to the hybrid information technology infrastructure environment;
the resilience measurement module identifying single points of failure of the critical technical components and critical business components of the hybrid information technology infrastructure environment;
the resilience measurement module calculating an availability vulnerability weighted score of the critical technical components and critical business components;
the resilience measurement module determining a recoverability factor for each of the critical business components and critical technical components identified;
the resilience measurement module measuring downtime and availability to determine a performance vulnerability score;
the resilience measurement module computing a backup vulnerability score for backup methodology of the identified, critical business components and critical technical components;
the resilience measurement module determining an impact analysis score; and
the resilience measurement module constructing a risk charter with red, amber, and green status exhibiting the resilience of the hybrid information technology environment infrastructure.

US Pat. No. 10,691,515

TESTING IN SERVERLESS SYSTEM WITH AUTOSTOP OF ENDLESS LOOP

INSTITUTE FOR INFORMATION...

1. A detecting system suitable for a serverless structure, comprising:a processor containing a plurality of modules, including:
a testing mode module configured to obtain a testing signal, perform at least one action according to the testing signal, and transmit a request instruction comprising the testing signal;
a service redirection module configured to determine whether the testing signal of the request instruction represents performing a testing mode, wherein if the service redirection module determines that the testing signal of the request instruction represents performing the testing mode, the service redirection module requests a testing service device to provide at least one service corresponding to the request instruction;
a data collection module configured to collect a performing order when the testing mode module performs the at least one action and a performing result of each one of the at least one action, and generate to-be-classified data based on the performing order and the performing result; and
a classification module, comprising a classification model, configured to calculate a detecting result according to the to-be-classified data, wherein the detecting result represents whether a snowball effect will occur, wherein the snowball effect represents that a storage device adopted by the at least one service is the same as the storage device in an electronic device, and the electronic device transmits the testing signal;
wherein the modules are operated under the control of the processor.

US Pat. No. 10,691,514

SYSTEM AND METHOD FOR INTEGRATION, TESTING, DEPLOYMENT, ORCHESTRATION, AND MANAGEMENT OF APPLICATIONS

DATAPIPE, INC., Jersey C...

1. A system, comprising:at least one processor; and
a memory operatively coupled to the at least one processor, the at least one processor configured to perform operations comprising:
identifying an application template stored in a template data store;
determining application creation configuration information, for a software application, based on the identified application template;
generating application source code information based on the application creation configuration information, the application source code information including application build configuration information;
provisioning an application source code data store based on the application creation configuration information;
storing the application source code information in the application source code data store;
creating an integration workflow including an application build process configured to build the application source code information based on the application build configuration information;
predicting an infrastructure need, based at least in part on the integration workflow including the application build process;
scaling an infrastructure limit, based at least in part on the integration workflow including the application build process;
initiating a build of the application source code information, based on the application build configuration information, to generate the software application;
provisioning an application infrastructure configured to host the software application in an infrastructure services provider system based at least in part on the infrastructure need as predicted, the infrastructure limit as scaled, and the application creation configuration information;
initiating a rebuild of the application source code information, in response to a change in the application source code information detected via the integration workflow;
testing the software application according to a testing workflow comprising a logic gate; and
deploying the software application to the application infrastructure upon generation of the software application, in response to a notification corresponding to the logic gate.

US Pat. No. 10,691,513

DISTRIBUTED MESSAGE QUEUE WITH BEST CONSUMER DISCOVERY AND AREA PREFERENCE

TWITTER, INC., San Franc...

1. A computer-implemented method of queuing items in a logical queue distributed across multiple hosts operating in multiple data centers, the method comprising:producing, by a producer process executing on a host operating in a first data center, a task for forwarding a message in a mobile communication network;
determining whether there is a consumer process executing on the host operating in the first data center;
storing the task in the portion of the logical queue that is distributed to the host operating in the first data center when it is determined that there is a consumer process executing on the host operating in the first data center;
determining whether there is a consumer process executing on another host operating in the first data center when it is determined that there is no consumer process executing on the host operating in the first data center;
sending the task to another host operating in the first data center and storing the task in the portion of the logical queue that is distributed to the other host when it is determined that there is a consumer process executing on the other host;
determining whether there is a consumer process executing on a host operating in a second data center when it is determined that there is no consumer process executing on another host operating in the first data center;
sending the task to a host operating in the second data center and storing the task in the portion of the logical queue that is distributed to the host operating in the second data center when it is determined that there is a consumer process executing on the host operating in the second data center.
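
A short Python sketch of the cascading placement recited above: the local host first, then another host in the same data center, then a host in a second data center. The Host class and its has_consumer/enqueue methods are hypothetical.

class Host:
    def __init__(self, name, consumers=0):
        self.name, self.consumers, self.queue = name, consumers, []

    def has_consumer(self):
        return self.consumers > 0

    def enqueue(self, task):
        self.queue.append(task)
        return self.name

def place_task(task, local_host, first_dc_hosts, second_dc_hosts):
    # Prefer the producing host, then same-data-center hosts, then the
    # second data center, always requiring a running consumer process.
    if local_host.has_consumer():
        return local_host.enqueue(task)
    for host in first_dc_hosts:
        if host.has_consumer():
            return host.enqueue(task)
    for host in second_dc_hosts:
        if host.has_consumer():
            return host.enqueue(task)
    raise RuntimeError("no consumer process found in either data center")

local = Host("dc1-host1", consumers=0)
peer = Host("dc1-host2", consumers=1)
remote = Host("dc2-host1", consumers=1)
print(place_task("forward-message", local, [peer], [remote]))  # dc1-host2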

US Pat. No. 10,691,512

NOTIFYING ENTITIES OF RELEVANT EVENTS

Banjo, Inc., South Jorda...

1. A method comprising:receiving an indication of a location type, a boundary geometry, a user event truthfulness preference, a first event type, and a second event type;
receiving an indication of an area including a first location of the location type and a second location of the location type;
combining the location type, the boundary geometry, the area, the user event truthfulness preference, the first event type, and the second event type, into a rule formula;
monitoring the area for events occurring within a first boundary surrounding the first location or occurring within a second boundary surrounding the second location, the first boundary and the second boundary defined in accordance with the boundary geometry;
accessing first event characteristics including a first event type and a first event truthfulness corresponding to a first detected event;
accessing second event characteristics including a second event type and a second event truthfulness corresponding to a second detected event;
determining that a combination of the first characteristics and the second characteristics satisfies the rule formula, including determining that the first event type and the second event type occurred in combination within the first boundary and that the first event truthfulness and the second event truthfulness both satisfy the user event truthfulness preference; and
automatically electronically notifying an entity in accordance with notification preferences that the rule formula was satisfied.

US Pat. No. 10,691,511

COUNTING EVENTS FROM MULTIPLE SOURCES

Arm Limited, Cambridge (...

13. A method comprising:generating a first indication of a first event which has occurred in a first event source, wherein the first indication is one of a predefined set of indications corresponding to a plurality of event types;
generating a second indication of a second event which has occurred in a second event source, wherein the second indication is one of the predefined set of indications corresponding to the plurality of event types;
generating a first count signal in response to the first indication matching a selected event type of the plurality of event types;
generating a second count signal in response to the second indication matching the selected event type of the plurality of event types;
incrementing a counter in response to the first count signal;
incrementing the counter in response to the second count signal,
wherein the first event source comprises first event selection configuration storage, and wherein a configuration value stored in the first event selection configuration storage determines the selected event type for the first event selection circuitry,
wherein the second event source comprises second event selection configuration storage, and wherein the configuration value stored in the second event selection configuration storage determines the selected event type for the second event selection circuitry; and
in response to a configuration update pending signal indicating that the configuration value stored in the first event selection configuration storage and the second event selection configuration storage will be updated, preventing modification of the counter.

US Pat. No. 10,691,510

METHODS AND APPARATUS TO DETECT UNINSTALLATION OF AN ON-DEVICE METER

The Nielsen Company (US),...

1. An apparatus to detect uninstallation of applications on mobile devices, the apparatus comprising:means for detecting that an application is to be uninstalled from the mobile device, wherein the application is to gather status information of the means for detecting to ensure that the means for detecting is installed, wherein the application is to transmit the status information to a data collector;
means for displaying a prompt indicating whether the means for detecting is to be uninstalled when the application is to be uninstalled, wherein the means for displaying is to instruct a package manager to remove the means for detecting from the mobile device; and
means for communicating an uninstallation notification to the data collector when the application is to be uninstalled, and wherein the uninstallation notification is to enable identification of a panelist associated with the mobile device.

US Pat. No. 10,691,509

DESIRED SOFTWARE APPLICATIONS STATE SYSTEM

Microsoft Technology Lice...

1. A computer-implemented method comprising:monitoring a user activity of a first product operating on a machine, the first product comprising a first software application;
determining a second product connected to the first product, the second product comprising a second software application, both the first and second software applications being part of a suite of software applications;
determining a user activity of the second product installed on the machine;
determining a desired activity of the second product on the machine;
comparing the user activity of the second product with the desired activity of the second product;
generating a customized message based on the comparison, the customized message identifying the second software application;
causing a display of the customized message at the machine, the customized message comprising a user interface element indicating a recommended configuration setting of the second software application, the recommended configuration setting being based on the comparison of the user activity of the second product with the desired activity of the second product;
detecting a selection of the user interface element on the machine; and
in response to the detecting, configuring the second software application with the recommended configuration setting on the machine.

US Pat. No. 10,691,507

API LEARNING

FUJITSU LIMITED, Kawasak...

10. A method, comprising:crawling, via a communication interface, one or more sources for application program interface (API) documentation;
collecting, via the communication interface, an API document from the one or more sources;
tokenizing the API document to create at least one token based on content of the API document;
associating a priority with each sentence;
generating an API ontology graph for a first semantic view based on the at least one token;
selecting a sentence based on a probability of appearing in the corpus, which is computed by machine learning methods;
developing the first semantic view for each API document based on the API ontology graph;
revising the first semantic view of the API document to generate the second semantic view once API documents are updated;
providing a particular interface to interact with an API;
providing the second semantic view of the API document via the particular interface;
receiving a request to execute a generic function based on the API document;
causing a native function of the API to be executed, wherein the native function of the API corresponds to the generic function; and
providing, via the particular interface, a response based on an execution of the native function of the API.

US Pat. No. 10,691,506

DISTRIBUTED LOCK FOR DATA ACQUISITION SYSTEMS

Intel Corporation, Santa...

1. A non-transitory computer readable medium (CRM) comprising instructions that, when executed by a first node of a plurality of nodes that comprise a distributed data store, cause the first node to:receive a request for a data acquisition event;
retrieve, from a plurality of unprocessed data acquisition events, where the unprocessed data acquisition events are stored across the plurality of nodes in the distributed data store, an event with a lock in an unlocked status;
determine whether the lock for the unlocked event is stored in the first node; and
respond with an event ID corresponding to the unlocked event if the first node stores the lock, or
forward the request to a second node, of the plurality of nodes of the distributed data store, that stores the lock.

US Pat. No. 10,691,505

SOFTWARE BOT CONFLICT-RESOLUTION SERVICE AGENT

International Business Ma...

1. A system comprising:hardware processor coupled with a memory device, the hardware processor configured to:
receive data from a target domain, the data comprising changes made to a content of the target domain;
analyze the data to identify a first change made to the content by a first bot and a second change made to the content by a second bot;
determine based on the analysis that the first and second changes conflict;
in response to determining that the first and second changes conflict, determine that the first and second bots are in conflict by determining that the first and second changes are made by the first and second bots automatically, by detecting that the changes made to the content are balanced reverts within a threshold difference and that the latency between successive reverts is within a threshold of time; and
resolve the conflict by executing an amelioration action.

US Pat. No. 10,691,504

CONTAINER BASED SERVICE MANAGEMENT

International Business Ma...

1. A method for migrating a service from one container to another container in a cloud environment, the method comprising:concurrently obtaining a first group of requests that are accessing a service launched in a first container instance and a second group of requests that are waiting for accessing the service;
generating a migrated service in a second container instance by migrating the service from the first container instance to the second container instance based on the obtained first and second groups of requests,
wherein the data associated with the migrated service is copied from the first container instance to the second container instance without suspending the first group of requests; and
directing the second group of requests to the migrated service in the second container instance without a new request,
wherein the migrated service remains active and the first group of requests accesses the migrated service during the service migration into the second container instance,
wherein the migrated service includes one or more network activities and one or more computing activities,
wherein the container cluster provides the migrated service without interruption to a consumer and without a change to one or more existing connections to the migrated service and at a reduced shut down time for the migrated service,
wherein the migrated service is shut down for a dynamic portion to be copied to the migrated service,
wherein the data associated with the migrated service is removed from the first container instance when the migration of the service is completed.

US Pat. No. 10,691,503

LIVE MIGRATION OF VIRTUAL MACHINE WITH FLOW CACHE IN IMPROVED RESTORATION SPEED

Alibaba Group Holding Lim...

1. A method comprising:improving a restoration speed of Transmission Control Protocol (TCP) connection at a destination physical machine during live migration of a virtual machine from a source physical machine to the destination physical machine by:
receiving, by a flow cache, a start instruction;
hooking, by the flow cache, a callback function of a processing logic to a virtual back-end network adapter of the destination physical machine that is directed to in the start instruction, and wherein the virtual back-end network adapter of the destination physical machine has a same configuration as a back-end network adapter of the source physical machine;
receiving, by the flow cache, a data packet that is sent to the virtual machine on the source physical machine directly from a peer with the flow cache pre-installed on the destination physical machine in a stage when the virtual machine is suspended;
caching, by the flow cache, the received data packet; and
sending, by the flow cache, the cached data packet to the virtual machine on the destination physical machine after the flow cache senses that the virtual machine is restored at the destination physical machine.

US Pat. No. 10,691,502

TASK QUEUING AND DISPATCHING MECHANISMS IN A COMPUTATIONAL DEVICE

INTERNATIONAL BUSINESS MA...

1. A method comprising:maintaining a plurality of ordered lists of dispatch queues corresponding to a plurality of processing entities, wherein each dispatch queue includes one or more task control blocks or is empty;
determining whether a primary dispatch queue of a processing entity is empty in an ordered list of dispatch queues for the processing entity;
in response to determining that the primary dispatch queue of the processing entity is empty, selecting a task control block for processing by the processing entity from another dispatch queue of the ordered list of dispatch queues for the processing entity, wherein the another dispatch queue from which the task control block is selected meets a threshold criteria for the processing entity, wherein a data structure indicates that the task control block that was selected was last executed in the processing entity, and wherein in response to determining that the primary dispatch queue of the processing entity is not empty, processing at least one task control block in the primary dispatch queue of the processing entity;
determining that another task control block is ready to be dispatched; and
in response to determining that the another task control block was dispatched earlier, placing the another task control block in a primary dispatch queue of a processing entity on which the another task control block was dispatched earlier.
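
A small Python sketch of the dispatch-queue selection described above: process from the primary dispatch queue when it is non-empty, otherwise take a task control block from another queue in the ordered list that meets a threshold criterion. The deque representation and the threshold callback are assumptions.

from collections import deque

def select_tcb(ordered_queues, meets_threshold):
    primary = ordered_queues[0]
    if primary:                                  # primary not empty
        return primary.popleft()
    for queue in ordered_queues[1:]:             # fall back in list order
        if queue and meets_threshold(queue):
            return queue.popleft()
    return None

queues = [deque(), deque(["tcb-7", "tcb-9"])]
print(select_tcb(queues, meets_threshold=lambda q: len(q) >= 2))  # tcb-7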

US Pat. No. 10,691,501

COMMAND INVOCATIONS FOR TARGET COMPUTING RESOURCES

Amazon Technologies, Inc....

1. A system for distributing a command to a set of computing instances, comprising:at least one processor;
at least one memory device including instructions that, when executed by the at least one processor, cause the system to:
receive a request to invoke the command in batches over the set of computing instances managed within a service provider environment, wherein the command is an instruction to a software agent hosted on a computing instance to perform an administrative task associated with the computing instance, and the request includes an attribute that identifies at least one tag assigned to computing instances included in the set of computing instances;
identify a first subset of computing instances included in the set of computing instances assigned the at least one tag, wherein a number of computing instances included in a subset of computing instances is determined using a subset parameter;
send the command to computing instances included in the first subset of computing instances according to a send rate parameter specifying a rate at which the command is sent to the computing instances;
receive status indications from the computing instances indicating statuses of command executions by the computing instances, wherein a number of errors executing the command that exceed an error threshold causes termination of the execution of the command;
identify, based at least in part on the subset parameter, a second subset of computing instances included in the set of computing instances assigned the at least one tag; and
send the command to software agents hosted on computing instances included in the second subset of computing instances according to the send rate parameter.
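
A hedged Python sketch of invoking a command over tag-matched instances in batches, pacing sends with a rate parameter and terminating when errors exceed a threshold; the instance representation, parameter names, and status values are illustrative, not the service's API.

import time

def invoke_in_batches(instances, tag, run_command,
                      batch_size=10, send_rate_per_s=5, error_threshold=3):
    targets = [i for i in instances if tag in i["tags"]]
    errors = 0
    for start in range(0, len(targets), batch_size):      # subset parameter
        for instance in targets[start:start + batch_size]:
            status = run_command(instance)                 # agent executes it
            if status != "ok":
                errors += 1
                if errors > error_threshold:
                    return "terminated: error threshold exceeded"
            time.sleep(1.0 / send_rate_per_s)              # send rate
    return "completed"

instances = [{"id": f"i-{n}", "tags": ["web"]} for n in range(4)]
print(invoke_in_batches(instances, "web", run_command=lambda i: "ok",
                        batch_size=2, send_rate_per_s=100))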

US Pat. No. 10,691,500

MODELING WORKLOADS USING MICRO WORKLOADS REPRESENTING NORMALIZED UNITS OF RESOURCE CONSUMPTION METRICS

Virtustream IP Holding Co...

1. A method comprising:selecting a given workload, the given workload being associated with at least one application type;
analyzing the given workload to determine a set of functional patterns, the set of functional patterns describing resource structures for implementing functionality of the at least one application type;
determining resource consumption demand profiles for each of the set of functional patterns;
determining micro workload distributions for each of the resource consumption demand profiles, a given one of the micro workload distributions comprising a number of micro workloads, each micro workload comprising a normalized unit of resource consumption metrics;
converting the micro workload distributions for each of the resource consumption demand profiles into a set of resource requirements for running the given workload on an information technology infrastructure; and
allocating resources of the information technology infrastructure to the given workload based on the set of resource requirements;
wherein the method is performed by at least one processing device comprising a processor coupled to a memory.

US Pat. No. 10,691,499

DISTRIBUTED RESOURCE ALLOCATION

Alibaba Group Holding Lim...

1. A computer-implemented method for performing resource allocation, the method comprising:using a distributed computing system that includes a number of individual computer-implemented solvers for performing resource allocation of M resources among N users into K pools by solving a knapsack problem (KP) subject to K global constraints and L local constraints:
receiving data representing the K global constraints and the L local constraints, wherein K is smaller than L, each of the K global constraints restricts a respective maximum per-pool cost of the M resources across two or more users, and each of the L local constraints restricts a per-user selection of the M resources;
decomposing the KP into N sub-problems using K dual multipliers, each of the N sub-problems corresponding to a respective one of the N users and subject to the L local constraints with respect to (w.r.t.) the corresponding user, wherein N is on an order of billions or larger, wherein each of the K dual multipliers corresponds to a respective one of the K global constraints;
determining the number of individual computer-implemented solvers for solving the N sub-problems;
distributing the N sub-problems among the number of individual computer-implemented solvers by assigning each sub-problem to a respective computer-implemented solver;
solving the KP by the distributed computing system by performing two or more iterations, and, in one iteration, the method comprising:
solving each of the N sub-problems by the computer-implemented solver to which the sub-problem was assigned independently, wherein solving each of the N sub-problems comprises computing an amount of each of the M resources to be allocated to the corresponding user of the N users, and
computing, for each of the K pools, a per-pool cost of the M resources across the N users based on the amount of each of the M resources to be allocated to the corresponding user of the N users, and
updating each of the K dual multipliers w.r.t. a corresponding pool based on a difference between a maximum per-pool cost of the M resources across two or more users for the corresponding pool restricted by a corresponding global constraint and a per-pool cost of the M resources across the N users for the corresponding pool computed based on the amount of each of the M resources to be allocated to the corresponding user of the N users;
determining whether a convergence condition is met based on the K dual multipliers;
in response to determining that a convergence condition is met based on the K dual multipliers, solving each of the N sub-problems based on the K dual multipliers by one of the number of individual computer-implemented solvers independently, wherein solving each of the N sub-problems comprises computing an amount of each of the M resources to be allocated to the corresponding user of the N users; and
allocating the amount of each of the M resources to the corresponding user of the N users.
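
The iterative scheme above resembles dual decomposition with a subgradient-style multiplier update. The Python sketch below is an illustrative approximation, not the patented algorithm: solve_subproblem (a stand-in for the per-user solver, assumed to return that user's contribution to each pool's cost) and the step size eta are hypothetical.

    def solve_kp_distributed(users, max_cost_per_pool, solve_subproblem,
                             eta=0.1, max_iters=100, tol=1e-3):
        """Illustrative dual-decomposition loop: one multiplier per global (per-pool) constraint."""
        K = len(max_cost_per_pool)
        lam = [0.0] * K                                  # K dual multipliers
        for _ in range(max_iters):
            # Each user's sub-problem is solved independently given the current multipliers.
            contributions = [solve_subproblem(u, lam) for u in users]
            # Per-pool cost aggregated across all users.
            pool_cost = [sum(c[k] for c in contributions) for k in range(K)]
            # Update each multiplier from the gap between incurred and allowed per-pool cost.
            new_lam = [max(0.0, lam[k] + eta * (pool_cost[k] - max_cost_per_pool[k]))
                       for k in range(K)]
            converged = max(abs(new_lam[k] - lam[k]) for k in range(K)) < tol
            lam = new_lam
            if converged:
                break
        # Final independent solve with the (approximately) converged multipliers.
        return [solve_subproblem(u, lam) for u in users]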

US Pat. No. 10,691,498

ACQUISITION AND MAINTENANCE OF COMPUTE CAPACITY

Amazon Technologies, Inc....

1. A system, comprising:one or more processors; and
one or more memories, the one or more memories having stored thereon instructions, which, when executed by the one or more processors, configure the one or more processors to:
receive, at a first time, a request to execute a program code, the request including information usable for executing the program code;
determine, based at least on the information included in the request, that the requested execution of the program code is not time-sensitive; and
subsequent to determining that the requested execution of the program code is not time-sensitive, determine that delaying the requested execution of the program code until a subsequent time period subsequent to the first time and performing the requested execution of the program code using compute capacity available during the subsequent time period would consume resources that are in lower demand or consume fewer resources compared to performing the requested execution of the program code using compute capacity available at the first time; and
cause the program code to be executed using compute capacity acquired for the subsequent time period.

US Pat. No. 10,691,497

HIGH BANDWIDTH CONNECTION BETWEEN PROCESSOR DIES

INTEL CORPORATION, Santa...

1. A semiconductor package comprising:at least one central processing unit (CPU) disposed on a first CPU die;
at least a first graphics processing unit (GPU) disposed on a first GPU die and a second graphics processing unit (GPU) disposed on a second GPU die, wherein the first GPU and the second GPU are positioned in a stacked configuration forming layers on the semiconductor package;
a communication network to provide communication connections between the first CPU die and the first GPU die; and
a processor to:
receive graphics metadata from an application which is to execute on the at least one CPU, wherein the graphics metadata indicates a level of graphics processing activity associated with the application;
determine, from the graphics metadata, an amount of graphics processing resources to be dedicated to the application, the graphics processing resources comprising a plurality of graphics processing units located on the first GPU and the second GPU;
configure one or more communication connections between the CPU and the first GPU and the CPU and the second GPU;
transmit the graphics data for the application from the CPU to the plurality of graphics processing units via the communication network;
receive a completion acknowledgment from the plurality of graphics processing units; and
in response to a determination that the workload is finished, to terminate one or more communication connections on the communication network.

US Pat. No. 10,691,496

DYNAMIC MICRO-SERVICES RELATED JOB ASSIGNMENT

Capital One Services, LLC...

1. A method, comprising:subscribing, by a computing node, to a message broker associated with a set of heartbeat messages,
wherein a set of computing nodes subscribe to the message broker;
receiving, by the computing node and from the set of computing nodes, the set of heartbeat messages,
wherein each computing node, of the set of computing nodes, is configured to receive the set of heartbeat messages based upon being subscribed to the message broker, and
wherein the set of heartbeat messages is related to determining a respective priority of the set of computing nodes for processing a set of jobs;
determining, by the computing node, an order of the set of heartbeat messages based on respective delays between when the set of computing nodes would have been triggered to send the set of heartbeat messages and when the set of computing nodes actually sent the set of heartbeat messages;
determining, by the computing node, the respective priority of the set of computing nodes based on one or more factors related to at least one of the set of computing nodes or the order of the set of heartbeat messages;
determining, by the computing node, whether to perform a subset of the set of jobs based on the respective priority of the set of computing nodes; and
performing, by the computing node, a set of actions after determining whether to perform the subset of the set of jobs.
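
A rough Python sketch of the heartbeat-ordering idea above, assuming each node publishes the time it was scheduled to send and the time it actually sent; the names (Heartbeat, rank_nodes) and the job-claiming rule are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Heartbeat:
        node_id: str
        scheduled_at: float   # when the node would have been triggered to send
        sent_at: float        # when it actually sent

    def rank_nodes(heartbeats):
        """Order nodes by heartbeat delay (smaller delay -> higher priority rank)."""
        ordered = sorted(heartbeats, key=lambda h: h.sent_at - h.scheduled_at)
        return {h.node_id: rank for rank, h in enumerate(ordered)}

    def should_take_jobs(my_node_id, heartbeats, num_jobs, capacity):
        """Hypothetical rule: higher-priority nodes claim job subsets first, up to their capacity."""
        priority = rank_nodes(heartbeats)[my_node_id]
        return priority * capacity < num_jobs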

US Pat. No. 10,691,495

VIRTUAL PROCESSOR ALLOCATION WITH EXECUTION GUARANTEE

VMware, Inc., Palo Alto,...

1. A method of scheduling a jitterless workload on a virtual machine (VM) executing on a host comprising one or more physical central processing units (pCPUs) comprising a first subset of the one or more pCPUs and a second subset of the one or more pCPUs, the method comprising:creating a jitterless zone, wherein the jitterless zone comprises the first subset of the one or more pCPUs;
determining whether a virtual central processing unit (vCPU) of the VM is used to execute a jitterless workload or a non-jitterless workload;
when the vCPU is determined to execute the jitterless workload:
allocating by a central processing unit (CPU) scheduler to the vCPU at least one of the pCPUs in the jitterless zone; and
scheduling the jitterless workload for execution by the vCPU on the allocated at least one of the pCPUs in the jitterless zone; and
when the vCPU is determined to execute the non-jitterless workload:
allocating by the CPU scheduler to the vCPU at least one of the pCPUs of the first subset or the second subset; and
scheduling the non-jitterless workload for execution by the vCPU on the allocated at least one of the pCPUs of the first subset or the second subset.

US Pat. No. 10,691,494

METHOD AND DEVICE FOR VIRTUAL RESOURCE ALLOCATION, MODELING, AND DATA PREDICTION

Alibaba Group Holding Lim...

1. A computer-implemented method, comprising:receiving, from a plurality of data providers, evaluation results of a plurality of users, wherein the evaluation results are obtained by the plurality of data providers evaluating the plurality of users based on evaluation models of the plurality of data providers;
constructing a plurality of training samples by using the evaluation results uploaded by the plurality of data providers as training data, wherein each training sample comprises a respective subset of the evaluation results corresponding to a same user of the plurality of users;
generating a label for each training sample based on an actual service execution status of the same user to provide a plurality of labels;
training a model based on the plurality of training samples and the plurality of labels, wherein training the model comprises setting a plurality of variable coefficients, each variable coefficient specifying a contribution level of a corresponding data provider;
allocating virtual resources to each data provider based on the plurality of variable coefficients; and
receiving evaluation results of a particular user that are uploaded by the plurality of data providers, and inputting the evaluation results of the particular user to the trained model to obtain a final evaluation result of the particular user.

US Pat. No. 10,691,493

PROCESSING PLATFORM WITH DISTRIBUTED POLICY DEFINITION, ENFORCEMENT AND MONITORING ACROSS MULTI-LAYER INFRASTRUCTURE

EMC IP Holding Company LL...

1. An apparatus comprising:a processing platform comprising a plurality of processing devices each comprising a processor coupled to a memory;
the processing platform being configured to implement multi-layer infrastructure comprising compute, storage and network resources at a relatively low level of the multi-layer infrastructure, an application layer at a relatively high level of the multi-layer infrastructure, and one or more additional layers arranged between the relatively low level and the relatively high level;
the processing platform being further configured:
to determine policies for respective different ones of the layers of the multi-layer infrastructure, the policy for a given one of the layers defining rules and requirements relating to that layer;
to enforce the policies at the respective layers of the multi-layer infrastructure; and
to monitor performance of an application executing in the multi-layer infrastructure;
wherein determining policies for respective ones of the layers of the multi-layer infrastructure comprises:
determining operational policies for each of a plurality of layers other than the application layer;
determining an application policy for the application layer;
propagating the application policy from the application layer through the other layers of the multi-layer infrastructure; and
generating the policy for the given one of the layers as a combination of the application policy and an operational policy for that layer;
wherein one or more configuration parameters of the multi-layer infrastructure are adjusted based at least in part on a result of the monitoring; and
wherein each of one or more of the layers of the multi-layer infrastructure is associated with at least one controller configured to receive the policy for the corresponding layer and to translate the policy into management and orchestration actions for that layer.

US Pat. No. 10,691,492

RESOURCE TOLERATIONS AND TAINTS

Google LLC, Mountain Vie...

1. A method of allocating tasks with a scheduler that allocates tasks to a resource comprising a physical or virtual machine, the method comprising:receiving, at the scheduler, a request to allocate the resource to perform a particular task;
receiving, by one or more processors, one or more attributes associated with the resource;
analyzing by the one or more processors, in response to the request, the one or more attributes associated with the resource;
analyzing, with the one or more processors, the received request to determine whether the particular task is associated with a first toleration, the first toleration being configured to indicate to the scheduler that one or more particular attributes associated with the resource are to be ignored for the particular task;
in response to a determination that the one or more attributes associated with the resource include one or more particular attributes identified in the first toleration, allocating, by the scheduler and the one or more processors, the resource to the particular task;
in response to a determination that the one or more attributes associated with the resource do not include one or more particular attributes identified in the first toleration, not allocating, by the scheduler and the one or more processors, the resource to the request;
in response to receiving the one or more attributes, determining, by the one or more processors, whether the one or more received attributes indicate whether a claim on the resource is allowed if the claim is not associated with a second toleration matching the received one or more attributes; and
in response to a determination that the received one or more attributes indicates that a claim on the resource is allowed if the claim is not associated with the second toleration matching the received one or more attributes, determining, by the one or more processors, whether a then-existing claim on the resource is associated with a third toleration matching one or more of the received one or more attributes.
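
The first allocation decision above reduces to an attribute check against a toleration. A simplified Python sketch, with illustrative attribute names and a plain set comparison standing in for the claimed determinations:

    def allocate_to_task(resource_attributes, toleration_attributes):
        """Allocate the resource when its attributes include the attributes named in the task's toleration."""
        return set(toleration_attributes) <= set(resource_attributes)

    def resource_allows_unmatched_claims(resource_attributes):
        """Hypothetical check mirroring the second determination: the resource advertises that
        claims lacking a matching toleration are still allowed."""
        return "allow-unmatched-claims" in resource_attributes

    # Usage (illustrative attribute names):
    print(allocate_to_task({"gpu", "preemptible"}, {"preemptible"}))   # True  -> allocate
    print(allocate_to_task({"gpu"}, {"preemptible"}))                  # False -> do not allocate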

US Pat. No. 10,691,491

ADAPTING A PRE-TRAINED DISTRIBUTED RESOURCE PREDICTIVE MODEL TO A TARGET DISTRIBUTED COMPUTING ENVIRONMENT

Nutanix, Inc., San Jose,...

1. A method, comprising:training a model into a trained model, wherein the model comprises a set of trained model parameters to capture resource consumption of computing and storage resources of a workload in a first computing environment, and the first computing environment is characterized by a first configuration;
detecting a difference between the first computing environment and a second computing environment characterized by a second configuration, wherein
the first configuration comprises at least a portion of the second configuration so that the first computing environment in which the model is trained comprises a computing node that is also in the second computing environment into which the trained model is to be deployed; and
deploying the trained model to the second computing environment at least by adapting the trained model to the second computing environment, wherein adapting the trained model comprises modifying the set of model parameters based at least in part on the difference prior to conclusion of adapting the trained model to the second computing environment.

US Pat. No. 10,691,490

SYSTEM FOR SCHEDULING THREADS FOR EXECUTION

Apple Inc., Cupertino, C...

1. An apparatus, comprising:an execution unit circuit configured to process a plurality of data samples associated with a signal; and
a hardware scheduling circuit configured to:
receive priority indications for a plurality of threads for processing the plurality of data samples;
store respective program counter start values for each thread of the plurality of threads;
based on a priority of a particular thread and based on an availability of at least some of the plurality of data samples that are to be processed by the particular thread, schedule the particular thread for execution and transfer a program counter start value for the particular thread to the execution unit circuit; and
in response to a determination, during a particular clock cycle, that a portion of the plurality of data samples is available to be processed by a different thread of the plurality of threads that has a higher priority than the particular thread:
save a current value of the program counter for the particular thread; and
transfer a program counter start value for the different thread to the execution unit circuit to permit the different thread to begin execution in a next clock cycle.

US Pat. No. 10,691,489

MANAGING THE PROCESSING OF STREAMED DATA IN A DATA STREAMING APPLICATION USING QUERY INFORMATION FROM A RELATIONAL DATABASE

International Business Ma...

1. A computer-executed method, comprising:monitoring queries against data in a computerized database to generate at least one parameter defining data of interest;
identifying selective in-flight data in a stream computing application which matches the at least one parameter defining the data of interest, the stream computing application producing output data for inclusion in the computerized database; and
responsive to said identifying selective in-flight data in the stream computing application which matches the at least one parameter defining the data of interest, modifying the manner in which the selective in-flight data is processed by the stream computing application;
wherein said modifying the manner in which the selective in-flight data is processed by the stream computing application comprises adjusting a processing priority of the selective in-flight data.

US Pat. No. 10,691,488

ALLOCATING JOBS TO VIRTUAL MACHINES IN A COMPUTING ENVIRONMENT

International Business Ma...

1. A computer system for allocating jobs to run on a virtual machine (VM) of a particular host computing environment, said system comprising:a memory storage device storing a program of instructions;
a processor device receiving said program of instructions to configure said processor device to:
receive, from a user, input data representing a current request to run a job on a VM, said current job having associated job characteristics;
receive input data representing features associated with VMs running on a public networked or private networked host computing environment, said environment comprising features associated with a host computing environment;
run a learned model for selecting one of: the public network or private network as a host computing environment to run said current requested job on a VM resource, said learned model based on a minimized cost function comprising a probabilistic classifier hypothesis function for a multiclass classifier parameterized according to one or more job characteristics, VM features, and host computing environment features associated with running the current requested job on a VM resource, and fitting parameters with factor weightages based on a sigmoid function to achieve a balance between a computational efficiency, cost effectiveness, and I/O operational efficiency for running the current requested job on the selected private networked or public networked host computing environment; and
allocate the VM resource to run said current requested job on said selected host computing environment.

US Pat. No. 10,691,487

ABSTRACTION OF SPIN-LOCKS TO SUPPORT HIGH PERFORMANCE COMPUTING

International Business Ma...

1. A method comprising:receiving a non-privileged disable interrupts instruction from a user application executing in user space, the non-privileged disable interrupts instruction having an operand with a non-zero value;
determining a value in a special purpose register associated with disabling interrupts; and
in response to determining that the value in the special purpose register associated with disabling interrupts is zero, disabling interrupts and placing the non-zero value of the operand in the special purpose register associated with disabling interrupts.

US Pat. No. 10,691,485

AVAILABILITY ORIENTED DURABILITY TECHNIQUE FOR DISTRIBUTED SERVER SYSTEMS

eBay Inc., San Jose, CA ...

1. A method comprising:receiving, by a first server, a first message from a client device, the first server being an entry point for a message processing stream that includes at least a second server positioned downstream from the first server in the message processing stream;
in response to receiving the first message:
generating a unique identifier for the first message, and
adding an entry in a transaction log, the entry including the first message and the unique identifier for the first message;
appending the unique identifier to the first message, and transmitting the first message to the second server positioned downstream from the first server in the message processing stream;
determining that the first message has not been processed through the message processing stream; and
in response to determining that the first message has not been processed through the message processing stream:
accessing the first message from the transaction log; and
appending the unique identifier to the first message, and re-transmitting the first message to the second server positioned downstream from the first server in the message processing stream.
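
A minimal Python sketch of the entry-point behavior described above, with hypothetical names (EntryServer, send_downstream): log each incoming message under a unique identifier, forward it, and re-send anything not yet confirmed as processed through the stream.

    import uuid

    class EntryServer:
        """Hypothetical first-server sketch: log, forward, and replay unprocessed messages."""

        def __init__(self, send_downstream):
            self.send_downstream = send_downstream
            self.transaction_log = {}      # unique id -> original message
            self.processed = set()         # ids confirmed processed through the stream

        def receive(self, message):
            msg_id = str(uuid.uuid4())                              # generate a unique identifier
            self.transaction_log[msg_id] = message                  # add an entry to the transaction log
            self.send_downstream({"id": msg_id, "body": message})   # append id and transmit downstream
            return msg_id

        def mark_processed(self, msg_id):
            self.processed.add(msg_id)

        def retransmit_unprocessed(self):
            for msg_id, message in self.transaction_log.items():
                if msg_id not in self.processed:                    # not processed through the stream
                    self.send_downstream({"id": msg_id, "body": message})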

US Pat. No. 10,691,484

REDUCING COMMIT WAIT IN A DISTRIBUTED MULTIVERSION DATABASE BY READING THE CLOCK EARLIER

Google LLC, Mountain Vie...

1. A method, comprising:receiving, at a client in a distributed system, a transaction to be committed to a server in communication with the client;
computing, with one or more processors in a client library of the client, a tentative timestamp for the transaction, wherein the tentative timestamp is computed using a value for a current time plus a variable corresponding to bounds of uncertainty of clocks in the distributed system, the clocks including at least a client clock at the client and a server clock at the server; and
initiating, with the one or more processors, a commit for the transaction based on the computed tentative timestamp, wherein initiating the commit for the transaction is performed by the client outside of a lock-hold interval for the transaction.
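
The tentative-timestamp computation above reduces to the current clock reading plus a bound on clock uncertainty. A tiny Python sketch, where uncertainty_bound_seconds is a hypothetical stand-in for the distributed system's clock-uncertainty bound:

    import time

    def tentative_commit_timestamp(uncertainty_bound_seconds):
        """Tentative timestamp = current time + bound on the uncertainty of the clocks."""
        return time.time() + uncertainty_bound_seconds

    # Usage: initiate the commit with this value before the lock-hold interval begins.
    ts = tentative_commit_timestamp(0.007)   # e.g. a 7 ms uncertainty bound (illustrative)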

US Pat. No. 10,691,483

CONFIGURABLE VIRTUAL MACHINES

Amazon Technologies, Inc....

1. A method comprising:receiving, by a computer system, a request for pricing of a custom virtual machine configuration, wherein the custom virtual machine configuration is for a virtual machine instance that is to be executed at a remote computing resource, wherein the remote computing resource is remote relative to the computer system;
determining a number of virtual machine instances associated with the request;
determining a number of cores associated with the request;
determining an amount of memory associated with the request;
determining a usage amount of the remote computing resource associated with the request;
determining a first estimated price for the virtual machine instance for a first timeframe using the number of virtual machine instances, the number of cores, the amount of memory, and the usage amount of the remote computing resource;
determining a second estimated price for the virtual machine instance for a second timeframe using the number of virtual machine instances, the number of cores, the amount of memory, and the usage amount of the remote computing resource, wherein the first timeframe and the second timeframe are different;
generating configuration data associated with the custom virtual machine configuration;
receiving a selection of the remote computing resource based on at least one of the first estimated price and the second estimated price;
allocating the selected remote computing resource to the virtual machine instance, wherein the selected remote computing resource configures the virtual machine instance using the configuration data; and
causing the selected remote computing resource to execute the virtual machine instance.
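
The two estimates above are straightforward functions of the requested configuration. A Python sketch with entirely hypothetical unit prices:

    def estimate_price(instances, cores, memory_gb, usage_hours,
                       core_rate, memory_rate, usage_rate):
        """Illustrative price model: per-core, per-GB, and per-hour components, scaled by instance count."""
        per_instance = cores * core_rate + memory_gb * memory_rate + usage_hours * usage_rate
        return instances * per_instance

    # Two timeframes priced from the same custom configuration (rates are made up for illustration).
    monthly = estimate_price(4, 8, 32, 730, core_rate=20.0, memory_rate=2.5, usage_rate=0.05)
    yearly = estimate_price(4, 8, 32, 8760, core_rate=20.0, memory_rate=2.5, usage_rate=0.04)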

US Pat. No. 10,691,482

SYSTEMS, METHODS, AND APPARATUS FOR SECURING VIRTUAL MACHINE CONTROL STRUCTURES

Intel Corporation, Santa...

1. A processor with technology to secure a virtual machine control data structure, the processor comprising:virtualization technology that enables the processor to:
execute host software in root mode; and
execute guest software in non-root mode in a virtual machine (VM), wherein the VM is based at least in part on a virtual machine control data structure (VMCDS) for the VM; and
a root security profile that specifies access restrictions to be imposed when the host software attempts to read the VMCDS in root mode.

US Pat. No. 10,691,481

SYSTEM AND METHOD FOR DETECTION OF UNDERPROVISIONING OF MEMORY IN VIRTUAL MACHINES

NUTANIX, INC., San Jose,...

1. A processor having programmed instructions that, when executed, cause the processor to:store a plurality of virtual memory address-process indicator pair entries in a data structure, each virtual memory address-process indicator pair entry corresponding to a virtual memory address-process pair generating a page fault, each virtual memory address-process indicator pair entry including an indicator of a number of page faults in a time period;
determine a revolving memory size based on a number of virtual memory address-process indicator pair entries and a page size associated with a guest physical memory;
determine provisioning status of the guest physical memory based on the revolving memory size; and
increase the size of the guest physical memory based on the revolving memory size or generate a notification recommending the increase.
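
The provisioning check above is essentially arithmetic over the page-fault bookkeeping. A Python sketch; the ratio threshold used to decide status is hypothetical.

    def revolving_memory_size(num_fault_entries, page_size_bytes):
        """Revolving memory size = number of (address, process) fault entries x page size."""
        return num_fault_entries * page_size_bytes

    def provisioning_status(revolving_bytes, guest_memory_bytes, ratio_threshold=0.1):
        """Hypothetical rule: flag underprovisioning when the revolving set is a large fraction of guest memory."""
        if revolving_bytes / guest_memory_bytes > ratio_threshold:
            return "underprovisioned", guest_memory_bytes + revolving_bytes   # recommended new size
        return "ok", guest_memory_bytes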

US Pat. No. 10,691,480

APPARATUS AND METHOD FOR CONFIGURING AND ENABLING VIRTUAL APPLICATIONS

Telefonaktiebolaget LM Er...

1. A method by a computing device to configure and monitor a virtual application in a cloud environment, comprising:generating instructions for configuring and monitoring the virtual application based on configuration data for the virtual application;
modifying an injection virtual machine (VM) image to include the instructions for configuring and monitoring the virtual application, wherein the injection VM image is a template for instantiating an injection VM that is to configure and monitor the virtual application according to the instructions;
modifying a virtual application VM image to include injection data, wherein the virtual application VM image is a template for instantiating a virtual application VM that implements the virtual application, wherein the injection data includes two or more of: a script, a configuration file, and a public key that corresponds to a private key included in the injection VM image to enable secure communications between the virtual application VM and the injection VM;
modifying a virtual application deployment descriptor for the virtual application to indicate that the injection VM is to be injected into the virtual application; wherein the virtual application deployment descriptor includes a reference to the virtual application VM image; and
causing the virtual application, with the injection VM, to be deployed in the cloud environment using the modified virtual application deployment descriptor.

US Pat. No. 10,691,478

MIGRATING VIRTUAL MACHINE ACROSS DATACENTERS BY TRANSFERRING DATA CHUNKS AND METADATA

FUJITSU LIMITED, Kawasak...

1. An information processing system comprising:a first data center;
a second data center; and
a super metadata server,
the first data center includes
a first virtual machine server that includes a first processor that executes a virtual machine using an image file of a storage device belonging to the virtual machine,
a first data server that includes a second processor and a first storage that has stored therein a plurality of chunks that form the image file, and
a first metadata server that includes a third processor and a second storage that has metadata of the image file stored therein, the metadata including position information representing a position of each chunk in the image file,
the second data center includes
a second virtual machine server that includes a fourth processor that executes the virtual machine using the image file,
a second data server that includes a third storage and a fifth processor, and
a second metadata server that includes a sixth processor and a fourth storage that has the metadata stored therein,
the super metadata server includes
a fifth storage that has stored therein information indicative of an apparatus that has an authority to manage the metadata, and
a seventh processor that transfers the authority from the first metadata server to the second metadata server when the metadata is transmitted to the second metadata server and is stored in the fourth storage,
the second processor transmits, to the second data center, a predetermined chunk of the plurality of chunks that corresponds to predetermined data stored in the storage device,
the fifth processor stores the predetermined chunk in the third storage,
the first processor stops the virtual machine operated under the first processor after the predetermined chunk is transmitted,
the third processor transmits the metadata to the second metadata server after the first processor stops the virtual machine operated under the first processor,
the sixth processor stores the metadata in the fourth storage, and
the fourth processor activates the virtual machine using the image file that includes the predetermined chunk stored in the third storage.

US Pat. No. 10,691,477

VIRTUAL MACHINE LIVE MIGRATION USING INTELLIGENT ORDER OF PAGES TO TRANSFER

Red Hat Israel, Ltd., Ra...

1. A method comprising:receiving a request to migrate a virtual machine (VM) managed by a source hypervisor of a source host machine, wherein the request is to migrate the VM to a destination host machine as a live migration;
receiving, from a VM agent of the VM, indications regarding status of memory pages of the VM for transfer during the live migration, wherein the indications comprise a first indication of prioritized memory pages of the VM, a second indication of de-prioritized memory pages of the VM, and a third indication of ignored memory pages of the VM, wherein the ignored memory pages of the third indication comprise freed memory pages resulting from at least one of VM memory ballooning or cache optimization of the VM;
responsive to receiving the request to live migrate the VM and to receiving the indications regarding the status of the memory pages of the VM, transferring, by a processing device of the source host machine, memory pages of the VM that are identified as at least one of read-only or executable in a first iteration of VM memory page transfer of the live migration;
transferring, by the processing device as part of a second iteration of the VM memory page transfer and in view of the received indications, prioritized memory pages of the VM that have not been transferred as part of the first iteration, the prioritized memory pages identified in accordance with the first indication of the received indications; and
transferring, by the processing device as part of a third iteration of the VM memory page transfer and in view of the second indication and the third indication of the received indications, other memory pages of the VM that have not been transferred as part of the first and second iterations and that are not identified in view of the third indication as ignored memory pages of the VM, wherein the other memory pages of the VM comprise de-prioritized memory pages of the VM identified in accordance with the second indication that are transferred last in the third iteration.
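
The three-pass transfer order above can be sketched as a simple partition of the VM's pages. A Python approximation using hypothetical page records with readonly/executable flags and the agent's prioritized, de-prioritized, and ignored sets:

    def plan_transfer_iterations(pages, prioritized, deprioritized, ignored):
        """Return the three live-migration transfer iterations as lists of page ids (illustrative only)."""
        first = [p["id"] for p in pages
                 if (p.get("readonly") or p.get("executable")) and p["id"] not in ignored]
        sent = set(first)
        second = [pid for pid in prioritized if pid not in sent and pid not in ignored]
        sent.update(second)
        # Remaining pages, with de-prioritized pages transferred last; ignored pages are skipped entirely.
        remaining = [p["id"] for p in pages
                     if p["id"] not in sent and p["id"] not in ignored and p["id"] not in deprioritized]
        third = remaining + [pid for pid in deprioritized if pid not in sent and pid not in ignored]
        return first, second, third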

US Pat. No. 10,691,476

PROTECTION OF SENSITIVE DATA

McAfee, LLC, Santa Clara...

1. At least one non-transitory machine readable medium comprising one or more instructions that when executed by at least one processor of an electronic device, cause the at least one processor to:receive data via a network;
determine if the data includes sensitive information;
store the data in a secured area of memory if the data includes sensitive information;
monitor, by a security module, access to the data in the secured area of memory, wherein the secured area of memory is at a hypervisor level;
receive a request from an application to access the data in the secured area;
determine if the application is a trusted application; and
allow the request if the application is a trusted application; or
deny the request if the application is not a trusted application; and
wherein the one or more instructions further cause the at least one processor to store the data in a non-secured area of memory if the data does not include sensitive information.

US Pat. No. 10,691,475

SECURITY APPLICATION FOR A GUEST OPERATING SYSTEM IN A VIRTUAL COMPUTING ENVIRONMENT

International Business Ma...

1. A method, comprising:injecting, by a management component of a hypervisor, a guest kernel module storing heartbeat protocol information and a secret key that is shared between the guest kernel module and the management component of the hypervisor into a kernel of a guest operating system during a boot-up of the guest operating system, wherein the injecting establishes an encrypted heartbeat communication channel between the kernel and the management component of the hypervisor while the guest operating system is booting; and
receiving, at the management component of the hypervisor, periodic encrypted heartbeat messages from the injected guest kernel module based on the heartbeat protocol information, wherein each heartbeat message verifies the injected guest kernel module is still operating.

US Pat. No. 10,691,474

TEXT RESOURCES PROCESSING IN AN APPLICATION

International Business Ma...

1. A computer-implemented method comprising:in response to initiation of a plug-in, running an updated application,
wherein information displayed on at least one text resource of a plurality of text resources in an original application of the updated application is not editable, and
wherein a subset of the plurality of text resources includes the at least one text resource;
in response to a first piece of information displayed on a text resource of the at least one text resource being changed to a second piece of information:
obtaining an ID of the text resource of the at least one text resource in the updated application; and
mapping the second piece of information to the ID of the text resource in a file corresponding to the at least one text resource in the updated application, wherein the updated application, when built, deployed and run, comprises a first variable identifier which can be enabled to make the subset of the plurality of text resources in the updated application editable from a user interface (UI) of the updated application, and further comprises a second variable identifier which can be enabled to make an entirety of the plurality of text resources in the updated application editable from the UI of the updated application;
obtaining source code of the original application;
in response to determining a type of the source code as JavaScript, and in response to the information displayed on the at least one text resource in the original application being not editable:
determining an ID of the at least one text resource, and code templates related to the type of the source code which comprise code with editable text resources;
inputting the ID of each text resource of the at least one text resource into the code templates to produce replaced templates; applying the replaced templates to produce updated source code; saving the replaced templates in code of the plug-in; and
building and deploying the updated source code to produce the updated application; and
in response to determining a type of the source code as HTML, and in response to the information displayed on the at least one text resource in the original application being not editable:
determining the ID of the at least one text resource from an HTML page and code templates related to the HTML page which comprise code with editable text resources,
inputting the ID of each text resource of the at least one text resource into the related code templates to produce the replaced templates, and adding the replaced templates to the HTML page to produce the updated application which can be directly run in the web browser.

US Pat. No. 10,691,473

INTELLIGENT AUTOMATED ASSISTANT IN A MESSAGING ENVIRONMENT

Apple Inc., Cupertino, C...

1. A non-transitory computer-readable medium having instructions stored thereon, the instructions, when executed by one or more processors of an electronic device having a display, cause the one or more processors to:display, on the display, a graphical user interface (GUI) having a plurality of previous messages between a user of the electronic device and a digital assistant implemented on the electronic device, the plurality of previous messages presented in a conversational view;
after displaying the plurality of previous messages, detect a user selection of a first previous message of the displayed plurality of previous messages, the first previous message corresponding to a first previous user input received at a first time;
in response to detecting the user selection of the first previous message, retrieve a first previous contextual state of the electronic device at the first time, wherein the first previous contextual state is associated with the first previous message;
receive a current user input at a second time after the first time; and
in response to receiving the current user input:
display a representation of the current user input as a first current message in the GUI, wherein the first current message is associated with a current contextual state of the electronic device at the second time;
cause a determination of a user intent based on the current user input and the retrieved first previous contextual state of the electronic device at the first time;
cause an action to be performed in accordance with the determined user intent, wherein results are obtained by performing the action; and
display a response as a second current message in the GUI, the response containing a representation of the obtained results.

US Pat. No. 10,691,472

USER INTERFACE EXECUTION APPARATUS AND USER INTERFACE DESIGNING APPARATUS

Mitsubishi Electric Corpo...

1. A user interface execution apparatus that executes operation content of a user based on a code generated by a user interface designing apparatus that designs a user interface, said user interface execution apparatus comprising:a processor to execute a program; and
a memory to store the program which, when executed by the processor, performs processes of:
transitioning a state of said user interface execution apparatus based on said code and said operation content of said user;
issuing a prefetch request for data to a data providing unit, said prefetch request being generated by said user interface designing apparatus based on a data obtaining interface included in said code statically defined in association with said state of said user interface execution apparatus;
storing said data obtained from said data providing unit in response to said prefetch request issued in said issuing;
generating said code from an interface definition defined as an interface between said user interface execution apparatus and said data providing unit, and a state transition definition that defines said transitioning of a state, said interface definition and said state transition definition being designed by said user interface designing apparatus; and
selecting, before transitioning said state, data to be prefetched based on a difference between a data obtaining interface to be used in a state before said transitioning and a data obtaining interface to be used in a state after said transitioning.
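
The selection step above amounts to a set difference between the data-obtaining interfaces of the current state and the next state. A small Python sketch with illustrative state and interface names:

    def interfaces_to_prefetch(current_state_interfaces, next_state_interfaces):
        """Prefetch only data whose obtaining interface is needed after the transition but not before."""
        return set(next_state_interfaces) - set(current_state_interfaces)

    # Usage: moving from a list screen to a detail screen (hypothetical interface names).
    current = {"get_item_list"}
    after = {"get_item_list", "get_item_detail", "get_item_reviews"}
    print(interfaces_to_prefetch(current, after))   # {'get_item_detail', 'get_item_reviews'}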

US Pat. No. 10,691,470

PERSONAL COMPUTER SYSTEM WITH REMOTELY-CONFIGURED HARDWARE-ENFORCED USAGE LIMITS

1. A computer system comprising:a main processor;
a main memory coupled to said main processor adapted for storing a main operating system and installed applications, said main processor adapted for executing said installed applications;
a first video graphics adapter circuit coupled to said main processor;
a main system logic coupled to said main processor and said main memory;
a power supply coupled to said main system logic and configured to operate in one of a plurality of states, signaled by said main system logic,
wherein if said power supply is signaled to operate in an on state, said power supply provides power to said main processor and said video graphics adapter circuit, and if said power supply is signaled to operate in an off state, said power supply does not supply power to said main processor and said video graphics adapter circuit;
a plurality of front panel connector circuits coupled to said main system logic;
a power switch input sensing means coupled to said plurality of front panel connector circuits so that upon detection of a power switch closure, said power switch input sensing means adapted to signal said main system logic to signal said power supply to switch states;
a microcontroller coupled to said plurality of front panel connector circuits and said power supply;
a user input means coupled to said microcontroller adapted to indicate the user is requesting to power said main processor to an on state to operate said main operating system;
a memory coupled to said microcontroller which is adapted for executing a program stored in said memory that upon detecting said user input determines, based on a date said user input is detected and a set of limitations values, if such a request is allowed or denied, and if said request is allowed, will cause said power supply to switch to said on state, and when an allowed time has elapsed, will cause said power supply to switch to said off state;
wherein said set of limitations values further comprise a plurality of taper down settings;
wherein said program determines the allowed time, smaller than a previous allowed time corresponding to a previous date, based on said taper down settings and said date of the user's request to power on said main processor;
whereby the allowed time is automatically reduced over time according to a predetermined schedule;
whereby a computer usage time limit system is formed in which computer usage time limits are enforced independently of the main operating system or installed applications.
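
The taper-down behavior above is a schedule that shrinks the allowed time on each successive date. A Python sketch; the starting allowance, per-day step, and floor are hypothetical parameters:

    from datetime import date

    def allowed_minutes(request_date, start_date, initial_minutes, taper_per_day, minimum_minutes=0):
        """Allowed usage time for a given date, reduced by a fixed taper for each elapsed day."""
        days_elapsed = (request_date - start_date).days
        return max(minimum_minutes, initial_minutes - taper_per_day * days_elapsed)

    # Usage: 120 minutes on day 0, tapering by 10 minutes per day (illustrative values).
    print(allowed_minutes(date(2020, 6, 11), date(2020, 6, 1), 120, 10))   # 20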

US Pat. No. 10,691,469

INTEGRATED SYSTEMS AND METHODS PROVIDING SITUATIONAL AWARENESS OF OPERATIONS IN AN ORGANIZATION

Intrepid Networks, LLC, ...

1. A method for covertly terminating execution of second program steps in a second software application running on a hand-held electronic device having a processor, a storage medium and a touch screen through which termination of program execution by the processor is effected with gestures made on or near the touch screen by one or more fingers, the method comprising:providing a first software application containing first program steps stored in the storage medium and executable on the processor by making one or more gestures on or near the screen, the first program steps causing display of one or more screens of graphic or text information concerning subject matter of the first software application, where the first software application looks like a consumer application having functionality different and unrelated to the functionality of the second software application in order to provide inconspicuous operation for covertly terminating the second program steps;
providing access to control termination of the second program steps by initiating execution of the first program steps, this causing display of a screen of the graphic or text information on the touch screen; and
terminating execution of the second program steps by making one or more first additional gestures on or near the touch screen while the graphic or text information is displayed on the touch screen.

US Pat. No. 10,691,467

BOOTING METHOD USING SYSTEM FIRMWARE WITH MULTIPLE EMBEDDED CONTROLLER FIRMWARES

Wistron Corporation, New...

1. A booting method executed by a system of an electronic device, wherein the system of the electronic device comprises a central processing unit (CPU), a platform controller hub (PCH), a nonvolatile storage device connected to the central processing unit and the platform controller hub, and an embedded controller, and the booting method comprises the steps of:providing a booting read-only memory that stores a system firmware having a plurality of embedded controller firmwares, a basic embedded controller firmware, and a header having a second identification information directed to the basic embedded controller firmware, wherein one of the plurality of embedded controller firmwares is a real embedded controller firmware corresponding to the system of the electronic device;
loading a firmware code of the basic embedded controller firmware into a memory of the embedded controller according to the second identification information;
modifying the header so that the header includes a first identification information;
obtaining a start code address of the real embedded controller firmware directed by the first identification information of the header of the system firmware; and
loading a firmware code of the real embedded controller firmware into the memory of the embedded controller according to the start code address.

US Pat. No. 10,691,466

BOOTING A COMPUTING SYSTEM USING EMBEDDED NON-VOLATILE MEMORY

Intel Corporation, Santa...

1. A computing system comprising:one or more memory modules; and
a processor semiconductor chip, the processor semiconductor chip comprising one or more processing cores; and
a three dimensional cross-point memory coupled to the one or more processing cores, the three dimensional cross-point memory storing BIOS instructions that when executed by the one or more processing cores manage a boot process for the computing system, wherein the BIOS includes instructions to compare information describing the one or more memory modules with information about the one or more memory modules as stored in the three dimensional cross-point memory, and instructions to train the one or more memory modules when the information describing the one or more memory modules does not match the information about the one or more memory modules as stored in the three dimensional cross-point memory, wherein instructions to train the one or more memory modules comprise instructions to adjust clocks and data edge and reference voltage levels for reading by sweeping across all possible address ranges, while writing and reading a linear-feedback shift register (LFSR) pattern.

US Pat. No. 10,691,465

METHOD FOR SYNCHRONIZATION OF SYSTEM MANAGEMENT DATA

Mitac Computing Technolog...

1. A method for synchronization of system management data to be implemented by a processor included in a computer device, the computer device further including a first storage unit that is electrically connected to the processor and that stores a system booting program used for a booting process of the computer device, a baseboard management controller that is electrically connected to the processor, and a second storage unit that is electrically connected to the baseboard management controller and that stores system management data including a plurality of sequential packets, some of the sequential packets carrying system information which is related to the system booting program, the method comprising steps of:a) in response to execution of the system booting program, generating a request for the system management data, and transmitting the request to the baseboard management controller so as to enable the baseboard management controller to transmit, based on the request, the system management data stored in the second storage unit to the processor;
b) receiving the system management data from the baseboard management controller, and determining whether the system management data is complete; and
c) when it is determined that the system management data is complete, storing at least one of the sequential packets of the system management data in the first storage unit, and proceeding with execution of the system booting program.

US Pat. No. 10,691,464

SYSTEMS AND METHODS FOR VIRTUALLY PARTITIONING A MACHINE PERCEPTION AND DENSE ALGORITHM INTEGRATED CIRCUIT

quadric.io, Burlingame, ...

1. A method for virtually partitioning an integrated circuit, the method comprising:identifying one or more dimensional attributes of a target input dataset;
selecting a data partitioning scheme from a plurality of distinct data partitioning schemes for the target input dataset based on:
(i) the one or more dimensional attributes of the target input dataset, and
(ii) one or more architectural attributes of the integrated circuit;
disintegrating the target input dataset into a plurality of distinct subsets of data based on the selected data partitioning scheme;
identifying a virtual processing core partitioning scheme from a plurality of distinct processing core partitioning schemes for an architecture of the integrated circuit based on the disintegration of the target input dataset;
virtually partitioning the architecture of the integrated circuit into a plurality of distinct partitions of processing cores of the integrated circuit; and
mapping each of the plurality of distinct subsets of data to one of the plurality of distinct partitions of processing cores of the integrated circuit.

US Pat. No. 10,691,463

SYSTEM AND METHOD FOR VARIABLE LANE ARCHITECTURE

Futurewei Technologies, I...

1. A processing system comprising:a plurality of vector instruction pipelines comprising parallel processing lanes, the plurality of vector instruction pipelines operating asynchronously with respect to one another; and
a global program controller unit (GPCU) outputting a task comprising instructions, the GPCU configured to:
provide individual instructions to one or more vector instruction pipelines of the plurality of vector instruction pipelines;
receive and count beats from each vector instruction pipeline of the plurality of vector instruction pipelines to generate a plurality of pipeline beat counts, with a beat being generated by a vector instruction pipeline upon completion of an instruction; and
synchronize execution by generating a barrier and moderating an instruction flow from the GPCU to the plurality of vector instruction pipelines when the plurality of pipeline beat counts indicate a lack of synchronization.
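
The synchronization rule above can be sketched as comparing per-pipeline beat counts and stalling instruction flow when they diverge. A Python sketch with hypothetical names and a simple skew bound standing in for the claimed barrier condition:

    class GlobalProgramController:
        """Sketch of beat counting and barrier insertion (not the claimed hardware design)."""

        def __init__(self, num_lanes, max_skew):
            self.beats = [0] * num_lanes   # one beat count per vector instruction pipeline
            self.max_skew = max_skew

        def record_beat(self, lane):
            self.beats[lane] += 1          # a beat is generated when a lane completes an instruction

        def may_issue(self):
            # Moderate instruction flow (insert a barrier) when lanes drift too far apart.
            return max(self.beats) - min(self.beats) <= self.max_skew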

US Pat. No. 10,691,462

COMPACT LINKED-LIST-BASED MULTI-THREADED INSTRUCTION GRADUATION BUFFER

ARM Finance Overseas Limi...

1. A processor, comprising:a results buffer having a plurality of entries, each buffer entry to store a result of an executed instruction prior to the result being written to a register file;
a results buffer allocator to allocate a first results buffer identification value to a first decoded instruction of a program thread and to allocate a second results buffer identification value to a second decoded instruction of the program thread, wherein each of the first and second results buffer identification values identifies one of the plurality of entries of the results buffer to which a result of the respective first and second instructions is written;
a graduation buffer coupled to the results buffer and the results buffer allocator, the graduation buffer having a plurality of entries to store results buffer identification values including the first and second results buffer identification values as part of a linked-list data structure for the program thread; and
a graduation controller comprising:
a thread-tail ID unit associated with the program thread, the thread-tail ID unit coupled to the results buffer allocator and the graduation buffer, the thread-tail ID unit to store the first and second results buffer identification values of the program thread,
a thread-head ID unit associated with the program thread, the thread-head ID unit coupled to the graduation buffer to store the results buffer identification values stored at the linked-list data structure; and
the graduation controller to:
add the first and second results buffer identification values stored at the thread-tail ID unit to the linked-list data structure for the program thread,
add the first and second results buffer identification values stored at the linked-list data structure to the thread-head ID unit over one or more clock cycles, and
identify from the thread-head ID unit two instructions of the program thread for graduation during an instruction graduation cycle.

US Pat. No. 10,691,461

DATA PROCESSING

ARM Limited, Cambridge (...

1. Data processing circuitry comprising:fetch circuitry to fetch blocks, containing instructions for execution, defined by a fetch queue, the blocks having the same length; and
prediction circuitry to predict one or more next blocks to be fetched and to add the predicted next blocks to the fetch queue;
the prediction circuitry comprising:
branch prediction circuitry to detect a predicted branch destination for a branch instruction in a current block, the predicted branch destination representing either a branch target for a branch predicted to be taken or a next instruction after the branch instruction, for a branch predicted not to be taken; and
sequence prediction circuitry to detect sequence data, associated with the predicted branch destination, identifying a next block following the predicted branch destination in the program flow order having a next instance of a branch instruction, to determine based on the sequence data how many intervening blocks occur between the current block and the identified next block, to add to the fetch queue the identified next block and any intervening blocks between the current block and the identified next block, and to initiate branch prediction in respect of the predicted next instance of a branch instruction.

US Pat. No. 10,691,460

POINTER ASSOCIATED BRANCH LINE JUMPS FOR ACCELERATED LINE JUMPS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:providing, by a processor, at least one line entry address tag in each line of a branch predictor;
indexing, by the processor, into the branch predictor with a current line address to predict a taken branch's target address and a next line address, wherein the at least one line entry address tag is utilized when indexing into the branch predictor with a current line address to predict a next line address when the at least one line entry address tag matches the current line address;
re-indexing, by the processor, into the branch predictor with one of a predicted next line address or a sequential next line address when the at least one line entry address tag does not match the current line address;
using, by the processor, branch prediction content compared against a search address to predict a direction and targets of branches and determining when a new line address is generated;
re-indexing, by the processor, into the branch predictor with a corrected next line address when it is determined that one of the predicted next line address or the sequential next line address differs from the new line address; and
writing content from the branch predictor into a data entry queue, and removing an oldest entry from the data entry queue when a new line address is generated,
wherein each line of the branch predictor further comprises a confidence counter, wherein the confidence counter is adjusted based on whether the predicted next line address matches the new line address from a branch prediction search process, and wherein a line exit prediction is performed when the confidence counter is above a threshold value.

US Pat. No. 10,691,459

CONVERTING MULTIPLE INSTRUCTIONS INTO A SINGLE COMBINED INSTRUCTION WITH AN EXTENSION OPCODE

International Business Ma...

1. A method of converting program instructions for two-stage processors, the method comprising:receiving, by a preprocessing unit, a group of program instructions;
determining, by the preprocessing unit, that at least two of the group of program instructions can be converted into a single combined instruction;
converting the at least two program instructions into the single combined instruction comprising an extension opcode, wherein the extension opcode indicates, to an execution unit, a format of the single combined instruction, wherein the format indicates which respective bits of the single combined instruction correspond respectively to: an operand, a memory location, and a program instruction of the at least two program instructions; and
sending, by the preprocessing unit, the single combined instruction to the execution unit.

US Pat. No. 10,691,458

METHOD AND APPARATUS TO PROCESS KECCAK SECURE HASHING ALGORITHM

Intel Corporation, Santa...

1. A processor, comprising:a plurality of registers;
an instruction decoder to decode an instruction, the instruction to indicate to execution circuitry to perform a KECCAK theta phase; and
execution circuitry coupled to the instruction decoder to execute the decoded instruction to perform the KECCAK theta phase to:
perform a θ function of a KECCAK algorithm on subcubes stored in the registers in parallel, and
perform a first portion of a ρ function of the KECCAK algorithm on the subcubes in parallel.
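
For orientation, a plain software rendering of the KECCAK θ step on a 5×5 array of 64-bit lanes; the claim performs the same computation on subcubes held in registers, in parallel:

    def keccak_theta(state):
        """state: 5x5 nested list of 64-bit lane values; returns the state after θ."""
        mask = (1 << 64) - 1
        # Column parities.
        c = [state[x][0] ^ state[x][1] ^ state[x][2] ^ state[x][3] ^ state[x][4]
             for x in range(5)]
        # Combine each column's neighbours (with a 1-bit rotation of the right one).
        d = [c[(x - 1) % 5] ^ (((c[(x + 1) % 5] << 1) | (c[(x + 1) % 5] >> 63)) & mask)
             for x in range(5)]
        return [[state[x][y] ^ d[x] for y in range(5)] for x in range(5)]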

US Pat. No. 10,691,457

REGISTER ALLOCATION USING PHYSICAL REGISTER FILE BYPASS

Apple Inc., Cupertino, C...

1. An apparatus, comprising:a plurality of execution units;
a physical register file including a plurality of physical registers with respective address values;
an instruction buffer configured to receive a group of instructions to be performed by the plurality of execution units; and
a scheduling circuit configured to:
determine a number of instructions of the group of instructions that consume a result of a particular instruction;
in response to a determination that a plurality of instructions in the group of instructions consume the result, associate a respective address of a particular physical register of the plurality of physical registers to store the result of the particular instruction; and
in response to a determination that a single instruction of the group of instructions consumes the result, associate a tag address to the particular instruction and the consuming instruction, wherein the associated tag address indicates that the result of the particular instruction is not to be written to the physical register file.

US Pat. No. 10,691,456

VECTOR STORE INSTRUCTION HAVING INSTRUCTION-SPECIFIED BYTE COUNT TO BE STORED SUPPORTING BIG AND LITTLE ENDIAN PROCESSING

International Business Ma...

1. A method for storing vector data in memory with a processor, the method comprising:obtaining, by the processor, a variable-length vector store instruction that comprises (i) an SX field, (ii) an S field, (iii) an RA field specifying a general purpose register address specification for a memory address, (iv) an RB field specifying a length of the general purpose register address specification, and (v) one or more fields identifying the variable-length vector store instruction, wherein at least two ranges of vector registers for a source are available, wherein each of the at least two ranges of vector registers for the source is controlled by a respective one or more bits in a machine state register;
selecting one of the at least two ranges of vector registers for a target, based on the SX field within the variable-length vector store instruction;
selecting a vector register for the target within the selected range of vector registers for the target, based on the S field within the variable-length vector store instruction;
determining whether data should be stored into memory using big endian byte-ordering or little endian byte-ordering; and
storing data from the vector register into memory by:
iteratively storing data for each of a plurality of bytes in the variable-length vector store instruction into a memory address, wherein each iteration of storing data comprises:
placing a byte of the plurality of bytes in the memory address according to big endian byte-ordering or little endian byte-ordering; and
selecting a next byte of the plurality of bytes in the variable-length vector store instruction to store into a next memory address.

US Pat. No. 10,691,455

POWER SAVING BRANCH MODES IN HARDWARE

Samsung Electronics Co., ...

1. A method, comprising:executing a plurality of threads in a temporal dimension;
executing a plurality of threads in a spatial dimension;
determining a branch target address for each of the plurality of threads in the temporal dimension and the plurality of threads in the spatial dimension;
comparing each of the branch target addresses to determine a minimum branch target address, wherein the minimum branch target address is a minimum value among branch target addresses of each of the plurality of threads; and
configuring a compiler to determine an n-bit mode for determining the branch target address for each of the plurality of threads in the temporal dimension and the plurality of threads in the spatial dimension, wherein n is an integer power of 2,
wherein the n-bit mode is one of a quarter-precision, a half-precision, or a full-precision mode.

US Pat. No. 10,691,454

CONFLICT MASK GENERATION

Intel Corporation, Santa...

1. A processor comprising:a source register to store a first vector with each element identifying a memory location in a memory device, wherein the first vector comprises a first element located at a first position in the first vector, the first element storing a first value identifying a first memory location in the memory device, and a second element located at a subsequent position in the first vector, the second element storing a second value identifying a second memory location in the memory device;
a destination register to store a second vector;
a decoder to decode a single instruction multiple data (SIMD) instruction to generate a mask for a scatter operation to avoid lane conflicts during the scatter operation; and
an execution unit coupled to the destination register and the source register, the execution unit to:
store, in the destination register, the second vector with each element containing a range of bits corresponding to comparisons of a current element in the first vector with each of the preceding elements in the first vector, wherein each bit in the range of bits is set when the corresponding comparison results in a conflict;
identify, using the second vector, that the second element is a last element in the first vector with the same value as the first element; and
generate a third vector with each element identifying a mask bit, wherein the mask bit is set when 1) the corresponding element in the first vector is not in conflict with other elements in the first vector or 2) the corresponding element is in conflict with one or more other elements in the first vector and is a last element in a sequential order of the first vector that conflicts with the one or more other elements, wherein a position of each element in the third vector maps to a same position of the corresponding element in the first vector, wherein the third vector is the mask for the scatter operation.
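
A scalar sketch of the mask in the final limitation, with plain Python lists standing in for the vector registers: a lane's mask bit is set when its index either has no conflict or the lane is the last occurrence, in sequential order, of that index value.

    def scatter_conflict_mask(indices):
        """Return one mask bit per lane: 1 when the lane may safely take part in
        the scatter (no conflict, or last occurrence of its index value)."""
        mask = []
        for position, value in enumerate(indices):
            is_last_occurrence = value not in indices[position + 1:]
            mask.append(1 if is_last_occurrence else 0)
        return mask

    # Lanes 1 and 3 both target location 7, so only lane 3 keeps its mask bit.
    print(scatter_conflict_mask([2, 7, 5, 7]))  # [1, 0, 1, 1]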

US Pat. No. 10,691,453

VECTOR LOAD WITH INSTRUCTION-SPECIFIED BYTE COUNT LESS THAN A VECTOR SIZE FOR BIG AND LITTLE ENDIAN PROCESSING

International Business Ma...

1. A method for loading a vector with a processor, the method comprising:obtaining, by the processor, a variable-length vector load instruction that comprises (i) a TX field, (ii) a T field, (iii) an RA field specifying a general purpose register address specification for a memory address, (iv) an RB field specifying a length of the general purpose register address specification, and (v) one or more fields identifying the variable-length vector load instruction, wherein at least two ranges of vector registers for a target are available, wherein each of the at least two ranges of vector registers for the target is controlled by a respective one or more bits in a machine state register;
selecting one of the at least two ranges of vector registers for the target, based on the TX field within the variable-length vector load instruction;
selecting a vector register for the target within the selected range of vector registers for the target, based on the T field within the variable-length vector load instruction;
determining whether data should be loaded into the vector register using big endian byte-ordering or little endian byte-ordering;
loading data from memory into the vector register by
iteratively fetching load data for each of a plurality of addresses in the variable-length vector load instruction, wherein each iteration of a load data fetch comprises:
fetching load data for an address in the variable-length vector load instruction;
placing the fetched load data in the vector register according to big endian byte-ordering or little endian byte-ordering; and
proceeding to access a next address in the variable-length vector load instruction;
determining a length of the fetched load data for all of the plurality of addresses is less than a length of the vector register; and
iteratively setting one or more residue bytes in the vector register to a pad value, wherein each iteration of setting the one or more residue bytes comprises:
setting a first residue byte of the one or more residue bytes to the pad value according to big endian byte-ordering or little endian byte-ordering; and
proceeding to a next residue byte of the one or more residue bytes.
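
A byte-level sketch of the load-and-pad behaviour, with memory modelled as a Python bytes object; the register length and pad value are illustrative assumptions:

    def variable_length_vector_load(memory: bytes, address: int, byte_count: int,
                                    register_len: int = 16, big_endian: bool = True,
                                    pad_value: int = 0x00):
        """Fetch byte_count bytes starting at address, place them in a
        register_len-byte vector register honouring the byte order, and set the
        residue bytes to the pad value."""
        register = [pad_value] * register_len
        for i, byte in enumerate(memory[address:address + byte_count]):
            # Big endian fills from the most significant end, little endian from the other.
            register[i if big_endian else register_len - 1 - i] = byte
        return register

    print(variable_length_vector_load(b"\x11\x22\x33\x44", 0, 3, register_len=8))
    # [17, 34, 51, 0, 0, 0, 0, 0]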

US Pat. No. 10,691,452

METHOD AND APPARATUS FOR PERFORMING A VECTOR BIT REVERSAL AND CROSSING

Intel Corporation, Santa...

1. A processor comprising:a decoder to decode a vector bit reversal and crossing instruction to generate a decoded vector bit reversal instruction, the vector bit reversal crossing instruction to specify a first plurality of source bit groups stored in a first source vector register, a second plurality of source bit groups stored in a second source vector register, and an immediate to specify a size of the first and second plurality of source bit groups; and
execution circuitry to execute the decoded vector bit reversal instruction to reverse positions of contiguous bit groups within a first source register to generate a set of reversed bit groups, interleave the set of reversed bit groups with the second plurality of bit groups to generate a set of reversed and interleaved bit groups, and store the reversed and interleaved bit groups in a destination vector register.
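
A bit-group-level sketch of the reverse-and-interleave operation, treating each source register as a list of equally sized bit groups (the group size would be taken from the immediate):

    def vector_bit_reversal_and_crossing(src1_groups, src2_groups):
        """Reverse the positions of the bit groups from the first source, then
        interleave them with the bit groups from the second source."""
        reversed_groups = list(reversed(src1_groups))
        interleaved = []
        for a, b in zip(reversed_groups, src2_groups):
            interleaved.extend([a, b])
        return interleaved  # contents written to the destination vector register

    print(vector_bit_reversal_and_crossing(["g0", "g1", "g2", "g3"],
                                           ["h0", "h1", "h2", "h3"]))
    # ['g3', 'h0', 'g2', 'h1', 'g1', 'h2', 'g0', 'h3']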

US Pat. No. 10,691,451

PROCESSOR INSTRUCTIONS TO ACCELERATE FEC ENCODING AND DECODING

COHERENT LOGIX, INCORPORA...

1. A processing element, comprising:a plurality of pipelined operational stages, each operational stage configurable to perform a plurality of data-processing operations;
wherein the processing element is configured to:
at a first time, in response to receiving an M-width approximated min-sum (MAMINSUM) instruction associated with a forward error correction (FEC) encoded signal, wherein the FEC encoded signal comprises a plurality of input values:
configure a first operational stage to determine an absolute value and a sign function of a first input value of the plurality of input values, and determine an absolute value and a sign function of a second input value of the plurality of input values;
configure a second operational stage to determine a minimum of the absolute value of the first input value and the absolute value of the second input value, and determine a final sign function comprising a product of the sign function of the first input value and the sign function of the second input value; and
configure a third operational stage to apply the final sign function to the minimum of the absolute value of the first input value and the absolute value of the second input value; and
iteratively perform the first, second, and third operational stages for the plurality of input values to generate a decoded signal corresponding to the FEC encoded signal; and
at a second time, in response to receiving a different instruction, reconfigure at least one of the first, second, or third operational stages to perform a different function using a different plurality of input values.
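
The three configured stages correspond to the familiar two-input min-sum kernel used in LDPC-style FEC decoding; a scalar sketch of one evaluation:

    def sign(value: float) -> float:
        return -1.0 if value < 0 else 1.0

    def min_sum_pair(a: float, b: float) -> float:
        """Stage 1: magnitudes and signs; stage 2: minimum magnitude and sign
        product; stage 3: apply the combined sign to the minimum magnitude."""
        magnitude_a, sign_a = abs(a), sign(a)
        magnitude_b, sign_b = abs(b), sign(b)
        minimum = min(magnitude_a, magnitude_b)
        final_sign = sign_a * sign_b
        return final_sign * minimum

    print(min_sum_pair(-3.5, 2.0))  # -2.0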

US Pat. No. 10,691,450

SYSTEM AND METHOD FOR MANAGING END TO END AGILE DELIVERY IN SELF OPTIMIZED INTEGRATED PLATFORM

Tata Consultancy Services...

1. A method for managing a program in an agile delivery environment, the method comprising a processor implemented steps of:providing a set of visions corresponding to the program (202);
deriving a set of goals aligned to the set of visions, wherein each of the set of goals have a key result metric (204), wherein the key result metric is a measure of progress of each of the set of goals;
deriving a set of requirements from a plurality of sources to achieve the set of goals, wherein the set of requirements is maintained in a product backlog based on a priority (206);
deciding a sprint duration to be used during a release of the program (208);
providing a list of a plurality of teams participating in the release of the program, wherein each of the plurality of teams has a team velocity (210), wherein the team velocity is indicative of a speed of the team in completing a number of story points in the agile delivery environment, and wherein the team velocity is dynamic and keeps on updating with every sprint;
performing profiling of the plurality of teams, the product backlog and the sprint duration (212);
performing a set of machine learning methods based on the profiling for generating a set of recommendations for matching the plurality of teams with the set of requirements for optimal release of the program (214), wherein generating the set of recommendations using the set of machine learning methods comprises comparing planned story points with a total number of story points and performing the following steps based on the comparison (310):
adding one or more requirements to the first set of requirements in a release backlog if the planned story points are less than the total number of story points (312); and
modifying at least one of the following to enable an optimal release of the program in the agile delivery environment if the planned story points are more than the total number of story points (314):
reducing one or more requirements of the first set of requirements from the release backlog (316);
adding one or more teams to the list of teams who would participate in the release (318); and
adding one or more sprints to the number of sprints required for the release if an end date of the program is not fixed (320);
delivering a minimum viable product corresponding to the program based on the set of recommendations at the end of every sprint (216); and
managing the release of the program through automated multiple deployment pipelines by defining various stages that the program would go through in a single click start (218).

US Pat. No. 10,691,449

INTELLIGENT AUTOMATIC MERGING OF SOURCE CONTROL QUEUE ITEMS

Microsoft Technology Lice...

1. A computer-implemented method comprising:receiving, in a build queue associated with a code repository, a first build request for a build of a first code revision and a second build request for a build of a second code revision;
determining a risk factor for at least one of the first build request or the second build request;
generating a request set comprising one or more of a copy of the first build request and a copy of the second build request based on the risk factor; and
inserting the request set into the build queue as a third build request to pend ahead of the first build request and the second build request.
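
A toy sketch of the queue manipulation, with an assumed numeric risk factor and threshold deciding whether a combined request is inserted to pend ahead of the two originals:

    from collections import deque

    def maybe_merge_build_requests(build_queue: deque, first: dict, second: dict,
                                   risk_factor: float, risk_threshold: float = 0.3):
        """If the assessed risk is low enough, insert a third request containing
        copies of both builds at the front of the queue."""
        if risk_factor <= risk_threshold:
            combined = {"merged_from": [dict(first), dict(second)]}
            build_queue.appendleft(combined)  # pends ahead of first and second
        return build_queue

    queue = deque([{"id": "revision-A"}, {"id": "revision-B"}])
    maybe_merge_build_requests(queue, queue[0], queue[1], risk_factor=0.1)
    print([item.get("id", "merged") for item in queue])
    # ['merged', 'revision-A', 'revision-B']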

US Pat. No. 10,691,448

METHOD AND APPARATUS TO EXECUTE BIOS FIRMWARE BEFORE COMMITTING TO FLASH MEMORY

Dell Products, L.P., Rou...

1. A method comprising:receiving a basic input/output system (BIOS) update executable at an information handling system (IHS), the executable including a first payload containing a BIOS image;
storing the BIOS image at system memory included at the IHS; and
providing a boot mode variable stored at a primary BIOS flash memory device, the boot mode variable having states including:
a first state identifying that the next boot at the IHS is to execute a BIOS image stored at the primary BIOS flash memory device;
a second state identifying that the next boot of the IHS is to execute a BIOS image stored at system memory; and
a third state identifying that a BIOS image stored at a hard drive should be stored at the primary BIOS flash memory device.

US Pat. No. 10,691,447

WRITING SYSTEM SOFTWARE ON AN ELECTRONIC DEVICE

BlackBerry Limited, Wate...

1. A method, comprising:receiving, by an electronic device and from a booting device, an instruction to write system software on the electronic device that is different than the booting device, wherein the system software includes a version of a high level operating system (HLOS) of the electronic device;
in response to the instruction, invoking a boot loader that is stored on the electronic device prior to the electronic device receiving the instruction to write system software on the electronic device;
sending, by the boot loader on the electronic device, a request for password to the booting device;
in response to the request for password, receiving, by the boot loader on the electronic device, a password from the booting device;
determining, by the boot loader, whether the received password matches a high level operating system (HLOS) password stored on the electronic device, wherein the HLOS password is stored on a Replay Protected Memory Block (RPMB) of the electronic device; and
performing, by the boot loader, one of the following operations:
when the received password matches the HLOS password, writing the system software on the electronic device; or
when the received password does not match the HLOS password, halting the writing of the system software.

US Pat. No. 10,691,446

SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR COORDINATION AMONG MULTIPLE DEVICES

MAJEN TECH, LLC, Longvie...

1. A system, comprising:a first device including a first Bluetooth interface, a first Wi-Fi interface, a first input device, a first display, at least one first processor, and a first memory storing first instructions and an application;
a second device including a second Bluetooth interface, a second Wi-Fi interface, a second input device, a second display, at least one second processor, and a second memory storing second instructions and the application;
said at least one first processor of the first device configured to execute the first instructions for, based on user input, causing the first device to:
access the application on the first device,
perform an action utilizing the application,
update a state of the application,
cause communication of the updated state of the application with the second device, and
at least one of: shut down the first device or the application, or place the first device in stand by;
said at least one second processor of the second device configured to execute the second instructions for, based on additional user input, causing the second device to:
after the at least one of: the first device or the application is shut down, or the first device is placed in stand by, and utilizing the updated state of the application received from the first device, display, on the second display, an interface including:
a button for accessing the application utilizing the second device by displaying the application on the second display of the second device, and
indicia that indicates that the first device has updated at least one aspect of the application, by visually identifying the first device by displaying a visual identification of the first device on the second display of the second device, the updated state of the application received from the first device being utilized by the indicia being included with the interface on the second display based on the updated state of the application received from the first device, and
in response to a detection of a selection of the button after the at least one of: the first device or the application is shut down, or the first device is placed in stand by, access the application utilizing the second device such that the application is accessed so as to reflect the updated state of the application.

US Pat. No. 10,691,445

ISOLATING A PORTION OF AN ONLINE COMPUTING SERVICE FOR TESTING

Microsoft Technology Lice...

1. An apparatus, comprising:a processor;
a set of memory units; and
a management application operative on the processor, the management application configured to perform functions of:
routing production traffic away from a deployment unit, the deployment unit including a server for running a first endpoint protection service instance for a plurality of endpoints,
generating a second endpoint protection service instance,
establishing a different endpoint on the second endpoint protection service instance,
migrating the deployment unit to the different endpoint,
applying a change to the deployment unit to produce a modified deployment unit,
and
routing at least a portion of the production traffic to the modified deployment unit for testing.

US Pat. No. 10,691,444

LAUNCHING UPDATED FIRMWARE FILES STORED IN A DEDICATED FIRMWARE VOLUME

American Megatrends Inter...

1. A computer-implemented method for updating firmware file system (FFS) files, the method, comprising:identifying, by one or more processors, during boot-up of a computing system, a first FFS file stored in a first firmware volume within a non-volatile memory included in the computing system, and the first FFS file associated with a first identifier;
determining, by the one or more processors, during the boot-up of the computing system, that a second FFS file associated with the first identifier is stored in a second firmware volume within the non-volatile memory in the computing system, wherein the first firmware volume and the second firmware volume are in separate locations of the non-volatile memory in the computing system, and the second FFS file is an updated version of the first FFS file;
determining, by the one or more processors, during the boot-up of the computing system, based on the first identifier, that the second FFS file is not indicated in either a black list or a launch list, wherein the black list indicates whether an updated version of a first particular FFS file is not allowed to be executed when the updated version of the first particular FFS file is detected, and the launch list indicates whether an updated version of a second particular FFS file is allowed to be executed when the updated version of the second particular FFS file is detected;
executing, by the one or more processors, based at least in part on the determining that the second FFS file associated with the first identifier is stored in the second firmware volume, and based at least in part on the determining that the second FFS file is not indicated in either the black list or the launch list, instructions of the second FFS file;
determining that execution of the second FFS file does not result in any errors; and
updating, based at least in part on determining that the execution of the second FFS file does not result in any errors, the launch list to include the second FFS file.
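
A condensed sketch of the gating and learning logic, with the black list and launch list modelled as plain Python sets keyed by FFS file identifier:

    def updated_ffs_may_run(ffs_id: str, black_list: set, launch_list: set) -> bool:
        """Black-listed files are blocked; launch-listed files are allowed; files
        in neither list are also run, as in the claim."""
        return ffs_id not in black_list

    def record_clean_execution(ffs_id: str, had_errors: bool, launch_list: set) -> None:
        """After an updated file executes without errors, add it to the launch
        list so it is explicitly allowed on later boots."""
        if not had_errors:
            launch_list.add(ffs_id)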

US Pat. No. 10,691,443

VIRTUALIZED FILE SERVER BLOCK AWARENESS

Nutanix, Inc., San Jose,...

1. A virtualized file server, comprising:a cluster of virtualized server managers hosted on a computing node cluster and configured to manage storage items maintained on virtual disks, the cluster of virtualized server managers including a first virtualized file server manager configured to manage an input/output (I/O) transaction directed to a first storage item of the storage items and a second virtualized file server manager configured to manage an I/O transaction directed to a second storage item of the storage items; and
a centralized coordination service hosted on the computing node cluster and configured to, in response to detection of a failure of the first virtualized file server manager:
determine whether a first computing node block of the computing node cluster on which the first virtualized file server manager is hosted has failed;
based on the determination and identification of a second computing node block of the computing node cluster on which the second virtualized file server manager is hosted, identify a failover path to the second virtualized file server manager; and
migrate I/O transaction management associated with the first storage item to the second virtualized file server manager.

US Pat. No. 10,691,442

VIRTUALIZED FILE SERVER TIERS

Nutanix, Inc., San Jose,...

1. A system for managing a virtualization environment, the system comprising:a virtualized file server (VFS) comprising a plurality of file server virtual machines (FSVMs), wherein each of the FSVMs is configured to run on one of a plurality of host machines and is further configured to conduct I/O transactions with one or more virtual disks, wherein at least one of the FSVMs is configured to:
in response to a network filesystem request to access a user file, determine an access frequency associated with the user file measured based on one or more accesses of the user file by one or more FSVMs;
in response to a request to store the user file, determine a storage tier from a plurality of storage tiers of a cloud storage service at which the user file is to be stored, wherein the storage tier is determined from the plurality of storage tiers based on the access frequency associated with the user file, wherein each of the plurality of storage tiers is associated with a different tier access time; and
store the user file at the determined storage tier.
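
A compact sketch of the tier decision; the access-frequency cut-offs and tier names are invented placeholders for whatever policy an FSVM would actually apply:

    def choose_storage_tier(accesses_per_day: float) -> str:
        """Map an access frequency onto a cloud storage tier."""
        if accesses_per_day >= 10:
            return "hot"    # shortest tier access time, typically highest cost
        if accesses_per_day >= 1:
            return "warm"
        return "cold"       # longest tier access time, typically lowest cost

    print(choose_storage_tier(0.2))  # cold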

US Pat. No. 10,691,441

VIRTUALIZED FILE SERVER SPLITTING AND MERGING

Nutanix, Inc., San Jose,...

38. A method, comprising:selecting a particular virtualized server manager of a cluster of virtualized server managers of a virtualized file server (VFS) to be removed from the VFS, wherein each virtualized server manager of the cluster of virtualized server managers is configured to manage an input/output operation directed to a respective storage item of storage items of the VFS maintained on virtual disks;
identifying an available virtualized file service manager;
providing a resource of the particular virtualized file service manager to the available virtualized file service manager;
incorporating the available virtualized file service manager into a new VFS of a plurality of new VFSs; and
removing the particular virtualized file service manager from the VFS.

US Pat. No. 10,691,440

ACTION EXECUTION BASED ON MANAGEMENT CONTROLLER ACTION REQUEST

HEWLETT PACKARD ENTERPRIS...

1. A method comprising:receiving an indication of an action request at a management controller processor within a server from a first user without administrator privileges via a management network, wherein the administrator privileges required to execute a complete action are specified in the action request;
receiving a workload request provided by a second user with administrator privileges via a production network, wherein the workload request and the production network are isolated from the management network, and wherein the action request and management network are isolated from the production network;
sending the action request from the management controller processor to a utility program; and
executing, by the utility program, the complete action of the action request wherein the utility program determines a time at which the complete action is executed;
wherein the utility program examines one or more configuration parameters to determine the time at which the complete action is to be executed, and
writing an action in progress indication to a storage accessible by the management controller processor in response to the complete action beginning execution.

US Pat. No. 10,691,439

METHOD AND APPARATUS FOR FACILITATING A SOFTWARE UPDATE PROCESS OVER A NETWORK

ALIBABA GROUP HOLDING LIM...

1. A client device for facilitating an update process of a software program, comprising:a memory that stores a set of instructions; and
one or more processors configured to execute the set of instructions to cause the client device to:
receive and process a user request for accessing a network device;
maintain a connection with the network device;
receive information for updating a software program installed on the client device from the network device;
perform an updating of the software program installed on the client device based on the received information from the network device;
when the updating of the software program installed on the client device is in progress, receive and store one or more user requests for accessing the network device; and
after the updating of the software program installed on the client device is completed, continue to receive and store the one or more user requests for accessing the network device and process the stored one or more user requests for accessing the network device received when the updating of the software program installed on the client device is in progress and after the updating of the software program installed on the client device is completed.

US Pat. No. 10,691,437

APPLICATION DIRECTORY FOR A MULTI-USER COMPUTER SYSTEM ENVIRONMENT

salesforce.com, inc., Sa...

1. A non-transitory, computer-readable storage medium having stored thereon a plurality of instructions that are capable of being executed by a computer system to cause operations comprising:creating, by the computer system based on user input from a first user of the computer system, a metadata package corresponding to an application, wherein the metadata package specifies a set of data associated with the first user within a data store of the computer system to be utilized by the application;
receiving, by the computer system from the first user, an indication that the application may be imported by other users of the computer system;
registering, by the computer system, the application in an application directory of the computer system, wherein the application directory is accessible to one or more users of the computer system and includes a listing of one or more applications available for import by the one or more users;
receiving, by the computer system from a second user of the computer system, a request to import the metadata package; and
in response to the request to import the metadata package, allowing, by the computer system, the second user to utilize the application with a different set of data associated with the second user within the data store of the computer system.

US Pat. No. 10,691,436

SYSTEMS AND METHODS FOR TRACKING SOURCE CODE DEPLOYMENTS

ATLASSIAN PTY LTD, Sydne...

1. A computer-implemented method comprising:receiving, from a client device, a request for previewing promotion of a selected source code deployment to a target environment, the request including an identifier of the selected source code deployment and an identifier of the target environment, wherein the identifier of the target environment defines a type of the target environment for deployment, and wherein types of environment include testing, staging, and production;
identifying a source code revision identifier of the selected source code deployment based on the identifier of the selected code deployment;
identifying a source code revision identifier of the latest source code deployment in the target environment based on the identifier of the target environment;
retrieving a list of undeployed source code revisions between the selected source code deployment and the latest source code deployment in the target environment, the retrieving based on the source code revision identifier of the selected source code deployment and the source code revision identifier of the latest, immediately preceding source code deployment in the target environment; and
forwarding the retrieved list of undeployed source code revisions to the client device for rendering on a display of the client device.
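
A repository-agnostic sketch of the preview computation: with a linear revision history, the undeployed revisions are those after the latest deployment already in the target environment, up to and including the deployment selected for promotion.

    def undeployed_revisions(history, latest_in_target, selected):
        """history: revision identifiers in commit order, oldest first."""
        start = history.index(latest_in_target) + 1
        end = history.index(selected) + 1
        return history[start:end]

    history = ["r1", "r2", "r3", "r4", "r5"]
    print(undeployed_revisions(history, latest_in_target="r2", selected="r5"))
    # ['r3', 'r4', 'r5']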

US Pat. No. 10,691,435

PROCESSOR REGISTER ASSIGNMENT FOR BINARY TRANSLATION

Parallels International G...

1. A method comprising:decoding a current source code fragment compatible with a source instruction set architecture (ISA);
identifying a first source register referenced by the current source code fragment;
determining that the first source register is not referenced by a register mapping table, wherein the register mapping table comprises a plurality of entries, each entry specifying a source register, a target register, and a weight value, wherein the weight value is proportional to a value of a usage path length of the target register;
identifying, among the plurality of mapping table entries, a mapping table entry comprising a highest weight value, wherein the identified mapping table entry specifies a second source register and a second target register;
replacing, in the identified mapping table entry, an identifier of the second source register with an identifier of the first source register; and
translating, using the mapping table entry, the current source code fragment into a target code fragment, wherein the target code fragment is compatible with a target ISA.
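
A sketch of the table manipulation in this claim, with the register mapping table held as a list of dictionaries; how weights are derived from target-register usage path lengths is left as an assumption:

    def map_unreferenced_source_register(mapping_table, source_register):
        """Return the target register to use for a source register that is not
        yet in the table, by taking over the entry with the highest weight."""
        for entry in mapping_table:
            if entry["source"] == source_register:
                return entry["target"]            # already mapped, nothing to do
        victim = max(mapping_table, key=lambda entry: entry["weight"])
        victim["source"] = source_register        # replace the old source register
        return victim["target"]

    table = [{"source": "r1", "target": "x1", "weight": 3},
             {"source": "r2", "target": "x2", "weight": 7}]
    print(map_unreferenced_source_register(table, "r9"))  # x2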

US Pat. No. 10,691,434

SYSTEM AND METHOD FOR CONVERTING A FIRST PROGRAMMING LANGUAGE APPLICATION TO A SECOND PROGRAMMING LANGUAGE APPLICATION

Macrosoft, Inc., Parsipp...

1. A computer-implemented method for converting a First Programming Language Application to a Second Programming Language Application, the method comprising:receiving for the First Programming Language Application a Grammar file, Source Project, and Target Project, the First Programming Language Application having backend databases coupled to graphic user interfaces;
generating tabular data for the Source Project;
generating for the First Programming Language Application a listing of Source Application files and Source Project files;
generating blank code files for the Second Programming Language Application;
iterating through First Programming Language Application files to generate corresponding Second Programming Language Application files, wherein one or more of the First Programming Language Application files contains code-behind;
setting up a Second Programming Language Application target project containing Second Programming Language Application files;
converting First Programming Language Application modules to generate corresponding Second Programming Language Application files;
building a dictionary for the First Programming Language Application by executing a first pass through the First Programming Language Application;
generating Second Programming Language Application files with the dictionary in the form of a plurality of controller functions, classes and report expressions by executing a second pass; and
stitching the Second Programming Language Application files together into the Second Programming Language Application, the Second Programming Language Application having core functionality separated from user interface behavior.

US Pat. No. 10,691,433

SPLIT FRONT END FOR FLEXIBLE BACK END CLUSTER PROCESSING

Databricks Inc., San Fra...

1. A system for code development and execution, comprising:a client interface adapted to:
receive user code to be executed; and
receive an indication of a server that will perform the execution; and
a client processor adapted to:
parse the user code determining one or more data items referred to during the execution, wherein the one or more data items includes a table, a file, a directory, an object store, or any combination thereof;
generate a preliminary plan based on the one or more data items;
provide the server with an inquiry for metadata regarding the one or more data items;
receive the metadata regarding the one or more data items;
determine a logical plan based at least in part on the preliminary plan and the metadata regarding the one or more data items, comprising to:
perform one or more of the following:
A) reorder, using the metadata, the preliminary plan to optimize processing speed of the logical plan;
B) determine, using the metadata, whether a query amounts to an impossible situation; and
in response to a determination that the query amounts to an impossible situation, remove the query from the preliminary plan; and/or
C) determine whether a user has access permissions; and
in response to a determination that the user does not have access permissions, reject the preliminary plan;
provide the logical plan to the server to be executed; and
monitor execution of the logical plan.

US Pat. No. 10,691,432

COMPILATION METHOD

Graphcore Limited, Brist...

1. A computer-implemented method for generating an executable program to run on a processing system comprising one or more chips each comprising a plurality of tiles, each tile comprising a respective processing unit and memory; the method comprising:receiving an input graph comprising a plurality of data nodes, a plurality of compute vertices and a plurality of directional edges, each edge representing an output from a data node input to a compute vertex or an output from a compute vertex input to a data node, each data node representing a variable and/or constant, and each compute vertex representing one or more computations to perform on the input to the compute vertex in order to result in the output from that compute vertex;
receiving an initial tile-mapping specifying which of the data nodes and vertices are allocated to be run on which of the tiles;
determining a subgraph of the input graph that meets one or more heuristic rules, the rules comprising:
the subgraph comprises at least one data node,
the subgraph spans no more than a threshold number of tiles in the initial tile-mapping, and
the subgraph comprises at least a minimum number of edges outputting to one or more vertices on one or more others of the tiles;
adapting the initial mapping to migrate the data nodes and any vertices of the determined subgraph to said one or more other tiles; and
compiling the executable program from the graph with the vertices and data nodes configured to run on the tiles specified by the adapted mapping.

US Pat. No. 10,691,431

METHOD FOR GENERATING INTEROPERABILITY RULES AND ELECTRONIC DEVICE

LENOVO (BEIJING) LIMITED,...

1. A method implemented at a smart controller for generating interoperability rules, comprising:pre-storing a plurality of application-scenario-based installation packages at the smart controller, wherein each of the plurality of application-scenario-based installation packages is a set of interoperability rules comprising a plurality of interoperability rules corresponding to one application scenario, wherein each of the plurality of interoperability rules is a control rule for controlling at least one execution device according to states of at least one condition device;
displaying at least one installation package on a display screen of the smart controller for selection by a user, wherein selecting at least one installation package that match a plurality of smart home devices connected to the smart controller among the plurality of application-scenario-based installation packages and displaying the at least one installation package that match the plurality of smart home devices to the user, or displaying each of the plurality of application-scenario-based installation packages to the user;
determining an installation package to be loaded among the at least one installation package displayed on the display screen based on user selection;
selecting all interoperability rules that match the plurality of smart home devices from the installation package to be loaded, wherein all interoperability rules comprised in the installation package to be loaded match the plurality of smart home devices when the installation package to be loaded matches the plurality of smart home devices;
installing the installation package to be loaded on the smart controller to load the interoperability rules that match the plurality of smart home on the smart controller; and
controlling the plurality of smart home devices connected to the smart controller based on interoperability rules loaded on the smart controller.

US Pat. No. 10,691,430

LATENCY SCHEDULING MECHANISM

Intel Corporation, Santa...

1. An apparatus to facilitate instruction scheduling, comprising:one or more processors to execute a latency scheduler to provide low compilation overhead and balance latency hiding with register pressure, including receiving a block of instructions, dividing the block of instructions into a plurality of sub-blocks based on a register pressure bounded by a predetermined threshold by determining a register pressure estimate (RPE) for each program point in a code block and determining local minimum points and local maximum points for each program point in the code block based on the RPE and schedule instructions in each of the plurality of sub-blocks for processing.

US Pat. No. 10,691,429

CONVERTING WHITEBOARD IMAGES TO PERSONALIZED WIREFRAMES

International Business Ma...

1. A method for creating a wireframe model for a user interface, comprising:identifying, by a computer, an image created on a user interface;
capturing said image from a sketch created on a whiteboard;
performing, by the computer, image recognition to identify objects and text within said image;
creating, by the computer, a storyboard based on said step of performing image recognition, wherein said storyboard is converted into a digital widget model;
calibrating, by the computer, said digital widget model by comparing said digital widget model with a guideline, rejecting at least one feature of the digital widget model based on the guideline, and replacing the rejected feature with an available acceptable feature based on the guideline;
delivering, by the computer, a digital widget output to a user experience designer for editing, said digital widget output based on said steps of creating and calibrating;
storing, by the computer, edits made by said user experience designer in an historical record database, wherein said historical record database includes at least one previous edit made by said user experience designer to a previous digital widget output;
wherein said calibrating said digital widget model further includes comparing, by the computer, said digital widget model to said historical record database, and applying the at least one previous edit to said digital widget model; and
finalizing a final wireframe model design.

US Pat. No. 10,691,428

DIGITAL COMPLIANCE PLATFORM

SAP SE, Walldorf (DE)

1. A computing system comprising:a storage device; and
a processor configured to
receive a request from a sending system to submit an electronic document associated with an identified jurisdiction of a plurality of jurisdictions from the sending system to a receiving system, wherein the request includes a plurality of sequential non-generic communications that must satisfy compliance requirements of the identified jurisdiction, and wherein the plurality of communications include communication identifiers,
determine a current status of the submission based on an identifier of the sending system,
dynamically determine whether the request is allowed based on a comparison of the communication identifiers with the current status, and
store the electronic document in the storage device and update the current status of the submission in the storage device when the request is allowed.

US Pat. No. 10,691,427

METHOD AND APPARATUS REUSING LISTCELL IN HYBRID APPLICATION

Alibaba Group Holding Lim...

1. A computer-implemented method, comprising:defining, by a front end layer of a hybrid software application, a prototype of a ListCell;
creating, by a native layer of the hybrid software application, a ListCell template based on a created prototype of the ListCell defined by the front end layer;
obtaining, by the native layer, a ListCell by copying the ListCell template;
obtaining, by the native layer, first ListCell content transmitted from the front end layer;
filling, by the native layer, the ListCell with the obtained first ListCell content;
initiating, by the native layer, display of the ListCell having the first ListCell content; and
reusing, by the native layer, the ListCell, comprising:
in response to a user input, determining that the first ListCell content has been moved out of a display area of the native layer,
in response, removing the first ListCell content from the ListCell,
obtaining second ListCell content transmitted from the front end layer, and
refilling the ListCell with the second ListCell content.

US Pat. No. 10,691,426

BUILDING FLEXIBLE RELATIONSHIPS BETWEEN REUSABLE SOFTWARE COMPONENTS AND DATA OBJECTS

Saudi Arabian Oil Company...

1. A computer-implemented method, comprising:defining, at design-time, an owner data object and a container reference object;
instantiating, at runtime, an instance of the defined owner data object;
instantiating, at runtime an instance of defined relationship construction parameters;
instantiating, at runtime, an instance of the defined container reference object and an instance of a defined data source object using the instantiated relationship construction parameters, wherein the instantiating the defined data source object comprises:
using, by the instance of the defined container reference object at runtime, the relationship construction parameters to locate a data source class from a data source registry; and
instantiating the instance of a defined data source object using the located data source class;
instantiating, at runtime, an instance of a defined target data object by calling a defined interface of the instantiated data source object, wherein the defined interface comprises a list of standard collection interfaces and definitions of additional methods used to locate the instantiated data source object; and
caching, at runtime, the instance of the target data object in the instance of the container reference object.

US Pat. No. 10,691,425

METHOD AND MECHANISM FOR OPTIMAL SCOPE EVALUATION IN SCOPE BASED HIERARCHICAL CONFIGURATION USING EVALUATED SCOPE PROPAGATION TECHNIQUE

EMC IP Holding Company LL...

1. A system, comprising:a processor; and
a memory coupled with the processor, wherein the memory is configured to provide the processor with instructions which when executed cause the processor to:
receive a first local scope definition associated with a first node in a hierarchical application tree and a second local scope definition associated with a second node in the hierarchical application tree, wherein the hierarchical application tree includes application instructions at one or more nodes within the hierarchical application tree;
determine a pruned version of the hierarchical application tree, including by:
propagating the first local scope definition from the first node to the second node which is a child of the first node, including by:
in the event the first local scope definition and the second local scope definition are associated with a same scope type, using the second local scope definition instead of the first local scope definition at the second node, wherein the second local scope definition is the same or narrower compared to the first local scope definition; and
in the event the first local scope definition and the second local scope definition are associated with independent scope types, using both the first local scope definition and the second local scope definition at the second node; and
pruning any nodes from the hierarchical application tree that are not relevant to a particular set of one or more local scope definitions at a given node in order to obtain the pruned version of the hierarchical application tree; and
generate a qualified application using those application instructions at the unpruned nodes in the pruned version of the hierarchical application tree, wherein:
in the event the first local scope definition and the second local scope definition are associated with a same scope type, the qualified application is relevant to the second local scope definition; and
in the event the first local scope definition and the second local scope definition are associated with independent scope types, the qualified application is relevant to the first local scope definition and the second local scope definition.

US Pat. No. 10,691,424

METHOD FOR PROGRAMMING AND TERMINAL DEVICE

FIBERSTORE CO., LIMITED, ...

1. A method comprising:loading, by a user of a terminal, a programming driver to the terminal;
receiving, by a server, programming instructions from the terminal, wherein the programming instructions comprise programming file identification information;
receiving, by an administrator, customized information from the terminal based on a target programming file not being obtained by the server from a programming file database according to the programming file identification information;
receiving, by the server, a customized programming file from the administrator, wherein the customized programming file is based on the customized information;
sending, by the server, the customized programming file to the program editor via the programming driver based on the server receiving application instructions from the terminal;
programming, by the program editor, a device according to the customized programming file; and
executing, by the program editor, a programming operation for the device based on the customized programming file.

US Pat. No. 10,691,423

TESTING SYSTEMS AND METHODS FOR PERFORMING HVAC ZONE AIRFLOW ADJUSTMENTS

Johnson Controls Technolo...

1. A heating, ventilation, and air conditioning (HVAC) system comprising:a HVAC unit configured to control air flow to be supplied to a plurality of zones of a building;
a first control system configured to directly control operation of equipment in the HVAC unit;
a second control system communicatively coupled to the first control system, wherein the second control system is located in a different zone of the plurality of zones as compared to the first control system, wherein the second control system is configured to:
receive a request to adjust the air flow output by the HVAC unit; and
send a command to the first control system based on the request, wherein the command is configured to cause the first control system to adjust the operation of the equipment in the HVAC unit to cause the air flow output by the HVAC unit to be adjusted according to the request.

US Pat. No. 10,691,422

PROGRAMMABLE DEVICE PROVIDING VISUAL FEEDBACK OF BALANCING OF OPENING AND CLOSING STATEMENTS OF PROGRAMMING STRUCTURES AND NESTING LEVEL

TEXAS INSTRUMENTS INCORPO...

1. A method for providing visual feedback of balancing of programming structure hierarchy in a program entered on a programmable device, comprising:receiving at least two control structure opening statements, each having an associated control structure;
assigning a unique color to each of the at least two control structures;
displaying each control structure opening statement in the unique color assigned to the corresponding associated control structure;
receiving at least two control structure closing sequences; and
associating each control structure closing sequence with one of the control structures, respectively;
wherein the control structure opening statements and the control structure closing sequences start at a common starting position on an edge of a display of the programmable device.

US Pat. No. 10,691,421

EMBEDDED DESIGNER FRAMEWORK AND EMBEDDED DESIGNER IMPLEMENTATION

MICROSOFT TECHNOLOGY LICE...

1. An apparatus comprising:a processor; and
a memory storing machine readable instructions that when executed by the processor cause the processor to:
upon receiving an indication of actuation of a second designer launch element that is included in a first designer, launch a second designer inline from the first designer, wherein the first designer and the second designer include a configurable component, wherein the configurable component includes a header, a footer, a command bar, a component pane, or a property pane;
in response to a determination that the second designer launch element is actuated, configure the configurable component for the second designer in a canvas of the first designer;
upon receiving an indication of actuation of a first designer return element that is included in the second designer, return to the first designer inline from the second designer; and
utilize a common portion of the memory for the configurable component.

US Pat. No. 10,691,420

DYNAMIC FUNCTION ARGUMENT COMPLETION

The MathWorks, Inc., Nat...

1. A method, comprising:receiving code,
the receiving the code being performed by one or more devices;
identifying a function included in the code and a first argument value, corresponding to a first argument of the function, included in the code,
the identifying being performed by the one or more devices;
generating a data structure based on the function,
where generating the data structure includes generating a state machine based on the function, the state machine including one or more states associated with one or more values for a second argument of the function, and
the generating being performed by the one or more devices;
eliminating a first state, of the one or more states, based on the first argument value,
the first state being associated with a value that is invalid when inserted into the code as an input value to the second argument of the function subsequent to the first argument value being included in the code as an input value to the first argument of the function,
the eliminating being performed by the one or more devices;
determining one or more valid values, from among the one or more values, for the second argument of the function based on the state machine and eliminating the first state,
the one or more valid values for the second argument of the function being determined based on the first argument value for the first argument of the function,
the one or more valid values for the second argument being insertable into the code as an input value to the second argument associated with the function,
the input value being used during an invocation of the function, and
the determining being performed by the one or more devices; and
providing, via a user interface, the one or more valid values for the second argument for selection to be inserted into the code as the input value to the second argument,
the providing being performed by the one or more devices.
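
A toy version of the elimination step, with the state machine reduced to a table of first-argument values mapped to the second-argument values they invalidate; the function semantics and option names below are invented for illustration:

    # Hypothetical: once the first argument is known, some candidate values for
    # the second argument are no longer valid and are eliminated.
    INVALID_SECOND_ARGUMENTS = {
        "bar": {"LineStyle"},   # e.g. a bar chart offers no line style option
        "line": set(),
    }

    def valid_second_arguments(first_argument_value, candidates):
        """Drop eliminated states and return the values still offered for
        insertion into the code as the second argument."""
        eliminated = INVALID_SECOND_ARGUMENTS.get(first_argument_value, set())
        return [value for value in candidates if value not in eliminated]

    print(valid_second_arguments("bar", ["Color", "LineStyle", "Width"]))
    # ['Color', 'Width']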

US Pat. No. 10,691,419

RECONSTRUCTING A HIGH LEVEL COMPILABLE PROGRAM FROM AN INSTRUCTION TRACE

International Business Ma...

1. A method, in a data processing system comprising at least one processor and at least one memory, wherein the at least one memory comprises instructions which are executed by the at least one processor to specifically configure the at least one processor to implement one or more elements of a conversion engine that operates to perform the method, the method comprising:receiving, by the conversion engine, a trace file for an original program whose execution on computing hardware has been traced;
performing, by the conversion engine, analysis of the trace file to identify a hot function, symbol information corresponding to the hot function, and initialization parameters for the hot function;
generating, by the conversion engine, a trace control flow graph based on the identified hot function and the symbol information corresponding to the hot function;
identifying, by the conversion engine, based on the trace control flow graph, pathways in the original program to the hot function, represented in the trace file;
generating, by the conversion engine, a reconstructed program based on the trace control flow graph, the pathways to the hot function, and the initialization parameters; and
outputting, by the conversion engine, the reconstructed program, wherein the reconstructed program comprises a first portion of the original program corresponding to the hot function, one or more second portions of the original program that are part of the pathways to the hot function, and the initialization parameters for the hot function, and wherein other portions of the original program not corresponding to the hot function, the pathways to the hot function, and the initialization parameters for the hot function, are not included in the reconstructed program.

US Pat. No. 10,691,418

PROCESS MODELING ON SMALL RESOURCE CONSTRAINT DEVICES

SAP SE, Walldorf (DE)

1. A computer-implemented method, comprising:determining, by an integration application design tool, a focus node from among one or more nodes in an integration scenario rendered in a user interface displayed on a device;
determining, by the integration application design tool, successor nodes, predecessor nodes, forward connections between the focus node and the successor nodes, and backward connections between the focus node and the predecessor nodes;
determining, by the integration application design tool, a visible area of the integration scenario based on a neighborhood value that displays the focus node, the successor nodes, the predecessor nodes, the forward connections, and the backward connections,
wherein the neighborhood value is an integer value representing a distance from the focus node to include in the visible area;
displaying, by the integration application design tool, the visible area of the integration scenario in the user interface on the device;
receiving, by the integration application design tool, an interaction primitive in the user interface from the device; and
updating, by the integration application design tool, the visible area based on the interaction primitive,
wherein at least one of the determining, displaying, receiving, and updating are performed by one or more computers.
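
The visible-area computation reads like a bounded graph traversal; a small Python sketch under that assumption (the node and edge structures are hypothetical):

    from collections import deque

    def visible_area(focus, successors, predecessors, neighborhood):
        """Return the nodes within `neighborhood` hops of the focus node, following
        forward connections to successors and backward connections to predecessors."""
        visible = {focus}
        frontier = deque([(focus, 0)])
        while frontier:
            node, dist = frontier.popleft()
            if dist == neighborhood:
                continue
            for nxt in successors.get(node, []) + predecessors.get(node, []):
                if nxt not in visible:
                    visible.add(nxt)
                    frontier.append((nxt, dist + 1))
        return visible

    # Neighborhood value 1: the focus node plus its direct successors and predecessors.
    print(visible_area("map", {"map": ["send"]}, {"map": ["receive"]}, 1))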

US Pat. No. 10,691,417

SYSTEM AND METHOD FOR EXECUTING NATIVE CLIENT CODE IN A STORAGE DEVICE

NGD SYSTEMS, INC., Irvin...

1. A storage device communicatively coupled to a host through a storage interface, the storage device being configured to store host data provided by the host through the storage interface, the storage device comprising:storage media;
a first processing unit; and
a program memory storing instructions that, when executed by the first processing unit, cause the first processing unit to:
instantiate, within the storage device, a device data processing agent and a container, wherein the device data processing agent is connected to the host through a virtual Transmission Control Protocol/Internet Protocol (TCP/IP) tunnel over the storage interface, and wherein the device data processing agent is configured to receive, from the host through the virtual TCP/IP tunnel over the storage interface, a first manifest comprising a first binary comprising first instructions;
extract the first binary, from the first manifest, within the storage device; and
execute the first binary to perform data processing on data stored in the storage device based on the first instructions in the first binary.

US Pat. No. 10,691,416

PERFORMING CONSTANT MODULO ARITHMETIC

Imagination Technologies ...

1. A binary logic circuit for reducing x to a sum of a first m-bit integer α and a second m-bit integer β, for use in determining y=x mod(2^m−1), where x is an n-bit integer, y is an m-bit integer, and n>m, the binary logic circuit comprising fixed function reduction logic configured to:interpret x as a sum of m-bit rows x_i, each row representing m consecutive bits of x such that each bit of x contributes to only one row and all of the bits of x are allocated to a row;
reduce the sum of such m-bit rows x_i in a series of reduction steps so as to generate the sum of the first m-bit integer α and the second m-bit integer β.
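
The reduction works because 2^m ≡ 1 (mod 2^m − 1), so the m-bit rows of x can simply be added and any carry out of the top bit folded back in. A software sketch of that folding (the claimed circuit does this with fixed-function adders, which this does not model):

    def reduce_rows(x: int, m: int, n: int):
        """Split the n-bit x into m-bit rows and fold them down to two m-bit
        integers alpha and beta whose sum is congruent to x mod (2**m - 1)."""
        mask = (1 << m) - 1
        rows = [(x >> shift) & mask for shift in range(0, n, m)]
        while len(rows) > 2:                       # series of reduction steps
            s = rows.pop() + rows.pop()
            rows.append((s & mask) + (s >> m))     # end-around carry keeps the congruence
        alpha, beta = (rows + [0, 0])[:2]
        return alpha, beta

    def mod_mersenne(x: int, m: int, n: int) -> int:
        alpha, beta = reduce_rows(x, m, n)
        y = alpha + beta
        while y >= (1 << m):                       # final fold into m bits
            y = (y & ((1 << m) - 1)) + (y >> m)
        return 0 if y == (1 << m) - 1 else y       # 2**m - 1 is congruent to 0

    assert mod_mersenne(0b1101011101, m=4, n=10) == 0b1101011101 % 15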

US Pat. No. 10,691,415

RANDOM NUMBER GENERATION AND ACQUISITION METHOD AND DEVICE

Alibaba Group Holding Lim...

1. A computer-implemented method, comprising:automatically generating, by a first hardware random number generator of an electronic device, a plurality of random numbers based on a random function, wherein the plurality of random numbers are N different random numbers, N is a positive integer, generating the plurality of random numbers includes generating a random number array including N storage units, each storage unit of the N storage units stores a particular random number of the N random numbers, and each storage unit of the N storage units is associated with a particular identifier;
automatically shuffling, by the electronic device, the plurality of random numbers, wherein automatically shuffling the plurality of random numbers includes:
generating an ith random value by a second hardware random number generator of the electronic device, wherein i is a positive integer from 1 to N and indicates the number of times random numbers being exchanged among the N storage units; and
exchanging a random number in a first storage unit of the N storage units with a random number in a second storage unit of the N storage units based on the ith random value and a predetermined rule;
receiving, by the electronic device, a random number obtaining instruction;
obtaining, by the electronic device, a random number from the plurality of random numbers based on the random number obtaining instruction; and
returning, by the electronic device, the random number.
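
A Python sketch of the generate-and-shuffle flow, standing in for the two hardware random number generators with OS-level entropy; the particular "predetermined rule" that maps the i-th random value to a pair of storage units is an assumption for illustration:

    import secrets

    def generate_random_array(n: int) -> list[int]:
        """First generator: N distinct random numbers, one per storage unit."""
        values: set[int] = set()
        while len(values) < n:
            values.add(secrets.randbits(32))
        return list(values)

    def shuffle_array(units: list[int]) -> list[int]:
        """Second generator: the i-th random value selects two storage units to
        exchange (the mapping below is one possible predetermined rule)."""
        n = len(units)
        for _ in range(n):                     # i = 1..N exchanges
            r = secrets.randbits(32)           # i-th random value
            a, b = r % n, (r // n) % n         # predetermined rule
            units[a], units[b] = units[b], units[a]
        return units

    pool = shuffle_array(generate_random_array(8))
    request_index = secrets.randbelow(len(pool))   # "random number obtaining instruction"
    print(pool[request_index])                     # returned random number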

US Pat. No. 10,691,414

RANDOM CODE GENERATOR AND ASSOCIATED RANDOM CODE GENERATING METHOD

EMEMORY TECHNOLOGY INC., ...

1. A random code generator installed in a semiconductor chip, the random code generator comprising:a PUF cell array comprising m×n PUF cells;
a control circuit connected with the PUF cell array, wherein while an enroll action is performed, the control circuit enrolls the PUF cell array; and
a verification circuit connected with the PUF cell array,
wherein while a verification action is performed, the verification circuit sets a first flag corresponding to a first row of the PUF cell array to a first state if at least one of n PUF cells in the first row of the PUF cell array is a low reliability PUF,
wherein while the verification action is performed, the verification circuit maintains a second flag corresponding to a second row of the PUF cell array in a second state if n PUF cells in the second row of the PUF cell array are normal PUF cells,
wherein while the semiconductor chip is enabled, the control circuit reads states of the n normal PUF cells in the second row of the PUF cell array according to the second state stored in the second flag and generates a random code according to the states of the n normal PUF cells in the second row of the PUF cell array.
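
A behavioural Python model of the per-row flags described above; the reliability test and cell readout are simulated with software randomness, whereas the real device evaluates physical PUF cells:

    import random

    M, N = 4, 8                                  # m x n PUF cell array

    def verify(reliability):
        """Set a row's flag to the first state if any of its n cells is a
        low-reliability PUF; keep the second state if all cells are normal."""
        return ["unreliable" if any(r < 0.9 for r in row) else "normal"
                for row in reliability]

    def generate_code(flags, cell_states):
        """On chip enable, read only rows whose flag is still in the second state."""
        bits = [bit for flag, row in zip(flags, cell_states) if flag == "normal"
                for bit in row]
        return "".join(str(b) for b in bits)

    reliability = [[random.random() for _ in range(N)] for _ in range(M)]
    cell_states = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
    print(generate_code(verify(reliability), cell_states))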

US Pat. No. 10,691,413

BLOCK FLOATING POINT COMPUTATIONS USING REDUCED BIT-WIDTH VECTORS

Microsoft Technology Lice...

1. A system for block floating point computation in a neural network, the system comprising:at least one processor; and
at least one memory comprising computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the at least one processor to:
receive a block floating point number comprising a mantissa portion and an exponent portion;
reduce a bit-width of the block floating point number by decomposing the block floating point number into reduced bit-width block floating point numbers each having an exponent portion and a mantissa portion, and with a bit-width of the mantissa portion of each of the reduced bit-width block-floating point numbers that is smaller than a bit-width of the mantissa portion of the block floating point number;
scale the reduced bit-width block floating point numbers, wherein the scaling includes scaling the exponent portion of the reduced bit-width block floating point numbers based on the mantissa portion within the reduced bit-width block floating point numbers;
perform one or more dot product operations separately on each of the reduced bit-width block floating point numbers to obtain individual results;
sum the individual results to generate a final dot product value; and
use the final dot product value to implement the neural network.
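
Splitting the shared-exponent mantissas and re-scaling the exponent of the high part keeps the dot product exact; a NumPy sketch under the assumption of 8-bit mantissas decomposed into two 4-bit pieces:

    import numpy as np

    def split_bfp(mantissas: np.ndarray, exponent: int, low_bits: int = 4):
        """Decompose one block floating point vector (integer mantissas sharing an
        exponent) into two reduced bit-width vectors: the high mantissa bits with a
        scaled exponent and the low mantissa bits with the original exponent."""
        lo = mantissas & ((1 << low_bits) - 1)
        hi = mantissas >> low_bits
        return (hi, exponent + low_bits), (lo, exponent)

    def bfp_dot(mantissas: np.ndarray, exponent: int, weights: np.ndarray) -> float:
        (hi, e_hi), (lo, e_lo) = split_bfp(mantissas, exponent)
        # separate dot products on the reduced bit-width vectors, then summed
        return float(hi @ weights) * 2.0 ** e_hi + float(lo @ weights) * 2.0 ** e_lo

    m = np.array([200, 17, 93], dtype=np.int64)          # 8-bit mantissas
    w = np.array([3, -1, 2], dtype=np.int64)
    assert bfp_dot(m, -7, w) == float(m @ w) * 2.0 ** -7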

US Pat. No. 10,691,412

PARALLEL SORT ACCELERATOR SHARING FIRST LEVEL PROCESSOR CACHE

INTERNATIONAL BUSINESS MA...

1. A computer processor comprising:a memory unit configured to store key values to be sequentially sorted;
a processor cache configured to obtain tree data from the memory unit indicating the key values;
a hardware merge sort accelerator in signal communication with the memory unit and the processor cache, the merge sort accelerator configured to:
generate a master tournament tree based on the key values;
perform a tournament sort that determines a first winning key value based on the master tournament tree; and
speculate a second winning key value based on the master tournament tree, wherein the speculated second winning key value is a next sequential winning key value of the tournament sort,
wherein a first portion of tournament results is stored in the processor cache while a second portion of the tournament results is excluded from the processor cache.
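
A simplified software model of the two-winner flow, using a binary heap in place of the hardware tournament tree; the split of tournament results between processor cache and accelerator is not modelled:

    import heapq

    def tournament_sort(runs):
        """Yield (winning key, speculated key) pairs while merging sorted runs.
        The speculated key is a guess at the next winner made before the current
        winner's replacement key has been inserted, mirroring how the accelerator
        works one result ahead; it may be superseded once that key arrives."""
        tree = [(run[0], i, 0) for i, run in enumerate(runs) if run]
        heapq.heapify(tree)
        while tree:
            key, run_idx, pos = heapq.heappop(tree)      # first winning key
            speculated = tree[0][0] if tree else None    # speculative second winner
            if pos + 1 < len(runs[run_idx]):
                heapq.heappush(tree, (runs[run_idx][pos + 1], run_idx, pos + 1))
            yield key, speculated

    merged = [k for k, _ in tournament_sort([[1, 4, 9], [2, 3, 8], [5, 7]])]
    assert merged == [1, 2, 3, 4, 5, 7, 8, 9]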

US Pat. No. 10,691,411

FLOATING POINT TO FIXED POINT CONVERSION

Imagination Technologies ...

1. A binary logic circuit for converting a number in floating point format having an exponent E of ew bits, an exponent bias B given by B=2^(ew−1)−1, and a significand comprising a mantissa M of mw bits into a fixed point format with an integer width of iw bits and a fractional width of fw bits, the binary logic circuit comprising:a shifter operable to receive a significand input comprising a contiguous set of the most significant bits of the significand and configured to left-shift the significand input by a number of bits equal to the value represented by k least significant bits of the exponent to generate a shifter output, wherein min{(ew−1), bitwidth(iw−2−sy)} ≤ k ≤ (ew−1) where sy=1 for a signed floating point number and sy=0 for an unsigned floating point number; and
a multiplexer coupled to the shifter and configured to: receive an input comprising a contiguous set of bits of the shifter output; and output the input if the most significant bit of the exponent is equal to one.
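
A behavioural Python reference for what such a conversion computes, not the claimed shifter/multiplexer structure: apply the biased exponent to the significand, then quantise and saturate to iw integer and fw fractional bits (the signed two's-complement output range is an assumption):

    def float_to_fixed(sign: int, E: int, M: int, ew: int, mw: int, iw: int, fw: int) -> int:
        """Reference conversion of a floating point value (sign, exponent E, mantissa M)
        to a signed fixed point integer with iw integer bits and fw fractional bits."""
        B = 2 ** (ew - 1) - 1                                # exponent bias
        if E == 0:                                           # subnormal: no implied leading 1
            value = M * 2.0 ** (1 - B - mw)
        else:
            value = ((1 << mw) | M) * 2.0 ** (E - B - mw)
        if sign:
            value = -value
        fixed = round(value * 2 ** fw)                       # scale onto the fixed point grid
        lo, hi = -(1 << (iw + fw - 1)), (1 << (iw + fw - 1)) - 1
        return max(lo, min(hi, fixed))                       # saturate to the representable range

    # 0.75 in a toy float format (ew=4, mw=3): E = B - 1 = 6, M = 0b100
    assert float_to_fixed(0, 6, 0b100, ew=4, mw=3, iw=4, fw=4) == 12   # 0.75 * 2**4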

US Pat. No. 10,691,410

NEURAL NETWORK COMPUTING

Alibaba Group Holding Lim...

1. A method comprising:receiving a computing instruction for a neural network, the computing instruction for the neural network including a computing rule for the neural network and a connection weight of the neural network;
inputting, for a multiplication operation in the computing rule for the neural network, a source operand corresponding to the multiplication operation to a shift register;
performing a shift operation based on a connection weight corresponding to the multiplication operation.
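
If the connection weights are constrained to powers of two, each multiplication in the computing rule collapses to a shift of the source operand, which is what routing it through a shift register suggests; a Python illustration (the power-of-two weight encoding is an assumption):

    def shift_multiply(operand: int, weight_exp: int) -> int:
        """Multiply by a power-of-two connection weight 2**weight_exp using a shift."""
        return operand << weight_exp if weight_exp >= 0 else operand >> (-weight_exp)

    def neuron(inputs: list[int], weight_exps: list[int]) -> int:
        """One multiply-accumulate step of the computing rule, with shifts for multiplies."""
        return sum(shift_multiply(x, w) for x, w in zip(inputs, weight_exps))

    assert shift_multiply(13, 3) == 13 * 8
    print(neuron([4, 6, 10], [1, 0, -1]))    # 4*2 + 6*1 + 10/2 = 19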

US Pat. No. 10,691,409

PROVIDING A COMMUNICATIONS CHANNEL BETWEEN INSTANCES OF AUTOMATED ASSISTANTS

GOOGLE LLC, Mountain Vie...

1. A system to establish communication channels between networked devices, comprising an automobile-based data processing system comprising at least one processor to:receive, by an interface of the automobile-based data processing system, a first input audio signal from a client device;
determine, by the automobile-based data processing system and based on the first input audio signal, a first action intent request and an identifier of an application associated with the first action intent request;
output, by the interface of the automobile-based data processing system, an output response based on the first action intent request;
receive, by the interface of the automobile-based data processing system, a second input audio signal, the second input audio signal generated in response to the output response;
determine, by the automobile-based data processing system, a digital component and a second action intent request based on the second input audio signal, the second action intent request comprising the identifier of the application associated with the first action intent;
set, by the automobile-based data processing system, a priority of the second action intent request, the priority of the second action intent request based on the client device; and
transmit, by the automobile-based data processing system, the second action intent request and the digital component to the client device to process the second action intent request with the application identified by the identifier of the second action intent request based on the priority of the second action intent request.

US Pat. No. 10,691,407

METHODS AND SYSTEMS FOR ANALYZING SPEECH DURING A CALL AND AUTOMATICALLY MODIFYING, DURING THE CALL, A CALL CENTER REFERRAL INTERFACE

Kyruus, Inc., Boston, MA...

1. A method for analyzing speech generated during a call to generate a potential referral and automatically modifying, during the call, a user interface displaying, to a call center operator generating a referral to a physician for a patient, data associated with the analyzed speech, the method comprising:receiving, during a call by a patient to a call center operator, by an analysis engine executing on a first computing device, from a second computing device executing a speech-to-text converter, data generated during a call by a patient to a call center operator by converting audio data to text data, the call center operator generating a referral to a physician for the patient;
analyzing, by the analysis engine, the received data generated during the call, wherein analyzing further comprises:
analyzing at least one acoustic feature of the patient speech during the call, the at least one acoustic feature identified in the received data, and
identifying at least one keyword within the received data;
identifying, by the analysis engine, patient data stored in an electronic medical record associated with the patient, responsive to the analysis of the identified at least one keyword;
determining, by the analysis engine, at least one requirement of the patient based upon the identified patient data and the analysis of the at least one acoustic feature and of the identified at least one keyword;
generating, during the call by the patient to the call center operator, by the analysis engine, a potential referral to a physician for the patient, the physician having a profile that satisfies the determined at least one requirement, based on the analysis of the at least one acoustic feature and of the identified at least one keyword; and
modifying, by a presentation engine executing on the second computing device, a user interface displayed to the call center operator, during the call by the patient to the call center operator, the modification to the user interface including addition to the user interface of an identification of the potential referral; and
modifying, by the presentation engine, the user interface displayed to the call center operator, during the call by the patient to the call center operator, the modification to the display including addition to the user interface of an identification of feedback to the call center operator regarding an emotional state of the patient, based on the analysis of the acoustic feature and of the identified at least one keyword.

US Pat. No. 10,691,406

AUDIO AND VISUAL REPRESENTATION OF FOCUS MOVEMENTS

MICROSOFT TECHNOLOGY LICE...

1. A system, comprising:at least one processor and a memory;
a graphical user interface (GUI) displayed on a display, the GUI comprising user-selectable components that when selected are deemed to be a focus component;
an audio mapper that generates a continuous audible tone representing a transition from a first focus component to a second focus component, the continuous audible tone including a first tone, a second tone, and one or more intermediate tones, the first tone represents a first note of a music scale, the second tone represents a different note in the music scale, the one or more intermediate tones represent one or more notes in the music scale between the first tone and the second tone, the one or more intermediate tones played without selection of an associated user-selected component, the first tone audibly representing a position of the first focus component in the GUI, the second tone audibly representing a position of the second focus component;
a display mapper that displays a visual representation of a path of the transition from the first focus component to the second focus component in each non-selected component in the path; and
a sound device that plays the continuous audible tone.
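
A small Python sketch of the tone mapping: the two focus positions are mapped to notes of a scale and the notes between them are filled in to form the continuous tone. The position-to-note mapping (GUI tab-order index onto a C major scale) is an assumption for illustration:

    C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]      # MIDI note numbers

    def note_for_position(index: int) -> int:
        """Map a focus component's position (its index in the GUI tab order) to a note."""
        return C_MAJOR[index % len(C_MAJOR)]

    def transition_tones(first_index: int, second_index: int) -> list[int]:
        """First tone, the intermediate notes between the two positions, and the second tone."""
        step = 1 if second_index >= first_index else -1
        return [note_for_position(i) for i in range(first_index, second_index + step, step)]

    print(transition_tones(1, 4))   # [62, 64, 65, 67]: D up to G, intermediate notes included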

US Pat. No. 10,691,405

SOUND CONTROL APPARATUS, SOUND CONTROL METHOD, AND PROGRAM

SONY INTERACTIVE ENTERTAI...

1. A sound control apparatus comprising:a sound output control section configured to control a sound to be output to a user,
wherein the sound is sound from a game program or from a reproduction of recorded content;
a distance specification section configured to specify a distance between the user wearing a head-mounted display and an object existing around the user; and
a changing section configured to change an output mode of the sound output to the user such that a volume level of the sound increases as the distance to the object decreases.

US Pat. No. 10,691,404

TECHNOLOGIES FOR PROTECTING AUDIO DATA WITH TRUSTED I/O

Intel Corporation, Santa...

1. A computing device for protected audio I/O, the computing device comprising:an audio controller that includes an audio codec, wherein the audio controller comprises a host controller of the computing device, and wherein the audio codec includes a hardware audio resource to connect to an audio device;
a session configuration module to (i) request, by a trusted software component of the computing device, an untrusted audio driver of the computing device to establish an audio session with the audio controller, wherein the audio session is associated with the audio codec of the audio controller, and (ii) receive, by the trusted software component, a stream identifier associated with the audio session from the untrusted audio driver;
an endpoint verification module to (i) determine, by the trusted software component, the audio codec associated with the audio session based on a platform manifest of the computing device, and (ii) determine, by the trusted software component, whether the stream identifier received from the untrusted audio driver matches a stream identifier programmed to the audio codec associated with the audio session; and
a topology protection module to (i) send, by the trusted software component, a command to lock controller topology to the audio controller in response to a determination that the stream identifier received from the untrusted audio driver matches the stream identifier programmed to the audio codec associated with the audio session, and (ii) lock, by the audio controller, the audio controller topology in response to sending of the command to lock the controller topology.

US Pat. No. 10,691,403

COMMUNICATION DEVICE AND METHOD FOR AUDIO ENCODED DATA RADIO TRANSMISSION

Intel Corporation, Santa...

1. A communication device, comprising:a receiver configured to receive a signal;
a determination circuit configured to determine an energy level of the signal;
at least one processor configured to generate a mute instruction to mute an audio output based on the energy level of the signal in a predefined audio frequency range meeting a first threshold in a first time period,
wherein the at least one processor is configured to determine whether the energy level of the signal in the predefined audio frequency range meets a second threshold in a second time period after the first time period, and
wherein the at least one processor is configured to revoke the mute instruction based on the second threshold not being met in the second time period, and wherein the at least one processor is configured to maintain the mute instruction based on the second threshold being met in the second time period; and
a demodulation circuit configured to demodulate the signal in the second time period and to generate data packets, wherein the at least one processor is configured to maintain the mute instruction, in addition to the second threshold being met in the second time period, based on a data packet of the generated data packets comprising a predefined bit sequence.
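
A compact Python model of one reading of the combined mute conditions across the two time periods and the packet check; the thresholds and the predefined bit sequence are placeholders:

    PREDEFINED_BITS = b"\xAA\x55"     # placeholder for the predefined bit sequence

    def mute_decision(energy_t1: float, energy_t2: float, packets: list[bytes],
                      thr1: float = 0.8, thr2: float = 0.5) -> bool:
        """Return True if the audio output should stay muted after the second time period."""
        if energy_t1 < thr1:                  # first threshold not met: no mute instruction
            return False
        if energy_t2 < thr2:                  # second threshold not met: revoke the mute
            return False
        # second threshold met: maintain the mute only if a demodulated packet
        # additionally contains the predefined bit sequence
        return any(PREDEFINED_BITS in p for p in packets)

    print(mute_decision(0.9, 0.7, [b"\x00\xAA\x55\x10"]))   # True: mute maintained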

US Pat. No. 10,691,401

DEVICE GROUP IDENTIFICATION

Sonos, Inc., Santa Barba...

1. A tangible non-transitory computer-readable medium having stored thereon instructions executable by one or more processors of a mobile device to cause the mobile device to perform functions comprising:displaying, via a control application of a media playback system on a graphical display of the mobile device, a synchrony group control comprising controls to select playback devices of the media playback system for a synchrony group;
receiving, via the displayed synchrony group control, input data representing a command to create a new synchrony group, the input data comprising input data representing selection of two or more playback devices for a new synchrony group;
in response to receiving the input data representing the command to create the new synchrony group, forming the synchrony group, wherein forming the new synchrony group comprises:
receiving, via the control application, input data indicating a display name for the new synchrony group;
generating a particular group identification for the new synchrony group, wherein the particular group identification is unique among other synchrony groups of the media playback system; and
sending, via a network interface, data representing one or more instructions to the two or more playback devices to form the new synchrony group with the particular group identification;
in response to forming the new synchrony group, updating an interface for the media playback system to indicate the new synchrony group using the particular group identification; and
sending, via the network interface to a cloud server, data representing the new synchrony group for storage by the cloud server, wherein the data comprises the particular group identification.

US Pat. No. 10,691,400

INFORMATION MANAGEMENT SYSTEM AND INFORMATION MANAGEMENT METHOD

YAMAHA CORPORATION, Hama...

1. An information management system comprising:at least one processor; and
an information provider configured to communicate with a terminal device by means of a communication not involving communication in a form of sound waves,
wherein the at least one processor is configured to execute stored instructions to:
acquire a first audio signal representative of a guide voice in a first language;
generate pieces of content information, where each piece of content information represents a spoken content of the guide voice and is a translation of the spoken content of the guide voice in a corresponding one of multiple languages other than the first language;
associate the pieces of content information generated for the guide voice in the multiple languages with a same piece of identification information;
transfer, to a sound emitting system including a speaker, a second audio signal that includes the first audio signal representative of the guide voice and a sound component indicative of the piece of identification information, wherein the sound emitting system is configured to output the second audio signal in a form of sound waves to a plurality of terminal devices that are present near the speaker and are each configured to extract the piece of identification information from the second audio signal; and
the processor is further configured to:
receive, by the information provider, from one of the terminal devices an information request including the piece of identification information extracted from the second audio signal and language information indicative of a second language, wherein the second language is one from among the multiple languages other than the first language, and is specified in the terminal device,
upon receipt of the information request,
select one of the pieces of content information that is associated with the piece of identification information included in the information request and that is also in the second language indicated by the language information in the information request; and
transmit, by the information provider, the selected piece of the content information to the terminal device having transmitted the information request to the information management system, so that the content information is presented to a user of the terminal device in the second language while the guide voice in the first language is being output by the speaker.
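
The selection step amounts to a lookup keyed by the extracted identification information and the requested language; a minimal Python sketch (the store layout and sample entries are assumptions):

    # (identification information, language) -> translated content information
    CONTENT = {
        ("guide-042", "en"): "The next tour starts at 3 pm.",
        ("guide-042", "fr"): "La prochaine visite commence à 15 h.",
    }

    def handle_information_request(identification: str, language: str):
        """Select the piece of content information matching both the id extracted from
        the emitted sound and the second language specified by the terminal device."""
        return CONTENT.get((identification, language))

    print(handle_information_request("guide-042", "fr"))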

US Pat. No. 10,691,399

METHOD OF DISPLAYING MOBILE DEVICE CONTENT AND APPARATUS THEREOF

GM GLOBAL TECHNOLOGY OPER...

1. An apparatus configured to display applications provided by a single mobile device, the apparatus comprising:a plurality of displays;
a first input device configured to receive a first input corresponding to the first display and a second input device configured to receive a second input corresponding to the second display;
a wireless communication device configured to receive first output data corresponding to a first display of the plurality of displays and second output data corresponding to a second display of the plurality of displays from an external device consisting of one mobile device; and
a controller configured to direct the received first output data from the wireless communication device to be displayed on the first display and direct the received second output data from the wireless communication device to be displayed on the second display,
wherein the controller is configured to direct information corresponding to the first input and the second input to the mobile device via the wireless communication device, and
wherein the controller is configured to receive an output data identification, an output data category, and a desired codec from the mobile device via the wireless communication device and to send a list of available codecs corresponding to the received desired codec to the mobile device via the wireless communication device.

US Pat. No. 10,691,398

CONNECTED CLASSROOM

ACCENTURE GLOBAL SERVICES...

1. A system for providing a collaborative teaching experience in a classroom environment having a defined layout, the system comprising:presenter interface circuitry;
participant interface circuitry of a participant media device; and
a controller coupled between the presenter interface circuitry and the participant interface circuitry, the controller configured to operate according to a plurality of presentation modes, each presentation mode defining an operating configuration for coordinated control of one or more input devices and one or more output devices, wherein the one or more input devices includes a plurality of cameras and a plurality of microphones, where the controller is configured to:
receive, from an input mechanism coupled to the participant interface circuitry, a first control input and a second control input, wherein the first control input and the second control input respectively comprise one of a plurality of different interactive control values;
receive a first media stream including audio data captured by a first microphone from the plurality of microphones and video data captured by a first camera from the plurality of cameras;
activate, based on a first interactive control value of the first control input, a second camera with a field of view covering the input mechanism;
control, based on the first interactive control value of the first control input, operation of a second microphone to activate;
receive a second media stream including a second audio data captured by the second microphone and a second video data captured by the second camera;
modify at least one of a resolution or frame rate of the second media stream based on a network attribute of a network connecting the one or more output devices including at least one of a network bandwidth or network latency;
control transmission of the second media stream to a respective output device;
switch between a first presentation mode of the plurality of presentation modes to a second presentation mode of the plurality of presentation modes based on a second interactive control value of the second control input, wherein the second presentation mode pushes media content from the participant media device coupled to the input mechanism to a remote participant media device, wherein the remote participant media device is included in the one or more output devices; and
control a camera pan and a camera zoom of the second camera to adjust to a preprogrammed camera pan setting and a preprogrammed camera zoom setting according to the second presentation mode that captures the participant media device coupled to the input mechanism.

US Pat. No. 10,691,397

MOBILE COMPUTING DEVICE USED TO OPERATE DIFFERENT EXTERNAL DEVICES

1. A system for operating a device on an internet, comprising:the device communicating over the internet to broadcast a connection request including a description of the device,
and to generate menu information that specifies menu icons associated to functions of the device and related to the function provided by the device,
the menu icons providing interactive control of the device by a user; and
a portable computer having a display connected to the internet wirelessly,
the portable computer having a portable computer location device, and the device having a device location device,
for determining the portable computer's location and the device's location,
the portable computer's location and device's location plotted on a map,
for detecting when the portable computer's location is at a predetermined distance to the device on the map,
for at the predetermined distance between the portable computer and the device connecting the portable computer to the device and sending the device's menu to the portable computer,
and to generate the menu responsive to the user's input and allowing interacting with a plurality of the menu icons,
a user input device connected to the portable computer,
for receiving from the user input corresponding to one of the menu icons and sending to the device the input,
for the device to update the menu information in response to the input,
for the device to send to the portable computer the updated menu information,
whereby the portable computer receives the device's menu at the predetermined distance from the device, and the device responds to user input into the device's menu.

US Pat. No. 10,691,395

DISPLAY APPARATUS AND CONTROL METHOD THEREOF

SAMSUNG ELECTRONICS CO., ...

1. A display apparatus comprising:a cabinet;
a plurality of display panels disposed at a first side of the cabinet;
a plurality of indicators disposed at a second side opposite to the first side;
a communicator configured to receive an external signal for detecting positions of the plurality of display panels; and
a controller configured to control an indicator among the plurality of indicators disposed at the second side to be turned on based on the external signal received by the communicator, the indicator corresponding to a display panel from among the plurality of display panels, the display panel being a position detection target.