US Pat. No. 10,394,643

DISTRIBUTED RUN-TIME AUTO-CALCULATION OF MEASUREMENT UNCERTAINTY

National Instruments Corp...

1. A measurement device comprising:
a plurality of hardware modules coupled to a measurement controller; and
a processor coupled to the plurality of hardware modules, wherein the processor is configured to execute program instructions of a driver;
wherein the measurement controller is configured to initiate a measurement by the plurality of hardware modules;
wherein execution of the program instructions causes the processor to request one or more error specifications from each of the plurality of hardware modules;
wherein each of the plurality of hardware modules is configured to:
determine the respective one or more error specifications based on a current configuration of the respective hardware module; and
transmit the determined error specifications to the processor; and
wherein execution of the program instructions further causes the processor to determine an uncertainty in the measurement based on the error specifications.
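The combination step in this claim, determining an uncertainty from per-module error specifications, can be sketched in Python. This is a minimal illustration under stated assumptions: the class names, the configuration-to-spec table, and the root-sum-square combination rule are all invented here; the claim only requires that the driver request the specs and determine an uncertainty from them.

```python
import math

class HardwareModule:
    """Illustrative module: derives its error specs from its current configuration."""
    def __init__(self, name, config, spec_table):
        self.name = name
        self.config = config          # e.g. {"range": "10V"}
        self.spec_table = spec_table  # maps a range setting to error terms (same units)

    def error_specifications(self):
        # Specs depend on the module's *current* configuration, per the claim.
        return self.spec_table[self.config["range"]]

def measurement_uncertainty(modules):
    # Driver-side combination of all reported error terms.
    # Root-sum-square is an assumed rule, not taken from the patent.
    terms = [e for m in modules for e in m.error_specifications()]
    return math.sqrt(sum(e * e for e in terms))

dmm = HardwareModule("dmm", {"range": "10V"}, {"10V": [0.003, 0.001]})
mux = HardwareModule("mux", {"range": "10V"}, {"10V": [0.002]})
u = measurement_uncertainty([dmm, mux])
```

Because each module resolves its own specs from its live configuration, the driver needs no global table of module states.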

US Pat. No. 10,394,640

REQUIREMENT RUNTIME MONITOR USING TEMPORAL LOGIC OR A REGULAR EXPRESSION

Infineon Technologies Aus...

1. A hardware monitor, comprising:
one or more hardware components to:
receive information that identifies a requirement for a hardware system,
the requirement being associated with operation of the hardware system during a runtime operation of the hardware system in an intended operating environment;
program the one or more hardware components to analyze the hardware system based on the requirement;
receive a runtime signal, associated with a component of the hardware system, from the hardware system during the runtime operation of the hardware system in the intended operating environment;
analyze the runtime signal during the runtime operation of the hardware system based on programming the one or more hardware components to analyze the hardware system;
monitor another signal associated with a software module of the hardware system or another component of the hardware system;
determine, during the runtime operation of the hardware system, that the requirement was violated during the runtime operation of the hardware system based on analyzing the runtime signal and the other signal; and
output information indicating that the requirement was violated.
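For the regular-expression variant, such a monitor can be sketched as a pattern matched against a symbolized trace of runtime samples. The requirement used here ("a grant G must follow a request R within three samples") and the symbol alphabet are invented for illustration; only the idea of checking a runtime signal against a regex-encoded requirement comes from the claim.

```python
import re

# Violation pattern: an 'R' followed by three consecutive non-'G' samples,
# i.e. the grant did not arrive within the required window.
VIOLATION = re.compile(r"R[^G]{3}")

def monitor(trace):
    # True if the requirement was violated anywhere in the trace observed so far.
    return VIOLATION.search(trace) is not None

assert monitor("xxRxxGxx") is False   # grant arrives in time
assert monitor("xxRxxxxG") is True    # grant arrives too late
```

A trailing 'R' with fewer than three samples after it is not flagged, which matches runtime monitoring: the verdict is withheld until enough of the signal has been observed.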

US Pat. No. 10,394,639

DETECTING AND SURFACING USER INTERACTIONS

Microsoft Technology Lice...

1. A computing system, comprising:
a processor; and
memory storing instructions executable by the processor, wherein the instructions, when executed, configure the computing system to provide:
a data aggregation system configured to:
obtain incident data indicative of an incident that results in performance degradation of a hosted service, wherein the hosted service is hosted by a service computing system and accessible by a set of users, associated with a tenant, over a computing network; and
obtain tenant data, corresponding to the tenant, indicative of user activity for the set of users;
data mining logic configured to:
identify, based on tenant map information corresponding to the tenant, a plurality of servers that host the hosted service for the tenant;
identify, based on the incident data, a time corresponding to the incident and a set of servers, in the plurality of servers, that were impacted by the incident;
identify, based on the user activity, a set of impacted users of the tenant, who were actively using the set of servers during the time corresponding to the incident;
generate a metric indicative of a measure of the impacted users, impacted by the incident; and
data surfacing logic configured to:
generate a computer control signal that controls surfacing of a representation of the identified metric, based on the generated metric.
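The data-mining steps can be sketched end to end. All record shapes here (the incident dict, the tenant server set, the activity log) are assumed for illustration; the claim does not prescribe them.

```python
from datetime import datetime

def impacted_user_metric(incident, tenant_servers, user_activity):
    # Intersect the incident's servers with the tenant's servers, then count
    # distinct users active on those servers during the incident window.
    start, end = incident["time"]
    impacted_servers = incident["servers"] & tenant_servers
    impacted_users = {
        a["user"] for a in user_activity
        if a["server"] in impacted_servers and start <= a["at"] <= end
    }
    return len(impacted_users)

activity = [
    {"user": "ann", "server": "s1", "at": datetime(2019, 5, 1, 10, 5)},
    {"user": "bob", "server": "s2", "at": datetime(2019, 5, 1, 10, 6)},  # server not impacted
    {"user": "cat", "server": "s1", "at": datetime(2019, 5, 1, 12, 0)},  # after the window
]
metric = impacted_user_metric(
    {"time": (datetime(2019, 5, 1, 10, 0), datetime(2019, 5, 1, 11, 0)),
     "servers": {"s1"}},
    tenant_servers={"s1", "s2"},
    user_activity=activity,
)
```

The resulting metric (here 1: only ann qualifies) is what the surfacing logic would turn into a control signal for display.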

US Pat. No. 10,394,638

APPLICATION HEALTH MONITORING AND REPORTING

STATE FARM MUTUAL AUTOMOB...

1. A computer-implemented method comprising:
retrieving, via a computer network, data about the health of a plurality of applications executing in a computing environment,
wherein at least some of the data about the health of the plurality of applications is generated while at least some of the plurality of applications are performing respective functions in the computing environment,
wherein the data about the health of the plurality of applications includes more than one dissimilar metric;
determining, by one or more processors operating a health indicator generation module, a plurality of normalized indications of health based upon the data about the health of the plurality of applications,
wherein each of the plurality of normalized indications of health corresponds to one of the plurality of applications,
wherein at least one of the plurality of normalized indications of health is based upon data generated directly by a corresponding one of the plurality of applications,
and wherein each of the plurality of normalized indications of health indicates one of:
(i) an availability of the corresponding one of the plurality of applications to perform the respective functions of the corresponding one of the plurality of applications, or
(ii) a performance of the corresponding one of the plurality of applications in performing the respective functions of the corresponding one of the plurality of applications,
wherein determining the plurality of normalized indications of health based upon the data about the health of the plurality of applications includes transforming the more than one dissimilar metric into comparable indications of health;
determining, by the one or more processors, an indication of an overall health of a portion of the computing environment based upon the plurality of normalized indications of health of the plurality of applications, the portion of the computing environment implementing two or more of the plurality of applications;
generating, by the one or more processors, a plurality of visual elements in a dashboard to be displayed on remote user devices, the dashboard including a plurality of tiles, each corresponding to one of the plurality of applications, and each including at least one of the normalized indications of health,
wherein one of the plurality of visual elements (a) presents the indication of the overall health of the portion of the computing environment, (b) is expandable, upon a selection by a user of one of the remote user devices, to present details about the performance or the availability of subdivisions of the portion of the computing environment, and
wherein other of the plurality of visual elements present at least some of the plurality of normalized indications of health of the plurality of applications; and
sending, via the computer network, the plurality of visual elements to at least one of the remote user devices.
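The normalization step, turning dissimilar metrics into comparable indications of health, can be sketched as follows. The metric names, thresholds, and the averaging used for overall health are assumptions; the claim requires only some transformation into comparable indications.

```python
def normalize(metric_name, value):
    # Map each raw metric onto a common 0-100 health scale.
    if metric_name == "availability_pct":   # already 0-100, higher is better
        return value
    if metric_name == "avg_latency_ms":     # lower is better; 500 ms or worse maps to 0
        return max(0.0, 100.0 * (1 - value / 500.0))
    raise ValueError(f"unknown metric: {metric_name}")

def overall_health(app_metrics):
    # Overall health of a portion of the environment: mean of per-app scores.
    scores = [normalize(name, v) for name, v in app_metrics]
    return sum(scores) / len(scores)

health = overall_health([("availability_pct", 99.0), ("avg_latency_ms", 250.0)])
```

Once every metric lives on the same scale, the per-application tiles and the expandable roll-up element in the dashboard can be driven from the same numbers.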

US Pat. No. 10,394,637

SYSTEMS AND METHODS FOR DATA VALIDATION AND PROCESSING USING METADATA

AMERICAN EXPRESS TRAVEL R...

1. A method comprising:
receiving, by a processor, a source;
identifying, by the processor, the source with a source type and a file type;
receiving, by the processor, a metadata layer that describes the source,
wherein the source comprises source records with source data fields containing source data,
wherein the metadata layer includes metadata comprising at least one of a field data type, a field data length, a field description, or a record length;
validating, by the processor, the metadata layer against the source;
validating, by the processor and using rules, a quality of the metadata layer;
correcting, by the processor, the metadata in response to the metadata being inaccurate;
completing, by the processor, the metadata in response to the metadata being incomplete;
writing, by the processor, results to a log;
transforming, by the processor, the source records into transformed records in an ASCII readable format for a load ready file;
performing, by the processor, data conversions by reading metadata describing source columns;
dynamically creating, by the processor and in the load ready file, target columns corresponding to the source columns;
determining, by the processor, a structure of the load ready file based on the target columns;
deriving, by the processor, new data fields from the source data fields to create derived fields in the transformed records;
tracking, by the processor, a history of the transforming in the load ready file;
detecting, by the processor, a number of failed transforms due to bad records during the importing;
writing, by the processor, the bad record to the log in response to a failed transformation;
evaluating, by the processor, the number of failed transforms with the bad records versus a threshold for the bad records to determine if the importing is a success or failure;
balancing, by the processor, a number of records in the source against a number of transformed records in the load ready file to generate a transformation failure rate;
deciding, by the processor, a state of the transforming the source records in response to the transformation failure rate and a predetermined acceptable failure rate; and
outputting, by the processor, the load ready file.
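The validate/transform/threshold flow can be sketched compactly. The metadata shape, the validation rules, and the failure-rate policy below are illustrative assumptions, not the claimed implementation.

```python
def validate_record(record, metadata):
    # Check each field against its metadata: presence, data type, data length.
    for field, meta in metadata.items():
        value = record.get(field)
        if value is None:
            return False
        if meta["type"] == "int" and not str(value).lstrip("-").isdigit():
            return False
        if len(str(value)) > meta["length"]:
            return False
    return True

def transform(records, metadata, acceptable_failure_rate=0.1):
    # Split records into good and bad, then decide success against the threshold.
    good, bad = [], []
    for r in records:
        (good if validate_record(r, metadata) else bad).append(r)
    failure_rate = len(bad) / len(records)
    return good, bad, failure_rate <= acceptable_failure_rate

meta = {"id": {"type": "int", "length": 5}, "name": {"type": "str", "length": 10}}
records = [{"id": "123", "name": "alice"}, {"id": "abc", "name": "bob"}]
good, bad, ok = transform(records, meta, acceptable_failure_rate=0.5)
```

The bad list is what would be written to the log; comparing the failure rate to the acceptable rate decides the state of the transformation.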

US Pat. No. 10,394,636

TECHNIQUES FOR MANAGING A HANG CONDITION IN A DATA PROCESSING SYSTEM WITH SHARED MEMORY

International Business Ma...

1. A method of operating a data processing system, comprising:
detecting, by a master, that a processing unit within a first group of processing units in the data processing system has a hang condition;
in response to detecting that the processing unit has a hang condition, reducing, by an arbiter, a command issue rate for the first group of processing units;
notifying, by the master, one or more other groups of processing units in the data processing system that the first group of processing units has reduced the command issue rate for the first group of processing units; and
in response to the notifying, changing, by respective arbiters of the one or more other groups of processing units, respective command issue rates of the other groups of processing units to reduce a number of commands received by the first group of processing units from the other groups of processing units.

US Pat. No. 10,394,635

CPU WITH EXTERNAL FAULT RESPONSE HANDLING

Hewlett Packard Enterpris...

1. A system, comprising:
a central processing unit (CPU) to process data;
a first memory management unit (MMU) in the CPU to generate an external request to a bus for data located external to the CPU; and
an external fault handler in the CPU to process a fault response received via the bus, wherein the fault response is generated externally to the CPU and relates to a fault being detected with respect to the external request;
wherein the CPU retires an operation from a protocol layer that caused the fault and signals a thread that is blocked on a given cache miss to execute an external fault corresponding to the fault response, and
the CPU comprises a cache that re-issues a cache miss request to the protocol layer in response to execution of the external fault to enable the external request to be completed after the fault is detected.

US Pat. No. 10,394,634

DRIVE-BASED STORAGE SCRUBBING

Intel Corporation, Santa...

1. A nonvolatile memory (NVM) communicatively coupled to a distributed storage service (DSS), the DSS having a scrubber to validate data stored by the DSS on the nonvolatile memory, comprising:
a first background task to offload the scrubber, including local review of NVM operation, within the NVM, to identify a first potential error; and
a second background task to further offload the scrubber, including local read of a block of data from the NVM having a first portion containing data and a second portion containing validation information for the data, local determination of a reference validation for the data, and local comparison of the reference validation to the second portion to identify a second potential error, all performed within the NVM;
wherein the first and second potential errors are selectively reported to the scrubber.
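The second background task, a local read plus local validation inside the NVM, can be sketched with a checksum. CRC32 and the block layout below are assumptions; the claim speaks only of a data portion, a validation portion, and a locally computed reference validation.

```python
import zlib

def scrub_block(block):
    # Recompute the reference validation for the data portion and compare
    # it against the stored validation portion.
    reference = zlib.crc32(block["data"])
    return None if reference == block["crc"] else "checksum mismatch"

def scrub(blocks):
    # Runs locally (inside the drive, per the claim); only potential errors
    # are selectively reported back to the DSS scrubber.
    return [(i, err) for i, b in enumerate(blocks)
            if (err := scrub_block(b)) is not None]

good = {"data": b"hello", "crc": zlib.crc32(b"hello")}
flipped = {"data": b"hellp", "crc": zlib.crc32(b"hello")}   # one corrupted byte
reports = scrub([good, flipped])
```

Offloading this loop to the drive means the distributed storage service only pays network and CPU cost for the blocks that actually look suspect.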

US Pat. No. 10,394,633

ON-DEMAND OR DYNAMIC DIAGNOSTIC AND RECOVERY OPERATIONS IN CONJUNCTION WITH A SUPPORT SERVICE

Microsoft Technology Lice...

1. A method to provide on-demand, dynamic diagnostic and recovery operations in conjunction with a support service, the method comprising:
collecting hardware and software environment information associated with a user device at an assistance client application executed on the user device, wherein at least some of the hardware and software environment information being collected is received from an operating system executed on the user device;
receiving, at the assistance client application executed on the user device, hardware and software environment information associated with one or more servers from the one or more servers executing a hosted service, wherein a component of the hosted service is executed on the user device;
in response to exhausting a set of automatic diagnostic and recovery actions associated with the component of the hosted service, engaging the support service;
providing the collected hardware and software environment information associated with the user device and the received hardware and software environment information associated with the one or more servers to the support service;
automatically facilitating a communication between a user associated with the user device and an operator of the support service through the assistance client application based on one or more contact preferences of the user; and
performing one or more diagnostic and recovery actions on one or more of the component of the hosted service and the user device as instructed by the operator of the support service.

US Pat. No. 10,394,632

METHOD AND APPARATUS FOR FAILURE DETECTION IN STORAGE SYSTEM

International Business Ma...

1. A method for improving failure detection in a storage system, the method comprising:
determining, by one or more processors of a computing system, an amount of data received by a plurality of switches in the storage system within a predetermined time window to obtain a plurality of data amounts, the determining including excluding, for a first switch of the plurality of switches, a particular amount of data received from a host of the storage system;
determining, by the one or more processors of the computing system, a count of check errors detected in the amount of data received by the plurality of switches to obtain a plurality of check error counts;
requesting, in response to a given switch of the plurality of switches detecting a check error in data received from a neighboring device connected to the given switch, the neighboring device to retransmit the data to the given switch; and
calculating, by the one or more processors of the computing system, a failure risk for the plurality of switches based on the plurality of data amounts and the plurality of check error counts.
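The final calculation can be sketched as a per-switch error rate. Errors-per-byte is an assumed risk model; the claim requires only that the risk be calculated from both the data amounts and the check-error counts.

```python
def failure_risk(data_amounts, error_counts):
    # Check errors per unit of received data, per switch. The amount for the
    # first switch is assumed to already exclude host-originated data, as
    # the claim requires.
    return {sw: error_counts.get(sw, 0) / amt
            for sw, amt in data_amounts.items() if amt > 0}

risk = failure_risk({"sw1": 1_000_000, "sw2": 2_000_000},
                    {"sw1": 10, "sw2": 2})
```

Normalizing by data amount keeps a busy, healthy switch from looking riskier than a quiet, failing one.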

US Pat. No. 10,394,631

ANOMALY DETECTION AND AUTOMATED ANALYSIS USING WEIGHTED DIRECTED GRAPHS

Callidus Software, Inc., ...

1. A method comprising:
receiving a data set, wherein the data set includes a plurality of data subsets, wherein each data subset is associated with one transaction;
processing each data subset according to a plurality of rules to generate a plurality of activation values and an output for the each data subset, wherein the plurality of activation values and the output for the each data subset form an activation pattern for the each data subset;
generating a predictive model based on the activation patterns, wherein the predictive model is based on a regression algorithm; and
identifying a subset of transactions as outliers based on the predictive model.
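A one-dimensional sketch of the outlier step: fit a regression over (activation, output) pairs, then flag transactions whose output deviates from the prediction. Real activation patterns are vectors, and the claim fixes neither the regression algorithm nor a deviation threshold; plain least squares and a fixed threshold are assumptions here.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def outliers(activations, outputs, threshold):
    # Transactions whose residual against the model exceeds the threshold.
    a, b = fit_line(activations, outputs)
    return [i for i, (x, y) in enumerate(zip(activations, outputs))
            if abs(y - (a * x + b)) > threshold]

acts = [1.0, 2.0, 3.0, 4.0, 5.0]
outs = [2.0, 4.0, 6.0, 8.0, 20.0]   # the last transaction deviates strongly
bad = outliers(acts, outs, threshold=3.0)
```

Note that a strong outlier also drags the fitted line, so neighbors of the anomaly can be flagged too; a robust regression would mitigate that at extra cost.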

US Pat. No. 10,394,630

ESTIMATING RELATIVE DATA IMPORTANCE IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN) having a plurality of storage units, the plurality of storage units storing a plurality of data objects in the form of encoded data slices, the method comprises:
generating a first importance ranking for a first data object of the plurality of data objects;
generating a second importance ranking for a second data object of the plurality of data objects, the first importance ranking and the second importance ranking based on one or more ranking factors;
detecting a plurality of the encoded data slices that require rebuilding, wherein each encoded data slice of the plurality of the encoded data slices is a dispersed storage error encoded portion of a respective one of the plurality of data objects, and wherein the plurality of the encoded data slices that require rebuilding include at least one encoded data slice of the first data object and at least one encoded data slice of the second data object;
performing a comparison of the first importance ranking and the second importance ranking; and
based on the comparison, assigning respective rebuilding priority levels to the at least one encoded data slice of the first data object and the at least one encoded data slice of the second data object.

US Pat. No. 10,394,629

MANAGING A PLUG-IN APPLICATION RECIPE VIA AN INTERFACE

Oracle International Corp...

1. One or more non-transitory machine-readable media storing instructions which, when executed by one or more processors, cause:
storing a first mapping between:
a set of user-exposed fields selectable via a plug-in application recipe (“PIAR”) creation interface associated with a PIAR management engine, and
another set of fields exposed by an Application Programming Interface (API) of a third-party application,
wherein the PIAR management engine manages PIAR definitions, each PIAR definition identifying
(a) a trigger for which one or more trigger variables, values of which are necessary to evaluate the trigger on an ongoing basis, are exposed by a first plug-in application to the PIAR management engine, wherein an instance of evaluating the trigger comprises determining whether a condition is satisfied based at least in part on one or more values of the one or more trigger variables, and
(b) an action for which a second plug-in application exposes an interface to the PIAR management engine for causing the second plug-in application to carry out the action, wherein an instance of evaluating the action comprises carrying out the action based on one or more values of one or more input variables that are input to the action in the PIAR definition,
wherein the PIAR management engine makes the action conditional on the trigger on an ongoing basis, and
wherein the PIAR definition comprises a particular trigger and a particular action;
receiving, via one or more PIAR creation interfaces, a plurality of PIAR definitions based at least on a user-selected field of the set of user-exposed fields;
wherein the user-selected field is mapped, in the first mapping, to a first third-party application field exposed by the API of the third-party application, wherein the first third-party application field is associated with the particular trigger or the particular action;
managing a particular PIAR in an active state, wherein the particular PIAR corresponds to a PIAR definition of the plurality of PIAR definitions, and wherein managing the particular PIAR comprises periodically receiving and checking, against a condition of the particular PIAR, data from the first third-party application field as provided by the third-party application via the API;
during or after managing the particular PIAR in the active state, storing information comprising an update from the first mapping to a second mapping, wherein the second mapping maps the user-selected field to a second third-party application field, wherein the second third-party application field differs from the first third-party application field;
without modifying the particular PIAR, managing the particular PIAR in the active state at least in part by periodically receiving and checking, against the condition of the particular PIAR, data from the second third-party application field as provided by the third-party application via the API.

US Pat. No. 10,394,628

IN-LINE EVENT HANDLERS ACROSS DOMAINS

Microsoft Technology Lice...

1. A computing system, comprising:
a computer processor;
a communication system that communicates, through a network interface, with a first domain computing system and a second domain computing system that is different from the first domain computing system, the first and second domain computing systems communicating with one another through corresponding network interfaces; and
an event handler orchestrator service that stores a first event handler record corresponding to a first event handler in the first domain computing system,
the first event handler record including filter criteria identifying an event of interest raised by an invoking process running in the second domain computing system,
the event handler orchestrator service receiving a call from the invoking process when the event of interest is raised by the invoking process in the second domain computing system and returning, to the invoking process, an endpoint in the first domain computing system, corresponding to the first event handler, for invoking the first event handler.

US Pat. No. 10,394,627

ASYNCHRONOUS C#-JS DATA BINDING BRIDGE

1. A computer-implemented method comprising:
providing a front-end binding framework having a data controller that interacts with a user interface of a user device to manage a bindable property or method for a view on the user device;
providing a back-end binding framework with a data model controller that manages a data model, the front-end binding framework and the back-end binding framework being different types of frameworks; and
implementing, via a bridge controller, asynchronous two-way binding for the bindable property or method between the data controller of the front-end binding framework and the back-end binding framework to update the bindable property or method in the data model when data changes at the user interface and to update the view on the user device when data changes at the data model,
wherein the providing a front-end binding framework operation, the providing a back-end framework operation and the implementing operation are performed on a user device;
generating a request object corresponding to the data model for a binding request in the front-end binding framework;
invoking a callback associated with the request object to update the view of the user interface with data from the corresponding model via a bridge response in response to the binding request.

US Pat. No. 10,394,626

EVENT FLOW SYSTEM AND EVENT FLOW CONTROL METHOD

HITACHI, LTD., Tokyo (JP...

1. An event flow system connecting nodes, for which processes are defined, from an upstream side to a downstream side by an event which is generated due to a certain process and is used by another process to realize a process flow, the event flow system comprising:
at least a memory and a processor;
a process flow builder configured to build a process flow via a flow table using a plurality of nodes, one or more events, and a reverse event that sends a predetermined request from a downstream node to an upstream node disposed further toward an upstream side than the downstream node;
wherein the process flow builder receives information for changing the flow table by any one of adding at least one node, deleting at least one node, adding at least one link, deleting at least one link or updating a node parameter,
wherein a link is a connection between at least two nodes that indicates the event or the reverse event,
wherein when the process flow builder deletes at least one node from the flow table, the process flow builder searches for related nodes to the at least one deleted node and deletes the related nodes,
wherein when the process flow builder adds at least one link to the flow table, the process flow builder adds destination information and source information for the at least one added link, and
wherein when the process flow builder deletes at least one link from the flow table, the process flow builder deletes destination information and source information for the at least one deleted link; and
a process flow executer configured to execute a process, stored on the flow table, defined for each of the plurality of nodes according to the event and the reverse event.

US Pat. No. 10,394,625

REACTIVE COINCIDENCE

Microsoft Technology Lice...

1. An event processing method, comprising:
executing, on a processor, instructions stored in a memory that cause an event processing system to perform the following acts:
creating a second event stream, embedded within a first event stream, to represent duration of a first point event in the first event stream, wherein creation of the second event stream represents the start of the duration of the first point event;
creating a third event stream, embedded within the first event stream, to represent duration of a second point event in the first event stream, wherein creation of the third event stream represents the start of the duration of the second point event; and
determining coincidence between the first point event and the second point event based on a comparison of the second event stream and third event stream.
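If each embedded event stream is modeled as the interval it spans, coincidence reduces to interval overlap. The interval model is an assumption made to keep the sketch small; the claim works directly on the embedded streams, which lets the comparison run reactively while the durations are still open.

```python
def coincide(window_a, window_b):
    # Two point-event durations coincide iff their windows overlap.
    (a_start, a_end), (b_start, b_end) = window_a, window_b
    return a_start < b_end and b_start < a_end

assert coincide((0, 5), (3, 9)) is True
assert coincide((0, 3), (3, 9)) is False   # touching endpoints only
```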

US Pat. No. 10,394,624

ATTACHING APPLICATIONS BASED ON FILE TYPE

VMware, Inc., Palo Alto,...

1. A method of operating an application attaching system to dynamically make applications available to a computing device, the method comprising:
identifying an application attach triggering event based on a file selection of a certain file type on the computing device;
in response to the application attach triggering event, identifying an application within an application volume based on the certain file type, wherein the application volume comprises a virtual or physical storage element;
in response to identifying the application, mounting the application volume to the computing device;
modifying one or more registry keys on the computing device to make the application executable on the computing device from the application volume; and
executing files for the application stored on the application volume to support the file selection.

US Pat. No. 10,394,623

TECHNIQUES FOR PROCESSING CUSTOM EVENTS

INTEL CORPORATION, Santa...

1. An apparatus, comprising:
processing circuitry;
an interrupt handling component for execution on the processing circuitry to receive an interrupt to cause a system event on the apparatus from a device not operative to directly cause the system event, to generate a generic event message in response to receiving the interrupt and send the generic event message to a generic event handling component;
the generic event handling component for execution on the processing circuitry to receive the generic event message, generate a defined event message in response to receiving the generic event message and to send the defined event message to a defined event handling component; and
the defined event handling component for execution on the processing circuitry to receive the defined event message from the generic event handling component and to generate a system interrupt and send the system interrupt to an interrupt handler of an operating system to cause the operating system to execute the system event on the apparatus.

US Pat. No. 10,394,622

MANAGEMENT SYSTEM FOR NOTIFICATIONS USING CONTEXTUAL METADATA

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
determining that an application executing on a mobile computing device of a user generated a notification on the mobile computing device for an event, the event comprising an upcoming calendar appointment of the user, a missed telephone call of the user, or a communication from a social media contact of the user, and the application being one of a calendar application, a conferencing application, and a social media application;
collecting contextual data from one or more sources associated with the mobile computing device, the sources comprising at least one selected from the group consisting of a sensor on the mobile computing device, the social media application, the calendar application, and the conferencing application;
generating contextual metadata using the contextual data;
analyzing the contextual metadata;
generating, based on the contextual metadata, a determination that an action associated with the notification has been implemented by the user, wherein the implementation of the action renders the notification obsolete; and
dismissing the notification from a notification queue of the user based on the determination that the action associated with the notification has been implemented by the user.

US Pat. No. 10,394,621

METHOD AND COMPUTER READABLE MEDIUM FOR PROVIDING CHECKPOINTING TO WINDOWS APPLICATION GROUPS

OPEN INVENTION NETWORK LL...

9. A method, comprising:
launching one or more applications each comprising one or more processes and threads;
initializing a checkpointer using one or more of a checkpoint library or checkpoint kernel module;
creating, by said checkpointer, a set of objects that are used to record data and a computation state of application processes and threads;
creating a synchronization point for said one or more applications;
triggering one or more checkpoints of application processes and threads using one or more of user-mode or kernel-mode Asynchronous Procedure Calls (APC) and signaling application threads to enter a checkpoint APC signal handler;
removing, by said checkpoint APC handler, one or more of said user-mode or kernel-mode APCs from the applications' APC queues when at said synchronization point for said one or more applications; and
checkpointing one or more joining applications jointly with said one or more applications by launching said one or more joining applications, initializing said one or more of a checkpoint library and kernel module, including said one or more joining applications in said synchronization point for said one or more applications, and including the processes and threads of said joining applications in said triggering of one or more checkpoints.

US Pat. No. 10,394,620

METHOD FOR CHANGING ALLOCATION OF DATA USING SYNCHRONIZATION TOKEN

INTERNATIONAL BUSINESS MA...

1. A system configured to process data with data processing modules provided in parallel, the system comprising:
a processor; and
one or more computer readable mediums collectively including instructions that, when executed by the processor, cause the processor to:
input a synchronization token into at least one data processing module that is in an operational state from among the data processing modules provided in parallel, in response to a request to change allocation of the data;
change the allocation of the data to the data processing modules provided in parallel, after the synchronization token is input, two or more of the data processing modules being configured for processing in parallel; and
in response to the synchronization token having arrived at a data processing module that receives data at a later stage than the at least one data processing module into which the synchronization token was input, process data for which processing has been stopped by the at least one data processing module among the data processing modules after the synchronization token is input to the at least one data processing module;
wherein pieces of data are respectively provided with key values indicating groups to which the respective pieces of data belong;
wherein an order in which the pieces of data in each of the groups are to be processed within each of the groups is determined; and
wherein the key values are allocated to every data processing module that processes the pieces of data among the data processing modules provided in parallel.

US Pat. No. 10,394,619

SIGNATURE-BASED SERVICE MANAGER WITH DEPENDENCY CHECKING

Western Digital Technolog...

1. A computer-implemented method comprising:monitoring a plurality of services, wherein:
each service is a process managed by an operating system and running on a computer, wherein the process is uniquely identifiable in a process table of the operating system based on a signature;
the signature is based on a combination of at least two service attributes in an entry in the process table for the process, wherein at least one service attribute of the at least two service attributes comprises one or more of a file system location and a command line argument associated with the service;
the signature excludes a unique process identifier assigned to the process and included in the entry for the process in the process table; and
the signature is used to identify each of the plurality of services in the process table for monitoring the plurality of services;
receiving a request to add a new service to the plurality of services;
determining, using signatures to lookup processes in the process table, whether service dependencies of the plurality of services in the process table and the new service are compatible; and
responsive to determining that the service dependencies of the plurality of services in the process table and the new service are compatible:
starting the new service; and
determining a new signature for the new service based on a new entry in the process table for the new service.
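The signature scheme claimed above (identify a process by stable process-table attributes, never by its transient PID) can be sketched as follows; the attribute names `exe_path` and `cmdline` are illustrative assumptions, not the claim's terms.

```python
import hashlib

def service_signature(entry):
    """Combine at least two stable process-table attributes (file system
    location and command line argument); the unique PID is deliberately
    excluded, so the signature survives process restarts."""
    parts = (entry["exe_path"], entry["cmdline"])
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def find_service(process_table, signature):
    """Locate a monitored service's entries in the process table by signature."""
    return [e for e in process_table if service_signature(e) == signature]
```

Because the PID is excluded, a restarted service (new PID, same executable and arguments) maps to the same signature, which is what makes signature-based monitoring and dependency checks stable.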

US Pat. No. 10,394,618

THERMAL AND POWER MEMORY ACTIONS

International Business Ma...

1. A system comprising:a memory module including a volatile memory, a non-volatile memory, and one or more sensors; and
one or more processing circuits, wherein the one or more processing circuits are configured to perform a method comprising:
obtaining, from the one or more sensors, a set of volatile memory sensor data;
obtaining, from the one or more sensors, a set of non-volatile memory sensor data;
analyzing the set of volatile memory sensor data and the set of non-volatile memory sensor data, wherein analyzing the set of volatile memory sensor data and the set of non-volatile memory sensor data comprises:
comparing the set of volatile memory sensor data to a set of volatile memory thresholds; and
comparing the set of non-volatile memory sensor data to a set of non-volatile memory thresholds;
determining, based on the analyzing, that a memory condition exists, wherein determining that a memory condition exists further comprises:
determining, in response to the set of volatile memory sensor data not satisfying the set of volatile memory thresholds, that a volatile memory condition exists; and
issuing, in response to determining that the memory condition exists, one or more memory actions, wherein issuing the one or more memory actions further comprises:
generating, in response to determining that the volatile memory condition exists, a set of parity volatile memory data;
storing the set of parity volatile memory data in the non-volatile memory; and
reducing the refresh rate of the volatile memory.
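A minimal sketch of the claimed threshold check and memory actions: when volatile sensor readings fail their thresholds, generate parity data, stage it for the non-volatile memory, and reduce the refresh rate. The comparison direction, XOR parity, and halving factor are illustrative assumptions; the claim leaves all three open.

```python
def volatile_condition(sensor_data, thresholds):
    """A volatile memory condition exists when any sensor reading does not
    satisfy its threshold (here assumed: reading exceeds threshold)."""
    return any(v > t for v, t in zip(sensor_data, thresholds))

def memory_actions(sensor_data, thresholds, memory_words, refresh_rate):
    """Issue the claimed actions: generate parity over the volatile memory
    contents, store it in non-volatile memory, and reduce the refresh rate."""
    actions = []
    if volatile_condition(sensor_data, thresholds):
        parity = 0
        for word in memory_words:  # toy stand-in for volatile memory words
            parity ^= word
        actions.append(("store_parity_in_nvm", parity))
        actions.append(("set_refresh_rate", refresh_rate / 2))
    return actions
```

Reducing refresh lowers power and heat at the cost of retention risk, which the freshly stored parity data compensates for.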

US Pat. No. 10,394,617

EFFICIENT APPLICATION MANAGEMENT

International Business Ma...

1. A method for application management, the method comprising:receiving an application and system configuration, wherein the application and system configuration details which of one or more systems each of one or more applications are configured to operate on;
establishing baseline energy consumptions of each of the one or more applications on each of the one or more systems by briefly operating each of the one or more applications on each of the one or more systems and measuring an energy consumption of each of the one or more systems and individual energy consuming hardware components of each of the one or more systems;
assigning operation of a first application of the one or more applications to a first system of the one or more systems based on the measured energy consumption of the first system being a least amount of the one or more systems while operating the first application;
assigning operation of a second application of the one or more applications to a second system of the one or more systems based on the measured energy consumption of the second system being a least amount of the one or more systems while operating the second application;
determining whether the second application of the one or more applications can operate with less energy consumption than it is currently operating with by:
referencing the energy consumption baseline of the second application to obtain the energy consumption of the first system and individual energy consumptions of each one or more energy consuming hardware components of the first system, and
comparing a real-time energy consumption of the energy consuming hardware component utilized by the second application on the second system with the obtained energy consumption of a same energy consuming hardware component on the first system; and
operating the second application on the first system based on determining that the second application operates with less energy consumption when using the energy consuming hardware component of the first system than it is currently operating on the second system,
wherein one or more steps of the above method are performed using one or more computers.
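The placement logic claimed above reduces to: for each application, pick the system whose measured baseline energy is least, then migrate when another system's baseline for the same hardware component beats the current real-time consumption. A minimal sketch, with all names hypothetical:

```python
def assign_applications(baselines):
    """baselines[app][system] = energy measured while briefly operating
    `app` on `system`. Each application is assigned to its least-energy
    system."""
    return {app: min(costs, key=costs.get) for app, costs in baselines.items()}

def should_migrate(current_rt_energy, baseline_on_other_system):
    """Compare real-time consumption of a hardware component on the current
    system against the baseline for the same component elsewhere."""
    return baseline_on_other_system < current_rt_energy
```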

US Pat. No. 10,394,616

EFFICIENT APPLICATION MANAGEMENT

International Business Ma...

8. A computer system for application management, the computer system comprising:one or more computer processors, one or more computer-readable storage media, and program instructions stored on one or more of the computer-readable storage media for execution by at least one of the one or more processors, the program instructions comprising:
program instructions to receive an application and system configuration, wherein the application and system configuration details which of one or more systems each of one or more applications are configured to operate on;
program instructions to establish baseline energy consumptions of each of the one or more applications on each of the one or more systems by briefly operating each of the one or more applications on each of the one or more systems and measuring an energy consumption of each of the one or more systems and individual energy consuming hardware components of each of the one or more systems;
program instructions to assign operation of a first application of the one or more applications to a first system of the one or more systems based on the measured energy consumption of the first system being a least amount of the one or more systems while operating the first application;
program instructions to assign operation of a second application of the one or more applications to a second system of the one or more systems based on the measured energy consumption of the second system being a least amount of the one or more systems while operating the second application;
program instructions to determine whether the second application of the one or more applications can operate with less energy consumption than it is currently operating with by:
referencing the energy consumption baseline of the second application to obtain the energy consumption of the first system and individual energy consumptions of each one or more energy consuming hardware components of the first system, and
comparing a real-time energy consumption of the energy consuming hardware component utilized by the second application on the second system with the obtained energy consumption of a same energy consuming hardware component on the first system; and
program instructions to operate the second application on the first system based on determining that the second application operates with less energy consumption when using the energy consuming hardware component of the first system than it is currently operating on the second system.

US Pat. No. 10,394,615

INFORMATION PROCESSING APPARATUS AND JOB MANAGEMENT METHOD

FUJITSU LIMITED, Kawasak...

1. An information processing apparatus comprising:a processor configured to perform a procedure including:
taking currently executing jobs respectively as candidate jobs, and specifying, when a migration of a candidate job to a migration destination node selected from free nodes, which are not executing any jobs, is expected to expand a continued range of free nodes, the migration of the candidate job to the migration destination node as a possible migration;
determining, when a plurality of possible migrations is specified, a possible migration to be performed from among the plurality of possible migrations, based on amounts of communication needed to perform individual migrations indicated by the plurality of possible migrations and numbers of nodes used for executing candidate jobs to be migrated in the individual migrations; and
performing the determined possible migration,
the determining of the possible migration to be performed includes:
repeatedly extracting, as a pair of possible migrations, two possible migrations that have not been excluded from comparisons, from among the plurality of possible migrations;
calculating a first amount of communication needed to perform a first migration indicated by a first possible migration included in the pair of possible migrations, and a second amount of communication needed to perform a second migration indicated by a second possible migration included in the pair of possible migrations;
calculating a first number of nodes used for executing a first candidate job to be migrated in the first migration, and a second number of nodes used for executing a second candidate job to be migrated in the second migration;
calculating, as a first evaluation value, an amount of communication by dividing the second amount of communication by the first amount of communication;
calculating, as a second evaluation value, a sum of values for the amount of communication and the number of nodes being calculated by dividing the second number of nodes by the first number of nodes; and
carrying out a first evaluation using the first evaluation value to make a determination on which of the possible migrations included in the pair takes a shorter time for the job migration than another of the possible migrations included in the pair, and carrying out, if the first evaluation value is within a prescribed range, a second evaluation using the second evaluation value to determine which of the possible migrations included in the pair takes a shorter time for the job migration than another of the possible migrations included in the pair.

US Pat. No. 10,394,614

TASK QUEUING AND DISPATCHING MECHANISMS IN A COMPUTATIONAL DEVICE

INTERNATIONAL BUSINESS MA...

1. A method comprising:maintaining a plurality of ordered lists of dispatch queues corresponding to a plurality of processing entities, wherein each dispatch queue includes one or more task control blocks or is empty;
determining whether a primary dispatch queue of a processing entity is empty in an ordered list of dispatch queues for the processing entity;
in response to determining that the primary dispatch queue of the processing entity is empty, selecting a task control block for processing by the processing entity from another dispatch queue of the ordered list of dispatch queues for the processing entity, wherein the another dispatch queue from which the task control block is selected meets a threshold criteria for the processing entity, wherein a data structure indicates that the task control block that was selected was last executed in the processing entity, and wherein in response to determining that the primary dispatch queue of the processing entity is not empty, processing at least one task control block in the primary dispatch queue of the processing entity;
determining that another task control block is ready to be dispatched; and
in response to determining that the another task control block was dispatched earlier, placing the another task control block in a primary dispatch queue of a processing entity on which the another task control block was dispatched earlier.
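The queue-selection logic claimed above (serve the primary dispatch queue when non-empty, otherwise steal from another queue in the ordered list that meets a threshold criteria) can be sketched minimally. The claim does not define the threshold; a minimum queue depth is an assumed interpretation.

```python
from collections import deque

def select_task(ordered_queues, min_depth=2):
    """Take a task control block from the primary (first) queue when it has
    work; otherwise scan the rest of the ordered list and steal from the
    first queue meeting the threshold criteria (here: a minimum depth)."""
    primary = ordered_queues[0]
    if primary:
        return primary.popleft()
    for q in ordered_queues[1:]:
        if len(q) >= min_depth:
            return q.popleft()
    return None
```

Re-dispatching a previously run task control block back to the primary queue of the same processing entity (the claim's final step) preserves cache affinity.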

US Pat. No. 10,394,613

TRANSFERRING TASK EXECUTION IN A DISTRIBUTED STORAGE AND TASK NETWORK

PURE STORAGE, INC., Moun...

1. A method for execution by a computer to manage distributed computing of a task, the method comprises:encoding a data object using an encoding matrix having a unity matrix portion to produce a plurality of sets of encoded data slices, wherein a set of encoded data slices of the plurality of sets of encoded data slices includes data encoded slices and redundancy encoded slices, wherein the data encoded slices result from the unity matrix portion of the encoding matrix;
dividing the task into a set of partial tasks, wherein a number of partial tasks corresponds to a number of data encoded slices in a set of encoded data slices;
determining processing speeds of a set of distributed storage and task (DST) execution units allocated for storing the plurality of sets of encoded data slices;
mapping storage and partial task assignments regarding the data encoded slices of the plurality of sets of encoded data slices to the set of DST execution units based on the processing speeds to produce storage-task mapping; and
outputting the data encoded slices of the plurality of sets of encoded data slices and the set of partial tasks to the set of DST execution units in accordance with the storage-task mapping.
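The unity-matrix encoding claimed above is systematic erasure coding: the identity rows of the encoding matrix reproduce the data slices verbatim (so partial tasks can run directly on them), while extra rows produce redundancy slices. A minimal sketch; the modular arithmetic over a prime is an illustrative field choice, not the claim's.

```python
def encode_with_unity(data, redundancy_matrix, prime=257):
    """Systematic encoding: identity (unity) rows pass the data through
    unchanged; each redundancy row is a linear combination of the data."""
    data_slices = list(data)  # unity matrix portion: data passes through
    redundancy_slices = [
        sum(c * d for c, d in zip(row, data)) % prime
        for row in redundancy_matrix
    ]
    return data_slices, redundancy_slices
```

Because the data slices are the raw data, the number of partial tasks can match the number of data encoded slices per set, and faster DST execution units can be mapped to more slices.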

US Pat. No. 10,394,612

METHODS AND SYSTEMS TO EVALUATE DATA CENTER PERFORMANCE AND PRIORITIZE DATA CENTER OBJECTS AND ANOMALIES FOR REMEDIAL ACTIONS

VMware, Inc., Palo Alto,...

1. In a process that prioritizes objects of a data center for application of remedial actions to correct performance problems with the objects, the specific improvement comprising:calculating an object rank of each object of the data center over a period of time, wherein each object rank is calculated as a weighted function of relative frequencies of alerts that occur within the period of time for the object;
calculating an object trend of each object of the data center, wherein each object trend is calculated as a weighted function of differences between a first relative frequency of alerts at a first time stamp and second relative frequency of alerts at a second time stamp for the object;
determining an order of priority of the objects for applying remedial actions based on the object ranks and object trends; and
executing the remedial actions based on the order of priority, thereby correcting performance problems of the objects.
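The rank and trend calculations claimed above can be sketched directly: rank is a weighted function of relative alert frequencies over the period, and trend is a weighted function of frequency differences between two time stamps. The specific weighting scheme is an assumption.

```python
def object_rank(alert_counts, weights):
    """Weighted function of the relative frequencies of each alert type
    that occurred for the object within the period."""
    total = sum(alert_counts.values())
    return sum(weights[a] * count / total for a, count in alert_counts.items())

def object_trend(freq_t1, freq_t2, weights):
    """Weighted differences between relative alert frequencies at two time
    stamps; a positive trend suggests the object is degrading."""
    return sum(weights[a] * (freq_t2[a] - freq_t1[a]) for a in weights)
```

Objects would then be sorted by rank (and trend as a secondary key) to produce the order of priority for remedial actions.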

US Pat. No. 10,394,611

SCALING COMPUTING CLUSTERS IN A DISTRIBUTED COMPUTING SYSTEM

Amazon Technologies, Inc....

1. A system, comprising:a plurality of computing devices configured to implement:
a current cluster having a plurality of nodes storing cluster data, wherein each node comprises a respective at least one storage device that stores a respective portion of the cluster data, wherein the current cluster receives access requests for the cluster data at a network endpoint for the current cluster; and
a cluster control interface configured to:
receive a cluster scaling request for the current cluster, wherein said cluster scaling request indicates a change in a number or type of the plurality of nodes in the current cluster;
in response to receiving the cluster scaling request:
create a new cluster having a plurality of nodes as indicated in the cluster scaling request, wherein the new cluster comprises the change in the number or type of the plurality of nodes from the current cluster; and
initiate a copy of the cluster data from the plurality of nodes of the current cluster to the plurality of nodes in the new cluster, wherein after completion of the copy of a respective portion of the cluster data from one of the plurality of nodes of the current cluster and before completion of the copy of another respective portion of the cluster data from another one of the plurality of nodes of the current cluster, the current cluster continues to respond to all read requests directed to the network endpoint for the current cluster including a read request directed to the respective portion that has already been copied to the new cluster; and
subsequent to completion of the copy of the cluster data to the plurality of nodes in the new cluster: move the network endpoint for the current cluster to the new cluster, wherein after the network endpoint is moved, access requests directed to the network endpoint are sent to the new cluster; and
disable the current cluster.

US Pat. No. 10,394,610

MANAGING SPLIT PACKAGES IN A MODULE SYSTEM

Oracle International Corp...

1. One or more non-transitory machine-readable media storing instructions which, when executed by one or more processors, cause:generating a first module membership record, for a first module in a module system that specifies accessibility of each module in a plurality of modules to other modules in the plurality of modules, at least by:
identifying a first set of one or more packages in the first module;
determining that a first package, from the first set of one or more packages, comprises a first set of executable code;
based at least in part on determining that the first package comprises the first set of executable code: including, in the first module membership record, an indication that the first package belongs to the first module;
generating a second module membership record, for a second module in the module system, at least by:
identifying a second set of one or more packages in the second module;
determining that a second package, from the second set of one or more packages, does not comprise any sets of executable code;
based at least in part on determining that the second package does not comprise any sets of executable code: omitting, from the second module membership record, any indication that the second package belongs to the second module;
determining, based at least on the first module membership record and the second module membership record, whether a code conflict exists in the module system,
wherein generating the first module membership record, generating the second module membership record, and determining whether the code conflict exists are performed by executable code associated with one or more of: an integrated development environment (IDE), a compiler, a loader, a runtime environment, a module assembler, or a runtime image assembler.
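The membership-record scheme claimed above can be sketched as follows: a package is recorded only when it contains executable code, so packages holding only non-code resources (documentation, for instance) are omitted and can never produce a split-package conflict. All names are hypothetical.

```python
def build_membership_records(modules):
    """modules[module] = {package: has_executable_code}. A package appears
    in its module's membership record only when it contains executable
    code; empty packages are omitted."""
    return {
        mod: {pkg for pkg, has_code in packages.items() if has_code}
        for mod, packages in modules.items()
    }

def split_package_conflicts(records):
    """A code conflict exists when the same package name appears in the
    membership records of two different modules."""
    owner, conflicts = {}, set()
    for mod, pkgs in records.items():
        for pkg in pkgs:
            if pkg in owner and owner[pkg] != mod:
                conflicts.add(pkg)
            owner[pkg] = mod
    return conflicts
```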

US Pat. No. 10,394,609

DATA PROCESSING

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for data processing in a multi-threaded processing arrangement, the method comprising:receiving a data processing task to be executed on a data file comprising a plurality of data records in a nested records structure, where one or more data records are within one or more other data records, the data file and the plurality of data records each having an associated record description that defines a data layout, including information relating to parameters or attributes of the plurality of data records, wherein the record description comprises metadata;
based on the received data processing task, pre-processing the data file to analyze the record descriptions associated with the data file and the plurality of data records, and determine therefrom characteristics of the data records;
dividing the data file into a plurality of data sets based on the analyzing of the record descriptions associated with the data file, and a comparing of the determined characteristics of the data records, wherein one or more data records of the plurality of data records are divided between different data sets of the plurality of data sets;
based on the determined plurality of data sets, allocating the data sets of the divided data file to processing threads for parallel processing by the multi-threaded processing arrangement; and
wherein the record descriptions comprise a record descriptor associated with each data record, each record descriptor comprising information relating to parameters or attributes of the associated data record.

US Pat. No. 10,394,606

DYNAMIC WEIGHT ACCUMULATION FOR FAIR ALLOCATION OF RESOURCES IN A SCHEDULER HIERARCHY

Hewlett Packard Enterpris...

1. A method for resource allocation, the method comprising:assigning a plurality of weights to a plurality of child schedulers, each child scheduler associated with a multiplier, the child schedulers being in a scheduler hierarchy with a plurality of parent schedulers, wherein each of the plurality of parent schedulers is associated with a unique group of the child schedulers;
for each child scheduler that is active, propagating a value based on the assigned weight of the child scheduler upwards in said scheduler hierarchy through a respective chain of schedulers to cause a given scheduler at a given level in the respective chain of schedulers to be associated with an accumulation of values based on the weights of descendent schedulers of the given scheduler along the respective chain;
for the given scheduler at the given level, factoring in the multiplier applied to said accumulated values of the descendant schedulers in the respective chain of schedulers to generate a multiplied value;
propagating said multiplied value upwards through said respective chain of schedulers; and
distributing a given set of resources assigned to said scheduler hierarchy based on the multiplied value at each level of schedulers to cause the schedulers in the scheduler hierarchy to be proportioned resources from said given set of resources based on said multiplied value.
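The upward weight accumulation claimed above can be sketched recursively: each active child scheduler contributes its weight, each inner scheduler sums its descendants' values and applies its multiplier, and resources are proportioned by the resulting multiplied values. The class layout and field names are hypothetical.

```python
class Scheduler:
    def __init__(self, weight=0.0, multiplier=1.0, children=None, active=True):
        self.weight = weight          # leaf (child-scheduler) weight
        self.multiplier = multiplier  # per-scheduler multiplier
        self.children = children or []
        self.active = active

def accumulated_value(node):
    """Propagate active descendants' weights upward through the chain,
    applying each scheduler's multiplier to the accumulated value."""
    if not node.children:
        return node.weight if node.active else 0.0
    return node.multiplier * sum(accumulated_value(c) for c in node.children)

def distribute(resources, schedulers):
    """Proportion a set of resources among sibling schedulers according to
    their multiplied values."""
    values = [accumulated_value(s) for s in schedulers]
    total = sum(values)
    return [resources * v / total for v in values]
```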

US Pat. No. 10,394,605

MUTABLE CHRONOLOGIES FOR ACCOMMODATION OF RANDOMLY OCCURRING EVENT DELAYS

Ab Initio Technology LLC,...

1. A method for causing a computing system to process events from a sequence of events that defines a correct order for said events independent from an order in which those events are received over an input device or port, said method including:defining a first variable,
defining, for said first variable, a first chronology of operations on said first variable associated with received events,
receiving a first event that pertains to said first variable,
executing a first operation on said first variable, wherein said first operation results in a first update of said first chronology,
after having received said first event, receiving a delayed event that pertains to said first variable,
executing a second operation on said first variable, wherein said second operation results in a second update of said first chronology, wherein said first update occurred earlier than said second update,
determining whether to reprocess said previously executed first operation based on a determination, using said first chronology, of whether said earlier first update is valid or invalid, and
reprocessing said previously executed first operation based on the determination that said earlier first update is invalid;
wherein said delayed event precedes said first event in said sequence,
wherein said first update is based on said first event, and
wherein said second update is based on said delayed event.

US Pat. No. 10,394,604

METHOD FOR USING LOCAL BMC TO ALLOCATE SHARED GPU RESOURCES INSIDE NVME OVER FABRICS SYSTEM

SAMSUNG ELECTRONICS CO., ...

1. A system comprising:a non-volatile memory (NVM) device that stores data and manages execution of a task,
and wherein the NVM device comprises:
a network interface configured to receive data and the task,
a NVM processor configured to determine if the NVM processor will execute that task or if the task will be assigned to a shared resource within the system based on the shared resource more efficiently performing the task than the NVM processor, and
a local communication interface configured to communicate with at least one other device within the system;
a main board sub-system comprising:
a switched fabric in communication with the NVM device, wherein the switched fabric sends the data and task to the NVM device as a destination for the task, and
a resource arbitration circuit configured to:
receive a request to assign the task to the shared resource, and
manage the execution of the task by the shared resource; and
the shared resource configured to execute the task.

US Pat. No. 10,394,603

VIRTUAL CONTAINER PROCESSING ON HIGH PERFORMANCE COMPUTING PROCESSORS

GENBAND US LLC, Plano, T...

1. A method comprising:with a first execution unit of a processor, executing instructions for a processing task on behalf of a first virtual container, the first virtual container being configured to utilize computing resources of the first execution unit without demanding more computing resources than the first execution unit provides, the first execution unit having exclusive access to a first arithmetic logic unit (ALU); and
with a second execution unit of the processor, processing instructions for the processing task on behalf of a second virtual container, the second virtual container being configured to utilize computing resources of the first execution unit without demanding more computing resources than the first execution unit provides, the second execution unit having exclusive access to a second Arithmetic Logic Unit (ALU);
wherein the first virtual container corresponds to a first Virtual Network Function (VNF) component and the second virtual container corresponds to a second VNF component, and wherein the first execution unit and the second execution unit operate in parallel; and
provisioning an additional VNF component for execution on a third execution unit of the processor, the third execution unit having exclusive access to a third ALU.

US Pat. No. 10,394,602

SYSTEM AND METHOD FOR COORDINATING PROCESS AND MEMORY MANAGEMENT ACROSS DOMAINS

BlackBerry Limited, Wate...

1. A method at a computing device having a plurality of concurrently operative operating systems including an originating operating system, comprising at least one originating process, and a target operating system, comprising one or more resources, the method comprising:operating a proxy process within the target operating system on the computing device, the proxy process being marked to avoid being shut down even if the target operating system is running low on memory;
receiving, by the proxy process, from the originating operating system, a first request for the at least one originating process to interact with a resource of the one or more resources from the target operating system, the first request including at least one process identifier identifying the at least one originating process, and a resource identifier that identifies the requested resource;
sending a second request, from the proxy process to the target operating system, for the resource;
determining, by the target operating system, that no process currently running within the target operating system provides access to the resource;
responsive to the determining, starting, by the target operating system, a target process;
providing access to the resource to the target process;
returning a handle to the target process from the proxy process to the originating operating system, the handle enabling the at least one originating process to interact directly with the target process within the target operating system to thereby interact with the resource, wherein the proxy process maintains an association between process identifiers of one or more processes external to the target operating system, including the at least one originating process, and resource identifiers of the one or more resources with which the external processes interact;
receiving, at the proxy process, from the at least one originating process in the originating operating system, an indication that the at least one originating process no longer interacts with the requested resource, the indication comprising the resource identifier;
receiving, at the target operating system, from the proxy process, an indication that the requested resource is no longer needed by the proxy process upon determining, based on the association maintained by the proxy process, that no process external to the target operating system interacts with the requested resource identified by the resource identifier; and
ending, by the target operating system, the target process.

US Pat. No. 10,394,601

MEDIA BALANCER EMPLOYING SELECTABLE EVALUATION SOURCE

iHeartMedia Management Se...

15. A system comprising:at least one processor and associated memory configured to implement a media balancer, the media balancer configured to:
receive option parameters indicating preferences related to generation of a target schedule, wherein generation of the target schedule is based on a master schedule;
select a selected media scheduler from a plurality of potential media schedulers based on the option parameters;
transmit from the media balancer to the selected media scheduler:
first information associated with the option parameters;
a request to perform, based on the first information, an evaluation of potential replacement media items to be inserted into the target schedule in place of original media items included in the master schedule;
receive, in response to the request, second information indicating results of the evaluation; and
at least one processor and associated memory configured to implement a local scheduling system, the local scheduling system configured to:
generate the target schedule by replacing at least one original media item included in the master schedule with a replacement media item selected based on the second information.

US Pat. No. 10,394,600

SYSTEMS AND METHODS FOR CACHING TASK EXECUTION

CAPITAL ONE SERVICES, LLC...

1. A method for processing a job in a form of computer-executable code, comprising:receiving, at a client device over a network, information representing the job;
receiving the job at a job scheduler of a master device;
dividing, by the job scheduler, the job into at least two tasks comprising a first task and a second task;
for the first task:
generating, by the job scheduler, a signature corresponding to the first task representative of whether the first task has been processed;
searching, by a task scheduler of the master device, a data structure for the generated signature;
if the signature is found in the data structure, retrieving a result associated with the first task by the task scheduler;
if the signature is not found in the data structure,
sending the first task by the task scheduler over the network to a task executor device,
processing the first task by the task executor device,
receiving a result of the first task processing by the task scheduler, and
storing the task result and a signature corresponding to the processed first task in the data structure by the task scheduler;
aggregating, by the job scheduler, the task result into a job result;
sending, by the job scheduler and over the network, the job result to the client device; and
processing the job result by the client device.
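The signature-keyed caching claimed above is essentially memoization of task execution: a task whose signature is already in the data structure is served from the cache, and only unseen tasks are sent to a task executor. A minimal sketch; the executor is modeled as a plain callable and the signature as a hash of the task's code.

```python
import hashlib

class CachingTaskScheduler:
    """Memoize task results keyed by a task signature, so a previously
    processed task is never re-sent to the executor."""
    def __init__(self, executor):
        self.executor = executor  # stand-in for the task executor device
        self.cache = {}           # signature -> stored result
        self.executions = 0       # how many tasks actually ran

    def signature(self, task):
        return hashlib.sha256(task.encode()).hexdigest()

    def run(self, task):
        sig = self.signature(task)
        if sig in self.cache:
            return self.cache[sig]  # cache hit: skip the executor
        self.executions += 1
        result = self.executor(task)
        self.cache[sig] = result    # store result and signature
        return result
```

The job scheduler would then aggregate the per-task results (cached or freshly computed) into the job result returned to the client device.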

US Pat. No. 10,394,599

BREAKING DEPENDENCE OF DISTRIBUTED SERVICE CONTAINERS

International Business Ma...

1. A computer-implemented method for managing service container dependency, the computer-implemented method comprising:receiving, by a computer, a notification that a first service container is running on a host environment;
determining, by the computer, whether the first service container is dependent on a second service container being up and running on the host environment;
responsive to the computer determining that the first service container is dependent on a second service container being up and running on the host environment, determining, by the computer, whether the second service container is running on the host environment;
responsive to the computer determining that the second service container is not running on the host environment, responding, by the computer, to service requests from the first service container to the second service container using stub data running on the computer that corresponds to the second service container; and
responsive to the computer determining that the second service container is running on the host environment, generating, by the computer, the stub data corresponding to the second service container based on an image, an image identifier, and a port number identifier corresponding to the second service container, the service requests received from the first service container to the second service container, and responses to the service requests sent from the second service container to the first service container.
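The stub behavior in this claim resembles record-and-replay service virtualization: request/response pairs captured while the second container is running become the stub data that answers the first container's requests while it is down. A minimal sketch under that reading (the registry class, container identifiers, and request format are illustrative assumptions):

```python
class StubRegistry:
    """Record/replay stand-in for a dependency container."""
    def __init__(self):
        self.recordings = {}   # container id -> {request: response}

    def record(self, container_id, request, response):
        # While the dependency container is up, capture its request/response pairs as stub data
        self.recordings.setdefault(container_id, {})[request] = response

    def replay(self, container_id, request):
        # While the dependency is down, answer from the recorded stub data (None if unseen)
        return self.recordings.get(container_id, {}).get(request)

registry = StubRegistry()
# Dependency container is running: generate stub data from observed traffic
registry.record("db-container", "GET /users/1", {"id": 1, "name": "alice"})

# Dependency goes down: the first container's request is served from the stub
print(registry.replay("db-container", "GET /users/1"))   # {'id': 1, 'name': 'alice'}
```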

US Pat. No. 10,394,598

SYSTEM AND METHOD FOR PROVIDING MSSQ NOTIFICATIONS IN A TRANSACTIONAL PROCESSING ENVIRONMENT

ORACLE INTERNATIONAL CORP...

1. A system for providing multiple servers, single queue (MSSQ) notifications in a transactional processing environment, comprising:a transactional processing environment executing on one or more microprocessors;
a first server of the transactional processing environment, wherein the first server includes a first main thread and a subsidiary thread, wherein the first server provides a unanimous service and a specific service, wherein the first server is associated with a specific request queue, and wherein the first server includes an internal memory queue;
a second server of the transactional processing environment, wherein the second server includes a second main thread, and wherein the second server provides the unanimous service;
a main request queue of the transactional processing environment, wherein the first server and the second server share the main request queue; and
an application programming interface (API) for use by the first server and the second server, wherein the first server and the second server use the API to advertise the unanimous service on the main request queue, and wherein the first server uses the API to advertise the specific service on the specific request queue;
wherein the main request queue receives and queues a plurality of request messages for the unanimous service, wherein a first request message of the plurality of request messages is dequeued by the first main thread of the first server, and wherein a second request message of the plurality of request messages is dequeued by the second main thread of the second server;
wherein the specific request queue of the first server receives and queues request messages for the specific service, wherein each of the queued request messages for the specific service is dequeued by the subsidiary thread of the first server, and stored in the internal memory queue of the first server; and
wherein the first main thread of the first server checks the internal memory queue of the first server before checking the main request queue for request messages to process.
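The dequeue priority in the last limitation, internal memory queue first, shared main queue second, can be sketched as below. Single-threaded step methods stand in for the main and subsidiary threads, and all queue and service names are illustrative.

```python
from collections import deque

class FirstServer:
    """Sketch of the claim's dequeue order: internal memory queue before the shared main queue."""
    def __init__(self, main_queue):
        self.main_queue = main_queue    # shared with the second server
        self.specific_queue = deque()   # specific request queue for this server only
        self.internal_queue = deque()   # internal memory queue fed by the subsidiary thread

    def subsidiary_step(self):
        # Subsidiary thread: drain specific-service requests into the internal memory queue
        while self.specific_queue:
            self.internal_queue.append(self.specific_queue.popleft())

    def main_step(self):
        # Main thread: check the internal memory queue before the main request queue
        if self.internal_queue:
            return self.internal_queue.popleft()
        if self.main_queue:
            return self.main_queue.popleft()
        return None

main_queue = deque(["unanimous-1", "unanimous-2"])
s1 = FirstServer(main_queue)
s1.specific_queue.append("specific-1")
s1.subsidiary_step()
print(s1.main_step())   # specific-1 (internal memory queue checked first)
print(s1.main_step())   # unanimous-1 (then the shared main queue)
```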

US Pat. No. 10,394,597

FLEXIBLE BATCH JOB SCHEDULING IN VIRTUALIZATION ENVIRONMENTS

Amazon Technologies, Inc....

1. A system, comprising:one or more computing devices comprising one or more respective hardware processors and memory and configured to:
implement one or more programmatic interfaces enabling clients of a job scheduling service of a provider network to indicate respective scheduling descriptors associated with a plurality of jobs, the provider network configured to perform the plurality of jobs on behalf of the clients;
receive, from a particular client and via the one or more programmatic interfaces, a particular scheduling descriptor associated with a particular job, wherein the particular job indicates one or more executable programs or scripts whose execution is dependent at least in part on use of a shared resource, and wherein the particular scheduling descriptor comprises at least one scheduling flexibility parameter indicating one or more desired execution times for the particular job;
determine a target time to initiate an execution of the particular job, based at least in part on an analysis of (a) a plurality of scheduling descriptors corresponding to a set of jobs including the particular job, whose executions contend for use of the shared resource, (b) a temporal load distribution policy, and (c) at least two scheduling flexibility parameters in scheduling descriptors of different jobs in the set of jobs, wherein the at least two scheduling flexibility parameters specify different desired execution times;
transmit a job execution request indicating the target time to a selected execution platform;
perform one or more executable operations at the selected execution platform in accordance with the job execution request;
collect a result indicator of the iteration of the particular job from the selected execution platform; and
in response to a job status request from the particular client, display one or more metrics associated with the iteration of the particular job.

US Pat. No. 10,394,596

TRACKING OF MEMORY PAGES BY A HYPERVISOR

Red Hat, Inc., Raleigh, ...

1. A non-transitory machine-readable storage medium including data that, when accessed by a processing device, cause the processing device to:identify a first set of candidate memory pages associated with a virtual machine;
provide, by the virtual machine to a hypervisor associated with the virtual machine, a request to initiate a dirty tracking operation of the hypervisor for the first set of candidate memory pages;
receive an indication that the hypervisor has initiated the dirty tracking operation; and
provide a subsequent identification of a second set of candidate memory pages associated with the virtual machine in response to receiving the indication that the hypervisor has initiated the dirty tracking operation to discard one or more memory pages of the second set of candidate memory pages in view of the initiated dirty tracking operation.

US Pat. No. 10,394,595

METHOD TO MANAGE GUEST ADDRESS SPACE TRUSTED BY VIRTUAL MACHINE MONITOR

Intel Corporation, Santa...

1. A processor comprising:a register to store a first reference to a context data structure specifying a virtual machine context, the context data structure comprising a second reference to a target array; and
an execution unit comprising a logic circuit to:
execute a virtual machine (VM) based on the virtual machine context, wherein the VM comprises a guest operating system (OS) associated with a page table comprising a first memory address mapping between a guest virtual address (GVA) space and a guest physical address (GPA) space;
receive a request by the guest OS to switch from the first memory address mapping to a second memory address mapping, the request comprising an index value and a first root value;
retrieve an entry, identified by the index value, from the target array, the entry comprising a second root value; and
responsive to determining that the first root value matches the second root value, cause a switch from the first memory address mapping to the second memory address mapping.

US Pat. No. 10,394,594

MANAGEMENT OF A VIRTUAL MACHINE IN A VIRTUALIZED COMPUTING ENVIRONMENT BASED ON A CONCURRENCY LIMIT

International Business Ma...

1. A method of managing a virtualized computing environment, the method comprising:monitoring active virtual machine management operations on a first host among a plurality of hosts in the virtualized computing environment, wherein each active virtual machine management operation includes a plurality of sub-operations with associated concurrency limits that represent maximum numbers of concurrent sub-operations;
receiving a request to perform a virtual machine management operation, wherein the virtual machine management operation includes at least first and second sub-operations, the first sub-operation associated with a first concurrency limit that is a hypervisor concurrency limit, a storage system concurrency limit, a virtualization library concurrency limit, or a network concurrency limit, and the second sub-operation associated with a second concurrency limit that is a hypervisor concurrency limit, a storage system concurrency limit, a virtualization library concurrency limit, or a network concurrency limit;
in response to receiving the request, determining whether any of the first and second concurrency limits associated with the first and second sub-operations for the requested virtual machine management operation has been met based at least in part on the monitored active virtual machine management operations on the first host; and
initiating performance of the requested virtual machine management operation on a second host among the plurality of hosts in response to determining that at least one of the first and second concurrency limits associated with the first and second sub-operations for the requested virtual machine management operation has been met.
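The routing decision in this claim reduces to checking each sub-operation's active count against its associated concurrency limit on the first host, and initiating the operation on a second host when any limit has been met. A hedged sketch (host names, sub-operation names, and the data layout are assumptions for illustration):

```python
def pick_host(op_sub_ops, active_counts, limits):
    """Return the first host where no sub-operation's concurrency limit has been met.

    op_sub_ops:    sub-operation names of the requested management operation
    active_counts: host -> {sub_op: count of currently active sub-operations}
    limits:        sub_op -> concurrency limit (maximum concurrent sub-operations)
    """
    for host, counts in active_counts.items():
        if all(counts.get(sub, 0) < limits[sub] for sub in op_sub_ops):
            return host
    return None   # every host has at least one limit met

limits = {"hypervisor": 2, "storage": 1}
active = {
    "host-1": {"hypervisor": 2, "storage": 0},   # hypervisor concurrency limit met
    "host-2": {"hypervisor": 0, "storage": 0},   # all limits clear
}
print(pick_host(["hypervisor", "storage"], active, limits))   # host-2
```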

US Pat. No. 10,394,593

NONDISRUPTIVE UPDATES IN A NETWORKED COMPUTING ENVIRONMENT

International Business Ma...

1. A computer-implemented method for facilitating nondisruptive maintenance on a virtual machine (VM) in a networked computing environment, comprising:creating, in response to a receipt of a request to implement an update on an active VM, a copy of the active VM, wherein the copy is a snapshot VM;
installing, while saving any incoming changes directed to the active VM to a storage system, the update on the snapshot VM, wherein the update is not installed on the active VM;
applying, when the update on the snapshot VM is complete, the saved incoming changes on the snapshot VM; and
switching from the active VM to the snapshot VM so the snapshot VM becomes a new active VM and the active VM becomes an inactive VM,
wherein the storage system includes a first-in-first-out (FIFO) queue, and
wherein the saving includes inserting the incoming change in the FIFO queue with a reference count that indicates that the incoming change needs to be executed on both the active VM and the snapshot VM.
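The FIFO-with-reference-count mechanism in the last two limitations can be sketched as a queue whose entries carry a count of the VMs (active and snapshot) that still need to execute each saved change. The class and field names below are illustrative, and each VM is assumed to call `apply_all` once.

```python
from collections import deque

class ChangeQueue:
    """FIFO of incoming changes, each with a reference count of VMs that must still apply it."""
    def __init__(self, vm_count=2):
        self.vm_count = vm_count    # active VM + snapshot VM
        self.queue = deque()

    def save(self, change):
        # Each incoming change must be executed on both the active and the snapshot VM
        self.queue.append({"change": change, "refs": self.vm_count})

    def apply_all(self, vm_state):
        # Replay every pending change on one VM, decrementing its reference count
        for entry in self.queue:
            if entry["refs"] > 0:
                vm_state.append(entry["change"])
                entry["refs"] -= 1
        # Drop fully-applied changes from the front, preserving FIFO order for the rest
        while self.queue and self.queue[0]["refs"] == 0:
            self.queue.popleft()

q = ChangeQueue()
active, snapshot = [], []
q.save("write A")
q.save("write B")
q.apply_all(active)       # changes executed on the active VM during the update
q.apply_all(snapshot)     # applied to the snapshot VM once its update completes
print(active, snapshot)   # both VMs saw the same changes, in order
print(len(q.queue))       # fully-applied changes removed from the FIFO
```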

US Pat. No. 10,394,592

DEVICE AND METHOD FOR HARDWARE VIRTUALIZATION SUPPORT USING A VIRTUAL TIMER NUMBER

HUAWEI TECHNOLOGIES CO., ...

1. A device for hardware virtualization support to handle an interrupt targeted to a running virtual machine (VM), the device comprising:memory; and
a processor coupled to the memory and configured to:
run the VM;
access, from the running VM, a host system of the device, wherein the host system is accessible by a hypervisor configured to launch the VM;
process, by the host system, a configuration flag (CF) in the host system that enables delivery of a virtual timer of the host system to a guest operating system (OS), wherein the virtual timer is controlled by the host system;
record, by the host system, a virtual interrupt request (IRQ) number of the virtual timer when the CF is set, the virtual IRQ number identifying which of the running VM or the host system a physical interrupt targets; and
process, by the hypervisor, the virtual IRQ number to deliver the virtual timer to the guest OS in a first manner when the virtual IRQ number indicates that the physical interrupt is targeted to the running VM and in a second manner when the virtual IRQ number indicates that the physical interrupt is targeted to the host system, wherein the first manner delivers the virtual timer to the guest OS without loading a host OS state.

US Pat. No. 10,394,591

SANITIZING VIRTUALIZED COMPOSITE SERVICES

International Business Ma...

1. A computer-implemented method for sanitizing a virtualized composite service, the computer-implemented method comprising:providing, by one or more processors, a sanitization policy for each image within a virtualized composite service, wherein the virtualized composite service employs multiple virtual machine (VM) instances that initially use different policies for sanitizing sensitive data within each VM instance;
analyzing, by one or more processors, sanitization policies for multiple images in the virtualized composite service in order to detect inconsistencies among the sanitization policies;
analyzing, by one or more processors, the sanitization policies for images within the virtualized composite service for inconsistencies with sanitization policies for entities external to the virtualized composite service;
in response to detecting inconsistencies between the sanitization policies for images within the virtualized composite service and sanitization policies for entities external to the virtualized composite service, modifying, by one or more processors, the sanitization policies for images within the virtualized composite service to match the sanitization policies for entities external to the virtualized composite service;
responsive to finding inconsistencies between the sanitization policies for multiple images within the virtualized composite service, resolving, by one or more processors, the inconsistencies to produce a consistent sanitization policy;
using, by one or more processors, the consistent sanitization policy to sanitize the virtualized composite service to create a sanitized virtualized composite service;
receiving, by one or more processors, a request for the virtualized composite service from a requester; and
responding, by one or more processors, to the request for the virtualized composite service by returning the sanitized virtualized composite service to the requester.

US Pat. No. 10,394,590

SELECTING VIRTUAL MACHINES TO BE MIGRATED TO PUBLIC CLOUD DURING CLOUD BURSTING BASED ON RESOURCE USAGE AND SCALING POLICIES

International Business Ma...

1. A method for selecting virtual machines to be migrated to a public cloud during cloud bursting, the method comprising:determining current resource usage for each of a plurality of virtual machine instances running in a private cloud;
obtaining one or more scaling policies for said plurality of virtual machine instances running in said private cloud;
computing additional resource usage for each of said plurality of virtual machine instances with a scaling policy when scaled out;
receiving a cost for running virtual machine instances in said public cloud based on resource usage;
determining, by a processor, a cost of running a virtual machine instance of said plurality of virtual machine instances in said public cloud using said current resource usage and said additional resource usage when said virtual machine instance of said plurality of virtual machine instances is scaled out based on said received cost for running said virtual machine instances in said public cloud;
selecting said virtual machine instance of said plurality of virtual machine instances to be migrated from said private cloud to said public cloud in response to said cost being less than a threshold value; and
migrating said selected virtual machine instance of said plurality of virtual machine instances to said public cloud from said private cloud.
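The selection step in this claim is a cost projection: for each VM instance, projected public-cloud cost = (current resource usage + additional scaled-out usage) × received unit cost, and an instance is selected for migration when that cost is below the threshold. A sketch of that arithmetic (field names and units are illustrative assumptions):

```python
def select_vms_to_burst(vms, unit_cost, threshold):
    """Pick VM instances whose projected public-cloud cost stays under the threshold.

    vms: list of dicts with 'name', 'current' (current resource usage), and
    'scale_out' (additional usage when its scaling policy fires).
    """
    selected = []
    for vm in vms:
        projected = (vm["current"] + vm["scale_out"]) * unit_cost
        if projected < threshold:   # cost less than threshold: candidate for migration
            selected.append(vm["name"])
    return selected

vms = [
    {"name": "web", "current": 2.0, "scale_out": 1.0},   # 3.0 resource units when scaled out
    {"name": "db",  "current": 8.0, "scale_out": 4.0},   # 12.0 resource units when scaled out
]
print(select_vms_to_burst(vms, unit_cost=0.10, threshold=0.50))   # ['web']
```

Here only `web` is selected: 3.0 × 0.10 = 0.30 is under the 0.50 threshold, while `db` at 1.20 is not.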

US Pat. No. 10,394,589

VERTICAL REPLICATION OF A GUEST OPERATING SYSTEM

International Business Ma...

1. A method for testing a host machine including a processing unit, the method comprising:creating one or more virtual disks in memory assigned to a host operating system;
assessing available virtual storage space within the memory assigned to the host operating system; and
assessing an operational parameter associated with the available virtual storage space, wherein the operational parameter corresponds to a performance capacity and comprises a performance limitation of the processing unit, a performance limitation of program code executable by the processing unit, and a performance limitation of the host operating system, including:
creating a hierarchy of guest operating systems utilizing the one or more virtual disks, including assigning a first guest operating system to a first layer in the hierarchy and assigning the first guest operating system in a replication role;
vertically replicating the first layer, including creating a second guest operating system and one or more additional virtual disks in the virtual storage assigned to the first guest operating system and placing the second guest operating system in a second layer of the hierarchy; and
repeating the vertical replication, including placing the second guest operating system in the replication role, wherein a conclusion of the vertical replication is responsive to a characteristic of the parameter, including initiating paging responsive to determining a sum of utilization of the host operating system memory by the created hierarchy of guest operating systems exceeds a predetermined amount, wherein paging augments the virtual storage space with secondary storage.

US Pat. No. 10,394,588

SELF-TERMINATING OR SELF-SHELVING VIRTUAL MACHINES AND WORKLOADS

International Business Ma...

1. A method, comprising:receiving, by a cloud tuning service from a first workload, a first abstract request to perform a shelving operation on the first workload, wherein the first workload is executing on a first virtual machine on a first host in a first cloud computing environment, of a plurality of cloud computing environments, wherein the cloud tuning service executes on a system external to each of the plurality of cloud computing environments, wherein the first abstract request identifies the first workload but does not identify the first host or the first cloud computing environment, does not include required credentials, and wherein the first abstract request does not include specific operations required to shelve the first workload;
determining, by the cloud tuning service, that use of a first system resource of a plurality of system resources of the first host by the first virtual machine does not exceed a threshold;
upon receiving the first abstract request, generating, by the cloud tuning service operating external to each of the plurality of cloud computing environments, a specific request that is compatible with the first cloud computing environment by:
determining, by the cloud tuning service, based on a predefined configuration, that the first workload is executing on the first host in the first cloud computing environment;
identifying a first set of commands that are specific to the first cloud computing environment based on the predefined configuration, wherein the first set of commands cause the first cloud computing environment to shelve the first workload and wherein the first set of commands includes at least one command that was not specified in the first abstract request; and
identifying a set of login credentials needed to access the first cloud computing environment; and
initiating, by the cloud tuning service, performance of the shelving operation on the first workload using the specific request, wherein the specific request includes the set of login credentials and the first set of commands, and wherein shelving the first workload removes the first workload from the first virtual machine and the first host and stores an image of the first workload in a data store.

US Pat. No. 10,394,587

SELF-TERMINATING OR SELF-SHELVING VIRTUAL MACHINES AND WORKLOADS

International Business Ma...

1. A system, comprising:a computer processor; and
a memory containing a program, which when executed by the processor, performs an operation comprising:
monitoring, by a cloud tuning service, use of each of a plurality of system resources by a first workload executing on a first virtual machine on a first host in a first cloud computing environment, of a plurality of cloud computing environments, wherein the cloud tuning service executes on the system, wherein the system is external to each of the plurality of cloud computing environments;
determining, by the cloud tuning service, that the use of a first system resource of the plurality of system resources by the first workload does not exceed a threshold;
determining, based on the use of the first system resource by the first workload not exceeding the threshold, that the first workload has completed processing a set of tasks;
receiving, by the cloud tuning service from the first workload, a first abstract request to shelve the first workload, wherein the first abstract request identifies the first workload but does not identify the first host or the first cloud computing environment, does not include required credentials, and wherein the first abstract request does not include specific operations required to shelve the first workload;
upon receiving the first abstract request, generating, by the cloud tuning service operating external to each of the plurality of cloud computing environments, a specific request that is compatible with the first cloud computing environment by:
determining, by the cloud tuning service, based on a predefined configuration, that the first workload is executing on the first host in the first cloud computing environment;
identifying a first set of commands that are specific to the first cloud computing environment based on the predefined configuration, wherein the first set of commands cause the first cloud computing environment to shelve the first workload and wherein the first set of commands includes at least one command that was not specified in the first abstract request; and
identifying a set of login credentials needed to access the first cloud computing environment; and
initiating, by the cloud tuning service, shelving of the first workload by transmitting the specific request to the first cloud computing environment, wherein the specific request includes the set of login credentials and the first set of commands, and wherein shelving the first workload removes the first workload from the first virtual machine and the first host and stores an image of the first workload in a data store.

US Pat. No. 10,394,586

USING CAPABILITY INDICATORS TO INDICATE SUPPORT FOR GUEST DRIVEN SURPRISE REMOVAL OF VIRTUAL PCI DEVICES

Red Hat Israel, Ltd., Ra...

1. A method for removing a virtual device from a virtual machine having a guest operating system (OS), the virtual machine managed by a hypervisor executing on a processing device, comprising:receiving, by the hypervisor, a notification from the guest OS, the notification comprising a capability indicator value indicating a support level for surprise removal of a virtual device of the guest OS, the surprise removal of the virtual device comprising removal of the virtual device from the virtual machine without first providing a warning to the guest OS;
storing, by the hypervisor, the capability indicator value corresponding to the virtual device in a mapping table;
subsequently receiving, by the processing device executing the hypervisor, a request to remove the virtual device from the virtual machine;
responsive to receiving the request to remove the virtual device, accessing, by the hypervisor, the mapping table to obtain the capability indicator value corresponding to the virtual device;
identifying, by the processing device executing the hypervisor, in view of the obtained capability indicator value, a particular set of actions associated with the obtained capability indicator value, the particular set of actions to be performed to remove the virtual device from the virtual machine, the particular set of actions including at least removing the virtual device from the virtual machine without first providing the warning to the guest OS when the capability indicator indicates a safe support level, or at least first providing the warning to the guest OS before removing the virtual device from the virtual machine when the capability indicator indicates an unsafe support level; and
removing the virtual device from the virtual machine using the particular set of actions.
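The core of this claim is a lookup: the hypervisor stores the guest-reported capability indicator per device in a mapping table, then selects the removal action set from the stored value. A minimal sketch (indicator values, device names, and action strings are illustrative):

```python
SAFE, UNSAFE = "safe", "unsafe"   # capability indicator values (illustrative)

class Hypervisor:
    def __init__(self):
        self.mapping_table = {}   # virtual device -> capability indicator value

    def notify(self, device, indicator):
        # Guest OS reports the support level for surprise removal of this device
        self.mapping_table[device] = indicator

    def remove(self, device):
        # On a removal request, look up the stored indicator to pick the action set
        indicator = self.mapping_table[device]
        if indicator == SAFE:
            return ["remove"]               # surprise removal: no warning to the guest first
        return ["warn_guest", "remove"]     # unsafe support level: warn the guest, then remove

hv = Hypervisor()
hv.notify("virtio-net", SAFE)
hv.notify("virtio-blk", UNSAFE)
print(hv.remove("virtio-net"))   # ['remove']
print(hv.remove("virtio-blk"))   # ['warn_guest', 'remove']
```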

US Pat. No. 10,394,585

MANAGING GUEST PARTITION ACCESS TO PHYSICAL DEVICES

Microsoft Technology Lice...

1. A method, implemented in a computing device comprising a memory space, the method comprising:identifying, in a host of the computing device, a physical device to be made accessible to a guest partition of the computing device, the physical device comprising a memory-mapped I/O device where first and second portions of the physical device are mapped to first and second regions of the memory space, respectively;
virtualizing, by the host of the computing device, the first portion of the physical device for indirect access to the physical device by the guest partition, the first portion including at least part of a control plane for the physical device, the virtualizing providing an exposed control plane available to guest partitions through the host;
the virtualizing comprising intermediating, by the host of the computing device, accesses to the first portion of the physical device by the guest partition, wherein the guest partition interfaces with the exposed control plane, and wherein the host virtualizes access to the control plane by mapping the accesses between the first portion and the first memory region; and
allowing the guest partition to directly access the non-virtualized second portion of the physical device, the non-virtualized second portion including at least part of a data plane for the physical device, wherein the guest partition directly accesses the second portion by directly accessing the second memory region.

US Pat. No. 10,394,584

NATIVE EXECUTION BRIDGE FOR SANDBOXED SCRIPTING LANGUAGES

Atlassian Pty Ltd, Sydne...

1. A computer-implemented method, comprising:receiving, at a scripting language execution sandbox, a request to execute one or more scripting language commands, the scripting language execution sandbox executing using one or more processors of a single computing device and programmed to execute scripting language computer program scripts with restrictions on certain memory accesses and program calls, wherein the scripting language execution sandbox includes a scripting language component communicatively coupled to a native execution component of a native execution bridge, the native execution component executing using the one or more processors of the single computing device and programmed to securely programmatically communicate with the scripting language component and to execute natively executable commands without the same restrictions on certain memory accesses and program calls;
sending the one or more scripting language commands from the scripting language component of the native execution bridge to the native execution component of the native execution bridge;
determining, using the native execution component of the native execution bridge, based at least in part on a security policy, whether to execute the one or more scripting language commands as corresponding native commands outside the scripting language component;
in response to determining to execute the one or more scripting language commands, translating the one or more scripting language commands into one or more natively executable commands;
in response to translating the one or more scripting language commands into the one or more natively executable commands, executing, at the native execution component of the native execution bridge, the one or more natively executable commands;
in response to determining not to execute the one or more scripting language commands as corresponding native commands, executing, at the scripting language execution sandbox, the one or more scripting language commands,
wherein the method is performed on the single computing device.
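The branching in this claim is a policy gate: the native execution component decides per command whether to translate and run it natively or leave it to the sandboxed interpreter. A hedged sketch of that dispatch (the policy set, the translation table, and the command names are assumptions for illustration):

```python
class NativeBridge:
    """Sketch of the policy gate: run a command natively if allowed, else in the sandbox."""
    def __init__(self, policy, native_ops):
        self.policy = policy          # security policy: command names allowed to run natively
        self.native_ops = native_ops  # translations to natively executable commands

    def execute(self, command, arg):
        if command in self.policy and command in self.native_ops:
            # Translate the scripting-language command and execute it natively
            return ("native", self.native_ops[command](arg))
        # Policy denies native execution: stays inside the scripting sandbox
        return ("sandbox", f"interpreted {command}({arg})")

bridge = NativeBridge(policy={"hash"},
                      native_ops={"hash": lambda s: hash(s) & 0xFF})
where, _ = bridge.execute("hash", "abc")
print(where)                          # native
print(bridge.execute("eval", "x"))    # ('sandbox', 'interpreted eval(x)')
```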

US Pat. No. 10,394,583

AUTOMATED MODEL GENERATION FOR A SOFTWARE SYSTEM

CA, Inc., Islandia, NY (...

1. A method comprising:accessing transaction data generated from monitoring of a plurality of transactions in a system comprising a plurality of software components, wherein at least a particular one of the plurality of transactions comprises data generated by a particular model simulating operation of a particular one of the plurality of software components in the particular transaction;
analyzing the transaction data, using a data processing apparatus, to identify respective sets of attributes for each of the plurality of transactions;
determining, using the data processing apparatus, that the set of attributes of the particular transaction meets a particular one of a set of conditions, wherein the particular transaction involves a subset of the plurality of software components and the particular model;
selecting a portion of the transaction data describing the particular transaction based on the particular transaction meeting the particular condition;
determining that the set of attributes of another one of the plurality of transactions does not satisfy the particular condition, wherein the other transaction involves the subset of software components;
identifying another portion of the transaction data describing the other transaction; and
autonomously generating an additional model of another one of the subset of software components using the portion of the transaction data based on the particular transaction meeting the particular condition, wherein the other portion of the transaction data describing the other transaction is excluded from use in generation of the additional model based on the other transaction failing to meet the particular condition, and the additional model is used to launch a computer-implemented simulation of the other software components within subsequent transactions of the system.

US Pat. No. 10,394,582

METHOD AND ARRANGEMENT FOR MANAGING PERSISTENT RICH INTERNET APPLICATIONS

TELEFONAKTIEBOLAGET LM ER...

1. An application execution server comprising:a processor and a memory, the memory containing instructions executable by the processor whereby the application execution server is configured to:
responsive to receiving a request from a Rich Internet Application (RIA) executing on a user device, create a background process on the application execution server;
after execution of the RIA on the user device has terminated and in response to the background process recognizing an event associated with the RIA, trigger, by the background process, re-execution of the RIA on the user device;
wherein the RIA is accessible via a web browser of the user device.

US Pat. No. 10,394,581

OPTIMIZED USER INTERFACE RENDERING

International Business Ma...

1. A computer program product for optimized user interface rendering, the computer program product comprising:one or more tangible computer-readable hardware storage devices and program instructions stored on at least one of the one or more tangible storage devices, the program instructions comprising:
program instructions to identify one or more functional elements having a priority level, and one or more device characteristics of a device, wherein each one of the one or more functional elements is a segment of computer software composed in one or more technology layers;
program instructions to determine a hardware index that represents computation capabilities of the device based on values and weights associated with each component of the device;
program instructions to determine a video index that represents video rendering capabilities of the device based on values and weights of the device components that are associated with at least one of visually rendering and auditorily rendering a user interface;
program instructions to determine a selection index based on scaling a result of dividing the hardware index by the video index;
program instructions to determine a first functional element of the one or more functional elements that has a highest priority level from the priority level;
program instructions to determine whether there is an appropriate technology layer for the first functional element based on comparing the selection index to one or more technology layer ranges corresponding to one or more technology layers associated with the first functional element;
based on determining that there is an appropriate technology layer for the first functional element:
program instructions to determine a second functional element of the one or more functional elements that has a next highest priority level from the priority level;
program instructions to determine an appropriate technology layer for the second functional element based on comparing the selection index to one or more technology layer ranges corresponding to one or more technology layers associated with the second functional element;
program instructions to determine a cumulative index based on adding the appropriate rendering index of the technology layer of the first functional element and the appropriate technology layer rendering index of the second functional element;
program instructions to determine that the cumulative index exceeds the hardware index; and
program instructions to determine whether there is another appropriate technology layer for the second functional element, based on determining whether another appropriate technology layer has a lower rendering index than the appropriate technology layer of the second functional element.
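
The index arithmetic in the claim (a hardware index from weighted component values, a video index from weighted rendering-related components, and a selection index from scaling their quotient, then matched against per-layer ranges) can be sketched as below. The component names, weights, scale factor, and layer ranges are illustrative assumptions, not values from the patent.

```python
def weighted_index(components):
    """Weighted sum of component capability values (illustrative)."""
    return sum(value * weight for value, weight in components.values())

# Hypothetical device profile: each entry is (value, weight).
hardware_components = {"cpu": (8.0, 0.5), "ram": (6.0, 0.3), "storage": (4.0, 0.2)}
video_components = {"gpu": (5.0, 0.7), "display": (4.0, 0.3)}

SCALE = 10  # assumed scaling factor

hardware_index = weighted_index(hardware_components)
video_index = weighted_index(video_components)
selection_index = SCALE * hardware_index / video_index

def pick_technology_layer(selection_index, layer_ranges):
    """Return the first technology layer whose range contains the index."""
    for layer, (lo, hi) in layer_ranges.items():
        if lo <= selection_index <= hi:
            return layer
    return None  # no appropriate technology layer for this element

# Hypothetical per-element layer ranges.
layers = {"html": (0, 8), "canvas": (8, 15), "webgl": (15, 30)}
layer = pick_technology_layer(selection_index, layers)
```

The cumulative-index check in the claim would then sum the rendering indices of the chosen layers across elements and compare against `hardware_index`.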

US Pat. No. 10,394,580

DYNAMIC ADDITION AND REMOVAL OF OPERATING SYSTEM COMPONENTS

Microsoft Technology Lice...

1. A method in a computing device, comprising:
receiving a call from an application executing on the computing device;
determining an operating system component intended to receive the call that does not exist in an operating system of the computing device; and
hydrating the component into the operating system of the computing device based at least in part on said determining, said hydrating comprising dynamic installation by the computing device of the component into the operating system to handle the call.
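
A minimal sketch of the claimed "hydration" flow, with a dictionary-backed catalog standing in for the installable component source and an in-process install standing in for dynamic installation; all names here are hypothetical.

```python
class OperatingSystem:
    def __init__(self, available_components):
        # Components that could be installed on demand (hypothetical catalog).
        self._catalog = dict(available_components)
        self._installed = {}

    def _hydrate(self, name):
        """Dynamically install a missing component into the running OS."""
        self._installed[name] = self._catalog[name]()

    def handle_call(self, component_name, *args):
        # The component intended to receive the call may not exist yet.
        if component_name not in self._installed:
            self._hydrate(component_name)
        return self._installed[component_name](*args)

# Usage: the media component is absent until an application first calls it.
os_ = OperatingSystem({"media": lambda: (lambda clip: f"decoded:{clip}")})
result = os_.handle_call("media", "clip.mp4")
```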

US Pat. No. 10,394,579

AUTOMATICALLY FIXING INACCESSIBLE WIDGETS DURING MOBILE APPLICATION EXECUTION

International Business Ma...

1. A method comprising:
identifying, during execution of a mobile application, an image element from a set of one or more user interface elements of the mobile application that is inaccessible to a set of users, wherein said set of users comprises at least one of (i) one or more users with a hearing impairment and (ii) one or more users with a vision impairment, wherein said identifying comprises determining that the image element is one of: an ImageButton type element and an ImageView type element based on corresponding user interface element type information derived from a user interface view hierarchy of a given user interface screen, wherein the user interface view hierarchy is an extensible markup language representation of the given user interface screen and comprises multiple items of information pertaining to each user interface element in the screen, wherein said multiple items of information comprise user interface element type, user interface element label, and bounding coordinates of the user interface element;
transmitting an image associated with the image element to an image-based content retrieval application programming interface;
generating, during execution of the mobile application and via the image-based content retrieval application programming interface, a text description associated with the image element;
extracting, during execution of the mobile application, at least a portion of the text description related to an accessibility property of the image element; and
adjusting, during execution of the mobile application, the accessibility property of the image element to render the image element accessible to the set of users;
wherein said identifying, said transmitting, said generating, said extracting, and said adjusting are carried out by at least one computing device.
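
The identify-describe-adjust loop can be sketched as below, with the parsed view hierarchy as a list of dicts and a stub callable standing in for the image-based content retrieval API; the field names and the "first clause" extraction rule are illustrative assumptions.

```python
def fix_inaccessible_widgets(view_hierarchy, describe_image):
    """Add content descriptions to image widgets that lack them."""
    fixed = []
    for element in view_hierarchy:
        is_image = element["type"] in ("ImageButton", "ImageView")
        if is_image and not element.get("contentDescription"):
            # Stub for the image-based content retrieval API call.
            caption = describe_image(element["image"])
            # Keep only the portion relevant to accessibility (first clause here).
            element["contentDescription"] = caption.split(",")[0]
            fixed.append(element["label"])
    return fixed

# Hypothetical view hierarchy for one screen.
hierarchy = [
    {"type": "ImageButton", "label": "btn_share", "image": "share.png",
     "contentDescription": None, "bounds": (0, 0, 48, 48)},
    {"type": "TextView", "label": "title", "bounds": (0, 48, 200, 96)},
]
fixed = fix_inaccessible_widgets(hierarchy, lambda img: "share icon, blue arrow")
```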

US Pat. No. 10,394,578

INTERNET OF THINGS DEVICE STATE AND INSTRUCTION EXECUTION

International Business Ma...

1. A computer-implemented method, comprising:
intercepting, by one or more processors in a computing device, an instruction, upon receipt of the instruction, by the one or more processors in the computing device on a communications network, via the communications network, prior to execution of the instruction by the one or more processors in the computing device, wherein the computing device comprises an Internet of Things computing device;
determining, by the one or more processors, a state of the computing device is a first state, wherein the state of the computing device is accessible only to the one or more processors;
based on the computing device being in the first state and a portion of the instruction, determining, by the one or more processors, that the instruction is precluded from executing on the computing device, wherein the determining the instruction is precluded from executing on the computing device further comprises:
mapping, by the one or more processors, the first state to a hierarchy of rules stored on a memory comprising a rule based engine in the computing device; and
determining, by the one or more processors, that the hierarchy of the rules precludes execution of the instruction when the computing device is in the first state;
based on the determining that the hierarchy of the rules precludes execution of the instruction, queuing, by the one or more processors, the instruction on a memory in the computing device while the computing device is in the first state;
changing, by the one or more processors, the state of the computing device from the first state to a second state, wherein the state is changed exclusively in the rule based engine by the one or more processors;
based on the computing device being in the second state and a portion of the instruction, determining, by the one or more processors, that the queued instruction is allowed to execute on the computing device; and
automatically transmitting, by the one or more processors, the queued instruction, from the memory, for execution on the computing device, wherein the queued instruction is executed upon receipt from the transmitting.
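
A minimal sketch of the intercept-map-queue-flush cycle, with the hierarchy of rules reduced to a flat state-to-allowed-instructions mapping; the states, instruction kinds, and rule contents are illustrative assumptions.

```python
from collections import deque

class RuleEngine:
    """Rule-based engine mapping device state to instructions allowed to run."""
    def __init__(self, rules):
        self._rules = rules          # state -> set of allowed instruction kinds
        self.state = "idle"
        self._queue = deque()
        self.executed = []

    def intercept(self, instruction):
        """Intercept an instruction before execution; queue it if precluded."""
        if instruction["kind"] in self._rules.get(self.state, set()):
            self.executed.append(instruction)
        else:
            self._queue.append(instruction)

    def change_state(self, new_state):
        """State changes exclusively inside the rule engine; retry queued work."""
        self.state = new_state
        for _ in range(len(self._queue)):
            self.intercept(self._queue.popleft())

# Hypothetical rules: firmware updates only run while the device is idle.
engine = RuleEngine({"busy": {"telemetry"}, "idle": {"telemetry", "update"}})
engine.state = "busy"
engine.intercept({"kind": "update", "payload": "v2"})  # precluded: queued
engine.change_state("idle")                            # queued update now executes
```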

US Pat. No. 10,394,577

METHOD AND APPARATUS FOR AUTOMATIC PROCESSING OF SERVICE REQUESTS ON AN ELECTRONIC DEVICE

Deepassist Inc., Grand C...

1. A method performed by a computer system in communication with mobile devices via a network, the computer system including or having access to a data store, the method comprising:
receiving information about a first service request from a first mobile device;
searching the data store for a script file including a set of operation/display events for execution by the first mobile device to fulfill the first service request;
in response to the script file not being found in the data store: transmitting a signal to the first mobile device to notify the first mobile device that the first script file is not found; receiving from the first mobile device a first sequence of operation/display events performed on the first mobile device to fulfill the first service request; extracting a request template from the information about the first service request; building a first script file for association with the request template using the first sequence of operation/display events and the information about the first service request; and storing the request template and the associated first script file in the data store;
receiving information about a second service request from a second mobile device;
in response to the second service request being related to the request template, retrieving the first script file and the request template from the data store; and
transmitting the first script file and the request template to the second mobile device.
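
The record-once, replay-for-related-requests flow can be sketched as below; the template extraction rule (dropping the request-specific argument after a colon) and the event names are illustrative assumptions.

```python
class ScriptStore:
    """Data store mapping request templates to recorded event scripts."""
    def __init__(self):
        self._store = {}

    @staticmethod
    def template_of(request):
        # Extract a request template (here: drop request-specific arguments).
        return request.split(":")[0]

    def handle_request(self, request, record_events):
        """Return a cached script, or record and store one if none exists."""
        template = self.template_of(request)
        if template not in self._store:
            # Script not found: ask the device to perform and report the events.
            self._store[template] = record_events(request)
        return template, self._store[template]

store = ScriptStore()
# First device fulfils the request manually; its events are recorded.
_, script1 = store.handle_request("set_ringtone:beep",
                                  lambda r: ["open_settings", "tap_sound", "pick"])
# A second, related request reuses the stored script without re-recording.
_, script2 = store.handle_request("set_ringtone:chime", lambda r: ["unused"])
```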

US Pat. No. 10,394,576

CONTROL FOR THE SAFE CONTROL OF AT LEAST ONE MACHINE

SICK AG, Waldkirch (DE)

1. A control for the safe control of at least one machine, the control comprising:
at least one input unit for receiving input signals from at least one signal generator;
at least one output unit for outputting output signals to the at least one machine;
a control unit for generating the output signals in dependence on the input signals; and
a connection unit having at least one connection socket for connecting an external input device that can be used for configuring the control,
wherein the connection unit has at least one connection terminal for connecting the signal generators and/or the machine and is separable from the control,
wherein the connection socket can be removed from the connection unit or from the control and comprises a memory with configuration data of the control,
and wherein the connection unit in the connected state provides a first connection and a second connection between the connection socket and the control unit in the control.

US Pat. No. 10,394,575

ELECTRONIC DEVICE WITH AUTOMATIC MODE SWITCHING

Apple Inc., Cupertino, C...

14. An electronic device, comprising:
a display having a touch sensor;
an ambient light sensor that detects ambient light;
a motion sensor that detects a change in position of the electronic device; and
a controller that enables and disables the touch sensor based on the ambient light and the change in position.
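
A sketch of one plausible policy for the claimed controller; the claim only requires that the ambient-light and motion readings both feed the enable/disable decision, so the threshold and the in-a-pocket heuristic below are illustrative assumptions.

```python
def touch_enabled(ambient_lux, position_changed, dark_lux=5.0):
    """Decide whether the touch sensor should be enabled.

    Illustrative policy: in the dark with no recent change in position,
    the device is likely in a pocket or face-down, so the controller
    disables the touch sensor to avoid spurious input.
    """
    return not (ambient_lux < dark_lux and not position_changed)

in_pocket = touch_enabled(ambient_lux=1.0, position_changed=False)  # disabled
picked_up = touch_enabled(ambient_lux=1.0, position_changed=True)   # re-enabled
```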

US Pat. No. 10,394,574

APPARATUSES FOR ENQUEUING KERNELS ON A DEVICE-SIDE

VIA ALLIANCE SEMICONDUCTO...

1. An apparatus for enqueuing kernels on a device-side, comprising:
a CSP (Command Stream Processor); and
a MXU (Memory Access Unit), coupled to the CSP and a video memory, comprising a PID (Physical-thread ID) buffer,
wherein the video memory comprises a ring buffer, the MXU allocates space of the ring buffer for a first hardware thread of a kernel according to a first instruction of the CSP, stores a profile of the first hardware thread in the PID buffer, the profile comprises a thread ID, a tail address of the allocated space of the ring buffer and a ready flag indicating that a plurality of first commands of the first hardware thread are not ready,
wherein each first command of the plurality of the first commands when being executed generates a kernel dispatch command for generating a first child-kernel-instance, and the first child-kernel-instance comprises a plurality of second commands, each second command of the plurality of the second commands when being executed generates a kernel dispatch command for generating a second child-kernel instance, and the first and second child-kernel instances are descendant kernels from the kernel;
wherein the PID buffer stores profiles of a plurality of hardware threads following the original initiation sequence of the plurality of the hardware threads, wherein the plurality of hardware threads includes the first hardware thread;
wherein the ring buffer contains memory space from a head address to a locked-tail address, and the ring buffer at most stores a predefined number of the plurality of the hardware threads;
wherein the predefined number of the plurality of hardware threads is equal to 96.

US Pat. No. 10,394,573

HOST BUS ADAPTER WITH BUILT-IN STORAGE FOR LOCAL BOOT-UP

Avago Technologies Intern...

1. A method of a storage area network comprising:
responsive to data received from a server at a host bus adapter via a bus controller of the adapter, wherein the data is received in a request that specifies an address:
in a case where the address specified in the request comprises a predetermined address corresponding to a default boot logical unit (LUN) of a non-volatile memory (NVM) of the adapter, storing the data in the NVM; and
in a case where the address specified in the request does not comprise the predetermined address corresponding to a boot LUN of the NVM, communicating the data over the storage area network.
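
The address-based routing in the two cases above can be sketched as below; the predetermined boot-LUN address and the data shapes are illustrative assumptions.

```python
BOOT_LUN_ADDRESS = 0x0  # assumed predetermined address of the default boot LUN

class HostBusAdapter:
    """HBA that stores boot-LUN writes locally and forwards the rest."""
    def __init__(self):
        self.nvm = {}        # built-in non-volatile memory
        self.forwarded = []  # traffic sent out over the storage area network

    def handle_request(self, address, data):
        if address == BOOT_LUN_ADDRESS:
            # Request targets the default boot LUN: keep it in the built-in NVM.
            self.nvm[address] = data
        else:
            # Anything else goes out over the SAN as usual.
            self.forwarded.append((address, data))

hba = HostBusAdapter()
hba.handle_request(0x0, b"bootloader-image")   # stored locally for local boot-up
hba.handle_request(0x7F, b"user-data")          # forwarded over the SAN
```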

US Pat. No. 10,394,572

POWER ADAPTER AND METHOD FOR UPGRADING THE POWER ADAPTER

Guangdong Oppo Mobile Tel...

1. A power adapter comprising:
a radio frequency unit;
a micro controller unit configured to:
determine whether to upgrade a firmware of the micro controller unit,
transmit, when the micro controller unit determines to upgrade the firmware of the micro controller unit, a request for requesting firmware upgrade data to a server via the radio frequency unit and an antenna,
control the power adapter to switch to a firmware upgrade mode from a standard charging mode upon transmission of the request,
receive the firmware upgrade data from the server to upgrade the firmware of the micro controller unit in the firmware upgrade mode, and
control the power adapter to switch to the standard charging mode from the firmware upgrade mode upon completion of the firmware upgrade; and
a charging interface configured to charge a terminal in the standard charging mode.

US Pat. No. 10,394,571

PASSING DATA FROM A HOST-BASED UTILITY TO A SERVICE PROCESSOR

Lenovo Enterprise Solutio...

1. A method, comprising:
loading, using a processor, an interrupt handler and runtime code during initialization of a computer before booting an operating system;
requesting, using a processor, that the operating system transfer a data file via an interface;
transferring, using a processor, the data file to an area accessible to the runtime code;
requesting, using a processor, that the interrupt handler pass the data file to a service processor; and
passing, using a processor, the data file from the accessible area to the service processor via a memory-mapped input/output window of the service processor, wherein the data file is transferred to the service processor without waiting for a system reboot.

US Pat. No. 10,394,570

METHOD OF GENERATING BOOT IMAGE FOR FAST BOOTING AND IMAGE FORMING APPARATUS FOR PERFORMING THE METHOD, AND METHOD OF PERFORMING FAST BOOTING AND IMAGE FORMING APPARATUS FOR PERFORMING THE METHOD

HP PRINTING KOREA CO., LT...

1. A method of generating a boot image for fast booting an image forming apparatus, the method comprising:
in response to a first power-on of the image forming apparatus, initializing a bootloader to begin booting of the image forming apparatus;
detecting a hardware setting change in the image forming apparatus;
in response to detecting the hardware setting change, displaying boot modes on a display of a user interface coupled to the image forming apparatus, the user interface to receive an input selecting one of the displayed boot modes, the displayed boot modes including a boot image generating mode and a normal boot mode; and
in response to the input selecting the boot image generating mode:
initializing, using at least one processor, an operating system and at least one application installed in the image forming apparatus,
terminating processes that are not used to execute the operating system and the at least one application, from among processes that are performed when the initializing of the operating system and the at least one application is completed,
suspending remaining processes performed in the image forming apparatus, and
generating the boot image for fast booting while the remaining processes are suspended, the generated boot image for fast booting including information regarding a system state of the image forming apparatus;
in response to a second power-on of the image forming apparatus, from a power-off condition of the image forming apparatus and subsequent to the first power-on of the image forming apparatus, performing fast booting by:
initializing the bootloader,
determining whether the generated boot image for fast booting has an error,
in response to determining the generated boot image for fast booting does not have the error, loading the generated boot image for fast booting and restoring the image forming apparatus to the system state included in the generated boot image for fast booting before re-initializing the at least one application, and
in response to determining the generated boot image for fast booting has the error:
displaying fixing modes on the display of the user interface, the user interface to receive an input selecting one of the displayed fixing modes, the displayed fixing modes including the boot image generating mode, a backup boot image replacing mode, and a normal booting switching mode, and
fixing the error according to the input selecting one of the displayed fixing modes by:
generating another boot image for fast booting when the user interface receives the input selecting the boot image generating mode from among the displayed fixing modes,
retrieving a backup copy of the boot image for fast booting when the user interface receives the input selecting the backup boot image replacing mode from among the displayed fixing modes, and
performing a normal booting when the user interface receives the input selecting the normal booting switching mode from among the displayed fixing modes.

US Pat. No. 10,394,568

EXCEPTION HANDLING FOR APPLICATIONS WITH PREFIX INSTRUCTIONS

INTERNATIONAL BUSINESS MA...

1. A computer program product for managing exception conditions, the computer program product comprising:
a non-transitory computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
selecting a plurality of instruction units of an instruction stream to be received in parallel by a plurality of instruction decode units of a processor, wherein the plurality of instruction units includes a prefix instruction and a prefixed instruction, the prefix instruction being an instruction to modify the prefixed instruction and to be forwarded from one decode unit of the plurality of instruction decode units to another decode unit of the plurality of instruction decode units that includes the prefixed instruction to decode the prefix instruction and the prefixed instruction together;
determining an exception condition associated with the prefixed instruction; and
performing exception handling for the prefixed instruction, wherein the performing comprises determining an address at which to restart execution of the instruction stream, wherein the determining the address comprises adjusting the address at which to restart execution based on the prefix instruction to be separately received by an instruction decode unit, wherein the prefix instruction and the prefixed instruction, which are separate instructions to be independently received by the plurality of instruction decode units in parallel, are to be decoded by the other decode unit, and have a dependent relationship with one another in, at least, handling the exception condition, are considered in the performing exception handling.

US Pat. No. 10,394,567

TEMPORARILY FAVORING SELECTION OF STORE REQUESTS FROM ONE OF MULTIPLE STORE QUEUES FOR ISSUANCE TO A BANK OF A BANKED CACHE

International Business Ma...

1. A method of data processing in a data processing system including a plurality of processor cores each having a respective store-through upper level cache and a store-in banked lower level cache, the method comprising:
buffering store requests of the plurality of processor cores destined for the banked lower level cache in multiple store queues including a first store queue and a second store queue;
determining whether all store requests in the multiple store queues target only a common bank of the banked lower level cache;
based on determining that all store requests in the multiple store queues contain store requests targeting only a common bank of the banked lower level cache, temporarily favoring selection, for issuance to the banked lower level cache, of store requests from the first store queue over those in the second store queue, wherein temporarily favoring selection includes temporarily biasing arbitration logic to select store requests from the first store queue over those of the second store queue in multiple consecutive selections; and
while temporarily favoring selection of store requests from the first store queue, selecting and issuing store requests to the common bank for processing from both the first store queue and the second store queue.
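
The arbitration policy can be sketched as below: when every pending store targets the same bank, queue A is favored for a few consecutive selections, while stores from both queues still reach the bank. The number of favored rounds and the bank mapping are illustrative assumptions.

```python
from collections import deque

def drain_stores(queue_a, queue_b, bank_of, favor_rounds=2):
    """Issue stores from two queues to a banked cache, temporarily
    favoring queue A when all pending stores target one common bank."""
    issued = []
    bias = 0  # consecutive selections granted to queue A so far
    while queue_a or queue_b:
        pending = list(queue_a) + list(queue_b)
        common_bank = len({bank_of(s) for s in pending}) == 1
        if common_bank and queue_a and bias < favor_rounds:
            issued.append(("A", queue_a.popleft()))  # favored selection
            bias += 1
        elif queue_b:
            issued.append(("B", queue_b.popleft()))
            bias = 0
        else:
            issued.append(("A", queue_a.popleft()))
    return issued

# All four stores hit bank 0, so the arbiter issues from A twice before B.
order = drain_stores(deque([0x00, 0x40]), deque([0x80, 0xC0]),
                     bank_of=lambda addr: 0)
```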

US Pat. No. 10,394,566

BANKED CACHE TEMPORARILY FAVORING SELECTION OF STORE REQUESTS FROM ONE OF MULTIPLE STORE QUEUES

International Business Ma...

7. A processing unit, comprising:
a plurality of processor cores each having a respective store-through upper level cache and a store-in banked lower level cache;
a core interface unit coupled between the plurality of processor cores and the banked lower level cache, wherein the core interface unit includes:
multiple store queues that buffer store requests of the plurality of processor cores destined for the banked lower level cache, wherein the multiple store queues include a first store queue and a second store queue;
an arbiter that, based on determining that the multiple store queues contain store requests targeting a common bank of the banked lower level cache, temporarily favors selection, for issuance to the banked lower level cache, of store requests from the first store queue over those in the second store queue for multiple consecutive selections; and
wherein the core interface unit, while temporarily favoring selection of store requests from the first store queue for the multiple consecutive selections, selects and issues to the common bank for processing store requests from both the first store queue and the second store queue; and
wherein the arbiter is configured to determine whether store requests in the multiple store queues target multiple banks of the banked lower level cache and to concurrently issue requests from the multiple queues to the multiple banks of the banked lower level cache based on determining that the multiple store queues contain store requests targeting multiple banks of the banked lower level cache.

US Pat. No. 10,394,565

MANAGING AN ISSUE QUEUE FOR FUSED INSTRUCTIONS AND PAIRED INSTRUCTIONS IN A MICROPROCESSOR

International Business Ma...

1. A method of managing an issue queue for fused instructions and paired instructions in a microprocessor, the microprocessor comprising:
a dispatcher and an execution slice, wherein the execution slice includes a mapper, a double issue queue, issue queue logic, an execution unit, an even age array, and an odd age array, wherein the method comprises:
dispatching a fused instruction to a first entry in the double issue queue, wherein the fused instruction occupies two halves of the first entry in the double issue queue, wherein an age of each instruction in the double issue queue is tracked using the even age array and the odd age array, and wherein dispatching the fused instruction to the first entry in the double issue queue comprises adding a slot in the even age array for a first half of the fused instruction and adding a slot in the odd age array for a second half of the fused instruction;
dispatching two paired instructions to a second entry in the double issue queue, wherein a first instruction of the two paired instructions occupies a first half of the second entry in the double issue queue, and wherein a second instruction of the two paired instructions occupies a second half of the second entry in the double issue queue;
issuing, by the issue queue logic, the fused instruction during a single cycle to the execution unit in response to determining, by the issue queue logic, that the fused instruction is ready to issue;
issuing, by the issue queue logic, the first instruction of the two paired instructions to the execution unit in response to determining, by the issue queue logic, that the first instruction of the two paired instructions is ready to issue;
determining, by the issue queue logic, that the fused instruction is the oldest ready instruction in the even age array, wherein at least one instruction in the odd age array is older than the fused instruction and ready to issue; and
issuing, by the issue queue logic, the fused instruction before the at least one instruction in the odd age array that is older than the fused instruction and ready to issue.

US Pat. No. 10,394,564

LOCAL CLOSED LOOP EFFICIENCY CONTROL USING IP METRICS

Intel Corporation, Santa...

1. A processor, comprising:
fetch hardware to fetch instructions;
decode hardware to decode the fetched instructions;
an execution unit to execute instructions, the execution unit being associated with a capture logic to periodically capture operating heuristics of the execution unit;
a detection logic coupled to the execution unit to evaluate the captured operating heuristics to determine whether there is a need to adjust an operating point of the execution unit, wherein the detection logic is to determine a number of instructions retired per clock cycle (IPC) based on the captured operating heuristics to determine whether there is a need to adjust the clock signal to the execution unit; and
control circuitry coupled to the detection logic and the execution unit to adjust the clock signal to the execution unit based on the evaluation of the operating heuristics, wherein the control circuitry is to decrease a frequency of the clock signal in response to determining that the IPC exceeds a first predetermined threshold, and increase the frequency of the clock signal to the execution unit in response to determining that the IPC drops below a second predetermined threshold.
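
The closed loop described above (decrease frequency when IPC exceeds a first threshold, increase it when IPC drops below a second) can be sketched as a simple control step; the thresholds, step size, and frequency bounds are illustrative assumptions.

```python
def adjust_frequency(freq_mhz, ipc, high=2.0, low=0.5,
                     step_mhz=100, f_min=400, f_max=3000):
    """One step of IPC-based closed-loop frequency control."""
    if ipc > high:
        # IPC above the first threshold: decrease the clock frequency.
        return max(f_min, freq_mhz - step_mhz)
    if ipc < low:
        # IPC below the second threshold: increase the clock frequency.
        return min(f_max, freq_mhz + step_mhz)
    return freq_mhz  # within band: leave the operating point alone

f1 = adjust_frequency(2000, ipc=2.5)  # high IPC: step down
f2 = adjust_frequency(f1, ipc=0.3)   # low IPC: step back up
```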

US Pat. No. 10,394,563

HARDWARE ACCELERATED CONVERSION SYSTEM USING PATTERN MATCHING

Intel Corporation, Santa...

1. A method for converting guest instructions into native instructions, the method comprising:
accessing a guest instruction;
performing a first level translation of the guest instruction, wherein the performing comprises:
comparing the guest instruction to a plurality of group masks and a plurality of tags stored in multi-level conversion tables by pattern matching subfields of the guest instruction in a hierarchical manner, wherein the conversion tables store mappings of guest instruction bit-fields to corresponding native instruction bit-fields; and
responsive to a hit in a conversion table, substituting a bit-field in the guest instruction with a corresponding native equivalent of the bit-field;
performing a second level translation of the guest instruction using a second level conversion table; and
outputting a resulting native instruction when the second level translation proceeds to completion.
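
The two-level table lookup can be sketched as below; the opcode layout, group masks, tags, and bit-field mappings are illustrative assumptions rather than any real instruction encoding.

```python
# First-level conversion tables: pattern-match guest bit-fields hierarchically.
FIRST_LEVEL = {
    # (group mask, tag) -> {guest bit-field: native bit-field}
    (0xF0, 0x10): {0x1: 0xA, 0x2: 0xB},  # hypothetical arithmetic group
    (0xF0, 0x20): {0x1: 0xC},            # hypothetical load/store group
}
SECOND_LEVEL = {0xA: "add_native", 0xB: "sub_native", 0xC: "ld_native"}

def convert(guest_instr):
    """Two-level guest-to-native conversion by masked pattern matching."""
    for (mask, tag), mapping in FIRST_LEVEL.items():
        subfield = guest_instr & ~mask & 0xFF
        if guest_instr & mask == tag and subfield in mapping:
            native_field = mapping[subfield]      # substitute the bit-field
            return SECOND_LEVEL[native_field]     # second-level translation
    return None  # no hit: fall back (e.g. interpret the instruction)

native = convert(0x12)  # group 0x10, subfield 0x2
```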

US Pat. No. 10,394,561

MECHANISM FOR FACILITATING DYNAMIC AND EFFICIENT MANAGEMENT OF INSTRUCTION ATOMICITY VIOLATIONS IN SOFTWARE PROGRAMS AT COMPUTING SYSTEMS

INTEL CORPORATION, Santa...

1. An apparatus comprising:
a memory to store a recording from a recording system; and
a processor coupled to the memory and configured to implement logic including:
replay logic to receive the recording from the recording system, of a first software thread running a first macro-instruction, and a second software thread running a second macro-instruction, wherein the first software thread and the second software thread are executed by a first core and a second core, respectively, of a processor at a computing device, wherein the recording system to record interleavings between the first and second macro-instructions; and
the replay logic is further to correctly replay the recording of the interleavings of the first and second macro-instructions precisely as they occurred, wherein correctly replaying includes replaying a local memory state of the first and second macro-instructions and a global memory state of the first and second software threads, and wherein the recording system includes a hardware-based memory race recording (MRR) system using chunks, wherein a chunk refers to a logical grouping of multiple, sequential instructions from a single software thread including the first software thread or the second software thread, wherein the chunk comprises a package chunk including encoding of a standard reference field (NTB) providing information relating to a state of the first macro-instruction or the second macro-instruction.

US Pat. No. 10,394,560

EFFICIENT RECORDING AND REPLAYING OF NON-DETERMINISTIC INSTRUCTIONS IN A VIRTUAL MACHINE AND CPU THEREFOR

VMware, Inc., Palo Alto,...

1. A computer system comprising:
a CPU having operational modes including at least a record mode and a replay mode; and
a buffer, wherein:
when the CPU is in the record mode, each time the CPU encounters an instruction that generates a non-deterministic value and if the buffer has available space for the non-deterministic value, the CPU executes the instruction and, without a context switch to an operating system or hypervisor, stores the non-deterministic value in the buffer; and
when the CPU is in the replay mode, each time the CPU encounters an instruction that would generate a non-deterministic value and the buffer contains a next non-deterministic value to be read, the CPU reads the next non-deterministic value from the buffer, without the context switch, and uses this next non-deterministic value as a result of execution of the instruction.
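
The record/replay symmetry can be sketched as below, with a hardware random-number instruction standing in for "an instruction that generates a non-deterministic value"; the buffer capacity and instruction choice are illustrative assumptions.

```python
import random
from collections import deque

class RecordReplayCPU:
    """Record non-deterministic results in a buffer; replay them later."""
    def __init__(self, mode, buffer=None, capacity=64):
        self.mode = mode  # "record" or "replay"
        self.buffer = buffer if buffer is not None else deque()
        self.capacity = capacity

    def rdrand(self):
        """A non-deterministic instruction (hardware RNG stands in here)."""
        if self.mode == "record":
            value = random.getrandbits(32)
            if len(self.buffer) < self.capacity:
                self.buffer.append(value)  # logged without a context switch
            return value
        # Replay: the buffered value becomes the instruction's result.
        return self.buffer.popleft()

recorder = RecordReplayCPU("record")
first_run = [recorder.rdrand() for _ in range(3)]
replayer = RecordReplayCPU("replay", buffer=recorder.buffer)
second_run = [replayer.rdrand() for _ in range(3)]
```

Because the replayed CPU consumes the logged values in order, the second run reproduces the first run's non-deterministic results exactly.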

US Pat. No. 10,394,559

BRANCH PREDICTOR SEARCH QUALIFICATION USING STREAM LENGTH PREDICTION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
determining, by a stream-based index accelerator predictor of a processor, a predicted stream length between an instruction address and a taken branch ending an instruction stream of a plurality of instructions to be fetched from memory including the instruction address, the stream-based index accelerator predictor comprising a plurality of rows of branch prediction data, the stream-based index accelerator predictor tracking an index to one or more of the rows corresponding to the instruction address of a first instruction received in the instruction stream and an exit point as the taken branch ending the instruction stream;
searching a first-level branch predictor of a hierarchical asynchronous lookahead branch predictor of the processor for a branch prediction in one or more entries in a search range bounded by the instruction address and the predicted stream length, wherein the first-level branch predictor is configured to provide a predicted target address as the branch prediction responsive to the searching of the first-level branch predictor;
triggering a search of a second-level branch predictor of the hierarchical asynchronous lookahead branch predictor based on failing to locate the branch prediction in the search range, wherein the second-level branch predictor comprises a greater capacity to store the entries than the first-level branch predictor;
updating an accuracy counter based on a number of times that the stream-based index accelerator predictor correctly predicts the predicted stream length and the correct prediction is used;
enabling use of the search range to extend searching of the first-level branch predictor beyond a default search depth that triggers the search of the second-level branch predictor based on determining that the accuracy counter is above an accuracy threshold; and
disabling use of the search range and enabling use of the default search depth based on determining that the accuracy counter is below the accuracy threshold.
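
The accuracy-gated search-range qualification can be sketched as below; the threshold value, default depth, and the decrement-on-miss policy are illustrative assumptions.

```python
class StreamLengthQualifier:
    """Track how often the stream-length prediction is correct and used,
    and gate the extended first-level search range on that accuracy."""
    def __init__(self, threshold=8, default_depth=4):
        self.accuracy = 0
        self.threshold = threshold      # illustrative values
        self.default_depth = default_depth

    def record(self, predicted_len, actual_len, prediction_used):
        if prediction_used and predicted_len == actual_len:
            self.accuracy += 1
        else:
            self.accuracy = max(0, self.accuracy - 1)

    def search_depth(self, predicted_len):
        # Above threshold: trust the predictor and search out to the
        # predicted taken branch; otherwise use the default depth that
        # triggers the second-level search sooner.
        if self.accuracy > self.threshold:
            return predicted_len
        return self.default_depth

q = StreamLengthQualifier(threshold=2)
for _ in range(3):
    q.record(predicted_len=12, actual_len=12, prediction_used=True)
depth = q.search_depth(12)  # accuracy above threshold: extended range enabled
```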

US Pat. No. 10,394,558

EXECUTING LOAD-STORE OPERATIONS WITHOUT ADDRESS TRANSLATION HARDWARE PER LOAD-STORE UNIT PORT

INTERNATIONAL BUSINESS MA...

1. A processing unit for executing one or more instructions, the processing unit comprising:
a load-store unit (LSU) for transferring data between memory and registers, the LSU configured to execute a plurality of instructions in an out-of-order (OoO) window by:
selecting an instruction from the OoO window, the instruction using an effective address;
in response to the instruction being a load instruction:
determining whether the effective address is present in an effective address directory (EAD); and
in response to the effective address being present in the EAD, issuing the load instruction using the effective address;
in response to the effective address of the load instruction not being present in the EAD, determining a real address of the load instruction mapped to the effective address from an effective-real translation (ERT), and issuing the load instruction using the real address of the load instruction.
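
The EAD-hit versus ERT-translation paths can be sketched as below, with the directory as a set and the translation table as a dict; caching the effective address after an ERT lookup is an illustrative assumption.

```python
def issue_load(effective_addr, ead, ert):
    """Issue a load with the effective address on an EAD hit; otherwise
    translate through the effective-real translation (ERT) table."""
    if effective_addr in ead:
        # EAD hit: no per-port address translation hardware needed.
        return ("load", "effective", effective_addr)
    real_addr = ert[effective_addr]  # effective -> real mapping
    ead.add(effective_addr)          # cache for subsequent loads (assumed)
    return ("load", "real", real_addr)

ead = {0x1000}
ert = {0x1000: 0x9000, 0x2000: 0xA000}
hit = issue_load(0x1000, ead, ert)   # issued with the effective address
miss = issue_load(0x2000, ead, ert)  # translated through the ERT
```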

US Pat. No. 10,394,557

DEBUGGING DATA PROCESSING TRANSACTIONS

ARM Limited, Cambridge (...

1. A method of processing data comprising:
executing program instructions including a target transaction having one or more program instructions that execute to generate speculative updates to state data and to commit said speculative updates if said target transaction completes without a conflict;
detecting a trigger condition corresponding to direct execution by processing hardware of a program instruction of said target transaction;
upon detecting said trigger condition, initiating software emulation of execution of said target transaction, said software emulation operating:
to store data representing one or more versions of said speculative updates generated during emulation of execution of said target transaction; and
to detect a conflict with said target transaction.

US Pat. No. 10,394,556

HARDWARE APPARATUSES AND METHODS TO SWITCH SHADOW STACK POINTERS

Intel Corporation, Santa...

1. A hardware processor comprising:
a hardware decode unit to decode an instruction; and
a hardware execution unit to execute the instruction to:
pop a token for a thread from a shadow stack, wherein the token includes a shadow stack pointer for the thread with at least one least significant bit (LSB) of the shadow stack pointer overwritten with a bit value of an operating mode of the hardware processor for the thread,
remove the bit value in the at least one LSB from the token to generate the shadow stack pointer, and
set a current shadow stack pointer to the shadow stack pointer from the token when the operating mode from the token matches a current operating mode of the hardware processor.
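The token manipulation in the claim above is plain bit arithmetic, sketched here with a single mode bit in the one least significant bit (the claim allows "at least one" LSB; the widths here are illustrative assumptions).

```python
def pack_token(shadow_stack_pointer, operating_mode_bit):
    # Overwrite the pointer's least significant bit with the mode bit.
    return (shadow_stack_pointer & ~1) | (operating_mode_bit & 1)

def switch_shadow_stack(token, current_mode_bit):
    """Recover the pointer from a popped token and switch stacks only
    when the stored operating mode matches the current one.
    """
    mode_bit = token & 1
    pointer = token & ~1   # remove the mode bit to regenerate the pointer
    if mode_bit != current_mode_bit:
        return None        # mode mismatch: do not set the pointer
    return pointer         # becomes the current shadow stack pointer
```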

US Pat. No. 10,394,555

COMPUTING NETWORK ARCHITECTURE FOR REDUCING A COMPUTING OPERATION TIME AND MEMORY USAGE ASSOCIATED WITH DETERMINING, FROM A SET OF DATA ELEMENTS, A SUBSET OF AT LEAST TWO DATA ELEMENTS, ASSOCIATED WITH A TARGET COMPUTING OPERATION RESULT

1. A method for reducing a computing operation time associated with determining a subset of at least two data elements, associated with a target computing operation result, from a set of data elements, the method comprising:
receiving or accessing, using one or more computing device processors, a first set of data elements,
wherein the first set comprises two or more data elements, wherein a first data element of the two or more data elements is associated with a first index of the first set, and a second data element of the two or more data elements is associated with a second index of the first set;
receiving or accessing, using the one or more computing device processors, a target computing operation result associated with at least two data elements of the first set,
wherein the target computing operation result comprises a target sum of the at least two data elements of the first set;
determining or generating, using the one or more computing device processors, a second set of mapped data elements mapped via one or more indexes to the first set of data elements,
wherein the second set of mapped data elements comprises two or more mapped data elements, wherein a first mapped data element of the two or more mapped data elements is associated with a first index of the second set, and a second mapped data element of the two or more mapped data elements is associated with a second index of the second set,
wherein each of the two or more mapped data elements is determined based on at least the target computing operation result,
utilizing a physical or virtual memory for storing the second set;
determining, using the one or more computing device processors, equivalence between at least two mapped data elements of the second set;
determining, using the one or more computing device processors, index information associated with the at least two mapped data elements of the second set;
determining, using the one or more computing device processors, and based on the index information associated with the at least two mapped data elements of the second set, related index information associated with at least two data elements of the first set;
determining, using the one or more computing device processors, and using the related index information associated with the at least two data elements of the first set, a first subset, of the first set, comprising the at least two data elements of the first set,
wherein a sum of the at least two data elements of the first subset equals the target sum;
deleting the second set from the physical or virtual memory, or overwriting, in the physical or virtual memory, the second set with data.
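The claim above is, in essence, the hash-map formulation of the two-sum problem: the "second set of mapped data elements" holds values derived from the target sum, matching mapped elements reveal a pair of indices into the first set, and the second set is deleted afterward. A minimal Python sketch (function and variable names are the editor's, not the patent's):

```python
def find_pair_with_target_sum(elements, target_sum):
    """Return indices of two elements summing to target_sum, or None.

    Sketch of the mapped-set idea: instead of an O(n^2) pairwise scan,
    a map from complement value to index is built in one pass, then
    deleted, mirroring the claimed second-set construction and cleanup.
    """
    complements = {}                   # the "second set": complement -> index
    result = None
    for index, value in enumerate(elements):
        if value in complements and result is None:
            # Equivalent mapped elements found: recover the related
            # index information into the first set.
            result = (complements[value], index)
        complements[target_sum - value] = index
    complements.clear()                # delete the second set from memory
    return result
```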

US Pat. No. 10,394,554

SOURCE CODE EXTRACTION VIA MONITORING PROCESSING OF OBFUSCATED BYTE CODE

STRIPE, INC., San Franci...

1. A system for implementing source code extraction, the system comprising:
a memory; and
a processor coupled with the memory to:
access obfuscated byte code by an interpreter,
process the obfuscated byte code, wherein the obfuscated byte code is an instruction set compiled from original source code, and wherein the instruction set is executable by the interpreter based on parsing and directly executing instructions from the instruction set one at a time,
monitor processing of the obfuscated byte code by the interpreter using instrumentation instances associated with functions of the obfuscated byte code,
record instruction sequences of functions based on the monitored processing of the obfuscated byte code by the interpreter, and
generate source code representations of the original source code based on the recorded instruction sequences of functions.
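Monitoring an interpreter's processing to record executed function sequences, as claimed above, has a loose analogue in Python's own tracing hook. This sketch is an illustration of the monitoring idea only, not the patented extraction system:

```python
import sys

def record_call_sequence(func, *args):
    """Record the sequence of function names entered while `func` runs.

    Loose analogy to the claimed instrumentation instances: the tracing
    hook observes the interpreter as it executes code, without needing
    the original source.
    """
    calls = []

    def tracer(frame, event, arg):
        if event == "call":
            calls.append(frame.f_code.co_name)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)   # always detach the tracer
    return result, calls
```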

US Pat. No. 10,394,553

REMOTE PROCEDURE CALLS IN A REPLICATED SERVER SYSTEM

GitHub, Inc., San Franci...

1. A method for command handling for replicated repositories comprising:
receiving a command from a computer system;
determining, using a processor, a set of servers to receive the command;
determining whether responses match for the command for at least a plurality of the set of servers;
in the event at least one of the responses do not match, determining whether the responses are required to match for the command;
in the event at least one of the responses do not match and in the event the responses are required to match for the command, determining a unified failure response; and
sending the unified failure response to the computer system.
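The response-reconciliation logic claimed above can be sketched under simplifying assumptions: each server is a callable, `must_match` is a predicate over commands, and the unified failure response is a plain string (all of these representations are the editor's, not the patent's).

```python
def handle_replicated_command(command, servers, must_match):
    """Fan a command out to a set of servers and reconcile the responses."""
    responses = [server(command) for server in servers]
    if all(r == responses[0] for r in responses):
        return responses[0]            # responses match: normal result
    if not must_match(command):
        return responses[0]            # divergence tolerated for this command
    # Divergent responses on a command that requires agreement:
    # determine and send a unified failure response.
    return f"error: replicas disagree on {command!r}"
```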

US Pat. No. 10,394,552

INTERFACE DESCRIPTION LANGUAGE FOR APPLICATION PROGRAMMING INTERFACES

DROPBOX, INC., San Franc...

1. A method comprising:
generating an application programming interface (API) specification defining API endpoints using data types that are supported in a plurality of programming languages, the data types comprising structs and tagged unions defining respective input data types and output data types for the API endpoints, the API specification being generated in a language that supports struct and tagged union data types;
inputting at least a portion of the API specification into an API engine configured to process and convert the at least the portion of the API specification into a target programming language and serialization format, the target programming language being selected from the plurality of programming languages;
processing and converting, by the API engine using one or more processors, the at least the portion of the API specification into the target programming language and serialization format, to yield a representation of the at least the portion of the API specification in the target programming language and serialization format; and
generating, by the API engine using one or more processors, one or more output files based on the representation of the at least the portion of the API specification, the one or more output files defining one or more of the API endpoints and respective input and output data types, the one or more output files comprising code in the target programming language for communicating data via the API endpoints based on the respective input and output data types and the serialization format.

US Pat. No. 10,394,551

MANAGING KERNEL APPLICATION BINARY INTERFACE/APPLICATION PROGRAMMING INTERFACE-BASED DISCREPANCIES RELATING TO KERNEL PACKAGES

Red Hat, Inc., Raleigh, ...

1. A method comprising:
retrieving a first kernel interface information from a first file within a kernel package, wherein the first kernel interface information relates to kernel interfaces associated with the kernel package, wherein the kernel interfaces comprise a first kernel application binary interface (kABI) and a second kABI, wherein the first file is a development file of the kernel package, wherein the development file comprises a hint about a source of the kernel package, the hint including an identification of a kernel developer of the kernel package;
forming a first dataset comprising the first kernel interface information relating to a first version of the kernel package, the first dataset being formed in view of a format of a second dataset relating to a second version of the kernel package, wherein the forming the first dataset comprises:
reading the development file to obtain a hash number of the first kABI, a hash number of the second kABI, and the hint; and
transforming the hash number of the first kABI, the hash number of the second kABI, and the hint into the first dataset into the format such that the first dataset is compatible and comparable with the second dataset and a third dataset relating to a third version of the kernel package;
comparing the first dataset with the second dataset;
comparing the first dataset with the third dataset;
detecting, by a processing device, kernel discrepancies based on the comparison of the first dataset with the second dataset and based on the comparison of the first dataset with the third dataset, wherein the kernel discrepancies comprise at least:
a discrepancy between a hash number associated with the first kABI of the first version of the kernel package and a hash number associated with a first kABI of the second version of the kernel package;
a discrepancy between the hash number associated with the first kABI of the first version and a hash number associated with a first kABI associated with a third version;
an unintended kernel discrepancy between the first kABI of the first version and the first kABI of the second version or the first kABI of the third version, wherein the unintended kernel discrepancy is caused by an error or a mistaken change in the first kABI of the first version; and
a first intended kernel discrepancy between the second kABI of the first version and a second kABI of the second version, wherein the first intended kernel discrepancy is caused by an update or an upgrade to a function associated with the second kABI of the first version; and
a second intended kernel discrepancy between the second kABI of the first version and a second kABI of the third version, wherein the second intended kernel discrepancy is caused by the update or the upgrade to the function associated with the second kABI of the first version;
generating a human-readable discrepancy report in response to detection of the kernel discrepancies, wherein the generating the human-readable discrepancy report further comprises:
generating a list of the kernel discrepancies to be illustrated in the human-readable discrepancy report in view of the detecting;
populating the human-readable discrepancy report with the hint corresponding to the unintended kernel discrepancy to assign a task to the kernel developer to correct the unintended kernel discrepancy; and
populating the human-readable discrepancy report with additional information from the first kernel interface information that identifies the update or the upgrade to the function associated with the second kABI of the first version, the additional information to be used to remove the unintended kernel discrepancy and maintain the first intended kernel discrepancy and the second intended kernel discrepancy;
determining whether the first version of the kernel package is ready to be released when a number of unintended kernel discrepancies, identified in the human-readable discrepancy report, satisfies a minimum kABI requirement for release; and
sending the first version of the kernel package to a computing system when the number of unintended kernel discrepancies identified in the human-readable discrepancy report satisfies the minimum kABI requirement for release, the computing system to execute the first version of the kernel package.

US Pat. No. 10,394,550

SYSTEM AND METHOD FOR SUPPORTING PATCHING IN A MULTITENANT APPLICATION SERVER ENVIRONMENT

ORACLE INTERNATIONAL CORP...

1. A system for supporting patching in an application server environment, comprising:
at least one computer and an application server environment executing thereon;
a plurality of managed servers that operate within the application server environment as part of a domain;
wherein a patching process is performed within the application server environment to update the plurality of managed servers, including for each managed server of the plurality of managed servers,
updating the managed server to use an instance of a patched home directory, and
restarting the managed server to cause the managed server to be updated according to the instance of the patched home directory of the managed server; and
wherein a load balancer is configured for use with the application server environment including, during the patching process, as each particular managed server of the plurality of managed servers is updated, directing requests associated with sessions to particular managed servers of the plurality of managed servers.

US Pat. No. 10,394,549

INFORMATION PROCESSING APPARATUS, UPDATING METHOD, AND RECORDING MEDIUM

Ricoh Company, Ltd., Tok...

1. An information processing apparatus, comprising:
a memory including a plurality of storage areas including a first storage area and a second storage area, each of which stores a same program; and
circuitry configured to:
obtain an update program to be used for updating the program stored in each one of the first storage area and the second storage area;
update the program stored in the second storage area with the update program, when the first storage area is activated and the second storage area is not activated; and
control the information processing apparatus to start operating with the updated program stored in the second storage area, after shutdown and activation of the information processing apparatus,
wherein, after the information processing apparatus has started to operate with the updated program stored in the second storage area after the shutdown and activation, the circuitry is further configured to update the program stored in the first storage area with the update program stored in the second storage area so that the first and second storage areas store the same update program.
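The update flow claimed above is the familiar A/B (dual-bank) scheme: patch the inactive storage area, reboot into it, then copy the update back so both areas hold the same program. A minimal simulation, in which "programs" are just version strings:

```python
class DualBankDevice:
    """A/B storage-area update sketch; not the patented apparatus."""

    def __init__(self, version):
        self.banks = {"A": version, "B": version}
        self.active = "A"

    def inactive(self):
        return "B" if self.active == "A" else "A"

    def apply_update(self, new_version):
        # Update the storage area that is not currently activated.
        self.banks[self.inactive()] = new_version

    def reboot(self):
        # After shutdown and activation, start operating with the
        # updated program in the other storage area.
        self.active = self.inactive()

    def sync_banks(self):
        # Once running on the updated bank, update the other bank so
        # both storage areas store the same updated program.
        self.banks[self.inactive()] = self.banks[self.active]
```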

US Pat. No. 10,394,548

ASSEMBLING DATA DELTAS IN VEHICLE ECUS AND MANAGING INTERDEPENDENCIES BETWEEN SOFTWARE VERSIONS IN VEHICLE ECUS USING TOOL CHAIN

1. A non-transitory computer readable medium including instructions that, when executed by at least one processor in a dependency management system, cause the at least one processor to perform operations for receiving and integrating a delta file in a vehicle to address a security vulnerability, comprising:
receiving, at a first Electronic Control Unit (ECU) in the vehicle, at least one memory position-independent code segment for addressing a security vulnerability of the first ECU, the at least one memory position-independent code segment comprising at least one executable delta file, the delta file comprising a plurality of deltas corresponding to a software update for software on the first ECU and startup code for executing the delta file in the first ECU;
executing the delta file at a first memory location of the first ECU, based on the startup code, in the first ECU;
checking to determine if the delta file is associated with a second ECU in the vehicle that is interdependent with the first ECU, the dependency management system maintaining mappings of interdependencies prior to the software update; and
updating memory addresses in the first ECU to correspond to the plurality of deltas from the delta file while allowing the first ECU to execute operations at a second memory location of the first ECU.

US Pat. No. 10,394,547

APPLYING UPDATE TO SNAPSHOTS OF VIRTUAL MACHINE

International Business Ma...

1. A computer method for applying program updates executed by a computer hardware processor, the method comprising:
creating, in response to a triggering event, a cloned virtual machine reproducing a state of an existing snapshot of a virtual machine, wherein creating the cloned virtual machine further comprises selecting the existing snapshot of a plurality of existing snapshots based on, at least, a priority associated with the existing snapshot and a usage frequency of the existing snapshot;
disabling a first virtual network interface card (NIC) in the cloned virtual machine;
adding a second virtual NIC to the cloned virtual machine, wherein the adding the second virtual NIC to the cloned virtual machine further comprises initiating the cloned virtual machine to which the second virtual NIC has been added and assigning an unused IP address to the second virtual NIC;
applying an update to the cloned virtual machine having the added second virtual NIC, wherein the applied update includes an update of a definition file for security software designed to maintain security for the cloned virtual machine;
deleting the second virtual NIC from the updated cloned virtual machine;
reenabling the first virtual NIC in the updated cloned virtual machine; and
generating a new snapshot of the updated cloned virtual machine and the reenabled first virtual NIC, wherein the new snapshot is a data set in which only information on differences from a directly preceding snapshot is stored.

US Pat. No. 10,394,546

NOTIFICATIONS FRAMEWORK FOR DISTRIBUTED SOFTWARE UPGRADES

Oracle International Corp...

1. A method comprising:
executing, on each host machine of a plurality of host machines, at least one upgrade process from a plurality of upgrade processes;
determining, from among the plurality of host machines, a particular host machine as a notification agent for the plurality of host machines, wherein the particular host machine executes a third upgrade process of the plurality of upgrade processes;
receiving, by the particular host machine from the plurality of host machines, a first notification, the first notification generated by a first upgrade process of the plurality of upgrade processes executed by a first host machine from the plurality of host machines, the first upgrade process for upgrading a first software application on the first host machine, the first notification comprising information related to the first upgrade process;
generating, by the particular host machine, a single notification by consolidating the first notification and a second notification, the second notification generated by a second upgrade process of the plurality of upgrade processes executed by a second host machine from the plurality of host machines, the second upgrade process for upgrading a second software application on the second host machine, the second notification comprising information related to the second upgrade process;
sending, by the particular host machine, the single notification to a user instead of the first notification and the second notification; and
subsequent to sending the single notification, selecting, from among the plurality of host machines, a different particular host machine other than the particular host machine, as the notification agent for the plurality of the host machines.
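The consolidation step claimed above can be sketched simply. Here each notification is modeled as a (host, message) pair and the single notification as their concatenation; this representation is the editor's assumption, not the patent's.

```python
def consolidate_notifications(notifications):
    """Merge per-host upgrade notifications into a single notification
    to send to the user instead of the individual notifications.
    """
    lines = [f"{host}: {message}" for host, message in notifications]
    return "upgrade status\n" + "\n".join(lines)
```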

US Pat. No. 10,394,545

CONTROL METHOD OF UPDATING FIRMWARE

MITAC COMPUTING TECHNOLOG...

1. A control method for updating firmware, comprising steps of:
(A) providing a set of expanders that are serially coupled and that include a target expander, and a host computer that includes a connection port coupled to the set of the expanders, and that stores a flag value associated with the set of the expanders, and a start time information associated with the connection port;
(B) determining, by the host computer, whether or not the flag value is a first predetermined value which indicates that none of the expanders coupled to the connection port is in a condition of updating firmware; and
(C) when the host computer determines in step (B) that the flag value is the first predetermined value, permitting, by the host computer, transmission of a firmware update file to the target expander via at least the connection port, updating, by the host computer, the start time information according to a start time point at which the transmission of the firmware update file via the connection port starts, and setting, by the host computer, the flag value to a second predetermined value which differs from the first predetermined value and which indicates that one of the expanders is in the condition of updating firmware.
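Steps (B) and (C) amount to a flag-guarded start of the firmware transmission. A minimal sketch, where the host computer's stored state is modeled as a dict (the flag values 0 and 1 stand in for the claim's first and second predetermined values):

```python
IDLE = 0      # first predetermined value: no expander is updating
UPDATING = 1  # second predetermined value: an update is in progress

def try_start_update(host, target, firmware, now):
    """Permit a firmware transmission only when the port's flag is idle."""
    if host["flag"] != IDLE:
        return False                       # another expander is updating
    host["transmit"] = (target, firmware)  # permit the transmission
    host["start_time"] = now               # update the start time information
    host["flag"] = UPDATING                # mark the connection port as busy
    return True
```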

US Pat. No. 10,394,544

ELECTRONIC UPDATE HANDLING BASED ON USER ACTIVITY

International Business Ma...

1. A system for performing a computer program update on a target computer comprising:
a memory comprising instructions;
a bus coupled to the memory; and
a processor coupled to the bus that when executing the instructions causes the system to:
determine a target computer having a location, a user, a computer program, and a computer program update;
determine an expected install duration for installing the computer program update;
monitor a social media service associated with the user and detect a user location from a social media post on the social media service;
estimate an update time window, based on the user location, that the user is away from the target computer location; and
begin an installation of the computer program update based on the update time window and the expected install duration;
detect a change in user location;
estimate a changed update time window, based on the change in user location, that the user could be away from the target computer; and
back out of the installation if the changed update time window is not long enough to install the computer program update.

US Pat. No. 10,394,542

LOW-POWER DEVICE RECOVERY USING A BACKUP FIRMWARE IMAGE

Infineon Technologies AG,...

1. An electronic device, the electronic device comprising:
a main memory;
a secondary memory;
one or more processors, communicatively coupled to the main memory and the secondary memory, configured to:
receive, via a wireless transmission, a firmware code update that is to be installed in the main memory,
determine that the firmware code update is not successfully received,
determine, based on determining that the firmware code update is not successfully received, an amount of power remaining in a battery,
send a request to retransmit the firmware code update based on determining that the amount of power remaining in the battery satisfies a threshold,
determine, based on sending the request and determining that the firmware code update is not successfully received, and based on determining that the amount of power remaining in the battery does not satisfy the threshold, that the secondary memory includes a backup firmware,
wherein the backup firmware includes a set of functionalities for the electronic device, and
install the backup firmware, from the secondary memory, in the main memory to provide functionality to the electronic device; and
the battery to supply power to the main memory, the secondary memory, and the one or more processors.
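The recovery decision claimed above reduces to a small battery-aware policy. This sketch returns labels rather than performing real device operations; the threshold semantics (battery level at or above the threshold "satisfies" it) are an assumption:

```python
def recover_firmware(update_ok, battery_level, threshold, backup_available):
    """Decide the recovery action after an over-the-air firmware update."""
    if update_ok:
        return "install_update"
    if battery_level >= threshold:
        # Enough power remaining: request retransmission of the update.
        return "request_retransmit"
    if backup_available:
        # Low battery: install the backup firmware from secondary memory
        # into main memory to keep the device functional.
        return "install_backup"
    return "halt"
```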

US Pat. No. 10,394,541

SYSTEM, METHOD, AND PROGRAM FOR DISTRIBUTING CONTAINER IMAGE

OPTIM CORPORATION, Saga-...

1. A system for distributing a container image, comprising:
a container image registration unit that registers a container image that includes an application and a program group for the execution environment of the application;
a start instruction receiving unit that receives start instruction to start the registered container image;
a platform selection receiving unit that receives selection of a platform that distributes the container image instructed to be started;
a container image distribution unit that distributes the container image instructed to be started to the selected platform;
a re-distribution control unit that controls the platform to have an external system to re-distribute the container image distributed to the platform; and
a container creation instructing unit that instructs the platform to create a container from the container image re-distributed on the external system.

US Pat. No. 10,394,540

SOFTWARE INCREMENTAL LOADER

TIME WARNER CABLE ENTERPR...

1. A method comprising the steps of:
communicating, by consumer premises equipment, identification information and a list of installed bundles to a software management system via a network interface, wherein each of the installed bundles has registered classes thereof in a file registry of the consumer premises equipment;
receiving, by the consumer premises equipment, a list of available bundles in response to communicating the identification information and the list of installed bundles to the software management system via the network interface, wherein the available bundles are stored in a compressed form in a virtual file system in a plurality of object carousel repositories remote from the consumer premises equipment;
selecting, by a bundle handler of the consumer premises equipment, at least one, but less than every bundle of the available bundles to be installed on the consumer premises equipment, wherein the selection comprises resolving, by the bundle handler, at least one dependency of the available bundles using the file registry of the registered classes of the installed bundles, wherein the selecting returns at least one selected bundle from among the available bundles;
extracting, by the consumer premises equipment from the at least one selected bundle, a location of each of the object carousel repositories storing the at least one selected bundle;
obtaining, by the consumer premises equipment, in compressed form, the at least one selected bundle from respective ones of the plurality of object carousel repositories storing the at least one bundle to be installed using the location extracted from the available bundles, wherein the software management system includes the plurality of object carousel repositories storing the available bundles at different locations; and
installing, by the consumer premises equipment, the at least one selected bundle obtained from the respective ones of the plurality of object carousel repositories using the location extracted by the consumer premises equipment, including decompressing the at least one selected bundle.

US Pat. No. 10,394,538

OPTIMIZING SERVICE DEPLOYMENT IN A DISTRIBUTED COMPUTING ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
receiving, by a computing device, a trigger indication to deploy a new artifact into an application cluster;
obtaining, by the computing device, deployment data for the new artifact from one or more service entities via external application programming interface (API) calls;
storing, by the computing device, the deployment data as a deployment data object in a centralized repository that is accessible to a plurality of instances of the application cluster via internal API calls;
providing, by the computing device, the deployment data object to a plurality of instances of the application cluster via internal API calls to the centralized repository without the need for the plurality of instances to conduct external API calls to the one or more service entities;
consolidating the deployment data for the new artifact with deployment data for existing artifacts in the application cluster;
storing the consolidated deployment data in the centralized repository; and
providing the consolidated deployment data to the plurality of instances of the application cluster via internal API calls to the centralized repository after a restart of the application cluster to restore each of the plurality of instances to a configuration prior to the restart of the application cluster without the need for the plurality of instances to conduct external API calls to the one or more service entities after the restart.

US Pat. No. 10,394,537

EFFICIENTLY TRANSFORMING A SOURCE CODE FILE FOR DIFFERENT CODING FORMATS

International Business Ma...

1. A method of transforming a source code file for different coding formats comprising:
generating a source code file in accordance with a first coding format employed by a first user, wherein generating the source code file comprises:
converting the source code file to a preference-neutral source code file by stripping the first coding format from the source code file;
parsing the preference-neutral source code file into a first plurality of elements of a first tree structure; and
storing the first tree structure in a repository;
in response to a request for the generated source code file from a second user employing a different coding format: transforming the generated source code file to the different coding format employed by the second user;
presenting the transformed source code file to the second user in the different coding format;
receiving changes by the second user to the source code file in the different coding format;
converting the source code file in the different coding format to a second preference-neutral source code file by stripping the different coding format from the source code file;
parsing the second preference-neutral source code file into a second plurality of elements of a second tree structure;
retrieving the first tree structure from the repository and comparing each of the first plurality of elements of the first tree structure to the second plurality of elements of the second tree structure to identify substantive modifications in the second plurality of elements, wherein a substantive modification is identified when comparing an element of the first plurality of elements to a corresponding element of the second plurality of elements indicates a difference in content between the compared elements, and wherein the first and second tree structures include a document object model; and
applying the substantive modifications to the first tree structure to modify the source code file.
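The core idea of the claim above, stripping a coding format and comparing trees so that only substantive modifications surface, can be illustrated with Python's own tooling (the claim's tree is a document object model; `ast` is used here purely as an analogy):

```python
import ast

def substantive_change(source_a, source_b):
    """Return True when two Python sources differ beyond formatting.

    Parsing discards the "coding format" (whitespace, line layout),
    so comparing the resulting trees reveals only content differences.
    """
    return ast.dump(ast.parse(source_a)) != ast.dump(ast.parse(source_b))
```

For example, the same two assignments written on one line or two compare equal, while changing a literal does not.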

US Pat. No. 10,394,536

COMPILING A PARALLEL LOOP WITH A COMPLEX ACCESS PATTERN FOR WRITING AN ARRAY FOR GPU AND CPU

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for compiling a parallel loop and generating Graphics Processing Unit (GPU) code and Central Processing Unit (CPU) code for writing an array for the GPU and the CPU, the method comprising:
compiling the parallel loop by (i) checking, based on a range of array elements to be written, whether the parallel loop can update all of the array elements and (ii) checking whether an access order of the array elements that the parallel loop reads or writes is known at compilation time; and
determining an approach, from among a plurality of available approaches, to generate the CPU code and the GPU code based on (i) the range of the array elements to be written and (ii) the access order to the array elements in the parallel loop,
wherein (i) checking, based on a range of array elements to be written, whether the parallel loop can update all of the array elements comprises:
checking whether the range of array elements to be written is equal to a range of an index of the parallel loop to be updated;
checking whether a subscript expression of the array is equal to the index of the loop array to be updated; and
checking whether an assignment to the array is always executed in an iteration of the parallel loop.

US Pat. No. 10,394,535

FLOATING ELEMENT SYSTEM AND METHODS FOR DYNAMICALLY ADDING FEATURES TO AN APPLICATION WITHOUT CHANGING THE DESIGN AND LAYOUT OF A GRAPHICAL USER INTERFACE OF THE APPLICATION

1. A non-transitory computer readable medium storing a program which when executed by at least one processing unit of a computing device dynamically adds features to an application without changing the application design or the application layout, said program comprising sets of instructions for:
providing, to a user computing device associated with a user of the application, a first set of functions of the application;
providing, to the user computing device, a graphical user interface (GUI) of the application, said GUI comprising a first layer GUI;
providing, to the user computing device, a first set of GUI tools in the first layer GUI, wherein the first set of functions of the application are accessible to the user of the application by way of the first set of GUI tools, wherein the first set of GUI tools includes a particular GUI tool;
rendering the first layer GUI and the first set of GUI tools in the first layer GUI for a target display screen associated with the user computing device, wherein the first layer GUI is rendered with a tool space area and a free space area;
visually outputting the rendered first layer GUI and the first set of GUI tools in the tool space area of the first layer GUI on the target display screen associated with the user computing device, wherein the particular GUI tool is located at a particular position in the tool space area of the first layer GUI;
providing, to the user computing device, a transparent second layer of the application after rendering the first layer GUI and the first set of GUI tools in the tool space area of the first layer GUI;
providing, to the user computing device, a visible floating element configured for display within a first geometric area of the transparent second layer when the application is running on the user computing device, wherein the first geometric area is configured for display at least in part over the free space area of the first layer GUI;
initializing the visible floating element based on a set of local settings;
retrieving a set of cloud-based parameters to use in rendering the visible floating element for the target display screen associated with the user computing device;
rendering the transparent second layer of the application to span the top of the first layer GUI;
rendering the initialized visible floating element based on the retrieved set of cloud-based parameters, wherein the initialized visible floating element is rendered for display within the first geometric area that is configured for display at least in part over the free space area of the first layer GUI rendered for the target display screen associated with the user computing device;
visually outputting the visible floating element on the target display screen associated with the user computing device, wherein the visible floating element is visually output as a selectable floating element icon within the first geometric area of the transparent second layer and at least in part over the free space area of the first layer GUI without obstructing access to the first set of GUI tools in the tool space area of the first layer GUI on the target display screen of the user computing device, wherein the visible floating element provides access to a second set of functions of the application;
visually outputting a second set of GUI tools in a window of the GUI of the application contemporaneously with receiving a selection of the selectable floating element icon by rendering the window of the GUI for display within a second geometric area that is outside of and adjacent to the first geometric area of the transparent second layer that spans the top of the first layer GUI, wherein the second set of GUI tools provides access to the second set of functions of the application from within the window of the GUI;
receiving a user selection at a location within a third geometric area of the transparent second layer, wherein the third geometric area is an empty area of the transparent second layer that is outside of the second geometric area of the transparent second layer and is at least in part over the tool space area of the first layer GUI and which provides visibility and functional access to the first set of GUI tools of the first layer GUI of the application, wherein the location of the user selection within the third geometric area is visibly equivalent to the particular position of the particular GUI tool in the tool space area of the first layer GUI;
removing the window of the GUI and the second set of GUI tools from display within the second geometric area of the transparent second layer in response to the user selection within the third geometric area of the transparent second layer; and
activating the particular GUI tool in response to the user selection within the third geometric area of the transparent second layer, wherein the visible floating element remains visually output as the selectable floating element icon within the first geometric area of the transparent second layer on the target display screen of the user computing device after activating the particular GUI tool in response to the user selection.

US Pat. No. 10,394,534

FRAMEWORK FOR FLEXIBLE LOGGING OF DEVELOPMENT ENVIRONMENT DEPLOYMENT

PAYPAL, INC., San Jose, ...

1. A development environment deployment system, comprising:a non-transitory memory storing a plurality of development environment deployment templates for creation of a plurality of development environments; and
one or more hardware processors coupled to the non-transitory memory and configured to execute instructions to cause the system to perform operations comprising:
responsive to a selection by a user of a stored development environment deployment template from the plurality of development environment deployment templates, generating, for display on a display device coupled to the development environment deployment system, an interactive graphical depiction of a plurality of events for creating a development environment from the selected stored development environment deployment template, wherein the interactive graphical depiction includes at least a first graphical representation of a first event in the plurality of events, a second graphical representation of a second event in the plurality of events, and a third graphical representation of a dependency of the second event upon the first event, wherein the first event includes instructions executable to set up a virtual machine, and the second event includes instructions executable to install software on the virtual machine;
receiving an updated status of the first event in the plurality of events from a virtual machine host coupled to the development environment deployment system;
modifying at least a portion of the interactive graphical depiction of the executable instructions in response to receiving the updated status, wherein the modified portion includes at least the first graphical representation of the first event;
receiving, from the user and via the interactive graphical depiction, a selection of a graphical representation of an event displayed within the interactive graphical depiction of the events; and
responsive to the selection of the graphical representation of the event, presenting a log entry associated with the event.
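A minimal sketch of the event graph behind the interactive depiction, assuming events are plain objects; the `Event` class, its field names, and the status strings are illustrative, not PayPal's implementation.

```python
class Event:
    """One deployment event node in the interactive graphical depiction."""
    def __init__(self, name, instructions):
        self.name = name
        self.instructions = instructions
        self.status = "pending"
        self.depends_on = []     # dependency edges drawn in the depiction
        self.log = []            # log entries shown when the node is selected

setup_vm = Event("setup_vm", "set up a virtual machine")
install_sw = Event("install_sw", "install software on the virtual machine")
install_sw.depends_on.append(setup_vm)   # the third graphical element: dependency

def update_status(event, status, log_entry):
    """Apply a status update from the VM host; the caller would redraw only
    this event's graphical representation."""
    event.status = status
    event.log.append(log_entry)
    return event

update_status(setup_vm, "complete", "VM host reported: boot finished")
```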

US Pat. No. 10,394,533

REUSABLE COMPONENT IN A MODELING ENVIRONMENT

The MathWorks, Inc., Nat...

1. A computer-implemented method, comprising:receiving, using a computing device, a modeling component generated in a first graphical modeling environment, wherein the first graphical modeling environment is utilized to create one or more first executable graphical models that when executed simulate one or more first system behaviors of one or more first systems;
generating a first numerical output associated with a first behavior of the modeling component when the modeling component is executed in the first graphical modeling environment utilizing a first input;
generating a second numerical output associated with a second behavior of the modeling component when the modeling component is executed in a second graphical modeling environment utilizing a second input, wherein the second graphical modeling environment is utilized to create one or more second executable graphical models that when executed simulate one or more second system behaviors of one or more second systems, and wherein the second graphical modeling environment differs from the first graphical modeling environment;
comparing, using the computing device, the first numerical output and the second numerical output to determine whether the second behavior of the modeling component in the second graphical modeling environment is equivalent to the first behavior of the modeling component in the first graphical modeling environment; and
in response to determining that the second behavior of the modeling component in the second graphical modeling environment is not equivalent to the first behavior of the modeling component in the first graphical modeling environment,
executing code generated for the modeling component in the second modeling environment,
calling the code generated for the modeling component in the second modeling environment, or
incorporating the code generated for the modeling component in a model generated in the second graphical modeling environment.
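The comparison and fallback logic can be sketched as follows, assuming both environments yield plain floating-point outputs; the tolerance is an illustrative choice, not taken from the claim.

```python
def outputs_equivalent(first_output, second_output, tol=1e-9):
    """Compare the component's numerical outputs from the two modeling
    environments; inequivalence triggers use of generated code."""
    return abs(first_output - second_output) <= tol

def choose_execution(first_output, second_output):
    # When the behaviors differ, fall back to code generated for the
    # component (one of the claim's three alternative responses).
    if outputs_equivalent(first_output, second_output):
        return "use modeling component directly"
    return "execute generated code in second environment"
```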

US Pat. No. 10,394,532

SYSTEM AND METHOD FOR RAPID DEVELOPMENT AND DEPLOYMENT OF REUSABLE ANALYTIC CODE FOR USE IN COMPUTERIZED DATA MODELING AND ANALYSIS

OPERA SOLUTIONS U.S.A., L...

21. A computer-implemented method for rapid development and deployment of reusable analytic code for use in computerized data modeling and analysis, the method using computer processes comprising:obtaining, using a signal manager, source data from a plurality of data sources, and generating and monitoring from the source data a reusable signal layer of maintained and refreshed named signals on top of the source data; and
allowing, using a graphical user interface, users to define signal categories and relationships used by the signal manager to generate the reusable signal layer of maintained and refreshed named signals, explore lineage and dependencies of the named signals in the signal layer, monitor and manage the signal layer including recovery from issues identified by monitoring of the named signals by the signal manager, and create and execute analytic code applications that utilize the named signals.

US Pat. No. 10,394,531

HYPER DYNAMIC JAVA MANAGEMENT EXTENSION

Cisco Technology, Inc., ...

1. A system for exposing application metrics on the fly, the system including:a processor;
a memory; and
one or more modules stored in the memory and executable by a processor to perform operations including:
initialize a Java managed object for Java Management Extension (JMX);
attach the initialized Java managed object to a repository for Java managed objects accessible by applications;
create a fixed reference to an attributes field in the attached Java managed object;
provide the created fixed reference to the repository after initialization of the Java managed object;
save the fixed reference;
describe a new attribute to be created in a reflection chain expression; and
create the new attribute in the Java managed object including:
create an object representing the new attribute based on a processing of the reflection chain expression,
place the object representing the new attribute into an attribute map,
reference the attribute map using a name associated with the new attribute,
inject information from the attribute map into an array of attribute information objects compatible with JMX,
update the attributes field of the Java managed object to point to the array of attribute information objects compatible with JMX.
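The patent targets Java/JMX, but the flow it claims (process a reflection chain expression, place the result in an attribute map, and repoint the attributes array) can be sketched in Python; `ManagedObject`, the dotted-chain syntax, and the demo `_Stats` object are illustrative stand-ins, not the JMX API.

```python
class ManagedObject:
    """Stand-in for a Java managed object exposing dynamic attributes."""
    def __init__(self):
        self.attribute_map = {}
        self.attributes = []   # the "array of attribute information objects"

def resolve_chain(root, chain):
    """Walk a dotted reflection chain like 'requests.count' via getattr."""
    obj = root
    for part in chain.split("."):
        obj = getattr(obj, part)
    return obj

def add_attribute(mbean, name, root, chain):
    value = resolve_chain(root, chain)            # process the chain expression
    mbean.attribute_map[name] = value             # place into the attribute map
    info = {"name": name, "type": type(value).__name__}
    mbean.attributes = mbean.attributes + [info]  # rebuild and repoint the array
    return mbean

class _Stats:                  # toy target object for the demonstration
    class requests:
        count = 42

mbean = ManagedObject()
add_attribute(mbean, "requestCount", _Stats, "requests.count")
```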

US Pat. No. 10,394,530

COMPUTER READABLE MEDIUM FOR TRANSLATING PROTOCOLS WITH AUTONOMOUS SCRIPT WRITER

Tilden Tech Holdings LLC,...

1. A non-transitory computer readable medium, said medium encoded with a program capable of executing the steps of:inserting an inline communication hook into a scripting language from core binary, said hook permitting two way communication between said scripting language and a compiled binary application thereby allowing customizations to be applied without the need to alter the original core binary application, wherein said scripting language is ECMA-262/JavaScript;
sending input from said scripting language to said compiled binary application via said hook or sending input from said compiled binary application to said scripting language via said hook without the need to spawn an external ECMA-262 interpreter;
executing one or more additional commands based on said input, said one or more additional commands are not otherwise available in said scripting language;
outputting output, in response to said one or more additional commands, to said compiled application via said hook if said input was sent from said scripting language or outputting output, in response to said one or more additional commands, to said scripting language via said hook if said input was sent from said compiled application;
wherein at least one of said input or output is time sensitive aircraft or marine craft communication and efficiency is enhanced in relation to changing and recompiling said binary application;
wherein at least one of said one or more additional commands or an error message is received via an autonomous script writer.

US Pat. No. 10,394,529

DEVELOPMENT PLATFORM OF MOBILE NATIVE APPLICATIONS

1. A method for developing a mobile native application, comprising:providing a development platform with mobile native application templates,
wherein the development platform comprises:
a first portion including mobile configuration logics;
a second portion including mobile native application variables; and
a third portion including a mobile native application, the mobile native application including a mobile native component transform engine; and
developing the mobile native application using the development platform,
wherein developing the mobile native application only requires defining the mobile native application variables, updating a latest version of the mobile native application based on the mobile configuration logics, and presenting the latest version of the mobile native application using the mobile native component transform engine, and
wherein developing the mobile native application using the development platform does not require writing any program code.

US Pat. No. 10,394,528

RETURNING A RUNTIME TYPE LOADED FROM AN ARCHIVE IN A MODULE SYSTEM

Oracle International Corp...

1. A non-transitory computer readable medium comprising instructions which, when executed by one or more hardware processors, cause performance of operations comprising:identifying, by a class loader implemented in a first runtime environment, an archived runtime type loaded into an archive from a module source;
identifying a particular package associated with the archived runtime type;
determining whether the particular package is defined to any runtime module, of a module system, that is defined to (i) the class loader or (ii) any class loader in a class loader hierarchy to which the class loader delegates, in the first runtime environment;
responsive at least to determining that the particular package is not defined to any runtime module that is defined to (i) the class loader or (ii) any class loader in the class loader hierarchy to which the class loader delegates, in the first runtime environment: refraining from loading any runtime type based on the archived runtime type from the archive;
identifying, by the class loader implemented in a second runtime environment, the archived runtime type loaded into the archive from the module source;
determining whether the particular package is defined to any runtime module, of the module system, that is defined to (i) the class loader or (ii) any class loader in the class loader hierarchy to which the class loader delegates, in the second runtime environment;
responsive at least to determining that the particular package is defined to a runtime module that is defined to (i) the class loader or (ii) any class loader in the class loader hierarchy to which the class loader delegates, in the second runtime environment: returning directly or indirectly, by the class loader, a runtime type loaded based on the archived runtime type from the archive; and
wherein runtime type visibility is enforced by the module system based at least in part on readability of one or more runtime modules as defined in the module system.
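The loader-side decision can be sketched as a walk of the delegation chain, assuming a package is "visible" when some loader in the chain has a runtime module defining it; the class names and the flat parent chain are simplifications of the real module system.

```python
class ClassLoader:
    """Stand-in for a delegating class loader in a module system."""
    def __init__(self, parent=None):
        self.parent = parent
        self.defined_packages = set()  # packages of modules defined to this loader

    def package_visible(self, package):
        loader = self
        while loader is not None:      # walk the delegation hierarchy
            if package in loader.defined_packages:
                return True
            loader = loader.parent
        return False

def load_archived_type(loader, archived_type, package):
    if loader.package_visible(package):
        return archived_type           # return the type loaded from the archive
    return None                        # refrain from loading from the archive

boot = ClassLoader()
boot.defined_packages.add("java.util")
app = ClassLoader(parent=boot)
```

In a second runtime environment where the package has since been defined, the same archived type would be returned rather than skipped.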

US Pat. No. 10,394,527

SYSTEM AND METHOD FOR GENERATING AN APPLICATION STRUCTURE FOR AN APPLICATION IN A COMPUTERIZED ORGANIZATION

SERVICENOW, INC., Santa ...

1. A method for discovering and generating an application structure for a computerized system, the method comprising:receiving indication of a first entry point, wherein the first entry point corresponds to a first network location within the computerized system;
initiating a discovery process from the first entry point;
discovering a first object connected to the first entry point via a first connection;
continuing the discovery process to discover a second object using a second network location of the first object as a second entry point, wherein the first entry point and the second entry point are different from one another, and wherein the first entry point and the second entry point are connected to one another via a second connection; and
updating the application structure based on the discovery process.
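Treating discovery as a graph walk in which each discovered object's network location becomes the next entry point, the process can be sketched as below; `connections` is a hypothetical probe result standing in for the real discovery mechanics.

```python
def discover(first_entry_point, connections):
    """Walk entry points breadth-first and return the application structure
    as a list of (entry_point, discovered_object) connections."""
    structure = []
    visited = {first_entry_point}
    queue = [first_entry_point]
    while queue:
        entry_point = queue.pop(0)
        for obj in connections.get(entry_point, []):
            structure.append((entry_point, obj))  # update the structure
            if obj not in visited:
                visited.add(obj)
                queue.append(obj)   # the object's location becomes an entry point
    return structure

structure = discover("10.0.0.1",
                     {"10.0.0.1": ["10.0.0.2"], "10.0.0.2": ["10.0.0.3"]})
```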

US Pat. No. 10,394,525

GENERATION OF RANDOM NUMBERS THROUGH THE USE OF QUANTUM-OPTICAL EFFECTS WITHIN A MULTI-LAYERED BIREFRINGENT STRUCTURE

1. A randomization generating optical system comprising:a birefringent structure configured to receive a beam from a beam source, the birefringent structure comprising a plurality of layers,
wherein each layer has a birefringent axis that is misaligned with a birefringent axis of each previous layer, and
wherein each layer of the birefringent structure is positioned to split the beam as it propagates through the birefringent structure, forming an increasingly bifurcated beam; and
a photodetector positioned to receive randomization energy generated by the bifurcated beam exiting the birefringent structure and convert the randomization energy into a parallel randomized output signal.

US Pat. No. 10,394,524

ARITHMETIC OPERATION INPUT-OUTPUT EQUALITY DETECTION

ARM Limited, Cambridge (...

1. Data processing apparatus comprising:arithmetic circuitry to perform an arithmetic operation on one or more input operands to generate a result value; and
comparison circuitry to receive the result value from the arithmetic circuitry and to perform a comparison operation between the result value and at least one of the one or more input operands,
wherein the comparison circuitry is responsive to an equivalence of the result value with at least one of the one or more input operands, when the one or more input operands are not an identity element for the arithmetic operation, to generate a signal indicative of the equivalence.
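A software sketch of the comparison circuitry's behavior; the claim covers arbitrary arithmetic operations in hardware, so the operator, identity element, and tuple return used here are illustrative only.

```python
def detect_io_equality(op, identity, *operands):
    """Return (result, signal): signal is True when the result equals an input
    operand and no operand is the identity element for the operation
    (a + 0 == a is expected and carries no information)."""
    result = op(*operands)
    if any(x == identity for x in operands):
        return result, False
    return result, result in operands

res_add = detect_io_equality(lambda a, b: a + b, 0, 5, 0)   # operand is identity
res_mul = detect_io_equality(lambda a, b: a * b, 1, 0, 5)   # result equals operand 0
```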

US Pat. No. 10,394,523

METHOD AND SYSTEM FOR EXTRACTING RULE SPECIFIC DATA FROM A COMPUTER WORD

Avanseus Holdings Pte. Lt...

1. A method for extracting rule specific data from a computer word by a computer system, the method comprising:calculating, by a processor in the computer system, at least one decimal value based on a rule representation associated with a rule, wherein the rule representation is a byte array including at least one byte of binary codes, the value of each bit of the byte array being configured to represent whether a corresponding bit position in the computer word has a data component related to the rule;
identifying, by the processor in the computer system, at least one result byte array corresponding to the rule based on the calculated at least one decimal value from a preset look-up table in the computer system,
wherein the preset look-up table includes a plurality of mappings, each mapping between a result byte array and a decimal value, the result byte array in each mapping indicating a set of reference bit positions for determining a set of bit positions in the computer word, wherein a last byte of the result byte array in each mapping is configured to represent a bit count value associated with the set of reference bit positions; and
determining, by the processor in the computer system, a set of bit positions in the computer word in which a set of data components related to the rule are stored based on both the set of reference bit positions indicated by each identified result byte array and the last byte of each identified result byte array as a loop counter.
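The look-up-table scheme can be sketched by precomputing, for each possible byte value of the rule mask, the set-bit positions plus a final count byte used as the loop counter; the exact table layout is an assumption for illustration.

```python
# Precomputed look-up table: byte value -> result byte array of set-bit
# positions, with the last byte holding the bit count (the loop counter).
LOOKUP = {}
for value in range(256):
    positions = [bit for bit in range(8) if value >> bit & 1]
    LOOKUP[value] = bytes(positions + [len(positions)])

def rule_bit_positions(rule_mask: bytes):
    """Return the bit positions in the computer word holding data for the rule."""
    word_positions = []
    for byte_index, value in enumerate(rule_mask):
        entry = LOOKUP[value]
        count = entry[-1]                 # loop counter from the last byte
        for i in range(count):
            word_positions.append(byte_index * 8 + entry[i])
    return word_positions
```

The table trades 256 small entries for avoiding a per-bit scan of every computer word at extraction time.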

US Pat. No. 10,394,522

DISPLAY CONTROLLER

Arm Limited, Cambridge (...

1. A display controller for a data processing system, the display controller comprising:a first set of display processing circuits comprising a first input circuit operable to read at least one input surface, a first processing circuit operable to process one or more input surfaces to generate an output surface, wherein the first processing circuit comprises a first composition circuit, wherein the first composition circuit is operable to compose two or more surfaces to generate a composited surface when generating an output surface, and a first output circuit configured to provide an output surface for display to a first display;
a second set of display processing circuits comprising a second input circuit operable to read at least one input surface, a second processing circuit operable to process one or more input surfaces to generate an output surface, wherein the second processing circuit comprises a second composition circuit, wherein the second composition circuit is operable to compose two or more surfaces to generate a composited surface when generating an output surface, and a second output circuit configured to provide an output surface for display to a second display;
one or more internal data paths configured to pass pixel data of a surface in both directions between the first and second sets of display processing circuits, wherein the one or more internal data paths are configured to directly pass pixel data of a surface between the first and second composition circuits and between the first and second output circuits; and
a control circuit configured to, in response to control signals received by the control circuit, selectively activate one or more of the first and second sets of display processing circuits to process one or more input surfaces to generate one or more output surfaces for display, and selectively control direct passing of pixel data between the first and second composition circuits and between the first and second output circuits by selectively activating the direct passing of pixel data via the one or more internal data paths and selectively controlling the direction in which pixel data is directly passed.

US Pat. No. 10,394,521

SPEAKER DEVICE WITH EQUALIZATION TOOL

Peag, LLC, Carlsbad, CA ...

1. A speaker device with equalization control, comprising:an audio receiving port for receiving audio signals from an audio source;
a circuitry comprising at least a processor and a memory coupled thereto, the processor being configured to execute one or more firmware or software programs having computer executable instructions on the memory to perform tasks for processing the audio signals, including equalization of the audio signals according to one of a plurality of equalization (EQ) settings stored in the memory;
a user input terminal for detecting a user input to send a corresponding user input signal to the circuitry; and
a speaker driver for emitting sound corresponding to the processed audio signals,
wherein
a plurality of audio prompts corresponding to the plurality of EQ settings, respectively, are stored in the memory; and
the processor is configured to retrieve from the memory one of the audio prompts corresponding to the user input signal, and send it to the speaker driver for notifying a user of the one of the EQ settings corresponding to the user input signal.

US Pat. No. 10,394,520

LOUDNESS CONTROL FOR USER INTERACTIVITY IN AUDIO CODING SYSTEMS

1. An audio processor for processing an audio signal, comprising:an audio signal modifier, wherein the audio signal modifier is configured to modify the audio signal in response to a user input;
a loudness controller, wherein the loudness controller is configured to determine a loudness compensation gain based, on the one hand, on at least one of a reference loudness and a reference gain and, on the other hand, on at least one of a modified loudness and a modified gain,
wherein the modified loudness or the modified gain depends on the user input,
wherein the loudness controller is configured to determine the loudness compensation gain based on metadata of the audio signal indicating which group is to be used and is not to be used, respectively, for determining the loudness compensation gain, and
wherein the group comprises one or more audio elements; and
a loudness manipulator, wherein the loudness manipulator is configured to manipulate a loudness of a signal using the loudness compensation gain,
wherein the loudness controller is configured to determine the loudness compensation gain based on group loudnesses and/or gain values of the at least one group of the set referred to by the preset, or
wherein the loudness controller is configured to determine the reference loudness for the set referred to by the preset using the respective group loudnesses and the respective gain values, wherein the loudness controller is configured to determine the modified loudness for the set referred to by the preset using the respective group loudnesses and the respective modified gain values, and wherein the modified gain values are modified by the user input, or
wherein the audio signal comprises a bitstream with the metadata, and wherein the metadata comprises the reference gain for at least one group, or
wherein the metadata of the audio signal comprises a group loudness for at least one group, or
wherein the loudness controller is configured to determine the reference loudness for at least one group using the group loudness and the gain value for the group, wherein the loudness controller is configured to determine the modified loudness for the group using the group loudness and the modified gain value, and wherein the modified gain value is modified by the user input, or
wherein the loudness controller is configured to determine the reference loudness for a plurality of groups using the respective group loudnesses and gain values for the groups, and wherein the loudness controller is configured to determine the modified loudness for a plurality of groups using the respective group loudness and modified gain value for the groups, or
wherein the loudness controller is configured to perform a limitation operation on the loudness compensation gain so that the loudness compensation gain is lower than an upper threshold and/or so that the loudness compensation gain is greater than a lower threshold, or
wherein the loudness manipulator is configured to apply a corrected gain to the signal determined by the loudness compensation gain and by a normalization gain determined by a target loudness level set by user input and a metadata loudness level comprised by the metadata of the audio signal.
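One of the claimed alternatives, deriving the compensation gain from group loudnesses under reference and modified gains and then limiting it, can be sketched in the dB domain; the power-summation formula and the symmetric limits below are illustrative assumptions, not the codec's normative math.

```python
import math

def group_set_loudness(group_loudnesses_db, gains_db):
    """Combine per-group loudnesses (with gains applied) by power summation."""
    powers = (10 ** ((l + g) / 10.0)
              for l, g in zip(group_loudnesses_db, gains_db))
    return 10.0 * math.log10(sum(powers))

def loudness_compensation_gain(group_loudnesses_db, reference_gains_db,
                               modified_gains_db, lower=-12.0, upper=12.0):
    reference = group_set_loudness(group_loudnesses_db, reference_gains_db)
    modified = group_set_loudness(group_loudnesses_db, modified_gains_db)
    gain = reference - modified           # restore the overall loudness
    return max(lower, min(upper, gain))   # limitation operation from the claim

# A user boosting one group by 6 dB is compensated by about -6 dB overall.
gain = loudness_compensation_gain([-23.0], [0.0], [6.0])
```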

US Pat. No. 10,394,519

SERVICE PROVIDING APPARATUS AND METHOD

Honda Motor Co., Ltd., T...

1. A service providing apparatus, comprising:an occupant detecting sensor configured to detect presence of each of a plurality of occupants in a vehicle;
an image acquiring sensor configured to acquire an image of a face of the plurality of occupants;
a microphone configured to acquire an utterance of at least one of the plurality of occupants; and
a control unit including a CPU and a memory coupled to the CPU, wherein the CPU and the memory are configured to:
estimate an individual feeling of each of the plurality of occupants detected by the occupant detecting sensor, based on the acquired image of the face of the occupant acquired by the image acquiring sensor;
estimate a general mood representing an entire feeling of the plurality of occupants, based on the estimated individual feeling of each of the plurality of occupants;
interpret a gist of an utterance of at least one of the plurality of occupants, based on the utterance acquired by the microphone;
select a voice information service providing information on the interpreted gist of the utterance when the estimated general mood is more than or equal to a predetermined value, and select a music service providing music when the estimated general mood is less than the predetermined value; and
output a command to provide the selected service.
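The selection step reduces to a threshold test on the estimated general mood; the 0-to-1 feeling scale and the threshold value below are assumptions for illustration.

```python
def select_service(individual_feelings, threshold=0.5):
    """Estimate the general mood as the mean of the occupants' individual
    feelings and pick the service to provide."""
    general_mood = sum(individual_feelings) / len(individual_feelings)
    if general_mood >= threshold:
        return "voice information service"  # mood is good: engage with the topic
    return "music service"                  # mood is low: play music instead
```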

US Pat. No. 10,394,518

AUDIO SYNCHRONIZATION METHOD AND ASSOCIATED ELECTRONIC DEVICE

MEDIATEK INC., Hsin-Chu ...

1. An audio synchronization method, comprising:receiving a first audio signal from a first recording device;
receiving a second audio signal from a second recording device;
performing a correlation operation upon the first audio signal and the second audio signal to align a first pattern of the first audio signal and the first pattern of the second audio signal;
after the first patterns of the first audio signal and the second audio signal are aligned, calculating a difference between a second pattern of the first audio signal and the second pattern of the second audio signal; and
obtaining a starting-time difference between the first audio signal and the second audio signal for audio synchronization according to the difference between the second pattern of the first audio signal and the second pattern of the second audio signal;
wherein the step of obtaining the starting-time difference between the first audio signal and the second audio signal for audio synchronization according to the difference between the second pattern of the first audio signal and the second pattern of the second audio signal comprises:
moving the first audio signal and/or the second audio signal to make the first patterns of the first audio signal and the second audio signal have a specific delay, wherein the specific delay is half of the difference between the second pattern of the first audio signal and the second pattern of the second audio signal; and
comparing a head position of the first audio signal and a head position of the second audio signal to obtain the starting-time difference between the first audio signal and the second audio signal.
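The first alignment stage can be sketched as a brute-force cross-correlation over candidate lags; signals are plain sample lists, and the claim's half-delay drift-compensation step is omitted from this sketch.

```python
def best_lag(a, b, max_lag):
    """Lag of signal b relative to signal a that maximizes cross-correlation;
    the lag aligning the first pattern approximates the starting-time
    difference between the two recordings."""
    def corr(lag):
        return sum(a[i] * b[i - lag] for i in range(len(a))
                   if 0 <= i - lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Same pattern [1, 5, 1], recorded with the second device starting 3 samples later.
sig1 = [0, 0, 0, 1, 5, 1, 0, 0, 0]
sig2 = [1, 5, 1, 0, 0, 0, 0, 0, 0]
lag = best_lag(sig1, sig2, 5)
```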

US Pat. No. 10,394,517

CONTROL METHOD, CONTROLLER, AND DEVICE

PANASONIC INTELLECTUAL PR...

19. A device installed in a predetermined space, the device comprising:a communicator; and
a processor, wherein
the communicator receives, from a controller connected to the device, a first command for setting a first sound volume in the device as a sound volume upper-limit value,
the processor sets the first sound volume as the sound volume upper-limit value in the device,
the first sound volume is determined by the controller based on a database and sleep information of a person present in the predetermined space, the sleep information being obtained from a biological sensor disposed in the predetermined space,
the sleep information indicates a sleep state of the person,
the database indicates a correspondence between the sleep state and a target sound volume of a corresponding device, and
the target sound volume of the corresponding device is a predetermined sound volume which does not awaken a sleeping person in the sleep state and can still be heard by an awake person.

US Pat. No. 10,394,516

MOBILE TERMINAL AND METHOD FOR CONTROLLING SOUND OUTPUT

Pantech Inc., Seoul (KR)...

1. A mobile terminal to control an output sound, the mobile terminal comprising:a sound control unit configured to form a virtual output path for applications requesting to output sound data and configured to determine sound data to be output selected from each application's sound data;
a mapping information storing unit configured to store a priority associated with each of the applications that is set by a user for execution of the applications, and to map and store application identifiers for the applications to a sound track index;
a sound output unit configured to output the determined sound data by muting the virtual output paths not mapped to the determined sound data based on the priority of the applications and without further user input; and
a sound hardware controlled by the sound output unit.
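The priority-based muting recited above might look like the following sketch; the `select_unmuted_path` name and the lower-number-wins priority convention are assumptions for illustration:

```python
def select_unmuted_path(requesting_apps, user_priority):
    # Unmute only the virtual output path of the highest-priority
    # application (lower number = higher priority); mute the rest
    # without further user input, as the claim recites.
    winner = min(requesting_apps,
                 key=lambda app: user_priority.get(app, float("inf")))
    return {app: app == winner for app in requesting_apps}  # True = audible
```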

US Pat. No. 10,394,515

DISPLAY DEVICE, ALWAYS-ON-DISPLAY CONTROL METHOD AND MOBILE TERMINAL USING THE SAME

LG Display Co., Ltd., Se...

1. A display device, comprising:
a display panel comprising a pixel array of pixels arranged in a matrix by the intersections of data lines and gate lines; and
a drive circuit configured to:
write data to the pixels of the display panel;
display a first event view on the display panel in a first screen mode; and
display preset information on the display panel in a first display of a second screen mode, the preset information having a first position on the screen that does not move during the first display of the second screen mode;
display a second event view on the display panel in a second display of the first screen mode; and
change a display position of the preset information in the second screen mode when the second screen mode is resumed after the second event view in the first screen mode is finished, such that the preset information has a second position on the screen that does not move during the second display of the second screen mode, the second position being different from the first position.
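The position change between displays of the second (always-on) screen mode can be sketched as below; choosing a random new position is one possible strategy for illustration, not something the claim specifies:

```python
import random

def next_preset_position(current, screen_w, screen_h):
    # Each time the second screen mode resumes, pick a new fixed position
    # for the preset information that differs from the previous one,
    # mitigating burn-in while the information stays still on screen.
    while True:
        candidate = (random.randrange(screen_w), random.randrange(screen_h))
        if candidate != current:
            return candidate
```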

US Pat. No. 10,394,514

PROJECTION DEVICE AND CONTROL METHOD THEREFOR

Canon Kabushiki Kaisha, ...

1. A display apparatus comprising:
a display unit configured to display an image;
a communication unit configured to communicate with at least one of a plurality of external display apparatuses; and
a control unit configured to perform control to cause the display unit to display, in a case where an image to be displayed by the display unit and an image to be displayed by the at least one of the plurality of external display apparatuses are to be displayed side by side, a predetermined image including a plurality of areas indicating a positional relationship between a plurality of display images, area information for identifying each of the plurality of areas, and identification information of each of the plurality of external display apparatuses,
wherein the control unit controls the display unit to display the predetermined image indicating a correspondence relationship between the identification information of each of the plurality of external display apparatuses and each of the plurality of areas,
wherein in a case where identification information about one of the plurality of external display apparatuses corresponds to one of the plurality of areas, the communication unit transmits, to the one of the plurality of external display apparatuses, information for identifying one of the plurality of areas corresponding to the one of the plurality of external display apparatuses.

US Pat. No. 10,394,513

DISPLAY DEVICE AND DISPLAY APPARATUS

CHAMP VISION DISPLAY INC....

1. A display device, comprising:
a display panel, having a display surface;
a transparent plate, disposed on the display panel, wherein the transparent plate has a light incident surface and a first light emitting surface opposite to each other, the display surface and the light incident surface face each other, and the transparent plate has at least one side surface connected between the light incident surface and the first light emitting surface; and
at least one compressible transparent member, disposed on the at least one side surface, wherein the at least one compressible transparent member has a second light emitting surface, and the second light emitting surface is tilted relative to the first light emitting surface.

US Pat. No. 10,394,512

MULTI-MONITOR ALIGNMENT ON A THIN CLIENT

AMZETTA TECHNOLOGIES, LLC...

1. A method of operating a thin client, comprising:
obtaining dimensions of each screen of a plurality of screens of the thin client;
determining an arrangement of the plurality of screens such that each one of the plurality of screens is in contact with at least another one of the plurality of screens and does not overlap with any other one of the plurality of screens;
determining border segments of each screen of the plurality of screens, wherein each of the border segments of the each screen is not in contact with any border segment of any other screens of the plurality of screens;
determining a first pointer location for displaying a pointer of a pointer device;
receiving an input from the pointer device that defines a trajectory of the pointer device;
determining whether the first pointer location is on a border segment;
determining a second pointer location that is one pixel away from the first pointer location in at least one of a horizontal direction and a vertical direction along the trajectory in response to
(a) a determination that the first pointer location is not on a border segment, or
(b) a determination that the first pointer location is on a border segment and the trajectory is toward inside of a screen area defined by the border segments of the plurality of screens;
displaying moving the pointer from the first pointer location to the second pointer location;
in response to a determination (a) that the first pointer location is on a border segment and (b) that the trajectory is not perpendicular to the border segment and not toward inside of a screen area defined by the border segments of the plurality of screens, determining a second pointer location that is one pixel away from the first pointer location on the border segment along the trajectory; and
displaying continuously a pointer at the second pointer location and stopping moving the pointer along the trajectory.
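A much-simplified sketch of the claimed pointer clamping; the `Screen`/`step_pointer` names and the axis-slide heuristic are illustrative, and the claim's perpendicular-trajectory case falls out as the pointer staying put:

```python
from dataclasses import dataclass

@dataclass
class Screen:
    x: int
    y: int
    w: int
    h: int
    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def step_pointer(screens, pos, step):
    # Move one pixel along the trajectory (dx, dy each in -1/0/1) when the
    # target pixel lies inside some screen; otherwise try sliding along the
    # border; if the trajectory is perpendicular to it, the pointer stops.
    x, y = pos
    dx, dy = step
    candidate = (x + dx, y + dy)
    if any(s.contains(*candidate) for s in screens):
        return candidate
    for slide in ((x + dx, y), (x, y + dy)):   # stay on the border segment
        if slide != pos and any(s.contains(*slide) for s in screens):
            return slide
    return pos
```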

US Pat. No. 10,394,511

EMPATHETIC IMAGE SELECTION

International Business Ma...

1. A method of selecting and displaying one or more images, the method comprising the steps of:
a computer identifying a user and user profile information corresponding to the identified user;
the computer identifying a sentiment of the user;
based on the user profile information, the computer determining an association between the identified sentiment of the user and one or more sentiments included in a plurality of sentiments conveyed by a plurality of images;
the computer determining that one or more images included in the plurality of images convey the one or more sentiments;
based on the association between the identified sentiment of the user and the one or more sentiments and the one or more images conveying the one or more sentiments, the computer selecting the one or more images from the plurality of images;
the computer displaying the selected one or more images,
wherein the computer is included in a digital picture frame, wherein the step of displaying the selected one or more images includes displaying the one or more images on a display included in the digital picture frame, and wherein the step of identifying the sentiment of the user includes the steps of:
the computer determining that the user is in a proximity to the digital picture frame;
in response to the step of determining that the user in the proximity to the digital picture frame, the computer receiving lighting information from a light sensor coupled to the digital picture frame, wherein the lighting information includes a measurement of ambient lighting of a room in which the digital picture frame is located; and
in response to the step of determining that the user is in the proximity to the digital picture frame, and based on the measurement of the ambient lighting of the room in which the digital picture frame is located, the computer determining an emotional state of the user, the emotional state being included in the sentiment of the user, wherein the step of selecting the one or more images is based in part on the measurement of the ambient lighting of the room in which the digital picture frame is located;
the computer determining that one or more additional users are in the proximity to the digital picture frame;
in response to the one or more additional users being in the proximity to the digital picture frame, the computer determining one or more emotional states of the one or more additional users, respectively;
based on the emotional state of the user and the one or more emotional states of the one or more additional users, the computer determining one emotional state of a majority of users in a group of users consisting of the user and the one or more additional users;
the computer determining an association between the one emotional state of the majority of the users and one sentiment included in the plurality of sentiments conveyed by the plurality of images;
the computer determining that an image included in the plurality of images conveys the one sentiment;
based on the association between the one emotional state of the majority of users and the one sentiment and based on the image conveying the one sentiment, the computer selecting the image from the plurality of images; and
the computer displaying the image on the display included in the digital picture frame.
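Determining "one emotional state of a majority of users" can be sketched as a plurality vote; the claim does not specify a tie-breaking rule, so first-seen order is assumed here:

```python
from collections import Counter

def majority_emotional_state(states):
    # The one emotional state held by the largest group of users detected
    # in proximity to the frame (plurality vote; ties go to first seen).
    return Counter(states).most_common(1)[0][0]
```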

US Pat. No. 10,394,510

METHOD FOR DISPLAYING CONTENT AND ELECTRONIC DEVICE THEREFOR

Samsung Electronics Co., ...

1. A method for displaying content by an electronic device, the method comprising:
detecting an occurrence of an event for changing a power mode of a first processor to a low power mode;
generating, by the first processor including a graphics processing unit, converted image data by encoding a plurality of screen data which are time-ordered based on a first clock signal in response to the detection of the event occurrence, the first clock signal being generated by the first processor;
transmitting, by the first processor, the converted image data to a display driver processor;
receiving and storing the converted image data in a memory of the display driver processor;
based on a second clock signal independent from the first clock signal, periodically restoring, by the display driver processor, each of the plurality of screen data in time order of displaying of the plurality of screen data by decoding the converted image data while the first processor is in the low power mode, the second clock signal being generated by the display driver processor or being provided by another component of the electronic device; and
sequentially displaying, by the display driver processor, each restored screen data based on the time order of displaying of the plurality of screen data and the second clock signal while the first processor is in the low power mode.

US Pat. No. 10,394,509

DISPLAY LIST GENERATION APPARATUS

CANON KABUSHIKI KAISHA, ...

1. A display list generation apparatus, comprising:
a first memory;
a second memory; and
a processor in communication with the first memory and the second memory, wherein the processor performs:
interpreting page description language (PDL) data;
storing a result of an interpretation of the PDL data in the first memory;
copying the result of the interpretation from the first memory to the second memory in a case where a data size of the result of the interpretation does not exceed a threshold;
generating a display list from the copy of the result of the interpretation stored in the second memory in a case where the data size of the result of the interpretation does not exceed the threshold; and
generating a display list from the result of the interpretation stored in the first memory in a case where the data size of the result of the interpretation exceeds the threshold, wherein the generation of the display list from the result of the interpretation stored in the first memory is performed after the processor has generated the display list from all results of interpretation which are stored in the second memory until then.
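The two-memory threshold scheme of the claim can be sketched as follows; `make_display_list` is a placeholder for real display-list generation, and the byte threshold is an invented value:

```python
SMALL_THRESHOLD = 4096  # bytes; illustrative value, not from the patent

def build_display_lists(interpretations, threshold=SMALL_THRESHOLD):
    # Small interpretation results are copied to the (fast) second memory;
    # an oversized result is processed in the first memory, but only after
    # display lists are generated for everything queued so far.
    second_memory = []
    display_lists = []
    for result in interpretations:
        if len(result) <= threshold:
            second_memory.append(result)          # copy small result
        else:
            display_lists += [make_display_list(r) for r in second_memory]
            second_memory.clear()
            display_lists.append(make_display_list(result))  # in place
    display_lists += [make_display_list(r) for r in second_memory]
    return display_lists

def make_display_list(result: bytes) -> str:
    return f"DL({len(result)}B)"  # stand-in for real DL generation
```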

US Pat. No. 10,394,508

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

RICOH COMPANY, LTD., Tok...

1. An information processing apparatus, comprising:
circuitry; and
a memory,
wherein the circuitry is configured to:
execute workflow processing on data according to a workflow, the workflow defining a plurality of processes executable in an order;
obtain a first size of the data after a specific process of the plurality of processes is performed on the data and before one of a plurality of subsequent processes of the workflow, subsequent to the specific process, is performed; and
store the first size of the data in the memory,
wherein the subsequent process of the plurality of processes is performed on the data based on the first size of the data obtained, each of the plurality of subsequent processes being defined as a process of the workflow subsequent to the specific process in the workflow,
wherein the one of the plurality of subsequent processes is performed, subsequent to the specific process, based on the first size of the data stored in the memory, and
wherein, upon the workflow including the specific process a plurality of times among the plurality of processes in the workflow, the circuitry is configured to obtain the first size of the data each of the plurality of times that the specific process is performed according to the workflow, and to update the first size of the data stored in the memory each time after the first size of the data is obtained.
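The size bookkeeping recited in the claim (obtain the size after the specific process, store it, update it on each repetition, and let the subsequent process depend on it) can be sketched as below; the `Workflow` class and callables are illustrative:

```python
class Workflow:
    # After the specific process runs, the data size is obtained and
    # stored (updated on every repetition), and the subsequent process
    # receives that stored size.
    def __init__(self, specific, subsequent):
        self.specific = specific
        self.subsequent = subsequent
        self.first_size = None          # plays the role of the memory

    def run(self, data):
        data = self.specific(data)
        self.first_size = len(data)     # obtain + store/update first size
        return self.subsequent(data, self.first_size)
```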

US Pat. No. 10,394,507

CONTROL METHOD FOR COMMUNICATION TERMINAL AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. A control method performed by an application executing on a communication terminal that includes an OS (operating system) and that communicably connects the communication terminal to an image processing apparatus, the method comprising:
displaying, on an operating unit of the communication terminal in response to a user operation for file selection, a selection screen that includes at least first and second display items, wherein the first display item is used for displaying a first screen for a first file selection function provided by the application different from the OS, and wherein the second display item is used for displaying a second screen for a second file selection function provided by the OS;
in response to an instruction to select the first display item received through the selection screen, displaying the first screen for selecting a file within a predetermined folder of the communication terminal by using the first file selection function provided by the application different from the OS; and
in response to an instruction to select the second display item through the selection screen, invoking the second file selection function provided by the OS to display the second screen provided by the OS; and
transmitting data based on at least one file, that is selected through the displayed first screen and/or the displayed second screen, to the image processing apparatus.

US Pat. No. 10,394,506

INFORMATION PROCESSING APPARATUS, SYSTEM, CONTROL METHOD, AND PROGRAM

CANON KABUSHIKI KAISHA, ...

6. A control method of a system including a server, a client apparatus, and a printer, the method of the server comprising:
acquiring first account information input on an input screen by a user,
acquiring account change restriction information which indicates whether change in account information that is to be added to a print job is prohibited or not,
holding the acquired account change restriction information in a first restriction information memory, and
determining an operation authority on which a printer driver in the server is operating,
wherein, in a case where the operation authority on which the printer driver in the server is operating is an administrator authority, the acquired first account information is held, as administration account information, in a first administrator account registry memory which is able to be accessed by the administrator authority and which is not able to be accessed by a user authority, the user authority being lower in authority level than the administrator authority, and there being predetermined restriction on operation in the user authority, and
wherein, in a case where the operation authority on which the printer driver in the server is operating is the user authority, the acquired first account information is held, as user account information, in a first user account registry memory which is able to be accessed by the user authority, and
the method of the client apparatus comprising:
acquiring, from the server, the account change restriction information held in the first restriction information memory,
acquiring, from the server, the first account information held in the first administrator account registry memory,
acquiring second account information input on an input screen by a user,
holding, in a second restriction information memory which is able to be accessed by the administrator authority and which is not able to be accessed by the user authority, the account change restriction information acquired from the server,
holding, in a second administrator account registry memory which is able to be accessed by the administrator authority and which is not able to be accessed by the user authority, the first account information acquired from the server as administrator account information,
determining whether the account change restriction information held in the second restriction information memory indicates the change in account information is prohibited, and
determining an operation authority on which a printer driver in the client apparatus is operating,
holding the acquired second account information as user account information, in a second user account registry memory which is able to be accessed by the user authority, in a case where the operation authority on which the printer driver in the client apparatus is operating is the user authority, and where the account change restriction information held in the second restriction information memory indicates the change in account information is prohibited,
wherein, in a case where the account change restriction information held in the second restriction information memory indicates the change in account information is prohibited, the first account information held in the second administrator account registry memory is added to the print job,
wherein, in a case where the account change restriction information held in the second restriction information memory does not indicate the change in account information is prohibited and where the operation authority on which the printer driver in the client apparatus is operating is the user authority, the second account information held in the second user account registry memory is added to the print job,
wherein the print job that the first account information or the second account information is added to is transmitted to the printer via the server.

US Pat. No. 10,394,505

IMAGE FORMING APPARATUS THAT CONTROLS AN EXECUTION ORDER OF JOBS, CONTROL METHOD THEREOF, STORAGE MEDIUM, AND IMAGE FORMING SYSTEM

CANON KABUSHIKI KAISHA, ...

1. An image forming apparatus capable of communicating with an external apparatus, comprising:
a storage configured to store information;
an image forming unit configured to form an image on a sheet; and
one or more controllers having a processor executing instructions stored in a memory or having circuitry, configured to perform:
processing of obtaining first print data from the external apparatus and storing, in the storage, identification information representing that image formation based on the first print data is uncompleted; and
processing of obtaining second print data from the external apparatus in a status in which the identification information is not stored in the storage, and not obtaining the second print data from the external apparatus in a status in which the identification information is stored in the storage.

US Pat. No. 10,394,504

IMAGE FORMING SYSTEM, IMAGE FORMING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM FOR INFORMATION PROCESSING APPARATUS

Konica Minolta, Inc., To...

1. An image forming system, comprising:
an image forming apparatus which includes an image forming section to perform image formation for a sheet based on print data of a print job and a communication section;
a self-travelling post processing apparatus which is separably connected to the image forming apparatus and includes a post processing section to perform post processing for a sheet conveyed from the image forming apparatus, a travelling section to move the self-travelling post processing apparatus itself, and a communication section to communicate with the image forming apparatus; and
a processor configured such that when receiving a plurality of print jobs which are executed by the image forming apparatus and include both a print job which performs the post processing by the self-travelling post processing apparatus and a print job which does not perform the post processing, the processor changes an execution order of the plurality of print jobs in accordance with a geographical position and a connecting state of the self-travelling post processing apparatus to the image forming apparatus, the connecting state indicative of a mechanical connection and an electrical connection between the self-travelling post processing apparatus and the image forming apparatus.

US Pat. No. 10,394,503

INFORMATION PROCESSING DEVICE, METHOD AND SYSTEM FOR RECORDING AND DISPLAYING OUTPUT SETTINGS

Ricoh Company, Ltd., Tok...

1. An information processing device for controlling an output device, the information processing device comprising:
circuitry configured to
store, in a memory, a plurality of setting sets, wherein each setting set of the plurality of setting sets includes a combination of set values collectively set for a plurality of setting items in the setting set, and each set value of each combination of the set values is set when receiving a selection of a setting set of the plurality of setting sets via a screen;
receive a particular output setting, of the output device, that includes a particular combination of set values;
record, in the memory in response to reception of the particular output setting, the particular output setting, including the particular combination of the set values; and
display, on the screen, difference information and output setting information, the difference information indicating a difference between the particular combination of set values included in the particular output setting and setting values included in one setting set of the plurality of setting sets, the output setting information indicating the particular output setting,
wherein the circuitry is further configured to
record, in the memory, one or more output settings, each including a corresponding combination of the set values, as a history;
present the one or more recorded output settings as candidate output settings, each to be registered as a setting set that specifies the combination of the set values collectively set for the plurality of setting items when selected; and
receive the particular output setting to be registered as the setting set from among the candidate output settings.
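The "difference information" between a received output setting and a stored setting set can be sketched as a per-item comparison; the dict-based representation and function name are assumptions for illustration:

```python
def difference_information(particular: dict, setting_set: dict) -> dict:
    # For each setting item, report (received value, stored set value)
    # wherever the two disagree; an empty dict means no difference.
    items = particular.keys() | setting_set.keys()
    return {k: (particular.get(k), setting_set.get(k))
            for k in items
            if particular.get(k) != setting_set.get(k)}
```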

US Pat. No. 10,394,502

ROLL-FED PRINTING APPARATUS, SOFTWARE MEDIUM, AND METHOD FOR CONTROLLING A ROLL-FED PRINTING APPARATUS

1. A method for controlling a roll-fed printing apparatus for printing images on at least one recording medium, the roll-fed apparatus comprising a print engine and a controller comprising a roll managing system for managing the printing of ripped images on the at least one recording medium, the method comprising the steps of:
the controller receiving a plurality of ripped images from a raster image processor into memory of the roll managing system;
for each ripped image, the roll managing system establishing an arbitrary position in a first direction of a width of the at least one recording medium and/or in a second direction of a length of the at least one recording medium in the plane of the at least one recording medium at which arbitrary position the ripped image is intended to be printed;
the controller creating subsequent image swathes for printing the ripped images according to the established arbitrary positions of the ripped images;
the print engine subsequently printing the created subsequent image swathes;
the roll managing system displaying and maintaining an image queue area comprising representations of the ripped images to be printed on the at least one recording medium and representations of the ripped images currently being printed on the at least one recording medium;
the roll managing system providing a plurality of event image objects, each of which is a non-printable user operable element and is manually placeable in the image queue area, wherein the image queue area is configured to receive an event image object of the plurality of event image objects between the representations of the ripped images to be printed on the at least one recording medium in the image queue area; and
the controller setting, for an event image object of the plurality of event image objects, at least one finishing command to be executed by the printing apparatus in accordance with a timing sequence of printing of the ripped images displayed in the image queue area.

US Pat. No. 10,394,501

IMAGE PROCESSING APPARATUS INCLUDING AN EXECUTION AUTHORITY SETTING HAVING A PERMITTED AND NOT A PERMITTED SETTING

Oki Data Corporation, To...

1. An image processing apparatus, comprising:
an operation unit that receives a user operation;
an authentication circuitry that performs a user authentication process on a basis of an instruction from the operation unit, and generates a first user identifier when the user authentication process is successful;
an executing circuitry that executes one or more types of image processing;
a job macro executing circuitry that executes, with the executing circuitry, a first job macro to which predetermined one or more types of image processing out of the one or more types of image processing are assigned, the predetermined one or more types of image processing including a first type of image processing; and
a managing circuitry that issues a session on a basis of the instruction from the operation unit, sets, to the session, first access control information that is associated with the first user identifier and includes information on execution authority setting of the first type of image processing and information on execution authority setting of the first job macro, manages, on a basis of the first access control information, the execution authority setting of the first type of image processing and the execution authority setting of the first job macro, both being related to the first user identifier, temporarily changes, based on the first access control information, the execution authority setting of the first type of image processing on the basis of the execution authority setting of the first job macro upon the execution of the first job macro by the job macro executing circuitry, replaces, after the execution of the first job macro, the execution authority setting of the first type of image processing with the execution authority setting that is before being temporarily changed, and ends the session on the basis of the instruction from the operation unit received after replacing the execution authority setting of the first type of image processing,
wherein the managing circuitry is configured to temporarily cause the execution authority setting of the first type of image processing to be set to permitted, on a condition that, upon the execution of the first job macro, the execution authority setting of the first type of image processing is set to not permitted in the first access control information and the execution authority setting of the first job macro is set to permitted in the first access control information, and replace, after the execution of the first job macro, the execution authority setting of the first type of image processing with the execution authority setting that is set to not permitted,
wherein the first job macro is configured to manage execution of the predetermined one or more types of image processing, and
wherein the first type of image processing comprises one of copy processing, printing processing, fax processing or scan-to-email processing, and
wherein the first job macro does not include a user name related to a currently logged-in user or user identified information related to the currently logged-in user.

US Pat. No. 10,394,500

INFORMATION PROCESSING SYSTEM AND APPLICATION INSTALLATION METHOD

Ricoh Company, Ltd., Tok...

19. A method of installing an application, the method being executed by a computer in communication with a memory, the method being implemented in an information processing system including at least one information processing apparatus and at least one electronic device, the method comprising:
storing the application by the at least one information processing apparatus;
managing, by the at least one information processing apparatus, a configuration information item required for installing the application in the at least one electronic device,
the configuration information item including a list of one or more applications to be installed, license information required for installing the application, and a status information item, managed in association with each other,
the status information item indicating that the configuration information item is scheduled to be applied to the at least one electronic device,
the configuration information item being registered in the at least one information processing apparatus according to an operation input via a user interface of the at least one information processing apparatus in a preliminary operation before the application is installed;
receiving, by the at least one information processing apparatus, an operation that causes the status information item to transition from indicating that the configuration information item is not scheduled to be applied to the at least one electronic device, to indicating that the configuration information item is scheduled to be applied to the at least one electronic device and indicating that the configuration information is made public to the at least one electronic device, and
acquiring, by the at least one electronic device from the at least one information processing apparatus over a communication network, the configuration information item including the status information item indicating that the configuration information item is scheduled to be applied to the at least one electronic device, the configuration information item being acquired from the at least one information processing apparatus in response to an operation input via a user interface of the at least one electronic device; and
installing, in the at least one electronic device by the at least one electronic device, the application acquired from the at least one information processing apparatus according to the acquired configuration information item; and
wherein the received operation causes the status information item to transition from indicating that the configuration information item is not scheduled to be applied to the at least one electronic device, to indicating that the configuration information item is scheduled to be applied to the at least one electronic device and indicating that the configuration information is made public to the at least one electronic device.

US Pat. No. 10,394,499

COMPUTER READABLE RECORDING MEDIUM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD FOR DISPLAYING CONVERTED IMAGE DATA

BROTHER KOGYO KABUSHIKI K...

1. An information processing apparatus comprising:
a memory configured to store first-kind image data, which data has a first format and is designated as target data, and store a first file name of the first-kind image data, the first file name including a first data name representing the first format;
a display; and
a controller configured to execute: controlling to acquire the first-kind image data which data is stored in the memory; controlling to acquire second-kind image data which is generated by converting the acquired first-kind image data, the second-kind image data being of a second format different from the first format, and acquire a second file name of the second-kind image data, the second file name including a second data name representing the second format different from the first data name;
controlling the display to display a display image of the acquired second-kind image data, together with the first file name including the first data name stored in the memory without displaying the second data name associated with the second-kind image data;
determining a contents format of the data designated as the target data; controlling the display to display the display image generated using the second-kind image data if it is determined that the target data has the second format; and controlling the memory to store the first data name of the first-kind image data if it is determined that the target data has the first format; wherein the memory stores the second-kind image data in association with the first data name stored in the memory.

US Pat. No. 10,394,497

RELAY APPARATUS THAT REGISTERS USER INFORMATION INCLUDED IN A FORMING REQUEST IN A GROUP THAT INCLUDES THE ACCOUNT OF THE RELAY APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. A relay apparatus that is connected to a forming apparatus which forms a three-dimensional object, the relay apparatus comprising:a memory storing instructions; and
a processor executing the instructions causing the relay apparatus to:
receive a forming request for the forming apparatus, which includes user information, from a client terminal;
perform a registration process to a message service, in response to reception of the forming request including the user information, so that a group, which includes at least the user information and an account of the relay apparatus, is managed by the message service; and
distribute a message concerning the forming apparatus and the forming request for the group,
wherein, in the registration process, in response to a reception of a first forming request including first user information, a first group, which includes the first user information and the account of the relay apparatus, is automatically registered to the message service, and
wherein, in the registration process, the first user information is further registered in addition to a second group that has been a target of a registration process in response to a reception of a second forming request that is different from the first forming request and is managed by the message service.

US Pat. No. 10,394,496

IMAGE FORMING APPARATUS, IMAGE FORMING SYSTEM, SETTING METHOD AND COMPUTER-READABLE MEDIUM

Oki Data Corporation, To...

1. An image forming apparatus comprising:a storage section that stores a medium setting including information representing a printing area on a printing medium;
a printing section that performs printing on the printing medium based on the medium setting stored in the storage section; and
a controller,
wherein the controller determines whether a first medium setting included in a print job sent from an information processing apparatus and including information representing a printing area on a printing medium differs from a second medium setting stored as the medium setting in the storage section; and
wherein when the first medium setting differs from the second medium setting, the controller checks if a predetermined printing instruction is received and changes the medium setting stored in the storage section from the second medium setting to the first medium setting when the predetermined printing instruction is received.

US Pat. No. 10,394,495

FORM DOCUMENT SUBMISSION SYSTEM WITH ERROR FINDER MODULE

KYOCERA DOCUMENT SOLUTION...

1. A method for processing a form document with one or more fields to be filled out, comprising:generating, at an administrator computing device, a plurality of form documents, form document identifiers associated with the respective form documents, and form data rules applicable to the respective form documents;
registering, at a server device, the form documents, the form document identifiers associated with the form documents, and the form data rules applicable to the form documents, so as to be associated with a user;
accessing, at a user accessible device, the server device;
verifying, at the server device, the user of the user accessible device;
requesting, at the user accessible device, a list of the plurality of form document identifiers associated with the user;
sending, at the server device, the list of the form document identifiers;
receiving, at the user accessible device, the list of the form document identifiers;
selecting, at the user accessible device, the form document from the listed form document identifiers;
receiving, at a user accessible device, the form document of the selected form document identifier;
filling out each field with a corresponding input, at the user accessible device;
sending the filled out form, from the user accessible device to the server device;
validating the filled out form document for each field, based upon the form data rule, at the server device;
if an error is found in any of the fields, highlighting the field with the error, and sending the highlighted form, from the server device to the user accessible device, for changes to the corresponding input to comply with the form data rule;
if no error is found in any of the fields, sending the highlighted form, from the server device to the user accessible device, as confirmation that no error is found;
submitting, from the user accessible device, the validated form document to the administrator computing device; and
sending, from the administrator computing device, a receipt notice including address information to the user accessible device;
wherein the user accessible device comprises an image forming device and a client computing device, wherein:
the receiving of the form document and form document identifier, the filling out of each field with the corresponding input, and the sending of the filled out form document are at the client computing device;
if the error is found, the sending of the highlighted form is from the server device to the image forming device, the method further comprising:
printing out the highlighted form document including a graphical code including information on a destination of the highlighted form document at the image forming device;
editing the highlighted form document;
scanning the edited form document including the graphical code, at the image forming device; and
sending the scanned edited form document, from the image forming device to the server device;
wherein the edited form document is stored at the user accessible device or remote location for later use, with an expiration date of the edited form document automatically set or set by the user,
wherein the form document has the field with the associated destination, and when the filled in form document has no error in any field, the method comprises sending the filled in form document without errors automatically to the destination, from the server device.

US Pat. No. 10,394,494

METHOD FOR INPUTTING OF PRINT DATA FOR PRINTING UPON A PRINT OBJECT USING A PRINTER AND PRINTING SYSTEM WITH AT LEAST TWO PRINTERS

1. A method for printing upon a print object with a first printer or a second, external printer, the first printer having a housing with a print space in the housing, a printing device, a receiving device for the print object to be printed upon, a control and evaluation unit, a data communications interface, a memory and an input and display apparatus,comprising the following steps:
displaying a printer selection window on the input and display apparatus,
selecting one of the first and second printing devices,
displaying a completed selection of one of the first or second printer with which a print object is to be printed,
selecting a print object selection window from at least two print object selection windows stored in the memory by the control and evaluation unit based on the evaluated choice of the printer selected,
displaying the selected print object selection window on the input and display apparatus, the print object selection window displaying at least one print object or a designation of the print object which can be printed upon by the selected printer and evaluating a completed selection of the print object in the control and evaluation unit,
selecting an input screen from a plurality of input screens stored in the memory by the control and evaluation unit using the evaluated choice of the print object,
displaying the selected input screen on the input and display apparatus,
based upon the printer selected, transmitting of print data which have been input into the input screen to the printing device of the first printer or via the data communications interface of the first printer to the second, external printer, and
printing upon the print object with the selected printer using the print data transmitted.

US Pat. No. 10,394,493

MANAGING SHINGLED MAGNETIC RECORDING (SMR) ZONES IN A HYBRID STORAGE DEVICE

Seagate Technology LLC, ...

1. A hybrid data storage device comprising:a first non-volatile memory (NVM) comprising solid state memory cells arranged into a first set of garbage collection units (GCUs) each comprising a plurality of erasure blocks that are allocated and erased as a unit;
a second NVM comprising a rotatable data recording medium arranged into a second set of GCUs each comprising a plurality of shingled magnetic recording tracks that are allocated and erased as a unit;
a control circuit configured to combine a first group of logical block units (LBUs) stored in the first set of GCUs with a second group of LBUs stored in the second set of GCUs to form a combined group of LBUs arranged in sequential order by logical address, and to write the combined group of LBUs to a zone of shingled magnetic recording tracks in a selected one of the second set of GCUs; and
a map stored as a data structure in a memory that correlates logical addresses of the LBUs to physical addresses in the respective first and second sets of GCUs at which the LBUs are stored, the map comprising at least one flag value that signifies at least a selected one of the first or second groups of LBUs were initially written as part of a sequential write operation, the control circuit selecting the at least a selected one of the first or second groups of LBUs for inclusion in the combined group of LBUs responsive to the at least one flag value.

US Pat. No. 10,394,492

SECURING A MEDIA STORAGE DEVICE USING WRITE RESTRICTION MECHANISMS

Lenovo Enterprise Solutio...

13. A computer-implemented method, comprising:determining a write rate for a media storage device or a portion thereof based on one or more factors, the write rate ranging from zero to a maximum possible write rate for the media storage device or the portion thereof;
determining an overwrite rate for the media storage device or the portion thereof based on the one or more factors;
receiving a write request to write data to the media storage device or the portion thereof; and
writing the data to the media storage device, wherein portions of the data written to empty blocks of memory on the media storage device are written using the determined write rate, wherein portions of the data written to non-empty blocks of memory on the media storage device are written using the determined overwrite rate.
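The claimed write-restriction mechanism above can be sketched roughly as follows. This is a minimal illustration, not the patented method: the rate formula, the factor names (`wear_factor`, `security_factor`), and the device maximum are all assumptions introduced for the example; the only structure taken from the claim is that empty blocks are written at a derived write rate and non-empty blocks at a separately derived overwrite rate.

```python
# Hedged sketch: data destined for empty blocks is written at a computed
# write rate; overwrites of non-empty blocks use a separate overwrite rate.
# Rate formula and factor names are illustrative assumptions.

MAX_RATE_MBPS = 500.0  # assumed maximum possible write rate for the device

def derive_rates(wear_factor: float, security_factor: float):
    """Derive (write_rate, overwrite_rate) from example factors in [0, 1]."""
    write_rate = MAX_RATE_MBPS * (1.0 - wear_factor)
    overwrite_rate = write_rate * (1.0 - security_factor)
    return write_rate, overwrite_rate

def write(blocks: dict, addr: int, data: bytes, rates) -> float:
    """Write data to a block, returning the rate used for this write."""
    write_rate, overwrite_rate = rates
    rate = overwrite_rate if blocks.get(addr) else write_rate
    blocks[addr] = data
    return rate

blocks = {}
rates = derive_rates(wear_factor=0.2, security_factor=0.5)
r1 = write(blocks, 0, b"new", rates)     # empty block -> write rate
r2 = write(blocks, 0, b"update", rates)  # non-empty block -> overwrite rate
```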

US Pat. No. 10,394,491

EFFICIENT ASYNCHRONOUS MIRROR COPY OF THIN-PROVISIONED VOLUMES

International Business Ma...

1. A method for copying data from a primary thin-provisioned volume to a secondary thin-provisioned volume, the method comprising:issuing a query to a primary storage system, the primary storage system hosting a thin-provisioned volume made up of a plurality of storage elements;
returning, in response to the query, a reply indicating which storage elements in the thin-provisioned volume are backed by physical storage, wherein the reply contains at least one of an allocation status bitmap and an allocation status list that indicate which of the storage elements are backed by the physical storage; and
copying, from the primary storage system to a secondary storage system, data in only those storage elements that are backed by the physical storage.
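The query-then-copy flow of the claim can be illustrated with a short sketch. The bitmap representation and function names are assumptions for illustration; the claimed structure is only that the secondary learns which storage elements are backed by physical storage and copies just those.

```python
# Sketch of the claimed flow for thin-provisioned mirror copy: query which
# storage elements are backed, then copy only the backed ones. A None
# element models an unbacked (unallocated) storage element.

def query_allocation(primary: list) -> list:
    """Return an allocation-status bitmap: 1 if the element is backed."""
    return [1 if element is not None else 0 for element in primary]

def mirror_copy(primary: list, secondary: list) -> int:
    """Copy only backed elements to the secondary; return count copied."""
    bitmap = query_allocation(primary)
    copied = 0
    for i, backed in enumerate(bitmap):
        if backed:
            secondary[i] = primary[i]
            copied += 1
    return copied

primary = [b"a", None, b"c", None]   # two of four elements are backed
secondary = [None] * len(primary)
n = mirror_copy(primary, secondary)  # transfers only the backed elements
```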

US Pat. No. 10,394,490

FLASH REGISTRY WITH WRITE LEVELING

Weka.IO Ltd., (IL)

1. A system for controlling memory access, comprising:memory of a distributed file server; and
a plurality of computing devices operable to distribute data into the memory of the distributed file server, wherein a portion of the memory is associated with a bucket of metadata stored on a computing device of the plurality of computing devices,
wherein the bucket of metadata comprises a two-level registry accessible by a computing device of a plurality of computing devices in accordance with an inode ID and an offset, and wherein the two-level registry is configured to:
shadow changes to the memory in a first level of the registry,
push the changes to a second level of the registry if the number of changes exceeds the capacity of the first level of the registry, and
update the memory according to the changes recorded in the first level of the registry and the second level of the registry, if the number of changes pushed to the second level of the registry exceeds the capacity of the second level of the registry.
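The two-level registry described above can be sketched as a pair of overflow-chained change logs. The capacities and class shape are assumptions; from the claim we take only the three behaviors: shadow in level 1, push to level 2 on level-1 overflow, and update the backing memory on level-2 overflow.

```python
# Illustrative sketch of the claimed two-level registry: changes are first
# shadowed in a small level-1 log; on overflow they are pushed to level 2,
# and when level 2 overflows the backing memory is updated from the logs.

class TwoLevelRegistry:
    def __init__(self, memory: dict, l1_cap: int = 2, l2_cap: int = 4):
        self.memory, self.l1, self.l2 = memory, {}, {}
        self.l1_cap, self.l2_cap = l1_cap, l2_cap

    def record(self, key, value):
        self.l1[key] = value               # shadow the change in level 1
        if len(self.l1) > self.l1_cap:     # level 1 full: push to level 2
            self.l2.update(self.l1)
            self.l1.clear()
        if len(self.l2) > self.l2_cap:     # level 2 full: update memory
            self.memory.update(self.l2)
            self.l2.clear()

mem = {}
reg = TwoLevelRegistry(mem)
for i in range(7):
    reg.record(i, i * 10)
```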

US Pat. No. 10,394,489

REUSABLE MEMORY DEVICES WITH WOM CODES

1. A multiple-write enabled flash memory system, comprising:a flash memory comprising at least two planes, wherein each plane comprises multiple blocks, and wherein each block comprises multiple pages;
a block allocator configured to:
(a) reference one or more clean active blocks on each plane for a first write, wherein the first-write comprises writing one logical page of unmodified data to one physical page, and
(b) reference one or more recycled active block on each plane for a second write, wherein each page of each recycled active block stores data from a previous first-write; and
an encoder configured to encode a page of data via a write-once-memory (WOM) code to produce WOM-encoded data,
wherein a combination of a target page of each recycled active block is configured to store the WOM-encoded data via a second write.

US Pat. No. 10,394,488

MANAGING A COLLECTION OF DATA

INTERNATIONAL BUSINESS MA...

1. A computer program product for facilitating managing a collection of data within a processing environment, the computer program product comprising:a non-transitory computer readable storage medium readable by a processing circuit and storing instructions for performing a method comprising:
storing in a data block of a buffer data relating to execution of one or more tasks of the processing environment, wherein a plurality of stores to the data block are performed, and wherein the data block is pointed to by an entry in a sample data block table, which is selected based on contents of a table entry address register;
determining whether the data block has sufficient space for another store of data;
based on determining the data block has insufficient space for the other store, determining whether an alert indicator is set for the data block;
based on determining the alert indicator is set, indicating an interrupt is to be performed; and
storing data of the other store in another data block of the buffer, based on determining the data block has insufficient space.

US Pat. No. 10,394,487

MEMORY SYSTEM AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A memory system comprising:a memory device including a plurality of memory blocks each memory block including a plurality of pages, each page including a plurality of memory cells which are coupled to a word line for storing data, wherein the plurality of memory blocks individually shares a plurality of memory device buffers, included in the memory device, for storing data related to a write operation or a read operation; and
a controller including a memory, the controller being suitable for receiving a write command and a read command from a host, storing write data corresponding to the write command in the memory, transmitting and storing the write data stored in the memory to and in at least one first memory device buffer operatively coupled to a first memory block in a page of which the write data are to be stored, reading read data corresponding to the read command from a page of a second memory block, storing the read data in at least one second memory device buffer operatively coupled to the second memory block, and storing the read data stored in the second memory device buffer, in the memory, wherein the at least one first memory device buffer and the at least one second memory device buffer are individually selected from the plurality of memory device buffers to be engaged with the first and second memory blocks, and physically isolated from each other.

US Pat. No. 10,394,486

METHODS FOR GARBAGE COLLECTION IN A FLASH MEMORY AND APPARATUSES USING THE SAME

Silicon Motion, Inc., Jh...

1. A method for GC (Garbage Collection) in a flash memory, performed by a processing unit, comprising:in a first time period, reading a first number of pages of good data from a plurality of storage sub-units sharing a channel, wherein the first number is a product of n and m, and n indicates a quantity of the plurality of storage sub-units sharing the channel, and m indicates a basic quantity of pages for programming data into one of the plurality of storage sub-units; and
in a second time period following the first time period, repeatedly performing a loop for directing each of the plurality of the storage sub-units to program m pages of the first number of pages of good data until all of the plurality of storage sub-units are operated in busy states.
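The two-phase garbage collection in the claim can be sketched as follows. The dictionary layout and names are illustrative assumptions; the claimed structure is that n sub-units sharing a channel yield n*m good pages in the first period, and in the second period each sub-unit is handed m pages to program so that all sub-units end up busy.

```python
# Rough sketch of the claimed two-phase GC: phase 1 reads n*m good pages
# from n sub-units sharing a channel; phase 2 programs m pages into each
# sub-unit until all are busy. Data layout is an assumption.

def gc_cycle(sub_units: list, m: int):
    n = len(sub_units)
    first_number = n * m

    # Phase 1: gather n*m pages of good data across the sub-units.
    good_pages = []
    for unit in sub_units:
        good_pages.extend(unit["good"])
    good_pages = good_pages[:first_number]

    # Phase 2: direct each sub-unit to program m of the gathered pages.
    for i, unit in enumerate(sub_units):
        unit["programmed"] = good_pages[i * m:(i + 1) * m]
        unit["busy"] = True
    return good_pages

units = [{"good": ["a1", "a2", "a3"], "programmed": [], "busy": False},
         {"good": ["b1", "b2", "b3"], "programmed": [], "busy": False}]
pages = gc_cycle(units, m=2)  # n*m = 4 pages read, 2 programmed per unit
```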

US Pat. No. 10,394,485

STORAGE SYSTEM WITH EFFICIENT RE-SYNCHRONIZATION MODE FOR USE IN REPLICATION OF DATA FROM SOURCE TO TARGET

EMC IP Holding Company LL...

1. An apparatus comprising:a storage system comprising a plurality of storage devices and a storage controller;
the storage system being configured to participate as a target storage system in a replication process with a source storage system;
wherein in conjunction with a re-synchronization mode of the replication process, the target storage system is further configured:
to receive from the source storage system a plurality of content-based signatures of respective data pages of a storage object that is subject to replication from the source storage system to the target storage system, the data pages being identified by respective ones of a plurality of logical addresses;
for a given one of the received content-based signatures corresponding to a particular logical address:
to compare a reduced version of the received content-based signature with a particular one of a plurality of entries of an address-to-signature table maintained by the target storage system;
responsive to a match between the reduced version of the received content-based signature and the particular one of the entries of the address-to-signature table, to compare a full version of the received content-based signature with a full version of the content-based signature corresponding to the particular entry; and
responsive to a match between the full version of the received content-based signature and the full version of the content-based signature corresponding to the particular entry, to provide an indication of successful re-synchronization to the source storage system for the storage object data page having the received content-based signature;
wherein the target storage system is implemented using at least one processing device comprising a processor coupled to a memory.
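The two-stage signature comparison in the re-synchronization mode can be sketched briefly. The hash choice (SHA-256) and the reduction width are assumptions for illustration; the claimed structure is only that a cheap reduced-signature check against the address-to-signature table gates a full-signature comparison.

```python
# Sketch of the claimed two-stage check: compare a reduced (truncated)
# content-based signature first, and only on a match compare the full
# signature. Hash and truncation width are illustrative assumptions.

import hashlib

REDUCED_BYTES = 4  # assumed width of the reduced signature

def signature(page: bytes) -> str:
    return hashlib.sha256(page).hexdigest()

def reduced(sig: str) -> str:
    return sig[:REDUCED_BYTES * 2]  # first bytes of the hex digest

def verify_page(table: dict, addr: int, received_sig: str) -> bool:
    """Return True if the page at addr is already in sync on the target."""
    entry = table.get(addr)
    if entry is None or reduced(received_sig) != reduced(entry):
        return False                 # cheap reduced check failed
    return received_sig == entry     # full check only after a match

table = {7: signature(b"page-data")}
ok = verify_page(table, 7, signature(b"page-data"))    # already in sync
stale = verify_page(table, 7, signature(b"new-data"))  # needs copying
```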

US Pat. No. 10,394,484

STORAGE SYSTEM

Hitachi, Ltd., Tokyo (JP...

1. A storage system including a plurality of storage nodes connected via a network, the storage system comprising:a first storage node, a second storage node, and a third storage node,
wherein the first storage node receives write data of an object,
the first storage node generates a plurality of distributedly arranged write data blocks from the write data and generates a first redundant data block from the plurality of distributedly arranged write data blocks,
the first storage node transmits each of the plurality of distributedly arranged write data blocks and the first redundant data block to different storage nodes,
the different storage nodes comprise the second storage node and the third storage node, and an arrangement destination of the first redundant data block is the third storage node,
the second storage node selects a plurality of distributedly arranged write data blocks from distributedly arranged write data blocks held therein, and rearrangement destination storage nodes of the plurality of selected distributedly arranged write data blocks are different from one another,
the second storage node generates a second redundant data block from the plurality of selected distributedly arranged write data blocks, and
the second storage node rearranges each of the plurality of selected distributedly arranged write data blocks to the rearrangement destination storage node and further arranges the second redundant data block to a storage node other than the rearrangement destination storage node, so that the write data of the object received by the first storage node is rearranged to any one of the plurality of storage nodes.

US Pat. No. 10,394,483

TARGET VOLUME SHADOW COPY

International Business Ma...

1. A method for preventing data loss in target volumes of copy service functions, the method comprising:detecting a copy service function that copies data from a source volume to a target volume; and
automatically performing the following in response to detecting the copy service function:
creating a shadow volume to receive data overwritten on the target volume; and
establishing a point-in-time copy relationship between the target volume and the shadow volume to preserve data on the target volume as writes are received thereto.
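The target-protection idea above reduces to a simple copy-on-overwrite sketch. The dictionary model and function name are assumptions; from the claim we take only that, on detecting a copy service function, a shadow volume is created to receive data about to be overwritten on the target.

```python
# Minimal sketch of the claimed flow: when a copy service function to a
# target volume is detected, a shadow volume preserves any target data
# that is about to be overwritten.

def start_copy_service(source: dict, target: dict) -> dict:
    """Copy source->target; return the shadow holding overwritten data."""
    shadow = {}                        # created on detecting the copy
    for addr, data in source.items():
        if addr in target:             # target data about to be lost
            shadow[addr] = target[addr]
        target[addr] = data
    return shadow

source = {0: b"s0", 1: b"s1"}
target = {1: b"t1", 2: b"t2"}
shadow = start_copy_service(source, target)  # preserves old block 1
```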

US Pat. No. 10,394,482

SNAP TREE ARBITRARY REPLICATION

SEAGATE TECHNOLOGY LLC, ...

1. A system comprising:a storage controller operable to:
initialize a first replication process between a first storage volume of a first storage system and a second storage volume of a second storage system;
copy, as part of the first replication process, content from a first system snapshot of the first storage volume to a second system snapshot of the first storage volume; and
copy, as part of the first replication process, content from a first user snapshot of the first storage volume to the first system snapshot of the first storage volume, system snapshots being initiated by the first storage system or by the second storage system, and user snapshots being initiated by a user of the first storage system or of the second storage system, or of both, wherein system snapshots are not accessible to the user and user snapshots are accessible to the user.

US Pat. No. 10,394,481

REDUCING APPLICATION INPUT/OUTPUT OPERATIONS FROM A SERVER HAVING DATA STORED ON DE-DUPED STORAGE

International Business Ma...

1. A method implemented in a computer for reducing input/output operations between the computer and a storage device accessible by the computer, the method comprising:tracking, by the computer, a list of block numbers associated with a plurality of disk blocks being moved from the storage device into a memory of the computer;
arranging, by the computer, the plurality of disk blocks in the memory in a data structure including a set of primary disk blocks associated with a list of primary block numbers;
sending, by the computer, a query to a deduplication engine of the storage device, wherein the query includes a request for a list of deduplicated disk blocks;
receiving, by the computer, from the deduplication engine of the storage device, the list of deduplicated disk blocks with a list of corresponding block numbers, wherein the received list of deduplicated disk blocks includes indications of similarities among the list of deduplicated disk blocks;
identifying, by the computer and based on the indications of similarities in the received list, a set of disk blocks among the deduplicated disk blocks that are similar to the plurality of disk blocks in the memory;
rearranging, by the computer, the primary disk blocks in the data structure in order to update the list of primary block numbers to point to a plurality of block numbers associated with the identified set of disk blocks among the list of deduplicated disk blocks;
in response to a request to access a particular disk block, identifying, by the computer, a particular primary disk block number of a particular primary disk block in the rearranged data structure; and
reading, by the computer, the particular primary disk block from the memory instead of reading the requested particular disk block from the storage device.

US Pat. No. 10,394,480

STORAGE DEVICE AND STORAGE DEVICE CONTROL METHOD

Hitachi, Ltd., Tokyo (JP...

1. A storage device comprising:a main memory;
a nonvolatile semiconductor memory; and
a processor connected to the main memory and the nonvolatile semiconductor memory, wherein:
the processor stores in the nonvolatile semiconductor memory at least part of meta data indicative of the relationship between logical addresses provided to a higher-level device and physical addresses of user data in the nonvolatile semiconductor memory, and stores part of the meta data in the main memory;
the processor allocates blocks in the nonvolatile semiconductor memory as allocated user blocks for storing the user data and as allocated meta blocks for storing the meta data;
the processor is capable of performing an unoccupied user block generation process and an unoccupied meta block generation process, the unoccupied user block generation process being adapted to move user data stored in the allocated user blocks in order to generate unoccupied user blocks serving as unoccupied blocks among the allocated user blocks, the unoccupied meta block generation process being adapted to move meta data stored in the allocated meta blocks in order to generate unoccupied meta blocks serving as unoccupied blocks among the allocated meta blocks;
the processor calculates the number of unoccupied meta blocks to be consumed, that is, the number of unoccupied meta blocks to be consumed by the unoccupied user block generation process; and
the processor performs the unoccupied meta block generation process based on the number of unoccupied meta blocks to be consumed.

US Pat. No. 10,394,479

SOLID STATE STORAGE DEVICE WITH QUICK BOOT FROM NAND MEDIA

Micron Technology, Inc., ...

1. A memory device, comprising:a memory media having a region that stores initialization information; and
a controller operably coupled to the memory media, wherein the controller is configured to—
determine whether the initialization information stored at the region of the memory media is valid,
initialize the memory device based at least in part on the initialization information when valid, and
invalidate the initialization information stored at the region of the memory media by writing to the region of the memory media without first erasing the region of the memory media.
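The invalidate-by-overwrite trick above exploits NAND program semantics: programming can only clear bits (1 to 0) without an erase, so writing over the region destroys its validity marker. The magic-byte layout below is an assumption introduced for the example.

```python
# Sketch of the claimed invalidation: NAND program operations can only
# clear bits (1 -> 0) without an erase, so programming zeros over the
# initialization region breaks its validity marker. Layout is assumed.

def program_without_erase(region: bytearray, pattern: bytes):
    """Model NAND programming: a bit can only transition from 1 to 0."""
    for i, byte in enumerate(pattern):
        region[i] &= byte          # AND models program-only semantics

def is_valid(region: bytearray, magic: bytes) -> bool:
    return region[:len(magic)] == magic

MAGIC = b"\xAA\x55"                          # assumed validity marker
region = bytearray(MAGIC + b"\xFF" * 6)      # valid initialization info
valid_before = is_valid(region, MAGIC)
program_without_erase(region, b"\x00\x00")   # invalidate without erasing
valid_after = is_valid(region, MAGIC)
```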

US Pat. No. 10,394,478

METHOD AND APPARATUS FOR STORAGE DEVICE ALLOCATION

EMC IP Holding Company LL...

1. A method for storage management, comprising:in response to a plurality of storage devices in a storage system that are to be allocated to an unallocated logic storage area, determining a plurality of allocation schemes for allocating the plurality of storage devices to the unallocated logic storage area;
obtaining an allocation uniformity of the plurality of storage devices with respect to an allocated logic storage area of the storage system; and
selecting one of the plurality of allocation schemes at least based on the allocation uniformity, such that a uniform degree of an allocation has a minimum variation, the selecting of one of the plurality of allocation schemes including:
obtaining an allocation status of the plurality of storage devices with respect to the allocated logic storage area and sizes of the plurality of storage devices;
generating, based on the allocation status and the plurality of allocation schemes, allocation status candidates of the plurality of storage devices with respect to the allocated logic storage area and the unallocated logic storage area of the storage system;
determining, based on the allocation status candidates and the sizes of the plurality of storage devices, allocation uniformity candidates of the plurality of storage devices with respect to the allocated logic storage area and the unallocated logic storage area of the storage system; and
selecting, from the plurality of allocation schemes, an allocation scheme corresponding to one of the allocation uniformity candidates that has a minimum difference from the allocation uniformity, the allocation status being represented as a matrix, and each element of the matrix representing a number of times for allocating blocks in two of the plurality of storage devices to the same allocated logic storage area.
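The scheme-selection step can be sketched with a concrete uniformity measure. Using the variance of per-device allocation counts as the uniformity metric is an assumption for illustration; the claimed structure is only that each scheme yields a candidate status, and the scheme whose resulting uniformity differs least from the current uniformity is selected.

```python
# Illustrative sketch of the claimed selection: compute each scheme's
# candidate uniformity and pick the scheme with the minimum difference
# from the current uniformity. Variance as the metric is an assumption.

from statistics import pvariance

def uniformity(counts: list) -> float:
    return pvariance(counts)

def select_scheme(current: list, schemes: list) -> list:
    """Pick the scheme whose candidate uniformity changes the least."""
    base = uniformity(current)
    def delta(scheme):
        candidate = [c + s for c, s in zip(current, scheme)]
        return abs(uniformity(candidate) - base)
    return min(schemes, key=delta)

current = [4, 4, 2, 2]                  # allocations per storage device
schemes = [[1, 1, 0, 0], [0, 0, 1, 1]]  # candidate placements
best = select_scheme(current, schemes)  # favors evening out the counts
```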

US Pat. No. 10,394,477

METHOD AND SYSTEM FOR MEMORY ALLOCATION IN A DISAGGREGATED MEMORY ARCHITECTURE

International Business Ma...

1. A method for allocating memory, the method comprising:receiving, by a computer, an allocation request for memory allocation to a computer node, wherein the computer node resides in a server cluster comprising a plurality of server racks, the server racks comprising a plurality of server levels, the server levels comprising a plurality of computer nodes;
determining, by a computer, whether execution of the allocation request is to be carried out within the server cluster on a cluster level, a server rack level, or on a server level;
retrieving, by a computer, a memory policy associated with the determined level of the server cluster for executing the allocation request, the memory policy being retrieved from a memory policy database;
determining, by a computer, an amount of available memory for allocation on the determined level of the server cluster;
determining whether there is enough of the available memory to meet the allocation request on the determined level of the server cluster; and
allocating, by a computer, the available memory on the determined level of the server cluster to address the received request based on the retrieved memory policy, in response to determining there is enough of the available memory on the determined level of the server cluster to meet the allocation request.

US Pat. No. 10,394,476

MULTI-LEVEL STAGE LOCALITY SELECTION ON A LARGE SYSTEM

PURE STORAGE, INC., Moun...

1. A method for execution by a computing device of a dispersed storage network (DSN), the method comprises:obtaining a plurality of write requests; and
for a write request of the plurality of write requests:
generating a vault identification and a generation number;
obtaining a rounded timestamp and a capacity factor;
generating a temporary object number based on the rounded timestamp and the capacity factor;
generating a temporary source name based on the vault identification, the generation number, and the temporary object number; and
identifying a set of storage units of a plurality of sets of storage units of the DSN based on the temporary source name.
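The per-request naming steps above can be sketched end to end. The rounding interval, hash, and field layout are all assumptions; the claimed structure is only that a temporary object number derives from the rounded timestamp and capacity factor, combines with the vault identification and generation number into a temporary source name, and that name selects a set of storage units.

```python
# Sketch of the claimed write-request flow: rounded timestamp + capacity
# factor -> temporary object number -> temporary source name -> storage
# set selection. Interval, hash, and layout are illustrative assumptions.

import hashlib

ROUND_SECS = 60  # assumed timestamp rounding interval

def temp_object_number(timestamp: int, capacity_factor: int) -> int:
    rounded = (timestamp // ROUND_SECS) * ROUND_SECS
    return (rounded * capacity_factor) & 0xFFFFFFFF

def temp_source_name(vault_id: str, generation: int, obj_num: int) -> str:
    return f"{vault_id}:{generation}:{obj_num:08x}"

def pick_storage_set(source_name: str, num_sets: int) -> int:
    """Map the temporary source name to one of the storage-unit sets."""
    digest = hashlib.sha256(source_name.encode()).digest()
    return digest[0] % num_sets

obj = temp_object_number(timestamp=1_000_030, capacity_factor=3)
name = temp_source_name("vaultA", 7, obj)
which = pick_storage_set(name, num_sets=4)  # index of the chosen set
```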

US Pat. No. 10,394,475

METHOD AND SYSTEM FOR MEMORY ALLOCATION IN A DISAGGREGATED MEMORY ARCHITECTURE

International Business Ma...

1. A computer program product for allocating memory, the computer program product comprising:
one or more non-transitory computer-readable storage media and program instructions stored on the one or more non-transitory computer-readable storage media, the program instructions comprising:
program instructions to receive, by a computer, an allocation request for memory allocation to a computer node, wherein the computer node resides in a server cluster comprising a plurality of server racks, the server racks comprising a plurality of server levels, the server levels comprising a plurality of computer nodes;
program instructions to determine, by a computer, whether execution of the allocation request is to be carried out within the server cluster on a cluster level, a server rack level, or on a server level;
program instructions to retrieve, by a computer, a memory policy associated with the determined level of the server cluster for executing the allocation request, the memory policy being retrieved from a memory policy database;
program instructions to determine, by a computer, an amount of available memory for allocation on the determined level of the server cluster;
program instructions to determine, by a computer, whether there is enough of the available memory to meet the allocation request on the determined level of the server cluster; and
program instructions to allocate, by a computer, the available memory on the determined level of the server cluster to address the received request based on the retrieved memory policy, in response to determining there is enough of the available memory on the determined level of the server cluster to meet the allocation request.

US Pat. No. 10,394,474

DEVICES, SYSTEMS, AND METHODS FOR RECONFIGURING STORAGE DEVICES WITH APPLICATIONS

SMART IOPS, INC., Milpit...

1. A solid-state storage device, comprising:
non-volatile memory for data storage; and
a first controller for storing data in the non-volatile memory, the first controller operably coupled to the non-volatile memory and comprising a first processor configured to:
receive an indication to reconfigure the first controller with a first application, the first application for performing a first algorithm on user-selected data stored in the non-volatile memory to generate resulting data, wherein the first application is user-selected to be run;
receive the first application;
reconfigure the first controller with the first application such that the first controller is enabled to run the first application;
receive an indication to run the first application with a first set of user-selected data stored in the non-volatile memory as input for the first algorithm;
receive the first set of user-selected data from the non-volatile memory;
after the first controller is reconfigured with the first application, run the first application with the first set of user-selected data as input for the first algorithm;
generate first resulting data from running the first application with the first set of user-selected data as input for the first algorithm;
receive an indication to reconfigure the first controller with a second application, the second application for performing a second algorithm on user-selected data stored in the non-volatile memory to generate resulting data, wherein the second application is user-selected to be run, and wherein the second application and second algorithm are different than the first application and the first algorithm, respectively;
receive the second application;
reconfigure the first controller with the second application such that the first controller is enabled to run the second application;
receive an indication to run the second application with a second set of user-selected data stored in the non-volatile memory as input for the second algorithm;
receive the second set of user-selected data from the non-volatile memory;
after the first controller is reconfigured with the second application, run the second application with the second set of user-selected data as input for the second algorithm; and
generate second resulting data from running the second application with the second set of user-selected data as input for the second algorithm.

US Pat. No. 10,394,473

APPARATUSES AND METHODS FOR ARBITRATING A SHARED TERMINAL FOR CALIBRATION OF AN IMPEDANCE TERMINATION

Micron Technology, Inc., ...

1. An apparatus comprising:
a resistor coupled between a supply voltage and a terminal; and
a device including an impedance control circuit coupled to the terminal, and the impedance control circuit configured, responsive to a calibration command, to:
monitor a voltage of the terminal to detect whether the voltage of the terminal is within a first voltage range; and
start a calibration operation using the resistor when the voltage of the terminal has been detected to be within the first voltage range.

US Pat. No. 10,394,472

CLASSIFICATION AND IDENTIFICATION FROM RAW DATA WITHIN A MEMORY DOMAIN

EMC IP Holding Company LL...

1. A computer-executable method of managing one or more tiers of memory of a host computing system, the computer-executable method comprising:
accessing a portion of low level, raw data from a memory page associated with data stored on the one or more tiers of memory;
sampling the portion of low level, raw data to select a sample data chunk;
analyzing the sample data chunk to determine a sample category; and
classifying the portion of low level, raw data based at least in part by considering the sample category, wherein the classifying further comprises creating a sample survey of a desired statistical confidence limit for the sample data chunk.
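One way to picture the sample-then-classify steps, as a sketch under stated assumptions: the byte-level category signatures, chunk size, and fixed-seed sampling below are invented for illustration, and the claim's statistical-confidence survey is not modeled.

```python
import random

# Hypothetical byte-level signatures for sample categories.
CATEGORY_SIGNATURES = {
    "zeros": lambda chunk: all(b == 0 for b in chunk),
    "text": lambda chunk: all(32 <= b < 127 or b in (9, 10, 13) for b in chunk),
}


def sample_chunk(page, chunk_size=16, rng=None):
    """Sample a chunk from the raw page data (fixed seed keeps the sketch deterministic)."""
    rng = rng or random.Random(0)
    start = rng.randrange(0, max(1, len(page) - chunk_size))
    return page[start:start + chunk_size]


def classify_page(page, chunk_size=16):
    """Analyze the sample chunk to determine a sample category, then classify the page."""
    chunk = sample_chunk(page, chunk_size)
    for category, match in CATEGORY_SIGNATURES.items():
        if match(chunk):
            return category
    return "binary"  # fallback when no signature matches
```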

US Pat. No. 10,394,471

ADAPTIVE POWER REGULATION METHODS AND SYSTEMS

QUALCOMM Incorporated, S...

27. An adaptive data retention voltage (DRV) circuit, comprising:
a sensor circuit comprising a current starved ring oscillator and a frequency counter, the sensor circuit configured to:
determine a first leakage current corresponding to P-type metal oxide semiconductor (MOS) (PMOS) transistors indicative of a process variation;
determine a second leakage current corresponding to N-type MOS (NMOS) transistors indicative of the process variation;
generate a first frequency based on the first leakage current; and
generate a second frequency based on the second leakage current; and
a controller circuit configured to:
generate a first speed characteristic based on the first frequency;
generate a second speed characteristic based on the second frequency;
generate a process corner identifier based on the first speed characteristic and the second speed characteristic;
generate an adaptive DRV based on the process corner identifier; and
generate DRV control signals based on the adaptive DRV.
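The controller-side mapping from leakage-derived frequencies to a corner identifier and an adaptive DRV can be sketched as below. The speed bands, the letter ordering in the corner identifier, and the DRV lookup values are all hypothetical, not from the claim.

```python
def speed_characteristic(freq_hz, nominal_hz):
    """Classify a leakage-derived frequency as slow (S), typical (T), or fast (F)."""
    ratio = freq_hz / nominal_hz
    if ratio < 0.9:
        return "S"
    if ratio > 1.1:
        return "F"
    return "T"


def process_corner(pmos_freq, nmos_freq, nominal_hz):
    """Corner identifier from the two speed characteristics (NMOS-first ordering assumed)."""
    return speed_characteristic(nmos_freq, nominal_hz) + speed_characteristic(pmos_freq, nominal_hz)


# Hypothetical adaptive DRV values per corner, in millivolts.
DRV_TABLE_MV = {"SS": 650, "TT": 600, "FF": 550}


def adaptive_drv_mv(corner):
    """Generate an adaptive DRV; default to the typical value for unlisted corners."""
    return DRV_TABLE_MV.get(corner, 600)
```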

US Pat. No. 10,394,470

METHOD FOR BACKING UP DATA ON TAPE

International Business Ma...

1. An apparatus, comprising:
a head configured to write data to a tape; and
a hardware controller coupled to the head, the controller being configured to cause the apparatus to:
copy a second data area on the tape as a third data area, the second data area corresponding to data in a first data area that has changed;
store, on the tape, index information for identifying the third data area;
copy the first data area to the tape as a fourth data area separate from the third data area; and
store, on the tape, index information for identifying the fourth data area.

US Pat. No. 10,394,469

DETECTING AND HANDLING SOLICITED IO TRAFFIC MICROBURSTS IN A FIBRE CHANNEL STORAGE AREA NETWORK

Cisco Technology, Inc., ...

1. A method comprising:
at a Fibre Channel (FC) or FC-over-Ethernet (FCoE) switch having ports to forward Input-Output (IO) requests, and service data transfers, between end devices in a storage area network:
receiving at a port among the ports a time ordered sequence of IO requests for data transfers to be serviced by the port, each IO request including a data length of a data transfer;
detecting a microburst on the port for each IO request, the detecting including:
parsing the IO request to retrieve the data length;
determining a transfer time required to transfer the data length over the port;
upon receiving a next IO request, determining whether a time interval between the IO request and the next IO request is less than the transfer time; and
if the time interval is less than the transfer time, declaring that a microburst is detected on the port, otherwise not declaring that a microburst is detected;
computing a frequency of the microbursts detected on the port over time; and
when the frequency exceeds a threshold, taking action to reduce an impact of the port on the storage area network.
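The detection rule in the claim is concrete enough to sketch directly: a microburst is declared when the gap to the next IO request is shorter than the time needed to transfer the current request's data over the port. The request tuples, link-rate parameter, and windowing below are illustrative assumptions.

```python
def detect_microbursts(io_requests, link_bytes_per_sec):
    """io_requests: time-ordered list of (arrival_time_s, data_length_bytes) tuples."""
    bursts = 0
    for (t, length), (t_next, _) in zip(io_requests, io_requests[1:]):
        transfer_time = length / link_bytes_per_sec   # time required to transfer the data length
        if (t_next - t) < transfer_time:              # next IO arrived before this one could drain
            bursts += 1                               # declare a microburst on the port
    return bursts


def microburst_frequency(io_requests, link_bytes_per_sec, window_s):
    """Frequency of microbursts detected on the port over an observation window."""
    return detect_microbursts(io_requests, link_bytes_per_sec) / window_s
```

A caller would compare the returned frequency against a threshold before taking corrective action on the port, per the final step of the claim.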

US Pat. No. 10,394,468

HANDLING DATA SLICE REVISIONS IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a storage unit of a dispersed storage network (DSN) that includes a processor, the method comprises:
receiving, via a network, a data slice for storage;
writing the data slice by generating a first bin that includes the data slice and storing the first bin in a first location of a memory device of the storage unit;
generating an original bin pointer associated with the data slice that includes a reference to the first location;
receiving, via the network, a revision of the data slice;
writing the revision of the data slice by generating a second bin that includes the revision of the data slice and storing the second bin in a second location of the memory device, wherein the second bin is a revised version of the first bin;
generating a modified bin pointer by editing the original bin pointer to include a reference to the second location;
generating a back pointer associated with the revision of the data slice that references the first location in response to commencing writing of the revision of the data slice; and
deleting the back pointer in response to determining that the revision of the data slice has reached a finalized write stage;
wherein the original bin pointer is stored in a random access memory (RAM) of the storage unit, and wherein the back pointer is generated by retrieving the original bin pointer from RAM; and
wherein the back pointer is generated in conjunction with generating the modified bin pointer, wherein the modified bin pointer includes the back pointer, and wherein the back pointer is deleted from the modified bin pointer in response to determining that the revision of the data slice has reached the finalized write stage.
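The pointer bookkeeping recited above — original bin pointer in RAM, back pointer created while a revision is in flight, deleted at the finalized write stage — can be sketched with dictionaries standing in for the memory device and RAM. Class and field names are illustrative.

```python
class BinPointerStore:
    """Sketch of the claimed bin-pointer bookkeeping; not the patented implementation."""

    def __init__(self):
        self.memory = {}   # location -> bin contents (stands in for the memory device)
        self.ram = {}      # slice name -> bin pointer held in RAM

    def write_slice(self, name, data, location):
        """Write the data slice into a first bin and record the original bin pointer."""
        self.memory[location] = data
        self.ram[name] = {"location": location, "back": None}

    def write_revision(self, name, data, location):
        """Write the revision into a second bin; the modified pointer carries a back pointer."""
        original = self.ram[name]                        # original pointer retrieved from RAM
        self.ram[name] = {"location": location,
                          "back": original["location"]}  # back pointer to the first location
        self.memory[location] = data

    def finalize(self, name):
        """Delete the back pointer once the revision reaches the finalized write stage."""
        self.ram[name]["back"] = None
```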

US Pat. No. 10,394,467

FLEXIBLE DEPLOYMENT AND MIGRATION OF VIRTUAL MACHINES

International Business Ma...

1. A method for assigning a set of network names to storage access paths of virtual machines accessing storage resources via storage area networks, the method comprising:
identifying a maximum number of storage access paths that each virtual machine of a plurality of virtual machines is able to use on any server among a plurality of servers;
for each virtual machine of the plurality of virtual machines:
assigning a plurality of source port names equal to the maximum number of storage access paths that the virtual machine can use on any of the plurality of servers;
selecting a maximum number of concurrent live-migrations between the plurality of virtual machines; and
generating a plurality of target port names, wherein a quantity of the plurality of target port names is based on the product of:
the maximum number of storage access paths that the virtual machine can use on any of the plurality of servers; and
the maximum number of concurrent live-migrations between the plurality of virtual machines.
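The name counts in this claim reduce to simple arithmetic: one source port name per usable path, and a target-name quantity based on the product of the two maxima. A sketch, with hypothetical name formats:

```python
def source_port_names(vm_id, max_paths):
    """One source port name per storage access path the VM can use on any server."""
    return [f"{vm_id}-src-{i}" for i in range(max_paths)]


def target_port_names(vm_id, max_paths, max_concurrent_migrations):
    """Target-name quantity based on the product of the two maxima, per the claim."""
    count = max_paths * max_concurrent_migrations
    return [f"{vm_id}-tgt-{i}" for i in range(count)]
```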

US Pat. No. 10,394,466

SUPPORTING MPIO FOR LOGICAL VOLUME BACKED VIRTUAL DISKS

International Business Ma...

1. A method, comprising:
creating, from a physical storage device, a logical volume on a first virtual input/output server (VIOS), wherein the logical volume on the first VIOS is activated in a first access mode;
importing the logical volume to a second VIOS, wherein the logical volume on the second VIOS is activated in a second access mode different from the first access mode; and
mapping the logical volume on the first and second VIOSs as a backing storage device for at least one logical partition hosted on a computing system.

US Pat. No. 10,394,465

SEMICONDUCTOR DEVICE WITH TEMPORARY MEMORY CHIP AND METHOD FOR DRIVING THE SAME

SK hynix Inc., Gyeonggi-...

1. A semiconductor device, comprising:
a first memory chip including a plurality of first memory regions;
a second memory chip stacked vertically on the first memory chip and including a plurality of second memory regions;
a temporary memory chip including a plurality of temporary memory regions; and
a control chip suitable for accessing a first access target memory region among the plurality of first memory regions or a first temporary memory region among the plurality of temporary memory regions based on first access information and first temperature readout information corresponding to the plurality of first memory regions,
wherein the control chip selects which region write data will be written into by comparing a first threshold value with a first temperature of the first access target memory region and comparing a second threshold value with a second temperature of a second memory region vertically adjacent to the first access target memory region,
wherein when the first temperature of the first access target memory region is lower than the first threshold value and the second temperature of the second memory region vertically adjacent to the first access target memory region is lower than the second threshold value, the control chip controls write data to be written in the first access target memory region,
wherein when the first temperature of the first access target memory region is higher than the first threshold value or the second temperature of the second memory region vertically adjacent to the first access target memory region is higher than the second threshold value, the control chip controls write data to be written in the first temporary memory region,
wherein the first memory chip includes a plurality of first temperature sensing blocks suitable for generating first oscillating signals corresponding to temperatures of the plurality of first memory regions, and
wherein the control chip includes a plurality of temperature readout blocks suitable for generating the first temperature readout information based on the first oscillating signals.
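The two-threshold routing decision in this claim can be sketched as a small function. The claim leaves the equal-temperature case unspecified; the sketch follows the "higher than" branch for it, which is an assumption.

```python
def choose_write_region(temp_target, temp_adjacent, threshold1, threshold2):
    """Route the write per the claim's two comparisons (illustrative sketch)."""
    if temp_target < threshold1 and temp_adjacent < threshold2:
        return "target"      # both regions cool: write into the first access target region
    return "temporary"       # either region hot (or equal, by assumption): divert to the temporary region
```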

US Pat. No. 10,394,464

VOLATILE MEMORY ACCESS MODE IN AN ELECTRONIC TERMINAL FOR PROTECTING APPLICATION FILES FROM FILE OPERATIONS

Telefonaktiebolaget LM Er...

1. An electronic terminal comprising:
at least one processor; and
at least one memory coupled to the at least one processor, the at least one memory comprising computer readable program code that, when executed by the at least one processor, causes the at least one processor to perform operations, the operations comprising:
switching from a normal memory access mode to a volatile memory access mode responsive to receiving a user input;
receiving an open-write command from an application to open a first file for writing;
determining whether the first file is located in a volatile memory partition of the at least one memory;
based on determining that the first file is not located in the volatile memory partition of the at least one memory:
copying the first file from a normal memory partition of the at least one memory to the volatile memory partition, and
opening the first file located in the volatile memory partition for writing;
based on determining that the first file is located in the volatile memory partition:
opening the first file located in the volatile memory partition for writing; and
directing write commands from the application to the first file located in the volatile memory partition;
receiving an open-read command from the application to open a second file for reading;
determining whether the second file is located in the volatile memory partition of the at least one memory;
based on determining that the second file is not located in the volatile memory partition of the at least one memory:
opening the second file located in the normal memory partition for reading, and
directing read commands from the application to the second file located in the normal memory partition; and
based on determining that the second file is located in the volatile memory partition:
opening the second file located in the volatile memory partition for reading; and
directing read commands from the application to the second file located in the volatile memory partition.
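The copy-on-write-open behavior in volatile memory access mode can be pictured with dictionaries standing in for the two partitions: writes always land in the volatile partition (copying the file there first if needed), while reads prefer the volatile copy when one exists. The class and its in-memory model are illustrative only.

```python
class VolatileModeFS:
    """Sketch of the volatile memory access mode; dicts stand in for the partitions."""

    def __init__(self, normal_files):
        self.normal = dict(normal_files)  # normal (persistent) memory partition
        self.volatile = {}                # volatile memory partition, lost at power-off

    def open_write(self, name):
        """Open for writing: copy into the volatile partition first if not already there."""
        if name not in self.volatile:
            self.volatile[name] = self.normal.get(name, b"")
        return ("volatile", name)         # write commands are directed to the volatile copy

    def open_read(self, name):
        """Open for reading: prefer the volatile copy, else read the normal partition."""
        if name in self.volatile:
            return ("volatile", name)
        return ("normal", name)

    def write(self, name, data):
        self.open_write(name)
        self.volatile[name] = data        # the normal partition is never modified
```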

US Pat. No. 10,394,463

MANAGING STORAGE DEVICES HAVING A LIFETIME OF A FINITE NUMBER OF OPERATIONS

International Business Ma...

1. A computer-implemented method of managing a plurality of storage devices, the storage devices having a lifetime of a finite number of operations, the method comprising:
calculating an average number of storage devices reaching said lifetime of a finite number of operations per first unit time;
for each one of the plurality of storage devices calculating an estimated date when said finite number of operations will be reached;
for each date, setting a variable associated with that date, the variable being related to a number of storage devices reaching said finite number of operations within a predetermined period of said date; and
for one or more variables associated with a date where the value of the variable is larger than the average number of storage devices reaching said lifetime of a finite number of operations per first unit time, carrying out an action to reduce the number of storage devices reaching said lifetime per first unit of time.
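The per-date bookkeeping in this claim amounts to estimating a wear-out date per device, counting devices per date, and flagging dates whose count exceeds the fleet average. A sketch, with the estimate simplified to lifetime divided by a constant operation rate (an assumption):

```python
from collections import Counter
from datetime import date, timedelta


def wear_out_date(start, lifetime_ops, ops_per_day):
    """Estimated date when the device's finite operation budget is spent."""
    return start + timedelta(days=lifetime_ops // ops_per_day)


def wear_out_histogram(devices, start):
    """Per-date variable: how many devices reach their finite operation count on each date."""
    return Counter(wear_out_date(start, life, rate) for life, rate in devices)


def dates_needing_action(histogram, average_per_day):
    """Dates where more devices expire than the average per first unit time."""
    return sorted(d for d, n in histogram.items() if n > average_per_day)
```

The corrective action itself (e.g. rebalancing writes to spread the expiries) is outside this sketch.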

US Pat. No. 10,394,462

DATA SHAPING TO REDUCE MEMORY WEAR IN A MULTI-TENANT DATABASE

Amazon Technologies, Inc....

1. A multi-tenant database system comprising:
a memory device, wherein a subset of operations on the memory device cause the memory device to be degraded;
one or more computing nodes that maintain a first dataset and a second dataset; and
one or more memories having stored thereon computer-readable instructions that, upon execution by the one or more computing nodes, cause the system at least to:
form a mapping from a subset of the first dataset to a symbol, the symbol selected based at least in part to minimize operations in the subset of operations that degrade the memory device associated with storing the symbol on the memory device, wherein selection of the symbol is further based at least in part on a pattern of changes to at least one of a field of the first dataset, a column of the first dataset, or an attribute of the first dataset and wherein selection of the symbol is independent of the second dataset; and
cause the symbol to be stored on the memory device, the symbol representative of the subset of the first dataset.

US Pat. No. 10,394,461

ADDRESSING USAGE OF SHARED SSD RESOURCES IN VOLATILE AND UNPREDICTABLE OPERATING ENVIRONMENTS

INTERNATIONAL BUSINESS MA...

1. A method comprising:
determining, by a computer device, an input/output (I/O) wait time threshold for a computing instance;
determining, by the computer device, an I/O wait time of the computing instance; and
in response to the determined I/O wait time of the computing instance exceeding the determined I/O wait time threshold of the computing instance, moving, by the computer device, a data extent associated with the computing instance exceeding the determined I/O wait time threshold from hard disk drive (HDD) storage to solid state drive (SSD) storage.
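The tiering trigger here is a single comparison per instance: when observed I/O wait exceeds that instance's threshold, its data extent moves from HDD to SSD. A sketch with hypothetical names:

```python
class TierManager:
    """Sketch of the claimed trigger; per-instance thresholds are supplied by the caller."""

    def __init__(self, thresholds_ms):
        self.thresholds = thresholds_ms               # I/O wait time threshold per instance
        self.tier = {i: "HDD" for i in thresholds_ms}  # extents start on HDD storage

    def observe(self, instance, wait_ms):
        """Record an observed I/O wait time and move the extent if the threshold is exceeded."""
        if wait_ms > self.thresholds[instance]:
            self.tier[instance] = "SSD"               # move the data extent to SSD storage
        return self.tier[instance]
```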

US Pat. No. 10,394,460

ENHANCED DATA BUFFER AND INTELLIGENT NV CONTROLLER FOR SIMULTANEOUS DRAM AND FLASH MEMORY ACCESS

INTEGRATED DEVICE TECHNOL...

1. An apparatus comprising on a DIMM:
an NV Controller on said DIMM, said NV Controller having a host interface, a DRAM interface, a flash memory interface, and a data buffer interface, said host interface in communication with a connector on said DIMM wherein said connector receives a host address, a host clock, a host control, and a host save, said DRAM interface in communication with one or more DRAMs on said DIMM, said flash memory interface in communication with one or more flash memory devices on said DIMM, and said data buffer interface in communication with one or more enhanced data buffers on said DIMM,
wherein said enhanced data buffers have an input coupled to said connector to receive from a host processor data of 8 bits or more, said data coupled to a functional unit,
wherein said functional unit has a DRAM read buffer and a DRAM write buffer, said DRAM read buffer and said DRAM write buffer connected to said one or more DRAMs, and wherein said functional unit has a read buffer including a read FIFO and a write buffer including a write FIFO, said read buffer and said write buffer connected to said NV Controller,
wherein said functional unit is controlled by said host processor via a register command word written in a command space of said NV Controller,
and wherein said connector receives a host Inter-Integrated Circuit (I2C) signal that is connected directly to said NV Controller and wherein a serial presence detect (SPD) is external to said NV Controller and information from said SPD does not go through said NV Controller,
wherein the NV Controller configuration above, using the one or more enhanced data buffers, allows transparent access to the one or more DRAMs on said DIMM and the one or more flash memory devices on said DIMM at the same time, the transparency from a standpoint that the host processor sees a same interface and a level of performance from the one or more DRAMs and the one or more flash memory devices.

US Pat. No. 10,394,459

DATA STORAGE DEVICE FOR FILTERING PAGE IN TWO STEPS, SYSTEM INCLUDING THE SAME, AND METHOD OF OPERATING THE SAME

Samsung Electronics Co., ...

1. A data storage device comprising:
an internal filtering module configured to perform a filtering operation;
a central processing unit (CPU);
a non-volatile memory;
a volatile memory; and
a page type analyzer configured to analyze a type of a page that is output from the non-volatile memory and to transmit an indication signal identifying the type of the page to the CPU according to an analysis result,
wherein according to control of the CPU that operates based on the indication signal,
when the page output from the non-volatile memory is an index page including an index, the internal filtering module is configured to store the index page in the volatile memory, and
when the page output from the non-volatile memory is a user page including user data, the internal filtering module is configured to filter each of a plurality of rows in the user page to provide first filtered data based on a direct memory access (DMA) parameter set, and to transmit the first filtered data to the volatile memory.