US Pat. No. 10,248,589

INTEGRATED CIRCUIT WITH A SERIAL INTERFACE

Dialog Semiconductor (UK)...

1. An integrated circuit coupled to an external serial bus, the integrated circuit comprising:
a serial interface configured to act as a bus slave with regard to the external serial bus and to detect a data address on the external serial bus;
a data cache coupled to the serial interface via an internal bus; and
a prefetch control unit configured to instruct the serial interface to prefetch a data element associated with the detected data address by causing the data element to be read from a target data storage unit associated with the data address and the data element and the data address to be written in the data cache,
wherein the prefetch control unit is configured to not instruct the serial interface to prefetch the data element in case reading data from the target data storage unit by the serial interface involves a turnaround time which is shorter than a predetermined threshold turnaround time; and
wherein the predetermined threshold turnaround time is based on a maximum response time defined by a message protocol of the external serial bus.
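
A minimal sketch of the gating rule recited in this claim, in Python; the class, attribute, and variable names (PrefetchControl, turnaround_time_us, and so on) are illustrative assumptions, not taken from the patent.

    # Illustrative model: prefetch only when the target storage unit's turnaround
    # time meets or exceeds a threshold derived from the bus protocol's maximum
    # response time. All names and numbers are hypothetical.
    class PrefetchControl:
        def __init__(self, max_response_time_us):
            # Threshold based on the maximum response time defined by the bus protocol.
            self.threshold_us = max_response_time_us

        def should_prefetch(self, turnaround_time_us):
            # Skip the prefetch when the target can answer faster than the threshold.
            return turnaround_time_us >= self.threshold_us

    cache = {}

    def handle_detected_address(ctrl, address, storage):
        if ctrl.should_prefetch(storage["turnaround_us"]):
            cache[address] = storage["read"](address)   # data element stored with its address

    slow_store = {"turnaround_us": 120, "read": lambda a: f"data@{a:#x}"}
    handle_detected_address(PrefetchControl(max_response_time_us=50), 0x40, slow_store)
    print(cache)   # {64: 'data@0x40'}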

US Pat. No. 10,248,588

FRAME RECEPTION MONITORING METHOD IN SERIAL COMMUNICATIONS

LSIS CO., LTD., Anyang-s...

1. A frame reception monitoring method in serial communications, the method comprising:
when a plurality of sub-frames constituting a frame is each entered into a reception buffer at certain time intervals, assigning timestamps to the plurality of sub-frames respectively, and allowing the respective sub-frames to be stored on a temporal buffer;
allowing the plurality of sub-frames stored on the temporal buffer to be entered into a service buffer within a predetermined inter-sub-frames time-out time;
using the plurality of sub-frames entered into the service buffer to generate a combined frame; and
using the combined frame to execute a control.
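
A rough Python rendering of the claimed flow, under the assumption that each sub-frame arrives already paired with its timestamp; the function name and timeout value are invented for illustration.

    # Timestamp-ordered sub-frames are promoted from the temporal buffer to the
    # service buffer only if consecutive arrivals stay within the inter-sub-frame
    # time-out; the combined frame is then available to execute a control.
    def assemble_frame(sub_frames_with_timestamps, timeout_s):
        temporal_buffer = sorted(sub_frames_with_timestamps, key=lambda sf: sf[1])
        for (_, t_prev), (_, t_next) in zip(temporal_buffer, temporal_buffer[1:]):
            if t_next - t_prev > timeout_s:
                return None          # time-out exceeded: no combined frame is produced
        service_buffer = [payload for payload, _ in temporal_buffer]
        return b"".join(service_buffer)

    frame = assemble_frame([(b"\x01\x02", 0.000), (b"\x03\x04", 0.004)], timeout_s=0.01)
    print(frame)   # b'\x01\x02\x03\x04'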

US Pat. No. 10,248,587

REDUCED HOST DATA COMMAND PROCESSING

SanDisk Technologies LLC,...

1. A storage system comprising:
a buffer configured to store host data responsive to the host data being received from a host over a bus;
a backend memory for data storage;
a processor; and
a Direct Memory Access (DMA) circuit configured to copy data to the backend memory independently of the processor, the DMA circuit comprising:
a data generator configured to generate the host data responsive to the host data not being received from the host, wherein the DMA circuit is configured to:
copy the host data generated by the data generator to the backend memory independently of the processor responsive to the host data not being received from the host; and
copy the host data in the buffer to the backend memory independently of the processor responsive to the host data being received from the host; and
a metadata generator configured to:
generate the metadata from the host data that is in the buffer responsive to the host data being received from the host; and
generate the metadata from the host data that is generated by the data generator responsive to the host data not being received from the host.
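
A simplified functional sketch, not the patented circuit, of the two DMA paths described above: copy buffered host data when it was received, otherwise use generator output, and derive metadata from whichever source supplied the data. The CRC stand-in for metadata and the fixed pattern are assumptions.

    import zlib

    def dma_transfer(buffer, backend, host_data_received, pattern=b"\x00" * 16):
        # Choose the source: the host buffer if data arrived over the bus,
        # otherwise data produced by the data generator (a fixed pattern here).
        data = buffer if host_data_received else pattern
        backend.append(data)              # copy to backend memory, independent of the processor
        metadata = zlib.crc32(data)       # metadata generated from the same source as the data
        return metadata

    backend_memory = []
    print(dma_transfer(b"host payload", backend_memory, host_data_received=True))
    print(dma_transfer(None, backend_memory, host_data_received=False))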

US Pat. No. 10,248,586

INFORMATION PROCESSING APPARATUS AND DATA TRANSFER METHOD

Canon Kabushiki Kaisha, ...

1. An information processing apparatus comprising:
a first chip;
a second chip; and
a third chip,
wherein the first chip, the second chip, and the third chip are connected in series, wherein the second chip includes
a receiving unit configured to receive, from the first chip, data and address information attached to the data,
a register configured to store address translation information,
a determination unit configured to determine whether the address information attached to the data received by the receiving unit corresponds to an address translation area based on the address translation information set to the register,
an address translation unit configured to translate the address information attached to the data received by the receiving unit and output the translated address information to an internal bus with the data received by the receiving unit,
a controller unit configured to control to store, in a memory for the second chip, data to which address information corresponding to an address area set for the second chip is attached among the data received from the address translation unit via the internal bus, and
a transmission unit configured to transmit, to the third chip, data to which address information corresponding to an address area set for transfer to the third chip is attached among the data received from the address translation unit via the internal bus,
wherein the address translation unit translates address information corresponding to an address area set for the second chip into an address destination in the second chip.
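
A loose Python model of the second chip's routing decision, with made-up address windows and a made-up translation offset standing in for the register contents; none of these values come from the patent.

    # Data addressed to the second chip's own window is translated and stored
    # locally; data addressed to the transfer window is forwarded to the third chip.
    LOCAL_WINDOW   = range(0x0000, 0x4000)   # address area set for the second chip (assumed)
    FORWARD_WINDOW = range(0x4000, 0x8000)   # address area set for transfer to the third chip (assumed)
    TRANSLATION_OFFSET = 0x1000_0000         # stand-in for the address translation information

    local_memory, third_chip_queue = {}, []

    def second_chip_receive(address, data):
        if address in LOCAL_WINDOW:
            # Translate into an address destination inside the second chip, then store.
            local_memory[address + TRANSLATION_OFFSET] = data
        elif address in FORWARD_WINDOW:
            third_chip_queue.append((address, data))   # transmit downstream to the third chip

    second_chip_receive(0x0010, b"local")
    second_chip_receive(0x4010, b"downstream")
    print(local_memory, third_chip_queue)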

US Pat. No. 10,248,585

SYSTEM AND METHOD FOR FILTERING FIELD PROGRAMMABLE GATE ARRAY INPUT/OUTPUT

Oracle International Corp...

1. A method comprising:
receiving a Field Programmable Gate Array (FPGA) program bitstream;
generating, based at least in part on the FPGA bitstream, FPGA programming logic segments;
modifying, based at least in part on the FPGA programming logic segments, an FPGA to comprise a core logic layer and a filtering layer;
receiving, by the filtering layer, Input/Output (I/O) associated with the core logic layer;
modifying, based at least in part on the filtering layer, the I/O associated with the core logic layer;
sampling the I/O associated with the core logic layer to generate a sampled I/O;
receiving, at a processor, a timestamp from the FPGA associated with the sampled I/O;
receiving, at the processor, an interrupt signal associated with the FPGA; and
in response to receiving the interrupt signal, correlating the sampled I/O and the timestamp with an Epoch time of the processor.

US Pat. No. 10,248,584

DATA TRANSFER BETWEEN HOST AND PERIPHERAL DEVICES

Microsoft Technology Lice...

1. A peripheral device comprising:
a wireless communication interface arranged to communicate with a host computing device;
an output device;
a memory arranged to store an output data set for output via the output device; and
a processor arranged to:
monitor parameters of a wireless communication link between the host computing device and the peripheral via the wireless communication interface;
detect imminent disconnection of the wireless communication link based on the parameters of the wireless communication link;
trigger, in response to imminent disconnection detection, a data transfer from the host computing device to the peripheral device via the wireless communication interface;
receive, in response to the trigger, the output data set; and
display the output data set via the output device, wherein following disconnection of the wireless communication link the output data set is fixed such that the output data set does not change.

US Pat. No. 10,248,583

SIMULTANEOUS VIDEO AND BUS PROTOCOLS OVER SINGLE CABLE

TEXAS INSTRUMENTS INCORPO...

1. A system comprising:
a main switch configured to:
operate in an enhanced mode in which the main switch is configured to simultaneously transfer data from a first data source and a second data source to a cable;
operate in a default mode in which the main switch is configured to transfer data from the second data source to the cable without transferring data from the first data source;
a multipurpose switch, the multipurpose switch coupled to the cable through a first set of switch connections, the multipurpose switch configured to:
operate in a handshake mode in which the multipurpose switch is configured to transport handshake data between the cable and a digital logic, wherein the handshake data is received on the first set of switch connections;
operate in a data mode in which the multipurpose switch is configured to transport bus data between the cable and the second data source, wherein the bus data is received on the first set of switch connections; and
the digital logic coupled to the multipurpose switch through a first switch control;
the digital logic coupled to the main switch through a second switch control, the digital logic programmed to:
drive the first switch control that enables a mode of operation of the multipurpose switch, the mode of operation of the multipurpose switch comprising one of the handshake mode and the data mode; and
drive the second switch control that enables a mode of operation of the main switch, the mode of operation of the main switch comprising one of the enhanced mode and the default mode.

US Pat. No. 10,248,582

PRIMARY DATA STORAGE SYSTEM WITH DEDUPLICATION

NexGen Storage, Inc., Lo...

1. A primary data storage system for use in a computer network and having de-duplication capability, the system comprising:
an input/output port configured to receive a block command packet that embodies one of a read block command and a write block command and to transmit a block result packet in reply to a block command packet;
a data store system having at least a first data store and a second data store;
wherein each of the first and second data stores is capable of receiving and storing data in response to a write block command and retrieving and providing data in response to a read block command;
wherein the first data store has a first responsiveness characteristic, the second data store has a second responsiveness characteristic, and the first and the second responsiveness characteristics are different;
a statistics database configured to provide hardware and/or volume statistical data relevant to a potential deduplication of data associated with a write block command; and
a deduplication processor configured to: (a) receive a write block command and statistical data relevant to the received write block command from the statistics database, (b) determine, using the hardware and/or volume statistical data that is relevant to the potential deduplication of data associated with the write block command, if a yet to be performed deduplication operation on the data associated with the received write block command is expected to satisfy a time constraint specifically associated with the processing of the received write block command relative to the data store system, the time constraint being the difference between (i) an allowed amount of time to process the write block command that is specifically associated with the received write block command and reflects a quality of service goal and (ii) an amount of time previously expended in processing the received write block command, (c) if the yet to be performed deduplication on the data associated with the received write block command is expected to satisfy the time constraint specifically associated with the received write block command relative to the data store system, proceeding with the performance of the deduplication operation on the data associated with the received write block command, and (d) if the yet to be performed deduplication operation on the data associated with the received write block command is not expected to satisfy the time constraint specifically associated with the received write block command relative to the data store system, forgoing the performance of the deduplication operation and proceeding with the processing of the received write block command, thereby increasing the possibility that duplicate data is established on the data store system.
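
A simplified sketch of the gating decision in element (b)-(d): deduplicate only when the estimated deduplication time fits inside the remaining budget for the write block command (allowed time minus time already expended). All numbers and names below are hypothetical.

    def process_write(data, allowed_ms, elapsed_ms, estimated_dedup_ms, store):
        # Time constraint = QoS-derived allowance minus time already spent on this command.
        time_constraint_ms = allowed_ms - elapsed_ms
        if estimated_dedup_ms <= time_constraint_ms:
            fingerprint = hash(data)
            if fingerprint in store:          # deduplicate: reuse the existing block
                return store[fingerprint]
            store[fingerprint] = data
            return data
        return data                           # forgo dedup; duplicate data may be established

    store = {}
    process_write(b"block", allowed_ms=10, elapsed_ms=3, estimated_dedup_ms=5, store=store)
    process_write(b"block", allowed_ms=10, elapsed_ms=9, estimated_dedup_ms=5, store=store)
    print(len(store))   # 1: the second write skipped deduplication but reused no extra entry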

US Pat. No. 10,248,581

GUARDED MEMORY ACCESS IN A MULTI-THREAD SAFE SYSTEM LEVEL MODELING SIMULATION

Synopsys, Inc., Mountain...

1. A method for multi-thread system level modeling simulation (SLMS) of a target system on a host system, the target system having a plurality of processor core models that access a memory model of the target system, the method comprising:
setting a memory region of the memory model of the target system into guarded mode indicating that the memory region should be locked when the memory region is accessed by one of a plurality of SLMS processes, the plurality of SLMS processes representing functional behaviors of the processor core models;
identifying an access to the memory region by an accessing SLMS process of the plurality of SLMS processes via an interconnect model of the target system, the interconnect model connecting the processor core models to the memory model; and
responsive to the access to the memory region and the memory region being in the guarded mode, acquiring a guard lock for the memory region that allows the accessing SLMS process to access the memory region via the interconnect model while the guard lock is acquired, and wherein the plurality of SLMS processes cannot access the memory region while the guard lock is acquired.
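
A thumbnail sketch, assuming a host-side lock stands in for the guard lock; this is not Synopsys code, and the class and method names are invented.

    import threading

    class GuardedRegion:
        """Memory region of the memory model; accesses lock it while in guarded mode."""
        def __init__(self, size):
            self.memory = bytearray(size)
            self.guarded = False
            self._guard_lock = threading.Lock()

        def set_guarded(self, flag=True):
            self.guarded = flag

        def access(self, offset, value=None):
            if self.guarded:
                with self._guard_lock:        # other SLMS processes cannot access while held
                    return self._do_access(offset, value)
            return self._do_access(offset, value)

        def _do_access(self, offset, value):
            if value is not None:
                self.memory[offset] = value
            return self.memory[offset]

    region = GuardedRegion(64)
    region.set_guarded()
    region.access(0, 0x5A)
    print(hex(region.access(0)))   # 0x5a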

US Pat. No. 10,248,580

METHOD AND CIRCUIT FOR PROTECTING AND VERIFYING ADDRESS DATA

STMICROELECTRONICS (ROUSS...

1. A circuit for protecting memory address data, the circuit comprising:
an input data bus configured to receive write data to be written to a memory device;
an input address bus configured to receive a write address associated with the write data;
an output data bus; and
an address protection circuit coupled to said input data, input address, and output data buses and configured to
generate an address protection value based on the write address,
generate modified write data, on said output data bus, the modified write data including both the write data and the address protection value, said output data bus having a width greater than a width of said input data bus, and
generate a modified write address.

US Pat. No. 10,248,579

METHOD, APPARATUS, AND INSTRUCTIONS FOR SAFELY STORING SECRETS IN SYSTEM MEMORY

Intel Corporation, Santa...

1. A processor comprising:
a hardware key;
an instruction unit to receive a compare instruction, the compare instruction having a plaintext input value and a ciphertext input value; and
an encryption unit to, in response to the compare instruction, decrypt the ciphertext input value using the hardware key to generate a plaintext output value and compare the plaintext output value to the plaintext input value.
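
A conceptual illustration only: a toy XOR "cipher" stands in for the processor's encryption unit and hardware key, which software would never see. Nothing below reflects Intel's actual instruction encoding.

    _HARDWARE_KEY = bytes.fromhex("a5a5a5a5a5a5a5a5")   # assumed: hidden inside the processor

    def _toy_decrypt(ciphertext, key):
        # Placeholder for the encryption unit's decryption with the hardware key.
        return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

    def compare_secret(plaintext_input, ciphertext_input):
        plaintext_output = _toy_decrypt(ciphertext_input, _HARDWARE_KEY)
        return plaintext_output == plaintext_input       # only the comparison result is exposed

    secret = b"password"
    stored_ciphertext = _toy_decrypt(secret, _HARDWARE_KEY)   # XOR toy cipher is its own inverse
    print(compare_secret(b"password", stored_ciphertext))     # True
    print(compare_secret(b"guess!!!", stored_ciphertext))     # False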

US Pat. No. 10,248,578

METHODS AND SYSTEMS FOR PROTECTING DATA IN USB SYSTEMS

Microsoft Technology Lice...

1. A system comprising:
a processor;
memory;
a USB port;
one or more unsecure client applications stored in the memory;
one or more secure client applications stored in the memory;
an unsecure software stack including at least one unsecure USB driver;
a secure software stack including at least one secure USB driver; and
a USB host controller associated with the secure software stack and the unsecure software stack;
wherein the USB host controller is configured to:
receive a transfer descriptor including instructions for routing data from a communicating USB device, coupled with the USB port, to access the memory,
determine whether the communicating USB device is a secure USB device or an unsecure USB device,
route data to and from the unsecure USB device, based on the instructions in the transfer descriptor, through the unsecure software stack for use by the one or more unsecure client applications in response to determining the communicating USB device is the unsecure USB device, and
route data to and from the secure USB device, based on the instructions in the transfer descriptor, through the secure software stack for use by the one or more secure client applications in response to determining the communicating USB device is the secure USB device.

US Pat. No. 10,248,577

USING A CHARACTERISTIC OF A PROCESS INPUT/OUTPUT (I/O) ACTIVITY AND DATA SUBJECT TO THE I/O ACTIVITY TO DETERMINE WHETHER THE PROCESS IS A SUSPICIOUS PROCESS

International Business Ma...

1. A computer program product for detecting a security breach in a system managing access to a storage, the computer program product comprising a computer readable storage medium having computer readable program code embodied therein that when executed performs operations, the operations comprising:
monitoring process Input/Output (I/O) activity of a process accessing data in a storage;
determining a peak I/O rate during a time period which an I/O rate of data access was a highest I/O rate;
determining a timestamp of when the data was last accessed;
characterizing the process initiating the I/O activity as a suspicious process in response to determining a process I/O rate of the process I/O activity as compared to the peak I/O rate of the data satisfies a first condition and a process access time at which the process is accessing the data as compared to the timestamp satisfies a second condition; and
indicating a security breach in response to characterizing the process as the suspicious process.
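
A hypothetical rendering of the two claimed conditions: flag a process whose I/O rate approaches the data's historical peak while it touches data that has been idle for a long time. The thresholds (rate fraction, idle period) are invented, not taken from the patent.

    def is_suspicious(process_io_rate, peak_io_rate, access_time, last_access_time,
                      rate_fraction=0.8, idle_seconds=30 * 24 * 3600):
        first_condition = process_io_rate >= rate_fraction * peak_io_rate     # rate vs. peak
        second_condition = (access_time - last_access_time) >= idle_seconds   # access time vs. timestamp
        return first_condition and second_condition

    if is_suspicious(process_io_rate=950, peak_io_rate=1000,
                     access_time=1_700_000_000, last_access_time=1_650_000_000):
        print("security breach indicated")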

US Pat. No. 10,248,576

DRAM/NVM HIERARCHICAL HETEROGENEOUS MEMORY ACCESS METHOD AND SYSTEM WITH SOFTWARE-HARDWARE COOPERATIVE MANAGEMENT

HUAZHONG UNIVERSITY OF SC...

1. A Dynamic Random Access Memory/Non-Volatile Memory (DRAM/NVM) hierarchical heterogeneous memory access system with software-hardware cooperative management configured to perform the following steps:
step 1: Translation Lookaside Buffer (TLB) address translation: acquiring a physical page number (ppn), a P flag, and content of an overlap tlb field of an entry where a virtual page is located, and translating a virtual address into an NVM physical address according to the ppn;
step 2: determining whether memory access is hit in an on-chip cache; directly fetching, by a Central Processing Unit (CPU), a requested data block from the on-chip cache if the memory access is hit, and ending a memory access process; otherwise, turning to step 3;
step 3: determining a memory access type according to the P flag acquired in step 1; if P is 0, which indicates access to an NVM, turning to step 4, updating information of the overlap tlb field in a TLB table, and determining, according to an automatically adjusted prefetching threshold in a dynamic threshold adjustment algorithm and the overlap tlb field acquired in step 1, whether to prefetch an NVM physical page corresponding to the virtual page into a DRAM cache; or if P is 1, which indicates access to a DRAM cache and indicates that the DRAM cache is hit, calculating an address of the DRAM cache to be accessed according to the information of the overlap tlb field acquired in step 1 and a physical address offset, and turning to step 6 to access the DRAM cache;
step 4: looking up a TLB entry corresponding to an NVM main memory page if a value of the overlap tlb field acquired in step 1 is less than the automatically adjusted prefetching threshold, and increasing the overlap tlb field of the TLB by one, wherein the overlap tlb field is a counter; turning to step 5 if the value of the overlap tlb field acquired in step 1 is greater than the automatically adjusted prefetching threshold, to prefetch the NVM main memory page into the DRAM cache; otherwise, turning to step 6 to directly access the NVM, wherein the automatically adjusted prefetching threshold is determined by a prefetching threshold runtime adjustment algorithm;
step 5: prefetching the NVM physical page corresponding to the virtual address into the DRAM cache, and updating the TLB and an extended page table; and
step 6: memory access: accessing the memory according to an address transmitted into a memory controller.

US Pat. No. 10,248,575

SUSPENDING TRANSLATION LOOK-ASIDE BUFFER PURGE EXECUTION IN A MULTI-PROCESSOR ENVIRONMENT

International Business Ma...

1. A method for operating translation look-aside buffers, TLBs, in a multiprocessor system, the multiprocessor system comprising at least one core each supporting at least one thread, the method comprising:
receiving a purge request for purging one or more entries in the TLB;
determining if a thread requires access to an entry of the entries to be purged;
when the thread does not require access to the entries to be purged:
starting execution of the purge request in the TLB;
setting a suspension time window wherein the setting of the suspension time window is performed in response to an address translation request of the thread being rejected due to the TLB purge, wherein setting further comprises providing a level signal having a predefined activation time period during which the level signal is active, wherein the suspension time window is the predefined activation time period, and wherein a rejected address translation request is recycled during a recycling time window, wherein the recycling time window is smaller than the suspension time window;
suspending the execution of the purge during the suspension time window;
executing address translation requests of the thread during the suspension time window wherein the executing of the address translation requests of the thread comprises executing the recycled address translation request;
resuming the purge execution after the suspension window is ended;
when the thread requires access to the entries to be purged:
providing a branch point having two states;
providing a second branch point having two states;
blocking the thread for preventing the thread sending address translation requests to the TLB with firmware instructions, wherein blocking the thread comprises setting the branch point to a first state, setting the second branch point to a third state, and reading the first state of the branch point, and reading the third state of the second branch point, for performing the blocking with firmware instructions;
upon ending the purge request execution,
setting the first branch point to a second state;
unblocking the thread and executing the address translation requests of the thread;
wherein the at least one core supporting a second thread, and when a first thread does not require access to the TLB entries to be purged and the second thread requires access to an entry to be purged, before starting the execution of the purge request at the TLB:
blocking both the first and second threads for preventing them sending requests to the TLB; and
when the purge request has started at both the first and second threads, unblocking the first thread.

US Pat. No. 10,248,574

INPUT/OUTPUT TRANSLATION LOOKASIDE BUFFER PREFETCHING

Intel Corporation, Santa...

1. An apparatus comprising:
a bridge between an input/output (I/O) side of a system and a memory side of the system, the I/O side to include an interconnect on which a zero-length transaction is to be initiated by an I/O device, the zero-length transaction to include an I/O-side memory address;
an input/output memory management unit (IOMMU) including
address translation hardware to generate a translation of the I/O-side memory address to a memory-side memory address, and
an input/output translation lookaside buffer (IOTLB) in which to store the translation; and
an IOTLB prefetch control unit including prefetch control logic to cause the apparatus to, in response to determining that the memory-side address is inaccessible, emulate completion of the zero-length transaction and to, in response to determining that I/O device prefetching to the IOTLB is not enabled and that the I/O device is not permitted to access a system memory, generate a fault instead of emulating completion of the zero-length transaction.

US Pat. No. 10,248,573

MANAGING MEMORY USED TO BACK ADDRESS TRANSLATION STRUCTURES

INTERNATIONAL BUSINESS MA...

1. A computer program product for facilitating memory management of a computing environment, said computer program product comprising:
a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
determining whether a block of memory is marked as being used to back an address translation structure used by a guest program for address translation, the block of memory being a block of host memory, wherein the guest program is managed by a virtual machine manager that further manages the host memory, and wherein the determining determines whether the block of memory is actively being used for the address translation structure, the address translation structure used in translating a virtual address to another address; and
performing a memory management action based on whether the block of memory is being used to back the address translation structure, the memory management action controlling availability of the block of memory for further use, wherein memory management within the computing environment is facilitated, enhancing system performance.

US Pat. No. 10,248,572

APPARATUS AND METHOD FOR OPERATING A VIRTUALLY INDEXED PHYSICALLY TAGGED CACHE

ARM Limited, Cambridge (...

1. An apparatus, comprising:
processing circuitry to perform data processing operations on data;
a cache storage to store data for access by the processing circuitry, the cache storage having a plurality of cache entries, and each cache entry arranged to store data and an associated physical address portion, the cache storage being accessed using a virtual address portion of a virtual address in order to identify a number of cache entries whose stored physical address portions are to be compared with a physical address derived from the virtual address in order to detect whether a hit condition exists; and
snoop request processing circuitry, responsive to a snoop request specifying a physical address, to determine a plurality of possible virtual address portions for the physical address, and to perform a snoop processing operation in order to determine whether the hit condition is detected for a cache entry when accessing the cache storage using the plurality of possible virtual address portions, and on detection of the hit condition to perform a coherency action in respect of the cache entry causing the hit condition.

US Pat. No. 10,248,571

SAVING POSITION OF A WEAR LEVEL ROTATION

Hewlett Packard Enterpris...

1. A system comprising:
a wear level handler to start a current rotation of a wear level algorithm through a plurality of cache line addresses in a region of memory, wherein the wear level algorithm alternates between an even rotation and an odd rotation, the even rotation characterized by a first value of the metadata and the odd rotation characterized by a second value of the metadata;
a location storer to store a rotation count corresponding to a cache line address belonging to the plurality;
a data mover to move a cache line from a selected cache line address to a gap cache line address corresponding to an additional cache line address;
a metadata setter to:
set a metadata of the gap cache line address to a value corresponding to the current rotation, wherein the first cache line address becomes the current gap cache line address after the data has been copied and the metadata has been set,
set the value of the metadata to the first value if the current rotation is the even rotation,
set the value of the metadata to the second value if the current rotation is the odd rotation; and
a current position determiner to determine, based on the value of at least one metadata and the rotation count, a current position of the current rotation after a power loss event.

US Pat. No. 10,248,570

METHODS, SYSTEMS AND APPARATUS FOR PREDICTING THE WAY OF A SET ASSOCIATIVE CACHE

Intel Corporation, Santa...

1. A method for fetching a cache line of a far taken branch instruction and a cache line of a target of the far taken branch instruction, the method comprising:
determining a hit at a first way of an instruction cache for the far taken branch instruction;
determining a target address from an information cache based on the first way;
determining a second way from a shadow cache tag structure based on the target address; and
fetching the far taken branch instruction from the instruction cache based on the first way and the target of the far taken branch instruction from a shadow cache based on the second way.

US Pat. No. 10,248,569

PATTERN BASED PRELOAD ENGINE

Futurewei Technologies, I...

1. A method for a processing unit executing an application, the method comprising:
determining, by the processing unit, that the application has reached a specific location and state;
obtaining, by the processing unit, a trigger instruction responsive to determining the application has reached the specific location and state, wherein the trigger instruction includes an index into a preload engine offset table and a base address, wherein the preload engine offset table includes a plurality of distinct offsets associated with the base address;
accessing a memory by a preload engine coupled to the processing unit to obtain the preload engine offset table based on the index and base address to determine the plurality of distinct offsets relative to the base address, the plurality of distinct offsets being specific to the application location and state; and
prefetching data by the preload engine into a cache memory, for use by the processing unit executing the application, the data prefetched into the cache memory using addresses generated using the base address and each of the plurality of distinct offsets.
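
A loose sketch of the claimed prefetch pattern: a trigger supplies a base address and an index into an offset table, and the preload engine prefetches base + offset for every offset in the selected entry. The table contents and addresses are invented.

    PRELOAD_OFFSET_TABLE = {
        0: [0x00, 0x40, 0x80, 0x200],   # distinct offsets tied to one application location/state
        1: [0x10, 0x30],
    }

    cache_lines = set()

    def preload(index, base_address):
        # Generate prefetch addresses from the base address and each distinct offset.
        for offset in PRELOAD_OFFSET_TABLE[index]:
            cache_lines.add(base_address + offset)   # stand-in for a cache-line fill

    preload(index=0, base_address=0x1000_0000)
    print(sorted(hex(a) for a in cache_lines))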

US Pat. No. 10,248,568

EFFICIENT DATA TRANSFER BETWEEN A PROCESSOR CORE AND AN ACCELERATOR

Intel Corporation, Armon...

1. A method comprising:
writing input data to a first cache line of a cache shared by a processor and an accelerator, wherein the input data is ready to be operated on by the accelerator; and
writing instructions to one or more cache lines, the one or more cache lines designated as a queue for the accelerator, wherein the instructions indicate a first operation to be performed by the accelerator and a virtual pointer to the input data in the cache.

US Pat. No. 10,248,567

CACHE COHERENCY FOR DIRECT MEMORY ACCESS OPERATIONS

Hewlett-Packard Developme...

1. A non-transitory computer readable storage medium comprising instructions that, when executed, cause a processor to at least:
receive, from a direct memory access controller, an interrupt associated with a direct memory access operation; and
handle the interrupt via the processor to maintain cache coherency based on a direction of the direct memory access operation, the interrupt generated by the direct memory access controller and comprising the direction, wherein the direction of the direct memory access operation includes:
a direct memory access operation from a memory to a peripheral; or
a direct memory access operation from the peripheral to the memory;
wherein the direct memory access controller is to execute the direct memory access operation.
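
A sketch of a direction-dependent interrupt handler in the spirit of this claim, using the conventional cache-maintenance choices (invalidate after device-to-memory transfers, write back for memory-to-device buffers); the direction constants and FakeCache class are assumptions for illustration.

    MEM_TO_PERIPHERAL = "mem_to_peripheral"
    PERIPHERAL_TO_MEM = "peripheral_to_mem"

    class FakeCache:
        def invalidate(self, rng): print(f"invalidate {rng}")
        def clean(self, rng): print(f"clean {rng}")

    def handle_dma_interrupt(direction, cache, buffer_range):
        # Maintain coherency according to the direction carried in the interrupt.
        if direction == PERIPHERAL_TO_MEM:
            cache.invalidate(buffer_range)   # discard stale lines so the CPU sees DMA-written data
        elif direction == MEM_TO_PERIPHERAL:
            cache.clean(buffer_range)        # write back dirty lines covering the source buffer

    handle_dma_interrupt(PERIPHERAL_TO_MEM, FakeCache(), range(0x8000, 0x8400))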

US Pat. No. 10,248,566

SYSTEM AND METHOD FOR CACHING VIRTUAL MACHINE DATA

Western Digital Technolog...

14. A method for caching data from a plurality of virtual machines, the method comprising:
identifying, using a cache management component, a first virtual machine, of the plurality of virtual machines, which is operating;
allocating a portion of a cache storage to the first virtual machine;
performing caching of data to handle an input/output (I/O) request of the first virtual machine, wherein data written to the cache storage is written to a top of the cache storage and existing data in the cache storage is pushed down the cache storage;
identifying a second virtual machine, of the plurality of virtual machines, which is not operating based at least in part on a determination that second virtual machine data I/O does not appear in a register of the cache storage;
determining whether a number of virtual machines that use the cache storage exceeds a threshold number of virtual machines; and
in response to determining that the number of virtual machines that use the cache storage exceeds the threshold number of virtual machines:
moving data cached by the second virtual machine in the cache storage to a bottom of the cache storage; and
invalidating a portion of the cache storage associated with the second virtual machine by writing over data at the bottom of the cache storage as data is added to the top of the cache storage.

US Pat. No. 10,248,565

HYBRID INPUT/OUTPUT COHERENT WRITE

QUALCOMM Incorporated, S...

1. A method of implementing a hybrid input/output (I/O) coherent write request on a computing device, comprising:
receiving an I/O coherent write request;
generating a first hybrid I/O coherent write request and a second hybrid I/O coherent write request by duplicating the I/O coherent write request;
sending the first hybrid I/O coherent write request and I/O coherent write data of the I/O coherent write request to a shared memory; and
sending the second hybrid I/O coherent write request duplicated from the I/O coherent write request to a coherency domain without sending the I/O coherent write data of the I/O coherent write request to the coherency domain.

US Pat. No. 10,248,564

CONTENDED LOCK REQUEST ELISION SCHEME

Advanced Micro Devices, I...

1. A method comprising:
storing a plurality of data blocks at a home node of a plurality of nodes;
receiving a request at the home node from a first node, the request being a request for access to a given data block of the plurality of data blocks;
maintaining a count of a number of read requests at the home node for the given data block;
in response to determining both the given data block is currently stored at a second node and the count has exceeded a threshold, the home node:
requesting a copy of the given data block from the second node;
storing the copy of the given data block at the home node responsive to receiving the given data block from the second node; and
forwarding the copy of the given data block from the home node to the first node.
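
A simplified model of the home-node behavior described above: count read requests for a block held at another node and, once the count passes a threshold, pull a copy home and serve requesters locally. The threshold value and class names are illustrative.

    class OwnerNode:
        def fetch(self, block_id):
            return f"data:{block_id}"

    class HomeNode:
        def __init__(self, threshold):
            self.threshold = threshold
            self.read_counts = {}
            self.local_copies = {}

        def handle_read(self, block_id, owner_node):
            self.read_counts[block_id] = self.read_counts.get(block_id, 0) + 1
            if block_id not in self.local_copies and self.read_counts[block_id] > self.threshold:
                # Count exceeded: request a copy from the second node and store it at home.
                self.local_copies[block_id] = owner_node.fetch(block_id)
            if block_id in self.local_copies:
                return self.local_copies[block_id]   # forward the home copy to the requester
            return owner_node.fetch(block_id)        # below threshold: defer to the owner node

    home, owner = HomeNode(threshold=2), OwnerNode()
    for _ in range(4):
        home.handle_read("blk7", owner)
    print(home.read_counts, list(home.local_copies))   # {'blk7': 4} ['blk7']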

US Pat. No. 10,248,563

EFFICIENT CACHE MEMORY HAVING AN EXPIRATION TIMER

International Business Ma...

1. A method, comprising:
selectively invalidating data stored in at least one cache line of a cache memory of a processor in response to a determination that a predetermined amount of time has passed since the at least one cache line was last accessed, the predetermined amount of time being shorter than an average round-trip time for the processor to process a plurality of blocks of data stored sequentially to a ring buffer.

US Pat. No. 10,248,562

COST-BASED GARBAGE COLLECTION SCHEDULING IN A DISTRIBUTED STORAGE ENVIRONMENT

Microsoft Technology Lice...

1. A computer system comprising:
one or more processors; and
one or more computer-readable storage media having stored thereon computer-executable instructions that are executable by the one or more processors to cause the computer system to schedule garbage collection in a distributed environment that includes a plurality of partitions that point to a plurality of data blocks that store data objects, the garbage collection scheduling being based on a cost to reclaim one or more of the data blocks for further use, the computer-executable instructions including instructions that are executable to cause the computer system to perform at least the following:
determining a reclaim cost for one or more data blocks of one or more of the plurality of partitions during a garbage collection operation;
determining a byte constant multiplier that is configured to modify the reclaim cost to account for the amount of data objects that may be rewritten during the garbage collection operation;
accessing one or more of a baseline reclaim budget and a baseline rewrite budget, the baseline reclaim budget specifying an acceptable amount of data blocks that should be reclaimed by the garbage collection operation and the baseline rewrite budget specifying an upper limit on the amount of data objects that may be rewritten during the garbage collection operation;
iteratively varying one or more of the baseline reclaim budget, the baseline rewrite budget, and byte constant multiplier to determine an effect on the reclaim cost; and
generating a schedule for garbage collection, the schedule including those data blocks that at least partially minimize the reclaim cost based on the iterative varying.

US Pat. No. 10,248,561

STATELESS DETECTION OF OUT-OF-MEMORY EVENTS IN VIRTUAL MACHINES

ORACLE INTERNATIONAL CORP...

1. A method of modifying memory allocations of virtual machines executing within a computing system, comprising:
executing one or more virtual machines, including a first virtual machine, on a computing system, wherein each said virtual machine is allocated an amount of heap memory from the computer system,
performing a plurality of garbage collection processes on the computing system during execution of the first virtual machine on the computer system;
determining, by the computing system, using one or more processors of the computing system, one or more execution metrics for each of the plurality of garbage collection processes performed, the execution metrics including an execution duration for each of the plurality of garbage collection process,
using a sequential-analysis technique to analyze, by the computing system, the execution metrics for each of the plurality of garbage collection processes for stateless detection of a memory usage trend for the first virtual machine, based on the execution durations for the plurality of garbage collection processes; and
in response to detecting an upward memory usage trend for the first virtual machine based on the analysis of the execution durations for the plurality of garbage collection processes, modifying, by the computing system, the amount of heap memory allocated to the first virtual machine.
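
A coarse sketch of the idea: watch garbage-collection durations, detect a sustained upward trend with a stateless running statistic, and grow the heap when the trend is detected. The CUSUM-style statistic and all constants below are assumptions, not Oracle's claimed sequential-analysis technique.

    def detect_upward_trend(durations_ms, drift=1.0, alarm=25.0):
        # Accumulate only increases between consecutive GC durations, less a drift allowance.
        cusum, prev = 0.0, durations_ms[0]
        for d in durations_ms[1:]:
            cusum = max(0.0, cusum + (d - prev) - drift)
            prev = d
            if cusum > alarm:
                return True
        return False

    gc_durations = [12, 13, 15, 19, 24, 31, 40, 52]   # execution duration (ms) per collection
    heap_mb = 512
    if detect_upward_trend(gc_durations):
        heap_mb = int(heap_mb * 1.5)                   # modify the VM's heap allocation
    print(heap_mb)   # 768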

US Pat. No. 10,248,560

STORAGE DEVICE THAT RESTORES DATA LOST DURING A SUBSEQUENT DATA WRITE

Toshiba Memory Corporatio...

1. A storage device connectable to a host device, the storage device comprising:
a plurality of nonvolatile memories each including first memory cells connected to a first word line and second memory cells connected to a second word line that is adjacent to the first word line; and
a controller configured to:
maintain parity data in a memory area of the host device for data that has been written to the first memory cells,
write data to the second memory cells, and
upon detecting a failure in the writing of data to the second memory cells, restore, using the parity data from the memory area of the host device, the data previously written to the first memory cells.

US Pat. No. 10,248,559

WEIGHTING-TYPE DATA RELOCATION CONTROL DEVICE AND METHOD

REALTEK SEMICONDUCTOR COR...

1. A weighting-type data relocation control device, configured to control data relocation of a non-volatile memory, in which the non-volatile memory includes used blocks and unused blocks, each of the used blocks is associated with a first relocation parameter and a second relocation parameter, and the weighting-type data relocation control device comprises:
a storage controller configured to carry out at least the following steps for controlling the data relocation of the non-volatile memory:
multiplying the first relocation parameter and the second relocation parameter by a first weighting and a second weighting respectively and thereby obtaining a relocation priority index, in which at least one of said first and second relocation parameters and/or at least one of said first and second weightings relate(s) to a thermal detection result;
comparing the relocation priority index with at least one threshold and thereby obtaining a comparison result; and
if the comparison result corresponding to a used storage block of the used blocks indicates that the used storage block reaches a predetermined relocation threshold, transferring valid data of the used storage block to an unused storage block of the unused blocks,
wherein the first relocation parameter is a count of invalid data, and the second relocation parameter is a ranking of storage time.
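
A minimal numeric sketch of the weighted decision: multiply each relocation parameter by its weighting, sum the products into the priority index, and relocate the block's valid data when the index reaches the threshold. The weights and threshold are made-up values.

    def relocation_priority(invalid_count, storage_time_rank, w_invalid, w_time):
        # First parameter: count of invalid data; second parameter: storage-time ranking.
        return invalid_count * w_invalid + storage_time_rank * w_time

    def should_relocate(block, threshold, w_invalid=1.0, w_time=0.5):
        index = relocation_priority(block["invalid_count"], block["time_rank"],
                                    w_invalid, w_time)
        return index >= threshold   # True: transfer valid data to an unused block

    used_block = {"invalid_count": 180, "time_rank": 40}
    print(should_relocate(used_block, threshold=190.0))   # True: 180 + 20 = 200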

US Pat. No. 10,248,558

MEMORY LEAKAGE POWER SAVINGS

QUALCOMM Incorporated, S...

1. A system, comprising:
a cache memory, wherein a processor accesses the cache memory;
a multiplexer configured to selectively couple a first supply rail or a second supply rail to the cache memory;
a controller configured to instruct the multiplexer to couple the first supply rail to the cache memory if the processor is in a first performance mode, and to instruct the multiplexer to couple the second supply rail to the cache memory if the processor is in a second performance mode; and
a trigger device configured to detect gating of a clock signal to the cache memory or the processor, and, upon detecting gating of the clock signal, to instruct the controller to switch the cache memory from the second supply rail to the first supply rail if the cache memory is currently coupled to the second supply rail,
wherein the trigger device is configured to receive a signal indicating whether the second supply rail is being power collapsed, and to refrain from instructing the controller to switch the cache memory from the second supply rail to the first supply rail if the signal indicates that the second supply rail is being power collapsed.

US Pat. No. 10,248,557

SYSTEMS AND METHODS FOR DELAYED ALLOCATION IN CLUSTER STORAGE

Veritas Technologies LLC,...

10. A system for delayed allocation in cluster storage, the system comprising:
a delegation module, stored in memory, that delegates, to a node attached to a storage cluster comprising at least one storage device that comprises a plurality of allocation units, a subset of the allocation units on the storage cluster to be held as a delayed allocation pool;
a communication module, stored in memory, that receives, from the node, a request to allocate a number of allocation units on the storage cluster;
an adjustment module, stored in memory, that deducts the number of allocation units from a number of available allocation units in the delayed allocation pool, wherein the adjustment module:
maintains, for the node, a measurement of a rate at which the node requests allocation units on the storage cluster;
compares the allocation request rate for the node to the number of allocation units held in the delayed allocation pool; and
adjusts the number of allocation units in the delayed allocation pool by delegating additional allocation units to the delayed allocation pool or relinquishing a delegation of allocation units to the delayed allocation pool based at least in part on the comparison;
an allocation module, stored in memory, that satisfies the allocation request by allocating allocation units not included in the delayed allocation pool before allocating allocation units included in the delayed allocation pool, wherein the adjustment module recalculates, based on the number of allocation units in the delayed allocation pool used to satisfy the allocation request, the number of available allocation units in the delayed allocation pool; and
at least one physical processor configured to execute the delegation module, the communication module, the adjustment module, and the allocation module.

US Pat. No. 10,248,556

FORWARD-ONLY PAGED DATA STORAGE MANAGEMENT WHERE VIRTUAL CURSOR MOVES IN ONLY ONE DIRECTION FROM HEADER OF A SESSION TO DATA FIELD OF THE SESSION

EXABLOX CORPORATION, Sun...

1. A computer-implemented method of managing data within one or more data storage media, the method comprising:
creating, by a processor, a data structure within the one or more data storage media, the data structure including a plurality of memory pages, wherein each memory page comprises a plurality of sessions, each session comprising a session header and a data field configured to store a plurality of data objects;
moving, in response to a write request of particular data, a virtual cursor in only one direction within the memory page until the virtual cursor is located at the session header of a free session of the plurality of sessions in which the virtual cursor stops at the session header of the free session based on the data field of the free session that is available to store the particular data and in which the virtual cursor is configured to only move in the one direction;
writing, while the virtual cursor is located at the session header of the free session and prior to storing the particular data in the data field of the free session, one or more data object identifiers related to storage of data objects of the particular data in the header of the free session;
moving the virtual cursor in the one direction from the session header of the free session to a particular position within the data field of the free session based on information read from the session header of the free session; and
enabling, by the processor, writing the particular data to the data field of the free session starting at the particular position within the data field based on the virtual cursor being at the particular position, in which during all writing of the particular data in the data field of the free session the virtual cursor is moved in a sequential manner within the data field of the free session with the virtual cursor moving in the one direction within the data field of the free session, wherein after the particular data is written to the data field of the free session, no other information regarding the writing of the particular data to the data field of the free session is written to the session header of the free session or any preceding session of the plurality of sessions that precedes the free session with respect to movement in the one direction moved by the virtual cursor;
wherein each of the session headers includes a first hash value of a binder section, second hash values being individually associated with the plurality of data objects of the corresponding session and a sequence number of the corresponding session, wherein the binder section includes two or more memory pages bound by a memory page binder, in which the memory page binder is stored in a descriptor, the memory page binder describing how the two or more memory pages are bound and how the two or more memory pages can be accessed, the first hash value being an identifier of the binder section and enabling verifying integrity of the session header of the corresponding session and integrity of the sequence number of the corresponding session, and wherein each of the second hash values is a data object identifier of one of the plurality of data objects of the corresponding session.

US Pat. No. 10,248,555

MANAGING AN EFFECTIVE ADDRESS TABLE IN A MULTI-SLICE PROCESSOR

International Business Ma...

8. A computing system, the computing system including a multi-slice computer processor for managing an effective address table (EAT), the multi-slice computer processor configured for:
receiving, by EAT management logic of an instruction fetch unit from an instruction sequence unit, a next-to-complete instruction tag (ITAG);
obtaining, from the EAT, a first ITAG from a tail-plus-one EAT row, wherein the EAT comprises a tail EAT row that precedes the tail-plus-one EAT row, and wherein each EAT row comprises a starting effective address, an ending effective address, and a first ITAG in a range of ITAGs assigned to internal operations generated from processor instructions stored at a range of effective addresses defined by the starting effective address and the ending effective address;
determining, based on a comparison of the next-to-complete ITAG and the first ITAG, that the tail EAT row has completed; and
retiring the tail EAT row based on the determination, thereby retiring one or more effective addresses of the tail EAT row, wherein retiring the tail EAT row further comprises freeing a range of ITAGs associated with the tail EAT row for reassignment to newly decoded instructions.

US Pat. No. 10,248,554

EMBEDDING PROFILE TESTS INTO PROFILE DRIVEN FEEDBACK GENERATED BINARIES

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
specifying, by a processor, one or more test cases to be embedded into a compiled binary file, wherein the one or more test cases relate to at least a portion of a computer program representing a compilation unit and further wherein the compiled binary file is generated by a profile driven feedback compiler;
executing, by the processor, the one or more embedded test cases under the computer program;
gathering, by the processor, performance data associated with the computer program as the one or more embedded test cases are executed, wherein the gathered performance data associated with the computer program includes histograms for basic blocks executed and branches taken by the computer program;
recompiling, by the processor, the compilation unit based on the performance data; and
linking, by the processor, the computer program based on the performance data.

US Pat. No. 10,248,553

TEST METHODOLOGY FOR DETECTION OF UNWANTED CRYPTOGRAPHIC KEY DESTRUCTION

International Business Ma...

1. A computer program product (CPP) comprising:
a computer readable storage medium; and
computer code stored on the computer readable storage medium, with the computer code including instructions and data for causing a processor(s) set to perform at least the following operations:
running, for a first time, a test program on a computer system, with the test program using instructions and data stored in a test program space of the computer system, with the running, for the first time, of the test program including:
receiving cipher key derivation data from a system space of the computer system,
deriving and storing in the test program space a set of subsidiary cipher key(s) based upon the cipher key derivation data, and
performing a set of data encrypted data communication(s) between the test program space and the system space co-operatively using the set of subsidiary cipher key(s) stored in the test program space and a set of master cipher key(s) stored in the system space,
during or subsequent to the running for the first time of the test program, performing an operation that erroneously destroys the set of master cipher key(s) stored in the system space to yield a set of corrupted master cipher key(s) in the system space,
as the running for the first time is ending, copying the set of subsidiary cipher key(s) from the test program space to a persistent storage that is not part of the test program space, and
subsequent to the performance of the operation that erroneously destroys the set of master key(s) stored in the system space, running, for a second time, the test program on the computer system, with the running, for the second time, of the test program including:
receiving, in the test program space, the set of subsidiary cipher key(s) previously derived and copied to the persistent storage during the running, for the first time, of the test program,
encountering an error when attempting to perform a set of data encrypted data communication(s) between the test program space and the system space co-operatively using the set of subsidiary cipher key(s) received in the test program space and the set of corrupted master cipher key(s) stored in the system space, and
logging log data indicative of the error.

US Pat. No. 10,248,552

GENERATING TEST SCRIPTS FOR TESTING A NETWORK-BASED APPLICATION

International Business Ma...

1. A computer system for testing an application, comprising: at least one processor, a memory coupled to the at least one processor, computer program instructions stored in the memory and executed by the at least one processor, to perform steps of:
obtaining first temporary test scripts for testing at least one test case of a first version of the application, the first temporary test scripts being recorded with first mark data used for testing the first version of the application;
obtaining a first correspondence between the first mark data and test data;
substituting the test data for the first mark data in the first temporary test scripts based on the first correspondence to obtain first test scripts for testing the at least one test case of the first version of the application;
in response to the first mark data being included in a second mark data, wherein the second mark data is used for testing a second version of the application, obtaining second temporary test scripts for testing at least one test case of the second version of the application, the second temporary test scripts being recorded with the second mark data;
obtaining a stored first correspondence;
obtaining a second correspondence between increased test data and increased data in the second mark data compared with the first mark data; and
substituting the test data and the increased test data for the second mark data in the second temporary test scripts based on both the first and second correspondences to obtain second test scripts for testing the at least one test case of the second version of the application.

US Pat. No. 10,248,551

SELECTIVE OBJECT TESTING IN A CLIENT-SERVER ENVIRONMENT

International Business Ma...

1. A method for testing objects in a client-server environment, the method comprising:
receiving, by one or more computer processors, a plurality of proposed modifications to a baseline set of objects stored in a repository collectively submitted by a plurality of members of a first group of users and a plurality of members of a second group of users, wherein the baseline set of objects includes a set of server modules comprising at least one application server node;
registering, by one or more computer processors, each modification of the plurality of proposed modifications with a user identifier corresponding to the user proposing the modification;
receiving, by one or more computer processors, instructions to test the baseline set of objects with modifications that correspond to the first group of users, wherein receiving includes determining whether any of the users in the first group of users previously registered at least one of the plurality of proposed user modifications to the baseline set of objects by inspecting a unique folder associated with each user in the first group of users to determine whether any users previously submitted the proposed user modifications;
determining, by one or more computer processors, which of the baseline set of objects stored in the repository are subject to the modifications that correspond to the first group of users; and
executing, by one or more computer processors, the baseline set of objects and incorporating the modifications that correspond to the first group of users without incorporating modifications that correspond to the second group of users and without committing the plurality of proposed modifications to the baseline set of objects stored in the repository; and
instantiating, by one or more computer processors, an instance of the at least one application server node having at least one modification not made to the at least one application server node of the baseline set of objects, wherein the instance is inaccessible to the second group of users.

US Pat. No. 10,248,550

SELECTING A SET OF TEST CONFIGURATIONS ASSOCIATED WITH A PARTICULAR COVERAGE STRENGTH USING A CONSTRAINT SOLVER

Oracle International Corp...

1. A non-transitory computer readable medium comprising instructions which, when executed by one or more hardware processors, causes performance of a set of operations, comprising:
identifying a set of test parameters for testing a target application;
identifying, for each test parameter of the set of test parameters, a respective set of candidate values for the test parameter;
identifying a set of candidate test configurations, each associated with a respective combination of candidate values for the set of test parameters;
identifying a desired coverage strength;
identifying subgroups of test parameters, from the set of test parameters, each of the subgroups of test parameters including a number of test parameters that is equal to the desired coverage strength;
identifying a set of interactions, wherein:
each interaction is associated with one of the subgroups of test parameters;
each interaction is associated with a respective combination of candidate values for the associated subgroup of test parameters; and
the set of interactions covers every possible combination of candidate values for each of the subgroups of parameters;
specifying a selection variable, indicating which of the set of candidate test configurations have been selected;
specifying a set of one or more constraints that:
minimizes a cost of testing the target application using a subset of the set of candidate test configurations that is selected based on the selection variable; and
requires each of the set of interactions to be covered by at least one of the subset of the set of candidate test configurations that is selected based on the selection variable;
storing a data model, applicable to a constraint solver, comprising:
the selection variable; and
the set of constraints;
wherein the target application is executed using at least one of the subset of the set of candidate test configurations that is selected based on the selection variable.
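
The claim above amounts to a covering-array style selection problem: enumerate every t-way interaction and choose a low-cost subset of candidate configurations that covers all of them. The sketch below is a minimal, hypothetical illustration in Python; the parameter names are invented, and a greedy set cover stands in for the constraint solver that the claimed data model would be handed to.

from itertools import combinations, product

def select_configurations(parameters, strength, cost=lambda cfg: 1):
    """parameters: dict of parameter name -> list of candidate values; strength: desired t."""
    names = sorted(parameters)
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]

    # One interaction per value combination of each t-parameter subgroup.
    interactions = set()
    for group in combinations(names, strength):
        for values in product(*(parameters[n] for n in group)):
            interactions.add(tuple(zip(group, values)))

    def covered_by(cfg):
        return {i for i in interactions if all(cfg[n] == v for n, v in i)}

    selected, uncovered = [], set(interactions)
    while uncovered:
        # Greedy stand-in for the solver: most newly covered interactions per unit cost.
        best = max(candidates, key=lambda c: len(covered_by(c) & uncovered) / cost(c))
        gained = covered_by(best) & uncovered
        if not gained:          # nothing left to gain; avoid an infinite loop
            break
        selected.append(best)
        uncovered -= gained
    return selected

if __name__ == "__main__":
    params = {"browser": ["chrome", "firefox"],
              "os": ["linux", "windows"],
              "locale": ["en", "de"]}
    for config in select_configurations(params, strength=2):
        print(config)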

US Pat. No. 10,248,549

SYSTEMS AND METHODS FOR DETECTION OF UNTESTED CODE EXECUTION

Citrix Systems, Inc., Fo...

1. A method for improving the quality of a first software product, comprising:performing operations by a computing device to run the first software product generated by compiling source code modified based on code coverage data gathered during testing of the first software product prior to release on a market, the code coverage data identifying at least one first portion of the source code which was executed at least once during the testing and identifying at least one second portion of the source code which was not executed during the testing;
automatically detecting when an execution of the second portion is triggered while the first software product is being used by an end user subsequent to being released on the market;
automatically performing a notification action in response to said detecting; and
mitigating a risk of defect in the first software product based on the notification action through an end user data-driven triggering of additional testing for untested portions of the source code subsequent to when the first software product has been released.

US Pat. No. 10,248,548

CODE COVERAGE INFORMATION

ENTIT SOFTWARE LLC, Sunn...

1. A method comprising:obtaining code coverage information related to lines of code;
generating a two-way mapping based on the code coverage information, the two-way mapping comprising a first mapping that maps a particular test in the plurality of tests to at least one line in the lines of code that is covered by the particular test and a second mapping that maps a particular line of code in the lines of code to at least one test in the plurality of tests that covers the particular line of code;
receiving, from a client computing device, an indication that a new line has been added to the lines of code in an integrated development environment (IDE) running on the client computing device;
causing the plurality of tests to be executed; and
obtaining the code coverage information related to the new line, wherein the code coverage information related to the new line comprises at least one of: a first indication that the new line passed all of the tests covering the new line, a second indication that the new line failed at least one test covering the new line, or a third indication that the new line is not covered by any of the plurality of tests.
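
As a rough illustration of the two-way mapping described in this claim, the sketch below (data layout assumed, not taken from the patent) keeps one dictionary from each test to the lines it covers and a mirror dictionary from each line to the tests covering it, then classifies a newly added line into the three outcomes the claim lists.

from collections import defaultdict

def build_two_way_mapping(coverage):
    """coverage: dict of test name -> iterable of (line number, passed_bool) pairs."""
    test_to_lines = defaultdict(set)   # first mapping: test -> covered lines
    line_to_tests = defaultdict(set)   # second mapping: line -> covering tests
    results = {}                       # (test, line) -> did the test pass on that line?
    for test, hits in coverage.items():
        for line, passed in hits:
            test_to_lines[test].add(line)
            line_to_tests[line].add(test)
            results[(test, line)] = passed
    return test_to_lines, line_to_tests, results

def classify_new_line(line, line_to_tests, results):
    """Return one of the three indications the claim lists for a newly added line."""
    tests = line_to_tests.get(line, set())
    if not tests:
        return "not covered by any test"
    if all(results[(t, line)] for t in tests):
        return "passed all covering tests"
    return "failed at least one covering test"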

US Pat. No. 10,248,547

COVERAGE OF CALL GRAPHS BASED ON PATHS AND SEQUENCES

SAP SE, Walldorf (DE)

1. A non-transitory machine-readable medium storing a program executable by at least one processing unit of a computing device, the program comprising sets of instructions for:collecting a set of call stack data associated with a set of test cases executed on an application;
generating a set of call graphs based on the set of call stack data, each call graph in the set of call graphs comprising a set of nodes representing a set of functions in the application executed in the corresponding test case in the set of test cases;
generating a full call graph based on the set of call graphs, the full call graph comprising a plurality of nodes representing the sets of functions executed by the application during the execution of the set of test cases on the application;
determining a set of switch nodes and a set of edge nodes in the plurality of nodes of the full call graph, wherein each switch node in the set of switch nodes calls two or more other nodes in the full call graph, wherein each edge node in the set of edge nodes does not call any nodes in the full call graph;
determining, for each call graph in the set of call graphs, a set of short paths and a set of short sequences in the call graph, wherein each short path in the set of short paths comprises a path in the call graph that starts at a switch node in the call graph and ends at a first switch node in the call graph along the path, a first edge node in the call graph along the path, or a node in the call graph at an end of the path, wherein each short sequence in the set of short sequences comprises a switch node in the call graph and a set of nodes in the call graph that the switch node calls;
receiving a notification indicating a modification to a function in the sets of functions of the application;
determining a subset of the set of test cases to test the modification to the function based on the sets of short paths and the sets of short sequences in the set of call graphs; and
executing the subset of the set of test cases on a version of the application that includes the modified function.
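
A compact sketch of the node classification and "short sequence" extraction described above, assuming the call graph is an adjacency dict from caller to callees; the path-following helper is a simplification of the claimed short paths, not the patented procedure.

def classify_nodes(call_graph):
    """call_graph: dict of node -> list of nodes it calls (empty list for leaves)."""
    switch_nodes = {n for n, callees in call_graph.items() if len(callees) >= 2}
    edge_nodes = {n for n, callees in call_graph.items() if not callees}
    return switch_nodes, edge_nodes

def short_sequences(call_graph, switch_nodes):
    # Each short sequence: a switch node together with the set of nodes it calls.
    return {s: tuple(call_graph[s]) for s in switch_nodes}

def short_paths_from(start, call_graph, switch_nodes, edge_nodes):
    """Follow each call chain out of a switch node until the next switch node,
    an edge node, or the end of the chain (a simplification of the claim)."""
    paths = []
    for first in call_graph.get(start, ()):
        path, node, seen = [start, first], first, {start, first}
        while node not in switch_nodes and node not in edge_nodes:
            callees = [c for c in call_graph.get(node, ()) if c not in seen]
            if not callees:
                break
            node = callees[0]      # a non-switch, non-edge node has a single continuation
            path.append(node)
            seen.add(node)
        paths.append(path)
    return paths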

US Pat. No. 10,248,546

INTELLIGENT DEVICE SELECTION FOR MOBILE APPLICATION TESTING

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:determining features of a new mobile application to be tested;
comparing, by a processor, the features of the new mobile application to be tested with features of multiple known mobile applications to identify one or more known mobile applications with similar features; and
based at least in part on automated analysis of user reviews of the one or more known mobile applications operating in one or more types of mobile devices, providing, by the processor, one or more risk scores for operation of the new mobile application in the one or more types of mobile devices, wherein providing the one or more risk scores comprises generating a respective risk score for operating the new mobile application in each type of mobile device of the one or more types of mobile devices, the respective risk scores being determined based, at least in part, on automated analysis of user reviews of the one or more known mobile applications with similar features operating on that type of mobile device, and wherein the one or more risk scores are each determined based on one or more factors selected from a group consisting of: a degree of feature similarity between the new mobile application and the known mobile application with similar features, a number of defect instances reported for the known mobile application on the type of mobile device, a sentiment of user reviews of the known mobile device application on the type of mobile device, and a number of like or dislike reviews of the known mobile application on the type of mobile device.
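
Purely as an illustration of combining the four factors named in the claim into a per-device risk score, the sketch below uses invented weights and scalings; the patent does not disclose this particular formula.

def risk_score(feature_similarity, defect_count, review_sentiment, likes, dislikes):
    """feature_similarity in [0, 1]; review_sentiment in [-1, 1], 1 being positive."""
    defect_risk = min(defect_count / 50.0, 1.0)         # saturate at 50 reported defects
    sentiment_risk = (1.0 - review_sentiment) / 2.0      # map [-1, 1] onto [1, 0]
    vote_risk = dislikes / (likes + dislikes) if (likes + dislikes) else 0.5
    base = 0.5 * defect_risk + 0.3 * sentiment_risk + 0.2 * vote_risk
    # Evidence from a more similar known application is weighted more heavily.
    return feature_similarity * base

if __name__ == "__main__":
    print(risk_score(feature_similarity=0.8, defect_count=12,
                     review_sentiment=-0.2, likes=40, dislikes=60))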

US Pat. No. 10,248,545

METHOD FOR TRACKING HIGH-LEVEL SOURCE ATTRIBUTION OF GENERATED ASSEMBLY LANGUAGE CODE

PARASOFT CORPORATION, Mo...

1. A computer implemented method for tracking high-level source attribution of a generated assembly language code, the method comprising:receiving commands for compiling or linking a high-level language code;
analyzing the received commands to determine whether a command is a compiler command for compiling the high-level language code or a link command for linking the low level object code;
when the command is a compiler command:
generating assembly language code by compiling the high-level language code,
parsing the generated assembly language code to generate an internal representation for the assembly language code,
storing the internal representation in a computer memory; and
generating associated linker input artifacts for linking;
when the command is a link command:
updating the internal representation with the associated linker input artifacts;
generating a report file from the updated internal representation; instrumenting the updated internal representation for coverage metrics;
generating instrumented low level assembly language source code from the instrumented internal representation;
assembling the instrumented low level assembly language source code to produce object code, linking the object modules in the object codes to produce an executable binary image file;
executing the executable binary image file to extract dynamic coverage information; and
merging the dynamic coverage information and the updated internal representation to produce test coverage reports.

US Pat. No. 10,248,544

SYSTEM AND METHOD FOR AUTOMATIC ROOT CAUSE DETECTION

CA, Inc., Islandia, NY (...

1. A method, comprising:obtaining, with one or more processors, a history of performance of an application, wherein:
the history of performance comprises a plurality of historical transaction records corresponding to a plurality of transactions serviced by the application responsive to respective requests received at an entry point of the application,
respective historical transaction records identify a plurality of components of the application accessed to service a respective transaction among the plurality of transactions,
respective historical transaction records include a plurality of attributes of operation of respective components among the plurality of components of the application accessed to service the respective transaction, and
the attributes include, for at least some components in at least some transactions, a respective response time of the respective component in the respective transaction;
after obtaining the history of performance, receiving, with one or more processors, an error of the application occurring in a first component of the application during an executed transaction that is servicing a request;
obtaining, with one or more processors, an executed transaction record of the executed transaction associated with the error, the executed transaction record identifying a plurality of components of the application accessed to service the executed transaction;
selecting, with one or more processors, a subset of the historical transaction records at least in part by comparing at least part of the historical transaction records to at least part of the executed transaction record and determining that at least some request parameters and that at least some application components match between the executed transaction record and the subset of historical transaction records, the subset of the historical transaction records including a plurality of historical transaction records;
determining, with one or more processors, that a value of an attribute of the executed transaction record is inconsistent with values of the attribute in the selected subset of historical transaction records, the attribute being associated with a second component of the application, wherein determining that the value of the attribute of the executed transaction record is inconsistent with values of the attribute in the selected subset of historical transaction records comprises comparing a response time of the second component during the executed transaction associated with the error to a threshold response time, the response time of the second component being a portion attributable to the second component of a response time of the request serviced by the executed transaction associated with the error; and
in response to the determination, designating, with one or more processors, in memory, the second component as potentially associated with a root cause of the error.
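
A simplified sketch of the comparison in this claim, with an assumed record layout: historical transactions whose request parameters and touched components match the failing transaction are selected, a per-component response-time threshold is derived from them, and components exceeding it are designated as potentially associated with the root cause. The three-sigma threshold is an assumption, not the patent's rule.

from statistics import mean, pstdev

def matching_history(history, executed):
    """Keep historical records with the same request parameters and the same components."""
    return [h for h in history
            if h["params"] == executed["params"]
            and set(h["component_times"]) == set(executed["component_times"])]

def suspect_components(history, executed, sigmas=3.0):
    suspects = []
    subset = matching_history(history, executed)
    for component, observed in executed["component_times"].items():
        samples = [h["component_times"][component] for h in subset]
        if len(samples) < 2:
            continue                                  # not enough history to judge
        threshold = mean(samples) + sigmas * pstdev(samples)
        if observed > threshold:
            suspects.append(component)                # potentially associated with the root cause
    return suspects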

US Pat. No. 10,248,543

SOFTWARE FUNCTIONAL TESTING

1. A method comprising:presenting to a user through a client device a graphical representation of an output of executing software;
capturing at least one image of physical movement made by the user interacting with the graphical representation of the output of the executing software;
applying computer vision to the at least one image to identify graphical elements in the graphical representation of the output of the executing software;
applying computer vision to the at least one image to identify user interactions with the graphical elements in the graphical representation of the output of the executing software based on the graphical elements identified in the graphical representation of the output of the executing software;
receiving user input indicating functions associated with elements of the software including the graphical elements for use in executing the software;
generating a script package based on the user interactions with the graphical elements in the graphical representation of the output of the executing software and the user input indicating the functions associated with the elements of the software for use in executing the software, the script package including script capable of being executed in functionally testing the software;
functionally testing the software on at least one virtualized testbed machine using the script package;
generating output of functionally testing the software by functionally testing the software using the script package;
performing functional testing analysis of the software by applying computer vision to a graphical representation of the output of functionally testing the software to determine at least one of a degree to which the graphical representation of the output of functionally testing the software changes compared to an expected output of functionally testing the software and a frequency at which the graphical representation of the output of functionally testing the software changes compared to a graphical representation of the expected output of functionally testing the software, said at least one of the degree to which the output of functionally testing the software changes and the frequency at which the graphical representation of the output of functionally testing the software changes used to generate functional testing analytics data included as part of functional testing results.

US Pat. No. 10,248,542

SCREENSHOT VALIDATION TESTING

International Business Ma...

1. A method comprising:receiving, by one or more computer processors, parameters of a test scenario, wherein the parameters include a first screenshot of an application interface that is a webpage, and one or more page objects within the first screenshot that are models of areas within the webpage for verification testing of one or more features of the webpage;
generating, by the one or more computer processors, a second screenshot of an updated version of the application interface that includes changes to a website environment, a structure, a design, and a format of the application interface for the application interface;
identifying, by the one or more computer processors, one or more page objects within the second screenshot that are within a defined scope of each of the one or more page objects within the first screenshot in the parameters of the test scenario, wherein the scope defines outer boundaries of a section of the webpage associated with the one or more page objects within the first screenshot;
generating a partial screenshot containing only the identified one or more page objects within the second screenshot that are within the defined scope;
comparing, by the one or more computer processors, a section of the partial screenshot that includes the identified one or more page objects of the second screenshot to a section of the first screenshot that includes one or more page objects associated with the first screenshot that correspond to the one or more page objects included in the section of the second screenshot, wherein portions of the first screenshot not included in the section of the first screenshot and portions of the second screenshot not included in the section of the second screenshot are excluded from the comparison of the section of the second screenshot to the section of the first screenshot;
determining, by the one or more computer processors, whether the section of the second screenshot matches, within a predetermined tolerance level, the section of the first screenshot; and
providing, by the one or more computer processors, a report based on the determination and comparison, wherein the report includes results of the verification testing of the one or more features of the webpage and the portions that are excluded from the comparison and outside of the defined scope are excluded from the report, the results including a difference between the compared sections and the predetermined tolerance level.
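
A minimal sketch of the section-only comparison, with NumPy arrays standing in for screenshots and an invented (top, left, bottom, right) scope tuple; everything outside the page-object section is ignored, and the pixel difference is reported against the tolerance.

import numpy as np

def crop(image, scope):
    """scope: (top, left, bottom, right) outer boundaries of the page-object section."""
    top, left, bottom, right = scope
    return image[top:bottom, left:right]

def compare_sections(baseline, updated, scope, tolerance=0.02):
    a, b = crop(baseline, scope), crop(updated, scope)
    if a.shape != b.shape:
        return {"match": False, "difference": 1.0, "tolerance": tolerance}
    difference = float(np.mean(a != b))               # fraction of differing pixel values
    return {"match": difference <= tolerance, "difference": difference,
            "tolerance": tolerance}

if __name__ == "__main__":
    base = np.zeros((100, 200, 3), dtype=np.uint8)
    new = base.copy()
    new[10:20, 10:20] = 255                           # a change inside the compared section
    print(compare_sections(base, new, scope=(0, 0, 50, 100)))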

US Pat. No. 10,248,541

EXTRACTION OF PROBLEM DIAGNOSTIC KNOWLEDGE FROM TEST CASES

International Business Ma...

1. A method comprising:identifying a first set of log entries generated during a first execution of an application code in a first scenario, wherein each log entry in the first set of log entries corresponds to a set of lines of code in the application code;
creating a first rule based, at least in part, on a first order of the first set of log entries, wherein the first order is based on a first chronological order;
identifying a second set of log entries generated during a second execution of the application code in a second scenario; and
determining the second set of log entries partially satisfies the first rule;
wherein:
at least determining the second set of log entries partially satisfies the first rule is performed by computer software running on computer hardware.
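
An illustrative sketch of a chronological-order rule and a "partially satisfies" check, under the assumption that each log entry carries an identifier for the code that produced it; the rule representation and the matching policy here are my own simplifications.

def build_rule(first_run_entries):
    """The rule is the chronologically ordered identifiers of the first run's log entries."""
    return [entry["id"] for entry in first_run_entries]

def satisfaction(rule, second_run_entries):
    """Return how much of the rule the second run's entries satisfy, in order."""
    observed = [entry["id"] for entry in second_run_entries]
    matched, pos = 0, 0
    for entry_id in observed:
        if pos < len(rule) and entry_id == rule[pos]:
            matched += 1
            pos += 1
    fraction = matched / len(rule) if rule else 1.0
    if fraction == 1.0:
        return "satisfied", fraction
    return ("partially satisfied", fraction) if fraction > 0 else ("not satisfied", fraction)

if __name__ == "__main__":
    rule = build_rule([{"id": "open"}, {"id": "read"}, {"id": "close"}])
    print(satisfaction(rule, [{"id": "open"}, {"id": "read"}]))   # ('partially satisfied', 0.66...)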

US Pat. No. 10,248,540

BIFURCATING A MULTILAYERED COMPUTER PROGRAM PRODUCT

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for debugging a computer program product, the method comprising:receiving an identifier of a portion of code from a first module of the computer program product, wherein the portion of code contains a defect, the computer program product comprising a plurality of modules, each module being associated with a development team identifier that indicates which development team developed the module;
determining, based on the development team identifier, and displaying a list of execution scenarios that invoke the portion of code from the first module, wherein an execution scenario in the list invokes computer code only from a set of modules that are associated with the development team identifier of the first module; and
in response to receipt of a selection of a first execution scenario from the list of execution scenarios, executing the computer program product according to the first execution scenario by executing only the computer code from the set of modules that are associated with the development team identifier of the first module.

US Pat. No. 10,248,539

MANAGING SOFTWARE PERFORMANCE TESTS BASED ON A DISTRIBUTED VIRTUAL MACHINE SYSTEM

INTERNATIONAL BUSINESS MA...

1. A method, comprising:in response to determining a debugging state of a software system running on a virtual machine (VM), controlling timing of a system clock of the VM;
intercepting a data packet sent to the VM from another VM, and extracting from the data packet an added system time and a reference time indicative of when the data packet is sent by the other VM;
based on the system time and the reference time of when the data packet is sent out by the other VM and a reference time of when the data packet is intercepted, calculating a timing at which the data packet is expected to be received by the VM; and
forwarding the data packet to the VM as a function of a comparison result of the timing at which the data packet is expected to be received by the VM and a system time of the VM when the data packet is intercepted;
wherein in response to intercepting a data packet sent from the VM to the other VM, parsing an address of the VM from the data packet sent to the other VM from the VM;
obtaining a system time and a reference time of the VM when sending the data packet to the other VM according to the address of the VM; and
adding the system time and the reference time of the VM when sending the data packet to the other VM in the data packet sent to the other VM from the VM.
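
A schematic sketch of the timing arithmetic implied by this claim, with invented field names: the sender stamps its virtual system time and a wall-clock reference time into the packet, and the interceptor converts the wall-clock transit delay into the receiver's timeline to decide whether the packet may be forwarded yet.

def expected_receive_time(packet, intercept_reference_time):
    """Sender's virtual system time plus the wall-clock transit delay of the packet."""
    transit = intercept_reference_time - packet["sender_reference_time"]
    return packet["sender_system_time"] + transit

def should_forward(packet, intercept_reference_time, receiver_system_time):
    """Forward only once the receiver's (possibly slowed) clock reaches the expected time."""
    return receiver_system_time >= expected_receive_time(packet, intercept_reference_time)

if __name__ == "__main__":
    pkt = {"sender_system_time": 100.0, "sender_reference_time": 7.0}
    # Intercepted at wall-clock 7.3 while the receiver's debug-slowed clock reads 100.1.
    print(expected_receive_time(pkt, 7.3))             # 100.3
    print(should_forward(pkt, 7.3, 100.1))             # False: hold the packet for now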

US Pat. No. 10,248,538

CONTROLLER OF SEMICONDUCTOR MEMORY DEVICE FOR DETECTING EVENT AND STORING EVENT INFORMATION AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A controller for controlling a semiconductor memory device, the controller comprising:an event occurrence detection unit configured to detect whether an event occurs;
an event information generation unit configured to generate event information, in response to the detected result; and
a command generation unit configured to generate a command for storing the event information in the semiconductor memory device,
wherein the semiconductor memory device includes a memory cell array configured to perform a program operation of storing data,
wherein the event includes a time-out event that occurs when the program operation is performed in the semiconductor memory device, and is detected when a monitoring signal does not exist for a time-out time and a value of a data length counter is not a predetermined value, and
wherein the monitoring signal periodically inputted for every set time is continuously inputted for the time-out time, and the value of the data length counter is a value for counting data input to the semiconductor memory device.

US Pat. No. 10,248,537

TRANSLATION BUG PREDICTION CLASSIFIER

Microsoft Technology Lice...

1. A translation system, comprising:a memory configured to store a translation bug prediction model and a translation resource; and
a processing core having at least one processor configured to apply a translation bug prediction model to the translation resource to calculate a successful score for the translation resource, calculate an unsuccessful score for the translation resource, identify a potential error source based on the successful score and the unsuccessful score, and execute an automatic translation of the translation resource to create a translation target,
wherein the successful score for the translation resource and the unsuccessful score for the translation resource are based on whether a previous translation was corrected.

US Pat. No. 10,248,536

METHOD FOR STATIC AND DYNAMIC CONFIGURATION VERIFICATION

International Business Ma...

1. A method for improving functionality and performance of a client terminal by verifying correctness of application configuration of an application, comprising:for each of baseline source code and changed source code of an application for a client terminal, analyzing a graph representation of an execution flow of a plurality of application functionalities performed by execution of a respective said source code;
identifying by said analysis a plurality of baseline functional dependencies between source code segments of said baseline source code, wherein each of said baseline functional dependencies defines a dependency between one of said baseline source code segments and another one of said baseline source code segments;
defining a plurality of baseline dependency pairs, each defined for another one of said plurality of identified baseline functional dependences, wherein each of said plurality of baseline dependency pairs comprises two code segments of said baseline source code, wherein a second of said two code segments depends on a first of said two code segments;
identifying by said analysis a plurality of functional dependencies between source code segments of said changed source code, wherein each of said functional dependencies defines a dependency between one of said changed source code segments and another one of said changed source code segments, wherein each of said plurality of functional dependencies and said plurality of baseline functional dependencies is at least one of data dependency and control dependency;
defining a plurality of changed source code dependency pairs, each defined for another one of said plurality of identified changed source code functional dependencies, wherein each of said plurality of changed source code dependency pairs comprises two code segments of said changed source code, wherein a second of said two code segments depends on a first of said two code segments;
comparing functional dependencies of the baseline dependency pairs with the changed source code dependency pairs and identifying one or more matches when a baseline dependency pair has a corresponding changed source code dependency pair in the baseline source code;
identifying a configuration discrepancy according to the comparison when a dependency pair does not have a corresponding dependency pair match or identifying one or more dependency pairs in the changed source code which are not permitted by the application configuration;
generating at least one of a system interrupt or an error message as a notification, when said configuration discrepancy is identified.
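
A condensed sketch of the dependency-pair comparison, assuming pairs are represented as (prerequisite segment, dependent segment) tuples; a discrepancy is a pair with no counterpart on the other side, or a changed-code pair that the application configuration does not permit.

def find_discrepancies(baseline_pairs, changed_pairs, permitted_pairs):
    baseline, changed = set(baseline_pairs), set(changed_pairs)
    missing_in_changed = baseline - changed          # baseline pair with no counterpart
    new_in_changed = changed - baseline              # changed-code pair with no baseline match
    not_permitted = changed - set(permitted_pairs)   # pair the application configuration disallows
    return {
        "missing_in_changed": missing_in_changed,
        "new_in_changed": new_in_changed,
        "not_permitted_by_configuration": not_permitted,
    }

def notify(discrepancies):
    if any(discrepancies.values()):
        # Stand-in for the claimed system interrupt or error message.
        raise RuntimeError("configuration discrepancy detected: %s" % discrepancies)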

US Pat. No. 10,248,535

ON-DEMAND AUTOMATED LOCALE SEED GENERATION AND VERIFICATION

INTERNATIONAL BUSINESS MA...

1. A method for locale verification, comprising:receiving a user's locale information to fill a locale data template at a user device;
converting the user's filled locale data template to a normalized format to prevent data loss across platforms;
converting a set of expected locale responses into the normalized format based on the locale data template;
comparing the user's normalized filled locale data template to the normalized set of expected locale responses to identify one or more mismatches between the received locale data template and the set of expected locale responses, pertaining to at least one locale-related attribute of the system under test, using a processor; and
automatically altering the at least one locale-related attribute of the system under test to correct the one or more mismatches.

US Pat. No. 10,248,534

TEMPLATE-BASED METHODOLOGY FOR VALIDATING HARDWARE FEATURES

International Business Ma...

1. A method comprising:creating a workload pool including a plurality of workloads;
receiving a definition of a test thread including: (i) a test block of instructions for testing functionality of a processor, and (ii) a first workload hook in the form of a callback function identifying a first workload of the plurality of workloads;
executing, by the processor, a test thread according to the test thread definition, with the execution of the test thread including:
responsive to encountering the first workload hook, performing the callback function of the first workload hook,
responsive to performance of the callback function, retrieving the first workload from the workload pool,
responsive to the retrieval of the first workload, performing the first workload by executing the test block of instructions to obtain first test results; and
communicating the test results to a user to test at least some of the functionality of the processor;
wherein the use of the callback function of the first workload hook and the retrieval of the first workload from the workload pool decouples a definition of the test thread from the first workload performed by the test thread to obtain the first test results.

US Pat. No. 10,248,533

DETECTION OF ANOMALOUS COMPUTER BEHAVIOR

State Farm Mutual Automob...

1. A computer-implemented method for determining features of a dataset that are indicative of anomalous behavior of one or more computers in a large group of computers, the computer-implemented method comprising, via one or more processors and/or transceivers:receiving log files including a plurality of entries of data regarding connections between a plurality of computers belonging to an organization and a plurality of websites outside the organization, each entry being associated with the actions of one computer;
determining a plurality of embedded features that are included in each entry;
determining a plurality of derived features that are extracted from the embedded features;
creating a plurality of features including the embedded features and the derived features;
executing a time series decomposition algorithm on a portion of the features of the data to generate a first list of features;
implementing a plurality of traffic dispersion graphs to generate a second list of features; and
implementing an autoencoder and a random forest regressor to generate a third list of features.

US Pat. No. 10,248,532

SENSITIVE DATA USAGE DETECTION USING STATIC ANALYSIS

Amazon Technologies, Inc....

1. A system, comprising:a plurality of computing devices configured to implement a sensitive data detection system and a service-oriented system, wherein the service-oriented system comprises a plurality of services including a particular service, and wherein the sensitive data detection system is configured to:
retrieve a service model specifying one or more operations exposed by the particular service, wherein the service model is retrieved based at least in part on a name of the particular service;
extract names of the one or more operations from the service model;
extract names of one or more parameters of the one or more operations from the service model;
identify one or more sensitive operations among the one or more operations, wherein the names of the one or more operations and the names of the one or more parameters are checked against a dictionary of sensitive terms;
identify one or more consumers of the particular service using a metadata repository, wherein the metadata repository specifies a dependency of the one or more consumers on one or more client-side packages of the particular service;
identify one or more of the sensitive operations called by the consumers in source code for the one or more consumers, wherein the source code for the one or more consumers is retrieved from a source code repository; and
responsive to identification of a particular called sensitive operation, implement one or more security measures to enhance security of data to be processed by the particular called sensitive operation, wherein the one or more security measures include at least one of:
generation of a directed graph representing flow of sensitive data through one or more of the services, wherein the data to be processed by the particular sensitive operation comprises the sensitive data,
modification of source code of the service, or
migration of the service from a current deployment environment to another deployment environment that is more secure than the current deployment environment.
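
A toy sketch of the dictionary check at the heart of this claim (service-model retrieval and the consumer source scan are omitted); the term list and the identifier-splitting rule are assumptions.

import re

SENSITIVE_TERMS = {"ssn", "password", "credit", "card", "address", "birthdate"}

def tokens(identifier):
    """Split a camelCase or snake_case identifier into lowercase words."""
    return {t.lower() for t in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", identifier)}

def sensitive_operations(service_model):
    """service_model: dict of operation name -> list of parameter names."""
    flagged = []
    for operation, params in service_model.items():
        names = tokens(operation).union(*map(tokens, params))
        if names & SENSITIVE_TERMS:
            flagged.append(operation)
    return flagged

if __name__ == "__main__":
    model = {"getCreditCardNumber": ["accountId"], "listWidgets": ["pageSize"]}
    print(sensitive_operations(model))                 # ['getCreditCardNumber']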

US Pat. No. 10,248,531

SYSTEMS AND METHODS FOR LOCALLY STREAMING APPLICATIONS IN A COMPUTING SYSTEM

United Services Automobil...

1. A system, comprising:a storage component configured to store data; a processor configured to:
store a first set of data generated during a first period of time in the storage component;
generate a first partition in the storage component after the first period of time;
store a second set of data generated during a second period of time after the first period of time into a second partition of the storage component;
receive a request to return to a state associated with the first period of time; and
swap the second partition with a third partition of the storage component in response to the request, wherein the second partition is temporarily disabled, wherein the third partition comprises a hidden portion of the storage component and wherein the hidden portion of the storage component is configured to be invisible to a user of a computing system.

US Pat. No. 10,248,530

METHODS AND SYSTEMS FOR DETERMINING CAPACITY

Comcast Cable Communicati...

1. A method comprising:determining, by a computing device, a first set of capacity test results by performing a first set of capacity tests of a computing system;
determining a second set of capacity test results by performing a second set of capacity tests of the computing system, wherein the second set of capacity tests is different from the first set of capacity tests;
determining a third set of capacity test results by performing a third set of capacity tests of the computing system, wherein the third set of capacity tests is different from the first set of capacity tests and the second set of capacity tests;
calibrating one or more of the first set of capacity tests, the second set of capacity tests, or the third set of capacity tests based on the first set of capacity test results, the second set of capacity test results, or the third set of capacity test results;
performing a fourth set of capacity tests using one or more of the first set of capacity test results, the second set of capacity test results, or the third set of capacity test results associated with the calibrated one or more of the first set of capacity tests, the second set of capacity tests, or the third set of capacity tests; and
determining a computing system capacity based on a fourth set of capacity test results derived from the fourth set of capacity tests, the first set of capacity test results, the second set of capacity test results, and the third set of capacity test results.

US Pat. No. 10,248,529

COMPUTING RESIDUAL RESOURCE CONSUMPTION FOR TOP-K DATA REPORTS

International Business Ma...

1. A system for monitoring computer system operation, the system comprising a processor, memory accessible by the processor, and computer program instructions stored in the memory and executable by the processor to perform:monitoring and measuring metrics of system resource consumption of a plurality of entities in at least one computer system to generate resource consumption data;
generating a report of the system resource consumption data for the plurality of entities for each of a plurality of time periods;
identifying a number, k, of the plurality of entities as top-k consumers of computer system resources for each of the plurality of time periods;
capturing, long term resource consumption data of an entity of the plurality of entities other than the top-k entities, said long term resource consumption data of the entity being stored as residual resource consumption data;
determining, based on said residual resource consumption data for each entity of the plurality of entities other than the top-k entities, whether the resources used by the entity accumulated over the plurality of time periods is greater than the resources used by any top-k entity during any of the plurality of time periods;
identifying at least one residual entity of the plurality of entities as an entity whose accumulated resource consumption over said plurality of time periods is greater than the resources used by any of the top-k entities based on residual resource consumption data of the entity for the plurality of time periods; and
resampling the reports of the system resource consumption data corresponding to the top-k entities and to the identified at least one residual entity to generate at least one report covering a time period including the plurality of time periods, said generated at least one report reflecting a state of said at least one computing system with increased accuracy.
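
A compact sketch of the top-k and residual bookkeeping described above, assuming the input is one entity-to-consumption dict per reporting period; the "greater than any top-k figure" test is read here as exceeding the largest single-period top-k value.

from collections import defaultdict

def top_k_and_residuals(periods, k):
    """periods: list of dicts, each mapping entity -> resource consumption in that period."""
    residual_total = defaultdict(float)               # long-term residual consumption data
    max_single_topk = 0.0
    top_k_per_period = []
    for usage in periods:
        ranked = sorted(usage, key=usage.get, reverse=True)
        top_k = ranked[:k]
        top_k_per_period.append(top_k)
        max_single_topk = max([max_single_topk] + [usage[e] for e in top_k])
        for entity in ranked[k:]:                     # everyone outside the top-k
            residual_total[entity] += usage[entity]
    # Residual entities whose accumulated use exceeds any single-period top-k figure.
    flagged = [e for e, total in residual_total.items() if total > max_single_topk]
    return top_k_per_period, flagged

if __name__ == "__main__":
    periods = [{"a": 10, "b": 9, "c": 4}, {"a": 11, "b": 8, "c": 4},
               {"a": 9, "b": 10, "c": 4}, {"a": 10, "b": 9, "c": 4}]
    print(top_k_and_residuals(periods, k=2))          # 'c' accumulates 16 > 11, so it is flagged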

US Pat. No. 10,248,527

AUTOMATED DEVICE-SPECIFIC DYNAMIC OPERATION MODIFICATIONS

Amplero, Inc, Seattle, W...

1. A computer-implemented method comprising:tracking, by a configured computing system, device operations of a plurality of mobile devices, including generating data about modification actions of multiple types that are performed on configuration settings affecting use of hardware components on the mobile devices, and about subsequent changes in the device operations of the mobile devices;
determining, by the configured computing system and based on an automated analysis of the generated data, measured effects on the device operations that result from performing associated modification actions on the configuration settings, wherein the mobile devices have a plurality of device attributes reflecting the hardware components on the mobile devices, and wherein the associated modification actions of the multiple types have a plurality of action attributes reflecting associated configuration setting modifications;
generating, by the configured computing system, a decision tree structure for a subset of the device and action attributes that are selected based on having associated measured effects of an indicated type, including generating a node in the decision tree structure associated with each of multiple combinations of the device and action attributes of the subset, and storing a target distribution and a control distribution for each node to identify measured effects for devices having the device attributes of the associated combination for the node, wherein the target distribution for a node identifies the measured effects for devices receiving modification actions with the action attributes of the associated combination for the node, and wherein the control distribution for a node identifies the measured effects for other devices that do not receive modification actions with the action attributes of the associated combination for the node; and
using, by the configured computing system, the decision tree structure to control ongoing operations for an additional mobile device, including:
determining, by the configured computing system, and for each of multiple modification actions to possibly perform, a node in the decision data structure with an associated combination of device and action attributes that matches attributes of the additional mobile device and the modification action, and using the target and control distributions for the determined node to predict an effect of performing the modification action on the additional mobile device;
selecting, by the configured computing system, one of the multiple modification actions based at least in part on the predicted effect of performing the selected one modification action on the additional mobile device; and
performing, by the configured computing system, the selected one modification action on the additional mobile device to modify one or more configuration settings that affect use of hardware components on the additional mobile device.

US Pat. No. 10,248,526

DATA STORAGE DEVICE AND DATA MAINTENANCE METHOD THEREOF

Silicon Motion, Inc., Jh...

1. A data storage device, comprising:a flash memory, having a plurality of blocks, and the blocks comprise a current block and a temporary block; and
a controller, writing a first data sector corresponding to a first logical address into the current block, determining whether the temporary block has a second data sector that also corresponds to the first logical address, wherein when the temporary block already has a second data sector corresponding to the first logical address, the controller writes a first temporary-block table into the temporary block.

US Pat. No. 10,248,525

INTELLIGENT MEDICAL IMPLANT AND MONITORING SYSTEM

Bayer Oy, Turku (FI)

1. An intelligent medical implant and monitoring system comprising:an implant with a communication device;
an inserter for inserting the implant;
a reader that operates to broadcast a signal specific to the communication device causing the communication device to respond with a unique identifier; and
an external database for storing and providing access to information keyed to the unique identifier from the communication device;
wherein the inserter comprises circuitry for communicating with the communication device and storing the unique identifier in the communication device during an insertion process.

US Pat. No. 10,248,524

INSTRUCTION AND LOGIC TO TEST TRANSACTIONAL EXECUTION STATUS

Intel Corporation, Santa...

1. A system comprising:a plurality of processors comprising a plurality of multithreaded cores, wherein one or more of the multithreaded cores are to perform out-of-order instruction execution for a plurality of threads, and wherein the one or more of the multithreaded cores comprise:
instruction fetch logic to fetch a plurality of instructions of one or more of the threads,
an instruction decode unit to decode the instructions,
register renaming logic to rename one or more registers for the instructions within a register file,
an instruction cache to cache one or more of the instructions to be executed,
a data cache to cache data for the instructions,
a level 2 (L2) cache unit to cache one or more of the instructions and data for the instructions,
checkpoint logic to checkpoint an architectural state responsive to a first instruction among the instructions to initiate a transactional execution region comprising transactional memory operations,
transaction tracking logic to determine whether the transactional memory operations of the transactional execution region result in a conflict, wherein the transaction tracking logic is to adjust one or more flags responsive to a determination that a conflict exists,
an execution unit to execute a second instruction among the instructions to test a status of the transactional execution region, wherein, to execute the second instruction to test the status of the transactional execution region, the execution unit is further to determine that the second instruction is within a context of the transactional execution region and, in response, set a flag register to a value indicating that the second instruction is within the context of the transactional execution region, and
logic to atomically commit the transactional memory operations following a determination that no conflict exists or to roll back to the checkpointed architectural state following the determination that the conflict exists;
a processor interconnect to communicatively couple two or more of the processors; and
a system memory comprising a dynamic random access memory communicatively coupled to one or more of the processors.

US Pat. No. 10,248,523

SYSTEMS AND METHODS FOR PROVISIONING DISTRIBUTED DATASETS

Veritas Technologies LLC,...

1. A computer-implemented method for provisioning distributed datasets, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:identifying a dataset, wherein a production cluster stores a primary instance of the dataset by distributing data objects within the dataset across the production cluster according to a first partitioning scheme that assigns each data object within the dataset to a corresponding node within the production cluster;
receiving a request for a testing instance of the dataset on a testing cluster, wherein the testing cluster is to distribute storage of data objects across the testing cluster according to a second partitioning scheme that maps data objects to corresponding nodes within the testing cluster;
locating, in response to the request, a copied instance of the dataset that is derived from the primary instance of the dataset and that is stored outside both the production cluster and the testing cluster;
partitioning the copied instance of the dataset according to the second partitioning scheme, thereby generating a plurality of partitions of data objects that map to corresponding nodes within the testing cluster; and
providing the testing instance of the dataset in response to the request by providing storage access for each node within the testing cluster to a corresponding partition within the plurality of partitions without copying the copied instance of the dataset to the testing cluster.

US Pat. No. 10,248,522

SYSTEM AND METHOD FOR AUTOMATIC FEEDBACK/MONITORING OF AVIONICS ROBUSTNESS/CYBERSECURITY TESTING

Rockwell Collins, Inc., ...

1. A system for automatic feedback and monitoring of avionics robustness and cybersecurity testing, comprising:at least one fuzzer configured to be coupled to an avionics system under test (SUT) and configured to:
request at least one initial state of the SUT from a monitor module coupled to the fuzzer;
generate a plurality of test cases associated with the SUT; and
transmit the plurality of test cases to the SUT;
the at least one monitor module comprising an SNMP monitor, a serial monitor, a SYSLOG monitor, and a TCP monitor, configured to be coupled to the SUT via a serial connection, a SYSLOG server, and a TCP port, and to:
request a management information base from a managed device in data communication with the SUT;
determine at least one subsequent state of the SUT by at least one of 1) sending a system message to the SUT, the system message corresponding to a system response; and 2) detecting a regular activity of the SUT;
determine at least one error of the SUT by comparing the subsequent state and the initial state, the determined error corresponding to at least one first test case of the plurality of test cases; and
generate at least one log including one or more of the determined error, the associated initial state, the associated subsequent state, and the at least one first test case.

US Pat. No. 10,248,521

RUN TIME ECC ERROR INJECTION SCHEME FOR HARDWARE VALIDATION

MICROCHIP TECHNOLOGY INCO...

1. An integrated peripheral device having runtime self-test capabilities, comprising:a read path configured to internally forward read data;
a read fault injection logic configured to, under program control, inject at least one faulty bit into the forwarded read data; and
an error indication logic configured to, under program control, provide an error indication to a processor when a fault injection occurs;
wherein the read fault injection logic is further configured to inject the at least one faulty bit into the forwarded read data when a user has enabled the injection and when a read address matches a user-specified memory location.

US Pat. No. 10,248,520

HIGH SPEED FUNCTIONAL TEST VECTORS IN LOW POWER TEST CONDITIONS OF A DIGITAL INTEGRATED CIRCUIT

Oracle International Corp...

1. A method for functional testing of a microelectronic circuit, the method comprising:loading an initial test value into the microelectronic circuit through one or more inputs to the microelectronic circuit, the microelectronic circuit comprising a plurality of logic stages;
conducting a first portion of a functional test by transmitting a first clock signal to the microelectronic circuit, the first clock signal comprising one or more pulses of a first clock signal and one or more first delay periods, the first clock signal configured to test a first subset of the plurality of logic stages at an operational frequency of the microelectronic circuit;
storing a result of the first portion of the functional test in a memory device;
initializing the microelectronic circuit by reloading the initial value into the microelectronic circuit through the one or more inputs;
conducting a second portion of the functional test by transmitting a second clock signal to the microelectronic circuit, the second clock signal comprising at least one offsetting clock pulse, one or more pulses of a second clock signal and one or more second delay periods, the second clock signal configured to test a second subset of the plurality of logic stages at the operational frequency of the microelectronic circuit, the second subset of the plurality of logic stages different than the first subset, the one or more first delay periods of the first clock signal and the one or more second delay periods of the second clock signal causing a charge for one or more components of the microelectronic circuit to replenish; and
comparing the stored result from the first portion and a result from the second portion to an expected result.

US Pat. No. 10,248,519

INPUT DEVICE TEST SYSTEM AND METHOD THEREOF

PRIMAX ELECTRONICS LTD., ...

1. A mouse device test system, configured to test a mouse device having a plurality of functional elements and the functional elements comprise a left key, a right key, a capacitance detector, or an optical detector, wherein the mouse device test system comprises:a test host, configured to execute a test program and a message interception program, and output a test message by means of the test program, wherein the test message comprises testing the left key, the right key, or the optical detector; and
a test platform, configured to receive the test message and operate the mouse device according to the test message, wherein the mouse device outputs a response message to the test host in response to the operation, wherein the test platform is a mechanical arm or a striking head, and when the test message comprises testing the left key or the right key, the striking head strikes the left key or the right key and when the test message comprises testing the optical detector, the mechanical arm moves the mouse device towards a preset direction at a preset speed, wherein
the message interception program is used to intercept the response message and convert the response message into at least one code, and the test program determines whether the at least one code is consistent with the test message.

US Pat. No. 10,248,518

INFORMATION PROCESSING DEVICE AND METHOD OF STORING FAILURE INFORMATION

FUJITSU LIMITED, Kawasak...

1. An information processing device, comprising:a memory; and
a processor coupled to the memory and the processor configured to
perform a diagnosis of hardware of the information processing device,
generate plural pieces of failure information that each indicate a failure detected by the diagnosis, the plural pieces of failure information being classified into groups corresponding to respective different importance levels,
store the plural pieces of failure information in consecutive storage areas of the memory, the consecutive storage areas being divided into storage sections corresponding to the respective groups in order of importance level, and
store a first piece of failure information in a head of a second storage section among the storage sections in the absence of free areas in a first storage section among the storage sections, the first piece of failure information being included in a first group among the groups, the first group corresponding to a first importance level among the importance levels, the first storage section being secured for the first group, the second storage section being secured for a second group among the groups, the second group corresponding to a second importance level among the importance levels, the second importance level being lower than the first importance level by one level, the first storage section and the head of the second storage section being consecutive.

US Pat. No. 10,248,517

COMPUTER-IMPLEMENTED METHOD, INFORMATION PROCESSING DEVICE, AND RECORDING MEDIUM

FUJITSU LIMITED, Kawasak...

1. A computer-implemented method for detecting a fault occurrence within a system including a plurality of information processing devices, the method comprising:each time a failure of the system occurs, acquiring first configuration information from a failed processing device of the plurality of processing devices;
generating learning data in which a setting item, a setting value that includes a setting error and a fault type are associated with each other;
storing the generated learning data;
determining whether each of fault types included in the learning data depends on a software configuration;
extracting a first software configuration pattern indicating a combination of setting files in which settings related to software are described, from the first configuration information, based on a result of the determining whether each of the fault types included in the learning data depends on the software configuration;
storing the extracted first software configuration pattern;
when monitoring the system, acquiring second configuration information from a target processing device of the plurality of processing devices;
extracting a second software configuration pattern indicating a combination of setting files in which settings related to software are described, from the acquired second configuration information;
determining whether to output an indication of the fault occurrence within the target processing device based on a result obtained by comparing the second software configuration pattern with the first software configuration pattern; and
notifying the indication of the fault occurrence when determined to output the indication of the fault occurrence.

US Pat. No. 10,248,515

IDENTIFYING A FAILING GROUP OF MEMORY CELLS IN A MULTI-PLANE STORAGE OPERATION

Apple Inc., Cupertino, C...

1. An apparatus, comprising:an interface, configured to communicate with a memory comprising multiple memory cells arranged in multiple planes, wherein each plane comprises one or more blocks of the memory cells; and
storage circuitry, which is configured to: apply a multi-plane storage operation to multiple blocks simultaneously across the respective planes;
apply a single-plane storage operation, in response to detecting that the multi-plane storage operation has failed, to one or more of the blocks that were accessed in the multi-plane storage operation, including a given block, and if the single-plane operation applied to the given block fails, identify the given block as a bad block; and
for subsequent write operations, retire the given block that was accessed in the multi-plane operation and was identified as a bad block, but permit storage of data in the blocks that were accessed in the multi-plane operation but were not identified as bad blocks.

US Pat. No. 10,248,514

METHOD FOR PERFORMING FAILSAFE CALCULATIONS

Micro Motion, Inc., Boul...

1. A method for performing failsafe computation, the method comprising the steps of:performing a first calculation to generate a first result using a single math library or using a single math co-processor;
performing a second calculation using a scalar and the first calculation to generate a second result using the single math library or using a single math co-processor, the second calculation including multiplying the first calculation by the scalar to generate a scaled result, and dividing the scaled result by the scalar to generate the second result; and
indicating whether the first result and the second result are equivalent.
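
A direct, minimal sketch of the failsafe check in this claim: redo the first result scaled by a constant, unscale it, and indicate whether the two results are equivalent. The scalar, the tolerance, and the use of a relative comparison rather than strict equality are my choices.

import math

def failsafe(calculation, *args, scalar=3.0, rel_tol=1e-12):
    first = calculation(*args)                        # first calculation
    scaled = first * scalar                           # multiply the first result by the scalar
    second = scaled / scalar                          # divide the scaled result by the scalar
    equivalent = math.isclose(first, second, rel_tol=rel_tol)
    return first, second, equivalent

if __name__ == "__main__":
    result, check, ok = failsafe(math.sqrt, 2.0)
    print(result, check, "equivalent" if ok else "MISMATCH")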

US Pat. No. 10,248,513

CAPACITY MANAGEMENT

International Business Ma...

1. A computer-implemented method comprising:receiving system information, pertaining to a storage backup environment, wherein the storage backup environment comprises a client computer system and a backup server;
calculating a compression ratio of the storage on the backup server and a backup ratio between an amount of data on the client computer system and an amount of data on the backup server;
calculating an average amount of storage consumed on the backup server per unit of storage on the client computer system based, at least in part, on the calculated backup ratio and the calculated compression ratio; and
determining an existing backup capacity for the storage backup environment by identifying an amount representing the actual capacity of data capable of being stored on the backup server and reducing the amount representing the actual capacity according to the calculated average amount of storage consumed on the backup server.
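
A worked sketch of the arithmetic this claim implies, with invented variable names and an assumed unit convention: the backup ratio and compression ratio yield the average backend storage consumed per unit of client data, and the backend's actual capacity reduced by that average gives the remaining backup capacity expressed in client-data units.

def backup_capacity(client_data, backend_logical, backend_physical, backend_capacity):
    backup_ratio = backend_logical / client_data              # backed-up data per unit of client data
    compression_ratio = backend_logical / backend_physical    # logical to physical on the backup server
    consumed_per_unit = backup_ratio / compression_ratio      # physical backend storage per client unit
    existing_capacity = backend_capacity / consumed_per_unit  # client data the backend can still absorb
    return backup_ratio, compression_ratio, consumed_per_unit, existing_capacity

if __name__ == "__main__":
    # 10 TB on the client maps to 20 TB of logical backups held in 5 TB of physical storage.
    print(backup_capacity(client_data=10, backend_logical=20,
                          backend_physical=5, backend_capacity=100))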

US Pat. No. 10,248,512

INTELLIGENT DATA PROTECTION SYSTEM SCHEDULING OF OPEN FILES

International Business Ma...

1. A method comprising:collecting and recording, by one or more processors, historical information pertaining to one or more backup jobs for a plurality of backup clients with a common backup window, wherein the historical information includes a temporal pattern of a number of files open during previous backup jobs and information pertaining to subsequent backup jobs initiated by an administrator after the completion of the previous backup jobs;
for each of the plurality of backup clients:
estimating, by one or more processors, a number of files to be open during the common backup window based, at least in part, on the historical information,
determining, by one or more processors, a number indicating how many subsequent backup jobs were initiated by the administrator after the completion of scheduled backup jobs of the files estimated to be open during the common backup window,
inferring, by one or more processors, an impact of skipping a backup of the files estimated to be open during the common backup window, where the impact is inferred from the historical information according to one or more predetermined criteria, wherein the one or more predetermined criteria include the determined number indicating how many subsequent backup jobs were initiated by the administrator after the completion of scheduled backup jobs of the files estimated to be open during the common backup window, and
combining, by one or more processors according to a predetermined cost function, the estimated number of files to be open during the common backup window and the inferred impact of skipping the backup of the estimated number of files to be open during the common backup window; and
scheduling, by one or more processors, an order of the one or more backup jobs among the plurality of clients during the common backup window to reduce an overall impact of skipping backups of files estimated to be open during the common backup window, based on the combining according to the predetermined cost function for each of the plurality of backup clients.

US Pat. No. 10,248,511

STORAGE SYSTEM HAVING MULTIPLE LOCAL AND REMOTE VOLUMES AND MULTIPLE JOURNAL VOLUMES USING DUMMY JOURNALS FOR SEQUENCE CONTROL

Hitachi, Ltd., Tokyo (JP...

1. A storage system comprising a primary storage system equipped with a primary storage subsystem having a primary volume and a first journal volume and a secondary storage subsystem having a local volume in which replica of data stored in the primary volume is stored and a second journal volume, and a secondary storage system equipped with a remote storage subsystem having a remote volume in which replica of the data stored in the primary volume is stored and a third journal volume, wherein in a state where the primary storage subsystem stores a write data from a host to the primary volume, the primary storage subsystem determines a sequence number which is a serial number for specifying a write order of the write data, creates a journal including a replica of the write data and the determined sequence number, stores the created journal in the first journal volume, and transmits the created journal to the remote storage subsystem,
in a state where the secondary storage subsystem receives the sequence number included in the journal stored in the first journal volume from the primary storage subsystem, the secondary storage subsystem stores the write data to the local volume, creates the journal including the replica of the write data and the sequence number, and stores the journal in the second journal volume,
during normal operation, in a state where the primary storage subsystem stops creating the journal after determining the sequence number, the secondary storage subsystem creates a dummy journal including the determined sequence number and not including the write data, and stores the dummy journal in the second journal volume, and
if a predetermined time has elapsed after the secondary storage subsystem creates a journal including a serial number larger than a second sequence number, without creating a journal including the second sequence number which is a serial number subsequent to a first sequence number,
the secondary storage subsystem creates a dummy journal including the second sequence number, and stores the dummy journal in the second journal volume.
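
The dummy-journal mechanism in this claim is essentially gap filling over sequence numbers. The toy Python model below (class and method names are invented, and timestamps and replication are omitted) stores real journals keyed by sequence number and inserts dummy journals, carrying only a sequence number, for writes whose journals never arrived, so the remote side can still apply journals strictly in order.

    class SecondaryJournalVolume:
        # Toy model of the second journal volume: real journals carry a replica
        # of the write data, dummy journals carry only a sequence number.
        def __init__(self):
            self.entries = {}          # sequence number -> write data or None

        def store_journal(self, seq, write_data):
            self.entries[seq] = write_data

        def fill_gaps(self, timed_out_seqs):
            # For each sequence number whose journal never arrived within the
            # predetermined time, store a dummy journal (no write data) so
            # sequence control is preserved.
            for seq in timed_out_seqs:
                self.entries.setdefault(seq, None)

    volume = SecondaryJournalVolume()
    volume.store_journal(1, b'write-A')
    volume.store_journal(3, b'write-C')   # seq 2 was determined but never journaled
    volume.fill_gaps([2])
    print(sorted(volume.entries))         # [1, 2, 3]: the gap is closed by a dummy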

US Pat. No. 10,248,510

GUARDRAILS FOR COPY DATA STORAGE

Actifio, Inc., Waltham, ...

1. A computerized method of preventing a user from configuring a service level agreement from creating a data management schedule that creates a set of data backups that exceeds data resource limits available for storing the set of data backups, the method being executed by a processor in communication with memory storing instructions configured to cause the processor to:receive first data indicative of a schedule to perform a backup of at least one application;
determine a first amount of pool resources associated with the backup of each of the at least one application according to the received schedule, wherein determining the first amount of pool resources comprises:
calculating a number of copies of an application associated with the received schedule,
determining a change rate parameter comprising at least one of:
an application specific change rate associated with stored historical backup data corresponding to each of the at least one application,
a system-wide change rate corresponding to change rates associated with stored historical backup data associated with applications similar to each of the at least one application, and
a generic application change rate, and
multiplying the change rate parameter for each of the at least one application with a size of the application, and with a number of copies of the application associated with each of the at least one application,
add the first amount of pool resources for each of the at least one application to form an aggregate amount of pool resources;
determine a first amount of data volumes associated with the backup of each of the at least one application according to the received schedule, wherein determining the first amount of data volumes comprises:
determining a second amount of data volumes associated with each copy of the at least one application; and
multiplying the second amount of data volumes with the number of copies of the application associated with the received schedule;
add the first amount of data volumes for each of the at least one application to form an aggregate amount of data volume resource; and
transmit a resource shortage warning when the aggregate amount of pool resources exceeds an available amount of pool resources or the aggregate amount of data volume resource exceeds an available amount of data volume resource, thereby preventing a user from configuring a service level agreement that exceeds data resource limits.
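
The resource estimate behind the warning in this claim is simple arithmetic: pool consumption is change rate times application size times number of copies, and volume consumption is volumes per copy times number of copies, each summed over the applications in the schedule. The Python sketch below (field names, units, and the warning text are assumptions) aggregates both figures and emits a shortage warning when either exceeds what is available.

    def check_schedule(applications, available_pool, available_volumes):
        # applications: list of dicts with 'size', 'copies', 'change_rate'
        # and 'volumes_per_copy'.
        aggregate_pool = 0.0
        aggregate_volumes = 0
        for app in applications:
            # Pool resources: change rate x application size x number of copies.
            aggregate_pool += app['change_rate'] * app['size'] * app['copies']
            # Data volumes: volumes needed per copy x number of copies.
            aggregate_volumes += app['volumes_per_copy'] * app['copies']

        if aggregate_pool > available_pool or aggregate_volumes > available_volumes:
            return ('resource shortage warning: schedule needs %.1f GB of pool and '
                    '%d volumes, but only %.1f GB and %d are available'
                    % (aggregate_pool, aggregate_volumes,
                       available_pool, available_volumes))
        return 'schedule fits within the available resources'

    print(check_schedule(
        [{'size': 500, 'copies': 30, 'change_rate': 0.05, 'volumes_per_copy': 2}],
        available_pool=600, available_volumes=50))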

US Pat. No. 10,248,509

EXECUTING COMPUTER INSTRUCTION INCLUDING ASYNCHRONOUS OPERATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for executing a computer instruction including an asynchronous operation, the method comprising:computing, by a processing element, parameters associated with the asynchronous operation, and transmitting, by the processing element, via a communication interface, a command for executing the asynchronous operation by an external device;
intercepting and storing, by an interface logic controller, the parameters associated with the asynchronous operation into one or more log registers;
assigning, by the processing element, a tag to the asynchronous operation, and mapping, by the interface logic controller, the tag assigned to the asynchronous operation with the log registers that store the parameters associated with the asynchronous operation;
receiving, by the processing element, a response to the asynchronous operation, the response being indicative of whether the asynchronous operation was a success or a failure; and
in response to the asynchronous operation being a success, executing a next instruction by the processing element, and
in response to the asynchronous operation being a failure:
accessing, by the processing element, the parameters associated with the asynchronous operation from the one or more log registers; and
restarting the asynchronous operation using the parameters from the one or more log registers.
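
The retry path in this claim depends on the intercepted parameters staying addressable by tag. The Python sketch below (a software stand-in; the class, the tag scheme, and the flaky device stub are all invented for illustration) logs the parameters under a tag, proceeds on success, and on failure reloads the logged parameters to restart the asynchronous operation.

    class InterfaceLogicController:
        # Toy stand-in: intercept and store the parameters of an asynchronous
        # operation in "log registers" mapped by tag.
        def __init__(self):
            self.log_registers = {}              # tag -> stored parameters

        def intercept(self, tag, params):
            self.log_registers[tag] = params

        def restart_params(self, tag):
            return self.log_registers[tag]       # retrieved on failure

    def run_async(device, controller, tag, params, max_attempts=3):
        controller.intercept(tag, params)
        for _ in range(max_attempts):
            if device(params):                   # response indicates success
                return True                      # success: execute next instruction
            params = controller.restart_params(tag)   # failure: restart with logged parameters
        return False

    flaky_device = iter([False, False, True])    # fails twice, then succeeds
    print(run_async(lambda p: next(flaky_device), InterfaceLogicController(),
                    tag=7, params={'addr': 0x1000, 'length': 64}))   # True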

US Pat. No. 10,248,508

DISTRIBUTED DATA VALIDATION SERVICE

Amazon Technologies, Inc....

1. A system, comprising:a relational data store configured to store respective data sets that are received from respective ones of a plurality of network-based services of a provider network, wherein each of the respective data sets comprises service usage metrics for a respective one of the plurality of network-based services; and
a plurality of hardware compute nodes that together implement a distributed data validation service that provides a service interface for a plurality of clients;
wherein the distributed data validation service is configured to:
receive, via the service interface from each of multiple clients of the plurality of clients, a respective validation project indicating respective one or more service metric validation rule sets corresponding to a respective one or more of the data sets;
automatically apply each of the respective one or more service metric validation rule sets to validate the service usage metrics of the respective ones of the one or more data sets according to a dynamically determined schedule for the automatic application of each of the respective one or more service metric validation rule sets,
wherein the one or more service metric validation rule sets comprise one or more rules for identifying one or more errors in the service usage metrics included in the data set being validated, wherein the one or more errors comprise corrupt data, lost data, or erroneously modified data,
wherein the distributed data validation service is configured to assign different ones of the service metric validation rule sets to different task workers implemented on different ones of the hardware compute nodes of the distributed data validation service according to the dynamically determined schedule to validate the one or more data sets;
detect at least one reporting event based on identifying one or more errors in the service usage metrics included in one or more of the respective ones of the data sets being validated, wherein the at least one reporting event corresponds to a particular service metric validation rule set of the respective one or more service metric validation rule sets, wherein the one or more errors comprise corrupt data, lost data, or erroneously modified data;
in response to the detection of the at least one reporting event, perform a responsive action as indicated by the particular service metric validation rule set,
wherein the system is configured to perform subsequent analysis, aggregation, or reporting of the service usage metrics included in the one or more data sets that have been validated.
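
The rule-application step of this claim can be pictured with a small Python sketch (the record layout, rule structure, and sample metrics are invented): each validation rule is applied to every record of a usage-metrics data set, and any rule violations, standing in for corrupt, lost, or erroneously modified data, trigger a reporting event.

    def validate(data_set, rule_set):
        # Apply each service metric validation rule to every record and
        # collect the errors it identifies.
        errors = []
        for record in data_set:
            for rule in rule_set:
                if not rule['check'](record):
                    errors.append((rule['name'], record))
        return errors

    usage_metrics = [
        {'service': 'object-store', 'bytes': 1024},
        {'service': 'object-store', 'bytes': -5},   # erroneously modified data
        {'service': 'object-store'},                # lost field
    ]
    rule_set = [
        {'name': 'bytes-present', 'check': lambda r: 'bytes' in r},
        {'name': 'bytes-non-negative', 'check': lambda r: r.get('bytes', 0) >= 0},
    ]
    report = validate(usage_metrics, rule_set)
    if report:                                      # reporting event detected
        print('reporting event:', report)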

US Pat. No. 10,248,507

VALIDATION OF CONDITION-SENSITIVE MESSAGES IN DISTRIBUTED ASYNCHRONOUS WORKFLOWS

Amazon Technologies, Inc....

14. A computer-implemented method, comprising:generating an original electronic file, wherein the original electronic file includes at least one attribute related to a condition with respect to the original electronic file;
initially validating, by a validation service, a current validity of the condition;
generating at least one alternative for the electronic file, wherein the at least one alternative for the original electronic file is based upon the condition;
forwarding the original electronic file and the at least one alternative for the original electronic file along a workflow to an electronic file publishing service;
based upon the at least one condition, selecting, by a gatekeeper service, one of (i) the original electronic file or (ii) one of the at least one alternative for the original electronic file;
providing the selected one of the original electronic file or the selected at least one alternative for the original electronic file to the electronic file publishing service; and
publishing the selected one of the original electronic file or the selected at least one alternative for the original electronic file to an entity.

US Pat. No. 10,248,506

STORING DATA AND ASSOCIATED METADATA IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:generating metadata for a data object;
first disperse storage error encoding the metadata to produce a set of metadata slices, wherein the first disperse storage error encoding includes utilizing first dispersal parameters to store metadata, the first dispersal parameters including a decode threshold of 1;
partitioning the data object to produce a plurality of data segments;
first disperse storage error encoding at least a first data segment of the plurality of data segments to produce a set of data slices, wherein the first disperse storage error encoding includes utilizing the first dispersal parameters of the metadata, the first dispersal parameters including a decode threshold of 1;
second disperse storage error encoding additional data segments of the plurality of data segments to produce a plurality of sets of encoded data slices, wherein the additional data segments do not include the at least a first data segment and wherein the second disperse storage error encoding includes utilizing second dispersal parameters, the second dispersal parameters different from the first dispersal parameters and including a decode threshold greater than 1; and
facilitating storage of the set of metadata slices and the plurality of sets of encoded data slices in one or more storage units of the DSN; and
wherein subsequent accessing of the metadata slices reproduces the metadata and the first data segment of the data object, making the first data segment of the data object available for use by the one or more computing devices while continuing to retrieve the additional data segments from the DSN.
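
The effect of the two sets of dispersal parameters in this claim, a decode threshold of 1 for the metadata and first segment versus a higher threshold for the rest, can be shown with a toy Python sketch (this is not real dispersed storage error encoding: replication stands in for threshold-1 encoding, and simple interleaving stands in for threshold-k encoding).

    def toy_disperse(data, width, decode_threshold):
        # Threshold 1: every slice is a full replica, so any one storage unit
        # can reproduce the data. Threshold k > 1: the data is spread across
        # k slices, so k slices must be retrieved to rebuild it.
        if decode_threshold == 1:
            return [data] * width
        shares = [data[i::decode_threshold] for i in range(decode_threshold)]
        return shares + [None] * (width - decode_threshold)   # stand-ins for check slices

    # First dispersal parameters (metadata and the first data segment):
    # decode threshold of 1, usable after reading any single slice.
    meta_slices = toy_disperse(b'metadata+segment0', width=4, decode_threshold=1)
    print(meta_slices[3])

    # Second dispersal parameters (the additional data segments): decode
    # threshold of 3, so three slices are needed before they become available.
    data_slices = toy_disperse(b'segment1-segment2', width=4, decode_threshold=3)
    print(data_slices[:3])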

US Pat. No. 10,248,505

ISSUE ESCALATION BY MANAGEMENT UNIT

INTERNATIONAL BUSINESS MA...

1. A method for use in a distributed storage network (DSN) including at least one DSN memory including a plurality of storage units storing encoded data slices, the method comprising:monitoring a health status of the DSN using at least one processor and associated memory included in the DSN, wherein monitoring the health status includes:
obtaining first status information indicating a first operational status of the at least one DSN memory at a first point in time, the first operational status indicating one or more first operational issues;
obtaining second status information indicating a second operational status of the at least one DSN memory at a second point in time, the second point in time being later than the first point in time, the second operational status indicating one or more second operational issues;
comparing the first operational status to the second operational status to identify outstanding operational issues, wherein an outstanding operational issue is an operational issue indicated in both the first status information and the second status information;
mapping each of the outstanding operational issues to an impact category;
monitoring a health status of one or more management units included in the DSN;
determining a number of remaining management units capable of monitoring the health status of the DSN;
determining an escalation level for particular outstanding operational issues based, at least in part, on the impact category mapped to the particular outstanding operational issues;
in response to determining that there is at least one remaining management unit capable of monitoring the health status of the DSN, issuing one or more notifications for the particular outstanding operational issues from the at least one remaining management unit, the one or more notifications based, at least in part, on the escalation level determined for the particular outstanding operational issues; and
in response to determining that there are no remaining management units capable of monitoring the health status of the DSN, issuing one or more notifications for the particular outstanding operational issues from the at least one DSN memory.
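
A compact Python sketch of the monitoring logic in this claim (the issue names, the impact and escalation tables, and the notification format are invented): outstanding issues are those present in both status snapshots, each is mapped to an impact category and an escalation level, and notifications come from a management unit while one remains, otherwise from the DSN memory itself.

    def outstanding_issues(first_status, second_status):
        # An outstanding operational issue appears in both status snapshots.
        return sorted(set(first_status) & set(second_status))

    IMPACT = {'disk_failed': 'data-at-risk', 'fan_degraded': 'cosmetic'}
    ESCALATION = {'data-at-risk': 'page-on-call', 'cosmetic': 'weekly-report'}

    def issue_notifications(first_status, second_status, remaining_management_units):
        issues = outstanding_issues(first_status, second_status)
        source = ('management unit' if remaining_management_units > 0
                  else 'DSN memory')
        return [(issue, ESCALATION[IMPACT.get(issue, 'cosmetic')], source)
                for issue in issues]

    print(issue_notifications({'disk_failed', 'fan_degraded'}, {'disk_failed'},
                              remaining_management_units=0))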

US Pat. No. 10,248,504

LIST REQUEST PROCESSING DURING A DISPERSED STORAGE NETWORK CONFIGURATION CHANGE

International Business Ma...

1. A method for processing and proxying a listing request by resources of a dispersed storage network (DSN) during a system configuration change, the method comprises:identifying a set of resources that are affiliated with a range of slice names identified by the listing request;
creating an ordered classification of the set of resources based on the system configuration change;
determining, by a resource of the set of resources, whether the resource is in a last class of the ordered classification;
when the resource is in the last class:
processing the listing request to generate a listing response regarding encoded data slices associated with slice names within a sub-range of slice names of the range of slice names, wherein the sub-range of slices names is affiliated with the resource; and
sending the listing response to another resource in a lower class of the ordered classification;
when the resource is not in the last class:
identifying a second resource of the set of resources for proxying of the listing request, wherein the second resource is in a next higher class of the ordered classification;
sending the listing request to the second resource;
receiving, in response to the sending, a cumulated listing response from the second resource; and
processing the listing request to generate the listing response regarding encoded data slices associated with slice names within the sub-range of slice names;
combining the listing response with the cumulated listing response to produce an updated cumulated listing response.
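
The proxy chain in this claim is easiest to see as a recursion over the ordered classes. In the Python sketch below (one resource per class, with the per-resource listing stubbed out; all names are invented), a resource in the last class simply answers, while any other resource proxies the request to the next higher class, waits for the cumulated response, and appends its own listing before passing the result back down.

    def handle_listing(resource_index, ordered_resources, listing_request):
        resource = ordered_resources[resource_index]

        def local_listing(res):
            # Listing response for the sub-range of slice names affiliated
            # with this resource (stubbed for the sketch).
            return ['%s: slices in its sub-range' % res]

        if resource_index == len(ordered_resources) - 1:
            return local_listing(resource)            # last class: just answer
        # Not in the last class: proxy to the resource in the next higher
        # class, then combine its cumulated response with the local listing.
        cumulated = handle_listing(resource_index + 1, ordered_resources,
                                   listing_request)
        return cumulated + local_listing(resource)    # updated cumulated response

    print(handle_listing(0, ['unit-A', 'unit-B', 'unit-C'], 'list 0x00-0xff'))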

US Pat. No. 10,248,503

DATA STORAGE DEVICE AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A data storage device comprising:a nonvolatile memory device including a first page group coupled to a first word line and a second page group coupled to a second word line, which is subsequent to the first word line in order of a write operation; and
a controller suitable for, after an abnormal power-off during or after a write operation to the first page group, copying a first data stored in a weak page of the first page group to a stable page of the second page group when a first error correction operation to data stored in the first page group is a success, wherein the stable page is a different type of page than the weak page.

US Pat. No. 10,248,502

CORRECTING AN ERROR IN A MEMORY DEVICE

International Business Ma...

1. A method of correcting an error in a memory device, the method comprising:using a temperature sensor to determine a temperature profile associated with a region of a memory device, wherein the temperature profile is one of a plurality of temperature profiles each associated with a respective region of a plurality of regions of the memory device; and
using a controller in communication with the memory device and the temperature sensor to determine a correction capability based on the temperature profile;
generating a bias vector indicative of the correction capability and communicating the bias vector to an error-correcting code (ECC) decoder; and
using the controller to correct an error in the memory region using the determined correction capability.

US Pat. No. 10,248,501

DATA STORAGE APPARATUS AND OPERATION METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. An operation method of a data storage apparatus, the method comprising:performing a first read operation using an optimal read voltage on read-failed memory cells;
performing an error correction code (ECC) decoding operation on read data read through the first read operation;
performing a second read operation using an oversampling read voltage on the read-failed memory cells when the ECC decoding operation to the read data fails;
determining whether or not potential error memory cells which are turned on through the optimal read voltage and are turned off through the oversampling read voltage are present in the read data;
determining whether or not neighboring memory cells which share a bit line with the potential error memory cells and are coupled to neighboring word lines are in an erased state by performing a read operation on the neighboring memory cells when the potential error memory cells are present; and
inverting bit values corresponding to the potential error memory cells in the read data read from the read-failed memory cells through the first read operation when the neighboring memory cells are in the erased state.

US Pat. No. 10,248,500

APPARATUSES AND METHODS FOR GENERATING PROBABILISTIC INFORMATION WITH CURRENT INTEGRATION SENSING

Micron Technology, Inc., ...

1. A method comprising:sensing, responsive to a trip point selector instructing a current detector, a first set of memory cells from a plurality of memory cells at a first sense threshold;
responsive to sensing the first set of memory cells of the plurality of memory cells, identifying, via a sense latch, the first set of memory cells as having a voltage stored thereon within a first range of voltages;
sensing, responsive to the trip point selector instructing the current detector, a second set of memory cells from the plurality of memory cells at a second sense threshold, wherein the first set of memory cells are undetected by the sensing of the second set of memory cells;
responsive to sensing the second set of memory cells of the plurality of memory cells, identifying, via the sense latch, the second set of memory cells as having a voltage stored thereon within a second range of voltages; and
performing, via a decoder circuit communicatively coupled to the current detector, an error correction operation on the first and second sets of memory cells based, at least in part, on the first and second ranges of voltages.

US Pat. No. 10,248,499

NON-VOLATILE STORAGE SYSTEM USING TWO PASS PROGRAMMING WITH BIT ERROR CONTROL

SANDISK TECHNOLOGIES LLC,...

1. A non-volatile storage apparatus, comprising:a set of non-volatile memory cells; and
one or more control circuits in communication with the non-volatile memory cells, the one or more control circuits are configured to receive data from a host and perform partial programming of all the data to the set of non-volatile memory cells until no more than a first number of programming errors exist and subsequently report to the host that the programming of the data has successfully completed even though the one or more control circuits are configured to continue the programming of the data, the one or more control circuits are configured to continue to perform the programming of the data to the set of non-volatile memory cells until the one or more control circuits determine that no more than a second number of programming errors exist after reporting to the host that the programming of the data has successfully completed, the second number is lower than the first number.

US Pat. No. 10,248,498

CYCLIC REDUNDANCY CHECK CALCULATION FOR MULTIPLE BLOCKS OF A MESSAGE

Futurewei Technologies, I...

1. A method for performing a cyclic redundancy check (CRC), comprising:dividing data into a plurality of blocks, each of the plurality of blocks having a fixed size equal to a degree of a generator polynomial;
for each block in which at least one bit has been modified, applying the XOR operation to the block for which the at least one bit has been modified and a corresponding original block of the data to generate an XOR value;
independently performing a CRC computation for each of the plurality of blocks; and
combining an output of the CRC computation for each of the plurality of blocks by application of an exclusive or (XOR) operation.
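
The per-block update in this claim leans on the fact that a CRC with a zero initial value and no final XOR is linear over GF(2), so the CRC of an XOR of two equal-length inputs equals the XOR of their CRCs. The Python sketch below demonstrates that property with a toy CRC-8 (the polynomial, block size, and data are illustrative and do not reproduce the patent's exact block layout): only the XOR of the modified block with its original needs a fresh CRC computation.

    def crc8(data, poly=0x07, crc=0x00):
        # Plain bitwise CRC-8 with zero initial value and no final XOR,
        # so the GF(2) linearity used below holds.
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        return crc

    original = bytes([0x12, 0x34, 0x56, 0x78])
    modified = bytes([0x12, 0x3C, 0x56, 0x78])        # one block changed

    # XOR of the modified data with the corresponding original data.
    delta = bytes(a ^ b for a, b in zip(original, modified))

    # CRC(modified) == CRC(original) XOR CRC(delta): the unchanged blocks'
    # contribution never has to be recomputed.
    print(hex(crc8(modified)))
    print(hex(crc8(original) ^ crc8(delta)))          # same value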

US Pat. No. 10,248,497

ERROR DETECTION AND CORRECTION UTILIZING LOCALLY STORED PARITY INFORMATION

Advanced Micro Devices, I...

1. A method comprising:implementing a memory external to a processor, the memory comprising multiple banks of data, each data bank comprising a plurality of data blocks stored at locations in the memory, each data block of the plurality of data blocks including an associated checksum value for error detection;
storing a plurality of parity blocks for error correction in a cache on the processor, each parity block corresponding to a set of data blocks of the plurality of data blocks;
accessing a first data block and its associated first checksum value from the set of data blocks;
detecting an error in the first data block based on the associated first checksum value;
storing, by the processor, a modified data value to the first data block in the memory;
determining, at the processor, an updated checksum value for the first data block based on the modified data value;
storing the updated checksum value to the memory;
determining, at the processor, an updated parity block for a first set of data blocks that include the first data block based on the modified data value; and
storing the updated parity block to the cache in the processor.
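
A small Python model of the checksum-plus-cached-parity scheme in this claim (the checksum function, block sizes, and data values are invented; real hardware would use stronger codes): each data block in memory keeps a checksum for error detection, a parity block held "in the cache" is the bytewise XOR of the set, and after a store both the checksum and the parity are updated so a corrupted block can be detected and rebuilt.

    def checksum(block):
        return sum(block) & 0xFFFF                 # toy checksum for error detection

    def parity(blocks):
        # Parity block for error correction: bytewise XOR across the set.
        return bytes(b0 ^ b1 ^ b2 for b0, b1, b2 in zip(*blocks))

    blocks = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([9, 10, 11, 12])]
    checks = [checksum(b) for b in blocks]         # stored with the data in memory
    cached_parity = parity(blocks)                 # parity kept in the processor cache

    # The processor stores a modified value to the first data block, then
    # updates that block's checksum in memory and the parity block in the cache.
    blocks[0] = bytes([1, 2, 99, 4])
    checks[0] = checksum(blocks[0])
    cached_parity = parity(blocks)

    # Detection and correction: a corrupted copy of block 0 fails its checksum,
    # and the remaining blocks plus the cached parity rebuild the good data.
    corrupted = bytes([1, 2, 0, 4])
    print(checksum(corrupted) == checks[0])        # False: error detected
    rebuilt = bytes(p ^ b1 ^ b2
                    for p, b1, b2 in zip(cached_parity, blocks[1], blocks[2]))
    print(rebuilt == blocks[0])                    # True: error corrected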

US Pat. No. 10,248,496

ITERATIVE FORWARD ERROR CORRECTION DECODING FOR FM IN-BAND ON-CHANNEL RADIO BROADCASTING SYSTEMS

Ibiquity Digital Corporat...

1. A radio receiver comprising:physical layer circuitry to receive a digital radio broadcast signal;
processing circuitry configured to:
receive a plurality of protocol data units of the digital radio broadcast signal, each protocol data unit having a header including a plurality of control word bits, and a plurality of audio frames or data packets, each including a cyclic redundancy check code;
wherein the processing circuit includes an audio decoder configured to:
decode the protocol data units using an iterative decoding technique that refines bit decoding information passed between an inner error correction code and an outer error correction code over at least one iteration of the decoding technique, wherein the iterative decoding technique comprises:
for a first decoding iteration, decode the inner error correction code using Viterbi decoding as an inner decoding;
pass decoding information from the inner decoding to the outer error correction code;
use decoded cyclic redundancy check codes to detect audio frames or data packets that contain errors and flag the audio frames or data packets containing errors that require further decoding iterations; and
change the inner decoding for the further decoding iterations of flagged audio frames or flagged data packets to include decoding the inner error correction code using at least one of soft output decoding or a List Viterbi decoding; and
codec circuitry configured to produce an audio signal using audio frames of the protocol data units, or a data signal using data packets of the protocol data units, decoded using the iterative decoding technique.

US Pat. No. 10,248,495

EVENTUAL CONSISTENCY INTENT CLEANUP IN A DISPERSED STORAGE NETWORK

INTERNATIONAL BUSINESS MA...

1. A method for execution by a dispersed storage (DS) cleanup unit of a dispersed storage network (DSN) that includes a processor, the method comprises:determining a dead session of the DSN;
generating a subset of a plurality of eventual consistency intent names by identifying ones of the plurality of eventual consistency intent names that include a session identifier corresponding to the dead session in a prefix of the ones of the plurality of eventual consistency intent names, wherein the subset of the plurality of eventual consistency intent names corresponds to all eventual consistency intents of the dead session;
determining a subset of storage units responsible for storing the all eventual consistency intents of the dead session based on the prefix of the eventual consistency intent names in the subset;
retrieving the all eventual consistency intents of the dead session from the subset of storage units; and
facilitating execution of eventual consistency updates indicated in the all eventual consistency intents of the dead session.
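
The cleanup pass in this claim reduces to a prefix scan over intent names. The Python sketch below assumes, purely for illustration, that eventual consistency intent names take the form '<session-id>/<intent-id>', so every intent of the dead session can be found by matching that prefix; routing to the responsible storage units and executing the updates are stubbed out.

    def cleanup_dead_session(intent_names, dead_session_id):
        prefix = dead_session_id + '/'
        # All eventual consistency intents of the dead session share the
        # session identifier in the prefix of their names.
        dead_intents = [name for name in intent_names if name.startswith(prefix)]
        # A real cleanup unit would now locate the storage units responsible
        # for these intents and facilitate the indicated updates; here we
        # simply report what would be processed.
        return dead_intents

    names = ['sess-42/intent-1', 'sess-42/intent-2', 'sess-77/intent-9']
    print(cleanup_dead_session(names, 'sess-42'))   # the two sess-42 intents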

US Pat. No. 10,248,494

MONITORING, DIAGNOSING, AND REPAIRING A MANAGEMENT DATABASE IN A DATA STORAGE MANAGEMENT SYSTEM

Commvault Systems, Inc., ...

1. A method comprising:monitoring a database, by a storage manager, during active operation of a data storage management system,
wherein the database stores information used by the storage manager to manage the data storage management system, and
wherein the storage manager executes on a computing device comprising one or more processors and non-transitory computer-readable memory;
collecting, by the storage manager, information about the database,
wherein the collected information includes information about the structure of the database and information about the operation of the database;
diagnosing, by the storage manager, a problem associated with the database,
based on analyzing at least some of the collected information about the database; and
correcting the problem associated with the database, at least in part, by causing
at least a threshold number of temporary-database data structures to be instantiated in the database.

US Pat. No. 10,248,493

INVARIANT DETERMINATION

HEWLETT PACKARD ENTERPRIS...

1. A method comprising:determining that an operation is accessing data on a persistent memory;
retrieving a log of the operation;
determining a type of the data being accessed on the persistent memory by the operation;
identifying, from the log, a location in the persistent memory of the data accessed by the operation;
determining contents of the data accessed on the persistent memory by the operation; and
determining whether the contents of the data hold an invariant corresponding to the type of data.

US Pat. No. 10,248,492

METHOD OF EXECUTING PROGRAMS IN AN ELECTRONIC SYSTEM FOR APPLICATIONS WITH FUNCTIONAL SAFETY COMPRISING A PLURALITY OF PROCESSORS, CORRESPONDING SYSTEM AND COMPUTER PROGRAM PRODUCT

Intel Corporation, Santa...

1. A method for executing in a system an application program provided with functional safety, the system including a single-processor or multiprocessors, and an independent control module, said method comprising:performing an operation of decomposition of the program that includes a safety function and is to be executed via said system into a plurality of parallel sub-programs;
assigning execution of each parallel sub-program to a respective processing module of the system, in particular a processor of said multiprocessors or a virtual machine associated to one of said processors; and
carrying out in the system, periodically according to a cycle frequency of the program during normal operation of said system, a framework of safety function diagnostic-self-test operations associated to each of said sub-programs and to the corresponding processing modules on which they are run;
wherein carrying out said self-test operations comprises:
generating respective self-test data corresponding to the self-test operations and carrying out checking operations on said self-test data;
exchanging said self-test data continuously via a protocol of messages with the independent control module; and
carrying out at least part of said checking operations in said independent control module; and
wherein performing an operation of decomposition of the program comprises decomposition of the program into a plurality of parallel sub-programs to obtain a coverage target for each of said self-test operations that is associated to a respective sub-program or processing module in such a way that it respects a given failure-probability target.

US Pat. No. 10,248,491

QUANTUM COMPUTING IN A THREE-DIMENSIONAL DEVICE LATTICE

1. A method comprising:encoding information in data qubits in a three-dimensional device lattice, the data qubits residing in multiple layers of the three-dimensional device lattice, each layer comprising a respective two-dimensional device lattice;
by operation of a control system, applying a three-dimensional color code in the three-dimensional device lattice to detect errors in one or more of the data qubits residing in the multiple layers; and
by operation of the control system, applying a two-dimensional color code in the two-dimensional device lattice in each respective layer to detect errors in one or more of the data qubits residing in the respective layer,
wherein a signal delivery system transfers signals between the three-dimensional device lattice and the control system.

US Pat. No. 10,248,490

SYSTEMS AND METHODS FOR PREDICTIVE RELIABILITY MINING

Tata Consultancy Services...

1. A computer implemented method for predictive reliability mining in a population of connected machines, the method comprising:identifying sets of discriminative Diagnostic Trouble Codes (DTCs) from DTCs generated preceding failure, the sets of discriminative DTCs corresponding to associated pre-defined parts of the connected machines;
generating a temporal conditional dependence model based on temporal dependence between failure of the pre-defined parts from past failure data and the identified sets of discriminative DTCs;
segregating the population of connected machines into a first set comprising connected machines in which DTCs are not generated in a given time period and a second set comprising connected machines in which at least one DTC is generated in the given time period; and
predicting future failures based on the generated temporal conditional dependence model and occurrence and non-occurrence of DTCs in the segregated population of connected machines.

US Pat. No. 10,248,489

ELECTRONIC CONTROL UNIT

DENSO CORPORATION, Kariy...

1. An electronic control unit for a vehicle comprising:a first storage unit configured to store a readiness flag indicating that an abnormality diagnosis for an abnormality diagnostic item is complete;
a second storage unit configured to store permanent diagnostic (PDTC) information in a non-volatile manner; and
a microcomputer configured to
communicate with a data scan tool, wherein the scan tool is external to a vehicle network and connects to the vehicle network to communicate with the microcomputer via a data link connector in the vehicle,
determine whether a current all clear request is received from the vehicle data scan tool, which requests clearing of all readiness flags in the first storage unit,
determine whether an additional condition that is different from the current all clear request is fulfilled in which the additional condition includes that the microcomputer receives a read-out request from the scan tool for reading out the permanent diagnostic information stored in the second storage unit, and
clear all readiness flags in the first storage unit in response to determining both that the current all clear request from the scan tool is received and the additional condition including the read-out request from the scan tool for reading out the permanent diagnostic information stored in the second storage unit is fulfilled.

US Pat. No. 10,248,488

FAULT TOLERANCE AND DETECTION BY REPLICATION OF INPUT DATA AND EVALUATING A PACKED DATA EXECUTION RESULT

Intel Corporation, Santa...

1. An apparatus comprising:circuitry to replicate input sources of a scalar arithmetic instruction, wherein an opcode of the scalar arithmetic instruction is to indicate the use of single instruction, multiple data (SIMD) hardware;
arithmetic logic unit (ALU) circuitry to execute the scalar arithmetic instruction with replicated input sources using the SIMD hardware to produce a packed data result; and
comparison circuitry coupled to the ALU circuitry to evaluate the packed data result and output a singular data result into a destination of the scalar arithmetic instruction, wherein the singular data result is to be stored as a scalar in a least significant data element of a packed data destination register.

US Pat. No. 10,248,487

ERROR RECOVERY FOR MULTI-STAGE SIMULTANEOUSLY RUNNING TASKS

VIOLIN SYSTEMS LLC, San ...

1. A method of managing a server with a stateless connection to a user, the method comprising:providing a server computer having a communications interface with a user computer; and
a memory system,
wherein responding to a request for a service from the user computer comprises:
receiving a user request from the user computer over a communications interface;
executing a task by the server computer to respond to the user request, the task comprising a supervisory task and a number of sub-tasks, each sub-task of the number of sub-tasks having one or more subsidiary tasks; and
configuring each sub-task to report a completion status of each subsidiary task and completion of the sub-task to the supervisory task each time a subsidiary task completes;
wherein a completion status of a subsidiary task is one of success or error, and the supervisory task determines the completion status of each subsidiary task at the completion time of the subsidiary task prior to completion of the sub-task and,
the supervisory task is configured to report an error in completion of the subsidiary task of the sub-task to the user computer if the completion status of the subsidiary task is error, wherein the error terminates or aborts execution and completion of the sub-task and other sequential subsidiary tasks of the sub-task;
wait for another completion status change when a number of completed sub-tasks is less than the number of sub-tasks; and
respond to the user request over the communications interface when a number of successfully completed sub-tasks is equal to the number of sub-tasks.

US Pat. No. 10,248,486

MEMORY MONITOR

Intel Corporation, Santa...

1. An integrated circuit to monitor main memory, the integrated circuit and the main memory disposed in a computer system, the integrated circuit comprising:a detection circuit to detect that the computer system enters a sleep state;
a test circuit to test whether the main memory is still installed in the computer system by attempting to read from a memory address in the main memory and determining that the main memory is no longer installed in the computer system by detecting that the memory address is inaccessible; and
a recovery circuit to perform a recovery process when the test indicates that the main memory is no longer installed in the computer system.

US Pat. No. 10,248,485

DUAL PHYSICAL-CHANNEL SYSTEMS FIRMWARE INITIALIZATION AND RECOVERY

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:operating, by a processor, first and second physical channel identifier (PCHID) devices comprised of a plurality of functional logic components, wherein one or more of the functional logic components are specific to one or more of the first and second PCHIDs and wherein one or more of the functional logic components are in common and not specific to one or more of the first and second PCHIDs;
determining, by the processor, that an error condition exists in the first PCHID or the second PCHID;
executing, by the processor, a recovery method to remove the error condition from the first PCHID or the second PCHID in which the error condition exists; and
executing, by the processor, an initialization method for both of the first and second PCHIDs.

US Pat. No. 10,248,484

PRIORITIZED ERROR-DETECTION AND SCHEDULING

Intel Corporation, Santa...

1. A method for performing prioritized error detection on an array of memory cells used by an integrated circuit, the method comprising:at error detection circuitry, receiving a prioritized error detection schedule that prescribes more frequent error detection for a first subset of the array of memory cells and less frequent error detection for a second subset of the array of memory cells; and
with the error detection circuitry, performing prioritized error detection on the array of memory cells based on the prioritized error detection schedule;
with the error detection circuitry, performing error detection on the first subset of the array of memory cells with a first frequency based on the prioritized error detection schedule; and
with the error detection circuitry, performing error detection on the second subset of the array of memory cells with a second frequency that is less than the first frequency based on the prioritized error detection schedule.
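
The prioritized schedule in this claim boils down to scrubbing one subset of the array more often than the other. The Python sketch below (row labels and periods are arbitrary choices) generates such a schedule: the first subset is checked every cycle and the second only every fourth cycle.

    import itertools

    def prioritized_schedule(first_subset, second_subset, second_every=4):
        # Yields, per scrub cycle, the memory rows to run error detection on:
        # the first subset every cycle, the second subset less frequently.
        for cycle in itertools.count():
            rows = list(first_subset)
            if cycle % second_every == 0:
                rows += list(second_subset)
            yield cycle, rows

    schedule = prioritized_schedule(['row0', 'row1'], ['row2', 'row3', 'row4'])
    for _, (cycle, rows) in zip(range(5), schedule):
        print(cycle, rows)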

US Pat. No. 10,248,483

DATA RECOVERY ADVISOR

Oracle International Corp...

1. A computer-implemented method to diagnose and fix problems in a data storage system, the data storage system being implemented at least partially by one or more computers, the method comprising:checking integrity of one or more components of the data storage system;
wherein a data failure is related to corruption of data in a file, the data being read by or written by or read and written by a software program, and at least the corruption of the data is identified by said checking of integrity after an error is encountered by said software program which during normal functioning is unable to process the data due to the corruption of the data;
wherein said checking of integrity comprises checking for at least existence of said file;
identifying a type of repair based at least in part on using, with a map in a memory of a computer that maps failure types to repair types, a type of the data failure related to corruption;
wherein the type of repair identifies a group of alternative repairs each of which can fix the data failure related to corruption, such that each repair in the group is an alternative to another repair in the group, wherein at least one repair in the group uses a backup of the data;
checking feasibility of the group of alternative repairs at least by checking for existence of a backup of the data in a storage device, wherein at least said checking of feasibility is performed automatically, and a plurality of feasible repairs are selected by said checking of feasibility, from among the group of alternative repairs, and said at least one repair is excluded from the plurality of feasible repairs in response to the checking feasibility being unable to find a backup of the data;
consolidating multiple repairs in the plurality of feasible repairs, based on respective impacts of the multiple repairs, into one or more repair plans;
displaying the one or more repair plans;
receiving identification of a specific repair plan selected by user input from among the one or more repair plans displayed;
performing the specific repair plan selected by the user input, to obtain corrected data to fix the corruption in the data;
storing the corrected data in non-volatile storage media of the data storage system; and
said software program at least using the corrected data;
wherein at least said checking of feasibility, said consolidating, said performing and said storing are performed by one or more processors in the one or more computers.

US Pat. No. 10,248,482

INTERLINKING MODULES WITH DIFFERING PROTECTIONS USING STACK INDICATORS

International Business Ma...

1. A computer program product for facilitating linking of modules of a computing environment, said computer program product comprising:a computer readable storage medium readable by a processing circuit and storing instructions for execution by the processing circuit for performing a method comprising:
determining whether one module to be executed by a processor of the computing environment supports use of a guard word to protect a stack frame of the computing environment;
determining whether another module to be linked with the one module supports use of the guard word to protect the stack frame;
based on determining that at least one of the one module and the other module fails to support use of the guard word to protect the stack frame, providing an indication that the one module and the other module are linked modules not supporting use of the guard word, wherein the other module includes a verify guard word condition instruction to be used by the one module to check the guard word of the stack frame based on determining that the one module and the other module support use of the guard word to determine whether the guard word is an expected value;
based on the indication, processing the one module and the other module without use of the guard word to protect the stack frame; and
executing the verify guard word condition instruction by the one module, and failing to return to a return address in the stack frame based on the guard word being an unexpected value.

US Pat. No. 10,248,481

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM AND PROGRAM

KONICA MINOLTA, INC., Ch...

1. An information processing device comprising a hardware processor configured to:acquire a determination result of a state of a user, who has given a transmission job execution instruction, determined based on biological information of the user compared to a predetermined threshold value, the biological information including information related to at least one of a pulse wave, an electrocardiogram, a temperature, a heart rate, or a blood pressure; and
control an execution of the transmission job according to the user state determination result,
wherein when it is determined that the user is in an off-normal state, the user being in the off-normal state when the acquired biological information of the user is equal to or larger than the predetermined threshold value, the hardware processor executes a confirmation request process to request the user to make a confirmation related to the transmission job,
wherein
the hardware processor also executes the confirmation request process when the user is in a normal state, the user being in the normal state when the acquired biological information of the user is smaller than the predetermined threshold value, and
a confirmation of a larger number of confirmation items is requested in the confirmation request process executed in a case where it is determined that the user is in an off-normal state, compared to the number of confirmation items requested in the confirmation request process executed in a case where it is determined that the user is in a normal state, and
wherein the hardware processor decides that transmission of transmission target data needs to be executed when the confirmation request process is executed, in a case where a confirmation completion operation indicating an intention that a confirmation related to the transmission job is completed is given by the user.

US Pat. No. 10,248,480

ADAPTIVE QUOTA MANAGEMENT SYSTEM

Matrixx Software, Inc., ...

1. A system for determining a quota, comprising:an input interface configured to:
receive an input quota amount and a candidate minimum amount;
a processor configured to:
determine a total balance velocity, a time to threshold, a fair balance amount, and a fair quota amount;
determine whether the fair quota amount is greater than or equal to the input quota amount;
in response to determining that the fair quota amount is neither greater than nor equal to the input quota amount, determine whether the fair quota amount is greater than the candidate minimum amount;
in response to determining that the fair quota amount is not greater than the candidate minimum amount, set an output quota amount to the candidate minimum quota amount; and
provide modified quota values.

US Pat. No. 10,248,479

ARITHMETIC PROCESSING DEVICE STORING DIAGNOSTIC RESULTS IN PARALLEL WITH DIAGNOSING, INFORMATION PROCESSING APPARATUS AND CONTROL METHOD OF ARITHMETIC PROCESSING DEVICE

FUJITSU LIMITED, Kawasak...

1. An arithmetic processing device comprising:a first memory control unit configured to control an access to a first memory;
a second memory control unit configured to control an access to a second memory; and
a diagnostic control unit configured to sequentially diagnose first parts within the first memory via the first memory control unit and to sequentially store in the second memory, via the second memory control unit, diagnostic results of sequentially diagnosing the first parts, the storing being in parallel with the diagnosing the first parts via the first memory control unit when diagnosing the first parts within the first memory, and
wherein the diagnostic control unit is further configured to sequentially diagnose second parts within the second memory via the second memory control unit and sequentially store in the first memory, via the first memory control unit, diagnostic results of sequentially diagnosing the second parts, the storing being in parallel with the diagnosing the second parts via the second memory control unit when diagnosing the second parts within the second memory.

US Pat. No. 10,248,478

INFORMATION PROCESSING DEVICE AND SPECIFICATION CREATION METHOD

FUJITSU LIMITED, Kawasak...

6. A specification creation method executed by a computer, the method comprising:generating an assumed endpoint, based on class relationship information indicating a relationship between classes of an existing web application, which is API specification information assumed from the relationship;
referring to the class relationship information and extracting a verb and a noun, being basis of an actual endpoint which is API specification information based on an execution result, from an access log to be output when the web application is executed;
generating the actual endpoint by converting the extracted verb into a method name and converting the extracted noun into a path; and
identifying an endpoint included in generated assumed endpoints, as the API specification information of the web application, among generated actual endpoints.

US Pat. No. 10,248,477

SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT FOR SHARING INFORMATION IN A DISTRIBUTED FRAMEWORK

Stragent, LLC, Longview,...

1. A layered system for sharing information in an automobile vehicle, said system comprising:an automotive electronic control unit comprising a micro-processor and an operating system;
a hardware abstraction layer within the electronic control unit allowing the operating system to be adapted to a specific hardware implementation as used in the electronic control unit;
non-volatile memory comprising a database with a data structure;
a memory manager associated with the non-volatile memory, said memory manager comprising an upgrade and configuration manager to configure the data structure of the non-volatile memory, an event manager to capture input-output events as variables and generate new events, flags or signals, a data access manager to control code update and configuration of the memory and access rights for individual applications at execution, and a data integrity component to analyze stored state variables for integrity and generate events or flags if any problem occurs;
the non-volatile memory further comprising instructions to:
receive information in the form of a packet data unit representing datum information carried by an overall message from a first physical network selected from the group consisting of FlexRay, Controller Area Network, and Local Interconnect Network;
in response to the receipt of the information, issue a storage resource request in connection with a storage resource;
determine whether the storage resource is available for storing the information;
determine whether a threshold has been reached in association with the storage resource request;
in the event the storage resource is not available and the threshold associated with the storage resource request has not been reached, issue another storage resource request in connection with the storage resource;
in the event the storage resource is available, store the information in the storage resource; and
share the stored information with at least one of a plurality of heterogeneous processes including at least one process associated with a second physical network selected from the group consisting of FlexRay, Controller Area Network, and Local Interconnect Network, utilizing a network protocol different from a protocol of the first physical network;
interfaces for communication with each of FlexRay, Controller Area Network, and Local Interconnect Network networks, with each physical network in communication with a component including at least one of a sensor, an actuator, or a gateway, and with each of the FlexRay, Controller Area Network, and Local Interconnect Network interfaces comprising a corresponding network communication bus controller including a corresponding network communication bus driver;
the interfaces including a first communication interface for interfacing with the first physical network, the first communication interface including a first communication interface-related data link layer component, said first communication interface configured to extract variables from the overall message communicated by the first physical network employing a first protocol and storing the packet data unit representing the datum information carried by the overall message from a first physical network in the database; and
a second communication interface for interfacing with the second physical network utilizing a protocol different than the protocol of the first physical network, the second communication interface including a second communication interface-related data link layer component;
wherein the automotive electronic control unit is configured such that the stored information may be shared with the second physical network by replicating the packet data unit obtained from the first physical network by composing another message configured to be communicated using the different protocol of the second physical network.

US Pat. No. 10,248,476

EFFICIENT COMPUTATIONS AND NETWORK COMMUNICATIONS IN A DISTRIBUTED COMPUTING ENVIRONMENT

SAS INSTITUTE INC., Cary...

1. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to:receive an instruction to determine a measurement from observations distributed among a plurality of nodes in a distributed environment, the observations associated with a plurality of variables;
at each node of the plurality of nodes, count node-specific observations;
combine the observations across the plurality of nodes, the combining comprising, at each of the plurality of nodes:
determining a node-specific sketch for each of the variables associated with the node-specific observations, the node-specific sketch representing a summary of the observations stored at the respective node as those observations relate to each of the variables and generated by mapping the respective node's observations into a sketch vector that records frequency information relating to the respective node's observations, wherein the sketches are determined by:
(1) dividing, at the respective node, a number of the node-specific observations stored at the respective node into a number of subsets,
(2) forking, at the respective node, a plurality of threads, a number of the plurality of threads corresponding to the number of subsets,
(3) on each of the plurality of threads, reading a next observation from among the subset of node-specific observations assigned to the thread,
(4) on each of the plurality of threads, updating each sketch associated with the respective node by mapping the next observation into a sketch vector that records frequency information relating to the next observation, and
(5) repeating (3) and (4) in parallel across the plurality of threads on the respective node until each observation of the node-specific observations is read, and
storing the node-specific sketches in a node-specific sketch array;
receive a plurality of node-specific sketch arrays, each node-specific sketch array originating at a respective node in the distributed environment;
for each of the node-specific sketch arrays:
access each node-specific sketch from the node-specific sketch array, each node-specific sketch associated with a variable,
access sketch merging logic that defines how to combine multiple sketches for the variable, and
apply the sketch merging logic to merge the current node-specific sketch for the variable with a summary sketch for the variable; and
generate an approximation of the measurement using the summary sketches for the one or more variables.
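
The per-node summaries in this claim behave like mergeable frequency sketches. The Python sketch below uses a single-row count-min-style structure as a stand-in (the hash, vector width, and merge-by-addition rule are illustrative assumptions, and the multithreaded partitioning is omitted): each node maps its observations into a sketch vector recording frequency information, the node-specific sketches are merged element-wise into a summary sketch, and the summary yields an approximation of the measurement.

    import hashlib

    WIDTH = 64                                    # sketch vector width (assumed)

    def bucket(observation):
        digest = hashlib.md5(str(observation).encode()).hexdigest()
        return int(digest, 16) % WIDTH

    def node_sketch(observations):
        # One pass over the node-local observations, recording frequencies.
        sketch = [0] * WIDTH
        for obs in observations:
            sketch[bucket(obs)] += 1
        return sketch

    def merge(sketches):
        # Sketch merging logic: element-wise sums combine the node-specific
        # sketches into a single summary sketch for the variable.
        return [sum(column) for column in zip(*sketches)]

    def estimate(sketch, observation):
        return sketch[bucket(observation)]        # approximate frequency

    node_a = node_sketch(['red', 'red', 'blue'])
    node_b = node_sketch(['red', 'green'])
    summary = merge([node_a, node_b])
    print(estimate(summary, 'red'))               # approximately 3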

US Pat. No. 10,248,475

CLOUD RULES AND ALERTING FRAMEWORK

Cerner Innovation, Inc., ...

1. A method of providing cloud rules and an alerting framework, the method comprising:receiving one or more rules associated with an alerting framework and one or more healthcare information systems, the one or more rules each designating an initiating application and at least one target application, each of the at least one target applications being subscribed to the alerting framework;
associating one or more actions to the at least one target application, the one or more actions comprising initiating a change in a workflow associated with at least one target application; and
monitoring the initiating application for a trigger associated with the one or more rules;
wherein the initiating application and the target application are not integrated except through the alerting framework.

US Pat. No. 10,248,474

APPLICATION EVENT DISTRIBUTION SYSTEM

Microsoft Technology Lice...

1. A method, comprising:receiving a plurality of events generated by one or more of a plurality of primary applications executing on a processing device of a plurality of processing devices, the plurality of primary applications written by one or more primary application developers in a first programming language comprising an application code, the plurality of events occurring in execution of, and generated by, the one or more of the plurality of primary applications, the plurality of events comprising custom events designed by the one or more primary application developers specifically for use in one or more secondary applications based on the one or more primary applications, the one or more secondary applications written by one or more secondary application developers that are different from the primary application developers, wherein the receiving of the plurality of events includes receiving in a prioritization order according to an event priority associated with each of the plurality of events by each of the plurality of processing devices;
defining a set of transformation rules, wherein the transformation rules comprise core transformation rules provided by an administrator of a multiuser service and custom transformation rules provided by the primary application developers;
transforming, by the multiuser service accessible by the one or more secondary application developers, the plurality of events into a plurality of statistics according to the set of transformation rules, the plurality of statistics representing information about the execution of the primary applications across the plurality of processing devices; and
publishing the plurality of statistics and at least a portion of the plurality of events including the custom events to the one or more of the secondary application developers, wherein published ones of the plurality of statistics and at least the portion of the plurality of events are configured for use by the secondary application developers to create the one or more secondary applications to provide information to supplement a user experience with the plurality of primary applications.
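A minimal sketch of the event-to-statistics pipeline described in the claim: events are received in priority order and passed through a set of core and custom transformation rules to produce published statistics. The event fields, rule names, and priority encoding below are invented for illustration.

from collections import defaultdict

events = [
    {"app": "game_a", "name": "level_complete", "priority": 1, "duration_s": 42},
    {"app": "game_a", "name": "level_complete", "priority": 1, "duration_s": 58},
    {"app": "game_a", "name": "custom_boss_defeated", "priority": 0, "duration_s": 7},
]

def core_rule_event_counts(batch):
    counts = defaultdict(int)
    for event in batch:
        counts[event["name"]] += 1
    return {"event_counts": dict(counts)}

def custom_rule_avg_duration(batch):
    durations = [e["duration_s"] for e in batch if e["name"] == "level_complete"]
    return {"avg_level_duration_s": sum(durations) / len(durations) if durations else None}

transformation_rules = [core_rule_event_counts, custom_rule_avg_duration]

# Receive in prioritization order (lower number = higher priority here).
batch = sorted(events, key=lambda e: e["priority"])

statistics = {}
for rule in transformation_rules:
    statistics.update(rule(batch))

print(statistics)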

US Pat. No. 10,248,473

DISCOVERING OBJECT DEFINITION INFORMATION IN AN INTEGRATED APPLICATION ENVIRONMENT

International Business Ma...

1. A method of discovering object definition information in an integrated application environment, comprising:providing an object discovery agent (ODA) client;
providing a plurality of ODAs, wherein each ODA is associated with one application and includes:
application programming interfaces (APIs) to communicate with the associated application to discover definition information on objects maintained by the application; and
code to communicate, with the ODA client, information on objects used by the associated application;
communicating, by the ODA client, to each ODA of the ODAs, selection of at least one object used by the application associated with the ODA to which the ODA client is communicating; and
returning to the ODA client, by each ODA of the ODAs to which the ODA client is communicating, definition information for the at least one selected object; and
providing gathered object definition information to an integration server to integrate the objects in an environment including heterogeneous objects from applications associated with the ODAs, wherein the integration server uses the object definition information to transform a source application object with the integration server for which definition information is gathered to a generic object and from the generic object to a target application object.

US Pat. No. 10,248,472

RECURSIVE MODULARIZATION OF SERVICE PROVIDER COMPONENTS TO REDUCE SERVICE DELIVERY TIME AND COST

1. A computer-readable storage medium having instructions stored thereon that, when executed by a processor, cause the processor to perform operations comprising:encapsulating a first module with a second module, wherein the first module is a virtual network function module, wherein the first module comprises service module implementation logic encapsulated behind a set of application programming interfaces of the first module and behind a set of user interfaces of the first module through which the set of application programming interfaces of the first module are callable, wherein the set of application programming interfaces of the first module comprises a first application programming interface and a second application programming interface, and wherein the second module leverages the set of application programming interfaces of the first module and the set of user interfaces of the first module to offer a set of application interfaces of the second module and a set of user interfaces of the second module in response to encapsulating the first module with the second module;
exposing, via a module controller, the set of application programming interfaces of the second module and the set of user interfaces of the second module; and
receiving, via the set of user interfaces of the second module, input to call at least a portion of the set of application programming interfaces of the second module to instantiate a module instance of the first module encapsulated by the second module, wherein the first application programming interface comprises a configuration application programming interface that collects a configuration for the module instance, and wherein the second application programming interface comprises an instance provisioning application programming interface that instantiates the module instance based upon the configuration.

US Pat. No. 10,248,471

LOCKLESS EXECUTION IN READ-MOSTLY WORKLOADS FOR EFFICIENT CONCURRENT PROCESS EXECUTION ON SHARED RESOURCES

Oracle International Corp...

1. A method for lockless access to a resource in a computing environment, comprising:exposing at least one shared resource to be accessed by two or more processing entities in the computing environment;
forming a data structure associated with the shared resource wherein the data structure comprises:
a consecutive read count value that indicates a number of consecutive read access requests to the shared resource, wherein the consecutive read access requests are successful contiguously-successive read-only access requests for the shared resources by the two or more processing entities, and
a state variable to hold a plurality of state values;
receiving, from a first one of the two or more processing entities, a shared access request to read the shared resource, the shared access request corresponding to a read-only access request;
incrementing, responsive to receiving the shared access request, the consecutive read count value; and
changing the state variable from a first state value to a second state value based at least in part on a comparison of the consecutive read count value to a threshold value, wherein the first state value indicates locks are needed to facilitate the shared access request and the second state value indicates locks are not needed to facilitate the shared access request.
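A minimal sketch of the claimed state transition: count contiguous successful read-only accesses and, once the count reaches a threshold, switch the state variable to "locks not needed". The threshold value, field names, and the write-path reset are assumptions added for completeness, not the patent's terms.

LOCKED_STATE, LOCKLESS_STATE = "locks_needed", "locks_not_needed"
READ_THRESHOLD = 8

class SharedResourceMeta:
    def __init__(self):
        self.consecutive_read_count = 0
        self.state = LOCKED_STATE

    def on_access(self, is_read_only):
        if is_read_only:
            self.consecutive_read_count += 1
            if self.consecutive_read_count >= READ_THRESHOLD:
                self.state = LOCKLESS_STATE
        else:
            # A write breaks the contiguous run of reads; fall back to locking.
            self.consecutive_read_count = 0
            self.state = LOCKED_STATE
        return self.state

meta = SharedResourceMeta()
for _ in range(8):
    meta.on_access(is_read_only=True)
print(meta.state)                            # locks_not_needed
print(meta.on_access(is_read_only=False))    # locks_needed again after a write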
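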

US Pat. No. 10,248,470

HIERARCHICAL HARDWARE OBJECT MODEL LOCKING

International Business Ma...

1. A method comprising:locking a system mutex of a system target with a read-lock operation, wherein the system target comprises a node group comprising a plurality of nodes, wherein each node of the plurality of nodes comprises a descendant group, and wherein each descendant group comprises a plurality of descendants;
subsequent to locking the system mutex, locking the node group with a first write-lock operation, wherein locking the node group comprises simultaneously locking all of the plurality of nodes of the node group with the first write-lock operation; and
subsequent to locking the node group, sequentially locking the descendant group for each of the plurality of nodes with a second write-lock operation, wherein locking the descendant group of a particular node simultaneously locks all of the descendants of the respective descendant group, and wherein the first write-lock operation is different than the second write-lock operation.
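A toy walk-through of the lock ordering the claim describes: read-lock the system mutex, write-lock the whole node group at once, then write-lock each node's descendant group in turn. The lock operations are only recorded, not enforced, and the object names are illustrative.

lock_log = []

def read_lock(name):
    lock_log.append(("read-lock", name))

def write_lock(name):
    lock_log.append(("write-lock", name))

system = {
    "node_group": {
        "node0": ["desc0", "desc1"],
        "node1": ["desc2", "desc3"],
    }
}

read_lock("system_mutex")                      # step 1: shared lock on the system target
write_lock("node_group")                       # step 2: one write-lock covers every node
for node, descendants in system["node_group"].items():
    write_lock(f"{node}.descendant_group")     # step 3: sequential per-node write-lock
    # locking the group implicitly covers all of that node's descendants

for entry in lock_log:
    print(entry)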

US Pat. No. 10,248,469

SOFTWARE BASED COLLECTION OF PERFORMANCE METRICS FOR ALLOCATION ADJUSTMENT OF VIRTUAL RESOURCES

International Business Ma...

1. A computer program product for collecting and processing performance metrics, the computer program product comprising:one or more computer readable storage devices and program instructions stored on the one or more computer readable storage devices, the stored program instructions comprising:
program instructions to assign an identifier corresponding to a first workload associated with a first virtual machine;
program instructions to record resource consumption data of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
program instructions to create a relational association of the first workload and the first virtual machine to the resource consumption data of the at least one processor;
program instructions to determine if the first workload is complete;
responsive to determining that the first workload is not complete, program instructions to calculate a difference in recorded resource consumption data between the performance monitoring interrupt and a previous performance monitoring interrupt;
program instructions to assign an identifier corresponding to a second workload associated with the first virtual machine;
program instructions to record resource consumption data of at least one processor, wherein the at least one processor contains the first virtual machine, at a performance monitoring interrupt;
program instructions to aggregate the recorded resource consumption data to provide one or more resource consumption estimates; and
program instructions to notify a resource manager of a workload switch between the second workload and the third workload and data regarding changes in resource consumption of the at least one processor over time.

US Pat. No. 10,248,468

USING HYPERVISOR FOR PCI DEVICE MEMORY MAPPING

International Business Ma...

1. A method to manage peripheral component interconnect (PCI) memory, the method comprising:mapping base address register (BAR) space for PCI devices with entries in a page table;
associating the page table with a physical memory address of PCI memory to generate memory mapped I/O (MMIO);
determining whether an address of the BAR space for PCI devices with entries in the page table is in MMIO;
where the address of the BAR space for PCI devices with entries in the page table is not in the MMIO, invoking a hypervisor to perform read/write operations to obtain address information for entry to the page table, wherein invoking the hypervisor comprises validating by the hypervisor that a PCI device owns the address of the BAR space; and
where the address of the BAR space for PCI devices with entries in the page table is in the MMIO, using a partition to perform read/write operations to obtain the address information for entry to the page table.
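A hedged sketch of the decision in the claim: if a BAR address already falls in MMIO, the partition performs the read/write and fills the page table entry; otherwise the hypervisor is invoked and first validates that a PCI device owns the address. The address window, function names, and page-table representation are invented for illustration.

MMIO_RANGES = [(0xF000_0000, 0xF100_0000)]           # assumed MMIO window

def in_mmio(addr):
    return any(lo <= addr < hi for lo, hi in MMIO_RANGES)

def hypervisor_access(page_table, addr, device_owns):
    if not device_owns(addr):
        raise PermissionError("no PCI device owns this BAR address")
    page_table[addr] = f"hv-mapped:{addr:#x}"         # hypervisor supplies the entry
    return page_table[addr]

def partition_access(page_table, addr):
    page_table[addr] = f"partition-mapped:{addr:#x}"
    return page_table[addr]

def access_bar(page_table, addr, device_owns):
    if in_mmio(addr):
        return partition_access(page_table, addr)
    return hypervisor_access(page_table, addr, device_owns)

page_table = {}
print(access_bar(page_table, 0xF000_1000, lambda a: True))   # partition path
print(access_bar(page_table, 0xE000_0000, lambda a: True))   # hypervisor path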

US Pat. No. 10,248,467

CODE EXECUTION REQUEST ROUTING

Amazon Technologies, Inc....

1. A system, comprising:one or more processors; and
one or more memories, the one or more memories having stored thereon instructions, which, when executed by the one or more processors, configure the one or more processors to:
maintain a plurality of virtual machine instances on one or more physical computing devices;
in response to a first request to execute a program code, cause the program code to be executed in a container created on one of the plurality of virtual machine instances, the execution of the program code modifying one or more computing resources associated with the container;
determine, based on an amount of information stored by the execution in response to the first request, that the container is not to be shut down for at least a period of time after completion of the execution in response to the first request;
in response to the determination, refrain from shutting down the container prior to receiving a second request to execute the program code; and
in response to the second request, cause the program code to be executed in the container using the one or more computing resources associated with the container.
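A minimal sketch of the keep-warm decision in the claim: if an execution left enough state behind in the container, the container is not shut down for a period and can serve the second request. The byte threshold, timeout, and field names are assumptions.

import time

KEEP_WARM_BYTES = 1 * 1024 * 1024      # keep containers that cached >= 1 MiB
KEEP_WARM_SECONDS = 300

class Container:
    def __init__(self):
        self.cached_bytes = 0
        self.keep_until = 0.0

    def execute(self, program_code, payload_bytes):
        # Running the code modifies resources associated with the container.
        self.cached_bytes += payload_bytes
        if self.cached_bytes >= KEEP_WARM_BYTES:
            self.keep_until = time.time() + KEEP_WARM_SECONDS

    def should_shut_down(self):
        return time.time() >= self.keep_until

container = Container()
container.execute(program_code="handler.py", payload_bytes=2 * 1024 * 1024)
print(container.should_shut_down())    # False: kept warm for the second request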

US Pat. No. 10,248,466

MANAGING WORKLOAD DISTRIBUTION AMONG PROCESSING SYSTEMS BASED ON FIELD PROGRAMMABLE DEVICES

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for managing workload distribution based on field programmable devices, the method comprising:determining, by a processor, a first workload performance for a first general purpose processor and a first field programmable device for a first processing system, wherein determining the first workload performance is based on a first utilization of the first general purpose processor and a first utilization of the first field programmable device in the first processing system, wherein the first utilization of the first field programmable device is calculated based at least in part on an available capacity over a total capacity, wherein the total capacity is calculated based at least in part on a currently utilized capacity plus the available capacity, wherein the available capacity of the first field programmable device for the first processing system is calculated based at least in part on an amount of additional work that the first field programmable device can process without causing an average number of queued requests to increase over a first threshold;
determining, by the processor, a second workload performance for a second general purpose processor and a second field programmable device for a second processing system;
determining whether the first processing system is likely to outperform the second processing system for execution of a workload; and
responsive to determining that the first processing system is likely to outperform the second processing system for the workload, deploying the workload to the first processing system.
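The claim spells out its capacity arithmetic (utilization from available capacity over total capacity, where total capacity is currently utilized capacity plus available capacity), so a worked example may help. The numbers below and the way the two utilizations are combined into a single score are invented for illustration.

currently_utilized = 60.0    # work in flight on the field programmable device
available = 40.0             # extra work it can absorb before queued requests
                             # grow past the first threshold

total_capacity = currently_utilized + available       # 100.0
fpd_utilization = available / total_capacity          # 0.40 (the claim's ratio)

# Combine with the general purpose processor utilization to score the system
# (the equal weighting here is an assumption).
gpp_utilization = 0.55
first_workload_performance = (gpp_utilization + fpd_utilization) / 2
print(total_capacity, fpd_utilization, first_workload_performance)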

US Pat. No. 10,248,465

CONVERGENT MEDIATION SYSTEM WITH DYNAMIC RESOURCE ALLOCATION

Comptel Corporation, Hel...

1. A convergent mediation system for mediating data records, the convergent mediation system comprising:a common platform comprising hardware defining a processing power for online processing and off-line processing of data corresponding to usage of at least one of a communication network or a data communications platform, the common platform comprising:
a plurality of independent processing nodes forming processing streams, each of the processing streams comprising at least two independent processing nodes in sequence, the independent processing nodes structured to continue operating if another independent node in the processing stream fails; and
a buffer between an upstream and a downstream independent processing node from the at least two independent processing nodes in sequence, wherein the system is adapted to transport a portion of the data from the downstream to the upstream processing node through the buffer and to remove the portion of the data from the buffer only after successfully processing the portion of the data in the upstream independent processing node after receipt thereof from the buffer,
wherein at least the first independent node in the processing stream is an interface node adapted to receive the data from the at least one of the communications network or the service delivery platform and send a response thereto, wherein each of the independent processing nodes of the plurality of independent processing nodes comprises a node application and a node base, wherein the node application contains logical rules according to which the processing node processes the data and the node base provides basic functionalities for the processing node; and
a system controller adapted to dynamically allocate the processing power of the common platform for the online processing and off-line processing of the data, wherein the system controller allocates more of the processing power for the online processing of the data when a current processing power allocated to the online processing of the data exceeds a current online processing load caused by the online processing of the data by a value less than a minimum reserve threshold, the minimum reserve threshold defining a reserve of the processing power to maintain a low online processing latency during peak load times, the low online processing latency representing an online processing speed that is small enough to prevent a fraud window.
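A sketch of the allocation rule in the final clause: when the headroom between the power currently allocated to online processing and the current online load drops below the minimum reserve threshold, the controller shifts more of the shared platform toward online processing. The total power, reserve, and step size are invented numbers.

TOTAL_POWER = 100.0            # common platform processing power (arbitrary units)
MIN_RESERVE = 15.0             # reserve kept free to hold online latency low

def rebalance(online_allocated, online_load, step=10.0):
    headroom = online_allocated - online_load
    if headroom < MIN_RESERVE:
        online_allocated = min(TOTAL_POWER, online_allocated + step)
    offline_allocated = TOTAL_POWER - online_allocated
    return online_allocated, offline_allocated

print(rebalance(online_allocated=50.0, online_load=42.0))   # headroom 8 < 15 -> (60.0, 40.0)
print(rebalance(online_allocated=60.0, online_load=42.0))   # headroom 18 >= 15 -> unchanged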

US Pat. No. 10,248,464

PROVIDING ADDITIONAL MEMORY AND CACHE FOR THE EXECUTION OF CRITICAL TASKS BY FOLDING PROCESSING UNITS OF A PROCESSOR COMPLEX

INTERNATIONAL BUSINESS MA...

1. A method, comprising:maintaining a plurality of processing entities of a processor complex, wherein each processing entity has a local cache and the processor complex has a shared cache and a shared memory;
allocating one of the plurality of processing entities for execution of a critical task;
in response to the allocating of one of the plurality of processing entities for the execution of the critical task, folding other processing entities of the plurality of processing entities by stopping processing operations in the other processing entities and releasing the local cache of the other processing entities for use by the processing entity allocated for execution of the critical task, wherein prior to folding the other processing entities, currently scheduled tasks on the other processing entities are temporarily suspended;
utilizing, by the critical task, the local cache of the other processing entities that are folded, the shared memory, additional resources that are freed by folding the other processing entities, and the shared cache, in addition to the local cache of the processing entity allocated for the execution of the critical task;
in response to completion of the critical task in the processing entity that is allocated, performing an unfolding of the other processing entities to make the other processing entities operational; and
in response to performing the unfolding of the other processing entities, resuming any suspended tasks and dispatch queued tasks.

US Pat. No. 10,248,463

APPARATUS AND METHOD FOR MANAGING A PLURALITY OF THREADS IN AN OPERATING SYSTEM

Honeywell International I...

1. A method for managing a plurality of threads, the method comprising:providing an environment associated with an operating system to execute one or more threads of the plurality of threads, the environment comprising a plurality of virtual priorities and a plurality of actual priorities, wherein each of the plurality of threads selects a virtual priority of the plurality of virtual priorities to be assigned, wherein the plurality of virtual priorities comprises a broader range of values than the plurality of actual priorities;
assigning, by the operating system, the plurality of virtual priorities to the plurality of threads;
associating an actual priority of the plurality of actual priorities to one of the plurality of threads based on the plurality of virtual priorities assigned to the plurality of threads; and
executing the one of the plurality of threads associated with the actual priority.
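A hedged sketch of the priority scheme: threads pick from a broad range of virtual priorities, and the operating system compresses that range onto a narrower set of actual priorities before dispatch. The linear mapping and the specific ranges are assumptions, not the patent's rule.

VIRTUAL_RANGE = 1000      # virtual priorities 0..999 (broad range)
ACTUAL_LEVELS = 8         # actual priorities 0..7 (what the scheduler supports)

def to_actual(virtual_priority):
    return virtual_priority * ACTUAL_LEVELS // VIRTUAL_RANGE

threads = {"ui": 990, "logger": 120, "worker": 555}          # chosen virtual priorities
actual = {name: to_actual(vp) for name, vp in threads.items()}
print(actual)                                                 # {'ui': 7, 'logger': 0, 'worker': 4}

# Execute the thread associated with the highest actual priority first.
print(max(actual, key=actual.get))                            # 'ui'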

US Pat. No. 10,248,462

MANAGEMENT SERVER WHICH CONSTRUCTS A REQUEST LOAD MODEL FOR AN OBJECT SYSTEM, LOAD ESTIMATION METHOD THEREOF AND STORAGE MEDIUM FOR STORING PROGRAM

NEC Corporation, Tokyo (...

1. A management server that is connected to at least one object system, the management server comprising:a memory configured to store instructions; and
a processor configured to process the instructions to:
generate a request load model in which load information for the object system is correlated with a classification of request into which request information for the object system is classified; and
select, in a process of the classification of the request information, the classification of request to be an object for sub-classification when a reference value that is calculated from estimation distribution information about the load information for each of the classification of request is equal to or greater than a fixed value.

US Pat. No. 10,248,461

TERMINATION POLICIES FOR SCALING COMPUTE RESOURCES

Amazon Technologies, Inc....

1. A computer implemented method for scaling compute resources, the method comprising:receiving a plurality of user-specified termination policies, including an ordering of attributes, that is used to select one or more virtual machine instances from a group of virtual machine instances to de-provision, the group of virtual machine instances having been provisioned for a user on one or more host computing devices and the ordering of attributes determining the order in which virtual machine instances should be de-provisioned;
detecting that at least one virtual machine instance of the group of virtual machine instances should be de-provisioned based on one or more attributes associated with the at least one virtual machine instance;
selecting the at least one virtual machine to de-provision based at least in part on applying a user-specified termination policy of the plurality of user-specified termination policies;
applying the plurality of user-specified termination policies in a specified order until at least one virtual machine instance is selected to be de-provisioned; and
de-provisioning the selected at least one virtual machine instance.
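An illustrative application of ordered termination policies: each policy narrows the candidate set, and the policies are applied in the specified order until a single instance is selected to de-provision. The policy names and instance attributes below are invented.

instances = [
    {"id": "i-1", "launch_ts": 100, "billing_hour_left": 10},
    {"id": "i-2", "launch_ts": 300, "billing_hour_left": 55},
    {"id": "i-3", "launch_ts": 100, "billing_hour_left": 55},
]

def oldest_launch(candidates):
    oldest = min(c["launch_ts"] for c in candidates)
    return [c for c in candidates if c["launch_ts"] == oldest]

def closest_to_billing_hour(candidates):
    closest = min(c["billing_hour_left"] for c in candidates)
    return [c for c in candidates if c["billing_hour_left"] == closest]

termination_policies = [oldest_launch, closest_to_billing_hour]   # user-specified order

candidates = instances
for policy in termination_policies:
    candidates = policy(candidates)
    if len(candidates) == 1:
        break

print("de-provision:", candidates[0]["id"])    # i-1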

US Pat. No. 10,248,460

STORAGE MANAGEMENT COMPUTER

Hitachi, Ltd., Tokyo (JP...

1. A management computer which communicates with a host computer and a storage device, comprising:a memory configured to store configuration information including information about a plurality of storage media each having a different performance level in the host computer and the storage device, while indicating a storage area supplied from the storage media and the host computer with their association, information about a virtual machine stored in the storage area and executed by the host computer in association with the storage area, and information about a required performance of the virtual machine; and
a CPU connected to the memory, and configured to receive an allocation request of the storage area to the virtual machine executed by the host computer, which contains information about access characteristics by the host computer and a capacity of the storage area to be allocated, select the storage medium capable of providing the storage area with the capacity from the storage media of the storage device and the host computer based on the access characteristics included in the allocation request in reference to the configuration information, generate a configuration scheme for allocating the storage area with the capacity from the selected storage media to the host computer, and output the configuration scheme,
wherein the storage media of the host computer and the storage device include at least one Flash Memory Drive,
wherein the CPU is configured to generate the configuration scheme for allocating the storage area to the virtual machine with a higher required performance preferentially to a Flash Memory Drive in reference to the configuration information, and
wherein the CPU is further configured to control allocation of the storage area to the host computer based on the configuration scheme.

US Pat. No. 10,248,459

OPERATING SYSTEM SUPPORT FOR GAME MODE

Microsoft Technology Lice...

1. A computing system for allocating one or more system resources for the exclusive use of an application, the computing system comprising:at least one processor; and
at least one storage device having stored thereon computer-executable instructions which, when executed by the at least one processor, cause the computing system to:
receive a request for an exclusive allocation of one or more system resources for a first application, the one or more system resources being useable by the first application and one or more second applications, wherein receiving the request for the exclusive allocation of the one or more system resources comprises a negotiation process that includes:
receiving an inquiry about a maximum amount of the one or more system resources that can be allocated to the exclusive use of the first application;
responding to the inquiry by providing the maximum amount of the one or more system resources that can be allocated to the exclusive use of the first application;
receiving information that specifies an amount of the one or more system resources that the first application desires for its exclusive use; and
determining if the first application is to be given the exclusive use of the one or more system resources specified in the received information;
determine an appropriate amount of the one or more system resources that are to be allocated exclusively to the first application;
partition the one or more system resources into a first portion that is allocated for the exclusive use of the first application and a second portion that is not allocated for the exclusive use of the first application, the second portion being available for the use of the one or more second applications; and
provide an indication of a disposition of the request, the indication including information detailing which specific system resources were selected for inclusion in the first portion.
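A toy version of the negotiation steps in the claim: the application asks what could be reserved, states how much it wants, and the system determines an amount and partitions the resource into an exclusive portion and a shared portion. The six-core machine and the arbitration policy are assumptions.

TOTAL_CORES = 6
RESERVED_FOR_SYSTEM = 2          # never handed out exclusively

def max_exclusive_cores():
    return TOTAL_CORES - RESERVED_FOR_SYSTEM

def negotiate(requested_cores):
    granted = min(requested_cores, max_exclusive_cores())
    exclusive = [f"core{i}" for i in range(granted)]
    shared = [f"core{i}" for i in range(granted, TOTAL_CORES)]
    return {"granted": granted, "exclusive_partition": exclusive, "shared_partition": shared}

print(max_exclusive_cores())     # the application's inquiry: 4
print(negotiate(5))              # asks for 5, receives 4 exclusive cores plus the disposition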

US Pat. No. 10,248,458

CONTROL METHOD, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM, AND CONTROL DEVICE

FUJITSU LIMITED, Kawasak...

1. A control method executed by a control device, the control method comprising:identifying a specified time period based on execution history information on previous jobs related to a plurality of systems, the specified time period being a period, prior to an execution start timing of a first job, during which update processing for a data storage area from which the first job refers to data is not executed;
performing control so that evaluation timing of an amount of the data that the first job refers to from the data storage area is included in the identified time period;
determining a specified system from among the plurality of systems based on the amount of the data evaluated at the evaluation timing;
causing the specified system to execute the first job;
monitoring a processing load related to a system, from among the plurality of systems, that executes the one or more jobs different from the first job; and
determining a timing, as the evaluation timing, at which the processing load is equal to or less than a predetermined reference, the determined timing being included in the identified time period.

US Pat. No. 10,248,457

PROVIDING EXCLUSIVE USE OF CACHE ASSOCIATED WITH A PROCESSING ENTITY OF A PROCESSOR COMPLEX TO A SELECTED TASK

INTERNATIONAL BUSINESS MA...

1. A method, comprising:maintaining a plurality of processing entities in a processor complex;
in response to determining that a task is a critical task, dispatching the critical task to a scheduler, wherein it is preferable to prioritize execution of critical tasks over non-critical tasks;
in response to dispatching the critical task to the scheduler, determining, by the scheduler, which processing entity of the plurality of processing entities has a least amount of processing remaining to be performed for currently scheduled tasks;
moving tasks queued on the determined processing entity to other processing entities, and completing the currently scheduled tasks on the determined processing entity;
in response to moving tasks queued on the determined processing entity to other processing entities and completing the currently scheduled tasks on the determined processing entity, dispatching the critical task on the determined processing entity, wherein the plurality of processing entities comprise a plurality of cores, and wherein the determined processing entity corresponds to a determined core that has a clean L1 cache and a clean L2 cache at a time at which the critical task is scheduled for execution in the determined core and no other tasks besides the critical task are running on the determined core at the time; and
in response to the critical task being scheduled on the determined core, the critical task secures exclusive access to the L1 cache and L2 cache of the determined core, wherein if data is not found in the L1 cache of the determined core then the data is retrieved from the L2 cache of the determined core, and wherein each core of the plurality of cores have different sets of L1 cache and L2 cache but share a L3 cache.

US Pat. No. 10,248,456

METHOD AND SYSTEM FOR PROVIDING STACK MEMORY MANAGEMENT IN REAL-TIME OPERATING SYSTEMS

Samsung Electronics Co., ...

1. A method of providing memory management in a real-time operating system (RTOS) based system, the method comprising:creating, by a task generator, a plurality of tasks with a two level stack scheme including a first-level stack and a second-level stack;
scheduling, by a task scheduler, a first task in a first state for execution by transferring task contents associated with the first task from the first-level stack to the second-level stack;
determining whether the first task is pre-empted;
allocating the second-level stack to the first task in a second state, if the first task is not pre-empted;
changing, by the task scheduler, an active task for execution;
determining whether the first task relinquishes control from the second state and is awaiting a resource;
scanning the second-level stack, if the first task relinquishes control from the second state and is awaiting the resource;
determining whether a register is present in a range of stack addresses of the second-level stack;
determining whether usage of the second-level stack is less than a size of the first-level stack, if there are no registers present in the range of the stack addresses of the second-level stack; and
transferring the task contents from the second-level stack to the first-level stack, if the usage of the second-level stack is less than the size of the first-level stack.
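A condensed sketch of the two-level stack idea, as described above: each task keeps a small first-level stack, its contents are transferred to a larger second-level stack for execution, and they move back down only if no live register points into the second-level stack and the usage fits within the first-level stack. The sizes and the register check below are stand-ins for the patent's real conditions.

FIRST_LEVEL_SIZE = 256

class Task:
    def __init__(self, name):
        self.name = name
        self.first_level = bytearray()      # per-task small stack
        self.second_level = None            # shared large stack while running

def schedule(task, second_level_stack):
    # Transfer the task's saved contents onto the second-level stack for execution.
    second_level_stack[:] = task.first_level
    task.second_level = second_level_stack

def relinquish(task, registers_point_into_stack):
    usage = len(task.second_level)
    if not registers_point_into_stack and usage < FIRST_LEVEL_SIZE:
        task.first_level = bytearray(task.second_level)   # shrink back down
        task.second_level = None
        return "demoted to first-level stack"
    return "kept on second-level stack"

shared_stack = bytearray()
task = Task("sensor_poll")
task.first_level = bytearray(b"saved-context")
schedule(task, shared_stack)
print(relinquish(task, registers_point_into_stack=False))   # demoted to first-level stack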

US Pat. No. 10,248,455

STORAGE DEVICE AND TASK EXECUTION METHOD THEREOF, AND HOST CORRESPONDING TO THE STORAGE DEVICE AND TASK EXECUTION METHOD THEREOF

Silicon Motion, Inc., Jh...

1. A storage device, comprising:a data storage media; and
a control unit, electrically coupled to the data storage media and configured for controlling an operation of the data storage media, wherein the control unit is further configured for receiving a task assignment packet from a host, the task assignment packet comprises a plurality of tasks and each of the tasks has a task identification (ID), wherein the control unit is further configured for sorting the tasks of the task assignment packet to generate an execution order for the tasks and reply the host with a task arrangement packet according to the execution order,
wherein the control unit is further configured for receiving at least one of the task IDs from the host and executing at least one of the tasks corresponding to the at least one of the task IDs, wherein the at least one of the task IDs corresponds to at least one of the tasks having a highest priority in the execution order,
wherein the task arrangement packet comprises the execution order for the tasks and the corresponding task IDs.

US Pat. No. 10,248,454

INFORMATION PROCESSING SYSTEM AND APPARATUS FOR MIGRATING OPERATING SYSTEM

FUJITSU LIMITED, Kawasak...

1. An information processing system comprising:a first server;
a second server; and
an information processing apparatus, the information processing apparatus including:
a processor configured to manage a process of causing OS (Operating System) running on the first server to run on the second server, and the first server including:
a driver configured to acquire an address of a first physical memory area allocated for running Booting OS to boot the OS on the first server during an operating status of the first server; and
a controller configured to notify the processor of the address of the first physical memory area,
the first server is a physical server,
the processor causes the Booting OS to run at a pseudo physical address corresponding to the address of the first physical memory area of the first server,
the processor causes the Booting OS to acquire control information of the processor from the OS running on the first server and to re-run the OS as a virtualized guest OS on the first server based on the control information,
the processor equalizes an address of a second physical memory area allocated for running the Booting OS on the second server to the address of the first physical memory area, and
the processor causes the OS to run on the second server by transferring the OS to the second server from the first server.

US Pat. No. 10,248,453

CLIENT LIVE MIGRATION FOR A VIRTUAL MACHINE

Red Hat Israel, Ltd., Ra...

1. A method comprising:connecting a first client device to a running virtual machine instance of a virtual machine;
receiving a request from a second client device to connect to the running virtual machine instance of the virtual machine while the first client device is connected to the running virtual machine instance;
connecting the second client device to the running virtual machine instance in response to the request from the second client device to access the virtual machine; and
disabling, by a processing device, one or more functions to receive input data for the running virtual machine from the first client device after the second client device has been connected to the running virtual machine instance by converting a connection between the first client device and the running virtual machine instance of the virtual machine from a primary mode to a secondary mode, wherein the primary mode corresponds to a respective client device receiving output data from the running virtual machine instance and providing the input data from the respective client device to the running virtual machine instance and the secondary mode corresponds to the respective client device receiving the output data from the running virtual machine instance without providing the input data from the respective client device to the running virtual machine instance.

US Pat. No. 10,248,452

INTERACTION FRAMEWORK FOR EXECUTING USER INSTRUCTIONS WITH ONLINE SERVICES

Microsoft Technology Lice...

1. A computer implemented framework for processing one or more user instructions on behalf of a computer user, the framework comprising:a first computing device comprising a processor and memory, hosting an instruction processing agent, wherein the instruction processing agent, in execution on the first computing device, is configured to receive a user instruction from a user agent executing on a user computing device, and maintain a list of domain agents; and
one or more domain agent computing devices, the one or more domain agent computing devices hosting a plurality of domain agents wherein each domain agent, in execution on a domain agent computing device of the one or more domain agent computing devices, corresponds to a domain and is configured to receive a domain instruction from the instruction processing agent that can be carried out within the domain, and carry out the domain instruction on behalf of the computer user;
wherein, in execution, the instruction processing agent:
receives the user instruction from the user agent;
identifies a domain suitable for carrying out the user instruction based on an identified intent of the user instruction;
maps the user instruction into at least one domain instruction according to a domain ontology of the identified domain;
selects a domain agent of the plurality of domain agents, the selected domain agent corresponding to the identified domain; and
submits the user instruction to the selected domain agent for execution; and
wherein, in execution, the selected domain agent:
maintains a plurality of proxies for interfacing with a plurality of online services, wherein each of the plurality of proxies interfaces with one each of the plurality of online services and maps the domain instruction to a respective one of the one each of the plurality of online services;
receives the domain instruction from the instruction processing agent;
identifies an online service from the plurality of online services for completing the domain instruction; and
executes the domain instruction with the online service via the respective one of the plurality of proxies for the identified online service.

US Pat. No. 10,248,451

USING HYPERVISOR TRAPPING FOR PROTECTION AGAINST INTERRUPTS IN VIRTUAL MACHINE FUNCTIONS

1. A method of a hypervisor restricting access to memory resources by a virtual machine by controlling access by the virtual machine to virtual machine functions, wherein the virtual machine executes with a default page view stored in a default page table, wherein the default page view limits access by the virtual machine to memory resources of the virtual machine, and wherein the virtual machine has access to additional memory resources by invoking virtual machine functions, the method comprising:activating a trap to the hypervisor in response to receiving an instruction that loads an interrupt data structure on the virtual machine, wherein the hypervisor maintains an alternate page table which stores an alternate page view associated with a virtual machine function, wherein the virtual machine function has access to memory resources restricted from the default page view via the alternate page view;
reading the interrupt data structure, by the hypervisor, to determine that the interrupt data structure points to the alternate page view and that the alternate page view includes a reference to a memory location outside of a memory location of the virtual machine function; and
responsive to determining that the alternate page view includes a reference to the memory location outside of the memory location of the virtual machine function, disabling the virtual machine function.

US Pat. No. 10,248,450

VIRTUAL MACHINE MIGRATION USING A PREDICTION ALGORITHM

FUJITSU LIMITED, Kawasak...

6. A virtual machine control device, comprising:a memory; and
a processor coupled to the memory and the processor configured to:
acquire usage information stored in the memory, the usage information including an actual usage value of respective virtual machines operating on each of information processing apparatuses during each of past periods, the actual usage value being an amount of a resource used by the respective virtual machines during each of the past periods;
create prediction information for each of the information processing apparatuses on basis of the acquired usage information, the prediction information including a prediction usage value of the respective virtual machines during each of periods corresponding to the past periods, the prediction usage value being an amount of the resource to be used by the respective virtual machines operating on each of the information processing apparatuses during each of the periods;
determine, upon detecting a first virtual machine whose actual usage value is not included in the usage information, whether a first period exists, in which a sum of the actual usage value of the first virtual machine and prediction usage values of virtual machines operating on a first apparatus of the information processing apparatuses exceeds a criterion for the first apparatus, the first virtual machine operating on the first apparatus; and
issue, upon determining that the first period exists, an instruction to move one of virtual machines operating on the first apparatus to a second apparatus of the information processing apparatuses before the first period, the second apparatus being different from the first apparatus.
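A worked sketch of the prediction check in the claim: for a newly seen virtual machine with no usage history, add its current usage to the per-period predictions for its host and test the host's criterion; if any period would overflow, instruct a migration before that period. All numbers are invented.

host_criterion = 100.0                      # capacity limit for the first apparatus
predicted_usage = {                         # per-period predicted usage of existing VMs
    "00:00-06:00": 55.0,
    "06:00-12:00": 70.0,
    "12:00-18:00": 90.0,
    "18:00-24:00": 60.0,
}
new_vm_actual_usage = 15.0                  # first VM, not present in the usage history

overloaded_periods = [
    period for period, usage in predicted_usage.items()
    if usage + new_vm_actual_usage > host_criterion
]

if overloaded_periods:
    first_period = min(overloaded_periods)
    print(f"migrate one VM off the host before {first_period}")   # before 12:00-18:00
else:
    print("no migration needed")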

US Pat. No. 10,248,449

APPLICATION CONTAINERS RUNNING INSIDE VIRTUAL MACHINE

Virtuozzo International G...

1. A system for launching application containers inside Virtual Machines (VMs) without data duplication, the system comprising:a first VM running on a host;
a second VM running on the host;
a data storage to which the host has access;
a host-side container generation daemon running on the host and configured to interface to VM-side container generation daemons inside each of the VMs;
the VM-side container generation daemons configured to transmit to the host-side container generation daemon requests to pull container layers;
the host-side container generation daemon is configured to receive and process the requests to pull the requested container layers from the VM-side container generation daemons; and
a direct access (DAX) device residing on each of the VMs,
wherein:
the host-side container generation daemon sends a request for any container layers that have not yet been pulled in prior requests, to a registry, and writes these container layers onto the data storage;
the host-side container generation daemon maps all the pulled container layers to the VMs as the DAX devices; and
the host-side container generation daemon maps all needed container layers to the first VM and also maps the container layers that are identical between the first and second VMs to the second VM, without accessing the registry.

US Pat. No. 10,248,448

UNIFIED STORAGE/VDI PROVISIONING METHODOLOGY

VMware, Inc., Palo Alto,...

1. A method for providing a virtual desktop infrastructure (VDI), the method comprising:receiving an indication of a desktop pool type;
provisioning a plurality of virtual machines (VMs) to a host computing device, each of the plurality of VMs configured to execute a virtual desktop of the desktop pool type;
provisioning virtual shared storage for the plurality of VMs by using a storage manager on the host computing device, wherein provisioning the virtual shared storage includes tuning configuration settings of the virtual shared storage based on pool-related parameters associated with the desktop pool type;
detecting that a storage performance benchmark result indicating storage performance of the VMs utilizing the virtual shared storage does not meet a target threshold that is defined for the desktop pool type; and
executing an optimization loop to optimize the virtual shared storage by periodically modifying the configuration settings of the virtual shared storage and/or modifying an allocation of processor cores and/or random access memory (RAM) allocated to the storage manager.

US Pat. No. 10,248,447

PROVIDING LINK AGGREGATION AND HIGH AVAILABILITY THROUGH NETWORK VIRTUALIZATION LAYER

Red Hat, Inc., Raleigh, ...

1. A method comprising:receiving, by a processing device of a host computer system executing a hypervisor, a network packet from a virtual port associated with a virtual machine managed by the hypervisor;
generating a metadata item associated with the network packet, the metadata item comprising an identifier of the virtual port and an identifier of a transmission mode for the network packet;
recording the metadata item in a data structure identifying an address space of the hypervisor;
determining, by the processing device executing the hypervisor, in view of the identifier of the transmission mode of the metadata item, whether the transmission mode is one of a link aggregation (LA) mode, a high availability (HA) mode, or a combined LA and HA mode;
identifying a network interface controller (NIC) of the host machine for processing the network packet according to the determined transmission mode; and
transmitting the network packet to the NIC according to the determined transmission mode.

US Pat. No. 10,248,446

RECOMMENDING AN ASYMMETRIC MULTIPROCESSOR FABRIC LINK AGGREGATION

International Business Ma...

6. A system for recommending an asymmetric multiprocessing fabric link aggregation, the system comprising:a processor; and
a computer readable storage medium having program instructions embodied therewith, the program instructions executable by the processor to cause the system to perform a method, the method comprising:
creating, based upon user defined parameters, a system plan for a single symmetric multiprocessing server having a plurality of computing nodes with at least one hypervisor that spans across the plurality of computing nodes, wherein the single symmetric multiprocessing server has a symmetric multiprocessor structure in which each of the plurality of computing nodes is connected to a shared memory and each input/output device of the single symmetric multiprocessing server;
determining, based upon the user defined parameters, an asymmetric cabling structure between the plurality of computing nodes of the single symmetric multiprocessing server for the system plan while maintaining the symmetric multiprocessor structure of the single symmetric multiprocessing server;
wherein determining the asymmetric cabling structure further includes determining, based on the user defined parameters, which nodes in the plurality of computing nodes are expected to communicate with each other more frequently;
wherein the asymmetric cabling structure is determined to increase bandwidth between the nodes of the plurality of computing nodes that are expected to communicate with each other more frequently while maintaining at least a minimum bandwidth between other nodes of the plurality of computing nodes; and
displaying to a user through a graphical user interface, a recommendation to alter one or more cables connecting the plurality of computing nodes of the symmetric multiprocessing server to conform to the system plan and the asymmetric cabling structure.

US Pat. No. 10,248,445

PROVISIONING OF COMPUTER SYSTEMS USING VIRTUAL MACHINES

VMware, Inc., Palo Alto,...

1. A method for creating a virtualized computer system, the method comprising:receiving information identifying a desired computer configuration to deploy a virtual machine thereon, the information comprising characteristics of a desired hardware platform;
based on the received information, selecting a staged virtual machine from a plurality of staged virtual machines, the plurality of staged virtual machines being created from pre-configured model virtual machines, wherein the plurality of staged virtual machines have at least one model virtual machine identifier removed;
based on the characteristics of the desired hardware platform, selecting, from a plurality of physical host platforms, a physical host platform that is compatible with the staged virtual machine and comprises the desired computer configuration; and
automatically configuring and deploying, on the physical host platform, the staged virtual machine according to the information, wherein the configuring and deploying comprises installing an application in the staged virtual machine when the application is requested via the information and the application is not pre-installed in the staged virtual machine.

US Pat. No. 10,248,444

METHOD OF MIGRATING VIRTUAL MACHINES BETWEEN NON-UNIFORM MEMORY ACCESS NODES WITHIN AN INFORMATION HANDLING SYSTEM

Dell Products, L.P., Rou...

1. A computer implemented method for allocating virtual machines (VMs) to run within a non-uniform memory access system having at least a first processing node and a second processing node, the method comprising:arranging multiple VMs into an ordered array of VMs based on relative percentages of utilization of memory resources, measured in cycles, by the individual VMs associated with the first processing node as a primary weighting factor utilized in ranking the VMs of the first processing node, and utilizing a utilization of processor resources as a secondary weighting factor, wherein arranging the multiple VMs includes:
ranking the multiple VMs based on processor usage and memory usage, the memory usage including a second memory usage value from a second memory associated with the second processing node and wherein the second memory usage value is the amount of memory used by VMs executing on the first processing node; and
generating the ordered array of the multiple VMs executing on the first processing node based on the ranking;
receiving a request at the first processing node for additional capacity for establishing an additional VM on the first processing node having multiple VMs executing thereon;
in response to receiving the request, identifying whether the first processing node has the additional capacity requested;
in response to identifying that the first processing node does not have the additional capacity requested, selecting from the ordered array of the multiple VMs executing on the first processing node, at least one VM having low processor and memory usage relative to the other VMs to be re-assigned for execution from the first processing node to the second processing node;
migrating the at least one selected VM from the first processing node to the second processing node for execution; and
establishing the additional VM on the first processing node when the migrating of the at least one VM to the second processing node provides the additional capacity requested on the first processing node.
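An illustrative ranking along the lines of the claim: remote-memory pressure serves as the primary weighting factor and processor usage as the secondary factor, and the lowest-ranked VM is selected for migration when the node lacks capacity. The field names, percentages, and tie-break direction are assumptions.

vms = [
    {"name": "vm-a", "remote_mem_pct": 12.0, "cpu_pct": 30.0},
    {"name": "vm-b", "remote_mem_pct": 2.0,  "cpu_pct": 5.0},
    {"name": "vm-c", "remote_mem_pct": 12.0, "cpu_pct": 8.0},
]

# Primary weighting factor: memory used on the second node; secondary: processor usage.
ordered = sorted(vms, key=lambda v: (v["remote_mem_pct"], v["cpu_pct"]))

def pick_vm_to_migrate(ordered_vms, node_has_capacity):
    if node_has_capacity:
        return None
    return ordered_vms[0]          # lowest combined usage, cheapest to move

print([v["name"] for v in ordered])                                   # ['vm-b', 'vm-c', 'vm-a']
print(pick_vm_to_migrate(ordered, node_has_capacity=False)["name"])   # vm-b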

US Pat. No. 10,248,443

VIRTUAL MODEM TERMINATION SYSTEM MIGRATION IN A CABLE MODEM NETWORK ENVIRONMENT

Cisco Technology, Inc., ...

1. A method comprising:spawning, by an orchestration component executing using a processor, a first instance of a virtual network function (VNF) on a first server in a cable modem network, wherein the VNF is associated with a specific hardware interface in the cable modem network;
storing state of the first instance as state information in an external database;
spawning a second instance of the VNF on a second server, the second server being different from the first server;
synchronizing state of the second instance with the state information stored in the external database, wherein synchronizing includes associating the second instance with the specific hardware interface;
creating a first communication tunnel between the second instance and a remote physical layer (R-PHY) entity in the cable modem network, wherein the R-PHY entity is communicatively coupled to the first instance and communicates data traffic with the first instance;
creating a second communication tunnel between the first instance and the second instance;
communicating heartbeat messages between the first instance and the second instance over the second communication tunnel;
switching over the data traffic to the first communication tunnel between the second instance and the R-PHY entity; and
deleting the first instance.

US Pat. No. 10,248,442

AUTOMATED PROVISIONING OF VIRTUAL MACHINES

Unisys Corporation, Blue...

1. A computer-implemented method for automatically provisioning virtual machines within a programmable processing system, the method comprising:detecting when processing demand within the programmable processing system exceeds a predefined capacity limit;
starting a virtual machine when the processing demand exceeds the predefined capacity limit;
assigning at least one community-of-interest to the virtual machine when the processing demand on the virtual machine is detected to have exceeded the predefined capacity limit, wherein the virtual machine and other virtual machines within the community-of-interest form an enclave; and
configuring the virtual machine for communications with a virtual gateway in the community-of-interest, wherein a client communicates with virtual machines of the enclave through the virtual gateway;
wherein all virtual machines within the enclave communicate with each other through a common bus, the common bus is encrypted with a key of the community-of-interest;
wherein the virtual gateway decrypts a communication when communicating with the client; and
wherein the community-of-interest is defined by a role played by the virtual machine in the community-of-interest and by capabilities of the virtual machine.

US Pat. No. 10,248,441

REMOTE TECHNOLOGY ASSISTANCE THROUGH DYNAMIC FLOWS OF VISUAL AND AUDITORY INSTRUCTIONS

International Business Ma...

1. A method, comprising:receiving, by a first device, action information describing a first action of a plurality of actions performed on a second device, wherein the action information was generated based on system calls generated on the second device when a user performed the first action on a target object on the second device;
identifying the target object on the second device based on metadata included in the action information;
determining one or more attributes of the target object, based on the metadata included in the action information;
identifying, based on the one or more attributes of the target object, a corresponding object on the first device;
outputting, by the first device:
a sequence of images depicting performance of the first action on the second device;
a textual instruction proximate to the corresponding object, wherein the textual instruction specifies how to perform the first action on the corresponding object on the first device; and
an audio instruction specifying how to perform the first action on the corresponding object on the first device; and
upon determining the first action has successfully been performed on the corresponding object on the first device:
transmitting an indication that the first action has successfully been performed on the corresponding object on the first device; and
receiving, by the first device, action information describing a second action of the plurality of actions performed on the second device.

US Pat. No. 10,248,440

PROVIDING A SET OF USER INPUT ACTIONS TO A MOBILE DEVICE TO CAUSE PERFORMANCE OF THE SET OF USER INPUT ACTIONS

GOOGLE LLC, Mountain Vie...

1. A method comprising:receiving a selection of a first screen capture image representing a screen captured on a mobile device associated with a user, the first image having a first timestamp;
determining, using a data store of images of previously captured screens of the mobile device, a reference image from the data store that has a timestamp prior to the first timestamp;
identifying a plurality of images in the data store that have respective timestamps between the timestamp for the reference image and the first timestamp;
determining a set of stored user input actions that occur prior to the first timestamp of the first image and after a timestamp corresponding to the reference image; and
providing the reference image, the plurality of images, the first image, and the set of user input actions to the mobile device, wherein providing the reference image, the plurality of images, the first image, and the set of user input actions to the mobile device causes the mobile device to perform the set of user input actions starting from a state corresponding to the reference image, wherein in performing the set of stored user input actions the mobile device automatically performs the actions using a virtual screen that is not visible via the mobile device.

US Pat. No. 10,248,439

FORMAT OBJECT TASK PANE

MICROSOFT TECHNOLOGY LICE...

1. A method for providing formatting controls in a format object task pane in an application, the method comprising:receiving a request for a formatting functionality;
in response to receiving the request, presenting a format object task pane, wherein the format object task pane is modeless to thereby enable performance of formatting tasks on multiple objects without having to dismiss and relaunch the format object task pane and the format object task pane is presented within an application window such that the format object task pane does not obstruct a document workspace;
receiving a selection of a first object displayed in the document workspace of the application, wherein the first object is of a first object type and in response to the selection of the first object,
displaying in the format object task pane a first set of formatting controls that are applicable to the first object type;
receiving a selection of a second object displayed in the document workspace of the application, wherein the second object is different from the first object and is of a second object type, and in response to the selection of the second object, displaying in the format object task pane a second set of formatting controls that are applicable to the second object type; and
receiving a selection of a third object displayed in the document workspace of the application, wherein the third object is different from the first object and the second object, and wherein the third object comprises both the first object type and the second object type, and in response to a selection of the third object, initially displaying in the format object task pane the first set of formatting controls and an options toggle that is operable to replace display of the first set of formatting controls with display of the second set of formatting controls in the format object task pane upon selection.

US Pat. No. 10,248,438

SYSTEM AND METHODS FOR A RUN TIME CONFIGURABLE USER INTERFACE CONTROLLER

FLUFFY SPIDER TECHNOLOGIE...

1. A method for reskinning a user interface of a consumer electronics device, the method comprising:intercepting data between an application and input and output drivers within the consumer electronics device, wherein the input and output drivers control at least one of video, audio and tactile interfaces, and wherein the at least one interface includes a plurality of input key variables;
accessing a database for at least one key variable of the plurality of input key variables to determine a key type for the at least one key variable, wherein the key type is either a table or a function;
transforming the intercepted data by use of a set of configuration files, wherein the set of configuration files are alterable by a user and in response to a plurality of external source cues, wherein the plurality of external source cues include a limited power indicator, time, location and ringer alert style, and wherein the transforming includes applying a function to the intercepted data when the key type is a function, and when the key type is a table retrieving additional data from a second database that associates value tuples to the key and incorporating said additional data with the intercepted data; and
executing scripted user interface components using the set of configuration files and the external source cues to generate a uniquely customized interface, wherein the transformed intercepted data replaces objects presented in the interfaces with replaced objects responsive to the set of configuration files, wherein a subset of the replaced objects are consolidated into a single output that includes animations or stepwise alterations responsive to timers and I/O watching files causing at least one of the animations and alterations to move as time progresses, battery levels change and signal strength varies, and wherein the background is at least one of animated and altered to include moving objects based upon timers, and differing images displayed according to if unread messages or missed calls are present.
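
The table-versus-function key handling recited above can be illustrated with a short Python sketch; the databases, key names, and transforms below are assumptions made for the example, not the vendor's actual data model.

    # Illustrative sketch of table-type vs. function-type key handling.
    key_type_db = {"battery_icon": "table", "ring_volume": "function"}

    tuple_db = {  # second database associating value tuples with table-type keys
        "battery_icon": {"low": ("red_icon.png", "blink"), "full": ("green_icon.png", "steady")},
    }

    functions = {"ring_volume": lambda value, cues: 0 if "limited_power" in cues else value}

    def transform(key, value, cues):
        kind = key_type_db[key]
        if kind == "function":
            # Function-type key: apply the function to the intercepted data.
            return functions[key](value, cues)
        # Table-type key: retrieve the associated tuple and merge it with the data.
        extra = tuple_db[key][value]
        return (value,) + extra

    print(transform("battery_icon", "low", cues={"limited_power"}))
    # ('low', 'red_icon.png', 'blink')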

US Pat. No. 10,248,437

ENHANCED COMPUTER PERFORMANCE BASED ON SELECTABLE DEVICE CAPABILITIES

INTERNATIONAL BUSINESS MA...

1. A method, comprising:
determining a total number of the hardware devices having a capability indicating one or more performance aspects capable of being rendered by a hardware device of a computer system;
upon determining that a total number of hardware devices in the system matches the total number of hardware devices having the capability, enabling the capability for each of the hardware devices of the computer system with respect to a corresponding performance aspect; and
upon determining that the total number of hardware devices in the computer system does not match the total number of hardware devices having the capability, and upon determining that a high availability profile of the computer system is to be maintained, and upon determining that the identified capability cannot be dynamically enabled on a per-request basis, enabling a different capability attributed to respective hardware devices, the different capability based on an earlier version of the respective hardware device.
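
The count-matching decision in the claim reduces to a small piece of logic; the following Python sketch is an assumed rendering of that rule, with illustrative field names such as earlier_version_capability.

    # Sketch of the enable/fallback decision; field names are illustrative.
    def choose_capabilities(devices, capability, keep_high_availability, dynamic_per_request):
        have = sum(1 for d in devices if capability in d["capabilities"])
        if have == len(devices):
            # Totals match: enable the capability on every device.
            return {d["id"]: capability for d in devices}
        if keep_high_availability and not dynamic_per_request:
            # Totals differ: enable each device's earlier-version capability instead.
            return {d["id"]: d["earlier_version_capability"] for d in devices}
        return {}

    devices = [
        {"id": "cpu0", "capabilities": {"turbo"}, "earlier_version_capability": "base"},
        {"id": "cpu1", "capabilities": set(),     "earlier_version_capability": "base"},
    ]
    print(choose_capabilities(devices, "turbo",
                              keep_high_availability=True, dynamic_per_request=False))
    # {'cpu0': 'base', 'cpu1': 'base'}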

US Pat. No. 10,248,436

ELECTRONIC APPARATUS CONTROLLING CONNECTION WITH ACCESSORY DEVICE, ACCESSORY DEVICE, CONTROL METHODS THEREFOR, AND STORAGE MEDIUMS STORING CONTROL PROGRAMS THEREFOR

CANON KABUSHIKI KAISHA, ...

1. An electronic apparatus capable of communicating with an accessory device connected, the electronic apparatus comprising:
a detection unit configured to detect whether the accessory device supports both a first communication method and a second communication method of which communication speed is higher than communication speed of the first communication method; and
a setting unit configured to set the second communication method during communication between the electronic apparatus and the accessory device and to set the first communication method during no communication between the electronic apparatus and the accessory device, when said detection unit detects that the accessory device supports both the first communication method and the second communication method.

US Pat. No. 10,248,435

SUPPORTING OPERATION OF DEVICE

International Business Ma...

1. A method for supporting an operation by an operator of a target device, the method comprising:
storing, in advance, by one or more processors, a first topology as a template indicating dependency relationship of a plurality of device types including a device type of the target device;
responsive to an event occurring with the target device, generating, by the one or more processors, a second topology indicating dependency relationship of a plurality of devices including the target device and one or more devices adjacent to the target device, by performing, based on the first topology, a topology discovery for the plurality of devices, each of the plurality of devices having any one of the plurality of device types; and
providing, by the one or more processors, an operation sequence of the plurality of devices to the operator, the operation sequence being generated based on the second topology.

US Pat. No. 10,248,434

LAUNCHING AN APPLICATION

BlackBerry Limited, Wate...

1. A method performed by a hardware processor of a mobile device, comprising:
configuring, at the mobile device, a plurality of process classes, wherein each of the process classes is associated with a template process among a plurality of template processes, each template process of the plurality of template processes is associated with a different randomized memory layout among a plurality of randomized memory layouts, and at least one of the randomized memory layouts is associated with a plurality of applications in a same template process;
receiving, at the mobile device, a launching request for an application;
in response to receiving the launching request for the application, determining a process class, among the plurality of process classes, that is associated with the application; and
in response to determining the process class associated with the application, launching the application through a forked template process associated with the determined process class, wherein other different forked template processes from the plurality of template processes are running on the mobile device.

US Pat. No. 10,248,433

NETWORK MANAGEMENT APPARATUS AND METHOD FOR REMOTELY CONTROLLING STATE OF IT DEVICE

ELECTRONICS AND TELECOMMU...

1. A network management apparatus for remotely controlling a state of an information technology (IT) device connected to the network management apparatus, the network management apparatus comprising:
a detection pattern generator configured to generate a detection pattern containing an Address Resolution Protocol (ARP) bit pattern and a Wake-on-LAN (WOL) bit pattern as a detection target;
a remote state controller configured to, even when the IT device is in an inactive state, analyze a data bit stream received over a network connected to the IT device to determine whether the data bit stream includes a bit pattern that is the same as the detection pattern, and generate an interrupt including information regarding the same bit pattern to wake up the IT device when the same bit pattern is determined to be present in the data bit stream; and
a wake-up cause analyzer configured to, after the wake-up of the IT device caused by the interrupt, analyze a cause of the wake-up according to the bit pattern included in the interrupt to activate a central processing unit (CPU) of the IT device or maintain the inactive state of the IT device.
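
A software analogue of the bit-pattern detection above: a standard Wake-on-LAN magic packet is six 0xFF bytes followed by the target MAC address repeated sixteen times, so a detector can scan the received stream for that byte sequence. The sketch below models only the matching step, not the claimed hardware interrupt path.

    # Software sketch of pattern detection over a received byte stream.
    def build_wol_pattern(mac_bytes: bytes) -> bytes:
        # WOL magic packet payload: 6 x 0xFF then the target MAC repeated 16 times.
        return b"\xff" * 6 + mac_bytes * 16

    def scan_stream(stream: bytes, patterns: dict) -> list:
        """Return (name, offset) for every detection pattern found in the stream."""
        hits = []
        for name, pattern in patterns.items():
            offset = stream.find(pattern)
            if offset != -1:
                hits.append((name, offset))      # would raise the wake-up interrupt
        return hits

    mac = bytes.fromhex("001a2b3c4d5e")
    patterns = {"wol": build_wol_pattern(mac)}
    frame = b"\x00" * 10 + build_wol_pattern(mac) + b"\x00" * 4
    print(scan_stream(frame, patterns))          # [('wol', 10)]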

US Pat. No. 10,248,432

INFORMATION PROCESSING APPARATUS INCLUDING MAIN SYSTEM AND SUBSYSTEM

Canon Kabushiki Kaisha, ...

1. An information processing apparatus comprising:
a main system which includes a main processor;
a subsystem which includes a first sub processor, a memory used for storing a program to be executed by the first sub processor, and a second sub processor; and
a connecting circuit which connects the main system and the subsystem,
wherein, based on a shift event to shift the information processing apparatus to a power-saving state, the subsystem shifts to a subsystem power-saving state and the main system shifts to a main system power-saving state,
wherein, after the shift event to shift the information processing apparatus to the power-saving state is detected and before the subsystem shifts to the subsystem power-saving state, the main processor of the main system transfers a boot program of the subsystem from the main system into the memory of the subsystem via the connecting circuit and shifts the memory of the subsystem that has stored the transferred boot program of the subsystem into a self-refresh state, and
wherein, after a return event to return at least the subsystem from the subsystem power-saving state is detected, the second sub processor causes the memory of the subsystem to cancel the self-refresh state and the first sub processor of the subsystem executes, without necessity to transfer the boot program from the main system into the memory of the subsystem after the return event is detected, the boot program that has been transferred into the memory of the subsystem before the subsystem shifts to the subsystem power-saving state and has remained stored in the memory of the subsystem while the subsystem has been in the subsystem power-saving state.

US Pat. No. 10,248,431

SYSTEM AND METHOD FOR PRESENTING DRIVER INSTALL FILES WHEN ENABLING A USB DEVICE

VERTIV IT SYSTEMS, INC., ...

1. A system for enabling implementation of a secondary function of a universal serial bus (USB) device on a computer that the USB device is communicating with, wherein an operating system of the computer does not have a required driver associated with the operating system which needs to be mapped to the USB device to enable implementation of at least one unsupported feature of the secondary function of the USB device, the system comprising:
a USB device having a housing and being in communication with the computer and with an electronic device of a user, the USB device having a primary function along with the secondary function, the USB device including:
a USB mass storage device housed in the USB device;
at least one file stored on the USB mass storage device, the file being selectable by a user and including the required driver which needs to be mapped to the USB device by the computer to enable the unsupported feature of the secondary function on the computer; and
a manually engagable switch on the housing which enables the user to access the secondary function of the USB device and to transmit at least one file from the USB device to the computer.

US Pat. No. 10,248,430

RUNTIME RECONFIGURABLE DISSIMILAR PROCESSING PLATFORM

HAMILTON SUNDSTRAND CORPO...

1. A configurable channel system, comprising:
a plurality of control channels; and
a plurality of processing platforms,
each of the plurality of processing platforms being associated with one of the plurality of control channels and being configured to perform channel management and channel monitoring of the associated one of the plurality of control channels, each of the plurality of processing platforms comprising:
a first microcontroller comprising a first core; and
a second microcontroller comprising a second core,
wherein the second microcontroller is dissimilar to the first microcontroller,
wherein each of the plurality of processing platforms loads a unique soft core from a corresponding flash memory at a runtime and executes on the first core and the second core resident software in one of an active state and a monitor state,
wherein the first core is configured in the active state to perform the channel management of the associated one of the plurality of control channels, and
wherein the second core is configured in the monitor state to perform the channel monitoring of the associated one of the plurality of control channels.

US Pat. No. 10,248,429

CONFIGURATION BASED ON A BLUEPRINT

HEWLETT PACKARD ENTERPRIS...

1. A configurable computing device, comprising:
a plurality of configurable components;
non-volatile storage including a plurality of blueprints, each blueprint defining a particular configuration for the configurable components; and
processing logic coupled to the non-volatile storage and to:
receive a selection of one of the blueprints from the non-volatile storage;
validate the selected blueprint by validating a digital signature of the selected blueprint, authenticating a hash value associated with the selected blueprint, and determining whether the plurality of configurable components can be configured as defined by the selected blueprint; and
configure the configurable components in accordance with the selected and validated blueprint.
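
The three validation gates named in the claim (signature, hash, feasibility) can be sketched as follows; verify_signature is assumed to be supplied by a separate crypto library, and the blueprint layout is illustrative.

    # Hedged sketch of the three validation gates named in the claim.
    import hashlib

    def validate_blueprint(blueprint, expected_hash, verify_signature, installed_components):
        # Gate 1: digital signature over the blueprint body (verifier supplied by caller).
        if not verify_signature(blueprint["body"], blueprint["signature"]):
            return False
        # Gate 2: hash value associated with the blueprint.
        if hashlib.sha256(blueprint["body"]).hexdigest() != expected_hash:
            return False
        # Gate 3: can the configurable components actually be configured this way?
        required = set(blueprint["required_components"])
        return required.issubset(installed_components)

    blueprint = {"body": b"cfg-v1", "signature": b"sig", "required_components": ["nic", "raid"]}
    ok = validate_blueprint(
        blueprint,
        expected_hash=hashlib.sha256(b"cfg-v1").hexdigest(),
        verify_signature=lambda body, sig: True,     # stand-in verifier for the sketch
        installed_components={"nic", "raid", "gpu"},
    )
    print(ok)   # True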

US Pat. No. 10,248,428

SECURELY BOOTING A COMPUTING DEVICE

Intel Corporation, Santa...

1. A computing device to perform a secure boot, the computing device comprising:
a security engine of the computing device comprising a secure boot module to: (i) consecutively determine a hash value for each block of a plurality of blocks of initial boot firmware, wherein to consecutively determine the hash values comprises to (a) retrieve each block of the plurality of blocks from a memory of the computing device, (b) store each retrieved block in a secure memory of the security engine, and (c) determine the hash value for each block stored in the secure memory; and (ii) generate an aggregated hash value from the hash value determined for each block of the initial boot firmware; and
a processor comprising a Cache as RAM and a processor initialization module to (i) compare the aggregated hash value to a reference checksum value associated with the initial boot firmware to determine whether the aggregated hash value matches the reference checksum value and (ii) complete initialization of the processor in response to a determination that the aggregated hash value matches the reference checksum value,
wherein the secure boot module or the processor initialization module to copy each block stored in the secure memory to the Cache as RAM of the processor.
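
The per-block hashing and aggregated-hash comparison can be modeled in a few lines; the block size and the aggregation rule (hashing the concatenation of per-block digests) are assumptions for the sketch, not values from the patent.

    # Minimal model of per-block hashing and the aggregated-hash comparison.
    import hashlib

    BLOCK_SIZE = 4096

    def aggregated_hash(firmware: bytes) -> str:
        agg = hashlib.sha256()
        for start in range(0, len(firmware), BLOCK_SIZE):
            block = firmware[start:start + BLOCK_SIZE]     # block staged through secure memory
            agg.update(hashlib.sha256(block).digest())     # per-block hash folded in
        return agg.hexdigest()

    def secure_boot_check(firmware: bytes, reference_checksum: str) -> bool:
        # Processor initialization completes only when the values match.
        return aggregated_hash(firmware) == reference_checksum

    fw = bytes(range(256)) * 64                            # 16 KiB of dummy "firmware"
    ref = aggregated_hash(fw)
    print(secure_boot_check(fw, ref))                      # True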

US Pat. No. 10,248,427

SEMICONDUCTOR DEVICE PERFORMING BOOT-UP OPERATION ON NONVOLATILE MEMORY CIRCUIT AND METHOD OF OPERATING THE SAME

SK hynix Inc., Gyeonggi-...

1. A semiconductor device comprising:
one or more internal circuits;
a nonvolatile memory circuit having a first region suitable for storing first data for operation of the nonvolatile memory circuit itself and a second region suitable for storing second data for the internal circuits;
a first register suitable for temporarily storing the first data transferred from the nonvolatile memory and being used to optimize the nonvolatile memory;
one or more second registers suitable for temporarily storing the second data transferred from the nonvolatile memory and being used for an operation of the one or more internal circuits; and
a control circuit suitable for controlling the nonvolatile memory circuit to transmit the first data to the first register and controlling the nonvolatile memory circuit to transmit the second data to the second register after the first data is transferred to the first register and the nonvolatile memory circuit is optimized, when a boot up operation is performed.

US Pat. No. 10,248,426

DIRECT REGISTER RESTORE MECHANISM FOR DISTRIBUTED HISTORY BUFFERS

International Business Ma...

1. A method for restoring register data, comprising:
receiving an instruction to flush one or more general purpose registers (GPRs) in a processor;
determining history buffer entries of a history buffer to be restored to the one or more GPRs;
creating a mask vector that indicates which history buffer entries will be restored to the one or more GPRs and also indicates whether each history buffer entry is in a Level 1 (L1) storage or a Level 2 (L2) storage;
restoring the indicated history buffer entries to the one or more GPRs; and
as each indicated history buffer entry is restored, updating the mask vector to indicate which history buffer entries have been restored.
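
A bit-vector rendering of the mask bookkeeping described above, assuming one restore bit and one L1/L2 level bit per history buffer entry; the entry fields are illustrative.

    # restore_mask: bit i set -> entry i must be restored; level_mask: bit i set -> entry is in L2.
    def build_masks(entries):
        restore_mask = level_mask = 0
        for i, e in enumerate(entries):
            if e["restore"]:
                restore_mask |= 1 << i               # this entry must be restored
                if e["storage"] == "L2":
                    level_mask |= 1 << i             # entry currently lives in L2, not L1
        return restore_mask, level_mask

    def restore_all(entries, gprs, restore_mask, level_mask):
        i = 0
        while restore_mask >> i:
            if restore_mask & (1 << i):
                # The level bit tells the restore logic whether the entry is read from L1 or L2.
                level = "L2" if level_mask & (1 << i) else "L1"
                assert level == entries[i]["storage"]
                gprs[entries[i]["gpr"]] = entries[i]["value"]
                restore_mask &= ~(1 << i)            # update the mask: this entry is restored
            i += 1
        return restore_mask

    entries = [
        {"restore": True,  "storage": "L1", "gpr": 3, "value": 0x11},
        {"restore": False, "storage": "L1", "gpr": 4, "value": 0x22},
        {"restore": True,  "storage": "L2", "gpr": 5, "value": 0x33},
    ]
    gprs = {}
    rm, lm = build_masks(entries)
    print(bin(rm), bin(lm))                              # 0b101 0b100
    print(bin(restore_all(entries, gprs, rm, lm)), gprs) # 0b0 {3: 17, 5: 51}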

US Pat. No. 10,248,425

PROCESSOR WITH SLAVE FREE LIST THAT HANDLES OVERFLOW OF RECYCLED PHYSICAL REGISTERS AND METHOD OF RECYCLING PHYSICAL REGISTERS IN A PROCESSOR USING A SLAVE FREE LIST

VIA ALLIANCE SEMICONDUCTO...

1. A processor, comprising:
a plurality of physical registers, each identified by a physical register index;
a reorder buffer comprising a plurality of instruction entries each storing up to two physical register indexes for recycling corresponding physical registers, wherein said reorder buffer retires up to N instructions in each processor cycle in which N is a positive integer;
a master free list and a slave free list, each comprising N input ports and storing corresponding physical register indexes of said physical registers, wherein said physical registers whose corresponding physical register indexes are stored in said master free list are for allocating to instructions being issued;
a master recycle circuit that routes a first physical register index, which is stored in an instruction entry of an instruction, to one of said N input ports of said master free list when said instruction is retired; and
a slave recycle circuit that routes a second physical register index, which is stored in said instruction entry of said instruction, to one of said N input ports of said slave free list when said instruction is retired;
wherein the processor further comprises a transfer circuit that, for any given processor cycle in which said master recycle circuit routes less than N physical register indexes by a difference number, transfers up to said difference number of physical register indexes stored in said slave free list to available input ports of said master free list.
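
The recycle-and-transfer behavior can be modeled per retire cycle as below; N, the entry fields, and the use of simple deques for the free lists are assumptions of the sketch.

    # Behavioral sketch of the master/slave free-list recycling in the claim.
    from collections import deque

    N = 4   # retire width: each retired entry may hold up to two register indexes

    def recycle_cycle(retired_entries, master_free, slave_free):
        first = [e["first_idx"] for e in retired_entries if e.get("first_idx") is not None]
        second = [e["second_idx"] for e in retired_entries if e.get("second_idx") is not None]

        master_free.extend(first)          # master recycle circuit (up to N input ports)
        slave_free.extend(second)          # slave recycle circuit (up to N input ports)

        # Transfer circuit: unused master input ports are refilled from the slave list.
        spare_ports = N - len(first)
        for _ in range(min(spare_ports, len(slave_free))):
            master_free.append(slave_free.popleft())

    master_free, slave_free = deque(), deque([7, 8, 9])
    recycle_cycle([{"first_idx": 3, "second_idx": 5}], master_free, slave_free)
    print(list(master_free))   # [3, 7, 8, 9] -> 3 recycled directly, 3 transferred from slave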

US Pat. No. 10,248,424

CONTROL FLOW INTEGRITY

Intel Corporation, Santa...

1. An apparatus comprising:
control flow graph (CFG) generator circuitry to generate a CFG for a target application, wherein the CFG comprises a plurality of nodes that each includes a start address of a first basic block, an end address of the first basic block, and a next possible address of a second basic block or a not found tag;
collector circuitry to, during execution of the target application, capture processor trace (PT) data from a PT driver, the PT data comprising a first target instruction pointer (TIP) packet comprising a first runtime target address of an indirect branch instruction of the executing target application, and cause the PT driver to configure PT circuitry to limit collected PT data to TIP packets;
decoder circuitry to extract the first TIP packet from the PT data and to decode the first TIP packet to yield the first runtime target address; and
control flow validator circuitry to determine, based at least in part on the generated CFG, whether a control flow transfer to the first runtime target address corresponds to a control flow violation.
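
A minimal model of the validation step: the CFG is represented as a dictionary keyed by basic-block start address, and a TIP target is flagged when it is not a recorded successor. The policy applied to the "not found" tag is an assumption noted in the comments.

    # Hedged sketch of CFG-based validation of indirect-branch targets from PT TIP packets.
    NOT_FOUND = None

    cfg = {
        0x1000: {"end": 0x1020, "next": [0x1040, 0x1080]},  # two statically known successors
        0x1040: {"end": 0x1060, "next": [NOT_FOUND]},       # successor unknown ("not found" tag)
    }

    def is_violation(branch_block_start, runtime_target):
        node = cfg.get(branch_block_start)
        if node is None:
            return True                      # branch out of a block not present in the CFG
        if NOT_FOUND in node["next"]:
            return True                      # assumed policy: unknown successors are reported
        return runtime_target not in node["next"]

    for block, target in [(0x1000, 0x1040), (0x1000, 0x2000)]:
        print(hex(target), "violation" if is_violation(block, target) else "ok")
    # 0x1040 ok
    # 0x2000 violation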

US Pat. No. 10,248,423

EXECUTING SHORT POINTER MODE APPLICATIONS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method of facilitating processing in a computing environment, said computer-implemented method comprising:
executing a short pointer mode application loaded in an address space configured for use by a plurality of types of applications including the short pointer mode application and a long pointer mode application, the address space having a first portion addressable by short pointers of a defined size and a second portion addressable by long pointers of another defined size, the other defined size being different from the defined size; and
based on executing the short pointer mode application:
converting one or more short pointers of the short pointer mode application to one or more long pointers;
based on using a long format service function, passing an in-memory short pointer as a parameter in a long format to use the long format service function, wherein the passing includes loading or accessing a long pointer representation in a parameter list holding an image of a long form register representation of the parameter; and
using the one or more long pointers to access memory within the first portion of the address space addressable by short pointers.

US Pat. No. 10,248,422

SYSTEMS, APPARATUSES, AND METHODS FOR SNOOPING PERSISTENT MEMORY STORE ADDRESSES

Intel Corporation, Santa...

1. An apparatus comprising:
a decoder circuit to decode an instruction having fields to specify an opcode, a source operand, and a destination operand; and
execution circuitry to execute the decoded instruction to determine whether a tag from the address from the source operand matches a tag in any of N selected cache lines of a N-way set-associative non-volatile memory address cache (NVMAC), the N cache lines being selected by an index from the address from the source operand, wherein, to make the determination, the execution circuitry is to:
compare a tag value from each selected cache line to the tag from the address from the source operand, each of the comparisons to determine whether there is a match,
AND each comparison result with a valid bit from the respective selected cache line,
OR the outputs of the AND operations, and
store the output of the OR in the destination operand, the result of the OR comprising a hit indication;
wherein when there is a match, the hit indication is stored in the destination operand, and when there is not a match, a no hit indication is stored in the destination operand and the NVMAC is updated with the tag from the address from the source operand.
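
The compare/AND/OR sequence in the claim maps onto a short simulation; the number of ways and the index/offset widths below are assumed values.

    # Bit-level sketch of the N-way tag match described in the claim.
    N_WAYS = 4
    INDEX_BITS = 6
    OFFSET_BITS = 6

    def split(addr):
        index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
        tag = addr >> (OFFSET_BITS + INDEX_BITS)
        return index, tag

    def snoop(nvmac, addr):
        index, tag = split(addr)
        ways = nvmac[index]                                          # the N selected cache lines
        matches = [int(w["tag"] == tag) & w["valid"] for w in ways]  # compare, AND with valid bit
        hit = 0
        for m in matches:                                            # OR of all AND results
            hit |= m
        if not hit:
            # Miss: update the NVMAC with the tag from the source address (simple replacement).
            victim = next((w for w in ways if not w["valid"]), ways[0])
            victim["tag"], victim["valid"] = tag, 1
        return hit                                                   # value for the destination operand

    nvmac = [[{"tag": 0, "valid": 0} for _ in range(N_WAYS)] for _ in range(1 << INDEX_BITS)]
    print(snoop(nvmac, 0xDEADB000), snoop(nvmac, 0xDEADB000))        # 0 1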

US Pat. No. 10,248,421

OPERATION OF A MULTI-SLICE PROCESSOR WITH REDUCED FLUSH AND RESTORE LATENCY

International Business Ma...

1. A method of operation of a multi-slice processor, the multi-slice processor including a plurality of execution slices and a plurality of load/store slices, each execution slice comprising an issue queue, one or more general purpose registers, a history buffer, and a plurality of execution units including one or more floating point units and one or more vector/scalar units wherein the load/store slices are coupled to the execution slices via a results bus, the method comprising:
for a target instruction targeting a logical register, determining whether an entry in a general purpose register representing the logical register is pending a flush; and
when the entry in the general purpose register representing the logical register is pending a flush: cancelling the flush for the entry of the general purpose register; storing the target instruction in the entry of the general purpose register representing the logical register, and when an entry in a history buffer targeting the logical register is pending a restore, cancelling the restore for the entry of the history buffer.

US Pat. No. 10,248,420

MANAGING LOCK AND UNLOCK OPERATIONS USING ACTIVE SPINNING

Cavium, LLC, Santa Clara...

1. A method for managing instructions on a processor comprising a plurality of processor cores, the method comprising:
executing a plurality of threads on the processor cores, each thread having access to a stored library of operations including at least one lock operation and at least one unlock operation; and
managing instructions that are issued on a first processor core of the plurality of processor cores, for a first thread executing on the first processor core, the managing including:
for each instruction included in the first thread and identified as being associated with a lock operation corresponding to a particular lock, determining if the particular lock has already been acquired for another thread executing on a processor core other than the first processor core, and if the particular lock has already been acquired, continuing to perform the lock operation for a plurality of attempts using a hardware lock operation different from the lock operation in the stored library, and if the particular lock has not already been acquired, acquiring the particular lock for the first thread, wherein the hardware lock operation performs a modified atomic operation that changes a result of the hardware lock operation for failed attempts to acquire the particular lock relative to a result of the lock operation in the stored library, and
for each instruction included in the first thread and identified as being associated with an unlock operation corresponding to a particular lock, releasing the particular lock from the first thread.
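
The bounded-spinning control flow can be imitated with ordinary threads, using a non-blocking acquire as a stand-in for the claimed hardware lock operation; this shows only the retry-then-fall-back structure, not the processor-level atomic.

    # Simplified model of bounded active spinning before falling back to the library path.
    import threading
    import time

    MAX_SPIN_ATTEMPTS = 1000

    def spin_lock(lock: threading.Lock) -> bool:
        # "Hardware" variant: report failed attempts instead of blocking.
        for _ in range(MAX_SPIN_ATTEMPTS):
            if lock.acquire(blocking=False):      # stands in for the hardware atomic
                return True
        return False

    def worker(lock, results, idx):
        got = spin_lock(lock) or lock.acquire()   # fall back to the blocking library operation
        time.sleep(0.001)                         # critical section
        results[idx] = got
        lock.release()                            # unlock operation releases the lock

    lock, results = threading.Lock(), {}
    threads = [threading.Thread(target=worker, args=(lock, results, i)) for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)   # both threads eventually acquired the lock: {0: True, 1: True}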

US Pat. No. 10,248,419

IN-MEMORY/REGISTER VECTOR RADIX SORT

International Business Ma...

1. A computer-implemented method, comprising:
retrieving a plurality of cache lines of data from an input buffer, wherein each cache line comprises a plurality of elements;
scattering the plurality of elements of each retrieved cache line into a plurality of bins, wherein said scattering comprises using one or more vector instructions;
forming a bin cache line in a corresponding one of the plurality of bins, wherein the bin cache line comprises a group of the plurality of elements which were scattered from multiple distinct cache lines among the plurality of cache lines to the corresponding one of the plurality of bins;
writing the bin cache line from the corresponding one of the plurality of bins to a memory, wherein said writing the bin cache line to the memory comprises using one or more predicated instructions, and wherein the one or more predicated instructions are predicated based on the occupied size of the bin relative to the size of the bin cache line; and
loading the bin cache line from the memory to the input buffer;
wherein the steps are carried out by at least one computing device.
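
One pass of a least-significant-digit radix sort with cache-line-sized bin flushes approximates the scatter/write-out loop above; the line size, radix width, and list-based bins are assumptions of the sketch.

    # One pass of an LSD radix sort with cache-line-sized bin flushes (sizes illustrative).
    CACHE_LINE_ELEMS = 8      # elements per "cache line"
    RADIX_BITS = 8

    def radix_pass(values, shift, output):
        bins = [[] for _ in range(1 << RADIX_BITS)]
        for start in range(0, len(values), CACHE_LINE_ELEMS):
            line = values[start:start + CACHE_LINE_ELEMS]       # one cache line from the input buffer
            for v in line:                                      # scatter (vectorized in the claim)
                b = (v >> shift) & ((1 << RADIX_BITS) - 1)
                bins[b].append(v)
                if len(bins[b]) == CACHE_LINE_ELEMS:            # a full bin cache line has formed
                    output[b].extend(bins[b])                   # write-out (predicated in the claim)
                    bins[b].clear()
        for b, rest in enumerate(bins):                         # flush partially filled bin lines
            output[b].extend(rest)

    values = [3, 258, 7, 514, 1, 259]
    output = [[] for _ in range(1 << RADIX_BITS)]
    radix_pass(values, 0, output)
    print([x for b in output for x in b])   # grouped by low byte: [1, 258, 514, 3, 259, 7]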

US Pat. No. 10,248,418

CLEARED MEMORY INDICATOR

INTERNATIONAL BUSINESS MA...

1. A system comprising:
a memory having computer readable instructions; and
one or more processing devices for executing the computer readable instructions, the computer readable instructions comprising:
receiving an instruction from a requesting device for access to an amount of memory;
selecting a memory space in the memory corresponding to the amount of memory, the memory space including one or more memory blocks, each memory block of the one or more memory blocks having an address;
inspecting a clear indicator associated with each memory block, the clear indicator stored as at least one bit in a field that is separate from contents of a respective memory block, the clear indicator indicating whether the respective memory block is in a cleared state, wherein the cleared state is a state of the respective memory block in which the respective memory block does not have any data stored therein;
determining based on the clear indicator whether each memory block in the memory space is in the cleared state;
based on determining that a first set of one or more memory blocks in the memory space is not in the cleared state while a second set of one or more memory blocks in the memory space is in the cleared state, clearing the memory space by clearing only the first set of one or more memory blocks;
assigning the memory space to the requesting device;
in response to the requesting device finishing its use of the memory space, accessing the clear indicator for each memory block and determining based on the clear indicator whether (1) each memory block in the memory space used by the requesting device is in the cleared state or (2) a third set of one or more memory blocks in the memory space is not in the cleared state while a fourth set of one or more memory blocks in the memory space is in the cleared state;
based on determining that a third set of one or more memory blocks in the memory space used by the requesting device is not in the cleared state while a fourth set of one or more memory blocks in the memory space used by the requesting device is in the cleared state, clearing the memory space by clearing only the third set of one or more memory blocks; and
making the memory space available for subsequent requests.
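
The clear-indicator bookkeeping can be sketched as below: one flag per block, kept outside the block contents, so that only blocks not already in the cleared state are zeroed on allocation and again on release. The block size and the dirty-detection rule are illustrative.

    # Sketch of allocation/release using per-block clear indicators.
    BLOCK_SIZE = 64

    class ClearedMemory:
        def __init__(self, n_blocks):
            self.blocks = [bytearray(BLOCK_SIZE) for _ in range(n_blocks)]
            self.cleared = [True] * n_blocks       # clear indicator kept separate from contents

        def _clear_if_needed(self, indices):
            for i in indices:
                if not self.cleared[i]:            # clear only blocks not already in the cleared state
                    self.blocks[i][:] = bytes(BLOCK_SIZE)
                    self.cleared[i] = True

        def allocate(self, indices):
            self._clear_if_needed(indices)
            return indices                         # assign the memory space to the requester

        def release(self, indices):
            # The requester may have written data; mark and clear only the dirty blocks.
            for i in indices:
                if any(self.blocks[i]):
                    self.cleared[i] = False
            self._clear_if_needed(indices)

    mem = ClearedMemory(4)
    space = mem.allocate([0, 1])
    mem.blocks[0][:4] = b"data"
    mem.release(space)
    print(mem.cleared)    # [True, True, True, True]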

US Pat. No. 10,248,417

METHODS AND APPARATUSES FOR CALCULATING FP (FULL PRECISION) AND PP (PARTIAL PRECISION) VALUES

VIA ALLIANCE SEMICONDUCTO...

1. A method for calculating FP (Full Precision) and PP (Partial Precision) values, performed by an ID (Instruction Decode) unit, the method comprising:
decoding an instruction request from a compiler; and
executing a loop m times to generate m microinstructions for calculating first-type data, or n times to generate n microinstructions for calculating second-type data according to an instruction mode of the instruction request, thereby enabling a plurality of ALGs (Arithmetic Logic Groups) to execute a plurality of lanes of a thread;
wherein m is less than n and a precision of the first-type data is lower than a precision of the second-type data;
wherein each ALG comprises:
a first-type computation lane; and
a plurality of second-type computation lanes,
wherein when the instruction mode is a first mode, each of the first-type computation lane and the second-type computation lanes completes calculations for a set of the first-type data independently; and, when the instruction mode is a second mode, each of the second-type computation lanes calculates a portion of a set of the second-type data to generate a partial result and the first-type computation lane combines the partial results by the second-type computation lanes, outputs a combined result and uses the combined result to complete calculations for the set of the second-type data.

US Pat. No. 10,248,416

ENHANCING CODE REVIEW THROUGHPUT BASED ON SEPARATE REVIEWER AND AUTHOR PROFILES

International Business Ma...

1. A computer-implemented method for enhancing code review throughput based on separate profiles, comprising:
generating a first type of profile for a first user based on a collection of first source code associated with the first user, wherein the first type of profile indicates different types of errors that occurred in the collection of first source code greater than at least one of a first predefined threshold number of times and a first predefined percentage of time;
determining, based on the first type of profile for the first user, that the different types of errors have a likelihood of occurring in source code written by the first user, when the first type of profile indicates that frequency of occurrence of the different types of errors is greater than at least one of the first predefined threshold number of times and the first predefined percentage of time;
receiving second source code associated with the first user;
determining, for each of one or more second users, one or more coding review attributes associated with the second user from a second type of profile for the second user;
determining a set of metrics for each of the second users, based on a collection of third source code associated with different users, wherein the set of metrics comprises at least one of: (i) a number of the different types of errors identified for every predetermined number of lines of the third source code and (ii) a code review throughput of the third source code;
generating, based on the one or more metrics, a proficiency score associated with each coding review attribute of each of the second users;
evaluating the one or more coding review attributes based on the proficiency score assigned to each coding review attribute, wherein the proficiency score for each coding review attribute indicates a number of times the second user identified at least one of the different types of errors indicated in the first type of profile greater than at least one of a second predefined number of times and a second predefined percentage of time when reviewing source code;
selecting at least one of the second users to review the first user's second source code based at least in part on the evaluation;
sending the second source code to the selected at least one second user to review; and
after sending the second source code, highlighting different portions of the second source code on an interface of the selected at least one second user based on the first type of profile for the first user to increase efficiency of code review by the selected at least one second user, wherein the different portions of the second source code are associated with the different types of errors that have the likelihood of occurring in source code written by the first user.
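
An assumed scoring rule makes the reviewer-selection step concrete: candidates are scored only on the error types the author's profile flags, and the highest scorer is chosen. The profile shapes and numbers below are invented for the example.

    # Illustrative reviewer-selection scoring; the aggregation rule is an assumption.
    author_profile = {"likely_errors": {"null-check", "off-by-one"}}

    reviewers = {
        "alice": {"null-check": 12, "off-by-one": 3, "naming": 40},  # times each error type was caught
        "bob":   {"null-check": 2,  "off-by-one": 1, "naming": 90},
    }

    def proficiency(reviewer_stats, likely_errors):
        # Score a reviewer only on the error types the author is likely to introduce.
        return sum(reviewer_stats.get(err, 0) for err in likely_errors)

    def select_reviewer(reviewers, author_profile):
        return max(reviewers,
                   key=lambda r: proficiency(reviewers[r], author_profile["likely_errors"]))

    print(select_reviewer(reviewers, author_profile))   # alice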

US Pat. No. 10,248,415

DYNAMIC CODE GENERATION AND MEMORY MANAGEMENT FOR COMPONENT OBJECT MODEL DATA CONSTRUCTS

Microsoft Technology Lice...

1. A computing device, comprising:
a processor communicatively coupled to a memory, the memory storing computer executable instructions that when executed cause the processor to:
initiate a request to reclaim memory associated with a script code object with a dependency to a native code object represented in native code;
in a first phase, request the script code object and the native code object to prepare for the request to reclaim the memory associated with the script code object; and
in a second phase, only in response to receiving confirmation from the script code object that the script code object is prepared for the request and in response to receiving confirmation from the native object that the native object is prepared for the request, unwind the dependency to the native code object represented in native code and proceed to reclaim the memory associated with the script code object.

US Pat. No. 10,248,414

SYSTEM AND METHOD FOR DETERMINING COMPONENT VERSION COMPATIBILITY ACROSS A DEVICE ECOSYSTEM

Duo Security, Inc., Ann ...

1. A method comprising:
at a network connected platform, collecting component version data from a plurality of device instances;
constructing a device version dataset for each device instance of the plurality of device instances based on the component version data;
classifying the device version dataset for each device instance into a device profile repository;
at the network connected platform, querying the device profile repository in response to a component version query request from a requesting entity, wherein the component version query request comprises component version specification data; and
sending, to the requesting entity in response to the component version query request, query response data relating to results of the querying the device profile repository.
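
A toy repository and query show the data flow in the claim; the query shape (component name plus minimum version) and the version-ordering rule are assumptions for the sketch.

    # Minimal sketch of the device-profile repository and a component-version query.
    device_reports = [
        {"device": "laptop-1", "versions": {"openssl": "1.1.1", "browser": "124.0"}},
        {"device": "phone-7",  "versions": {"openssl": "3.0.2", "browser": "118.0"}},
    ]

    def as_tuple(version):                       # crude version ordering for the sketch
        return tuple(int(p) for p in version.split("."))

    repository = {r["device"]: r["versions"] for r in device_reports}   # classified profiles

    def query(component, min_version):
        out_of_date = [d for d, v in repository.items()
                       if component in v and as_tuple(v[component]) < as_tuple(min_version)]
        return out_of_date                       # query response data for the requesting entity

    print(query("openssl", "3.0.0"))             # ['laptop-1']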

US Pat. No. 10,248,413

MERIT BASED INCLUSION OF CHANGES IN A BUILD OF A SOFTWARE SYSTEM

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method, comprising:
receiving, from a first user, a change to a software system under development;
calculating, based upon success of prior changes received from the first user, a merit score for the first user;
comparing the merit score for the first user with a merit threshold for the software system under development;
determining that the merit score for the first user complies with the merit threshold; and
accepting, responsive to the determining, the change for inclusion in a build of the software system under development, wherein
the merit score for the first user is increased responsive to approval of the change by a second user having at least a minimum merit score.
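
The merit gate and the approval-driven score increase can be sketched with an assumed scoring formula (fraction of successful prior changes plus a small approval bonus); the thresholds are illustrative.

    # Sketch of the merit gate; the scoring formula and thresholds are assumptions.
    MERIT_THRESHOLD = 0.7
    APPROVER_MIN_MERIT = 0.8
    APPROVAL_BONUS = 0.05

    def merit_score(author):
        prior = author["prior_changes"]
        base = sum(1 for c in prior if c["successful"]) / len(prior) if prior else 0.0
        return base + author.get("bonus", 0.0)

    def accept_change(author):
        # Accept the change for the build only if the author's merit meets the threshold.
        return merit_score(author) >= MERIT_THRESHOLD

    def record_approval(author, approver):
        # Approval by a second user with at least the minimum merit raises the author's score.
        if approver["merit"] >= APPROVER_MIN_MERIT:
            author["bonus"] = author.get("bonus", 0.0) + APPROVAL_BONUS

    author = {"prior_changes": [{"successful": True}, {"successful": True}, {"successful": False}]}
    print(accept_change(author))                        # False: 0.67 < 0.70
    record_approval(author, approver={"merit": 0.9})
    print(accept_change(author))                        # True:  0.72 >= 0.70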