US Pat. No. 10,769,097

AUTONOMOUS MEMORY ARCHITECTURE

Micron Technologies, Inc....

1. A system, comprising: a host interface to couple to a host device; and
a distributed array of autonomous memory devices in communication with the host device, the distributed array of autonomous memory devices being implemented by a plurality of die, each of the autonomous memory devices being formed on a single die of the plurality of die, each single die comprising a microcontroller that is embedded on the single die to perform computations independently of the host device, each of the autonomous memory devices being configured to maintain a routing table to keep track of the other autonomous memory devices in the distributed array and to store a latency cost based on a location of the autonomous memory device in communication with the other autonomous memory devices in the distributed array, the routing table enabling the autonomous memory device to route a message to another autonomous memory device in the distributed array.
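The claim above has each autonomous memory device maintain a routing table that tracks its peers and a latency cost tied to its location in the array. A minimal sketch of such a table, with hypothetical device identifiers and costs (not Micron's actual implementation):

```python
# Hypothetical sketch: one autonomous memory device's routing table,
# mapping each peer device to a next hop and a latency cost.
class AutonomousMemoryDevice:
    def __init__(self, device_id):
        self.device_id = device_id
        self.routing_table = {}  # peer_id -> (next_hop_id, latency_cost)

    def learn_route(self, peer_id, next_hop_id, latency_cost):
        # Keep the lowest-latency route seen for each peer.
        current = self.routing_table.get(peer_id)
        if current is None or latency_cost < current[1]:
            self.routing_table[peer_id] = (next_hop_id, latency_cost)

    def next_hop(self, peer_id):
        # Route a message toward a peer via the stored next hop.
        return self.routing_table[peer_id][0]

dev = AutonomousMemoryDevice("mem0")
dev.learn_route("mem3", next_hop_id="mem1", latency_cost=5)
dev.learn_route("mem3", next_hop_id="mem2", latency_cost=3)
print(dev.next_hop("mem3"))  # mem2
```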

US Pat. No. 10,769,096

APPARATUS AND CIRCUIT FOR PROCESSING DATA

Samsung Electronics Co., ...

1. A communication device comprising: a communication processor;
an application processor comprising a direct memory access unit; and
a memory comprising:
a first memory region accessed by the communication processor and not by the application processor,
a second memory region accessed by the communication processor and the application processor,
a third memory region accessed by the application processor and not by the communication processor, and
a fourth memory region accessed by the communication processor and the application processor for location information,
wherein the communication processor:
writes a first information in the first memory region,
writes a second information in the second memory region, wherein the second information is scattered in the second memory region, and
transmits, as an inter-processor communication message, a fourth information indicative of a location of the second information which is scattered in the second memory region to the fourth memory region as a linked list, and
wherein the application processor:
identifies the location of the second information based on the fourth information in the fourth memory region,
reads the second information from the second memory region using the direct memory access unit connected with the second memory region and the third memory region, and
writes third information in the third memory region.
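The claim's fourth memory region holds a linked list describing where the scattered second information lives, which the application processor walks before gathering the fragments by DMA. A hypothetical sketch of that scatter-list mechanism (field names and offsets are illustrative):

```python
# Hypothetical sketch of the claim's linked list of fragment locations.
class LocationNode:
    def __init__(self, offset, length):
        self.offset = offset
        self.length = length
        self.next = None

def build_location_list(fragments):
    # fragments: (offset, length) pairs in the order they were written.
    head = tail = None
    for offset, length in fragments:
        node = LocationNode(offset, length)
        if head is None:
            head = tail = node
        else:
            tail.next = node
            tail = node
    return head

def gather(memory, head):
    # Application-processor side: walk the list and gather the fragments.
    data = bytearray()
    node = head
    while node is not None:
        data += memory[node.offset:node.offset + node.length]
        node = node.next
    return bytes(data)

region2 = bytearray(64)
region2[0:3] = b"foo"
region2[40:43] = b"bar"
head = build_location_list([(0, 3), (40, 3)])
print(gather(region2, head))  # b'foobar'
```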

US Pat. No. 10,769,095

IMAGE PROCESSING APPARATUS

Canon Kabushiki Kaisha, ...

1. An image processing apparatus comprising: an imaging unit;
serially connected image processors, wherein a first stage processor included in the image processors is connected to the imaging unit;
a mode instruction circuitry that provides an instruction to select one of operation modes including a moving image capture mode, wherein in the moving image capture mode, each of image processors other than a final stage processor included in the image processors performs a predetermined image processing on a portion of image data that needs to be processed, and outputs the processed portion of the image data and a remaining portion of the image data to a subsequent image processor without performing the predetermined image processing on the remaining portion of the image data; and
a power supply unit that supplies power to the image processors,
wherein one of the image processors is set as a power supply master, and
the image processor that has been set as the power supply master performs control so as to sequentially bring the image processors into the power supply state corresponding to the operation mode indicated by the mode instruction circuitry, wherein the control includes measuring first and second predetermined time periods from receipt of the instruction, transmitting a first activation instruction and a first notification of the selected operation mode to a first of the serially connected image processors when the first predetermined time period elapses, and transmitting a second activation instruction and a second notification of the selected operation mode to a second of the serially connected processors when the second predetermined time period elapses.

US Pat. No. 10,769,094

CONFIGURATION OPTIONS FOR DISPLAY DEVICES

Hewlett-Packard Developme...

1. A method, comprising: receiving a selection of a data transfer configuration option from a plurality of different data transfer configuration options of a data cable that has data lanes to transfer video data and non-video data, wherein the selection is made on an on-screen display menu of a display device, wherein the on-screen display menu is to display different selectable configuration options for the display device that are associated with different data transfer configuration options of the data cable;
modifying a reported number of supported resolutions or refresh rates in accordance with the data transfer configuration option that is selected; and
transmitting the reported number of supported resolutions or refresh rates that is modified to a computing device connected to the display device via the data cable to transmit data via the data cable in accordance with the data transfer configuration option that is selected.

US Pat. No. 10,769,093

AUTOMATICALLY CONFIGURING A UNIVERSAL SERIAL BUS (USB) TYPE-C PORT OF A COMPUTING DEVICE

Dell Products L.P., Roun...

1. A method comprising: receiving, by a logic device, an indication from a port controller of a computing device that an external device is connected via a cable to a universal serial bus (USB) Type-C port of the computing device, wherein the computing device comprises:
the logic device;
a cross-point switch coupled to the logic device; and
the USB Type-C port;
after receiving the indication, determining, by the logic device, a device class of the external device;
after determining the device class of the external device, determining, by the logic device, a role of the external device;
in response to determining, based on the role and the device class, that the external device comprises a display device, sending, by the logic device, an instruction to the cross-point switch to connect at least a video bus of the computing device to the USB Type-C port;
in response to determining, based on the role and the device class, that the external device comprises a storage device, sending, by the logic device, an instruction to the cross-point switch to connect at least a device bus to the USB Type-C port; and
in response to determining, based on the role and the device class, that the external device is requesting that power be provided, sending, by the logic device, an instruction to the cross-point switch to connect at least a power bus to the USB Type-C port.
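The three "in response to determining" limitations above amount to a dispatch on the attached device's class and role that selects which bus the cross-point switch routes to the Type-C port. A hypothetical sketch of that dispatch (bus and role names are illustrative, not Dell's actual firmware):

```python
# Hypothetical sketch of the claim's class/role dispatch: pick which bus
# the cross-point switch connects to the USB Type-C port.
def select_bus(device_class, role):
    if device_class == "display":
        return "video_bus"    # display device -> video bus
    if device_class == "storage":
        return "device_bus"   # storage device -> device bus
    if role == "power_sink":  # external device requests power
        return "power_bus"
    return None               # no routing decision for this device

print(select_bus("display", "sink"))        # video_bus
print(select_bus("storage", "host"))        # device_bus
print(select_bus("unknown", "power_sink"))  # power_bus
```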

US Pat. No. 10,769,092

APPARATUS AND METHOD FOR REDUCING LATENCY OF INPUT/OUTPUT TRANSACTIONS IN AN INFORMATION HANDLING SYSTEM USING NO-RESPONSE COMMANDS

Dell Products, L.P., Rou...

1. An information handling system having reduced latency of input/output transactions, comprising: a system memory;
a processor coupled to the system memory, the processor to direct a command response issued by the system memory; and
an accelerator coupled to the system memory, the accelerator configured to:
intercept the command response that is directed by the processor and issued by the system memory;
distinguish a correct drive from an incorrect drive based on an attribute of the command response;
map an address of the command response and send the command response to the correct drive;
send a no-response command to the incorrect drive; wherein the correct drive is configured to complete the command response, and the incorrect drive is configured to issue a completion and interrupt response including encoded information in reserved bits of the interrupt;
receive the completion and interrupt response; and
based on the encoded information in the reserved bits of the interrupt, discard the completion and interrupt response to prevent the processor from processing the completion and interrupt response from the incorrect drive.

US Pat. No. 10,769,091

MEMORY CARD AND ELECTRONIC SYSTEM

SAMSUNG ELECTRONICS CO., ...

1. An electronic system comprising: a controller;
an input/output device;
a memory;
an interface that electrically communicates with an external device; and
a bus that communicatively connects the controller, the input/output device, the memory, and the interface to each other,
wherein the interface comprises a card socket configured to accommodate a memory card,
wherein the card socket comprises a first connection pin connectable to a card detection terminal of a universal flash storage (UFS) card when the UFS card is inserted in the card socket, and
wherein, when the memory card is inserted in the card socket, the controller is configured to detect the memory card as a UFS card when a voltage of the first connection pin is lower than a first reference voltage and to supply a first voltage to a power supply terminal of the UFS card, and is configured to, when the voltage of the first connection pin is in a floating state, apply a second voltage to a second connection pin and detect the memory card as a micro secure digital (SD) card or a peripheral component interconnect express (PCIe) card based on a voltage of a third connection pin.
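The detection sequence above distinguishes card types from pin voltages: a low first pin indicates UFS, while a floating first pin triggers driving the second pin and classifying on the third pin. A hypothetical sketch of that decision logic (the claim does not specify the thresholds or which third-pin level means microSD versus PCIe, so those choices are assumptions):

```python
# Hypothetical sketch of the claim's card-detection sequence.
# FLOATING stands in for a high-impedance pin; the threshold values
# and the microSD/PCIe split on pin 3 are illustrative assumptions.
FLOATING = None
FIRST_REF_V = 1.0

def detect_card(pin1_voltage, pin3_voltage_after_drive):
    # Pin 1 below the first reference voltage -> UFS card.
    if pin1_voltage is not FLOATING and pin1_voltage < FIRST_REF_V:
        return "UFS"
    # Pin 1 floating: drive pin 2, then classify on pin 3's voltage.
    if pin1_voltage is FLOATING:
        if pin3_voltage_after_drive < FIRST_REF_V:
            return "microSD"
        return "PCIe"
    return "unknown"

print(detect_card(0.2, 0.0))       # UFS
print(detect_card(FLOATING, 0.3))  # microSD
print(detect_card(FLOATING, 1.8))  # PCIe
```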

US Pat. No. 10,769,090

INFORMATION PROCESSING APPARATUS, CONTROL METHOD OF INFORMATION PROCESSING, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING PROGRAM

FUJITSU LIMITED, Kawasak...

1. An information processing apparatus comprising: a programmable circuit including a plurality of reconfigurable regions in which logic is reconfigurable; and
a processor coupled to the programmable circuit, the processor being configured to
execute an extraction process that includes extracting, from the plurality of reconfigurable regions, one or more installable regions in which any of a plurality of first circuits is installable; each of the plurality of first circuits including a first processing section for executing a first process and a first input and output section for receiving and outputting information; each of the plurality of first circuits being configured so that the positions of the first input and output sections are different from each other when each of the plurality of first circuits is installed in any of the reconfigurable regions,
execute a first determination process that includes determining whether each of a plurality of second circuits is installable in a first reconfigurable region; the first reconfigurable region being adjacent to the one or more installable regions extracted by the extraction process; each of the plurality of second circuits including a second processing section for executing the first process and a second input and output section to be coupled to a first input and output section among the first input and output sections; each of the plurality of second circuits corresponding to any of the plurality of first circuits,
execute a second determination process that includes determining a first installation circuit and a first installation region based on the determination executed by the first determination process; the first installation circuit being among the plurality of first circuits and to be installed in the programmable circuit; the first installation region being a region that is among the one or more installable regions and in which the first installation circuit is to be installed, and
execute an installation process that includes installing the first installation circuit determined by the second determination process in the first installation region determined by the second determination process.

US Pat. No. 10,769,089

COMBINATION WRITE BLOCKING SYSTEMS WITH CONNECTION INTERFACE CONTROL DEVICES AND METHODS

CRU Acquisition Group, LL...

1. A write blocking system comprising: a host computer including a host processor operatively configured as a blocking driver;
a switch configured to be connected to the host computer and a storage drive and operable selectively to operatively couple the host computer to the storage drive, and
a connection interface control device physically separate from the host computer and configured to be operatively coupled to the host computer and to the switch;
wherein the connection interface control device is configured to receive a communication from the blocking driver while the host computer is operatively uncoupled from the storage drive; the blocking driver is configured to communicate with the connection interface control device; the connection interface control device is configured selectively to operate the switch to establish communication between the storage drive and the host computer after receiving the communication from the blocking driver; and the blocking driver is further configured to prevent the host computer from altering data stored on the connected storage drive.

US Pat. No. 10,769,088

HIGH PERFORMANCE COMPUTING (HPC) NODE HAVING A PLURALITY OF SWITCH COUPLED PROCESSORS

Raytheon Company, Waltha...

1. A computing node comprising: a first motherboard;
at least two first processors integrated onto the first motherboard and configured to communicate with each other; and
a first switch integrated onto the first motherboard, the at least two first processors communicably coupled to the first switch, the first switch configured to communicably couple the at least two first processors to second processors integrated onto a second motherboard via a second switch integrated onto the second motherboard, the second switch configured to communicably couple the second processors to the first switch and third processors integrated onto a third motherboard via a third switch integrated onto the third motherboard;
respective host channel adapters communicatively coupled between each of the at least two first processors and the first switch;
the at least two first processors configured to communicate with the second processors on the second motherboard via the first switch and the second switch; and
the at least two first processors configured to communicate with the third processors via the first switch, the second switch, and the third switch without communicating via either of the second processors.

US Pat. No. 10,769,087

COMMUNICATION MODULE FOR A SECURITY SYSTEM AND METHOD FOR SAME

Response Technologies, Lt...

1. A communication module for a security system, the communication module comprising: a radio interface module configured to interface with a communication port of a half-duplex radio, the radio interface module being configured to receive a voice signal from the half-duplex radio and configured to transmit an audio signal to the half-duplex radio;
a microprocessor module in communication with the radio interface module and configured to receive the voice signal from the radio interface module and configured to transmit the audio signal to the radio interface module, the microprocessor module being further configured to translate the voice signal into text data and to transmit a control signal to the radio interface module for controlling at least one control feature of the half-duplex radio; and
a signal routing module in communication with the microprocessor module and configured to receive the text data from the microprocessor module for transmission to a user of the security system, the signal routing module being further configured to transmit the audio signal to the microprocessor module, wherein the audio signal is derived from at least one detection device and indicates activation of the at least one detection device to a remote user through the half-duplex radio, wherein:
the microprocessor module facilitates selective operation of the half-duplex radio in either of a voice reception mode and a transmit mode; and
when the half-duplex radio receives a digital talk permit tone, the microprocessor module is configured to activate the transmit mode of the half-duplex radio and is configured to control the half-duplex radio based upon the control signal to facilitate transmission of the audio signal from the half-duplex radio to a remote half-duplex radio.

US Pat. No. 10,769,086

RECORDING MEDIUM, ADAPTER, AND INFORMATION PROCESSING APPARATUS

Panasonic Intellectual Pr...

4. An adapter in which a recording medium is inserted and removed, the recording medium including multiple recording units, a local bus, an information storage and a communication bus, the local bus including a plurality of switches or bridges, the local bus distributed into a number of buses such that the multiple recording units connect to the adapter via the plurality of switches or bridges, the information storage storing information indicating a bus configuration of the local bus, the bus configuration including the number of buses, and the communication bus being different from the local bus and configured to transfer the information to the adapter, the adapter comprising: a processor configured to:
set a bus number from a root bus for each of the plurality of switches or bridges to construct a bus configuration, and
acquire the information indicating the bus configuration from the information storage via the communication bus; and
a detector configured to detect insertion and removal of the recording medium to and from the adapter;
wherein, after insertion of the recording medium is detected by the detector, the processor is configured to set a subordinate bus number of each of the plurality of switches or bridges based on the information indicating the bus configuration acquired via the communication bus, including setting the subordinate bus number to the number of buses, and then reconstruct the bus configuration.

US Pat. No. 10,769,085

BUS SYSTEM

Samsung Electronics Co., ...

1. A bus system, comprising: a slave functional block provided with a first bus protector dedicated to the slave functional block;
a master functional block that transmits a first command to the slave functional block;
a bus connecting the master functional block and the slave functional block;
a system manager, separated from the slave functional block and from the first bus protector, that resets the slave functional block when the first command does not exist on the bus and the slave functional block is in a first state of not being able to receive the first command or not being able to transmit a response signal corresponding to the first command, and
a counter that determines whether a number of first commands transmitted to the slave functional block is the same as a number of dummy signals received from the slave functional block,
wherein the first bus protector receives the first command on behalf of the slave functional block and transmits a dummy signal corresponding to the first command to the master functional block, when the slave functional block is in the first state.
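In the claim above, the bus protector answers commands on behalf of a hung slave with dummy signals, and the counter compares commands sent against dummies received to decide whether the slave needs a reset. A hypothetical sketch of that bookkeeping (class and method names are illustrative):

```python
# Hypothetical sketch of the claim's bus-protector behavior: while the
# slave is unresponsive, the protector answers each command with a
# dummy signal; a counter checks commands against dummies.
class BusProtector:
    def __init__(self):
        self.commands_seen = 0
        self.dummies_sent = 0

    def on_command(self, slave_alive):
        self.commands_seen += 1
        if slave_alive:
            return "response"   # slave answers normally
        self.dummies_sent += 1  # protector answers on the slave's behalf
        return "dummy"

    def counts_match(self):
        # System-manager check: every command got a dummy reply,
        # i.e. the slave answered none of them itself.
        return self.commands_seen == self.dummies_sent

bp = BusProtector()
bp.on_command(slave_alive=False)
bp.on_command(slave_alive=False)
print(bp.counts_match())  # True
```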

US Pat. No. 10,769,084

OUT-OF-BAND INTERRUPT MAPPING IN MIPI IMPROVED INTER-INTEGRATED CIRCUIT COMMUNICATION

Intel Corporation, Santa...

1. A host controller comprising: processing circuitry to:
identify an inter-integrated circuit (I2C) out-of-band interrupt (OBI) received on a general purpose input-output (GPIO) pin from an I2C device that is unable to directly generate an improved inter-integrated circuit (I3C) in-band interrupt (IBI) for consumption by one or more I3C devices coupled to an I3C bus, to which the I2C device and the host controller are also coupled; and
generate on the I3C bus, on behalf of the I2C device, based on the I2C OBI, an I3C IBI that includes information related to the I2C OBI, including information indicating to the one or more I3C devices that the I2C OBI is from the I2C device; and
transmission circuitry coupled with the processing circuitry, the transmission circuitry to transmit the I3C IBI on the I3C bus, on behalf of the I2C device, for the consumption by the one or more I3C devices.

US Pat. No. 10,769,083

SOURCE SYNCHRONIZED SIGNALING MECHANISM

INTEL CORPORATION, Santa...

1. An apparatus comprising: an interconnect fabric including:
a plurality of network interfaces for connection of the interconnect fabric to a plurality of nodes, each network interface including a set of one or more synchronizers, and
a plurality of network switches to route data transfers between the plurality of network interfaces;
wherein the interconnect fabric is to provide for source synchronous transfer of data within the interconnect fabric between a plurality of nodes connected to the plurality of network interfaces, the plurality of nodes including a plurality of compute nodes and a global shared memory for the plurality of compute nodes, wherein:
a first network interface of the plurality of network interfaces is to transmit a data signal and a source clock signal from a first node of the plurality of nodes, and
a second network interface of the plurality of network interfaces is to receive the data signal and the source clock signal for a second node of the plurality of nodes, wherein the second network interface is to sample and filter the data signal based on a clock domain associated with the source clock signal.

US Pat. No. 10,769,082

DDR5 PMIC INTERFACE PROTOCOL AND OPERATION

Integrated Device Technol...

1. An apparatus comprising: a host interface configured to receive (i) control words and (ii) a system clock signal from a host; and
a power management interface configured to (i) enable said host to read/write data from/to a power management circuit of a dual in-line memory module, (ii) communicate said data, (iii) generate a clock signal and (iv) communicate an interrupt signal, wherein (a) said power management interface is disabled at power on, (b) said apparatus is configured to (i) decode said control words, (ii) enable said power management interface when said control words provide an enable command and (iii) perform a response to said interrupt signal and (c) said clock signal of said power management interface operates independently from said system clock signal received through said host interface to provide ripple voltage processing without using said system clock signal.

US Pat. No. 10,769,081

COMPUTER PROGRAM PRODUCT, SYSTEM, AND METHOD TO ALLOW A HOST AND A STORAGE DEVICE TO COMMUNICATE BETWEEN DIFFERENT FABRICS

INTEL CORPORATION, Santa...

1. A computer program product including a non-transitory computer readable storage media in communication with nodes, wherein the computer readable storage media includes program code executed by at least one processor in a system to: communicate with an originating node over a first network and communicate with a destination node over a second network;
receive an origination packet from the originating node over the first network to the destination node having a storage device, wherein the origination packet includes a first fabric layer for transport through the first network, a command in a transport protocol with a storage Input/Output (I/O) request, with respect to the storage device at the destination node, and a host memory address at the originating node;
determine a transfer memory address of a location in a memory device to map to the host memory address, wherein the memory device is in the system that is separate from the storage device and wherein the system is in communication with the storage device via the second network;
construct a destination packet including a second fabric layer for transport through the second network and the command in the transport protocol to send the storage I/O request and the transfer memory address, wherein different first and second fabric protocols are used to communicate on the first and second networks, respectively; and
send the destination packet over the second network to the destination node to perform the storage I/O request with respect to the storage device.

US Pat. No. 10,769,080

DISTRIBUTED AND SHARED MEMORY CONTROLLER

Futurewei Technologies, I...

1. A distributed and shared memory controller (DSMC) comprising: a plurality of switches distributed into a series of stages including at least a first stage of switches, a second stage of switches, and a last stage of switches, at least some switches in each stage in the series of stages being connected with one or more switches in a different stage in the series of stages via internal connections, wherein each switch in the first stage of switches connects to a corresponding switch in the second stage of switches of a neighboring building block via outward connections, and each switch in the second stage of switches connects to a corresponding switch in the first stage of switches of a corresponding neighboring building block via inward connections;
a plurality of master ports coupled to the first stage of switches, each switch in the first stage of switches being connected to at least one of the plurality of master ports via master connections;
a plurality of bank controllers with associated memory banks coupled to the last stage of switches, each switch in the last stage of switches connecting to at least one of the plurality of bank controllers via memory connections; and
a command scanner coupled to the plurality of switches, the command scanner configured to scramble a request address of a read or write request received from a master core and to distribute the read or write request, with the scrambled request address, among the plurality of switches.

US Pat. No. 10,769,079

EFFECTIVE GEAR-SHIFTING BY QUEUE BASED IMPLEMENTATION

QUALCOMM Incorporated, S...

1. An apparatus, comprising: a host configured to access a storage device over a connection, the host comprising:
a host PHY configured to communicate with the storage device over the connection in a plurality of communication modes, each communication mode being different from all other communication modes;
a plurality of queues corresponding to the plurality of communication modes applicable to the connection; and
a request controller configured to interface with the host PHY,
wherein the request controller is configured to
receive a plurality of requests targeting the storage device,
determine a communication mode of each request,
send each request to a queue corresponding to the communication mode of that request,
select a queue based on a number of requests in each queue, and
provide the requests of the selected queue to the host PHY for transmission of the provided requests to the storage device by the host PHY over the connection.
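The request controller above bins requests into per-mode queues and selects a queue based on occupancy before handing its requests to the host PHY. A hypothetical sketch of that selection policy, assuming "deepest queue wins" (the claim says only "based on a number of requests in each queue"; mode names like HS-G1 are illustrative):

```python
# Hypothetical sketch of the claim's queue-based gear selection:
# requests are binned per communication mode, and the deepest queue
# is serviced next, amortizing gear shifts across batched requests.
from collections import deque

class RequestController:
    def __init__(self, modes):
        self.queues = {mode: deque() for mode in modes}

    def submit(self, request, mode):
        # Route each request to the queue for its communication mode.
        self.queues[mode].append(request)

    def select_queue(self):
        # Assumed policy: pick the mode whose queue holds the most requests.
        return max(self.queues, key=lambda m: len(self.queues[m]))

    def drain_selected(self):
        # Hand the selected queue's requests to the PHY in one batch.
        mode = self.select_queue()
        batch = list(self.queues[mode])
        self.queues[mode].clear()
        return mode, batch

rc = RequestController(["HS-G1", "HS-G4"])
rc.submit("r1", "HS-G4")
rc.submit("r2", "HS-G4")
rc.submit("r3", "HS-G1")
mode, batch = rc.drain_selected()
print(mode, batch)  # HS-G4 ['r1', 'r2']
```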

US Pat. No. 10,769,078

APPARATUS AND METHOD FOR MEMORY MANAGEMENT IN A GRAPHICS PROCESSING ENVIRONMENT

Intel Corporation, Santa...

1. An apparatus comprising: a first plurality of graphics processing resources to execute graphics commands and process graphics data;
a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory;
a second plurality of graphics processing resources to execute graphics commands and process graphics data; and
a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU, wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU;
wherein a memory transaction originated from either the first or the second MMU includes an identifier (ID) code to identify the originating MMU being the first or the second MMU.

US Pat. No. 10,769,077

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING SYSTEM

AUTONETWORKS TECHNOLOGIES...

1. An information processing apparatus comprising: a memory that is configured to store written information and to store a writing program for writing information to the memory; and
an electronic control unit that is configured to write information to the memory in accordance with the writing program stored in the memory, wherein:
the memory stores a disabling program for disabling overwriting of the information stored in the memory,
the electronic control unit disables overwriting of the information stored in the memory in accordance with the disabling program stored in the memory for disablement when writing of the information executed by the electronic control unit is finished,
the memory stores reference information which is information to be referred to when the electronic control unit executes writing of information, and
the electronic control unit overwrites the reference information stored in the memory with information that is unrelated to the reference information.

US Pat. No. 10,769,076

DISTRIBUTED ADDRESS TRANSLATION IN A MULTI-NODE INTERCONNECT FABRIC

NVIDIA Corporation, Sant...

1. A method, comprising: exporting, by a destination node executing a process associated with an affiliation identifier, a first virtual address range into a first fabric linear address (FLA) range within a FLA space for sharing data residing in one or more local system physical memory pages within the destination node, wherein the exporting is initiated by the process and the first FLA range is mapped by the destination node to the one or more local system physical memory pages comprising a local system physical address (SPA) space for a memory subsystem within the destination node;
configuring, by a source node executing at least one other process that is associated with the affiliation identifier, one or more page table entries within a local translation lookaside buffer (TLB) to map a second virtual address range for the source node to the first FLA range, wherein the second virtual address range is mapped by one TLB entry with a size that is larger than a physical page size at the destination node;
generating, by the source node, a memory access request comprising a virtual address (VA) within the second virtual address range that targets the data;
mapping, by the source node, the VA to a FLA within the first FLA range that targets the data;
composing, by the source node, a remote access request that includes the FLA;
transmitting, by the source node, the remote access request to the destination node;
mapping, by a second TLB within the destination node, the FLA to a guest physical address (GPA) that targets the data; and
mapping, by the destination node, the GPA to a SPA that targets the data within the local SPA space.
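The method above chains three translations: the source node maps VA to FLA through its TLB, and the destination node maps FLA to GPA and then GPA to SPA. A hypothetical sketch of that chain using simple base-and-size range tables (all addresses and range sizes are illustrative):

```python
# Hypothetical sketch of the claim's translation chain:
# VA -> FLA at the source node, then FLA -> GPA -> SPA at the
# destination node. Addresses and range sizes are illustrative.
def translate(addr, table):
    # table: list of (base, size, target_base) mapping ranges.
    for base, size, target_base in table:
        if base <= addr < base + size:
            return target_base + (addr - base)
    raise KeyError(f"no mapping for {addr:#x}")

# Source-node TLB: one large VA range mapped onto the exported FLA range.
va_to_fla = [(0x10000, 0x4000, 0x9000_0000)]
# Destination-node tables: FLA -> GPA (second TLB), then GPA -> SPA.
fla_to_gpa = [(0x9000_0000, 0x4000, 0x2000)]
gpa_to_spa = [(0x2000, 0x4000, 0x7_0000)]

fla = translate(0x10010, va_to_fla)   # source node maps VA to FLA
gpa = translate(fla, fla_to_gpa)      # destination node, second TLB
spa = translate(gpa, gpa_to_spa)      # destination node, GPA -> SPA
print(hex(fla), hex(gpa), hex(spa))   # 0x90000010 0x2010 0x70010
```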

US Pat. No. 10,769,075

STORAGE OF DATABASE DICTIONARY STRUCTURES IN NON-VOLATILE MEMORY

SAP SE, Walldorf (DE)

1. A database system comprising: a volatile random access memory storing first header data, and storing a first data block comprising an array of distinct values of a database table column, and with the first header data comprising a first pointer to the first data block;
a non-volatile random access memory; and
a processing unit to:
determine a memory size associated with the first header data and the first data block;
allocate a first memory block of the non-volatile random access memory based on the determined memory size;
determine an address of the non-volatile random access memory associated with the allocated first memory block; and
write an indicator of the number of distinct values of the array and a binary copy of the first data block at the address of the non-volatile random access memory,
wherein the first data block comprises a first number of logical blocks storing the array of distinct values,
wherein the volatile random access memory further stores a second data block comprising a second number of logical blocks storing a second array of distinct values of a database table column, and a data structure specifying, for each of the logical blocks of the first data block and the second data block, a number of the data block which includes the logical block and an offset at which the logical block is located in the data block,
wherein the first header data comprises a second pointer to the second data block, and a third pointer to the data structure, and
wherein writing of the indicator of the number of distinct values of the array and the binary copy of the first data block at the address of the non-volatile random access memory comprises contiguously writing, from the address of the non-volatile random access memory, descriptive information of the first data block, a number of alignment bits, the binary copy of the first data block, descriptive information of the second data block, a second number of alignment bits, a binary copy of the second data block, descriptive information of the data structure, a third number of alignment bits, and a binary copy of the data structure.
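The contiguous layout recited in the final clause (descriptive information, alignment bits, then a binary copy, repeated per block) can be sketched as follows; the 8-byte alignment boundary and the descriptor contents are assumptions chosen purely for illustration:

```python
# Hypothetical sketch of the claim's contiguous write layout: for each
# block, append its descriptive information, alignment padding, then a
# binary copy of the block itself, all starting from one base address.

ALIGN = 8  # assumed alignment boundary

def pad_to(buf, align):
    """Append zero bytes so the next write starts on an alignment boundary."""
    return buf + b"\x00" * ((-len(buf)) % align)

def serialize(blocks):
    out = b""
    for descr, data in blocks:
        out += descr                 # descriptive information of the block
        out = pad_to(out, ALIGN)     # alignment bits before the binary copy
        out += data                  # binary copy of the block
    return out

# First data block, second data block, then the offset data structure.
img = serialize([(b"D1:5", b"AAAA"), (b"D2:3", b"BB"), (b"DS", b"CCC")])
```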

US Pat. No. 10,769,074

COMPUTER MEMORY CONTENT MOVEMENT

MICROSOFT TECHNOLOGY LICE...

1. An apparatus comprising:
a processor; and
a memory storing machine readable instructions that when executed by the processor cause the processor to:
ascertain a request associated with content of computer memory;
determine whether the request is directed to the content that is to be moved from a source of the computer memory to a destination of the computer memory by
determining, based on an analysis of a map page table, whether the request is directed to the content that is to be moved from the source to the destination, wherein the map page table includes an indication of whether the content is located at the source, is located at the destination, or is to be moved from the source to the destination;
based on a determination that the request is directed to the content that is to be moved from the source to the destination, initiate a reflective copy operation to begin a reflective copy of the content from the source to the destination, and determine whether the content is at the source, is in a process of being moved from the source to the destination, or has been moved from the source to the destination;
based on a determination that the content is at the source, perform the request associated with the content using the source;
update, at the beginning of the reflective copy, the map page table to direct guest access to an address of a controller associated with the reflective copy operation, and based on a determination that the content is in the process of being moved from the source to the destination, perform, using the controller, the request associated with the content using the source;
modify, upon completion of the reflective copy operation, the map page table to direct the guest access from the address of the controller associated with the reflective copy operation to the destination, and based on a determination that the content has been moved from the source to the destination, perform the request associated with the content using the destination; and
reset the controller associated with the reflective copy operation to begin a new reflective copy operation.
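The three content states the map page table distinguishes (at the source, being moved, at the destination) amount to a small state machine; a minimal sketch, with state names and routing strings invented for illustration:

```python
# Minimal model of the claim's map-page-table states: requests are served
# from the source, via the copy controller while the reflective copy is in
# flight, or from the destination once the copy completes.

AT_SOURCE, MOVING, AT_DESTINATION = "source", "moving", "destination"

class MapPageTable:
    def __init__(self):
        self.state = AT_SOURCE

    def begin_reflective_copy(self):
        # Direct guest access to the controller for the duration of the copy.
        self.state = MOVING

    def complete_reflective_copy(self):
        # Redirect guest access from the controller to the destination.
        self.state = AT_DESTINATION

    def route(self):
        """Return where a request for the content should be performed."""
        if self.state == AT_SOURCE:
            return "source"
        if self.state == MOVING:
            return "controller->source"  # controller serves it from the source
        return "destination"

t = MapPageTable()
assert t.route() == "source"
t.begin_reflective_copy()
assert t.route() == "controller->source"
t.complete_reflective_copy()
```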

US Pat. No. 10,769,073

BANDWIDTH-BASED SELECTIVE MEMORY CHANNEL CONNECTIVITY ON A SYSTEM ON CHIP

Qualcomm Incorporated, S...

1. A system for managing memory channel connectivity, the system comprising:
a high-bandwidth memory client electrically coupled to each of a plurality of memory channels via an interconnect;
a low-bandwidth memory client electrically coupled to only a portion of the plurality of memory channels via the interconnect; and
an address translator in communication with the high-bandwidth memory client and configured to perform physical address manipulation when a memory page to be accessed by the high-bandwidth memory client is shared with the low-bandwidth memory client.

US Pat. No. 10,769,072

AVOID CACHE LOOKUP FOR COLD CACHE

INTEL CORPORATION, Santa...

11. A method comprising:
receiving a cache access request from at least one of a plurality of graphics processing cores, wherein the cache access request comprises a cache set identifier associated with requested data in a cache set; and
terminating the cache access request in response to a determination that the cache set associated with the cache set identifier is in an inaccessible state or an invalid state and that a read-modify-write (RMW) pipeline is empty or includes only read accesses.
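The early-termination condition in the claim is a simple predicate: skip the lookup when the target set is cold and nothing in the read-modify-write pipeline could still make it valid. A sketch, with state names and the pipeline representation assumed:

```python
# Terminate a cache access without a lookup when the target set is in an
# inaccessible or invalid state AND the RMW pipeline is empty or contains
# only read accesses (so no in-flight write could populate the set).

def should_terminate(set_state, rmw_pipeline):
    cold = set_state in ("inaccessible", "invalid")
    no_pending_writes = all(op == "read" for op in rmw_pipeline)
    return cold and no_pending_writes

assert should_terminate("invalid", []) is True          # empty pipeline
assert should_terminate("invalid", ["read"]) is True    # reads only
assert should_terminate("invalid", ["write"]) is False  # write in flight
assert should_terminate("valid", []) is False           # warm set: do lookup
```

Note that `all()` over an empty pipeline is vacuously true, which matches the claim's "empty or includes only read accesses" wording.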

US Pat. No. 10,769,071

COHERENT MEMORY ACCESS

Micron Technology, Inc., ...

1. An apparatus, comprising:
a memory array;
a first processing resource;
a first cache line and a second cache line coupled to the memory array;
a first cache controller coupled to the first processing resource and to the first cache line and configured to provide coherent access to data stored in the second cache line and corresponding to a memory address; and
a second cache controller coupled through an interface to a second processing resource external to the apparatus and coupled to the second cache line and configured to provide coherent access to the data stored in the first cache line and corresponding to the memory address,
wherein coherent access is provided using a first cache line address register of the first cache controller which stores the memory address and a second cache line address register of the second cache controller which also stores the memory address, and
wherein coherent access is provided by, responsive to determining that the memory address is stored in the second cache line address register, locking the second cache line prior to copying the data from the second cache line to the first cache line.

US Pat. No. 10,769,070

MULTIPLE STRIDE PREFETCHING

Arm Limited, Cambridge (...

1. Apparatus comprising:
data loading circuitry to retrieve data values from addresses specified by load instructions for storage in a storage component;
prefetching circuitry to receive the addresses specified by the load instructions and to cause the data loading circuitry to retrieve a further data value from a further address before the further address is received, wherein the prefetching circuitry comprises:
stride determination circuitry to determine a stride value as a difference between a current address and a previously received address, the stride determination circuitry comprising stride sequence determination circuitry to determine a plurality of stride values corresponding to a sequence of received addresses;
multiple stride storage circuitry to store the plurality of stride values determined by the stride determination circuitry;
cumulative stride determination circuitry to determine at least one cumulative stride value as a sum of at least two of the plurality of stride values stored in the multiple stride storage circuitry;
new address comparison circuitry to determine whether the current address corresponds to a matching stride value based on the plurality of stride values stored in the multiple stride storage circuitry, wherein the new address comparison circuitry is responsive to reception of the current address to determine whether the at least one cumulative stride value is the matching stride value; and
prefetch initiation circuitry to cause the further data value to be retrieved from the further address, wherein the further address is the current address modified by the matching stride value of the plurality of stride values.
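The interplay of stored strides and cumulative strides in the claim can be modeled in software; a behavioral sketch (not the patent's circuitry) with an assumed storage depth of four strides:

```python
# Behavioral model of multiple-stride prefetching: keep the last few
# stride values, also form cumulative strides (sums of the most recent
# stored strides), and when the newest stride matches any candidate,
# prefetch from the current address modified by the matching stride.

from collections import deque

class MultiStridePrefetcher:
    def __init__(self, depth=4):
        self.prev = None
        self.strides = deque(maxlen=depth)   # multiple stride storage

    def observe(self, addr):
        """Return a prefetch address for a demand address, or None."""
        prefetch = None
        if self.prev is not None:
            stride = addr - self.prev         # stride determination
            candidates = set(self.strides)
            total = 0
            for s in reversed(self.strides):  # cumulative stride values
                total += s
                candidates.add(total)
            if stride in candidates:          # new address comparison
                prefetch = addr + stride      # further address to fetch
            self.strides.append(stride)
        self.prev = addr
        return prefetch

p = MultiStridePrefetcher()
for a in (0, 8, 32):
    p.observe(a)                 # warm-up: strides 8 and 24 recorded
assert p.observe(40) == 48       # stride 8 matches a stored stride
```

The cumulative-stride set lets a single large jump (e.g. 32 = 8 + 24) match the sum of two stored strides, which a single-stride prefetcher would miss.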

US Pat. No. 10,769,069

PREFETCHING IN DATA PROCESSING CIRCUITRY

Arm Limited, Cambridge (...

1. Data processing circuitry comprising:
a cache memory to cache a subset of data elements from a main memory;
a processing element to execute program code to access data elements having respective memory addresses, the processing element being configured to access the data elements in the cache memory and, in the case of a cache miss, to fetch the data elements from the main memory;
prefetch circuitry, responsive to an access to a current data element, to initiate prefetching into the cache memory of a data element at a memory address defined by a current offset value relative to the address of the current data element;
offset value selection circuitry comprising:
an address table to store memory addresses for which a data element accessed by the processing element resulted in a cache miss or an access to a previously prefetched data element;
detector circuitry to detect, for each of a group of candidate offset values, one or more respective metrics representing a proportion of a set of data element accesses which resulted in a cache miss or an access to a previously prefetched data element, for which the memory address for that data element access differs by the candidate offset value from a memory address in the address table;
in which the detector circuitry is configured to set a next instance of the offset value in response to the one or more detected metrics;
verification circuitry to detect, for the current offset value while the prefetch circuitry initiates prefetching using the current offset value, at one or more predetermined stages with respect to the processing of the group of candidate offset values by the offset value selection circuitry to generate the next instance of the offset value, one or more verification metrics representing a proportion of a set of data element accesses which resulted in a cache miss or an access to a previously prefetched data element, for which the memory address for that data element access differs by the current offset value from a memory address in the address table, to detect whether the one or more verification metrics comply with a predetermined condition; and
control circuitry to inhibit prefetching at least until generation of the next instance of the offset value by the offset value selection circuitry, in response to a detection by the verification circuitry that the one or more verification metrics do not comply with the predetermined condition.

US Pat. No. 10,769,068

CONCURRENT MODIFICATION OF SHARED CACHE LINE BY MULTIPLE PROCESSORS

INTERNATIONAL BUSINESS MA...

1. A computer program product for facilitating processing within a computing environment, the computer program product comprising:
at least one computer readable storage medium readable by at least one processing circuit and storing instructions for performing a method comprising:
obtaining, from a plurality of processors of the computing environment, a plurality of store requests to store to a shared cache line, the plurality of store requests being of a concurrent store type in which non-exclusive access to the shared cache line for storing data by the plurality of processors is being requested; and
storing concurrently, based on the plurality of store requests, data to the shared cache line, wherein the plurality of processors concurrently maintain write access to the shared cache line, and wherein the storing concurrently comprises storing the data directly to the shared cache line absent storing the data in one or more private caches of the plurality of processors and absent inspection by the plurality of processors of content within the shared cache line being updated by the data.

US Pat. No. 10,769,067

CACHE LINE CONTENTION

Arm Limited, Cambridge (...

1. An apparatus comprising:
a cache interconnect comprising snoop circuitry to store a table containing an entry, for each of a plurality of cache lines, comprising a cache line identifier, an indication of a most recent processing element of a plurality of processing elements associated with the cache interconnect to access the cache line, and an indication of a data item in the cache line which was identified by the most recent processing element to be accessed;
wherein the snoop circuitry is responsive to a request from a requesting processing element of the plurality of processing elements, the request identifying a requested data item, to determine a requested cache line identifier corresponding to the requested data item, and
wherein the snoop circuitry is responsive to determining that the requested cache line identifier is stored in an identified entry in the table to provide, based on the requested data item and the indication of the data item in the table, an identity indication as to whether the requested data item is the same as the data item in the identified entry,
wherein the snoop circuitry further comprises a false sharing counter, and the snoop circuitry is responsive to the identity indication showing that the requested data item is not the same as the data item in the entry for the requested cache line to increment a value held by the false sharing counter.
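The snoop table and false-sharing counter described above reduce to a small bookkeeping structure; a sketch with an assumed data layout:

```python
# Per cache line, the snoop table remembers the most recent accessing
# processing element (PE) and which data item in the line it touched.
# Per the claim, the false-sharing counter increments when a request for
# the same line targets a DIFFERENT data item than the one in the entry.

class SnoopTable:
    def __init__(self):
        self.table = {}            # line_id -> (last accessor PE, last item)
        self.false_sharing = 0     # false sharing counter

    def request(self, pe, line_id, item):
        entry = self.table.get(line_id)
        if entry is not None and entry[1] != item:
            # Contention on the line, but not on the same data item:
            # the sharing is "false".
            self.false_sharing += 1
        self.table[line_id] = (pe, item)

s = SnoopTable()
s.request(pe=0, line_id=0x40, item=0)   # first access: no entry yet
s.request(pe=1, line_id=0x40, item=1)   # same line, different item
s.request(pe=0, line_id=0x40, item=1)   # same item: true sharing, no count
```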

US Pat. No. 10,769,066

NONVOLATILE MEMORY DEVICE, DATA STORAGE DEVICE INCLUDING THE SAME AND OPERATING METHOD THEREOF

SK hynix Inc., Gyeonggi-...

1. A nonvolatile memory device comprising:
a plurality of dies each configured to store mapping information of logical block addresses which are previously assigned,
wherein when a composite read command and location information indicating where mapping information of a logical block address, of the logical block addresses, is stored are received from a controller, a target die corresponding to the logical block address among the plurality of dies performs a first operation of translating the logical block address to a physical block address based on the location information, and a second operation of reading user data stored in a region of the translated physical block address and outputting the read user data to the controller.

US Pat. No. 10,769,065

SYSTEMS AND METHODS FOR PERFORMING MEMORY COMPRESSION

Apple Inc., Cupertino, C...

1. An apparatus comprising:
a table comprising a plurality of entries; and
compression circuitry comprising a plurality of hardware lanes, wherein in response to receiving an indication of a compression instruction, the compression circuitry is configured to:
assign a first group of two or more input words to the plurality of hardware lanes;
responsive to determining at least a first input word and a second input word of the first group of two or more input words correspond to a same entry of the table, generate for the first input word and the second input word:
a single read request for the table; and
a single write request for the table; and
generate a compression packet for each of the first input word and the second input word.
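The access-coalescing idea in the claim (two words hitting the same table entry share one read and one write, yet each still yields its own packet) can be sketched as follows; the dictionary-style compression, the index function, and the packet format are assumptions for illustration:

```python
# When two input words assigned to parallel lanes index the same table
# entry, issue a single read and a single write for the entry, but still
# emit one compression packet per input word.

def compress_group(words, table_index, table):
    """Process a group of words, coalescing table accesses per entry."""
    reads, writes, packets = 0, 0, []
    by_entry = {}
    for w in words:                       # lane assignment by table entry
        by_entry.setdefault(table_index(w), []).append(w)
    for entry, ws in by_entry.items():
        value = table.get(entry)          # single read request per entry
        reads += 1
        for w in ws:
            kind = "match" if value == w else "literal"
            packets.append((kind, w))     # one packet per input word
        table[entry] = ws[-1]             # single write request per entry
        writes += 1
    return reads, writes, packets

table = {}
r, w, pk = compress_group([0xAA, 0xAA], lambda x: x % 8, table)
```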

US Pat. No. 10,769,064

METHOD FOR RETRIEVING KEY VALUE PAIRS AND GROUPS OF KEY VALUE PAIRS

Pliops Ltd, Tel Aviv (IL...

10. A non-transitory computer readable medium that stores instructions that once executed by a system that comprises a solid-state drive (SSD) memory controller, causes the system to perform the steps of:
receiving, by the SSD memory controller, an input key;
applying, by the SSD memory controller, a first hash function on an input key to provide a first address;
reading, using the first address, an indicator that indicates whether (a) the input key is associated with a group of key-value pairs or whether (b) the input key is associated only with the key-value pair;
retrieving, from a SSD memory, the values of the group associated with the input key, and extracting the value of the key-value pair, when the indicator indicates that the input key is associated with the group; and
retrieving, from the SSD memory, a value of a key-value pair, when the indicator indicates that the input key is associated with the key-value pair.
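The indicator-driven lookup in the claim can be modeled with a toy store; the hash function, table layout, and handling are assumptions (hash collisions are ignored in this toy):

```python
# A first hash of the input key yields an address whose indicator says
# whether the key belongs to a group of key-value pairs or to a single
# pair; that determines what is fetched and how the value is extracted.

def first_hash(key, buckets=16):
    # Deterministic toy hash; a real controller would use a stronger one.
    return sum(key.encode()) % buckets

class KVStore:
    def __init__(self, buckets=16):
        self.indicator = [None] * buckets   # 'single' or 'group' per address
        self.values = {}                    # single key-value pairs
        self.groups = {}                    # key -> group of key-value pairs

    def put_single(self, key, value):
        self.indicator[first_hash(key)] = "single"
        self.values[key] = value

    def put_group(self, key, group):
        self.indicator[first_hash(key)] = "group"
        self.groups[key] = group

    def get(self, key):
        ind = self.indicator[first_hash(key)]   # read using the first address
        if ind == "group":
            group = self.groups[key]            # retrieve the whole group
            return group[key]                   # extract the pair's value
        return self.values[key]                 # single key-value pair

store = KVStore()
store.put_single("a", 1)
store.put_group("b", {"b": 2, "c": 3})
```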

US Pat. No. 10,769,063

SPIN-LESS WORK-STEALING FOR PARALLEL COPYING GARBAGE COLLECTION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for object copying in a computer performing parallel copying garbage collection on deques using work-stealing, the method comprising:
acquiring, for original objects in a source deque space, a destination deque space to copy the original objects to;
copying, from the source deque space to the destination deque space, any of the original objects in the source deque space having a reference to other ones of the original objects;
registering, together with an address to copy to, any of the original objects in the source deque space lacking the reference to the other ones of the original objects; and
setting, in the source deque space, forwarding pointers to copied ones of the original objects in the destination deque space.

US Pat. No. 10,769,062

FINE GRANULARITY TRANSLATION LAYER FOR DATA STORAGE DEVICES

Western Digital Technolog...

1. A Data Storage Device (DSD), comprising:
a non-volatile memory configured to store data; and
control circuitry configured to:
receive a memory access command from a host to access data in the non-volatile memory;
identify a location in the non-volatile memory for performing the memory access command using an Address Translation Layer (ATL) that has a finer logical-to-physical granularity than a logical-to-physical granularity of a logical block-based file system executed by the host; and
access the non-volatile memory at the identified location to perform the memory access command.
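The granularity contrast in the claim (a host file system addressing coarse logical blocks, a drive-side ATL mapping finer chunks) can be sketched as follows; the 4 KiB and 512-byte sizes are assumptions chosen for illustration:

```python
# The host file system addresses 4 KiB logical blocks, while the drive's
# Address Translation Layer (ATL) maps finer 512-byte chunks to physical
# locations, so parts of one host block can live in different places.

FS_BLOCK = 4096      # host file-system logical block size (assumed)
ATL_CHUNK = 512      # finer ATL mapping granularity (assumed)

class FineGrainATL:
    def __init__(self):
        self.l2p = {}    # logical chunk index -> physical location

    def write(self, lba, data):
        """Map each 512-byte chunk of a host block independently."""
        base = lba * (FS_BLOCK // ATL_CHUNK)
        for i in range(0, len(data), ATL_CHUNK):
            self.l2p[base + i // ATL_CHUNK] = ("nvm", len(self.l2p))

    def locate(self, lba, offset):
        """Identify the physical location serving byte `offset` of block `lba`."""
        return self.l2p[lba * (FS_BLOCK // ATL_CHUNK) + offset // ATL_CHUNK]

atl = FineGrainATL()
atl.write(0, b"\x00" * FS_BLOCK)   # one host block -> eight ATL mappings
```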

US Pat. No. 10,769,061

MEMORY SYSTEM WITH READ RECLAIM DURING SAFETY PERIOD

SK hynix Inc., Gyeonggi-...

1. A memory system, comprising:
a buffer suitable for buffering victim block information;
a queue suitable for queuing the victim block information;
a scheduling unit suitable for detecting a read reclaim safety period and generating a trigger signal;
a queue management unit suitable for detecting a remaining capacity of the queue during the safety period;
a buffer management unit suitable for queuing, in the queue, as much of the buffered victim block information as the remaining capacity of the queue permits during the safety period; and
an execution unit suitable for performing a read reclaim operation based on the queued victim block information during the safety period.
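The cooperation of the buffer, queue, and the capacity check during the safety period can be sketched as follows; the queue capacity and block names are invented, and the unit boundaries are collapsed into one function for illustration:

```python
# During a read-reclaim safety period: detect the queue's remaining
# capacity, move only that many buffered victim-block entries into the
# queue, then perform read reclaim on the queued entries.

from collections import deque

QUEUE_CAPACITY = 4  # assumed queue depth

def on_safety_period(buffer, queue):
    """Triggered by the scheduling unit at the start of a safety period."""
    remaining = QUEUE_CAPACITY - len(queue)    # queue management unit
    for _ in range(min(remaining, len(buffer))):
        queue.append(buffer.popleft())         # buffer management unit
    reclaimed = list(queue)                    # execution unit: read reclaim
    queue.clear()
    return reclaimed

buf = deque(["blk7", "blk9", "blk12", "blk3", "blk5", "blk8"])
q = deque(["blk1"])                            # one entry already queued
done = on_safety_period(buf, q)
```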

US Pat. No. 10,769,060

STORAGE SYSTEM AND METHOD OF OPERATING THE SAME

SK hynix Inc., Gyeonggi-...

1. A method of operating a storage system, comprising:
outputting, by a host system, a command for reading address mapping data, pieces of which correspond to first to (n−1)-th memory systems, the address mapping data being stored in a nonvolatile memory device of an n-th memory system, where n is a natural number of 3 or more;
outputting, in a first transmission operation, the address mapping data from the n-th memory system and inputting the address mapping data to the host system in response to the command; and
outputting, in a second transmission operation, the address mapping data from the host system and inputting the address mapping data to respective controller buffer memories of the first to (n−1)-th memory systems,
wherein each of the first to n-th memory systems comprises a controller buffer memory and a nonvolatile memory device.

US Pat. No. 10,769,059

APPARATUS AND SYSTEM FOR OBJECT-BASED STORAGE SOLID-STATE DEVICE

1. An apparatus comprising:
a storage physical layer interface;
a processor for processing a first set of algorithms for physical storage device management interleaved with a second set of algorithms for OSD; and
at least one solid-state non-volatile memory device.

US Pat. No. 10,769,058

GENERIC SERVERLESS PERFORMANCE TESTING FRAMEWORK

Capital One Services, LLC...

1. A computing device, comprising:
a processor circuitry;
a storage device; and
logic, wherein at least a portion of the logic is implemented in the processor circuitry coupled to the storage device and is configured to:
receive, from a client device, login information for a user;
authenticate the user based on the received login information;
receive, from the client device, an indication of application code to be tested within a serverless computing framework, wherein the application code is associated with an application server;
receive, from the client device, one or more parameters for testing of the application code, wherein the one or more parameters comprise a number of simulated users for the testing of the application code, a duration of the testing of the application code, a type of the testing of the application code, and a number of test simulations for the testing of the application code;
generate a performance test execution file based on the application code and the one or more parameters specifying the testing of the application code;
determine and assign one or more serverless functions for implementing the testing of the application code based on the performance test execution file;
execute the one or more serverless functions to implement the testing of the application code on a cloud-based computing resource;
receive, from the client device, an input modifying the one or more parameters during the testing of the application code;
execute the one or more serverless functions to complete the testing of the application code on the cloud-based computing resource according to the modified one or more parameters;
generate one or more test outputs based on the execution of the one or more serverless functions to complete the testing of the application code;
store the one or more test outputs within a memory storage repository; and
stream, in real-time, a portion of the one or more test outputs to a dashboard visualization tool subsequent to executing the one or more serverless functions to complete the testing of the application code.

US Pat. No. 10,769,057

IDENTIFYING POTENTIAL ERRORS IN CODE USING MACHINE LEARNING

International Business Ma...

1. A method for identifying potential errors in a software product after it is built but prior to release, the method comprising:
identifying negative log reports of previously-built software products, wherein said negative log reports contain errors in code in connection with building said previously-built software products;
vectorizing language of said identified negative log reports of previously-built software products;
storing said vectorized negative log reports;
vectorizing language of a build log report for a software product to form a vectorized log report upon completion of build of said software product;
comparing said vectorized log report with said stored vectorized negative log reports; and
halting release of said software product in response to identifying a stored vectorized negative log report within a threshold degree of distance of said vectorized log report.
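The vectorize-compare-halt pipeline in the claim can be modeled with a toy bag-of-words vectorizer; the vocabulary, distance metric, and threshold are all assumptions (the patent leaves the vectorization method open):

```python
# Vectorize a new build log, compare it against stored vectors of known
# negative (error-containing) logs, and block the release if any stored
# vector lies within a distance threshold.

import math
from collections import Counter

def vectorize(log, vocab):
    """Bag-of-words vector over a fixed vocabulary (toy vectorizer)."""
    counts = Counter(log.lower().split())
    return [counts[w] for w in vocab]

def distance(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

VOCAB = ["error", "undefined", "symbol", "ok", "passed"]   # assumed
negative_logs = [vectorize("error undefined symbol", VOCAB)]

def release_allowed(build_log, threshold=1.0):
    v = vectorize(build_log, VOCAB)
    return all(distance(v, neg) > threshold for neg in negative_logs)
```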

US Pat. No. 10,769,056

SYSTEM FOR AUTONOMOUSLY TESTING A COMPUTER SYSTEM

The Ultimate Software Gro...

1. A system, comprising:
a memory that stores instructions; and
a processor that executes the instructions to perform operations, the operations comprising:
parsing data obtained from a source to generate parsed data;
extracting a source concept from the parsed data;
updating, based on the source concept extracted from the parsed data, a model from a set of agglomerated models to update the set of agglomerated models;
interacting with an application under evaluation by the system via an input;
receiving an output from the application under evaluation based on the interacting conducted with the application under evaluation via the input;
updating, by utilizing the output, the set of agglomerated models; and
recursively updating the set of agglomerated models over time as further data is obtained, as further interacting is conducted with the application under evaluation, as interacting is conducted with another application under evaluation, or a combination thereof.

US Pat. No. 10,769,055

DYNAMICALLY REVISING AN IN-PROCESS BUILD

Red Hat Israel, Ltd., Ra...

1. A method comprising:
receiving, by a build system executing on a computing device comprising a processor device, a build configuration comprising information that defines a plurality of successive stages, each stage comprising at least one step, and one or more of the stages comprising a plurality of successive steps, the build configuration defining a build process that, when completed, alters a state of a storage device, wherein the build configuration defines at least one of:
a compile stage comprising a copying step configured to copy a source file to a destination, and a compilation step configured to generate a new executable file from the source file;
a provisioning stage comprising a copying step configured to implement the executable file in a test environment; or
a testing stage comprising an initiation step configured to initiate the executable file as a running process, and an input step to provide the running process with a plurality of predetermined inputs;
initiating, by the build system, a build process sequence on the build configuration;
receiving, by the build system after initiating the build process sequence and before the build process sequence terminates, notification of a desire to add a revision to a particular stage of the plurality of stages defined in the build configuration;
making a determination that performance of the particular stage has not begun;
in response to the determination, incorporating the revision into the particular stage defined in the build configuration; and
continuing the build process after incorporating the revision into the particular stage defined in the build configuration.

US Pat. No. 10,769,054

INTEGRATED PROGRAM CODE MARKETPLACE AND SERVICE PROVIDER NETWORK

Amazon Technologies, Inc....

1. A non-transitory computer-readable storage medium having computer-executable instructions stored thereupon which, when executed by a computer, cause the computer to:
receive program code and an associated execution environment definition, the associated execution environment definition defining a suitable execution environment for the program code by providing configuration information for the suitable execution environment;
perform one or more first tests on the program code;
determine that the program code passed the one or more first tests;
include the program code in a program code marketplace;
receive a request from a user to deploy the program code from the program code marketplace to a service provider network;
in response to receiving the request, utilize the associated execution environment definition to create the suitable execution environment for the program code in the service provider network;
deploy the program code from the program code marketplace to the suitable execution environment in the service provider network;
cause the program code to execute in the suitable execution environment created in the service provider network;
perform one or more second tests of the program code in the suitable execution environment created in the service provider network;
determine that the program code in the suitable execution environment passed the one or more second tests; and
provide, to the user, access to the program code in the suitable execution environment.

US Pat. No. 10,769,053

METHOD AND SYSTEM FOR PERFORMING USER INTERFACE VERIFICATION OF A DEVICE UNDER TEST

HCL Technologies Limited,...

1. A method of performing User interface (UI) verification of a Device Under Test (DUT) characterized by correcting orientation of a captured image associated to the UI, the method comprising:
positioning a corner marker, of a set of corner markers, at each corner of a display frame associated to the DUT;
receiving, by a processor, a DUT image of a UI pertaining to a DUT, wherein the DUT image is captured by an image capturing unit;
correcting, by the processor, orientation of the DUT image by determining an orientation correction factor, wherein the orientation correction factor is determined by,
verifying whether each corner marker, positioned at respective corners of the display frame, is present in the DUT image,
zooming at least one of IN and OUT a focus of the image capturing unit in a manner such that each corner marker is present in the DUT image, wherein the focus is zoomed when at least one corner marker, of the set of corner markers, is absent in the DUT image captured by the image capturing unit,
aligning the DUT image based on the set of corner markers,
verifying a content and a location of the content present in the DUT image upon referring to a DUT configuration file wherein the DUT configuration file comprises metadata associated to the content, and
determining whether the DUT image is occupying the content greater than a predefined threshold percentage of content present in the UI of the DUT; and
storing, by the processor, the orientation correction factor in a pre-configuration file when the DUT image is occupying content greater than the predefined threshold percentage, wherein the orientation correction factor is to be referred while testing a UI of the DUT.

US Pat. No. 10,769,052

APPLICATION ARRANGEMENT METHOD AND SYSTEM

HITACHI, LTD., Tokyo (JP...

1. An application arrangement system comprising:
one or more volume drivers that are executed in one or more servers that execute one or more container engines, wherein
a first volume that is provided from a storage system including one or more storage devices and that is used in execution of an application, is associated with a first container that executes the application on a first container engine which is a container engine in a first server,
a first volume driver embeds a volume ID of the first volume in a container image created by imaging of the first container, and the volume ID is the ID of a volume and an ID according to information acquired from the storage system with respect to the volume,
a second volume driver searches, in the storage system, for a volume to be associated with a target second container among one or more second containers, by using the volume ID embedded in the container image outputted from the first server and inputted into the second server,
the second server is the first server or another server among the one or more servers separated from the first server,
the first volume driver is a volume driver in the first server, among the one or more volume drivers,
the second volume driver is a volume driver in the second server, and is the first volume driver or another volume driver separated from the first volume driver, among the one or more volume drivers,
each of the one or more second containers is a container that executes the application on a second container engine,
the second container engine is the first container engine or another container engine separated from the first container engine, among the one or more container engines, and
an arrangement manager that is executed in a management server coupled to the one or more servers, wherein
the arrangement manager
detects a failure in the application by monitoring the first container,
transmits, in response to detection of the failure, an image creation command for imaging the first container for the application in which the failure has occurred, to the first container engine such that the container image is created by the first container engine in response to the image creation command,
receives a notification from a container registry having registered therein the container image outputted from the first server, and
transmits, in response to the notification, a command for inputting the container image, to the second container engine such that the container image is inputted from the container registry to the second server by the second container engine, in response to the input command.

US Pat. No. 10,769,051

METHOD AND SYSTEM TO DECREASE MEASURED USAGE LICENSE CHARGES FOR DIAGNOSTIC DATA COLLECTION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
executing, by one or more computing resources, one or more software products associated with a measured usage pricing model;
receiving, by the one or more computing resources during execution of the one or more software products, an indication to execute a set of diagnostic machine instructions related to execution of the one or more software products;
executing, by the one or more computing resources responsive to receiving the indication, the set of diagnostic machine instructions, wherein execution of the set of diagnostic machine instructions includes offloading all diagnostic instructions related to execution of the one or more software products during the execution of the one or more software products to a specialty engine;
calculating monetary usage fees for usage of the one or more computing resources based on the measured usage pricing model, wherein the monetary usage fees comprise usage fees that correspond to a first amount of the one or more computing resources used in performing said executing the one or more software products, and wherein the calculating of the monetary usage fees excludes a second amount of the one or more computing resources used by the specialty engine in performing said executing the set of diagnostic machine instructions.
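The fee arithmetic of this claim can be sketched in a few lines: usage on general-purpose resources is billed, while the cycles the diagnostics consume on the specialty engine are excluded. The function name, units, and rate below are illustrative assumptions, not from the patent.

```python
def calculate_usage_fees(general_cpu_seconds, specialty_cpu_seconds, rate_per_second):
    """Bill only general-purpose usage (the 'first amount'); diagnostic work
    offloaded to the specialty engine (the 'second amount') is excluded."""
    billable_fee = general_cpu_seconds * rate_per_second
    excluded_fee = specialty_cpu_seconds * rate_per_second  # never charged
    return billable_fee, excluded_fee

# 1000 s of product execution billed; 250 s of offloaded diagnostics excluded.
fees, avoided = calculate_usage_fees(1000.0, 250.0, 0.02)
```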

US Pat. No. 10,769,050

MANAGING AND MAINTAINING MULTIPLE DEBUG CONTEXTS IN A DEBUG EXECUTION MODE FOR REAL-TIME PROCESSORS

TEXAS INSTRUMENTS INCORPO...

1. A method of debugging program code executing on a processor configured with a debug controller performing a debug monitor function, the program code including a plurality of code sections, each code section associated with an execution priority level, the method comprising:executing a first code section having a first execution priority on the processor while allowing interrupting code sections of a higher execution priority than the first execution priority to pre-empt execution of the first code section for a period of execution to complete each interrupting code section;
recognizing a first breakpoint in the first code section at the debug controller;
pausing execution of the processor at the first breakpoint;
allowing the debug controller to process functions of the debug monitor while at the first breakpoint;
obtaining information representative of a first debug context at the debug monitor, the first debug context associated with register values attributable to the first code section at the first breakpoint;
receiving an indication that a second code section, of a higher execution priority than the first execution priority, attempts to pre-empt execution of the first code section while at the first breakpoint of the first code section;
allowing the processor to execute for a period of execution to complete the second code section while maintaining the first debug context of the debug monitor associated with the first breakpoint of the first code section;
recognizing a second breakpoint in the second code section at the debug controller;
pausing execution of the processor at the second breakpoint;
storing information of the first debug context on a stack;
allowing the debug controller to process functions of the debug monitor while at the second breakpoint;
obtaining information representative of a second debug context at the debug monitor;
receiving an indication to continue execution from the second breakpoint;
pausing execution of the processor upon return from the second code section to the first code section; and
returning control to the debug controller performing the debug monitor function in the first debug context upon completion of the second code section.
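The nesting behavior claimed here (a higher-priority section pre-empts a paused section, hits its own breakpoint, and the outer debug context is later restored) maps naturally onto a stack. This is a minimal sketch with illustrative names; the claim does not specify a data layout.

```python
class DebugMonitor:
    """Sketch of stacked debug contexts: push the current context when a
    nested breakpoint is entered, pop it when the nested section completes."""

    def __init__(self):
        self.stack = []       # saved contexts of pre-empted code sections
        self.current = None   # context at the currently active breakpoint

    def hit_breakpoint(self, registers):
        # Entering a (possibly nested) breakpoint: save the outer context first.
        if self.current is not None:
            self.stack.append(self.current)
        self.current = dict(registers)

    def return_from_section(self):
        # The higher-priority section completed: restore the outer context.
        self.current = self.stack.pop() if self.stack else None
        return self.current

mon = DebugMonitor()
mon.hit_breakpoint({"pc": 0x100, "r0": 1})   # first breakpoint (first context)
mon.hit_breakpoint({"pc": 0x200, "r0": 2})   # pre-empting section's breakpoint
restored = mon.return_from_section()         # back in the first debug context
```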

US Pat. No. 10,769,049

DEBUGGING SUPPORT APPARATUS AND DEBUGGING SUPPORT METHOD

MITSUBISHI ELECTRIC CORPO...

1. A debugging support apparatus that supports debugging of a sequence program, the debugging support apparatus comprising:a control apparatus configured to be connected to one or more machines and the control apparatus including a first processor that executes the sequence program;
a first memory to store a first program which, when executed by the first processor, performs a step of recording log data including step numbers and operation data, the step numbers being order information indicating execution order of arithmetic processing of each step for components constituting the sequence program, and the operation data being data handled in arithmetic processing of each step, the operation data including internal data of the one or more machines;
a second processor;
a second memory to store a second program which, when executed by the second processor, performs a step of representing a relationship between the step numbers and the operation data by displaying a graph representing the relationship between the step numbers and the operation data based on the log data; and
an input device,
wherein when the input device accepts an operation for specifying a point in time in the graph, the second processor obtains a step number indicating the specified point in time, searches the sequence program for a circuit block that corresponds to a step number that is specified from the graph among a plurality of the circuit blocks that constitute the sequence program, extracts operation data of the circuit block from the log data, displays, on an edit screen, the searched circuit block and the extracted operation data of the circuit block, and performs an editing process for the sequence program.

US Pat. No. 10,769,048

ADVANCED BINARY INSTRUMENTATION FOR DEBUGGING AND PERFORMANCE ENHANCEMENT

PayPal, Inc., San Jose, ...

1. A system, comprising:a memory; and
one or more hardware processors coupled to the memory and configured to read instructions from the memory to cause the system to perform operations comprising:
identifying a first software program for integration into an execution of a second software program;
dividing the first software program into a plurality of segments;
obtaining binary code associated with the second software program; and
modifying the second software program by inserting a plurality of instrumentation points into a plurality of locations within the binary code associated with the second software program, wherein each instrumentation point of the plurality of instrumentation points corresponds to one segment of the plurality of segments of the first software program, and wherein the inserting the plurality of instrumentation points causes, during an execution of the modified second software program, a context switch from the execution of the modified second software program to an execution of a corresponding segment of the plurality of segments of the first software program when an instrumentation point of the plurality of instrumentation points is encountered at a corresponding location of the plurality of locations within the binary code associated with the second software program.
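The instrumentation scheme in this claim can be sketched over a toy "binary" (a list of opcodes): each inserted instrumentation point triggers a context switch to the corresponding segment of the first program. Everything below, including the `TRAP` marker, is an illustrative stand-in for real binary rewriting.

```python
def instrument(binary_ops, points):
    """Insert instrumentation markers into a toy op list.
    `points` maps an offset in binary_ops to a segment index."""
    out = []
    for i, op in enumerate(binary_ops):
        if i in points:
            out.append(("TRAP", points[i]))  # instrumentation point
        out.append(op)
    return out

def run(modified_ops, segments, trace):
    """Execute: a TRAP context-switches to its segment, then resumes."""
    for op in modified_ops:
        if isinstance(op, tuple) and op[0] == "TRAP":
            segments[op[1]](trace)           # run the corresponding segment
        else:
            trace.append(op)

segments = [lambda t: t.append("seg0"), lambda t: t.append("seg1")]
trace = []
run(instrument(["load", "add", "store"], {0: 0, 2: 1}), segments, trace)
# trace interleaves segment execution with the original ops
```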

US Pat. No. 10,769,047

STEPPING AND APPLICATION STATE VIEWING BETWEEN POINTS

MICROSOFT TECHNOLOGY LICE...

1. A method of debugging code performed on a computing device, the method comprising:debugging a portion of code by logging, by the computing device, the portion of code located between a start breakpoint and an end breakpoint assigned in the code as the portion of the code is executed, where the logging includes storing a value of a variable that is determined from execution of the logged portion of the code in a memory from which the value of the variable is accessible for playback of the portion of the code, and where the start breakpoint represents a point in code execution where logging of executing code begins and the end breakpoint represents a point in code execution where execution and logging stops.

US Pat. No. 10,769,046

AUTOMATED REVIEW OF SOURCE CODE FOR STYLE ISSUES

Rubrik, Inc., Palo Alto,...

1. A method implemented on a computer system, the computer system executing instructions to effect a method for automatically identifying style issues in a source code base, the method comprising:accessing a reference set for a known style issue, the reference set including source code examples that exhibit the style issue, the style issue including an extra newline error;
comparing the source code examples in the reference set to the source code base, the source code examples including a first source code example comprising a source code line immediately before a location of the extra newline error and a source code line immediately after the location of the extra newline error;
automatically extracting source code examples from the prior source code bases, based on the locations identified in the prior style reviews;
calculating individual scores for comparison of each source code example with the source code base, the individual scores being machine-readable binary variables; and
based on the comparison, identifying locations in the source code base that are likely to exhibit the style issue.
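The claim's reference-set comparison for the extra-newline error (the line immediately before and the line immediately after the error location) can be sketched as a sliding window over the code base, with a binary score per comparison. The matching rule below is an assumption for illustration.

```python
def find_extra_newline_sites(codebase_lines, before, after):
    """Score each position 1 (match) or 0: a match is the reference
    line pair reappearing around a blank line in the code base."""
    hits = []
    for i in range(len(codebase_lines) - 2):
        match = (codebase_lines[i] == before
                 and codebase_lines[i + 1] == ""          # the extra newline
                 and codebase_lines[i + 2] == after)
        hits.append((i + 1, 1 if match else 0))           # binary score
    return [h for h in hits if h[1] == 1]                 # likely locations

code = ["def f():", "    x = 1", "", "    return x"]
sites = find_extra_newline_sites(code, "    x = 1", "    return x")
```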

US Pat. No. 10,769,045

MEASURING EFFECTIVENESS OF INTRUSION DETECTION SYSTEMS USING CLONED COMPUTING RESOURCES

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:generating a set of cloned computing resource environments based at least in part on metadata associated with a set of computing resource environments provided by a computing resource service provider, the metadata maintained by the computing resource service provider and indicating access information to a set of computing resources operated by a set of customers within the set of computing resource environments, a cloned computing resource environment of the set of cloned computing resource environments containing at least one computing resource modified based at least in part on the metadata;
generating a set of simulated attack payloads of a simulated attack, a simulated attack payload of the set of simulated attack payloads including a signature indicating the simulated attack and additional metadata to allow a threat analysis service to determine a simulated attack pattern associated with the simulated attack payload and bind the simulated attack payload to the simulated attack by at least differentiating the set of simulated attack payloads from other network traffic;
transmitting the set of simulated attack payloads;
causing the set of simulated attack payloads to be distributed to the cloned computing resource environment of the set of cloned computing resource environments based at least in part on the signature;
obtaining diagnostic information from computing resources of the cloned computing resource environment;
generating threat analysis information based at least in part on processing the diagnostic information by the threat analysis service based at least in part on the additional metadata; and
generating a measure of effectiveness of the threat analysis service based at least in part on the threat analysis information and the simulated attack.

US Pat. No. 10,769,044

STORAGE DEVICE WITH A DISPLAY DEVICE FOR INDICATING A STATE

SAMSUNG ELECTRONICS CO., ...

1. A storage device comprising:nonvolatile memory devices;
a controller configured to control the nonvolatile memory devices;
a display device;
a detection circuit; and
a display controller configured to control the display device, wherein the display controller is further configured to control the display device to display different colors respectively corresponding to states of the storage device, the states comprising an access state in which the controller accesses the nonvolatile memory devices according to a request from an external host device, a standby state in which the controller is ready to perform the request from the external host device, a device fail state in which the controller and the nonvolatile memory devices cannot operate, and a replacement state in which the controller and the nonvolatile memory devices are selected for replacement,
wherein the display controller is further configured to determine that the storage device is in the replacement state in response to a signal from the detection circuit being activated, and
wherein the detection circuit is configured to activate the signal in response to a physical force not being applied to the detection circuit, and deactivate the signal in response to the physical force being applied to the detection circuit.
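The state-to-color mapping and the detection-circuit logic of this claim can be sketched as follows. The particular colors are illustrative (the claim only requires them to differ); note the signal is *active* when no physical force is applied, which forces the replacement state.

```python
STATE_COLORS = {            # illustrative colors per claimed state
    "access": "green",
    "standby": "blue",
    "device_fail": "red",
    "replacement": "amber",
}

def detection_signal(physical_force_applied):
    # Active when physical force is NOT applied to the detection circuit.
    return not physical_force_applied

def display_color(state, physical_force_applied):
    if detection_signal(physical_force_applied):
        state = "replacement"   # active signal => device is in replacement state
    return STATE_COLORS[state]
```

For example, an installed device (force applied) displays its normal state color, while releasing the force switches the display to the replacement color.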

US Pat. No. 10,769,043

SYSTEM AND METHOD FOR ASSISTING USER TO RESOLVE A HARDWARE ISSUE AND A SOFTWARE ISSUE

HCL TECHNOLOGIES LTD., N...

1. A system for assisting a user to resolve a hardware issue and a software issue, the system comprising:a memory;
a processor coupled to the memory, wherein the processor is configured to execute programmed instructions stored in the memory to:
categorise a set of tickets stored in a ticket repository to generate a set of clusters, wherein each cluster, from the set of clusters, is configured to maintain one or more tickets from the set of tickets, wherein each ticket is associated with a ticket description, and wherein the categorization is based on the ticket description;
identify a target cluster, from the set of clusters, associated with a new ticket received from a user, wherein a new ticket description, associated with the new ticket, is analysed using a Convolution Neural Network based neural language model for identifying the target cluster, and wherein the new ticket corresponds to a target issue;
recommend one or more runbook scripts, from a runbook repository, associated with the new ticket based on analysis of the new ticket description using an artificial intelligence based recommendation engine, wherein the one or more runbook scripts are associated with a runbook script description, and wherein the one or more runbook scripts comprise a probability, to resolve the new ticket, higher than a pre-defined threshold probability;
identify a new runbook script, corresponding to the new ticket, from a set of external repositories using a natural language processing and a text extraction algorithm, when the one or more runbook scripts are not available in the runbook repository or the one or more runbook scripts fail to resolve the target issue;
execute at least one of the one or more runbook scripts or the new runbook script associated with the new ticket; and
generate a document based on the execution of the one or more runbook scripts or the new runbook script, thereby assisting the user to resolve the target issue.

US Pat. No. 10,769,042

SINGLE PORT DATA STORAGE DEVICE WITH MULTI-PORT VIRTUALIZATION

Seagate Technology LLC, ...

13. A method comprising:connecting a remote host to a diskgroup comprising a first single port data storage device and a second single port data storage device, the first single port data storage device comprising a first controller connected to a first memory array, the second single port data storage device comprising a second controller connected to a second memory array, each controller comprising a multipath I/O connecting to the respective single port data storage devices;
initializing a first logical volume and a second logical volume in the first single port data storage device with the first controller;
initializing a third logical volume and a fourth logical volume in the second single port data storage device with the second controller;
servicing a first data access request from the remote host with the first logical volume as directed by the second controller via a first remote data memory access module of the first single port data storage device; and
accessing the fourth logical volume in response to a data request to the first volume by the first controller via a second remote data memory access module of the second single port data storage device.

US Pat. No. 10,769,041

INFORMATION PROCESSING SYSTEM, MONITORING APPARATUS, AND NETWORK DEVICE

FUJITSU LIMITED, Kawasak...

1. An information processing system comprising:a plurality of information processing apparatuses having ports that input and output data;
a monitoring apparatus connected to each of the plurality of information processing apparatuses; and
a network device connected to the monitoring apparatus, the network device further connected by a single port, to a port of the each of the plurality of information processing apparatuses, wherein
the each of the plurality of information processing apparatuses judges based on a result of comparison of a first value indicating an extent of delay of a response to a request at the port of the each of the plurality of information processing apparatuses and a second value obtained from another of the plurality of information processing apparatus and indicating an extent of delay of a response to the request at a port of the other information processing apparatus, whether response to the request is executable by the port of the each of the plurality of information processing apparatuses,
the monitoring apparatus acquires from the each of the plurality of information processing apparatuses, a result of judgment of whether response to the request is executable by the port of the each of the plurality of information processing apparatuses, and based on the acquired result of judgment by the each of the plurality of information processing apparatuses, determines from the plurality of information processing apparatuses, a first information processing apparatus to which the data is not transmitted and a second information processing apparatus to which the data is transmitted in place of the first information processing apparatus, and
the network device acquires from the monitoring apparatus, information that identifies the first information processing apparatus and the second information processing apparatus, the network device further changes to the second information processing apparatus, a destination of the data to the first information processing apparatus and changes to the first information processing apparatus, a transmission source of the data from the second information processing apparatus.

US Pat. No. 10,769,040

LOGICAL EQUIVALENT REPLICATION WITH SNAPSHOT BASED FALLBACK OF DATABASE SYSTEMS

SAP SE, Walldorf (DE)

1. A computer implemented method comprising:registering a first database system comprising a first processor and a first memory with a separate and distinct second database system comprising a second processor and a second memory, the first database system comprises multiple snapshots; and
performing a failback operation to restore a state of the first database system comprising:
identifying a most recent snapshot in persistence storage of the second database system that is also present in persistent storage on the first database system, the most recent snapshot being identified as an active snapshot;
opening the snapshot comprising data known to have existed on the first database system and the second database system at a first time;
requesting transaction log information from the second database system, the transaction log information being a delta log corresponding to transactions performed on the second database system beginning with the first time; and
applying the transaction log information to the snapshot data on the first database system;
wherein:
older snapshots are deleted after the opened snapshot is identified as an active snapshot;
after applying the transaction log information to the snapshot data on the first database system, the data on the first database system is logically equivalent and physically different to data on the second database system.
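The failback sequence claimed here (find the most recent snapshot present on both systems, drop older snapshots, then replay the secondary's delta log from that point) can be sketched with dictionaries keyed by snapshot id. The data shapes are illustrative assumptions.

```python
def failback(primary_snaps, secondary_snaps, secondary_log):
    """primary_snaps / secondary_snaps: {snapshot_id: timestamp};
    secondary_log: list of (timestamp, txn) on the secondary system."""
    common = set(primary_snaps) & set(secondary_snaps)
    active = max(common, key=lambda s: primary_snaps[s])  # most recent shared
    t0 = primary_snaps[active]
    # Older snapshots are deleted once the active snapshot is identified.
    remaining = {s: t for s, t in primary_snaps.items() if t >= t0}
    # Delta log: transactions performed on the secondary beginning at t0.
    delta = [txn for ts, txn in secondary_log if ts >= t0]
    return active, remaining, delta

active, remaining, delta = failback(
    {"s1": 1, "s2": 2, "s3": 3},
    {"s1": 1, "s2": 2},
    [(1, "a"), (2, "b"), (3, "c")])
```

After applying `delta` to the snapshot data, the primary holds logically equivalent (though physically different) data to the secondary.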

US Pat. No. 10,769,039

METHOD AND APPARATUS FOR PERFORMING DISPLAY CONTROL OF A DISPLAY PANEL TO DISPLAY IMAGES WITH AID OF DYNAMIC OVERDRIVE STRENGTH ADJUSTMENT

HIMAX TECHNOLOGIES LIMITE...

1. A method for performing display control of a display panel to display images with aid of dynamic overdrive (OD) strength adjustment, each of the images comprising a plurality of blocks, each of the plurality of blocks comprising a plurality of pixels, each of the plurality of pixels comprising a plurality of sub-pixels, the method comprising:encoding image data of a current image to generate encoded image data of the current image, wherein the encoded image data is compressed data of the image data;
decoding the encoded image data of the current image to generate decoded image data of the current image;
according to the image data and the decoded image data of the current image, performing block error estimation to generate quantized block error values of blocks of the current image, respectively, wherein the quantized block error values represent compression error of the decoded image data caused by encoding;
according to the quantized block error values, determining OD depressed gains, respectively; and
according to the OD depressed gains, adjusting OD strength of corresponding blocks within a next image, respectively, for controlling the display panel to display the next image, wherein the OD depressed gains represent reduction magnification regarding the OD strength.
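The per-block adjustment this claim describes (quantized compression error determines a depressed gain, which scales down overdrive strength for the next image) can be sketched numerically. The linear mapping and the 3-bit error range below are illustrative assumptions; the patent does not fix a formula.

```python
def od_depressed_gain(quantized_error, max_error=7):
    """Map a quantized block error (0..max_error) to a reduction factor:
    larger compression error -> smaller gain -> weaker overdrive."""
    return 1.0 - quantized_error / (max_error + 1)

def adjust_od_strength(base_strengths, quantized_errors):
    # Apply each block's depressed gain to its OD strength for the next image.
    return [s * od_depressed_gain(e) for s, e in zip(base_strengths, quantized_errors)]

# Block 0 had no compression error (full OD); block 1 had error level 4.
adjusted = adjust_od_strength([8.0, 8.0], [0, 4])
```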

US Pat. No. 10,769,038

COUNTER CIRCUITRY AND METHODS INCLUDING A MASTER COUNTER PROVIDING INITIALIZATION DATA AND FAULT DETECTION DATA AND WHEREIN A THRESHOLD COUNT DIFFERENCE OF A FAULT DETECTION COUNT IS DEPENDENT UPON THE FAULT DETECTION DATA

Arm Limited, Cambridge (...

1. Apparatus comprising:master counter circuitry to generate a master count signal in response to a clock signal;
slave counter circuitry responsive to the clock signal to generate a respective slave count signal, the slave counter circuitry having associated fault detection circuitry; and
a synchronisation connection providing signal communication between the master counter circuitry and the slave counter circuitry, the master counter circuitry being configured to provide via the synchronisation connection:
initialisation data at an initialisation operation; and
fault detection data at a fault detection operation;
the initialisation data and subsequent fault detection data each representing respective indications of a state of the master count signal;
the slave counter circuitry being configured, during an initialisation operation for that slave counter circuitry, to initialise a counting operation of that slave counter circuitry in response to the initialisation data provided by the master counter circuitry; and
the fault detection circuitry associated with the slave counter circuitry being configured, during a fault detection operation for that slave counter circuitry, to detect whether a counting operation of that slave counter circuitry generates a slave count signal which is within a threshold count difference of a fault detection count value dependent upon the fault detection data provided by the master counter circuitry.
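The initialisation and fault-detection roles of the master's data can be sketched in software, even though the claim concerns circuitry: the slave adopts the master's state at initialisation, then later checks that its own count stays within a threshold of the master's fault-detection count value. All names are illustrative.

```python
class SlaveCounter:
    """Sketch of the claimed slave counter plus fault detection check."""

    def __init__(self):
        self.count = 0

    def initialise(self, init_data):
        self.count = init_data    # adopt the master count state at init

    def tick(self):
        self.count += 1           # counting operation per clock cycle

    def fault_check(self, fault_detection_data, threshold):
        # Healthy iff within the threshold count difference of the value
        # dependent upon the master's fault detection data.
        return abs(self.count - fault_detection_data) <= threshold

slave = SlaveCounter()
slave.initialise(100)
for _ in range(5):
    slave.tick()
ok = slave.fault_check(104, threshold=2)   # |105 - 104| <= 2
```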

US Pat. No. 10,769,037

TECHNIQUES FOR LIF PLACEMENT IN SAN STORAGE CLUSTER SYNCHRONOUS DISASTER RECOVERY

NetApp Inc., Sunnyvale, ...

1. A method, comprising:replicating a logical interface (LIF) of a first cluster to a second cluster as a replicated LIF;
extracting LIF configuration information from the first cluster;
assigning the replicated LIF to a peer node within the second cluster in response to identifying one or more ports on the peer node that match a connectivity, specified within the LIF configuration information, of the LIF of the first cluster;
filtering the one or more ports based upon filtering criteria to generate a candidate port list; and
selecting a target port from the candidate port list to utilize for the peer node based upon a load of the target port.
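The placement pipeline in this claim (match ports by connectivity, filter to a candidate list, pick the least-loaded target) is a straightforward chain of filters. The port schema below is a hypothetical stand-in; the patent does not define one.

```python
def place_replicated_lif(peer_ports, required_connectivity, filters):
    """peer_ports: list of dicts with 'name', 'connectivity', 'load', etc.
    Returns the name of the target port for the replicated LIF."""
    matching = [p for p in peer_ports if p["connectivity"] == required_connectivity]
    candidates = [p for p in matching if all(f(p) for f in filters)]  # candidate port list
    return min(candidates, key=lambda p: p["load"])["name"]          # lowest load wins

ports = [
    {"name": "e0a", "connectivity": "iscsi", "load": 0.7, "up": True},
    {"name": "e0b", "connectivity": "iscsi", "load": 0.2, "up": True},
    {"name": "e0c", "connectivity": "fc",    "load": 0.1, "up": True},
]
target = place_replicated_lif(ports, "iscsi", [lambda p: p["up"]])
```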

US Pat. No. 10,769,036

DISTRIBUTED TRANSACTION LOG

VMware, Inc., Palo Alto,...

1. A computer-implemented method comprising:receiving an input/output (I/O) operation request from a virtual machine at a hypervisor of a host computer node, the I/O operation request for a corresponding virtual disk of the virtual machine, wherein the virtual disk is associated with a distributed storage system;
determining particular resource component objects of the virtual disk that are subject to the I/O operation, wherein each resource component object is associated with particular physical storage resources of the distributed storage system;
performing a plurality of distributed transactions on the particular resource component objects to carry out the I/O operation;
logging each transaction in a journal for each respective resource component object, wherein each entry includes a sequence identifier; and
reconciling the entries on different journals such that lost transactions from an offline resource component object are derived from comparing the content of the journals using the respective sequence identifiers.
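The reconciliation step can be sketched by comparing per-component journals keyed by sequence identifier: entries present in other journals but missing from an offline component's journal are the lost transactions. This assumes, for illustration, that every component logs every sequence id.

```python
def reconcile(journals):
    """journals: {component: {sequence_id: entry}}.
    Returns the entries recovered for each component and patches the journals."""
    all_seqs = {}
    for entries in journals.values():
        all_seqs.update(entries)            # union of all logged transactions
    recovered = {}
    for comp, entries in journals.items():
        missing = {s: e for s, e in all_seqs.items() if s not in entries}
        recovered[comp] = missing           # lost transactions, by sequence id
        entries.update(missing)             # bring the journal up to date
    return recovered

# Component "b" went offline before sequence id 2 was logged.
journals = {"a": {1: "w1", 2: "w2"}, "b": {1: "w1"}}
recovered = reconcile(journals)
```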

US Pat. No. 10,769,035

KEY-VALUE INDEX RECOVERY BY LOG FEED CACHING

International Business Ma...

1. A computer system comprising:a memory including a plurality of nodes;
a storage device;
a processor coupled to the memory and the storage device to define a key value database architecture;
a first node of the plurality thereof configured to generate a plurality of checkpoints, each checkpoint including a plurality of checkpoint records that track indexes of the key value database architecture, each index including a mapping from a key to a location of a corresponding value on the storage device; and
a second node of the plurality thereof including spare node memory, the second node configured to operate a log feed cache manager to cache one or more portions of a log feed in the spare node memory, each portion including at least the checkpoint records of one of the checkpoints generated by the first node.

US Pat. No. 10,769,034

CACHING DML STATEMENT CONTEXT DURING ASYNCHRONOUS DATABASE SYSTEM REPLICATION

SAP SE, Walldorf (DE)

1. A method for implementation in connection with a database system including a primary database system having an associated and separate secondary database system mirroring data stored in the primary database system, the method comprising:continuously replaying, as part of a database recovery operation, redo log records on the secondary database system in each of a plurality of recovery queues, wherein redo log records of a table partition are placed and processed in a specific one of the plurality of recovery queues corresponding to such table partition, wherein each table partition has its own in-memory store;
caching, during the continuous replay of redo log records on the secondary database system and on a per table partition basis, objects that are created during replaying of the redo log records which are common across multiple database manipulation language (DML) redo log records for a same table partition, the objects comprising table level metadata information or column level metadata information;
reusing, as part of the continuous replay of the redo log records on the secondary database system, cached objects as they are accessed during sequential processing of DML redo records for the same table partition;
deleting, as part of the continuous replay of the redo log records on the secondary database system, cached objects when certain conditions associated with a database definition language (DDL) operation are met preventing such deleted cached objects from being reused, the certain conditions comprising encountering a redo log record that modifies content within a particular cached object;
and wherein a flag is set for a DDL redo operation that spans multiple DDL redo log records for which there are interspersed DML redo log records, the flag indicating that redundant synchronization primitives are to be skipped when processing the interspersed DML redo log records; the flag is unset when the last DDL redo log record of the DDL operation is processed.

US Pat. No. 10,769,033

PACKET-BASED DIFFERENTIAL BACKUP OF NETWORK-ATTACHED STORAGE DEVICE CONTENT

Cohesity, Inc., San Jose...

1. A method, comprising:receiving at a secondary storage system from a network attached storage device a storage capture instance associated with a first time instance, the network attached storage device being external to the secondary storage system;
receiving, at the secondary storage system from a packet analyzer, at least a portion of metadata of a subset of tracked network packets associated with the network attached storage device, the tracked network packets being selected from a plurality of network packets external to the secondary storage system based on the at least the portion of the metadata in each of the tracked network packets, wherein the packet analyzer analyzes a set of the tracked network packets in route from a client device to the network attached storage device to identify network packets associated with one or more change operations for content items stored by the network attached storage device, wherein the subset of the tracked network packets associated with the network attached storage device include the identified network packets;
identifying at least one changed content item of the network attached storage device that has changed since the first time instance by analyzing the at least the portion of the metadata of the tracked network packets received; and
performing an incremental backup of the network attached storage device at a second time instance based at least in part on the at least one changed content item identified, wherein performing the incremental backup of the network attached storage device includes:
requesting, by the secondary storage system from the network attached storage device, data associated with the at least one changed content item;
receiving at the secondary storage system from the network attached storage device the requested data associated with the at least one changed content item; and
backing up the requested data associated with the at least one changed content item.

US Pat. No. 10,769,032

AUTOMATION AND OPTIMIZATION OF DATA RECOVERY AFTER A RANSOMWARE ATTACK

EMC IP HOLDING COMPANY LL...

11. A method for performing a restore operation, the method comprising:determining that an event has occurred in a computing system that includes production data, wherein a plurality of candidate backups are available to the computing system such that the production data can be restored;
identifying a healthy backup from the plurality of candidate backups by:
extracting features from the plurality of candidate backups;
augmenting the process of identifying the healthy backup with augmented data;
evaluating the extracted features and the candidate backups based on the augmented data; and
assigning a score to each of the candidate backups based on the extracted features and the augmented data;
selecting, as the healthy backup, the candidate backup with the best score; and
restoring the production data from the selected healthy backup.
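The selection loop of this method (extract features per candidate backup, score with augmented data, pick the best score) can be sketched generically. The entropy-style feature and "higher score is better" convention below are assumptions for illustration only.

```python
def pick_healthy_backup(candidates, extract_features, score):
    """candidates: backup ids; extract_features/score stand in for the
    claimed feature-extraction and scoring steps."""
    scored = [(score(extract_features(c)), c) for c in candidates]
    best_score, best = max(scored)   # 'best score' taken as highest here
    return best

# Illustrative feature: an entropy-like value; lower is assumed healthier,
# since ransomware-encrypted data tends toward high entropy.
feats = {"b1": 0.9, "b2": 0.2, "b3": 0.5}
healthy = pick_healthy_backup(list(feats),
                              lambda c: feats[c],
                              lambda f: 1.0 - f)
```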

US Pat. No. 10,769,031

LOST DATA RECOVERY FOR VEHICLE-TO-VEHICLE DISTRIBUTED DATA STORAGE SYSTEMS

1. A method for a connected vehicle that is a member of a vehicular micro cloud, comprising:detecting that one or more data segments of a first data set are lost during a first data handover function transmitted over a Vehicle-to-Everything (V2X) network, wherein the first data set is partitioned into multiple data segments including the one or more data segments during a previous data handover function performed prior to the first data handover function; and
modifying an operation of a communication unit of the connected vehicle to collect the one or more data segments from a set of overhearing vehicles that overhears the one or more data segments during the previous data handover function so that the first data set is recovered on the connected vehicle.
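The recovery step reduces to a set difference followed by queries to the overhearing vehicles: segments expected but not received are collected from whichever nearby cache holds them. The data shapes below are illustrative.

```python
def recover_segments(expected_ids, received, overhearers):
    """expected_ids: segment ids the partitioned data set should contain;
    received: {id: segment} that survived the handover;
    overhearers: list of {id: segment} caches on overhearing vehicles."""
    lost = set(expected_ids) - set(received)     # detect lost segments
    for cache in overhearers:                    # query overhearing vehicles
        for seg_id in list(lost):
            if seg_id in cache:
                received[seg_id] = cache[seg_id]
                lost.discard(seg_id)
    return received, lost                        # lost is empty on full recovery

received, lost = recover_segments(
    {1, 2, 3}, {1: "a"}, [{2: "b"}, {3: "c"}])
```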

US Pat. No. 10,769,030

SYSTEM AND METHOD FOR IMPROVED CACHE PERFORMANCE

EMC IP Holding Company LL...

1. A coordination point for assigning clients to remote backup storages, comprising:a persistent storage comprising client type to remote backup storage mappings; and
a processor programmed to:
obtain a data storage request for data from a client of the clients;
obtain, without reading the data, an inferential characterization of the client;
identify a type of the client using the inferential characterization of the client, wherein, to identify the type of the client using the inferential characterization of the client, the processor is further programmed to:
make a prediction of a content of the data based on the inferential characterization of the client, and
match the predicted content of the data to one type of a plurality of types,
wherein each type of the plurality of types is associated with different respective contents;
select a remote backup storage of the remote backup storages based on the identified type of the client using the client type to remote backup storage mappings; and
assign the selected remote backup storage to service the data storage request.

US Pat. No. 10,769,029

ACCESSING RECORDS OF A BACKUP FILE IN A NETWORK STORAGE

INTERNATIONAL BUSINESS MA...

1. A computer program product for maintaining a backup file in a network storage over a network, wherein the computer program product comprises a computer readable storage medium having computer readable program instructions executed by a processor to perform operations, the operations comprising:processing a backup file comprising a sequential file of metadata records and data set records, wherein the metadata records include metadata on data of data sets in the data set records;
generating backup objects to store the metadata and the data set records in the backup file;
determining containers in the network storage to store the backup objects offered by at least one storage service;
generating backup object information indicating, for each backup object of the backup objects, an order of the data set records in the backup file, stored in the backup object, and a container name of a container in the network storage offered by one of the at least one storage service to store the backup object;
determining, from the backup object information for the backup objects, containers for the backup objects; and
concurrently transmitting multiple of the backup objects to the network storage over the network to concurrently write to the containers determined from the backup object information for the backup objects being transmitted.

US Pat. No. 10,769,028

ZERO-TRANSACTION-LOSS RECOVERY FOR DATABASE SYSTEMS

AXXANA (ISRAEL) LTD., Te...

1. A method, comprising:partitioning a software stack for processing storage commands, into first, second and third software components managed respectively by a database server at a primary site, by a secure storage unit at or adjacent to the primary site, the secure storage unit comprising a protection storage unit and a disaster-proof storage unit, and by a recovery system at a secondary site;
receiving, by the database server, a new database transaction comprising an update for a local database stored at the primary site;
storing the received database transaction to a secure log file in the disaster-proof unit using the first and the second software components, wherein using the first and the second software components comprises mapping the protection storage unit to the database server and mapping the disaster-proof storage unit comprising the secure log file to the protection storage unit; and
following a disaster occurring at the primary site, recovering, from the disaster-proof storage unit by the recovery system using the second and the third software components, the database transactions in the secure log files so as to synchronize a remote database to a most recent state of the local database prior to the failure, wherein using the second and the third software components comprises mapping the disaster-proof storage unit comprising the secure log file to the recovery system.

US Pat. No. 10,769,027

QUEUED SCHEDULER

ServiceNow, Inc., Santa ...

1. An apparatus for maintaining computing assets implementing configuration items within a configuration management database, comprising:memory; and
a processor configured to execute instructions stored in the memory to:
receive, on a periodic basis, a first queue including a plurality of first queue maintenance jobs for configuration items within a configuration management database, the first queue initially having a defined order and at least some of the plurality of first queue maintenance jobs including a backup job for data stored within a non-transitory storage medium;
process the first queue starting with a first queue entry in the defined order;
receive, while processing the first queue, a second queue maintenance job forming a respective entry within a second queue, the second queue initially ordered by receipt of a request for the second queue maintenance job, and wherein processing of the second queue has a higher priority than processing of the first queue; and
in response to receiving the second queue maintenance job and after completion of a first queue maintenance job of the plurality of first queue maintenance jobs, process the second queue in sequence by, for a second queue entry:
comparing requirements for performing the second queue maintenance job to conditions occurring within the configuration management database, wherein the second queue maintenance job is associated with the second queue entry and the conditions occurring within the configuration management database include a status of processing the first queue;
when the conditions meet the requirements, performing the second queue maintenance job by copying data stored within a first non-transitory storage medium to a second non-transitory storage medium; and
when the conditions do not meet the requirements, moving the second queue entry to an end of the second queue without performing the second queue maintenance job; and
wherein processing the second queue defers processing of the first queue until no second queue entries remain within the second queue or the conditions meet the requirements of no second queue entry within the second queue.
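The two-queue discipline above (second-queue jobs preempt the first queue; a job whose requirements are unmet is rotated to the end of its queue without being performed) can be sketched as follows. The `requirements_met` predicate stands in for the comparison against CMDB conditions and is an assumption of this sketch.

```python
from collections import deque

def process_queues(first_queue, second_queue, requirements_met):
    """Process the first queue in its defined order, but after each completed
    first-queue job drain the higher-priority second queue, deferring any
    second-queue entry whose requirements are unmet to the end of that queue."""
    performed = []
    fq, sq = deque(first_queue), deque(second_queue)
    while fq:
        performed.append(fq.popleft())   # complete one first-queue job
        rotations = 0
        while sq and rotations < len(sq):
            job = sq.popleft()
            if requirements_met(job):
                performed.append(job)
                rotations = 0            # queue shrank; rescan remaining jobs
            else:
                sq.append(job)           # move to end without performing
                rotations += 1
    return performed, list(sq)
```

With first queue `["f1", "f2"]` and second queue `["s1", "s2"]` where only `s2` fails its requirements, the processing order is `f1, s1, f2`, with `s2` left deferred.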

US Pat. No. 10,769,026

DYNAMICALLY PAUSING LARGE BACKUPS

EMC IP Holding Company LL...

1. A system, comprising:a processor configured to:
determine that a backup of a set of backup sources is triggered at a first instance by a backup policy associated with the set of backup sources;
determine for each backup source of the set of backup sources, a size of data to be backed up; and
in response to determining that the determined size of data to be backed up of a selected backup source of the set exceeds a threshold size:
pause a backup of the selected backup source that was to be performed at the first instance; and
perform the backup of another backup source of the set based at least in part on the backup policy instead of the backup of the selected backup source,
wherein the backup of the selected backup source is resumed at a second instance specified by a backup resume policy, wherein the second instance is a time instance specified by the backup resume policy; and
a memory coupled with the processor and configured to provide the processor with instructions.
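The pause/resume policy above reduces to a size check per source. A minimal sketch, with sizes and the threshold as illustrative values and the "second instance" modeled simply as a deferred list:

```python
def schedule_backups(sources, sizes, threshold):
    """Back up sources in policy order, but pause any source whose
    to-be-backed-up size exceeds the threshold; paused sources are resumed
    at a later instance specified by a backup resume policy."""
    performed_now, resumed_later = [], []
    for src in sources:
        if sizes[src] > threshold:
            resumed_later.append(src)   # paused at the first instance
        else:
            performed_now.append(src)   # performed instead, per the policy
    return performed_now, resumed_later
```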

US Pat. No. 10,769,025

INDEXING A RELATIONSHIP STRUCTURE OF A FILESYSTEM

Cohesity, Inc., San Jose...

1. A method, comprising:identifying one or more storage locations associated with a plurality of inodes in a data source to be backed up, wherein the one or more identified storage locations of the plurality of inodes include one or more data ranges in a disk file of the data source which correspond to the plurality of inodes;
extracting information associated with the plurality of inodes from the one or more identified storage locations of the data source to be backed up to a storage system, wherein at least one item of the extracted information associated with the plurality of inodes includes a reference to a parent inode, wherein extracting the information associated with the plurality of inodes from the one or more identified storage locations of the data source to be backed up to the storage system includes copying the extracted information associated with the plurality of inodes to a first storage tier of the storage system and storing the extracted information associated with the plurality of inodes in one or more data structures that are stored in the first storage tier of the storage system, wherein the one or more data structures include one or more key-value stores; and
analyzing contents of the one or more data structures to index a relationship structure of the plurality of inodes of the data source, wherein analyzing the contents of the one or more data structures includes:
scanning the one or more key-value stores; and
generating the index of the relationship structure of the plurality of inodes of the data source based on the scanning of the one or more key-value stores.
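The indexing step, scanning a key-value store of extracted inode records (each holding a reference to its parent inode) to build the relationship structure, can be sketched as below. The record layout is an assumption for illustration, not Cohesity's on-disk format.

```python
def build_relationship_index(inode_kv):
    """Scan a key-value store of inode records and index the
    parent -> children relationship structure of the data source."""
    children = {}
    for inode_id, record in inode_kv.items():
        parent = record.get("parent")   # reference to the parent inode
        if parent is not None:          # root inodes have no parent
            children.setdefault(parent, []).append(inode_id)
    return children
```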

US Pat. No. 10,769,024

INCREMENTAL TRANSFER WITH UNUSED DATA BLOCK RECLAMATION

NetApp Inc., Sunnyvale, ...

1. A method comprising:determining that a transfer utilized a base snapshot of a source volume hosted by a first node for replication of blocks to a destination volume hosted by a second node, where a snapshot of the source volume is deleted after the transfer and a new snapshot of the source volume is created subsequent the transfer;
identifying a set of new blocks, allocated to the new snapshot, to transfer to the destination volume by a subsequent transfer based upon a comparison of a first active map of the base snapshot and a second active map of the new snapshot;
identifying a set of unused blocks, previously allocated to the snapshot, deleted from the source volume based upon a comparison of the first active map and a first summary map of the base snapshot with the second active map and a second summary map of the new snapshot, wherein the identifying comprises:
including a block within the set of unused blocks based upon the active and summary maps indicating that the block was unallocated to the base snapshot and unallocated to at least one snapshot of the source volume when the base snapshot was captured, and indicating that the block was unallocated to the new snapshot and unallocated to at least one snapshot of the source volume when the new snapshot was captured; and
performing the subsequent transfer to transmit the set of new blocks to the second node to write to the destination volume, wherein the subsequent transfer instructs the second node to deallocate the set of unused blocks from the destination volume during execution of the subsequent transfer to write the set of new blocks to the destination volume.
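The two comparisons in the claim can be sketched with the active and summary maps modeled as sets of allocated block numbers over a known block range (an illustrative simplification of the bitmap structures):

```python
def identify_blocks(nblocks, base_active, base_summary, new_active, new_summary):
    """New blocks: allocated in the new snapshot's active map but not the
    base's. Unused blocks: unallocated in both active maps and both summary
    maps, i.e. free at both snapshot capture times."""
    new_blocks = new_active - base_active
    unused = {
        b for b in range(nblocks)
        if b not in base_active and b not in base_summary
        and b not in new_active and b not in new_summary
    }
    return new_blocks, unused
```

The subsequent transfer would then send `new_blocks` and instruct the destination node to deallocate `unused` during the same transfer.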

US Pat. No. 10,769,023

BACKUP OF STRUCTURED QUERY LANGUAGE SERVER TO OBJECT-BASED DATA STORAGE SERVICE

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:transmitting a first request, at a backup manager of one or more computing devices to a virtual device application programming interface, to create a virtual device to which a structured query language (SQL) server can write;
transmitting, to the SQL server, a second request to write a backup of a database managed by the SQL server to the virtual device;
buffering a portion of the backup written to the virtual device for the backup in a volatile memory corresponding to the virtual device, wherein the portion of the backup is compressed prior to the portion of the backup being written to the virtual device;
as a result of a determination that a size of the portion of the backup buffered in volatile memory has not met a threshold size, looping by the backup manager such that additional data is added to the portion of the backup;
transmitting at the backup manager, as a result of the size of the portion of the backup buffered in the volatile memory reaching the threshold size, a web service application programming interface request to an object-based data storage service to transfer the portion of the backup from the volatile memory to the object-based data storage service to be stored in a data object in the object-based data storage service; and
receiving a status update related to the portion of the backup written to the virtual device.
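The buffering loop above (accumulate compressed backup portions in volatile memory, flush to the object store each time the threshold is reached) can be sketched as follows; the flush list stands in for the web service API request to the object-based storage service.

```python
def stream_backup(chunks, threshold):
    """Buffer backup portions written to the virtual device; whenever the
    buffered size reaches the threshold, transfer the buffer as one data
    object, then continue looping to accumulate more data."""
    uploads, buffer = [], b""
    for chunk in chunks:
        buffer += chunk
        if len(buffer) >= threshold:
            uploads.append(buffer)   # web-service request to object storage
            buffer = b""
    if buffer:
        uploads.append(buffer)       # final partial object at end of backup
    return uploads
```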

US Pat. No. 10,769,022

DATABASE BACKUP FROM STANDBY DB

Oracle International Corp...

1. A method for backing up a database, comprising:maintaining a database system having a primary database and a standby database, the primary database generating redo records that are copied to the standby database;
receiving a request to back up the standby database; and
backing up the standby database by copying standby database datafiles and archived redo log files to a backup storage device,
wherein backing up the standby database does not initiate a switch operation at the primary database to transfer online redo log records to an archived redo log file.

US Pat. No. 10,769,021

CACHE PROTECTION THROUGH CACHE

EMC IP Holding Company L...

1. A method for providing cache coherency protection, comprising:receiving a data write request for a data block at a first director from a host;
immediately in response to receiving the data write request, and independent of any subsequent read request or write request:
storing the data block in a cache of the first director;
transmitting a copy of the data block from the first director to a second director; and
storing the copy of the data block in a cache of the second director;
maintaining a directory that identifies a location of the data block;
in response to a read request received at a respective one of the first director or the second director for the data block, locally servicing the read request using the data block or the copy of the data block stored in the cache of the respective one of the first director or the second director;
in response to failure of the first director, a failure recovery process is initiated using the data block or the copy of the data block stored in the cache of the second director, and a third director operates as a protection partner of the second director in place of the first director during a failure recovery process until the first director returns to operation and the first and second directors become protection partners; and
in response to failure of the second director, the failure recovery process is initiated using the data block or the copy of the data block stored in the cache of the first director and the third director operates as a protection partner of the first director in place of the second director during the failure recovery process until the second director returns to operation and the first and second directors become protection partners.

US Pat. No. 10,769,020

SHARING PRIVATE SPACE AMONG DATA STORAGE SYSTEM DATA REBUILD AND DATA DEDUPLICATION COMPONENTS TO MINIMIZE PRIVATE SPACE OVERHEAD

EMC IP Holding Company LL...

1. A method of sharing private storage space among storage system components of a data storage system, comprising:determining a first amount of private storage space for use by a data rebuild component of a data storage system;
determining a second amount of private storage space for use by a file system checking (FSCK) component of the data storage system;
determining a third amount of private storage space for use by a data deduplication component of the data storage system;
determining a sum of (i) the first amount of private storage space determined for use by the data rebuild component and (ii) a maximum of (a) the second amount of private storage space determined for use by the FSCK component and (b) the third amount of private storage space determined for use by the data deduplication component;
allocating, in the data storage system, an amount of private storage space equal to the determined sum; and
sharing the allocated amount of private storage space among the data rebuild component, the FSCK component, and the data deduplication component of the data storage system.
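The allocation formula in the claim is simply the rebuild component's space plus the larger of the FSCK and deduplication requirements, since the latter two share the same region:

```python
def shared_private_space(rebuild, fsck, dedup):
    """Amount of private storage space to allocate: the data rebuild
    component always needs its own space, while the FSCK and deduplication
    components share, so only the larger of the two is provisioned."""
    return rebuild + max(fsck, dedup)
```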

US Pat. No. 10,769,019

SYSTEM AND METHOD FOR DATA RECOVERY IN A DISTRIBUTED DATA COMPUTING ENVIRONMENT IMPLEMENTING ACTIVE PERSISTENCE

ORACLE INTERNATIONAL CORP...

1. A method for supporting automated recovery of persisted data in a distributed data grid, the method comprising:actively persisting persisted data during operation of the distributed data grid;
storing cluster membership data in a global partition of the distributed data grid wherein said cluster membership data consists essentially of a current number of storage-enabled members in the cluster and a current number of storage-enabled members on each host node of the cluster;
updating the stored cluster membership data in response to changes in the current number of storage-enabled members in the cluster and the current number of storage-enabled members on each host node of the cluster;
providing a recovery coordinator service wherein, during a recovery operation, the recovery coordinator service operates to,
retrieve the cluster membership data from the global partition,
calculate a dynamic recovery quorum threshold by applying an algorithm to the retrieved cluster membership data,
monitor current cluster membership, and
defer recovery of the persisted data until the dynamic recovery quorum threshold is satisfied.
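The recovery-coordinator check can be sketched as below. The quorum algorithm shown (a simple majority of the last-known storage-enabled member count) is an illustrative assumption, not Oracle's actual formula; the claim only requires that some algorithm be applied to the retrieved membership data.

```python
def recovery_quorum_met(membership, current_members, algorithm=None):
    """Retrieve the stored cluster membership data, compute a dynamic
    recovery quorum threshold, and report whether current membership
    satisfies it; recovery is deferred while this returns False."""
    if algorithm is None:
        # Assumed example algorithm: majority of last-known members.
        algorithm = lambda last_known: last_known // 2 + 1
    threshold = algorithm(membership["storage_members"])
    return current_members >= threshold
```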

US Pat. No. 10,769,018

SYSTEM AND METHOD FOR HANDLING UNCORRECTABLE DATA ERRORS IN HIGH-CAPACITY STORAGE

Alibaba Group Holding Lim...

1. A computer-implemented method for handling errors in a storage system, the method comprising:detecting, by a data-placement module of the storage system, an error occurring at a first physical location within the storage system;
in response to determining that the error occurs during a write access, writing to-be-written data into a second physical location within the storage system;
updating a mapping between a logical address and a physical address associated with the to-be-written data;
in response to determining that the error occurs during a read access that belongs to a background process,
determining a destination physical location associated with to-be-read data based on a current mapping table,
writing dummy data into the destination physical location and
writing dummy data into the destination physical location, and
indicating, by setting a data-incorrect flag, in a mapping table entry associated with the to-be-read data that current data stored at the destination physical location is incorrect;
retrieving a copy of the to-be-read data from a second storage system; and
serving the read access using the retrieved copy.
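The background-read error path in the claim (write dummy data at the destination, flag the mapping entry as incorrect, serve the access from a remote copy) can be sketched as below; the mapping-table layout and `fetch_remote` callback are assumptions of this sketch.

```python
def handle_background_read_error(mapping, lba, dummy, fetch_remote):
    """On a read error during a background process: write dummy data into
    the destination location, set the data-incorrect flag in the mapping
    entry, and serve the read from a copy on a second storage system."""
    entry = mapping[lba]
    entry["data"] = dummy              # dummy data at destination location
    entry["data_incorrect"] = True     # flag: current data is incorrect
    return fetch_remote(lba)           # retrieved copy serves the access
```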

US Pat. No. 10,769,017

ADAPTIVE MULTI-LEVEL CHECKPOINTING

Hewlett Packard Enterpris...

1. An apparatus comprising:a processor; and
a non-transitory computer readable medium storing machine readable instructions that when executed by the processor cause the processor to:
ascertain, for checkpoint data stored in node-local storage, a transfer parameter associated with transfer of the checkpoint data from the node-local storage to a parallel file system;
compare the transfer parameter to a specified transfer parameter threshold;
determine, based on the comparison of the transfer parameter to the specified transfer parameter threshold, whether to transfer the checkpoint data from the node-local storage to the parallel file system, wherein the transfer parameter includes input/output bandwidth or a percentage disk idle time; and
where the transfer parameter includes the input/output bandwidth, based on a determination that the input/output bandwidth is less than the specified input/output bandwidth threshold, cause the transfer of the checkpoint data from the node-local storage to the parallel file system; or
where the transfer parameter includes the percentage disk idle time, based on a determination that the percentage disk idle time is less than the specified percentage disk idle time threshold, cause the transfer of the checkpoint data from the node-local storage to the parallel file system.
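Both branches of the claim reduce to the same below-threshold test, differing only in which parameter is measured. A minimal sketch, with threshold values as illustrative assumptions:

```python
def should_transfer(transfer_parameter, value, thresholds):
    """Transfer checkpoint data from node-local storage to the parallel
    file system when the measured transfer parameter (I/O bandwidth or
    percentage disk idle time) is below its specified threshold."""
    return value < thresholds[transfer_parameter]
```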

US Pat. No. 10,769,016

STORING A PLURALITY OF CORRELATED DATA IN A DISPERSED STORAGE NETWORK

PURE STORAGE, INC., Moun...

1. A method for execution by a dispersed storage and task (DST) client module that includes a processor, the method comprises:obtaining a plurality of sorted data entries;
obtaining a data access performance goal level associated with the plurality of sorted data entries;
obtaining DSN performance information;
selecting compression parameters based on the data access performance goal level and the DSN performance information, wherein selecting the compression parameters includes:
determining predicted latency for each of a given set of compression parameters based on the DSN performance information; and
selecting one of the given set of compression parameters with a lowest predicted latency;
selecting sorted data entries of the plurality of sorted data entries based on the selected compression parameters to produce a data object;
compressing the data object to produce a compressed data object using the selected compression parameters; and
dispersed storage error encoding the compressed data object to produce one or more sets of encoded data slices for storage in a set of storage units.
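The selection step in the claim, picking the candidate compression parameters with the lowest predicted latency, can be sketched as below; the latency model is folded into a `predict_latency` callback, which is an assumption standing in for the DSN performance information.

```python
def select_compression_params(candidates, predict_latency):
    """Determine predicted latency for each candidate set of compression
    parameters and select the one with the lowest predicted latency."""
    return min(candidates, key=predict_latency)
```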

US Pat. No. 10,769,015

THROTTLING ACCESS REQUESTS AT DIFFERENT LAYERS OF A DSN MEMORY

INTERNATIONAL BUSINESS MA...

1. A method for execution by one or more processing modules of one or more computing devices of a dispersed storage network (DSN), the method comprises:determining, by a processing unit, an I/O (input/output) capacity of a storage pool storage level of DSN memory, wherein the DSN memory includes at least one storage pool;
determining, by the processing unit, an I/O capacity of a dispersed storage (DS) unit set storage level of the DSN memory, wherein the at least one storage pool includes at least one DS unit set;
determining, by the processing unit, an I/O capacity of a DS unit storage level of the DSN memory, wherein the at least one DS unit set includes at least one DS unit;
determining, by the processing unit, an I/O capacity of a memory device storage level of the DSN memory, wherein the at least one DS unit includes at least one memory device;
determining, by the processing unit, a required performance level of each of the storage pool storage level, the DS unit set storage level, the DS unit storage level, and the memory device storage level to meet operational demands of services including inter-device set rebuild and compaction that are operating at each of the storage pool storage level, the DS unit set storage level, the DS unit storage level, and the memory device storage level; and
setting, by the processing unit, a storage level throttle rate for each of the storage pool storage level, the DS unit set storage level, the DS unit storage level, and the memory device storage level by balancing the I/O capacity of each of the storage pool storage level, the DS unit set storage level, the DS unit storage level, and the memory device storage level, the required performance level of each of the storage pool storage level, the DS unit set storage level, the DS unit storage level, and the memory device storage level, a requester-reserved performance level, and a total I/O aggregated demand rate at a given point in time, wherein the requester-reserved performance level is a desired minimum level of performance reserved for external requesters of the DSN.

US Pat. No. 10,769,014

DISPOSABLE PARITY

Micron Technology, Inc., ...

1. An array controller for disposable parity in a NAND device, the array controller comprising:a volatile memory; and
processing circuitry to:
obtain a first portion of data;
store a first parity value for the first portion of data in a first memory location;
update a data-to-parity table with a first entry that includes a mapping between the first portion of data and the first memory location;
obtain a second portion of data;
store a second parity value for the second portion of data in a second memory location that is adjacent to the first memory location;
update the data-to-parity table with a second entry that includes a mapping between the second portion of data and the second memory location;
detect an error during a write of the first portion of data;
use the first parity value via a lookup in the data-to-parity table to correct the error; and
discard the first memory location and second memory location in response to a successful write of the first data portion and the second data portion.
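The controller's bookkeeping can be sketched as a data-to-parity table over volatile memory, with parity discarded once the covered writes succeed. Memory is modeled as a plain dict, an illustrative simplification of the claimed adjacent memory locations.

```python
class DisposableParity:
    """Sketch of the claimed array-controller state: parity values held in
    volatile memory slots, tracked by a data-to-parity table, and discarded
    in response to a successful write of the covered data portions."""

    def __init__(self):
        self.memory = {}           # memory location -> parity value
        self.data_to_parity = {}   # data portion id -> memory location

    def store_parity(self, data_id, location, parity):
        self.memory[location] = parity
        self.data_to_parity[data_id] = location

    def parity_for(self, data_id):
        # Lookup used to correct an error detected during a write.
        return self.memory[self.data_to_parity[data_id]]

    def discard(self, *data_ids):
        # Called after all covered portions are written successfully.
        for d in data_ids:
            self.memory.pop(self.data_to_parity.pop(d))
```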

US Pat. No. 10,769,013

CACHING ERROR CHECKING DATA FOR MEMORY HAVING INLINE STORAGE CONFIGURATIONS

Cadence Design Systems, I...

1. A method comprising:storing, on a memory, primary data inline with error checking data generated for the primary data, using split addressing for memory transactions, the primary data being stored on the memory at a range of inline primary data addresses and the error checking data being stored on the memory at a range of inline error checking data addresses, the range of inline primary data addresses not overlapping with the range of inline error checking data addresses;
generating, based on a first memory transaction for reading particular primary data stored on the memory, a first read command for reading the particular primary data from the memory and a second read command for reading particular error checking data from the memory, the particular error checking data being generated for the particular primary data;
adding, to a command queue, the first read command and the second read command;
executing one or more commands from the command queue, the executing the one or more commands comprising:
selecting for execution, from the command queue, the second read command for reading the particular error checking data;
determining, based on a comparison of inline error checking data addresses, whether the second read command for reading the particular error checking data hits existing error checking data currently stored in an error checking data buffer, the comparison of inline error checking data addresses comprising comparing an inline error checking data address associated with the second read command with an inline error checking data address associated with the error checking data buffer; and
in response to determining that the second read command for reading the particular error checking data does not hit existing error checking data currently stored in the error checking data buffer:
executing the second read command to read the particular error checking data from the memory; and
storing, in the error checking data buffer, the particular error checking data read from the memory.
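The buffer hit check above compares the read command's inline error-checking-data address against the address currently held in the buffer, and only executes a memory read on a miss. A sketch, with the buffer modeled as a single `(address, data)` pair, which is an assumption of this illustration:

```python
def read_error_checking_data(ecc_addr, buffer, read_from_memory):
    """Return the error checking data for ecc_addr and the updated buffer.
    On a hit the buffered data is reused; on a miss the read command is
    executed against memory and the result is stored in the buffer."""
    buf_addr, buf_data = buffer
    if buf_addr == ecc_addr:
        return buf_data, buffer              # hit: no memory read needed
    data = read_from_memory(ecc_addr)        # miss: execute the read command
    return data, (ecc_addr, data)            # store result in the buffer
```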

US Pat. No. 10,769,012

MEMORY WITH ERROR CORRECTION FUNCTION AND AN ERROR CORRECTION METHOD

1. A memory with error correction function, wherein the memory comprises a data array, an ECC array, a flag bit array, an ECC encoding module, an ECC decoding module, a flag bit generation module and a flag bit detection module;wherein:
the data array is configured to store data;
the flag bit generation module is configured, when data is being stored, to generate a flag bit and an encode enable signal in response to the one or more external control signals that affect the length of the data, the flag bit being stored in the flag bit array, and the encode enable signal being used to control the operation of the ECC encoding module;
the ECC encoding module is configured, when the encode enable signal is enabled, to receive the data and encode the data according to the ECC algorithm preset therein to generate parity bits;
the ECC array is configured to store the generated parity bits;
when data is being read from the data array, the flag bit detection module is configured to detect the flag bit, and in response to the flag bit, to generate a decode enable signal for controlling the operation of the ECC decoding module; and
the ECC decoding module is configured, when the decode enable signal is enabled, to detect and correct erroneous data using the parity bits from the ECC array and the data from the data array, and to output the corrected data;
wherein the flag bit indicates whether the length of the data is equal or not equal to the data length required by the preset ECC algorithm.

US Pat. No. 10,769,011

MEMORY DEVICE THAT CHANGES A WRITABLE REGION OF A DATA BUFFER BASED ON AN OPERATIONAL STATE OF AN ECC CIRCUIT

TOSHIBA MEMORY CORPORATIO...

1. A memory device, comprising:a semiconductor memory cell array including memory cells in pages, the memory cells arranged in rows and columns, memory cells in a same column connected to a same bit line, each memory cell in a page being assigned a column address from a continuous sequence of column addresses;
a controller circuit configured to communicate with a host through a serial interface, to receive write data from the host to be written to the semiconductor memory cell array as a page, and to store the write data in a data buffer before writing the write data to the semiconductor memory cell array as a page; and
an error-correcting code (ECC) circuit configured to generate an error correction code from the write data if the ECC circuit is in a valid state, wherein
when the ECC circuit is in the valid state and the controller circuit receives a first logical column address and first write data from the host, the controller circuit writes the first write data to a first physical column address of the data buffer, and writes the error correction code generated by the ECC circuit from the first write data to a second physical column address, the second physical column address being located within a first region of the data buffer, wherein data from the host cannot be written into the first region by the controller circuit when the ECC circuit is in the valid state, and the first physical column address being located within a second region of the data buffer, wherein data from the host can be written into the second region by the controller circuit when the ECC circuit is in the valid state, and
when the ECC circuit is in an invalid state and the controller circuit receives a third logical column address and second write data from the host, the controller circuit writes the second write data to a third physical column address of the data buffer, and writes the error correction code generated by the host from the second write data to a fourth physical column address of the data buffer, the third physical column address being located within the second region of the data buffer and the fourth physical column address being located within the first region of the data buffer.

US Pat. No. 10,769,010

DYNAMIC RANDOM ACCESS MEMORY DEVICES AND MEMORY SYSTEMS HAVING THE SAME

Samsung Electronics Co., ...

17. A memory system comprising:a system board;
a plurality of DRAM devices installed on the system board; and
a controller installed on the system board and configured to control the plurality of DRAM devices,
wherein each of the plurality of DRAM devices comprises:
first terminals through which n-bit first data and a first data strobe signal, which are applied from the controller, are input, wherein n is a positive integer;
second terminals through which n-bit second data and a second data strobe signal, which are applied from the controller, are input;
a control signal generator configured to generate a control signal; and
a cyclic redundancy code (CRC) unit configured to
perform a first CRC logical operation on a first data group comprising qn-bit first data generated by sequentially inputting n-bit first data with the first data strobe signal q times, wherein q is a positive integer,
generate a first CRC result signal,
perform a second CRC logical operation on a second data group including qn-bit second data generated by sequentially inputting n-bit second data with the second data strobe signal q times,
generate a second CRC result signal, and
generate an error signal based on the first CRC result signal and the second CRC result signal in response to the control signal having a first value and generate the error signal based on the second CRC result signal regardless of the first CRC result signal in response to the control signal having a second value.
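The error-signal selection at the end of the claim is a small piece of combinational logic: with the control signal at its first value both CRC results contribute, while at its second value only the second CRC result matters. A sketch, with the control-signal encoding chosen for illustration:

```python
def error_signal(control, crc1_fail, crc2_fail):
    """Generate the error signal from the two CRC result signals according
    to the control signal's value (1 = first value, 2 = second value)."""
    if control == 1:
        return crc1_fail or crc2_fail   # both CRC results contribute
    return crc2_fail                    # second result only, regardless of first
```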

US Pat. No. 10,769,009

ROOT CAUSE ANALYSIS FOR CORRELATED DEVELOPMENT AND OPERATIONS DATA

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method for root cause analysis, the method comprising:receiving, by a processor, operations data associated with a plurality of applications;
performing a trend analysis on the operations data to determine an operations issue associated with at least one of the plurality of applications; and
performing a root-cause analysis on the operations issue to identify a set of candidate applications from the plurality of applications that may be a cause of the operations issue by:
displaying, on a user dashboard, call graph data associated with the operations data for each candidate application, wherein the call graph data includes one or more weighted features of each candidate application;
selecting, by a user, a root cause feature from the one or more weighted features from a root cause application based at least in part on the call graph data displayed on the user dashboard.

US Pat. No. 10,769,008

SYSTEMS AND METHODS FOR AUTOMATIC FORMAL METASTABILITY FAULT ANALYSIS IN AN ELECTRONIC DESIGN

Cadence Design Systems, I...

1. A computer-implemented method for use in a formal verification of an electronic design comprising:receiving, using at least one processor, a clock-domain crossing electronic design;
analyzing the electronic design;
automatically generating a set of nodes where metastability can be originated;
automatically generating one or more preconditions representative of metastability effects at the output of at least one synchronizer associated with the electronic design;
automatically generating, based upon, at least in part, the one or more preconditions, one or more properties configured to analyze a propagation of the metastability effects associated with the at least one synchronizer; and
visually displaying at least one of the metastability effects and one or more unaffected areas at an annotated graph configured to display the at least one synchronizer and at least one clock domain.

US Pat. No. 10,769,007

COMPUTING NODE FAILURE AND HEALTH PREDICTION FOR CLOUD-BASED DATA CENTER

MICROSOFT TECHNOLOGY LICE...

1. A system associated with a cloud computing environment having a plurality of computing nodes, comprising:a node historical state data store containing historical node state data, the historical node state data including a first set of time-series data that represents an attribute of a node during a period of time prior to a node failure, and a first set of non-time-series data that represents a second attribute of a node during a period of time prior to a node failure;
a node failure prediction algorithm creation platform, coupled to the node historical state data store, to generate a machine-learned node failure prediction algorithm, where generation of the node failure prediction algorithm comprises conversion of the first set of time-series data into a second set of non-time-series data, and combination of the second set of non-time-series data with the first set of non-time-series data;
an active node data store containing information about the plurality of computing nodes in the cloud computing environment, including, for each node, a second set of time-series data that represents an attribute of that node over time, a third set of non-time-series data generated based on the second set of time-series data, a fourth set of non-time-series data that represents a second attribute of a node during a period of time prior to a node failure, and a fifth set of non-time-series data generated based on the third set of non-time-series data and the fourth set of non-time-series data; and
a virtual machine assignment platform, coupled to the active node data store, to:
execute the node failure prediction algorithm to determine a node failure probability score for each of the plurality of computing nodes based on the fifth set of non-time-series data in the active node data store,
create a ranked list of two or more of the plurality of computing nodes based on the determined node failure probability scores,
select one of the two or more of the plurality of computing nodes based at least in part on the ranked list, and
assign a virtual machine to the selected computing node.
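The pipeline this claim describes, converting per-node time-series data into non-time-series summary features, combining them with static features, scoring each node, and assigning the virtual machine by rank, can be sketched as follows. The summary statistics and the hand-weighted score stand in for the machine-learned predictor; the feature names and weights are assumptions.

```python
from statistics import mean, pstdev

def summarize(series):
    """Convert a time-series into non-time-series features (the claimed conversion)."""
    return {"mean": mean(series), "std": pstdev(series),
            "trend": series[-1] - series[0]}

def failure_score(features):
    """Stand-in for the machine-learned predictor: a hand-weighted score."""
    return (0.5 * features["std"] + 0.3 * max(features["trend"], 0)
            + 0.2 * features["static"])

def rank_nodes(nodes):
    """nodes: {name: (attribute_time_series, static_feature)} -> names,
    best candidate (lowest failure probability) first."""
    scored = []
    for name, (series, static) in nodes.items():
        feats = summarize(series)
        feats["static"] = static          # combine with the non-time-series data
        scored.append((failure_score(feats), name))
    return [name for _, name in sorted(scored)]

# e.g. CPU temperature samples plus one static health feature per node:
nodes = {"node-a": ([60, 61, 60, 62], 0.1),
         "node-b": ([60, 70, 85, 95], 0.9)}
ranked = rank_nodes(nodes)
print(ranked[0])   # the VM is assigned to the lowest-risk node
```

The steadily heating `node-b` scores far worse than the stable `node-a`, so the ranked list puts `node-a` first for VM assignment.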

US Pat. No. 10,769,006

ENSEMBLE RISK ASSESSMENT METHOD FOR NETWORKED DEVICES

CISCO TECHNOLOGY, INC., ...

1. A computer-implemented method comprising:at a management entity:receiving device fingerprints representing corresponding devices connected to one or more networks, each device fingerprint including a multi-bit word indicating hardware, software, network configuration, and failure features for a corresponding one of the devices;
processing the device fingerprints using statistical risk of failure scoring methods to produce risk of failures for each device;
processing the device fingerprints using machine learning risk of failure scoring methods that are different from the statistical risk of failure scoring methods to produce risk of failures for each device;
combining into a composite risk of failure for each device the risk of failures produced by the statistical risk of failure scoring methods and the risk of failures produced by the machine learning risk of failure scoring methods;
ranking the devices based on the composite risk of failures for the devices, to produce a risk ranking of the devices; and
outputting the risk ranking.
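The ensemble structure of this claim, running a statistical scorer and a distinct machine-learning scorer over the same device fingerprints and combining them into a composite used for ranking, can be sketched like this. The bit layout of the fingerprint, the two toy scorers, and the simple mean used to combine them are all assumptions.

```python
from statistics import mean

def statistical_score(fingerprint: int) -> float:
    """Toy statistical method: fraction of failure-feature bits set (bits 0-3)."""
    return bin(fingerprint & 0xF).count("1") / 4

def ml_score(fingerprint: int) -> float:
    """Stand-in for a learned model: weight on hardware/software bits (4-7)."""
    return bin((fingerprint >> 4) & 0xF).count("1") / 4

def composite_risk(fingerprint: int) -> float:
    """Combine the two families of scores (a simple mean here)."""
    return mean([statistical_score(fingerprint), ml_score(fingerprint)])

def rank_devices(fingerprints: dict) -> list:
    """Rank devices by composite risk of failure, riskiest first."""
    return sorted(fingerprints,
                  key=lambda d: composite_risk(fingerprints[d]), reverse=True)

devices = {"router-1": 0b00000011, "switch-2": 0b11110000}
print(rank_devices(devices))
```

Because the two scorers look at different parts of the fingerprint, a device that looks safe to one method can still rank high overall, which is the point of combining dissimilar scoring families.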

US Pat. No. 10,769,005

COMMUNICATION INTERFACE BETWEEN A FUSION ENGINE AND SUBSYSTEMS OF A TACTICAL INFORMATION SYSTEM

Goodrich Corporation, Ch...

1. A subsystem of a tactical information system (TIS), the system comprising:a memory configured to store instructions;
a processor disposed in communication with the memory, wherein:
the processor, upon execution of the instructions is configured to:
receive first standardized entity messages that include target information from multiple automatic target recognition (ATR) systems;
parse the first standardized entity messages to extract the target information;
provide the extracted target information to a fusion algorithm for fusion processing that determines whether to fuse the target information from different ATR systems and fuses the extracted target information to generate fused target information about a single target when determined to do so;
receive fused target information about the single target, if any, from the fusion algorithm; and
generate a second standardized entity message that includes the fused target information about the single target,
wherein the processor, upon execution of the instructions is further configured to:
receive from the fusion algorithm a non-standardized success indication that indicates success or failure of fusing the extracted target information by the fusion algorithm; and
transmit a standardized product processing report externally that indicates the success or failure of the fusing.

US Pat. No. 10,769,004

PROCESSOR CIRCUIT, INFORMATION PROCESSING APPARATUS, AND OPERATION METHOD OF PROCESSOR CIRCUIT

FUJITSU LIMITED, Kawasak...

1. A processor circuit comprising:multiple processor cores;
multiple individual memories, each of the multiple individual memories being associated with one of the multiple processor cores and being configured to be accessed from the associated one of the multiple processor cores;
multiple shared memories, each of the multiple shared memories being associated with a first processor core, the first processor core being any one of the multiple processor cores, each of the multiple shared memories being configured to be accessed from either the first processor core or a first adjacent processor core, the first adjacent processor core being one of the multiple processor cores and being adjacent to the first processor core in a first direction among the multiple processor cores;
multiple memory control circuits, each of the multiple memory control circuits being provided between the first processor core and an associated individual memory of the multiple individual memories and being configured to output a read request from the first processor core to the associated individual memory belonging to the first processor core;
multiple selectors, each of the multiple selectors being associated with one of the multiple shared memories and being configured to select a read request from one of the first processor core, to which the associated one of the multiple shared memories belong, and the first adjacent processor core, output the selected read request to the associated one of the multiple shared memories, select a transfer request from one of a specific memory control circuit of the multiple memory control circuits and another memory control circuit of the multiple memory circuits belonging to a second adjacent processor core adjacent to the specific memory control circuit in a second direction, and output the selected transfer request to the associated one of the multiple shared memories; and
a control core configured to control the multiple processor cores;
wherein, in a case where the control core sets, in each of the multiple memory control circuits, a transfer source address of one of the multiple individual memories and the multiple shared memories that store transfer data to be transferred among the multiple processor cores and a transfer destination address of one of the multiple shared memories to which the transfer data is to be transferred and also sets transfer selection information in each of the multiple selectors,
with respect to each of the multiple memory control circuits, when an address of the read request from the first processor, to which a specific memory control circuit belongs, is identical to the transfer source address, the specific memory control circuit controls the transfer data in accordance with the read request to be transferred to the transfer destination address via a specific selector of the multiple selectors in which the transfer selection information is set,
wherein, in a case where the control core sets read selection information in each of the multiple selectors, with respect to each of the multiple shared memories, read data is read by one of the first processor core, to which an associated shared memory of the multiple shared memories belongs, and the first adjacent processor core from the associated shared memory via a specific selector of the multiple selectors in which the read selection information is set.

US Pat. No. 10,769,003

APPLICATION SERVER PROGRAMMING LANGUAGE CLOUD FUNCTIONS

SAP SE, Walldorf (DE)

1. A system comprising:at least one hardware processor; and
a computer-readable medium storing instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform operations comprising:
establishing a bi-directional communications channel between an application operating on a first device in a computer network and an application server;
establishing a push channel on top of the bi-directional communications channel;
receiving, at the application server, an indication of a first trigger event from the first device, the application server receiving the indication via the push channel, wherein the first trigger event is a request for runtime logs, from a test and log console, using correlation identifications; and
executing, at the application server, a first function based on the first trigger event.

US Pat. No. 10,769,002

MANAGING A VIRTUAL OBJECT

INTERNATIONAL BUSINESS MA...

1. A server device, comprising:a processor; and
a memory communicatively coupled to said processor; said memory comprising executable code that causes said processor, upon execution of said executable code, to:
store a virtual object in a database accessible to the server device, said database comprising a number of avatars and a number of virtual objects distinct from said avatars; and
in response to a non-subscriber user performing, with one of said avatars, a first action on said virtual object, send a message from said server device to at least one user that subscribes to said virtual object;
wherein said at least one user that subscribes to said virtual object comprises at least one user subscribing to said virtual object that is not a member of a community to which said non-subscriber user belongs.

US Pat. No. 10,769,001

SYSTEM AND METHOD FOR PROCESS STATE PROCESSING

DiDi Research America, LL...

1. A system for processing process states, the system comprising:one or more processors; and
a memory storing instructions that, when executed by the one or more processors, cause the system to perform:
obtaining process event information of a computing device, the process event information characterizing states of processes of the computing device;
storing the process event information within a queue;
determining graph information based on the process event information within the queue, the process event information including a respective unique process identifier for each of the processes of the computing device, each of the respective unique process identifiers being based on a combination of a respective reusable process identifier and a respective process start time, the graph information characterizing states of processes of the computing device using nodes and edges, each node of the graph information representing a respective process of the processes of the computing device, and each edge of the graph information representing a respective relationship between two or more processes of the processes of the computing device;
determining, based on the respective edges of the graph information and the respective unique process identifiers based on the combination of the respective reusable process identifiers and the respective process start times, an order of any of the processes of the computing device or process events associated with the process event information; and
storing the graph information within a graph database.
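The key device in this claim is the unique process identifier built from a reusable process identifier plus the process start time, so that a reused PID yields distinct graph nodes. A minimal sketch of building such a graph and ordering processes from its edges and start times (the DFS-based ordering is an assumption; the claim does not prescribe one):

```python
def unique_pid(pid: int, start_time: float) -> tuple:
    """Unique process identifier: reusable PID combined with its start time."""
    return (pid, start_time)

def build_graph(events):
    """events: (parent_upid, child_upid) pairs -> adjacency dict of nodes/edges."""
    graph = {}
    for parent, child in events:
        graph.setdefault(parent, []).append(child)
        graph.setdefault(child, [])
    return graph

def order_events(graph):
    """Order processes using the edges plus the start-time half of each UPID."""
    seen, order = set(), []
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for child in sorted(graph[node], key=lambda u: u[1]):
            visit(child)
        order.append(node)
    for node in sorted(graph, key=lambda u: u[1]):
        visit(node)
    return order[::-1]   # parents before children, earlier starts first

# PID 100 is reused at two different start times: two distinct graph nodes.
init = unique_pid(1, 0.0)
first = unique_pid(100, 10.0)
reused = unique_pid(100, 50.0)
g = build_graph([(init, first), (init, reused)])
print(order_events(g)[0])   # the init process orders first
```

Without the start-time component, `first` and `reused` would collapse into one node and the event order across the PID reuse would be unrecoverable.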

US Pat. No. 10,769,000

METHOD AND APPARATUS FOR TRANSPARENTLY ENABLING COMPATIBILITY BETWEEN MULTIPLE VERSIONS OF AN APPLICATION PROGRAMMING INTERFACE (API)

CLOUDFLARE, INC., San Fr...

1. A method comprising:receiving, at a first compute server of a plurality of compute servers, a first Application Programming Interface (API) request from a client device, wherein the plurality of compute servers are part of a distributed cloud computing platform, and wherein the receiving is a result of a Domain Name System (DNS) request for a domain associated with the first API request resolving to the first compute server instead of an origin server;
determining whether the first API request is of a first version of an API that is different from a second version of the API used in the origin server to which the first API request is destined; and
responsive to determining that the first API request is of the first version of the API that is different from the second version of the API used in an origin server to which the first API request is destined, performing:
executing, by a single process at the first compute server, an API compatibility enabler, wherein the API compatibility enabler is run in one of a plurality of isolated execution environments to convert the first API request into a second API request in the second version of the API; and
fulfilling the second API request instead of the first API request.
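The compatibility-enabler step, detecting that an incoming request uses a different API version than the origin expects, converting it, and fulfilling the converted request, can be sketched in a few lines. The field rename (`user_name` to `username`) and the version labels are hypothetical; real enablers would carry a full translation table.

```python
def convert_v1_to_v2(request: dict) -> dict:
    """API compatibility enabler sketch: rewrite a v1 request into v2 form.
    The renamed field is an illustrative assumption."""
    converted = dict(request)
    converted["version"] = "v2"
    if "user_name" in converted:                 # v1 field renamed in v2
        converted["username"] = converted.pop("user_name")
    return converted

def handle(request: dict, origin_version: str = "v2") -> dict:
    """Fulfil the converted request instead of the original when versions differ."""
    if request.get("version") != origin_version:
        request = convert_v1_to_v2(request)
    return {"status": 200, "fulfilled": request}

resp = handle({"version": "v1", "user_name": "alice"})
print(resp["fulfilled"]["username"])
```

The client keeps speaking v1 while the origin only ever sees v2 requests, which is the "transparent" part of the title.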

US Pat. No. 10,768,999

INTELLIGENT LOAD SHEDDING FOR MULTI-CHANNEL PROCESSING SYSTEMS

HAMILTON SUNDSTRAND CORPOR...

1. A system for performing intelligent load shedding for a multi-channel processing system, the system comprising:a multi-channel processing system, wherein each channel of the multi-channel processing system includes a plurality of processors, wherein each processor of the plurality of processors of each channel includes different types of processors;
a plurality of links coupling each channel with each other channel in the multi-channel processing system, wherein the links are used to transmit status information of the plurality of processors; and
a plurality of cooling elements coupled to each channel having the plurality of processors, wherein the plurality of cooling elements are configured to remove heat from the multi-channel processing system,
wherein the multi-channel processing system is configured to: selectively disable a processor of the plurality of processors based at least in part on the detected failure of the one or more cooling elements and the obtained status information for the plurality of processors;
detect a failure of a subsequent cooling element;
determine a type of processor associated with the subsequently failed cooling element; and
disable one or more processors of a dissimilar type than the subsequently failed cooling element to prevent a common mode failure and maintain other dissimilar processors of the one or more processors in operation in a common mode.

US Pat. No. 10,768,998

WORKLOAD MANAGEMENT WITH DATA ACCESS AWARENESS IN A COMPUTING CLUSTER

INTERNATIONAL BUSINESS MA...

1. A method for workload management with data access awareness in a computing cluster, by a processor, comprising:configuring a workload manager within the computing cluster to include a data requirements evaluator module and a scheduler module; and
in response to receiving an input workload for scheduling by the workload manager:
retrieving, by the data requirements evaluator module, a set of inputs from a storage system, wherein the inputs each include:
an indication of whether the input workload is intensive in Input/Output (I/O) of new data or intensive in I/O of existing data,
data locality proportions for a set of files associated with the input workload, wherein the data locality proportions specifies a proportion of a total data of the set of files that is respectively stored on each of a plurality of cluster hosts of the computing cluster, and
data access costs specified for each pair of the plurality of cluster hosts in the computing cluster, wherein the data access costs specifies, for each of the plurality of cluster hosts, a cost of accessing data stored on any other one of the plurality of cluster hosts in the computing cluster, and wherein the data access costs are computed based on networking latencies between the plurality of cluster hosts and storage device access latencies within each of the plurality of cluster hosts;
generating, by the data requirements evaluator module, a list of the plurality of cluster hosts ranked for performing the input workload according to the set of inputs;
providing the ranked list of cluster hosts to the scheduler module; and
generating, by the scheduler module, a scheduling of the input workload to certain hosts specified in the ranked list of cluster hosts within the computing cluster where the generated scheduling is optimized with the set of inputs.
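The data-requirements-evaluator step, ranking cluster hosts from data locality proportions and inter-host access costs, can be sketched with a simple expected-cost model. The cost formula below is an assumption; the claim only requires that locality, access costs, and the new-data/existing-data distinction feed the ranking.

```python
def rank_hosts(locality, access_cost, new_data_intensive=False):
    """Rank cluster hosts for a workload, cheapest expected data access first.
    locality: {host: fraction of the workload's data stored on that host}
    access_cost: {(host, other): cost of reading data held on `other`}"""
    def expected_cost(host):
        if new_data_intensive:     # new-data I/O: existing locality matters less
            return 0.0
        cheapest_remote = min(access_cost[(host, o)]
                              for o in locality if o != host)
        return (1 - locality[host]) * cheapest_remote
    return sorted(locality, key=expected_cost)

locality = {"h1": 0.8, "h2": 0.1, "h3": 0.1}
cost = {(a, b): 5 for a in locality for b in locality if a != b}
print(rank_hosts(locality, cost)[0])   # h1 holds most of the data
```

The scheduler module would then place the workload on the highest-ranked hosts, which is the "optimized with the set of inputs" scheduling the claim ends on.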

US Pat. No. 10,768,997

TAIL LATENCY-BASED JOB OFFLOADING IN LOAD-BALANCED GROUPS

INTERNATIONAL BUSINESS MA...

1. A method comprising:determining a type of a request that is currently being processed at a data processing system;
selecting a distribution from a set of processing time distributions, the distribution forming a model that is applicable to the type;
computing a threshold point for the model, wherein a processing time that exceeds a processing time at the threshold point is regarded as exhibiting tail latency according to the model, tail latency comprising a delay in processing of the request due to a reason other than a utilization of a resource of the data processing system exceeding a threshold utilization and a size of a queue in the data processing system exceeding a threshold size;
evaluating that the request will experience tail latency during processing at the data processing system;
aborting, responsive to the evaluating and prior to the request reaching the threshold, processing of the request at the data processing system;
offloading the request for processing at a peer data processing system in a load-balanced group of data processing systems;
identifying a first queued request in the queue of the data processing system, the first queued request awaiting processing at the data processing system;
determining a first type of the first queued request;
selecting a first model from a first subset of the set of models, the first model corresponding to the first type;
estimating a first processing time of the first queued request using the first model;
evaluating that the first processing time will exceed a first acceptable processing time for the first queued request;
removing, responsive to the first processing time exceeding the first acceptable processing time, the first queued request from the queue; and
offloading the first queued request for processing at a second peer data processing system in the load-balanced group of data processing systems.
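The core decision in this claim, selecting a processing-time model by request type, computing a threshold point beyond which the request counts as exhibiting tail latency, and aborting/offloading to a peer, can be sketched as follows. Modeling each type as a mean and standard deviation with a `mean + k*std` threshold is an assumption standing in for the claim's distribution set.

```python
def tail_threshold(mean_ms: float, std_ms: float, k: float = 3.0) -> float:
    """Threshold point for a processing-time model: beyond mean + k*std the
    request is regarded as exhibiting tail latency (k is an assumption)."""
    return mean_ms + k * std_ms

def dispatch(request_type, elapsed_ms, models, peers):
    """Abort-and-offload decision for the request currently being processed.
    models: {type: (mean_ms, std_ms)}; peers: load-balanced peer systems."""
    mean_ms, std_ms = models[request_type]        # distribution chosen by type
    if elapsed_ms > tail_threshold(mean_ms, std_ms):
        return ("offload", peers[0])              # process at a peer instead
    return ("continue", None)

models = {"read": (10.0, 2.0), "write": (50.0, 10.0)}
peers = ["peer-1", "peer-2"]
print(dispatch("read", 20.0, models, peers))    # beyond 10 + 3*2 = 16: offload
print(dispatch("write", 60.0, models, peers))   # within 50 + 3*10 = 80: continue
```

The same check applied to queued requests, with estimated rather than elapsed time, gives the second half of the claim (removing a queued request and offloading it to another peer).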

US Pat. No. 10,768,996

ANTICIPATING FUTURE RESOURCE CONSUMPTION BASED ON USER SESSIONS

VMWARE, INC., Palo Alto,...

1. A system, comprising:a computing device comprising a processor and a memory;
machine readable instructions stored in the memory that, when executed by the processor, cause the computing device to at least:
receive a message comprising a prediction of a future number of concurrent user sessions to be hosted by a virtual machine within a predefined future interval of time;
determine that the future number of concurrent user sessions will cause the virtual machine to cross a predefined resource threshold during the predefined future interval of time; and
send a message to a first hypervisor hosting the virtual machine to migrate the virtual machine to a second hypervisor.
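The threshold test in this claim, deciding whether a predicted number of concurrent user sessions will push the VM over a resource limit, reduces to a small calculation. The per-session memory cost and the 90% threshold below are assumptions for illustration.

```python
def needs_migration(predicted_sessions: int, per_session_mb: int,
                    vm_memory_mb: int, threshold: float = 0.9) -> bool:
    """True if the predicted sessions would cross the VM's resource threshold,
    in which case a migration message goes to the hosting hypervisor."""
    return predicted_sessions * per_session_mb > threshold * vm_memory_mb

print(needs_migration(120, 64, 8192))   # 7680 MB > 90% of 8192 MB
print(needs_migration(50, 64, 8192))    # 3200 MB fits comfortably
```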

US Pat. No. 10,768,995

ALLOCATING HOST FOR INSTANCES WITH ANTI AFFINITY RULE WITH ADAPTABLE SHARING TO ALLOW INSTANCES ASSOCIATED WITH DIFFERENT FAILURE DOMAINS TO SHARE SURVIVING HOSTS

TELEFONAKTIEBOLAGET LM ER...

1. A method of managing a communications network by allocating hosts for virtual network function instances of a virtual network function component, the method comprising:receiving a request to allocate instances to be shared by a virtual network function component;
obtaining, from the request, a number N indicating a minimum number of the instances that are required, and a number M indicating how many additional ones of the instances are to be allocated;
responsive to the request, providing an indication of a maximum number of the instances to be allocated to one of the hosts, wherein the maximum number of the instances is determined by automatically deriving at least one of the maximum number according to a specified number of instances still available after loss of at least one host, the maximum number according to a specified total number of hosts, or the maximum number according to a specified fault tolerance in terms of how many of the allocated hosts can be lost while still leaving sufficient hosts that the virtual network function component can still be shared across at least N of the instances; and
responsive to the request requesting that the allocation of the instances be to different ones of the hosts and if the sharing of the instances by the virtual network function component can be adapted in the event of unavailability of any of the allocated instances, allocating automatically N+M of the instances to less than N+M of the hosts across a plurality of separate failure domains based on the maximum number of instances.
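The fault-tolerance derivation in this claim has a compact arithmetic core: if losing F hosts must still leave at least N of the N+M instances, and each host carries at most `max_per_host` instances, then F·max_per_host ≤ M. A sketch of that derivation and the resulting minimum host count (the exact formulas are one reading of the claim, not its text):

```python
import math

def max_instances_per_host(n: int, m: int, fault_tolerance: int) -> int:
    """Maximum instances per host so that losing `fault_tolerance` hosts still
    leaves at least n of the n+m instances: losing F hosts removes at most
    F * max_per_host instances, so F * max_per_host must not exceed m."""
    return max(1, m // fault_tolerance)

def min_hosts(n: int, m: int, fault_tolerance: int) -> int:
    """Fewest hosts needed to place all n+m instances at that per-host cap,
    i.e. fewer than n+m hosts whenever the cap exceeds one."""
    per_host = max_instances_per_host(n, m, fault_tolerance)
    return math.ceil((n + m) / per_host)

# N=4 instances required, M=2 additional, tolerate the loss of 1 host:
print(max_instances_per_host(4, 2, 1))   # 2 instances per host
print(min_hosts(4, 2, 1))                # 3 hosts for the 6 instances
```

With 6 instances spread 2-per-host over 3 hosts, losing any single host leaves exactly the required 4 instances, so instances from different failure domains can share the surviving hosts, as the title describes.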

US Pat. No. 10,768,994

METHOD AND SYSTEM FOR MODELING AND ANALYZING COMPUTING RESOURCE REQUIREMENTS OF SOFTWARE APPLICATIONS IN A SHARED AND DISTRIBUTED COMPUTING ENVIRONMENT

ServiceNow, Inc., Santa ...

1. A system to manage a plurality of applications in a distributed computing environment, the system comprising:an application manager executable by one or more processors in a distributed environment, wherein the application manager is configured to:
receive a service specification for a first application of the plurality of applications in the distributed computing environment, wherein the service specification defines a first plurality of computing resources used to execute the first application; and
request the first plurality of computing resources out of a plurality of potential computing resources that include the first plurality of computing resources; and
a resource supply manager executable by one or more processors in the distributed computing environment, wherein the resource supply manager is in communication with the application manager and is configured to:
manage the plurality of potential computing resources in the distributed computing environment,
receive the request from the application manager;
responsive to the request, determine availability of the first plurality of computing resources within the distributed computing environment according to resource allocation policies; and
allocate the first plurality of computing resources to the application manager, wherein the application manager is configured to manage allocation of the first plurality of computing resources to the first application.

US Pat. No. 10,768,993

ADAPTIVE, PERFORMANCE-ORIENTED, AND COMPRESSION-ASSISTED ENCRYPTION SCHEME

NICIRA, INC., Palo Alto,...

1. A method for an adaptive, performance-oriented, and compression-assisted encryption scheme implemented on a host computer to adaptively improve utilization of CPU resources, the method comprising:queueing a new data packet and determining a size of the new data packet;
based on, at least in part, historical data, determining a plurality of already encrypted data packets;
based on, at least in part, information stored for the plurality of already encrypted data packets, determining an average ratio of compression for the plurality of already encrypted data packets;
based on, at least in part, the size of the new data packet, retrieving a throughput of compression value and a throughput of encryption value;
based on, at least in part, the average ratio of compression, the throughput of compression value and the throughput of encryption value, determining whether the throughput of compression value exceeds a threshold based on the throughput of encryption value, thereby indicating a prediction that compressing the new data packet reduces overall load on CPU resources; and
in response to determining that the throughput of compression value exceeds the threshold, generating a compressed new data packet by compressing the new data packet.
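The compress-or-not decision, built from the historical average compression ratio, a throughput-of-compression value, and a throughput-of-encryption value, can be sketched with a simple CPU-time model. This model (compress when the compression time is less than the encryption time saved by the expected shrinkage) is an assumption; the claim only requires a threshold derived from those three inputs.

```python
def should_compress(packet_size: int, history: list,
                    throughput_compress: float,
                    throughput_encrypt: float) -> bool:
    """Decide whether compressing before encrypting reduces overall CPU load.
    history: compression ratios of already-encrypted packets (e.g. 0.5 means
    the packet shrank to half its size)."""
    avg_ratio = sum(history) / len(history)
    saved_encrypt_time = packet_size * (1 - avg_ratio) / throughput_encrypt
    compress_time = packet_size / throughput_compress
    return compress_time < saved_encrypt_time

history = [0.4, 0.5, 0.6]              # recent packets compressed to ~50%
print(should_compress(1500, history, throughput_compress=4000.0,
                      throughput_encrypt=1000.0))
```

With a fast compressor and an expensive cipher the check passes and the new packet is compressed first; as the historical ratio approaches 1.0 the saved encryption time vanishes and the scheme adaptively stops compressing.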

US Pat. No. 10,768,992

PROVISIONING A NEW NETWORK DEVICE IN A STACK

Hewlett Packard Enterpris...

1. A method comprising:detecting, by a master network device in a stack, a new network device in the stack;
in response to detecting, determining, by the master network device, whether a member network device of the stack is missing;
in response to determining that the member network device of the stack is missing, identifying, by the master network device, each active adjacent member of the member network device;
determining, by the master network device, whether each active adjacent member of the member network device has detected the new network device in the stack; and
in response to determining that each active adjacent member of the member network device has detected the new network device in the stack, provisioning, by the master network device, the new network device with a member ID of the member network device to the stack.
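The provisioning condition in this claim, reuse a missing member's ID only when every active adjacent member of that missing member has detected the new device, can be sketched over a small stack model. The dictionary layout is an assumption for illustration.

```python
def provision_missing_member(stack):
    """stack: {member_id: {"active": bool, "neighbors": [ids], "sees_new": bool}}.
    Return the member ID to assign to the new device, or None if no missing
    member's active adjacent members all confirm the detection."""
    for member_id, info in stack.items():
        if info["active"]:
            continue                               # this member is not missing
        adjacent = [stack[n] for n in info["neighbors"] if stack[n]["active"]]
        if adjacent and all(n["sees_new"] for n in adjacent):
            return member_id                       # provision with this ID
    return None

stack = {
    1: {"active": True,  "neighbors": [2],    "sees_new": True},
    2: {"active": False, "neighbors": [1, 3], "sees_new": False},  # missing
    3: {"active": True,  "neighbors": [2],    "sees_new": True},
}
print(provision_missing_member(stack))   # member ID 2 is reused
```

Requiring agreement from all active neighbors keeps a device plugged into the wrong slot from silently inheriting a departed member's identity.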

US Pat. No. 10,768,991

LOADING MODELS ON NODES HAVING MULTIPLE MODEL SERVICE FRAMEWORKS

Alibaba Group Holding Lim...

1. A computer-implemented method, comprising:determining, based on a preset execution script and resource information of multiple execution nodes, loading-tasks corresponding to the execution nodes, wherein each execution node is deployed on a corresponding cluster node; and
sending loading requests to the execution nodes, thereby causing the execution nodes to start execution processes based on the corresponding loading requests, wherein:
the execution processes start multiple model service frameworks on each cluster node;
multiple models are loaded onto each of the model service frameworks;
each loading request comprises loading-tasks corresponding to the execution node to which the loading request was sent; and
the execution processes comprise a respective execution process for each model service framework.

US Pat. No. 10,768,990

PROTECTING AN APPLICATION BY AUTONOMOUSLY LIMITING PROCESSING TO A DETERMINED HARDWARE CAPACITY

International Business Ma...

1. A method to protect an application, comprising:extracting a set of hardware characteristics for an underlying computing system, the set of hardware characteristics comprising processing power, memory, and a topology;
for each hardware characteristic of the set of hardware characteristics, determining an operating limit based at least in part on a baseline value determined for the application with respect to the hardware characteristic;
computing a recommended limit for a system feature of the underlying computing system based on the operating limits determined for each of the hardware characteristics; and
during runtime execution of the application, enforcing the recommended limit for the system feature to protect the application.

US Pat. No. 10,768,989

VIRTUAL VECTOR PROCESSING

Intel Corporation, Santa...

1. An apparatus comprising:a first logic circuitry to allocate a first portion of one or more operations corresponding to a virtual vector request to a first processor core; and
a second logic circuitry to generate a first signal corresponding to a second portion of the one or more operations.

US Pat. No. 10,768,988

REAL-TIME PARTITIONED PROCESSING STREAMING

PEARSON EDUCATION, INC., ...

1. A system for processing data sets by using a distributed network to generate and process partitioned streams, the system comprising:a message allocator processor that:
receives a plurality of data sets from one or more producer devices;
for each of the plurality of data sets:
identifies a tag or characteristic of the data set;
identifies an initial partition stream from amongst a plurality of initial partition streams that corresponds to the tag or the characteristic; and
appends the data set to the identified initial partition stream;
a partition controller processor that manages a set of task processors,
a first task processor in the set of task processors, wherein the first task processor is configured to:
generate, via performance of a first task, processed data sets corresponding to the data sets appended to the identified initial partition stream;
generate a processed partition stream that includes the processed data sets in the identified initial partition stream; and
facilitate routing the processed partition stream for further processing of the processed data sets in accordance with one or more other tasks;
a second task processor in the set of task processors, wherein the second task processor is configured to:
calculate a plurality of scores, each score of the plurality of scores calculated for a data set of the data sets appended to the identified initial partition stream; and
a third task processor in the set of task processors, wherein the third task processor is configured to:
determine that each data set of the data sets appended to the identified initial partition stream was received from the one or more producer devices within a given time period;
generate a real-time analytic variable based on the plurality of scores; and
facilitate providing the real-time analytic variable to a client device.

US Pat. No. 10,768,987

DATA STORAGE RESOURCE ALLOCATION LIST UPDATING FOR DATA STORAGE OPERATIONS

Commvault Systems, Inc., ...

1. A non-transitory computer-readable medium storing instructions that, when executed by at least one computing device in a data management system, cause the computing device to perform operations comprising:assigning a category code to data management requests that have similar data management resource requirements for performing data management operations in the data management system;
from a list of data management requests pending in the data management system,
attempting to perform a first data management request by assigning the first data management request to one or more data management resources in the data management system; and
if the first data management request fails,
determining at least one data management resource that is at least partially responsible for the failure of the first data management request,
identifying other data management requests having the category code of the failed first data management request, and
updating the list of data management requests to indicate that the data management system should not perform the identified other data management requests having the category code of the failed first data management request and wherein the updating of the list of data management requests comprises removing from the list the identified other data management requests having the category code of the failed first data management request.
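The list-updating step, after one request fails, remove every pending request that shares the failed request's category code (and therefore needs the same unavailable resources), can be sketched directly. The category labels are illustrative.

```python
def update_request_list(pending, failed_request, categories):
    """pending: ordered list of request ids; categories: {request_id: code}.
    Drop every pending request whose category code matches the failed one."""
    failed_code = categories[failed_request]
    return [r for r in pending if categories[r] != failed_code]

categories = {"backup-1": "TAPE", "backup-2": "TAPE", "restore-1": "DISK"}
pending = ["backup-2", "restore-1"]
print(update_request_list(pending, "backup-1", categories))
```

Once `backup-1` fails on the tape resources, `backup-2` is removed from the list rather than being attempted against the same exhausted resources, while the disk-bound restore proceeds.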

US Pat. No. 10,768,986

MANAGEMENT AND UTILIZATION OF STORAGE CAPACITIES IN A CONVERGED SYSTEM

International Business Ma...

1. A computer-implemented method, comprising:identifying a request to create a consumer within a converged system;
defining the consumer within a hierarchy of consumers, where the consumer represents a function in an organization, and where the hierarchy of consumers includes:
a root consumer having a storage capacity attribute with a value that is less than or equal to a total amount of a plurality of storage resources available to the hierarchy of consumers, and
a plurality of leaf nodes each representing an application within the converged system, where each application inherits access to a portion of the plurality of storage resources available to a parent node of the application within the hierarchy of consumers;
associating the consumer with the plurality of storage resources and a plurality of computing resources; and
adjusting a storage capacity attribute for the consumer in response to a removal of a portion of the plurality of storage resources, where while a value for the storage capacity attribute assigned to the root consumer is greater than a physical storage capacity of the remaining plurality of storage resources, additional data volumes are prevented from being defined for consumers within the organization, and a storage capacity attribute for all consumers within the organization is prevented from being increased.
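The over-commitment guard in the final clause can be sketched as follows: once storage is removed and the root consumer's capacity attribute exceeds the remaining physical capacity, new volumes and capacity increases are refused. The class and method names are hypothetical, a minimal reading of the claim.

```python
class ConsumerHierarchy:
    """Tracks the root consumer's capacity attribute against physical storage."""
    def __init__(self, root_capacity, physical_capacity):
        self.root_capacity = root_capacity
        self.physical_capacity = physical_capacity

    def remove_storage(self, amount):
        # Removing resources shrinks physical capacity; the root attribute is unchanged.
        self.physical_capacity -= amount

    @property
    def over_committed(self):
        return self.root_capacity > self.physical_capacity

    def can_define_volume(self):
        return not self.over_committed

    def can_increase_consumer_capacity(self):
        return not self.over_committed

h = ConsumerHierarchy(root_capacity=100, physical_capacity=100)
h.remove_storage(20)  # physical capacity drops to 80; root attribute stays at 100
```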

US Pat. No. 10,768,985

ACTIVE DIRECTORY ORGANIZATIONAL UNIT-BASED DEPLOYMENT OF COMPUTING SYSTEMS

PARALLELS INTERNATIONAL G...

1. A computer-implemented method for deploying a distributed computing system, wherein the method comprises:receiving a system configuration of a distributed directory-service-based system, wherein the system configuration specifies a path to a root organizational unit (OU) within a domain of a domain controller, wherein the domain comprises a plurality of computer objects each having an assigned system role, wherein each of the plurality of computer objects represents a node configured to run a service of a plurality of services of the distributed computing system, and wherein the assigned system role represents a type of system service provided by a corresponding node;
generating, within the domain, one or more group policy objects based on system requirements for each system role, wherein the one or more group policy objects specify settings and permissions for computer objects of a given system role;
creating, in the root OU of the domain, an organizational unit (OU) for each assigned system role;
linking, in the domain, each of the group policy objects for each assigned system role to the corresponding created OU;
moving, in the domain, the computer objects to at least one OU according to the system role assigned to each computer object;
generating a distribution scheme according to a number of the system roles and computer objects of each system role retrieved from the domain; and
deploying a plurality of nodes of the distributed directory-service-based system according to the generated distribution scheme.

US Pat. No. 10,768,984

SYSTEMS AND METHODS FOR SCHEDULING TASKS USING SLIDING TIME WINDOWS

Honeywell International I...

1. A processor, comprising:processing units configured to execute, simultaneously, respective tasks during at least one of multiple time windows each having a respective estimated start time and each common to the processing units; and
a scheduler configured to
determine a pre-run-time-determinable slack associated with a first one of the time windows,
determine an only-at-run-time-determinable slack in the first one of the time windows, and
advance, simultaneously in each of the processing units, an actual start time of a second one of the time windows relative to the estimated start time of the second one of the time windows by a time no greater than a sum of the determined pre-run-time-determinable slack and the determined only-at-run-time-determinable slack to an earlier time no sooner than an actual end time of the first one of the time windows, the second one of the time windows being the next time window immediately after the first one of the time windows.
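The start-time advance in this claim reduces to one clamp: the second window may start earlier than its estimate by at most the sum of the two slacks, but never before the first window's actual end. A minimal sketch, with hypothetical millisecond values:

```python
def advanced_start(est_start_2, pre_run_slack, run_time_slack, actual_end_1):
    """Advance window 2's start by at most the total slack, but never
    earlier than window 1's actual end time."""
    candidate = est_start_2 - (pre_run_slack + run_time_slack)
    return max(candidate, actual_end_1)

# Estimated start 100 ms, 10 + 5 ms of slack, window 1 actually ends at 90 ms:
# the 15 ms advance is clipped to 90 ms by the end-time constraint.
start = advanced_start(100, 10, 5, 90)
```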

US Pat. No. 10,768,983

MECHANISM FOR FACILITATING A QUORUM-BASED COORDINATION OF BROKER HEALTH FOR MANAGEMENT OF RESOURCES FOR APPLICATION SERVERS IN AN ON-DEMAND SERVICES ENVIRONMENT

salesforce.com, inc., Sa...

1. A database system-implemented method comprising:monitoring, by a health checker of the database system, resource distribution health of message queue brokers managing resource domains having worker nodes and thread resources for processing job requests, wherein managing includes routing of the job requests to job queues associated with application servers in a multi-tenant resource distribution environment and in communication over a network, wherein each message queue broker is associated with at least one resource domain having a set of working nodes and a set of thread resources;
generating, by the health checker of the database system, a status report associated with a first message queue broker based on quorum-based votes casted by other message queue brokers, wherein the status report to indicate fair or unfair management of a first resource domain by the first message queue broker, wherein each message queue broker participates in a quorum-based voting system to vote to rate one or more of the message queue brokers on their management of one or more of the resource domains; and
classifying, based on the casted votes, the first message queue broker as one of successful, failed and recoverable, and failed and unrecoverable,
wherein the first message queue broker when classified as successful, the first message queue broker remains active such that a set of job requests distributed to the first resource domain associated with the first message queue broker is routed to one or more of the job queues without interruption,
wherein the first message queue broker when classified as failed and recoverable, the first message queue broker is recovered from failure and turned active to resume routing of the set of job requests, and
wherein the first message queue broker when classified as failed and unrecoverable, the first message queue broker is retired, and the set of job requests is distributed to a second message queue broker managing a second resource domain.
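The quorum step can be sketched as a majority tally over peer votes, mapping each broker to one of the three claimed outcomes. The vote labels and tie-breaking policy below are hypothetical; the claim itself does not specify how votes are aggregated.

```python
from collections import Counter

OUTCOMES = {"successful", "failed-recoverable", "failed-unrecoverable"}

def classify_broker(votes):
    """Classify a broker by the plurality of votes cast by its peer brokers."""
    tally = Counter(v for v in votes if v in OUTCOMES)
    label, _count = tally.most_common(1)[0]
    return label

votes = ["failed-recoverable", "failed-recoverable", "successful"]
verdict = classify_broker(votes)
```

A `failed-recoverable` verdict would trigger recovery and reactivation; `failed-unrecoverable` would retire the broker and redistribute its job requests.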

US Pat. No. 10,768,982

ENGINE FOR REACTIVE EXECUTION OF MASSIVELY CONCURRENT HETEROGENEOUS ACCELERATED SCRIPTED STREAMING ANALYSES

Oracle International Corp...

1. A method comprising:associating a particular software actor with one or more data streams, wherein said particular software actor comprises a backlog queue;
responsive to receiving content in a subset of said one or more data streams, distributing data based on said content to the particular software actor;
in response to determining that said data satisfies completeness criteria of the particular software actor, appending an indication of said data onto said backlog queue of said particular software actor;
resetting, to an initial state, said particular software actor by loading, into computer memory, an execution snapshot of a previous initial execution of an embedded virtual machine;
resuming, based on said particular software actor, execution of said execution snapshot of said previous initial execution to dequeue and process said indication of said data from said backlog queue of the particular software actor to generate a result.

US Pat. No. 10,768,981

DYNAMIC TIME SLICING FOR DATA-PROCESSING WORKFLOW

Microsoft Technology Lice...

1. A method for dynamically scheduling a data-processing workload, the method comprising:recognizing a minimum execution slice size representing a minimum duration of date-dependent records for processing in a job, and a maximum execution slice size representing a maximum duration of date-dependent records for processing in a job;
recognizing a predicted execution slice size for a current job from a collection of jobs based on a duration defined by a start date and an end date for date-dependent records associated with the current job;
responsive to one or both of 1) the predicted execution slice size for the current job exceeding the maximum execution slice size, and 2) the end date for the current job being in the future of a current date:
splitting the current job into a working slice and a remainder slice,
adding the remainder slice to the collection of jobs, and executing the working slice; and
responsive to the predicted execution slice size for the current job exceeding the minimum execution slice size and not exceeding the maximum execution slice size: executing the current job.
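The slicing decision above can be sketched as a small scheduler step: jobs whose date range exceeds the maximum slice (or whose end date lies in the future) are split into a working slice and a remainder; jobs between the minimum and maximum run whole. The function shape and day-based units are assumptions for illustration only.

```python
from datetime import date, timedelta

def schedule(job_start, job_end, min_days, max_days, today):
    """Decide whether to split the current job or execute it whole."""
    predicted = (job_end - job_start).days
    if predicted > max_days or job_end > today:
        # Working slice is capped at max_days and at today; the rest is re-queued.
        working_end = min(job_start + timedelta(days=max_days), today)
        return ("split", working_end)
    if min_days < predicted <= max_days:
        return ("execute", job_end)
    # Slices at or below the minimum fall outside the claimed conditions.
    return None

decision = schedule(date(2020, 1, 1), date(2020, 3, 1), 1, 10, date(2020, 1, 31))
```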

US Pat. No. 10,768,980

AUTOMATED EXECUTION OF A BATCH JOB WORKFLOWS

AMERICAN EXPRESS TRAVEL R...

1. A method, comprising:generating, by a batch job execution platform, a batch job workflow, wherein the batch job workflow comprises a batch job having a first task and a second task, the first task and the second task not being dependent on one another;
retrieving, by the batch job execution platform and from an execution database, first scheduler data corresponding to the first task in the batch job and second scheduler data corresponding to the second task in the batch job, wherein each respective scheduler data comprises a batch job dependency, a task dependency, a system assignment, and a technology wrapper assignment, a respective system assignment indicating a corresponding system on which to execute the respective task, a respective technology wrapper assignment being based at least in part on the respective system assignment;
determining, by the batch job execution platform and based on first and second wrapper and system assignments, that the first and second wrapper and system assignments are different;
generating, by the batch job execution platform, a task schedule based on the first scheduler data and the second scheduler data;
invoking, by the batch job execution platform, a first technology wrapper by transmitting the first task to the first technology wrapper, wherein the first technology wrapper is invoked based on a first technology wrapper assignment and comprises first access credentials for executing the first task on a first system, the first system being based on the first system assignment;
invoking, by the batch job execution platform, a second technology wrapper by transmitting the second task to the second technology wrapper, wherein the second technology wrapper is invoked based on a second technology wrapper assignment and comprises second access credentials different than the first access credentials for executing the second task on a second system, the second system being based on the second system assignment; and
executing, by the batch job execution platform, the first task on the first system using the first technology wrapper in parallel with the second task on the second system using the second technology wrapper.
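The parallel final step can be sketched with one thread per task, each routed through its own wrapper holding separate credentials for a distinct system. The `TechnologyWrapper` class, system names, and credential strings are hypothetical stand-ins for the claimed components.

```python
import threading

class TechnologyWrapper:
    """Hypothetical stand-in holding access credentials for one target system."""
    def __init__(self, system, credentials):
        self.system = system
        self.credentials = credentials

    def execute(self, task, results):
        # A real wrapper would authenticate and invoke the task on its system.
        results[task] = f"{task} ran on {self.system}"

def run_parallel(assignments):
    """Execute each (task, wrapper) pair on its own thread, in parallel."""
    results = {}
    threads = [threading.Thread(target=w.execute, args=(t, results))
               for t, w in assignments]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

out = run_parallel([("task-A", TechnologyWrapper("mainframe", "cred-1")),
                    ("task-B", TechnologyWrapper("hadoop", "cred-2"))])
```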

US Pat. No. 10,768,979

PEER-TO-PEER DISTRIBUTED COMPUTING SYSTEM FOR HETEROGENEOUS DEVICE TYPES

APPLE INC., Cupertino, C...

1. A non-transitory machine-readable medium storing instructions which, when executed by one or more processors of a computing device, cause the computing device to perform operations comprising:receiving, using a first-level protocol and by a daemon process on a first device, a first message from a second device, the first message including portable code which when compiled provides one or more functions to perform a computation by the first device within a distributed computing system, wherein the first-level protocol includes a first-level command messaging protocol;
receiving, using a second-level protocol and by a worker process created by the daemon process that is separate from the daemon process, a second message from the second device, the second message including a first job request, wherein the first job request includes an indication of a first function of the one or more functions, and an indication of a required set of input data to perform the first function, wherein the second-level protocol includes a second-level command messaging protocol different from the first-level command messaging protocol, wherein the worker process manages messages of the second-level protocol including the second message different from messages of the first-level protocol, wherein the worker process is instantiated by the daemon process;
in response to receiving the second message by the worker process, compiling a portion of the portable code corresponding to the first function without compiling the rest of the portable code that includes a third function; and
in response to the first device obtaining the required set of input data, the first device executing the first function to perform the computation to fulfill the first job request,
wherein the portion of the portable code is compiled in response to determining the portable code is safe to execute on the first device, and
wherein the portable code is LLVM IR (low level virtual machine intermediate representation), and the first device and the second device are different types of devices and each operate using a different operating system.

US Pat. No. 10,768,978

MANAGEMENT SYSTEM AND MANAGEMENT METHOD FOR CREATING SERVICE

Hitachi, Ltd., Tokyo (JP...

1. A management system, comprising:an interface device coupled to an operation target system including one or more operation target apparatuses;
a storage resource configured to store a management program; and
a processor configured to execute the management program,
the processor being configured to execute the management program to:
manage a plurality of component input properties and a plurality of components each including a processing content to be executed based on an input value that is input to the component input property;
create or edit a service template that is associated with one or more components and an execution order and that includes one or more template input properties;
display the template input properties for each property group, and receive a designation of a service template and an input value to be input to the template input property;
generate, based on the designated input value and the service template, an operation service for executing the processing content included in the component using the designated input value;
execute the generated operation service to operate the operation target apparatus;
store an identifier and a version of the component in the storage resource as the component management;
store, as an association between the service template and the component, a correspondence relation between the service template and the identifier and version of the associated component in the storage resource;
receive a user operation for displaying a correspondence relation between the component and the service template;
specify an identifier and a version of a component to be subjected to the user operation;
select a service template constituted by a component having the specified component identifier and the specified version; and
display an identifier of the specified service template.

US Pat. No. 10,768,977

SYSTEMS AND METHODS FOR EDITING, ASSIGNING, CONTROLLING, AND MONITORING BOTS THAT AUTOMATE TASKS, INCLUDING NATURAL LANGUAGE PROCESSING

FIRST AMERICAN FINANCIAL ...

1. A method comprising:initializing a bot controller application instance;
receiving, at the bot controller application instance, registration information from each of one or more bot hosts, the registration information comprising a network address of each of the one or more bot hosts and an identification of scripts currently installed on a machine of each of the one or more bot hosts, wherein each of the one or more bot hosts is configured to manage one or more bots;
retrieving, from a web services gateway, configuration information for each of the one or more bot hosts; and
using at least the registration information and configuration information for each of the one or more bot hosts, displaying at a graphical user interface of the bot controller application instance a summary of the one or more bot hosts and data relating to one or more scripts executed by each of the one or more bot hosts, the data relating to the one or more scripts executed by each of the one or more bot hosts comprising: an identification of each of the one or more scripts and a timestamped notification of a most recent action performed by each of the one or more bot hosts.

US Pat. No. 10,768,976

APPARATUS AND METHOD TO CONFIGURE AN OPTIMUM NUMBER OF CIRCUITS FOR EXECUTING TASKS

FUJITSU LIMITED, Kawasak...

1. An apparatus comprising:a programmable circuit configured to configure circuits for executing tasks; and
a processor configured to:
estimate an execution time-period required for executing a first task by first circuits configured in the programmable circuit,
determine a first configuration number indicating a number of second circuits that are to be configured, in the programmable circuit, for executing a second task to be executed after the first task, based on the estimated execution time-period and a configuration time-period required for configuring the first configuration number of the second circuits in the programmable circuit,
cause the programmable circuit to configure, during execution of the first task, the first configuration number of the second circuits, and
adjust the number of second circuits that are to be configured during execution of the first task, based on a relationship between a time at which the first task is completed and a time at which configuration of the second circuits in the programmable circuit is completed, wherein
the processor determines a second configuration number indicating a number of the second circuits for which a total time period from a start time at which configuration of the second circuits in the programmable circuit is started to an end time at which execution of the second task by the second circuits is completed is reduced; and
in a case where the first task is completed before configuration of the first configuration number of the second circuits in the programmable circuit is completed, the processor:
causes the programmable circuit to stop configuration of a new circuit for the second circuits when a third configuration number indicating a number of the second circuits that are configured in the programmable circuit until completion of the first task is equal to or greater than the second configuration number, and
causes the programmable circuit to configure the second configuration number of the second circuits in the programmable circuit when the third configuration number is smaller than the second configuration number.
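The final adjustment reduces to one comparison: when the first task finishes before configuration completes, stop if the circuits already configured (the third number) meet the cost-optimal count (the second number), otherwise keep configuring up to that count. A minimal sketch, with hypothetical return values:

```python
def next_action(third_config_number, second_config_number):
    """Decide the reconfiguration step when the first task completes early."""
    if third_config_number >= second_config_number:
        # Enough second circuits exist: stop configuring new ones.
        return "stop"
    # Otherwise, bring the count up to the cost-optimal second number.
    return ("configure", second_config_number)

action = next_action(2, 3)
```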

US Pat. No. 10,768,975

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD

Ricoh Company, Ltd., Tok...

1. An information processing system including one or more information processing apparatuses coupled via a network configured to implement various functions of the information processing system, the information processing system comprising:a memory configured to store
flow information and flow identification information identifying the flow information in association with each other for each sequence of processes performed by using electronic data, the flow information defining program identification information identifying one or more programs for respectively executing the processes included in the sequence of processes, the flow information also defining an execution order of executing the one or more programs, and
computer-executable instructions; and
one or more processors configured to execute the computer-executable instructions such that the one or more processors execute a process including:
accepting, over a communication channel, a request including information relating to the electronic data used in the sequence of processes and the flow identification information, from one of one or more devices coupled to the information processing system;
acquiring the flow information stored in the memory in association with the flow identification information included in the request, when the request is accepted; and
executing the sequence of processes using the electronic data based on the information relating to the electronic data, by respectively executing the one or more programs identified by the program identification information defined in the acquired flow information, in the execution order defined in the acquired flow information,
wherein when the sequence of processes includes branching, the executing includes branching the processes included in the sequence of processes according to a condition of the branching, to execute the sequence of processes,
wherein the branching branches the sequence of processes into storing the electronic data in one of a plurality of different external storages, and
wherein the flow information defines a flow detail name for each process in the sequence of processes thereby distinguishing a plurality of processes in the sequence of processes to be executed by a same component, which is implemented by programs and modules, by flow detail names,
wherein the branching includes determining whether the number of different flow details immediately before a given flow detail name is greater than or equal to 2, thereby determining whether a process having the given flow detail name is executed by a merging component.

US Pat. No. 10,768,974

SPECIFYING AN ORDER OF A PLURALITY OF RESOURCES IN A TRANSACTION ACCORDING TO DISTANCE

International Business Ma...

1. A computer-based method comprising:receiving a transaction for a plurality of resources;
determining, for each resource of the plurality of resources, work embodied by the transaction, the work comprising a distance parameter relating to an operation for the resource;
specifying, by a processor, an order of the plurality of resources according to respective distances of the plurality of resources in the context of the transaction;
upon specifying the order of the plurality of resources, committing the transaction from an application associated with the transaction to a transaction manager; and
upon committing the transaction, invoking each of the plurality of resources in the specified order.
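The ordering step can be sketched as a sort over the distance parameter determined for each resource, after which the resources are invoked in that order. The dictionary shape of a resource is a hypothetical illustration.

```python
def order_resources(resources):
    """Order the transaction's resources by their distance parameter."""
    return sorted(resources, key=lambda r: r["distance"])

resources = [{"name": "db", "distance": 30},
             {"name": "queue", "distance": 10},
             {"name": "file", "distance": 20}]
ordered = order_resources(resources)
invocation_order = [r["name"] for r in ordered]
```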

US Pat. No. 10,768,973

METHOD FOR IMPLEMENTATION AND OPTIMIZATION OF ONLINE MIGRATION SYSTEM FOR DOCKERS CONTAINER

Huazhong University of Sc...

1. A method for implementation and optimization of an online migration system for Docker containers, the method comprising the steps of:based on different container file systems and parent-child relationship of their image layers, determining an integrating and collecting mechanism for data of the image layers so as to accomplish integration and collection of the data of the image layers, including:
identifying parent-child hierarchical relationship between tags of the image layers of the container based on the current file system and a container ID, and collecting configuration information about the container based on the container ID;
based on the container ID and a container configuration file, integrating information about metadata of the image layers and information about metadata of a container image in an image management directory; and
based on a mount point field in the container configuration file, determining information about a volume mount point;
based on Diff commands between parent image layers and child image layers, determining a latest version of the data of the image layers, and transferring the latest version of the data of the image layers between a source machine container and a destination machine container so as to accomplish iterative synchronization of the data of the image layers;
implementing iterative synchronization of a memory by means of implementing a Pre-Copy mechanism, wherein the iterative synchronization of a memory includes:
creating a first checkpoint image file, synchronizing the first checkpoint image file to the destination machine container by means of storing the first checkpoint image file to a designated directory, and accomplishing a first round of said iterative synchronization of the memory by means of configuring a pre-dump parameter such that a service process of the source machine container remains active;
creating a second checkpoint image file for a second round of said iterative synchronization by means of taking the first checkpoint image file as a parent image and configuring the -parent parameter; and
where the second checkpoint image file satisfies at least one of a first threshold requirement, a second threshold requirement and a third threshold requirement, accomplishing the iterative synchronization of the memory by means of synchronizing the second checkpoint image file to the destination machine container; and
when the iterative synchronization for both of the data of the image layers and the memory have been completed, performing restoration of the destination machine container.
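The pre-copy rounds can be sketched as a loop that transfers the currently dirty memory, lets the still-running service dirty a shrinking set of pages, and stops once a threshold condition holds. The page counts, dirtying rate, and single-threshold stop condition are simplifying assumptions; the claim allows any of three threshold requirements.

```python
def pre_copy(dirty_pages, threshold, max_rounds):
    """Iteratively synchronize memory until the dirty set is small enough."""
    transferred = 0
    for round_no in range(1, max_rounds + 1):
        transferred += dirty_pages          # send this round's checkpoint delta
        dirty_pages = dirty_pages // 4      # assumed dirtying rate for the sketch
        if dirty_pages <= threshold:
            break                           # small enough for the final sync
    return round_no, transferred, dirty_pages

rounds, sent, remaining = pre_copy(1000, 50, 10)
```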

US Pat. No. 10,768,972

MANAGING VIRTUAL MACHINE INSTANCES UTILIZING A VIRTUAL OFFLOAD DEVICE

AMAZON TECHNOLOGIES, INC....

1. A computing system comprising:a physical computing device comprising one or more processors configured to execute computer-executable instructions that configure the physical computing device to host a virtual machine instance and instantiate virtual computing resources for the virtual machine instance using hardware computing resources of the physical computing device, wherein a virtual input/output (I/O) component is associated with the virtual machine instance;
an offload computing device comprising one or more offload computing device processors, wherein the offload computing device is in communication with the physical computing device, wherein the offload computing device and the physical computing device are separate computing devices, wherein the one or more offload computing device processors are configured to execute computer-executable instructions that configure the offload computing device to:
instantiate the virtual I/O component for the virtual machine instance using hardware computing resources of the offload computing device, wherein the virtual I/O component is configured to perform I/O functions on behalf of the virtual machine instance.

US Pat. No. 10,768,971

CROSS-HYPERVISOR LIVE MOUNT OF BACKED UP VIRTUAL MACHINE DATA

Commvault Systems, Inc., ...

1. A method comprising:powering on a first virtual machine on a first hypervisor,
wherein a first virtual disk is configured to store data for the first virtual machine,
wherein the first virtual disk is associated with a backup copy of a second virtual machine,
wherein the backup copy was generated in a hypervisor-independent format by a block-level backup operation of a second virtual disk of the second virtual machine,
wherein the first virtual disk is configured in cache storage that is mounted to the first hypervisor, and
wherein the first hypervisor executes on a first computing device comprising one or more processors and computer memory;
by the first hypervisor, transmitting to the first virtual disk a first read request issued by the first virtual machine for a first data block;
by a media agent that maintains the cache storage, intercepting the first read request transmitted to the first virtual disk,
wherein the media agent executes on a second computing device comprising the cache storage, one or more processors, and computer memory;
based on determining by the media agent that the first data block is not in the first virtual disk, by the media agent: (i) reading the first data block from the backup copy, and (ii) storing the first data block to the first virtual disk; and
based on determining by the media agent that first data block is in the first virtual disk, serving the first data block from the first virtual disk to the first hypervisor, thereby providing the first data block from the backup copy of the second virtual machine to the first virtual machine.
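The media agent's behavior in the last two clauses is a read-through cache: misses are faulted in from the hypervisor-independent backup copy into the cached virtual disk, and every read is then served from the disk. A minimal sketch; the dictionary-backed storage is a hypothetical stand-in for the cache and backup media.

```python
class MediaAgent:
    """Serves blocks from a cached virtual disk, faulting misses in from backup."""
    def __init__(self, backup_copy):
        self.backup_copy = backup_copy   # block id -> data, hypervisor-independent
        self.virtual_disk = {}           # cache storage mounted to the hypervisor

    def read(self, block_id):
        if block_id not in self.virtual_disk:
            # Miss: read from the backup copy and store into the virtual disk.
            self.virtual_disk[block_id] = self.backup_copy[block_id]
        # Hit (or freshly filled): serve from the virtual disk.
        return self.virtual_disk[block_id]

agent = MediaAgent({0: b"boot", 1: b"data"})
first = agent.read(0)    # miss path
second = agent.read(0)   # hit path
```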

US Pat. No. 10,768,970

SYSTEM AND METHOD OF FLOW SOURCE DISCOVERY

Virtual Instruments Corpo...

1. A system comprising:one or more processors;
memory containing instructions configured to control the one or more processors to:
receive a period of time for flow source discovery of an enterprise network;
receive a plurality of flow packets from network traffic analyzing platforms, the network traffic analyzing platforms being in communication with the enterprise network, the plurality of flow packets indicating network traffic into and out of flow sources of the enterprise network, at least one flow source of the flow sources of the enterprise network being a router of switch fabric integrated within the enterprise network;
for each particular flow packet of the plurality of flow packets:
identify the particular flow packet of the plurality of flow packets as belonging to one of at least two flow packet types based at least in part on a format of the particular flow packet;
if the particular flow packet is an sFlow flow packet, determine if the particular flow packet is an sFlow sample, an sFlow counter record, or a third sFlow packet type;
if the particular flow packet is the sFlow sample or the sFlow counter record, identify a flow source of the particular flow packet and at least one metric of the network traffic data, the flow source being one of a plurality of flow sources of the enterprise network, and update a flow source data structure to include the identified flow source and the at least one metric of the network traffic data;
if the particular flow packet is the third sFlow packet type, ignore the particular flow packet; and
if the particular flow packet is a second flow packet type, the second flow packet type being different from an sFlow flow packet type:
if the particular flow packet is of a format that matches one of a plurality of template records stored in a template datastore, identify the flow source associated with the particular flow packet and the at least one metric of the network traffic data, and update the flow source data structure to include the identified flow source and the at least one metric of the network traffic data; and
if the format of the particular flow packet does not match one of the plurality of template records, ignore the particular flow packet; and
after termination of the period of time, output the flow source data structure, the flow source data structure combining information from the sFlow flow packets and information from the flow packets of the second flow packet type, the flow source data structure indicating a plurality of flow sources including the identified flow sources as well as a plurality of attributes of the network traffic data based on the at least one metric of the network traffic data of the plurality of flow packets, the flow source data structure enabling an operator of the enterprise network to control and monitor network traffic of the enterprise network.
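The per-packet dispatch above can be sketched as one function: sFlow samples and counter records update the flow source data structure, a third sFlow subtype is ignored, and second-type packets update it only when their format matches a stored template. The packet dictionaries and field names are hypothetical.

```python
def process_packet(packet, templates, flow_sources):
    """Apply the claimed per-packet rules to one flow packet."""
    if packet.get("type") == "sflow":
        if packet.get("subtype") in ("sample", "counter"):
            flow_sources.setdefault(packet["source"], []).append(packet["metric"])
        # any third sFlow subtype falls through and is ignored
    elif packet.get("format") in templates:
        flow_sources.setdefault(packet["source"], []).append(packet["metric"])
    # second-type packets with no matching template record are ignored

sources = {}
templates = {"template-v1"}
process_packet({"type": "sflow", "subtype": "sample", "source": "r1", "metric": 10},
               templates, sources)
process_packet({"type": "sflow", "subtype": "other"}, templates, sources)
process_packet({"format": "template-v1", "source": "sw2", "metric": 7},
               templates, sources)
process_packet({"format": "unknown"}, templates, sources)
```

After the discovery period, `sources` plays the role of the output data structure combining both packet types.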

US Pat. No. 10,768,969

STORAGE ARCHITECTURE FOR VIRTUAL MACHINES

VMware, Inc., Palo Alto,...

1. A computing system comprising:a first virtual computing instance configured to execute on a first host; and
a storage system accessible by the first host, the storage system comprising one or more units of storage for storing disk data of one or more virtual computing instances including the first virtual computing instance, including first disk data for the first virtual computing instance,
wherein, responsive to a first disk access request from the first virtual computing instance, which includes a location within the first disk data to be accessed, at least one processor of the first host is configured to:
generate a second disk access request based on the location within the first disk data to be accessed, the second disk access request including a location to be accessed within one of the one or more units of storage in which the first disk data is stored,
determine whether or not access to the location to be accessed within said one of the units is permitted; and
upon determining that the access to the location to be accessed within said one of the units is permitted, transmit the second disk access request to said one of the units, and forward a response to the second disk access request from said one of the units to the first virtual computing instance as a response to the first disk access request.
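The translate-check-forward sequence in the claim above can be modeled minimally: the guest's disk offset is rebased into a storage-unit offset, the result is checked against permitted ranges, and only then is the request forwarded. The `Storage` class and range-list permission model are assumptions for illustration, not VMware's design.

```python
# Minimal sketch, assuming a flat offset translation and a range-based
# permission check (both illustrative).
def handle_disk_access(vm_location, unit_base, allowed_ranges, storage):
    """Translate a VM disk offset into a storage-unit offset (the second
    disk access request), check permission, then forward the request."""
    unit_location = unit_base + vm_location
    for lo, hi in allowed_ranges:
        if lo <= unit_location < hi:          # access permitted?
            return storage.read(unit_location)  # forward; relay response
    raise PermissionError(f"access to offset {unit_location} denied")

class Storage:
    """Stand-in for a unit of storage holding the first disk data."""
    def __init__(self, data):
        self.data = data
    def read(self, offset):
        return self.data[offset]
```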

US Pat. No. 10,768,968

SPLIT-CONTROL OF PAGE ATTRIBUTES BETWEEN VIRTUAL MACHINES AND A VIRTUAL MACHINE MONITOR

Intel Corporation, Santa...

1. A method comprising:receiving, by a processor from a virtual machine (VM) executed by the processor, an indication that a proper subset of a plurality of virtual memory pages of the VM are secure memory pages;
determining whether the VM is attempting to access a first memory page;
responsive to determining the VM is attempting to access the first memory page, determining whether the proper subset comprises the first memory page;
determining whether an ignore page attribute table (IPAT) entry for the first memory page in an extended page table (EPT) of a virtual machine monitor (VMM) is set to zero, the VMM executed by the processor to manage the VM; and
responsive to determining the proper subset comprises the first memory page and that the IPAT entry in the EPT is set to zero, using first attributes specified by the VM for the first memory page and ignoring second attributes specified by the VMM for the first memory page.

US Pat. No. 10,768,967

MAINTENANCE CONTROL METHOD, SYSTEM CONTROL APPARATUS AND STORAGE MEDIUM

FUJITSU LIMITED, Kawasak...

1. A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process comprising:classifying, by referring to correspondence information in which a virtual machine and a desired time zone for executing a maintenance operation of a physical machine are associated with each other for each of a plurality of virtual machines, the plurality of virtual machines into a plurality of groups based on the desired time zone, an overlap time of one or more desired time zones corresponding to one or more virtual machines included in each of the plurality of groups being equal to or longer than a predetermined time;
mapping, for each of the plurality of groups, the one or more virtual machines to one of the plurality of physical machines such that mapping destinations for the plurality of groups are different from each other; and
executing, for each of the plurality of groups, the maintenance operation in a time zone corresponding to the overlap time.
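The classifying step above requires each group's common maintenance window to be at least a predetermined length. One way to realize this (the claim does not mandate a specific algorithm, so the greedy strategy here is an assumption) is to place each VM into the first group whose shared window still meets the threshold:

```python
# Illustrative greedy grouping of VMs by overlapping maintenance windows.
def group_vms(windows, min_overlap):
    """windows: {vm_name: (start, end)} in hours. Place each VM into the
    first group whose common window, after intersection, is still at
    least min_overlap long; otherwise start a new group."""
    groups = []  # each entry: (common_window, [vm names])
    for vm, w in windows.items():
        for i, (common, members) in enumerate(groups):
            inter = (max(common[0], w[0]), min(common[1], w[1]))
            if inter[1] - inter[0] >= min_overlap:
                groups[i] = (inter, members + [vm])
                break
        else:
            groups.append((w, [vm]))
    return [(members, common) for common, members in groups]
```

Each resulting group is then mapped to a distinct physical machine, and maintenance runs inside the group's common window.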

US Pat. No. 10,768,966

SYSTEM AND METHOD FOR TRAPPING SYSTEM CALLS FOR REMOTE EXECUTION

PARALLELS INTERNATIONAL G...

1. A method for executing system calls in a virtualized environment, wherein the method comprises:executing a guest process within a virtual machine and having an associated guest-process virtual address space;
executing, on a host, a host process corresponding to the guest process and having an associated host-process virtual address space;
mapping the host-process virtual address space and the guest-process virtual address space to a same host physical memory;
trapping a system call invoked by the guest process;
determining whether to perform the trapped system call using the guest process or the host process based on a type of the trapped system call;
performing the trapped system call using the host process, wherein state changes in the host-process virtual address space caused by the trapped system call are reflected in the guest-process virtual address space; and
resuming execution of the guest process in response to completing execution of the trapped system call.
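The dispatch decision in the claim above, choosing guest-side or host-side execution by call type, reduces to a simple routing function. Which call types are host-handled is not specified by the claim; the set below is a pure assumption for illustration.

```python
# Hedged sketch of routing a trapped system call by type. Because host
# and guest processes map the same physical memory, state changes made
# by either side are visible to the other.
HOST_HANDLED = {"open", "read", "write"}   # assumption: file I/O on host

def dispatch_syscall(name, args, guest_exec, host_exec):
    """Perform the trapped call in the host process when its type is
    host-handled; otherwise perform it in the guest process."""
    if name in HOST_HANDLED:
        return host_exec(name, args)
    return guest_exec(name, args)
```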

US Pat. No. 10,768,965

REDUCING COPY OPERATIONS FOR A VIRTUAL MACHINE MIGRATION

Amazon Technologies, Inc....

1. A system comprising:a host processor configured to execute a hypervisor, wherein the hypervisor is configured to:
provide information, associated with a page table entry (PTE) corresponding to a page of memory, to schedule a copy operation of the page for a live migration of a virtual machine (VM) running on the host processor, wherein the page is associated with a guest operating system (OS) executing within the VM, and wherein the PTE comprises a dirty page indicator to indicate that the page has been modified; and
a processing engine separate from the hypervisor executing on the host processor, the processing engine configured to:
receive the information associated with the PTE corresponding to the page, the information including an address of the PTE associated with the guest OS;
read the dirty page indicator in the PTE using the address to determine that the page has been modified;
schedule the page for the copy operation;
determine that the page has later been modified again;
execute an atomic operation to clear the dirty page indicator in the PTE after the later modification and before performing the copy operation; and
perform the copy operation.
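The ordering that matters in the claim above is that the dirty indicator is cleared atomically after the later modification and before the copy, so any write landing after the clear re-dirties the page for the next round. A minimal model (a lock stands in for the atomic PTE update a real engine would use):

```python
# Illustrative dirty-page copy round for pre-copy live migration.
import threading

class PTE:
    def __init__(self):
        self.dirty = False
        self._lock = threading.Lock()

    def test_and_clear_dirty(self):
        """Atomically read and clear the dirty indicator, so a write
        that lands after the clear marks the page for the next round."""
        with self._lock:
            was_dirty, self.dirty = self.dirty, False
            return was_dirty

def migration_round(pages):
    """Copy every page whose PTE is dirty; clear-before-copy ensures no
    modification is lost. Returns the ids of pages copied this round."""
    copied = []
    for page_id, pte in pages.items():
        if pte.test_and_clear_dirty():   # clear, then perform the copy
            copied.append(page_id)
    return copied
```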

US Pat. No. 10,768,964

VIRTUAL MACHINE MESSAGING

Hewlett Packard Enterpris...

1. A computing device, comprising:a host operating system;
a virtual machine running on the host operating system; and
a split driver comprising:
a frontend driver residing on the virtual machine, and
a backend driver residing on the host, the split driver to process messages received from the virtual machine,
wherein the messages are passed from the frontend driver to the backend driver via a message ring shared between the frontend driver and the backend driver to provide zero-copy communication between the frontend driver and the backend driver,
wherein the backend driver establishes a context with the virtual machine to translate addresses of the virtual machine to addresses of the host for processing the messages,
wherein the context is saved in a virtual volume by the host operating system, and
wherein a plurality of messages are collected on the shared message ring prior to the backend driver being signaled to process the messages.
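The batching behavior in the last clause, collecting several messages on the shared ring before the backend driver is signaled, can be sketched with a simple queue. The class name and batch threshold are illustrative assumptions; a real split driver would place the ring in memory shared between frontend and backend to avoid copies.

```python
# Minimal shared message ring sketch with batched backend signaling.
from collections import deque

class MessageRing:
    def __init__(self, batch_size):
        self.ring = deque()
        self.batch_size = batch_size

    def put(self, msg):
        """Frontend side: enqueue a message; returns True only when the
        batch has filled and the backend should be signaled."""
        self.ring.append(msg)
        return len(self.ring) >= self.batch_size

    def drain(self):
        """Backend side: process all collected messages at once."""
        msgs = list(self.ring)
        self.ring.clear()
        return msgs
```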

US Pat. No. 10,768,963

VIRTUAL NETWORK FUNCTIONS ALLOCATION IN A DATACENTER BASED ON EXTINCTION FACTOR

Hewlett Packard Enterpris...

1. A computing system for allocating Virtual Network Functions (VNF) in a datacenter, the computing system comprising:a computation module that, in operation:
based on a state of the datacenter, a VNF catalogue including a plurality of VNFs, and a set of allocation rules comprising compulsory and preferred spatial relationships of the plurality of VNFs in the datacenter, determines a plurality of extinction factors corresponding to available resources in a plurality of datacenter units in the datacenter; and
in response to the determined plurality of extinction factors, develops an allocation model indicating a particular datacenter unit to which a first VNF from the plurality of VNFs is to be allocated;
an allocation module that, in operation:
allocates a first VNF from the plurality of VNFs to the particular datacenter unit based on the allocation model, wherein for a VNF entity within the first VNF that can be allocated in multiple datacenter units of the plurality of datacenter units, the allocation module is configured to allocate the VNF entity in one or more of the multiple datacenter units that extinct a fewer number of datacenter units and in accordance with the set of allocation rules; and
the computation module being responsive to the allocation of the first VNF to the particular datacenter unit to determine an updated extinction factor corresponding to the particular datacenter unit.

US Pat. No. 10,768,962

EMULATING MODE-BASED EXECUTE CONTROL FOR MEMORY PAGES IN VIRTUALIZED COMPUTING SYSTEMS

VMware, Inc., Palo Alto,...

1. A method of emulating nested page table (NPT) mode-based execute control in a virtualized computing system, comprising:providing NPT mode-based execute control from a hypervisor to a virtual machine (VM) executing in the virtualized computing system, the NPT mode-based execute control having NPT fields that mark each of a plurality of memory pages as executable in combination with a privilege level of the execution of a plurality of privilege levels of a processor of the virtualized computing system;
generating a plurality of shadow NPT hierarchies at the hypervisor based on an NPT mode-based execute policy obtained from the VM;
configuring the processor to exit from the VM to the hypervisor in response to an escalation from a user privilege level to a supervisor privilege level caused by guest code of the VM;
exposing a first shadow NPT hierarchy of the plurality of shadow NPT hierarchies to the processor in response to an exit from the VM to the hypervisor due to the escalation from the user privilege level to the supervisor privilege level;
receiving an exit from the VM to the hypervisor due to an NPT access violation by the guest code of the VM; and
searching the plurality of shadow NPT hierarchies for an alternative shadow NPT hierarchy permitted based on a current privilege level of the guest code of the VM.

US Pat. No. 10,768,961

VIRTUAL MACHINE SEED IMAGE REPLICATION THROUGH PARALLEL DEPLOYMENT

International Business Ma...

1. A computer-implemented method for generating secondary virtual machine seed image storage, the computer-implemented method comprising:receiving, by a computer, an input to deploy a primary virtual machine on a primary data processing site and a secondary virtual machine on a secondary data processing site, wherein a first copy of a golden virtual machine image used to deploy the primary virtual machine is maintained on the primary data processing site, and wherein an out-of-date copy of the golden virtual machine image is stored on the secondary data processing site;
responsive to the computer receiving the input, deploying, by the computer, the primary virtual machine from the first copy of the golden virtual machine image on the primary data processing site, generating a phantom virtual machine seed image on the secondary data processing site, and deploying the secondary virtual machine from the out-of-date copy of the golden virtual machine image on the secondary data processing site;
enabling, by the computer, the primary virtual machine to execute on the primary data processing site and sending, by the computer, state data updates corresponding to the primary virtual machine to the secondary data processing site;
utilizing, by the computer, the phantom virtual machine seed image to receive the state data updates from the primary virtual machine while running software on the secondary virtual machine to update the out-of-date copy of the golden virtual machine image to match the first copy of the golden virtual machine image;
suspending, by the computer, execution of the secondary virtual machine on the secondary data processing site;
generating, by the computer, a complete seed image corresponding to the secondary virtual machine that is up-to-date at that point in time in storage at the secondary data processing site to form the secondary virtual machine seed image storage by merging a partial seed image generated from the updated golden virtual machine image with the phantom virtual machine seed image; and
enabling, by the computer, the secondary virtual machine seed image storage to receive state data updates from the primary virtual machine on the primary data processing site.

US Pat. No. 10,768,960

METHOD FOR AFFINITY BINDING OF INTERRUPT OF VIRTUAL NETWORK INTERFACE CARD, AND COMPUTER DEVICE

Huawei Technologies Co., ...

1. A method for affinity binding of an interrupt of a virtual network interface card, wherein the method comprises:receiving, by a computer device, a request message from an Infrastructure as a Service (IaaS) resource management system, wherein the request message carries an interrupt affinity policy parameter of the virtual network interface card;
enabling a capturing module included in a host operating system executed by the computer device based on the interrupt affinity policy parameter, wherein the capturing module is configured to, when enabled, capture an operation executed by a guest operating system of a virtual machine for performing an affinity binding of a virtual interrupt of the virtual network interface card and a first virtual central processing unit (CPU);
performing, by the computer device and in response to the capturing module being enabled, affinity binding between a physical interrupt of the virtual network interface card and a first physical CPU, wherein the first virtual CPU is in affinity binding with the first physical CPU;
processing interrupts of the virtual network interface card; and
performing affinity binding between the physical interrupt of the virtual network interface card and a second physical CPU in response to the capturing module detecting the virtual interrupt of the virtual network interface card changing to be in affinity binding with a second virtual CPU, wherein the second virtual CPU is in affinity binding with the second physical CPU.

US Pat. No. 10,768,959

VIRTUAL MACHINE MIGRATION USING MEMORY PAGE HINTS

RED HAT ISRAEL, LTD., Ra...

1. A method for migrating memory pages, the method comprising:running a virtual machine by a hypervisor, the virtual machine including a guest that is allocated a plurality of guest memory pages;
initializing a data structure corresponding to a memory page of the plurality of guest memory pages, wherein the data structure is readable by the guest and writable by the hypervisor;
assigning the memory page a first status in the data structure, the first status indicating that the memory page has not been migrated;
migrating the memory page that is assigned the first status;
assigning the memory page a second status in the data structure, the second status indicating that the memory page has been migrated and has not been modified since the migration;
receiving a notification that a migration has started;
determining that at least one memory page of the plurality of guest memory pages is to be modified, wherein the determining is triggered by a request to write data to the at least one memory page;
in response to determining that the at least one memory page of the plurality of guest memory pages has the first status indicating that the at least one memory page of the plurality of guest memory pages has not been migrated, modifying the at least one memory page of the plurality of guest memory pages that has the first status, wherein the determining is based on a size of data that is to be written into the plurality of guest memory pages; and
notifying the guest via a signal that the data structure has been modified.
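The guest-readable, hypervisor-writable data structure above amounts to a per-page status table with the two claimed statuses. The enum and helper below are an illustrative model, not Red Hat's implementation:

```python
# Sketch of the shared page-status table used as a migration hint.
from enum import Enum

class PageStatus(Enum):
    NOT_MIGRATED = 1     # first status: page not yet migrated
    MIGRATED_CLEAN = 2   # second status: migrated, unmodified since

def guest_write(status_table, page, write_fn):
    """Guest-side hint check: writing a not-yet-migrated page is cheap,
    since the hypervisor will pick up the final contents when it copies
    the page. Returns False when the page was already migrated (the
    write would need the hypervisor to re-send it)."""
    if status_table[page] is PageStatus.NOT_MIGRATED:
        write_fn(page)
        return True
    return False
```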

US Pat. No. 10,768,958

USING VIRTUAL LOCAL AREA NETWORKS IN A VIRTUAL COMPUTER SYSTEM

VMware, Inc., Palo Alto,...

1. A method for isolating virtual machines (VMs) to particular virtual local area networks (VLANs) in a virtual infrastructure, the method comprising:receiving, by a VLAN coordinator, a network frame sent from a VM, wherein the VLAN coordinator is a component of virtualization software running outside the VM and supporting operation of the VM on a host system;
determining, by the VLAN coordinator, that the network frame sent from the VM currently includes a first VLAN identifier selected by the VM, wherein the first VLAN identifier is provided to restrict a transmission of the network frame sent from the VM to a particular VLAN over a physical network;
replacing, by the VLAN coordinator, the first VLAN identifier in the network frame with a second VLAN identifier selected by the VLAN coordinator, wherein the second VLAN identifier restricts a transmission of the network frame sent from the VM to the particular VLAN over the physical network, wherein replacing the first VLAN identifier in the network frame with the second VLAN identifier is transparent to the VM; and
transmitting the network frame with the second VLAN identifier over the physical network to the particular VLAN.
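The replacement step above swaps the VM-selected VLAN identifier for the coordinator-selected one inside the frame's tag. In an 802.1Q tag the VLAN ID occupies the low 12 bits of the 16-bit Tag Control Information (TCI) field, so the rewrite preserves the upper priority bits:

```python
# Illustrative 802.1Q VLAN-ID rewrite on the TCI field.
def replace_vlan_id(tci, new_vid):
    """Keep PCP/DEI (upper 4 bits of the TCI), replace the 12-bit VID."""
    assert 0 <= new_vid < 4096
    return (tci & 0xF000) | new_vid

def vlan_id(tci):
    """Extract the 12-bit VLAN identifier from a TCI value."""
    return tci & 0x0FFF
```

The swap is transparent to the VM because it happens in the virtualization software after the frame leaves the VM.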

US Pat. No. 10,768,957

COMPUTER ARCHITECTURE FOR ESTABLISHING DYNAMIC CORRELITHM OBJECT COMMUNICATIONS IN A CORRELITHM OBJECT PROCESSING SYSTEM

Bank of America Corporati...

1. A system configured to emulate a correlithm object processing system, comprising:a first device comprising:
a first network interface configured to communicate data using a network connection; and
a first processor operably coupled to the first network interface, configured to implement a first correlithm object engine configured to:
send correlithm objects having a first bit string length to a second device, wherein each correlithm object is an n-bit digital word of binary values;
send a test correlithm object having the first bit string length to the second device;
receive a switch command in response to sending the test correlithm object; and
send correlithm objects having a second bit string length to the second device in response to receiving the switch command, wherein the second bit string length is greater than the first bit string length; and
the second device in signal communication with the first device via the network connection, comprising:
a second network interface configured to communicate data using the network connection; and
a second processor operably coupled to the second network interface, configured to implement a second correlithm object engine configured to:
receive the test correlithm object;
determine a distance between the test correlithm object and a reference correlithm object, wherein the distance between the test correlithm object and the reference correlithm object is the number of different bits between a digital word representing the test correlithm object and a digital word representing the reference correlithm object;
compare the distance between the test correlithm object and the reference correlithm object to a distance threshold value; and
send the switch command to the first device in response to determining the distance between the test correlithm object and the reference correlithm object exceeds the distance threshold value.
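The distance defined in the claim above, the number of differing bits between two digital words, is the Hamming distance, and the switch decision is a simple threshold comparison:

```python
# Hamming-distance check that triggers the bit-length switch command.
def hamming(a, b):
    """Number of differing bits between two equal-width digital words."""
    return bin(a ^ b).count("1")

def should_switch(test_obj, reference_obj, threshold):
    """Send the switch command when the test correlithm object has
    drifted farther from the reference than the threshold allows."""
    return hamming(test_obj, reference_obj) > threshold
```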

US Pat. No. 10,768,956

DYNAMIC CLOUD STACK TESTING

Bank of America Corporati...

1. A dynamic cloud stack testing system, comprising:a cloud network comprising a plurality of cloud components;
a cloud stack server communicatively coupled to the cloud network, the cloud stack server comprising:
an interface operable to receive a cloud stack request from a user device;
a cloud stack configuration engine implemented by a processor operably coupled to the interface, the cloud stack configuration engine configured to:
identify one or more cloud components;
determine a cloud stack configuration that incorporates the one or more cloud components; and
determine whether the cloud stack configuration is a unique cloud stack configuration;
a cloud stack testing engine implemented by the processor, the cloud stack testing engine configured to:
in response to determining that the cloud stack configuration is a unique cloud stack configuration, determine a cloud stack configuration test;
execute the cloud stack configuration test for the cloud stack configuration; and
wherein the cloud stack testing engine is further configured to:
determine that the cloud stack configuration failed the cloud stack configuration test;
identify a failed cloud component in the cloud stack configuration, wherein the failed cloud component caused the cloud stack configuration to fail the cloud stack configuration test; and
wherein the cloud stack configuration engine is further configured to:
identify an alternative cloud component to replace the failed cloud component in the cloud stack configuration that caused the cloud stack configuration test to fail.

US Pat. No. 10,768,955

EXECUTING COMMANDS WITHIN VIRTUAL MACHINE INSTANCES

Amazon Technologies, Inc....

1. An apparatus, comprising:a processor; and
a non-transitory computer-readable storage medium having instructions stored thereupon which are executable by the processor and which, when executed, cause the processor to:
expose a public web service application programming interface (API) comprising a method configured to execute a specified command within a virtual machine (VM) instance;
receive a call over the API to execute the specified command; and
cause a request to execute a script associated with the specified command to be transmitted to a software component, the software component executing within the VM instance and configured to execute the script within the VM instance.

US Pat. No. 10,768,954

PERSONALIZED DIGITAL ASSISTANT DEVICE AND RELATED METHODS

AIQUDO, INC., Campbell, ...

1. A computer-implemented method for disambiguating commands to perform digital assistant operations, comprising:obtaining, by a digital assistant device, a command representation that corresponds to speech data received as input;
generating, by the digital assistant device, a query that includes the command representation to search a stored repository of action datasets, each action dataset in the stored repository of action datasets having a corresponding set of command templates;
based on a determination that each of two or more action datasets identified in the search has at least one command template that corresponds to the command representation, accessing, by the digital assistant device, stored profile information associated with the digital assistant device;
determining, by the digital assistant device, that the accessed profile information includes a piece of device usage data that corresponds to a particular action dataset of the two or more action datasets; and
based on the determination that the accessed profile information includes the corresponding piece of device usage data, interpreting, by the digital assistant device, the particular action dataset to perform a set of emulated touch operations that corresponds to the command representation.

US Pat. No. 10,768,953

COMPUTING SYSTEM PROVIDING SUGGESTED ACTIONS WITHIN A SHARED APPLICATION PLATFORM AND RELATED METHODS

CITRIX SYSTEMS, INC., Fo...

1. A computing system comprising:at least one client computing device and a display associated therewith; and
a server configured to
provide access to a plurality of shared applications by the at least one client computing device,
extract text displayed by the shared applications on the display while the shared applications are being used by the at least one client computing device,
generate a concept map associating the extracted text with actions initiated by the at least one client computing device after displaying respective text on the display,
weight the extracted text within the concept map,
determine a suggested action to perform based upon text subsequently displayed on the display and the concept map and generate an overlay to be displayed on the display including the suggested action, and
increase a weighting associated with the extracted text in the concept map based upon initiation of the suggested action by the at least one client computing device responsive to the overlay.

US Pat. No. 10,768,952

SYSTEMS AND METHODS FOR GENERATING INTERFACES BASED ON USER PROFICIENCY

CAPITAL ONE SERVICES, LLC...

1. A system for displaying a user interface, comprising:a server hosting a webpage, the webpage having the user interface;
a user database storing a user effectiveness index for a user;
a monitoring program comprising instructions for execution on a client device containing a display, a processor, and one or more input devices;
wherein, upon execution, the monitoring program is configured to:
track the operation of each of the one or more input devices and capture input operation data for each input device, the input operation data having one or more selected from the group of at least one input positive effect and at least one input negative effect,
track the operation of the display to capture output operation data of the display, wherein the output operation includes at least one selected from the group of display size, display resolution, and font size, and the output operation data having one or more selected from the group of at least one output positive effect and at least one output negative effect, wherein an output positive effect includes at least one selected from the group of a small display size, a high display resolution, and a small font size and an output negative effect includes at least one selected from the group of a large display size, a low display resolution, and a large font size;
calculate a user effectiveness index based on one or more selected from the group of the at least one input positive effect, the at least one input negative effect, the at least one output positive effect, and the at least one output negative effect, wherein the monitoring program increments the user effectiveness index based on each input positive effect and each output positive effect and decrements the user effectiveness index based on each input negative effect and each output negative effect, and
transmit the user effectiveness index to the server; and
wherein, upon receipt of the user effectiveness index, the server is configured to:
store the user effectiveness index in the profile for the user in the user database, and
adapt the user interface of the webpage in response to the user effectiveness index.
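The index arithmetic in the claim above is a plain tally: each positive effect (input or output) increments the user effectiveness index and each negative effect decrements it. A minimal sketch, with unit weights as an assumption since the claim does not fix them:

```python
# Illustrative effectiveness-index calculation; weights are assumed.
def effectiveness_index(positive_effects, negative_effects, start=0):
    """Increment once per positive effect, decrement once per negative
    effect, starting from an optional baseline value."""
    return start + len(positive_effects) - len(negative_effects)
```

The server would then store the resulting index in the user's profile and adapt the webpage's interface accordingly.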

US Pat. No. 10,768,951

PROVIDING AUGMENTED REALITY USER INTERFACES AND CONTROLLING AUTOMATED SYSTEMS BASED ON USER ACTIVITY INFORMATION AND PRE-STAGING INFORMATION

Bank of America Corporati...

1. A computing platform, comprising:at least one processor;
a communication interface communicatively coupled to the at least one processor; and
memory storing computer-readable instructions that, when executed by the at least one processor, cause the computing platform to:
receive, via the communication interface, from a client user device, a trip start notification indicating that a user of the client user device is initiating a trip to an enterprise center;
in response to receiving the trip start notification from the client user device, generate a pre-staging augmented reality user interface for a client augmented reality device linked to the client user device;
send, via the communication interface, to the client augmented reality device linked to the client user device, the pre-staging augmented reality user interface generated for the client augmented reality device linked to the client user device, wherein sending the pre-staging augmented reality user interface generated for the client augmented reality device linked to the client user device causes the client augmented reality device linked to the client user device to display the pre-staging augmented reality user interface and prompt the user of the client user device to provide pre-staging input;
receive, via the communication interface, from the client augmented reality device linked to the client user device, pre-staging information identifying one or more events to be performed at the enterprise center when the user of the client user device arrives at the enterprise center;
generate one or more pre-staging commands based on the pre-staging information identifying the one or more events to be performed at the enterprise center;
send, via the communication interface, to one or more systems associated with the enterprise center, the one or more pre-staging commands generated based on the pre-staging information identifying the one or more events to be performed at the enterprise center;
after sending the one or more pre-staging commands to the one or more systems associated with the enterprise center, generate a navigation augmented reality user interface for the client augmented reality device linked to the client user device; and
send, via the communication interface, to the client augmented reality device linked to the client user device, the navigation augmented reality user interface generated for the client augmented reality device linked to the client user device.

US Pat. No. 10,768,950

SYSTEMS AND METHODS FOR BUILDING DYNAMIC INTERFACES

Capital One Services, LLC...

1. A method of generating a user interface, the method comprising:receiving, at a processor, first data indicative of a first plurality of transactions by a user;
processing, by the processor, the first data to generate first behavioral information describing the user;
causing, by the processor, the first behavioral information to be displayed by an interactive user interface within a sequence of cards displayed by the interactive user interface;
receiving, at the processor, a user input made in response to the first behavioral information being displayed through the interactive user interface;
analyzing, by the processor, the user input to generate user preference information indicating a preference for at least a portion of the first behavioral information;
receiving, at the processor, second data indicative of a second plurality of transactions by the user;
processing, by the processor, the second data and the user preference information to generate second behavioral information describing the user, the second behavioral information being generated to correspond to the preference; and
causing, by the processor and as a result of processing the second data and the user preference information to generate the second behavioral information to correspond to the preference, the second behavioral information to be displayed by the interactive user interface as part of a sequence of cards, wherein a greater number of cards containing the second behavioral information are displayed within the sequence than a number of cards containing the first behavioral information within the sequence by the interactive user interface.

US Pat. No. 10,768,949

AUTOMATED GRAPHICAL USER INTERFACE GENERATION FOR GOAL SEEKING

Wells Fargo Bank, N.A., ...

1. A method for creating a user specific Graphical User Interface (GUI), the method comprising:receiving profile information of a user from a network based service, the profile information describing an attribute of the user;
receiving a selection of a goal from the user, the goal relating to the attribute;
determining a start point of the user with respect to the goal, the start point of the user based upon the profile information;
determining a plurality of paths from the start point to the goal utilizing a planning algorithm, each path in the plurality of paths including a plurality of steps, the plurality of steps including a plurality of milestones;
generating a map based upon the plurality of paths, the plurality of paths being visually represented on the map;
creating at least one GUI descriptor specifying instructions for rendering the user specific GUI such that, when rendered, the user specific GUI displays the map;
tracking progress of the user by representing the user as being at a point along a first path in the plurality of paths on the map;
determining, based upon the progress of the user at a milestone in the plurality of milestones, whether or not, as of the milestone, the user is on track for reaching the goal;
routing the user via a first path in the plurality of paths if, as of the milestone, the user is on track for reaching the goal, and instead routing the user via a second path in the plurality of paths if, as of the milestone, the user is not on track for reaching the goal; and
changing a graphical depiction of a displayed incline of a current path of the user based on the tracking of the progress of the user.

US Pat. No. 10,768,948

APPARATUS AND METHOD FOR DYNAMIC MODIFICATION OF MACHINE BRANDING OF INFORMATION HANDLING SYSTEMS BASED ON HARDWARE INVENTORY

Dell Products, L.P., Rou...

1. A system comprising:a hardware processor; and
a memory device storing instructions that when executed cause the hardware processor to perform operations, the operations including:
executing a boot operation;
determining a planar type associated with a motherboard;
querying an electronic database for the planar type associated with the motherboard, the electronic database having entries that electronically associate branding identities to planar types including the planar type associated with the motherboard; and
identifying a branding identity of the branding identities in the electronic database that is electronically associated with the planar type.
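The querying and identifying steps amount to a keyed lookup of branding identities by planar type. A minimal sketch, with an invented table standing in for the electronic database:

```python
# Sketch of the planar-type lookup in the claim: a database associating
# planar types with branding identities, queried during the boot
# operation. Table contents are invented for illustration.

BRANDING_DB = {
    "planar-type-01": "Brand X Workstation",
    "planar-type-02": "Brand Y Server",
}

def identify_branding(planar_type):
    """Return the branding identity associated with a planar type, if any."""
    return BRANDING_DB.get(planar_type)

print(identify_branding("planar-type-02"))  # Brand Y Server
```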

US Pat. No. 10,768,947

METHOD FOR INTERFACE REFRESH SYNCHRONIZATION, TERMINAL DEVICE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

GUANGDONG OPPO MOBILE TEL...

1. A method of interface refresh synchronization for foreground applications, comprising:when a plurality of foreground applications in a user space of an operating system initiate a plurality of interface refresh operations, obtaining, in the user space, refresh information of the plurality of interface refresh operations, wherein the refresh information comprises a thread number of each interface refresh operation, refresh time and refresh contents corresponding to each thread number; and
transmitting the obtained refresh information of the plurality of interface refresh operations to a kernel space of the operating system;
the method further comprising:
analyzing, in the kernel space, the refresh contents corresponding to each thread number of the plurality of interface refresh operations, to obtain refresh load information of each thread of the plurality of foreground applications; and
performing system frequency modulation and task scheduling, according to the refresh load information of each thread of the plurality of foreground applications.

US Pat. No. 10,768,946

EDGE CONFIGURATION OF SOFTWARE SYSTEMS FOR MANUFACTURING

SAP SE, Walldorf (DE)

1. A method comprising:receiving, by a computing device, a specification of a manufacturing system that is generated using a modeling language, the modeling language allowing a layout of a set of machines in the manufacturing system to be specified;
retrieving, by the computing device, available software systems from a software provider;
analyzing, by the computing device, the specification to determine applicable software systems for the manufacturing system;
generating, by the computing device, a configuration based on the analysis of the layout of the set of machines for the manufacturing system, the configuration specifying one or more instances of software systems to be installed on an edge system between the manufacturing system and a remote computing environment and the set of machines; and
deploying, by the computing device, the one or more instances of the software systems on the edge system and the set of machines, wherein the edge system orchestrates operations on the set of machines operating in the manufacturing system in real-time, and wherein the edge system communicates with the remote computing environment for non-real time operations on the set of machines.

US Pat. No. 10,768,945

SEMANTIC WEAVING OF CONFIGURATION FRAGMENTS INTO A CONSISTENT CONFIGURATION

TELEFONAKTIEBOLAGET LM ER...

1. A method for integrating source models into a system configuration, comprising:generating transformations according to a weaving model which specifies relations among metamodels of the source models and the system configuration, wherein the transformations, when executed, transform the source models into the system configuration including a plurality of target entities;
generating, from the transformations, one or more integration constraints for each target entity to be created or modified by an operation of the transformations, wherein the integration constraints describe the relations specified by the weaving model;
forming system configuration constraints to include the integration constraints in addition to constraints of each source model; and
executing the transformations to transform the source models into the system configuration to thereby generate the system configuration obeying the system configuration constraints.

US Pat. No. 10,768,944

METHOD AND SYSTEM FOR CUSTOMIZING DESKTOP LAUNCHER OF MOBILE TERMINAL

JRD COMMUNICATION INC., ...

1. A desktop-launcher-customizing method for a mobile terminal, wherein the mobile terminal comprises at least two kinds of desktop launchers, and the at least two kinds of desktop launchers at least comprise a first desktop launcher and a second desktop launcher, comprising:placing a first resource file for customizing the first desktop launcher into a resource folder within a first file directory, and placing a second resource file for customizing the second desktop launcher into the resource folder within the first file directory, and placing a first configuration file for customizing the first desktop launcher into a configuration folder within the first file directory, and placing a second configuration file for customizing the second desktop launcher into the configuration folder within the first file directory, wherein the first resource file and the second resource file, the resource folder, the first configuration file and the second configuration file, and the configuration folder are different in name;
adding a compilation command, specifying a first saving path of the first resource file and the second resource file and a second saving path of the first configuration file and the second configuration file in the first file directory, and copying the first resource file and the second resource file in the first saving path and the first configuration file and the second configuration file in the second saving path into a second file directory by executing the compilation command, wherein the first resource file and the first configuration file are copied to a first folder for the first desktop launcher in the second file directory, and the second resource file and the second configuration file are copied to a second folder for the second desktop launcher in the second file directory;
writing a third saving path of the first resource file and the first configuration file and a fourth saving path of the second resource file and the second configuration file in the second file directory into a program information file, and copying the program information file to the first folder for the first desktop launcher and the second folder for the second desktop launcher in the second file directory;
reading the third saving path and the fourth saving path from the program information file to obtain the first folder for the first desktop launcher and the second folder for the second desktop launcher in the second file directory, and compiling the first resource file and the first configuration file in the first folder for the first desktop launcher in the second file directory to generate a first installation program file, and compiling the second resource file and the second configuration file in the second folder for the second desktop launcher in the second file directory to generate a second installation program file; and
loading the first or second installation program file according to the first or second desktop launcher selected by a user, and executing the first or second desktop launcher.

US Pat. No. 10,768,943

ADAPTER CONFIGURATION OVER OUT OF BAND MANAGEMENT NETWORK

Hewlett Packard Enterpris...

1. A non-transitory machine-readable storage medium comprising instructions that, when executed, cause a processing resource of a storage array to:receive contact information of a management controller for a host device;
query the management controller, over a management network, for a supported data network adapter on the host device;
in response to a determination that the host device comprises a supported data network adapter:
transmit identifying information of the storage array to the management controller;
query the management controller for identifying information of the supported data network adapter;
query the management controller for a current zone of the supported data network adapter in the data network; and
in response to a determination that the current zone is different from a storage array zone in the data network, add the identifying information of the supported data network adapter to the storage array zone;
receive an identifier of a storage volume associated with the storage array; and
with the storage array and over the management network, configure the supported data network adapter of the host device to boot from the storage volume over a data network that is out of band in relation to the management network.

US Pat. No. 10,768,942

OPTION ROM DISPATCH POLICY CONFIGURATION INTERFACE

American Megatrends Inter...

1. A computer-implemented method, comprising:identifying a device connected to a computer system, the device containing a multi-image option read-only memory (ROM);
accessing an option ROM dispatch policy associated with the device, the option ROM dispatch policy comprising:
an enablement setting indicating whether firmware of a computer executes the multi-image option ROM of the device during a boot sequence of the computer, and
a type setting indicating an image of the multi-image option ROM the firmware of the computer executes during the boot sequence of the computer; and
executing at least a portion of the multi-image option ROM according to the option ROM dispatch policy during the boot sequence of the computer, wherein the device is an add-on device attached to the computer via a slot and the option ROM dispatch policy is associated with the device based at least in part on the option ROM dispatch policy being associated with the slot.
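The dispatch policy in this claim pairs an enablement setting with a type setting selecting one image of a multi-image option ROM. A minimal sketch; the field names and image keys are assumptions, not taken from the patent:

```python
# Sketch of an option ROM dispatch policy: the enablement setting gates
# execution entirely, and the type setting selects which image of the
# multi-image option ROM the firmware runs during boot.

from dataclasses import dataclass

@dataclass
class DispatchPolicy:
    enabled: bool     # execute the option ROM at all?
    image_type: str   # which image to run, e.g. "legacy" or "uefi"

def select_image(policy, images):
    """Return the image to execute during the boot sequence, or None if disabled."""
    if not policy.enabled:
        return None
    return images.get(policy.image_type)

images = {"legacy": "IMG_LEGACY", "uefi": "IMG_UEFI"}
print(select_image(DispatchPolicy(True, "uefi"), images))  # IMG_UEFI
```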

US Pat. No. 10,768,941

OPERATING SYSTEM MANAGEMENT

Hewlett-Packard Developme...

1. A computing device comprising:a processor;
a memory coupled to the processor; and
a non-transitory computer readable storage medium coupled to the processor and comprising instructions, that when executed by the processor, cause the processor to:
change an operating parameter of the computing device based on a domain change, wherein the domain change represents:
a physical change in location of the computing device from a first geographic location having a first network to a second geographic location having a second network; and
a change in a network security level from the first network to the second network;
select, based on the changed operating parameter, a first operating system to be instantiated from a library of operating systems;
instantiate a copy-on-write virtual computing system executing the first operating system based on the domain change;
delete a second operating system from the non-transitory computer readable storage medium or the memory in parallel with the instantiation of the copy-on-write virtual computing system executing the first operating system;
copy the first operating system to the non-transitory computer readable storage medium; and
instantiate the first operating system on the computing device.

US Pat. No. 10,768,940

RESTORING A PROCESSING UNIT THAT HAS BECOME HUNG DURING EXECUTION OF AN OPTION ROM

LENOVO ENTERPRISE SOLUTIO...

1. A computing device, comprising:an accessory containing an option ROM;
a first processing unit adapted to boot the computing device and to execute the option ROM; and
a second processing unit adapted to be activated by the first processing unit to monitor execution of the option ROM by the first processing unit;
wherein the second processing unit is adapted to restore the first processing unit to a state prior to execution of the option ROM in response to the first processing unit becoming hung during execution of the option ROM.

US Pat. No. 10,768,939

LOAD/STORE UNIT FOR A PROCESSOR, AND APPLICATIONS THEREOF

ARM Finance Overseas Limi...

1. A method for processing memory access instructions in a processor that executes instructions out-of-program-order, comprising:storing a completion buffer identification value for a graduating memory access instruction in a load/store graduation buffer;
using bits of the stored completion buffer identification value to access an entry of a load/store queue associated with the instruction;
retrieving information from the load/store queue entry; and
determining whether to allocate an entry for the instruction in a fill/store buffer based on the retrieved information.

US Pat. No. 10,768,938

BRANCH INSTRUCTION

ARM Limited, Cambridge (...

1. Apparatus for processing data comprising:processing circuitry to perform processing operations specified by a sequence of program instructions;
an instruction decoder to decode said sequence of program instructions to generate control signals to control said processing circuitry to perform said processing operations; wherein
said instruction decoder comprises branch-future instruction decoding circuitry to decode a branch-future instruction, said branch-future instruction having a programmable parameter associated with a branch target address and further programmable branch point data parameter indicative of a predetermined instruction following said branch-future instruction within said sequence of program instructions; and
said processing circuitry comprises branch control circuitry controlled by said branch-future instruction decoding circuitry and responsive to said branch point data to trigger a branch to processing of program instructions starting from a branch target instruction corresponding to said branch target address when processing of said sequence of program instructions reaches said predetermined instruction; wherein,
said branch point data comprises one or more of:
address data indicative of an address of said predetermined instruction;
end data indicative of an address of a last instruction that immediately precedes said predetermined instruction;
offset data indicative of a distance between said branch-future instruction and said predetermined instruction;
a proper subset of bits indicative of a memory storage address of said predetermined instruction starting from a least significant bit end of bits of said memory storage address that distinguish between starting storage addresses of instructions;
remaining size instruction data indicative of a number of instructions remaining to be processed before said predetermined instruction; and
remaining size data indicative of a number of program storage locations remaining to be processed before said predetermined instruction is reached.

US Pat. No. 10,768,937

USING RETURN ADDRESS PREDICTOR TO SPEED UP CONTROL STACK RETURN ADDRESS VERIFICATION

Advanced Micro Devices, I...

1. A method for executing a return instruction on a processor, the method comprising:predicting a first target return address for a first return instruction based on a first return address stack entry;
responsive to detecting that a first indicator associated with the first return address stack entry indicates that the first predicted target return address is not guaranteed to match a corresponding entry of a control stack, checking the corresponding entry of the control stack against a corresponding entry of a data stack to verify the return address for the first return instruction;
predicting a second target return address for a second return instruction based on a second return address stack entry; and
responsive to detecting that a second indicator associated with the second return address stack entry indicates that the second predicted target return address is guaranteed to match a corresponding address of the control stack, foregoing checking the corresponding address of the control stack against a corresponding address of the data stack for the second return instruction.
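The two responsive clauses describe a fast path and a slow path keyed on a per-entry indicator bit in the return address stack. A minimal sketch, with invented data shapes:

```python
# Sketch of the indicator-gated verification in the claim: each return
# address stack (RAS) entry carries a flag; when set, the predicted
# address is guaranteed to match the control stack, so the cross-check
# against the data stack is skipped.

def verify_return(ras_entry, control_stack_addr, data_stack_addr):
    """Return (predicted_address, check_performed) for a return instruction."""
    predicted, guaranteed = ras_entry
    if guaranteed:
        return predicted, False       # fast path: verification foregone
    # slow path: control stack entry checked against the data stack entry
    if control_stack_addr != data_stack_addr:
        raise RuntimeError("return address mismatch")
    return predicted, True

print(verify_return((0x4000, True), 0x4000, 0x4000))  # (16384, False)
```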

US Pat. No. 10,768,936

BLOCK-BASED PROCESSOR INCLUDING TOPOLOGY AND CONTROL REGISTERS TO INDICATE RESOURCE SHARING AND SIZE OF LOGICAL PROCESSOR

Microsoft Technology Lice...

1. A processor comprising a plurality of physical processor cores for executing a program comprising a plurality of instruction blocks, the processor comprising:a branch predictor configured to select a first group of instructions to be speculatively executed by a logical processor; and
a programmable composition topology register for assigning a number of physical processor cores of the plurality of physical processor cores to the logical processor, the programmable composition topology register being dynamically programmable during execution of the program, the composition topology register comprising a composition field and a mode field for specifying a meaning of the composition field, the composition field providing a bit-map of the physical processor cores of the logical processor when the mode field is programmed with a first value, and the composition field providing the number of the physical processor cores of the logical processor when the mode field is programmed with a second value.
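The mode field's two meanings for the composition field decode naturally into a bit-map case and a count case. A sketch, with an invented core count and the "first N cores" policy for the count mode (the claim does not say which cores a count selects):

```python
# Sketch of decoding the composition topology register: the mode field
# specifies whether the composition field is a bit-map of the physical
# cores assigned to the logical processor, or a count of cores.

MODE_BITMAP = 0   # composition field is a bit-map of cores
MODE_COUNT = 1    # composition field is a number of cores

def cores_for_logical_processor(mode, composition, num_cores=8):
    """Return the set of physical core indices assigned to the logical processor."""
    if mode == MODE_BITMAP:
        return {i for i in range(num_cores) if composition & (1 << i)}
    if mode == MODE_COUNT:
        return set(range(composition))   # first N cores, one possible policy
    raise ValueError("unknown mode")

print(cores_for_logical_processor(MODE_BITMAP, 0b0101))  # {0, 2}
print(cores_for_logical_processor(MODE_COUNT, 3))        # {0, 1, 2}
```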

US Pat. No. 10,768,935

BOOSTING LOCAL MEMORY PERFORMANCE IN PROCESSOR GRAPHICS

Intel Corporation, Santa...

1. A method comprising:executing a first work-group on a processor, wherein the first work-group comprises a plurality of work-items that are executed in parallel to perform a defined function, wherein the processor comprises internal register files, wherein the processor uses a cache hierarchy including a lowest level cache and at least one other cache;
determining an available space in the internal register files of the processor;
allocating the determined available space in the internal register files as local memory for the first work-group, wherein the local memory for the first work-group is only accessible by the plurality of work-items included in the first work-group; and
within the available space of the internal register files that is allocated as the local memory for the first work-group, simulating a barrier of the first work-group by controlling a number of single instruction multiple data instructions executed by one execution thread.

US Pat. No. 10,768,934

DECODING PREDICATED-LOOP INSTRUCTION AND SUPPRESSING PROCESSING IN ONE OR MORE VECTOR PROCESSING LANES

ARM Limited, Cambridge (...

1. Apparatus for processing data comprising:processing circuitry to perform processing operations specified by program instructions including at least one vector processing program instruction supporting processing of up to Nmax vector elements associated with respective vector processing lanes;
an instruction decoder to decode said program instructions to generate control signals to control said processing circuitry to perform said processing operations; wherein
said instruction decoder comprises predicated-loop instruction decoder circuitry to decode a predicated-loop instruction having associated therewith a number of vector elements Nve to be processed during a number of iterations Nit of a program loop body to be performed;
said processing circuitry comprises predication loop control circuitry to operate when Nve/Nmax does not equal a whole number at least partially to suppress processing in one or more of said vector processing lanes during one or more of said iterations such that a total number of vector elements processed during said iterations is Nve; and
said processing circuitry comprises vector registers each having a predetermined number of bits VRwidth and able to store a plurality of said vector elements, each of said vector elements having a number of bits VEwidth; and
floating point registers to store floating point data values, and a floating point control register to store floating point control data and a value indicative of VEwidth; wherein
when floating point data values are accessible to said program instructions, said floating point control register is also accessible; and
when said floating point data values are inaccessible to said program instructions, said floating point control register is also inaccessible.
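The predication arithmetic in this claim — suppressing lanes on some iterations so that exactly Nve elements are processed when Nve is not a multiple of Nmax — can be checked with a few lines of Python (a sketch of the arithmetic only, not of the circuitry):

```python
# Sketch of the loop predication in the claim: with Nve elements and
# Nmax lanes, the final iteration runs with lanes suppressed so that the
# total number of elements processed across all iterations is exactly Nve.

import math

def active_lanes_per_iteration(nve, nmax):
    """Return the active-lane count for each of the Nit iterations."""
    nit = math.ceil(nve / nmax)       # number of iterations performed
    lanes, remaining = [], nve
    for _ in range(nit):
        lanes.append(min(nmax, remaining))
        remaining -= lanes[-1]
    return lanes

print(active_lanes_per_iteration(10, 4))  # [4, 4, 2] -- 10 elements total
```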

US Pat. No. 10,768,933

STREAMING ENGINE WITH STREAM METADATA SAVING FOR CONTEXT SWITCHING

TEXAS INSTRUMENTS INCORPO...

1. A data processing apparatus comprising:a processing core;
a memory;
a register file; and
a streaming engine configured to receive a plurality of data elements stored in the memory and to provide the plurality of data elements as a data stream to the processing core, the streaming engine including:
an address generator to generate addresses corresponding to locations in the memory;
a buffer to store the plurality of data elements received from the locations in the memory corresponding to the generated addresses; and
an output to supply the plurality of data elements received from the memory to the processing core as the data stream, wherein the streaming engine is:
responsive to a stream save instruction to store state metadata of the data stream into at least one register of the register file; and
responsive to a stream restore instruction to recall the state metadata of the data stream from the at least one register of the register file, wherein:
the state metadata includes a current address, the current address being generated by the address generator and corresponding to a next data element to be fetched from the memory;
the data stream includes a plurality of loops, each loop having a total iteration count representing a number of times the loop is to repeat; and
the state metadata additionally includes information indicating remaining iterations for each of the loops.
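The save/restore pair in this claim round-trips the stream's state metadata — the current fetch address plus the remaining iterations of each nested loop — through the register file. A sketch with an invented flat-list register encoding:

```python
# Sketch of stream save/restore in the claim: a save instruction packs
# the state metadata (current address and per-loop remaining iteration
# counts) into registers; a restore instruction rebuilds the stream
# state from them. The register layout here is an assumption.

class StreamState:
    def __init__(self, current_addr, remaining_iters):
        self.current_addr = current_addr              # next element to fetch
        self.remaining_iters = list(remaining_iters)  # one count per loop level

def stream_save(state):
    """Pack stream metadata into a register-file image (a flat list)."""
    return [state.current_addr] + state.remaining_iters

def stream_restore(regs):
    """Rebuild stream state from the saved register image."""
    return StreamState(regs[0], regs[1:])

saved = stream_save(StreamState(0x1000, [3, 7]))
restored = stream_restore(saved)
print(restored.current_addr, restored.remaining_iters)  # 4096 [3, 7]
```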

US Pat. No. 10,768,932

INFORMATION PROCESSING SYSTEM, ARITHMETIC PROCESSING CIRCUIT, AND CONTROL METHOD FOR INFORMATION PROCESSING SYSTEM

FUJITSU LIMITED, Kawasak...

1. An information processing system comprising a plurality of information processing apparatuses, each of the plurality of information processing apparatuses incorporating a plurality of arithmetic processing circuits, each of the plurality of arithmetic processing circuits includes:a dividing circuit configured to divide a plurality of data blocks retained by the arithmetic processing circuit into groups of a number equal to the number of the plurality of arithmetic processing circuits included in the information processing apparatus that includes the arithmetic processing circuit itself,
a data selecting circuit configured to select respective first data blocks from the plurality of data blocks included in the respective groups,
a transmission destination selecting circuit configured to select arithmetic processing circuits different from each other as respective transmission destinations from the plurality of arithmetic processing circuits included in the information processing apparatus for the respective first data blocks selected by the data selecting circuit based on destination number information obtained by exclusive disjunction operation on identification number information assigned to each arithmetic processing circuit and cyclic number information assigned to each group, and
a transmitting circuit configured to transmit the respective first data blocks selected by the data selecting circuit to the respective arithmetic processing circuits selected by the transmission destination selecting circuit.
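The destination selection here is an exclusive disjunction (XOR) of each circuit's identification number with each group's cyclic number, which maps every group of a given circuit to a distinct peer. A sketch of that arithmetic:

```python
# Sketch of the XOR-based destination selection in the claim:
# destination number = (own identification number) XOR (cyclic number of
# the group). XOR with a fixed id is a bijection, so every group goes to
# a different arithmetic processing circuit.

def destinations(own_id, num_circuits):
    """Map each group's cyclic number to the selected destination circuit id."""
    return {cyc: own_id ^ cyc for cyc in range(num_circuits)}

# Circuit 2 in a 4-circuit apparatus sends each group to a distinct peer:
print(destinations(2, 4))  # {0: 2, 1: 3, 2: 0, 3: 1}
```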

US Pat. No. 10,768,931

FINE-GRAINED MANAGEMENT OF EXCEPTION ENABLEMENT OF FLOATING POINT CONTROLS

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method of facilitating processing within a computing environment, the computer-implemented method comprising:obtaining a user-specified value for a floating point exception control for a selected instruction to be processed, the floating point exception control indicating a particular event;
determining, based on the user-specified value and another value for the floating point exception control provided by a centralized status exception control, an enablement control for the floating point exception control, wherein the determining comprises combining the user-specified value and the other value to provide a value for the enablement control; and
executing the selected instruction, wherein an indication of a floating point exception associated with execution of the selected instruction is enabled or disabled based on the value of the enablement control.
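The determining step combines the per-instruction user-specified value with the centralized control's value into one enablement control. The claim does not fix the combining rule; a sketch using AND as one plausible choice:

```python
# Sketch of the enablement-control combination in the claim: the
# user-specified value for a floating point exception control is combined
# with the centralized status exception control's value. ANDing the two
# (exception signaled only if both allow it) is an assumed combining rule.

def exception_enabled(user_value, centralized_value):
    """Return the enablement control derived from both exception controls."""
    return bool(user_value) and bool(centralized_value)

print(exception_enabled(1, 1))  # True
print(exception_enabled(1, 0))  # False
```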

US Pat. No. 10,768,930

PROCESSOR SUPPORTING ARITHMETIC INSTRUCTIONS WITH BRANCH ON OVERFLOW AND METHODS

MIPS Tech, LLC, Santa Cl...

1. A processor comprising:an execution unit comprising an Arithmetic and Logic Unit (ALU) that is configured to accept at least two inputs, to perform an arithmetic operation on the accepted inputs, the arithmetic operation specified by an instruction from a program being executed, and to generate an indication of overflow, which indicates whether the arithmetic operation resulted in overflow, wherein the execution unit is configured to perform a multi-word sized addition, and to check for overflow at multiple boundaries within a multi-word result of the addition; and
an instruction unit configured to receive the indication and to branch to a location in the program being executed, in response to the indication indicating that the arithmetic operation resulted in overflow,
wherein the ALU includes a first overflow detection circuit and a second overflow detection circuit, wherein the first overflow detection circuit is configured to accept a first part of a first input and a first part of a second input, the second overflow detection circuit is configured to accept a second part of the first input and a second part of the second input, and the indication of overflow is generated as a function of a Boolean operation on the outputs of the first overflow detection circuit and the second overflow detection circuit.
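The two overflow detection circuits and their Boolean combination can be sketched in software: each detector checks signed overflow for its part of the operands, and the final indication ORs the results. This is a simplified model (the word size is invented, and carries between the parts are ignored):

```python
# Sketch of multi-boundary overflow detection: a multi-word addition is
# checked for two's-complement overflow independently at each word
# boundary, and the overall indication is a Boolean combination (here OR)
# of the per-part detector outputs. Word size is illustrative.

WORD_BITS = 16

def signed_overflow(a, b, bits):
    """Detect two's-complement overflow of a + b within `bits` bits."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return not (lo <= a + b <= hi)

def add_overflow_at_boundaries(a_lo, a_hi, b_lo, b_hi):
    """Combine the low-part and high-part overflow detectors."""
    ovf_lo = signed_overflow(a_lo, b_lo, WORD_BITS)   # first detector
    ovf_hi = signed_overflow(a_hi, b_hi, WORD_BITS)   # second detector
    return ovf_lo or ovf_hi

print(add_overflow_at_boundaries(30000, 0, 10000, 0))  # True: low word overflows
```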

US Pat. No. 10,768,929

AUTOMATICALLY UPDATING SOURCE CODE IN VERSION CONTROL SYSTEMS VIA A PULL REQUEST

ATLASSIAN PTY LTD., Sydn...

1. A computer-implemented method for automatically updating source code in a first source code branch using a pull request user interface, the method comprising:displaying, on a display of a user computing device, the pull request user interface associated with the first source code branch and associated with a pull request to merge the first source code branch into a second source code branch, the pull request user interface comprising:
a panel displaying source code associated with an intermediate merge between a tip of the first source code branch and a tip of the second source code branch and a code change suggestion; and
a selectable affordance associated with the code change suggestion to accept the code change suggestion; wherein the code change suggestion includes an original line of source code from the intermediate merge to be changed and a new line of source code suggested by a user via the pull request user interface to replace the original line of source code;
receiving user input on the selectable affordance associated with the code change suggestion;
propagating a source code change from the intermediate merge to the first source code branch such that an original line of source code in the first source code branch that corresponds to the original line of source code from the intermediate merge is replaced with the new line of source code as suggested by the user; and
upon successfully propagating the source code change to the first source code branch, updating the pull request user interface associated with the first source code branch to indicate that the code change suggestion is applied and the pull request is modified.
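The propagation step — replacing the corresponding original line in the source branch with the suggested new line — reduces to a line substitution. A sketch with invented data shapes (the patent operates on branches and merges, not line lists):

```python
# Sketch of propagating an accepted code-change suggestion: the line in
# the first source code branch corresponding to the original line from
# the intermediate merge is replaced with the user's suggested new line.

def apply_suggestion(branch_lines, original_line, new_line):
    """Return the branch's lines with the suggestion applied."""
    updated = [new_line if line == original_line else line
               for line in branch_lines]
    if updated == branch_lines:
        raise ValueError("original line not found in branch")
    return updated

print(apply_suggestion(["a = 1", "b = 2"], "b = 2", "b = 3"))
```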

US Pat. No. 10,768,928

SOFTWARE DEVELOPMENT WORK ITEM MANAGEMENT SYSTEM

INTERNATIONAL BUSINESS MA...

1. An information processor comprising:an acquisition unit configured to acquire, for each of a plurality of work items each representing a work to change at least one file, designation of a file associated with the work item;
a change recording unit configured to record the occurrence of a change associated with a plurality of files and a history of changes made to the plurality of files;
a dependency detection unit configured to detect dependencies among the plurality of files using a static analysis utility program without running a build;
a determination unit configured to determine, on the basis of the dependencies among the files, whether there is a dependency relationship between at least two work items based on the dependency relationship between the files detected by the dependency detection unit and generate a related link between the two work items;
an operatively connected recording medium to record a name of the changed file and record an occurrence of change in association with the file, wherein the occurrence of change occurs within a predetermined timing and based at least in part on the determining of the dependencies;
a generation unit operable, in response to a change made to a file associated with one work item, to generate a testing work item for checking whether the change will cause a problem on a changed result by another work item having a dependency relationship with the one work item, the generation unit generating the testing work item on the basis of the other work item; and
a notification unit operable, in response to a changing work performed on a file corresponding to one work item, to notify users associated with other work item(s) having dependency relationship(s) with the one work item, of the occurrence of the changing work at a predetermined timing,
wherein the acquisition unit reads a design document in which related files and changes made to the files are described for each of a plurality of work items, to identify the files related to the respective work items, wherein the design document is a work breakdown structure that describes work items through an entire software developing process and describes files related to the respective work items and any changes made to the plurality of files, and for each of the work items described in the design document, associates the each of the work items with the plurality of files related to the work item and the files whose changes are described.

US Pat. No. 10,768,927

MANAGEMENT SYSTEM AND MANAGEMENT METHOD

HITACHI, LTD., Tokyo (JP...

1. A management system comprising:an interface device connected to an operation target system including one or more operation target apparatuses;
a storage resource that stores a management program; and
a processor that creates or edits a template for operation automation, which is a service template associated with one or more components, by executing the management program,
wherein each property of the service template is included in one or more property groups and each property of the service template is associated with a component property, which is a property of each of the components associated with the service template,
wherein the processor:
(1) receives a version upgrade request which designates the service template;
(2) upgrades a version of a target component associated with the designated service template or a duplicate of the designated service template in response to the version upgrade request by replacing the target component with a different version of the target component;
(3) estimates each of all possible property group configurations as a post-reset configuration caused by the version upgrade of the target component with respect to each property group before resetting including one or more properties of the designated service template or the duplicate of the designated service template, which are associated with the version-upgraded target component, from among the property groups including one or more properties of the designated service template or the duplicate of the designated service template, wherein a configuration of each property group is a combination of a number of the properties of service templates belonging to the property group and the component property to which each of the properties of service templates is associated;
(4) searches for a property group having any of the estimated configurations from among property groups formed that include more properties of a service template other than the designated service template or the duplicate of the designated service template; and
(5) displays setting content of the property group detected by the search.
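Steps (3) and (4) above amount to computing a "configuration" for each property group (its property count plus the component property bound to each template property) and searching other templates' groups for a matching configuration. The sketch below illustrates that matching; the names (`Configuration`, `configuration_of`, `find_matching_groups`) and the dict-based modeling of a property group are assumptions for illustration, not the patented implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Configuration:
    """A property-group configuration: the combination of how many
    properties the group has and which component property each
    service-template property is associated with."""
    bindings: tuple  # sorted (template_property, component_property) pairs


def configuration_of(group):
    """Derive the configuration of a property group, modeled here as a
    dict mapping template properties to component properties."""
    return Configuration(tuple(sorted(group.items())))


def find_matching_groups(estimated_configs, other_groups):
    """Step (4): search the property groups of other service templates
    for any group whose configuration equals one of the configurations
    estimated for the post-upgrade state."""
    wanted = set(estimated_configs)
    return [g for g in other_groups if configuration_of(g) in wanted]


# Estimated post-upgrade configuration: two properties bound to c1 and c2.
estimated = [configuration_of({"p1": "c1", "p2": "c2"})]
others = [{"p1": "c1", "p2": "c2"}, {"p1": "c1"}]
matches = find_matching_groups(estimated, others)
```

Only groups whose full configuration (count and bindings) matches an estimated configuration are returned, mirroring step (5)'s display of detected groups.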

US Pat. No. 10,768,926

MAINTAINING MANAGEABILITY STATE INFORMATION DISTINCT FROM MANAGED METADATA

salesforce.com, inc., Sa...

1. A method, comprising:storing, by a computing system, metadata for one or more applications in one or more database tables of a multi-tenant database system, wherein the metadata includes one or more custom database table definitions that specify sets of fields included in one or more custom database tables, wherein the fields include one or more tenant-specified custom fields that are not included in a standard database table definition used by one or more other tenants of the multi-tenant database system;
storing, by the computing system in a separate database table from the metadata, manageable state information that specifies permissions for one or more entities to edit the metadata to alter the set of fields included in one of the custom database table definitions; and
generating, by the computing system, information indicating whether an update to at least one of the one or more applications violates a permission specified by the manageable state information by accessing the manageable state information and without accessing the metadata.
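The point of the claim is that the permission check touches only the separately stored manageable state, never the metadata tables themselves. A minimal sketch, assuming an invented key structure (the `(entity, table)` tuples, table name, and boolean permission model are illustrative, not from the patent):

```python
# Separate "manageable state" table: permission for an entity to edit
# the field set of a custom database table definition.
manageable_state = {
    ("admin", "Invoice__c"): True,
    ("integration_user", "Invoice__c"): False,
}


def update_violates_permission(entity, table):
    """Decide whether an update violates a permission by reading only
    the manageable state; the metadata itself is never accessed."""
    return not manageable_state.get((entity, table), False)
```

Keeping the state in its own table is what allows this check to run without loading or parsing the (potentially large) metadata definitions.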

US Pat. No. 10,768,925

PERFORMING PARTIAL ANALYSIS OF A SOURCE CODE BASE

1. A computer-implemented method comprising:receiving, from a user, a request for partial analysis results to identify source code violations introduced by the user in changes made by the user to one or more source code files in a first snapshot of a project;
generating an initial set of primary source code files comprising only one or more source code files changed by the user relative to source code files in the first snapshot of the project;
obtaining data representing a dependency graph of source code files in the project, wherein the dependency graph has a directed link from a first node representing a first source code file to a second node representing a second source code file whenever the first source code file imports the second source code file or uses one or more definitions in the second source code file;
generating an augmented set of primary source code files, including adding, to the initial set of primary source code files, one or more additional source code files represented by nodes in the dependency graph having a link to any node representing any of the files in the initial set of primary source code files;
generating a set of secondary source code files including identifying source code files represented by nodes to which a link in the dependency graph exists from any node representing any of the files in the augmented set of primary source code files;
performing a partial analysis of the project using one or more files included in the augmented set of primary source code files and the set of secondary source code files to identify unmatched violations occurring in the augmented set of primary source code files but not occurring in the first snapshot, the unmatched violations being violations introduced by the user in changes made by the user to the one or more source code files in the initial set of primary source code files; and
providing partial analysis results for the files included in the augmented set of primary source code files, wherein the partial analysis results identify locations of the violations introduced by the user in the one or more source code files.
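The set construction in this claim can be sketched directly: the augmented primary set adds direct dependents of the changed files, and the secondary set collects the files the augmented set links to. This is a sketch under assumed representations (adjacency sets where `graph[a]` contains `b` when `a` imports or uses definitions from `b`; excluding primary files from the secondary set is a simplifying choice, not stated in the claim):

```python
def analysis_sets(graph, changed):
    """Compute the augmented primary set and the secondary set for a
    partial analysis, given a dependency graph as {file: set_of_deps}."""
    initial = set(changed)
    # Augmented primary set: the initial (changed) files plus every file
    # whose node has a link to a node representing an initial file.
    augmented = initial | {src for src, deps in graph.items() if deps & initial}
    # Secondary set: files to which a link exists from any file in the
    # augmented primary set (here minus the primary files themselves).
    secondary = set().union(*(graph.get(f, set()) for f in augmented)) - augmented
    return augmented, secondary


graph = {"a.py": {"b.py"}, "b.py": {"c.py"}, "d.py": {"c.py"}}
augmented, secondary = analysis_sets(graph, {"b.py"})
```

With `b.py` changed, `a.py` joins the primary set because it links to `b.py`, while `c.py` becomes secondary; `d.py` is excluded entirely, which is what makes the analysis partial.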

US Pat. No. 10,768,924

AUTOMATED USAGE DRIVEN ENGINEERING

Accenture Global Solution...

1. A computer-implemented method for automating vehicle feature updates, the method being executed by one or more processors and comprising:receiving telematics data identifying an actual usage of a first vehicle;
automatically performing a gap analysis, using a gap analysis component, to determine a difference between the actual usage of the first vehicle and an expected usage of the first vehicle based at least in part on the telematics data;
determining, using a configuration adjustment component, a plurality of feature updates that are implementable as software updates based at least in part on the difference between the actual usage of the first vehicle and the expected usage of the first vehicle;
determining a selected feature update from the plurality of feature updates to provide to an onboard computer system of an active vehicle; and
updating a selected feature on the onboard computer system of the active vehicle using the selected feature update when the selected feature update can be implemented by a software update.
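The gap analysis and configuration-adjustment steps reduce to a per-feature difference between actual and expected usage, filtered to features deliverable as software. A hedged sketch (the feature names, numeric usage model, and threshold of any nonzero gap are all invented for illustration):

```python
def usage_gaps(actual, expected):
    """Gap analysis: per-feature difference between actual usage
    (derived from telematics data) and expected usage."""
    keys = set(actual) | set(expected)
    return {k: actual.get(k, 0) - expected.get(k, 0) for k in keys}


def software_feature_updates(gaps, software_updatable):
    """Keep only feature updates implementable as software updates."""
    return sorted(f for f, gap in gaps.items()
                  if gap != 0 and f in software_updatable)


telematics = {"regen_braking": 80, "lane_assist": 5}
expected = {"regen_braking": 50, "lane_assist": 40}
gaps = usage_gaps(telematics, expected)
candidates = software_feature_updates(gaps, {"lane_assist", "regen_braking"})
```

A selected feature update would then be pushed to the active vehicle's onboard computer only when, as the claim requires, it can be implemented by a software update.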

US Pat. No. 10,768,923

RELEASE ORCHESTRATION FOR PERFORMING PRE RELEASE, VERSION SPECIFIC TESTING TO VALIDATE APPLICATION VERSIONS

salesforce.com, inc., Sa...

1. Non-transitory machine-readable storage media that provides instructions that, when executed by a machine, will cause said machine to perform operations comprising:responsive to receiving instructions from a release orchestrator to validate a second application (app) version prior to a transition to sending production traffic to the second app version instead of a first app version, determining an app version identifier assigned to the second app version, wherein the first app version and the second app version are each assigned a different one of a plurality of application version identifiers;
selecting a subset of tests from a plurality of release time, version specific tests based on the app version identifier and a plurality of version rules, wherein the plurality of version rules identify which ones of the plurality of release time, version specific tests to apply based on different ones of the plurality of application version identifiers;
performing the subset of tests by sending test traffic to the second app version via a routing engine while the routing engine routes the production traffic to the first app version;
receiving responses to the test traffic from the second app version;
determining from the responses whether at least one of the subset of tests failed;
responsive to determining from the responses that at least one of the subset of tests failed:
determining, based on failure rules, whether any of the subset of tests that failed were configured to indicate that on a failure the transition should not occur, wherein failure rules indicate which of the plurality of release time, version specific tests are configured to indicate that on failure the transition should not occur, and
responsive to determining that any of the subset of tests that failed were configured to indicate that on failure the transition should not occur,
communicating to the release orchestrator that the second app version is not validated for the production traffic and the transition to sending the production traffic to the second app version instead of the first app version should not occur; and
responsive to determining from the responses that none of the subset of tests failed or responsive to determining that none of the subset of tests that failed were configured to indicate that on failure the transition should not occur, communicating to the release orchestrator that the second app version is validated for the production traffic and the transition to sending the production traffic to the second app version instead of the first app version should occur.
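The selection and failure logic in this claim is a pair of rule lookups: version rules pick which release-time, version-specific tests apply to an app version identifier, and failure rules decide whether a failed test blocks the transition. A minimal sketch, assuming an invented rule representation (the test names, version identifiers, and dict/set shapes are illustrative):

```python
def select_tests(app_version_id, version_rules):
    """Pick the subset of release-time, version-specific tests whose
    version rule covers the given app version identifier."""
    return [test for test, versions in version_rules.items()
            if app_version_id in versions]


def transition_should_occur(failed_tests, blocking_tests):
    """Failure rules: the transition to sending production traffic to
    the new version is blocked only when a failed test is configured
    to block the transition on failure."""
    return not (set(failed_tests) & set(blocking_tests))


version_rules = {
    "smoke": {"v1", "v2"},
    "v2_migration_check": {"v2"},
}
subset = select_tests("v2", version_rules)
blocked_result = transition_should_occur(["v2_migration_check"],
                                         {"v2_migration_check"})
allowed_result = transition_should_occur(["smoke"],
                                         {"v2_migration_check"})
```

This mirrors the claim's two outcomes: a non-blocking failure still lets the release orchestrator proceed with the transition, while a blocking failure reports the second app version as not validated.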