US Pat. No. 10,216,680

RECONFIGURABLE TRANSMITTER

Intel Corporation, Santa...

1. An apparatus comprising:
first and second single-ended transmitters; and
a differential driver coupled to the first and second single-ended transmitters, wherein the differential driver is a fully n-type device based push-pull voltage mode driver, wherein the differential driver comprises eight n-type devices such that for a given electrical path from a power supply node to a ground node there are at most three transistors coupled in series between the power supply node and the ground node.

US Pat. No. 10,216,679

SEMICONDUCTOR DEVICE AND CONTROL METHOD THEREOF

Renesas Electronics Corpo...

1. A semiconductor device comprising:
a plurality of processors, each of the plurality of processors being configured to execute a program; and
an external register disposed outside the processors, the external register being connected to each of the plurality of processors, wherein
each of the plurality of processors comprises:
a control circuit that controls execution of the program;
an arithmetic circuit that performs an operation related to the program by using the external register; and
at least one internal storage circuit, the at least one internal storage circuit being disposed inside of a respective one of the plurality of processors,
the internal storage circuit stores execution state data regarding a state of the execution of the program, the execution state data being data that is transferred from a transfer-origin processor to a transfer-destination processor when a program executing entity is changed from one of the plurality of processors to another of the plurality of processors halfway through the execution of the program,
before the program executing entity is changed from the one of the plurality of processors to the another of the plurality of processors, the external register stores operation data related to the operation performed in the arithmetic circuit of the one of the plurality of processors, and
after the program executing entity is changed from the one of the plurality of processors to the another of the plurality of processors, the arithmetic circuit of the another of the plurality of processors performs the operation by using the operation data stored in the external register and the external register stores operation data related to the operation performed in the arithmetic circuit of the another of the plurality of processors.

US Pat. No. 10,216,678

SERIAL PERIPHERAL INTERFACE DAISY CHAIN COMMUNICATION WITH AN IN-FRAME RESPONSE

Infineon Technologies AG,...

1. A master device, wherein the master device is configured to:
output a master data output to a first servant device of a plurality of servant devices, wherein the plurality of servant devices is connected in a serial-peripheral interface (SPI) daisy chain configuration with the master device, wherein the SPI comprises a chip select signal, a serial data in signal, a serial data out signal and a clock signal; and
receive a master data input directly from a last servant device of the plurality of servant devices, wherein the master data input comprises an in-frame response of the plurality of servant devices, wherein the in-frame response is received by the master device in a single SPI communication frame, and wherein respective responses from the plurality of servant devices are arranged within the in-frame response so that the respective responses from the plurality of servant devices are received by the master device in an order inverse to the SPI daisy chain configuration.
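The inverse response ordering claimed above falls out naturally from shifting data through a daisy chain: a later servant sits closer to the master's input, so its response arrives first. A toy model (purely illustrative, not drawn from the patent):

```python
def in_frame_response(chain_responses):
    """Toy model of an SPI daisy chain master -> S1 -> ... -> Sn -> master.
    A later servant's response has fewer hops to the master's input, so it
    arrives first; the frame carries responses in inverse chain order."""
    frame = []
    for response in chain_responses:   # chain_responses[0] = first servant
        frame.insert(0, response)      # each later response lands in front
    return frame

print(in_frame_response(["S1", "S2", "S3"]))  # ['S3', 'S2', 'S1']
```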

US Pat. No. 10,216,677

ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF WITH IDENTIFICATION OF SENSOR USING HISTORY

Samsung Electronics Co., ...

1. An electronic apparatus comprising:
an interface comprising interface circuitry configured to be connectable with at least one of a plurality of sensor modules, each sensor module comprising at least one sensor for sensing an object;
a programmable circuit configured to process a sensing signal obtained by sensing the object through each sensor module; and
a controller configured to identify at least one hardware image corresponding to the sensor module connected to the interface from among a plurality of hardware images, to load the at least one identified hardware image to the programmable circuit, and to control the programmable circuit to process a sensing signal corresponding to the at least one hardware image,
wherein the controller is further configured to, in response to the sensor module transmitting the sensing signal through the interface being identified, identify whether a history of using the identified sensor module is present, and to retrieve the hardware image corresponding to the identified sensor module from a storage in response to the history of using the identified sensor module being present.

US Pat. No. 10,216,676

SYSTEM AND METHOD FOR EXTENDED PERIPHERAL COMPONENT INTERCONNECT EXPRESS FABRICS

FutureWei Technologies, I...

1. A system for extending peripheral component interconnect express (PCIe) fabric comprising:
a host root complex for a host PCIe fabric associated with a first set of bus numbers and a first memory mapped input/output (MMIO) space;
at least one endpoint connected with the host root complex; and
a root complex end point (RCEP) for an extended PCIe fabric associated with a second set of bus numbers and a second MMIO space separate from the first set of bus numbers and the first MMIO space, respectively, wherein the RCEP is one of the at least one endpoint connected with the host root complex, wherein the RCEP is a bridge between the extended PCIe fabric and the host PCIe fabric, and wherein the second set of bus numbers allows additional endpoints to be connected to the RCEP beyond a capacity of the host PCIe fabric as provided by resources of the host root complex.

US Pat. No. 10,216,675

TECHNIQUES FOR ESTABLISHING AN EXTERNAL INTERFACE AND A NETWORK INTERFACE WITHIN A CONNECTOR

LENOVO (SINGAPORE) PTE LT...

1. An electronic device comprising:
a host system;
a device controller that includes a first data channel for communicating with a peripheral device and a second data channel for communicating with a network device;
a first receptacle for simultaneously providing a peripheral interface for said first data channel and a network interface for said second data channel;
a crossbar switch, connected between said device controller and said first receptacle, that switches between said first and second data channels of said device controller to establish said peripheral interface and said network interface in said first receptacle; and
a power delivery controller connected to said host system via a first serial bus, and connected to said crossbar switch via a second serial bus.

US Pat. No. 10,216,674

HIGH PERFORMANCE INTERCONNECT PHYSICAL LAYER

Intel Corporation, Santa...

1. An apparatus comprising:
physical layer logic, link layer logic, and protocol layer logic, wherein the physical layer logic is to:
generate a supersequence comprising a sequence comprising an electrical ordered set (EOS) and a plurality of training sequences, the plurality of training sequences comprises a predefined number of training sequences corresponding to a respective one of a plurality of training states with which the supersequence is to be associated, each training sequence in the plurality of training sequences is to include a respective training sequence header and a training sequence payload, the training sequence payloads of the plurality of training sequences are to be sent scrambled and the training sequence headers of the plurality of training sequences are to be sent unscrambled.

US Pat. No. 10,216,673

USB DEVICE FIRMWARE SANITIZATION

International Business Ma...

1. A method, comprising:
intercepting communications between a universal serial bus (USB) device and a host, at least by implementing first device firmware of the USB device, wherein the first device firmware is implemented in the USB device; and
sanitizing, using at least the implemented first device firmware, intercepted communications from the USB device toward the host, the sanitizing performed so that no communication from the USB device is directly forwarded to the host and instead only sanitized communications are forwarded to the host, wherein:
sanitizing is performed by a sanitizer having a host side and a device side and further comprises:
converting requests from the host to the USB device from USB-level semantics used by the device side to application-level semantics used by the host side, processing the request at an application level to determine first USB-level semantics to use to communicate the request to the USB device, and lowering the application-level semantics to the determined first USB-level semantics for sending to the USB device; and
converting replies from the USB device to the host from the USB-level semantics to the application-level semantics, processing the replies at the application level to determine second USB-level semantics to use to communicate the replies to the host, and lowering the application-level semantics to the determined second USB-level semantics for sending to the host; and
performing the sanitizing based at least on analysis of one or both of the application-level semantics and the USB-level semantics.
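The raise/process/lower round trip described in the sanitizing steps can be sketched as a small pipeline; every callable name below is an assumption for illustration, not the patent's interface:

```python
def sanitize(usb_msg, raise_sem, vet, lower_sem):
    """Toy shape of the claimed sanitizer pipeline: raise USB-level
    semantics to application level, vet the message there, and lower
    the vetted result back to USB-level semantics. Nothing is ever
    forwarded raw; a rejected message yields None."""
    app_msg = raise_sem(usb_msg)          # USB-level -> application-level
    vetted = vet(app_msg)                 # application-level analysis
    return None if vetted is None else lower_sem(vetted)

# Example policy: only application-level "read" requests pass through.
allow_reads = lambda m: m if m.get("op") == "read" else None
to_app = lambda raw: {"op": raw[0], "arg": raw[1]}
to_usb = lambda m: (m["op"], m["arg"])

print(sanitize(("read", 7), to_app, allow_reads, to_usb))   # ('read', 7)
print(sanitize(("write", 7), to_app, allow_reads, to_usb))  # None
```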

US Pat. No. 10,216,672

SYSTEM AND METHOD FOR PREVENTING TIME OUT IN INPUT/OUTPUT SYSTEMS

International Business Ma...

8. A system for preventing time out from occurring during transfer of data to an input/output device, comprising:
one or more processors including memory for storing a quantity of data to be transferred to an input/output device in a data transfer;
a data prober, for probing the quantity of data and forwarding the quantity of data to an input/output controller;
the input/output controller, for breaking the quantity of data into data packets and for transferring the data packets in a data stream;
the input/output device, for receiving the data stream transferred by the input/output controller; and
a data dummy generator, for generating dummy data and inserting the dummy data in the data stream, the data dummy generator being configured to generate dummy data and insert same into the data stream at a selected time in order to avoid a time out condition from occurring during the data transfer.

US Pat. No. 10,216,671

POWER AWARE ARBITRATION FOR BUS ACCESS

QUALCOMM Incorporated, S...

1. A method of operating a bus interface unit, the method comprising:
receiving three or more words from one or more agents for transmission on to a data bus;
storing the three or more words in three or more respective queues, wherein the three or more respective queues are indexed to a predetermined sequential order;
selecting a subset of the three or more respective queues based on a position of a round robin pointer (RRP) having a RRP value that traverses the three or more queues in accordance with the predetermined sequential order, wherein the subset:
excludes queues having an index value lower than the RRP value; and
includes queues having an index value higher than the RRP value;
identifying a plurality of pending words stored in the selected subset of the three or more queues;
determining which of the plurality of pending words stored in the selected subset will consume the least switching power;
selecting, based on the determining, a next word from the plurality of pending words stored in the selected subset of the three or more queues; and
transmitting the selected next word on to the data bus.
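One way to read the claimed selection steps is as a two-stage filter: the round-robin pointer prunes the queues, then a switching-power estimate picks among the survivors. A minimal sketch, assuming Hamming distance as the power proxy and head-of-line words as the candidates (the claim does not say whether the RRP's own queue is in the subset; this sketch includes it):

```python
def hamming(a, b):
    """Number of differing bits -- a rough proxy for switching power."""
    return bin(a ^ b).count("1")

def select_next_word(queues, rrp, last_word):
    """Sketch of the claimed arbitration (data layout is an assumption):
    restrict to queues at or above the round-robin pointer, then among
    their head-of-line words pick the one that flips the fewest bus
    bits relative to the last transmitted word."""
    candidates = [(i, q[0]) for i, q in enumerate(queues) if i >= rrp and q]
    if not candidates:
        return None
    i, word = min(candidates, key=lambda c: hamming(c[1], last_word))
    queues[i].pop(0)                     # dequeue the selected word
    return word

queues = [[0b1111], [0b0110], [0b1001]]
print(bin(select_next_word(queues, rrp=1, last_word=0b1000)))  # 0b1001
```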

US Pat. No. 10,216,670

SYNCHRONIZATION OF A NETWORK OF SENSORS

STMICROELECTRONICS (GRENO...

3. A system comprising:
a plurality of slave boards, each slave board comprising a sensor and a control processor;
a master board comprising a sensor and a control processor, the master board being configured to access measurements of the plurality of slave boards; and
a serial bus connecting the master board and the slave boards;
wherein the control processor of the master board is programmed to calibrate acquisitions of information from each of the slave boards, the control processor of the master board being configured to:
receive a count from each slave board, each count representing a time separating an instant of reception of a measurement acquisition start command transmitted by the master board to that slave board and a measurement acquisition end instant for that slave board;
for each slave board, calculate a delay to be applied to an acquisition start command of that slave board, the delay being calculated as a function of the count received from that slave board; and
transmit each acquisition start command to the respective slave board so that the respective slave board delays a start of acquisition by a value of the delay calculated for that board so that acquisitions of all slave boards end at a same instant of time, wherein a delay Ri to be applied to the acquisition start command of a slave board i is calculated according to the following formula:
Ri=(N-i)*C+CN-Ci, where index i denotes a board and index N denotes the board for which acquisition ends last, where i=0 . . . N,
C is a count, fixed by the master board, between the transmissions, by the master board, of commands to two successive slave boards, and
Ci is a count performed for a board i between the instant of reception of a measurement acquisition start command and the measurement acquisition end instant.
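With the formula above, the delays can be checked numerically: board i receives its command i*C after board 0, so i*C + Ri + Ci is the same for every board. A quick sketch with made-up counts:

```python
def start_delays(counts, C):
    """Compute the claimed start delays Ri = (N - i)*C + CN - Ci, where
    counts[i] is board i's acquisition duration count and C is the fixed
    spacing between commands to successive boards. The counts here are
    illustrative values, not from the patent."""
    N = len(counts) - 1                  # board N: acquisition ends last
    return [(N - i) * C + counts[N] - counts[i] for i in range(N + 1)]

counts, C = [5, 9, 12], 4
delays = start_delays(counts, C)         # [15, 7, 0]
# End time for board i = i*C (command arrival) + Ri + Ci: constant.
ends = [i * C + delays[i] + counts[i] for i in range(len(counts))]
print(delays, ends)                      # [15, 7, 0] [20, 20, 20]
```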

US Pat. No. 10,216,669

BUS BRIDGE FOR TRANSLATING REQUESTS BETWEEN A MODULE BUS AND AN AXI BUS

Honeywell International I...

1. A method for bus bridging comprising:
providing a bus interface device communicatively coupled between at least one module bus and at least one advanced extensible interface (AXI) bus for translating bus requests between said module bus and said AXI bus, said bus interface device including logic, wherein said logic is configured to:
receive a read/write (R/W) request that is one of a module bus protocol (module bus protocol R/W request) and an AXI bus protocol (AXI bus protocol R/W request);
buffer said R/W request to provide a buffered R/W request;
translate, via a finite state machine (FSM), said buffered R/W request to a first AXI protocol conforming request if said buffered R/W request is said module bus protocol R/W request, and translate, via the FSM, said buffered R/W request to a first module bus protocol conforming request if said buffered R/W request is said AXI bus protocol R/W request, wherein the FSM is implemented as sequential logic circuits and is defined by a list of its states and the triggering conditions for each transition; and
transmit said first AXI protocol conforming request to said AXI bus or said first module bus protocol conforming request to said module bus.

US Pat. No. 10,216,668

TECHNOLOGIES FOR A DISTRIBUTED HARDWARE QUEUE MANAGER

Intel Corporation, Santa...

1. A processor comprising:
a plurality of processor cores;
a plurality of hardware queue managers;
interconnect circuitry to connect each hardware queue manager of the plurality of hardware queue managers to each processor core of the plurality of processor cores; and
a plurality of queue mapping units, wherein each of the plurality of processor cores is associated with a different queue mapping unit of the plurality of queue mapping units and each of the plurality of queue mapping units is associated with a different processor core of the plurality of processor cores,
wherein each hardware queue manager of the plurality of hardware queue managers comprises:
enqueue circuitry to store data received from a processor core of the plurality of processor cores in a data queue associated with the respective hardware queue manager in response to an enqueue command generated by the processor core, wherein the enqueue command identifies the respective hardware queue manager;
dequeue circuitry to retrieve the data from the data queue associated with the respective hardware queue manager in response to a dequeue command generated by a processor core of the plurality of processor cores, wherein the dequeue command identifies the respective hardware queue manager; and
wherein each queue mapping unit of the plurality of queue mapping units is configured to:
receive a virtual queue address from the corresponding processor core;
translate the virtual queue address to a physical queue address; and
provide the physical queue address to the corresponding processor core.
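The queue mapping unit's translate step amounts to a per-core lookup; the table contents and address shapes below are assumptions for illustration only:

```python
class QueueMappingUnit:
    """Minimal sketch of the claimed per-core mapping unit: the core
    hands over a virtual queue address and gets back the physical
    queue address of a hardware queue manager. Modelled as a plain
    dictionary; real hardware would use a translation structure."""
    def __init__(self, table):
        self.table = dict(table)

    def translate(self, virtual_qaddr):
        return self.table[virtual_qaddr]

# One mapping unit per core, each with its own table (hypothetical
# encoding: physical address = (queue manager id, queue index)).
qmu = QueueMappingUnit({0x10: (2, 0x80)})
print(qmu.translate(0x10))  # (2, 128)
```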

US Pat. No. 10,216,667

IMAGE FORMING APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM

Canon Kabushiki Kaisha, ...

1. An image forming apparatus, comprising:
a main system;
a sub system that communicates with the main system; and
a device that communicates with the sub system,
wherein:
the main system includes a transfer unit configured to transfer, to a memory of the sub system, a boot program of the sub system and device information that is necessary for the device to perform an activation process of the device,
the sub system includes:
a control unit configured to perform, based on the boot program that has been transferred by the transfer unit and is in the memory of the sub system, an establishment process for establishing communication between the main system and the control unit, and
a transmission unit configured to transmit, to the device, the device information that has been transferred by the transfer unit and is in the memory of the sub system, and
the device includes an execution unit configured to execute the activation process of the device using the device information transmitted by the transmission unit,
wherein the transfer unit included in the main system is configured to transfer, to the memory of the sub system, the boot program of the sub system and the device information that is necessary for the device to perform the activation process of the device, before the establishment process establishes the communication between the main system and the control unit included in the sub system.

US Pat. No. 10,216,666

CACHING METHODS AND SYSTEMS USING A NETWORK INTERFACE CARD

Cavium, LLC, Santa Clara...

1. A machine implemented method, comprising:
maintaining a cache entry data structure for storing a sync word associated with a cache entry that points to a storage location at a storage device accessible to a network interface card (NIC) via a peripheral link, the peripheral link couples the NIC, the storage device and a processor of a computing device; wherein the sync word is associated with a plurality of states that are used by the NIC and a caching module executed by the processor of the computing device for processing requests to transmit data cached at the storage device by the NIC using a network link; wherein the plurality of states are an add state, a remove state and a valid state that are updated by the NIC by setting bits associated with each of the plurality of states;
using the cache entry data structure by the NIC to determine that there is a cache hit indicating that data for a read request is cached at the storage device;
posting a first message for the storage device by the NIC via the peripheral link, at a storage device queue located at a host memory of the computing device, the message requesting the data for the read request from the storage device;
in response to the first message, placing the data for the read request for the NIC by the storage device at the host memory via the peripheral link;
posting a second message for the NIC by the storage device at the host memory via the peripheral link for notifying the NIC that the data for the read request has been placed at the host memory;
retrieving the data placed by the storage device at the host memory by the NIC via the peripheral link;
transmitting the data for the read request by the NIC via the network link; and
updating by the NIC, a state of a cache entry associated with the read request at the cache entry data structure.

US Pat. No. 10,216,665

MEMORY DEVICE, MEMORY CONTROLLER, AND CONTROL METHOD THEREOF

REALTEK SEMICONDUCTOR COR...

1. A control method, comprising:
detecting an operational command to a first memory unit;
interrupting an operational status of a second memory unit that is performing a write operation or a read operation;
asserting the operational command corresponding to the first memory unit; and
recovering the operational status of the second memory unit,
wherein the first memory unit and the second memory unit are different memory units corresponding to the same channel.

US Pat. No. 10,216,664

REMOTE RESOURCE ACCESS METHOD AND SWITCHING DEVICE

Huawei Technologies Co., ...

1. A remote resource access method, used to access a physical resource device separate from a computer system, the computer system comprising at least one computing node, the computer system and the physical resource device being coupled using a switching device, and the method comprising:
obtaining, by the switching device, a first access message from a first computing node in the at least one computing node, the first access message accessing a virtual resource device, and a destination address in the first access message being a virtual address of the virtual resource device;
converting, by the switching device, the first access message into a second access message based on a physical address of the physical resource device corresponding to the virtual address of the virtual resource device, a destination address in the second access message being the physical address of the physical resource device, and the virtual resource device being a virtualized device of the physical resource device;
sending, by the switching device, the second access message to the physical resource device using a network, the physical resource device comprising at least one physical resource;
selecting, by the switching device, a device driver according to physical resource information, the physical resource information being received from a management platform and corresponding to the physical resource device; and
running, by the switching device, the device driver to simulate insertion of the physical resource device into the switching device.

US Pat. No. 10,216,663

SYSTEM AND METHOD FOR AUTONOMOUS TIME-BASED DEBUGGING

NXP USA, INC., Austin, T...

1. A processing system, comprising:
a general purpose instruction based data processor;
an input configured to receive a command written by the data processor;
a timer manager controller configured to receive the command, and to execute the command; and
a debug interrupt timer controller (DITC) configured to determine that the command is directed to the DITC, and to store configuration information that associates the command with an element of the processing system that is a source of the command, wherein the configuration information is included in the command.

US Pat. No. 10,216,662

HARDWARE MECHANISM FOR PERFORMING ATOMIC ACTIONS ON REMOTE PROCESSORS

Intel Corporation, Santa...

1. A hardware apparatus comprising:
a first register in a processor core to store a memory address of a payload corresponding to an action to be performed associated with a remote action request (RAR) interrupt;
a second register in a processor core to store a memory address of an action list accessible by a plurality of processors;
a remote action handler circuit to:
identify a received RAR interrupt,
access the action list to identify an action to be performed and access the payload associated with the identified action,
perform the action of the received RAR interrupt, and
signal acknowledgment to an initiating processor upon completion of the action.

US Pat. No. 10,216,661

HIGH PERFORMANCE INTERCONNECT PHYSICAL LAYER

Intel Corporation, Santa...

1. An apparatus comprising:
a receiver processor comprising an agent to support a layered protocol stack comprising physical layer logic, link layer logic, and protocol layer logic, wherein the agent is to:
receive a link layer data stream within an active link state (L0), wherein the link layer data stream comprises a set of flits;
intermittently enter a coordination link state (L0c), wherein the coordination link state defines an L0c interval in which physical layer control codes are to be communicated;
receive a control code within the L0c interval; and
initiate a reset of the link based on a control code mismatch, wherein the control code mismatch is based on an identification that the control code fails to match one of a set of specified codes.

US Pat. No. 10,216,660

METHOD AND SYSTEM FOR INPUT/OUTPUT (IO) SCHEDULING IN A STORAGE SYSTEM

EMC IP Holding Company LL...

1. A computer-implemented method for input/output (IO) scheduling for a storage system, the method comprising:
receiving a plurality of input/output (IO) requests at the storage system, the IO requests including random IO requests and sequential IO requests;
determining whether there is a pending random IO request from the plurality of IO requests;
in response to determining that there is a pending random IO request, determining whether a total latency of the sequential IO requests exceeds a predicted latency of the pending random IO request; and
servicing the pending random IO request in response to determining that the total latency of the sequential IO requests exceeds the predicted latency of the pending random IO request.
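The decision rule in the last two steps can be sketched as follows; the request and latency-prediction shapes are assumptions for illustration:

```python
def next_io(pending_random, seq_queue, predict):
    """Sketch of the claimed policy: a pending random IO is serviced
    only once the summed predicted latency of the queued sequential
    IOs exceeds its own predicted latency; otherwise sequential IOs
    keep flowing."""
    if pending_random is not None:
        total_seq = sum(predict(r) for r in seq_queue)
        if total_seq > predict(pending_random):
            return pending_random            # service the random IO now
    return seq_queue.pop(0) if seq_queue else pending_random

latency = lambda req: req["lat"]             # hypothetical predictor
print(next_io({"lat": 5}, [{"lat": 3}, {"lat": 4}], latency))  # {'lat': 5}
print(next_io({"lat": 9}, [{"lat": 3}, {"lat": 4}], latency))  # {'lat': 3}
```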

US Pat. No. 10,216,659

MEMORY ACCESS SIGNAL DETECTION UTILIZING A TRACER DIMM

HEWLETT PACKARD ENTERPRIS...

1. A system, comprising:
a memory controller;
a memory bus coupled to the memory controller; and
a dual inline memory module (DIMM) coupled to the memory controller through the memory bus, the DIMM comprising:
a dynamic random access memory (DRAM) portion;
a storage portion comprising one or more storage devices; and
a gate array portion coupled to the memory bus to detect memory access signals and to store information related to the memory access signals on the storage portion,
wherein the gate array portion is configured to detect memory access signals to any of one or more DIMMs coupled to the memory bus and store information related to the memory access signals directed to any of the one or more DIMMs on the storage portion;
a second DIMM coupled to the memory controller through the memory bus, the second DIMM comprising:
a second DRAM portion;
a second storage portion; and
a second gate array portion coupled to the memory bus to detect memory access signals and to store information related to the memory access signals on the second storage portion,
wherein detection of memory access signals by the gate array portion and detection of memory access signals by the second gate array portion are synchronized;
wherein at least one of the DRAM portion or the second DRAM portion includes a selected memory address, wherein an access command to the selected memory address is used to synchronize detection of memory access signals by the gate array portion and detection of memory access signals by the second gate array portion.

US Pat. No. 10,216,658

REFRESHING OF DYNAMIC RANDOM ACCESS MEMORY

VIA ALLIANCE SEMICONDUCTO...

8. A control method for dynamic random access memory, comprising:
providing a command queue with access commands queued therein, wherein the access commands are queued in the command queue waiting to be transmitted to a dynamic random access memory;
using a counter to count how many times a rank of the dynamic random access memory is entirely refreshed;
repeatedly performing a per-rank refresh operation on the rank when the counter has not reached an upper limit and no access command corresponding to the rank is waiting in the command queue;
decreasing the counter by 1 every refresh inspection interval;
when there are access commands corresponding to the rank waiting in the command queue and the counter is 0, refreshing the rank bank-by-bank by per-bank refresh operations;
corresponding to a per-bank refresh operation to be performed on a single bank within the rank, priority of access commands queued in the command queue corresponding to remaining banks of the rank except for the single bank is raised; and
when finishing the per-bank refresh operation on the single bank, the priority of the access commands queued in the command queue corresponding to the remaining banks of the rank except for the single bank is restored.
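The counter-driven policy in claim 8 can be modelled as a small state machine; the method names and return strings below are illustrative only, not the patent's interface:

```python
class RankRefresher:
    """Toy model of the claimed policy: a counter tracks banked
    whole-rank refreshes. An idle rank is refreshed per-rank up to an
    upper limit, the counter decays once per inspection interval, and
    a busy rank whose counter has hit 0 falls back to per-bank refresh."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def on_idle(self):                   # no queued commands for this rank
        if self.count < self.limit:
            self.count += 1              # one per-rank refresh performed
            return "per-rank refresh"
        return "idle"

    def on_inspection_interval(self):    # counter decays over time
        self.count = max(0, self.count - 1)

    def on_busy(self):                   # queued commands for this rank
        return "per-bank refresh" if self.count == 0 else "serve commands"
```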

US Pat. No. 10,216,657

EXTENDED PLATFORM WITH ADDITIONAL MEMORY MODULE SLOTS PER CPU SOCKET AND CONFIGURED FOR INCREASED PERFORMANCE

INTEL CORPORATION, Santa...

1. An apparatus comprising:
a printed circuit board (PCB) defining a length and a width, the length being greater than the width;
a first row of elements on the printed circuit board, including a first memory region configured to receive at least one memory module;
a second row of elements on the PCB including a first central processing unit (CPU) socket configured to receive a first CPU, and a second CPU socket configured to receive a second CPU, the first CPU socket and the second CPU socket positioned side by side along the width of the PCB; and
a third row of elements on the PCB, including a second memory region configured to receive at least one memory module;
wherein the second row of elements is positioned between the first row of elements and the third row of elements.

US Pat. No. 10,216,656

CUT-THROUGH BUFFER WITH VARIABLE FREQUENCIES

INTERNATIONAL BUSINESS MA...

1. A system comprising:
a header cut-through buffer operable to be asynchronously read while being written at different clock frequencies, wherein the header cut-through buffer is operable to buffer values from a header portion of a packet;
a data cut-through buffer operable to buffer values from a payload portion of the packet in parallel with the header cut-through buffer; and
a controller operatively connected to the header cut-through buffer and the data cut-through buffer, the controller operable to perform:
writing one or more values into the header cut-through buffer in a first clock domain, wherein the data cut-through buffer is written in the first clock domain;
comparing a number of values written into the header cut-through buffer to a notification threshold;
passing a notification indicator from the first clock domain to a second clock domain based on determining that the number of values written into the header cut-through buffer meets the notification threshold; and
based on receiving the notification indicator, reading the header cut-through buffer from the second clock domain continuously without pausing until the one or more values are retrieved and any additional values written to the header cut-through buffer during the reading of the one or more values are retrieved, wherein the data cut-through buffer is read in the second clock domain, and the notification threshold delays reading of the header cut-through buffer without delaying reading of the data cut-through buffer.
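The notification threshold's effect on the header path can be illustrated with a single-threaded toy; the two clock domains are abstracted into write and drain calls, so nothing here reflects the actual hardware interface:

```python
class CutThroughBuffer:
    """Toy model of the claimed threshold: the read side is only
    notified once `threshold` header values have been written, then
    drains the buffer completely, including any values written after
    the notification was raised."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.buf = []
        self.written = 0
        self.notified = False

    def write(self, value):              # write clock domain
        self.buf.append(value)
        self.written += 1
        if self.written >= self.threshold:
            self.notified = True         # indicator passed to read domain

    def drain(self):                     # read clock domain
        if not self.notified:
            return []
        out, self.buf = self.buf, []
        return out

buf = CutThroughBuffer(threshold=2)
buf.write("hdr0")
print(buf.drain())        # [] -- below threshold, reader not yet notified
buf.write("hdr1")
buf.write("hdr2")         # later values are still picked up by the read
print(buf.drain())        # ['hdr0', 'hdr1', 'hdr2']
```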

US Pat. No. 10,216,655

MEMORY EXPANSION APPARATUS INCLUDES CPU-SIDE PROTOCOL PROCESSOR CONNECTED THROUGH PARALLEL INTERFACE TO MEMORY-SIDE PROTOCOL PROCESSOR CONNECTED THROUGH SERIAL LINK

ELECTRONICS AND TELECOMMU...

1. A memory interface apparatus, comprising:
a central processing unit (CPU)-side protocol processor connected to a CPU through a parallel interface; and
a memory-side protocol processor connected to a memory through a parallel interface,
wherein the CPU-side protocol processor and the memory-side protocol processor are connected through a serial link, and
wherein the CPU-side protocol processor includes:
a front-end bus controller configured to generate a header packet for header processing and a write data payload packet for a data payload;
a header buffer configured to store the header packet; and
a write data buffer configured to store the write data payload packet.

US Pat. No. 10,216,654

DATA SERVICE-AWARE INPUT/OUTPUT SCHEDULING

EMC IP Holding Company LL...

1. A method of request scheduling in a computing environment, comprising: obtaining a segment size for which one or more data services in the computing environment are configured to process data;
obtaining, from a host device in the computing environment, one or more requests to at least one of read data from and write data to one or more storage devices in the computing environment, wherein the one or more requests originate from one or more application threads of the host device;
aligning the one or more requests into one or more segments having the obtained segment size to generate one or more aligned segments, wherein the one or more requests are respectively provided to one or more local request queues corresponding to the one or more application threads, the one or more local request queues performing request merging, based on the obtained segment size, to generate the one or more aligned segments; and
dispatching the one or more aligned segments to the one or more data services prior to sending the one or more requests to the one or more storage devices;
wherein the computing environment is implemented via one or more processing devices operatively coupled via a communication network.
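The alignment step above can be illustrated with a short Python sketch (a simplification, not the patented implementation): byte-range requests from an application thread's local queue are merged into segments aligned to the configured segment size.

```python
def merge_into_segments(requests, segment_size):
    """Coalesce (offset, length) requests into aligned segment starts.

    Illustrative sketch: each request is mapped to the segment-size-aligned
    segments it touches, and overlapping requests merge into one segment.
    """
    segments = set()
    for offset, length in requests:
        first = offset // segment_size                 # first touched segment
        last = (offset + length - 1) // segment_size   # last touched segment
        for seg in range(first, last + 1):
            segments.add(seg * segment_size)           # aligned segment start
    return sorted(segments)
```

The aligned segments, rather than the raw requests, would then be dispatched to the data services.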

US Pat. No. 10,216,653

PRE-TRANSMISSION DATA REORDERING FOR A SERIAL INTERFACE

International Business Mac...

1. A serial communication system, comprising: a transmitting circuit for serially transmitting data via a serial communication link including N channels where N is an integer greater than 1, the transmitting circuit including:
an input buffer having storage for input data frames each including M bytes forming N segments of M/N contiguous bytes;
a reordering circuit coupled to the input buffer, wherein the reordering circuit includes a reorder buffer, and wherein the reordering circuit buffers, in each of multiple entries of the reorder buffer, a byte in a common byte position in each of the N segments of an input data frame, and wherein the reordering circuit sequentially outputs the contents of the entries of the reorder buffer via the N channels of the serial communication link.
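The reordering described here is effectively a transpose: each reorder-buffer entry gathers the byte at a common position from each of the N segments. A minimal Python sketch (function name is illustrative):

```python
def reorder_frame(frame, n_channels):
    """Reorder an M-byte frame for transmission over N channels.

    The frame forms N segments of M/N contiguous bytes; entry k of the
    reorder buffer holds byte k of every segment, and entries are output
    sequentially, one byte per channel.
    """
    m = len(frame)
    assert m % n_channels == 0, "frame must split evenly into N segments"
    seg_len = m // n_channels
    segments = [frame[i * seg_len:(i + 1) * seg_len] for i in range(n_channels)]
    # entry k = the byte in position k of each of the N segments
    return [bytes(seg[k] for seg in segments) for k in range(seg_len)]
```

For example, an 8-byte frame over 2 channels yields entries pairing byte 0 of each 4-byte segment, then byte 1, and so on.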

US Pat. No. 10,216,652

SPLIT TARGET DATA TRANSFER

EMC IP Holding Company LL...

1. A method of transferring data to an initiator, comprising: providing a first target that exchanges commands and status with the initiator, the first target including a data storage array having a cache memory, at least one storage device, and a host adaptor that is coupled to the initiator and communicates with the cache memory;
providing a second target, coupled to the host adaptor, that exchanges commands and data with the first target and exchanges data with the initiator, the second target including a fast memory unit having memory that is accessible faster than the at least one storage device of the data storage array, wherein at least some data is stored at only one of: the first target or the second target;
the initiator providing a first transfer command to the first target, the first transfer command including a request for requested data;
transferring the requested data from cache memory of the first target through the host adaptor to the initiator in response to the requested data being stored in the cache memory;
determining whether the requested data is stored in the fast memory unit of the second target in response to the requested data not being stored in the cache memory of the data storage array of the first target;
the first target providing a second transfer command to the second target in response to the requested data not being stored in the cache memory of the data storage array of the first target and being stored in the fast memory unit of the second target; and
in response to the second transfer command received from the first target, the second target transferring the requested data to the initiator through the host adaptor.

US Pat. No. 10,216,651

PRIMARY DATA STORAGE SYSTEM WITH DATA TIERING

NexGen Storage, Inc., Lo...

1. A primary data storage system for use in a computer network and having tiering functionality, the system comprising: an input/output port for receiving a block command packet that embodies one of a read block command and a write block command and transmitting a block result packet in reply to a block command packet;
a data store system having at least a first tier and a second tier;
wherein the first tier has a first set of characteristics;
wherein the second tier has a second set of characteristics;
a statistics database configured to receive, store, and provide data for use in making a decision related to tiering of a data block;
a tiering processor for performing tiering functionality to cause a data block associated with a block command packet to be stored in whichever of the first tier and second tier has characteristics that are most compatible with the access pattern of the data block if, based on data obtained from the statistics database, there are sufficient resources for performing the tiering functionality and a calculated weight associated with a future performance of the tiering functionality at a first point in time is dominant relative to a calculated weight associated with a future performance of each of one or more other operations associated with one or more other block command packets that are simultaneously being considered for performance at the first point in time, and if there are insufficient resources for performing the tiering functionality or the calculated weight associated with the future performance of the tiering functionality is not dominant relative to a calculated weight associated with the future performance of each of the one or more other operations associated with one or more other block command packets simultaneously being considered for performance at the first point in time, forgoing any tiering functionality with respect to the data block until at a second point in time that is later than the first point in time, data obtained from the statistics database indicates that there are sufficient resources for performing the tiering functionality and a calculated weight associated with a future performance of the tiering functionality at the second point in time is dominant relative to a calculated weight associated with a future performance of each of whatever one or more other operations associated with one or more other block command packets are simultaneously being considered for future performance at the second point in time;
wherein the tiering processor is adapted for:
copying a first plurality of data blocks from the first tier to the second tier so that the second tier has a second plurality of data blocks that is identical to the first plurality of data blocks; and
after a copying, identifying a retained portion of the space occupied by the second plurality of data blocks on the second tier as being more compatible with the second tier than with the first tier, identifying an available portion of the space occupied by the first plurality of data blocks on the first tier that corresponds to the retained portion of space on the second tier as available, and thereby retaining on the first tier a third data block or third plurality of data blocks that is a subset of the second plurality of data blocks on the second tier.

US Pat. No. 10,216,650

METHOD AND APPARATUS FOR BUS LOCK ASSISTANCE

Intel Corporation, Santa...

1. A method comprising: detecting that a first instruction and a second instruction are locked instructions;
determining that execution of the first instruction and the second instruction each include imposing an initial bus lock; and
executing a bus lock assistance function in response to the determining, wherein the bus lock assistance function comprises:
preventing the initial bus lock from being imposed for the first instruction by raising a flag to cause execution of software including the first instruction to stop, and
permitting the initial bus lock to be imposed for the second instruction.

US Pat. No. 10,216,649

KERNEL TRANSITIONING IN A PROTECTED KERNEL ENVIRONMENT

1. A method for providing multiple kernels in a protected kernel environment, the method comprising: providing, by a hypervisor, a virtual machine that includes a first kernel and a second kernel;
allocating a first portion of memory for the first kernel and a second portion of memory for the second kernel;
executing the first kernel that is stored in the first portion of memory;
disabling, by the hypervisor, access privileges corresponding to the second portion of memory; and
transitioning from executing the first kernel to executing the second kernel, wherein the transitioning from the first kernel to the second kernel occurs while the virtual machine is running and without shutting down or rebooting the virtual machine, the transitioning comprising:
clearing, by the hypervisor, at least some of the first portion of memory;
enabling, by the hypervisor, access privileges corresponding to the second portion of the memory; and
after the enabling, executing the second kernel on the virtual machine.

US Pat. No. 10,216,648

MAINTAINING A SECURE PROCESSING ENVIRONMENT ACROSS POWER CYCLES

Intel Corporation, Santa...

1. A processor comprising: an instruction unit to receive a first instruction, wherein the first instruction is to evict a root version array page entry from a secure cache; and
an execution unit to execute the first instruction, wherein execution of the first instruction includes generating a blob to contain information to maintain a secure processing environment across a power cycle and storing the blob in a non-volatile memory, wherein generating the blob is to include encrypting a combination of inputs reflecting context of the secure cache, the combination of inputs to include a version number of the root version array page.

US Pat. No. 10,216,647

COMPACTING DISPERSED STORAGE SPACE

International Business Ma...

1. A method for execution by a storage unit, the method comprises: receiving an encoded data slice for storage in physical memory of the storage unit, wherein the physical memory includes a plurality of storage locations, wherein the physical memory is virtually divided into a plurality of log files, and wherein each log file of the plurality of log files is associated with a unique set of storage locations of the plurality of storage locations, wherein a data object is partitioned into a plurality of data segments, wherein a data segment of the plurality of data segments is error encoded and sliced in accordance with distributed data storage parameters to produce a plurality of encoded data slices for distributed storage in a plurality of storage units that includes the storage unit, and wherein the plurality of encoded data slices includes the encoded data slice that is received for storage in physical memory of the storage unit;
determining a storage location of the plurality of storage locations for storing the encoded data slice by:
identifying a log file of the plurality of log files based on information regarding the encoded data slice corresponding to information regarding the log file to produce an identified log file, wherein the information regarding the encoded data slice includes at least one of: a data identifier (ID) of a file associated with the encoded data slice, a user ID associated with the encoded data slice, and an indication of the log file contained in a message accompanying the encoded data slice, and wherein the identified log file is storing at least one other encoded data slice;
comparing storage parameters of the identified log file with desired storage parameters associated with the encoded data slice; and
when the storage parameters of the identified log file include at least one of:
the log file is identified as a most recently compacted log file;
the log file is identified in a slice location table lookup; and
the log file is identified based on a slice name associated with the encoded data slice,
identifying a storage location within the unique set of storage locations associated with the identified log file.

US Pat. No. 10,216,646

EVICTING APPROPRIATE CACHE LINE USING A REPLACEMENT POLICY UTILIZING BELADY'S OPTIMAL ALGORITHM

Board of Regents, The Uni...

1. A method for cache replacement, the method comprising: tracking, by a processor, an occupied cache capacity of a simulated cache at every time interval using an occupancy vector, wherein said occupancy vector contains a number of cache lines contending for said simulated cache, wherein said cache capacity corresponds to a number of cache lines of said simulated cache;
retroactively assigning said cache capacity to cache lines of said simulated cache in order of their reuse, wherein a cache line is considered to be a cache hit utilizing Belady's optimal algorithm if said cache capacity is available at all times between two subsequent accesses, wherein a cache line is considered to be a cache miss utilizing said Belady's optimal algorithm if said cache capacity is not available at all times between said two subsequent accesses;
updating said occupancy vector using a last touch timestamp of a current memory address;
determining if said current memory address results in a cache hit or a cache miss utilizing said Belady's optimal algorithm based on said updated occupancy vector; and
storing a replacement state used for evicting a cache line of a cache using results of said determination.
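The occupancy-vector technique in this claim can be sketched in a few lines of Python (a simplification of the mechanism, with illustrative names): cache capacity is assigned retroactively over the interval between a line's two subsequent accesses, and the access counts as a hit under Belady's optimal algorithm only if capacity was available at every step of that interval.

```python
def opt_hits(trace, capacity):
    """Count hits under Belady's MIN using a retroactive occupancy vector."""
    occupancy = []    # one occupancy counter per time interval
    last_touch = {}   # address -> timestamp of its previous access
    hits = 0
    for t, addr in enumerate(trace):
        occupancy.append(0)
        if addr in last_touch:
            start = last_touch[addr]
            # hit iff every interval since the last touch has spare capacity
            if all(occupancy[i] < capacity for i in range(start, t)):
                for i in range(start, t):
                    occupancy[i] += 1   # retroactively occupy that capacity
                hits += 1
        last_touch[addr] = t
    return hits
```

With the trace a b a b, a 2-line cache makes both reuses hits, while a 1-line cache can satisfy only one of them.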

US Pat. No. 10,216,645

MEMORY DATA TRANSFER METHOD AND SYSTEM

Synopsys, Inc., Mountain...

1. A method using one hardware implemented DMA (Direct Memory Access) processor having DMA capability integrated therein, the method comprising: the one hardware implemented DMA processor moving a first data from a plurality of first locations to an internal memory within the one hardware DMA processor in response to an initial command;
retrieving a subset of the first data from the internal memory within the one hardware implemented DMA processor, the internal memory within the one hardware implemented DMA processor for temporary storage of the retrieved data;
storing the retrieved subset from the internal memory within the one hardware implemented DMA processor to a corresponding one of the plurality of second locations; and
storing the retrieved data from the internal memory to a location of the at least a third location simultaneously and by the same DMA process performed by the one hardware implemented DMA processor,
wherein the plurality of second locations forms a memory buffer having the first data duplicated therein and the at least a third location forms one of a memory buffer having the first data duplicated therein and a memory supporting inline processing of data provided therein.

US Pat. No. 10,216,644

MEMORY SYSTEM AND METHOD

Toshiba Memory Corporatio...

1. A memory system comprising: a first memory that is nonvolatile;
a second memory that includes a buffer; and
a memory controller configured to:
manage a logical address space by dividing the logical address space into a plurality of regions, each region including a fixed number of continuous logical addresses, the fixed number being an integer larger than one; and
manage a plurality of pieces of translation information, each piece of translation information correlating a physical address indicating a location in the first memory with a logical address,
wherein
the plurality of pieces of translation information includes a first plurality of pieces of translation information correlating the fixed number of physical addresses with the fixed number of continuous logical addresses included in one region of the plurality of regions,
in a case where the first plurality of pieces of translation information correspond to a second plurality of pieces of translation information, the second plurality of pieces of translation information linearly correlating a plurality of continuous physical addresses with a plurality of continuous logical addresses,
the memory controller caches first translation information correlating a first physical address with a first logical address among the first plurality of pieces of translation information in the buffer and does not cache second translation information correlating a second physical address with a second logical address among the first plurality of pieces of translation information in the buffer, and
in a case where the first plurality of pieces of translation information do not correspond to the second plurality of pieces of translation information,
the memory controller caches the first plurality of pieces of translation information in the buffer.
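The caching rule in this claim can be illustrated with a hedged Python sketch (the function and its dict-based representation are illustrative assumptions): if a region's translations are linear, i.e. continuous logical addresses map to continuous physical addresses, caching a single entry is enough to reconstruct the rest; otherwise every entry must be cached.

```python
def entries_to_cache(translations):
    """Decide which translation entries of one region to cache.

    `translations` maps the region's continuous logical addresses to
    physical addresses. Linear mapping -> cache only the first entry;
    non-linear -> cache the whole set.
    """
    physicals = [translations[la] for la in sorted(translations)]
    linear = all(physicals[i + 1] == physicals[i] + 1
                 for i in range(len(physicals) - 1))
    if linear:
        first_logical = min(translations)
        # remaining translations are derivable: phys = first_phys + offset
        return {first_logical: translations[first_logical]}
    return dict(translations)
```

This captures why the claim's controller skips caching the "second translation information" when the region is linear: it is implied by the first entry.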

US Pat. No. 10,216,643

OPTIMIZING PAGE TABLE MANIPULATIONS

INTERNATIONAL BUSINESS MA...

1. A computer program product for optimizing page table manipulations, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions being readable and executable by a processing circuit to cause the processing circuit to: create and maintain a translation table for translating direct memory access (DMA) addresses to real addresses with a translation look-aside buffer (TLB) disposed to cache priority translations;
update the translation table upon de-registration of a DMA address without issuance of a corresponding TLB invalidation instruction;
allocate entries in the translation table from low to high memory addresses during memory registration;
maintain a cursor for identifying where to search for available entries upon performance of a new registration;
advance the cursor from entry-to-entry in the translation table and wrap the cursor from an end of the translation table to a beginning of the translation table; and
issue a synchronous TLB invalidation instruction to invalidate an entirety of the TLB upon at least one wrapping of the cursor and an entry being identified and updated.

US Pat. No. 10,216,642

HARDWARE-BASED PRE-PAGE WALK VIRTUAL ADDRESS TRANSFORMATION WHERE THE VIRTUAL ADDRESS IS SHIFTED BY CURRENT PAGE SIZE AND A MINIMUM PAGE SIZE

International Business Ma...

1. An apparatus comprising:a processor; and
a virtual address transformation unit coupled with the processor, the virtual address transformation unit including a register, the virtual address transformation unit configured to:
receive an indication of a virtual address;
determine, from the register, a current page size of a plurality of available page sizes;
determine a bit shift amount based, at least in part, on the current page size and a base shift amount, the base shift amount corresponding to a minimum page size; and
perform a bit shift of the virtual address to create a transformed virtual address, wherein the virtual address is bit shifted by, at least, the determined bit shift amount.
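One plausible reading of the claimed transformation, sketched in Python (this interpretation and all names are assumptions, not confirmed by the claim text): the shift amount is the difference between the current page size's offset width and the base shift corresponding to the minimum page size.

```python
def transform_va(virtual_address, current_page_size, min_page_size):
    """Hedged sketch of the pre-page-walk address transformation.

    base_shift corresponds to the minimum page size; the extra offset bits
    of the current (larger) page size are shifted off the virtual address.
    """
    base_shift = min_page_size.bit_length() - 1       # log2(min page size)
    cur_shift = current_page_size.bit_length() - 1    # log2(current page size)
    shift = cur_shift - base_shift                    # "at least" this amount
    return virtual_address >> shift
```

When the current page size equals the minimum page size, the shift is zero and the address passes through unchanged.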

US Pat. No. 10,216,641

MANAGING AND SHARING ALIAS DEVICES ACROSS LOGICAL CONTROL UNITS

INTERNATIONAL BUSINESS SY...

1. A computer implemented method of managing alias devices across logical control units, the method comprising: establishing, by a thread in a host system, one or more alias management groups associated with a set of one or more logical control units, wherein each logical control unit is associated with one or more devices;
wherein each logical control unit in the set of one or more logical control units associated with an alias management group shares a set of network paths;
wherein the one or more devices are being accessed for read/write requests by one or more operating systems operating on a plurality of central processing units (CPUs) in the host system; and
responsive to one or more changes to the set of network paths of a first logical control unit in the set of logical control units, performing a method comprising:
marking a first alias management group associated with the first logical control unit as invalid for alias borrowing;
performing, by the thread, a first synchronized CPU enablement operation, wherein the first synchronized CPU enablement operation ensures that each of the plurality of CPUs is enabled;
determining whether a second alias management group exists, the second alias management group including a second set of control units having a set of network paths that matches the set of network paths of the first control unit; and
responsive to determining that the second alias management group exists, associating the first control unit with the second alias management group.

US Pat. No. 10,216,640

OPPORTUNISTIC CACHE INJECTION OF DATA INTO LOWER LATENCY LEVELS OF THE CACHE HIERARCHY

SAMSUNG ELECTRONICS CO., ...

1. A method comprising: receiving a request, from a non-central processor device that is configured to perform a direct memory access, to write data within a memory system at a memory address;
determining if a cache tag hit is generated, based upon the memory address, by a caching tier of the memory system that is closer, latency-wise, to a central processor than a coherent memory interconnect;
if the caching tier generated the cache tag hit, injecting the data into the caching tier regardless of whether or not the caching tier has been specifically configured for cache injection; and
wherein injecting the data into the caching tier comprises causing the caching tier to pre-fetch the data from a buffer included by the coherent memory interconnect.

US Pat. No. 10,216,639

IDENTIFICATION OF BLOCKS TO BE RETAINED IN A CACHE BASED ON TEMPERATURE

Hewlett Packard Enterpris...

1. A method for managing a flash memory cache that stores data in multiple segments, each of the segments including multiple blocks, the method comprising: determining respective temperatures of at least some of the blocks of the segments;
selecting one of the segments to be erased based at least in part on the respective temperatures of the blocks included in the selected segment;
identifying, among the blocks included in the selected segment, a block to be retained in the flash memory cache based on its temperature;
writing a new segment in the flash memory cache that includes the identified block; and
erasing the selected segment from the flash memory cache.
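The selection-and-retention steps can be sketched in Python (a simplified model; the temperature metric, threshold, and victim policy shown here are illustrative assumptions): pick the coldest segment to erase, but first carry its hot blocks into a newly written segment.

```python
def pick_segment_and_retain(segments, hot_threshold):
    """Choose a flash segment to erase and the blocks to retain.

    `segments` is a list of per-segment block-temperature lists. The
    segment with the lowest total temperature is erased; its blocks at or
    above hot_threshold are retained by writing them into a new segment.
    """
    victim = min(range(len(segments)),
                 key=lambda i: sum(segments[i]))        # coldest segment
    retained = [t for t in segments[victim] if t >= hot_threshold]
    return victim, retained   # retained blocks go into the new segment
```

Retaining hot blocks before erasure is what keeps frequently reused data in the cache across garbage collection.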

US Pat. No. 10,216,638

METHODS AND SYSTEMS FOR REDUCING CHURN IN A CACHING DEVICE

Hewlett Packard Enterpris...

1. A method for a storage device having a caching device and a backend storage device, the method comprising: receiving data at the storage device, the data including sequentially-accessed data; and
performing, by a controller of the storage device, a selective caching of the sequentially-accessed data, wherein the selective caching comprises:
if the sequentially-accessed data can be read from the backend storage device at a substantially similar data rate as from the caching device, writing the sequentially-accessed data only to the backend storage device so as to reduce the amount of data written to the caching device; and
if the sequentially-accessed data cannot be read from the backend storage device at a substantially similar data rate as from the caching device, writing the sequentially-accessed data to both the caching device and the backend storage device.

US Pat. No. 10,216,637

NON-VOLATILE MEMORY CACHE PERFORMANCE IMPROVEMENT

Microsoft Technology Lice...

1. A method comprising: receiving an interrupt from a persistent storage device indicating that the persistent storage device is preparing for access; and
responsive to receiving the interrupt from the persistent storage device:
determining, without a request, that space is needed in a non-volatile memory; and
moving first data from an area in the non-volatile memory to the persistent storage device.

US Pat. No. 10,216,636

CONTROLLED CACHE INJECTION OF INCOMING DATA

Google LLC, Mountain Vie...

1. A computer-implemented method comprising: providing, by a user process to an input-output device, a request for data that is not yet available, wherein the user process is a software application that uses data received from the input-output device;
receiving, by the user process and from the input-output device, a set of memory addresses for the requested data that is not yet available;
determining, by the user process, a subset of the received memory addresses for the requested data that is not yet available, the subset corresponding to data to be cached by a processor;
determining, by the user process, a cache level in a cache of a processor in which to pre-fetch the subset of memory addresses;
providing, by the user process to the processor, a request for the processor to allocate space in the cache of the processor to cache data corresponding to the subset of the memory addresses at the determined cache level for the requested data that is not yet available;
receiving, by a memory controller, the requested data and the set of memory addresses corresponding to the requested data; and
storing, by the memory controller, data, of the received data for the set of memory addresses corresponding to the subset of memory addresses in the allocated space of the cache of the processor and storing remaining data of the received data for the set of memory addresses in a main memory.

US Pat. No. 10,216,635

INSTRUCTION TO CANCEL OUTSTANDING CACHE PREFETCHES

INTERNATIONAL BUSINESS MA...

1. A computer implemented method for handling outstanding cache miss prefetches, the method comprising: recognizing by a processor pipeline that a prefetch canceling instruction is being executed, wherein the prefetch canceling instruction is executed when the prefetch canceling instruction is a next-to-complete instruction;
in response to recognizing that the prefetch canceling instruction is being executed, evaluating all outstanding prefetches according to a criterion as set forth by the prefetch canceling instruction in order to select qualified prefetches;
in response to evaluating, communicating with a cache subsystem to cause canceling of the qualified prefetches that fit the criterion, wherein one criterion is to selectively cancel outstanding cache miss requests which are only launched by flushed instructions; and
in response to successful canceling of the qualified prefetches, preventing a local cache from being updated from the qualified prefetches.

US Pat. No. 10,216,634

CACHE DIRECTORY PROCESSING METHOD FOR MULTI-CORE PROCESSOR SYSTEM, AND DIRECTORY CONTROLLER

HUAWEI TECHNOLOGIES CO., ...

1. A cache directory processing method, wherein the method comprises: obtaining a first directory corresponding to a first-type storage space in shared storage space for a plurality of cores in a multi-core processor system, the first-type storage space comprising a plurality of second-type storage spaces, the first directory comprising a plurality of second-type directory entries each corresponding to one of the plurality of second-type storage spaces, and each of the plurality of second-type directory entries describing an access type and a sharer of the one of the plurality of second-type storage spaces;
performing a directory entry combination operation on the plurality of second-type directory entries in the first directory according to the access type and the sharer of each of the plurality of second-type directory entries to form a first-type directory entry in a second directory corresponding to the first-type storage space; and
replacing the first directory with the second directory when a quantity of directory entries in the second directory is less than a quantity of directory entries in the first directory.
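A minimal Python sketch of the combination rule (a hedged interpretation: the representation as (access_type, sharer) tuples and the run-merging policy are assumptions, not claim language): adjacent second-type entries with the same access type and sharer fold into one combined entry, and the combined directory replaces the original only when it is strictly smaller.

```python
def combine_directory(first_directory):
    """Combine second-type directory entries sharing access type and sharer.

    Entries are (access_type, sharer) tuples; equal consecutive entries
    merge into a single first-type entry in the second directory.
    """
    second = []
    for entry in first_directory:
        if second and second[-1] == entry:   # same access type and sharer
            continue                         # fold into the previous entry
        second.append(entry)
    # replace only if the combined directory actually has fewer entries
    return second if len(second) < len(first_directory) else first_directory
```

When no entries combine, the first directory is kept, matching the claim's "less than" condition.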

US Pat. No. 10,216,633

HARDWARE BASED COHERENCY BETWEEN A DATA PROCESSING DEVICE AND INTERCONNECT

Arm Limited, Cambridge (...

1. A data processing device comprising: an output port to transmit a request value to an interconnect arranged to implement a coherency protocol, to indicate a request to be subjected to the coherency protocol;
an input port to receive an acknowledgement value from the interconnect in response to the request value;
coherency administration circuitry to define behaviour rules for the data processing device in accordance with the coherency protocol and in dependence on the request value and the acknowledgement value; and
storage circuitry to administer data in accordance with the behaviour rules.

US Pat. No. 10,216,632

MEMORY SYSTEM CACHE EVICTION POLICIES

1. A method comprising: determining, from a plurality of eviction policies, a particular eviction policy associated with information to be retained in a cache;
selecting, from a plurality of lines of the cache, a particular one of the lines to evict from the cache based at least in part on the particular eviction policy, the cache being an N-way cache, and the particular line being from any of the N-ways of the N-way cache, the selecting comprising ascertaining an eviction policy for a trial line of the cache, and when the trial line eviction policy is random, randomly selecting one of the lines of the cache as the particular line and otherwise selecting the trial line as the particular line;
copying an indicator of the particular eviction policy into the particular line;
wherein the determining is based at least in part on an address associated with the information; and
wherein the particular line is simultaneously subject to eviction in accordance with the particular eviction policy as well as at least one other eviction policy of the plurality of eviction policies.
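The trial-line selection rule in this claim is compact enough to sketch directly in Python (function and parameter names are illustrative): look up the trial line's eviction policy; if it is random, pick any line at random, otherwise evict the trial line itself.

```python
import random

def select_victim(lines, policies, trial_index, rng=random):
    """Select the line to evict from an N-way cache set.

    `policies[i]` is the eviction-policy indicator copied into line i.
    If the trial line's policy is 'random', any line may be chosen;
    otherwise the trial line is the victim.
    """
    if policies[trial_index] == 'random':
        return rng.randrange(len(lines))   # randomly select one of the lines
    return trial_index                     # otherwise evict the trial line
```

The indicator of the policy that drove the decision would then be copied into the selected line, as the claim's copying step describes.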

US Pat. No. 10,216,631

REVISING CACHE EXPIRATION

United Services Automobil...

1. A computer-implemented method for revising cache expiration date, the method comprising: tracking attributes of a number of queries of a database, wherein the attributes include a size of data associated with each of the number of queries;
identifying a number of write operations for updating data stored in a primary storage database and for replicating data from the primary storage database to a secondary storage database based on the tracked attributes;
determining that the number of write operations exceeds a database threshold;
identifying, among the number of queries, a query that is associated with the largest size of data based on the tracked attributes; and
revising a cache expiration date for the identified query to bring the number of write operations to within the database threshold;
wherein the database threshold includes a number of write operations that the primary storage database can perform in a period of time.

US Pat. No. 10,216,630

SMART NAMESPACE SSD CACHE WARMUP FOR STORAGE SYSTEMS

EMC IP Holding Company LL...

1. A computer-implemented method for solid state drive (SSD) cache warm up for a storage system, the method comprising: in response to receiving an indication to warm up a SSD cache, identifying namespace data of a filesystem to be warmed up separated from content data;
identifying one or more namespace pages of the namespace data;
for each of the one or more namespace pages,
locking the namespace page in a read-only mode;
determining if the namespace page is dirty;
if the namespace page is dirty, releasing the namespace page from the read-only mode without copying the namespace page to the SSD cache; and
if the page is clean, copying the namespace page to the SSD cache and releasing the namespace page from the read-only mode to reduce cache misses of the namespace on the SSD cache.
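The per-page warm-up loop can be sketched in Python (a simplified model; the dict-based page representation with `id`, `dirty`, and `locked` fields is an illustrative assumption): each namespace page is locked read-only, dirty pages are skipped, and clean pages are copied into the SSD cache before the lock is released.

```python
def warm_up(namespace_pages, ssd_cache):
    """Copy clean namespace pages into the SSD cache.

    Dirty pages are released without copying; clean pages are copied while
    held in read-only mode, then released.
    """
    for page in namespace_pages:
        page['locked'] = True              # lock page in read-only mode
        if not page['dirty']:              # only clean pages are copied
            ssd_cache[page['id']] = page
        page['locked'] = False             # release the read-only lock
    return ssd_cache
```

Skipping dirty pages avoids caching data that is about to be rewritten, while pre-loading clean namespace pages reduces later cache misses.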

US Pat. No. 10,216,629

LOG-STRUCTURED STORAGE FOR DATA ACCESS

Microsoft Technology Lice...

1. A system comprising: a memory device;
a secondary storage device; and
a processor configured via executable instructions to:
store a page of data in the memory device, the page comprising a first portion and a second portion;
modify a log-structured store on the secondary storage device to reflect an update to a first portion of the page of data;
reclaim memory from the memory device by removing the first portion of the page from the memory device while retaining the second portion of the page in the memory device; and
retrieve the first portion of the page using the log-structured store on the secondary storage device and store the first portion of the page in the memory device, the first portion of the page being retrieved into the memory device while the second portion of the page remains in the memory device.

US Pat. No. 10,216,628

EFFICIENT AND SECURE DIRECT STORAGE DEVICE SHARING IN VIRTUALIZED ENVIRONMENTS

International Business Ma...

1. A computer system for direct storage device sharing in a virtualized environment, comprising:a memory unit for storing data; and
one or more processor units connected to the memory unit for transmitting data to and receiving data from the memory unit, the one or more processor units configured to operate as:
a storage controller for assigning each of a plurality of virtual functions an associated memory area of a physical memory, and executing the virtual functions in a single root-input/output virtualization environment to provide each of a plurality of guests with direct access to the physical memory, including:
at a first boot of one of the guests, the storage controller receiving a request from the one of the guests, and in response to the request, triggering an interrupt of a physical function to a hypervisor; and the storage controller receiving from the hypervisor a configuration command over the physical function, the configuration command setting up hardware in the storage controller to allocate storage in the storage device for said one of the guests and providing a mapping function for the one of the guests; and
for a subsequent boot of the one of the guests, after said first boot, the storage controller receiving a subsequent request from the one of the guests, and the storage controller setting up mapping for the one of the guests, without intervention of the hypervisor during said subsequent boot, using the mapping function provided to the storage controller at the first boot of the one of the guests prior to said subsequent boot, to provide said one of the guests with direct access to the physical memory.

US Pat. No. 10,216,627

TREE STRUCTURE SERIALIZATION AND DESERIALIZATION SYSTEMS AND METHODS

Levyx, Inc., Irvine, CA ...

1. A computer-implemented method for saving a traversable data structure to a computer-readable medium, wherein the traversable data structure comprises a set of nodes traversable using a set of memory address pointers, comprising:allocating a first contiguous memory space for the traversable data structure in the computer-readable medium at a first memory address;
assigning a memory address location in the first contiguous memory space to each of the set of nodes;
generating a set of memory offset pointers as a function of the set of memory address pointers and the assigned memory address locations;
converting the traversable data structure into a traversable array structure comprising the set of nodes and the set of memory offset pointers;
saving the traversable array structure to the first contiguous memory space, wherein the set of memory offset pointers are saved to memory blocks of the first contiguous memory space; and
traversing the traversable array structure by reading a memory offset pointer from a memory block of the first contiguous memory space and adding the memory offset pointer to a memory offset origin to obtain a memory address pointer of a destination node.
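The pointer-to-offset conversion in this claim can be sketched in Python for a binary tree; `Node`, `serialize`, and `traverse` are hypothetical names, and a list index stands in for a byte offset within the contiguous memory space:

```python
class Node:
    """Pointer-linked tree node (illustrative)."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def serialize(root):
    """Flatten a tree into a list of (value, left_off, right_off) records."""
    nodes, index = [], {}
    def visit(n):                        # assign each node a slot (offset)
        if n is None or id(n) in index:
            return
        index[id(n)] = len(nodes)
        nodes.append(n)
        visit(n.left); visit(n.right)
    visit(root)
    def off(n):                          # address pointer -> offset (-1 = null)
        return index[id(n)] if n is not None else -1
    return [(n.value, off(n.left), off(n.right)) for n in nodes]

def traverse(array, offset=0):
    """Read a node back by adding its offset to the array origin."""
    return array[offset]

tree = Node("a", Node("b"), Node("c"))
arr = serialize(tree)
print(arr)               # [('a', 1, 2), ('b', -1, -1), ('c', -1, -1)]
print(traverse(arr, 0))  # ('a', 1, 2)
```

Because every pointer is stored relative to the array origin, the saved structure stays traversable even when it is reloaded at a different base address.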

US Pat. No. 10,216,626

PARALLEL GARBAGE COLLECTION IMPLEMENTED IN HARDWARE

International Business Ma...

1. A method for dynamic memory management implemented in hardware, the method comprising:storing objects in a plurality of heaps; and
operating a hardware garbage collector to free heap space occupied by specified ones of the objects, said hardware garbage collector comprising a memory module and a plurality of engines, and the memory module comprising a data memory and a pointer memory, and each of the engines being in communication with the memory module to receive data therefrom;
storing a group of the objects in a plurality of heaps in the data memory; and
storing in the pointer memory pointers to the group of objects in the data memory; and
wherein the operating a hardware garbage collector to free heap space includes:
operating the engines of the hardware garbage collector for:
traversing the plurality of the heaps and marking selected ones of the objects of the heaps based on given criteria,
using said marks to identify a plurality of the objects, and
sending to the memory module addresses of the identified objects in the data memory; and
the memory module clearing the objects at said addresses in the data memory.

US Pat. No. 10,216,625

HARDWARE INTEGRITY VERIFICATION

SK Hynix Memory Solutions...

1. A system for storing digital data, comprising:a controller that is configured to:
send commands including write commands and
write data chunks associated with the write commands,
wherein the write commands are tagged with sequence numbers and physical “PHY” channels to which the write commands are directed, and each of the write data chunks is tagged with a same sequence number as its corresponding write command; and
a flash interface configured to:
receive the write commands from the controller via a command path,
transfer the write commands to the tagged PHY channels,
receive, at a time different from when the write commands are received, the write data chunks from the controller via a data path between the controller and the flash interface which is different from the command path, and,
match by tagged sequence number, the write commands with the write data chunks associated with the write commands,
when the sequence numbers tagged to the write commands and write data chunks are matched, transfer the write data chunks to the tagged PHY channels via a data path between the flash interface and the PHY channels, and
when the sequence numbers tagged to the commands and data chunks are mismatched, the flash interface is further configured to abort the transfer of the write commands and the write data.
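The sequence-number matching described here can be sketched in Python; the tuple layout and the abort-by-returning-`None` convention are illustrative assumptions, not the patent's interface:

```python
def match_and_transfer(commands, chunks):
    """Pair write commands with data chunks by tagged sequence number.

    commands: list of (seq, phy_channel); chunks: list of (seq, data).
    Returns transfers as {phy_channel: [data, ...]}, or None on a mismatch
    (modeling the claimed abort of the transfer).
    """
    pending = {seq: phy for seq, phy in commands}
    transfers = {}
    for seq, data in chunks:
        phy = pending.get(seq)
        if phy is None:
            return None                     # mismatched sequence number: abort
        transfers.setdefault(phy, []).append(data)  # forward to tagged channel
    return transfers

print(match_and_transfer([(1, "PHY0"), (2, "PHY1")],
                         [(1, b"aaa"), (2, b"bbb")]))
# {'PHY0': [b'aaa'], 'PHY1': [b'bbb']}
print(match_and_transfer([(1, "PHY0")], [(9, b"zzz")]))  # None (abort)
```

Tagging both paths with the same sequence number is what lets commands and data arrive at different times over different paths and still be paired correctly.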

US Pat. No. 10,216,624

TWO-PASS LOGICAL WRITE FOR INTERLACED MAGNETIC RECORDING

SEAGATE TECHNOLOGY LLC, ...

1. A method comprising:receiving a request to write data to a consecutive sequence of logical block addresses (LBAs);
identifying a first non-contiguous sequence of data tracks mapped to a first portion of the consecutive sequence of LBAs;
identifying a second non-contiguous sequence of data tracks mapped to a second portion of the consecutive sequence of LBAs, the second portion sequentially following the first portion; and
writing the data of the second portion of the consecutive sequence of LBAs to the first non-contiguous sequence of data tracks during a first pass of a transducer head through a radial zone and writing the data of the first portion of the consecutive sequence of LBAs to the second non-contiguous sequence of data tracks during a second, subsequent pass of the transducer head through the radial zone.
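The inverted two-pass ordering in this claim, second LBA portion written on the first pass, first portion on the second, can be sketched as follows; the track naming and even split are hypothetical simplifications:

```python
def two_pass_write(lbas, first_tracks, second_tracks):
    """Return (pass1, pass2) write operations as (track, lba) pairs."""
    split = len(first_tracks)
    first_portion, second_portion = lbas[:split], lbas[split:]
    pass1 = list(zip(first_tracks, second_portion))   # pass 1: second portion
    pass2 = list(zip(second_tracks, first_portion))   # pass 2: first portion
    return pass1, pass2

# Interleaved track layout: T0/T2 form one non-contiguous sequence,
# T1/T3 the other (illustrative only).
p1, p2 = two_pass_write([0, 1, 2, 3], ["T0", "T2"], ["T1", "T3"])
print(p1)  # [('T0', 2), ('T2', 3)]
print(p2)  # [('T1', 0), ('T3', 1)]
```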

US Pat. No. 10,216,623

METHOD FOR VERIFYING THE FUNCTIONALITIES OF A SOFTWARE INTENDED TO BE INTEGRATED INTO A CRYPTOGRAPHIC COMPONENT, SYSTEM

AIRBUS DS SLC, Elancourt...

1. A method for validating operation of first software intended to be embedded in a cryptographic component using a simulator and a test bench which makes it possible to validate at least one first cryptographic function obviating at least some validation of the at least one first cryptographic function of the cryptographic component due to limited accessibility of the cryptographic component's memory, the method comprising:a step carried out on a processor and comprising generation of a first command of instructions to be used in the simulator from a second command of instructions to be used in the test bench, wherein the second command activates a cryptographic function and defines input data of the first software;
a first step carried out in the simulator and comprising a first execution of the at least one first cryptographic function using a first command of instructions by the first software implemented by a first processor and by a first memory,
said first execution of the at least one first cryptographic function generating:
a first status of the first memory, said first status comprising data present in the first memory after execution of a command of instructions; and
a first result of the first command, the first result comprising a value returned by at least one calculation of the first cryptographic function;
a second step carried out in the test bench and comprising a second execution of at least one second cryptographic function using a second command of instructions by a second software implemented by a second processor and by a second memory, with the at least one first cryptographic function and the at least one second cryptographic function carrying out same operations of modifying statuses of their respective memory, wherein the second software is a simplified version of the first software and reproduces a set of cryptographic functions of the first software while excluding some functionality that does not impact the first memory's status and input/output of the first software,
said second execution of the at least one second cryptographic function generating:
a second status of the second memory, said second status comprising data present in the second memory after execution of a command of instructions; and
a second result of the second command, the second result comprising a value returned by at least one calculation of the second cryptographic function; and
a step of validation that compares using a calculator:
the first status of the first memory with the second status of the second memory; and
the first result with the second result,
wherein the operation of the first software is validated when the first status and the first result are respectively identical to the second status and the second result.

US Pat. No. 10,216,622

DIAGNOSTIC ANALYSIS AND SYMPTOM MATCHING

International Business Ma...

1. A method for resolving at least one computer error, the method comprising:receiving, by a computer, a plurality of stored error chains with each of the plurality of stored error chains including one or more stored errors and a sequential order of the one or more stored errors reflecting a different sequence of received computer errors associated with one or more computer systems;
receiving, by the computer, the at least one computer error and diagnostic data associated with the at least one computer error;
based on the received diagnostic data associated with the received at least one computer error, generating, by the computer, at least one error chain including a plurality of detected errors and a sequential order of the plurality of detected errors, with the at least one generated error chain reflecting a detected sequence of errors determined during operation of the one or more computer systems;
comparing, by the computer, the at least one generated error chain to the plurality of stored error chains to determine a matching condition, with the comparing including:
using a set of operation(s) to identify a match between the at least one generated error chain and one or more of the plurality of stored error chains,
based on the set of operation(s), weighing each of the plurality of detected errors associated with the at least one generated error chain and the one or more stored errors associated with the plurality of stored error chains, with the weighing being based, at least in part upon product configuration information associated with the one or more computing systems and a position for each of the plurality of detected errors with respect to the at least one generated error chain and each of the one or more stored errors with respect to the plurality of stored error chains, such that one or more detected errors associated with the plurality of detected errors at a first or last position in the at least one generated error chain have greater weights than the one or more detected errors at intermediate positions with respect to the at least one generated error chain, and the one or more stored errors at a first or last position in the plurality of stored error chains have greater weights than the one or more stored errors at intermediate positions with respect to the plurality of stored error chains,
based on the weighing, comparing, by the computer, the at least one generated error chain to the plurality of stored error chains to determine the matching condition,
ranking, by the computer, a plurality of resolutions associated with the plurality of stored error chains based on the determined matching condition;
presenting, by the computer, a ranked list of the plurality of resolutions with respect to the at least one generated error chain; and
executing, by the computer, at least one resolution associated with the plurality of resolutions to the at least one computer error based on user selection of the at least one resolution from the ranked list.
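The position-based weighting in this claim, end-of-chain errors weighing more than intermediate ones, can be sketched in Python; the specific weights and the overlap-based similarity score are illustrative assumptions:

```python
def position_weight(i, n, end_weight=2.0, mid_weight=1.0):
    """First and last positions in a chain weigh more than the middle."""
    return end_weight if i in (0, n - 1) else mid_weight

def chain_similarity(generated, stored):
    """Weighted fraction of the generated error chain found in a stored chain."""
    total = score = 0.0
    for i, err in enumerate(generated):
        w = position_weight(i, len(generated))
        total += w
        if err in stored:
            score += w
    return score / total if total else 0.0

gen = ["E_DISK", "E_RETRY", "E_FAIL"]
print(chain_similarity(gen, ["E_DISK", "E_FAIL"]))  # 0.8 (both ends match)
print(chain_similarity(gen, ["E_RETRY"]))           # 0.2 (middle only)
```

Ranking stored chains by such a score would then drive the ordered resolution list the claim describes.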

US Pat. No. 10,216,620

STATIC CODE TESTING OF ACTIVE CODE

Synopsys, Inc., Mountain...

1. A method for automated application testing of computer code comprising:receiving deployed code of an application for execution at a computing system;
prior to executing the deployed code, adding breaking points to the deployed code;
executing the deployed code at the computing system;
during execution of the deployed code, monitoring the execution of the deployed code to identify active code from the deployed code, the identified active code loaded in a memory of the computing system for execution by the computing system during execution of the application,
the identified active code including computer-executable instruction to be executed by the computing system;
sending the identified active code that is loaded in the memory of the computing system for executing by the computing system to a code testing module; and
performing, by the code testing module, static analysis on the identified active code previously loaded in the memory of the computing system for execution by the computing system, the static analysis of the identified active code performed without executing the identified active code.

US Pat. No. 10,216,619

SYSTEM AND METHOD FOR TEST AUTOMATION USING A DECENTRALIZED SELF-CONTAINED TEST ENVIRONMENT PLATFORM

JPMorgan Chase Bank, N.A....

1. A method for test automation, comprising:at a workstation comprising at least one computer processor:
the workstation receiving a plurality of testing tools for testing a program;
the workstation receiving, from a server, a testing dashboard comprising core code that retrieves an external configuration file for one of the plurality of tools, and injects the external configuration file into the one of the plurality of tools at runtime, and a testing script that specifies an order of execution of the plurality of tools;
the workstation executing the core code to retrieve the external configuration file for the one of the plurality of tools;
the workstation executing the testing script to execute the plurality of tools; and
the workstation presenting the results of the execution of the testing script.

US Pat. No. 10,216,618

SYSTEM AND METHOD FOR REMOTELY DEBUGGING APPLICATION PROGRAMS

Versata Development Group...

1. A method of debugging an application program from a first computer system, wherein the application program resides on a second computer system that is remote from the first computer system, the method comprising:invoking the application program and a debugger program from the first computer system via a network interface;
displaying a user frame at the first computer system that includes information generated by the application program;
providing a debug view option at the first computer system for generating a debug frame of the application program;
communicating with the application program and the debugger program one or more requests requesting debugging information from the application program from the first computer system;
receiving the debugging information with the first computer system via the network interface;
displaying the debug frame at the first computer system when the debug view option is selected; and
generating a display in the debug frame that displays the debugging information at the first computer system, and the debugging information includes information generated from the application program and related to one or more components of the application program.

US Pat. No. 10,216,617

AUTOMATICALLY COMPLETE A SPECIFIC SOFTWARE TASK USING HIDDEN TAGS

International Business Ma...

1. A method to detect and diagnose where an error occurs in a source code that is associated with a software program or a website, the method comprising:receiving a log report associated with the software program or the website, wherein the log report is sent based on a hidden tag associated with the software program or the website;
analyzing the received log report;
detecting at least one error based on the analysis of the received log report;
executing a last successfully executed line in the source code associated with the software program or the website, wherein the executing of the last successfully executed line in the source code is based on the detection of the at least one error; and
executing a plurality of hidden scripts associated with the hidden tag associated with the software program or the website, wherein the executed plurality of hidden scripts includes accessing an online help repository and displaying, via a graphical user interface, a suggested modification dialog.

US Pat. No. 10,216,616

COOPERATIVE TRIGGERING

Intel Corporation, Santa...

1. A processor, comprising:a front end including circuitry to decode instructions from an instruction stream;
a data cache unit including circuitry to cache data for the processor; and
a core triggering block (CTB) to provide integration between two or more different debug capabilities during a software execution, wherein the CTB is to:
provide a logging or tracing function during the software execution based on a first trigger of two or more different triggers associated with the two or more different debug capabilities.

US Pat. No. 10,216,615

DEBUGGABLE INSTANCE CODE IN A CLOUD-BASED INSTANCE PLATFORM ENVIRONMENT

SAP SE, Walldorf (DE)

1. A method executed by at least one hardware processor, the method comprising:receiving a request from a runtime platform to run an instance of a software function;
in response to the receiving, loading instance code and instance data corresponding to the instance from a persistent storage into a runtime cache for the instance, the instance code and the instance data being stored together in the persistent storage, wherein the persistent storage is an in-memory database management system that stores the instance code and the instance data in a graph network, wherein the in-memory database management system models the graph network based on column store persistency, using column-optimized tables;
determining that there is an indication that the instance code should be executed in a debug mode; and
in response to a determination that the instance code should be executed in the debug mode:
generating source code for the instance, based on the instance code, in a debug folder in the persistent storage;
invalidating the runtime cache for the instance;
compiling the source code for the instance into an executable file, the compiling including adding one or more breakpoints to the executable file; and
sending the executable file to the runtime platform for execution.

US Pat. No. 10,216,614

SAMPLING APPROACHES FOR A DISTRIBUTED CODE TRACING SYSTEM

Amazon Technologies, Inc....

1. A system, comprising:one or more processors; and
memory to store computer-executable instructions that, if executed, cause the one or more processors to at least:
receive a segment of a code trace corresponding to a request submitted to a particular application of a plurality of applications hosted in a computing environment, wherein the code trace documents at least one call to at least one component service of the particular application to respond to the request;
determine whether to discard the segment based at least in part on at least one sampling parameter;
discard the segment; and
aggregate another segment in a batch before forwarding the other segment to a trace processing system as part of the batch.
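The discard-or-batch behavior in this claim can be sketched with a simple counter-based sampler; the one-in-N sampling rule, batch size, and class shape are assumptions for illustration:

```python
class TraceCollector:
    """Sample trace segments, batching kept ones before forwarding."""
    def __init__(self, rate, batch_size, forward):
        self.rate, self.batch_size, self.forward = rate, batch_size, forward
        self.count, self.batch = 0, []

    def receive(self, segment):
        self.count += 1
        if self.count % self.rate != 0:
            return                      # discard per the sampling parameter
        self.batch.append(segment)      # aggregate kept segment in a batch
        if len(self.batch) >= self.batch_size:
            self.forward(self.batch)    # send batch to the trace processor
            self.batch = []

sent = []
c = TraceCollector(rate=2, batch_size=2, forward=sent.append)
for seg in ["s1", "s2", "s3", "s4"]:
    c.receive(seg)
print(sent)  # [['s2', 's4']]
```

Sampling bounds the tracing overhead while batching amortizes the cost of forwarding to the trace processing system.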

US Pat. No. 10,216,612

SYSTEM AND METHOD FOR ACCESSING SERVER INFORMATION

FUNDI SOFTWARE PTY LTD, ...

1. A system for obtaining information about particular application and/or system events, and exception conditions generated during processing of a computer program or a transaction and stored in a CICS internal trace in the form of trace entries, the system comprising a first computing module for reading the CICS internal trace to obtain trace entries written in the CICS internal trace, and storage devices for storing the trace entries therein, wherein reading of the CICS internal trace comprises the steps of:a. locating CICS regions on the logical partition;
b. establishing a recovery environment to detect and recover from abnormal endings that occur if a CICS region suddenly becomes unavailable;
c. establishing AR-mode cross-memory access to the CICS address space to access the control blocks of CICS and the internal trace that resides inside the CICS address space;
d. verifying that the CICS address space is an active and eligible CICS region;
e. locating the current internal trace buffer;
f. iteratively repeating steps c to e until all CICS regions have been read; and
g. reading the trace entries of the internal trace for each of the CICS regions.

US Pat. No. 10,216,611

DETECTING MISTYPED IDENTIFIERS AND SUGGESTING CORRECTIONS USING OTHER PROGRAM IDENTIFIERS

Synopsys, Inc., Mountain...

1. A method for suggesting corrections in computer code prior to execution by a computer comprising:identifying, by a computer system, a set of functions in computer code in a programming language, the programming language of the computer code permitting usage of identifiers without static resolution;
generating, by the computer system, an occurrence table of each identifier in the computer code, the occurrence table identifying instances of each identifier in the computer code;
identifying, by the computer system, a set of candidate identifiers from the identifiers in the occurrence table that have fewer instances in the occurrence table than a threshold, wherein each candidate identifier in the set of candidate identifiers is a possible mistyped identifier;
for each candidate identifier in the set of candidate identifiers, identifying, by the computer system, a set of similar identifiers in the occurrence table;
identifying, by the computer system, a correction for the candidate identifiers in the set of candidate identifiers based on the set of similar identifiers;
suggesting, on a user interface to a user, the correction for the candidate identifier in the set of candidate identifiers to modify the computer code; and
providing a control for applying the correction to modify the candidate identifiers in the computer code in response to a user command.
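The occurrence-table approach in this claim can be sketched in Python, with `difflib` standing in for whatever similarity measure the patent contemplates; the threshold, cutoff, and function names are illustrative assumptions:

```python
import difflib
import re
from collections import Counter

def suggest_corrections(code, threshold=2, cutoff=0.8):
    """Flag rare identifiers as possible typos and suggest frequent lookalikes."""
    occurrences = Counter(re.findall(r"[A-Za-z_]\w*", code))  # occurrence table
    common = [n for n, c in occurrences.items() if c >= threshold]
    suggestions = {}
    for name, count in occurrences.items():
        if count >= threshold:
            continue                   # frequent enough: not a typo candidate
        close = difflib.get_close_matches(name, common, n=1, cutoff=cutoff)
        if close:
            suggestions[name] = close[0]
    return suggestions

code = "total = 0\nfor x in xs: total += x\nprint(totel)"
print(suggest_corrections(code))  # {'totel': 'total'}
```

The intuition is that in dynamic languages without static resolution, a mistyped identifier usually appears only once while its intended spelling appears many times.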

US Pat. No. 10,216,610

DEBUG SESSION ANALYSIS FOR RELATED WORK ITEM DISCOVERY

International Business Ma...

1. A method for automatic debug session analysis for related work item discovery, the method comprising:recording first metadata describing a particular debug session associated with a user for a respective work item, wherein the recorded first metadata is based on behavioral patterns of the user while the user is developing or debugging the respective work item;
extracting second metadata from a plurality of previously recorded work items of the user to provide a systematic analysis of behavioral patterns of the user during debug sessions, wherein the previously recorded work items are based on stack traces, delivered change sets, saved breakpoint files, and log files that were previously recorded during previous debug sessions;
associating the first metadata recorded in the particular debug session with the extracted second metadata;
in response to the user working on a new issue, comparing metadata for the new issue with the associated first metadata and second metadata; and
in response to identifying, based on the compared metadata, a work item with a predetermined level of similar metadata from debug sessions, notifying the user of a potential work item match.

US Pat. No. 10,216,609

EXCEPTION PREDICTION BEFORE AN ACTUAL EXCEPTION DURING DEBUGGING

International Business Ma...

1. A method of predicting an exception during a debugging of software code before the debugging encounters the exception, the method implemented by computer-readable program code being executed by a processor of a computer, and the method comprising the steps of:the computer receiving a number (X) of lines of the software code, wherein X is an integer greater than one;
during a debugging of a line number L of the software code, the computer executing upcoming lines of the software code consisting of at least line number (L+1) through line number (L+X) of the software code, wherein L is an integer greater than zero;
based on the upcoming lines of the software code being executed, the computer predicting that the exception will be encountered at a line number M of the software code and determining the line number M is within a range of line number (L+1) through line number (L+X), inclusively, wherein M is an integer;
during the debugging of the line number L of the software code and based on the exception being predicted to be encountered at the line number M, the computer displaying a warning that the exception is to be encountered at the line number M;
the computer modifying, using a fix written in response to the predicted exception, the software code; and
during a debugging of the line number M of the software code, the computer executing the modified software code to avoid the predicted exception.
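The look-ahead prediction described here can be sketched in Python; running source lines with `exec` on a copied environment is an illustrative stand-in for the speculative execution the claim describes:

```python
def predict_exception(lines, current, lookahead, env=None):
    """Speculatively run lines current+1..current+lookahead.

    Returns the 1-based line number where an exception is predicted,
    or None if the upcoming lines run cleanly.
    """
    env = dict(env or {})              # speculate on a copy of the state
    for num in range(current + 1, min(current + lookahead, len(lines)) + 1):
        try:
            exec(lines[num - 1], env)  # execute one upcoming line
        except Exception:
            return num                 # predicted exception at this line
    return None

program = ["a = 1", "b = 0", "c = a / b", "print(c)"]
print(predict_exception(program, current=0, lookahead=4))  # 3
```

Copying the environment keeps the speculation from disturbing the real debug session's state.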

US Pat. No. 10,216,608

LOAD TESTING WITH AUTOMATED SERVICE DEPENDENCY DISCOVERY

Amazon Technologies, Inc....

1. A system, comprising:one or more computing devices configured to implement a load testing system, wherein the load testing system:
receives a request to approve load testing for a service;
initiates, responsive to the request to approve load testing, one or more test calls to the service, wherein a call trace functionality is enabled for the one or more test calls;
identifies, based at least in part on trace information included in one or more responses to the one or more test calls, one or more downstream services upon which the service depends, wherein the trace information is generated by the call trace functionality for the one or more test calls at the downstream services called by the service or by other ones of the downstream services; and
approves or denies the request to approve load testing for the service based at least in part on an availability of the identified one or more downstream services for load testing, wherein the availability of the one or more downstream services is indicated in a load test registry.

US Pat. No. 10,216,607

DYNAMIC TRACING USING RANKING AND RATING

International Business Ma...

1. A computer program product for dynamic tracing, the computer program product comprising:one or more computer-readable storage media and program instructions stored on the one or more computer-readable storage media, the program instructions comprising:
program instructions to monitor a log file, wherein the log file comprises events, wherein an event comprises an event code, an event time stamp, computer network activity, user access, data manipulation, software usage, processor utilization, and software application activity;
program instructions to receive a ranking and rating table (“table”), wherein the table comprises one or more error codes and a ranking for each of the one or more error codes;
program instructions to match the event code with an error code of the one or more error codes;
program instructions to calculate a rating for the error code, wherein program instructions to calculate the rating comprises program instructions to add the ranking of the error code, an occurrence counter of the error code, and a previous rating of the rating;
program instructions to compare the calculated rating to a rating threshold, wherein program instructions to compare the calculated rating to a rating threshold further comprises program instructions to enable a trace level information capture based on the calculated rating exceeding a high rating threshold, program instructions to enable a debug level information capture based on the calculated rating exceeding a medium rating threshold, and program instructions to enable an info level information capture based on the calculated rating exceeding a low rating threshold, wherein the rating threshold is dynamically configurable;
program instructions to enable an information capture level based on the rating threshold of the calculated rating;
program instructions to start a timer when the information capture level is enabled, wherein the timer is reset after a configurable elapsed time;
in response to program instructions to enable the information capture level, program instructions to copy events from the log file into an abbreviated log file, wherein the copied events include the error code for the calculated rating;
program instructions to create an alert indicating a changed information capture level;
program instructions to move the abbreviated log file into a memory storage;
program instructions to reset the rating and resetting an occurrence counter;
program instructions to reset a configurable timer;
program instructions to reset the information capture level to a default level; and
program instructions to reset the dynamic tracing.
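The rating calculation and threshold-to-capture-level mapping in this claim can be sketched as follows; the concrete threshold values are illustrative, since the claim only says they are dynamically configurable:

```python
def calculate_rating(ranking, occurrences, previous):
    """Per the claim: ranking + occurrence counter + previous rating."""
    return ranking + occurrences + previous

def capture_level(rating, low=10, medium=20, high=30):
    """Map a rating to an information capture level via three thresholds."""
    if rating > high:
        return "trace"     # most detailed capture
    if rating > medium:
        return "debug"
    if rating > low:
        return "info"
    return "default"

r = calculate_rating(ranking=5, occurrences=4, previous=15)
print(r, capture_level(r))  # 24 debug
```

Escalating the capture level only when an error's rating climbs keeps verbose tracing off during normal operation.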

US Pat. No. 10,216,605

ELAPSED TIME INDICATIONS FOR SOURCE CODE IN DEVELOPMENT ENVIRONMENT

International Business Ma...

1. A method for providing elapsed time indications for source code in a development environment, comprising:importing, by one or more processors, source code, into an integrated development environment;
defining, by the one or more processors, blocks of the source code to be timed during source code execution in the integrated development environment, wherein the defining comprises setting breakpoints in the source code to differentiate the blocks, wherein each block comprises a line of the source code;
executing, by the one or more processors, the source code, in the integrated development environment repetitively;
monitoring defined blocks of the source code during the repetitive executing to determine, for each iteration: an elapsed time, an average elapsed time for the execution of each defined block of the defined blocks of the source code, and a standard deviation for each defined block of the defined blocks;
annotating a specific defined block of the source code of the defined blocks of source code with the average elapsed time for the specific defined block of the source code and the standard deviation for the specific defined block of the source code;
subsequent to the annotating, executing, the specific defined block of the source code in the integrated development environment, wherein the executing comprises:
providing an elapsed time indication for the elapsed time for the specific defined block of the source code by displaying the elapsed time indication in a graphical user interface of the integrated development environment; and
displaying, in the graphical interface, an alert, if the elapsed time exceeds the average elapsed time for the specific defined block of the source code by greater than the standard deviation for the specific defined block of the source code; and
resetting, by the one or more processors, the method, when the source code is run on a different hardware environment.
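The timing-and-alerting logic here can be sketched with the standard library; the one-standard-deviation alert rule follows the claim, while the block and run-count model is a simplifying assumption:

```python
import statistics
import time

def profile_block(block, runs):
    """Run `block` repeatedly; return (mean, stdev) of elapsed seconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        block()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

def check_run(elapsed, mean, stdev):
    """True if one execution exceeded the average by more than one stdev."""
    return elapsed > mean + stdev

mean, stdev = profile_block(lambda: sum(range(1000)), runs=5)
print(check_run(0.005, mean=0.001, stdev=0.001))  # True
```

In the claimed IDE setting, `profile_block` would correspond to the repetitive monitored executions and `check_run` to the alert shown when a later run of the annotated block is unusually slow.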

US Pat. No. 10,216,604

MONITORING ENVIRONMENTAL PARAMETERS ASSOCIATED WITH COMPUTER EQUIPMENT

CA, INC., New York, NY (...

1. A computer implemented method for monitoring computer equipment, the method comprising:measuring, using a number of sensors located at and associated with a number of computing devices, a number of environmental parameters related to the number of computing devices;
receiving an image from a camera of a wearable device, comprising glasses, worn by a user, the image representing an actual view of the number of computing devices as seen by the user; and
with a computer, displaying sensed environmental parameters, based on the actual view of the number of computing devices, to create a spatial relation between a display of the sensed environmental parameters and a space to which each sensed environmental parameter pertains, such that the user sees an indication of a sensed environmental parameter superimposed on the actual view of the number of computing devices viewed through the glasses, the displayed indication of the sensed environmental parameter appearing in spatial relation to a corresponding location of the sensed environmental parameter in the actual view.

US Pat. No. 10,216,603

CABLE REMOVAL SYSTEM

International Business Ma...

1. A method for a cable removal system, the method comprising:
detecting, by a computer processor, a physical contact of the user with a touch sensor adjacent to a network port of a first device, wherein the network cable is a physical connection between the first device and a second device;
responsive to detecting the physical contact:
determining, by the computer processor, that the user is attempting to remove the network cable;
determining, by the computer processor, whether an information transmission across the network cable can be rerouted over an alternate pathway;
responsive to determining that the information transmission can be rerouted:
rerouting the information transmission over the alternate pathway;
alerting the user that there is no information transmission across the network cable; and
enabling the user to safely disconnect the network cable.

US Pat. No. 10,216,602

TOOL TO MEASURE THE LATENCY OF TOUCHSCREEN DEVICES

Tactual Labs Co., New Yo...

1. A latency measuring head for use in measuring touch-to-response latency in a test device, the test device including a capacitive user interface that responds to touch input, the latency measuring head comprising:
a conductive element adapted to be positioned in a first fixed proximity to the capacitive user interface, which the first fixed proximity may include being in contact with the capacitive user interface;
an electron sink operatively connected to the conductive element via a normally open switch having an open and a closed position, the electron sink being adapted to cause dissipation of sufficient charge to trigger a touch event on the test device in response to the switch being closed; and
a photosensitive element adapted to be positioned in a second fixed proximity to the capacitive user interface, which the second fixed proximity may include being in contact with the capacitive user interface, the photosensitive element being further adapted to output signals corresponding to at least one optical property of at least a portion of the capacitive user interface.

US Pat. No. 10,216,601

AGENT DYNAMIC SERVICE

Cisco Technology, Inc., ...

1. A method for monitoring an application, comprising:
receiving a first .jar file by an agent within an application on a server;
creating a new .jar file from the received .jar file, wherein the new .jar file modifies references to agent elements found in the received .jar file including at least references to agent class, methods and fields;
running the new .jar file, wherein the new .jar file is placed in a .jar file directory; and
removing the .jar file and code created by the .jar file by a service module by removing the executed new .jar file from the .jar file directory, and removing traces of the new .jar file by removing byte instrumented code generated as a result of running the new .jar file.

US Pat. No. 10,216,600

LINKING SINGLE SYSTEM SYNCHRONOUS INTER-DOMAIN TRANSACTION ACTIVITY

International Business Ma...

1. A method implemented by an information handling system comprising:
intercepting an inter-domain event between a first domain and a second domain, wherein the first and second domains are running within a common operating system image, and wherein a first data collector in the first domain generates corresponding execution identifiers for events originating within the first domain and a second data collector in the second domain generates corresponding execution identifiers for events originating within the second domain;
identifying a type of the inter-domain event;
gathering one or more selected execution identifiers pertaining to the inter-domain event, wherein the execution identifiers include a system identifier, a process identifier, and a thread identifier;
generating a unique token that indicates an order that the inter-domain event occurred when compared with a plurality of unique tokens corresponding to other inter-domain events; and
storing the gathered selected execution identifiers, the generated unique token, and the type of inter-domain event in a data store.

US Pat. No. 10,216,599

COMPREHENSIVE TESTING OF COMPUTER HARDWARE CONFIGURATIONS

International Business Ma...

1. A method for testing a computer, wherein the computer includes a plurality of hardware components, wherein the plurality of hardware components includes a plurality of processors, and wherein the method comprises:
receiving a signal to determine resources of the computer to allocate to a program, wherein the program is executed by at least one processor included in the plurality of processors;
determining, in response to the signal, to allocate to the program an at least one first hardware component included in the plurality of hardware components;
detecting, in response to the signal, that the computer is operating in a test mode;
selecting, in response to the signal and based at least in part on the computer operating in the test mode, a subset of the plurality of hardware components included in the computer, wherein the subset includes hardware components associated with the at least one first hardware component, wherein the subset includes hardware components not presently allocated to the program, and wherein the subset comprises a number of hardware components no greater than a program limit; and
swapping an at least one second hardware component for an at least one third hardware component, wherein the at least one second hardware component is included in the subset of the plurality of hardware components included in the computer, wherein the at least one third hardware component is included in the plurality of hardware components included in the computer, the at least one third hardware component presently allocated to the program, and wherein the swapping of the at least one second hardware component for the at least one third hardware component comprises de-allocating, from the program, the at least one third hardware component and allocating, to the program, the at least one second hardware component.

US Pat. No. 10,216,598

METHOD FOR DIRTY-PAGE TRACKING AND FULL MEMORY MIRRORING REDUNDANCY IN A FAULT-TOLERANT SERVER

Stratus Technologies Berm...

1. A method of transferring memory from an active to a standby memory in an FT (Fault Tolerant) Server system comprising the steps of:
Reserving a portion of memory using BIOS of the FT Server system;
Loading and initializing an FT Kernel Mode Driver into memory;
Loading and initializing an FT virtual machine Manager (FTVMM) including the Second Level Address Table (SLAT) into the reserved portion of memory and synchronizing all processors in the FTVMM;
Tracking the OS (Operating System), driver, software and Hypervisor memory accesses using the FTVMM's SLAT in Reserved Memory;
Tracking Guest VM (Virtual Machine) memory accesses by tracking all pages of the SLAT associated with the guest and intercepting the Hypervisor writes to memory pages that constitute the SLAT;
Entering Brownout—level 0, by executing a full memory copy while keeping track of the dirty bits (D-Bits) in the SLAT to track memory writes by all software in the FT Server;
Clearing all of the D-Bits in the FTVMM SLAT and each Guest's current SLAT;
Entering Brownout of phases 1-4 tracking all D-Bits by:
Collecting the D-Bits;
Invalidating all processors cached translations for the FTVMM SLAT and each current Guest's SLAT;
Copying the data of the modified memory pages from the active memory to the second Subsystem memory;
Entering Blackout by pausing all the processors in the FTVMM, except Processor #0, and disabling interrupts in the FT Driver for processor #0;
Copying the collected data from active to the mirror memory;
Collecting the final set of recently dirtied pages including stack and volatile pages;
Transferring the set of final pages to the mirror memory using an FPGA (Field Programmable Gate Array);
Using the FT Server FPGA and Firmware SMM (System Management Module) to enter the state of Mirrored Execution;
Completing the Blackout portion of the operation by terminating and unloading the FTVMM;
Returning control to the FT Kernel Mode Driver; and
Resuming normal FT System operation.

US Pat. No. 10,216,597

RECOVERING UNREADABLE DATA FOR A VAULTED VOLUME

NETAPP, INC., Sunnyvale,...

1. A method comprising:
identifying a sector from a plurality of sectors in a physical memory of a storage system as an unreadable sector;
determining that a logical block address range of the unreadable sector matches a logical block address range of a copy of the sector identified as the unreadable sector that was previously uploaded to a cloud storage, wherein the copy of the sector stores readable data and a match indicates that the logical block address of the unreadable sector has not changed since the copy of the sector was previously uploaded to the cloud storage;
based on the determining, receiving the copy of the sector from the cloud storage; and
replacing the unreadable sector with the copy of the sector at a same location in the physical memory occupied by the unreadable sector.
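A minimal sketch of the claimed recovery flow (data structures and field names here are illustrative assumptions, not from the patent): an unreadable sector is replaced in place by a previously uploaded cloud copy only when the logical block address ranges match, indicating the sector's address has not changed since upload.

```python
def recover_sector(unreadable, cloud_copies):
    """Replace an unreadable sector with a cloud copy whose logical
    block address range matches, at the same in-memory location."""
    for copy in cloud_copies:
        if copy["lba_range"] == unreadable["lba_range"]:
            # A matching LBA range means the copy still corresponds to
            # this sector; restore its readable data in place.
            unreadable["data"] = copy["data"]
            unreadable["readable"] = True
            return True
    return False
```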

US Pat. No. 10,216,595

INFORMATION PROCESSING APPARATUS, CONTROL METHOD FOR THE INFORMATION PROCESSING APPARATUS, AND RECORDING MEDIUM

CANON KABUSHIKI KAISHA, ...

1. An information processing apparatus that performs mirroring to store same data in a plurality of storage units, the information processing apparatus comprising:
a memory of a main board, the memory being configured to store mirroring information including configuration information about the mirroring in the plurality of storage units;
a mirroring configuration unit configured to configure mirroring in the plurality of storage units based on the configuration information about the mirroring stored in the memory of a main board;
a memory of a sub board, the memory being configured to store the mirroring information;
a detection unit configured to detect replacement of the main board; and
a restoration unit configured to restore the mirroring information stored in the memory of the sub board in a memory of a replaced main board in accordance with detection of the replacement of the main board by the detection unit,
wherein the mirroring configuration unit is configured to configure mirroring in a mirroring state in accordance with the configuration information about the mirroring restored in the memory of the replaced main board being information indicating a mirror, and configure mirroring in a degraded state in accordance with the configuration information about the mirroring being information indicating degradation.

US Pat. No. 10,216,594

AUTOMATED STALLED PROCESS DETECTION AND RECOVERY

INTERNATIONAL BUSINESS MA...

1. A method for use in a dispersed storage network (DSN) including a plurality of storage units, the method comprising:
detecting, by a processing unit included in the DSN, a failing storage unit;
issuing, by the processing unit, an error indicator to a recovery unit, to indicate the failing storage unit;
issuing, by the recovery unit, a test request to the failing storage unit;
determining, by the recovery unit, to implement a corrective action; and
facilitating, by the recovery unit, execution of the corrective action.

US Pat. No. 10,216,593

DISTRIBUTED PROCESSING SYSTEM FOR USE IN APPLICATION MIGRATION

HITACHI, LTD., Tokyo (JP...

1. A distributed processing system, comprising:
a plurality of application servers; and
a management device,
wherein the application server includes an application portion and a distributed execution platform portion,
the application portion includes
an adapter unit that receives an execution request of an application from a client terminal, and transmits a message of a processing request of the application to the distributed execution platform portion, and
a processor unit that performs a process of the application in response to a request from the distributed execution platform,
the distributed execution platform portion includes
a dispatcher unit that holds the message transmitted from the adapter unit, and selects an application server that performs the process of the application requested through the message according to a routing strategy, and
a statistical information storage unit that holds statistical information of each process of the application by the application server, and
the management device includes
a migration management unit that manages a migration status of the application for every two or more application servers, and
a migration evaluating unit that decides a migration target server group based on performance information of the application server, statistical information of each process of the application, and the number of non-completed processes of each process calculated based on the number of messages held in the dispatcher unit for the application server in which the migration status is an old application operation state.

US Pat. No. 10,216,592

STORAGE SYSTEM AND A METHOD USED BY THE STORAGE SYSTEM

International Business Ma...

1. A computer program product for performing failover processing between a production host and a backup host, a storage system is connected to the production host and the backup host, the computer program product comprising:
a computer readable non-transitory article of manufacture tangibly embodying computer readable instructions which, when executed, cause a computer to carry out a method comprising:
in response to a failure of the production host, obtaining metadata of data blocks that have already been cached from an elastic space located in a fast disk of the storage system, and expanding a storage capacity of the elastic space;
in response to a maximum storage capacity of the elastic space being less than a storage capacity of the data blocks to which the metadata corresponds, expanding the elastic space to its maximum capacity;
in response to the maximum storage capacity of the elastic space being more than the storage capacity of the data blocks to which the metadata corresponds, expanding the elastic space until the storage capacity of the elastic space at least is capable of storing the data blocks to which the metadata corresponds;
obtaining data blocks to which the metadata corresponds according to the metadata and the storage capacity of the expanded elastic space, and storing the same in the expanded elastic space; and
in response to the backup host requesting the data blocks to which the metadata corresponds and the data blocks to which the metadata corresponds have already been stored in the expanded elastic space, obtaining the data blocks to which the metadata corresponds from the expanded elastic space and transmitting the same to the backup host.
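The two expansion branches of this claim reduce to a simple capacity rule: grow the elastic space just enough to hold the cached data blocks, capped at its maximum capacity. A sketch (function and argument names are illustrative assumptions):

```python
def expand_elastic_space(current, maximum, needed):
    """Return the new elastic-space capacity per the claim's two cases:
    if the maximum is less than the needed capacity, expand to the
    maximum; otherwise expand until the blocks fit."""
    if maximum < needed:
        return maximum
    # Never shrink below the current allocation.
    return max(current, needed)
```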

US Pat. No. 10,216,591

METHOD AND APPARATUS OF A PROFILING ALGORITHM TO QUICKLY DETECT FAULTY DISKS/HBA TO AVOID APPLICATION DISRUPTIONS AND HIGHER LATENCIES

EMC IP Holding Company LL...

1. A method for determining a faulty hardware component within a data storage system, comprising:
collecting, by a processor, data relating to a plurality of input/output (IO) errors associated with a first storage processor within the data storage system, wherein the data storage system includes a plurality of disk array enclosures (DAEs), each DAE having one or more disk drives;
compiling, by the processor, IO error statistics based on the data relating to the plurality of IO errors, the IO error statistics being related to a first one of the DAEs of the data storage system; and
determining, by the processor, a faulty hardware component based on the IO error statistics, wherein the determining of the faulty hardware component comprises utilizing a second storage processor of the data storage system independent from the first storage processor, including examining IO access statistics of the second storage processor for accessing the first DAE through a different path, and
wherein the plurality of DAEs are connected to the first storage processor and the second storage processor through an independent first path and an independent second path, and each of the one or more disk drives has a first port connected to the first storage processor through the first path and a second port connected to the second storage processor through the second path.

US Pat. No. 10,216,590

COMMUNICATION CONTROL DETERMINATION OF STORING STATE BASED ON A REQUESTED DATA OPERATION AND A SCHEMA OF A TABLE THAT STORES THEREIN DATA TO BE OPERATED BY THE DATA OPERATION

KABUSHIKI KAISHA TOSHIBA,...

1. A communication control device to be connected to a plurality of server devices configured to store therein data in a distributed manner, the communication control device comprising:
a determining unit configured to determine, based on a requested data operation and a schema of a table that stores therein data to be operated by the data operation, any one of a first operation method of storing state information indicating a state of the data operation and a second operation method of avoiding storing the state information as a method for the data operation;
a request unit configured to request the server devices to execute the data operation;
a storage control unit configured to store, when the first operation method is determined, the state information in the storage unit upon execution of the data operation; and
a recovery unit configured to recover, when a failure occurs upon the execution of the data operation, from the failure by a recovery method suited to the operation method determined by the determining unit.

US Pat. No. 10,216,589

SMART DATA REPLICATION RECOVERER

International Business Ma...

1. A computer system for data replication recovery in a heterogeneous environment comprising:
one or more processors, one or more computer-readable storage devices, and a plurality of program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors, the plurality of program instructions comprising:
program instructions to receive, by a data replication recoverer (DRR) agent, one or more committed transaction records from a source agent, wherein the source agent is configured to receive the one or more committed transaction records from a source database;
program instructions to create, by the DRR agent, data and metadata records from the received one or more committed transaction records, wherein the metadata comprises a transaction identifier, a timestamp associated with a transaction, transaction statistics wherein the transaction statistics include a number of operations performed on the source database, and a number of rows processed on the source database, and program instructions to save the data and the metadata records in a data replication repository; and
in response to receiving a request to recover a target database, program instructions to selectively recover, by the DRR agent, the target database wherein the target database is recovered using either one or more individual transactions or a bookmark; and
wherein the program instructions to selectively recover the target database further comprises:
program instructions to locate the selected bookmark within the data replication repository associated with the target database;
program instructions to locate an earliest log position entry recorded in the bookmark and a last log position entry recorded in the bookmark in the metadata within the data replication repository associated with the target database;
program instructions to create a plurality of database operations to reverse the transactions recorded within the earliest log position entry and the last log position entry in the selected bookmark;
program instructions to send, by the DRR agent, the plurality of created database operations to a target agent on the target database, wherein the target agent executes the plurality of created database operations on the target database;
based on the created database operations completing on the target database, program instructions to notify the DRR agent, by the target agent; and
in response to receiving the notification, program instructions for the DRR agent to mark the data and metadata in the data replication repository as being recovered.

US Pat. No. 10,216,588

DATABASE SYSTEM RECOVERY USING PRELIMINARY AND FINAL SLAVE NODE REPLAY POSITIONS

SAP SE, Walldorf (DE)

1. One or more non-transitory computer-readable storage media storing computer-executable instructions for causing a computing system to perform processing to carry out a database recovery at a slave database system node, the slave node in communication with a master node, the processing comprising:
receiving a preliminary slave log backup position from a backup manager;
replaying at least a portion of one or more log backups until the preliminary slave log backup position is reached;
receiving a final slave log backup position; and
replaying at least a portion of one or more log backups until the final slave log backup position is reached.
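The two-phase replay in this claim — advance to the preliminary slave log backup position, then continue to the final position once it arrives — can be sketched as follows (log representation and names are illustrative assumptions):

```python
def replay_to_position(log_entries, start, target):
    """Replay entries with sequence numbers in (start, target]."""
    applied = [e for e in log_entries if start < e["seq"] <= target]
    return applied, target

def slave_recovery(log_entries, preliminary, final):
    # Phase 1: replay log backups until the preliminary slave log
    # backup position is reached.
    first, pos = replay_to_position(log_entries, 0, preliminary)
    # Phase 2: after the final position is received, replay the rest.
    second, _ = replay_to_position(log_entries, pos, final)
    return first + second
```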

US Pat. No. 10,216,587

SCALABLE FAULT TOLERANT SUPPORT IN A CONTAINERIZED ENVIRONMENT

INTERNATIONAL BUSINESS MA...

1. A method for providing failure tolerance to containerized applications by one or more processors, comprising:
initializing a layered filesystem to maintain checkpoint information of stateful processes in separate and exclusive layers on individual containers;
transferring a most recent checkpoint layer from a main container exclusively to an additional node to maintain an additional, shadow container;
implementing a maintenance schedule for the main and shadow containers, including transferring additional checkpoint layers at regular intervals; and
organizing the most recent checkpoint layer and additional layers such that the most recent checkpoint layer is a topmost layer.

US Pat. No. 10,216,586

UNIFIED DATA LAYER BACKUP SYSTEM

International Business Ma...

1. A method of managing disaster recovery with a unified data layer, the method comprising:
establishing a primary system at a first site, wherein the primary system includes both a primary instance of an application being hosted by the primary system and a primary database for data of the application, wherein the application is hosted for remote users that use the application to manage data of the primary database;
establishing a unified data layer for the primary system at a second site; wherein the unified data layer provides access to data of the primary database without providing access to the primary database, wherein the second site is remote from the first site;
detecting a triggering event of the primary system, wherein the triggering event impairs the ability of the primary system to host the application;
instantiating a recovery system in response to detecting the triggering event, wherein the recovery system includes both a recovery instance of the application and a recovery database for the data of the application, wherein the recovery system is not located at the first site;
populating the recovery database using the unified data layer;
activating the recovery system, wherein the activating the recovery system includes allowing the remote users to access the recovery instance of the application to manage data of the recovery database;
detecting availability of the primary system to host the application without impairment;
updating the primary database using the unified data layer in response to detecting the availability of the primary system;
deactivating the recovery system at a second point in time, wherein the second point in time occurs immediately before activating of the primary system and is prior to a first point in time; and
activating the primary system, wherein activating the primary system includes allowing the remote users to access the primary instance of the application to manage data of the primary database.

US Pat. No. 10,216,585

ENABLING DISK IMAGE OPERATIONS IN CONJUNCTION WITH SNAPSHOT LOCKING

Red Hat Israel, Ltd., Ra...

1. A method comprising:
attaching a first snapshot to a first virtual machine, the first snapshot being stored within a disk image, wherein the first snapshot is a copy of a virtual disk at a first point in time;
generating, in view of the first snapshot, a second snapshot while the first snapshot is attached to the first virtual machine, wherein the second snapshot is a copy of the virtual disk at a second point in time;
attaching the second snapshot to a second virtual machine while the first snapshot is attached to the first virtual machine; and
causing, by a processing device, the second snapshot to be locked in view of the second virtual machine performing one or more operations on the second snapshot,
wherein the first virtual machine performs one or more operations on the first snapshot concurrent with the second virtual machine performing one or more operations on the second snapshot while the second snapshot is locked.

US Pat. No. 10,216,584

RECOVERY LOG ANALYTICS WITH A BIG DATA MANAGEMENT PLATFORM

International Business Ma...

1. A computer-implemented method for replicating relational transactional log data to a big data platform, comprising:
fetching, using a processor of a computer, change records contained in change data tables;
rebuilding a relational change history with transaction snapshot consistency to generate consistent change records by joining the change data tables and a unit of work table based on a commit sequence identifier, wherein the rebuilding is performed by one of a relational database management system and the big data platform; and
storing the consistent change records on the big data platform, wherein queries are answered on the big data platform using the consistent change records.
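The core of this claim is a join of change data tables to a unit of work table on the commit sequence identifier, so that only changes belonging to committed transactions survive. A minimal in-memory sketch (row shapes and field names are illustrative assumptions):

```python
def rebuild_change_history(change_rows, unit_of_work_rows):
    """Join change records to committed units of work on the commit
    sequence identifier, yielding snapshot-consistent change records."""
    committed = {u["commit_seq"]: u for u in unit_of_work_rows}
    consistent = []
    for row in change_rows:
        uow = committed.get(row["commit_seq"])
        if uow is not None:
            # Attach the commit metadata to the change record.
            consistent.append({**row, "commit_ts": uow["commit_ts"]})
    return consistent
```

In practice the same join could run either in the relational database management system or on the big data platform, as the claim allows.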

US Pat. No. 10,216,583

SYSTEMS AND METHODS FOR DATA PROTECTION USING CLOUD-BASED SNAPSHOTS

Veritas Technologies LLC,...

1. A computer-implemented method for data protection using cloud-based snapshots, at least a portion of the method being performed by a computing device comprising at least one processor, the method comprising:
identifying a request to back up an information asset hosted by a cloud-based platform;
discovering, in response to the request, a plurality of snapshots taken at the cloud-based platform, wherein at least some of the plurality of snapshots store data underlying the information asset but do not provide a consistent image of the information asset;
determining that a snapshot subset of the plurality of snapshots provides data sufficient to produce the consistent image of the information asset by iteratively attempting to recover, within a rehearsal environment, the consistent image of the information asset from each snapshot within the plurality of snapshots until encountering at least one snapshot that is sufficient to recover the consistent image;
performing a backup that provides the consistent image of the information asset from the snapshot subset based on a successful attempt to recover the consistent image of the information asset from the snapshot subset within the rehearsal environment.

US Pat. No. 10,216,582

RECOVERY LOG ANALYTICS WITH A BIG DATA MANAGEMENT PLATFORM

International Business Ma...

1. A computer program product for replicating relational transactional log data to a big data platform, the computer program product comprising a non-transitory computer readable storage medium having program code embodied therewith, the program code executable by at least one processor to perform operations comprising:
fetching change records contained in change data tables;
rebuilding a relational change history with transaction snapshot consistency to generate consistent change records by joining the change data tables and a unit of work table based on a commit sequence identifier, wherein the rebuilding is performed by one of a relational database management system and the big data platform; and
storing the consistent change records on the big data platform, wherein queries are answered on the big data platform using the consistent change records.

US Pat. No. 10,216,581

AUTOMATED DATA RECOVERY FROM REMOTE DATA OBJECT REPLICAS

International Business Ma...

1. A method for recovering data objects in a distributed data storage system, the method comprising:
storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network, wherein a first data center of the one or more data centers includes a first cluster of the one or more clusters and the first cluster includes a plurality of compute nodes, the first cluster further includes a database that stores metadata concerning a replica of each data object of each of the plurality of compute nodes of the first cluster, a first compute node of the plurality of compute nodes includes the first data object, and wherein an availability status of the one or more replicas is maintained in the database at the first cluster per replica, and wherein the availability status is not maintained at the first compute node;
recording health information metadata about said one or more replicas within the database, wherein the health information comprises data about availability of a replica to participate in a restoration process;
determining that the first data object is faulty when at least one replica of the first data object is determined to be lost or damaged;
in response to determining that the first data object is faulty, determining that the first data object is to be recovered when a number of replicas of the first data object that are damaged or lost exceeds a threshold number of replicas;
in response to determining that the first data object is to be recovered, calculating a query-priority for the first data object;
querying, based on the calculated query-priority, the health information metadata within the database for the one or more replicas to determine which of the one or more replicas is available for restoration of the first data object;
calculating a restoration-priority for the first data object based on the health information metadata for the one or more replicas; and
restoring the first data object from the one or more of the available replicas, based on the calculated restoration-priority and based on the availability status, wherein the restoration-priority is calculated based on a priority function P(D)=Func(N(D),C(D),n), where:
D represents a data object with multiple replicas in multiple clusters;
N(D) represents number of remote replicas for which H(D)i is available;
C(D) represents cost of losing N replicas of D;
P(D) represents priority given by the system for the restoration operation of D; and
Func( ) represents some function.
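The claim leaves Func( ) unspecified; one possible concrete instance (entirely an assumption for illustration) would make restoration priority grow with the cost of losing replicas and with how few healthy replicas remain:

```python
def restoration_priority(n_replicas_available, cost_of_loss, n_lost):
    """One hypothetical Func(N(D), C(D), n): priority scales with the
    cost of the lost replicas and inversely with how many healthy
    replicas remain available for restoration."""
    if n_replicas_available == 0:
        # No replica can participate in restoration: escalate maximally.
        return float("inf")
    return cost_of_loss * n_lost / n_replicas_available
```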

US Pat. No. 10,216,580

SYSTEM AND METHOD FOR MAINFRAME COMPUTERS BACKUP AND RESTORE ON OBJECT STORAGE SYSTEMS

MODEL9 SOFTWARE LTD., Ki...

1. A computer-implemented method comprising:
receiving a request for backing up a data set from a mainframe onto an object storage;
splitting the data set into a multiplicity of chunks, each chunk of the multiplicity of chunks having a predetermined size;
creating a mapping object;
repeating for each chunk:
allocating a sender thread to the chunk;
transmitting by the sender thread using an object storage Application Programming Interface (API), the chunk having the predetermined size as an object, to the object storage, to be stored as an object; and
updating the mapping object with details of the chunk;
subject to the data set being fully split and no more chunks to be transmitted, transmitting the mapping object to the object storage by the sender thread; and
writing an identifier of the data set and meta data including an object name of the mapping object, to a database stored in a storage device of the mainframe.
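The split/send/map flow of this claim can be sketched in a few lines of Python. The in-memory dict standing in for the object storage, the chunk-naming scheme, and the tiny chunk size are all illustrative assumptions, not the patented implementation.

```python
import concurrent.futures

CHUNK_SIZE = 4  # the "predetermined size"; tiny here for illustration

object_storage = {}  # stand-in for the object storage API


def put_object(name, data):
    object_storage[name] = data


def backup_data_set(data_set_id, payload):
    # split the data set into chunks of the predetermined size
    chunks = [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]
    mapping = {"data_set": data_set_id, "chunks": []}
    with concurrent.futures.ThreadPoolExecutor() as pool:  # sender threads
        for index, chunk in enumerate(chunks):
            name = f"{data_set_id}/chunk-{index}"
            pool.submit(put_object, name, chunk)  # transmit chunk as an object
            mapping["chunks"].append(name)        # update the mapping object
    # all chunks sent: transmit the mapping object itself
    mapping_name = f"{data_set_id}/mapping"
    put_object(mapping_name, mapping)
    return mapping_name  # recorded alongside the data set ID in the mainframe DB


name = backup_data_set("ds1", b"ABCDEFGHIJ")
```

Exiting the `ThreadPoolExecutor` context waits for every sender thread, which mirrors the claim's "subject to ... no more chunks to be transmitted" ordering before the mapping object is sent.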

US Pat. No. 10,216,579

APPARATUS, SYSTEM AND METHOD FOR DATA COLLECTION, IMPORT AND MODELING

International Business Ma...

1. A computer program product for data analysis of a backup system, the computer program product comprising:one or more non-transitory computer-readable storage media and program instructions stored on the one or more non-transitory computer-readable storage media, the program instructions comprising:
program instructions to generate a dump file for each of a plurality of backup servers, each dump file comprising configuration and state information about each of the plurality of backup servers in a native format used by each of the plurality of backup servers on which data is stored, wherein the backup servers backup the data from a primary storage layer to a common media layer;
program instructions to extract a first predetermined configuration and state information from the respective dump files of the plurality of backup servers, the first predetermined configuration and state information being in different formats based on the dump file from which it was extracted;
program instructions to translate the first predetermined configuration and state information from the format used by each of the plurality of backup servers into a normalized format, wherein the translated first configuration and state information comprises configuration and state information irrespective of which of the plurality of backup servers from which it was generated;
program instructions to store the translated first configuration and state information in a single database;
program instructions to generate a dump file for each of a plurality of different computer systems of the primary storage layer, each dump file comprising configuration and state information about each of the plurality of computer systems in a format used by each of the plurality of computer systems on which the data is stored, wherein the plurality of computer systems include server computers, desktop computers, and laptop computers which are physically located across various sites and use hardware and software from different vendors;
program instructions to extract a second predetermined configuration and state information from the respective dump files of the plurality of different computer systems, the second predetermined configuration and state information being in different formats based on the dump file from which it was extracted;
program instructions to translate the second predetermined configuration and state information from the format used by each of the plurality of computer systems into a normalized format, wherein the translated second configuration and state information comprises configuration and state information irrespective of which of the plurality of computer systems from which it was generated;
program instructions to store the translated second configuration and state information in the single database; and
program instructions to determine what components are in the backup system, how the backup system works, how data is stored in the backup system, how efficiently data is stored in the backup system, a total capacity of the backup system, a remaining capacity of the backup system, and an operating cost of the backup system by analyzing the normalized first and second configuration and state information stored in the single database.

US Pat. No. 10,216,578

DATA STORAGE DEVICE FOR INCREASING LIFETIME AND RAID SYSTEM INCLUDING THE SAME

Samsung Electronics Co., ...

1. A data storage device comprising:a nonvolatile memory arranged according to drives and stripes;
a buffer configured to store state information of each of the stripes; and
a memory controller comprising a redundant array of independent disks (RAID) controller configured to operate in a spare region mode and perform data recovery using garbage collection based on the state information,
wherein the state information includes at least one of a first state indicating that none of the drives has malfunctioned, a second state indicating that a first drive among the drives has malfunctioned, and a third state indicating that data/parity stored in the first drive has been recovered,
wherein upon detecting malfunction of the first drive, the RAID controller is further configured to change the state information from the first state to the second state, and
wherein the RAID controller is further configured to recover the data/parity stored in the first drive to a spare region of the nonvolatile memory and change state information of a stripe among the stripes including the spare region from the second state to the third state, and to move the recovered data/parity while performing the garbage collection from the spare region to a predetermined drive, and change the state information of the stripe including the spare region from the third state to the first state.
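The three stripe states and their transitions form a small state machine. A sketch, with state names and method names invented for readability (the claim only calls them first, second, and third states):

```python
# Stripe states from the claim: first = no drive failed, second = a drive
# failed, third = data/parity rebuilt into the spare region.
HEALTHY, DEGRADED, RECOVERED = "first", "second", "third"


class Stripe:
    def __init__(self):
        self.state = HEALTHY

    def on_drive_failure(self):
        # first -> second on detecting a malfunctioning drive
        self.state = DEGRADED

    def on_rebuild_to_spare(self):
        # second -> third once data/parity is recovered to the spare region
        self.state = RECOVERED

    def on_garbage_collection_move(self):
        # third -> first after GC moves the recovered data to a real drive
        self.state = HEALTHY


s = Stripe()
s.on_drive_failure()
s.on_rebuild_to_spare()
s.on_garbage_collection_move()
```

After the full failure/recover/move cycle the stripe is back in the first state, matching the claim's final transition.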

US Pat. No. 10,216,576

ENCODING DATA UTILIZING A ZERO INFORMATION GAIN FUNCTION

International Business Ma...

1. A method for execution by a processing module of a computing device of a dispersed storage network (DSN), the method comprises:dispersed storage error encoding, by the processing module and in accordance with distributed data storage parameters, a data segment to produce a set of encoded data slices and a set of zero information gain (ZIG) encoded data slices, wherein the set of encoded data slices is encoded in accordance with a dispersed storage error encoding scheme and wherein the set of ZIG encoded data slices is encoded using a ZIG function and further wherein a first ZIG encoded data slice of the set of ZIG encoded data slices is generated by matrix multiplying a first decoding matrix and a first encoded data slice of the set of encoded data slices, wherein generating the first ZIG encoded data slice includes generating a first partial encoded data slice based on the first decoding matrix, the first encoded data slice and a row of the encoding matrix corresponding to the first encoded data slice, generating a second partial encoded data slice based on a second decoding matrix, a second encoded data slice and a row of the encoding matrix corresponding to the second encoded data slice, wherein the second encoded data slice is an encoded data slice of the set of encoded data slices and not included in the subset of encoded data slices, and combining the first and second partial encoded data slices to produce the first ZIG encoded data slice;
selecting, by the processing module, a first subset of encoded data slices from the set of encoded data slices, wherein the first subset of encoded data slices includes less than a threshold number of encoded data slices, and further wherein the threshold number of encoded data slices is required to recreate the data segment;
sending, by the processing module via an interface of the computing device, the first subset of encoded data slices to a first memory within the DSN for storage therein; and
sending, by the processing module via the interface, the set of ZIG encoded data slices to a second memory within the DSN for storage therein.

US Pat. No. 10,216,575

DATA CODING

SanDisk Technologies LLC,...

1. A data storage device comprising:an encoder coupled to a memory controller and configured to receive input data and to map, based on a frequency of occurrence of groups of bits included in the input data, at least one input group of bits of the input data to generate output data including at least one output group of bits, wherein each input group of bits of the at least one input group of bits has the same number of bits as each corresponding output group of bits of the at least one output group of bits, wherein a first input group of bits occurs more frequently in the input data than a second input group of bits; and
a memory including multiple storage elements, each storage element of the multiple storage elements configured to be programmed to a voltage state corresponding to an output group of bits of the at least one output group of bits associated with the storage element, wherein a first voltage state corresponding to a first output group of bits mapped from the first input group of bits is lower than a second voltage state corresponding to a second output group of bits mapped from the second input group of bits.
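The core of this claim is a frequency-ranked remapping: the most common input bit-group is assigned the output code programmed to the lowest voltage state. A sketch, assuming (purely for illustration) that smaller numeric codes correspond to lower voltage states:

```python
from collections import Counter


def build_mapping(groups):
    """Map more frequent input bit-groups to lower voltage states.
    Input and output groups keep the same bit width: the existing codes
    are reused, just reassigned by frequency rank."""
    ranked = [g for g, _ in Counter(groups).most_common()]
    low_to_high = sorted(set(groups))  # assumption: numeric order tracks voltage
    return dict(zip(ranked, low_to_high))


# 0b11 occurs most often, so it is mapped to the lowest-state code 0b01;
# the rarest group 0b10 lands on the highest state.
groups = [0b11, 0b11, 0b11, 0b01, 0b01, 0b10]
mapping = build_mapping(groups)
```

Biasing frequent groups toward low voltage states reduces average programmed voltage, which is the usual motivation for this kind of shaping code.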

US Pat. No. 10,216,574

ADAPTIVE ERROR CORRECTION CODES FOR DATA STORAGE SYSTEMS

WESTERN DIGITAL TECHNOLOG...

1. A data storage system, comprising:a non-volatile memory array comprising a plurality of memory pages; and
a controller configured to:
access coding parameters used to encode user data and parity data to be stored in the plurality of memory pages;
encode, using the coding parameters, first user data and first parity data as a first data unit;
store the first data unit in the plurality of memory pages;
decode, using the coding parameters, the first data unit retrieved from the plurality of memory pages;
detect a first number of bit errors encountered during decoding the first data unit retrieved from the plurality of memory pages;
in response to determining that the first number of bit errors exceeds a first threshold, adjust the coding parameters to increase an amount of parity data per total data that is included in data units subsequently encoded and stored in the plurality of memory pages; and
encode, using the adjusted coding parameters, second user data and second parity data as a second data unit to be stored in the plurality of memory pages.
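The feedback loop of this claim (decode, count bit errors, strengthen parity when a threshold is crossed) reduces to a few lines. The concrete numbers and attribute names are invented for the sketch:

```python
class AdaptiveEccController:
    """Sketch: raise the parity-per-data ratio once decode bit errors
    exceed a threshold, so later data units are encoded more strongly."""

    def __init__(self, parity_bytes=16, threshold=8, step=8):
        self.parity_bytes = parity_bytes  # part of the coding parameters
        self.threshold = threshold        # the claim's "first threshold"
        self.step = step

    def after_decode(self, bit_errors):
        if bit_errors > self.threshold:
            self.parity_bytes += self.step  # more parity per total data


ctrl = AdaptiveEccController()
ctrl.after_decode(bit_errors=3)   # below threshold: parameters unchanged
ctrl.after_decode(bit_errors=12)  # above threshold: parameters strengthened
```

Subsequent data units would then be encoded with the adjusted `parity_bytes`, trading capacity for correction strength as the memory wears.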

US Pat. No. 10,216,573

METHOD OF OPERATING A MEMORY DEVICE

Infineon Technologies AG,...

1. A method of correcting and/or detecting an error in a memory device, the method comprising:in a first operations mode, applying a first code to detect and/or correct an error; and
in a second operations mode after an inactive mode and before entering the first operations mode, applying a second code for correcting and/or detecting an error, wherein the first code and the second code have different code words.

US Pat. No. 10,216,572

FLASH CHANNEL CALIBRATION WITH MULTIPLE LOOKUP TABLES

NGD Systems, Inc., Invin...

1. A method, comprising:calibrating a flash memory; and
performing an adaptive multi-read operation based on the calibration,
the calibrating comprising:
performing a first read operation on a first plurality of flash memory cells, at a first word line voltage, to form a first raw data word;
performing a second read operation on the first plurality of flash memory cells, at a second word line voltage, to form a second raw data word;
executing a first error correction code decoding attempt with a first set of one or more raw data words including the first raw data word;
determining that the first error correction code decoding attempt has succeeded; and
based on the determining that the first error correction code decoding attempt has succeeded:
generating,
from bit differences between
 the first set of one or more raw data words and
 one or more corresponding decoded data words generated by the first error correction code decoding attempt,
a first lookup table including:
 a first log likelihood ratio corresponding to a first range of word line voltages and
 a second log likelihood ratio corresponding to a second range of word line voltages; and
generating,
from bit differences between
 a second set of two or more raw data words and
 two or more corresponding decoded data words generated by the first error correction code decoding attempt,
a second lookup table including:
 a third log likelihood ratio corresponding to a third range of word line voltages;
 a fourth log likelihood ratio corresponding to a fourth range of word line voltages; and
 a fifth log likelihood ratio corresponding to a fifth range of word line voltages.
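The lookup-table entries in this claim are log likelihood ratios derived from how often raw read bits disagreed with the successfully decoded bits. A crude sketch of that derivation (the clamping and the exact LLR formula are illustrative assumptions):

```python
import math


def llr_from_disagreements(n_bits, n_flipped):
    """Estimate an LLR for one word-line-voltage range from the count of
    raw bits that disagreed with the ECC-decoded data words."""
    p = max(n_flipped, 1) / n_bits  # clamp so a clean bin avoids log(0)
    return math.log((1 - p) / p)


# A voltage range with few disagreements yields a large, confident LLR;
# a noisy range near a threshold yields a small one.
confident = llr_from_disagreements(1024, 2)
shaky = llr_from_disagreements(1024, 200)
```

Filling the first lookup table with two such LLRs (and the second with three, over finer voltage ranges) then gives the soft inputs for the adaptive multi-read operation.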

US Pat. No. 10,216,571

SYSTEM AND METHODOLOGY FOR ERROR MANAGEMENT WITHIN A SHARED NON-VOLATILE MEMORY ARCHITECTURE USING BLOOM FILTERS

WESTERN DIGITAL TECHNOLOG...

1. A system, comprising:a non-volatile memory (NVM) component configured to store data in an NVM array;
an error tracking table (ETT) component configured to store error correction vector (ECV) information associated with the NVM array, wherein the ETT component is within one of a dynamic random access memory (DRAM) or a second NVM component;
a controller configured to perform a parallel query of the NVM array and the ETT component, wherein the parallel query includes a query of the NVM array that yields a readout of the NVM array and a query of the ETT component that yields a construction of an ECV corresponding to the readout of the NVM array; and
at least one Bloom filter configured to predict at least one subset of ETT component entries in which at least one of the ETT component entries corresponds to a reporting of an error in the NVM array.
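The Bloom filter here acts as a cheap pre-check: only addresses the filter flags need a lookup in the (slower) error tracking table. A minimal self-contained filter, with size and hash count chosen arbitrarily for the sketch:

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter: may report false positives, never false
    negatives, which is exactly the property the error pre-check needs."""

    def __init__(self, size=256, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        return all(self.bits >> pos & 1 for pos in self._positions(key))


# Record that NVM location "row-42" has an entry in the error tracking table.
errors = BloomFilter()
errors.add("row-42")
```

During the parallel query, a `might_contain` hit narrows the ETT lookup to the predicted subset of entries; a miss guarantees the location has no recorded error.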

US Pat. No. 10,216,570

MEMORY DEVICE AND CONTROL METHOD THEREOF

Winbond Electronics Corpo...

1. A memory device, comprising:a memory block including a plurality of sectors and a plurality of refresh units, each refresh unit including at least one of the plurality of sectors; and
a control unit configured to:
pre-store a plurality of first indicators in a storage unit, the plurality of first indicators respectively corresponding to the plurality of refresh units in the memory block, and each one of the plurality of first indicators being generated based on data obtained by reading a corresponding one of the plurality of refresh units with a first reference voltage level; and
in an erase cycle for erasing a target sector of the plurality of sectors in the memory block:
select one of the plurality of refresh units;
read data from the selected refresh unit with a second reference voltage level different from the first reference voltage level;
generate a second indicator for the selected refresh unit based on the data read from the selected refresh unit with the second reference voltage level;
compare one of the plurality of first indicators that corresponds to the selected refresh unit with the second indicator of the selected refresh unit;
if the second indicator of the selected refresh unit is not equal to the one of the plurality of first indicators that corresponds to the selected refresh unit, refresh data in the selected refresh unit; and
if the second indicator of the selected refresh unit is equal to the one of the plurality of first indicators that corresponds to the selected refresh unit, select a next one of the plurality of refresh units.
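The erase-cycle check above is an indicator comparison loop. A simplified Python sketch, with the unit names, indicator values, and the shifted-voltage read stubbed out as illustrative assumptions:

```python
def erase_cycle_check(first_indicators, read_with_shifted_vref):
    """Walk the refresh units; return the first whose indicator derived at
    the second reference voltage no longer matches the pre-stored one
    (that unit's data has drifted and should be refreshed)."""
    for unit, stored in first_indicators.items():
        if read_with_shifted_vref(unit) != stored:
            return unit  # mismatch: refresh this unit
    return None  # all matched: nothing to refresh this cycle


stored = {"u0": 0xA1, "u1": 0x5C, "u2": 0x77}
# Simulated second-voltage reads: u1's cells have drifted.
drifted = lambda unit: {"u0": 0xA1, "u1": 0x5D, "u2": 0x77}[unit]
to_refresh = erase_cycle_check(stored, drifted)
```

Reading the same cells at two different reference voltages and comparing derived indicators is a standard way to detect charge drift without a full data compare.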

US Pat. No. 10,216,569

SYSTEMS AND METHODS FOR ADAPTIVE DATA STORAGE

FIO Semiconductor Technol...

1. A method, comprising:managing, via a storage module, storage operations for a solid-state storage array;
queuing storage requests for the solid-state storage array in an ordered request buffer;
reordering the storage requests in the ordered request buffer, wherein the storage module comprises:
a logical-to-physical translation layer; and
the ordered request buffer, wherein the ordered request buffer is configured to receive storage requests from one or more store clients, and to buffer storage requests received via a bus;
generating, via an error-correcting code write module, an error-correcting code codeword comprising data for storage on the solid-state storage array, wherein the error-correcting code codeword is used to detect errors in data read from the solid-state storage array, correct errors in data read from the solid-state storage array, or a combination thereof;
generating, via a write module, data rows for storage within columns of the solid-state storage array, wherein each of the data rows comprises data of two or more different error-correcting code codewords;
generating, via a parity module, respective parity data for each of the data rows; and
reconstructing, via a data reconstruction module, an uncorrectable error-correcting code codeword of the two or more different error-correcting code codewords by accessing data rows and the respective parity data comprising the two or more different error-correcting code codewords.
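The parity-based reconstruction of an uncorrectable codeword can be illustrated with the simplest possible parity, XOR across a data row; the patent does not specify XOR, so take this as a hedged stand-in for the parity module:

```python
def row_parity(row):
    """XOR parity over the columns of one data row (a common simple choice)."""
    out = 0
    for value in row:
        out ^= value
    return out


def reconstruct(row, parity, missing_index):
    """Rebuild one uncorrectable column value from the rest of the row
    plus the row's parity, without touching the lost column itself."""
    out = parity
    for i, value in enumerate(row):
        if i != missing_index:
            out ^= value
    return out


row = [0b1010, 0b0110, 0b1111]   # fragments of different ECC codewords
parity = row_parity(row)
recovered = reconstruct(row, parity, missing_index=1)
```

Because each row mixes fragments of two or more codewords, one failed codeword can be rebuilt column-by-column from rows whose other fragments decoded cleanly.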

US Pat. No. 10,216,568

LIVE PARTITION MOBILITY ENABLED HARDWARE ACCELERATOR ADDRESS TRANSLATION FAULT RESOLUTION

International Business Ma...

1. A method for memory translation fault resolution between a processing core and a hardware accelerator, the method comprising:configuring a first table having an entity identifier associated with an effective address for an entity, the first table operatively coupled to an operating system;
forwarding an operation from the processing core to a first buffer associated with the hardware accelerator;
determining at least one memory address translation related to the operation having a fault;
flushing the operation and the fault memory address translation from the hardware accelerator, including augmenting the operation with the entity identifier;
forwarding the operation with the fault memory address translation, including the entity identifier, from the hardware accelerator to a second buffer, the second buffer operatively coupled to an element selected from the group consisting of: a hypervisor and the operating system;
repairing the fault memory address translation, including an interruption of execution of the operating system;
sending the operation with the repaired memory address translation to the processing core utilizing the effective address for the entity based on the first table and the entity identifier within the fault memory address translation;
forwarding the operation with the repaired memory address translation from the second buffer to the first buffer supported by the processing core; and
executing the operation with the repaired memory address translation.

US Pat. No. 10,216,566

FIELD PROGRAMMABLE GATE ARRAY

Hitachi, Ltd., Tokyo (JP...

1. A field programmable gate array, comprising:a hard macro CPU in which a circuit structure is fixed;
a programmable logic in which a circuit structure is changeable;
a diagnosis circuit which diagnoses an abnormality of the programmable logic;
a fail-safe interface circuit which is able to control an external output from the programmable logic to a safe side; and
a function in which the hard macro CPU is instructed to output a fail-safe signal which is an output to a safe side to the fail-safe interface circuit when an error is detected by the diagnosis circuit;
wherein the fail-safe interface circuit is provided in the programmable logic, and
wherein an instruction from the hard macro CPU to the fail-safe interface circuit is issued through a communication path in which data is able to be transmitted only from the hard macro CPU to the programmable logic.

US Pat. No. 10,216,565

ROOT CAUSE ANALYSIS

International Business Ma...

1. A method for performing a root cause analysis, said method comprising:opening, by a central processing unit (CPU), a file comprising event data;
recording, by the CPU, recordation data of a user's observable behavior while viewing the event data of the file, wherein the user's observable behavior includes the user's eye gaze;
identifying, by the CPU, a presence of one or more events of interest as a function of the user's observable behavior while viewing the event data of the file;
calculating, by the CPU, an interest score for each of the identified events of interest, wherein the interest score is a probability of each of the identified events of interest being a root cause of a defect; and
tagging, by the CPU, each of the events of interest within the file with a tag as a function of each calculated interest score;
wherein said identifying comprises:
tracking, by the CPU, a focal point of the user's eye gaze;
correlating, by the CPU, the focal point of the user's eye gaze to a viewing position of a display device displaying the file;
identifying, by the CPU, as a function of the viewing position, the event data being viewed and an amount of time that the event data is viewed by the user; and
further identifying, by the CPU, an emotive expression of the user during an amount of time focused on the viewing position, and
wherein said calculating comprises:
assigning, by the CPU, a numerical value to the viewing position, amount of time, emotive expression and event data viewed by the user; and
inserting, by the CPU, the numerical value assigned to the viewing position, amount of time, emotive expression and text of the event data, into a linear regression model; and
outputting, by the CPU, as a function of the linear regression model, a value of the interest score.
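The scoring step is a linear model over the four gaze-derived features. A sketch with invented feature encodings and weights (the patent specifies only "a linear regression model", not these values):

```python
def interest_score(viewing_position, seconds_viewed, emotive_value, event_value,
                   weights=(0.1, 0.4, 0.3, 0.2), bias=0.0):
    """Linear-regression-style interest score over the numerical values
    assigned to viewing position, dwell time, emotive expression, and
    the event data viewed. All coefficients are illustrative."""
    features = (viewing_position, seconds_viewed, emotive_value, event_value)
    return bias + sum(w * f for w, f in zip(weights, features))


# An event stared at longer, with a stronger emotive reaction, scores
# higher and is tagged as a more likely root cause.
high = interest_score(1.0, 12.0, 0.9, 0.5)
low = interest_score(1.0, 2.0, 0.1, 0.5)
```

In the claim these scores are then interpreted as probabilities of being the root cause, so a trained model would typically squash the linear output into [0, 1].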

US Pat. No. 10,216,564

HIGH VOLTAGE FAILURE RECOVERY FOR EMULATED ELECTRICALLY ERASABLE (EEE) MEMORY SYSTEM

NXP USA, Inc., Austin, T...

1. A method for managing failing sectors in a semiconductor memory device that includes a volatile memory, a non-volatile memory, and a memory controller coupling the volatile memory and the non-volatile memory, the method comprising:detecting that a failure to program (FTP) error occurred during a sector identifier (ID) update action for a sector in the non-volatile memory, wherein
the sector is associated with a failure status indicator that indicates healthy status;
in response to determining the FTP error occurred while attempting to program a sector ID of the sector to one of a READY, READYQ, FULL, and FULLQ erase status, updating the failure status indicator to indicate a dead status; and
in response to determining the FTP error occurred while attempting to program the sector ID to one of a FULLE, FULLEQ, FULLC, and FULLCQ erase status, updating the failure status indicator to indicate a read-only status.
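The claim's two erase-status groups map cleanly onto a lookup. A sketch using the status names from the claim (the lowercase result strings are illustrative labels for the failure status indicator):

```python
# Erase statuses being programmed when the failure-to-program (FTP) error hit:
DEAD_ON_FTP = {"READY", "READYQ", "FULL", "FULLQ"}
READ_ONLY_ON_FTP = {"FULLE", "FULLEQ", "FULLC", "FULLCQ"}


def failure_status_after_ftp(target_erase_status):
    """Map the sector-ID erase status that was being programmed when the
    FTP error occurred to the sector's new failure status."""
    if target_erase_status in DEAD_ON_FTP:
        return "dead"
    if target_erase_status in READ_ONLY_ON_FTP:
        return "read-only"
    return "healthy"  # FTP outside these updates leaves the status alone
```

The intuition is that a failure early in the sector lifecycle (READY/FULL family) makes the sector unusable, while a failure while marking a full sector (FULLE/FULLC family) still leaves its existing contents readable.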

US Pat. No. 10,216,562

GENERATING DIAGNOSTIC DATA

International Business Ma...

1. An apparatus comprising:a processor;
a memory storing code executable by the processor to:
detect that a first address space of an application references one or more second address spaces during execution of the application, the first address space comprising a main address space for the execution of the application and the one or more second address spaces comprising address spaces that comprise information that the application references during execution;
dynamically create an entry in a data structure for mapping the first address space of an application executing in the first address space to the one or more second address spaces in response to detecting the application referencing the one or more second address spaces during execution of the application;
if, during execution of the application, a diagnostic trigger for the first address space is detected:
check the data structure for one or more second address spaces mapped to the first address space; and
generate one or more dump files comprising diagnostic data for the first address space and the one or more second address spaces; and
if, during execution of the application, a diagnostic trigger for the address space is not detected and a second address space of the one or more second address spaces is no longer referenced by the first address space, dynamically remove the entry in the data structure of the mapping of the first address space to the second address space that is no longer referenced by the first address space.

US Pat. No. 10,216,560

INTEGRATION BASED ANOMALY DETECTION SERVICE

Amazon Technologies, Inc....

1. A system comprising:a memory storing data regarding operating parameters related to performance of a computing system; and
a computer processor in communication with the memory, the computer processor programmed by computer-executable instructions to at least:
receive, from a monitored source, a first set of input data for an operating parameter at a first time;
determine, based at least in part on the first set of input data, a predicted value for the operating parameter that is expected at a second time;
determine a permitted relationship between the predicted value and a second set of input data for the operating parameter that is expected at the second time;
receive the second set of input data for the operating parameter at the second time;
determine that the second set of input data for the operating parameter at the second time does not satisfy the permitted relationship;
in response to determining that the second set of input data for the operating parameter at the second time does not satisfy the permitted relationship, identify an anomaly detection; and
cause display of a graphical interface presenting an anomaly notification, wherein the graphical interface enables receipt of an indication that the anomaly notification is erroneous.
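One common concrete form of the claim's "permitted relationship" is a tolerance band around the predicted value; the patent does not fix the relationship, so the relative band below is an illustrative assumption:

```python
def detect_anomaly(predicted, observed, tolerance=0.2):
    """Sketch of a permitted relationship: the second set of input data
    must fall within a relative tolerance band around the value predicted
    from the first set; anything outside the band is an anomaly."""
    band = abs(predicted) * tolerance
    return not (predicted - band <= observed <= predicted + band)


# Prediction from the first sample; the second sample either satisfies
# the permitted relationship or triggers an anomaly notification.
normal = detect_anomaly(predicted=100.0, observed=110.0)
anomalous = detect_anomaly(predicted=100.0, observed=150.0)
```

The claim's final element (letting the user mark a notification as erroneous) would feed back into whatever produces `predicted` and `tolerance`, tightening or loosening the band over time.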

US Pat. No. 10,216,558

PREDICTING DRIVE FAILURES

EMC IP Holding Company LL...

1. A computer-implemented method for predicting drive failures, the method comprising:collecting any one or more samples of drive health indicators from a drive over a specified time period, wherein the samples of drive health indicators include one or more Self-Monitoring, Analysis and Reporting Technology (SMART) attributes obtained from the drive;
performing a first feature selection modeling of a last collected sample of SMART drive health indicators to generate a drive feature for the drive, the drive feature for modeling a drive health at a time of the last collected sample;
performing a second feature engineering modeling of collected samples of SMART drive health indicators over the specified time period to generate one or more drive behavior history features for the drive, the drive behavior history features for modeling the drive health over the specified time period; and
classifying the drive as more likely to experience failure than other drives, the classifying based on predicted drive failure probabilities representing the drive health, including:
the drive health at the time of the last collected sample as modeled by the drive feature, and
the drive health over the specified time period as modeled by the drive behavior history features.

US Pat. No. 10,216,557

METHOD AND APPARATUS FOR MONITORING AND ENHANCING ON-CHIP MICROPROCESSOR RELIABILITY

International Business Ma...

1. A system for projecting reliability to manage system functions, comprising:an activity module which determines activity in the system that occurs during operation of the system;
a reliability module interacting with the activity module to determine a reliability measurement for regions for a current period within the system in real-time based upon the activity and measured operational quantities of the system, wherein the reliability measurement characterizes one or more potential physical failure mechanisms; and
a management module comprising a processor comparing the reliability measurement within the system to a locally stored reliability target and increasing activity of the system during operation of the system based on whether the reliability measurement is determined to be above or below the stored reliability target;
wherein increasing the activity of the system includes one of increasing a clock rate, reallocating resources, and increasing current or voltage.

US Pat. No. 10,216,556

MASTER DATABASE SYNCHRONIZATION FOR MULTIPLE APPLICATIONS

SAP SE, Walldorf (DE)

1. An apparatus comprising:a hardware processor; and
a memory having stored therein instructions that, when executed by the hardware processor, cause the apparatus to perform operations for reducing resources consumed by a first application of a plurality of applications when accessing a master data store, the operations comprising:
accessing master data from a master data source, the master data to be employed by the plurality of applications;
accessing schema of the master data from the master data source, the schema of the master data to be employed by the plurality of applications;
generating one or more publication requests to store the master data and the schema of the master data to a master data store accessible by the plurality of applications; and
causing the schema of the master data stored in the master data store to be stored in a local cache of a first application of the plurality of applications for access by the first application, the causing of the schema to be stored in the local cache allowing the application to access the schema without consuming resources of the master data store.

US Pat. No. 10,216,555

PARTIALLY RECONFIGURING ACCELERATION COMPONENTS

Microsoft Technology Lice...

1. A method for partially reconfiguring an acceleration component programmed with a role, the role linked via an area network to one or more of: a downstream role at a downstream neighbor acceleration component and an upstream role at an upstream neighbor acceleration component to compose a graph providing an instance of service acceleration, the method comprising:detecting a reason for changing the role;
halting the role, including instructing at least one of: the downstream role and the upstream role to stop receiving data from the role;
partially reconfiguring the acceleration component by writing an image for the role to the acceleration component;
maintaining a network interface programmed into the acceleration component and a second role programmed into the acceleration component as operational during partially reconfiguring the acceleration component, maintaining the network interface permitting the second role to exchange network communication via the area network with one or more other roles at other acceleration components, the second role linked to the one or more other roles to compose another graph providing another instance of service acceleration, wherein the graph provides service acceleration for a service selected from among: document ranking, data encryption, data compression, speech translation, computer vision, or machine learning; and
activating the role at the acceleration component after partially reconfiguring the acceleration component is complete, including notifying the at least one of: the downstream role and the upstream role that the role is operational.

US Pat. No. 10,216,554

API NOTEBOOK TOOL

Mulesoft, Inc., San Fran...

1. A system, comprising:a processor configured to:
dynamically generate a client for calling an API for a service using a library for an API specification and an application programming interface (API) notebook tool, wherein the API specification includes a description of one or more APIs in an API modeling language including the API for the service, and wherein the client for calling the API for the service is dynamically generated based on the API specification;
convert the API into an object model stored as a note in a data store, wherein the note includes a coded implementation of the client and documentation for a documented usage scenario of the API implemented by the client, wherein user credentials for authenticating with the service are cached locally and are not stored in the data store, and wherein confidential content is removed from a results cell;
load the note from the data store using the API notebook tool, wherein the note was previously saved in the data store; and
save a modified version of the note in the data store using the API notebook tool, wherein the data store is an open collaboration repository, wherein the note is shared with a plurality of users, wherein each of the plurality of users can execute and/or edit the note to provide for a modified usage scenario of the API, and wherein user credentials of each of the plurality of users for authenticating with the service are cached locally and are not stored with the note in the open collaboration repository in the data store; and
a memory coupled to the processor and configured to provide the processor with instructions.

US Pat. No. 10,216,553

MESSAGE ORIENTED MIDDLEWARE WITH INTEGRATED RULES ENGINE

International Business Ma...

1. A message processing data processing system for managing a messaging component in message oriented middleware, the system comprising:a host computer including:
a processor set including at least one processor,
memory,
a messaging engine, and
a rules engine coupled to the messaging engine;
wherein:
the rules engine and messaging engine are programmed to establish working memory in shared memory of message oriented middleware executing by the processor set of the host computer for use by the messaging engine, to detect a change in the messaging component, to determine if the change corresponds to an addition of an object to the messaging component and, on condition the change corresponds to an addition of a new object to the messaging component, to create a token in the working memory, but on condition the change corresponds to a deletion of an existing object from the messaging component, to delete a token from the working memory, and on condition the change corresponds to a change to an existing object of the messaging component that is not a deletion of the existing object, to apply a change to an existing token in the working memory; to observe the working memory to detect changes in one or more tokens in the working memory and, in response to detecting a change to one or more of the tokens in the working memory, to apply, by the rules engine and the messaging engine, management rules to the tokens in the working memory in order to direct management actions in the messaging component, wherein the rules engine and messaging engine further ensure that tokens in the memory correspond to but are separate from objects in the messaging engine by placing a message on a queue, inserting a token corresponding to the placed message in memory, and linking the token to the corresponding message.

US Pat. No. 10,216,552

ROBUST AND ADAPTABLE MANAGEMENT OF EVENT COUNTERS

INTERNATIONAL BUSINESS MA...

1. A method for improved accuracy of a counter design implemented in computer hardware to ensure that the counter design captures a design event and avoids a race condition between a context event and the design event, the method comprising:receiving a plurality of events within the counter design, the plurality of events including the context event and the design event;
dynamically determining, by the computer hardware, a tolerance window defined around the context event, the tolerance window comprising a first window portion before the context event and a second window portion after the context event;
statically determining, by the computer hardware, a width of the tolerance window based on maximum effective path delays of the context event and the design event; and
performing a verification algorithm to determine that the design event is captured within the tolerance window and is accounted for by a design model counter of the counter design to avoid the race condition between the context event and the design event.
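The window check recited in this claim can be sketched as follows. This is an illustrative reading only: event times are modeled as abstract cycle counts, and the window portions are taken as given (the claim derives them elsewhere from the maximum effective path delays); all names are assumptions.

```python
# Hypothetical sketch of the tolerance-window check around a context event.
# Times are abstract cycle counts; `before`/`after` are the two window
# portions the claim defines around the context event.

def tolerance_window(context_time: int, before: int, after: int) -> tuple:
    """Window with a portion before and a portion after the context event."""
    return (context_time - before, context_time + after)

def design_event_captured(context_time: int, design_time: int,
                          before: int, after: int) -> bool:
    """True when the design event falls inside the tolerance window,
    so no race between the context event and design event is reported."""
    lo, hi = tolerance_window(context_time, before, after)
    return lo <= design_time <= hi
```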

US Pat. No. 10,216,551

USER INFORMATION DETERMINATION SYSTEMS AND METHODS

Intertrust Technologies C...

1. A method performed by a system comprising a processor and a non-transitory computer-readable storage medium storing instructions that, when executed, cause the system to perform the method, the method comprising:receiving, at an interface of the system, application usage information from an electronic device associated with a user, the application usage information being associated with an application installed on the electronic device;
mapping the application usage information to one or more interest taxonomies to identify one or more interests associated with the user;
determining one or more relative adjusted weights associated with the identified one or more interests based on the application usage information, wherein determining the one or more relative adjusted weights comprises:
determining one or more decay rates based on an indication of a momentum associated with the application, the momentum being determined based on a first density of use of the application over a first time period and a second density of use of the application over a second time period, the second time period being longer than the first time period; and
adjusting one or more initial relative weights based on the one or more decay rates to generate the one or more relative adjusted weights;
associating the one or more relative adjusted weights with the identified one or more interests to generate one or more weighted interests;
identifying one or more content items based on the one or more weighted interests; and
transmitting the one or more content items to the electronic device.
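The momentum-based weight adjustment in this claim can be sketched as below. The claim does not disclose concrete formulas, so the momentum ratio, the decay mapping, and the adjustment factor here are all illustrative assumptions.

```python
# Hypothetical sketch of the claimed decay-rate weight adjustment.
# Momentum compares short-term use density to long-term use density;
# higher momentum is assumed to mean slower decay of interest weights.

def momentum(short_density: float, long_density: float) -> float:
    """Ratio of recent use density to longer-term use density (assumed)."""
    if long_density == 0:
        return 0.0
    return short_density / long_density

def decay_rate(m: float) -> float:
    """Map momentum to a decay rate in (0, 1]; higher momentum decays less."""
    return 1.0 / (1.0 + m)

def adjusted_weights(initial_weights: list, short_density: float,
                     long_density: float) -> list:
    """Adjust each initial relative weight by the derived decay rate."""
    d = decay_rate(momentum(short_density, long_density))
    return [w * (1.0 - d * 0.5) for w in initial_weights]
```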

US Pat. No. 10,216,550

TECHNOLOGIES FOR FAST BOOT WITH ADAPTIVE MEMORY PRE-TRAINING

Intel Corporation, Santa...

1. A computing device for memory parameter pre-training, the computing device comprising:a processor, a memory controller, and a non-volatile storage device; and
a boot loader to (i) determine whether a pre-trained memory parameter data set is inconsistent in response to a reset of the processor, wherein the pre-trained memory parameter data is stored by the non-volatile storage device, (ii) send a message that requests full memory training to a safety microcontroller via a serial link in response to a determination that the pre-trained memory parameter data set is inconsistent, (iii) determine whether a full memory training signal is raised via a general-purpose I/O link with the safety microcontroller in response to a determination that the pre-trained memory parameter data set is consistent, (iv) execute a fast boot path to initialize the memory controller with the pre-trained memory parameter data set in response to a determination that the full memory training signal is not raised, and (v) execute a slow boot path to generate the pre-trained memory parameter data set in response to a determination that the full memory training signal is raised.
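The boot loader's decision sequence (i)–(v) can be sketched as plain control flow. The callables standing in for the safety-microcontroller request and the two boot paths are hypothetical; the serial-link and GPIO mechanics are abstracted into two booleans.

```python
# Hypothetical sketch of the claimed boot-path selection.
# params_consistent: result of checking the pre-trained memory parameter set.
# full_training_signal_raised: state of the (assumed) GPIO line from the
# safety microcontroller.

def select_boot_path(params_consistent: bool, full_training_signal_raised: bool,
                     request_full_training, fast_boot, slow_boot):
    if not params_consistent:
        # (ii) inconsistent pre-trained data: ask the safety microcontroller
        # for full memory training, then regenerate via the slow path.
        request_full_training()
        return slow_boot()
    if full_training_signal_raised:
        # (v) training requested: slow path regenerates the parameter set.
        return slow_boot()
    # (iv) consistent data, no training request: fast path initializes
    # the memory controller from the cached parameters.
    return fast_boot()
```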

US Pat. No. 10,216,549

METHODS AND SYSTEMS FOR PROVIDING APPLICATION PROGRAMMING INTERFACES AND APPLICATION PROGRAMMING INTERFACE EXTENSIONS TO THIRD PARTY APPLICATIONS FOR OPTIMIZING AND MINIMIZING APPLICATION TRAFFIC

SEVEN NETWORKS, LLC, Mar...

1. A method for optimizing and minimizing application traffic in a wireless network, the method comprising:defining an application programming interface (API) for controlling application traffic between an application client residing on a mobile device that operates within a wireless network and an application server not residing on the mobile device; and
using the API to optimize application traffic in the wireless network including controlling, by the mobile device, traffic sent by the application server to the mobile device, wherein using the API to optimize application traffic includes using the API for:
providing a subscriber tiering and reporting service having a premium subscriber tier;
providing delivery notification to a sending entity subscribing to the premium subscriber tier;
sending a plurality of data packets together as a batch within a defined window of time, wherein the defined window of time is determined by a time criticality of the plurality of data packets;
adjusting message priority for entities subscribing to the premium subscriber tier; and
providing special traffic reporting to a reporting server based on a reporting policy received from a policy management server.

US Pat. No. 10,216,547

HYPER-THREADED PROCESSOR ALLOCATION TO NODES IN MULTI-TENANT DISTRIBUTED SOFTWARE SYSTEMS

International Business Ma...

1. A computer program product comprising a non-transitory computer readable storage medium having a computer readable program for allocating a hyper-threaded processor to nodes of multi-tenant distributed software systems stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to:responsive to receiving a request to provision a node of the multi-tenant distributed software system on the host data processing system, identify a cluster of nodes to which the node belongs;
determine whether the node is a first type of node or a second type of node;
responsive to the node being the second type of node, determine whether another second type of node in the same cluster has been provisioned on the host data processing system;
responsive to determining that another second type of node in the same cluster has been provisioned on the host data processing system, determine whether a number of unallocated virtual processors (VPs) on different physical processors from that of the other second type of node is greater than or equal to a requested number of VPs for the second type of node;
responsive to the number of unallocated VPs on different physical processors from that of the other second type of node being greater than or equal to the requested number of VPs for the second type of node, allocate the requested number of VPs for the second type of node each to a different physical processor from that of the other second type of node; and
responsive to the number of unallocated VPs on different physical processors from that of the other second type of node being less than the requested number of VPs for the second type of node, allocate up to the requested number of VPs for the second type of node to as many different physical processors as supported by the different physical processors from that of the other second type of node; and
allocate any remaining unallocated VPs from the requested number of VPs for the second type of node to other physical processors.
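The anti-affinity placement in the last three limitations can be sketched as follows. The free-VP bookkeeping (a processor-to-count map) and the one-VP-per-processor simplification are assumptions; the claim itself speaks only of allocating to processors different from those of the other second-type node, then spilling the remainder.

```python
# Hypothetical sketch of the claimed virtual-processor (VP) allocation.
# free: physical processor -> count of unallocated VPs.
# peer_processors: processors already hosting the other node's VPs.

def allocate_vps(free: dict, peer_processors: set, requested: int) -> list:
    """Place each requested VP on a different physical processor, preferring
    processors the peer node does not use; spill any remainder onto the
    peer's processors (the claim's fallback limitation)."""
    preferred = [p for p, n in free.items() if n > 0 and p not in peer_processors]
    allocation = preferred[:requested]
    if len(allocation) < requested:
        spill = [p for p, n in free.items() if n > 0 and p in peer_processors]
        allocation += spill[:requested - len(allocation)]
    return allocation
```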

US Pat. No. 10,216,546

COMPUTATIONALLY-EFFICIENT RESOURCE ALLOCATION

Insitu Software Limited, ...

1. A method operative in a computing system to associate a set of first entities to a set of second entities, wherein a second entity corresponds to a vertex in a network graph, comprising:associating a grouping of first entities with a particle of a set of particles, each particle having an attribute set;
configuring the set of particles into a force directed graph, wherein the particles are configured with respect to one another according to attractions or repulsions derived from the particle attribute sets;
bringing the force directed graph into an equilibrium state;
thereafter mapping the particles of the force directed graph onto the network graph; and
executing a simulation against the network graph that has been mapped with the particles of the force directed graph to associate the set of first entities to the set of second entities;
wherein mapping the network graph with the particles of the force directed graph improves efficiency of the computing system executing the simulation by obviating random mapping of the network graph, and by avoiding local neighbor searching with respect to one or more regions of the network graph that otherwise provide substantially equally-fit solutions.

US Pat. No. 10,216,544

OUTCOME-BASED SOFTWARE-DEFINED INFRASTRUCTURE

International Business Ma...

1. A method for outcome-based adjustment of a software-defined environment (SDE), the method comprising:dividing a business operation into a set of prioritized tasks including high priority tasks and low priority tasks, each task having a corresponding set of key performance indicators (KPIs);
establishing a set of outcome links between the set of prioritized tasks and a first resource configuration for the SDE, the set of outcome links favoring the high priority tasks over the low priority tasks with respect to the first resource configuration;
establishing a monitoring mechanism for continuously measuring a current state of the SDE while performing each of the prioritized tasks;
predicting a triggering event based on a first outcome of a behavior model of the SDE;
responsive to predicting the triggering event, determining to change from the first resource configuration to a second resource configuration for the SDE according to the set of outcome links for performing the business operation based on a second outcome of the behavior model;
wherein:
the set of outcome links include at least one of a utility of services for the business operation, a cost of a set of resources consumed by the first resource configuration, and a risk of the set of resources becoming unavailable; and
at least the steps using the behavior model are performed by computer software running on computer hardware.

US Pat. No. 10,216,543

REAL-TIME ANALYTICS BASED MONITORING AND CLASSIFICATION OF JOBS FOR A DATA PROCESSING PLATFORM

Mitylytics Inc., Alameda...

1. A method comprising:selecting, by a computing device, a new job to schedule for execution on a data processing system, the new job including a classification in a plurality of classifications, wherein the classification is determined by:
using a process to analyze a set of operations to determine which operation is to be used to classify the current job, wherein a first operation in the set of operations is selected; and
classifying the first operation in a first classification based on resource usage for the first operation, wherein the first classification is determined based on a resource being used by the first operation in a highest percentage usage in the data processing platform compared to other resources used by the first operation;
retrieving, by the computing device, performance information for a set of current jobs that are being executed in the data processing system, wherein the set of jobs are assigned to a plurality of queues and currently classified with a current classification in the plurality of classifications;
analyzing, by the computing device, the performance information to determine when one or more current jobs in the set of current jobs should be re-classified due to resource usage of a respective current job when being executed in the data processing system, wherein analyzing comprises:
determining a second operation in the set of operations; and
determining that the first classification should be changed to the second classification when a resource being used by the second operation has a higher percentage usage in the data processing platform compared to the highest percentage usage for the first operation;
re-classifying, by the computing device, the classifications for the one or more current jobs in the plurality of queues, wherein the first classification is re-classified to the second classification for the first operation; and
assigning, by the computing device, the new job to one of the queues based on the classification of the new job and the classifications of jobs in the plurality of queues including the re-classified classifications for the one or more current jobs.
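The classification and re-classification criteria in this claim reduce to comparisons over per-resource usage percentages, sketched below. The resource names and dict shapes are illustrative assumptions.

```python
# Hypothetical sketch of the claimed job classification: an operation is
# classified by the resource it uses at the highest percentage, and is
# re-classified when a later operation's dominant resource usage is higher.

def classify(usage: dict) -> str:
    """Return the resource with the highest percentage usage,
    e.g. {'cpu': 80, 'memory': 40} -> 'cpu'."""
    return max(usage, key=usage.get)

def should_reclassify(first_usage: dict, second_usage: dict) -> bool:
    """True when a resource used by the second operation has a higher
    percentage usage than the first operation's highest usage."""
    return max(second_usage.values()) > max(first_usage.values())
```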

US Pat. No. 10,216,542

RESOURCE COMPARISON BASED TASK SCHEDULING METHOD, APPARATUS, AND DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A task scheduling method for scheduling tasks to be executed by a data warehouse system, the method comprising:scheduling, at a task scheduling system, a set of configured tasks to be executed by the data warehouse system, wherein scheduling the set of configured tasks includes:
managing a preset task scheduling condition;
acquiring real-time available resource information about one or more computing resources available for task execution in the data warehouse system;
receiving, from a task deployment system, instructions to schedule the set of configured tasks;
determining resource consumption information regarding each configured task in the set of the configured tasks;
comparing the resource consumption information regarding each configured task in the set with the available resource information to obtain a comparison result for the configured task; and
identifying a target task from the set of configured tasks by virtue of the target task having a corresponding comparison result that meets the preset task scheduling condition, the preset task scheduling condition specifying that resource consumption of the target task is less than the one or more available computing resources in the data warehouse system; and
delivering, at the task scheduling system, the target task to the task deployment system for the task deployment system to deploy the target task on the data warehouse for execution; and, wherein
comparing the resource consumption information of each configured task with the available resource information comprises:
determining, from the set according to a task cluster type indicated by the resource consumption information of each configured task in the set and an available-cluster type indicated by the information about the computing resource available for task execution, a task subset whose task cluster type matches the available-cluster type;
comparing a resource consumption amount indicated by resource consumption information of a configured task in the task subset with an available-resource amount indicated by the information about the computing resource available for task execution; and
when a comparison result indicates that the resource consumption amount of the task is less than the available-resource amount, recording that the comparison result corresponding to the task meets the preset task scheduling condition; and, wherein identifying the target task from the set of the configured tasks comprises:
using at least one task in the task subset having a recorded comparison result that meets the task scheduling condition as a target task in the current scheduling period.
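The two-stage comparison in this claim, matching cluster type first and then comparing consumption against availability, can be sketched as a pair of filters. The field names are assumptions; the claim only requires that a matching task's consumption be less than the available amount.

```python
# Hypothetical sketch of the claimed resource comparison for scheduling.
# tasks: [{'name': ..., 'cluster_type': ..., 'consumption': ...}, ...]
# available: {'cluster_type': ..., 'amount': ...}

def find_target_tasks(tasks: list, available: dict) -> list:
    """Keep tasks whose cluster type matches the available cluster and whose
    resource consumption is below the available-resource amount."""
    subset = [t for t in tasks
              if t['cluster_type'] == available['cluster_type']]
    return [t['name'] for t in subset
            if t['consumption'] < available['amount']]
```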

US Pat. No. 10,216,541

SCHEDULER OF PROCESSES HAVING TIMED PREDICTIONS OF COMPUTING LOADS

Harmonic, Inc., San Jose...

1. A non-transitory computer-readable storage medium storing one or more sequences of instructions for a scheduler of computer processes to be executed upon a cluster of processing capabilities, wherein execution of the one or more sequences of instructions cause:obtaining predictions of a computing load of at least one computer process to allocate, wherein said predictions are associated with a period of time;
retrieving predictions of available computing capacities of the cluster of processing capabilities for the period of time;
determining, based on the predictions of the computing load for the period of time and the predictions of the available computing capacities for the period of time, a processing capability to allocate said at least one computer process during said period of time;
creating at least one Operating-System-Level virtual environment for said at least one computer process, said at least one Operating-System-Level virtual environment having a computing capacity equal to or higher than at least one of said predictions of the computing load of said at least one computer process to allocate at a start of the period of time; and
adapting the computing capacity of said at least one Operating-System-Level virtual environment to the predictions of the computing load of said at least one computer process during said period of time.
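The matching step of this claim, choosing a processing capability from predicted load and predicted available capacity over a period, can be sketched greedily. Representing the period as discrete time slots and scanning hosts in order are assumptions not stated in the claim.

```python
# Hypothetical sketch of the claimed prediction-based placement.
# load_prediction: predicted computing load per time slot of the period.
# capacity_predictions: host -> predicted free capacity per time slot.

def pick_processing_capability(load_prediction: list,
                               capacity_predictions: dict):
    """Return the first host whose predicted free capacity covers the
    predicted load over the whole period, or None if no host fits."""
    for host, free in capacity_predictions.items():
        if all(f >= l for f, l in zip(free, load_prediction)):
            return host
    return None
```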

US Pat. No. 10,216,540

LOCALIZED DEVICE COORDINATOR WITH ON-DEMAND CODE EXECUTION CAPABILITIES

Amazon Technologies, Inc....

1. A system to remotely configure a coordinator computing device managing operation of coordinated devices, the system comprising:a non-transitory data store including a device shadow for the coordinator computing device, the device shadow indicating a version identifier for a desired configuration of the coordinator computing device;
a deployment device in communication with the non-transitory data store, the deployment device comprising a processor configured with computer-executable instructions to:
obtain configuration information for the coordinator computing device, the configuration information indicating one or more coordinated devices to be managed by the coordinator computing device and one or more tasks to be executed by the coordinator computing device to manage the one or more coordinated devices, wherein individual tasks of the one or more tasks correspond to code executable by the coordinator computing device, and wherein the configuration information further specifies an event flow table indicating criteria for determining an action to be taken by the coordinator computing device in response to a message obtained from an execution of the one or more tasks;
generate a configuration package including the configuration information, wherein the configuration package is associated with an additional version identifier;
modify the device shadow to indicate that the desired configuration corresponds to the additional version identifier;
notify the coordinator computing device of the modified device shadow;
obtain a request from the coordinator computing device for the configuration package; and
transmit the configuration package to the coordinator computing device, wherein the coordinator computing device is configured to utilize the configuration package to retrieve the one or more tasks to be executed by the coordinator computing device to manage the one or more coordinated devices indicated within the configuration package.

US Pat. No. 10,216,539

LIVE UPDATES FOR VIRTUAL MACHINE MONITOR

AMAZON TECHNOLOGIES, INC....

1. A computing system comprising:an offload computing device comprising one or more processors and memory, wherein the offload computing device is configured to electronically communicate with a physical computing device, wherein the one or more processors are configured to execute instructions that, upon execution, configure the offload computing device to:
receive, from a remote update manager, an update notification for a virtual machine monitor executing on the physical computing device;
store an update data package in the memory of the offload computing device, the update data package comprising an update to the virtual machine monitor;
send an interrupt to the physical computing device;
transmit, to the physical computing device, an indication of the update to the virtual machine monitor, wherein the virtual machine monitor is configured to suspend operation of one or more virtual machine instances in a first state of operation based, at least in part, on the indication; and
provide the update data package to the virtual machine monitor, wherein the virtual machine monitor is configured to execute the update data package in first memory to update the virtual machine monitor, wherein the execution of the update data package implements an updated virtual machine monitor within the first memory,
wherein the updated virtual machine monitor is configured to retrieve state information associated with the first state of operation of the one or more virtual machine instances, and cause the one or more virtual machine instances to resume operation in the first state of operation based, at least in part, on the state information.

US Pat. No. 10,216,538

AUTOMATED EXPLOITATION OF VIRTUAL MACHINE RESOURCE MODIFICATIONS

International Business Ma...

1. A method for automated exploitation of virtual machine resource modifications, the method comprising:deploying, by one or more computer processors, at least one application in a distributed computing environment;
providing, by one or more computer processors, at least one resource of a virtual machine to the at least one application in the distributed computing environment, wherein the at least one resource of the virtual machine provided is recorded in metadata and the at least one application receives the metadata and using the metadata, the at least one application determines how much of the at least one resource of the virtual machine to utilize;
determining, by one or more computer processors, a change to the at least one resource of the virtual machine using a metalayer, wherein the metalayer includes, in the metadata, a factor, and wherein the factor is a level of utilization not to be exceeded for any resource of the at least one resource to protect against overusing the at least one resource of the virtual machine; and
responsive to determining the change to the at least one resource of the virtual machine, modifying, by one or more computer processors, the metadata, wherein the at least one application uses the modified metadata to determine how much of the changed at least one resource of the virtual machine to utilize.

US Pat. No. 10,216,536

SWAP FILE DEFRAGMENTATION IN A HYPERVISOR

VMware, Inc., Palo Alto,...

1. A method, comprising:creating a swap file for storing memory data of a virtual machine executing on a first host, wherein the swap file comprises a plurality of storage blocks including a first storage block;
executing a defragmentation procedure on the swap file while the virtual machine is powered on, the defragmentation procedure comprising:
selecting a first memory page frame of the virtual machine having first memory data that has been swapped out to the first storage block of the swap file;
determining an overall density of the swap file based on a first ratio of a first number of memory page frames stored in the swap file to a second number of memory pages for which space is allocated in the swap file;
determining a density of the first storage block based on a second ratio of a third number of memory page frames stored in the first storage block to a fourth number of memory pages for which space is allocated in the first storage block;
responsive to determining that the density of the first storage block is less than the overall density of the swap file, moving the first memory data from the first storage block to a second storage block; and
updating the first memory page frame with a location of the first memory data in the second storage block.
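The density criterion driving this defragmentation procedure is a pair of ratios, sketched below; the counts would come from the swap-file metadata, which is abstracted here.

```python
# Sketch of the claimed density test: a storage block is a compaction
# source when its density is below the overall density of the swap file.

def density(stored_pages: int, allocated_pages: int) -> float:
    """Ratio of memory page frames stored to pages for which space is
    allocated (the claim's first and second ratios)."""
    return stored_pages / allocated_pages

def should_move(block_stored: int, block_allocated: int,
                file_stored: int, file_allocated: int) -> bool:
    """True when the block's density is less than the overall density,
    i.e. its data should be moved to a denser block."""
    return (density(block_stored, block_allocated)
            < density(file_stored, file_allocated))
```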

US Pat. No. 10,216,535

EFFICIENT MAC ADDRESS STORAGE FOR VIRTUAL MACHINE APPLICATIONS

MEDIATEK INC., Hsin-Chu ...

1. A method, comprising:determining at least one common property each having a respective value commonly shared by a plurality of addresses associated with one or more virtual machines executed on a computing apparatus, each address of the plurality of addresses being different from one another;
generating at least one first field each containing the respective value of a corresponding property of the at least one common property;
generating at least one second field each containing a respective value distinguishably identifying each virtual machine of the one or more virtual machines;
storing, in a memory, the at least one first field and the at least one second field as an address entry representative of the plurality of addresses associated with the one or more virtual machines; and
utilizing a mapping table, which stores an organizationally unique identifier (OUI) of each virtual machine of the one or more virtual machines, along with the memory such that an amount of memory required to store an address associated with a respective one of the one or more virtual machines is reduced,
wherein each address of the plurality of addresses includes an index pointing to a corresponding entry in the mapping table,
wherein the at least one first field comprises a first field that includes a first number of bits indicative of the OUI of each virtual machine of the one or more virtual machines,
wherein the at least one second field comprises one or more second fields, each of which corresponds to a respective virtual machine of the one or more virtual machines,
wherein each second field of the one or more second fields comprises an index pointing to the first field and a second number of least significant bits distinguishably identifying the respective virtual machine, and
wherein the first number is greater than the second number.
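The shared-field encoding in this claim can be sketched as a simple compress/expand pair. The 8-bit width of the per-VM distinguishing field, and the assumption that the VMs' 48-bit addresses differ only in their low bits, are illustrative; the claim requires only that the shared field be wider than the per-VM field.

```python
# Hypothetical sketch of the claimed address entry: the bits common to all
# addresses are stored once, plus one short distinguishing field per VM.

LOW_BITS = 8  # assumed width of the per-VM field (less than the shared field)

def compress(macs: list) -> tuple:
    """Split 48-bit addresses into one shared upper field and per-VM
    low fields. All addresses must share the same upper bits."""
    shared = macs[0] >> LOW_BITS
    assert all(m >> LOW_BITS == shared for m in macs), \
        "addresses must share their upper bits"
    return shared, [m & ((1 << LOW_BITS) - 1) for m in macs]

def expand(shared: int, low_fields: list) -> list:
    """Rebuild the full addresses from the shared and per-VM fields."""
    return [(shared << LOW_BITS) | low for low in low_fields]
```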

US Pat. No. 10,216,534

MOVING STORAGE VOLUMES FOR IMPROVED PERFORMANCE

Amazon Technologies, Inc....

1. A computer-implemented method, comprising:analyzing capacity for a first network locality in a multi-tenant environment, the first network locality including a first subset of resources sharing state information and network interconnection, the first subset of resources including at least one server hosting a virtual machine for serving input and output (I/O) operations for at least one data volume;
identifying a detached data volume hosted by the first subset of resources, the detached data volume disconnected from a corresponding virtual machine for serving I/O operations;
determining that a probability of the data volume being reattached to the corresponding virtual machine is above a specified probability threshold;
determining sufficient capacity for the detached data volume in a second subset of resources corresponding to a second network locality;
causing, by a placement management service, the detached data volume to be hosted by the second subset of resources;
receiving, by a placement management service, a request to place a new data volume in the multi-tenant environment;
causing, by the placement management service, the new data volume to be hosted by the first subset of resources where the corresponding virtual machine for serving I/O operations for the new data volume is provided by the first subset of resources; and
attaching the new data volume to the corresponding virtual machine in the first subset of resources, wherein the new data volume is capable of serving I/O operations for the corresponding virtual machine within the first subset of resources corresponding to the first network locality.

US Pat. No. 10,216,533

EFFICIENT VIRTUAL I/O ADDRESS TRANSLATION

Altera Corporation, San ...

1. A method, comprising:using a network interface controller to monitor a transmit ring, wherein the transmit ring comprises a circular ring data structure that stores descriptors, wherein a descriptor describes a fragment of a packet of data and comprises a guest bus address that provides a virtual memory location of the fragment of the packet of data;
using the network interface controller to determine that a first descriptor describing a first fragment of a first packet of data has been written to the transmit ring based on monitoring the transmit ring;
using the network interface controller to attempt to retrieve a first translation for a first guest bus address of the first descriptor in response to determining that the first descriptor has been written to the transmit ring while a second descriptor describing a second fragment of the first packet of data is written to the transmit ring;
using the network interface controller to determine that the second descriptor has been written to the transmit ring;
using the network interface controller to attempt to retrieve a second translation for a second guest bus address of the second descriptor in response to determining that the second descriptor has been written to the transmit ring; and
using the network interface controller to read the first descriptor and the second descriptor from the transmit ring.

US Pat. No. 10,216,532

MEMORY AND RESOURCE MANAGEMENT IN A VIRTUAL COMPUTING ENVIRONMENT

Intel Corporation, Santa...

1. An apparatus for memory management in a virtual computing environment, comprising:a storage device including first memory area for a host machine, second memory area for a guest machine hosted by the host machine, a cache memory to selectively cache content of the first and second memory areas;
a hardware processor;
memory page comparison logic executed by the hardware processor coupled to the storage device to determine that a first memory page of instructions, stored in the second memory area of the storage device, for the guest machine in the virtual computing environment is identical to a second memory page of instructions, stored in the first memory area of the storage device, for a host machine in the virtual computing environment; and
merge logic executed by the hardware processor to, in response to a determination that the first memory page of instructions is identical to the second memory page of instructions, map the first memory page of instructions to the second memory page of instructions to cause a copy of the second memory page of instructions cached in the cache memory to also serve as a cache copy of the first memory page of instructions.
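The comparison-and-merge step of this claim can be sketched as below. Representing pages as byte strings and the guest-to-host mapping as a dict is an assumption; real implementations compare page frames in place.

```python
# Hypothetical sketch of the claimed page merge: each guest page that is
# byte-identical to a host page is mapped onto that host page, so a single
# cached copy serves both machines.

def merge_identical_pages(host_pages: dict, guest_pages: dict) -> dict:
    """host_pages/guest_pages: page number -> page content (bytes).
    Returns guest page -> host page for every identical pair."""
    by_content = {bytes(content): page for page, content in host_pages.items()}
    return {g: by_content[bytes(c)]
            for g, c in guest_pages.items() if bytes(c) in by_content}
```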

US Pat. No. 10,216,531

TECHNIQUES FOR VIRTUAL MACHINE SHIFTING

NETAPP, INC., Sunnyvale,...

1. A computer-implemented method, comprising:validating by a universal application programming interface (API) a source virtual machine (VM) of a source hypervisor having a first platform, for migrating the source VM to a destination hypervisor with a second platform different from the first platform;
generating a clone of the source VM, prior to migration, by:
creating an empty data object in a destination logical storage unit of the destination hypervisor; and
mapping a source block range used by the source VM to store data to a destination block range of the empty data object without having to create a physical copy of source VM data;
migrating the source VM to the destination hypervisor using the clone;
reconfiguring by the API a network interface of the source VM for use by a destination VM at the destination hypervisor; and converting by the API, prior to initializing the destination VM, a virtual disk used by the source VM from a source format to a destination format with same storage blocks used to store VM data before and after migration of the source VM;
wherein the source VM is migrated to the destination hypervisor by reading meta-data of the source VM; creating an empty destination VM and meta-data on the destination hypervisor according to specification of the destination hypervisor; and creating at the empty destination VM, the clone of the source VM on the hypervisor.

US Pat. No. 10,216,530

METHOD FOR MAPPING BETWEEN VIRTUAL CPU AND PHYSICAL CPU AND ELECTRONIC DEVICE

HUAWEI TECHNOLOGIES CO., ...

1. A method for mapping between a virtual central processing unit (CPU) and a physical CPU, the method being applied to a multi-core system, the multi-core system comprising at least two physical CPUs, a virtual machine manager, and at least one virtual machine, the at least one virtual machine comprising at least two virtual CPUs, and the method comprising:obtaining, by the virtual machine manager, a set of to-be-mapped first virtual CPUs from the at least two virtual CPUs in a current time period;
obtaining, from the at least two physical CPUs, a first physical CPU that has the fewest to-be-run tasks;
obtaining, by the virtual machine manager, a first attribute value of each first virtual CPU in the set of first virtual CPUs and a second attribute value of the first physical CPU, the first attribute value of each first virtual CPU representing an attribute of a physical CPU to which the first virtual CPU is mapped in a previous time period, and the second attribute value representing an attribute of the first physical CPU in the previous time period;
obtaining, by the virtual machine manager from all the first attribute values, a target attribute value that matches the second attribute value by:
obtaining, according to the second attribute value and the first attribute value of each first virtual CPU, a similarity value between the second attribute value and the first attribute value of each first virtual CPU;
obtaining a first attribute value corresponding to a similarity value that is in a specified value range in all similarity values; and
using the first attribute value as the target attribute value that matches the second attribute value; and
mapping a target virtual CPU corresponding to the target attribute value to the first physical CPU for running.
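The selection steps above can be sketched as follows (a minimal sketch; the helper names, the scalar attribute values, and the use of absolute difference as the similarity measure are assumptions, not taken from the patent): pick the physical CPU with the fewest pending tasks, then map onto it the virtual CPU whose previous-period attribute best matches that CPU's, within a specified value range.

```python
def pick_physical_cpu(physical_cpus):
    """physical_cpus: {cpu_id: to-be-run task count}; fewest tasks wins."""
    return min(physical_cpus, key=physical_cpus.get)

def pick_virtual_cpu(vcpu_attrs, pcpu_attr, threshold=0.5):
    """vcpu_attrs: {vcpu_id: attribute value from the previous period}.
    Similarity is modeled as absolute difference (smaller = more similar);
    a candidate counts only if it falls inside the specified value range."""
    best, best_sim = None, None
    for vcpu, attr in vcpu_attrs.items():
        sim = abs(attr - pcpu_attr)
        if sim <= threshold and (best_sim is None or sim < best_sim):
            best, best_sim = vcpu, sim
    return best
```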

US Pat. No. 10,216,529

METHOD AND SYSTEM FOR SHARING DRIVER PAGES

Virtuozzo International G...

1. A computer-implemented method for sharing driver pages among Containers, the method comprising:on a computer system having a processor, a single operating system (OS) and a first instance of a dedicated system driver installed and performing dedicated system services, instantiating a plurality of Containers that virtualize the OS, wherein code and data of the first instance of the dedicated system driver are loaded from an image into a plurality of pages arranged in a virtual memory, and
instantiating a second instance of the dedicated system driver upon a first request from one of the Containers for dedicated system services by:
(a) loading, from the image, pages of the second instance into a physical memory and allocating virtual memory pages for the second instance;
(b) associating the second instance with the first instance and acquiring virtual addresses of identical pages of the first instance compared to the second instance;
(c) mapping the virtual addresses of the identical pages of the second instance to physical pages to which virtual addresses of the corresponding identical pages of the first instance are mapped, while protecting the physical pages from modification;
(d) wherein virtual addresses of non-identical pages of the second instance remain mapped to the physical pages of the second instance;
(e) releasing the physical memory occupied by the identical physical pages of the second instance; and
(f) starting the second instance for responding to requests for the dedicated system services from the one of the Containers.
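A toy sketch of steps (b)-(e) (the data structures and content-hash approach are invented for illustration): pages of the second driver instance that are byte-identical to pages of the first are remapped to the first instance's physical pages and write-protected, and the duplicates are released; non-identical pages keep their own physical pages.

```python
import hashlib

def share_identical_pages(first_pages, second_pages):
    """Each argument maps virtual address -> page bytes. Returns
    (page_map, freed): page_map sends each virtual address of the second
    instance to ('shared', first_va) or ('private', va); freed counts the
    physical pages released in step (e)."""
    by_hash = {hashlib.sha256(data).digest(): va
               for va, data in first_pages.items()}
    page_map, freed = {}, 0
    for va, data in second_pages.items():
        first_va = by_hash.get(hashlib.sha256(data).digest())
        # Guard against hash collisions by comparing the bytes as well.
        if first_va is not None and first_pages[first_va] == data:
            page_map[va] = ("shared", first_va)  # step (c): remap, protect
            freed += 1                           # step (e): release duplicate
        else:
            page_map[va] = ("private", va)       # step (d): keep own page
    return page_map, freed
```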

US Pat. No. 10,216,528

DYNAMICALLY LOADED PLUGIN ARCHITECTURE

Bitvore Corp., Los Angel...

1. A non-transitory computer-readable storage medium storing instructions which, when executed by one or more processors, provides an architecture for dynamically loading plugins, the architecture comprising:a parent context comprising data to configure one or more reusable software components;
a plugin repository operable to store a first plugin and a second plugin;
a first child context produced dynamically when the first plugin is loaded, the first child context being associated with a first version of the first plugin, the first child context inheriting the one or more reusable software components from the parent context; and
a second child context produced dynamically when the first plugin is loaded a second time, the second child context being associated with a second version of the first plugin, the second child context inheriting the one or more reusable software components from the first child context, wherein a violation is indicated if the first plugin returns a plugin object that belongs to the second plugin.
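The context chain in this claim can be sketched with lookup delegation (a minimal sketch; the `Context` class and component names are invented): each child context created when a plugin loads inherits the reusable components of its parent by delegating lookups rather than copying, and its own entries shadow the inherited ones.

```python
from collections import ChainMap

class Context:
    """Toy plugin context that inherits components from a parent context."""
    def __init__(self, components=None, parent=None):
        own = components or {}
        base = parent.components if parent else {}
        self.components = ChainMap(own, base)  # child shadows, parent backs

parent = Context({"logger": "shared-logger"})          # parent context
first_child = Context({"plugin": "v1"}, parent=parent)        # first load
second_child = Context({"plugin": "v2"}, parent=first_child)  # second load
```

Note how the second child inherits through the first child, mirroring the claim's chain: parent, then first child context, then second child context.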

US Pat. No. 10,216,527

AUTOMATED SOFTWARE CONFIGURATION MANAGEMENT

Cisco Technology, Inc., ...

1. A method, comprising:detecting, by an agent at runtime, loading of a compiled file in an application, the application being one of a plurality of applications that provide a distributed business transaction;
responsive to the detecting, identifying, by the agent, parts of the compiled file;
performing, by the agent, a hash of the parts of the compiled file to generate corresponding hash values;
constructing, by the agent, a hash tree from the generated hash values;
determining, by the agent, whether a previously constructed hash tree from a previously detected load of the file is available to perform a comparison;
comparing, by the agent, the constructed hash tree against the previously constructed hash tree to identify changes to blocks of code inside the compiled file, wherein the identified changes indicate a change in the distributed business transaction by tracking one or more changes to the blocks of code inside the compiled file; and
reporting, by the agent, results of the comparison.
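The agent's hash-tree comparison can be sketched as follows (an illustrative sketch; the two-level tree shape and function names are assumptions): hash each part of the compiled file, combine the leaf hashes into a root, and on a root mismatch walk the leaves to locate exactly which blocks changed between loads.

```python
import hashlib

def hash_tree(parts):
    """parts: list of byte blocks. Returns (root_hash, leaf_hashes)."""
    leaves = [hashlib.sha256(p).hexdigest() for p in parts]
    root = hashlib.sha256("".join(leaves).encode()).hexdigest()
    return root, leaves

def changed_blocks(old_tree, new_tree):
    """Compare roots first; only on mismatch walk the leaves for changes."""
    if old_tree[0] == new_tree[0]:
        return []                     # identical file: nothing to report
    old, new = old_tree[1], new_tree[1]
    return [i for i in range(max(len(old), len(new)))
            if i >= len(old) or i >= len(new) or old[i] != new[i]]
```

The root-first check is what makes the scheme cheap in the common case where a reloaded file is unchanged.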

US Pat. No. 10,216,526

CONTROLLING METHOD FOR OPTIMIZING A PROCESSOR AND CONTROLLING SYSTEM

MEDIATEK INC., Hsin-Chu ...

1. A controlling method for optimizing a processor, comprising:determining an actual utilization state of the processor in a first period;
extracting an actual utilization value from the determined actual utilization state to evaluate the overall utilization of the processor after the step of determining the actual utilization state;
determining an integral parameter and a derivative parameter by a PID (Proportional Integral Derivative) governor to obtain a dynamic adjustment value based on the actual utilization state, wherein the integral parameter corresponds to a low frequency, and the derivative parameter corresponds to a high frequency;
determining a proportional parameter by the PID governor to obtain the dynamic adjustment value based on the actual utilization state; and
adjusting performance and/or power of the processor in a second period by the PID governor based on the actual utilization state in the first period, wherein the second period is after the first period, the proportional parameter is determined from recent error values of the first period, the integral parameter is determined from error values in a long time of the first period, and the derivative parameter is determined from error values in a short time of the first period, and ten times of the short time is less than the long time.
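A toy PID governor following the claim's structure (the gains and window sizes are invented): the proportional term uses the most recent error, the derivative term a short window, and the integral term a long window, with ten times the short time kept less than the long time.

```python
SHORT, LONG = 3, 40   # sample counts; per the claim, 10 * SHORT < LONG

def pid_adjust(errors, kp=0.5, ki=0.05, kd=0.2):
    """errors: utilization errors (target - actual), oldest first,
    with at least SHORT samples. Returns the dynamic adjustment value
    applied to performance/power in the next period."""
    p = errors[-1]                        # recent error (proportional)
    i = sum(errors[-LONG:])               # long-horizon history (integral)
    d = errors[-1] - errors[-SHORT]       # short-horizon slope (derivative)
    return kp * p + ki * i + kd * d
```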

US Pat. No. 10,216,525

VIRTUAL DISK CAROUSEL

American Megatrends, Inc....

1. A computer-implemented method for providing an automated installation of a plurality of operating systems during a boot of at least one computer, the method comprising performing computer-implemented operations for:receiving, by a bridge device, a request to expose one of a plurality of operating systems stored on a virtual disk carousel to at least one computer during a boot of the at least one computer, wherein the at least one computer is connected to the bridge device by a single first USB port and the bridge device is connected to the virtual disk carousel by a single second USB port; and
in response to the bridge device receiving the request for the selected operating system, the bridge device requesting the selected operating system from the virtual disk carousel through the single second USB port, the bridge device receiving the selected operating system from the virtual disk carousel in a standard disk image format through the single second USB port, the bridge device translating the selected operating system received from the virtual disk carousel in the standard disk image format to one of a plurality of standard mass storage device formats prior to transmission to the computer, wherein the selected standard mass storage device format is identified to the bridge device in a header in a disk image of the selected operating system, and the bridge device responding to the request with the selected operating system received from the virtual disk carousel by way of the selected standard mass storage device format exposed to the computer by the bridge device through the single first USB port.

US Pat. No. 10,216,524

SYSTEM AND METHOD FOR PROVIDING FINE-GRAINED MEMORY CACHEABILITY DURING A PRE-OS OPERATING ENVIRONMENT

Dell Products, LP, Round...

1. An information handling system, comprising:a memory including a cache; and
a processor to execute pre-operating system (pre-OS) code before the processor executes boot loader code, the pre-OS code to:
set up a Memory Type Range Register (MTRR) to define a first memory type for a memory region of the memory, wherein the first memory type specifies a first cacheability setting on the processor for data from the memory region;
set up a page attribute table (PAT) with an entry to define a second memory type for the memory region, wherein the second memory type specifies a second cacheability setting on the processor for data from the memory region;
disable the PAT; and
pass execution by the processor to the boot loader code.

US Pat. No. 10,216,523

SYSTEMS AND METHODS FOR IMPLEMENTING CONTROL LOGIC

General Electric Company,...

1. A system comprising:one or more hardware processors configured to implement a control-logic-agnostic virtual control engine to control a controlled system by executing control logic defined in attributed data, the control logic comprising a plurality of control nodes, and the attributed data comprising, for each of the control nodes, an attributed data item comprising a sample-data class structure and an attributes class structure, the attributes class structure comprising metadata specifying an output variable, one or more input variables, and a control operator generating the output variable from the one or more input variables; and
an attributed-data dictionary stored in non-transitory computer memory and configured to interpret the attributed data in response to a service call from the virtual control engine and to return a control-engine-specific interpretation to the virtual control engine,
wherein the control-engine-specific interpretation of each attributed data item comprises program code that, when instantiated and executed, implements the control operator specified in that data item, and
wherein the virtual control engine, upon execution of the control logic to control a controlled system, writes values of the output variables generated by the control operators of the plurality of control nodes to the sample-data class structures of the respective attributed data items.

US Pat. No. 10,216,522

TECHNOLOGIES FOR INDIRECT BRANCH TARGET SECURITY

Intel Corporation, Santa...

1. A computing device for executing an indirect branch instruction, the computing device comprising:a processor comprising:
an activation record key register; and
an indirect branch target module to: (i) load an encrypted indirect branch target, (ii) decrypt the encrypted indirect branch target using an activation record key stored in the activation record key register to generate an indirect branch target, and (iii) perform a jump to the indirect branch target.

US Pat. No. 10,216,521

ERROR MITIGATION FOR RESILIENT ALGORITHMS

NVIDIA Corporation, Sant...

1. A computer-implemented method, comprising:receiving, by a processing unit, a set of program instructions including a first program instruction that is responsive to error detection, wherein the first program instruction includes an opcode;
detecting an error in a value of a first operand of the first program instruction;
determining that error coping execution is selectively enabled for the first program instruction;
replacing the value for the first operand with a substitute value; and
executing, by the processing unit, the first program instruction including the opcode and the substitute value.

US Pat. No. 10,216,520

COMPRESSING INSTRUCTION QUEUE FOR A MICROPROCESSOR

VIA TECHNOLOGIES, INC., ...

1. A compressing instruction queue for a microprocessor, the microprocessor including an instruction translator having Q outputs providing up to P microinstructions per clock cycle in any one of multiple combinations of the Q outputs while maintaining program order in which Q is greater than or equal to P, wherein said compressing instruction queue comprises:a storage queue comprising a matrix of storage locations including N rows and M columns for storing microinstructions of the microprocessor in sequential order, wherein said sequential order comprises a zigzag pattern from a first column to a last column of each row and in only one direction from each row to a next adjacent row of said storage queue, and wherein N and M are each greater than one; and
a redirect logic circuit that is configured to receive and write said up to P microinstructions per cycle of a clock signal into sequential storage locations of said storage queue in said sequential order beginning with a next available sequential storage location that is next to a last storage location that was written in said storage queue in a last cycle;
wherein P>M, and wherein said redirect logic circuit writes a first of said up to P microinstructions into any of said M columns of said storage queue in which said next available sequential storage location is located, and writes any remaining ones of said up to P microinstructions following said zigzag pattern in each cycle; and
wherein said redirect logic circuit comprises:
a first select logic circuit that is configured to select said up to P microinstructions from among the Q outputs of the instruction translator, wherein said first select logic circuit comprises a first set of P multiplexers including a multiplexer in each of P positions in which each multiplexer has inputs receiving only those microinstructions that are allowed in a corresponding position of said each multiplexer; and
a second select logic circuit that reorders said up to P microinstructions according to sequential storage locations of said storage queue beginning with a column position of said next available sequential storage location in said storage queue;
wherein said second select logic circuit comprises a second set of P multiplexers, each having inputs coupled to outputs of a selected subset of said first set of P multiplexers; and
wherein said redirect logic circuit comprises a redirect controller that controls said second set of P multiplexers based on which of said M columns of said storage queue includes said next available sequential storage location.

US Pat. No. 10,216,519

MULTICOPY ATOMIC STORE OPERATION IN A DATA PROCESSING SYSTEM

International Business Ma...

1. A method of data processing in a data processing system implementing a weak memory model, wherein the data processing system includes a plurality of processing units coupled to an interconnect fabric, the method comprising:in response to executing a multicopy atomic store instruction in a processor core, an initiating processing unit broadcasting a store request on the interconnect fabric to a plurality of processing units to obtain coherence ownership of a target cache line of the multicopy atomic store instruction;
the initiating processing unit posting a kill request to at least one of the plurality of processing units to request invalidation of a copy of the target cache line held by said at least one of the plurality of processing units;
in response to successful posting of the kill request, the initiating processing unit broadcasting a store complete request on the interconnect fabric to enforce completion of the invalidation of the copy of the target cache line by said at least one of the plurality of processing units; and
in response to the store complete request receiving a coherence response indicating success, permitting an update to the target cache line requested by the multicopy atomic store instruction to be visible to all of the plurality of processing units.

US Pat. No. 10,216,518

CLEARING SPECIFIED BLOCKS OF MAIN STORAGE

International Business Ma...

1. A computer system for a data processing system comprising:one or more computer processors;
one or more non-transitory computer readable storage media;
program instructions stored on the one or more non-transitory computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising:
program instructions to determine, from an instruction stream, an extended asynchronous data mover (EADM) start subchannel instruction, wherein the EADM start subchannel instruction comprises: a subsystem identification operand and an EADM operation request block operand both configured to designate a location of an EADM operation request block;
program instructions to execute the EADM start subchannel instruction, wherein executing the EADM start subchannel instruction comprises notifying a system assist processor (SAP) that includes:
an architecture that yields a same performance capability as a CPU on which operating systems and application programs execute; and
wherein the SAP includes program instructions to perform memory clearing operations in a same manner as the CPU on which operating systems and application programs execute;
program instructions to receive, by the SAP, an asynchronous data mover (ADM) request block;
program instructions to determine, by the SAP, whether the ADM request block specifies a main-storage-clearing operation command, wherein the main-storage-clearing operation command operates asynchronously from a program on a CPU;
responsive to determining the ADM request block specifies the main-storage-clearing operation command, program instructions to obtain one or more move specification blocks (MSBs), wherein an address associated with the one or more MSBs is designated by the ADM request block;
program instructions to determine, by the SAP, based on the one or more MSBs, an address and a size of a main storage block to clear;
responsive to determining the address and the size of the main storage block, program instructions to clear, by the SAP, the main storage block, wherein if the SAP is associated with a predetermined time period for clearing the main storage block, then a set of partially completed instructions are placed on a queue by the SAP, to continue the clearing of the main storage block at a later time;
responsive to clearing, by the SAP, the main storage block, program instructions to notify asynchronously, the CPU, when the SAP successfully completes the main-storage-clearing operation command;
responsive to determining that the main-storage-clearing operation command did not complete successfully, program instructions to provide an indication, in an ADM response block, of an error associated with at least one of: a request block, the one or more MSBs, and a memory access;
responsive to executing the EADM start subchannel instruction and notifying the SAP, program instructions to monitor the main storage clearing operations;
responsive to monitoring the main storage clearing operations, program instructions to receive a set of frequency statistics associated with the main-storage clearing operations, wherein the set of frequency statistics comprises:
a quantity of blocks cleared;
a size of the blocks cleared; and
a reason for clearing the specified blocks;
responsive to receiving the set of frequency statistics, program instructions to determine whether it is more efficient to use a combination of both the CPU memory cleaning operation and the SAP main storage cleaning operation, to clear the main storage block, wherein determining whether the combination of both the CPU and the SAP is more efficient comprises:
program instructions to analyze the set of frequency statistics and a current workload in the CPU; and
responsive to determining it is more efficient to use a combination of both the CPU and the SAP to clear the main storage block, continuously, at predetermined intervals, program instructions to analyze the set of frequency statistics to identify a breakpoint by comparing when it is more effective to use the CPU with when it is more effective to use the SAP for main storage clearing operations.

US Pat. No. 10,216,517

CLEARING SPECIFIED BLOCKS OF MAIN STORAGE

International Business Ma...

1. A non-transitory computer readable storage medium and program instructions stored on the non-transitory computer readable storage medium, the program instructions comprising:program instructions to determine, from an instruction stream, an extended asynchronous data mover (EADM) start subchannel instruction, wherein the EADM start subchannel instruction comprises: a subsystem identification operand and an EADM operation request block operand both configured to designate a location of an EADM operation request block;
program instructions to execute the EADM start subchannel instruction, wherein executing the EADM start subchannel instruction comprises notifying a system assist processor (SAP) that includes:
an architecture that yields a same performance capability as a CPU on which operating systems and application programs execute; and
wherein the SAP includes program instructions to perform memory clearing operations in a same manner as the CPU on which operating systems and application programs execute;
program instructions to receive, by the SAP, an asynchronous data mover (ADM) request block;
program instructions to determine, by the SAP, whether the ADM request block specifies a main-storage-clearing operation command, wherein the main-storage-clearing operation command operates asynchronously from a program on a CPU;
responsive to determining the ADM request block specifies the main-storage-clearing operation command, program instructions to obtain one or more move specification blocks (MSBs), wherein an address associated with the one or more MSBs is designated by the ADM request block;
program instructions to determine, by the SAP, based on the one or more MSBs, an address and a size of a main storage block to clear;
responsive to determining the address and the size of the main storage block, program instructions to clear, by the SAP, the main storage block, wherein if the SAP is associated with a predetermined time period for clearing the main storage block, then a set of partially completed instructions are placed on a queue by the SAP, to continue the clearing of the main storage block at a later time;
responsive to clearing, by the SAP, the main storage block, program instructions to notify asynchronously, the CPU, when the SAP successfully completes the main-storage-clearing operation command;
responsive to determining that the main-storage-clearing operation command did not complete successfully, program instructions to provide an indication, in an ADM response block, of an error associated with at least one of: a request block, the one or more MSBs, and a memory access;
responsive to executing the EADM start subchannel instruction and notifying the SAP, program instructions to monitor the main storage clearing operations;
responsive to monitoring the main storage clearing operations, program instructions to receive a set of frequency statistics associated with the main-storage clearing operations, wherein the set of frequency statistics comprises:
a quantity of blocks cleared;
a size of the blocks cleared; and
a reason for clearing the specified blocks;
responsive to receiving the set of frequency statistics, program instructions to determine whether it is more efficient to use a combination of both the CPU memory cleaning operation and the SAP main storage cleaning operation, to clear the main storage block, wherein determining whether the combination of both the CPU and the SAP is more efficient comprises:
program instructions to analyze the set of frequency statistics and a current workload in the CPU; and
responsive to determining it is more efficient to use a combination of both the CPU and the SAP to clear the main storage block, continuously, at predetermined intervals, program instructions to analyze the set of frequency statistics to identify a breakpoint by comparing when it is more effective to use the CPU with when it is more effective to use the SAP for main storage clearing operations.

US Pat. No. 10,216,516

FUSED ADJACENT MEMORY STORES

Intel Corporation, Santa...

15. A method comprising:identifying a pair of store instructions among a plurality of instructions in an instruction queue, wherein the pair of store instructions comprise a first store instruction and a second store instruction, wherein a first data of the first store instruction corresponds to a first memory region of a memory, the first memory region adjacent to a second memory region of the memory, and wherein a second data of the second store instruction corresponds to the second memory region;
responsive to determining that the first store instruction and the second store instruction correspond to adjacent memory regions, fusing the first store instruction with the second store instruction resulting in a fused store instruction; and
determining whether the first data and the second data are to be stored in one of an ascending storage order or a descending storage order based on a first operand and a second operand of the first store instruction and a third operand and a fourth operand of the second store instruction, wherein the first, second, third, and fourth operands are different than the first and second data.
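A simplified model of the fusion check (the `Store` structure and byte-level view are invented for illustration): two queued stores are fused when their target regions are adjacent, and which region sits lower decides ascending versus descending storage order for the fused data.

```python
from collections import namedtuple

Store = namedtuple("Store", "addr size data")

def try_fuse(a, b):
    """Return a fused store covering both adjacent regions, or None."""
    if a.addr + a.size == b.addr:          # a just below b: ascending order
        return Store(a.addr, a.size + b.size, a.data + b.data)
    if b.addr + b.size == a.addr:          # a just above b: descending order
        return Store(b.addr, a.size + b.size, b.data + a.data)
    return None                            # regions not adjacent: no fusion
```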

US Pat. No. 10,216,515

PROCESSOR LOAD USING A BIT VECTOR TO CALCULATE EFFECTIVE ADDRESS

Oracle International Corp...

1. An apparatus, comprising:a register configured to store a bit vector, wherein the bit vector includes a plurality of elements that occupy N ordered element positions, wherein N is a positive integer; and
circuitry configured to:
identify a particular element position of the bit vector, wherein a value of an element occupying the particular element position matches a first value;
determine an address value using the particular element position of the bit vector and a base address; and
store a data value in the particular element position of the bit vector based on results of a comparison between a second value and data loaded from a location in a memory specified by the address value.
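An illustrative model of the claimed circuitry (the element width used to scale the position is an assumption): the position of the first matching element in the bit vector selects the effective address, and the result of comparing the loaded data against a second value is written back into that element position.

```python
ELEM_BYTES = 8   # assumed element size used to scale the position

def bitvec_load_compare(bits, base, memory, second_value, match=1):
    """bits: list of 0/1 elements; memory: {address: value}.
    Finds the first element equal to `match`, loads from
    base + position * ELEM_BYTES, and stores the comparison result
    back into that element position. Returns the effective address."""
    pos = bits.index(match)                 # particular element position
    addr = base + pos * ELEM_BYTES          # effective address calculation
    bits[pos] = 1 if memory[addr] == second_value else 0
    return addr
```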

US Pat. No. 10,216,514

IDENTIFICATION OF A COMPONENT FOR UPGRADE

Hewlett Packard Enterpris...

1. A method comprising:receiving a first topology map that describes a desired software configuration for multiple components in a system, wherein the desired software configuration includes a desired software version;
accessing a second topology map that describes a current software configuration for the multiple components in the system, wherein the current software configuration includes a current software version;
determining based on the first topology map and the second topology map that the desired software configuration differs from the current software configuration;
responsive to the determination that the desired software configuration differs from the current software configuration, identifying which of the multiple components to upgrade, including:
identifying redundant components according to a common functionality from among the multiple components to upgrade; and
identifying an order of upgrade for each of the redundant components by prioritizing the redundant components that have dependencies on other components;
automating an upgrade of the identified components among the multiple components by upgrading a current software configuration of the identified components from the current software version to the desired software version.
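The comparison and ordering steps can be sketched as follows (a rough sketch; the map shapes and dependency representation are assumptions): diff the desired and current topology maps to find components whose versions differ, then order the result so components with dependencies on other components are upgraded first.

```python
def plan_upgrade(desired, current, depends_on=None):
    """desired/current: {component: version}. depends_on: {component:
    [components it depends on]}. Returns the components to upgrade, with
    components that depend on others prioritized in the order."""
    depends_on = depends_on or {}
    to_upgrade = [c for c, v in desired.items() if current.get(c) != v]
    # Components with dependencies on other components come first.
    return sorted(to_upgrade,
                  key=lambda c: len(depends_on.get(c, [])), reverse=True)
```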

US Pat. No. 10,216,513

PLUGIN FOR MULTI-MODULE WEB APPLICATIONS

Oracle International Corp...

1. A non-transitory computer-readable storage medium carrying program instructions thereon, the instructions when executed by one or more processors cause the one or more processors to perform operations comprising:determining, at a server, dependencies associated with each software module of a process defined by software modules, wherein each of the software modules is associated with a respective JavaScript Object Notation (JSON) file that lists a unique set of the dependencies specific to each of the software modules;
aggregating the dependencies associated with the software modules;
storing the aggregated dependencies in one or more configuration files, wherein a configuration file includes one or more dependency paths associated with each of the dependencies and includes at least one internal dependency unique to each of the software modules; and
updating one or more of the dependency paths in the configuration files based on one or more changes to one or more of the dependency paths.
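A hypothetical sketch of the aggregation step (the JSON shape shown is invented, not the patent's): each module carries a JSON file listing its unique dependencies, and the server merges them into one configuration mapping of dependency name to path, rejecting conflicting paths for the same name.

```python
import json

def aggregate_dependencies(module_jsons):
    """module_jsons: list of JSON strings, each shaped like
    {"deps": {"name": "path"}}. Later modules may add entries but must
    not silently override an existing dependency path."""
    config = {}
    for text in module_jsons:
        for name, path in json.loads(text)["deps"].items():
            if name in config and config[name] != path:
                raise ValueError(f"conflicting path for {name}")
            config[name] = path
    return config
```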

US Pat. No. 10,216,512

MANAGED MULTI-CONTAINER BUILDS

Amazon Technologies, Inc....

1. A computer-implemented method for managing multi-container builds, comprising:under control of one or more computer systems configured with executable instructions,
receiving, at a software build management service, a software build task description, the software build task description specifying a software object to build, the software build task description including a set of environments, each environment of the set of environments specifying a corresponding set of parameters usable to build a corresponding version of the software object;
instantiating, for each environment of the set of environments, a corresponding container of a set of containers on a build instance of one or more build instances, each build instance of the one or more build instances associated with the software build task, the corresponding container based at least in part on one or more parameters of the set of parameters;
for a selected build state of a set of build states of the software build management service:
sending, for each environment of the set of environments, a command to the corresponding container of the set of containers, the command based at least in part on the build state and the environment;
receiving, from the corresponding container of each environment of the set of environments, a corresponding response to the command;
waiting until the corresponding response to the command is received from the corresponding container for all environments of the set of environments; and
determining the next build state of the set of build states; and
providing a build status of the software build task, the build status indicating whether the software build task completed.
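The per-state fan-out the claim describes (send a command to every environment's container, wait for all responses, then advance) can be sketched as a small state machine. Container execution is simulated here, and the state names and command syntax are illustrative.

```python
# Ordered build states of a hypothetical build management service.
BUILD_STATES = ["provision", "compile", "test", "package"]

class FakeContainer:
    """Stand-in for a container instantiated from one environment's parameters."""
    def __init__(self, env):
        self.env = env
    def run(self, command):
        # A real service would execute inside the container; here we just echo.
        return {"env": self.env, "command": command, "status": "ok"}

def run_build(environments):
    containers = {env: FakeContainer(env) for env in environments}  # one per env
    for state in BUILD_STATES:
        responses = []
        for env, container in containers.items():
            # The command depends on both the build state and the environment.
            responses.append(container.run(f"{state} --env {env}"))
        # "Waiting" is implicit: the loop only advances to the next state once
        # every container has answered for the current state.
        if any(r["status"] != "ok" for r in responses):
            return "failed"
    return "completed"

print(run_build(["linux-x64", "linux-arm64"]))  # completed
```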

US Pat. No. 10,216,511

METHOD APPARATUS AND SYSTEMS FOR ENABLING DELIVERY AND ACCESS OF APPLICATIONS AND SERVICES

1. A non-transitory computer-readable storage medium having at least a computer-readable program stored therein, said computer-readable program comprising a first set of instructions, at least one or more of accessing and executing said first set of instructions by a processor associated with a generator device enables said generator device to at least:
a. enabling a determination of an audio-visual content comprising at least one or more of a sample of an audio content and a sample of a visual content, wherein said determination is enabled due to at least a capture of a portion of a one or more of the sample of the audio content and the sample of the visual content, by at least a one or more sensors associated with said generator device;
b. determining a tag related information based on at least a portion of one or more of:
the sample of the audio content;
the sample of the video content;
c. enabling transmission of at least said tag related information on a communication interface associated with said generator device, wherein said transmission enables a one or more computing devices to at least:
i. determining a first contextual tag, wherein said first contextual tag comprises information determined based on at least a portion of said tag related information;
ii. determining an application identification information based on at least
a portion of said first contextual tag, said application identification information
identifying an application, wherein at least a portion of at least one of said application and said application identification information can be one or
more of identified, determined and selected based on at least a portion of information in an application repository, said application repository allows
data associated with at least one or more of an application and an application identification information to be:
i. added to said application repository;
ii. updated in the said application repository;
iii. modified in the said application repository;
iv. deleted from said application repository;
iii. enabling an activation of said application, wherein said activation
comprises enabling a first execution of a second set of instructions associated with said application;
d. receiving a first plurality of information on said communication interface associated with said generator device;
e. determining, based on at least a portion of said first plurality of information, at least a one or more of:
i. a second contextual tag; and
ii. a context value;
f. determining a third set of instructions based on at least a portion of said second contextual tag; and
g. enabling a second execution of at least said third set of instructions on said processor associated with said generator device, wherein said second execution enables said processor to at least one or more of processing and accessing at least a portion of said context value.

US Pat. No. 10,216,510

SILENT UPGRADE OF SOFTWARE WITH DEPENDENCIES

AIRWATCH LLC, Atlanta, G...

1. A non-transitory computer-readable medium embodying program instructions executable in a client device for performing a silent upgrade of a first client application on the client device that, when executed, cause the client device to:
identify, by a second client application, that a new version of the first client application is available that upgrades a current version of the first client application to the new version, wherein the new version is required for a state of the client device to be in compliance with at least one compliance rule;
download, by the second client application, an installation package file for the new version of the first client application;
search, by the second client application, a registry of an operating system installed on the client device using a unique identifier identified for the first client application to locate information associated with the current version of the first client application in the registry;
identify, by the second client application, a file path for the current version of the first client application from the registry;
modify, by the second client application, the installation package file using the information in the registry, wherein the installation package file is modified by performing:
renaming a file name of the installation package file to be the same as a name of an initial installation package file used to install the first client application; and
moving the installation package file to a directory in the file path of the current version of the first client application; and
generate and execute, by the second client application, a command line query that causes a default installer application executable on the client device to perform a silent upgrade of the first client application, wherein the silent upgrade comprises replacing the current version of the first client application with the new version of the first client application without user interaction.
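The registry lookup, rename/move, and silent-install command line can be sketched as below. Everything is simulated: `registry` is a plain dict standing in for the Windows uninstall hive, the file move is only noted in comments, and the command string is built but never executed. The app id, paths, and file names are invented for the example (`msiexec /i … /qn` is the standard fully silent MSI install invocation).

```python
# Simulated registry: maps a unique app identifier to the information the
# second client application would find for the currently installed version.
registry = {
    "{UNIQUE-APP-ID}": {
        "DisplayVersion": "1.4.0",
        "InstallLocation": "C:\\Program Files\\FirstApp",
        "InstallSource": "FirstApp-Setup.msi",
    },
}

def silent_upgrade(app_id, downloaded_package):
    info = registry[app_id]                  # locate current-version info
    target_name = info["InstallSource"]      # reuse the original package name
    # In a real client, `downloaded_package` would be renamed to `target_name`
    # and moved into the current version's install directory here.
    staged = info["InstallLocation"] + "\\" + target_name
    # /qn requests a fully silent (no UI) install via the default installer.
    return f'msiexec /i "{staged}" /qn'

print(silent_upgrade("{UNIQUE-APP-ID}", "new-build.msi"))
```

Reusing the original package name and directory is what lets the default installer treat the run as an upgrade of the existing product rather than a fresh install.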

US Pat. No. 10,216,509

CONTINUOUS AND AUTOMATIC APPLICATION DEVELOPMENT AND DEPLOYMENT

TUPL, INC., Bellevue, WA...

1. A system, comprising:
one or more processors; and
memory including a plurality of computer-executable components, the plurality of computer-executable components comprising:
a continuous integration component that generates a completed version of a deployment project in a development environment by concurrently:
generating an updated second version of a first project element via a first pipeline, and
integrating, via an integration pipeline, a first version of the first project element with a first version of a second project element to generate the completed version,
wherein the integration pipeline performs the integrating under command of a version controller that tracks versions of applications, application components, and infrastructures to enable concurrent compilation and testing of multiple versions of the applications, application components, and infrastructures;
an orchestration component that configures a production environment that includes at least one computing node to execute a development image that is created from the completed version of the deployment project, the production environment being mirrored by the development environment; and
an automatic deployment component that deploys a production image that is a copy of the development image into the production environment for execution,
wherein the first project element or the second project element is one of an application component, at least one portion of an application, or at least one portion of an infrastructure that supports the execution of the application.

US Pat. No. 10,216,508

SYSTEM AND METHOD FOR CONFIGURABLE SERVICES PLATFORM

Bank of America Corporati...

1. A system for installing and managing service software, comprising:
one or more computers, each comprising a respective processor, the processors collectively configured to execute program instructions to implement a plurality of services on behalf of users in one or more client domains;
a configurable services platform communicatively coupled to the one or more computers, the configurable services platform including:
a memory storing respective configuration information associated with each of the one or more client domains, the configuration information including, for each client domain, information defining one or more of an indexing key, a configuration attribute, or filtering criteria associated with service requests submitted on behalf of users in the client domain and targeting at least one of the plurality of services; and
a service request processor configured to process service requests received on behalf of the users in the one or more client domains and targeting respective ones of the plurality of services based on the configuration information stored in the memory and associated with each of the client domains, the processing including routing the service requests to the targeted services;
a platform administration portal communicatively coupled to the configurable services platform and configured to:
present a user interface through which the configuration information associated with each of the one or more client domains is input to the system by a platform administrator;
receive, through the user interface, input indicating a requested change in the configuration information associated with a given one of the client domains;
the configurable services platform further including a configuration object builder configured to create a configuration object including updated configuration information associated with the given client domain, the updated configuration information reflecting the requested change;
the configurable services platform further configured to push the configuration object to an application cache accessible by a given one of the plurality of services targeted by service requests submitted on behalf of users in the given client domain; and
the given one of the plurality of services configured to apply the configuration object in fulfilling the service requests submitted on behalf of users in the given client domain without modification of the program instructions executable to implement the given service.
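The configuration-driven processing the claim describes (per-client-domain indexing key and filtering criteria deciding how a request is routed) can be sketched as below. The domain names, service names, and fields are all illustrative, not taken from the patent.

```python
# Per-domain configuration: which field indexes requests for that domain,
# and which services its users may target (the filtering criteria).
CONFIG = {
    "retail": {"index_key": "account_id", "allow": {"payments", "lookup"}},
    "wealth": {"index_key": "portfolio_id", "allow": {"lookup"}},
}

def route(request):
    cfg = CONFIG[request["domain"]]
    if request["service"] not in cfg["allow"]:   # apply filtering criteria
        return ("rejected", None)
    key = request[cfg["index_key"]]              # domain-specific indexing key
    return ("routed", f"{request['service']}/{key}")

print(route({"domain": "retail", "service": "payments", "account_id": "A1"}))
# ('routed', 'payments/A1')
print(route({"domain": "wealth", "service": "payments", "portfolio_id": "P9"}))
# ('rejected', None)
```

Because behavior lives entirely in `CONFIG`, pushing an updated configuration object changes how requests are fulfilled without touching the service code, which mirrors the claim's last limitation.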

US Pat. No. 10,216,507

CUSTOMIZED APPLICATION PACKAGE WITH CONTEXT SPECIFIC TOKEN

Twitter, Inc., San Franc...

1. A method comprising:
receiving, at an application distribution platform, an initial application package comprising one or more files of an application;
obtaining, at the application distribution platform, an application token specific to a user account, the application token providing a context-specific functionality to the application, wherein the context of the application token includes information specific to the user account;
assembling, by an assembler module of the application distribution platform, a customized application package comprising the initial application package and the application token, wherein assembling the customized application package includes incorporating the token into a directory structure of the initial application package; and
providing, to a client device, the customized application package in response to a user request, wherein the customized application package is configured to configure the application during installation according to the context specified by the application token, including incorporating the information specific to the user account to preconfigure the installed application.
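Incorporating a token into the package's directory structure can be sketched with a zip archive standing in for the application package. The archive entries and the token's fields are illustrative assumptions, not the patent's format.

```python
import io
import json
import zipfile

def build_initial_package():
    """Create a stand-in initial application package (a zip archive)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("app/main.bin", b"\x00\x01")   # placeholder application file
    return buf.getvalue()

def assemble(package_bytes, user_account):
    """Add a user-specific token file into the package's directory structure."""
    token = {"account": user_account, "feature": "referral-attribution"}
    buf = io.BytesIO(package_bytes)
    with zipfile.ZipFile(buf, "a") as z:          # append into the archive
        z.writestr("app/assets/token.json", json.dumps(token))
    return buf.getvalue()

custom = assemble(build_initial_package(), "user-42")
with zipfile.ZipFile(io.BytesIO(custom)) as z:
    print(z.namelist())  # ['app/main.bin', 'app/assets/token.json']
```

At install time the application can read `app/assets/token.json` and preconfigure itself with the account-specific context, which is what makes the otherwise generic package "customized".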

US Pat. No. 10,216,506

LOCATION-BASED AUTOMATIC SOFTWARE APPLICATION INSTALLATION

INTERNATIONAL BUSINESS MA...

1. A computer-implemented method comprising:
collecting device data of a mobile device of a user, the device data comprising information indicative of a location at which the user will be present at a future time;
identifying, based on the collecting the device data, a software application associated with that location;
downloading an installer for the software application to the mobile device of the user;
automatically installing the software application on the mobile device based on a triggering event, the installing being prior to arrival of the user at the location at the future time; and
automatically authorizing, during the automatic installation of the software application, at least one application permission required for the software application based on a received grant of one or more application permissions to a sandbox application.

US Pat. No. 10,216,505

USING MACHINE LEARNING TO OPTIMIZE MINIMAL SETS OF AN APPLICATION

VMware, Inc., Palo Alto,...

1. A method, comprising:
deploying, by a management server, an initial minimal set of application components stored on the management server to each endpoint device in a plurality of endpoint devices, wherein the initial minimal set comprises a subset of components of the application that enables a portion of functionality of the application to be executed without including all application components;
on each endpoint device in the plurality of endpoint devices,
detecting execution of the application from the initial minimal set;
during execution from the initial minimal set, recording by an agent operating on the endpoint device accesses made by the application to application components located in the initial minimal set on the endpoint device and to application components missing from the initial minimal set on the endpoint device, and storing the recordings;
selecting one or more endpoint devices in the plurality of endpoint devices;
retrieving the recordings of application accesses for the selected endpoint devices; and
processing the retrieved recordings to produce an optimized minimal set on the management server based on recorded accesses to application components located in the initial minimal set on the endpoint device and to application components missing from the initial minimal set on the endpoint device in the retrieved recordings, wherein the optimized minimal set is produced by at least one of: removing one or more application components from the initial minimal set or adding one or more application components to the initial minimal set.
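The produce-an-optimized-set step can be sketched as below: components accessed on enough endpoints are kept or added, and shipped components no endpoint touched are removed. The frequency threshold and the component names are illustrative; the patent's processing (which its title associates with machine learning) is not specified here.

```python
def optimize(initial_set, recordings, threshold=0.5):
    """Adjust the minimal set based on recorded per-endpoint accesses."""
    counts = {}
    for accessed in recordings:          # one set of accessed components per endpoint
        for component in accessed:
            counts[component] = counts.get(component, 0) + 1
    # Keep components accessed on at least `threshold` of the endpoints.
    needed = {c for c, n in counts.items() if n / len(recordings) >= threshold}
    removed = initial_set - needed       # shipped but (almost) never used
    added = needed - initial_set         # missing but frequently accessed
    return (initial_set - removed) | added

initial = {"core.dll", "help.dat"}
recordings = [{"core.dll", "spell.dll"}, {"core.dll", "spell.dll"}, {"core.dll"}]
print(sorted(optimize(initial, recordings)))  # ['core.dll', 'spell.dll']
```

Here `help.dat` is dropped (no endpoint accessed it) and `spell.dll` is added (two of three endpoints hit it as a missing component), covering both branches of the claim's final "at least one of" limitation.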

US Pat. No. 10,216,504

SYSTEM AND METHOD FOR INSULATING A WEB USER INTERFACE APPLICATION FROM UNDERLYING TECHNOLOGIES IN AN INTEGRATION CLOUD SERVICE

ORACLE INTERNATIONAL CORP...

1. A system for insulating a web user interface application from runtime engines in a cloud service runtime, the system comprising:
a computer comprising one or more microprocessors;
a cloud service, executing on the computer, the cloud service comprising:
a web interface application for creating an integration flow between a source application and a target application; and
a runtime for executing the integration flow, the runtime comprising a plurality of runtime engines; and
an abstraction application programming interface that exposes a plurality of services to the web interface application, for use by the web interface application in designing, deploying and monitoring an integration project,
wherein the abstraction application programming interface operates to:
persist the integration project into a persistence store in a runtime-engine-neutral format, wherein the persistence store uses a pluggable persistence framework that insulates the integration project from the plurality of services exposed to the web interface application used by an operation manager to perform create, read, update, and delete operations on the integration project insulated within the persistence store during a development of the integration project with agnostic knowledge by the web interface application of the plurality of runtime engines;
use a template particular to a first runtime engine of the plurality of runtime engines to transform the integration project developed within the persistence store to a format specific to the first runtime engine of the plurality of runtime engines at deployment time, and
deploy the integration project transformed to the format specific to the first runtime engine on the first runtime engine for execution.

US Pat. No. 10,216,503

DEPLOYING, MONITORING, AND CONTROLLING MULTIPLE COMPONENTS OF AN APPLICATION

ElasticBox Inc., Broomfi...

1. A method for deploying an application, the method comprising:
electronically receiving a request to deploy a cloud-based application, the request comprising information about the application but not including the application or portions thereof, wherein the request further includes information indicating for the application at least one of minimum, maximum, or average memory requirements, processing requirements, storage requirements, or bandwidth requirements;
assigning a unique identifier to the received request;
selecting a server from a plurality of servers upon which to deploy the application;
sending a message to the server to allocate for the application at least one of memory, processing power, storage, or throughput on the server before the application is deployed;
causing installation of an agent program on the selected server;
storing a plurality of commands in a script queue in a computer memory, the commands comprising computer instructions for the installation and configuration of the application; and
automatically sending the unique identifier that identifies the received request to deploy the cloud-based application to the agent program and sending the commands to the agent program for execution of the commands on the server, the execution of the commands causing installation and configuration of the application on the server.
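A minimal sketch of that sequence (unique id, server selection, agent, script queue) follows. Server selection by least load and the two queued commands are invented details; the claim does not specify a selection policy.

```python
import uuid
from collections import deque

class Agent:
    """Stand-in for the agent program installed on the selected server."""
    def __init__(self):
        self.executed = []
    def execute(self, request_id, command):
        self.executed.append((request_id, command))

def deploy(request):
    request_id = str(uuid.uuid4())                 # unique id for this request
    # Illustrative selection policy: pick the least-loaded server.
    server = min(request["servers"], key=lambda s: s["load"])
    agent = Agent()                                # stands in for agent install
    queue = deque(["install app", "configure app"])  # script queue of commands
    while queue:
        agent.execute(request_id, queue.popleft()) # send id + commands to agent
    return request_id, server["name"], agent.executed

rid, name, done = deploy({"servers": [{"name": "a", "load": 3}, {"name": "b", "load": 1}]})
print(name, [c for _, c in done])  # b ['install app', 'configure app']
```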

US Pat. No. 10,216,501

GENERATING CODE IN STATICALLY TYPED PROGRAMMING LANGUAGES FOR DYNAMICALLY TYPED ARRAY-BASED LANGUAGE

The MathWorks, Inc., Nat...

1. A method, comprising:
generating, by one or more processors, second program code in a statically typed programming language, from first program code in a programming language supporting dynamic typing, the first program code containing or, when executed, generating or operating on a dynamically typed array, the generating the second program code comprising:
classifying, by the one or more processors, the dynamically typed array into one of multiple categories, based on an array content, an array usage, or a user input, and
the multiple categories including a homogeneous category, a heterogeneous category, a table structure category, or a soft homogenizable category,
selecting, by the one or more processors, a translation rule for translating the dynamically typed array into a statically typed representation based on a category in which the dynamically typed array is classified,
a first translation rule being selected when the dynamically typed array is classified into a first category of the multiple categories, or
a second translation rule, different than the first translation rule, being selected when the dynamically typed array is classified in a second category, different than the first category, of the multiple categories; and
generating, by the one or more processors, the second program code in the statically typed programming language based on the selected translation rule.
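The classify-then-select-rule structure can be sketched as below. The category names follow the claim, but the classification heuristics and the emitted C-like declarations are illustrative stand-ins, not MathWorks' actual translation rules.

```python
def classify(array):
    """Classify a dynamically typed array by its content (illustrative rules)."""
    types = {type(v) for v in array}
    if len(types) == 1:
        return "homogeneous"
    if types <= {int, float}:
        return "soft homogenizable"   # ints can be widened to doubles
    return "heterogeneous"

def translate(name, array):
    """Pick a translation rule based on the category, then emit static code."""
    rules = {
        "homogeneous": lambda: f"double {name}[{len(array)}];",
        "soft homogenizable": lambda: f"double {name}[{len(array)}];  /* widened */",
        "heterogeneous": lambda: f"struct {name}_t {name};  /* one field per element */",
    }
    return rules[classify(array)]()

print(translate("x", [1.0, 2.0, 3.0]))  # double x[3];
print(translate("y", [1, 2.5]))         # double y[2];  /* widened */
print(translate("z", [1, "a"]))         # struct z_t z;  /* one field per element */
```

The key property matching the claim is that different categories select different translation rules: a homogeneous array becomes a plain typed array, while a heterogeneous one needs a structure-like representation.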

US Pat. No. 10,216,500

METHOD AND APPARATUS FOR SYNCHRONIZATION ANNOTATION

Oracle International Corp...

1. A method for providing synchronization of a multi-threaded application, comprising:
analyzing a source file of the application to identify a synchronization annotation contained therein, the source file including procedural code of the application and the synchronization annotation defined for the procedural code, wherein the synchronization annotation includes one or more declarative statements that are different from procedural statements implementing business logic within the procedural code;
identifying a synchronization annotation processor for processing the synchronization annotation identified in the source file separately from the procedural code of the source file, wherein the synchronization annotation processor is configured to process one or more kinds of synchronization annotation;
invoking the synchronization annotation processor for processing the synchronization annotation to generate one or more code files; and
compiling the source file using a compiler to generate one or more class files, wherein the compiling includes,
compiling the procedural code within the source file to generate corresponding one or more class files for the procedural code;
compiling the one or more code files of the synchronization annotation into corresponding one or more class files for the code files, wherein the class files for the code files are maintained separately from the class files for the procedural code and are linked during compile time, the linking enables the class files associated with the one or more code files to use a synchronization service via an application programming interface (API), during execution of the multi-threaded application, to provide arbitration by coordinating activities of multiple threads to methods and data manipulated by classes within the class files associated with the procedural code, the maintaining of the class files for the procedural code separately from the class files for the synchronization annotation allow implementation of different synchronization strategies without requiring changes to the procedural code or the synchronization annotation.

US Pat. No. 10,216,499

METHOD, PROGRAM, AND SYSTEM FOR CODE OPTIMIZATION

International Business Ma...

1. A method comprising:
detecting, within one of a COBOL source code or a COBOL binary executable program, a sign assignment instruction having an input operand and an output operand identical in size to each other, the sign assignment instruction comprising a ZAP instruction operating on the input operand having a packed decimal format, the sign assignment instruction operating to assign a value of zero to a packed decimal data value of the input operand having a value of negative zero;
analyzing, based on the detecting, the input operand of the sign assignment instruction to determine whether a value of the input operand results from an add or subtract operation and whether the value is greater than the value prior to the operation;
based on the analyzing determining at least one of that the value of the input operand does not result from an add or subtract operation, or that the value is not greater than the value prior to the operation:
checking, based on a determination that addresses of the input operand and the output operand of the sign assignment instruction are not exactly the same and do not overlap each other, a possibility that the value of the input operand of the sign assignment instruction is negative zero;
inserting the sign assignment instruction only when there is the possibility that the value is negative zero; and
generating and inserting a copy instruction when there is no possibility that the value is negative zero;
based on the detecting the sign assignment instruction, generating a first instruction that checks a bit representation of an input value contained in the input operand for a possibility that the input value is negative zero, the first instruction further, based on determining that there is no possibility that the input value is negative zero:
skips the sign assignment instruction based on the input operand and the output operand having identical addresses; and
executes a copy instruction copying the input value in the input operand to the output operand based on a determination that the input and output addresses are different from each other and not overlapping each other;
inserting the first instruction into the one of the COBOL source code or the COBOL binary executable program to create an optimized COBOL binary executable program; and
outputting a modification of the COBOL source code or the COBOL binary executable program that further comprises the first instruction.
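The negative-zero check at the heart of the claim can be sketched on the common packed-decimal layout (digit nibbles followed by a sign nibble in the low half of the last byte, with 0xB/0xD as negative sign codes). This models the logic only; the real optimization operates on z/Architecture instructions, and this byte-level emulation is an illustrative assumption.

```python
NEGATIVE_SIGNS = {0xB, 0xD}  # packed-decimal negative sign codes

def is_negative_zero(packed: bytes) -> bool:
    """Check the bit representation for -0: all digit nibbles zero, negative sign."""
    sign = packed[-1] & 0x0F
    digits = [b >> 4 for b in packed] + [b & 0x0F for b in packed[:-1]]
    return sign in NEGATIVE_SIGNS and all(d == 0 for d in digits)

def assign_sign(dst_is_src: bool, packed: bytes) -> bytes:
    """Emulate the optimized path: only do the costly normalization for -0."""
    if is_negative_zero(packed):
        # ZAP-like fix-up: rewrite the sign nibble to +0 (0xC).
        return packed[:-1] + bytes([(packed[-1] & 0xF0) | 0x0C])
    if dst_is_src:
        return packed           # same address: skip the sign assignment entirely
    return bytes(packed)        # different addresses: plain copy instead of ZAP

print(is_negative_zero(bytes([0x00, 0x0D])))             # True
print(assign_sign(False, bytes([0x00, 0x0D])).hex())     # 000c
```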

US Pat. No. 10,216,497

SELECTIVE COMPILING METHOD, DEVICE, AND CORRESPONDING COMPUTER PROGRAM PRODUCT

Google LLC, Mountain Vie...

1. A method for compiling a software application for execution via a virtual machine of a hardware platform, the method comprising:
identifying a predetermined portion of a software code for the software application; and
prior to execution of the software application, compiling the predetermined portion of the software code to generate a software application part comprising binary instructions for execution on the hardware platform,
wherein the compiling comprises incorporating a set of reference instructions enabling the software application part to manage a table for referencing one or more objects used by the software application part, and wherein during execution of the software application:
(i) the virtual machine allocates a memory space to the software application part, (ii) the software application part uses the allocated memory space to store the table and the one or more objects, (iii) the virtual machine accesses the table to determine the one or more objects being used by the software application part within the allocated memory space, and (iv) a garbage collector of the virtual machine removes the determined one or more objects from the allocated memory space.

US Pat. No. 10,216,496

DYNAMIC ALIAS CHECKING WITH TRANSACTIONAL MEMORY

International Business Ma...

1. A method for dynamic run-time alias checking, the method comprising:
creating, by a dependency checker, a main thread and a helper thread of a transaction, wherein a synchronization mechanism between the main thread and the helper thread is enabled by paired instructions;
performing, by the dependency checker, a speculative computation of an optimized version of a first region of code in a rollback-only transactional memory associated with the main thread, wherein the optimized version of the first region of code is a compiler optimization performed on a region of code corresponding to an un-optimized version of the first region of code;
transmitting, by the dependency checker via the main thread, instructions setting the helper thread to a busy-wait mode;
performing a first check, by the dependency checker, for one or more alias dependencies in the un-optimized version of the first region of code;
responsive to determining in a predetermined amount of time an absence of alias dependencies in the un-optimized version of the first region of code, committing, by the dependency checker, the transaction; and
responsive to at least one of: a failure to determine results of the first check for one or more alias dependencies in a predetermined amount of time, and a determination within the predetermined amount of time that alias dependencies are present in the un-optimized version of the first region of code, performing, by the dependency checker, a rollback of the transaction and executing the un-optimized version of the first region of code.

US Pat. No. 10,216,495

PROGRAM VARIABLE CONVERGENCE ANALYSIS

NATIONAL INSTRUMENTS CORP...

1. A non-transitory computer accessible memory medium that stores program instructions executable by a processor to implement:
initiating an analysis of a first program;
in response to initiating the analysis, determining, based on dependencies of one or more variables in the first program, one or more state variables of the first program;
creating, based on the one or more state variables and dependencies of the one or more state variables, a second program corresponding to the first program;
executing the second program a plurality of times, comprising:
for each execution:
recording values of the one or more state variables;
incrementing an execution count;
comparing the values to corresponding values from previous executions of the second program; and
terminating said executing in response to determining that the values match corresponding values from at least one previous execution of the second program;
determining, based on the execution count, a convergence property for the first program that indicates a number of executions of the first program required to generate all possible values of the one or more variables, wherein the convergence property is useable to optimize the first program; and
displaying the convergence property,
wherein said executing the second program comprises one or more of:
running compiled code on a computer, wherein the compiled code is generated from at least a portion of the second program;
interpreting program statements of at least a portion of the second program; or
evaluating operations in a graph generated from at least a portion of the second program.
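The execute-record-compare-terminate loop can be sketched as below. The state-update function stands in for the generated second program, and the three-value counter example is invented for illustration.

```python
def converge(step, initial_state, max_runs=1000):
    """Run `step` repeatedly, recording state values, until a state repeats.

    Returns the execution count, i.e. the convergence property: how many
    executions are needed before the state variables produce no new values.
    """
    seen = [tuple(initial_state)]
    state, count = initial_state, 0
    while count < max_runs:
        state = step(state)             # one execution of the second program
        count += 1                      # increment the execution count
        if tuple(state) in seen:        # values match a previous execution
            return count
        seen.append(tuple(state))       # record values of the state variables
    raise RuntimeError("no convergence detected within max_runs executions")

# Example: one state variable cycling through 3 values; all possible values
# have appeared once the state repeats, after 3 executions.
print(converge(lambda s: [(s[0] + 1) % 3], [0]))  # 3
```

Deriving a second program that contains only the state variables and their dependencies is what makes this loop cheap: each "execution" touches just the state, not the whole first program.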

US Pat. No. 10,216,494

SPREADSHEET-BASED SOFTWARE APPLICATION DEVELOPMENT

1. A computer implemented method for generating an interactive web application comprising at least one web page, the method comprising:
determining one or more data sources within a spreadsheet, each data source having zero or more data records, wherein the data sources comprise a first portion of the spreadsheet;
determining one or more user interface templates from within the spreadsheet, each user interface template comprising a data format for one or more of the data sources, wherein the user interface templates comprise a second portion of the spreadsheet;
generating a web data store by extracting data records from at least the first portion of the spreadsheet;
generating a particular web page of the interactive web application based on one or more user interface templates corresponding to the particular web page of the one or more user interface templates from within the spreadsheet, wherein the particular web page references one or more data sources identified based on information in the one or more user interface templates corresponding to the particular web page;
responsive to a request for a presentation of the particular web page of the interactive web application, generating the presentation of the particular web page including one or more data records retrieved from the web data store, wherein the one or more data records are identified by and formatted according to the one or more user interface templates corresponding to the particular web page;
responsive to receiving user input via an input page of the interactive web application generated based on the spreadsheet, updating at least one data record of the web data store based on one or more rules identified based on at least one first user interface template of the one or more user interface templates;
updating the spreadsheet responsive to updating the web data store; and
responsive to a request for a second presentation of the particular web page of the interactive web application, generating the second presentation of the particular web page including at least one updated data record retrieved from the web data store, wherein content of the at least one updated data record is based at least in part on the user input received via the input page.
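The round trip the claim describes (spreadsheet → data store → rendered page → user input → store and spreadsheet updated → re-rendered page) can be sketched with a dict modeling the spreadsheet's two portions. The `data:`/`template:` key convention and the template syntax are illustrative assumptions.

```python
# A spreadsheet modeled as two regions: a data-source portion (records)
# and a user-interface-template portion (a data format for those records).
spreadsheet = {
    "data:products": [{"name": "Pen", "price": 2}, {"name": "Pad", "price": 5}],
    "template:products": "<li>{name}: ${price}</li>",
}

def build_store(sheet):
    """Extract data records from the first portion into a web data store."""
    return {k.split(":", 1)[1]: list(v) for k, v in sheet.items() if k.startswith("data:")}

def render_page(sheet, store, source):
    """Format each record according to the matching user interface template."""
    template = sheet[f"template:{source}"]
    return "<ul>" + "".join(template.format(**r) for r in store[source]) + "</ul>"

store = build_store(spreadsheet)
store["products"][0]["price"] = 3                  # user input updates the store...
spreadsheet["data:products"] = store["products"]   # ...and is written back to the sheet
print(render_page(spreadsheet, store, "products"))
# <ul><li>Pen: $3</li><li>Pad: $5</li></ul>
```

The second presentation picks up the updated record because both the store and the spreadsheet were updated before re-rendering, matching the claim's final limitation.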

US Pat. No. 10,216,493

DISTRIBUTED UI DEVELOPMENT HARMONIZED AS ONE APPLICATION DURING BUILD TIME

SAP SE, Walldorf (DE)

1. A system for distributed user interface development for harmonization as a unified web-based application, the system comprising:
an application frame repository hosting an application frame that defines a user interface for a web-based application, the application frame hosting one or more pages linked to the application frame, the application frame linked to a function library comprising a plurality of application functions;
a plurality of code versioning repositories linked to the application frame repository via a communications network, each of the plurality of code versioning repositories providing a page of the one or more pages linked to the application frame, each of the one or more pages having at least one application function of the plurality of application functions that is independently accessible from the application frame by a user that accesses the web-based application, each of the plurality of code versioning repositories configured to:
receive code representing the at least one application function for the page associated with corresponding code versioning repository, and
store the code in the corresponding code versioning repository, the code developed in a development environment independent from the application frame repository; and
an application builder module, comprising at least one computer processor, configured to:
retrieve the one or more pages from the corresponding code versioning repositories of the plurality of code versioning repositories,
assemble the one or more pages within the application frame to generate the unified web-based application at a build time, and
test, after the assembling, the unified web-based application for functional correctness of each of the one or more pages assembled in the unified web-based application.

US Pat. No. 10,216,492

CONFIGURATION AND MANAGEMENT OF MENUS

SONY INTERACTIVE ENTERTAI...

1. A method of customizing menus for a consumer electronics device, the method comprising:receiving a menu customization request from the consumer electronics device upon triggering of an event including changing of an IP address of the consumer electronics device;
preparing menu definitions for the menus to be customized on the consumer electronics device,
wherein the menu definitions restrict choices for parameters that are set through the menus, wherein the menu definitions include at least one menu action to specify a variety of actions to perform;
performing the variety of actions including: (a) running a program code on the consumer electronics device; (b) launching an Internet service; (c) navigating to a uniform resource locator (URL) including running a web application; and (d) setting parameters on the consumer electronics device including volume level, channel selection, and picture settings, wherein each menu of the menus includes a plurality of menu items,
wherein the menus are tailored based on the capabilities of the consumer electronics device based on a menu item definition indicating a functionality required by the consumer electronics device for each menu item,
wherein the same menu item definition is given to a plurality of consumer electronics devices with differing capabilities and each menu item is displayed only on the consumer electronics devices that support the indicated functionality;
generating menu configuration information using the menu definitions; and
transmitting the menu configuration information to the consumer electronics device.
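The capability-based tailoring in this claim (one shared menu item definition, with each item shown only on devices that support its required functionality) can be illustrated with a short sketch; `MENU_DEFINITION`, `build_menu`, and the capability strings are assumptions for demonstration, not the patent's own terms.

```python
# Hedged sketch: a single menu item definition shared across devices,
# filtered per device by the functionality each item requires.

MENU_DEFINITION = [
    {"label": "Volume",         "requires": "audio"},
    {"label": "Channel",        "requires": "tuner"},
    {"label": "Launch Browser", "requires": "internet"},
]

def build_menu(definition, device_capabilities):
    """Tailor the shared definition to one device's capabilities."""
    return [item["label"] for item in definition
            if item["requires"] in device_capabilities]

tv = {"audio", "tuner", "internet"}
soundbar = {"audio"}
print(build_menu(MENU_DEFINITION, tv))        # full menu on a capable device
print(build_menu(MENU_DEFINITION, soundbar))  # reduced menu on a limited device
```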