US Patent No. 9,690,507

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS


Patent No.: 9,690,507
Issue Date: June 27, 2017
Title: System and Method for Enabling High Read Rates to Data Element Lists
Inventors: William Brad Matthews, San Jose, CA (US);
Bruce H. Kwan, Sunnyvale, CA (US);
Mohammad K. Issa, Los Altos, CA (US);
Neil Barrett, Palo Alto, CA (US);
Avinash Gyanendra Mani, San Jose, CA (US)
Assignee: Innovium, Inc., San Jose, CA (US)

Claim 1 of US Patent No. 9,690,507

1. A memory system for a network device comprising:
a main memory configured to store data elements;
a link memory including a plurality of memory banks including a first memory bank and a different second memory bank, wherein
each memory bank has a latency for consecutively executed memory access operations,

wherein each memory bank of the plurality of memory banks is configured to store a plurality of nodes that each maintains
metadata representing (i) data-element pointers to the main memory for accessing data elements referenced by the data-element
pointers, and (ii) next-node pointers for accessing next nodes in the same memory bank,

wherein the data-element pointers in the plurality of memory banks point to data elements stored in memory locations in the
main memory to form one or more data packets, and

wherein the next-node pointers connect each node to a next node in the respective memory bank to form a respective separate
skip list of a plurality of skip lists, each skip list having an initial node and a final node, and all the nodes of each
skip list being stored in a respective single memory bank; and

a context manager including a respective head entry and a respective tail entry for each memory bank of the plurality of memory
banks,

wherein each head entry of the plurality of head entries is configured to include a unique address pointing to the initial
node of the skip list in the respective memory bank,

wherein each tail entry of the plurality of tail entries is configured to include a unique address pointing to the final node
of the skip list in the respective memory bank; and

circuitry configured to use the head and tail entries in the context manager to:
access a first initial node stored in the first memory bank for a first skip list of the plurality of skip lists;
(i) before accessing, for the first skip list, a next node stored in the first memory bank and (ii) within less time than
the latency for the first memory bank, access a second initial node stored in the second memory bank for a second skip list;
and

(i) after accessing the second initial node and (ii) after satisfying the latency for the first memory bank, access, for the
first skip list, the next node stored in the first memory bank.
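In plain terms, the claim describes hiding each memory bank's access latency by striping linked node lists across banks and rotating reads among them: while the first bank is still within its latency window after serving a read, the circuitry fetches the next list's initial node from a different bank, returning to the first bank only once its latency has elapsed. Below is a minimal Python sketch of that access pattern under stated assumptions: the `Node` layout, the `BankedListReader` class, and the cycle-counting latency model are illustrative simplifications for exposition, not the patented hardware implementation (the real system also maintains tail entries for enqueue, omitted here).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A link-memory node: a pointer into main memory plus a
    next-node pointer within the SAME bank (None at the final node)."""
    data_ptr: int
    next_ptr: Optional[int]

class BankedListReader:
    """Toy model of the claimed read path: each bank holds one
    per-bank list, a context manager tracks the head address for each
    bank, and reads rotate across banks so that two consecutive reads
    never hit the same bank within its access latency."""
    def __init__(self, main_memory, banks, latency):
        self.main = main_memory        # main memory: list of data elements
        self.banks = banks             # banks[b][addr] -> Node
        self.latency = latency         # min cycles between same-bank reads
        # head entry per bank: address of that bank's initial node
        self.heads = [0 if bank else None for bank in banks]
        # last cycle each bank was accessed (primed so cycle 0 is legal)
        self.last_access = [-latency] * len(banks)

    def read_all(self):
        """Round-robin over banks, emitting data elements in link order;
        a bank is skipped until its latency window has elapsed."""
        out, cycle = [], 0
        while any(h is not None for h in self.heads):
            b = cycle % len(self.banks)
            h = self.heads[b]
            if h is not None and cycle - self.last_access[b] >= self.latency:
                node = self.banks[b][h]
                out.append(self.main[node.data_ptr])
                self.heads[b] = node.next_ptr   # advance this bank's head entry
                self.last_access[b] = cycle
            cycle += 1
        return out

# Two per-bank lists striped over main memory ['A', 'B', 'C', 'D']:
banks = [[Node(0, 1), Node(2, None)],    # bank 0 list: 'A' -> 'C'
         [Node(1, 1), Node(3, None)]]    # bank 1 list: 'B' -> 'D'
reader = BankedListReader(['A', 'B', 'C', 'D'], banks, latency=2)
print(reader.read_all())  # -> ['A', 'B', 'C', 'D']
```

With two banks and a latency of two cycles, the reader sustains one read per cycle by alternating banks, whereas a single bank alone could serve at most one read every two cycles; this is the throughput gain the interleaving in the claim is aimed at.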