US Pat. No. 10,091,873

PRINTED CIRCUIT BOARD AND INTEGRATED CIRCUIT PACKAGE

Innovium, Inc., San Jose...

1. An apparatus comprising:
a printed circuit board that includes:
a multilayer lamination of one or more ground layers, one or more power layers, and a plurality of signal layers;
a plurality of vias that are located on a first surface of the printed circuit board; and
a plurality of bonding pads that couple a ball grid array of an integrated circuit package to the one or more ground layers, the one or more power layers, and the plurality of signal layers through the plurality of vias, wherein the plurality of bonding pads includes:
first bonding pads that are arranged in a first area of the printed circuit board and that are configured to transfer multiple pairs of first differential signals between the printed circuit board and the integrated circuit package, each of the first bonding pads being coupled to a via of the plurality of vias in the first area,
second bonding pads that are arranged in a second area of the printed circuit board and that are configured to transfer multiple pairs of second differential signals between the printed circuit board and the integrated circuit package, each of the second bonding pads being coupled to a via of the plurality of vias in the second area, and
third bonding pads that are arranged in a third area of the printed circuit board and that couple the integrated circuit package to ground of the printed circuit board, each of the third bonding pads being coupled to two or more vias of the plurality of vias in the third area,
wherein the third area is located between the first area and the second area.

US Pat. No. 9,450,604

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
a profiler configured to associate compression profiles with unpacked data units, each of the unpacked data units having allocated
space for storing field data for each field in a master set of fields, but only carrying values for a subset of fields in
the master set, each of the compression profiles indicating a specific combination of value-carrying fields in the master
set of fields, and packed value lengths for the indicated value-carrying fields; and

a data packer component configured to generate packed field data for a given unpacked data unit based on a given compression
profile, of the compression profiles, that the profiler associated with the given unpacked data unit, the packed field data
including values for the specific combination of value-carrying fields indicated by the given compression profile, the values
condensed within the packed field data to corresponding packed value lengths specified by the given compression profile, the
data packer component further configured to store or transmit the packed field data in association with information identifying
the given compression profile.
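
The profiler/packer pairing in this claim can be illustrated with a short sketch. The field names, profile table, and byte-level packing below are invented for illustration and are not taken from the patent:

```python
# Illustrative sketch of a profiler/packer pair; field names, profile
# contents, and byte-level details are assumptions, not the patent's.

MASTER_FIELDS = ["src", "dst", "vlan", "ecn"]  # hypothetical master set

# Each profile: which fields carry values, and a packed length (bytes) per field.
PROFILES = {
    0: {"fields": ["src", "dst"], "lengths": {"src": 2, "dst": 2}},
    1: {"fields": ["src", "dst", "vlan"], "lengths": {"src": 2, "dst": 2, "vlan": 1}},
}

def profile_for(unit):
    """Profiler: pick a profile whose field set matches the unit's value-carrying fields."""
    carrying = {f for f, v in unit.items() if v is not None}
    for pid, prof in PROFILES.items():
        if carrying == set(prof["fields"]):
            return pid
    raise ValueError(f"no profile for fields {carrying}")

def pack(unit, pid):
    """Packer: concatenate only the value-carrying fields, condensed to packed lengths."""
    prof = PROFILES[pid]
    packed = b"".join(
        unit[f].to_bytes(prof["lengths"][f], "big") for f in prof["fields"]
    )
    return pid, packed   # the profile id travels with the packed data

unit = {"src": 0x0A01, "dst": 0x0B02, "vlan": 7, "ecn": None}
pid = profile_for(unit)
pid, packed = pack(unit, pid)
```

Note how the unit's allocated-but-empty `ecn` slot contributes nothing to the packed output, which is the space saving the claim is after.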

US Pat. No. 9,894,670

IMPLEMENTING ADAPTIVE RESOURCE ALLOCATION FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. A resource management system for a network device comprising:
a resource manager that is configured to:
generate a sequence of a plurality of access time slots for a plurality of candidate entities to access a plurality of resources
associated with the network device,

wherein a candidate entity of the plurality of candidate entities is a network entity that requests access to one or more
of the plurality of resources, and

wherein an access time slot provides a time period during which a candidate entity associated with the access time slot can
access a resource of the plurality of resources;

determine a priority level associated with each of the plurality of candidate entities, wherein each of the candidate entities
is associated with a respective one of a predetermined plurality of priority levels, including:

determining a first priority level for a first candidate entity of the plurality of candidate entities, and
determining a second priority level for a second candidate entity of the plurality of candidate entities, wherein the
second priority level is lower than the first priority level;

in response to determining that the first candidate entity has the first priority level that is higher than the second priority
level of the second candidate entity, assign a greater number of access time slots to the first candidate entity, and a lower
number of access time slots to the second candidate entity;

determine that at least one of the first candidate entity or the second candidate entity is not using one or more access time
slots assigned to the respective candidate entity; and

in response to determining that at least one of the first candidate entity or the second candidate entity is not using one
or more access time slots assigned to the respective candidate entity, redistribute an unused access time slot presently assigned
to one of the first candidate entity or the second candidate entity to a third candidate entity that is without any assigned
access time slot; and

a resource monitor that is configured to detect usage of each of the access time slots by the respective candidate entities
that are assigned the access time slots.
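
The priority-weighted assignment and unused-slot redistribution described in this claim can be sketched as follows. The slot counts, priority weights, and usage sets are illustrative assumptions:

```python
# Hypothetical sketch of priority-weighted slot assignment and unused-slot
# redistribution; the weighting scheme and constants are illustrative.

def assign_slots(entities, total_slots):
    """Give each entity a slot count proportional to its priority weight."""
    total_weight = sum(p for _, p in entities)
    assignment, slot = {}, 0
    for name, priority in entities:
        count = round(total_slots * priority / total_weight)
        assignment[name] = list(range(slot, slot + count))
        slot += count
    return assignment

def redistribute(assignment, usage, needy):
    """Move one detected-unused slot to an entity that has none assigned."""
    for name, slots in assignment.items():
        for s in slots:
            if s not in usage.get(name, set()):   # resource monitor: slot is idle
                slots.remove(s)
                assignment.setdefault(needy, []).append(s)
                return assignment
    return assignment

# Entity A at priority 3 (higher), B at priority 1 (lower); 8 slots total.
assignment = assign_slots([("A", 3), ("B", 1)], 8)
# A uses only slots 0..4; its idle slot 5 is redistributed to C, which has none.
assignment = redistribute(assignment, {"A": {0, 1, 2, 3, 4}, "B": {6, 7}}, "C")
```

The higher-priority entity receives six of the eight slots, and the monitor-detected idle slot migrates to the otherwise slotless entity, mirroring the claim's redistribution step.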

US Pat. No. 9,841,913

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A network device comprising:
a main memory storing data elements from groups of data elements, wherein a group of data elements includes an ordered sequence
of data elements;

a child link memory including a plurality of memory banks, wherein each memory bank stores one or more entries, each entry
including: (i) a main memory location address that stores a data element identified by the entry, and (ii) a child link memory
location address storing a next data element in a group of data elements corresponding to the data element identified by the
entry;

a child manager including, for each memory bank of the plurality of memory banks in the child link memory, one or more head
nodes and one or more tail nodes, wherein each head node maintains (i) a data-element pointer to a main memory location for
accessing a particular head data element that is a first data element in the respective memory bank for a group of data elements
corresponding to the particular head data element, (ii) a child memory pointer for accessing a child link memory entry in
the same memory bank, and (iii) a data-element sequence number that represents a position of the particular head data element
in an ordered sequence of data elements in the group of data elements corresponding to the particular head data element, and
wherein each tail node maintains a data-element pointer to a main memory location for accessing a particular tail data element
that is a most recent data element in the respective memory bank for a group of data elements corresponding to the particular
tail data element;

a parent manager that includes one or more head entries for each memory bank in the plurality of memory banks in the child
link memory,

wherein each head entry of the one or more head entries stores (i) a snapshot pointer for accessing a particular snapshot
in a snapshot memory; and (ii) a snapshot sequence number to determine which snapshot from one or more snapshots stored in
the snapshot memory is to be accessed next,

wherein each snapshot stores snapshot list metadata for a particular group of data elements, wherein the snapshot list metadata
includes (i) one or more pointers to one or more nodes in the plurality of memory banks that include information for one or
more data elements in the particular group of data elements, and (ii) a data-element sequence number that represents a position
in an ordered sequence of data elements in the particular group of data elements; and

circuitry configured to use the one or more head entries to:
determine, based on the respective snapshot sequence number stored in each head entry, a snapshot order for accessing the
one or more snapshots;

access a snapshot in the snapshot memory based on the determined snapshot order; and
for each accessed snapshot, use the respective snapshot list metadata to determine a sequential order for accessing the one
or more nodes in the plurality of memory banks that include information for data elements in a group of data elements corresponding
to the snapshot.

US Pat. No. 10,055,153

IMPLEMENTING HIERARCHICAL DISTRIBUTED-LINKED LISTS FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. An apparatus, comprising:
a main memory configured to store data elements;
write circuitry configured to:
write a first data packet as first data elements to the main memory;
write a first child distributed linked list that includes first data-element pointers to the main memory to interconnect the first data elements stored in the main memory;
write a parent distributed linked list to include a first snapshot that represents (i) a first child pointer to the first child distributed linked list and (ii) a first sequence identifier associated with the first snapshot;
after writing the first data packet to the main memory, write a second data packet as second data elements to the main memory;
write a second child distributed linked list that includes second data-element pointers to the main memory to interconnect the second data elements stored in the main memory; and
update the parent distributed linked list to include a second snapshot that represents (i) a second child pointer to the second child distributed linked list and (ii) a second sequence identifier associated with the second snapshot; and
read circuitry configured to read the first data packet and the second data packet in sequence using respectively the first snapshot and the second snapshot included in the parent distributed linked list, wherein an order of the read is based on sequence identifiers.

US Pat. No. 9,785,367

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory including memory locations that store data elements;
a link memory including a plurality of memory banks that includes a first memory bank and a second memory bank, each memory
bank of the plurality of memory banks having a latency for consecutively executed memory access operations, wherein each memory
bank is configured to store a plurality of nodes that each store (i) a respective data-element pointer to the main memory
for accessing a respective memory location, and (ii) a respective next-node pointer for accessing a next node in the same
memory bank, wherein each of the data-element pointers in the plurality of memory banks points to a respective data element
stored in the main memory, and wherein one or more data packets are formed by linking the data elements stored in the main memory;

circuitry configured to:
store at least a first data element, a second data element, and a third data element in a first memory location, a second
memory location, and a third memory location of the main memory, respectively, wherein the first data element is to be read
before the second data element, and wherein the second data element is to be read before the third data element;

store, in a first node of a first memory bank of the plurality of memory banks, (i) a first data-element pointer to the first
memory location and (ii) a first next-node pointer for accessing a second node of the first memory bank;

after storing the first data-element pointer in the first node, determine that accessing the second node of the first memory
bank to store a second data-element pointer to the second memory location does not satisfy the latency for consecutively executed
memory access operations for the first memory bank;

in response to determining that accessing the second node of the first memory bank to store a second data-element pointer
to the second memory location of the main memory does not satisfy the latency for consecutively executed memory access operations
for the first memory bank, store, in a first node of a second memory bank of the plurality of memory banks, (i) the second
data-element pointer to the second memory location of the main memory and (ii) a second next-node pointer for accessing a
second node of the second memory bank;

after storing the second data-element pointer in the first node of the second memory bank, determine that accessing the second
node of the first memory bank to store a third data-element pointer to the third memory location of the main memory satisfies
the latency for consecutively executed memory access operations for the first memory bank; and

in response to determining that accessing the second node of the first memory bank to store a third data-element pointer to
the third memory location of the main memory satisfies the latency for consecutively executed memory access operations for
the first memory bank, store, in the second node of the first memory bank, (i) the third data-element pointer to the third
memory location and (ii) a third next-node pointer for accessing a third node of the first memory bank,

wherein at least the first next-node pointer and the third next-node pointer form a first skip list for the first memory bank,
and

wherein at least the second next-node pointer forms a second skip list for the second memory bank; and
a context manager including a first head entry for the first memory bank and a second head entry for the second memory bank,
the context manager configured to maintain at least the first skip list and the second skip list, wherein the first head entry
stores a first link-memory pointer pointing to the first node of the first memory bank, and wherein the second head entry
stores a second link-memory pointer pointing to the first node of the second memory bank.
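
The latency-driven bank rotation this claim describes can be sketched with assumed timing units. The latency constant, cycle numbering, and bank count below are illustrative:

```python
# Sketch (with assumed timing units) of rotating writes across memory banks
# when a bank's consecutive-access latency would otherwise be violated.

BANK_LATENCY = 2   # cycles a bank needs between consecutive accesses (assumed)

class LinkMemory:
    def __init__(self, num_banks):
        self.banks = [[] for _ in range(num_banks)]       # nodes per bank
        self.last_access = [-BANK_LATENCY] * num_banks    # last write cycle

    def store(self, cycle, data_ptr):
        """Store a data-element pointer in the first bank whose latency is satisfied."""
        for b in range(len(self.banks)):
            if cycle - self.last_access[b] >= BANK_LATENCY:
                self.banks[b].append(data_ptr)   # node joins bank b's skip list
                self.last_access[b] = cycle
                return b
        raise RuntimeError("no bank ready at this cycle")

lm = LinkMemory(num_banks=2)
b0 = lm.store(cycle=0, data_ptr=100)   # first element -> bank 0
b1 = lm.store(cycle=1, data_ptr=101)   # bank 0 busy (latency 2) -> bank 1
b2 = lm.store(cycle=2, data_ptr=102)   # bank 0 ready again -> back to bank 0
```

The pointers left behind in each bank (here, bank 0 holds the first and third entries) are what the claim calls that bank's skip list.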

US Pat. No. 9,690,507

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory configured to store data elements;
a link memory including a plurality of memory banks including a first memory bank and a different second memory bank, wherein
each memory bank has a latency for consecutively executed memory access operations,

wherein each memory bank of the plurality of memory banks is configured to store a plurality of nodes that each maintains
metadata representing (i) data-element pointers to the main memory for accessing data elements referenced by the data-element
pointers, and (ii) next-node pointers for accessing next nodes in the same memory bank,

wherein the data-element pointers in the plurality of memory banks point to data elements stored in memory locations in the
main memory to form one or more data packets, and

wherein the next-node pointers connect each node to a next node in the respective memory bank to form a respective separate
skip list of a plurality of skip lists, each skip list having an initial node and a final node, and all the nodes of each
skip list being stored in a respective single memory bank; and

a context manager including a respective head entry and a respective tail entry for each memory bank of the plurality of memory
banks,

wherein each head entry of the plurality of head entries is configured to include a unique address pointing to the initial
node of the skip list in the respective memory bank,

wherein each tail entry of the plurality of tail entries is configured to include a unique address pointing to the final node
of the skip list in the respective memory bank; and

circuitry configured to use the head and tail entries in the context manager to:
access a first initial node stored in the first memory bank for a first skip list of the plurality of skip lists;
(i) before accessing, for the first skip list, a next node stored in the first memory bank and (ii) within less time than
the latency for the first memory bank, access a second initial node stored in the second memory bank for a second skip list;
and

(i) after accessing the second initial node and (ii) after satisfying the latency for the first memory bank, access, for the
first skip list, the next node stored in the first memory bank.

US Pat. No. 9,742,436

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
a data unpacker component configured to unpack at least portions of data units represented by packed field data, each of the
data units having values for one or more fields in a master set of fields, the data unpacker component comprising:

an input configured to receive particular packed field data for a particular data unit, of the data units;
an input configured to receive particular compression profile information associated with the particular data unit;
profile processing logic configured to, based on the particular compression profile information associated with the particular
data unit, identify for which particular one or more fields, in the master set of fields, the particular packed field data
stores values;

parsing logic configured to extract values from the particular packed field data for the particular one or more fields that
the profile processing logic identifies;

outputs, each output of the outputs corresponding to a different field in the master set of fields and configured to, responsive
to the parsing logic extracting a particular value for the field corresponding to the output, output data based on the particular
value.
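
The unpacker structure in this claim (profile processing logic, parsing logic, one output per master field) can be illustrated briefly. The profile table and field names are assumptions carried over for the sketch:

```python
# Illustrative unpacker: profile information tells the parser which fields
# the packed bytes hold and how long each packed value is. The profile
# table and field names are assumptions for the sketch.

MASTER_FIELDS = ["src", "dst", "vlan", "ecn"]
PROFILES = {
    1: [("src", 2), ("dst", 2), ("vlan", 1)],   # (field, packed length in bytes)
}

def unpack(packed, profile_id):
    """Extract per-field values; fields absent from the profile come out None."""
    values = dict.fromkeys(MASTER_FIELDS)       # one 'output' per master field
    offset = 0
    for field, length in PROFILES[profile_id]:  # profile processing logic
        values[field] = int.from_bytes(packed[offset:offset + length], "big")
        offset += length                        # parsing logic
    return values

out = unpack(bytes([0x0A, 0x01, 0x0B, 0x02, 0x07]), profile_id=1)
```

Each entry in the returned dictionary plays the role of one of the claimed per-field outputs; a field the profile does not identify simply produces no value.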

US Pat. No. 9,767,014

SYSTEM AND METHOD FOR IMPLEMENTING DISTRIBUTED-LINKED LISTS FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory configured to store data elements;
a link memory including a plurality of memory banks, each of the memory banks configured to store a plurality of nodes that
each stores (i) a respective data-element pointer to the main memory for accessing a respective data element referenced by
the respective data-element pointer, and (ii) a respective sequence identifier for determining an order for accessing the
plurality of memory banks, wherein the data-element pointers in the plurality of memory banks point to the data elements stored
in the main memory to form a list of data elements that represent a data packet;

a free-entry manager configured to generate an available bank set including one or more locations in the link memory; and
a context manager configured to maintain the metadata for forming the list of data elements, the context manager including
a plurality of head entries that correspond to the plurality of memory banks,

wherein each head entry of the plurality of head entries is configured to store (i) a respective link-memory pointer pointing
to a respective node in the respective memory bank of the link memory and (ii) the respective sequence identifier for the
respective node;

circuitry configured to use the head entries in the context manager to:
determine, based on the respective sequence identifier stored in each head entry of the plurality of head entries, the order
for accessing the plurality of memory banks; and

access the plurality of memory banks based on the determined order to reconstruct the data packet.
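
The sequence-identifier-ordered bank access in this claim reduces to a sort over the head entries. The data layout below is an assumption for illustration:

```python
# Sketch of reading memory banks in sequence-identifier order via per-bank
# head entries; the data layout is an illustrative assumption.

# Per-bank head entry: (link-memory pointer, sequence identifier).
heads = {0: ("node_a", 2), 1: ("node_b", 0), 2: ("node_c", 1)}
nodes = {"node_a": b"C", "node_b": b"A", "node_c": b"B"}  # data each node references

def reconstruct(heads, nodes):
    """Visit banks ordered by their head entries' sequence identifiers."""
    order = sorted(heads, key=lambda bank: heads[bank][1])
    return b"".join(nodes[heads[bank][0]] for bank in order)

packet = reconstruct(heads, nodes)
```

Even though the banks are numbered 0, 1, 2, the sequence identifiers dictate the visit order 1, 2, 0, reassembling the packet's elements correctly.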

US Pat. No. 9,654,137

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more memories storing compression profiles that indicate different extraction lengths and extraction offsets for fields
in a master set of fields;

a vector profiler component configured to input vectors having portions allocated to different fields in the master set, and
to output associated compression profiles selected for the inputted vectors, a given compression profile selected for a given
vector based at least on for which of the different fields in the master set the given vector carries values;

a data packing component configured to input the vectors and the associated compression profiles, and to output packed vectors
generated for the vectors using the associated compression profiles, a given packed vector generated from a given vector based
on given extraction offsets and given extraction lengths indicated by a given compression profile that was selected for the
given vector, the given packed vector outputted in association with information identifying the given compression profile.
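
The offset/length-driven extraction this claim recites can be shown with a fixed-layout vector. The layout and profile below are invented for illustration:

```python
# Sketch of offset/length-driven packing of a fixed-layout vector; the
# vector layout and profile below are invented for illustration.

# Unpacked vector layout (assumed): src at bytes 0..4, dst at 4..8, meta at 8..12.
PROFILE = {"id": 3, "extract": [(0, 2), (4, 2)]}   # (offset, length) per kept field

def pack_vector(vector, profile):
    """Copy only the profiled (offset, length) slices out of the vector."""
    packed = b"".join(vector[o:o + n] for o, n in profile["extract"])
    return profile["id"], packed    # packed vector travels with its profile id

vector = bytes(range(12))           # a 12-byte unpacked vector
pid, packed = pack_vector(vector, PROFILE)
```

Because the profile carries both where each field sits in the unpacked vector (offset) and how much of it to keep (length), the packer itself needs no knowledge of the field semantics.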

US Pat. No. 10,067,690

SYSTEM AND METHODS FOR FLEXIBLE DATA ACCESS CONTAINERS

Innovium, Inc., San Jose...

1. A memory system for a network device, the memory system comprising:
a first network interface configured to receive network data in units of a first data width;
a second network interface configured to receive network data in units of a second data width different from the first data width;
a third network interface configured to output network data in units of a third data width;
a packing data buffer including a plurality of memory banks arranged in a plurality of rows and a plurality of columns, the packing data buffer configured to store, in the plurality of memory banks, received network data of the first data width or the second data width as storage data elements in units of a fixed data width, wherein the third data width of the third network interface is a multiple of the fixed data width;
a free address manager configured to generate an available bank set that includes one or more free memory banks in the plurality of memory banks; and
distributed link memory configured to maintain one or more pointers to interconnect a set of one or more memory locations of the plurality of memory banks in the packing data buffer to generate at least one list to maintain a sequential relationship between the network data stored in the plurality of memory banks.

US Pat. No. 9,941,899

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more data unpacker components, each data unpacker component configured to:
receive packed field data for data units;
receive compression profiles associated with the data units, each particular compression profile of the compression profiles identifying a particular set of fields and particular packed value lengths for at least some of the fields;
unpack the packed field data into values based on the compression profiles associated with the data units; and
output unpacked field data for the data units, the outputting comprising, for each particular data unit of the data units, outputting particular values unpacked for the particular set of fields identified by the particular compression profile associated with the particular data unit.

US Pat. No. 9,753,660

SYSTEM AND METHOD FOR IMPLEMENTING HIERARCHICAL DISTRIBUTED-LINKED LISTS FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory configured to store data elements;
write circuitry configured to:
write a first portion of a first data packet as first data elements to the main memory;
write a first child distributed-linked list to maintain list metadata that includes first data-element pointers to the main
memory to interconnect the first data elements stored in the main memory;

after writing the first portion of the first data packet and before writing a second portion of the first data packet to the
main memory:

write a first portion of a second data packet as second data elements to the main memory;
write a parent distributed linked list that includes a first snapshot that represents (i) a first child pointer to the first
child distributed-linked list and (ii) a first sequence identifier associated with the first snapshot; and

write a second child distributed-linked list that includes second data-element pointers to the main memory to interconnect
the second data elements stored in the main memory;

after writing the first portion of the second data packet to the main memory, write the second portion of the first data packet
as third data elements to the main memory;

update the parent distributed linked list to include a second snapshot that represents (i) a second child pointer to the second
child distributed-linked list and (ii) a second sequence identifier associated with the second snapshot;

write a third child distributed-linked list that includes third data-element pointers to the main memory to interconnect the
third data elements stored in the main memory; and

after writing the second portion of the first data packet to the main memory, update the parent distributed linked list to
include a third snapshot that represents (i) a third child pointer to the third child distributed-linked list and (ii) a third
sequence identifier associated with the third snapshot, wherein a value of the third sequence identifier is between values
of the first sequence identifier and the second sequence identifier; and

read circuitry configured to read the main memory to reconstruct the first data packet and the second data packet in sequence
using (i) snapshots included in the parent distributed linked list and (ii) data-element pointers included in the child distributed-linked
lists, wherein the first data packet and the second data packet are read in an order according to the
values of the sequence identifiers associated with the snapshots.
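
The claim's key move, giving a late-written portion a sequence identifier *between* two existing ones, can be sketched with a simple numeric id scheme. The fractional ids are an illustrative assumption:

```python
# Sketch of how a sequence identifier placed between two existing ones lets
# an interleaved later-written portion read back in packet order; the
# numeric id scheme is an illustrative assumption.

snapshots = []

def add_snapshot(seq_id, data):
    snapshots.append({"seq": seq_id, "data": data})

# Writes happen in this (interleaved) order:
add_snapshot(1.0, b"pkt1-part1")   # first portion of packet 1
add_snapshot(2.0, b"pkt2")         # packet 2 arrives in the middle
add_snapshot(1.5, b"pkt1-part2")   # late second portion of packet 1:
                                   # its id is chosen between 1.0 and 2.0

def read_in_order():
    """Read circuitry: follow sequence identifiers, not write order."""
    return [s["data"] for s in sorted(snapshots, key=lambda s: s["seq"])]

order = read_in_order()
```

Although packet 2 was written before packet 1 finished, the between-valued identifier restores packet 1's portions to adjacency on the read side.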

US Pat. No. 9,823,867

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A network device comprising:
a main memory storing data elements from groups of data elements, wherein a group of data elements includes an ordered sequence
of data elements;

a child link memory including a plurality of memory banks, wherein each memory bank stores one or more entries, each entry
including: (i) a main memory location address that stores a data element identified by the entry, and (ii) a child link memory
location address storing a next data element in a group of data elements corresponding to the data element identified by the
entry;

a child manager including, for each memory bank of the plurality of memory banks in the child link memory, one or more head
nodes and one or more tail nodes, wherein each head node maintains (i) a data-element pointer to a main memory location for
accessing a particular head data element that is a first data element in the respective memory bank for a group of data elements
corresponding to the particular head data element, (ii) a child memory pointer for accessing a child link memory entry in
the same memory bank, and (iii) a data-element sequence number that represents a position of the particular head data element
in an ordered sequence of data elements in the group of data elements corresponding to the particular head data element, and
wherein each tail node maintains a data-element pointer to a main memory location for accessing a particular tail data element
that is a most recent data element in the respective memory bank for a group of data elements corresponding to the particular
tail data element;

a parent manager that includes one or more head entries for each memory bank in the plurality of memory banks in the child
link memory,

wherein each head entry of the one or more head entries stores (i) a snapshot pointer for accessing a particular snapshot
in a snapshot memory; and (ii) a snapshot sequence number to determine which snapshot from one or more snapshots stored in
the snapshot memory is to be accessed next,

wherein each snapshot stores snapshot list metadata for a particular group of data elements, wherein the snapshot list metadata
includes (i) one or more pointers to one or more nodes in the plurality of memory banks that include information for one or
more data elements in the particular group of data elements, and (ii) a data-element sequence number that represents a position
in an ordered sequence of data elements in the particular group of data elements; and

circuitry configured to use the one or more head entries to:
determine, based on the respective snapshot sequence number stored in each head entry, a snapshot order for accessing the
one or more snapshots;

access a snapshot in the snapshot memory based on the determined snapshot order; and
for each accessed snapshot, use the respective snapshot list metadata to determine a sequential order for accessing the one
or more nodes in the plurality of memory banks that include information for data elements in a group of data elements corresponding
to the snapshot.

US Pat. No. 9,929,970

EFFICIENT RESOURCE TRACKING

Innovium, Inc., San Jose...

1. A networking apparatus comprising:
communication hardware interfaces coupled to one or more networks, the communication hardware interfaces configured to receive and send messages;
networking hardware configured to process routable messages received over the communication hardware interfaces;
one or more first memories configured to store full status counters;
one or more second memories configured to store intermediate counters, the one or more second memories being different than the one or more first memories, each of the intermediate counters corresponding to a different one of the full status counters;
a counting subsystem configured to increment the intermediate counters responsive to the communication hardware interfaces receiving the routable messages and to decrement the intermediate counters responsive to at least one of: the communication hardware interfaces sending the routable messages, or the networking hardware disposing of the routable messages;
a status update subsystem configured to update the full status counters by: adding the intermediate counters to the full status counters to which the intermediate counters respectively correspond, and resetting the intermediate counters;
an update controller configured to identify times for the status update subsystem to update specific full status counters of the full status counters;
a threshold application subsystem configured to compare thresholds to the full status counters, and to assign states to the full status counters based on the comparing; and
one or more third memories configured to store status indicators of the assigned states.
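
The two-tier counter arrangement in this claim (fast intermediate counters absorbing per-message updates, periodically folded into authoritative full counters that are then thresholded) can be sketched briefly. The threshold value and counter sizes are assumptions:

```python
# Sketch of a two-tier counter: small intermediate counters absorb
# per-message updates; a periodic flush folds them into the full counters,
# which are then compared to a threshold. Constants are assumptions.

THRESHOLD = 10   # assumed congestion threshold

class Tracker:
    def __init__(self, n):
        self.intermediate = [0] * n   # fast, frequently updated
        self.full = [0] * n           # authoritative totals
        self.state = ["ok"] * n

    def on_receive(self, i): self.intermediate[i] += 1
    def on_send(self, i):    self.intermediate[i] -= 1

    def flush(self, i):
        """Status update: add the intermediate delta to the full counter,
        reset the intermediate counter, and re-evaluate the threshold."""
        self.full[i] += self.intermediate[i]
        self.intermediate[i] = 0
        self.state[i] = "congested" if self.full[i] >= THRESHOLD else "ok"

t = Tracker(2)
for _ in range(12):
    t.on_receive(0)       # 12 messages arrive against counter 0
t.on_send(0)              # one departs
t.flush(0)                # fold the net delta (+11) into the full counter
```

Splitting the counters this way lets the hot path touch only the small intermediate memory, while the full counters and their status indicators are updated on the controller's schedule.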

US Pat. No. 10,244,629

PRINTED CIRCUIT BOARD INCLUDING MULTI-DIAMETER VIAS

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more connection elements that are configured to transfer signals; and
a printed circuit board (PCB) that includes:
a multilayer lamination of one or more ground layers, one or more power layers, and a plurality of signal layers,
a plurality of vias that pass through one or more layers of the multilayer lamination,
a plurality of traces that are distinct from the one or more connection elements, each trace of the plurality of traces being in contact with a respective via of the plurality of vias and being configured to transfer the signals,
a plurality of pads that are directly in contact with the one or more connection elements and that are configured to transfer the signals between the one or more connection elements and the plurality of traces,
wherein a first via of the plurality of vias includes:
a first portion that has a first diameter, and
a second portion that has a second diameter that is smaller than the first diameter,
wherein a second via of the plurality of vias includes:
a third portion that has a third diameter, and
a fourth portion that has a fourth diameter that is smaller than the third diameter,
wherein the first portion of the first via is adjacent to the fourth portion of the second via and the second portion of the first via is adjacent to the third portion of the second via,
wherein the one or more connection elements include:
a first element that is coupled to a first surface of the PCB, and
a second element that is coupled to a second surface of the PCB, the second surface being different from the first surface, and
wherein the first element includes a plurality of surface mount contacts that are respectively coupled to the plurality of pads.

US Pat. No. 10,230,639

ENHANCED PREFIX MATCHING

Innovium, Inc., San Jose...

1. A network device comprising:
one or more memories storing a prefix table represented as a prefix index coupled to a plurality of prefix arrays;
forwarding logic configured to search for a longest prefix match in the prefix table for a particular input key by:
searching for a first longest prefix match for the particular input key in the prefix index;
reading address information that corresponds to the first longest prefix match;
reading a particular prefix array from a particular location indicated by the address information;
examining each prefix entry in the particular prefix array until determining that a particular prefix entry corresponds to a second longest prefix match in the particular prefix array, the second longest prefix match being the longest prefix match for the particular input key in the prefix table;
prefix table management logic configured to:
generate a prefix tree representing the prefix table, each prefix entry in the prefix table having a corresponding node in the prefix tree;
divide the prefix tree into non-overlapping subtrees;
for each subtree of the subtrees:
store, within a prefix array for the subtree, a set of all prefix entries in the prefix table that correspond to nodes in the subtree;
add, to the prefix index, an entry comprising: a location at which the prefix array is stored and a prefix corresponding to a root node of the subtree, each subtree having a single root node.
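
The two-stage lookup in claim 1 can be sketched in Python: a first longest-prefix match in the index locates a prefix array, and a second match within that array yields the final result. The index layout and helper names below are illustrative, not taken from the patent.

```python
def lpm(prefixes, key):
    """Longest prefix match among (prefix, data) pairs for a bit-string key."""
    best = None
    for prefix, data in prefixes:
        if key.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, data)
    return best

def two_stage_lookup(prefix_index, prefix_arrays, key):
    # Stage 1: first longest prefix match in the index yields address info.
    hit = lpm(prefix_index, key)
    if hit is None:
        return None
    _, location = hit
    # Stage 2: second longest prefix match within the referenced prefix array.
    return lpm(prefix_arrays[location], key)

# Index entries: (subtree-root prefix, prefix array location).
index = [("10", 0), ("1011", 1)]
arrays = {
    0: [("10", "A"), ("100", "B")],
    1: [("1011", "C"), ("10110", "D")],
}
print(two_stage_lookup(index, arrays, "101101"))  # ('10110', 'D')
```

The subtree split keeps each prefix array small, so stage 2 can examine every entry in a bounded amount of work.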

US Pat. No. 10,277,518

INTELLIGENT PACKET QUEUES WITH DELAY-BASED ACTIONS

Innovium, Inc., San Jose...

14. A method comprising:
assigning network packets received by a network device to packet queues;
based on the packet queues, determining when to process specific packets of the network packets received by the network device, the specific packets dequeued from the packet queues when processed;
tracking a delay associated with a particular packet queue of the packet queues, the tracking including designating one of the network packets assigned to the particular packet queue as a marker packet, the delay based on a duration of time for which the currently designated marker packet has been in the particular packet queue, another network packet assigned to the particular packet queue being designated as the marker packet whenever the currently designated marker packet departs from the queue;
when the delay exceeds a monitoring threshold, annotating one or more network packets departing from the particular packet queue with a tag indicating that the monitoring threshold has been surpassed;
sending copies of the one or more network packets that were annotated with the tag to a first component configured to, based on the tag and the one or more packets, perform one or more of: changing one or more network settings of the network device, updating packet statistics for the network device, or storing copies of the one or more network packets in a log or data buffer.
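
The marker-packet delay tracking of claim 14 can be sketched as a small queue class. The timestamp units, threshold, and tagging representation below are illustrative assumptions; a real device would use hardware clocks and packet metadata.

```python
import collections

class DelayTrackedQueue:
    """Sketch of claim 14: track queue delay via a designated marker packet."""

    def __init__(self, monitoring_threshold):
        self.q = collections.deque()           # (packet, enqueue_time) pairs
        self.threshold = monitoring_threshold
        self.marker_time = None                # enqueue time of the marker packet

    def enqueue(self, packet, now):
        if self.marker_time is None:           # designate this packet as the marker
            self.marker_time = now
        self.q.append((packet, now))

    def dequeue(self, now):
        packet, t = self.q.popleft()
        delay = now - self.marker_time         # time the marker has been queued
        tagged = delay > self.threshold        # annotate departing packet if delayed
        if t == self.marker_time:              # marker departed: designate another
            self.marker_time = self.q[0][1] if self.q else None
        return packet, tagged

q = DelayTrackedQueue(monitoring_threshold=5)
q.enqueue("p1", now=0)
q.enqueue("p2", now=1)
print(q.dequeue(now=7))  # ('p1', True): delay 7 exceeds the threshold of 5
```

Tagged departures would then be copied to the visibility component described in the claim's final step.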

US Pat. No. 10,251,270

DUAL-DRILL PRINTED CIRCUIT BOARD VIA

Innovium, Inc., San Jose...

1. A printed circuit board having multiple layers of circuitry, the printed circuit board comprising:
a first layer having a first cylindrical opening with a first diameter, the first cylindrical opening formed through at least the first layer and formed about a particular axis;
a second layer having a second cylindrical opening with a second diameter, the second cylindrical opening formed through at least the second layer and formed about the particular axis,
wherein the first cylindrical opening is a portion of a conductive via, and
wherein the second diameter is smaller than the first diameter; and
a third layer having a third cylindrical opening with a third diameter, the third cylindrical opening formed through at least the third layer and formed about the particular axis,
wherein the third layer is arranged between the first layer and the second layer,
wherein the third diameter is smaller than the second diameter,
wherein the third cylindrical opening is a portion of the conductive via, and
wherein the second cylindrical opening is non-conductive.

US Pat. No. 10,263,919

BUFFER ASSIGNMENT BALANCING IN A NETWORK DEVICE

Innovium, Inc., San Jose...

1. A system comprising:
one or more data unit processors configured to process data units;
a plurality of buffers comprising entries configured to store the data units as the data units await processing by the one or more data unit processors;
an accounting component configured to generate buffer state information indicating levels of utilization for the buffers;
prioritization logic configured to generate an ordered list of the buffers sorted based at least partially on the indicated levels of utilization;
reprioritization logic configured to modify the ordered list by reordering sets of one or more buffers within the ordered list through exchanging positions of the sets within the ordered list, the reprioritization logic varying the sets selected for reordering between defined time periods;
a buffer writer configured to write the data units to the buffers as the data units are received, the buffer writer selecting which of the buffers to write particular data units to in an order indicated by the modified ordered list.
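
The prioritization and reprioritization steps of claim 1 can be sketched in a few lines. The choice of which set to exchange per period is an illustrative assumption; the claim only requires that the exchanged sets vary between defined time periods.

```python
def prioritize(utilization):
    """Ordered list of buffer ids, least-utilized first (the claim's sorted list)."""
    return sorted(utilization, key=lambda b: utilization[b])

def reprioritize(ordered, period):
    """Illustrative reordering: exchange the positions of one period-dependent
    adjacent pair so writes do not always favor the same least-utilized buffer."""
    result = list(ordered)
    i = period % (len(result) - 1)
    result[i], result[i + 1] = result[i + 1], result[i]
    return result

util = {"b0": 10, "b1": 3, "b2": 7}
order = prioritize(util)                 # ['b1', 'b2', 'b0']
print(reprioritize(order, period=0))     # ['b2', 'b1', 'b0']
print(reprioritize(order, period=1))     # ['b1', 'b0', 'b2']
```

The buffer writer would then pick write targets in the modified order, spreading load that a purely utilization-sorted list would concentrate.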

US Pat. No. 10,218,589

EFFICIENT RESOURCE STATUS REPORTING APPARATUSES

Innovium, Inc., San Jose...

1. A networking apparatus comprising:
communication hardware interfaces coupled to one or more networks, the communication hardware interfaces configured to receive and send messages;
a switching subsystem configured to process routable messages received over the communication hardware interfaces;
a tracking subsystem configured to track resources used by the apparatus while processing the routable messages, at least by tracking an aggregate count of resources assigned for each object in a first set of objects, each object in the first set corresponding to one of: an ingress port, egress port, processing queue, or group of ports;
a status update system configured to update resource status information for each object in the first set by comparing a current aggregate count of resource assignments for the object to one or more thresholds for the object, the resource status information including a priority indicator indicating whether the object has a priority status;
a reporting subsystem configured to send, to a receiver, granular measures of resource assignments for priority objects within the first set, the priority objects being objects that currently have the priority status, each of the granular measures for a particular object reflecting how many resources have been assigned to a different combination of the particular object with another object in a second set of objects;
wherein the reporting subsystem is further configured to send the granular measures of resource assignments for the priority objects more frequently than granular measures of resource assignments for other objects in the first set that do not have the priority status.

US Pat. No. 10,313,255

INTELLIGENT PACKET QUEUES WITH ENQUEUE DROP VISIBILITY AND FORENSICS

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more network interfaces configured to receive packets over one or more networks;
a packet processor configured to:
assign the packets to packet queues;
responsive to a failure to add a particular packet to a particular packet queue to which the particular packet was assigned, designate a queue forensics feature of the particular packet queue as active;
traffic management logic configured to:
based on the packet queues, determine when to process specific packets of the received packets, the specific packets dequeued from the packet queues when processed;
while the queue forensics feature of the particular packet queue is designated as active, annotate one or more packets departing from the particular packet queue with a tag indicating that a drop event occurred with respect to the particular packet queue while the one or more packets were in the particular packet queue;
deactivate the queue forensics feature when a first packet in the particular packet queue has been dequeued from the particular packet queue;
a visibility component configured to, based on the tag and the one or more packets, perform one or more of: changing one or more settings of the apparatus, storing copies of the one or more packets in a log or data buffer, updating packet statistics, or sending copies of the one or more packets to an external device for analysis.

US Pat. No. 10,320,691

VISIBILITY PACKETS

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more communication interfaces configured to receive packets from one or more devices over a network;
queue management logic configured to queue the packets in one or more processing queues while the packets await processing by forwarding logic;
the forwarding logic, configured to:
process first packets of the packets and, based thereon, forward the first packets to destinations identified by the first packets;
determine that a particular packet of the packets is to be dropped without being forwarded from the apparatus to a destination associated with a destination address identified by the particular packet;
in response to the determining that the particular packet is to be dropped, tag the particular packet with a visibility tag, the visibility tag including an identifier of an error or type of drop that led to the forwarding logic determining to drop the particular packet;
further in response to the determining that the particular packet is to be dropped, forward at least a starting portion of the particular packet, with the visibility tag, to a visibility subsystem instead of the destination associated with the destination address identified by the particular packet.

US Pat. No. 10,716,207

PRINTED CIRCUIT BOARD AND INTEGRATED CIRCUIT PACKAGE

Innovium, Inc., San Jose...

1. An apparatus comprising:
a printed circuit board that includes:
a multilayer lamination of layers that includes one or more ground layers, one or more power layers, and a plurality of signal layers;
a plurality of vias that are formed through the multilayer lamination of layers; and
a plurality of bonding pads that provide coupling for a ball grid array of an integrated circuit (IC) package to the one or more ground layers, the one or more power layers, and the plurality of signal layers through the plurality of vias,
wherein the multilayer lamination of layers includes:
one or more first signal layers of the plurality of signal layers for transferring a first type of signal; and
one or more second signal layers of the plurality of signal layers for transferring a second type of signal,
wherein the one or more first signal layers are arranged in a first section of the multilayer lamination of layers and the one or more second signal layers are arranged in a different second section of the multilayer lamination of layers,
wherein the one or more first signal layers in the first section are separated from the one or more second signal layers in the second section by at least one ground layer of the one or more ground layers,
wherein the first section is closer than the second section to a surface of the printed circuit board that includes the plurality of bonding pads, and
wherein the one or more second signal layers transfer signals closer to a central area of the printed circuit board compared to the one or more first signal layers, the central area corresponding to an area of the printed circuit board that couples the ball grid array of the integrated circuit (IC) package.

US Pat. No. 10,389,639

DYNAMIC WEIGHTED COST MULTIPATHING

Innovium, Inc., San Jose...

1. A method comprising:
identifying a group of paths from a network device to a particular destination address within a network;
assigning weights to each path in the group of paths;
determining to send particular packets from the network device to the particular destination address;
using load-balancing based at least partially upon the weights, dynamically selecting, by the network device, from the identified group of paths to the particular destination address, particular paths along which to send the particular packets from the network device to the particular destination address, said dynamically selecting comprising assigning a first packet a first path between the network device and the particular destination address and assigning a second packet a second path between the network device and the particular destination address;
collecting state information for each path in the group of paths;
determining metrics associated with the paths in the group of paths based on the collected state information;
dynamically adjusting the weights assigned to the paths based on the metrics associated with the paths.
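
The weighted selection and dynamic weight adjustment in claim 1 can be sketched as follows. Hashing packet fields to pick a path is replaced here by a seeded PRNG, and the inverse-metric weighting rule is an illustrative assumption.

```python
import random

def adjust_weights(weights, metrics):
    """Illustrative dynamic adjustment: scale each path's weight inversely to
    its measured metric (e.g. latency), so better paths carry more traffic."""
    return [w / m for w, m in zip(weights, metrics)]

def select_path(paths, weights, flow_seed):
    """Weighted selection among the group of paths. A real device hashes
    packet header fields; a seeded PRNG stands in for that here."""
    rng = random.Random(flow_seed)
    return rng.choices(paths, weights=weights, k=1)[0]

paths = ["via_A", "via_B"]
weights = adjust_weights([1.0, 1.0], metrics=[2.0, 8.0])  # A measures 4x better
print(weights)  # [0.5, 0.125]
```

Recomputing the weights as fresh state information arrives is what makes the multipathing "dynamic" rather than the static equal-cost variety.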

US Pat. No. 10,511,538

EFFICIENT RESOURCE TRACKING

Innovium, Inc., San Jose...

1. An apparatus comprising:
a first memory storing a delayed resource counter of an amount of a resource utilized for a particular objective within the apparatus;
a second memory, other than the first memory, storing a delayed status indicator that indicates a status associated with the delayed resource counter;
a third memory, other than the first memory, storing an intermediate resource counter indicating a change in the amount of the resource utilized for the particular objective since the delayed resource counter was last updated;
a tracking subsystem configured to update the intermediate resource counter in response to each of a plurality of events;
a resource status update subsystem configured to:
update the delayed resource counter, based on the intermediate resource counter, less frequently than the events occur;
update the delayed status indicator, based on the delayed resource counter, less frequently than the events occur.
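
The split between a per-event intermediate counter and a less frequently updated delayed counter and status indicator can be sketched as below. The flush-every-four-events policy and threshold are illustrative assumptions.

```python
class ResourceTracker:
    """Sketch of claim 1's three memories: delayed counter, status indicator,
    and intermediate counter that absorbs every event cheaply."""

    def __init__(self, threshold, flush_every=4):
        self.delayed = 0          # first memory: delayed resource counter
        self.status = "ok"        # second memory: delayed status indicator
        self.intermediate = 0     # third memory: change since last update
        self.threshold = threshold
        self.flush_every = flush_every
        self.events = 0

    def record(self, delta):
        # Per-event path touches only the cheap intermediate counter.
        self.intermediate += delta
        self.events += 1
        if self.events % self.flush_every == 0:
            self.flush()

    def flush(self):
        # Less frequent update: fold the intermediate count into the delayed
        # counter, then refresh the status from the delayed counter.
        self.delayed += self.intermediate
        self.intermediate = 0
        self.status = "over" if self.delayed > self.threshold else "ok"

t = ResourceTracker(threshold=10)
for _ in range(4):
    t.record(3)                  # the 4th event triggers the flush
print(t.delayed, t.status)       # 12 over
```

Keeping the delayed counter and status in separate, rarely written memories is what lets the hot path avoid wide read-modify-write operations on every event.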

US Pat. No. 10,432,429

EFFICIENT TRAFFIC MANAGEMENT

Innovium, Inc., San Jose...

1. A networking apparatus comprising:
communication interfaces coupled to one or more networks;
a message handling subsystem configured to process messages received over the communication interfaces;
a traffic management subsystem configured to apply traffic shaping or traffic policing logic to specific messages of the messages selected based on status indicators associated with traffic action control groups to which the specific messages are assigned;
one or more first memories configured to store full counters associated with the traffic action control groups;
one or more second memories configured to store intermediate counters associated with the traffic action control groups, the one or more second memories being different than the one or more first memories, each of the intermediate counters corresponding to a different one of the full counters;
an intermediate counting subsystem configured to adjust particular intermediate counters for particular traffic action control groups by amounts corresponding to particular messages sent over the communication interfaces in association with the particular traffic action control groups;
a full count update subsystem configured to update the full counters based on respectively corresponding intermediate counters, and to reset the corresponding intermediate counters;
a replenishment subsystem configured to update the intermediate counters or to update the full counters, based on replenishment amounts determined from replenishment rates associated with the traffic action control groups;
a status update subsystem configured to update the status indicators by comparing the full counters to one or more applicable thresholds.

US Pat. No. 10,389,643

REFLECTED PACKETS

Innovium, Inc., San Jose...

1. A system comprising a network of devices configured to send and receive packets over the network, the devices including:
sending devices configured to send packets over network paths within the network;
annotating devices configured to annotate selected packets by inserting state information in the selected packets as the selected packets traverse through the annotating devices;
reflecting devices configured to reflect certain packets of the selected packets that were annotated by the annotating devices by sending copies of the certain packets back to particular sending devices from which the certain packets were respectively sent, or back to collection devices associated with the particular sending devices, the reflecting devices also sending the certain packets on to respective destinations identified by the certain packets;
the collection devices, configured to collect the reflected packets and record metrics based on the state information that was inserted into the reflected packets by the annotating devices;
action devices, configured to reconfigure one or more settings affecting traffic flow on the network based on the metrics.

US Pat. No. 10,516,613

NETWORK DEVICE STORAGE OF INCREMENTAL PREFIX TREES

Innovium, Inc., San Jose...

1. A method, comprising:
identifying, based on data received by a computing device, an input string, the data received by the computing device including a network packet, the input string based on metadata associated with the network packet;
searching a first memory component to select a data entry storing a first portion of a data string that matches a first portion of the input string, the selected data entry returning a reference to a storage location in a second memory component;
searching the storage location in the second memory component to select a longest data string match entry storing a second portion of the data string that matches a second portion of the input string, the selected longest data string match entry returning data indicating one or more actions;
wherein the longest data string match entry does not store the first portion of the data string;
causing performance of the one or more actions.

US Pat. No. 10,574,577

LOAD BALANCING PATH ASSIGNMENTS TECHNIQUES

Innovium, Inc., San Jose...

1. A method comprising:
determining a destination for a network packet;
identifying a first group of network paths to the destination and a second group of network paths to the destination, the first group of network paths being a group of optimal paths to the destination;
calculating a primary index using a first hash function of information associated with the network packet;
using the primary index calculated from the information associated with the network packet to select, from the first group of network paths to the destination, a primary path for sending the network packet to the destination;
determining that the primary path is in a low-quality state;
calculating a secondary index using a second hash function of information associated with the network packet;
using the secondary index calculated from the information associated with the network packet to select, from the second group of network paths to the destination, a different path for sending the network packet to the destination;
responsive to determining that the primary path is in the low-quality state, sending the network packet out a network port associated with the different path.
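
Claim 1's two-group, two-hash fallback can be sketched compactly. CRC32 and Adler-32 stand in for the two independent hardware hash functions, and the path names are illustrative.

```python
import zlib

def pick_path(flow_key, optimal_paths, backup_paths, low_quality):
    """Sketch of claim 1: a first hash selects the primary path from the
    optimal group; if that path is in a low-quality state, an independent
    second hash selects a different path from the second group."""
    primary = optimal_paths[zlib.crc32(flow_key) % len(optimal_paths)]
    if primary not in low_quality:
        return primary
    return backup_paths[zlib.adler32(flow_key) % len(backup_paths)]

optimal = ["port1", "port2"]   # group of optimal paths to the destination
backup = ["port3", "port4"]
# Healthy primary: the packet stays on a path from the optimal group.
assert pick_path(b"flow-1", optimal, backup, low_quality=set()) in optimal
# Degraded primary: the packet falls back to the secondary group.
assert pick_path(b"flow-1", optimal, backup, low_quality=set(optimal)) in backup
```

Because both indices are pure functions of the packet's flow key, all packets of a flow keep taking the same path, preserving in-order delivery.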

US Pat. No. 10,540,867

HARDWARE-ASSISTED MONITORING AND REPORTING

Innovium, Inc., San Jose...

1. A network device adapted for indicating status information of a network switching subsystem via a light-emitting diode (“LED”) array, the network device comprising:
a plurality of network interfaces;
said network switching subsystem, comprising:
one or more packet processors configured to determine where to forward packets received via the plurality of network interfaces;
one or more buffer memories configured to temporarily store the packets while the packets await processing by the one or more packet processors; and
a reporting component configured to collect status information of components of the network switching subsystem, generate status messages based on the collected status information, and send the status messages to a message decoder coupled to the network switching subsystem;
said message decoder, configured to receive at least particular status messages generated by the reporting component, determine one or more manners in which to light particular LED indicators of the LED array based on the particular status messages, and send formatted output data to the LED array configured to cause the particular LED indicators of the LED array to light in the determined one or more manners.

US Pat. No. 10,505,851

TRANSMISSION BURST CONTROL IN A NETWORK DEVICE

Innovium, Inc., San Jose...

1. A system comprising:
a plurality of network interfaces;
one or more buffers configured to temporarily store data units received over the plurality of network interfaces;
an arbitration component configured to release buffered data units from the one or more buffers to a downstream component for further processing;
a congestion detection component configured to detect when an amount of buffer space, in the one or more buffers, used to store data units associated with an entity, indicates a particular level of congestion in the one or more buffers related to the entity;
wherein the arbitration component is further configured to, when the congestion detection component detects, in the one or more buffers, the particular level of congestion related to the entity, enable a burst control mechanism configured to limit a release rate of the data units that are associated with the entity, from the one or more buffers, to a particular rate, wherein the arbitration component is configured to not limit the release rate to the particular rate when the burst control mechanism is disabled.

US Pat. No. 10,554,572

SCALABLE INGRESS ARBITRATION FOR MERGING CONTROL AND PAYLOAD

Innovium, Inc., San Jose...

1. An apparatus comprising:
a merger subsystem configured to receive payload data for data units and corresponding control information for the data units, the merger subsystem comprising one or more memories in which the merger subsystem is configured to buffer at least some of the payload data for a given data unit at least until receiving given control information corresponding to the given data unit, the data units including at least particular data units, each of whose payload data is divided into multiple portions;
one or more interconnects configured to input portions of the data units and output the portions of the data units to destinations that are respectively indicated by the corresponding control information for the data units;
a scheduler subsystem configured to schedule dispatch of the portions of the data units from the merger subsystem to the one or more interconnects, the scheduler subsystem comprising a plurality of scheduler components, each scheduler component configured to select, during a particular interval, a different data unit portion to dispatch to the one or more interconnects, each different data unit portion outputted to a different destination of the one or more interconnects.

US Pat. No. 10,511,531

ENHANCED LENS DISTRIBUTION

Innovium, Inc., San Jose...

14. One or more non-transitory media storing instructions that, when executed by one or more computing devices, cause performance of:
generating one or more folded key values from a key value, each folded key value generated by folding sub-elements of the key value together;
constructing one or more addend values based on the one or more folded key values, each addend value generated by performing operations between a folded value and individual fields of a manipulation value, the folded value being derived from a particular folded key value of the one or more folded key values;
transforming a first value by performing an addition operation between the first value and one or more addend-based values, the one or more addend-based values being derived from the one or more addend values;
forwarding a network message based on an output of a hashing function, wherein either the first value is the output of the hashing function, or the transformed first value is an input to the hashing function.
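
The fold-and-addend transformation of claim 14 can be sketched as follows. The XOR fold, the 8-bit sub-element width, and the way the manipulation value is combined are illustrative assumptions.

```python
def fold_key(key, width=8):
    """Fold a wide integer key into a `width`-bit value by XORing its
    sub-elements together (the claim's folded key value)."""
    mask = (1 << width) - 1
    folded = 0
    while key:
        folded ^= key & mask
        key >>= width
    return folded

def transform(seed, key, manipulation, width=8):
    """Derive an addend from the folded key and a manipulation value, then
    add it to the hash input, so identical traffic can hash differently on
    devices configured with different manipulation values."""
    addend = fold_key(key, width) ^ (manipulation & ((1 << width) - 1))
    return (seed + addend) & 0xFFFFFFFF

print(fold_key(0x12345678))  # 8, i.e. 0x78 ^ 0x56 ^ 0x34 ^ 0x12
```

Perturbing the hash input this way helps avoid correlated path choices across devices that share the same hash function.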

US Pat. No. 10,469,345

EFFICIENT RESOURCES STATUS REPORTING SYSTEMS

Innovium, Inc., San Jose...

1. A method comprising:
tracking aggregate resource utilization of a particular type of processing resource within a network device by, for each first component in a first set of network device components, tracking an aggregate amount of the particular type of processing resource that the network device maps to the first component;
identifying priority components in the first set that satisfy prioritization criteria based at least in part on the aggregate resource utilization;
tracking granular resource utilization of the particular type of processing resource within the network device for each first component of at least the priority components by, for each combination of the first component with a different second component in a second set of network device components, tracking a granular amount of the particular type of processing resource that the network device maps to both the first component and the second component;
sending granular resource utilization information from the network device to a reporting application in a manner that emphasizes the priority components over non-priority components in the first set.

US Pat. No. 10,447,578

REDISTRIBUTION POLICY ENGINE

Innovium, Inc., San Jose...

1. A system comprising:
multiple path candidate selection logics configured to select paths to network destinations to assign to data units based on functions of data unit information associated with the data units, including a first logic and a second logic, the first logic configured to select, based on particular data unit information, a different path than the second logic is configured to select based on the same particular data unit information;
a redistribution bucket manager configured to identify sets of redistributable values to associate with the network destinations;
a traffic distribution function manager configured to select between the multiple path candidate selection logics when determining a path candidate selection logic to utilize for selecting a path to assign to a given flow of the data units, based on output of a redistribution function of given data unit information shared by the data units in the given flow, the traffic distribution function manager concurrently selecting the first logic for a first flow to a particular destination and the second logic for a second flow to the particular destination, responsive to the output of the redistribution function falling into one of the sets of redistributable values for the second flow, but not for the first flow.

US Pat. No. 10,581,759

SHARING PACKET PROCESSING RESOURCES

Innovium, Inc., San Jose...

1. A network switching apparatus comprising:
multiple data unit sources configured to receive data units, each of the data units having a control portion and a payload portion;
multiple control paths, each control path coupled to a different data unit source, the data unit sources configured to send first portions of the data units along the control paths, the first portions including control portions;
multiple data paths, separate from the control paths, each data path coupled to a different data unit source, the data unit sources configured to send payload portions of the data units along the data paths;
an adaptive distributor configured to receive the first portions via the multiple control paths, the adaptive distributor comprising a buffer memory in which the adaptive distributor is configured to temporarily buffer the first portions until the first portions are ready for processing by a shared packet processor;
the shared packet processor, configured to receive the first portions from the adaptive distributor, and to generate control information based on the control portions found in the first portions;
a demuxer configured to receive the control information from the shared packet processor;
merger subsystems, each merger subsystem configured to receive payload portions via a different data path of the multiple data paths, and to receive control information from the demuxer for the data units whose payload portions they receive, the merger subsystems further configured to output the data units with the control information generated for the data units.

US Pat. No. 10,540,101

TRANSMIT BUFFER DEVICE FOR OPERATIONS USING ASYMMETRIC DATA WIDTHS

Innovium, Inc., San Jose...

1. A device, comprising:
a packing unit;
a buffer manager; and
a plurality of aggregated port buffers each coupled to receive output from the packing unit;
wherein:
the packing unit is configured to:
receive, from an aggregated packet processor, packet data as input segments of a first size, wherein the aggregated packet processor is configured to receive packet data from an external source and segment the packet data into the input segments for forwarding to the packing unit;
generate, from one or more input segments, storage units of a second size; and
for each storage unit:
receive information from the buffer manager identifying a particular aggregated port buffer from the plurality of aggregated port buffers for storing the storage unit, and
write the storage unit to the particular aggregated port buffer; and
the buffer manager is configured to:
for each storage unit, select, from the plurality of aggregated port buffers, a particular aggregated port buffer, and send information to the packing unit for writing the storage unit to the particular aggregated port buffer;
monitor availability of storage space in the aggregated port buffers;
based on the availability of storage space, control reception of input segments at the device; and
manage transmission of the storage units from the aggregated port buffers to one or more external destinations as output segments of a third size.
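
The asymmetric-width repacking that the packing unit performs can be sketched in one function. The byte-string representation and the specific segment and unit sizes are illustrative assumptions.

```python
def repack(input_segments, unit_size):
    """Sketch of the packing unit: concatenate input segments of a first size
    and emit storage units of a second size (the last unit may be partial)."""
    buf = b"".join(input_segments)
    return [buf[i:i + unit_size] for i in range(0, len(buf), unit_size)]

segments = [b"abcd", b"efgh", b"ij"]     # input segments of a first size
print(repack(segments, unit_size=6))     # [b'abcdef', b'ghij']
```

In the claimed device the output segments read back out of the aggregated port buffers may use yet a third size, decoupling processor, buffer, and transmit widths.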

US Pat. No. 10,523,576

HIGH-PERFORMANCE GARBAGE COLLECTION IN A NETWORK DEVICE

Innovium, Inc., San Jose...

1. A system comprising:
buffer memory banks configured to temporarily buffer data units received over a plurality of network interfaces;
a traffic manager configured to select particular entries in particular buffer memory banks in which to store particular portions of the data units, the traffic manager further configured to write linking data that defines chains of entries within the buffer memory banks, each chain linking all of the entries in a single buffer memory bank that store data belonging to a same data unit;
a garbage collector configured to free previously utilized entries in the buffer memory banks for use in storing data for newly received data units;
one or more memories storing garbage collection lists for the buffer memory banks, each of the buffer memory banks having a different garbage collection list;
wherein the garbage collector is configured to free previously utilized entries in part by gradually traversing particular chains of entries and freeing traversed entries, the garbage collection lists indicating to the garbage collector starting addresses of the particular chains;
wherein the traffic manager is further configured to, in response to a determination to dispose of a given data unit, for each given buffer memory bank in at least a set of the buffer memory banks, write, to the given buffer memory bank's garbage collection list, data indicating a starting address of a chain of entries that store data for the given data unit in the given buffer memory bank.
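
The gradual chain traversal of claim 1 can be sketched with a per-call budget. All names are illustrative; `next_entry` plays the role of the linking data and `gc_list` the bank's garbage collection list.

```python
def gradual_gc(gc_list, next_entry, free_list, budget):
    """Sketch of the claim's garbage collector: each call frees at most
    `budget` buffer entries, following a chain from the starting address
    recorded on the bank's garbage collection list."""
    freed = 0
    while gc_list and freed < budget:
        head = gc_list.pop()
        while head is not None and freed < budget:
            free_list.append(head)          # entry is free for new data units
            head = next_entry.get(head)
            freed += 1
        if head is not None:                # budget ran out mid-chain:
            gc_list.append(head)            # resume from this address next call
    return freed

# Linking data for one bank: entries 5 -> 9 -> 2 store one data unit's data.
links = {5: 9, 9: 2, 2: None}
gc, free = [5], []                          # GC list holds the chain's start
print(gradual_gc(gc, links, free, budget=2), free, gc)  # 2 [5, 9] [2]
```

Bounding the per-call work keeps garbage collection from contending with line-rate buffer writes, which is why the traversal is gradual rather than immediate.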

US Pat. No. 10,361,713

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
a data packer component, implemented by logic within circuitry of a computing device, configured to input vectors having values corresponding to different fields in a master set of fields and output packed vectors generated using compression profiles selected for the vectors;
a memory storing the compression profiles, different compression profiles indicating different portions of the vectors to ignore when packing the vectors;
a profiler, coupled to the memory and the data packer component, configured to select a particular compression profile of the compression profiles to pack a particular vector, based at least on determining that the particular compression profile indicates to discard one or more portions of the particular vector in which the particular vector carries insignificant data;
wherein the data packer component is configured to generate a particular packed vector for the particular vector by removing the one or more portions of the particular vector indicated by the particular compression profile;
wherein the data packer component is further configured to associate the particular packed vector with a particular decompression profile that corresponds to the particular compression profile.

US Pat. No. 10,355,981

SLIDING WINDOWS

Innovium, Inc., San Jose...

1. A method comprising:
assigning paths for sending network packets to a destination associated with the network packets, the assigning including:
executing path selection functions that select between the paths based on inputted packet information from the network packets, including a primary path selection function and one or more other path selection functions;
executing a move-eligibility function that indicates when flows of the network packets are eligible for redistribution from primary paths selected by the primary path selection function to paths selected by the one or more other path selection functions, the move-eligibility function outputting, for each given packet of the network packets, a value calculated based on information associated with the given packet;
responsive to the move-eligibility function outputting non-move-eligible values with respect to first packets, assigning each given packet of the first packets to a primary path selected by the primary path selection function using input information associated with the given packet;
responsive to the move-eligibility function outputting move-eligible values with respect to second packets, assigning each given packet of the second packets to an alternative path selected by a different path selection function than the primary path selection function using input information associated with the given packet;
sending the network packets out network ports associated with their respectively assigned paths.
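The selection scheme above can be sketched as follows; the salts, threshold, and path names are hypothetical. The key point is that the move-eligibility function is computed per flow, so a flow either always stays on its primary path or is consistently redistributed to an alternative selection function.

```python
# Minimal sketch, not the patented implementation.
import hashlib

PATHS = ["path0", "path1", "path2", "path3"]

def _h(key, salt):
    return int(hashlib.sha256((salt + key).encode()).hexdigest(), 16)

def primary_select(flow_key):
    return PATHS[_h(flow_key, "primary") % len(PATHS)]

def alternate_select(flow_key):
    return PATHS[_h(flow_key, "alternate") % len(PATHS)]

def move_eligible(flow_key, threshold=64):
    # A flow is eligible for redistribution when its hash falls within a
    # configurable sub-range; widening the range moves more flows.
    return _h(flow_key, "eligibility") % 256 < threshold

def assign_path(flow_key):
    if move_eligible(flow_key):
        return alternate_select(flow_key)
    return primary_select(flow_key)
```

Because all three functions hash the same flow key, packets of one flow always take the same path, which preserves in-order delivery while load is rebalanced.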

US Pat. No. 10,355,994

LENS DISTRIBUTION

Innovium, Inc., San Jose...

1. An apparatus comprising:
a key generation component configured to input a data unit, extract values from the data unit, and form a hash key using the values extracted from the data unit;
a key extension component configured to input the hash key and extend the hash key by prepending or appending a predefined extension value to the hash key;
a key transformation component configured to input the extended hash key and transform the extended hash key based on an input mask value;
a hashing component configured to input the transformed extended hash key and generate one or more hash values by performing one or more hash functions on the transformed extended hash key;
a hash value manipulation component configured to input the one or more hash values and transform the one or more hash values based on an output mask value and at least one of the extended hash key or the transformed extended hash key;
a hash-based operation component configured to perform a hash-based operation with respect to the data unit based on information stored at a location whose address is determined using a particular transformed hash value generated by the hash value manipulation component.
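The pipeline of components can be sketched end to end; the extension value, masks, and use of SHA-256 here are illustrative stand-ins, not the claimed hash functions.

```python
# Hypothetical sketch of key generation -> extension -> transformation ->
# hashing -> hash value manipulation.
import hashlib
from itertools import cycle

EXTENSION = b"\x5a"                        # hypothetical predefined extension value

def generate_key(field_values):
    return b"".join(field_values)          # key from values extracted from the data unit

def extend_key(key):
    return EXTENSION + key                 # prepend the predefined extension value

def transform_key(ext_key, input_mask):
    return bytes(b ^ m for b, m in zip(ext_key, cycle(input_mask)))

def hash_key(t_key):
    return int.from_bytes(hashlib.sha256(t_key).digest()[:4], "big")

def manipulate(h_val, output_mask, t_key):
    fold = 0
    for b in t_key:                        # fold the transformed key into the value
        fold ^= b
    return (h_val ^ fold) & output_mask

key = generate_key([b"\x0a\x00\x00\x01", b"\x01\xbb"])
t = transform_key(extend_key(key), input_mask=b"\x3c")
index = manipulate(hash_key(t), output_mask=0xFFFF, t_key=t)
```

The final value serves as the table address for the hash-based operation; the output mask bounds it to the table size.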

US Pat. No. 10,601,711

LENS TABLE

Innovium, Inc., San Jose...

1. A system comprising:
at least a plurality of network devices interconnected by one or more networks, each device in the plurality comprising:
one or more network interfaces configured to receive and send messages to other devices over the one or more networks;
key generation logic configured to generate hash keys based on the messages;
a transformation component configured to input hash keys, generate transformed hash keys based on the inputted hash keys, and output the transformed hash keys, the transformation component configured to generate the transformed hash keys by, for each given hash key of the hash keys:
generating a folded key using XOR operations between each of one or more key elements within the given hash key;
generating a masked addend using AND or XOR operations between the folded key and each mask element of one or more mask elements in a fixed mask value; and
generating a given transformed hash key using an addition operation between the masked addend and the given hash key;
a hashing component configured to input the transformed hash keys, generate hash values based on applying a hash function to the transformed hash keys, and output the hash values; and
message handling logic configured to handle the messages based on the hash values;
wherein the transformation component at each device is configured to utilize a different fixed mask value to generate the masked addend.

US Pat. No. 10,541,946

PROGRAMMABLE VISIBILITY ENGINES

Innovium, Inc., San Jose...

1. A networking apparatus comprising:
communication interfaces configured to receive and send packets over one or more networks;
forwarding logic configured to process the packets before forwarding the packets out the communication interfaces to destinations of the packets;
one or more traffic managers configured to manage flows of the packets to the forwarding logic;
one or more memories storing one or more statistics related to operations of the forwarding logic, the one or more traffic managers, and/or the communication interfaces;
at least one or more hardware programmable visibility engines, each programmable visibility engine of the one or more hardware programmable visibility engines comprising function logic implementing a defined set of functions, each of the functions configured to perform one or more calculations on one or more inputs to produce one or more outputs, the programmable visibility engine executing, in a given execution cycle, a set of functions, from the defined set of functions, that are indicated to be active by a function selector bitmap that was inputted into the programmable visibility engine for the given execution cycle, a given active function operating on one or more data value inputs bound dynamically to the given active function, the one or more data value inputs including a particular address of the one or more memories or an input signal from one of the forwarding logic or one or more traffic managers;
for each programmable visibility engine of the one or more hardware programmable visibility engines, a dynamically configurable address map that maps specific memory locations in the one or more memories to specific outputs of the functions of the programmable visibility engine, a given active function of the defined set of functions configured to write a result value to a particular memory location, of the memory locations, that the address map mapped to the given active function.
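The bitmap-driven execution model can be sketched in a few lines; the function set, bindings, and addresses here are hypothetical, and real engines run the selected functions in hardware each cycle.

```python
# Sketch: a selector bitmap picks which fixed functions run this cycle;
# each active function reads its dynamically bound inputs and writes its
# result to the memory location the configurable address map assigns it.
FUNCTIONS = [
    lambda a, b: a + b,        # bit 0: running sum
    lambda a, b: max(a, b),    # bit 1: high-water mark
    lambda a, b: a - b,        # bit 2: delta
]

def run_cycle(selector_bitmap, bindings, address_map, memory):
    for idx, fn in enumerate(FUNCTIONS):
        if selector_bitmap & (1 << idx):
            a, b = bindings[idx]               # inputs bound this cycle
            memory[address_map[idx]] = fn(a, b)
    return memory
```

Reprogramming the bitmap, bindings, or address map changes which statistics are computed and where they land without changing the function logic itself.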

US Pat. No. 10,587,536

BUFFER ASSIGNMENT BALANCING IN A NETWORK DEVICE

Innovium, Inc., San Jose...

1. A method comprising:
receiving a plurality of data units over time;
enqueuing at least certain data units of the data units in a queue;
dequeuing the certain data units from the queue for processing by a processing component associated with the queue;
repeatedly updating queue state information during said enqueueing and dequeuing, comprising:
transitioning to a first state upon determining that the queue is of a size that surpasses a state entry threshold;
transitioning away from the first state upon determining that the queue is of a size that falls below a state release threshold, the state release threshold being lower than the state entry threshold;
repeatedly adjusting the state release threshold during said enqueueing and dequeuing, the state release threshold adjusted based on at least one of: randomly selected values, pseudo-randomly selected values, or a pattern of values;
determining one or more actions to take with respect to particular data units of the data units based on a current state indicated by the queue state information.
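The hysteresis with a randomized release threshold can be sketched as below; the threshold values and jitter range are illustrative, and only the random-value variant of the adjustment is shown.

```python
# Sketch: enter the first state above a fixed entry threshold; leave it
# below a release threshold that is re-randomized (always below the entry
# threshold) so that many queues do not release in lockstep.
import random

class QueueState:
    def __init__(self, entry=100, release_base=80, jitter=10, seed=0):
        self.entry = entry
        self.release_base, self.jitter = release_base, jitter
        self.rng = random.Random(seed)
        self.release = release_base
        self.in_first_state = False

    def _adjust_release(self):
        self.release = self.release_base - self.rng.randrange(self.jitter)

    def update(self, size):
        if not self.in_first_state and size > self.entry:
            self.in_first_state = True
        elif self.in_first_state and size < self.release:
            self.in_first_state = False
        self._adjust_release()
        return self.in_first_state
```

Randomizing the release point staggers when queues leave the congested state, which the patent uses to balance buffer assignments across queues.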

US Pat. No. 10,652,154

TRAFFIC ANALYZER FOR AUTONOMOUSLY CONFIGURING A NETWORK DEVICE

Innovium, Inc., San Jose...

1. A system comprising:
a network switching device comprising:
network communication interfaces configured to send data units to other network devices via one or more communications networks;
one or more packet processors configured to process the data units prior to sending the data units;
one or more traffic managers configured to control flow of the data units to the one or more packet processors;
a data collector configured to send state information out an analyzer interface, the state information describing operational states of the one or more packet processors and/or the one or more traffic managers;
device control logic configured to adjust device settings of the network switching device in response to control instructions;
an analyzer device coupled to the analyzer interface and configured to receive the state information, the analyzer device comprising analysis logic configured to, based on the state information received from the collector, generate first control instructions identifying first device settings of the network switching device to change, and to send the first control instructions to the network switching device via the analyzer interface.
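The state-out, control-in loop can be shown with a toy policy; the state fields, thresholds, and setting names below are invented for illustration and are not the claimed analysis logic.

```python
# Sketch of the feedback loop: the switch exports operational state over
# the analyzer interface; the analyzer maps state to control instructions
# identifying device settings to change.

def analyze(state):
    """Toy policy: raise a queue's buffer limit when its depth runs high."""
    instructions = {}
    for queue, depth in state["queue_depths"].items():
        if depth > state["threshold"]:
            instructions[queue] = {"buffer_limit": depth * 2}
    return instructions

def apply_controls(settings, instructions):
    """Device control logic adjusting settings per the instructions."""
    for queue, change in instructions.items():
        settings.setdefault(queue, {}).update(change)
    return settings
```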

US Pat. No. 10,673,770

INTELLIGENT PACKET QUEUES WITH DELAY-BASED ACTIONS

Innovium, Inc., San Jose...

13. A method comprising:
receiving packets over one or more networks;
assigning the packets to packet queues;
based on the packet queues, determining when to process specific packets of the packets, the specific packets dequeued from the packet queues when processed;
tracking a delay associated with a particular packet queue of the packet queues, the delay based on a duration of time for which a designated marker packet has been in the particular packet queue, another packet being designated as the marker packet whenever the currently designated marker packet departs from the particular packet queue;
when the delay exceeds an expiration threshold, marking the particular packet queue as expired;
while the particular packet queue is marked as expired, dropping one or more packets assigned to the particular packet queue, the one or more packets including at least one packet other than the currently designated marker packet.
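The marker-based delay tracking and expiry can be sketched as below. Two simplifications are assumptions of this sketch, not the claim: the newest resident packet becomes the replacement marker, and the drop policy discards new arrivals while the queue is expired.

```python
# Sketch: delay is measured from the enqueue time of a designated marker
# packet; when the marker departs, another packet is designated.

class DelayQueue:
    def __init__(self, expire_threshold):
        self.q = []                    # (packet_id, enqueue_time)
        self.marker = None             # (packet_id, enqueue_time)
        self.expire_threshold = expire_threshold
        self.expired = False

    def enqueue(self, pkt, now):
        if self.expired:
            return False               # drop while marked expired
        self.q.append((pkt, now))
        if self.marker is None:
            self.marker = (pkt, now)
        return True

    def dequeue(self):
        pkt, _ = self.q.pop(0)
        if self.marker and pkt == self.marker[0]:
            self.marker = self.q[-1] if self.q else None
        return pkt

    def check_delay(self, now):
        if self.marker and now - self.marker[1] > self.expire_threshold:
            self.expired = True
        return self.expired
```

Tracking one marker's residence time approximates queue delay without timestamping every packet.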

US Pat. No. 10,735,337

PROCESSING PACKETS IN AN ELECTRONIC DEVICE

Innovium, Inc., San Jose...

1. A device, comprising:
a first group of ingress ports that includes a first ingress port and a second ingress port;
a group of egress ports that includes a first egress port and a second egress port;
a first queue for the first egress port and a second queue for the second egress port; and
a network traffic manager including a packet buffer and coupled to the first queue and the second queue, the network traffic manager configured to:
receive, from the first ingress port in the first group of ingress ports, a cell of a first network packet destined for the first egress port, wherein the first network packet comprises a plurality of cells;
determine whether a number of cells stored in the first queue is at least equal to a first threshold value;
in response to determining that the number of cells stored in the first queue is at least equal to the first threshold value, determine whether the first ingress port has been assigned a first queue token corresponding to the first queue;
upon determining that the first ingress port has been assigned the first queue token:
determine whether one or more other cells of the first network packet are stored in the packet buffer, and
in response to determining that one or more other cells of the first network packet are stored in the packet buffer:
store the received cell in the packet buffer, and
store linking information for the received cell in a receive context, wherein the receive context further includes linking information for the one or more other cells of the first network packet;
determine whether all cells of the first network packet have been received; and
in response to determining that all cells of the first network packet have been received:
copy linking information for the cells of the first network packet from the receive context to one of the first queue or a copy generator queue, and
release the first queue token from the first ingress port.
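A heavily simplified sketch of the token and receive-context flow for a single queue follows; the thresholds, data shapes, and the omission of the copy-generator path are assumptions of this sketch. The idea shown is that once the queue is deep, only the ingress port holding the queue token may continue accumulating a packet's cells, whose linking information is staged in a receive context until the packet completes.

```python
# Hypothetical sketch, one egress queue, cells identified by buffer index.
class TrafficManager:
    def __init__(self, threshold):
        self.queue = []            # linking info of completed packets
        self.buffer = []           # stored cells (packet buffer)
        self.contexts = {}         # port -> receive context (linking info)
        self.token_holder = None   # port holding this queue's token
        self.threshold = threshold

    def receive_cell(self, port, cell, last):
        if len(self.queue) >= self.threshold:
            if self.token_holder is None:
                self.token_holder = port        # assign the queue token
            elif self.token_holder != port:
                return False                    # no token: cell rejected
        self.buffer.append(cell)
        self.contexts.setdefault(port, []).append(len(self.buffer) - 1)
        if last:                                # all cells received:
            self.queue.append(self.contexts.pop(port))
            if self.token_holder == port:
                self.token_holder = None        # release the queue token
        return True
```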

US Pat. No. 10,735,339

INTELLIGENT PACKET QUEUES WITH EFFICIENT DELAY TRACKING

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more memories and/or registers storing at least:
queue data describing a queue of data units, the queue having a head from which data units are dequeued and a tail to which data units are enqueued;
a first marker identifier field whose value identifies a data unit, currently within the queue, that has been designated as a first marker;
a first marker timestamp field whose value identifies a time at which the data unit designated as the first marker was enqueued;
a second marker identifier field whose value identifies a data unit, currently within the queue, that has been designated as a second marker;
a second marker timestamp field whose value identifies a time at which the data unit designated as the second marker was enqueued; and
a queue delay field;
queue management logic, coupled to the one or more memories and/or registers, configured to:
whenever a data unit that is currently designated as the first marker is dequeued from the head of the queue, set the first marker identifier field to the value of the second marker identifier field, and set the first marker timestamp field to the value of the second marker timestamp field, the second marker thereby becoming the first marker;
when a new data unit is added to the tail of the queue, update the value of the second marker timestamp field to reflect a time at which the new data unit was enqueued and update the value of the second marker identifier field to identify the new data unit as the second marker; and
repeatedly update the queue delay field based on a difference between a current time and the value of the first marker timestamp field.
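The two-marker bookkeeping maps almost directly to code. In this sketch the newest enqueued unit is always designated the second marker, which is one consistent reading of the claim; names are otherwise hypothetical.

```python
# Sketch of the claimed fields and update rules.
class QueueDelayTracker:
    def __init__(self):
        self.queue = []
        self.m1_id = self.m1_ts = None   # first marker identifier/timestamp
        self.m2_id = self.m2_ts = None   # second marker identifier/timestamp
        self.queue_delay = 0

    def enqueue(self, unit_id, now):
        self.queue.append(unit_id)
        if self.m1_id is None:
            self.m1_id, self.m1_ts = unit_id, now
        # the newest unit becomes the second marker
        self.m2_id, self.m2_ts = unit_id, now

    def dequeue(self):
        unit_id = self.queue.pop(0)
        if unit_id == self.m1_id:
            # the second marker thereby becomes the first marker
            self.m1_id, self.m1_ts = self.m2_id, self.m2_ts
        return unit_id

    def update_delay(self, now):
        if self.m1_ts is not None:
            self.queue_delay = now - self.m1_ts
        return self.queue_delay
```

Only two timestamps are stored regardless of queue length, which is the "efficient delay tracking" in the title.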

US Pat. No. 10,742,558

TRAFFIC MANAGER RESOURCE SHARING

Innovium, Inc., San Jose...

17. A method comprising:
receiving data units at an ingress arbiter;
buffering first portions of the data units in an ingress buffer;
processing the first portions from the ingress buffer with one or more ingress blocks to generate control information for the data units;
forwarding the control information to a shared traffic manager;
sending payload portions of the data units from the ingress arbiter to the shared traffic manager without waiting for the one or more ingress blocks to process the corresponding first portions of the data units;
receiving the data units at the shared traffic manager;
buffering the data units in a shared buffer memory of the shared traffic manager;
scheduling particular data units, of the data units, to be released from the shared buffer memory to multiple egress blocks coupled to the shared traffic manager, a given data unit being released to one or more of the egress blocks;
processing the particular data units at the egress blocks with egress packet processors, each egress block having a separate egress packet processor; and
based on the processing, forwarding the particular data units to egress ports, each egress block coupled to a different set of the egress ports.
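The decoupling of payload delivery from control-path processing can be sketched as follows; the unit-id matching and data shapes are assumptions of the sketch. The point shown is that payload portions reach the shared traffic manager without waiting for the ingress blocks, and a unit is scheduled only once its control information catches up.

```python
# Hypothetical sketch: payload and control arrive independently and are
# rejoined by data-unit id before release to the egress blocks.
class SharedTrafficManager:
    def __init__(self):
        self.buffer = {}           # unit_id -> payload (shared buffer)
        self.control = {}          # unit_id -> control information
        self.ready = []            # units schedulable to egress blocks

    def receive_payload(self, unit_id, payload):
        self.buffer[unit_id] = payload
        self._maybe_schedule(unit_id)

    def receive_control(self, unit_id, info):
        self.control[unit_id] = info
        self._maybe_schedule(unit_id)

    def _maybe_schedule(self, unit_id):
        # release only once both payload and control info have arrived
        if unit_id in self.buffer and unit_id in self.control:
            self.ready.append(unit_id)
```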

US Pat. No. 10,740,006

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A network device comprising:
a main memory storing data elements from groups of data elements, each group of data elements including an ordered sequence of data elements;
a child link memory including a plurality of memory banks, wherein each memory bank stores one or more entries, each entry including: (i) an address of a main memory location that stores a data element identified by the entry, and (ii) an address of a child link memory bank that stores a next data element in a group of data elements that includes the data element identified by the entry;
a child manager including, for each memory bank of the plurality of memory banks in the child link memory, one or more head nodes and one or more tail nodes,
wherein each head node maintains (i) an address of a main memory location that stores a particular head data element that is a first data element in a respective memory bank for a group of data elements that includes the particular head data element, (ii) an address to a child link memory entry in the same memory bank, and (iii) a data element sequence number that represents a position of the particular head data element in an ordered sequence of data elements in the group of data elements that includes the particular head data element, and
wherein each tail node maintains an address to a main memory location that stores a particular tail data element that is a most recent data element in the respective memory bank for a group of data elements that includes the particular tail data element; and
a parent manager that includes one or more head entries for each memory bank in the plurality of memory banks in the child link memory,
wherein each head entry of the one or more head entries stores (i) a snapshot pointer for accessing metadata about a particular group of data elements in a snapshot memory; and (ii) a snapshot sequence number indicating another group of data elements to be accessed next,
wherein the metadata about a group of data elements includes (i) pointers to at least one of the one or more head nodes or the one or more tail nodes corresponding to the group of data elements in the child manager, and (ii) sequence numbers that correspond to data element sequence numbers for at least one of the one or more head nodes or the one or more tail nodes corresponding to the group of data elements, and
wherein the snapshot sequence number corresponds to an arrival order in which groups of data elements arrive at the network device, the particular group of data elements and the another group of data elements being successive groups of data elements to have arrived at the network device.
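The read-rate benefit of striping a group's elements across banks with per-element sequence numbers can be illustrated compactly; this sketch collapses the child/parent manager machinery into plain lists and is not the claimed memory organization. One element per bank can be read each cycle, and the sequence numbers restore the group's arrival order.

```python
# Sketch: each bank holds part of a group as (sequence_number, element)
# in arrival order; a min-heap over the bank heads merges them back into
# the group's ordered sequence.
import heapq

def read_group(bank_lists):
    """bank_lists: per-bank lists of (seq, element), each in seq order."""
    heads = []
    for b, lst in enumerate(bank_lists):
        if lst:
            heapq.heappush(heads, (lst[0][0], b, 0))
    out = []
    while heads:
        seq, b, i = heapq.heappop(heads)
        out.append(bank_lists[b][i][1])
        if i + 1 < len(bank_lists[b]):
            heapq.heappush(heads, (bank_lists[b][i + 1][0], b, i + 1))
    return out
```

Because consecutive elements land in different banks, several can be fetched in parallel, which is what enables the high read rates of the title.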