US Pat. No. 10,091,873

PRINTED CIRCUIT BOARD AND INTEGRATED CIRCUIT PACKAGE

Innovium, Inc., San Jose...

1. An apparatus comprising:
a printed circuit board that includes:
a multilayer lamination of one or more ground layers, one or more power layers, and a plurality of signal layers;
a plurality of vias that are located on a first surface of the printed circuit board; and
a plurality of bonding pads that couple a ball grid array of an integrated circuit package to the one or more ground layers, the one or more power layers, and the plurality of signal layers through the plurality of vias, wherein the plurality of bonding pads includes:
first bonding pads that are arranged in a first area of the printed circuit board and that are configured to transfer multiple pairs of first differential signals between the printed circuit board and the integrated circuit package, each of the first bonding pads being coupled to a via of the plurality of vias in the first area,
second bonding pads that are arranged in a second area of the printed circuit board and that are configured to transfer multiple pairs of second differential signals between the printed circuit board and the integrated circuit package, each of the second bonding pads being coupled to a via of the plurality of vias in the second area, and
third bonding pads that are arranged in a third area of the printed circuit board and that couple the integrated circuit package to ground of the printed circuit board, each of the third bonding pads being coupled to two or more vias of the plurality of vias in the third area,
wherein the third area is located between the first area and the second area.

US Pat. No. 9,450,604

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
a profiler configured to associate compression profiles with unpacked data units, each of the unpacked data units having allocated
space for storing field data for each field in a master set of fields, but only carrying values for a subset of fields in
the master set, each of the compression profiles indicating a specific combination of value-carrying fields in the master
set of fields, and packed value lengths for the indicated value-carrying fields; and

a data packer component configured to generate packed field data for a given unpacked data unit based on a given compression
profile, of the compression profiles, that the profiler associated with the given unpacked data unit, the packed field data
including values for the specific combination of value-carrying fields indicated by the given compression profile, the values
condensed within the packed field data to corresponding packed value lengths specified by the given compression profile, the
data packer component further configured to store or transmit the packed field data in association with information identifying
the given compression profile.
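The profile-driven packing this claim describes can be illustrated with a short sketch. The field names, the profile table, and the byte-granular packed lengths below are illustrative assumptions; the claim itself does not fix any of them.

```python
# Hypothetical sketch of the claimed "elastic data packer": a compression
# profile names which fields of a master set carry values and the packed
# length (bytes here, for simplicity) of each carried value.

MASTER_FIELDS = ["src", "dst", "vlan", "ttl"]  # assumed master set of fields

# A profile: ordered (field, packed_length_bytes) pairs for value-carrying fields.
PROFILES = {
    0: [("src", 2), ("dst", 2)],
    1: [("src", 2), ("dst", 2), ("ttl", 1)],
}

def select_profile(unit: dict) -> int:
    """Profiler: pick the profile whose field set matches the unit's carried values."""
    carried = {f for f in MASTER_FIELDS if unit.get(f) is not None}
    for pid, spec in PROFILES.items():
        if {f for f, _ in spec} == carried:
            return pid
    raise ValueError("no matching profile")

def pack(unit: dict, pid: int) -> bytes:
    """Packer: condense each carried value to its profile-specified length."""
    out = bytearray()
    for field, length in PROFILES[pid]:
        out += unit[field].to_bytes(length, "big")
    return bytes(out)

unit = {"src": 7, "dst": 300, "vlan": None, "ttl": 64}
pid = select_profile(unit)   # profile 1 matches the carried fields
packed = pack(unit, pid)     # 5 packed bytes, stored with the profile id
```

The packed data is stored or transmitted together with `pid`, which is the "information identifying the given compression profile" the claim requires for later unpacking.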

US Pat. No. 9,894,670

IMPLEMENTING ADAPTIVE RESOURCE ALLOCATION FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. A resource management system for a network device comprising:
a resource manager that is configured to:
generate a sequence of a plurality of access time slots for a plurality of candidate entities to access a plurality of resources
associated with the network device,

wherein a candidate entity of the plurality of candidate entities is a network entity that requests access to one or more
of the plurality of resources, and

wherein an access time slot provides a time period during which a candidate entity associated with the access time slot can
access a resource of the plurality of resources;

determine a priority level associated with each of the plurality of candidate entities, wherein each of the candidate entities
is associated with a respective one of a predetermined plurality of priority levels, including:

determining a first priority level for a first candidate entity of the plurality of candidate entities, and
determining a second priority level for a second candidate entity of the plurality of candidate entities, wherein the
second priority level is lower than the first priority level;

in response to determining that the first candidate entity has the first priority level that is higher than the second priority
level of the second candidate entity, assign a greater number of access time slots to the first candidate entity, and a lower
number of access time slots to the second candidate entity;

determine that at least one of the first candidate entity or the second candidate entity is not using one or more access time
slots assigned to the respective candidate entity; and

in response to determining that at least one of the first candidate entity or the second candidate entity is not using one
or more access time slots assigned to the respective candidate entity, redistribute an unused access time slot presently assigned
to one of the first candidate entity or the second candidate entity to a third candidate entity that is without any assigned
access time slot; and

a resource monitor that is configured to detect usage of each of the access time slots by the respective candidate entities
that are assigned the access time slots.
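The allocation behavior this claim recites can be sketched briefly. The weighting rule and slot counts below are assumptions; the claim only requires that a higher-priority entity receive more slots and that an unused slot be redistributed to an entity without any.

```python
# Minimal sketch of priority-weighted time-slot assignment with
# redistribution of unused slots. Priority 1 is highest here (an assumption).

def assign_slots(entities, total_slots):
    """Give higher-priority entities a greater number of slots."""
    weights = {e: 1.0 / p for e, p in entities.items()}  # assumed weighting
    total_w = sum(weights.values())
    return {e: int(total_slots * w / total_w) for e, w in weights.items()}

def redistribute(slots, usage, idle_entity):
    """Move one unused slot from a non-using entity to an entity with none."""
    for e, n in list(slots.items()):
        if n > 0 and not usage.get(e, False):
            slots[e] -= 1
            slots[idle_entity] = slots.get(idle_entity, 0) + 1
            break
    return slots

slots = assign_slots({"A": 1, "B": 2}, 12)   # A (higher priority) gets more
slots = redistribute(slots, {"A": True, "B": False}, "C")
```

After redistribution, the unused slot detected by the resource monitor moves from B to the previously unassigned entity C.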

US Pat. No. 9,841,913

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A network device comprising:
a main memory storing data elements from groups of data elements, wherein a group of data elements includes an ordered sequence
of data elements;

a child link memory including a plurality of memory banks, wherein each memory bank stores one or more entries, each entry
including: (i) a main memory location address that stores a data element identified by the entry, and (ii) a child link memory
location address storing a next data element in a group of data elements corresponding to the data element identified by the
entry;

a child manager including, for each memory bank of the plurality of memory banks in the child link memory, one or more head
nodes and one or more tail nodes, wherein each head node maintains (i) a data-element pointer to a main memory location for
accessing a particular head data element that is a first data element in the respective memory bank for a group of data elements
corresponding to the particular head data element, (ii) a child memory pointer for accessing a child link memory entry in
the same memory bank, and (iii) a data-element sequence number that represents a position of the particular head data element
in an ordered sequence of data elements in the group of data elements corresponding to the particular head data element, and
wherein each tail node maintains a data-element pointer to a main memory location for accessing a particular tail data element
that is a most recent data element in the respective memory bank for a group of data elements corresponding to the particular
tail data element;

a parent manager that includes one or more head entries for each memory bank in the plurality of memory banks in the child
link memory,

wherein each head entry of the one or more head entries stores (i) a snapshot pointer for accessing a particular snapshot
in a snapshot memory; and (ii) a snapshot sequence number to determine which snapshot from one or more snapshots stored in
the snapshot memory is to be accessed next,

wherein each snapshot stores snapshot list metadata for a particular group of data elements, wherein the snapshot list metadata
includes (i) one or more pointers to one or more nodes in the plurality of memory banks that include information for one or
more data elements in the particular group of data elements, and (ii) a data-element sequence number that represents a position
in an ordered sequence of data elements in the particular group of data elements; and

circuitry configured to use the one or more head entries to:
determine, based on the respective snapshot sequence number stored in each head entry, a snapshot order for accessing the
one or more snapshots;

access a snapshot in the snapshot memory based on the determined snapshot order; and
for each accessed snapshot, use the respective snapshot list metadata to determine a sequential order for accessing the one
or more nodes in the plurality of memory banks that include information for data elements in a group of data elements corresponding
to the snapshot.

US Pat. No. 9,785,367

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory including memory locations that store data elements;
a link memory including a plurality of memory banks that includes a first memory bank and a second memory bank, each memory
bank of the plurality of memory banks having a latency for consecutively executed memory access operations, wherein each memory
bank is configured to store a plurality of nodes that each store (i) a respective data-element pointer to the main memory
for accessing a respective memory location, and (ii) a respective next-node pointer for accessing a next node in the same
memory bank, wherein each of the data-element pointers in the plurality of memory banks points to a respective data element
stored in the main memory, and wherein one or more data packets are formed by linking the data elements stored in the main memory;

circuitry configured to:
store at least a first data element, a second data element, and a third data element in a first memory location, a second
memory location, and a third memory location of the main memory, respectively, wherein the first data element is to be read
before the second data element, and wherein the second data element is to be read before the third data element;

store, in a first node of a first memory bank of the plurality of memory banks, (i) a first data-element pointer to the first
memory location and (ii) a first next-node pointer for accessing a second node of the first memory bank;

after storing the first data-element pointer in the first node, determine that accessing the second node of the first memory
bank to store a second data-element pointer to the second memory location does not satisfy the latency for consecutively executed
memory access operations for the first memory bank;

in response to determining that accessing the second node of the first memory bank to store a second data-element pointer
to the second memory location of the main memory does not satisfy the latency for consecutively executed memory access operations
for the first memory bank, store, in a first node of a second memory bank of the plurality of memory banks, (i) the second
data-element pointer to the second memory location of the main memory and (ii) a second next-node pointer for accessing a
second node of the second memory bank;

after storing the second data-element pointer in the first node of the second memory bank, determine that accessing the second
node of the first memory bank to store a third data-element pointer to the third memory location of the main memory satisfies
the latency for consecutively executed memory access operations for the first memory bank; and

in response to determining that accessing the second node of the first memory bank to store a third data-element pointer to
the third memory location of the main memory satisfies the latency for consecutively executed memory access operations for
the first memory bank, store, in the second node of the first memory bank, (i) the third data-element pointer to the third
memory location and (ii) a third next-node pointer for accessing a third node of the first memory bank,

wherein at least the first next-node pointer and the third next-node pointer form a first skip list for the first memory bank,
and

wherein at least the second next-node pointer forms a second skip list for the second memory bank; and
a context manager including a first head entry for the first memory bank and a second head entry for the second memory bank,
the context manager configured to maintain at least the first skip list and the second skip list wherein the first head entry
stores a first link-memory pointer pointing to the first node of the first memory bank, and wherein the second head entry
stores a second link-memory pointer pointing to the first node of the second memory bank.
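The bank-alternation behavior in this claim can be sketched with a toy cycle model. The latency value, the two-bank layout, and the preference for the first bank are all illustrative assumptions.

```python
# Sketch: if the first bank's access latency has not yet elapsed, the next
# data-element pointer is written to a second bank instead, so each bank
# accumulates its own "skip list" of next-node pointers.

LATENCY = 2  # assumed cycles between consecutive accesses to one bank

class Bank:
    def __init__(self):
        self.nodes = []            # each node: (data_ptr, next_node_index)
        self.last_access = -LATENCY

    def ready(self, cycle):
        return cycle - self.last_access >= LATENCY

    def append(self, cycle, data_ptr):
        # Next-node pointer refers to the following node in this same bank.
        self.nodes.append((data_ptr, len(self.nodes) + 1))
        self.last_access = cycle

def write_elements(ptrs):
    banks = [Bank(), Bank()]
    placement = []
    for cycle, ptr in enumerate(ptrs):
        # Prefer bank 0; fall back to bank 1 when bank 0 is not yet ready.
        bank_id = 0 if banks[0].ready(cycle) else 1
        banks[bank_id].append(cycle, ptr)
        placement.append(bank_id)
    return placement

placement = write_elements(["p1", "p2", "p3"])  # -> [0, 1, 0]
```

Three consecutive writes land in banks 0, 1, 0: the second write cannot satisfy bank 0's latency, so it spills to bank 1, exactly the fallback sequence the claim recites.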

US Pat. No. 9,654,137

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more memories storing compression profiles that indicate different extraction lengths and extraction offsets for fields
in a master set of fields;

a vector profiler component configured to input vectors having portions allocated to different fields in the master set, and
to output associated compression profiles selected for the inputted vectors, a given compression profile selected for a given
vector based at least on for which of the different fields in the master set the given vector carries values;

a data packing component configured to input the vectors and the associated compression profiles, and to output packed vectors
generated for the vectors using the associated compression profiles, a given packed vector generated from a given vector based
on given extraction offsets and given extraction lengths indicated by a given compression profile that was selected for the
given vector, the given packed vector outputted in association with information identifying the given compression profile.

US Pat. No. 9,742,436

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
a data unpacker component configured to unpack at least portions of data units represented by packed field data, each of the
data units having values for one or more fields in a master set of fields, the data unpacker component comprising:

an input configured to receive particular packed field data for a particular data unit, of the data units;
an input configured to receive particular compression profile information associated with the particular data unit;
profile processing logic configured to, based on the particular compression profile information associated with the particular
data unit, identify for which particular one or more fields, in the master set of fields, the particular packed field data
stores values;

parsing logic configured to extract values from the particular packed field data for the particular one or more fields that
the profile processing logic identifies;

outputs, each output of the outputs corresponding to a different field in the master set of fields and configured to, responsive
to the parsing logic extracting a particular value for the field corresponding to the output, output data based on the particular
value.
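The unpacking side can be sketched as the inverse of profile-driven packing. The field names and byte widths below are assumptions chosen for illustration.

```python
# Sketch of the claimed unpacker: the compression profile identifies which
# fields the packed bytes hold and at what packed lengths, so the parsing
# logic can slice each value back out.

PROFILE = [("src", 2), ("dst", 2), ("ttl", 1)]  # (field, packed_length_bytes)

def unpack(packed: bytes, profile):
    values, offset = {}, 0
    for field, length in profile:
        values[field] = int.from_bytes(packed[offset:offset + length], "big")
        offset += length
    return values

values = unpack(b"\x00\x07\x01\x2c\x40", PROFILE)
```

Each recovered value would then drive the output corresponding to its field in the master set.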

US Pat. No. 9,690,507

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory configured to store data elements;
a link memory including a plurality of memory banks including a first memory bank and a different second memory bank, wherein
each memory bank has a latency for consecutively executed memory access operations,

wherein each memory bank of the plurality of memory banks is configured to store a plurality of nodes that each maintains
metadata representing (i) data-element pointers to the main memory for accessing data elements referenced by the data-element
pointers, and (ii) next-node pointers for accessing next nodes in the same memory bank,

wherein the data-element pointers in the plurality of memory banks point to data elements stored in memory locations in the
main memory to form one or more data packets, and

wherein the next-node pointers connect each node to a next node in the respective memory bank to form a respective separate
skip list of a plurality of skip lists, each skip list having an initial node and a final node, and all the nodes of each
skip list being stored in a respective single memory bank; and

a context manager including a respective head entry and a respective tail entry for each memory bank of the plurality of memory
banks,

wherein each head entry of the plurality of head entries is configured to include a unique address pointing to the initial
node of the skip list in the respective memory bank,

wherein each tail entry of the plurality of tail entries is configured to include a unique address pointing to the final node
of the skip list in the respective memory bank; and

circuitry configured to use the head and tail entries in the context manager to:
access a first initial node stored in the first memory bank for a first skip list of the plurality of skip lists;
(i) before accessing, for the first skip list, a next node stored in the first memory bank and (ii) within less time than
the latency for the first memory bank, access a second initial node stored in the second memory bank for a second skip list;
and

(i) after accessing the second initial node and (ii) after satisfying the latency for the first memory bank, access, for the
first skip list, the next node stored in the first memory bank.

US Pat. No. 9,767,014

SYSTEM AND METHOD FOR IMPLEMENTING DISTRIBUTED-LINKED LISTS FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory configured to store data elements;
a link memory including a plurality of memory banks, each of the memory banks configured to store a plurality of nodes that
each stores (i) a respective data-element pointer to the main memory for accessing a respective data element referenced by
the respective data-element pointer, and (ii) a respective sequence identifier for determining an order for accessing the
plurality of memory banks, wherein the data-element pointers in the plurality of memory banks point to the data elements stored
in the main memory to form a list of data elements that represent a data packet;

a free-entry manager configured to generate an available bank set including one or more locations in the link memory; and
a context manager configured to maintain the metadata for forming the list of data elements, the context manager including
a plurality of head entries that correspond to the plurality of memory banks,

wherein each head entry of the plurality of head entries is configured to store (i) a respective link-memory pointer pointing
to a respective node in the respective memory bank of the link memory and (ii) the respective sequence identifier for the
respective node; and

circuitry configured to use the head entries in the context manager to:
determine, based on the respective sequence identifier stored in each head entry of the plurality of head entries, the order
for accessing the plurality of memory banks; and

access the plurality of memory banks based on the determined order to reconstruct the data packet.
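The sequence-identifier mechanism in this claim can be sketched compactly. The data layout and names below are illustrative, not taken from the patent.

```python
# Sketch: each head entry carries a sequence identifier; sorting the head
# entries by that identifier yields the order in which the memory banks are
# accessed to reconstruct the data packet.

def reconstruct(head_entries, main_memory):
    """head_entries: list of (sequence_id, data_element_pointer) per bank."""
    order = sorted(head_entries, key=lambda h: h[0])
    return b"".join(main_memory[ptr] for _, ptr in order)

main_memory = {10: b"he", 20: b"llo", 30: b"!"}
heads = [(2, 20), (1, 10), (3, 30)]   # one head entry per memory bank
packet = reconstruct(heads, main_memory)
```

The banks can thus be written in whatever order latency permits, while the sequence identifiers preserve the packet's element order on read.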

US Pat. No. 9,823,867

SYSTEM AND METHOD FOR ENABLING HIGH READ RATES TO DATA ELEMENT LISTS

Innovium, Inc., San Jose...

1. A network device comprising:
a main memory storing data elements from groups of data elements, wherein a group of data elements includes an ordered sequence
of data elements;

a child link memory including a plurality of memory banks, wherein each memory bank stores one or more entries, each entry
including: (i) a main memory location address that stores a data element identified by the entry, and (ii) a child link memory
location address storing a next data element in a group of data elements corresponding to the data element identified by the
entry;

a child manager including, for each memory bank of the plurality of memory banks in the child link memory, one or more head
nodes and one or more tail nodes, wherein each head node maintains (i) a data-element pointer to a main memory location for
accessing a particular head data element that is a first data element in the respective memory bank for a group of data elements
corresponding to the particular head data element, (ii) a child memory pointer for accessing a child link memory entry in
the same memory bank, and (iii) a data-element sequence number that represents a position of the particular head data element
in an ordered sequence of data elements in the group of data elements corresponding to the particular head data element, and
wherein each tail node maintains a data-element pointer to a main memory location for accessing a particular tail data element
that is a most recent data element in the respective memory bank for a group of data elements corresponding to the particular
tail data element;

a parent manager that includes one or more head entries for each memory bank in the plurality of memory banks in the child
link memory,

wherein each head entry of the one or more head entries stores (i) a snapshot pointer for accessing a particular snapshot
in a snapshot memory; and (ii) a snapshot sequence number to determine which snapshot from one or more snapshots stored in
the snapshot memory is to be accessed next,

wherein each snapshot stores snapshot list metadata for a particular group of data elements, wherein the snapshot list metadata
includes (i) one or more pointers to one or more nodes in the plurality of memory banks that include information for one or
more data elements in the particular group of data elements, and (ii) a data-element sequence number that represents a position
in an ordered sequence of data elements in the particular group of data elements; and

circuitry configured to use the one or more head entries to:
determine, based on the respective snapshot sequence number stored in each head entry, a snapshot order for accessing the
one or more snapshots;

access a snapshot in the snapshot memory based on the determined snapshot order; and
for each accessed snapshot, use the respective snapshot list metadata to determine a sequential order for accessing the one
or more nodes in the plurality of memory banks that include information for data elements in a group of data elements corresponding
to the snapshot.

US Pat. No. 9,929,970

EFFICIENT RESOURCE TRACKING

Innovium, Inc., San Jose...

1. A networking apparatus comprising:
communication hardware interfaces coupled to one or more networks, the communication hardware interfaces configured to receive and send messages;
networking hardware configured to process routable messages received over the communication hardware interfaces;
one or more first memories configured to store full status counters;
one or more second memories configured to store intermediate counters, the one or more second memories being different than the one or more first memories, each of the intermediate counters corresponding to a different one of the full status counters;
a counting subsystem configured to increment the intermediate counters responsive to the communication hardware interfaces receiving the routable messages and to decrement the intermediate counters responsive to at least one of: the communication hardware interfaces sending the routable messages, or the networking hardware disposing of the routable messages;
a status update subsystem configured to update the full status counters by: adding the intermediate counters to the full status counters to which the intermediate counters respectively correspond, and resetting the intermediate counters;
an update controller configured to identify times for the status update subsystem to update specific full status counters of the full status counters;
a threshold application subsystem configured to compare thresholds to the full status counters, and to assign states to the full status counters based on the comparing;
one or more third memories configured to store status indicators of the assigned states.
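The two-tier counting scheme in this claim can be illustrated with a small sketch. The class layout, threshold value, and state names are assumptions; the claim only requires intermediate counters that are folded into full status counters and then compared against thresholds.

```python
# Sketch: cheap intermediate counters absorb per-message increments and
# decrements; a status update folds each delta into the corresponding full
# status counter, resets the intermediate counter, and reassigns a
# threshold-based state.

class ResourceTracker:
    def __init__(self, n, threshold):
        self.intermediate = [0] * n   # small, frequently written counters
        self.full = [0] * n           # authoritative counters, updated lazily
        self.state = ["ok"] * n
        self.threshold = threshold

    def on_receive(self, i): self.intermediate[i] += 1
    def on_send(self, i):    self.intermediate[i] -= 1

    def update(self, i):
        """Status update subsystem: add the intermediate counter to the full
        counter, reset it, then apply the threshold."""
        self.full[i] += self.intermediate[i]
        self.intermediate[i] = 0
        self.state[i] = "congested" if self.full[i] >= self.threshold else "ok"

t = ResourceTracker(n=2, threshold=3)
for _ in range(4):
    t.on_receive(0)
t.on_send(0)
t.update(0)   # full[0] becomes 3, crossing the assumed threshold
```

Keeping the hot-path writes in the small intermediate counters is what lets the full status counters live in slower or denser memory, the apparent point of using two distinct memories.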

US Pat. No. 10,067,690

SYSTEM AND METHODS FOR FLEXIBLE DATA ACCESS CONTAINERS

Innovium, Inc., San Jose...

1. A memory system for a network device, the memory system comprising:
a first network interface configured to receive network data in units of a first data width;
a second network interface configured to receive network data in units of a second data width different from the first data width;
a third network interface configured to output network data in units of a third data width;
a packing data buffer including a plurality of memory banks arranged in a plurality of rows and a plurality of columns, the packing data buffer configured to store, in the plurality of memory banks, received network data of the first data width or the second data width as storage data elements in units of a fixed data width, wherein the third data width of the third network interface is a multiple of the fixed data width;
a free address manager configured to generate an available bank set that includes one or more free memory banks in the plurality of memory banks; and
distributed link memory configured to maintain one or more pointers to interconnect a set of one or more memory locations of the plurality of memory banks in the packing data buffer to generate at least one list to maintain a sequential relationship between the network data stored in the plurality of memory banks.

US Pat. No. 9,941,899

ELASTIC DATA PACKER

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more data unpacker components, each data unpacker component configured to:
receive packed field data for data units;
receive compression profiles associated with the data units, each particular compression profile of the compression profiles identifying a particular set of fields and particular packed value lengths for at least some of the fields;
unpack the packed field data into values based on the compression profiles associated with the data units; and
output unpacked field data for the data units, the outputting comprising, for each particular data unit of the data units, outputting particular values unpacked for the particular set of fields identified by the particular compression profile associated with the particular data unit.

US Pat. No. 9,753,660

SYSTEM AND METHOD FOR IMPLEMENTING HIERARCHICAL DISTRIBUTED-LINKED LISTS FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. A memory system for a network device comprising:
a main memory configured to store data elements;
write circuitry configured to:
write a first portion of a first data packet as first data elements to the main memory;
write a first child distributed-linked list to maintain list metadata that includes first data-element pointers to the main
memory to interconnect the first data elements stored in the main memory;

after writing the first portion of the first data packet and before writing a second portion of the first data packet to the
main memory:

write a first portion of a second data packet as second data elements to the main memory;
write a parent distributed linked list that includes a first snapshot that represents (i) a first child pointer to the first
child distributed-linked list and (ii) a first sequence identifier associated with the first snapshot; and

write a second child distributed-linked list that includes second data-element pointers to the main memory to interconnect
the second data elements stored in the main memory;

after writing the first portion of the second data packet to the main memory, write the second portion of the first data packet
as third data elements to the main memory;

update the parent distributed linked list to include a second snapshot that represents (i) a second child pointer to the second
child distributed-linked list and (ii) a second sequence identifier associated with the second snapshot;

write a third child distributed-linked list that includes third data-element pointers to the main memory to interconnect the
third data elements stored in the main memory; and

after writing the second portion of the first data packet to the main memory, update the parent distributed linked list to
include a third snapshot that represents (i) a third child pointer to the third child distributed-linked list and (ii) a third
sequence identifier associated with the third snapshot, wherein a value of the third sequence identifier is between values
of the first sequence identifier and the second sequence identifier; and

read circuitry configured to read the main memory to reconstruct the first data packet and the second data packet in sequence
using (i) snapshots included in the parent distributed linked list and (ii) data-element pointers included in the child distributed-linked
lists, wherein the reconstruction of the first data packet and the second data packet are read in an order according to the
values of the sequence identifiers associated with the snapshots.
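The ordering trick in this claim (a third sequence identifier between the first and second) can be sketched with illustrative values. Fractional identifiers are an assumption used only to show "a value between".

```python
# Sketch: when the tail of packet 1 is written after the head of packet 2,
# its snapshot receives a sequence identifier between those of the earlier
# snapshots, so the read side still emits packet 1's portions in order.

parent = [
    (1.0, "pkt1-part1"),   # first snapshot
    (2.0, "pkt2-part1"),   # second snapshot, written while pkt1 was open
    (1.5, "pkt1-part2"),   # third snapshot: identifier between the first two
]
read_order = [name for _, name in sorted(parent)]
```

Sorting by sequence identifier reassembles each interleaved packet's portions contiguously without rewriting earlier snapshots.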

US Pat. No. 10,055,153

IMPLEMENTING HIERARCHICAL DISTRIBUTED-LINKED LISTS FOR NETWORK DEVICES

Innovium, Inc., San Jose...

1. An apparatus, comprising:
a main memory configured to store data elements;
write circuitry configured to:
write a first data packet as first data elements to the main memory;
write a first child distributed linked list that includes first data-element pointers to the main memory to interconnect the first data elements stored in the main memory;
write a parent distributed linked list to include a first snapshot that represents (i) a first child pointer to the first child distributed linked list and (ii) a first sequence identifier associated with the first snapshot;
after writing the first data packet to the main memory, write a second data packet as second data elements to the main memory;
write a second child distributed linked list that includes second data-element pointers to the main memory to interconnect the second data elements stored in the main memory; and
update the parent distributed linked list to include a second snapshot that represents (i) a second child pointer to the second child distributed linked list and (ii) a second sequence identifier associated with the second snapshot; and
read circuitry configured to read the first data packet and the second data packet in sequence using respectively the first snapshot and the second snapshot included in the parent distributed linked list, wherein an order of the read is based on sequence identifiers.
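The simpler parent/child hierarchy in this claim can be sketched as follows. The list-of-lists layout and names are illustrative assumptions.

```python
# Sketch: each packet's elements are chained by a child distributed linked
# list; a parent list of snapshots, each carrying a sequence identifier and a
# child pointer, drives the in-order read.

def write_packet(main_memory, child_lists, parent_list, seq, elements):
    ptrs = []
    for e in elements:
        addr = len(main_memory)
        main_memory.append(e)
        ptrs.append(addr)
    child_lists.append(ptrs)                         # child linked list
    parent_list.append((seq, len(child_lists) - 1))  # snapshot: (seq, child ptr)

def read_all(main_memory, child_lists, parent_list):
    out = []
    for _, child in sorted(parent_list):   # read order based on sequence ids
        out.append(bytes(main_memory[p] for p in child_lists[child]))
    return out

mem, children, parent = [], [], []
write_packet(mem, children, parent, seq=1, elements=b"ab")
write_packet(mem, children, parent, seq=2, elements=b"cd")
packets = read_all(mem, children, parent)
```

The read side never touches packet layout directly; the parent snapshots alone determine packet order, which is what makes the later interleaved variant (with an intermediate sequence identifier) possible.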

US Pat. No. 10,218,589

EFFICIENT RESOURCE STATUS REPORTING APPARATUSES

Innovium, Inc., San Jose...

1. A networking apparatus comprising:
communication hardware interfaces coupled to one or more networks, the communication hardware interfaces configured to receive and send messages;
a switching subsystem configured to process routable messages received over the communication hardware interfaces;
a tracking subsystem configured to track resources used by the apparatus while processing the routable messages, at least by tracking an aggregate count of resources assigned for each object in a first set of objects, each object in the first set corresponding to one of: an ingress port, egress port, processing queue, or group of ports;
a status update system configured to update resource status information for each object in the first set by comparing a current aggregate count of resource assignments for the object to one or more thresholds for the object, the resource status information including a priority indicator indicating whether the object has a priority status;
a reporting subsystem configured to send, to a receiver, granular measures of resource assignments for priority objects within the first set, the priority objects being objects that currently have the priority status, each of the granular measures for a particular object reflecting how many resources have been assigned to a different combination of the particular object with another object in a second set of objects;
wherein the reporting subsystem is further configured to send the granular measures of resource assignments for the priority objects more frequently than granular measures of resource assignments for other objects in the first set that do not have the priority status.
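The reporting cadence this claim recites can be sketched with assumed intervals: priority objects report every cycle, others only every fourth cycle.

```python
# Sketch (intervals are assumptions): objects whose resource status carries
# the priority indicator get granular measures reported more frequently than
# non-priority objects.

def objects_to_report(statuses, tick, slow_interval=4):
    """statuses maps object name -> has_priority_status (bool)."""
    return [obj for obj, prio in statuses.items()
            if prio or tick % slow_interval == 0]

statuses = {"port1": True, "port2": False}
fast_tick = objects_to_report(statuses, tick=1)   # priority object only
slow_tick = objects_to_report(statuses, tick=4)   # everyone reports
```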

US Pat. No. 10,244,629

PRINTED CIRCUIT BOARD INCLUDING MULTI-DIAMETER VIAS

Innovium, Inc., San Jose...

1. An apparatus comprising:
one or more connection elements that are configured to transfer signals; and
a printed circuit board (PCB) that includes:
a multilayer lamination of one or more ground layers, one or more power layers, and a plurality of signal layers,
a plurality of vias that pass through one or more layers of the multilayer lamination,
a plurality of traces that are distinct from the one or more connection elements, each trace of the plurality of traces being in contact with a respective via of the plurality of vias and being configured to transfer the signals,
a plurality of pads that are directly in contact with the one or more connection elements and that are configured to transfer the signals between the one or more connection elements and the plurality of traces,
wherein a first via of the plurality of vias includes:
a first portion that has a first diameter, and
a second portion that has a second diameter that is smaller than the first diameter,
wherein a second via of the plurality of vias includes:
a third portion that has a third diameter, and
a fourth portion that has a fourth diameter that is smaller than the third diameter,
wherein the first portion of the first via is adjacent to the fourth portion of the second via and the second portion of the first via is adjacent to the third portion of the second via,
wherein the one or more connection elements include:
a first element that is coupled to a first surface of the PCB, and
a second element that is coupled to a second surface of the PCB, the second surface being different from the first surface, and
wherein the first element includes a plurality of surface mount contacts that are respectively coupled to the plurality of pads.

US Pat. No. 10,230,639

ENHANCED PREFIX MATCHING

Innovium, Inc., San Jose...

1. A network device comprising:
one or more memories storing a prefix table represented as a prefix index coupled to a plurality of prefix arrays;
forwarding logic configured to search for a longest prefix match in the prefix table for a particular input key by:
searching for a first longest prefix match for the particular input key in the prefix index;
reading address information that corresponds to the first longest prefix match;
reading a particular prefix array from a particular location indicated by the address information;
examining each prefix entry in the particular prefix array until determining that a particular prefix entry corresponds to a second longest prefix match in the particular prefix array, the second longest prefix match being the longest prefix match for the particular input key in the prefix table;
prefix table management logic configured to:
generate a prefix tree representing the prefix table, each prefix entry in the prefix table having a corresponding node in the prefix tree;
divide the prefix tree into non-overlapping subtrees;
for each subtree of the subtrees:
store, within a prefix array for the subtree, a set of all prefix entries in the prefix table that correspond to nodes in the subtree;
add, to the prefix index, an entry comprising: a location at which the prefix array is stored and a prefix corresponding to a root node of the subtree, each subtree having a single root node.
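The two-stage lookup in this claim can be sketched in a few lines: a first longest-prefix match in the prefix index selects a subtree-root prefix, whose index entry gives the location of a prefix array, and scanning that array yields the final longest prefix match. Binary-string prefixes stand in for real IP prefixes; the dictionary layout and names are assumptions for illustration only.

```python
# Hypothetical sketch: prefix index maps each subtree's root prefix to
# the "address" of the prefix array holding that subtree's entries.
prefix_index = {"0": "arr0", "01": "arr1"}

prefix_arrays = {
    "arr0": ["0", "00"],            # entries whose nodes lie in subtree 0
    "arr1": ["01", "010", "0101"],  # entries whose nodes lie in subtree 01
}

def lpm_in(candidates, key):
    # Longest prefix match of key among the candidate prefixes.
    matches = [p for p in candidates if key.startswith(p)]
    return max(matches, key=len) if matches else None

def lookup(key):
    root = lpm_in(prefix_index, key)            # first longest prefix match
    if root is None:
        return None
    array = prefix_arrays[prefix_index[root]]   # read the prefix array
    return lpm_in(array, key)                   # second (final) match

print(lookup("0100"))  # matches "010" in arr1
```

The index stays small because it holds one entry per subtree rather than one per prefix; the full set of prefixes lives in the arrays, each scanned only after the index narrows the search to a single subtree.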

US Pat. No. 10,251,270

DUAL-DRILL PRINTED CIRCUIT BOARD VIA

Innovium, Inc., San Jose...

1. A printed circuit board having multiple layers of circuitry, the printed circuit board comprising:
a first layer having a first cylindrical opening with a first diameter, the first cylindrical opening formed through at least the first layer and formed about a particular axis;
a second layer having a second cylindrical opening with a second diameter, the second cylindrical opening formed through at least the second layer and formed about the particular axis,
wherein the first cylindrical opening is a portion of a conductive via, and
wherein the second diameter is smaller than the first diameter; and
a third layer having a third cylindrical opening with a third diameter, the third cylindrical opening formed through at least the third layer and formed about the particular axis,
wherein the third layer is arranged between the first layer and the second layer,
wherein the third diameter is smaller than the second diameter,
wherein the third cylindrical opening is a portion of the conductive via, and
wherein the second cylindrical opening is non-conductive.