US Pat. No. 9,632,959

EFFICIENT SEARCH KEY PROCESSING METHOD

Netronome Systems, Inc., ...

1. A method comprising:
(a) writing a first search key data set and a second search key data set into a memory, wherein the memory is written with
search key data sets only on a word by word basis, wherein each of the first and second search key data sets includes a header
along with a string of search keys, wherein the header of a search key data set indicates a common lookup operation to be
performed using each of the search keys of the search key data set, wherein the header of a search key data set is immediately
followed in memory by a search key of the search key data set, wherein the search keys of the search key data set are located
contiguously in the memory, and wherein at least one word contains search keys from both the first and second search key data
sets;

(b) reading the memory word by word and thereby reading the first search key data set and the second search key data set;
(c) outputting a first plurality of lookup command messages, wherein each respective one of the first plurality of lookup
command messages includes a corresponding respective one of the search keys of the first search key data set; and

(d) outputting a second plurality of lookup command messages, wherein each respective one of the second plurality of lookup
command messages includes a corresponding respective one of the search keys of the second search key data set.
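The packing and unpacking flow of this claim can be modeled in software. The sketch below assumes four key-sized slots per memory word and a tagged-tuple word format; both are illustrative choices, not the patented encoding.

```python
# Sketch of the claimed flow: search key data sets (header + contiguous keys)
# are packed into fixed-width words, then read back word by word to emit one
# lookup command message per key. WORD_SLOTS and the slot format are assumed.

WORD_SLOTS = 4  # assumed number of key-sized slots per memory word

def write_key_sets(memory, key_sets):
    """Pack (header, [keys...]) sequences contiguously into word slots."""
    flat = []
    for header, keys in key_sets:
        flat.append(("hdr", header))            # header precedes its keys
        flat.extend(("key", k) for k in keys)   # keys are contiguous
    # the memory is written only on a word-by-word basis: pad the last word
    while len(flat) % WORD_SLOTS:
        flat.append(("pad", None))
    for i in range(0, len(flat), WORD_SLOTS):
        memory.append(flat[i:i + WORD_SLOTS])

def read_and_emit(memory):
    """Read word by word; emit a lookup command message for each key."""
    messages, op = [], None
    for word in memory:
        for kind, value in word:
            if kind == "hdr":
                op = value                      # common op for following keys
            elif kind == "key":
                messages.append({"op": op, "key": value})
    return messages
```

With a first set of four keys and a second set of two, the second word holds keys from both sets, matching the claim's "at least one word contains search keys from both" limitation.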

US Pat. No. 9,535,851

TRANSACTIONAL MEMORY THAT PERFORMS A PROGRAMMABLE ADDRESS TRANSLATION IF A DAT BIT IN A TRANSACTIONAL MEMORY WRITE COMMAND IS SET

Netronome Systems, Inc., ...

1. A transactional memory, comprising:
a memory; and
interface and decoding circuit that receives a command onto the transactional memory via a bus, wherein the command includes
an incoming address and a “Do Address Translation” (DAT) bit, wherein if the command is a command to access the memory and
the DAT bit is set then the interface and decoding circuit performs address translation on the incoming address by deleting
certain bits and shifting other bits of the incoming address thereby generating a translated address so that the transactional
memory uses the translated address to access the memory, whereas if the DAT bit is not set then the interface and decoding
circuit does not perform address translation but rather the transactional memory uses the incoming address to access the memory,
and wherein the transactional memory is not a processor that fetches instructions.
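The address translation this claim describes, deleting certain bits and shifting others, can be sketched as follows. The particular bit positions deleted are assumptions chosen for illustration; the patent does not fix them here.

```python
# Minimal sketch of DAT-conditional address translation. The deleted bit
# range [8..9] is an assumed example, not the patented bit positions.

DELETE_LO, DELETE_HI = 8, 10   # assumed: drop bits 8..9 of the incoming address

def translate(addr):
    """Delete bits [DELETE_LO, DELETE_HI) and shift the upper bits down."""
    low = addr & ((1 << DELETE_LO) - 1)   # bits kept in place
    high = addr >> DELETE_HI              # bits shifted down over the gap
    return (high << DELETE_LO) | low

def access_address(command_addr, dat_bit):
    """Use the translated address only when the command's DAT bit is set."""
    return translate(command_addr) if dat_bit else command_addr
```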

US Pat. No. 9,612,841

SLICE-BASED INTELLIGENT PACKET DATA REGISTER FILE

Netronome Systems, Inc., ...

1. A method involving an intelligent packet data register file, wherein the intelligent packet data register file comprises
a plurality of slice portions, the method comprising:
(a) storing bytes of packet data in a packet buffer memory;
(b) supplying a start byte value and a number of bytes required value B to each of the slice portions, wherein the number
of bytes required value B indicates a number of bytes of the packet data that is required for execution of an instruction
on a processor of which the intelligent packet data register file is a part, and wherein the start byte value is a byte number
of a first byte of the bytes of packet data required for execution of the instruction;

(c) determining, in each slice portion, a number of bytes that the slice portion is responsible for supplying to an execute
stage of the processor so that the processor can successfully execute the instruction;

(d) determining, in each slice portion, based at least in part upon the start byte value and the number of bytes required
value B, whether the slice portion is storing all the bytes that the slice portion is responsible for supplying to the execute
stage;

(e) issuing, from each slice portion, a fetch request to fetch any bytes that the slice portion is responsible for supplying
as determined in (c) but that the slice portion is not storing as determined in (d), wherein any fetch request issued in (e)
is communicated to the packet buffer memory, wherein the slice portions make the determinations of (c) and (d) independently
of one another, and wherein the slice portions issue fetch requests independently of one another;

(f) merging any valid bytes of packet data output from the slice portions into a single value that includes all the valid
bytes as output by all the slice portions, and supplying the single value at one time to the execute stage; and

(g) using the single value in the execute stage to complete execution of the instruction, wherein each slice portion can store
at most A bytes of packet data, and wherein the number of bytes required value B can be less than A, equal to A, or more than
A depending on the instruction being executed, wherein steps (a) through (g) are performed by a processor integrated circuit.
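Steps (c) through (f) of this claim can be modeled in software. The sketch below assumes four slices of A = 8 bytes each, with slice i covering byte numbers [i*A, (i+1)*A); the per-slice responsibility, fetch, and merge steps follow the claim, but the widths are illustrative.

```python
# Sketch of slice-based gathering: each slice independently determines the
# bytes it must supply, fetches any it lacks from the packet buffer, and the
# valid bytes are merged into one value. A and NUM_SLICES are assumptions.

A = 8               # assumed slice width in bytes
NUM_SLICES = 4      # assumed number of slice portions

def slice_responsibility(slice_idx, start, b):
    """Bytes of [start, start+b) falling in this slice's byte range (step c)."""
    lo = max(start, slice_idx * A)
    hi = min(start + b, (slice_idx + 1) * A)
    return range(lo, hi) if lo < hi else range(0)

def gather(packet, start, b, resident):
    """Steps (d)-(f): fetch missing bytes per slice, merge into one value.

    `resident` is a per-slice dict {byte_number: byte} of stored bytes."""
    out = {}
    for s in range(NUM_SLICES):
        need = slice_responsibility(s, start, b)
        missing = [i for i in need if i not in resident[s]]   # step (d)/(e)
        resident[s].update({i: packet[i] for i in missing})   # fetch request
        out.update({i: resident[s][i] for i in need})
    # step (f): merge valid bytes into a single value for the execute stage
    return bytes(out[i] for i in sorted(out))
```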

US Pat. No. 9,237,095

ISLAND-BASED NETWORK FLOW PROCESSOR INTEGRATED CIRCUIT

Netronome Systems, Inc., ...

1. An island-based network flow processor (IB-NFP) integrated circuit comprising:
a first island (MAC island) that converts incoming symbols into a packet, wherein the packet comprises a header portion and
a payload portion;

a second island (first NBI island) that analyzes at least part of the packet and generates therefrom first information indicative
of whether the packet is a first type of packet or a second type of packet;

a third island (ME island) that receives the first information and the header portion from the second island via a configurable
mesh data bus, wherein the third island generates second information, wherein the second information indicates a location where the
header portion is stored and indicates a location where the payload portion is stored;

a fourth island (MU island) that receives the payload portion from the second island via the configurable mesh data bus;
a fifth island (second NBI island) that receives second information from the third island via the configurable mesh data bus,
the header portion from the third island, and the payload portion from the fourth island; and

a sixth island (second MAC island) that receives the header portion and the payload portion from the fifth island and converts
the header portion and the payload portion into outgoing symbols.

US Pat. No. 9,071,545

NETWORK APPLIANCE THAT DETERMINES WHAT PROCESSOR TO SEND A FUTURE PACKET TO BASED ON A PREDICTED FUTURE ARRIVAL TIME

NETRONOME SYSTEMS, INCORP...

1. A network appliance for communicating packets of a flow pair, wherein the flow pair comprises a first flow of packets and
a second flow of packets, and wherein the network appliance comprises:
a plurality of processor integrated circuits; and a network processor that receives a packet of the first flow and without
performing deep packet inspection on any packet of the flow pair determines a predicted future time when a future packet of
the flow pair will be received, wherein the network processor determines to send the future packet to a selected one of the
plurality of processor integrated circuits based at least in part on the predicted future time, wherein the future packet
is a packet of the second flow, the network appliance further comprising:

a bulk storage system coupled to each of the plurality of processor integrated circuits, wherein the bulk storage system stores
data associated with the second flow, wherein the network processor initiates a preloading of the data from the bulk storage
system to the selected one of the processor integrated circuits prior to the predicted future time when the future packet
will be received and before the future packet is actually received onto the network appliance, and wherein the network processor
initiates the preloading by waiting an amount of time and then initiating the preloading.

US Pat. No. 9,727,499

HARDWARE FIRST COME FIRST SERVE ARBITER USING MULTIPLE REQUEST BUCKETS

Netronome Systems, Inc., ...

1. A device, comprising:
a circuit that receives a first request from a first processor and a grant enable from a shared resource during a first clock
cycle and outputs a grant value during the first clock cycle, wherein the grant value is communicated to the first processor
and wherein the grant value is based at least on the grant enable indicating that the shared resource is available; and

a storage device including a plurality of request buckets, wherein the first request is stored in a first request bucket if
the first request is not granted during the first clock cycle, whereas the first request is not stored in a first request
bucket if the first request is granted during the first clock cycle, and wherein the first request stored in the first request
bucket is moved from the first request bucket to a second request bucket if the first request is not granted during a second
clock cycle, whereas the first request is not stored in the second request bucket if the first request is granted during the
second clock cycle.
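The bucket-aging behavior of this claim can be sketched behaviorally. The model below grants the oldest pending request each enabled cycle and ages ungranted requests one bucket per cycle; the bucket count is an assumed parameter, and a request granted in its arrival cycle is never retained, matching the claim.

```python
# Behavioral sketch of a first-come-first-serve arbiter with request buckets.
# An ungranted request moves to the next-older bucket each clock cycle; the
# oldest pending request wins when the shared resource asserts grant enable.

class FcfsArbiter:
    def __init__(self, num_buckets=3):      # bucket count is an assumption
        self.num_buckets = num_buckets
        self.pending = []                   # oldest first: (bucket, requester)

    def cycle(self, new_requests, grant_enable):
        """One clock cycle; returns the granted requester or None."""
        self.pending.extend((0, r) for r in new_requests)  # enter bucket 0
        grant = None
        if grant_enable and self.pending:
            _, grant = self.pending.pop(0)  # oldest request wins this cycle
        # every ungranted request ages into the next bucket
        self.pending = [(min(b + 1, self.num_buckets - 1), r)
                        for b, r in self.pending]
        return grant
```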

US Pat. No. 9,485,195

INSTANTANEOUS RANDOM EARLY DETECTION PACKET DROPPING WITH DROP PRECEDENCE

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a packet descriptor including a drop precedence value and a queue number, wherein the queue number indicates
a queue stored within a memory, and wherein the packet descriptor is not part of a packet;

(b) determining an instantaneous queue depth of the queue;
(c) applying a drop probability to determine if the packet descriptor will be dropped, wherein the drop probability is a function
of the instantaneous queue depth and the drop precedence value;

(d) storing the packet descriptor in the queue if it is determined in (c) that the packet descriptor is not to be dropped;
and

(e) not storing the packet descriptor in the queue if it is determined in (c) that the packet descriptor should be dropped.
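The claimed drop decision, a probability that is a function of instantaneous queue depth and drop precedence, can be sketched with a small probability table. The thresholds and probabilities below are illustrative values, not figures from the patent.

```python
# Instantaneous RED sketch: the drop probability is looked up from the
# current (instantaneous) queue depth and the descriptor's drop precedence.
# All table values are assumptions for illustration.
import random

DEPTH_THRESHOLDS = [16, 32, 48]          # assumed queue-depth breakpoints
DROP_PROB = {                            # precedence -> prob per depth region
    0: [0.00, 0.00, 0.10, 1.00],
    1: [0.00, 0.10, 0.50, 1.00],
    2: [0.10, 0.50, 1.00, 1.00],
}

def drop_probability(depth, precedence):
    region = sum(depth >= t for t in DEPTH_THRESHOLDS)
    return DROP_PROB[precedence][region]

def enqueue(queue, descriptor, rng=random.random):
    """Steps (b)-(e): store the descriptor unless the drop test fires."""
    p = drop_probability(len(queue), descriptor["precedence"])
    if rng() < p:
        return False                     # dropped: not stored in the queue
    queue.append(descriptor)
    return True
```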

US Pat. No. 9,069,649

DISTRIBUTED CREDIT FIFO LINK OF A CONFIGURABLE MESH DATA BUS

NETRONOME SYSTEMS, INCORP...

1. An integrated circuit comprising:
a first distributed credit First-In-First-Out (FIFO) structure that communicates information from a center part of a first
rectangular island, through an output port of the first island, through an input port of a second rectangular island, to a
center part of the second island, wherein the first distributed credit FIFO structure comprises:

a first FIFO associated with the output port of the first island;
a first chain of registers;
a second FIFO associated with the input port of the second island, wherein a data signal path extends from the first FIFO
in the first island, through the first chain of registers, and to the second FIFO in the second island;

a second chain of registers; and
a credit count circuit disposed in the first island, wherein a taken signal path extends from the center part of the second
island, through the second chain of registers, and to the credit count circuit in the first island, and wherein the credit
count circuit determines a credit value of the first distributed credit FIFO structure.
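The credit mechanism of the claim, a sender-side credit count replenished by "taken" pulses that travel back through a register chain, can be modeled cycle by cycle. The FIFO depth and chain delay below are illustrative.

```python
# Sketch of a distributed-credit FIFO link: the sender may push only while
# it holds credits; each pop at the receiver sends a 'taken' pulse back
# through a delay chain, restoring one credit on arrival. Depths/delays are
# assumed values.
from collections import deque

class CreditLink:
    def __init__(self, credits=4, return_delay=2):
        self.credits = credits
        self.fifo = deque()
        self.taken_chain = deque([0] * return_delay)  # second chain of registers

    def clock(self, push=None, pop=False):
        """One cycle: optional pop at the receiver, optional push at the sender."""
        taken = 0
        if pop and self.fifo:
            self.fifo.popleft()
            taken = 1                                 # taken signal launched
        if push is not None and self.credits > 0:     # push refused w/o credit
            self.fifo.append(push)
            self.credits -= 1
        # returned taken pulses restore credits after the chain delay
        self.taken_chain.append(taken)
        self.credits += self.taken_chain.popleft()
```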

US Pat. No. 9,280,297

TRANSACTIONAL MEMORY THAT SUPPORTS A PUT WITH LOW PRIORITY RING COMMAND

Netronome Systems, Inc., ...

8. A transactional memory, comprising:
a memory unit that stores a ring of buffers, wherein the ring of buffers includes a tail buffer and a head buffer; and
means for: 1) receiving a put into ring low priority command onto the transactional memory from a bus, 2) in response to the
receiving of the put into ring low priority command, determining if the ring has an amount of unused buffer space, 3) if the
ring is determined to have the amount of unused buffer space then writing a value into the tail buffer of the ring, whereas
if the ring is determined not to have the amount of unused buffer space then not writing the value into the tail buffer, 4)
maintaining the head pointer so that the head pointer points to the head buffer of the ring, and 5) maintaining the tail pointer
so that the tail pointer points to the tail buffer of the ring, wherein the means does not have an instruction counter, wherein
the means is also for outputting an error message onto the bus if the ring is determined not to have the amount of unused
buffer space.
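The low-priority put semantics can be sketched as a ring buffer with head and tail pointers and a headroom check. The headroom value (the "amount of unused buffer space" the command requires) is an assumed parameter.

```python
# Sketch of the put-with-low-priority ring command: the write succeeds only
# while the ring retains a configurable amount of unused buffer space;
# otherwise an error message is returned instead of writing the tail buffer.

class Ring:
    def __init__(self, size, low_priority_headroom=2):  # headroom is assumed
        self.buf = [None] * size
        self.head = 0          # points at the head buffer
        self.tail = 0          # points at the tail buffer
        self.count = 0
        self.headroom = low_priority_headroom

    def put_low_priority(self, value):
        unused = len(self.buf) - self.count
        if unused <= self.headroom:
            return "error: insufficient unused buffer space"
        self.buf[self.tail] = value
        self.tail = (self.tail + 1) % len(self.buf)     # maintain tail pointer
        self.count += 1
        return "ok"
```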

US Pat. No. 9,069,602

TRANSACTIONAL MEMORY THAT SUPPORTS PUT AND GET RING COMMANDS

NETRONOME SYSTEMS, INCORP...

1. A transactional memory, comprising:
a memory unit that stores a ring of buffers, wherein one of the buffers is a tail buffer; and
a ring buffer control circuit that receives a put into ring command, wherein the ring buffer control circuit does not have
an instruction counter that it uses to fetch instructions from any memory, wherein the ring buffer control circuit comprises:

a memory access portion coupled to read from and write to the memory unit; and
a ring operation portion, wherein the ring operation portion: 1) stores and maintains a tail pointer so that the tail pointer
points to the tail buffer of the ring, 2) in response to the receiving of the put into ring command uses the tail pointer
to determine an address, and 3) supplies the address to the memory access portion such that the memory access portion uses
the address to write to the tail buffer of the ring.

US Pat. No. 9,098,353

TRANSACTIONAL MEMORY THAT PERFORMS A SPLIT 32-BIT LOOKUP OPERATION

NETRONOME SYSTEMS, INC., ...

1. A method comprising:
(a) receiving a lookup command onto a transactional memory, wherein the lookup command includes a memory address, and wherein
the transactional memory includes a lookup engine and a memory unit;

(b) receiving an input value (IV);
(c) using the memory address to read a word out of the memory unit, wherein the word includes a plurality of result values
(RVs) and a plurality of threshold values;

(d) storing the RVs, the threshold values, and the IV into a storage device in the lookup engine;
(e) determining a lookup key value;
(f) determining a lookup key range value, wherein the lookup key range value identifies a lookup key range that includes the
lookup key value;

(g) selecting one of the plurality of RVs based on the lookup key range value determined in (f), wherein (b) through (g) are
performed by the lookup engine.
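Steps (e) through (g), selecting the result value whose key range contains the lookup key, amount to a threshold comparison. The sketch below uses ascending thresholds as range boundaries, with one more result value than thresholds; this layout is an assumption consistent with the claim.

```python
# Range-based selection sketch for the split lookup: the word read from
# memory supplies thresholds (range boundaries) and result values (one per
# range); the RV whose range contains the lookup key is selected.
import bisect

def range_lookup(thresholds, result_values, lookup_key):
    """Return the RV for the key range that includes lookup_key."""
    assert len(result_values) == len(thresholds) + 1
    range_index = bisect.bisect_right(thresholds, lookup_key)  # step (f)
    return result_values[range_index]                          # step (g)
```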

US Pat. No. 9,092,284

ENTROPY STORAGE RING HAVING STAGES WITH FEEDBACK INPUTS

NETRONOME SYSTEMS, INC., ...

1. A circuit comprising:
a configuration register that outputs a plurality of enable bit signals; and
a signal storage ring comprising:
a signal storage ring input node;
a signal storage ring output node; and
a plurality of stages, wherein each of at least two of the stages comprises:
an exclusive OR circuit having a first input lead, a second input lead, and an output lead, wherein the first input lead of
the exclusive OR circuit is a ring data input of the stage;

a combinatorial logic circuit having a first input lead, a second input lead, and an output lead, wherein the output lead
of the combinatorial logic circuit is coupled to the second input lead of the exclusive OR circuit, wherein the second input
lead of the combinatorial logic circuit is coupled to the signal storage ring output node, wherein the first input lead of
the combinatorial logic circuit is coupled to receive one of the enable bit signals; and

a delay element having an input lead and an output lead, wherein the input lead of the delay element is coupled to the output
lead of the exclusive OR circuit, wherein the output lead of the delay element is a ring data output of the stage, and wherein
the delay element is taken from the group consisting of: a single inverter, an odd number of series-connected inverters, an
even number of series-connected inverters.

US Pat. No. 9,152,452

TRANSACTIONAL MEMORY THAT PERFORMS A CAMR 32-BIT LOOKUP OPERATION

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a lookup command onto a transactional memory, wherein the transactional memory includes a lookup engine and
a memory unit;

(b) receiving an input value (IV);
(c) selecting a first portion of the IV and a second portion of the IV;
(d) reading a word out of the memory unit, wherein the word includes a plurality of result values and a plurality of reference
values;

(e) storing the result values, the reference values, and the second portion of the IV in a storage device within the lookup
engine;

(f) comparing the second portion of the IV to each of the reference values; and
(g) selecting one of the plurality of result values associated with the reference value that matches the second portion of
the IV, wherein (b) through (g) are performed by the lookup engine.
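Steps (f) and (g), a CAM-style compare of the second IV portion against each reference value with a paired result, can be sketched directly. The word layout as (reference, result) pairs and the 16-bit portion width are assumptions.

```python
# CAM-with-result (CAMR) sketch: the second portion of the IV is compared
# against every reference value in the word; the result value paired with
# the matching reference is selected. Portion width and word layout assumed.

def camr_lookup(word, iv, lo_bits=16):
    """word is a list of (reference_value, result_value) pairs."""
    second_portion = iv & ((1 << lo_bits) - 1)   # step (c): select portion
    for reference, result in word:               # step (f): compare to each
        if reference == second_portion:
            return result                        # step (g): matching RV
    return None                                  # miss (behavior assumed)
```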

US Pat. No. 9,146,920

TRANSACTIONAL MEMORY THAT PERFORMS AN ATOMIC LOOK-UP, ADD AND LOCK OPERATION

NETRONOME SYSTEMS, INC., ...

1. A transactional memory that receives a hash key, comprising:
a memory that stores a hash table and a data structure table, wherein the data structure table includes a plurality of data
structures, wherein the hash table includes a plurality of hash buckets, wherein each hash bucket includes a plurality of
hash bucket locations, wherein each hash bucket location includes a lock field and a hash key field, and wherein each hash
bucket location corresponds to an associated data structure in the data structure table; and

a hardware engine that causes the transactional memory to perform an Atomic Look-Up, Add, and Lock (ALAL) operation by receiving
a hash index that corresponds to a hash bucket and a hash key, and determining a hash bucket address based on the hash index,
and by performing a hash lookup operation and thereby determining if the hash key is present in the hash table, and returning
a result packet when the hash key is present in the table, wherein the hardware engine further comprises:

a state machine array comprising a plurality of state machines, wherein an ALAL command is received by one of the plurality
of state machines; and

a register pool comprising a controller and a plurality of registers, wherein one of the plurality of state machines causes
a pull command to be sent to a processor, and wherein the pull command causes the hash key to be communicated from the processor
and to the transactional memory such that the hash key is written into one of the plurality of registers within the register
pool.
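The Atomic Look-Up, Add and Lock operation can be sketched as one indivisible pass over a hash bucket. The bucket-entry layout and the miss/full behavior below are assumptions; only the hit/add/lock structure follows the claim.

```python
# ALAL sketch: search the addressed hash bucket for the key; on a hit,
# report the prior lock state and lock the entry; on a miss, add the key
# into a free location, locked. Entry layout and return fields are assumed.

def alal(hash_table, hash_index, hash_key):
    bucket = hash_table[hash_index]              # hash index -> bucket address
    for loc, entry in enumerate(bucket):         # hash lookup operation
        if entry is not None and entry["key"] == hash_key:
            was_locked = entry["locked"]
            entry["locked"] = True               # lock on hit
            return {"found": True, "was_locked": was_locked, "location": loc}
    for loc, entry in enumerate(bucket):         # miss: add and lock
        if entry is None:
            bucket[loc] = {"key": hash_key, "locked": True}
            return {"found": False, "was_locked": False, "location": loc}
    return {"found": False, "was_locked": False, "location": None}  # bucket full
```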

US Pat. No. 9,124,644

SCRIPT-CONTROLLED EGRESS PACKET MODIFIER

NETRONOME SYSTEMS, INC., ...

1. An egress packet modifier comprising:
an input port through which the egress packet modifier receives a packet and a script code;
an output port;
a script parser that receives the script code and therefrom generates a first opcode and a second opcode, wherein the first
opcode comprises: 1) a first instruction, 2) a first offset, 3) a first indication of an amount of data, and 4) a first argument,
and wherein the second opcode comprises: 1) a second instruction, 2) a second offset, 3) a second indication of an amount
of data, and 4) a second argument; and

a pipeline comprising:
a first processing stage that receives the first opcode from the script parser, wherein the first processing stage receives
a first part of the packet and performs a first modification on the first part, wherein the first modification is determined
by the first opcode; and

a second processing stage that receives the second opcode from the script parser, wherein the second processing stage receives
the first part of the packet from the first processing stage and performs a second modification on the first part, wherein
the second modification is determined by the second opcode, wherein the first part of the packet as modified by the pipeline
is output from the egress packet modifier via the output port.

US Pat. No. 9,098,264

TRANSACTIONAL MEMORY THAT PERFORMS A DIRECT 24-BIT LOOKUP OPERATION

NETRONOME SYSTEMS, INC., ...

1. A method comprising:
(a) receiving a lookup command onto a transactional memory, wherein the lookup command includes address information, and wherein
the transactional memory includes a lookup engine and a memory unit;

(b) receiving an input value;
(c) using the address information to select a portion of the input value (IV);
(d) using the address information and the portion of the IV to read a word out of the memory unit, wherein the word includes
a plurality of result values (RVs);

(e) using the portion of the IV to generate a result location value;
(f) storing the RVs and the result location value into a storage device in the lookup engine; and
(g) using the result location value to select one of the RVs, wherein (b) through (e) are performed by the lookup engine.


US Pat. No. 9,100,212

TRANSACTIONAL MEMORY THAT PERFORMS A DIRECT 32-BIT LOOKUP OPERATION

NETRONOME SYSTEMS, INC., ...

1. A method comprising:
(a) receiving a lookup command onto a transactional memory, wherein the lookup command includes address information, and wherein
the transactional memory includes a lookup engine and a memory unit;

(b) receiving an input value;
(c) using the address information to select a portion of the input value (IV);
(d) using the address information and the portion of the IV to read a word out of the memory unit, wherein the word includes
a plurality of result values (RVs);

(e) storing the RVs and the portion of the IV into a storage device in the lookup engine; and
(f) using the portion of the IV to select one of the RVs, wherein (b) through (e) are performed by the lookup engine.

US Pat. No. 9,465,651

TRANSACTIONAL MEMORY HAVING LOCAL CAM AND NFA RESOURCES

Netronome Systems, Inc., ...

1. A method comprising:
(a) causing a byte stream to be transferred into a transactional memory and to be stored into a memory of the transactional
memory, wherein the transactional memory includes the memory, a BWC (Byte-Wise Comparator) circuit and an NFA (Non-deterministic
Finite Automaton) engine;

(b) causing the transactional memory to use the BWC circuit to find a character signature in the byte stream thereby determining
byte position information indicative of a byte position of the character signature in the byte stream;

(c) receiving the byte position information from the transactional memory; and
(d) causing the NFA engine to use a first NFA to process the byte stream starting at a byte position determined based at least
in part on the byte position information of (b), wherein (a) through (d) are performed by a processor that is not a part of
the transactional memory, and wherein the byte stream is not read out of the transactional memory at any time between the
transferring of (a) until the processing of (d) is completed.

US Pat. No. 9,069,558

RECURSIVE USE OF MULTIPLE HARDWARE LOOKUP STRUCTURES IN A TRANSACTIONAL MEMORY

NETRONOME SYSTEMS, INCORP...

1. A method comprising:
(a) receiving a lookup command and an input value onto a transactional memory, wherein the transactional memory includes a
lookup engine and a memory unit;

(b) reading a first block of first information from the memory unit;
(c) using the first information to configure the lookup engine in a first configuration;
(d) using the lookup engine as configured in (c) to perform a first lookup operation on at least a part of the input value;
(e) obtaining a first result value as a result of the first lookup operation; and
(f) determining from the first result value to do one of the following: 1) perform a second lookup operation, 2) output the
first result value from the transactional memory as a result of the lookup command, wherein (a) through (f) are performed
by the transactional memory.
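The recursive structure of this claim, where each result either terminates the lookup or configures the next stage, can be sketched as a loop over configuration blocks. The block format (shift, mask, and a table mapping keys to final or next-block results) is an assumed encoding for illustration.

```python
# Sketch of recursive use of multiple lookup structures: each block read
# from memory configures one lookup stage; a stage's result value either
# terminates with a final value or names the next block. Block format assumed.

def lookup(memory, input_value, start_block=0, max_depth=8):
    block = start_block
    for _ in range(max_depth):
        cfg = memory[block]                              # (b)/(c): read, configure
        key = (input_value >> cfg["shift"]) & cfg["mask"]  # (d): lookup on part of IV
        kind, value = cfg["table"][key]                  # (e): obtain result value
        if kind == "final":                              # (f): output the result
            return value
        block = value                                    # (f): second lookup
    raise RuntimeError("lookup did not terminate")
```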

US Pat. No. 9,417,844

STORING AN ENTROPY SIGNAL FROM A SELF-TIMED LOGIC BIT STREAM GENERATOR IN AN ENTROPY STORAGE RING

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a command onto a random number generator thereby causing a self-timed logic bit stream generator within the
random number generator to output a bit stream;

(b) supplying the bit stream onto an input of a signal storage ring so that entropy of the bit stream is captured in the signal
storage ring;

(c) causing the self-timed logic bit stream generator to stop outputting the bit stream while the signal storage ring is circulating;
(d) using a signal output by the signal storage ring to generate a random number after the self-timed logic bit stream generator
has been stopped in (c) but while the signal storage ring is circulating; and

(e) outputting the random number from the random number generator.

US Pat. No. 9,417,656

NFA BYTE DETECTOR

Netronome Systems, Inc., ...

1. A circuit comprising:
a hardware byte characterizer that receives an m-bit incoming byte value and uses combinatorial logic and no sequential logic
to generate a plurality of single bit characterization signals, wherein the single bit characterization signals together
form a 2m-bit hardware byte characterizer output value;

a first matching circuit that performs a Ternary Content-Addressable Memory (TCAM) match function, wherein the first matching
circuit receives the m-bit incoming byte value, an n-bit mask value, and an o-bit match value;

a second matching circuit that performs a wide match function, wherein the second matching circuit receives the 2m-bit hardware
byte characterizer output value and an n+o-bit mask value;

a multiplexer having a first data input coupled to a data output of the first matching circuit, a second data input coupled
to a data output of the second matching circuit, and a plurality of multiplexer select inputs; and

a storage device that has a first plurality of storage locations, a second plurality of storage locations, and a third plurality
of storage locations, wherein n data values stored in the first plurality of storage locations are supplied to the first matching
circuit as the n-bit mask value and are simultaneously supplied to the second matching circuit as n bits of the n+o-bit mask
value, and wherein o data values stored in the second plurality of storage locations are supplied to the first matching circuit
as the o-bit match value and are simultaneously supplied to the second matching circuit as o bits of the n+o-bit mask value,
and wherein p data values stored in the third plurality of storage locations are supplied onto the select inputs of the multiplexer.
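The two match functions named in the claim can be sketched in a few lines. The TCAM match is standard mask/compare behavior; the "wide match" interpretation below (all mask-selected characterization bits must be asserted) is an assumption.

```python
# Sketch of the two matching circuits in the NFA byte detector.

def tcam_match(incoming_byte, mask, match):
    """TCAM match: bits where the mask is 1 must equal the match value;
    bits where the mask is 0 are don't-care."""
    return (incoming_byte & mask) == (match & mask)

def wide_match(characterization_bits, wide_mask):
    """Wide match over the byte characterizer output: all characterization
    bits selected by the mask must be asserted (interpretation assumed)."""
    return (characterization_bits & wide_mask) == wide_mask
```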

US Pat. No. 9,389,908

TRANSACTIONAL MEMORY THAT PERFORMS A TCAM 32-BIT LOOKUP OPERATION

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a lookup command onto a transactional memory, wherein the lookup command includes a memory address, and wherein
the transactional memory includes a lookup engine and a memory unit;

(b) receiving an input value (IV);
(c) using the memory address to read a word out of the memory unit, wherein the word includes a plurality of result values;
(d) storing the values and the IV into a storage device in the lookup engine;
(e) determining a lookup key value;
(f) determining a selector value based at least in part on the lookup key value;
(g) selecting one of the plurality of values based on the selector value determined in (f), wherein (b) through (g) are performed
by the lookup engine.

US Pat. No. 9,348,778

TRANSACTIONAL MEMORY THAT PERFORMS AN ALUT 32-BIT LOOKUP OPERATION

Netronome Systems, Inc., ...

1. A hardware lookup engine, comprising:
a state machine selector circuit that receives a lookup command via a bus;
a plurality of state machine circuits, wherein one of the plurality of state machine circuits receives the lookup command from the state
machine selector circuit; and

a pipeline circuit comprising a plurality of pipeline stages, wherein the pipeline circuit receives an instruction from one
of the plurality of state machine circuits, wherein the pipeline circuit outputs a read request to a memory unit and in response
receives data including a plurality of result values (RVs) from the memory unit, and wherein the pipeline circuit selects
one of the RVs and outputs the selected RV onto the bus.

US Pat. No. 9,298,495

TRANSACTIONAL MEMORY THAT PERFORMS AN ATOMIC METERING COMMAND

Netronome Systems, Inc., ...

1. A transactional memory, comprising:
a memory unit that stores a data structure table, wherein the data structure table includes a pair of long term credit values
and a pair of short term credit values, and wherein the credit pairs are both associated with a packet; and

a hardware engine that causes the transactional memory to perform an Atomic Metering Command (AMC) operation by receiving
an AMC command from a processor via a bus, receiving an input value (IV), reading the pair of long term credit values and
the pair of short term credit values from the memory unit, performing an atomic metering operation thereby determining if
a decremented long term credit value is greater than a first threshold value and if a decremented short term credit value
is greater than a second threshold value, and determining an action to be performed on the packet.
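The atomic metering decision, decrementing long-term and short-term credits and comparing each against a threshold to pick a packet action, resembles a two-rate policer and can be sketched as below. The thresholds, the action names, and which credits are committed on a partial failure are all illustrative assumptions.

```python
# Atomic metering sketch: decrement long- and short-term credits by the
# packet length and compare against thresholds to choose an action.
# Thresholds, actions, and commit policy are assumptions for illustration.

def atomic_meter(credits, packet_len):
    """credits is {'long': ..., 'short': ...} and is updated in place."""
    LT_THRESHOLD, ST_THRESHOLD = 0, 0        # assumed threshold values
    new_long = credits["long"] - packet_len
    new_short = credits["short"] - packet_len
    if new_long > LT_THRESHOLD and new_short > ST_THRESHOLD:
        credits["long"], credits["short"] = new_long, new_short
        return "forward"                     # both credit budgets remain
    if new_long > LT_THRESHOLD:
        credits["long"] = new_long
        return "mark"                        # short-term budget exhausted
    return "drop"                            # long-term budget exhausted
```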

US Pat. No. 9,223,555

HIERARCHICAL RESOURCE POOLS IN A LINKER

Netronome Systems, Inc., ...

1. A method comprising:
(a) supplying a first instruction to a linker to declare a first pool of resource instances of a resource;
(b) supplying a second instruction to the linker to declare a second pool of resource instances from the first pool; and
(c) supplying a third instruction to the linker to allocate one or more resource instances from the second pool to be associated
with a symbol, wherein the first, second and third instructions are parts of an amount of object code supplied to the linker,
and wherein steps (a) through (c) are performed by a computer including a processor and a program memory.
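The three linker directives of this claim map naturally onto a small pool abstraction: declare a root pool, carve a child pool from it, then allocate instances from the child and bind them to a symbol. The class and method names are illustrative, not the linker's actual directives.

```python
# Sketch of hierarchical resource pools: a child pool draws its instances
# from its parent at declaration time; allocation binds instances to a
# symbol. Names are hypothetical.

class Pool:
    def __init__(self, instances):
        self.free = list(instances)
        self.bound = {}

    def subpool(self, count):
        """Second directive: declare a pool of instances from this pool."""
        child = Pool(self.free[:count])
        self.free = self.free[count:]
        return child

    def alloc(self, symbol, count=1):
        """Third directive: allocate instances and associate them with a symbol."""
        taken, self.free = self.free[:count], self.free[count:]
        self.bound[symbol] = taken
        return taken
```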

US Pat. No. 9,489,337

PICOENGINE MULTI-PROCESSOR WITH TASK ASSIGNMENT

Netronome Systems, Inc., ...

9. An apparatus comprising:
a data input port;
a data output port;
a picoengine pool comprising a plurality of picoengines and a plurality of memories;
a data characterizer that receives a stream of input data values and for each input data value of the stream generates a characterization
value;

a task assignor that receives the stream of input data values from the data input port and that receives a stream of characterization
values from the data characterizer, and that outputs input data value/task assignment pairs to the picoengine pool, wherein
each input data value/task assignment pair includes an input data value and a corresponding task assignment;

an input data picoengine selector that supplies picoengine select signals to the picoengine pool such that the picoengine
select signals cause picoengines of the pool to be selected one-by-one in a selected sequence, and wherein each of the selected
picoengines receives a corresponding one of the input data value/task assignment pairs; and

an output data picoengine selector that supplies picoengine select signals to the picoengine pool such that output data values
are output by the picoengine pool and are supplied onto the data output port, wherein the output data values are received
from picoengines in the selected sequence.

US Pat. No. 9,489,202

PROCESSOR HAVING A TRIPWIRE BUS PORT AND EXECUTING A TRIPWIRE INSTRUCTION

Netronome Systems, Inc., ...

1. A processor coupled to a memory system, the processor comprising:
an input data port, wherein the processor receives data to be processed via the input data port;
an output data port, wherein the processor outputs processed data onto the output data port;
a memory interface port;
a tripwire bus port;
a fetch stage that causes a sequence of instructions to be fetched from the memory system via the memory interface port, wherein
the sequence of instructions includes a tripwire instruction;

a decode stage that detects an opcode of the tripwire instruction;
a register file read stage coupled to the decode stage, wherein the register file read stage includes a register file; and
an execute stage coupled to the register file read stage, wherein the execute stage carries out an instruction operation of
the tripwire instruction by outputting a first multi-bit value along with a second multi-bit value in parallel onto the tripwire
bus port for one clock cycle when the tripwire instruction operation is being performed by the execute stage, wherein the
second multi-bit value is a processor number that identifies the processor, wherein the tripwire bus port only carries a valid
multi-bit value if the processor is processing a tripwire instruction, and wherein the processor neither reads data nor writes
data across the tripwire bus port.

US Pat. No. 9,450,890

PIPELINED EGRESS PACKET MODIFIER

Netronome Systems, Inc., ...

1. An egress packet modifier comprising:
an input port through which the egress packet modifier receives a packet, wherein the packet includes a first part and a second
part;

a pipeline comprising:
a first processing stage that receives the first part of the packet and performs a selectable one of a plurality of modifications
on the first part, and that then receives the second part of the packet and performs a selectable one of the plurality of
modifications on the second part; and

a second processing stage that receives the first part of the packet from the first processing stage and performs a selectable
one of the plurality of modifications on the first part, and that then receives the second part of the packet from the first
processing stage and performs a selectable one of the plurality of modifications on the second part, wherein the first part
of the packet is output from the pipeline and then the second part of the packet is output from the pipeline, wherein neither
the first nor the second processing stage comprises a processor that fetches instructions from a memory, decodes the instructions,
and executes the instructions; and

an output port through which the packet is output from the egress packet modifier, wherein the second processing stage may
receive an opcode from the first processing stage across a plurality of opcode conductors, wherein the opcode comprises: 1)
an instruction, 2) an offset, 3) an indication of an amount of data, and 4) an argument.
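The staged modification of a two-part packet can be modeled in software. This is a minimal sketch, not the claimed circuit: the claim explicitly covers non-processor stages, so the Python functions below only illustrate the data flow (part one through every stage, then part two, parts leaving in order). The example modifications are invented.

```python
def run_pipeline(stages, parts):
    """Push a packet split into parts through the processing stages:
    each stage applies its selected modification to the first part,
    then the second part, and the parts exit the pipeline in order.
    Stages here are plain functions standing in for hardware stages."""
    out = []
    for part in parts:            # first part first, then second part
        for stage in stages:
            part = stage(part)    # one selectable modification per stage
        out.append(part)
    return out

# Two hypothetical selectable modifications.
strip_vlan = lambda p: p.replace("+vlan", "")
add_tunnel = lambda p: "gre|" + p
```

In the real modifier the second stage may also receive an opcode (instruction, offset, data amount, argument) from the first stage over dedicated conductors; that side channel is omitted here.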

US Pat. No. 9,405,713

COMMONALITY OF MEMORY ISLAND INTERFACE AND STRUCTURE

Netronome Systems, Inc., ...

1. An integrated circuit comprising: a first island comprising a first memory and first data bus interface circuitry, wherein
the first data bus interface circuitry is an interface to a command/push/pull (CPP) data bus; and a second island comprising
a processor and a second memory and second data bus interface circuitry, wherein the second data bus interface circuitry is
an interface to the CPP data bus, wherein the processor can issue a command for a target memory to do an action, wherein if
a field in the command has a first value then the target memory is the first memory in the first island whereas if the field
in the command has a second value then the target memory is the second memory in the second island, wherein the second data
bus interface circuitry adds destination information onto the command if the command is to pass out of the second island and
to the first island, wherein the destination information identifies the first island to be the destination of the command,
whereas if the command is not to pass out of the second island then the second data bus interface circuitry does not add the
destination information onto the command.
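The conditional addition of destination information can be sketched as follows. The field values and island names are invented for illustration; the claim only says a field in the command selects the target memory and that destination information is appended solely when the command must leave the issuing island.

```python
def route_command(command, current_island):
    """Model of the second island's bus interface circuitry: append
    destination information only when the command targets a memory
    outside the current island."""
    FIELD_TO_ISLAND = {1: "island1", 2: "island2"}  # first/second memory
    target_island = FIELD_TO_ISLAND[command["field"]]
    if target_island != current_island:
        # Cross-island: identify the destination island on the command.
        command["destination"] = target_island
    return command
```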

US Pat. No. 9,304,706

EFFICIENT COMPLEX NETWORK TRAFFIC MANAGEMENT IN A NON-UNIFORM MEMORY SYSTEM

Netronome Systems, Inc., ...

1. An apparatus, comprising:
a first processor;
a first storage device, wherein a first status information is stored in the first storage device, and wherein the first processor
is coupled to the first storage device;

a second processor;
a second storage device, wherein a queue of data is stored in the second storage device, wherein the first status information
indicates if traffic data stored in the queue of data is permitted to be transmitted, wherein the second processor is coupled
to the second storage device, and wherein the first processor communicates to the second processor a control message indicating
if the traffic data stored in the queue of data is permitted to be transmitted;

a third processor; and
a third storage device, wherein a second status information is stored in the third storage device, wherein the third processor
is coupled to the third storage device, and wherein the second status information is an active bit that indicates that the
queue of data contains an occupied data block.

US Pat. No. 9,299,434

DEDICATED EGRESS FAST PATH FOR NON-MATCHING PACKETS IN AN OPENFLOW SWITCH

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a first packet of a flow onto an OpenFlow switch;
(b) generating a hash from a header of the first packet, and using the hash to direct a descriptor for the first packet to
one of a plurality of processors such that the descriptors for all packets of the flow are directed to the same one processor;

(c) performing a flow table lookup operation on a flow table and finding no flow entry in the flow table for the flow, wherein
the flow table lookup operation is performed by a flow table lookup functionality;

(d) performing a TCAM (Ternary Content Addressable Memory) lookup operation and finding no TCAM entry for the flow, wherein
the TCAM lookup operation is performed by a TCAM lookup functionality;

(e) sending a request OpenFlow message to an OpenFlow controller;
(f) receiving a response OpenFlow message from the OpenFlow controller, wherein the response OpenFlow message is indicative
of an action;

(g) applying the action to the first packet and thereby outputting the first packet from an output port of the OpenFlow switch;
(h) in response to the response OpenFlow message updating the TCAM lookup functionality so that the TCAM lookup functionality
has a TCAM entry for the flow, wherein the updating of (h) occurs before a second packet of the flow is received;

(i) receiving the second packet of the flow onto the OpenFlow switch;
(j) generating a hash from a header of the second packet and using the hash to direct a descriptor for the second packet to
the same one processor to which the descriptor in (b) was directed;

(k) performing a TCAM lookup operation and thereby using the TCAM entry to identify the action;
(l) updating the flow table functionality such that a flow table entry for the flow indicates the action, and wherein (l)
occurs after the second packet is received in (i); and

(m) applying the action to the second packet and thereby outputting the second packet from the output port of the OpenFlow
switch, wherein (a) through (m) are performed by the OpenFlow switch.
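The miss-path ordering of the claim (hash-based steering, fast TCAM install before the second packet, lazy flow-table fill after it) can be sketched in a few lines. This is an illustrative model only; the controller callback, flow identifiers, and use of Python's built-in `hash` as the steering hash are all assumptions.

```python
def steer(header, num_processors):
    """Steps (b)/(j): hash the header so every packet of a flow is
    directed to the same one processor."""
    return hash(header) % num_processors

class OpenFlowSwitchModel:
    """Miss path: no flow-table or TCAM entry exists, so the switch
    asks the controller, installs the TCAM entry immediately (before
    the second packet), and fills the flow table only after the
    second packet hits the TCAM entry."""
    def __init__(self, controller):
        self.flow_table = {}
        self.tcam = {}
        self.controller = controller

    def handle(self, flow_id, packet):
        action = self.flow_table.get(flow_id) or self.tcam.get(flow_id)
        if action is None:
            action = self.controller(flow_id)   # (e)-(f) request/response
            self.tcam[flow_id] = action         # (h) fast TCAM update
        else:
            self.flow_table[flow_id] = action   # (l) later flow-table fill
        return action                           # (g)/(m) apply the action
```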

US Pat. No. 9,069,603

TRANSACTIONAL MEMORY THAT PERFORMS AN ATOMIC METERING COMMAND

NETRONOME SYSTEMS, INCORP...

3. A method comprising:
(a) receiving an Atomic Metering Command (AMC) from a first processor onto a second processor, wherein a memory unit is coupled
to the second processor;

(b) receiving an input value (IV);
(c) using a memory address to read a word out of the memory unit, wherein the memory address is included in the AMC, and wherein
the word includes a plurality of credit values;

(d) storing the plurality of credit values and the IV into a storage device coupled to the second processor;
(e) selecting a pair of credit values from the plurality of credit values;
(f) subtracting the IV from the pair of credit values, thereby generating a pair of decremented credit values;
(g) determining if each of the decremented credit values generated in (f) is greater than a threshold value; and
(h) determining a meter color based upon the determination in (g) and returning a result value representing the meter color
to the first processor; wherein (b) through (h) are performed by the second processor.
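Steps (e) through (h) amount to a two-credit marking decision. A minimal sketch, assuming a green/yellow/red color scheme and that the pair is the first two credit values in the word; the claim itself only speaks of "a pair of credit values" and "a meter color":

```python
def atomic_meter(word, iv, threshold=0):
    """Model of the AMC datapath: select a pair of credit values from
    the word read out of memory, subtract the input value (IV) from
    each, and derive a meter color from how many decremented credits
    exceed the threshold."""
    long_term, short_term = word[0], word[1]   # (e) select a pair
    dec_long = long_term - iv                  # (f) decrement both
    dec_short = short_term - iv
    ok_long = dec_long > threshold             # (g) compare to threshold
    ok_short = dec_short > threshold
    if ok_long and ok_short:                   # (h) determine the color
        return "green"
    if ok_long or ok_short:
        return "yellow"
    return "red"
```

This mirrors classic two-rate metering, where the long-term and short-term buckets gate a packet's color.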

US Pat. No. 9,413,665

CPP BUS TRANSACTION VALUE HAVING A PAM/LAM SELECTION CODE FIELD

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a bus transaction value onto a memory system, wherein the bus transaction value includes a Packet Addressing
Mode (PAM) enable indicator and first value and a second value, and wherein the memory system includes a memory; and

(b) translating the first value into an address and using the address to write packet data into a memory if the PAM enable
indicator is asserted, whereas if the PAM enable indicator is not asserted then the memory system uses the second value as
an address to write the packet data into the memory.
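The PAM/LAM selection reduces to choosing between a translated and a literal address. A sketch under stated assumptions: the doubling `translate` is a placeholder, since the claim leaves the packet-addressing-mode translation unspecified.

```python
def handle_bus_transaction(memory, pam_enabled, first_value, second_value,
                           packet_data, translate=lambda v: v * 2):
    """If the PAM enable indicator is asserted, translate the first
    value into the write address; otherwise use the second value
    directly as the address."""
    addr = translate(first_value) if pam_enabled else second_value
    memory[addr] = packet_data  # write the packet data into the memory
    return addr
```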

US Pat. No. 9,342,313

TRANSACTIONAL MEMORY THAT SUPPORTS A GET FROM ONE OF A SET OF RINGS COMMAND

Netronome Systems, Inc., ...

1. A transactional memory, comprising:
a memory unit that stores a first ring and a second ring, wherein the first ring includes a first tail buffer and a first
head buffer, and wherein the second ring includes a second tail buffer and a second head buffer; and

a ring buffer control circuit that receives a get from one of a set of rings command onto the transactional memory from a
bus, wherein the ring buffer control circuit does not have an instruction counter that it uses to fetch instructions from
any memory, wherein the ring buffer control circuit comprises:

a memory access portion coupled to read from and write to the memory unit; and
a ring operation portion, wherein the ring operation portion: 1) maintains a first head pointer so that the first head pointer
points to the first head buffer of the first ring, maintains a first tail pointer so that the first tail pointer points to
the first tail buffer of the first ring, maintains a second head pointer so that the second head pointer points to the second
head buffer of the second ring, and maintains a second tail pointer so that the second tail pointer points to the second tail
buffer of the second ring, 2) determines if the first ring is empty, 3) determines if the second ring is empty, 4) if the
first ring is determined not to be empty then the ring operation portion supplies a first address to the memory access portion
such that the memory access portion uses the first address to read a first value out of the head buffer of the first ring,
whereas if the first ring is determined to be empty and the second ring is determined not to be empty then the ring operation
portion supplies a second address to the memory access portion such that the memory access portion uses the second address
to read a second value out of the head buffer of the second ring.
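The priority order of the "get from one of a set of rings" command (first ring if non-empty, else second ring) can be modeled with two queues. A sketch only: the hardware keeps explicit head/tail pointers into a memory unit, whereas `deque` hides that bookkeeping.

```python
from collections import deque

class RingSet:
    """Model of the ring operation portion: read from the head of the
    first non-empty ring, with the first ring taking priority."""
    def __init__(self):
        self.rings = [deque(), deque()]  # first ring, second ring

    def put(self, ring_index, value):
        self.rings[ring_index].append(value)  # insert at the tail

    def get_from_one_of(self):
        # Determine emptiness in order; read out of the head buffer
        # of the first ring found not to be empty.
        for ring in self.rings:
            if ring:
                return ring.popleft()
        return None  # both rings empty
```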

US Pat. No. 9,311,004

TRANSACTIONAL MEMORY THAT PERFORMS A PMM 32-BIT LOOKUP OPERATION

Netronome Systems, Inc., ...

1. A method comprising:
(a) sending a lookup command to a transactional memory, wherein the lookup command includes a memory address, and wherein
the transactional memory includes a memory unit;

(b) sending an input value (IV) to the transactional memory, wherein the memory address is used by the transactional memory
to read a word out of the memory unit, wherein the word includes a plurality of result values (RVs), a plurality of reference
values, and a plurality of prefix values; wherein

the plurality of RVs, the plurality of reference values, the plurality of prefix values, and the IV are used to determine
a lookup key value and a selector value; and wherein one of the plurality of RVs is selected based on the selector value.

US Pat. No. 9,262,136

ALLOCATE INSTRUCTION AND API CALL THAT CONTAIN A SYMBOL FOR A NON-MEMORY RESOURCE

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a first amount of code, onto a compiler, wherein the first amount of code includes an allocate instruction and
an API (Application Programming Interface) call, wherein the allocate instruction includes a symbol that identifies one of
a plurality of non-memory resource instances and the allocate instruction further includes a scope level value, wherein the
scope level value indicates a scope level of the symbol of the allocate instruction, which is a circuit level of a circuit
hierarchy, wherein the API call is a call to perform an operation on one of a plurality of non-memory resource instances,
wherein the API call also includes the symbol, and wherein said one of the non-memory resource instances is indicated by the
symbol of the API call and the non-memory resource instances are hardware circuits;

(b) replacing, by the compiler, the API call in the first amount of code with a plurality of API instructions, wherein the
plurality of API instructions include the symbol;

(c) receiving, onto a linker, location information indicating where the plurality of API instructions are to be loaded in a
circuit;

(d) generating, by the linker, a modified symbol using the location information and the scope level value, wherein the modified
symbol is unique within the scope level indicated by the scope level value;

(e) replacing, by the linker, the symbol in the plurality of API instructions with the modified symbol;
(f) allocating, by the linker, a value to be associated with the symbol, wherein the value is one of a plurality of values,
and wherein each value of the plurality of values corresponds to a respective one of the plurality of non-memory resource
instances; and

(g) generating, by the linker, a second amount of code, wherein the second amount of code includes the API instructions, and
wherein the API instructions are for using the value allocated in (f) to generate an address of a register and are for using
the address of the register to perform an access of the register, wherein the register is a register within said one of the
non-memory resource instances.

US Pat. No. 9,098,373

SOFTWARE UPDATE METHODOLOGY

NETRONOME SYSTEMS INCORPO...

1. A method of performing a software update, comprising:
(a) loading a first application data from a first storage device onto an operating memory, wherein the operating memory stores
a first kernel update flag value, and a first application update flag value;

(b) loading a first kernel data from a second storage device onto the operating memory;
(c) receiving software update information onto a device, wherein the software update information includes a second kernel
data, a second kernel update flag value, and a second application update flag value, and wherein the device includes the first
storage device, the second storage device, the operating memory, and a processor;

(d) storing the software update information on the first storage device;
(e) replacing the first kernel update flag value stored on the operating memory with the second kernel update flag value included
in the software update information;

(f) replacing the first application update flag value stored on the operating memory with the second application update flag
value included in the software update information;

(g) initiating a first reboot;
(h) erasing the second storage device when the kernel update flag value stored in the operating memory is a first value;
(i) not erasing the second storage device when the kernel update flag value stored in the operating memory is a second value;
(j) copying the second kernel data included in the software update information from the first device to the second device
when the kernel update flag value stored on the operating memory is the first value;

(k) setting the kernel update flag to the second value;
(l) initiating a second reboot;
(m) loading the second kernel data stored on the second device onto the operating memory; and
(n) loading the first application data stored on the first device onto the operating memory, wherein steps (a) through (n)
are performed at least in part by the processor.
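The kernel-side of the first reboot, steps (h) through (k), is a flag-gated erase-and-copy. A minimal sketch, assuming the flag's "first value" is 1 and "second value" is 0 and modeling the storage devices as dictionaries; none of these specifics come from the claim.

```python
# Assumed flag encodings: the claim only names a 'first value'
# (erase and copy) and a 'second value' (leave as-is).
KERNEL_UPDATE = 1
KERNEL_CURRENT = 0

def first_reboot_kernel_step(kernel_flag, first_storage, second_storage):
    """Steps (h)-(k): after the first reboot, erase the second storage
    device and copy the new kernel onto it only when the kernel update
    flag holds the first value, then set the flag to the second value."""
    if kernel_flag == KERNEL_UPDATE:
        second_storage.clear()                              # (h) erase
        second_storage["kernel"] = first_storage["kernel"]  # (j) copy
    # (i): with the second value, the second storage device is untouched
    return KERNEL_CURRENT                                   # (k) clear flag
```

The flag-then-clear pattern makes the update idempotent across repeated reboots: an interrupted copy simply reruns, and a completed one is skipped.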

US Pat. No. 9,503,372

SDN PROTOCOL MESSAGE HANDLING WITHIN A MODULAR AND PARTITIONED SDN SWITCH

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a plurality of egress ethernet ports;
a plurality of second ingress ethernet ports, wherein a second ingress ethernet port is configurable to operate in a selected
one of a command mode and a data mode, wherein the second ingress ethernet port does not power up in the command mode and
can only be put into the command mode as a result of a port modeset command being received onto an ingress ethernet port operating
in the command mode;

a first ingress ethernet port that powers up in the command mode, wherein in the command mode the first ingress ethernet port
can receive and carry out a port modeset command, wherein receiving and carrying out of the port modeset command causes one
of the second ingress ethernet ports identified by the port modeset command to operate in the command mode; and

a flow table structure adapted to store flow entries, wherein the flow table structure is used by the integrated circuit to
determine which egress ethernet port will output a packet that was received onto the integrated circuit via one of the second
ingress ethernet ports operating in the data mode.

US Pat. No. 9,344,384

INTER-PACKET INTERVAL PREDICTION OPERATING ALGORITHM

Netronome Systems, Inc., ...

20. A device comprising:
a cache memory;
an external memory; and
a microengine executing code, causing the device to:
receive a first packet of a flow pair and in response, predict a time when a next packet of the flow pair will be received,
wherein the predicting is based at least in part on an estimated application protocol of the first packet, a packet sequence
number of the first packet, a previous packet arrival time, and an arrival time of the first packet;

preload packet flow data related to the next packet, from the external memory to the cache memory, before the predicted time
when the next packet of the flow pair will be received; and

read the preloaded packet flow data from the cache memory once the next packet of the flow pair is received.
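The predict-then-preload loop can be sketched as below. This is a loose model: the claim does not give the prediction formula, so the averaging of the observed interval with a per-protocol gap estimate, and the fixed lead time, are assumptions made here for illustration.

```python
def predict_next_arrival(arrival_time, prev_arrival_time, app_protocol_gap):
    """Predict when the next packet of the flow pair will arrive,
    blending the observed inter-packet interval with an estimated
    per-application-protocol gap (weighting is an assumption)."""
    interval = arrival_time - prev_arrival_time
    return arrival_time + (interval + app_protocol_gap) / 2

def maybe_preload(cache, external, flow_id, now, predicted, lead_time=5):
    """Preload the flow data from external memory into the cache
    before the predicted arrival time, so the later read hits cache."""
    if predicted - now <= lead_time and flow_id not in cache:
        cache[flow_id] = external[flow_id]
```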

US Pat. No. 9,331,906

COMPARTMENTALIZATION OF THE USER NETWORK INTERFACE TO A DEVICE

Netronome Systems, Inc., ...

1. A method of configuring a network appliance, wherein a host operating system, a virtual machine, and a backend process
execute on the network appliance, wherein the network appliance includes a physical network interface port, wherein the virtual
machine and the backend process are application layer programs executing on the host operating system, wherein the host operating
system includes a stack, and wherein the virtual machine includes a stack and a user interface process, the method comprising:
(a) communicating one or more frames from the physical network interface port of the network appliance to the virtual machine;
(b) processing the one or more frames in the stack of the virtual machine such that a first application layer message is generated
by the stack;

(c) processing the first application layer message in the user interface process of the virtual machine thereby generating
a second application layer message;

(d) processing the second application layer message in the stack of the virtual machine thereby generating one or more Ethernet
frames;

(e) communicating the one or more Ethernet frames from the virtual machine via a virtual secure network link and to the host
operating system;

(f) processing the one or more Ethernet frames in the stack of the host operating system thereby generating a third application
layer message;

(g) processing the third application layer message in the backend process; and
(h) as a result of the processing of the third application layer message the backend process causes a part of the network
appliance to be configured.

US Pat. No. 9,270,488

REORDERING PCP FLOWS AS THEY ARE ASSIGNED TO VIRTUAL CHANNELS

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving configuration information onto a Media Access Control (MAC) layer interface circuit of a Network Flow Processor
(NFP) integrated circuit, wherein the configuration information includes port definition configuration information and Priority
Code Point (PCP) remap information, wherein the PCP remap information includes a plurality of portions;

(b) using the port definition configuration information to configure the MAC layer interface circuit to include a first number
of physical MAC ports, wherein the MAC layer interface circuit can alternatively be configured by the other port definition
configuration information into another configuration that includes another number of physical MAC ports;

(c) receiving a plurality of PCP flows of ethernet frames via the physical MAC ports onto the NFP integrated circuit, wherein
all the frames of a PCP flow are received via the same physical MAC port and wherein all of the frames of the PCP flow have
the same PCP value, wherein a first PCP flow received via a physical MAC port has a larger PCP value as compared to a second
PCP flow received via the same physical MAC port that has a smaller PCP value;

(d) storing each respective portion of the PCP remap information in association with a corresponding respective one of the
physical MAC ports; and

(e) for each frame received via a particular physical MAC port using the PCP value of the frame and the portion of the PCP
remap information associated with the physical MAC port to assign the frame to one of a second number of virtual channels,
wherein a first of the virtual channels is a higher priority channel through the NFP integrated circuit as compared to a second
of the virtual channels that is of a lower priority, wherein the assigning of (e) involves assigning the first PCP flow to
the second virtual channel and assigning the second PCP flow to the first virtual channel, wherein the first number multiplied
by eight is greater than the second number.
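Step (e) is a per-port table lookup keyed by PCP value. A sketch with invented remap portions: port 0 inverts the priority order, which is how a flow with the larger PCP value can end up on the lower-priority virtual channel, as the claim requires.

```python
# Hypothetical per-physical-MAC-port portions of the PCP remap
# information (step (d) stores one portion per port).
PCP_REMAP = {
    0: {7: 0, 6: 1, 5: 2, 4: 3, 3: 4, 2: 5, 1: 6, 0: 7},  # inverted
    1: {pcp: pcp for pcp in range(8)},                     # identity
}

def assign_virtual_channel(port, pcp):
    """Step (e): use the frame's PCP value and the remap portion
    stored for the arrival port to pick a virtual channel."""
    return PCP_REMAP[port][pcp]
```

With two ports of eight PCP values each, the "first number multiplied by eight" (16) exceeding the virtual-channel count is what forces remapping rather than a one-to-one assignment.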

US Pat. No. 9,268,600

PICOENGINE POOL TRANSACTIONAL MEMORY ARCHITECTURE

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a lookup command onto a transactional memory, wherein the lookup command includes a source identification value,
data, a table set value, and a table number value, wherein the transactional memory includes a selectable bank of hardware
lookup engines and a memory unit, and wherein the memory unit stores a plurality of result values (RVs);

(b) determining a base value and a size value based on the table set value and the table number value;
(c) determining a memory address value based on the base value, the size value, and an index value;
(d) using the memory address value to read a first word out of the memory unit, wherein the first word includes a plurality
of lookup data operands; and

(e) using the source identification value, the data, an algorithm number value, and the plurality of lookup operands to select
one of the plurality of result values (RVs), wherein (e) is performed by one hardware lookup engine selected from the bank
of hardware lookup engines.
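Steps (b) and (c) resolve a (table set, table number) pair into a bounded memory address. A sketch under stated assumptions: the base/size tables and the clamp-to-size rule are invented, since the claim says only that the address is based on the base value, the size value, and an index value.

```python
def lookup_address(table_bases, table_sizes, table_set, table_number, index):
    """(b) resolve table set/number to a base and size, then
    (c) form the memory address, clamping the index into the table."""
    base = table_bases[table_set][table_number]
    size = table_sizes[table_set][table_number]
    return base + min(index, size - 1)
```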

US Pat. No. 9,319,333

INSTANTANEOUS RANDOM EARLY DETECTION PACKET DROPPING

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a packet descriptor and a queue number, wherein the queue number indicates a queue stored within a memory unit;
(b) determining an instantaneous queue depth of the queue;
(c) using a drop probability to determine if the packet descriptor will be dropped, wherein the drop probability is a function
of the instantaneous queue depth;

(d) storing the packet descriptor in the queue if it is determined in (c) that the packet descriptor is not to be dropped;
and

(e) not storing the packet descriptor in the queue if it is determined in (c) that the packet descriptor should be dropped,
wherein the queue has a first queue depth range and a second queue depth range, wherein a first drop probability is used in
(c) when the instantaneous queue depth is within the first queue depth range, wherein a second drop probability is used in
(c) when the instantaneous queue depth is within the second queue depth range, and wherein the first queue depth range does
not overlap with the second queue depth range.
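The two-range instantaneous RED policy can be expressed directly. A sketch with invented depth ranges and probabilities; the claim fixes only that the ranges do not overlap and that each has its own drop probability.

```python
import random

# Hypothetical non-overlapping queue-depth ranges, each with its own
# drop probability (steps (c)-(e)).
DROP_RANGES = [
    (range(0, 64), 0.0),    # first queue depth range: never drop
    (range(64, 128), 0.25), # second queue depth range: drop 1 in 4
]

def should_drop(queue_depth, rand=random.random):
    """Use the drop probability for the instantaneous queue depth to
    decide whether the packet descriptor will be dropped."""
    for depth_range, prob in DROP_RANGES:
        if queue_depth in depth_range:
            return rand() < prob
    return True  # depth beyond all configured ranges: drop

def enqueue(queue, descriptor, rand=random.random):
    # (b) determine the instantaneous depth; (d)/(e) store only if
    # the descriptor is not selected for dropping.
    if should_drop(len(queue), rand):
        return False
    queue.append(descriptor)
    return True
```

Using the instantaneous depth (rather than the moving average of classic RED) is the point of the claim: the decision needs no history state.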

US Pat. No. 10,031,754

KICK-STARTED RUN-TO-COMPLETION PROCESSOR HAVING NO INSTRUCTION COUNTER

Netronome Systems, Inc., ...

1. A pipelined processor, comprising:
an input data port;
a fetch stage that fetches instructions from an external memory system, wherein the fetch stage is prompted to fetch from the external memory in response either: 1) to a receiving of input data onto the pipelined processor via the input data port, or 2) to a receiving of fetch information from another stage as a result of an execution of a fetch instruction by the pipelined processor, wherein the input data and the fetch information do not include a clock signal or a signal output from a counter circuit, and wherein the pipelined processor only fetches instructions in response to either receiving input data onto the pipelined processor via the input data port or as a result of execution of a fetch instruction;
a decode stage that decodes instructions fetched from the external memory;
a register file read stage that comprises a register file and an input data register, wherein the input data register is loaded with input data from the input data port; and
an execute stage coupled to the register file read stage, wherein the execute stage can output fetch information to the fetch stage such that the fetch stage is prompted to perform a fetch from the external memory, wherein the pipelined processor comprises no instruction counter.

US Pat. No. 9,519,482

EFFICIENT CONDITIONAL INSTRUCTION HAVING COMPANION LOAD PREDICATE BITS INSTRUCTION

Netronome Systems, Inc., ...

1. A processor coupled to a memory system, the processor comprising:
a fetch stage that causes a sequence of instructions to be fetched from the memory system, wherein the sequence of instructions
includes a first instruction and a second instruction;

a decode stage that first decodes the first instruction and then decodes the second instruction, wherein the decode stage
can decode skip instructions;

a register file read stage coupled to the decode stage, wherein the register file read stage includes a plurality of flag
bits and a plurality of predicate bits; and

an execute stage that can carry out an instruction operation of an instruction, wherein the second instruction defines an
instruction operation to be performed when the second instruction is executed by the processor, wherein if the first instruction
is a skip instruction and if a predicate condition is satisfied then the instruction operation of the second instruction is
not carried out by the execute stage even though the second instruction was decoded by the decode stage, wherein the predicate
condition for the skip instruction is specified by values of the predicate bits, and wherein the predicate condition is a
specified function of values of at least one of the plurality of flag bits.
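The skip semantics (the following instruction is decoded but its operation is suppressed) can be modeled with a one-deep predicate latch. A sketch only: the tuple encoding of instructions and the flag dictionary are invented for illustration.

```python
def run(instructions, flags):
    """Toy pipeline: a 'skip' instruction whose predicate condition
    (a function of the flag bits) is satisfied suppresses the next
    instruction's operation, even though that instruction is still
    fetched and decoded."""
    executed = []
    skip_next = False
    for op, predicate in instructions:
        if skip_next:
            skip_next = False
            continue  # decoded, but not carried out by the execute stage
        if op == "skip":
            skip_next = predicate(flags)  # predicate over the flag bits
        else:
            executed.append(op)
    return executed
```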

US Pat. No. 9,495,158

MULTI-PROCESSOR SYSTEM HAVING TRIPWIRE DATA MERGING AND COLLISION DETECTION

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a first pool of processors, wherein each of the processors has a tripwire bus port, wherein each of the processors can decode
and execute a tripwire instruction, wherein execution of the tripwire instruction causes a valid multi-bit value to be output
from the processor onto the tripwire bus port of the processor; and

a tripwire data merging and collision detection circuit (TDMCDC), wherein the TDMCDC is coupled to the tripwire bus port of
each of the processors of the pool, wherein: 1) if more than one of the processors is outputting a valid multi-bit value onto
its tripwire bus port at a given time then the TDMCDC asserts a collision bit signal and supplies the asserted collision bit
signal onto a set of conductors; 2) if one and only one of the processors is outputting a valid multi-bit value onto its tripwire
bus port at a given time then the TDMCDC supplies the valid multi-bit value onto the set of conductors along with a deasserted
collision bit signal; 3) if none of the processors is outputting a valid multi-bit value onto its tripwire bus port at a given
time then the TDMCDC does not output a valid multi-bit value onto the set of conductors.
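The three TDMCDC cases map cleanly onto a merge function. A sketch, modeling "not driving a valid value" as `None` and the set of conductors as a (value, collision-bit) pair:

```python
def merge_tripwires(outputs):
    """TDMCDC behavior for one clock cycle: 'outputs' holds each
    processor's tripwire value, None meaning no valid value driven.
    Returns (value, collision) for the shared set of conductors."""
    valid = [v for v in outputs if v is not None]
    if len(valid) > 1:
        return None, True       # 1) collision bit asserted
    if len(valid) == 1:
        return valid[0], False  # 2) the single valid value passes through
    return None, False          # 3) nothing valid on the conductors
```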

US Pat. No. 9,264,256

MERGING PCP FLOWS AS THEY ARE ASSIGNED TO A SINGLE VIRTUAL CHANNEL

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving configuration information onto a Media Access Control (MAC) layer interface circuit of a Network Flow Processor
(NFP) integrated circuit, wherein the configuration information includes port definition configuration information and Priority
Code Point (PCP) remap information, wherein the PCP remap information includes a plurality of portions;

(b) using the port definition configuration information to configure the MAC layer interface circuit to include a first number
of physical MAC ports;

(c) receiving a plurality of PCP flows of ethernet frames via the physical MAC ports onto the NFP integrated circuit, wherein
all the frames of a PCP flow are received via the same physical MAC port and wherein all of the frames of the PCP flow have
the same PCP value;

(d) storing each respective portion of the PCP remap information in association with a corresponding respective one of the
physical MAC ports; and

(e) for each frame received via a particular physical MAC port using the PCP value of the frame and the portion of the PCP
remap information associated with the physical MAC port to assign the frame to one of a second number of virtual channels,
wherein the assigning of (e) involves assigning more than one PCP flow to the same virtual channel, wherein each PCP flow
received in (c) via a particular physical MAC port has a relative priority with respect to each other PCP flow received in
(c) via the physical MAC port, wherein the relative priorities of PCP flows within the particular physical MAC port are maintained
with respect to one another after the assigning of (e) even though some PCP flows may be assigned to the same virtual channel,
and wherein the first number multiplied by eight is greater than the second number.

US Pat. No. 9,164,730

SELF-TIMED LOGIC BIT STREAM GENERATOR WITH COMMAND TO RUN FOR A NUMBER OF STATE TRANSITIONS

Netronome Systems, Inc., ...

1. A method comprising:
(a) sending a command to a random number generator via a digital bus, wherein the random number generator includes a self-timed
logic bit stream generator, and wherein the command includes a value;

(b) causing the self-timed logic bit stream generator to state transition a number of times and then to stop automatically,
wherein the number of times is dependent upon the value, and wherein as a result of the state transitioning the self-timed
logic bit stream generator outputs a bit stream; and

(c) causing the bit stream to be used to generate a multi-bit random number.

US Pat. No. 9,483,439

PICOENGINE MULTI-PROCESSOR WITH POWER CONTROL MANAGEMENT

Netronome Systems, Inc., ...

13. An apparatus comprising:
a data input port that receives a stream of input data values;
a picoengine pool comprising a first plurality of picoengines and a second plurality of picoengines;
a task assignor that assigns each input data value to one of the first plurality of picoengines along with a task to be performed
on the input data value such that the picoengine generates an output data value, wherein the picoengines of the first plurality
are assigned one-by-one in a selected sequence, wherein the selected sequence is a sequence of a plurality of selectable sequences,
wherein the selected sequence is determined by configuration information, and wherein none of the second plurality of picoengines
is assigned an input data value;

an output data picoengine selector that supplies picoengine select signals to the first plurality of picoengines and to the
second plurality of picoengines such that an output data value is read from each of the first plurality of picoengines, wherein
the output data values are read from the first plurality of picoengines one-by-one in the selected sequence thereby generating
a stream of output data values;

a picoengine power enable signal generator circuit that generates a plurality of picoengine power enable signals from the
configuration information and that supplies one of the picoengine power enable signals to each respective one of the first
plurality of picoengines and to each respective one of the second plurality of picoengines, wherein the plurality of picoengine
power enable signals enables the first plurality of picoengines and disables the second plurality of picoengines; and

a data output port that outputs the stream of output data values.
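The task-assignment discipline of claim 13 can be sketched behaviorally: inputs go one-by-one to the power-enabled picoengines in the selected sequence, and outputs are read back in that same sequence, so output order tracks input order. The function and parameter names are assumptions of this sketch, not the patent's terminology.

```python
def run_pool(inputs, enabled, sequence, task):
    """Behavioral model: assign each input value one-by-one to the
    enabled picoengines in the selected sequence and read the outputs
    back in the same sequence. Disabled picoengines (the second
    plurality) receive no work."""
    order = [pe for pe in sequence if pe in enabled]  # active engines only
    outputs = []
    for i, value in enumerate(inputs):
        pe = order[i % len(order)]  # one-by-one in the selected sequence
        outputs.append(task(pe, value))
    return outputs
```

Because assignment and readout use the same sequence, the output stream preserves the input stream's order regardless of which engines are enabled.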

US Pat. No. 9,417,916

INTELLIGENT PACKET DATA REGISTER FILE THAT PREFETCHES DATA FOR FUTURE INSTRUCTION EXECUTION

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a packet buffer memory that stores an amount of packet data, wherein the packet data stored in the packet buffer memory is
a plurality of bytes of packet data;

a packet portion request bus system;
a packet portion data bus; and
a plurality of processors, wherein each processor comprises:
a decode stage;
a register file read stage comprising an intelligent packet data register file, wherein the intelligent packet data register
file cannot store all of the amount of packet data that is stored in the packet buffer memory; and

an execute stage, wherein the intelligent packet data register file initiates a prefetching of a packet portion from the packet
buffer memory by causing a packet portion request to be sent across the packet portion request bus system to the packet buffer
memory such that in response packet data is communicated from the packet buffer memory via the packet portion data bus and
is loaded into the intelligent packet data register file, wherein the sending of the packet portion request occurs at a time
that precedes an execution by the processor of an instruction that requires the packet portion data to be supplied by the
intelligent packet data register file to the execute stage, and wherein there is only one packet buffer memory that is shared
by the plurality of processors.

US Pat. No. 9,401,880

FLOW CONTROL USING A LOCAL EVENT RING IN AN ISLAND-BASED NETWORK FLOW PROCESSOR

Netronome Systems, Inc., ...

11. An island-based network flow processor (IB-NFP) integrated circuit, comprising:
a first island through which a flow of packet information passes, wherein the IB-NFP integrated circuit has an amount of a
processing resource available to the first island for handling the incoming packet information;

a third island through which the flow of packet information passes, wherein the flow of packet information passes in a path
onto the IB-NFP integrated circuit, through the first island, through the third island, and out of the IB-NFP integrated circuit;
and

a second island that receives a configuration packet from the first island and in response sends a first communication to
the third island across a data bus thereby causing the third island to send a second communication to the first island across
the data bus, wherein the second communication causes the amount of the processing resource available to the first island
to be increased, wherein the first, second and third islands all have the same rectangular size and shape.

US Pat. No. 9,385,957

FLOW KEY LOOKUP INVOLVING MULTIPLE SIMULTANEOUS CAM OPERATIONS TO IDENTIFY HASH VALUES IN A HASH BUCKET

Netronome Systems, Inc., ...

1. A device, comprising:
a processor that receives a flow key and in response generates a lookup command,
a lookup engine that includes a plurality of Content Addressable Memory (CAM) lookup blocks, wherein the lookup engine can
perform a plurality of simultaneous CAM lookup operations, wherein the lookup engine determines a hash value A and a hash value
B from the flow key, wherein the lookup engine uses the hash value A to identify a hash bucket in the hash table, wherein
the hash bucket includes a plurality of hash bucket entry fields, and wherein the hash value B is stored in at least one of
the hash bucket entry fields of the hash bucket, wherein each CAM lookup block simultaneously performs a CAM lookup operation
on the content of a hash bucket entry field, and wherein each CAM lookup block outputs a CAM lookup output value that indicates
if the content of the hash bucket entry field is the same as the hash value B;

a bus that communicates the lookup command from the processor to the lookup engine; and
a memory unit that stores a hash table, wherein the hash table includes a plurality of hash buckets, wherein CAM lookup output
values identify one or more flow keys stored in a key table, wherein the key table is stored in the memory unit, wherein each
identified flow key corresponds to a hash bucket entry field that stores the hash value B, wherein each flow key stored in
the key table has a corresponding lookup output information value, wherein the lookup output information value is an action
value, wherein the action value indicates one of a plurality of actions to be performed on the packet by the processor, wherein
the device has a fast path whereby packets pass through the device without being processed by a host processor, wherein the
device has a slow path whereby packets are processed by the host processor, and wherein the action is to transfer the packet
to the host processor for slow path processing.
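The two-hash lookup of this claim can be modeled in software: hash A selects a bucket, every bucket entry field is compared against hash B at once (parallel CAM blocks in hardware, a loop here), and matching entries identify candidate flow keys that are confirmed by full comparison. The table layouts and hash functions below are assumptions of the sketch.

```python
def flow_key_lookup(flow_key, hash_table, key_table, hash_a, hash_b):
    """Model of the claimed lookup. cam_out holds one CAM lookup output
    value per bucket entry field: 1 if the field equals hash value B."""
    a, b = hash_a(flow_key), hash_b(flow_key)
    bucket = hash_table.get(a, [])
    cam_out = [int(entry == b) for entry in bucket]
    # Matching entry fields identify flow keys stored in the key table.
    candidates = [key_table[(a, i)] for i, hit in enumerate(cam_out) if hit]
    # Hash B can collide, so confirm with a full flow-key comparison.
    return [k for k in candidates if k == flow_key]
```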

US Pat. No. 9,258,256

INVERSE PCP FLOW REMAPPING FOR PFC PAUSE FRAME GENERATION

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving frames onto a physical MAC port of a Network Flow Processor (NFP) integrated circuit;
(b) supplying configuration information to an Inverse PCP Remap Look Up Table (IPRLUT) circuit of the NFP integrated circuit;
(c) writing frame data of the frames into a linked list of buffers, wherein the linked list of buffers stores frame data for
a single virtual channel, wherein the single virtual channel is identified by a virtual channel number;

(d) maintaining a buffer count for the linked list of buffers;
(e) determining that the buffer count has exceeded a predetermined threshold value;
(f) supplying the virtual channel number to the IPRLUT circuit such that the IPRLUT circuit outputs a multi-bit value, wherein
the multi-bit value includes a plurality of bits, wherein each respective one of the plurality of bits corresponds to a respective
one of eight PCP (Priority Code Point) priority levels;

(g) in response to the determining of (e) using the multi-bit value to generate a Priority Flow Control (PFC) pause frame,
wherein the PFC pause frame has a priority class enable vector field, wherein the priority class enable vector field includes
a plurality of enable bits, wherein each respective one of the plurality of enable bits corresponds to a respective one of
the eight PCP priority levels, and wherein multiple ones of the enable bits are set thereby indicating that multiple ones
of the PCP priority levels are to be paused; and

(h) outputting the PFC pause frame from the physical MAC port.
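Steps (f) and (g) amount to a table lookup followed by vector construction: the IPRLUT maps a virtual channel number to an 8-bit value with one bit per PCP priority level, and the set bits become the priority class enable vector of the PFC pause frame. A minimal sketch, with the per-class pause time of 0xFFFF and the frame representation assumed:

```python
def pfc_pause_frame(vchan, iprlut, pause_time=0xFFFF):
    """Build a PFC pause frame from the IPRLUT output for one virtual
    channel: one enable bit per PCP priority level, and a nonzero pause
    time for each enabled class (0xFFFF is an assumed default)."""
    enable_vector = iprlut[vchan] & 0xFF
    pause_times = [pause_time if (enable_vector >> p) & 1 else 0
                   for p in range(8)]
    return {"class_enable_vector": enable_vector,
            "pause_times": pause_times}
```

Multiple set bits pause multiple PCP priority levels with a single frame, which is the point of the inverse remapping: one congested virtual channel may carry traffic of several priorities.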

US Pat. No. 9,467,378

METHOD OF GENERATING SUBFLOW ENTRIES IN AN SDN SWITCH

Netronome Systems, Inc., ...

1. A method involving a Software-Defined Networking (SDN) switch, wherein the SDN switch comprises a fabric of Network Flow
Switch (NFX) circuits and a Network Flow Processor (NFP) circuit, wherein none of the NFX circuits comprises any SDN protocol
stack, wherein packets received onto the SDN switch from external sources are received via the fabric of NFX circuits, wherein
packets output from the SDN switch to external destinations are output via the fabric of NFX circuits, wherein each of the
NFX circuits maintains at least one flow table, wherein the NFP circuit maintains at least one flow table, the method comprising:
(a) receiving a packet onto the SDN switch via one of the NFX circuits;
(b) determining in the NFX circuit that the packet matches no flow entry stored in any flow table in the NFX circuit;
(c) forwarding the packet from the NFX circuit to the NFP circuit;
(d) determining in the NFP circuit that the packet matches a first flow entry stored in a flow table in the NFP circuit, wherein
the first flow entry applies to a relatively broad flow of packets;

(e) generating a second flow entry in the NFP circuit, wherein the second flow entry applies to a relatively narrow subflow
of packets, wherein the packet received in (a) is one of the packets of the subflow, wherein all packets of the subflow are
packets of the broad flow, and wherein some packets of the broad flow are not packets of the subflow;

(f) forwarding the second flow entry from the NFP circuit to the NFX circuit and storing the second flow entry in a flow table
in the NFX circuit; and

(g) receiving a subsequent packet of the subflow onto the SDN switch via the NFX circuit and determining that the subsequent
packet matches the second flow entry and using the second flow entry to output the subsequent packet from the SDN switch without
forwarding the subsequent packet to the NFP circuit.
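The broad-flow/subflow relationship of steps (d)-(g) can be modeled with wildcard matching: a narrow subflow entry specifies every field of the broad entry plus more, so all subflow packets match the broad entry but not vice versa. A sketch, assuming dict-based packets and first-match-wins ordering with narrower entries stored first:

```python
def matches(entry, pkt):
    """A flow entry matches when every field it specifies equals the
    packet's value; fields it leaves unspecified are wildcards."""
    return all(pkt.get(k) == v for k, v in entry["match"].items())

def lookup(flow_table, pkt):
    """Return the first matching entry (narrower subflow entries are
    assumed to precede the broad entries they refine)."""
    for entry in flow_table:
        if matches(entry, pkt):
            return entry
    return None
```

Once the NFP pushes the subflow entry down in step (f), subsequent subflow packets hit it in the NFX and are output directly, never reaching the NFP.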

US Pat. No. 9,330,041

STAGGERED ISLAND STRUCTURE IN AN ISLAND-BASED NETWORK FLOW PROCESSOR

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a plurality of functional circuits, wherein each of the functional circuits is part of a corresponding one of a plurality
of rectangular islands of identical shape, wherein some of the rectangular islands are islands of a first row, wherein others
of the rectangular islands are islands of a second row, wherein others of the rectangular islands are islands of a third row,
wherein the rectangular islands of the first row are oriented in staggered relation with respect to the rectangular islands
of the second row, and wherein the rectangular islands of the second row are oriented in staggered relation with respect to
the rectangular islands of the third row; and

a configurable mesh bus coupled to each of the plurality of rectangular islands of identical shape, wherein a first link extends
in a substantially straight first line from a crossbar switch of a first rectangular island of the second row to a crossbar
switch of a first rectangular island of the first row, wherein a second link extends in a substantially straight second line from
the crossbar switch of the first rectangular island of the second row to a crossbar switch of a second rectangular island
of the first row, wherein the substantially straight first line is not parallel to the substantially straight second line,
and wherein the substantially straight first line is not perpendicular to the substantially straight second line.

US Pat. No. 9,164,794

HARDWARE PREFIX REDUCTION CIRCUIT

Netronome Systems, Inc., ...

1. A circuit, comprising:
a plurality of levels, each level comprising:
an input conductor coupled to a first sequential logic device;
an output conductor coupled to a second sequential logic device; and
a plurality of nodes coupled to the input conductor and the output conductor, each node comprising:
a storage device that stores a digital logic level; and
a buffer, wherein the digital logic level is coupled to the output conductor by the buffer, wherein only one digital logic
level is coupled to the output conductor, and wherein the nodes included in the first level are not included in any subsequent
levels.

US Pat. No. 9,154,468

EFFICIENT FORWARDING OF ENCRYPTED TCP RETRANSMISSIONS

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving onto a network appliance a first retransmit Transmission Control Protocol (TCP) segment, wherein the first retransmit
TCP segment has: 1) a TCP sequence number in a TCP header of the first retransmit TCP segment, and 2) a first encrypted Secure
Socket Layer (SSL) payload;

(b) determining from the TCP sequence number in the TCP header a second TCP sequence number, wherein the second TCP sequence
number corresponds to a start of the first encrypted SSL payload;

(c) determining from the second TCP sequence number a first start byte position in SSL sequence space, wherein the first start
byte position in SSL sequence space corresponds to the start of the first encrypted SSL payload;

(d) using the first start byte position to determine: 1) a second start byte position in SSL sequence space, and 2) a decrypt
engine state associated with the second start byte position;

(e) putting a decrypt engine into the decrypt engine state determined in (d);
(f) incrementing the state of the decrypt engine by an amount, wherein the amount is a function of the first start byte position
and the second start byte position;

(g) using the decrypt engine to decrypt the first encrypted SSL payload thereby generating a decrypted SSL payload;
(h) encrypting the decrypted SSL payload thereby generating a second encrypted SSL payload; and
(i) transmitting from the network appliance a second retransmit TCP segment, wherein the second retransmit TCP segment includes
the second encrypted SSL payload that was generated in (h), and wherein the first and second retransmit TCP segments are both
communicated across a first flow of a first TCP connection.

US Pat. No. 9,519,484

PICOENGINE INSTRUCTION THAT CONTROLS AN INTELLIGENT PACKET DATA REGISTER FILE PREFETCH FUNCTION

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a packet buffer memory that stores an amount of packet data, wherein the packet data is a plurality of bytes of a packet,
wherein the bytes have an order in the packet, and wherein the bytes are stored in the packet buffer memory in the same order
that the bytes have in the packet;

a processor comprising:
a register file read stage comprising an intelligent packet data register file, wherein the intelligent packet data register
file cannot store all of the amount of packet data that is stored in the packet buffer memory;

an execute stage, wherein execution of an instruction by the processor involves the execute stage receiving a packet portion
from the intelligent packet data register file and using the packet portion to carry out an execute stage operation;

a packet portion prefetch enable bit, wherein if the packet portion prefetch enable bit has a first digital value then the
intelligent packet data register file is enabled to prefetch the packet portion from the packet buffer memory before the instruction
is decoded by the processor if the intelligent packet data register is not already storing the packet portion, and wherein
if the packet portion prefetch enable bit has a second digital value then the intelligent packet data register file is disabled
from prefetching any packet portion from the packet buffer memory;

wherein the processor can execute a second instruction, wherein execution of the second instruction by the processor causes
the packet portion prefetch enable bit to be loaded with a digital value specified by the second instruction.

US Pat. No. 9,176,905

RECURSIVE USE OF MULTIPLE HARDWARE LOOKUP STRUCTURES IN A TRANSACTIONAL MEMORY

Netronome Systems, Inc., ...

1. An integrated circuit, comprising:
a lookup engine that reads a first result value and a first indicator from a memory and determines if the first result value
is a final result value, wherein the lookup engine outputs the first result value when the first indicator indicates that
the first result value is the final result value, and wherein the lookup engine does not output the first result value and
reads a second result value and a second indicator from the memory when the first indicator indicates that the first result
value is not the final result value.
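The recursive lookup can be sketched as a follow-the-pointer loop: each read yields a result value and an indicator, and a non-final result directs the next read. Treating the non-final result value itself as the next address is an assumption of this sketch; the claim leaves the address derivation open.

```python
def recursive_lookup(memory, addr):
    """Chase chained lookup structures until an indicator marks the
    result as final; only the final result value is output."""
    result, is_final = memory[addr]
    while not is_final:
        result, is_final = memory[result]  # non-final result -> next address
    return result
```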

US Pat. No. 9,515,929

TRAFFIC DATA PRE-FILTERING

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving traffic data onto a networking appliance;
(b) separating a field within the traffic data into multiple subfields, wherein the separating is performed by a network flow
processor;

(c) performing a lookup on each subfield in parallel, wherein each lookup generates a lookup result, wherein the parallel
lookups are performed by a plurality of lookup operators;

(d) determining a compliance result based on the multiple lookup results generated in (c);
(e) determining an action based at least in part upon the compliance result determined in (d), wherein the network flow processor
is an Island-Based Network Flow Processor, wherein the determining of (d) is performed by a lookup result analyzer, and wherein
the determining of (e) is performed by an action identifier.
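Steps (b) through (e) form a split/lookup/combine pipeline: the field is separated into subfields, each subfield is looked up (in parallel by the lookup operators in hardware, sequentially here), the lookup results are reduced to a compliance result, and the compliance result selects an action. The AND-combination policy and the action names below are assumptions of the sketch.

```python
def prefilter(field, slices, tables):
    """Model of steps (b)-(e): split a traffic-data field into
    subfields, look each one up, combine results into a compliance
    result (lookup result analyzer), and map it to an action
    (action identifier)."""
    subfields = [field[a:b] for a, b in slices]
    results = [tbl.get(sf, False) for sf, tbl in zip(subfields, tables)]
    compliant = all(results)                     # assumed AND policy
    action = "forward" if compliant else "drop"  # assumed action names
    return compliant, action
```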

US Pat. No. 9,208,844

DDR RETIMING CIRCUIT

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a first input terminal that receives a double data rate (DDR) data signal onto the integrated circuit;
a first input buffer;
a second input terminal that receives a DDR clock signal onto the integrated circuit;
a second input buffer;
a first sequential logic element (SLE) having a data input lead and a data output lead, wherein the DDR data signal passes
from the first input terminal, through the first input buffer, and to the data input lead of the first SLE;

a second SLE having a data input lead and a data output lead, wherein the DDR data signal passes from the first input terminal,
through the second input buffer, and to the data input lead of the second SLE;

a first data signal path;
a second data signal path;
a third SLE having a data input lead and a data output lead, wherein the data input lead of the third SLE is coupled via the
first data signal path to the data output lead of the first SLE;

a fourth SLE having a data input lead and a data output lead, wherein the data input lead of the fourth SLE is coupled via
the second data signal path to the data output lead of the second SLE;

a multiplexer having a first data input lead, a second data input lead, a select input lead, and a data output lead, wherein
the first data input lead of the multiplexer is coupled to the data output lead of the third SLE, wherein the second data
input lead of the multiplexer is coupled to the data output lead of the fourth SLE; and

a DDR Clock Signal Supplying Circuit (DCSSC) having a first portion, a second portion, and a third portion, wherein the DDR
clock signal passes from the second input terminal, through the second input buffer, through the first portion of the DCSSC,
through the second portion of the DCSSC, and through the third portion of the DCSSC to the select input lead of the multiplexer,
wherein clock input leads of the first and second SLEs are coupled to the first portion of the DCSSC such that the first SLE
is clocked on rising edges of the DDR clock signal and the second SLE is clocked on falling edges of the DDR clock signal,
and wherein clock input leads of the third and fourth SLEs are coupled to the third portion of the DCSSC such that the third
SLE is clocked on falling edges of the DDR clock signal and the fourth SLE is clocked on rising edges of the DDR clock signal.

US Pat. No. 9,626,306

GLOBAL EVENT CHAIN IN AN ISLAND-BASED NETWORK FLOW PROCESSOR

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a plurality of rectangular islands disposed in rows, wherein there are at least three rows of the rectangular islands, wherein
each rectangular island includes a functional circuit and a switch without intersecting another functional circuit or another
switch, and wherein each rectangular island comprises:

a Run Time Configurable Switch (RTCS); and
a plurality of half links, wherein the rectangular islands are coupled together such that the half links and the RTCSs together
form a configurable mesh event bus, wherein the configurable mesh event bus is configured to form a local event ring and a
global event chain, wherein the local event ring provides a communication path along which an event packet is communicated
to each rectangular island along the local event ring, wherein the event packet can pass from the local event ring and onto
the global event chain such that the event packet is then communicated along the global event chain, wherein the global event
chain is not a ring, wherein the local event ring comprises a plurality of event ring circuits and a plurality of event ring
segments, wherein the event ring circuits are clocked by a clock signal, and wherein a transitioning of the clock signal
causes an event packet being output by a first event ring circuit onto a first ring segment to be latched into a second event
ring circuit such that the second event ring circuit begins outputting the event packet onto a second event ring segment and
such that the first event ring circuit stops outputting the event packet onto the first event ring segment.

US Pat. No. 9,577,832

GENERATING A HASH USING S-BOX NONLINEARIZING OF A REMAINDER INPUT

Netronome Systems, Inc., ...

1. A method comprising:
(a) storing an amount of incoming data in a processor, wherein the amount of incoming data comprises a first portion and a
second portion;

(b) maintaining a hash register value in a hash register;
(c) supplying the first portion of the amount of incoming data onto a set of input leads of a modulo-2 multiplier;

(d) using the modulo-2 multiplier to modulo-2 multiply the first portion by a multiplier value thereby generating a product value, wherein the product value comprises
a first portion and a second portion;

(e) using a programmable nonlinearizing function circuit to perform a nonlinearizing function on the hash register value and
thereby generating a modified version of the hash register value;

(f) using a first modulo-2 summer to modulo-2 sum the first portion of the product value and the modified version of the hash register value and thereby generating a first
sum value;

(g) using a modulo-2 divider to modulo-2 divide the first sum value by a divisor value and thereby outputting a division remainder value;

(h) using a second modulo-2 summer to modulo-2 sum the second portion of the product value and the division remainder value; and

(i) loading the division remainder value into the hash register, wherein (a) through (i) are performed by the processor, and
wherein the hash register, the modulo-2 multiplier, the programmable nonlinearizing function circuit, the first modulo-2 summer, the modulo-2 divider, and the second modulo-2 summer are parts of the processor.
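The arithmetic in steps (c)-(i) is carry-less GF(2) arithmetic: modulo-2 multiplication is a carry-less multiply, and modulo-2 division is the CRC-style remainder. A sketch of one iteration follows; the nibble-wise S-box application, the 16-bit portion width, and the return of both the remainder and the second sum are assumptions not fixed by the claim.

```python
def clmul(a, b):
    """Modulo-2 (carry-less) multiplication over GF(2)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def mod2_remainder(value, divisor):
    """Modulo-2 division, returning the CRC-style remainder."""
    dlen = divisor.bit_length()
    while value.bit_length() >= dlen:
        value ^= divisor << (value.bit_length() - dlen)
    return value

def hash_step(hash_reg, portion, multiplier, divisor, sbox, width=16):
    """One pass of steps (c)-(i): multiply the incoming portion, split
    the product, nonlinearize the hash register through the S-box, XOR
    with the product's first portion, divide, XOR the remainder with the
    product's second portion, and load the remainder into the register."""
    product = clmul(portion, multiplier)
    hi, lo = product >> width, product & ((1 << width) - 1)
    nl = 0
    for i in range(0, width, 4):          # assumed nibble-wise S-box
        nl |= sbox[(hash_reg >> i) & 0xF] << i
    rem = mod2_remainder(hi ^ nl, divisor)   # steps (f)-(g)
    return rem, lo ^ rem                     # steps (i) and (h)
```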

US Pat. No. 9,612,981

CONFIGURABLE MESH DATA BUS IN AN ISLAND-BASED NETWORK FLOW PROCESSOR

Netronome Systems, Inc., ...

16. An integrated circuit comprising:
a command mesh comprising:
a first crossbar switch CB1 that is located centrally within a first rectangular island I1, a second crossbar switch CB2 that is located centrally within a second rectangular island I2, a third crossbar switch CB3 that is located centrally within a third rectangular island I3, wherein the first and the second island are disposed in a first row along a horizontal dimension, and wherein the third
island is disposed in a second row that extends from the first row along a vertical dimension;

a first link L1 that extends in a substantially straight line between CB1 and CB2, a second link L2 that extends in a substantially straight line between CB1 and CB3, a third link L3 that extends in a substantially straight line between CB2 and CB3, wherein L1, L2 and L3 form an isosceles triangle; and

a configurable mesh command/push/pull (CPP) data bus, wherein the configurable mesh CPP data bus is coupled to functional
circuitry in each of the islands of the integrated circuit, wherein the islands are disposed in at least three rows.

US Pat. No. 9,537,801

DISTRIBUTED PACKET ORDERING SYSTEM HAVING SEPARATE WORKER AND OUTPUT PROCESSORS

Netronome Systems, Inc., ...

1. A network flow processor integrated circuit comprising:
an ingress circuit that receives packets of a plurality of flows and applies a hash function to the packets, wherein a first
set of the flows belongs to a first ordering context, wherein a second set of the flows belongs to a second ordering context;

a plurality of Worker Processors (WPs), wherein more than one WP receives packets of the first ordering context, wherein each
WP that receives packets of the first ordering context: 1) causes metadata of each packet of the first ordering context to
be stored in a memory in association with a sequence number of the packet, 2) issues release requests to release packets of
the first ordering context, and 3) issues release messages to release packets of the first ordering context;

a plurality of Output Processors (OPs), wherein one and only one of the OPs handles generating transmit commands to transmit
packets of the first ordering context, wherein another of the OPs handles generating transmit commands to transmit packets
of the second ordering context, wherein said one OP: 1) receives release messages to release packets of the first ordering
context, wherein the release messages are received from multiple ones of the WPs, 2) retrieves metadata of the packets that
was stored in the memory, 3) uses the metadata retrieved to generate first transmit commands to transmit packets of the first
ordering context, and 4) uses the metadata retrieved to generate second transmit commands to transmit packets of the first
ordering context, wherein the first transmit commands have a format, wherein the second transmit commands have a format, and
wherein the format of the first transmit commands is different than the format of the second transmit commands;

a first egress circuit that receives the first transmit commands from said one OP to transmit packets of the first ordering
context; and

a second egress circuit that receives the second transmit commands from said one OP to transmit packets of the first ordering
context.

US Pat. No. 9,282,051

CREDIT-BASED RESOURCE ALLOCATOR CIRCUIT

Netronome Systems, Inc., ...

1. A method comprising:
(a) operating an allocator circuit in a sequence of output determining phases and bubble sorting phases, wherein each output
determining phase is followed by a bubble sorting phase, and wherein each bubble sorting phase is followed by an output determining
phase;

(b) maintaining, on the allocator circuit, 1) a resource value for each of a plurality of processing entities and 2) an indication
of a processing entity for each of the resource values, wherein the allocator circuit comprises a bubble sorting module circuit
comprised of combinatory logic and a state machine to form one or more chains, wherein each chain corresponds to a set of
processing entities, wherein the resource value for a processing entity is indicative of an amount of a resource the processing
entity has available, wherein the plurality of processing entities includes the set of processing entities;

(c) receiving onto the allocator circuit an allocation request, wherein the allocation request is indicative of: 1) an amount
of the resource requested, and 2) the set of processing entities;

(d) in a first output determining phase, 1) determining, based at least in part on a bubble sort output, one processing entity
from the set of processing entities, and 2) adjusting the resource value for the determined processing entity, wherein the
resource value is adjusted in (d) by the amount of the resource indicated by the allocation request;

(e) sending an allocation command from the allocator circuit to the processing entity determined in (d);
(f) in a bubble sorting phase bubble sorting indications of processing entities of the set of processing entities based on
the resource values of the processing entities and thereby determining a bubble sort output for the set of processing entities;

(g) receiving onto the allocator circuit an allocation response, wherein the allocation response is indicative of: 1) the
determined processing entity, and 2) the amount of the resource; and

(h) in a second output determining phase, adjusting the resource value of the determined processing entity indicated by the
allocation response, wherein the resource value is adjusted in (h) by the amount of the resource indicated by the allocation
response, and wherein (b) through (h) are performed by the allocator circuit.
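The alternating phases reduce to: keep the set's entity indications ordered by available resource, grant to the entity the sort placed first, deduct on grant, and restore on response. A minimal sketch, in which a full sort stands in for the incremental bubble-sorting passes of the hardware:

```python
def allocate(request_amount, entity_set, credits):
    """One output-determining phase (steps (c)-(e)): pick the entity of
    the set with the most available resource, per the sort output, and
    deduct the requested amount from its resource value."""
    order = sorted(entity_set, key=lambda e: credits[e], reverse=True)
    chosen = order[0]
    credits[chosen] -= request_amount
    return chosen

def release(entity, amount, credits):
    """Step (h): an allocation response restores the resource value."""
    credits[entity] += amount
```

Sorting between grants is what keeps successive allocations spread across the least-loaded entities rather than piling onto one.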

US Pat. No. 9,584,637

GUARANTEED IN-ORDER PACKET DELIVERY

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a packet descriptor from an ingress island and a sequence number onto an egress island, wherein both the ingress
island and the egress island are included in an island based network flow processor;

(b) storing the packet descriptor in a buffer in a memory unit, wherein the memory unit comprises a plurality of buffers,
wherein a register comprises a plurality of valid bits, wherein there is a one-to-one correspondence between the valid bits
and the buffers, and wherein a head pointer points to one valid bit in the register;

(c) setting the valid bit corresponding to the buffer in (b);
(d) if the sequence number is in a first range then: (i) outputting all packet descriptors stored in the memory unit, and
(ii) clearing all valid bits that are associated with the buffers that stored the packet descriptors;

(e) if the sequence number is in a second range then: (i) outputting the packet descriptor received in (a), and (ii) clearing
the valid bit that is associated with the buffer that stored the packet descriptor; and

(f) if the sequence number is in a third range then: (i) if the head pointer points to a set valid bit then outputting the
packet descriptor received in (a), clearing the valid bit associated with the buffer that stored the packet descriptor, incrementing
the head pointer, and repeating (f), or (ii) if the head pointer does not point to a set valid bit then returning to (a).
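The third-range behavior of step (f) is a classic reorder buffer: descriptors land in buffers tracked by valid bits, and a head pointer walks the valid bits, releasing descriptors only once every earlier sequence number has arrived. A sketch, with sequence-number-modulo-size slot indexing assumed:

```python
class ReorderBuffer:
    """Model of steps (b), (c) and (f): per-buffer valid bits plus a
    head pointer that releases packet descriptors strictly in order."""

    def __init__(self, size):
        self.size = size
        self.bufs = [None] * size
        self.valid = [False] * size
        self.head = 0

    def store(self, seq, desc):
        """Steps (b)-(c): store the descriptor, set its valid bit."""
        slot = seq % self.size       # assumed slot derivation
        self.bufs[slot] = desc
        self.valid[slot] = True

    def drain(self):
        """Step (f)(i): while the head points at a set valid bit,
        output the descriptor, clear the bit, advance the head."""
        out = []
        while self.valid[self.head]:
            out.append(self.bufs[self.head])
            self.valid[self.head] = False
            self.head = (self.head + 1) % self.size
        return out
```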

US Pat. No. 10,031,878

CONFIGURABLE MESH DATA BUS IN AN ISLAND-BASED NETWORK FLOW PROCESSOR

Netronome Systems, Inc., ...

2. An integrated circuit comprising:a configurable mesh command/push/pull (CPP) data bus, wherein the configurable mesh CPP data bus is coupled to functional circuitry in each of a plurality of islands of the integrated circuit, wherein the islands are disposed in at least three rows, wherein the configurable mesh CPP data bus comprises first links that are oriented to be collinear with respect to one another and extend in a first direction, wherein the configurable mesh CPP data bus comprises second links that are oriented to be collinear with respect to one another and extend in a second direction, wherein the configurable mesh CPP data bus comprises third links that are oriented to be collinear with respect to one another and extend in a third direction, wherein none of the first, second and third directions is perpendicular to another of the first, second and third directions, and wherein none of the first, second and third directions is parallel to another of the first, second and third directions.

US Pat. No. 10,033,638

EXECUTING A SELECTED SEQUENCE OF INSTRUCTIONS DEPENDING ON PACKET TYPE IN AN EXACT-MATCH FLOW SWITCH

Netronome Systems, Inc., ...

1. A method comprising:(a) maintaining an exact-match flow table on an integrated circuit, wherein the exact-match flow table comprises a plurality of flow entries, wherein each flow entry comprises a Flow Identification value (Flow Id) and an action value;
(b) receiving a first packet onto the integrated circuit;
(c) analyzing the first packet and determining that the first packet is of a first type;
(d) as a result of the determining of (c) initiating execution of a first sequence of instructions by a processor of the integrated circuit, wherein execution of the first sequence causes bits of the first packet to be concatenated and modified thereby generating a first Flow Id, wherein the first Flow Id is of a first form;
(e) determining that the first Flow Id generated in (d) is a bit-by-bit exact-match of a Flow Id of a first flow entry in the exact-match flow table;
(f) using an action value of the first flow entry in outputting packet information of the first packet out of the integrated circuit;
(g) receiving a second packet onto the integrated circuit;
(h) analyzing the second packet and determining that the second packet is of a second type;
(i) as a result of the determining of (h) initiating execution of a second sequence of instructions by the processor, wherein execution of the second sequence causes bits of the second packet to be concatenated and modified thereby generating a second Flow Id, wherein the second Flow Id is of a second form;
(j) determining that the second Flow Id generated in (i) is a bit-by-bit exact-match of a Flow Id of a second flow entry in the exact-match flow table; and
(k) using an action value of the second flow entry in outputting packet information of the second packet out of the integrated circuit, wherein (a) through (k) are performed by the integrated circuit, wherein the execution of the first sequence of instructions causes a first select value to be supplied onto select input leads of a multiplexer circuit such that the multiplexer circuit outputs the first Flow Id, and wherein the execution of the second sequence of instructions causes a second select value to be supplied onto the select input leads of the multiplexer circuit such that the multiplexer circuit outputs the second Flow Id.
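The claim above can be modeled in software. The sketch below is an illustrative assumption of the claimed behavior, not the patented hardware: the packet's type selects which concatenation routine builds the Flow Id (standing in for the multiplexer's select value), and the lookup requires a bit-for-bit equality match. All names (`make_flow_id_type_a`, `lookup`, the field names) are hypothetical.

```python
# Hypothetical model: packet type selects the Flow Id generation routine,
# and the exact-match table is probed with plain bit-for-bit equality.

def make_flow_id_type_a(pkt):
    # First form of Flow Id: concatenate selected src/dst field bits.
    return (pkt["src"] << 16) | pkt["dst"]

def make_flow_id_type_b(pkt):
    # Second form of Flow Id: concatenate selected dst/port field bits.
    return (pkt["dst"] << 16) | pkt["port"]

# Stands in for the multiplexer select value choosing between forms.
FLOW_ID_MAKERS = {"A": make_flow_id_type_a, "B": make_flow_id_type_b}

def lookup(flow_table, pkt):
    flow_id = FLOW_ID_MAKERS[pkt["type"]](pkt)
    # Bit-by-bit exact match: integer equality on the whole Flow Id.
    return flow_table.get(flow_id)  # action value on hit, None on miss
```

A hit returns the stored action value, which would then drive how packet information is output.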

US Pat. No. 9,998,374

METHOD OF HANDLING SDN PROTOCOL MESSAGES IN A MODULAR AND PARTITIONED SDN SWITCH

Netronome Systems, Inc., ...

1. A method involving a Software-Defined Networking (SDN) switch, wherein the SDN switch comprises a plurality of Network Flow Switch (NFX) integrated circuits, a Network Flow Processor (NFP) circuit, and a controller processor, the method comprising:
(a) maintaining a flow table on each of the NFX integrated circuits, wherein none of the NFX integrated circuits executes any SDN protocol stack;
(b) maintaining a second SDN flow table on the NFP circuit, wherein no SDN protocol stack is executing on the NFP circuit;
(c) maintaining a copy on the NFP circuit of each of the flow tables of (a);
(d) maintaining a first SDN flow table on the controller processor, wherein the controller processor is coupled to the NFP circuit by a serial bus;
(e) executing a SDN protocol stack on the controller processor;
(f) receiving a SDN protocol message onto the SDN switch via one of the NFX integrated circuits, and communicating the SDN protocol message across a network link to the NFP circuit, and across the serial bus from the NFP circuit to the controller processor such that the SDN protocol message is received and processed by the SDN protocol stack executing on the controller processor, wherein the receiving and processing of the SDN protocol message by the SDN protocol stack causes a first flow entry to be loaded into the first SDN flow table;
(g) communicating a copy of the first flow entry across the serial bus from the controller processor to the NFP circuit such that the copy of the first flow entry is loaded into the second SDN flow table; and
(h) receiving a packet onto the SDN switch via one of the NFX integrated circuits and outputting the packet from the SDN switch via one of the NFX integrated circuits in accordance with flow entries stored in the first SDN flow table.

US Pat. No. 9,641,448

PACKET ORDERING SYSTEM USING AN ATOMIC TICKET RELEASE COMMAND OF A TRANSACTIONAL MEMORY

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving packets of a plurality of flows onto an integrated circuit;
(b) assigning the packets of some but not all of the flows to an ordering context, wherein each packet of the ordering context
is assigned a corresponding ordering sequence number, wherein the ordering sequence number is not a part of the packet as
the packet is received in (a) onto the integrated circuit;

(c) maintaining a ticket release bitmap in a transactional memory;
(d) using the ticket release bitmap to track which packets of the ordering context have been flagged for future release by
an ordering system but have not yet been released from the ordering system; and

(e) using a plurality of processors to perform application layer processing on the packets, wherein each processor further
executes a corresponding amount of ordering system code, wherein each packet of the ordering context is processed by one of
the processors as a result of execution of the amount of ordering system code such that after the application layer processing
of the packet the processor issues an atomic ticket release command to the transactional memory thereby accessing the ticket
release bitmap and such that the processor receives information in return back from the transactional memory, wherein the
ordering system includes: 1) a plurality of ticket release bitmaps, one of which is the ticket release bitmap of (d), and
2) the amount of ordering system code executing on each processor of the plurality of processors, and wherein the transactional
memory and the plurality of processors are parts of the integrated circuit.
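A minimal software model of the atomic ticket release primitive may help here; it is a sketch under assumed semantics, not the transactional memory's implementation. Out-of-order completions are flagged in the bitmap, and when the next expected sequence number arrives, it and every consecutively flagged successor are released in order.

```python
# Illustrative model of a ticket release bitmap (names are hypothetical).

class TicketReleaseBitmap:
    def __init__(self):
        self.expected = 0   # next ordering sequence number expected
        self.flags = set()  # completed-but-not-yet-released numbers

    def ticket_release(self, seq):
        """Atomically flag seq; return the sequence numbers released."""
        if seq != self.expected:
            self.flags.add(seq)  # out of order: flag for future release
            return []
        released = [seq]
        self.expected = seq + 1
        while self.expected in self.flags:  # drain consecutive flags
            self.flags.remove(self.expected)
            released.append(self.expected)
            self.expected += 1
        return released
```

The information returned from the `ticket_release` call corresponds to the claim's "information in return back from the transactional memory".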

US Pat. No. 10,032,119

ORDERING SYSTEM THAT EMPLOYS CHAINED TICKET RELEASE BITMAP BLOCK FUNCTIONS

Netronome Systems, Inc., ...

1. An ordering system comprising:
a plurality of ticket order release bitmap blocks that together store a ticket order release bitmap, wherein each Ticket Order Release Bitmap Block (TORBB) stores a different part of the ticket order release bitmap, wherein the ticket order release bitmap has a next first sequence number expected value;
a bus; and
a Ticket Order Release Command Dispatcher and Sequence Number Translator (TORCDSNT) processing circuitry that: 1) receives a release request onto the ordering system, wherein the release request includes a first sequence number, 2) identifies, in response to the receiving of the release request, a selected one of the TORBBs and determines a second sequence number, 3) sends an atomic ticket release command via the bus to a transactional memory that maintains the selected TORBB, wherein the atomic ticket release command identifies the selected TORBB and includes the second sequence number, 4) receives back from the transactional memory a return data value, wherein upon a determination that the first sequence number is the next first sequence number expected value the return data value indicates one or more consecutive second sequence numbers that are to be released from the ordering system, 5) translates the one or more consecutive second sequence numbers into one or more first sequence numbers, and 6) outputs one or more release messages, wherein the one or more release messages indicate the one or more consecutive first sequence numbers and indicate an order of the one or more consecutive first sequence numbers.

US Pat. No. 10,031,758

CHAINED-INSTRUCTION DISPATCHER

Netronome Systems, Inc., ...

1. An instruction dispatcher circuit comprising:
an instruction loader that receives from an instructing entity and onto the instruction dispatcher circuit a set of first instructions and that also receives a set of second instructions, wherein the set of first instructions includes one or more first instructions of a first type, one or more first instructions of a second type, and one or more first instructions of a third type, and wherein the set of second instructions includes one or more second instructions of the first type, one or more second instructions of the second type, and one or more second instructions of the third type;
a first queue circuit: 1) that receives and stores instructions of the first type from the instruction loader, 2) that dispatches instructions of the first type to one or more processing engines, 3) that for each dispatched instruction of the first type receives a done indication from a processing engine, and 4) that outputs a go signal if a done indication has been received for all dispatched first instructions that are of the first type, wherein the first queue circuit does not store instructions of the second type and does not store instructions of the third type;
a second queue circuit: 1) that receives and stores instructions of the second type from the instruction loader, 2) that in response to receiving the go signal from the first queue circuit dispatches instructions of the second type to one or more processing engines, 3) that for each dispatched instruction of the second type receives a done indication from a processing engine, and 4) that outputs a go signal if a done indication has been received for all dispatched first instructions of the second type, wherein the second queue circuit does not store instructions of the first type and does not store instructions of the third type; and
a third queue circuit: 1) that receives and stores instructions of the third type from the instruction loader, 2) that in response to receiving the go signal from the second queue circuit dispatches instructions of the third type to one or more processing engines, 3) that for each dispatched instruction of the third type receives a done indication from a processing engine, wherein if a done indication has been received for all dispatched first instructions of the third type then an instructions done signal is output from the instruction dispatcher circuit to the instructing entity, wherein the third queue circuit does not store instructions of the first type and does not store instructions of the second type, and wherein the instructions done signal indicates that all the instructions of the set of first instructions have been performed.
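The chained "go"/"done" ordering among the three queue circuits can be sketched in software. This is an assumed simplification (engines here complete synchronously, so each phase's "go" simply follows the previous phase's last "done"); function and field names are illustrative.

```python
# Illustrative model of chained dispatch: all instructions of one type
# must complete before the next type's queue dispatches.

def run_chained(instructions, execute):
    """Dispatch type-1, then type-2, then type-3 instructions in phases."""
    dispatched = []
    for phase_type in (1, 2, 3):
        # The "go" for this queue is implied by the previous loop
        # iteration having finished (every done indication received).
        for instr in instructions:
            if instr["type"] == phase_type:
                execute(instr)  # engine signals done synchronously here
                dispatched.append(instr["op"])
    return dispatched  # completion of phase 3 = "instructions done"
```

Note the per-type filtering mirrors each queue circuit storing only its own instruction type.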

US Pat. No. 10,034,070

LOW COST MULTI-SERVER ARRAY ARCHITECTURE

Netronome Systems, Inc., ...

1. A system of host server devices, comprising:
a plurality of columns of host server devices, wherein each respective one of the columns is mounted in a corresponding one of a plurality of racks, and the racks are disposed side-by-side one another in a row, and every one of the host server devices is coupled to at least one other of the host server devices by a networking cable, wherein each host server device of a majority of the host server devices is coupled: by a first non-optical cable to a first other host server device in a rack to one side of said each host server device, by a second non-optical cable to a second other host server device in a rack to an opposite side of said each host server device, by a third non-optical cable to a third other host server device above said each host server device in the same rack as said each host server device, and by a fourth non-optical cable to a fourth other host server device below said each host server device in the same rack as said each host server device, wherein said each host server device is not coupled to any of the other host server devices by any optical cable, and wherein said each host server device of the majority of host server devices includes: 1) a host processor, and 2) an exact-match packet switching integrated circuit that is identical to the exact-match packet switching integrated circuit of each of the other host server devices of the majority of host server devices, 3) a first cable socket port that is coupled to a first non-optical cable, 4) a second cable socket port that is coupled to a second non-optical cable, 5) a third cable socket port that is coupled to a third non-optical cable, 6) a fourth cable socket port that is coupled to a fourth non-optical cable, wherein the exact-match packet switching integrated circuit comprises an exact-match flow table structure, wherein the exact-match flow table structure comprises a Static Random Access Memory (SRAM), wherein the SRAM stores an
exact-match flow table, wherein the exact-match flow table stores flow identifiers (Flow IDs), wherein the exact-match flow table structure does not and cannot store a Flow ID that includes any wildcard identifier, wherein said each host server device: a) receives a packet via one of its first through fourth cable socket ports, b) determines a Flow ID from the packet, c) uses the determined Flow ID to perform a lookup operation using the exact-match flow table structure to find a Flow ID stored in the exact-match flow table structure that is a bit-for-bit exact match for the determined Flow ID and thereby to obtain a result value that is stored in association with the stored Flow ID, and d) uses the result value to determine how to output the packet from the host server device.

US Pat. No. 10,031,755

KICK-STARTED RUN-TO-COMPLETION PROCESSING METHOD THAT DOES NOT INVOLVE AN INSTRUCTION COUNTER

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving initial fetch information onto a processor via an initial fetch information port;
(b) in response to the receiving of the initial fetch information of (a) fetching a first block of information from an external memory system without use by the processor of any instruction counter, wherein the first block of information includes a plurality of instructions, and wherein at least one of the instructions of the first block is a fetch instruction; and
(c) executing the fetch instruction of the first block of instructions fetched in (b) and as a result fetching a second block of information from the external memory system, wherein the second block of information includes a plurality of instructions, wherein (a) through (c) are performed by the processor, and wherein the processor only fetches instructions in response to receiving initial fetch information on the initial fetch information port or as a result of executing a fetch instruction.

US Pat. No. 9,866,480

HASH RANGE LOOKUP COMMAND

Netronome Systems, Inc., ...

1. A method comprising:
(a) providing access to a hash table, wherein the hash table comprises a plurality of hash buckets, wherein each hash bucket
comprises a plurality of entry fields;

(b) receiving a hash lookup command;
(c) using the hash lookup command to receive hash command parameters, a hashed index value, and a flow key value;
(d) generating a hash value (address) using the hash command parameters and the hashed index value to access a selected entry
field in a selected hash bucket, wherein the selected entry field contains an entry value, and wherein generating of the hash
value comprises using a base address and the hashed index value to generate the hash value, and wherein the base address is
communicated as part of the hash lookup command;

(e) matching selected bits of the entry value to selected bits of the flow key value;
(f) repeating the operations of (d) and (e) until the selected bits of the entry value match the selected bits of the flow
key value or until a selectable number of hash buckets have been accessed; and

(g) returning one of an address of the selected entry field or a result associated with the selected entry field if the selected
bits of the entry value match the selected bits of the flow key value.
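The probe loop of steps (d) through (g) can be sketched as follows. This is a software approximation under assumed semantics (a list of buckets, linear probing from base plus hashed index, masked comparison for the "selected bits"); the names and the wrap-around indexing are illustrative, not from the patent.

```python
# Illustrative model of the hash range lookup of claim 1.

def hash_range_lookup(table, base, hashed_index, flow_key, mask, max_buckets):
    """Probe up to max_buckets buckets; return the matching entry's result."""
    n = len(table)
    for i in range(max_buckets):  # (f): bounded repetition of (d)-(e)
        bucket = table[(base + hashed_index + i) % n]  # (d): generate address
        for entry in bucket:
            # (e): match selected bits of entry value vs. flow key value
            if entry["key"] & mask == flow_key & mask:
                return entry["result"]  # (g): return associated result
    return None  # no match within the selectable number of buckets
```

Returning the entry's address instead of its result, as the claim also allows, would be a one-line variation.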

US Pat. No. 9,804,959

IN-FLIGHT PACKET PROCESSING

Netronome Systems, Inc., ...

1. An integrated circuit, comprising:
an internal memory;
a packet processing circuit, wherein the packet processing circuit sends an add-to-work-queue request for packet processing;
and
a packet engine, wherein the packet engine
receives the add-to-work-queue request from the packet processing circuit onto a work queue,
receives a Packet Portion Identifier (PPI) allocation request for an incoming packet and an incomplete portion of packet
data from a packet data source device, the packet data comprising a header portion and a payload portion, and wherein the
incomplete portion of packet data is associated with an unused portion of memory;

in response to receiving the allocation request, allocates a PPI to the incoming packet;
determines whether the incoming packet is ready for processing, wherein the incoming packet is ready for processing when a
threshold amount of the header portion is written into the internal memory;

in response to determining that the incoming packet is ready for processing, transfers the header portion, including the
packet descriptor generated by the integrated circuit, to the packet processing circuit after the incoming packet is ready
for processing such that the packet processing circuit starts processing the incoming packet after the packet descriptor is
written into the internal memory and before the entire packet data is written into the internal memory and an external memory.

US Pat. No. 9,755,983

MINIPACKET FLOW CONTROL

Netronome Systems, Inc., ...

11. A method, comprising:
(a) routing a minipacket to a first channel;
(b) updating a current credit counter value associated with the first channel in response to (a);
(c) comparing the current credit counter value with a credit limit value;
(d) outputting a minipacket flow control signal indicating that an additional minipacket is allowed to be transmitted via
the first channel when the current credit counter value is greater than the credit limit value; and

(e) outputting a minipacket flow control signal indicating that an additional minipacket is not allowed to be transmitted
via the first channel when the current credit counter value is not greater than the credit limit value, wherein the updating
of (b) further comprises:

(b1) counting the amount of data transmitted via the first channel;
(b2) comparing the amount of data with a data threshold value;
(b3) outputting a decrement credit counter signal when the amount of data is greater than the data threshold value;
(b4) decrementing the current credit counter value when the decrement credit counter signal is output; and
(b5) not decrementing the current credit counter value when the decrement credit counter signal is not output.
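The per-channel credit scheme of claim 11 can be modeled as below. This is a hedged software sketch: the class, its fields, and the decrement-per-threshold loop are assumptions layered on the claim's steps (b1)-(b5) and (c)-(e), not the minipacket hardware.

```python
# Illustrative model of minipacket credit-based flow control.

class ChannelCredits:
    def __init__(self, credits, credit_limit, data_threshold):
        self.credits = credits
        self.limit = credit_limit
        self.threshold = data_threshold
        self.bytes_sent = 0

    def route_minipacket(self, size):
        """Route a minipacket of `size` bytes; return True if another
        minipacket is allowed on this channel afterwards."""
        self.bytes_sent += size            # (b1): count transmitted data
        while self.bytes_sent > self.threshold:  # (b2)/(b3)
            self.bytes_sent -= self.threshold
            self.credits -= 1              # (b4): decrement credit counter
        # (c)-(e): compare against the credit limit value
        return self.credits > self.limit
```

Credits would be replenished elsewhere (e.g. by the receiver); that side is not shown in this claim.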

US Pat. No. 9,590,926

GLOBAL RANDOM EARLY DETECTION PACKET DROPPING BASED ON AVAILABLE MEMORY

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a packet descriptor and a queue number, wherein the queue number indicates a queue stored within a memory unit,
and wherein the packet descriptor is not part of a packet;

(b) determining an amount of free memory in the memory unit;
(c) determining if the amount of free memory is within a first range;
(d) when the amount of free memory is within the first range, applying a first drop probability to determine if a packet associated
with the packet descriptor is to be dropped, wherein the first drop probability is not a function of an instantaneous queue
depth;

(e) storing the packet descriptor in the queue if it is determined in (d) that the packet is not to be dropped; and
(f) not storing the packet descriptor in the queue if it is determined in (d) that the packet is to be dropped.
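A sketch of the global RED decision: the drop probability is selected by which free-memory range the memory unit currently falls in, independent of any per-queue instantaneous depth. The ranges and probabilities below are invented illustration values, and the function names are hypothetical.

```python
# Illustrative global RED admission check keyed on free memory.
import random

DROP_RANGES = [           # (min_free_bytes, drop_probability)
    (1 << 20, 0.0),       # plenty of free memory: never drop
    (1 << 16, 0.5),       # getting tight: drop about half
    (0,       1.0),       # nearly full: drop everything
]

def enqueue(queue, descriptor, free_memory, rng=random.random):
    """Store the packet descriptor unless the RED check drops the packet."""
    for min_free, p in DROP_RANGES:       # (c): find the matching range
        if free_memory >= min_free:
            if rng() < p:                 # (d): apply the range's probability
                return False              # (f): descriptor not stored
            queue.append(descriptor)      # (e): descriptor stored in queue
            return True
    return False
```

Injecting `rng` makes the probabilistic branch deterministic for testing.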

US Pat. No. 9,854,072

SCRIPT-CONTROLLED EGRESS PACKET MODIFIER

Netronome Systems, Inc., ...

1. An egress packet modifier comprising:
an input port through which the egress packet modifier receives a packet and a script code;
an output port;
a script parser that receives the script code and therefrom generates a first opcode and a second opcode; and
a pipeline comprising:
a first processing stage that receives the first opcode from the script parser, wherein the first processing stage receives
a first part of the packet and performs a first modification on the first part of the packet, wherein the first modification
is determined by the first opcode; and

a second processing stage that receives the second opcode from the script parser, wherein the second processing stage receives
the first part of the packet from the first processing stage and performs a second modification on the first part of the packet,
wherein the second modification is determined by the second opcode, wherein the first part of the packet as modified by the
pipeline is output from the egress packet modifier via the output port.

US Pat. No. 9,830,153

SKIP INSTRUCTION TO SKIP A NUMBER OF INSTRUCTIONS ON A PREDICATE

Netronome Systems, Inc., ...

1. A processor coupled to a memory system, the processor comprising:
a fetch stage that causes a sequence of instructions to be fetched from the memory system, wherein the sequence of instructions
includes a first instruction and a second instruction;

a decode stage that first decodes the first instruction and then decodes the second instruction, wherein the decode stage
can decode skip instructions;

a register file read stage coupled to the decode stage, wherein the register file read stage includes a plurality of flag
bits; and

an execute stage that can carry out an instruction operation of an instruction, wherein the second instruction defines an
instruction operation to be performed when the second instruction is executed by the processor, wherein if the first instruction
is a skip instruction and if a predicate condition is satisfied then the instruction operation of the second instruction is
not carried out by the execute stage even though the second instruction was decoded by the decode stage, wherein the predicate
condition is specified by a predicate field of the skip instruction, and wherein the predicate condition is a specified function
of values of at least one of the plurality of flag bits.
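A toy interpreter can illustrate the skip semantics: when a skip instruction's predicate over the flag bits is satisfied, the following instruction's operation is suppressed even though it is still fetched and decoded. This is an assumed abstraction, not the processor's ISA; the tuple encoding is invented.

```python
# Illustrative interpreter for skip-on-predicate semantics.

def run(program, flags):
    """Execute a program; skipped instructions are decoded but their
    operations are not carried out."""
    executed, skip = [], 0
    for instr in program:
        if skip:
            skip -= 1            # decoded, but operation not carried out
            continue
        if instr[0] == "skip":
            _, count, predicate = instr
            if predicate(flags): # predicate is a function of the flag bits
                skip = count     # skip the next `count` instructions
        else:
            executed.append(instr)
    return executed
```

The predicate argument mirrors the skip instruction's predicate field selecting a function of the flag bits.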

US Pat. No. 9,819,585

MAKING A FLOW ID FOR AN EXACT-MATCH FLOW TABLE USING A PROGRAMMABLE REDUCE TABLE CIRCUIT

Netronome Systems, Inc., ...

8. An integrated circuit comprising:
a Static Random Access Memory (SRAM);
a programmable lookup circuit coupled to receive a data output value output by the SRAM, wherein the programmable lookup circuit
outputs a result value;

a multiplexer circuit that receives the result value, wherein the multiplexer circuit outputs a Flow Identification value (Flow
Id), wherein the multiplexer circuit is controllable such that at least a part of the result value is a part of the Flow Id;
and

an exact-match flow table structure that receives the Flow Id from the multiplexer circuit, wherein the exact-match flow table
structure stores an exact-match flow table, wherein the exact-match flow table comprises a plurality of flow entries, wherein
each flow entry comprises a Flow Id, wherein the exact-match flow table structure determines whether the Flow Id received
from the multiplexer circuit is a bit-by-bit exact-match of any Flow Id of any flow entry stored in the exact-match flow table
structure.

US Pat. No. 9,807,006

CROSSBAR AND AN EGRESS PACKET MODIFIER IN AN EXACT-MATCH FLOW SWITCH

Netronome Systems, Inc., ...

1. A method comprising:
(a) maintaining an exact-match flow table on an integrated circuit, wherein the exact-match flow table comprises a plurality
of flow entries, wherein each flow entry comprises a Flow Identification value (Flow Id) and an egress action value;

(b) receiving an incoming packet onto the integrated circuit and generating therefrom a Flow Id, wherein the incoming packet
has a plurality of header fields, wherein the Flow Id is a concatenation of at least one bit from each of the header fields
of the incoming packet, and wherein the Flow Id does not include all the bits of at least one of the header fields of the
incoming packet;

(c) determining that the Flow Id generated in (b) is an exact-match for the Flow Id of one flow entry in the exact-match flow
table, wherein the determining of (c) occurs on a first part of the integrated circuit;

(d) storing header information in a first egress packet modifier memory;
(e) communicating the egress action value and a portion of the incoming packet across a crossbar switch;
(f) receiving the egress action value onto an egress packet modifier;
(g) using the egress action value on the egress packet modifier to retrieve the header information from the first egress packet
modifier memory and adding the header information to the portion of the incoming packet thereby generating an output packet;
and

(h) outputting the output packet from the integrated circuit, wherein the first egress packet modifier memory, the crossbar
switch, and the egress packet modifier are parts of the integrated circuit.

US Pat. No. 9,727,512

IDENTICAL PACKET MULTICAST PACKET READY COMMAND

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a packet ready command from a first memory system via a bus and onto a network interface circuit, wherein the
packet ready command includes a multicast value;

(b) determining a communication mode as a function of the multicast value, wherein the multicast value indicates a single
packet is to be communicated by the network interface circuit to a first number of destinations;

(c) outputting a free packet command from the network interface circuit onto the bus, wherein the free packet command includes
a Free On Last Transfer (FOLT) value, wherein the FOLT value indicates that the packet will not be freed from the first memory
system by the network interface circuit once the packet is transmitted, and wherein (a) through (c) are performed by the network
interface circuit.

US Pat. No. 9,753,725

PICOENGINE HAVING A HASH GENERATOR WITH REMAINDER INPUT S-BOX NONLINEARIZING

Netronome Systems, Inc., ...

1. A processor comprising:
a fetch stage that fetches a hash instruction;
a decode stage that decodes the hash instruction;
a register file read stage coupled to the decode stage, wherein the register file read stage includes a hash register; and
an execute stage that executes the hash instruction, wherein the execute stage comprises an ALU (arithmetic logic unit) and
a hash generating circuit, wherein the hash generating circuit comprises:

a modulo-2 multiplier that receives an incoming data value on a first set of input leads and receives a multiplier value
on a second set of input leads and that, as a result of execution of the hash instruction, multiplies the incoming data value
by the multiplier value and outputs a product value, wherein the product value comprises a first portion and a second portion;

a nonlinearizing function circuit that receives a hash register value stored in the hash register and that outputs a programmably
nonlinearized version of the hash register value (PNVHRV);

a first modulo-2 summer that receives the first portion of the product value and that receives the PNVHRV and that outputs
a first sum value;

a modulo-2 divider that receives the first sum value and that divides the sum value by a fixed divisor value and that outputs
a division remainder value; and

a second modulo-2 summer that receives the division remainder value from the modulo-2 divider and that receives the second
portion of the product value from the modulo-2 multiplier and that outputs a second sum value, wherein execution of the hash
instruction causes the second sum value to be loaded into the hash register.
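The datapath above is GF(2) ("modulo-2") arithmetic, so it can be mirrored in software: a carry-less multiply, an XOR (modulo-2 sum) of one product half with a nonlinearized hash state, a polynomial remainder, and a final XOR with the other half. The widths, the identity stand-in for the S-box, and the polynomial below are invented for illustration.

```python
# Illustrative GF(2) model of the claimed hash generating circuit.

def clmul(a, b):
    """Carry-less (modulo-2) multiplication of two integers."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def mod2_remainder(value, poly):
    """Remainder of value divided by poly over GF(2)."""
    plen = poly.bit_length()
    while value.bit_length() >= plen:
        value ^= poly << (value.bit_length() - plen)
    return value

def hash_step(data, multiplier, hash_reg, sbox, poly, half_bits=16):
    """One execution of the hash instruction; returns the new hash value."""
    product = clmul(data, multiplier)                 # modulo-2 multiplier
    high = product >> half_bits                       # first portion
    low = product & ((1 << half_bits) - 1)            # second portion
    pnvhrv = sbox(hash_reg)                           # nonlinearizing circuit
    rem = mod2_remainder(high ^ pnvhrv, poly)         # first summer + divider
    return rem ^ low                                  # second modulo-2 summer
```

XOR serves as the modulo-2 summer throughout, which is why no carries appear anywhere in the datapath.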

US Pat. No. 9,753,883

NETWORK INTERFACE DEVICE THAT MAPS HOST BUS WRITES OF CONFIGURATION INFORMATION FOR VIRTUAL NIDS INTO A SMALL TRANSACTIONAL MEMORY

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a first address from a host via a bus onto an island-based network flow processor in a network interface device
(NID), wherein the first address is a part of a write bus transaction, wherein the write bus transaction further includes
data to be written, wherein the first address includes a first portion, a second portion, and a third portion, wherein the
first, second and third portions are contiguous in the first address, wherein the first portion includes an LSB bit of the
first address;

(b) detecting that the first address is in a first predetermined address range, wherein the first predetermined address range
corresponds to a first portion of a memory located on an island of the island based network flow processor;

(c) in response to the detecting of (b) translating the first address into a second address, wherein the translation involves
deleting the second portion such that the first portion and the third portion are parts of the second address but the second
portion is not a part of the second address;

(d) using the second address to write the data into the first portion of the memory, wherein the data is configuration information
that configures one of multiple virtual NIDs provided by the NID;

(e) receiving a third address in a second predetermined address range onto the island based network flow processor, wherein
the second predetermined address range corresponds to a second portion of the memory located on the island; and

(f) using the third address to write data into the second portion of the memory without performing address translation on
the third address, wherein (a), (b), (c), (d), (e) and (f) are performed by the island-based network flow processor, and wherein
the host is not located on the NID.
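The translation of step (c) deletes a middle bit field so that the first (LSB) and third portions become contiguous in the second address. A small bit-manipulation sketch, with illustrative field positions not taken from the patent:

```python
# Illustrative address translation: delete a middle bit field.

def delete_bits(address, low_bits, deleted_bits):
    """Drop `deleted_bits` bits sitting just above the low `low_bits` bits."""
    low = address & ((1 << low_bits) - 1)        # first portion (includes LSB)
    high = address >> (low_bits + deleted_bits)  # third portion
    return (high << low_bits) | low              # second portion removed
```

Deleting the field (rather than masking it to zero) compacts the sparse per-virtual-NID address ranges into a small contiguous memory region.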

US Pat. No. 9,665,519

USING A CREDITS AVAILABLE VALUE IN DETERMINING WHETHER TO ISSUE A PPI ALLOCATION REQUEST TO A PACKET ENGINE

Netronome Systems, Inc., ...

14. An integrated circuit comprising:
a packet engine that uses a PPI Addressing Mode (PAM) and that handles storing data into and retrieving data from a memory
on behalf of other parts of the integrated circuit;

a bus; and
means for: (a) maintaining a first Credits Available value and a second Credits Available value, (b) using the first and second
Credits Available values to make a determination to send a PPI (Packet Portion Identifier) allocation request to the packet
engine via the bus, (c) sending the PPI allocation request to the packet engine via the bus, (d) reducing the first Credits
Available value and reducing the second Credits Available value, (e) receiving a communication back from the packet engine
via the bus, wherein the communication includes a first credits value and a second credits value, and (f) adding the first
credits value received in the communication to the first Credits Available value that was reduced thereby generating an updated
first Credits Available value, and adding the second credits value received in the communication to the second Credits Available
value that was reduced thereby generating an updated second Credits Available value, wherein the means is one of the other
parts of the integrated circuit, wherein the first Credits Available value is a number of PPIs, and wherein the second Credits
Available value indicates an amount of memory space.
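The two-credit gate of steps (a) through (f) can be modeled as follows. This is a hedged sketch: the class and method names are invented, and the packet engine's reply is reduced to a single callback carrying the two returned credit values.

```python
# Illustrative model of the dual Credits Available admission check.

class CreditGate:
    def __init__(self, ppi_credits, byte_credits):
        self.ppi_credits = ppi_credits    # first value: number of PPIs
        self.byte_credits = byte_credits  # second value: memory space

    def try_allocate(self, size):
        """(b)/(c): send a PPI allocation request only if both credits
        are available; (d): reduce both values on send."""
        if self.ppi_credits < 1 or self.byte_credits < size:
            return False                  # hold the request
        self.ppi_credits -= 1
        self.byte_credits -= size
        return True                       # request sent to the packet engine

    def on_reply(self, ppi_back, bytes_back):
        """(e)/(f): add the credits returned in the communication back."""
        self.ppi_credits += ppi_back
        self.byte_credits += bytes_back
```

Tracking the two resources separately prevents either PPI exhaustion or memory exhaustion from silently stalling the packet engine.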

US Pat. No. 9,594,702

MULTI-PROCESSOR WITH EFFICIENT SEARCH KEY PROCESSING

Netronome Systems, Inc., ...

1. An apparatus, comprising:
a shared memory that stores a search key data set, wherein the search key data set includes a plurality of search keys;
a processor that generates a descriptor;
a Direct Memory Access (DMA) controller that (i) receives the descriptor from the processor via a bus, (ii) in response to
receiving the descriptor generates a search key data request and sends the search key data request to the shared memory via
the bus, (iii) receives the search key data set from the shared memory via the bus, (iv) selects a first search key from the
search key data set, (v) generates Interlaken Look Aside (ILA) packet header information, and (vi) outputs the first search
key and the ILA packet header information; and

an Interlaken Look Aside (ILA) interface circuit that receives the first search key and the ILA packet header information
from the DMA controller and supplies an ILA packet to an external transactional memory device across an ILA bus, wherein the
ILA packet includes the ILA packet header information and the first search key.

US Pat. No. 9,594,706

ISLAND-BASED NETWORK FLOW PROCESSOR WITH EFFICIENT SEARCH KEY PROCESSING

Netronome Systems, Inc., ...

1. An Island-Based Network Flow Processor (IBNFP) integrated circuit, comprising:
a bus;
a first island, comprising:
a memory; and
a processor that writes a search key data set into the memory, wherein the search key data set comprises a plurality of search
keys;

a second island, comprising:
a Direct Memory Access (DMA) controller that (i) receives a descriptor from the processor in the first island via the bus,
(ii) generates a search key data request in response to receiving the descriptor and communicates the search key data request
to the memory in the first island via the bus, (iii) receives the search key data set from the memory in the first island
via the bus, (iv) selects a first search key from the search key data set, (v) generates header information, and (vi) outputs
the first search key and the header information; and
a third island, comprising:
an Interlaken Look Aside (ILA) interface circuit that receives the first search key and the header information from the DMA
controller and outputs an ILA packet; and

an interface circuit, wherein the interface circuit receives the ILA packet from the ILA interface circuit and outputs the
ILA packet from the IBNFP integrated circuit to an external transactional memory device.

US Pat. No. 9,559,988

PPI ALLOCATION REQUEST AND RESPONSE FOR ACCESSING A MEMORY SYSTEM

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving an allocation request for a Packet Portion Identifier (PPI) via a bus and onto a memory system, wherein the
memory system comprises a packet engine and a memory;

(b) determining the PPI, wherein the PPI is associated with an unused block of memory in the memory of the memory system;
(c) outputting the PPI from the memory system and onto the bus;
(d) receiving packet data onto the memory system via the bus, wherein the PPI is received by the memory system along with
packet data; and

(e) using the PPI to write the packet data into the block of memory, wherein (a) through (e) are performed by the packet engine.
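The PPI allocation round trip of steps (a) through (e) can be sketched in Python; the class name, the block size, and the PPI-equals-block-index scheme below are illustrative assumptions, not details from the patent:

```python
# A minimal sketch of the claimed PPI allocation flow, steps (a)-(e).
# PacketEngine, BLOCK_SIZE, and the PPI encoding are assumed for illustration.

BLOCK_SIZE = 256  # assumed block size in bytes

class PacketEngine:
    """Models the packet engine inside the memory system."""

    def __init__(self, num_blocks):
        self.memory = bytearray(num_blocks * BLOCK_SIZE)
        self.free_blocks = list(range(num_blocks))   # unused blocks
        self.ppi_to_block = {}                       # PPI -> block index

    def allocate_ppi(self):
        # (a)-(c): receive an allocation request, pick an unused block,
        # associate a PPI with it, and return the PPI onto the bus.
        block = self.free_blocks.pop(0)
        ppi = block  # simplest scheme: the PPI equals the block index
        self.ppi_to_block[ppi] = block
        return ppi

    def write_packet(self, ppi, data):
        # (d)-(e): packet data arrives with the PPI; the PPI (not an
        # address) selects the block of memory to write.
        block = self.ppi_to_block[ppi]
        base = block * BLOCK_SIZE
        self.memory[base:base + len(data)] = data

engine = PacketEngine(num_blocks=4)
ppi = engine.allocate_ppi()
engine.write_packet(ppi, b"\xde\xad\xbe\xef")
```

The point of the scheme is that the requester never handles a memory address: the packet engine owns the PPI-to-block association end to end.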

US Pat. No. 9,515,946

HIGH-SPEED DEQUEUING OF BUFFER IDS IN FRAME STORING SYSTEM

Netronome Systems, Inc., ...

1. A dual linked list system comprising:
a pipelined memory adapted to store queue elements, wherein the pipelined memory has a plurality of pipeline stages, and wherein
the pipelined memory has a read access latency time for reading a value stored in a queue element out of the pipelined memory;

a link manager comprising a first head pointer queue element, a second head pointer queue element, a first tail pointer queue
element, and a second tail pointer queue element, wherein the dual linked list system can maintain a dual linked list involving
a first linked list of queue elements and a second linked list of queue elements, wherein the first linked list of queue elements
includes the first head pointer queue element, the first tail pointer queue element, and one or more first queue elements
stored in the pipelined memory if the first linked list of queue elements includes three or more queue elements, and wherein
the second linked list of queue elements includes the second head pointer queue element, the second tail pointer queue element,
and one or more second queue elements stored in the pipelined memory if the second linked list of queue elements includes
three or more queue elements;

an enqueue engine that can cause a sequence of values to be enqueued into the dual linked list such that odd values of the
sequence are enqueued by pushing the odd values into the first linked list of queue elements, and such that even values of
the sequence are enqueued by pushing the even values into the second linked list of queue elements, wherein values are enqueued
into the first and second linked lists in alternating fashion under control of the link manager; and

a dequeue engine that can cause the sequence of values to be dequeued from the dual linked list such that the odd values of
the sequence are dequeued by popping the odd values from the first linked list of queue elements, and such that the even values
of the sequence are dequeued by popping the even values from the second linked list of queue elements, wherein values are
dequeued out of the first and second linked lists in alternating fashion under control of the link manager, wherein the dual
linked list can be popped to output the sequence of values at a sustained rate of more than one value per the read access
latency time, wherein during operation at the sustained rate the pipelined memory is performing multiple read operations of
multiple queue elements at a given time with the read operations following each other in sequence through the stages of the
pipelined memory.
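The alternating enqueue and dequeue behavior can be modeled in a few lines of Python; the class below is an illustrative software analogue (two deques standing in for the hardware linked lists), not the claimed circuit, but it shows why interleaving a sequence across two lists lets each list be read at half the output rate while the combined output preserves order:

```python
from collections import deque

# Illustrative model of the dual linked list: odd-position values go to the
# first list, even-position values to the second, and dequeuing alternates,
# so the original sequence is reproduced while each list is accessed at only
# half the sustained output rate (hiding the pipelined-memory read latency).

class DualLinkedList:
    def __init__(self):
        self.lists = (deque(), deque())  # first and second linked lists
        self.enq_turn = 0
        self.deq_turn = 0

    def enqueue(self, value):
        self.lists[self.enq_turn].append(value)
        self.enq_turn ^= 1  # alternate under "link manager" control

    def dequeue(self):
        value = self.lists[self.deq_turn].popleft()
        self.deq_turn ^= 1
        return value

dll = DualLinkedList()
for v in range(10):
    dll.enqueue(v)
out = [dll.dequeue() for _ in range(10)]
```

In the hardware version this halving is what allows the sustained pop rate to exceed one value per read-access latency: while one list's read is still in flight through the pipeline stages, the other list's value is already being output.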

US Pat. No. 9,899,996

RECURSIVE LOOKUP WITH A HARDWARE TRIE STRUCTURE THAT HAS NO SEQUENTIAL LOGIC ELEMENTS

Netronome Systems, Inc., ...

1. A circuit comprising:
a storage device that stores a plurality of multi-bit node control values (NCVs) and a plurality of multi-bit result values
(RVs); and

a means for receiving a multi-bit input value (IV) and outputting one of the plurality of RVs, wherein the means is a hardware
trie structure comprising:

a plurality of internal node circuits, wherein each of the internal node circuits receives a corresponding respective one
of the plurality of NCVs, wherein each of the internal node circuits further receives at least some of the bits of the IV.
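A purely combinational trie of this kind can be sketched as a stateless descent in Python; the node encoding below (each internal node's NCV naming which IV bit it tests) is an assumption for illustration, not the patent's encoding:

```python
# Hedged sketch of a combinational trie lookup: each "internal node" holds a
# node control value (NCV) naming which bit of the input value (IV) it tests,
# and leaves hold result values (RVs). No state is kept between lookups,
# mirroring a trie structure with no sequential logic elements.

# Internal nodes: (ncv_bit, left_subtree, right_subtree); leaves: ("RV", value)
trie = (2,                               # root tests IV bit 2
        (0, ("RV", "A"), ("RV", "B")),   # bit 2 == 0 -> test bit 0
        (1, ("RV", "C"), ("RV", "D")))   # bit 2 == 1 -> test bit 1

def lookup(node, iv):
    """Combinational descent: IV bits steer node by node down to an RV."""
    while node[0] != "RV":
        bit_index, left, right = node
        node = right if (iv >> bit_index) & 1 else left
    return node[1]
```

In hardware all node circuits evaluate simultaneously and the selected RV settles combinationally; the loop here only mimics that signal propagation.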

US Pat. No. 9,891,898

LOW-LEVEL PROGRAMMING LANGUAGE PLUGIN TO AUGMENT HIGH-LEVEL PROGRAMMING LANGUAGE SETUP OF AN SDN SWITCH

Netronome Systems, Inc., ...

1. A method comprising:
(a) compiling, by a processor, a first amount of high-level programming language code thereby obtaining a first section of
native code and compiling a second amount of a low-level programming language code thereby obtaining a second section of native
code, wherein the first amount of high-level programming language code at least in part defines how a Software Defined Network
(SDN) switch performs a matching in a first condition, and wherein the second amount of low-level programming language code
at least in part defines how the SDN switch performs matching in a second condition, wherein the matching specified by the
second amount of low-level programming language code cannot be specified using the high-level programming language;

(b) loading the first section of native code into the SDN switch such that a first processor of the SDN switch can execute
at least part of the first section of native code; and

(c) loading the second section of native code into the SDN switch such that a second processor of the SDN switch can execute
at least part of the second section of native code.

US Pat. No. 9,804,976

TRANSACTIONAL MEMORY THAT PERFORMS AN ATOMIC LOOK-UP, ADD AND LOCK OPERATION

Netronome Systems, Inc., ...

1. A transactional memory that receives a hash key, comprising:
a memory that stores a hash table and a data structure table, wherein the data structure table includes a plurality of data
structures, wherein the hash table includes a plurality of hash buckets, wherein each hash bucket includes a plurality of
hash bucket locations, wherein each hash bucket location includes a lock field and a hash key field, and wherein each hash
bucket location corresponds to an associated data structure in the data structure table; and

a hardware engine that causes the transactional memory to perform an Atomic Look-Up, Add, and Lock (ALAL) operation by receiving
a hash index that corresponds to a hash bucket and a hash key, and determining a hash bucket address based on the hash index,
and by performing a hash lookup operation and thereby determining if the hash key is present in the hash table, and returning
a result packet when the hash key is present in the table.
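The ALAL operation's look-up-or-add-then-lock behavior can be sketched as follows; the bucket geometry, result fields, and class names are illustrative assumptions (and a real transactional memory performs this atomically in a hardware engine, which a single-threaded Python model trivially mimics):

```python
# Illustrative model of the Atomic Look-Up, Add and Lock (ALAL) operation:
# a hash index selects a bucket; each bucket location carries a lock field
# and a hash key field, and corresponds to a data structure elsewhere.

LOCATIONS_PER_BUCKET = 4  # assumed bucket geometry

class TransactionalMemory:
    def __init__(self, num_buckets):
        # each location: [locked(bool), key(or None if empty)]
        self.buckets = [[[False, None] for _ in range(LOCATIONS_PER_BUCKET)]
                        for _ in range(num_buckets)]

    def alal(self, hash_index, hash_key):
        """Look up hash_key in the indexed bucket; if absent, add it.
        Either way set the lock, and return a result describing what happened."""
        bucket = self.buckets[hash_index]
        for loc in bucket:                 # lookup pass
            if loc[1] == hash_key:
                was_locked = loc[0]
                loc[0] = True
                return {"found": True, "was_locked": was_locked}
        for loc in bucket:                 # add pass: first empty location
            if loc[1] is None:
                loc[1] = hash_key
                loc[0] = True
                return {"found": False, "was_locked": False}
        return {"found": False, "error": "bucket full"}

tm = TransactionalMemory(num_buckets=8)
r1 = tm.alal(3, 0xCAFE)   # miss -> key added and locked
r2 = tm.alal(3, 0xCAFE)   # hit  -> already present and locked
```

Because look-up, add, and lock happen in one operation, a caller that receives `found=False` knows it now owns the lock on a freshly added entry without a second bus transaction.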

US Pat. No. 9,729,442

METHOD OF DETECTING LARGE FLOWS WITHIN A SWITCH FABRIC OF AN SDN SWITCH

Netronome Systems, Inc., ...

12. A method of switching performed by a Software-Defined Networking (SDN) switch, wherein the SDN switch comprises a fabric of Network Flow Switch
(NFX) circuits and a Network Flow Processor (NFP) circuit, wherein none of the NFX circuits comprises any SDN protocol stack,
wherein packets received onto the SDN switch from external sources are received via the fabric of NFX circuits, wherein packets
output from the SDN switch to external destinations are output via the fabric of NFX circuits, wherein each of the NFX circuits
maintains at least one flow table, wherein the NFP circuit maintains at least one flow table, the method comprising:
(a) receiving a packet of a flow onto the SDN switch via a first NFX circuit;
(b) determining in the first NFX circuit that the packet matches a first flow entry stored in a flow table in the first NFX
circuit and that a number of packets received of the flow is greater than a threshold value, wherein the first flow entry
applies to a relatively broad flow of packets;

(c) outputting the packet from the SDN switch in accordance with the first flow entry;
(d) generating a second flow entry in the NFP circuit, wherein the second flow entry applies to a relatively narrow subflow
of packets, wherein the packet received in (a) is one of the packets of the relatively narrow subflow of packets, wherein
all packets of the relatively narrow subflow of packets are also packets of the relatively broad flow of packets, and wherein
some packets of the relatively broad flow of packets are not packets of the relatively narrow subflow of packets;

(e) forwarding the second flow entry from the NFP circuit to the first NFX circuit and storing the second flow entry in at
least one flow table in the first NFX circuit;

(f) receiving a first subsequent packet of the relatively broad flow of packets onto the SDN switch via the first NFX circuit
and determining that the first subsequent packet matches the second flow entry and using the second flow entry to output the
first subsequent packet from the SDN switch without consulting any flow table in the NFP circuit; and

(g) receiving a second subsequent packet of the relatively broad flow of packets onto the SDN switch via the first NFX circuit
and determining that the second subsequent packet does not match the second flow entry and forwarding the second subsequent
packet to the NFP circuit before outputting the second subsequent packet from the SDN switch.
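The broad-flow/narrow-subflow refinement of steps (b) through (f) can be sketched in Python; the match functions, the specificity ordering, and the threshold value are illustrative assumptions standing in for the NFX flow-table hardware:

```python
# Sketch of the broad-flow / narrow-subflow idea: the NFX matches entries in
# order of specificity; once a broad entry's packet count crosses a threshold,
# a narrower entry for the heavy subflow is installed (by the NFP in the
# patent) so that subflow is handled directly in the NFX.

THRESHOLD = 3  # assumed packet-count threshold

class NfxFlowTable:
    def __init__(self):
        # entries: (match_fn, action, specificity); most specific wins
        self.entries = []

    def add_entry(self, match_fn, action, specificity):
        self.entries.append((match_fn, action, specificity))
        self.entries.sort(key=lambda e: -e[2])

    def classify(self, pkt):
        for match_fn, action, _ in self.entries:
            if match_fn(pkt):
                return action
        return "send-to-NFP"

table = NfxFlowTable()
counts = {"broad": 0}
# broad entry: everything destined into 10.0.0.0/8
table.add_entry(lambda p: p["dst"].startswith("10."), "broad", specificity=1)

actions = []
for i in range(5):
    pkt = {"dst": "10.1.2.3", "sport": 4000 + i}
    action = table.classify(pkt)
    counts[action] = counts.get(action, 0) + 1
    if action == "broad" and counts["broad"] > THRESHOLD:
        # narrow entry generated for this exact subflow
        table.add_entry(lambda p: p["dst"] == "10.1.2.3", "narrow",
                        specificity=2)
    actions.append(action)
```

After the threshold is crossed, subsequent packets of the heavy subflow hit the narrow entry and never consult the broad one, which is the claimed large-flow detection effect.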

US Pat. No. 9,729,353

COMMAND-DRIVEN NFA HARDWARE ENGINE THAT ENCODES MULTIPLE AUTOMATONS

Netronome Systems, Inc., ...

1. A Non-deterministic Finite Automaton (NFA) engine, comprising:
a pipeline comprising a plurality of stages, wherein one of the stages comprises a transition table, wherein the transition
table is programmable to track up to a maximum of X automaton states such that the transition table could be programmed to
simultaneously transition from each of the X automaton states to any of the other X-1 automaton states, wherein a first automaton and a second automaton are both encoded in the transition table, and wherein
at least one of the first and second automatons is an NFA that is not a Deterministic Finite Automaton (DFA); and

a controller that receives NFA engine commands onto the NFA engine, wherein the controller controls the pipeline in response
to the NFA engine commands.
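Encoding two automatons in one transition table can be sketched as below; the state numbering, alphabet, and table encoding are assumptions for illustration. The key NFA property is that one (state, symbol) pair may map to several next states, which a DFA never does:

```python
# Hedged sketch of a transition table tracking up to X automaton states, with
# two automatons encoded in disjoint slices of the state space. Automaton 1
# is a true NFA (state 0 on 'a' goes to BOTH 0 and 1); automaton 2 is DFA-like.

X = 8  # assumed maximum number of states the table can track

table = {
    (0, "a"): {0, 1}, (1, "b"): {2},   # automaton 1: matches a+b
    (4, "x"): {5},    (5, "x"): {5},   # automaton 2: matches xx*
}

def step(states, symbol):
    """One pipeline step: transition every active state simultaneously."""
    nxt = set()
    for s in states:
        nxt |= table.get((s, symbol), set())
    return nxt

def run(start, accept, text):
    states = {start}
    for ch in text:
        states = step(states, ch)
    return bool(states & accept)
```

Because the two automatons never share state numbers, they can be resident in the table at once and selected simply by the start state an NFA engine command supplies.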

US Pat. No. 9,753,710

RESOURCE ALLOCATION WITH HIERARCHICAL SCOPE

Netronome Systems, Inc., ...

12. A method comprising:
(a) receiving into a linker location information indicative of a location where an amount of code is to be loaded;
(b) receiving into the linker an instruction, wherein the instruction indicates a scope level of a symbol;
(c) uniquifying symbols by traversing the code and if an instance of an unresolved symbol is encountered and is of sub-circuit
scope then the symbol is uniquified to create a new symbol, wherein the uniquifying results in all instances of the same symbol
within code for the same sub-circuit being replaced with instances of the same new symbol and such that this new symbol is
unique with respect to symbols appearing in code for other sub-circuits, whereas if the symbol is of circuit scope then the
symbol of the symbol instance is not uniquified, wherein the uniquifying of a symbol in (c) involves modifying the symbol
to create a new symbol; and

(d) after the uniquifying of (c), for each instance of a symbol, allocating one or more resource instances to be associated
with the symbol, and then replacing the instance of the symbol with an address of the allocated one or more resource instances,
wherein (a) through (d) are performed by the linker.
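The uniquifying of step (c) can be sketched in Python; the renaming convention (suffixing the sub-circuit name) and the scope-table representation are illustrative assumptions:

```python
# Minimal sketch of step (c): symbols of sub-circuit scope are renamed per
# sub-circuit so that same-named instances in different sub-circuits no
# longer collide, while circuit-scope symbols remain shared across all code.

def uniquify(code_per_subcircuit, scopes):
    """code_per_subcircuit: {subcircuit_name: [symbol, ...]}
    scopes: {symbol: 'sub-circuit' | 'circuit'}"""
    out = {}
    for sub, symbols in code_per_subcircuit.items():
        renamed = []
        for sym in symbols:
            if scopes.get(sym) == "sub-circuit":
                renamed.append(f"{sym}__{sub}")  # modified into a new symbol
            else:
                renamed.append(sym)              # circuit scope: untouched
        out[sub] = renamed
    return out

code = {"subA": ["counter", "shared_cfg"], "subB": ["counter", "shared_cfg"]}
scopes = {"counter": "sub-circuit", "shared_cfg": "circuit"}
result = uniquify(code, scopes)
```

After this pass, step (d) can allocate one resource instance per distinct symbol: `counter` yields two allocations (one per sub-circuit) while `shared_cfg` yields one.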

US Pat. No. 9,699,107

PACKET ENGINE THAT USES PPI ADDRESSING

Netronome Systems, Inc., ...

1. An integrated circuit, comprising:
a first Packet Data Receiving and Splitting Device (PDRSD) that receives first packet data and outputs a first portion of
the first packet data along with a first Packet Portion Identifier (PPI);

a second PDRSD that receives second packet data and outputs a first portion of the second packet data along with a second
Packet Portion Identifier (PPI);

a memory system comprising:
a memory; and
a packet engine, wherein the packet engine receives the first PPI and the first portion of the first packet data and translates
the first PPI into a first address and that uses the first address to store the first portion of the first packet data into
a first block of the memory in the memory system, and that receives the second PPI and the first portion of the second packet data
and translates the second PPI into a second address and that uses the second address to store the first portion of the second
packet data into a second block of the memory, and wherein the packet engine allocates all PPIs to the first and second PDRSD;
and

a processing circuit that receives the first PPI and the first portion of the first packet data from the memory system and
that processes the first portion of the first packet data and returns the first PPI to the memory system, and that receives
the second PPI and the first portion of the second packet data from the memory system and that processes the first portion
of the second packet data and returns the second PPI to the memory system.

US Pat. No. 9,619,418

LOCAL EVENT RING IN AN ISLAND-BASED NETWORK FLOW PROCESSOR

Netronome Systems, Inc., ...

1. An integrated circuit, comprising:
a plurality of rectangular islands disposed in rows, wherein there are at least three rows of the rectangular islands, and
wherein each rectangular island comprises:

a Run Time Configurable Switch (RTCS); and
a plurality of radiating half links, wherein the rectangular islands are coupled together such that the half links and the
RTCSs together form a configurable mesh event bus, wherein the configurable mesh event bus is configured to form a local event
ring, wherein the local event ring provides a communication path along which an event packet is communicated to each rectangular
island along the local event ring, wherein the local event ring comprises a plurality of event ring circuits and a plurality
of event ring segments, wherein the event ring circuits are clocked by a clock signal, wherein a transitioning of the clock
signal causes an event packet being output by a first event ring circuit onto a first event ring segment to be latched into a second
event ring circuit such that the second event ring circuit begins outputting the event packet onto a second event ring segment
and such that the first event ring circuit stops outputting the event packet onto the first event ring segment.

US Pat. No. 10,009,270

MODULAR AND PARTITIONED SDN SWITCH

Netronome Systems, Inc., ...

1. A Software-Defined Networking (SDN) switch comprising:
a first plurality of external network ports for receiving external network traffic onto the SDN switch;
a second plurality of external network ports for transmitting external network traffic out of the SDN switch;
a first Network Flow Switch (NFX) integrated circuit that has a plurality of network ports and that maintains a first flow table, wherein a first network port of the first NFX integrated circuit is coupled to a first of the first plurality of external network ports, and wherein a second of the network ports of the first NFX integrated circuit is coupled to a first of the second plurality of external network ports;
a second Network Flow Switch (NFX) integrated circuit that has a plurality of network ports and that maintains a second flow table, wherein a first network port of the second NFX integrated circuit is coupled to a second of the first plurality of external network ports, wherein a second of the network ports of the second NFX integrated circuit is coupled to a second of the second plurality of external network ports, and wherein a third of the network ports of the second NFX integrated circuit is coupled to a third of the network ports of the first NFX integrated circuit;
a Network Flow Processor (NFP) circuit that maintains a third flow table, wherein the NFP circuit couples directly to a fourth of the network ports of the first NFX integrated circuit but does not couple directly to any network port of the second NFX integrated circuit, wherein the NFP circuit maintains a copy of the first flow table and also maintains a copy of the second flow table; and
a controller processor circuit that maintains a fourth flow table, wherein the controller processor circuit is coupled by a serial bus to the NFP circuit but is not directly coupled by any network port to either the NFP circuit nor the first NFX integrated circuit nor the second NFX integrated circuit, wherein the controller processor circuit is not coupled directly to any of the first and second pluralities of external network ports, wherein an operating system and a SDN protocol stack is executing on the controller processor circuit, wherein the fourth flow table includes a first flow entry that was loaded into the fourth flow table as a result of the controller processor circuit receiving an SDN protocol message, wherein the SDN protocol message was received onto the SDN switch via one of the NFX integrated circuits and passed through the NFP circuit and to the controller processor circuit, wherein no SDN protocol stack executes on the NFP circuit but the third flow table contains a copy of the first flow entry, and wherein the controller processor circuit caused the copy of the first flow entry to be communicated across the serial bus from the controller processor circuit to the NFP circuit and to be loaded into the third flow table.

US Pat. No. 9,912,591

FLOW SWITCH IC THAT USES FLOW IDS AND AN EXACT-MATCH FLOW TABLE

Netronome Systems, Inc., ...

1. A method comprising:
(a) storing a plurality of flow entries as part of an exact-match flow table in a Static Random Access Memory (SRAM), wherein
each flow entry comprises a Flow Identification value (Flow ID) and an action value, and wherein the SRAM is a part of a first
island of an Island-Based Network Flow Processor (IBNFP);

(b) receiving an incoming packet onto the first island of the IBNFP;
(c) generating a Flow ID from the packet;
(d) determining whether the Flow ID generated in (c) is a bit-by-bit exact-match of any Flow ID of any flow entry stored in
the SRAM, wherein the determining of (d) is performed by the first island of the IBNFP;

(e) if the determination of (d) is that the Flow ID generated in (c) is a bit-by-bit exact-match of a Flow ID of a flow entry
then performing an action, wherein the action is indicated by the action value of the flow entry; and

(f) if the determination of (d) is that there is no flow entry the Flow ID of which is a bit-by-bit exact-match of the Flow
ID generated in (c) then outputting a miss indication communication from the first island of the IBNFP and then in response
receiving a flow entry from a second island of the IBNFP onto the first island of the IBNFP and loading the received flow entry
into the SRAM that is part of the first island, wherein the Flow ID of the flow entry received in (f) is the same as the Flow
ID generated in (c), wherein (a) through (f) are performed by the IBNFP, and wherein the first island of the IBNFP does not
include any processor that fetches instructions.
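The hit/miss flow of steps (a) through (f) can be sketched as follows; deriving the Flow ID from a tuple of header fields, and the placeholder action value, are assumptions for illustration (the claim does not specify how the Flow ID is generated):

```python
# Illustrative flow of steps (a)-(f): exact-match lookup of a Flow ID in the
# first island's SRAM; on a miss, a second island supplies the flow entry,
# which is loaded into the SRAM so the next packet of the flow hits.

class FirstIsland:
    def __init__(self, second_island):
        self.sram = {}                 # Flow ID -> action value
        self.second_island = second_island

    def receive(self, pkt):
        flow_id = (pkt["src"], pkt["dst"], pkt["proto"])   # (c)
        if flow_id in self.sram:                           # (d)/(e): exact match
            return ("hit", self.sram[flow_id])
        # (f): miss indication -> entry arrives from the second island
        action = self.second_island.resolve(flow_id)
        self.sram[flow_id] = action
        return ("miss", action)

class SecondIsland:
    def resolve(self, flow_id):
        return "forward-port-2"        # placeholder action value

island = FirstIsland(SecondIsland())
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 6}
first = island.receive(pkt)
second = island.receive(pkt)
```

A Python dict lookup stands in for the bit-by-bit exact match: unlike a wildcard or longest-prefix match, the whole Flow ID must be identical, which is what lets the first island dispense with an instruction-fetching processor.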

US Pat. No. 9,887,918

INTELLIGENT PACKET DATA REGISTER FILE THAT STALLS PICOENGINE AND RETRIEVES DATA FROM A LARGER BUFFER

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a packet buffer memory that stores an amount of packet data, wherein the packet data stored in the packet buffer memory comprises
a plurality of bytes of packet data;

a packet portion request bus;
a packet portion data bus; and
a plurality of processors, wherein each processor comprises a clock control state machine, a stall conductor, a decode stage,
an execute stage, and a register file read stage, wherein the register file read stage comprises an intelligent packet data
register file, wherein the intelligent packet data register file is adapted to send a packet portion request via the packet
portion request bus to the packet buffer memory, wherein the intelligent packet data register file is adapted to receive in
response a packet portion from the packet buffer memory via the packet portion data bus, wherein the intelligent packet data
register file is adapted to assert and de-assert a stall signal, wherein the stall signal is communicated via the stall
conductor to the clock control state machine, wherein the de-asserting of the stall signal causes the processor to resume
clocking, wherein the intelligent packet data register file cannot store all of the amount of packet data that is stored in
the packet buffer memory, and wherein there is only one packet buffer memory that is shared by the plurality of processors.

US Pat. No. 9,703,739

RETURN AVAILABLE PPI CREDITS COMMAND

Netronome Systems, Inc., ...

11. An integrated circuit comprising:
a credit-aware device;
a bus; and
means for: 1) allocating and de-allocating PPIs (Packet Portion Identifiers), 2) maintaining a CTBR (Credits To Be Returned)
value; and 3) receiving a return available credits command from the credit-aware device via the bus and in response to the
return available credits command causing the CTBR value to be communicated across the bus from the means to the credit-aware
device, wherein the return available credits command does not cause the means to allocate any PPI and does not cause the means
to de-allocate any PPI, and wherein the means includes no processor that fetches and executes processor-executable instructions,
wherein the credit-aware device is an ingress device that sends a PPI allocation request to the means and that uses a PPI
allocated by the means to cause packet data to be stored in the means in association with the PPI.
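The credit exchange can be sketched as a read-and-clear of the CTBR value; the class and method names below are illustrative, and the key claimed property is that the command reports credits without allocating or de-allocating any PPI:

```python
# Sketch of the "return available credits" exchange: the means accumulates a
# Credits To Be Returned (CTBR) value as packet storage frees up, and the
# command atomically reports and clears it. No PPI state is touched.

class PpiMeans:
    def __init__(self):
        self.ctbr = 0
        self.allocated = set()

    def allocate(self, ppi):
        self.allocated.add(ppi)

    def free(self, ppi):
        self.allocated.discard(ppi)
        self.ctbr += 1                  # credit now owed to the ingress device

    def return_available_credits(self):
        # report and reset the CTBR; allocation state is left untouched
        credits, self.ctbr = self.ctbr, 0
        return credits

m = PpiMeans()
for p in (1, 2, 3):
    m.allocate(p)
m.free(1); m.free(2)
credits = m.return_available_credits()
```

Batching credits this way lets the credit-aware ingress device top up its send budget with one bus transaction instead of one per freed PPI.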

US Pat. No. 10,146,468

ADDRESSLESS MERGE COMMAND WITH DATA ITEM IDENTIFIER

Netronome Systems, Inc., ...

1. A method comprising:
(a) maintaining Packet Portion Identifier (PPI) to memory address translation information in a device;
(b) receiving a merge command onto the device from a bus, wherein the merge command includes a PPI and a reference value, wherein the merge command includes no address;
(c) using the PPI to read a first portion of a packet from a first memory and to cause the first portion of the packet to be written into a second memory where a second portion of the packet is stored so that both the first portion and the second portion are stored in substantially contiguous memory locations in the second memory; and
(d) outputting the reference value from the device onto the bus, wherein (a) through (d) are performed by the device.

US Pat. No. 9,935,879

EFFICIENT INTERCEPT OF CONNECTION-BASED TRANSPORT LAYER CONNECTIONS

Netronome Systems, Inc., ...

1. A method comprising:
(a) forwarding packets through a transparent proxy such that a single TCP (Transmission Control Protocol) connection is established between a client and server, wherein a single TCP control loop manages flow control between the client and the server across the single TCP connection in (a);
(b) after establishment of the single TCP connection in (a) communicating an amount of information across the single TCP connection through the transparent proxy, wherein the communication of the amount of information in (b) involves communication of the amount of information between the client and the transparent proxy, wherein a first TCP control loop manages flow control for the communication of the amount of information between the client and the transparent proxy, wherein the communication of the amount of information in (b) also involves communication of the amount of information between the transparent proxy and the server, and wherein a second TCP control loop manages flow control for the communication of the amount of information between the transparent proxy and the server;
(c) decrypting at least some of the amount of information communicated in (b);
(d) based at least in part on a result of the decrypting determining to connect the first TCP control loop and the second TCP control loop to form the single TCP control loop;
(e) connecting the first TCP control loop and the second TCP control loop to form the single TCP control loop and using the single TCP control loop to manage flow control across the single TCP connection; and
(f) after the connecting of (e) forwarding packets through the transparent proxy across the single TCP connection, wherein flow control across the single TCP connection in (f) is managed by the single TCP control loop.

US Pat. No. 9,727,673

SIMULTANEOUS SIMULATION OF MULTIPLE BLOCKS USING EFFICIENT PACKET COMMUNICATION TO EMULATE INTER-BLOCK BUSES

Netronome Systems, Inc., ...

1. A method of simulating a first circuit portion of a circuit and a second circuit portion of the circuit, wherein the first
circuit portion and the second circuit portion are coupled together by a bus, the method comprising:
(a) using a first digital logic circuit simulator to simulate operation of the first circuit portion during a first time period,
wherein during the first time period a valid signal is generated by the first digital logic circuit simulator that indicates
that there is no valid data being supplied by the first digital logic circuit to a bus interface circuit for communication
across the bus, wherein the simulation of (a) does not cause any packet to be generated that includes bus data of the bus,
and wherein the first digital logic circuit simulator is executing on a computer;

(b) using the first digital logic circuit simulator to simulate operation of the first circuit portion during a second time
period, wherein during the second time period a second valid signal is generated by the first digital logic circuit simulator
that indicates that there is valid data being supplied by the first digital logic circuit to the bus interface circuit for
communication across the bus, and wherein the simulation of (b) causes a packet to be generated that does include bus data
of the bus;

(c) communicating the packet of (b) to a second digital logic circuit simulator; and
(d) using the second digital logic circuit simulator to simulate operation of the second circuit portion, wherein the bus
data of the packet generated in (b) is used as input data to the second digital logic circuit simulator in (d).

US Pat. No. 9,755,910

MAINTAINING BYPASS PACKET COUNT VALUES

Netronome Systems, Inc., ...

1. A method involving a networking device, wherein the networking device includes a Network Interface Device (NID) and a host,
the method comprising:
(a) receiving a plurality of packets onto the networking device via the NID, wherein some of the packets pass along paths
from the NID to the host, and wherein others of the packets do not pass to the host;

(b) maintaining a bypass packet count for each path that passes from the NID to the host;
(c) receiving a packet onto the NID, wherein one or more match tables on the NID indicates that the packet is to be sent to
the host;

(d) sending the packet received in (c) along a bypass path without going to the host rather than sending the packet to the
host as indicated by the one or more match tables;

(e) determining the path the packet of (c) would have traversed to the host had the packet not been sent along the bypass
path in (d); and

(f) updating the bypass packet count associated with the path determined in (e), wherein (a) through (f) are performed by
the networking device.
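Steps (b) through (f) can be sketched in Python; the path names, the match-table keying, and the `bypass` flag standing in for the bypass decision are all illustrative assumptions:

```python
# Sketch of steps (b)-(f): the NID keeps a per-path bypass counter; when a
# packet that a match table says should go to the host is instead sent along
# a bypass path, the counter for the host path it WOULD have taken is updated.

class Nid:
    def __init__(self, paths):
        self.bypass_counts = {p: 0 for p in paths}   # (b): one count per path
        self.match_table = {}                        # packet key -> host path

    def handle(self, pkt, bypass):
        path = self.match_table[pkt["key"]]          # (c): table says "to host"
        if bypass:
            # (d)-(f): bypass instead, then charge the determined path
            self.bypass_counts[path] += 1
            return "bypassed"
        return f"to-host-via-{path}"

nid = Nid(paths=["path0", "path1"])
nid.match_table["flowA"] = "path1"
r1 = nid.handle({"key": "flowA"}, bypass=True)
r2 = nid.handle({"key": "flowA"}, bypass=False)
```

Counting against the hypothetical path keeps the host's per-path statistics consistent even for traffic the host never sees.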

US Pat. No. 9,755,948

CONTROLLING AN OPTICAL BYPASS SWITCH IN A DATA CENTER BASED ON A NEURAL NETWORK OUTPUT RESULT

Netronome Systems, Inc., ...

1. A method involving an optical switch, wherein the optical switch can be controlled to receive packet traffic from a first
network device via a first optical link and to output that packet traffic to a second network device via a second optical
link, the method comprising:
(a) using a neural network to analyze packet traffic and determine a number of elephant flows passing through the first network
device; and

(b) controlling the optical switch based at least in part on a result of the analyzing of (a) to switch such that immediately
prior to the switching of (b) no packet traffic passes from the first network device via the first optical link and through the
optical switch and via the second optical link to the second network device but such that after the switching of (b) packet
traffic does pass from the first network device via the first optical link and through the optical switch and via the second
optical link and to the second network device, wherein the optical switch performs circuit switching, wherein an electrical
switch performs packet switching, and wherein the neural network is implemented on an Island-Based Network Flow Processor
(IBNFP) included in the first network device.

US Pat. No. 9,621,481

CONFIGURABLE MESH CONTROL BUS IN AN ISLAND-BASED NETWORK FLOW PROCESSOR

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a plurality of rectangular islands, wherein each rectangular island includes a functional circuit and a switch that do not
intersect the functional circuit or the switch of any other island, wherein each island further comprises four half links,
wherein the islands are
coupled together such that the half links and the switches form a configurable mesh control bus, wherein the configurable
mesh control bus is configured to have a tree structure such that configuration information passes from the switch of a root
island to the switch of each of the other islands, wherein the functional circuits are clocked by a clock signal, wherein
a transitioning of the clock signal causes configuration information to be latched into the functional circuit of each of
the plurality of islands, and wherein functional circuitry in each of the plurality of islands is configured by configuration
information received via the configurable mesh control bus from the root island.

US Pat. No. 9,588,928

UNIQUE PACKET MULTICAST PACKET READY COMMAND

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a packet ready command from a memory system via a bus and onto a network interface circuit, wherein the packet
ready command includes a multicast value;

(b) determining a communication mode as a function of the multicast value, wherein the multicast value indicates a plurality
of packets are to be communicated to a plurality of destinations by the network interface circuit, and wherein each of the
plurality of packets is unique;

(c) outputting a free packet command from the network interface circuit onto the bus, wherein the free packet command includes
a Free On Last Transfer (FOLT) value, wherein the FOLT value indicates that the packets are to be freed from the memory system
by the network interface circuit after the packets are communicated to the network interface circuit, and wherein (a) through
(c) are performed by the network interface circuit.

US Pat. No. 10,069,767

METHOD OF DYNAMICALLY ALLOCATING BUFFERS FOR PACKET DATA RECEIVED ONTO A NETWORKING DEVICE

Netronome Systems, Inc., ...

1. A method of dynamically allocating buffers as packets are received and processed through a network device, comprising:
(a) receiving a packet onto an ingress circuit of the network device, wherein the ingress circuit includes a first memory that stores a free buffer list and an allocated buffer list, and wherein the ingress circuit is coupled to a bus;
(b) storing packet data of the packet into a buffer, wherein the buffer is in a second memory and has a buffer identification (ID), and moving the buffer ID from the free buffer list to the allocated buffer list;
(c) using the buffer ID to read the packet data via the bus from the buffer in the second memory and into an egress circuit, and storing the buffer ID in a de-allocate buffer list stored in a third memory located in the egress circuit, wherein the egress circuit is coupled to the bus, and wherein the ingress circuit and the egress circuit are located on an integrated circuit; and
(d) receiving a send buffer IDs command via the bus from a processor onto the egress circuit, wherein the send buffer IDs command instructs the egress circuit to send the buffer ID from the egress circuit to the ingress circuit via the bus such that the buffer ID is pushed onto the free buffer list stored in the first memory located in the ingress circuit, wherein the ingress circuit is a hardware circuit, wherein the egress circuit is a hardware circuit, wherein the second memory is not part of the ingress circuit or the egress circuit, and wherein the first, second and third memories are separate hardware memory units located in different locations on the network device.
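The buffer ID round trip of steps (a) through (d) can be sketched as three lists in Python; the class names and buffer count are illustrative, and the batched send-back stands in for the claimed "send buffer IDs" command:

```python
# Sketch of the buffer ID lifecycle in steps (a)-(d): free list and allocated
# list live at the ingress circuit, the de-allocate list at the egress
# circuit, and a batched "send buffer IDs" command returns IDs to the free
# list once the packet data has been read out for transmission.

class Ingress:
    def __init__(self, num_buffers):
        self.free_list = list(range(num_buffers))
        self.allocated_list = []

    def receive_packet(self):
        buf_id = self.free_list.pop(0)          # (a)-(b): allocate a buffer
        self.allocated_list.append(buf_id)
        return buf_id

    def push_free(self, buf_ids):               # (d): IDs return via the bus
        self.allocated_list = [b for b in self.allocated_list
                               if b not in buf_ids]
        self.free_list.extend(buf_ids)

class Egress:
    def __init__(self):
        self.de_allocate_list = []

    def transmit(self, buf_id):                 # (c): packet data read out
        self.de_allocate_list.append(buf_id)

    def send_buffer_ids(self, ingress):         # (d): processor-triggered
        ids, self.de_allocate_list = self.de_allocate_list, []
        ingress.push_free(ids)

ingress, egress = Ingress(num_buffers=3), Egress()
b0 = ingress.receive_packet()
b1 = ingress.receive_packet()
egress.transmit(b0); egress.transmit(b1)
egress.send_buffer_ids(ingress)
```

Deferring the return until an explicit command lets many buffer IDs cross the bus in one batch rather than one transaction per packet.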

US Pat. No. 9,990,307

SPLIT PACKET TRANSMISSION DMA ENGINE

Netronome Systems, Inc., ...

1. A system comprising:
a bus;
a first device that is operatively coupled to the bus, wherein the first device includes a memory that stores a first part of packet information, wherein the first part of packet information includes a first descriptor, and wherein the first device can translate a PPI (Packet Portion Identifier) into address information;
a second device that is operatively coupled to the bus, wherein the second device stores a second part of the packet information; and
a third device that is operatively coupled to the bus, wherein the third device comprises:
an egress FIFO (First In First Out) memory; and
means that comprises a buffer memory, wherein the means is for: 1) receiving a second descriptor, wherein the second descriptor includes a PPI and length information, wherein the PPI is not an address and also includes no address; 2) sending the PPI to the first device across the bus such that the first device translates the PPI into first address information and then uses the first address information to read part of the first descriptor from the memory of the first device, 3) receiving the part of the first descriptor from the first device via the bus, 4) extracting second address information from the first descriptor, 5) causing a part of the first part of the packet information to be transferred from the first device to the means across the bus and to be stored into the buffer memory, 6) using the second address information and the length information to cause the second part of the packet information to be transferred from the second device to the means across the bus and to be stored into the buffer memory, wherein the second part of the packet information is transferred in a plurality of bus transfers, and wherein after the transferring of the first part and after the transferring of the second part the first and second parts are stored together in the buffer memory, 7) once the part of the first part and the second part of the packet information have been stored in the buffer memory then transferring the packet information from the buffer memory to the egress FIFO memory, wherein the means comprises no processor that fetches and executes instructions.

US Pat. No. 9,971,720

DISTRIBUTED CREDIT FIFO LINK OF A CONFIGURABLE MESH DATA BUS

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a plurality of functional circuits, wherein each of the functional circuits is disposed in a corresponding one of a plurality of islands, wherein the islands are oriented in a plurality of rows; and
a configurable mesh bus that is coupled to each of the functional circuits, wherein the configurable mesh bus comprises crossbar switches and links between the crossbar switches, wherein each link comprises a first link portion for communicating data in a first direction across the link and further comprises a second link portion for communicating data in a second direction opposite the first direction across the link, and wherein each link portion comprises a distributed credit First-In-First-Out (FIFO) structure that receives push signals from a first crossbar switch at a first end of the distributed credit FIFO structure and receives taken signals from a second crossbar switch at a second end of the distributed credit FIFO structure.
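The push/taken protocol of the distributed credit FIFO can be sketched as credit-based flow control: the sending crossbar may push only while it holds credits, and each taken signal from the receiving crossbar returns one credit. This is a toy software model, not the patented circuit.

```python
class CreditFifoLink:
    """Toy model of one link portion: one credit per FIFO slot."""
    def __init__(self, depth):
        self.credits = depth   # sender-side credit count
        self.fifo = []

    def push(self, data):
        # Push signal from the first crossbar switch: allowed only
        # while credits remain, so the FIFO can never overflow.
        if self.credits == 0:
            return False       # sender must stall
        self.credits -= 1
        self.fifo.append(data)
        return True

    def taken(self):
        # Taken signal from the second crossbar switch: an entry was
        # consumed, so one credit returns to the sender.
        self.fifo.pop(0)
        self.credits += 1
```

Because the sender tracks credits locally, no round-trip "full" query is needed before each push, which is the usual motivation for distributing the FIFO state across a long link.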

US Pat. No. 9,929,933

LOADING A FLOW TABLE WITH NEURAL NETWORK DETERMINED INFORMATION

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a plurality of packets of a flow onto an integrated circuit;
(b) using a neural network on the integrated circuit to analyze the plurality of packets thereby generating a neural network output value; and
(c) loading a flow entry for the flow into a flow table, wherein the flow entry includes the neural network output value, and wherein (a), (b) and (c) are performed by the integrated circuit.
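Steps (a) through (c) can be sketched as: analyze the packets of a flow with a network, then store the network's output value in the flow entry. The stand-in scoring function below replaces the on-chip neural network purely for illustration; all names are invented.

```python
def toy_net(pkts):
    """Stand-in for the on-chip neural network: classifies a flow by
    whether its mean packet size exceeds 100 bytes (pure illustration)."""
    return int(sum(len(p) for p in pkts) / len(pkts) > 100)

def load_flow_entry(flow_table, flow_id, packets, neural_net=toy_net):
    # (b) analyze the plurality of packets to produce an output value.
    output_value = neural_net(packets)
    # (c) load a flow entry for the flow that carries the output value.
    flow_table[flow_id] = {"nn_output": output_value}
    return flow_table
```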

US Pat. No. 9,912,533

SERDES CHANNEL OPTIMIZATION

Netronome Systems, Inc., ...

1. A method, comprising:
(a) selecting a test bit stream;
(b) selecting a first transmitter configuration;
(c) selecting a first receiver configuration;
(d) configuring a first physical layer circuit with the first transmitter configuration;
(e) configuring a second physical layer circuit with the first receiver configuration;
(f) storing the test bit stream, the first transmitter configuration, and the first receiver configuration in a memory device;
(g) transmitting the test bit stream from the first physical layer circuit via a channel to the second physical layer circuit;
(h) measuring a characteristic of the test bit stream after passing through the channel, wherein the measuring of (h) is performed
by the second physical layer circuit; and

(i) generating a multidimensional array of measured characteristics, wherein the multidimensional array is organized by transmitter
configuration and receiver configuration combinations, wherein the first physical layer circuit and the second physical layer
circuit communicate via a bus to a host processor, and wherein steps (a) through (i) are performed without use of a JTAG communication,
wherein the first receiver configuration includes a plurality of values that determine various settings for determining the
value of each bit received by the second physical layer circuit, wherein the plurality of values include an equalizer bandwidth
value, a differential termination value, a common-mode voltage value, an AC and DC coupling value, a signal threshold detection
value, a continuous time linear equalization value, a DC gain value, an equalizer value, and an eye quality value, and wherein
the plurality of values are programmed by writing to a register in the second physical layer circuit.
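The sweep of step (i) is, in outline, a nested loop over transmitter and receiver configurations that records one measured characteristic per combination. The sketch below assumes a `measure` callback standing in for steps (d) through (h); it is illustrative, not the claimed method.

```python
def sweep_channel(tx_configs, rx_configs, measure):
    """Build the multidimensional array of measured characteristics,
    organized by (transmitter config, receiver config) combination."""
    results = {}
    for tx in tx_configs:
        for rx in rx_configs:
            # (d)-(h): configure both physical layer circuits, transmit
            # the test bit stream, and measure the received signal.
            results[(tx, rx)] = measure(tx, rx)
    return results
```

Indexing the results by configuration pair lets a host processor later pick the combination with the best measured characteristic (for example, the widest eye).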

US Pat. No. 9,900,090

INTER-PACKET INTERVAL PREDICTION LEARNING ALGORITHM

Netronome Systems, Inc., ...

1. A network appliance, comprising:
a management card;
an optical fiber port;
a line card, comprising:
a Flow Processing Device (FPD);
an optical transceiver;
and a memory, and
a backplane through which the line card communicates with the management card,
wherein the FPD connects to the optical fiber port through the optical transceiver, and analyzes a plurality of packets received
through the optical transceiver, wherein the analysis comprises:

determining an application protocol of each of the plurality of packets,
measuring a size of each of the plurality of packets,
determining a packet flow direction of each of the plurality of packets, and
determining a packet size state that corresponds to the size and the flow direction of each of the plurality of packets,
wherein the plurality of packets are part of a flow pair, and the analysis further comprises determining packet size state
transitions (PSST) between each sequential pair of packets of the flow pair,

wherein the FPD analyzes packets for a plurality of flow pairs of a particular application protocol, and maintains PSST count
values for the particular application protocol, each PSST count value indicating a number of occurrences of each packet size
state transition between each sequential pair of packets in the plurality of flow pairs of the particular application protocol,
and

wherein the FPD analyzes flow pairs for a plurality of application protocols and determines a most likely application protocol
(MLAP) for each PSST between sequential pairs of packets, and wherein the MLAP is written into an application protocol estimation
table that is stored in the memory.
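The PSST bookkeeping can be sketched as two steps: count how often each packet size state transition occurs per application protocol, then record the most likely application protocol (MLAP) for each transition. The data layout below is an assumption for illustration.

```python
from collections import defaultdict

def count_psst(flows_by_protocol):
    """flows_by_protocol: {protocol: [list of packet size states, ...]}.
    Returns {(state_a, state_b): {protocol: count}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for proto, flows in flows_by_protocol.items():
        for states in flows:
            # Each sequential pair of packets yields one PSST.
            for a, b in zip(states, states[1:]):
                counts[(a, b)][proto] += 1
    return counts

def mlap_table(counts):
    # For each PSST, the MLAP is the protocol with the highest count.
    return {t: max(protos, key=protos.get) for t, protos in counts.items()}
```

At run time, observing a transition and looking it up in the MLAP table gives an estimate of the flow's application protocol without deep inspection of every packet.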

US Pat. No. 9,846,662

CHAINED CPP COMMAND

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a chained command onto a device from a bus, wherein the chained command includes a reference value;
(b) in response to the receiving of the chained command in (a) outputting a plurality of commands onto the bus, wherein each
of the commands causes a corresponding amount of data from a first portion of memory to be written into a second portion of
memory, wherein due to the outputting of the plurality of commands the data is moved so that the data is stored in the second
portion of memory, and wherein the chained command does not include any address; and

(c) outputting the reference value from the device onto the bus, wherein (a) through (c) are performed by the device, and
wherein the device includes no processor that fetches and executes processor-executable instructions.
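Step (b), expanding one chained command into a plurality of per-chunk commands, can be sketched as follows. The chunk size and the command tuple format are assumptions for illustration; note that, as the claim states, the chained command itself carries no address, only a reference value.

```python
def expand_chained_command(length, chunk, reference_value):
    """Emit one copy command per chunk of the data to be moved from the
    first portion of memory to the second, then return the reference
    value that is output in step (c)."""
    commands = []
    offset = 0
    while offset < length:
        n = min(chunk, length - offset)
        # Each derived command moves one chunk; addresses are derived
        # by the device, not supplied in the chained command.
        commands.append(("copy", offset, n))
        offset += n
    return commands, reference_value
```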

US Pat. No. 9,727,513

UNICAST PACKET READY COMMAND

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a packet ready command from a memory system via a bus and onto a network interface circuit, wherein the packet
ready command includes a multicast value;

(b) determining a communication mode as a function of the multicast value, wherein the multicast value indicates a single
packet is to be communicated to a single destination by the network interface circuit;

(c) outputting a free packet command from the network interface circuit onto the bus, wherein the free packet command includes
a Free On Last Transfer (FOLT) value, wherein the FOLT value indicates that the packet is to be freed from the memory system
by the network interface circuit after the packet is communicated to the network interface circuit, and wherein (a) through
(c) are performed by the network interface circuit.

US Pat. No. 9,756,152

MAKING A FLOW ID FOR AN EXACT-MATCH FLOW TABLE USING A BYTE-WIDE MULTIPLEXER CIRCUIT

Netronome Systems, Inc., ...

1. An integrated circuit, comprising:
an input port that receives an incoming packet onto the integrated circuit, wherein the packet comprises a plurality of bytes;
a characterizer and classifier circuit that analyzes the packet and classifies the packet as being in a first class or a second
class;

a byte-wide multiplexer circuit comprising a plurality of sets of input leads and a set of output leads and a set of select
input leads, wherein each respective one of the sets of input leads is coupled to receive a corresponding respective one of
the bytes of the packet, wherein the set of select input leads is controlled in a first way if the packet is classified to
be in the first class such that a first selected byte of the packet is output by the byte-wide multiplexer circuit as a particular
byte of a Flow Identification value (Flow Id), and wherein the set of select input leads is controlled in a second way if
the packet is classified to be in the second class such that a second selected byte of the packet is output by the byte-wide
multiplexer circuit as the particular byte of the Flow Id, wherein the Flow Id comprises a plurality of bytes; and

an exact-match flow table structure that receives the Flow Id from the byte-wide multiplexer circuit.
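The byte-wide multiplexer behavior can be sketched as a per-class select table: for each byte position of the Flow Id, the select values pick which packet byte is routed through. The class names and select values below are invented for the example.

```python
def make_flow_id(packet, selects):
    """selects: one packet byte index per Flow Id byte position."""
    return bytes(packet[s] for s in selects)

# Hypothetical select tables: controlling the select input leads in a
# "first way" or a "second way" amounts to choosing a different table.
SELECTS = {"first_class": [0, 1, 2], "second_class": [4, 5, 6]}

def classify_and_mux(packet, packet_class):
    # The classifier's output (the class) drives the select inputs.
    return make_flow_id(packet, SELECTS[packet_class])
```

The effect is that packets of different classes contribute different header bytes to the Flow Id that is handed to the exact-match flow table structure.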

US Pat. No. 9,755,911

ON-DEMAND GENERATION OF SYSTEM ENTRY PACKET COUNTS

Netronome Systems, Inc., ...

1. A method, comprising:
(a) maintaining a match table on a first processor in a networking device, wherein the match table includes an entry, wherein
packets of multiple flows result in matches to the entry, and wherein the entry includes an entry packet count;

(b) maintaining a set of bypass packet counts on a second processor in the networking device, wherein there are multiple paths
through the networking device through which packet information passes, and wherein there is one bypass packet count for each
path;

(c) receiving a request onto the networking device, wherein the request is a request for a system entry packet count of the
entry, and wherein the entry is in the match table on the first processor;

(d) determining all the paths of all the flows that resulted in matches to the entry;
(e) determining the system entry packet count based at least on part on the entry packet count and the bypass packet counts
for the paths determined in (d); and

(f) outputting a response to the request, wherein the response is output from the networking device, and wherein the response
includes the system entry packet count determined in (e), wherein (a) through (f) are performed on the networking device.
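Step (e) reduces to a sum: the entry's own packet count on the first processor, plus the bypass packet counts for every path determined in (d). A one-line sketch (data layout assumed):

```python
def system_entry_packet_count(entry_count, bypass_counts, matched_paths):
    """entry_count: the entry packet count from the match table;
    bypass_counts: {path: bypass packet count} from the second processor;
    matched_paths: the paths determined in step (d)."""
    return entry_count + sum(bypass_counts[p] for p in matched_paths)
```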

US Pat. No. 9,705,811

SIMULTANEOUS QUEUE RANDOM EARLY DETECTION DROPPING AND GLOBAL RANDOM EARLY DETECTION DROPPING SYSTEM

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a queue number and a packet descriptor, wherein the queue number indicates a queue stored within a memory unit,
and wherein the packet descriptor is associated with a packet;

(b) determining a priority level associated with the packet;
(c) determining an amount of free memory in the memory unit by (c1) reading a free descriptor count from the memory unit,
wherein a free descriptor counter located in the memory unit generates the free descriptor count and (c2) comparing the free
descriptor count value with a free descriptor threshold value, wherein a first range includes all free descriptor count values
greater than the free descriptor threshold value, wherein a second range includes all free descriptor count values less than
the free descriptor threshold value, wherein the free descriptor counter decrements the free descriptor count when a packet
descriptor is stored in a queue in the memory unit, and wherein the free descriptor counter increments the free descriptor
count when a packet descriptor is removed from a queue in the memory unit;

(d) applying a global drop probability to generate a global drop indicator, wherein the global drop probability is a function
of the amount of free memory;

(e) applying a queue drop probability to generate a queue drop indicator, wherein the queue drop probability is a function
of an instantaneous queue depth or a drop precedence value;

(f) causing the packet to be transmitted when the priority level is a first priority level;
(g) causing the packet to be transmitted when the priority level is a second priority level, the global drop indicator indicates
that the packet is not to be dropped, and the queue drop indicator indicates that the packet is not to be dropped; and

(h) causing the packet not to be transmitted when the priority level is the second priority level, and either the global drop
indicator or the queue drop indicator indicates that the packet is to be dropped.
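The transmit/drop decision of steps (f) through (h) combines the two random early detection checks: high-priority packets always pass, and all other packets pass only when neither the global nor the per-queue indicator says drop. The probability functions below are stand-ins for the claimed functions of free memory and queue depth.

```python
import random

def should_transmit(priority, global_drop_p, queue_drop_p, rng=random.random):
    """Sketch of the simultaneous global + per-queue RED decision."""
    if priority == "high":
        return True                             # (f) first priority level
    global_drop = rng() < global_drop_p         # (d) global drop indicator
    queue_drop = rng() < queue_drop_p           # (e) queue drop indicator
    # (g)/(h): transmit only if neither indicator says drop.
    return not global_drop and not queue_drop
```

Injecting `rng` makes the two-indicator logic deterministic to test, while production use would keep the default random source.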

US Pat. No. 9,641,436

GENERATING A FLOW ID BY PASSING PACKET DATA SERIALLY THROUGH TWO CCT CIRCUITS

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
an input port through which an incoming packet is received onto the integrated circuit;
a first Characterize/Classify/Table Lookup and Mux Circuit (CCTC)
comprising:
a first characterizer coupled to receive at least part of the incoming packet;
a first classifier coupled to receive a characterization output value from the first characterizer; and
a first Table Lookup and Multiplexer Circuit (TLMC) coupled to receive a data output value from the first classifier and to
receive a control information value from the first classifier, wherein a programmable lookup circuit of the first TLMC is
controllable to receive a selectable part of the data output value received from the first classifier and to output an associated
lookup value, and wherein a multiplexer circuit of the first TLMC is controllable to output the lookup value as a part of
a data output value that is output by the first TLMC;

a second CCTC comprising:
a second characterizer coupled to receive at least a part of the data output value that is output by the first TLMC;
a second classifier coupled to receive a characterization output value from the second characterizer; and
a second TLMC coupled to receive the data output value from the second classifier and to receive a control information value
from the second classifier, wherein a programmable lookup circuit of the second TLMC is controllable to receive a selectable
part of the data output value received from the second classifier and to output an associated lookup value, and wherein a
multiplexer circuit of the second TLMC is controllable to output the lookup value as a part of a Flow Identification value
(Flow Id) that is output by the second TLMC; and

an exact-match flow table structure coupled to receive the Flow ID that is output by the second TLMC, wherein the exact-match
flow table structure maintains an exact-match flow table, wherein the exact-match flow table comprises a plurality of flow
entries, wherein each flow entry comprises a Flow ID and an action value, and wherein the exact-match flow table structure
determines, for each Flow ID received from the multiplexer circuit of the second TLMC, whether the Flow ID is an exact bit-by-bit
match for any Flow ID of any flow entry stored in the exact-match flow table.

US Pat. No. 9,641,466

PACKET STORAGE DISTRIBUTION BASED ON AVAILABLE MEMORY

Netronome Systems, Inc., ...

1. A method, comprising:
(a) receiving a packet descriptor that includes a queue number and a priority indicator, wherein the queue number indicates
a queue stored within a first memory unit located on a first island in an Island-Based Network Flow Processor (IB-NFP), wherein
the packet descriptor is associated with packet information stored in a second memory unit, and wherein packet information
includes at least a portion of a packet;

(b) determining an amount of free memory in the first memory unit;
(c) determining if the amount of free memory in the first memory unit is within a first range;
(d) determining a priority level associated with the packet;
(e) when the amount of free memory in the first memory unit is within the first range and the priority level associated with
the packet is a first level, causing the packet to be written from the second memory unit to a third memory unit; and

(f) when the amount of free memory in the first memory unit is not within the first range or the priority level associated
with the packet is a second level, not causing the packet to be written to the third memory unit from the second memory unit,
wherein the second and third memory units are not located on the first island in the IB-NFP, and wherein the packet descriptor
is not stored in the first, second or third memory unit.

US Pat. No. 9,558,224

AUTOMATON HARDWARE ENGINE EMPLOYING MEMORY-EFFICIENT TRANSITION TABLE INDEXING

Netronome Systems, Inc., ...

1. An automaton engine circuit, comprising:
a transition table organized into 2^n rows, wherein each row comprises a plurality of storage locations, wherein each storage location has n bits and no more than
n bits, wherein each storage location stores an n-bit entry value, wherein a first automaton and a second automaton are both
encoded in the transition table, and wherein each row is usable to store an entry value that points to at least one other
row; and

next state logic circuitry that receives entry values from the transition table and generates therefrom a next states vector.
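The memory efficiency of the indexing scheme is that with 2^n rows, an n-bit entry value is exactly wide enough to name any row, so no wider next-state pointers are needed. A toy model of one transition step (the encoding of two automata in one table is omitted here):

```python
def step(table, row, symbol):
    """table: 2**n rows, each a list of n-bit entry values indexed by
    input symbol. Returns the next row (next state)."""
    n = (len(table) - 1).bit_length()   # n bits address 2**n rows
    entry = table[row][symbol]
    assert entry < 2 ** n               # entry value fits in n bits
    return entry
```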

US Pat. No. 10,044,619

SYSTEM AND METHOD FOR PROCESSING AND FORWARDING TRANSMITTED INFORMATION

Netronome Systems, Inc., ...

1. An apparatus for handling first and second digital electronic flows, wherein the first flow comprises a plurality of first datagrams, wherein the first datagrams include datagram header information and datagram payload information, wherein at least one of the first datagrams contains application layer payload information, wherein the second flow comprises a plurality of second datagrams, wherein the second datagrams include datagram header information and datagram payload information, and wherein at least one of the second datagrams contains application layer payload information, the apparatus comprising:
a network interface through which at least some of the first datagrams and at least some of the second datagrams pass; and
a plurality of processing components upon which a flow identification and classification subsystem is hosted, wherein the plurality of processing components inspects at least some of the first datagrams and based at least in part on datagram header information and application layer payload information of the first datagrams causes a first flow policy to be applied to the first flow such that at least some of the first datagrams of the first flow take a first path through the plurality of processing components of the apparatus, and wherein the flow identification and classification subsystem inspects at least some of the second datagrams and based at least in part on datagram header information and application layer payload information of the second datagrams causes a second flow policy to be applied to the second flow such that at least some of the second datagrams of the second flow take a second path through the plurality of processing components of the apparatus, wherein the network interface and the plurality of processing components together comprise a special purpose processor subsystem, the apparatus further comprising:
a general purpose processor subsystem that hosts a general purpose operating system, wherein an application layer program executes on the general purpose processor subsystem, wherein a flow policy causes a copy of a flow to be communicated from the special purpose processor subsystem to the application layer program.

US Pat. No. 9,940,097

REGISTERED FIFO

Netronome Systems, Inc., ...

1. A FIFO (First In First Out) memory, comprising:
a tail register;
a plurality of internal registers;
a head register;
a set of input data leads;
a set of output data leads;
a clock signal input lead, wherein a clock signal is received onto the clock signal input lead;
a push signal input lead;
a pop signal input lead;
a full signal output lead;
a valid signal output lead; and
means for controlling the tail register, the internal registers, and the head register such that:
1) an incoming data value that is pushed into the FIFO memory from the set of input data leads can only be written directly into either the head register or the tail register and can never be written directly from the set of input data leads into any of the internal registers;
2) only a data value stored in the head register is ever output from the FIFO memory onto the set of output data leads;
3) if during a clock cycle the tail register stores a data value and at least one of the internal registers is empty and no push operation is to be performed by the FIFO memory in the next clock cycle and no pop operation is to be performed by the FIFO memory in the next clock cycle then in the next clock cycle the data value stored in the tail register is loaded into an empty one of the internal registers;
4) an incoming data value on the set of input data leads is captured into the FIFO memory on a rising edge of the clock signal if a push signal on the push signal input lead is asserted at the time of the rising edge provided that a full signal on the full signal output lead is not asserted at the time of the rising edge; and
5) if the FIFO memory is empty in one clock cycle and if the FIFO memory then performs a push operation in the next clock cycle then an incoming data value from the set of input data leads is written directly into the head register in said next clock cycle.

US Pat. No. 9,891,985

256-BIT PARALLEL PARSER AND CHECKSUM CIRCUIT WITH 1-HOT STATE INFORMATION BUS

Netronome Systems, Inc., ...

1. An integrated circuit comprising:
a plurality of state buses, wherein each of the state buses receives state information for one of a plurality of protocols;
a multi-bit data bus that receives a DATA signal, wherein the DATA signal is part of a frame of a packet;
a first L3 start circuit that receives a first and second portion of the DATA signal;
a second L3 start circuit that receives a third and fourth portion of the DATA signal;
a third L3 start circuit that receives a fifth and sixth portion of the DATA signal;
a fourth L3 start circuit that receives a seventh and eighth portion of the DATA signal;
a first parse32 circuit that is coupled to receive the first portion of the DATA signal and the plurality of state buses,
wherein in at least two of the state buses all the bits are left shifted by one bit with the most significant bit being discarded
and with the new least significant bit being supplied by an output bit by the first L3 start circuit;

a second parse32 circuit that is coupled to receive the second portion of the DATA signal and the plurality of state buses
from the first parse32 circuit, wherein in at least two of the state buses all the bits are left shifted by one bit with the
most significant bit being discarded and with the new least significant bit being supplied by an output bit by the first L3
start circuit;

a third parse32 circuit that is coupled to receive the third portion of the DATA signal and the plurality of state buses from
the second parse32 circuit, wherein in at least two of the state buses all the bits are left shifted by one bit with the most
significant bit being discarded and with the new least significant bit being supplied by an output bit by the second L3 start
circuit;

a fourth parse32 circuit that is coupled to receive the fourth portion of the DATA signal and the plurality of state buses
from the third parse32 circuit, wherein in at least two of the state buses all the bits are left shifted by one bit with the
most significant bit being discarded and with the new least significant bit being supplied by an output bit by the second
L3 start circuit;

a fifth parse32 circuit that is coupled to receive the fifth portion of the DATA signal and the plurality of state buses from
the fourth parse32 circuit, wherein in at least two of the state buses all the bits are left shifted by one bit with the most
significant bit being discarded and with the new least significant bit being supplied by an output bit by the third L3 start
circuit;

a sixth parse32 circuit that is coupled to receive the sixth portion of the DATA signal and the plurality of state buses from
the fifth parse32 circuit, wherein in at least two of the state buses all the bits are left shifted by one bit with the most
significant bit being discarded and with the new least significant bit being supplied by an output bit by the third L3 start
circuit;

a seventh parse32 circuit that is coupled to receive the seventh portion of the DATA signal and the plurality of state buses
from the sixth parse32 circuit, wherein in at least two of the state buses all the bits are left shifted by one bit with the
most significant bit being discarded and with the new least significant bit being supplied by an output bit by the fourth
L3 start circuit;

an eighth parse32 circuit that is coupled to receive the eighth portion of the DATA signal and the plurality of state buses
from the seventh parse32 circuit, wherein in at least two of the state buses all the bits are left shifted by one bit with the
most significant bit being discarded and with the new least significant bit being supplied by an output bit by the fourth
L3 start circuit; and

a first parse64 circuit comprising the first L3 start circuit, the first parse32 circuit, and the second parse32 circuit;
a second parse64 circuit comprising the second L3 start circuit, the third parse32 circuit, and the fourth parse32 circuit;
a third parse64 circuit comprising the third L3 start circuit, the fifth parse32 circuit, and the sixth parse32 circuit;
a fourth parse64 circuit comprising the fourth L3 start circuit, the seventh parse32 circuit, and the eighth parse32 circuit;
a checksum summer and compare circuit that receives a checksum from each of the parse64 circuits and determines a checksum
for the packet from the received checksums, and wherein the checksum summer and compare circuit compares an extracted checksum
from the packet to the determined checksum.

US Pat. No. 9,699,084

FORWARDING MESSAGES WITHIN A SWITCH FABRIC OF AN SDN SWITCH

Netronome Systems, Inc., ...

10. A method involving a Software-Defined Networking (SDN) switch, wherein the SDN switch comprises a plurality of Network
Flow Switch (NFX) integrated circuits, a Network Flow Processor (NFP) circuit, and a controller processor, the method comprising:
(a) maintaining a flow table on each NFX integrated circuit, wherein none of the NFX integrated circuits executes any SDN
protocol stack;

(b) maintaining a flow table on the NFP circuit, wherein the NFP circuit executes no SDN protocol stack;
(c) maintaining a flow table on the controller processor, wherein the controller processor does execute an SDN protocol stack;
(d) communicating a flow entry out of the NFP circuit along with an addressing label, wherein the flow entry and the addressing
label are communicated as parts of a first MAC frame;

(e) receiving the first MAC frame onto a first of the NFX integrated circuits and using the addressing label to determine
that the flow entry is to be forwarded to another NFX integrated circuit;

(f) communicating the flow entry out of the first NFX integrated circuit and to the other NFX integrated circuit without the
addressing label, wherein the flow entry is communicated out of the first NFX integrated circuit as part of a second MAC frame;

(g) loading the flow entry into a flow table of a selected one of the NFX integrated circuits after the flow entry was forwarded
through one or more others of the NFX integrated circuits, wherein the NFP circuit outputs the flow entry along with one or
more addressing labels so that the flow entry will be loaded into the flow table of a selected one of the NFX integrated circuits,
wherein the selected one of the NFX integrated circuits that is loaded with the flow entry is determined by the NFP circuit;
and

(h) using the flow tables of the NFX integrated circuits to control how packet traffic received onto the SDN switch is output
from the SDN switch.

US Pat. No. 10,366,019

MULTIPROCESSOR SYSTEM HAVING EFFICIENT AND SHARED ATOMIC METERING RESOURCE

Netronome Systems, Inc., ...

1. An integrated circuit comprising a plurality of identical multiprocessor systems, wherein each multiprocessor system comprises:
a first processor;
a second processor;
an interface means that is coupled to the first processor via a first bus, and that is coupled to the second processor via a second bus, wherein neither the first bus nor the second bus is a posted transaction bus, wherein the interface means includes a first register that is readable by the first processor via the first bus and a second register that is readable by the second processor via the second bus, wherein the interface means is for receiving information from one of the first and second buses in a write operation and for using that information to generate an atomic request, wherein the atomic request has a command portion, an address portion, and a data value portion; and
an atomic engine means for performing an atomic meter operation, wherein the atomic engine means receives the atomic request from the interface means, wherein the atomic engine means comprises:
a memory that stores pairs of credit values; and
a pipeline that uses the address portion of the atomic request to read a first credit value and a second credit value from the memory and then uses the first and second credit values along with the data value portion as input values to perform the atomic meter operation, wherein the pipeline outputs a result color value as a result of the atomic meter operation such that the result color value is then stored into one of the first and second registers in the interface means.

US Pat. No. 10,204,046

HIGH-SPEED AND MEMORY-EFFICIENT FLOW CACHE FOR NETWORK FLOW PROCESSORS

Netronome Systems, Inc., ...

1. A method, comprising:
(a) maintaining a plurality of cache lines, wherein one of the cache lines includes a plurality of lock/hash entries, wherein each of the lock/hash entries comprises an exclusive lock value, a shared lock value, and an associated entry hash value, wherein some of the cache lines are stored in a cache memory and wherein others of the cache lines are stored in a bulk memory;
(b) storing a linked list of one or more stored keys, wherein each of the one or more stored keys in the linked list hashes to the entry hash value;
(c) determining a first hash value and a second hash value from an input key;
(d) communicating the second hash value from a processor to a lookup engine;
(e) reading a Cache Line or Cache Line Portion (CL/CLP) from the cache memory, wherein which CL/CLP is read in (e) is at least in part determined by the first hash value;
(f) determining that the second hash value matches an entry hash value of one of the lock/hash entries of the CL/CLP read in (e), wherein the exclusive lock value of said one lock/hash entry had a prior value immediately prior to the determining of (f) and wherein the shared lock value of said one lock/hash entry had a prior value immediately prior to the determining in (f);
(g) in response to the determining of (f) incrementing the shared lock value of said one lock/hash entry of the CL/CLP;
(h) writing the CL/CLP back into the cache memory, wherein (e), (f), (g) and (h) are performed by the lookup engine as parts of a lookup operation;
(i) communicating the prior value of the exclusive lock value and the prior value of the shared lock value from the lookup engine to the processor;
(j) from the prior value of the exclusive lock value communicated in (i) determining that the linked list is not exclusively locked; and
(k) traversing the linked list and checking to determine if the input key matches any stored key in the linked list.
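Steps (f), (g) and (i) of the claim above can be sketched as follows. This is a minimal sketch under assumptions: the entry layout and return convention are invented for illustration, and the cache-line write-back of step (h) is implicit in mutating the entry in place.

```python
# Hypothetical model of the shared-lock portion of the lookup operation:
# the lookup engine finds the lock/hash entry whose stored hash matches
# the second hash value, increments that entry's shared lock value, and
# reports the *prior* lock values back to the processor.

cache_line = [
    # each lock/hash entry: [exclusive_lock, shared_lock, entry_hash]
    [0, 0, 0xBEEF],
    [0, 2, 0xCAFE],
]

def shared_lock_lookup(cl, second_hash):
    """Return (prior_exclusive, prior_shared) for the matching entry,
    after incrementing its shared lock value, or None on a miss."""
    for entry in cl:
        excl, shared, entry_hash = entry
        if entry_hash == second_hash:
            entry[1] = shared + 1   # step (g): take a shared lock
            return excl, shared     # step (i): report the prior values
    return None
```

The processor then uses the returned prior exclusive lock value, as in step (j), to decide whether the linked list is safe to traverse.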

US Pat. No. 10,129,135

USING A NEURAL NETWORK TO DETERMINE HOW TO DIRECT A FLOW

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a plurality of packets of a flow, wherein the packets are received in (a) onto a first network device;
(b) using a neural network to analyze the plurality of packets thereby generating a neural network output value, wherein the neural network is used in (b) on the first network device; and
(c) using the neural network output value to output packets of the flow from a selected one of a plurality of output ports, and wherein the packets of the flow that are output in (c) are output from a second network device.

US Pat. No. 10,341,232

PACKET PREDICTION IN A MULTI-PROTOCOL LABEL SWITCHING NETWORK USING OPENFLOW MESSAGING

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a plurality of packets on a first switch;
(b) performing a packet prediction learning algorithm on the first switch using the first plurality of packets and thereby generating packet prediction information;
(c) communicating the packet prediction information from the first switch to a Network Operating Center (NOC);
(d) in response to (c) the NOC communicates the packet prediction information to a second switch;
(e) in response to (d) the NOC communicates a packet prediction control signal to the second switch; and
(f) in response to (e) the second switch utilizes the packet prediction control signal to determine if a packet prediction operation algorithm utilizing the packet prediction information is to be performed, wherein the communications of (c) and (d) are accomplished using at least one OpenFlow message, and wherein the packet prediction information includes Inter-Packet Interval (IPI) information for a specific application protocol.
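The learning step of the claim above can be sketched in a few lines. The claim only requires that the packet prediction information include Inter-Packet Interval (IPI) information for a specific application protocol; the mean-IPI summary below is an assumption, not the patent's algorithm.

```python
# Hypothetical sketch of the packet prediction learning step: the first
# switch measures inter-packet intervals for a flow and summarizes them
# as prediction information that can be shared with a second switch.

def learn_ipi(timestamps):
    """Return packet prediction information: the mean inter-packet interval."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(intervals) / len(intervals)

def predict_next_arrival(last_timestamp, mean_ipi):
    """Predict when the next packet of the flow should arrive."""
    return last_timestamp + mean_ipi
```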

US Pat. No. 10,228,968

NETWORK INTERFACE DEVICE THAT ALERTS A MONITORING PROCESSOR IF CONFIGURATION OF A VIRTUAL NID IS CHANGED

Netronome Systems, Inc., ...

1. A Network Interface Device (NID) that implements a plurality of virtual NIDs, wherein the NID is adapted to be coupled to a host via a bus, the NID comprising:
a processor that monitors configuration of the plurality of virtual NIDs;
a transactional memory that includes a plurality of second blocks of a memory of the NID, wherein there is a second block for each of the virtual NIDs, wherein each virtual NID is configured by configuration information stored in one of the second blocks that corresponds to the virtual NID, wherein if there is a write into any of the second blocks then the transactional memory causes an alert to be sent to the processor; and
a bus interface circuit for coupling the NID to the host via the bus, wherein the bus has a bus address space, wherein a Virtual Function (VF) portion of the bus address space is usable by the host to configure the plurality of virtual NIDs, wherein the VF portion comprises a plurality of first blocks of bus address space, wherein each first block comprises X address locations, wherein each first block corresponds to a corresponding one of the second blocks, wherein a write across the bus from the host into one of the first blocks results in a write into the corresponding one of the second blocks, wherein each second block comprises Y memory locations, and wherein X is greater than Y.
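The first-block-to-second-block mapping in the claim above (X bus addresses folding into Y memory locations, with X greater than Y, plus the alert of the monitoring processor) can be sketched as follows. The fold-by-modulo policy and the constants are assumptions; the claim fixes only that X > Y and that a write triggers an alert.

```python
# Hypothetical sketch of the claimed address mapping: each virtual NID
# owns a first block of X bus addresses, but its configuration lives in
# a smaller second block of Y memory locations, so a host write is
# folded into the smaller block and the monitor is alerted.

X = 4096   # bus addresses per first block (per virtual NID)
Y = 256    # memory locations per second block

def translate(bus_offset):
    """Map an offset in the VF portion of bus address space to a
    (virtual_nid, second_block_offset) pair."""
    vnid = bus_offset // X          # which first block was written
    offset = (bus_offset % X) % Y   # fold into the smaller second block
    return vnid, offset

alerts = []

def host_write(second_blocks, bus_offset, value):
    """Write into the corresponding second block and alert the monitor."""
    vnid, offset = translate(bus_offset)
    second_blocks[vnid][offset] = value
    alerts.append(vnid)             # transactional memory alerts the processor
```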

US Pat. No. 10,496,625

ORDERING SYSTEM THAT EMPLOYS CHAINED TICKET RELEASE BITMAP HAVING A PROTECTED PORTION

Netronome Systems, Inc., ...

1. An ordering system comprising:
a plurality of ticket order release bitmap blocks that together store a ticket order release bitmap, wherein each Ticket Order Release Bitmap Block (TORBB) stores a different part of the ticket order release bitmap, wherein a first TORBB of the plurality of TORBBs is protected;
a bus; and
a Global Reordering Block (GRO) that: 1) receives a queue entry onto the ordering system from a thread, wherein the queue entry includes a first sequence number, 2) receives a ticket release command from the thread, and in response 3) outputs a return data of ticket release command, wherein the return data of ticket release command indicates if a bit in the protected TORBB was set by the thread.
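The core ticket-release mechanism underlying the claim above can be sketched as follows. This models only the ordering behavior of a ticket release bitmap; the chaining across TORBBs and the protected-portion check that the claim adds are not modeled, and all names are assumptions.

```python
# Hypothetical model of a ticket order release bitmap: threads present
# sequence numbers out of order, out-of-order numbers set bits in the
# bitmap, and a release returns the run of sequence numbers that can
# now be released in order.

class TicketBitmap:
    def __init__(self):
        self.expected = 0   # next sequence number to release
        self.bits = set()   # sequence numbers that arrived early

    def ticket_release(self, seq):
        """Return the list of sequence numbers released in order."""
        if seq != self.expected:
            self.bits.add(seq)              # out of order: just set the bit
            return []
        released = [seq]
        self.expected += 1
        while self.expected in self.bits:   # drain any chained set bits
            self.bits.remove(self.expected)
            released.append(self.expected)
            self.expected += 1
        return released
```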

US Pat. No. 10,476,747

LOADING A FLOW TRACKING AUTOLEARNING MATCH TABLE

Netronome Systems, Inc., ...

1. A networking device, comprising:
a first processor comprising:
a match table that stores a plurality of entries; and
a second processor comprising:
a synchronized match table that stores a plurality of entries, wherein the plurality of entries stored in the synchronized match table is synchronized with a corresponding plurality of entries stored in the match table of the first processor; and
a flow tracking autolearning match table, wherein a match of a first packet to one of the synchronized entries in the synchronized match table causes an action to be recorded in the flow tracking autolearning match table in association with a flow of the first packet, and wherein the action is recorded without the first processor being first alerted of the recording, and without the first processor first receiving the first packet from the second processor, and without the first processor instructing the flow tracking autolearning match table to record the action, wherein the flow tracking autolearning match table, for the first packet of the flow, generates a Flow Identifier (Flow ID) and stores the Flow ID as part of an entry for the flow and directs the first packet to the synchronized match table for possible matching, and wherein a subsequent packet of the flow results in a hit in the flow tracking autolearning match table and results in the previously recorded action being applied to the subsequent packet.

US Pat. No. 10,474,465

POP STACK ABSOLUTE INSTRUCTION

Netronome Systems, Inc., ...

1. A method, comprising:
(a) storing a data value in a register file, wherein the register file comprises a plurality of registers, wherein the register file is utilized in a processor as a stack, wherein each and every one of the registers of the plurality of registers of the register file is operable as a stack register in the stack, and wherein the register file is a part of a first pipeline stage of a pipeline of the processor;
(b) decoding an instruction, wherein the instruction includes an absolute pointer value, wherein the decoding is performed by a second pipeline stage of the pipeline, wherein the second pipeline stage stores a stack pointer;
(c) as a result of the decoding of (b), determining an operand value A by popping the stack and using a popped value as the operand value A;
(d) as a result of the decoding of (b), determining an operand value B by using the absolute pointer value to identify a particular register of the plurality of registers of the register file and by using a data value portion stored in the particular register as the operand value B;
(e) performing an arithmetic logic operation using the operand value A and the operand value B thereby generating a result, wherein the arithmetic logic operation is performed by a third pipeline stage of the pipeline; and
(f) replacing the data value portion stored in the particular register with the result, wherein (a) through (f) are performed by the pipeline of the processor, and wherein the pipeline includes the first pipeline stage, the second pipeline stage, and the third pipeline stage.
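Steps (c) through (f) of the claim above can be sketched as a single-function model. The three pipeline stages are collapsed into one function for clarity, and the choice of addition as the arithmetic logic operation is an assumption; the claim leaves the ALU operation open.

```python
# Hypothetical model of the claimed "pop stack absolute" instruction:
# operand A is popped from the register-file stack, operand B is read
# via the instruction's absolute pointer value, and the ALU result
# replaces the value in the absolutely-addressed register.

def pop_stack_absolute(regs, sp, abs_ptr):
    """Execute one instruction; return the new stack pointer."""
    a = regs[sp]             # (c) pop the stack -> operand A
    sp -= 1
    b = regs[abs_ptr]        # (d) absolute pointer -> operand B
    result = a + b           # (e) arithmetic logic operation
    regs[abs_ptr] = result   # (f) result replaces that register's value
    return sp
```

Note how the same register file serves both roles: the stack pointer selects operand A implicitly, while the absolute pointer in the instruction selects operand B (and the writeback target) explicitly.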

US Pat. No. 10,419,406

EFFICIENT FORWARDING OF ENCRYPTED TCP RETRANSMISSIONS

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a first Transmission Control Protocol (TCP) segment from a client in a first Secure Sockets Layer (SSL) session and transmitting the first TCP segment to a server in a second SSL session, wherein the first TCP segment is communicated from the client through a network appliance and to the server via a flow of a TCP connection;
(b) receiving a retransmit TCP segment from the client onto the network appliance, wherein the retransmit TCP segment is a retransmit of the first TCP segment, wherein the first TCP segment as transmitted from the client in (a) had a TCP payload, and wherein at a time of the receiving of (b) the network appliance did not store the TCP payload;
(c) storing a number of decrypt engine states on the network appliance, wherein the decrypt engine states are decrypt engine states for the flow of the TCP connection, wherein at a time a number of TCP segments communicated from the network appliance to the server via the flow are unacknowledged, and wherein the number of decrypt engine states stored in (c) is smaller than the number of unacknowledged TCP segments;
(d) using one of the decrypt engine states to set the state of a decrypt engine;
(e) using the decrypt engine to decrypt an SSL payload of the retransmit TCP segment thereby generating an unencrypted SSL payload, wherein the decrypt engine implements a stream cipher;
(f) encrypting the unencrypted SSL payload thereby generating a re-encrypted SSL payload; and
(g) transmitting a retransmit TCP segment from the network appliance to the server via the second SSL session, wherein the retransmit TCP segment transmitted in (g) includes the re-encrypted SSL payload.
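Steps (c) through (e) of the claim above rely on the cipher being a stream cipher: instead of buffering the payload of every unacknowledged segment, the appliance keeps a small number of decrypt-engine state checkpoints and, on a retransmit, restores the nearest checkpoint and fast-forwards. The sketch below uses a toy byte-counter keystream in place of a real cipher; all names are assumptions.

```python
# Hypothetical sketch of checkpointed stream-cipher decryption for
# TCP retransmits: fewer stored engine states than unacked segments.

def keystream_byte(pos):
    return (pos * 7 + 3) % 256   # toy keystream, NOT a real cipher

def encrypt(payload, stream_offset):
    """XOR stream cipher: the same operation encrypts and decrypts."""
    return bytes(b ^ keystream_byte(stream_offset + i)
                 for i, b in enumerate(payload))

def decrypt_retransmit(ciphertext, seg_offset, checkpoints):
    """Restore the nearest stored decrypt-engine state at or before the
    segment's stream offset, fast-forward, then decrypt."""
    start = max(c for c in checkpoints if c <= seg_offset)
    pos = start
    while pos < seg_offset:      # advance the engine without output
        keystream_byte(pos)
        pos += 1
    return encrypt(ciphertext, pos)
```

The memory saving in the claim comes from `checkpoints` being shorter than the list of unacknowledged segments; the cost is the fast-forward loop.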

US Pat. No. 10,419,242

LOW-LEVEL PROGRAMMING LANGUAGE PLUGIN TO AUGMENT HIGH-LEVEL PROGRAMMING LANGUAGE SETUP OF AN SDN SWITCH

Netronome Systems, Inc., ...

1. A method comprising:
(a) compiling a section of high-level programming language code (HLL code) thereby obtaining a first section of native code and a first section of low-level programming language code (LLL code);
(b) compiling a second section of LLL code along with the first section of LLL code obtained in (a) thereby obtaining a second section of native code, wherein the section of HLL code at least in part defines how a network switch performs a matching in a first condition, and wherein the second section of LLL code at least in part defines how the network switch performs matching in a second condition;
(c) loading the first section of native code into the network switch such that a first processor of the network switch can execute at least part of the first section of native code; and
(d) loading the second section of native code into the network switch such that a second processor of the network switch can execute at least part of the second section of native code.

US Pat. No. 10,419,348

EFFICIENT INTERCEPT OF CONNECTION-BASED TRANSPORT LAYER CONNECTIONS

Netronome Systems, Inc., ...

1. A network device, comprising:
a memory; and
means for:
(a) monitoring first information passing across a TCP (Transmission Control Protocol) connection, wherein the first information is communicated via the TCP connection through the network device;
(b) as a result of the monitoring of (a) making a determination that the first information has a first characteristic;
(c) in response to the making of the determination in (b) splitting a single TCP control loop that manages flow across the TCP connection into two TCP control loops, wherein after the splitting the memory stores two TCP Transmission Control Blocks (TCBs) for the TCP connection;
(d) after the splitting of (c) monitoring second information communicated via the TCP connection and making a determination that the second information does not have a second characteristic;
(e) in response to the making of the determination in (d) connecting the two TCP control loops into one TCP control loop such that the one TCP control loop manages flow across the TCP connection; and
(f) after the connecting of (e) communicating third information through the network device via the TCP connection, wherein at no time from the monitoring of (a) to the communicating of (f) is the TCP connection terminated on the network device, and wherein all of the first information, the second information, and the third information is at least received onto the network device via the TCP connection.

US Pat. No. 10,365,681

MULTIPROCESSOR SYSTEM HAVING FAST CLOCKING PREFETCH CIRCUITS THAT CAUSE PROCESSOR CLOCK SIGNALS TO BE GAPPED

Netronome Systems, Inc., ...

1. An Instruction and Data Prefetch Interface Circuit (IDPIC) for a plurality of processors, wherein each processor has an instruction code bus interface and has a data code bus interface, the IDPIC comprising:
an instruction code interface circuit comprising:
an instruction fetch request arbiter; and
a plurality of instruction prefetch circuits, wherein each instruction prefetch circuit comprises:
an instruction bus interface for coupling to an instruction code bus interface of a processor;
instruction prefetch line circuitry that outputs instructions onto the instruction bus interface; and
a state machine that causes an instruction read request to be output to the instruction fetch request arbiter if an instruction requested via the instruction bus interface is not present in the instruction prefetch line circuitry;
a data code interface circuit comprising:
a data request arbiter; and
a plurality of data prefetch circuits, wherein each data prefetch circuit comprises:
a data bus interface for coupling to a data code bus interface of a processor;
data prefetch line and write buffer circuitry that outputs data values onto the data code bus interface; and
a state machine that causes a data access request to be output to the data request arbiter if a data location an access of which is requested via the data bus interface is not present in the data prefetch line and write buffer circuitry;
a shared memory comprising:
a first port through which the shared memory receives instruction read requests from the instruction fetch request arbiter and through which it returns instructions; and
a second port through which the shared memory receives data access requests from the data request arbiter, and through which it returns data values, and through which it receives data values to be written into the shared memory; and
a plurality of clock gapping circuits, wherein there is one clock gapping circuit for each processor of the plurality of processors, wherein each clock gapping circuit: 1) receives a base clock signal, 2) receives a clock must not complete signal from one of the instruction prefetch circuits, 3) receives a clock must not complete signal from one of the data prefetch circuits, and 4) outputs a gapped clock signal to a corresponding one of the processors, wherein the instruction code interface circuit and the data code interface circuit are clocked by the base clock signal, wherein the base clock has a fixed and constant period of T from period to period, wherein the durations of time between consecutive rising edges of the gapped clock signal are integer multiples of T including 2T, 3T and 4T, and wherein the smallest duration of time between two consecutive rising edges of the gapped clock signal is 2T.
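The timing property at the end of the claim above (gapped-clock edge spacings that are integer multiples of T, never less than 2T) can be modeled behaviorally. The sketch below is an assumption-laden software model, not the circuit: it folds both "clock must not complete" inputs into one per-period stall flag.

```python
# Hypothetical behavioral model of a clock gapping circuit: a rising
# edge of the gapped clock is emitted only when no prefetch circuit
# asserts "clock must not complete" and at least 2T has elapsed since
# the previous gapped edge, so spacings are 2T, 3T, 4T, ...

T = 1.0  # fixed base clock period

def gapped_edges(must_not_complete):
    """Given per-period stall flags, return gapped-clock edge times."""
    edges = []
    last = -2 * T  # allow an edge in the first period
    for period, stalled in enumerate(must_not_complete):
        t = period * T
        if not stalled and t - last >= 2 * T:
            edges.append(t)
            last = t
    return edges
```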

US Pat. No. 10,341,246

UPDATE PACKET SEQUENCE NUMBER PACKET READY COMMAND

Netronome Systems, Inc., ...

1. A method involving a network flow processor integrated circuit, wherein the network flow processor integrated circuit comprises a first network interface circuit, a second network interface circuit, a bus, and at least a part of a memory system, the method comprising:
(a) storing a multicast packet in the memory system;
(b) receiving an egress packet descriptor from the memory system via the bus and onto the first network interface circuit, wherein the egress packet descriptor includes a packet sequence number and a packet ready command, wherein the packet ready command includes a multicast value, an updated sequence number, and an indicator of a network interface circuit, wherein the multicast value indicates whether a packet described by the egress packet descriptor is a multicast packet or a unicast packet, and wherein the first network interface circuit uses and maintains sequence numbers in a first sequence of sequence numbers;
(c) determining a communication mode as a function of the multicast value, wherein the indicator of the network interface circuit of the packet ready command indicates the second network interface circuit, and wherein the second network interface circuit uses and maintains sequence numbers in a second sequence of sequence numbers; and
(d) as a result of the determining of (c) replacing the packet sequence number of the egress packet descriptor with the updated sequence number of the packet ready command thereby generating a modified egress packet descriptor, wherein the receiving of (b), the determining of (c), and the replacing of (d) are performed by the first network interface circuit, wherein at least one copy of the multicast packet is transmitted out of the network flow processor integrated circuit via at least one of the first network interface circuit and the second network interface circuit.
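Steps (b) through (d) of the claim above can be sketched as a descriptor rewrite. The field names and dictionary representation are assumptions; the point shown is that a multicast descriptor bound for the other interface's sequence space gets its sequence number replaced by the updated one carried in the packet ready command.

```python
# Hypothetical sketch of the claimed sequence-number update: the first
# network interface circuit inspects the multicast value in the packet
# ready command and, for a multicast packet, replaces the descriptor's
# sequence number with the updated sequence number.

def process_egress_descriptor(descriptor):
    """Return the (possibly modified) egress packet descriptor."""
    cmd = descriptor["packet_ready_command"]
    if cmd["multicast"]:   # (c) determine the communication mode
        # (d) rewrite so the number is valid in the indicated
        # interface's own sequence-number space
        return dict(descriptor,
                    sequence_number=cmd["updated_sequence_number"])
    return descriptor
```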

US Pat. No. 10,230,638

EXECUTING A SELECTED SEQUENCE OF INSTRUCTIONS DEPENDING ON PACKET TYPE IN AN EXACT-MATCH FLOW SWITCH

Netronome Systems, Inc., ...

1. A method comprising:
(a) maintaining an exact-match flow table on an integrated circuit, wherein the exact-match flow table comprises a plurality of flow entries, wherein each flow entry comprises a Flow Identification value (Flow Id) and an action value;
(b) receiving a first packet onto an integrated circuit;
(c) analyzing the first packet and determining that the first packet is of a first type;
(d) as a result of the determining of (c) initiating execution of a first sequence of instructions by a processor of the integrated circuit, wherein execution of the first sequence causes bits of the first packet to be concatenated and modified in a first way thereby generating a first Flow Id, wherein the first Flow Id is of a first form;
(e) determining that the first Flow Id generated in (d) is a bit-by-bit exact-match of a Flow Id of a first flow entry in the exact-match flow table;
(f) using an action value of the first flow entry in outputting packet information of the first packet out of the integrated circuit;
(g) receiving a second packet onto an integrated circuit;
(h) analyzing the second packet and determining that the second packet is of a second type;
(i) as a result of the determining of (h) initiating execution of a second sequence of instructions by the processor, wherein execution of the second sequence causes bits of the second packet to be concatenated and modified in a second way thereby generating a second Flow Id, wherein the second Flow Id is of a second form;
(j) determining that the second Flow Id generated in (i) is a bit-by-bit exact-match of a Flow Id of a second flow entry in the exact-match flow table; and
(k) using an action value of the second flow entry in outputting packet information of the second packet out of the integrated circuit, wherein (a) through (k) are performed by the integrated circuit;
wherein both the first and second packets include a header of a particular type but the first Flow Id includes at least one bit of the header of the particular type of the first packet whereas the second Flow Id includes no bit of the header of the particular type of the second packet.
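The per-type key construction in the claim above can be sketched as follows. This is a minimal sketch under assumptions: tuples stand in for the concatenated bit strings, the specific header fields (including treating VLAN as the "header of a particular type") are invented for illustration, and the table is a plain dictionary, which matches only exactly.

```python
# Hypothetical sketch of per-packet-type Flow Id generation feeding an
# exact-match flow table: the packet type selects which "sequence of
# instructions" builds the key, so the two Flow Ids have different
# forms even though both packets carry the same kind of header.

def make_flow_id(packet):
    if packet["type"] == 1:
        # first form: includes at least one bit of the VLAN header
        return (packet["ip_src"], packet["ip_dst"], packet["vlan"])
    # second form: VLAN header present in the packet, but no bit of
    # it is included in the Flow Id
    return (packet["ip_src"], packet["ip_dst"])

flow_table = {
    ("10.0.0.1", "10.0.0.2", 100): "forward_port_3",
    ("10.0.0.1", "10.0.0.2"): "forward_port_5",
}

def lookup(packet):
    """Bit-by-bit exact match: any difference in the key is a miss."""
    return flow_table.get(make_flow_id(packet))
```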

US Pat. No. 10,191,867

MULTIPROCESSOR SYSTEM HAVING POSTED TRANSACTION BUS INTERFACE THAT GENERATES POSTED TRANSACTION BUS COMMANDS

Netronome Systems, Inc., ...

1. An integrated circuit comprising a plurality of rectangular islands disposed in a two-dimensional array, wherein the rectangular islands are intercoupled by a posted transaction bus, wherein one of the rectangular islands comprises:
a first processor;
a second processor that executes the same instruction set that the first processor executes; and
an interface means for receiving addresses from the first processor via a first bus and for receiving addresses from the second processor via a second bus and for interfacing to the posted transaction bus, wherein neither the first bus nor the second bus is a posted transaction bus, wherein an address received by the interface means via the first bus is an address in shared address space shared by the first and second processors, wherein the interface means generates a posted transaction bus read command from the address received via the first bus and causes a posted transaction bus read transaction to occur using the generated posted transaction bus read command such that read data is obtained from the posted transaction bus and is then stored into a shared memory in the interface means at a memory location indicated by the first processor.

US Pat. No. 10,318,334

VIRTIO RELAY

Netronome Systems, Inc., ...

1. In a system that includes a Network Interface Device (NID) and a host computer, wherein the NID is coupled to the host computer via a Peripheral Component Interconnect Express (PCIe) bus, wherein the host computer has an operating system and a plurality of Virtual Machines (VMs), wherein the operating system has a kernel, a method comprising:
(a) executing an Open Virtual Switch (OvS) switch subsystem on the host computer, wherein at least part of the OvS switch subsystem executes in user space;
(b) executing a PCIe VF-to-VIRTIO device Relay Program (a PCIe Virtual Function to Virtual Input/Output device Relay Program) on the host computer, wherein the Relay Program executes in user space;
(c) supplying mapping information from the OvS switch subsystem to the Relay Program, wherein the mapping information is PCIe virtual function to Virtual I/O (VIRTIO) device mapping information;
(d) communicating switching rule information from the OvS switch subsystem to the NID via the PCIe bus;
(e) receiving a packet onto the NID from a network, wherein the packet is received in (e) at a time;
(f) based at least in part on packet contents of the packet and the switching rule information deciding on the NID to communicate the packet across the PCIe bus via a selected one of a plurality of Single Root I/O Virtualization (SR-IOV) compliant PCIe virtual functions;
(g) communicating the packet from the NID and across said selected one of the plurality of SR-IOV compliant PCIe virtual functions to the host computer such that the packet is written into user space memory of an instance of a user mode driver of the Relay Program; and
(h) using the mapping information on the Relay Program to cause the packet to be transferred from the user space memory of the instance of the user mode driver of the Relay Program to memory space of one of the VMs, wherein the transfer of the packet in (h) is completed at a time, wherein the packet is communicated in (g) and is transferred in (h) without the operating system of the host computer making any steering decision for the packet based on packet contents at any time between the time the packet is received onto the NID in (e) and the time the transfer of the packet in (h) is completed.
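Steps (c) and (h) of the claim above can be sketched as a mapping table plus a copy. This is a heavily simplified sketch: the dictionaries stand in for the PCIe virtual functions, VIRTIO devices, and VM memory regions, and all names are assumptions.

```python
# Hypothetical sketch of the user-space relay: it holds the VF-to-VIRTIO
# mapping supplied by the switch subsystem, and uses it to move a packet
# that arrived over a given PCIe virtual function into the memory of the
# mapped VM, with no kernel steering decision along the way.

vf_to_virtio = {}                       # mapping info from the switch subsystem
vm_memory = {"vm0": [], "vm1": []}      # stand-ins for VM memory spaces

def supply_mapping(vf, virtio_vm):
    """Step (c): record which VM's VIRTIO device a VF maps to."""
    vf_to_virtio[vf] = virtio_vm

def relay_packet(vf, packet):
    """Step (h): transfer a packet received on a PCIe VF to the mapped VM."""
    vm = vf_to_virtio[vf]
    vm_memory[vm].append(packet)
    return vm
```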

US Pat. No. 10,250,528

PACKET PREDICTION IN A MULTI-PROTOCOL LABEL SWITCHING NETWORK USING OPERATION, ADMINISTRATION, AND MAINTENANCE (OAM) MESSAGING

Netronome Systems, Inc., ...

1. A method comprising:
(a) receiving a plurality of packets on a first switch;
(b) performing a packet prediction learning algorithm using the first plurality of packets and thereby generating packet prediction information, wherein the packet prediction information includes application protocol estimation information and inter-packet interval prediction information, wherein the inter-packet interval prediction information comprises a plurality of sets of inter-packet interval indicator values, wherein each set corresponds to a corresponding one of a plurality of application protocols;
(c) communicating the packet prediction information from the first switch to a second switch, wherein the packet prediction information is not communicated to a Network Operation Center (NOC);
(d) communicating a packet prediction information notification from the first switch to the NOC;
(e) in response to (d) the NOC communicates a packet prediction control signal to the second switch; and
(f) in response to (e) the second switch utilizes the packet prediction control signal to determine if a packet prediction operation algorithm utilizing the packet prediction information is to be performed, wherein performing the packet prediction operation algorithm includes preloading packet flow data related to a not yet received packet in a memory cache located within the second switch.