US Pat. No. 10,067,509

SYSTEM AND METHOD FOR OCCLUDING CONTOUR DETECTION

TUSIMPLE, San Diego, CA ...

1. A system comprising: a data processor; and
an occluding object contour detection processing module, executable by the data processor, the occluding object contour detection processing module being configured to perform an occluding object contour detection operation using a fully convolutional neural network and dense upsampling convolution (DUC), the occluding object contour detection operation being configured to:
receive an input image;
produce a feature map from the input image by semantic segmentation;
apply a DUC operation on the feature map to produce contour information of objects and object instances detected in the input image; and
apply the contour information onto the input image.

US Pat. No. 9,953,236

SYSTEM AND METHOD FOR SEMANTIC SEGMENTATION USING DENSE UPSAMPLING CONVOLUTION (DUC)

TUSIMPLE, San Diego, CA ...

1. A system comprising: a data processor; and
an image processing module, executable by the data processor, the image processing module being configured to perform semantic segmentation using a dense upsampling convolution (DUC) operation, the DUC operation being configured to:
receive an input image;
produce a feature map from the input image;
perform a convolution operation on the feature map and reshape the feature map to produce a label map;
divide the label map into equal subparts, which have the same height and width as the feature map;
stack the subparts of the label map to produce a whole label map; and
apply a convolution operation directly between the feature map and the whole label map without inserting extra values in deconvolutional layers to produce a semantic label map.
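The convolution-and-reshape step in the DUC claim above can be sketched in plain Python. The channel-to-pixel layout used here (channel (di*d + dj)*L + c of cell (i, j) becoming label channel c of output pixel (i*d + di, j*d + dj)) is an illustrative assumption, not the patented mapping:

```python
def duc_reshape(feature_map, d, num_labels):
    """Rearrange an (h, w, d*d*L) feature map into an (h*d, w*d, L) label map,
    so the upsampling is learned in the channel dimension rather than by
    inserting extra values in deconvolutional layers."""
    h, w = len(feature_map), len(feature_map[0])
    out = [[[0.0] * num_labels for _ in range(w * d)] for _ in range(h * d)]
    for i in range(h):
        for j in range(w):
            for di in range(d):
                for dj in range(d):
                    for c in range(num_labels):
                        # each subpart of the label map has the same height and
                        # width as the feature map, then the subparts are stacked
                        out[i * d + di][j * d + dj][c] = \
                            feature_map[i][j][(di * d + dj) * num_labels + c]
    return out
```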

US Pat. No. 9,952,594

SYSTEM AND METHOD FOR TRAFFIC DATA COLLECTION USING UNMANNED AERIAL VEHICLES (UAVS)

TUSIMPLE, San Diego, CA ...

1. A system comprising: an unmanned aerial vehicle (UAV), equipped with a camera, positioned at an elevated position at a monitored location or to track a specific target vehicle, the UAV configured to capture video data of the monitored location or the target vehicle for a pre-determined period of time using the UAV camera;
a data processor; and
a human driver model module, executable by the data processor, the human driver model module being configured to:
receive the captured video data from the UAV;
process the captured video data on a frame basis to identify vehicles or objects of interest for analysis;
group the video data from multiple frames related to a particular vehicle or object of interest into a data group associated with the particular vehicle or object;
create a data group for each of the vehicles or objects of interest; and
provide the data groups corresponding to each of the vehicles or objects of interest as output data used to configure or train a human driver model for prediction or simulation of human driver behavior.

US Pat. No. 10,311,312

SYSTEM AND METHOD FOR VEHICLE OCCLUSION DETECTION

TuSimple, San Diego, CA ...

1. A system comprising: a data processor; and
an autonomous vehicle occlusion detection system, executable by the data processor, the autonomous vehicle occlusion detection system being configured to perform an autonomous vehicle occlusion detection operation for autonomous vehicles, the autonomous vehicle occlusion detection operation being configured to:
receive training image data from a training image data collection system;
obtain ground truth data corresponding to the training image data;
perform a training phase to train a plurality of classifiers, a first classifier being trained for processing static images of the training image data, a second classifier being trained for processing image sequences of the training image data;
receive image data from an image data collection system associated with an autonomous vehicle; and
perform an operational phase including performing feature extraction on the image data, determine a presence of an extracted feature instance in multiple image frames of the image data by tracing the extracted feature instance back to a previous plurality of N frames relative to a current frame, apply the first trained classifier to the extracted feature instance if the extracted feature instance cannot be determined to be present in multiple image frames of the image data, and apply the second trained classifier to the extracted feature instance if the extracted feature instance can be determined to be present in multiple image frames of the image data.
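The classifier-selection logic of the operational phase above can be sketched as follows; the classifier callables and the frame-membership test are hypothetical stand-ins for the trained classifiers and feature tracing:

```python
def classify_feature(instance, history, n, static_clf, sequence_clf):
    """Apply the second (sequence) classifier when the extracted feature
    instance can be traced back through the previous N frames; otherwise
    fall back to the first (static-image) classifier."""
    traceable = len(history) >= n and all(instance in frame for frame in history[-n:])
    return sequence_clf(instance) if traceable else static_clf(instance)
```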

US Pat. No. 10,147,193

SYSTEM AND METHOD FOR SEMANTIC SEGMENTATION USING HYBRID DILATED CONVOLUTION (HDC)

TuSimple, San Diego, CA ...

1. A system comprising: a data processor; and
an image processing module, executable by the data processor, the image processing module being configured to perform semantic segmentation using a hybrid dilated convolution (HDC) operation, the HDC operation being configured to:
receive an input image;
produce a feature map from the input image;
perform a convolution operation on the feature map and produce multiple convolution layers;
group the multiple convolution layers into a plurality of groups;
apply different dilation rates for different convolution layers in a single group of the plurality of groups; and
apply a same dilation rate setting across all groups of the plurality of groups,
wherein the HDC operation is used by an autonomous control subsystem to control a vehicle without a driver.
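The related HDC claim later in this digest requires that the different dilation rates within a group "do not have a common factor relationship." One interpretation of that condition (a simple coprimality check, which is an assumption here rather than the claimed test) can be sketched as:

```python
from functools import reduce
from math import gcd

def hdc_rates_ok(rates):
    """True when a group's dilation rates share no common factor > 1, which
    avoids the 'gridding' artifact of stacked dilated convolutions sampling
    the same sparse grid at every layer."""
    return reduce(gcd, rates) == 1
```

For example, rates like [1, 2, 5] pass the check, while [2, 4, 8] share a factor of 2 and would grid.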

US Pat. No. 10,657,390

SYSTEM AND METHOD FOR LARGE-SCALE LANE MARKING DETECTION USING MULTIMODAL SENSOR DATA

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
a multimodal lane detection module, executable by the data processor, the multimodal lane detection module being configured to perform a multimodal lane detection operation configured to:
receive image data from an image generating device mounted on a vehicle, the received image data corresponding to a particular location;
receive point cloud data from a distance and intensity measuring device mounted on the vehicle;
fuse the image data and the point cloud data to produce a set of lane marking points in three-dimensional (3D) space that correlate to the image data and the point cloud data, the fusion including aligning and orienting the image data with a terrain map corresponding to the particular location and using terrain map elevation data to transform the image data to the 3D space; and
generate a lane marking map from the set of lane marking points.

US Pat. No. 10,303,956

SYSTEM AND METHOD FOR USING TRIPLET LOSS FOR PROPOSAL FREE INSTANCE-WISE SEMANTIC SEGMENTATION FOR LANE DETECTION

TUSIMPLE, San Diego, CA ...

1. A system comprising: a data processor; and
an image processing and lane detection module, executable by the data processor, the image processing and lane detection module being configured to perform an image processing and lane detection operation configured to:
receive image data from an image generating device mounted on an autonomous vehicle;
perform a semantic segmentation operation or other object detection on the received image data to identify and label objects in the image data with object category labels on a per-pixel basis and produce corresponding semantic segmentation prediction data;
perform a triplet loss calculation operation using the semantic segmentation prediction data to identify different instances of objects with similar object category labels found in the image data, the triplet loss calculation operation being configured to select an anchor pixel from the image data, select a second pixel proximally located relative to the anchor pixel, select a third pixel distally located relative to the anchor pixel, determine that the anchor pixel and the second pixel are associated with a same object instance, and determine that the anchor pixel and the third pixel are associated with a different object instance; and
determine an appropriate vehicle control action for the autonomous vehicle based on the different instances of objects identified in the image data.
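The anchor/positive/negative comparison in the triplet loss calculation above can be sketched in plain Python over per-pixel embedding vectors; the squared-distance form and default margin are conventional assumptions, not taken from the patent:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Penalize cases where the anchor pixel's embedding is not closer (by at
    least the margin) to a pixel of the same object instance than to a pixel
    of a different instance."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)
```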

US Pat. No. 10,586,456

SYSTEM AND METHOD FOR DETERMINING CAR TO LANE DISTANCE

TuSimple, San Diego, CA ...

1. An in-vehicle control system, comprising: a camera configured to generate an image;
a processor; and
a computer-readable memory in communication with the processor and having stored thereon computer-executable instructions to cause the processor to:
receive the image from the camera,
generate a wheel segmentation map representative of one or more wheels detected in the image,
generate a lane segmentation map representative of one or more lanes detected in the image,
for at least one of the wheels in the wheel segmentation map, determine a distance between the wheel and at least one nearby lane in the lane segmentation map, and
determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
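The wheel-to-lane distance step above reduces to a point-to-polyline distance once the segmentation maps yield a wheel point and a lane boundary; this plain-Python sketch assumes 2D image coordinates and a polyline lane representation:

```python
import math

def point_to_segment(p, a, b):
    """Euclidean distance from point p to line segment ab (2-tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # project p onto the segment and clamp to its endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def wheel_to_lane(wheel, lane_polyline):
    """Minimum distance from a detected wheel point to a lane polyline."""
    return min(point_to_segment(wheel, a, b)
               for a, b in zip(lane_polyline, lane_polyline[1:]))
```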

US Pat. No. 10,488,521

SENSOR CALIBRATION AND TIME METHOD FOR GROUND TRUTH STATIC SCENE SPARSE FLOW GENERATION

TUSIMPLE, San Diego, CA ...

1. A method of generating a ground truth dataset for motion planning for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, causes the computing device to perform the following steps comprising: performing data alignment for a set of sensors by calibrating the sensors in a common coordinate system and providing time synchronization for data acquired among the sensors;
collecting data in an environment, using the sensors;
calculating light detection and ranging (LiDAR) poses using the collected data;
stitching multiple LiDAR scans to form a local map;
refining positions of close points of static objects in the local map based on a matching algorithm, wherein the close points are aligned based on global navigation satellite system (GNSS)-inertial estimates; and
projecting 3D points in the local map onto corresponding images.

US Pat. No. 10,303,522

SYSTEM AND METHOD FOR DISTRIBUTED GRAPHICS PROCESSING UNIT (GPU) COMPUTATION

TUSIMPLE, San Diego, CA ...

1. A system comprising: a data processor; and
a distributed task management module, executable by the data processor, the distributed task management module being configured to:
receive a user task service request from a user node;
query resource availability from a plurality of slave nodes having a plurality of graphics processing units (GPUs) thereon, the plurality of slave nodes configured with multiple GPUs mounted on distributed processing containers;
generate a list of uniform resource locators (URLs), each URL on the list corresponding to a path to an available distributed processing container on the plurality of slave nodes;
issue the list of URLs to a load balancing node;
receive from the load balancing node an overall unique URL corresponding to the list of URLs;
use the overall unique URL to assign the user task service request to a plurality of available GPUs based on the resource availability and resource requirements of the user task service request, the assigning including using available distributed processing containers on the plurality of slave nodes; and
retain the list of URLs corresponding to the distributed processing containers assigned to the user task service request.
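Building the per-container URL list from queried resource availability can be sketched as below; the URL scheme and the availability data layout are hypothetical, since the claim does not specify them:

```python
def container_urls(slave_nodes):
    """One URL per available distributed processing container across all
    slave nodes; slave_nodes maps hostname -> {container_id: is_available}."""
    return [f"http://{host}/containers/{cid}"
            for host, containers in sorted(slave_nodes.items())
            for cid, free in sorted(containers.items()) if free]
```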

US Pat. No. 10,685,244

SYSTEM AND METHOD FOR ONLINE REAL-TIME MULTI-OBJECT TRACKING

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
an online real-time multi-object tracking system, executable by the data processor, the online real-time multi-object tracking system being configured to perform an online real-time multi-object tracking operation for autonomous vehicles, the online real-time multi-object tracking operation being configured to:
receive image frame data from at least one camera associated with an autonomous vehicle;
generate similarity data corresponding to a similarity between object data in a previous image frame compared with object detection results from a current image frame;
maintain a plurality of different templates for each object in the object data, the plurality of different templates for each object corresponding to different appearance features for each object as extracted from the object detection results;
update the plurality of templates for each object based on the similarity data;
use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame;
cause state transitions in finite state machines for each object according to the data association results; and
provide as an output object tracking output data corresponding to the states of the finite state machines for each object.
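The per-object finite state machines above can be sketched as a transition table; the state names and transition rules here are illustrative assumptions about a track lifecycle, not the patent's machine:

```python
# (current state, data-association result) -> next state
TRANSITIONS = {
    ("tentative", "matched"): "confirmed",
    ("tentative", "unmatched"): "deleted",
    ("confirmed", "matched"): "confirmed",
    ("confirmed", "unmatched"): "lost",
    ("lost", "matched"): "confirmed",
    ("lost", "unmatched"): "deleted",
}

def step(state, association):
    """Advance one tracked object's state given its data-association result."""
    return TRANSITIONS.get((state, association), state)
```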

US Pat. No. 10,565,457

FEATURE MATCHING AND CORRESPONDENCE REFINEMENT AND 3D SUBMAP POSITION REFINEMENT SYSTEM AND METHOD FOR CENTIMETER PRECISION LOCALIZATION USING CAMERA-BASED SUBMAP AND LIDAR-BASED GLOBAL MAP

TUSIMPLE, INC., San Dieg...

1. A method of localization comprising: processing images from a camera and processing data from a light detection and ranging (LiDAR) sensor, the processing comprising:
extracting first features from a 3D submap, wherein the 3D submap is generated using the images from the camera;
extracting second features from a global map, wherein the global map is generated from the data from the LiDAR;
generating matching scores from comparing the first features to the second features, wherein each matching score represents a correspondence between one of the first features and one of the second features, and wherein each matching score includes a distance between the one of the first features and the one of the second features; and
removing one or more correspondences between the first features and the second features for one or more matching scores that are larger than a threshold value.
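The final pruning step above can be sketched directly; the tuple layout (submap feature, map feature, score-as-distance) is an illustrative assumption:

```python
def prune_correspondences(matches, threshold):
    """Keep only (submap_feature, map_feature, score) triples whose matching
    score (a feature distance here) does not exceed the threshold; larger
    scores are removed as unreliable correspondences."""
    return [m for m in matches if m[2] <= threshold]
```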

US Pat. No. 10,558,864

SYSTEM AND METHOD FOR IMAGE LOCALIZATION BASED ON SEMANTIC SEGMENTATION

TuSimple, San Diego, CA ...

1. A system comprising: a data processor; and
an image processing and localization module, executable by the data processor, the image processing and localization module being configured to perform an image processing and localization operation configured to:
receive image data from an image generating device mounted on an autonomous vehicle;
perform semantic segmentation on the received image data to identify and label objects in the image data and produce semantic label image data, wherein the semantic segmentation assigns an object label to each pixel in the image data and the object labels and the velocities of moving or dynamic objects are included in the semantic label image data;
identify extraneous objects in the semantic label image data using the object labels included therein;
identify dynamic objects as extraneous objects in the semantic label image data using the object labels and the velocities of moving or dynamic objects included therein;
remove the extraneous objects from the semantic label image data;
compare the semantic label image data to a baseline semantic label map created from semantic segmentation, wherein the semantic segmentation assigns an object label to each pixel in baseline image data obtained from an image generating device and the object labels are included in the baseline semantic label map; and
determine a vehicle location of the autonomous vehicle based on information in a matching baseline semantic label map.
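Removing extraneous (dynamic) objects from the semantic label image before map matching can be sketched as below; the label names and the "void" fill value are illustrative assumptions:

```python
# hypothetical set of object labels treated as dynamic/extraneous
DYNAMIC_LABELS = {"car", "truck", "pedestrian", "cyclist"}

def strip_extraneous(label_image):
    """Replace per-pixel labels of dynamic objects so only static structure
    (road, buildings, signs) remains for comparison with the baseline
    semantic label map."""
    return [["void" if label in DYNAMIC_LABELS else label for label in row]
            for row in label_image]
```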

US Pat. No. 10,308,242

SYSTEM AND METHOD FOR USING HUMAN DRIVING PATTERNS TO DETECT AND CORRECT ABNORMAL DRIVING BEHAVIORS OF AUTONOMOUS VEHICLES

TuSimple, San Diego, CA ...

1. A system comprising: a data processor; and
a vehicle control module, executable by the data processor, the vehicle control module being configured to perform a vehicle control command validation operation for autonomous vehicles, the vehicle control command validation operation being configured to:
generate data corresponding to a normal driving behavior safe zone;
receive a proposed vehicle control command;
compare the proposed vehicle control command with the normal driving behavior safe zone; and
issue a warning alert and modify the proposed vehicle control command to produce a modified and validated vehicle control command if the proposed vehicle control command is outside of the normal driving behavior safe zone.
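The safe-zone comparison and command modification above can be sketched for a single scalar command; representing the safe zone as an interval and modifying by clamping are illustrative assumptions:

```python
def validate_command(cmd, safe_zone):
    """Compare a proposed control command with the normal-driving safe zone
    (lo, hi). Return (possibly modified command, warning_flag)."""
    lo, hi = safe_zone
    if lo <= cmd <= hi:
        return cmd, False                 # within the safe zone, no warning
    return max(lo, min(hi, cmd)), True    # clamp into the zone and warn
```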

US Pat. No. 10,671,083

NEURAL NETWORK ARCHITECTURE SYSTEM FOR DEEP ODOMETRY ASSISTED BY STATIC SCENE OPTICAL FLOW

TUSIMPLE, INC., San Dieg...

1. A system for visual odometry, the system comprising: an internet server, comprising:
an I/O port, configured to transmit and receive electrical signals to and from a client device;
a memory;
one or more processing units; and
one or more programs stored in the memory and configured for execution by the one or more processing units, the one or more programs including instructions for:
extracting representative features from a pair of input images in a first convolution neural network (CNN) in a visual odometry model;
merging, in a first merge module, outputs from the first CNN;
decreasing a feature map size in a second CNN;
generating a first flow output for each layer in a first deconvolution neural network (DNN);
merging, in a second merge module, outputs from the second CNN and the first DNN;
generating, by the second merge module from the first flow of the first DNN and outputs from the second CNN, a motion estimate between the pair of input images;
generating a second flow output for each layer in a second DNN; and
reducing accumulated errors in a recurrent neural network (RNN).

US Pat. No. 10,565,728

SMOOTHNESS CONSTRAINT FOR CAMERA POSE ESTIMATION

TUSIMPLE, INC., San Dieg...

1. A method for estimating a camera pose of a camera, implemented in a vehicle, the method comprising: determining a first bounding box based on a previous frame;
determining a second bounding box based on a current frame, wherein the current frame is temporally subsequent to the previous frame;
estimating the camera pose by minimizing a weighted sum of a camera pose function and a constraint function, wherein the camera pose function tracks a position and an orientation of the camera in time, and wherein the constraint function is based on coordinates of the first bounding box and coordinates of the second bounding box; and
using at least the camera pose for navigating the vehicle.
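The weighted-sum objective above can be sketched over a discrete set of candidate poses; the grid-search minimizer and the cost callables are stand-ins (assumptions) for whatever optimizer and cost functions the method actually uses:

```python
def estimate_pose(candidates, pose_cost, constraint_cost, weight):
    """Return the candidate pose minimizing
    pose_cost(p) + weight * constraint_cost(p), where constraint_cost
    compares the bounding boxes of the previous and current frames."""
    return min(candidates, key=lambda p: pose_cost(p) + weight * constraint_cost(p))
```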

US Pat. No. 10,471,963

SYSTEM AND METHOD FOR TRANSITIONING BETWEEN AN AUTONOMOUS AND MANUAL DRIVING MODE BASED ON DETECTION OF A DRIVER'S CAPACITY TO CONTROL A VEHICLE

TUSIMPLE, San Diego, CA ...

1. A system comprising: a data processor; and
a driving control transition module, executable by the data processor, the driving control transition module being configured to perform a driving control transition operation for an autonomous vehicle, the driving control transition operation being configured to:
train a neural network of the driving control transition module using training data received from a remote source and use the trained neural network to modify the operation of the driving control transition module based on a vehicle context in which the autonomous vehicle is operating, the context including a current vehicle location;
receive sensor data related to a vehicle driver's capacity to take manual control of an autonomous vehicle and receive sensor data related to the vehicle context, the sensor data including image data from a camera in the vehicle, the driving control transition operation being configured to use the image data to perform facial feature analysis of a face of the driver;
classify the vehicle driver into at least one of a plurality of driver state classifications based on the facial feature analysis of the face of the driver;
determine, based on the sensor data and the at least one of the plurality of driver state classifications, if the driver has the capacity to take manual control of the autonomous vehicle, wherein the driving control transition operation is configured to prompt the driver to perform an action or provide an input; and
output a vehicle control transition signal to a vehicle subsystem to cause the vehicle subsystem to take action based on the driver's capacity to take manual control of the autonomous vehicle, the vehicle control transition signal corresponding to the at least one of the plurality of driver state classifications, the vehicle control transition signal causing the vehicle subsystem to direct the vehicle to safely pull over to the side of a roadway, if the driver is determined to not have the capacity to take manual control of the vehicle.

US Pat. No. 10,474,790

LARGE SCALE DISTRIBUTED SIMULATION FOR REALISTIC MULTIPLE-AGENT INTERACTIVE ENVIRONMENTS

TUSIMPLE, San Diego, CA ...

1. A system comprising: a data processor;
a plurality of distributed computing devices in data communication with the data processor via a data network; and
a distributed multiple-agent simulation module, executable by the data processor, the distributed multiple-agent simulation module being configured to perform a distributed multiple-agent simulation operation for autonomous vehicle simulation, the distributed multiple-agent simulation operation being configured to:
generate a vicinal scenario for each simulated vehicle in an iteration of a simulation, the vicinal scenarios corresponding to different locations, traffic patterns, or environmental conditions being simulated;
assign, by use of the data network, a processing task to each of the plurality of distributed computing devices to cause each of the distributed computing devices to use their own computing resources to generate vehicle trajectories and data corresponding to simulated vehicle or driver behaviors for each of a plurality of simulated vehicles of the simulation based on the vicinal scenario and the assigned processing tasks;
receive via the data network from each of the distributed computing devices processed data including vehicle trajectories and data corresponding to simulated vehicle or driver behaviors for each of the plurality of simulated vehicles;
manage the processing tasks assigned and processed data received from each of the distributed computing devices, the managing including recording a particular distributed computing device as non-responsive or inactive when data from the particular distributed computing device is not received within a pre-defined time window and re-assigning the processing task originally assigned to the non-responsive distributed computing device to another distributed computing device; and
update a state and trajectory of each of the plurality of simulated vehicles based on the processed data received from the plurality of distributed computing devices.
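The non-responsive worker handling above can be sketched as follows: workers silent past the time window are dropped and their tasks redistributed round-robin to the remaining workers. The data layout is illustrative, and the sketch assumes at least one worker stays active:

```python
def reassign_stale_tasks(assignments, last_seen, now, window):
    """assignments: worker -> list of tasks; last_seen: worker -> timestamp.
    Mark workers silent longer than `window` as inactive and re-assign
    their tasks to active workers."""
    active = [w for w in assignments
              if now - last_seen.get(w, float("-inf")) <= window]
    orphaned = [t for w in list(assignments) if w not in active
                for t in assignments.pop(w)]
    for i, task in enumerate(orphaned):
        assignments[active[i % len(active)]].append(task)
    return assignments
```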

US Pat. No. 10,223,807

FEATURE EXTRACTION FROM 3D SUBMAP AND GLOBAL MAP SYSTEM AND METHOD FOR CENTIMETER PRECISION LOCALIZATION USING CAMERA-BASED SUBMAP AND LIDAR-BASED GLOBAL MAP

TUSIMPLE, San Diego, CA ...

1. A method of localization for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform by one or more autonomous vehicle driving modules execution of processing of images from a camera and data from a LiDAR using the following steps comprising: constructing a 3D submap based on the images from the camera;
constructing a global map based on the data from the LiDAR, wherein the camera and the LiDAR are with a same vehicle;
aligning the 3D submap with the global map;
extracting features from the 3D submap and the global map;
classifying the extracted features in classes; and
establishing correspondence of features in a same class between the 3D submap and the global map.

US Pat. No. 10,679,074

SYSTEM AND METHOD FOR SEMANTIC SEGMENTATION USING HYBRID DILATED CONVOLUTION (HDC)

TUSIMPLE, INC., San Dieg...

1. A method for vehicular control, comprising: producing a feature map from an input image;
producing multiple convolution layers by performing a convolution operation on the feature map;
grouping the multiple convolution layers into a plurality of groups;
applying different dilation rates for different convolution layers in a single group of the plurality of groups, wherein the different dilation rates do not have a common factor relationship; and
applying a same dilation rate setting across each of the plurality of groups,
wherein the method is used by a control subsystem to control a vehicle.

US Pat. No. 10,481,044

PERCEPTION SIMULATION FOR IMPROVED AUTONOMOUS VEHICLE CONTROL

TuSimple, San Diego, CA ...

1. A system comprising: a data processor; and
a perception simulation module, executable by the data processor, the perception simulation module being configured to perform a perception simulation operation for autonomous vehicles, the perception simulation operation being configured to:
receive perception data from a plurality of sensors of an autonomous vehicle;
configure the perception simulation operation based on a comparison of the perception data against ground truth data;
generate simulated perception data by simulating errors related to the physical constraints of one or more of the plurality of sensors, and by simulating noise in data provided by a sensor processing module corresponding to one or more of the plurality of sensors, wherein simulating errors related to the physical constraints of one or more of the plurality of sensors includes applying an occlusion beyond which the one or more of the plurality of sensors cannot detect objects; and
provide the simulated perception data to a motion planning system for the autonomous vehicle.
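One simulated physical constraint from the claim above, an occlusion range beyond which a sensor reports nothing, can be sketched as below, with Gaussian jitter on the surviving detections standing in for the simulated processing-module noise; all parameters are illustrative:

```python
import random

def simulate_sensor(detections, max_range, noise_sigma, rng=None):
    """detections: list of (object_id, distance). Drop objects beyond the
    occlusion range, then jitter the remaining distances with Gaussian noise."""
    rng = rng or random.Random(0)
    return [(oid, d + rng.gauss(0.0, noise_sigma))
            for oid, d in detections if d <= max_range]
```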

US Pat. No. 10,268,205

TRAINING AND TESTING OF A NEURAL NETWORK METHOD FOR DEEP ODOMETRY ASSISTED BY STATIC SCENE OPTICAL FLOW

TUSIMPLE, San Diego, CA ...

1. A method of visual odometry for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, causes the computing device to perform the following steps comprising: in response to images in pairs, generating a prediction of static scene optical flow for each pair of the images in a visual odometry model;
generating a set of motion parameters for each pair of the images in the visual odometry model;
training the visual odometry model by using the prediction of static scene optical flow and the motion parameters; and
predicting motion between a pair of consecutive image frames by the trained visual odometry model;
extracting representative features from a first image of a pair in a first convolution neural network (CNN);
extracting representative features from a second image of the pair in the first CNN;
merging, in a first merge module, outputs from the first CNN;
decreasing feature map size in a second CNN;
generating a first flow output for each layer in a first deconvolution neural network (DNN);
merging, in a second merge module, outputs from the second CNN and the first DNN, and generating a first motion estimate; and
generating a second flow output for each layer in a second DNN, the second flow output serving as a first optical flow prediction.

US Pat. No. 10,733,465

SYSTEM AND METHOD FOR VEHICLE TAILLIGHT STATE RECOGNITION

TUSIMPLE, INC., San Dieg...

1. A method of recognizing a taillight signal of a vehicle, the method comprising: receiving a plurality of image frames from one or more image-generating devices of an autonomous vehicle;
using a single-frame taillight illumination status annotation dataset and a single-frame taillight mask dataset to recognize a taillight illumination status of a proximate vehicle identified in an image frame of the plurality of image frames, the single-frame taillight illumination status annotation dataset including one or more taillight illumination status conditions of a right or left vehicle taillight signal, the single-frame taillight mask dataset including annotations to isolate a taillight region of a vehicle; and
using a multi-frame taillight illumination status dataset to recognize a taillight illumination status of the proximate vehicle in multiple image frames of the plurality of image frames, the multiple image frames being in temporal succession.

US Pat. No. 10,685,239

SYSTEM AND METHOD FOR LATERAL VEHICLE DETECTION

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
an autonomous lateral vehicle detection system, executable by the data processor, the autonomous lateral vehicle detection system being configured to perform an autonomous lateral vehicle detection operation for autonomous vehicles, the autonomous lateral vehicle detection operation being configured to:
receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle;
warp the lateral image data based on a line parallel to a side of the autonomous vehicle defined with configuration parameters corresponding to an installation orientation of the at least one laterally-facing camera on the autonomous vehicle;
perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and
apply bounding boxes around the extracted objects.

US Pat. No. 10,671,873

SYSTEM AND METHOD FOR VEHICLE WHEEL DETECTION

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
an autonomous vehicle wheel detection system, executable by the data processor, the autonomous vehicle wheel detection system being configured to perform an autonomous vehicle wheel detection operation for autonomous vehicles, the autonomous vehicle wheel detection operation being configured to:
receive training image data from a training image data collection system;
obtain ground truth data corresponding to the training image data;
perform a training phase to train one or more classifiers for processing images of the training image data to detect vehicle wheel objects of other vehicles in the images of the training image data;
receive operational image data from an image data collection system associated with an autonomous vehicle; and
perform an operational phase including applying the trained one or more classifiers to extract vehicle wheel objects of other vehicles from the operational image data and produce vehicle wheel object data related to wheels of other vehicles, the vehicle wheel object data including vehicle wheel contour data corresponding to the contours surrounding the wheels of other vehicles.

US Pat. No. 10,656,644

SYSTEM AND METHOD FOR USING HUMAN DRIVING PATTERNS TO MANAGE SPEED CONTROL FOR AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

7. A method comprising: generating data corresponding to desired human driving behaviors;
training a human driving model module using a reinforcement learning process and the desired human driving behaviors;
receiving a proposed vehicle speed control command prior to commanding a vehicle control subsystem to perform a maneuver corresponding to the proposed vehicle speed control command;
determining if the proposed vehicle speed control command conforms to the desired human driving behaviors by use of the human driving model module;
validating or modifying the proposed vehicle speed control command based on the determination; and
outputting the validated or modified vehicle speed control command to the vehicle control subsystem causing the autonomous vehicle to follow a trajectory corresponding to the validated or modified vehicle speed control command.

US Pat. No. 10,552,691

SYSTEM AND METHOD FOR VEHICLE POSITION AND VELOCITY ESTIMATION BASED ON CAMERA AND LIDAR DATA

TuSimple, San Diego, CA ...

1. A system comprising: a data processor; and
a vehicle position and velocity estimation module, executable by the data processor, the vehicle position and velocity estimation module being configured to perform a proximate object position and velocity estimation operation for an autonomous vehicle, the proximate object position and velocity estimation operation being configured to:
receive input object data from a subsystem of the autonomous vehicle, the input object data including image data from an image generating device and distance data from a distance measuring device, the distance measuring device being one or more light imaging, detection, and ranging (LIDAR) sensors;
determine a two-dimensional (2D) position of a proximate object near the autonomous vehicle using the image data received from the image generating device and semantic segmentation processing of the image data;
track a three-dimensional (3D) position of the proximate object using the distance data received from the distance measuring device over a plurality of cycles and generate tracking data;
correlate the proximate object identified from the image data with the proximate object identified and tracked from the distance data, the correlation being configured to match the 2D position of the proximate object detected in the image data with the 3D position of the same proximate object detected in the distance data;
determine a 3D position of the proximate object using the 2D position, the distance data received from the distance measuring device, and the tracking data;
determine a velocity of the proximate object using the 3D position and the tracking data; and
output the 3D position and velocity of the proximate object relative to the autonomous vehicle.
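Two of the claimed steps are compact enough to sketch: correlating a 2D camera detection with a tracked 3D LiDAR point (by projecting the 3D points and matching the closest in the image), and deriving velocity from the 3D track over cycles. The pinhole projection with unit focal length is an assumption for illustration:

```python
def project(p):
    """Toy pinhole projection of a 3D point (x, y, z) to image coordinates."""
    x, y, z = p
    return (x / z, y / z)

def correlate(det_2d, lidar_points):
    """Match the 2D camera detection to the 3D LiDAR point whose projection
    is closest in the image (the claim's correlation step)."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(lidar_points, key=lambda p: d2(project(p), det_2d))

def estimate_velocity(track, dt):
    """Velocity from the last two tracked 3D positions, one cycle of dt apart."""
    (x0, y0, z0), (x1, y1, z1) = track[-2], track[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt, (z1 - z0) / dt)
```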

US Pat. No. 10,493,988

SYSTEM AND METHOD FOR ADAPTIVE CRUISE CONTROL FOR DEFENSIVE DRIVING

TuSimple, San Diego, CA ...

1. A system comprising:a data processor; and
an adaptive cruise control module, executable by the data processor, being configured to:
receive input object data from a subsystem of an autonomous vehicle, the input object data including distance data and velocity data relative to a lead vehicle;
generate a weighted distance differential corresponding to a weighted difference between an actual distance between the autonomous vehicle and the lead vehicle and a desired distance between the autonomous vehicle and the lead vehicle, the desired distance and a distance weight coefficient of the weighted distance differential being separately user configurable;
generate a weighted velocity differential corresponding to a weighted difference between a velocity of the autonomous vehicle and a velocity of the lead vehicle, a velocity weight coefficient of the weighted velocity differential being separately user configurable;
combine the weighted distance differential and the weighted velocity differential with the velocity of the lead vehicle to produce a velocity command for the autonomous vehicle; and
control the autonomous vehicle to conform to the velocity command.
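The claimed combination reads as a linear control law: the lead vehicle's velocity plus a weighted distance differential plus a weighted velocity differential, with both weight coefficients and the desired distance separately user-configurable. The coefficient values and the sign convention on the velocity differential below are illustrative assumptions, not specified by the claim:

```python
def acc_velocity_command(d_actual, d_desired, v_ego, v_lead,
                         k_d=0.2, k_v=0.6):
    """Adaptive cruise velocity command (sketch).
    k_d and k_v are the separately user-configurable weight coefficients."""
    weighted_distance_diff = k_d * (d_actual - d_desired)   # too far -> speed up
    weighted_velocity_diff = k_v * (v_lead - v_ego)         # close the speed gap
    # Combine both differentials with the lead vehicle's velocity (claim step).
    return v_lead + weighted_distance_diff + weighted_velocity_diff
```

With the gap 10 m longer than desired and the lead 2 m/s faster, the command nudges the ego vehicle above the lead's speed to close the gap.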

US Pat. No. 10,482,769

POST-PROCESSING MODULE SYSTEM AND METHOD FOR MOTIONED-BASED LANE DETECTION WITH MULTIPLE SENSORS

TUSIMPLE, San Diego, CA ...

1. A method of lane detection for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the following steps comprising:generating, based on a lane detection algorithm, a hit-map image in response to a current view, the hit-map image including a classification of pixels that hit a lane marking;
receiving a lane marking, associated with the current view;
fitting, in a post-processing module, the lane marking in an arc by using a set of parameters;
generating a lane template, using the set of parameters, the lane template including features of the lane marking associated with the current view and features of the arc;
feeding the lane template associated with the current view for detection of a next view; and
generating a fitted lane marking based on the hit-map image and a lane template associated with an immediately previous view, wherein, based on priors or constraints, the lane template is associated with the immediately previous view to obtain a local optimal.

US Pat. No. 10,481,267

UNDISTORTED RAW LIDAR SCANS AND STATIC POINT EXTRACTIONS METHOD FOR GROUND TRUTH STATIC SCENE SPARSE FLOW GENERATION

TUSIMPLE, San Diego, CA ...

1. A method of generating a ground truth dataset for motion planning for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, causes the computing device to perform the following steps comprising:transforming LiDAR scans into undistorted LiDAR scans based on LiDAR's poses for the motion planning of a vehicle;
identifying, for a pair of undistorted LiDAR scans, points belonging to a static object in an environment;
aligning the close points based on pose estimates; and
transforming a reference scan that is close in time to a target undistorted LiDAR scan so as to align the reference scan with the target undistorted LiDAR scan.

US Pat. No. 10,678,234

SYSTEM AND METHOD FOR AUTONOMOUS VEHICLE CONTROL TO MINIMIZE ENERGY COST

TUSIMPLE, INC., San Dieg...

1. A system comprising:a data processor; and
an energy-optimized motion planning module, executable by the data processor, the energy-optimized motion planning module being configured to perform an energy-optimized motion planning operation for autonomous vehicles, the energy-optimized motion planning operation being configured to:
receive sensor data from a plurality of sensors on an autonomous vehicle;
generate a plurality of potential routings and related vehicle motion control operations for the autonomous vehicle to cause the autonomous vehicle to transit from a current position to a desired destination, the related vehicle motion control operations including vehicle motion control operations to adjust the autonomous vehicle's speed, acceleration, steering direction, braking level, and to avoid obstacles detected in the proximity of the autonomous vehicle;
generate an energy consumption rate for each of the potential routings and related vehicle motion control operations using a trainable vehicle energy consumption model and the sensor data, the energy consumption rate for each of the potential routings and related vehicle motion control operations being generated from the sensor data and machine learning datasets configured from test data collections produced from prior real-world training scenarios;
score each of the plurality of potential routings and related vehicle motion control operations based on the corresponding energy consumption rate;
select one of the plurality of potential routings and related vehicle motion control operations having a score within an acceptable range;
modify the related vehicle motion control operations to lower the autonomous vehicle's energy consumption over a corresponding potential routing if the score of the corresponding potential routing is not within the acceptable range; and
output a vehicle motion control output representing the selected one of the plurality of potential routings and related vehicle motion control operations.
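The score/select/modify loop of this claim can be sketched with a toy energy model standing in for the trainable vehicle energy consumption model; the quadratic-in-speed cost and the 10% speed reduction in the modify step are illustrative assumptions only:

```python
def energy_rate(route):
    """Stand-in for the trainable energy consumption model: energy grows with
    speed squared plus a penalty per braking event."""
    return sum(v * v for v in route["speeds"]) + 5.0 * route["braking_events"]

def select_route(routes, acceptable=100.0):
    """Score each potential routing by energy rate, select the best, and if its
    score is outside the acceptable range, modify its motion control operations
    (here: reduce speeds 10%) to lower consumption."""
    best = min(routes, key=energy_rate)
    if energy_rate(best) <= acceptable:
        return best
    return dict(best, speeds=[0.9 * v for v in best["speeds"]])
```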

US Pat. No. 10,552,979

OUTPUT OF A NEURAL NETWORK METHOD FOR DEEP ODOMETRY ASSISTED BY STATIC SCENE OPTICAL FLOW

TUSIMPLE, San Diego, CA ...

1. A method of visual odometry for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, causes the computing device to perform the following steps comprising:performing data alignment among sensors including a light detection and ranging (LiDAR) sensor, cameras, and an IMU-GPS module;
collecting image data and generating point clouds;
processing a pair of consecutive images in the image data to recognize pixels corresponding to a same point in the point clouds;
establishing an optical flow for visual odometry;
receiving a first image of a first pair of image frames, and extracting representative features from the first image of the first pair in a first convolution neural network (CNN);
receiving a second image of the first pair, and extracting representative features from the second image of the first pair in the first CNN;
merging, in a first merge module, outputs from the first CNN;
decreasing feature map size in a second CNN;
generating a first flow output for each layer in a first deconvolution neural network (DNN); and
merging, in a second merge module, outputs from the second CNN and the first DNN to generate a first motion estimate.

US Pat. No. 10,410,055

SYSTEM AND METHOD FOR AERIAL VIDEO TRAFFIC ANALYSIS

TuSimple, San Diego, CA ...

1. A system comprising:an unmanned aerial vehicle (UAV), equipped with a camera, deployed at an elevated position at a monitored location, the UAV configured to capture a video image sequence of the monitored location for a pre-determined period of time using the UAV camera;
a data processor; and
an image processing module, executable by the data processor, the image processing module being configured to:
receive the captured video image sequence from the UAV;
clip the video image sequence by removing unnecessary images from the video image sequence;
stabilize the video image sequence by choosing a reference image and adjusting other images to the reference image;
extract a background image of the video image sequence for vehicle segmentation;
perform vehicle segmentation to identify vehicles in the video image sequence on a pixel by pixel basis using a trained neural network to classify individual pixels and produce a collection of pixel classifications corresponding to the video image sequence;
generate a vehicle segmentation mask to determine a general shape of each vehicle identified in the video image sequence, the vehicle segmentation mask being generated using the collection of pixel classifications produced by the trained neural network;
determine a centroid, heading, and rectangular shape of each identified vehicle based on the vehicle segmentation mask and the general shape of each vehicle;
perform vehicle tracking to detect a same identified vehicle in multiple image frames of the video image sequence; and
produce output and visualization of the video image sequence including a combination of the background image and the images of each identified vehicle.
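The centroid-and-heading step in this claim has a standard geometric reading: the centroid is the mean of a vehicle's mask pixels, and the heading is the principal axis of their spread (via the 2x2 covariance). A minimal sketch of that one step, assuming the mask is given as a list of (x, y) pixels for a single vehicle:

```python
import math

def centroid_and_heading(mask_pixels):
    """Centroid = mean of the vehicle's mask pixels; heading = orientation of
    the principal axis of the pixel covariance, in radians."""
    n = len(mask_pixels)
    cx = sum(x for x, _ in mask_pixels) / n
    cy = sum(y for _, y in mask_pixels) / n
    sxx = sum((x - cx) ** 2 for x, _ in mask_pixels) / n
    syy = sum((y - cy) ** 2 for _, y in mask_pixels) / n
    sxy = sum((x - cx) * (y - cy) for x, y in mask_pixels) / n
    heading = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), heading
```

A horizontal streak of pixels yields heading 0; a 45-degree streak yields pi/4. The rectangular shape in the claim would follow by measuring mask extent along and across this axis.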

US Pat. No. 10,387,736

SYSTEM AND METHOD FOR DETECTING TAILLIGHT SIGNALS OF A VEHICLE

TUSIMPLE, San Diego, CA ...

1. A method of detecting a taillight signal of a vehicle, the method comprising:receiving a plurality of images from one or more image-generating devices;
generating a frame for each of the plurality of images;
generating a ground truth, wherein the ground truth includes a labeled image with one of the following taillight status conditions for a right or left taillight signal of the vehicle: (1) an invisible right or left taillight signal, (2) a visible but not illuminated right or left taillight signal, and (3) a visible and illuminated right or left taillight signal;
creating a first dataset including the labeled images corresponding to the plurality of images, the labeled images including one or more of the taillight status conditions of the right or left taillight signal; and
creating a second dataset including at least one pair of portions of the plurality of images, wherein the at least one pair of portions of the plurality of the images are in temporal succession.

US Pat. No. 10,373,003

DEEP MODULE AND FITTING MODULE SYSTEM AND METHOD FOR MOTION-BASED LANE DETECTION WITH MULTIPLE SENSORS

TUSIMPLE, San Diego, CA ...

1. A method of lane detection for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the following steps comprising:generating a ground truth;
off-line training a lane detection algorithm by using the ground truth, the lane detection algorithm using parameters that express a lane marking in an arc;
on-line generating a predicted lane marking for a current view;
comparing the predicted lane marking against the ground truth; and
off-line refining the lane detection algorithm by using the lane template associated with the current view to generate an improved lane template used for improved lane detection of a next view;
wherein, a detected lane map is generated relative to a moving vehicle using the improved lane template and the predicted lane marking.

US Pat. No. 10,782,694

PREDICTION-BASED SYSTEM AND METHOD FOR TRAJECTORY PLANNING OF AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

1. A system comprising:a data processor; and
a prediction-based trajectory planning module, executable by the data processor, the prediction-based trajectory planning module being configured to perform a prediction-based trajectory planning operation for autonomous vehicles, the prediction-based trajectory planning operation being configured to:
receive training data and ground truth data from a training data collection system, the training data including perception data and context data corresponding to human driving behaviors;
perform a training phase to train a trajectory prediction module using the training data;
receive perception data associated with a host vehicle; and
perform an operational phase configured to extract host vehicle feature data and proximate vehicle context data from the perception data, generate a proposed trajectory for the host vehicle, use the trained trajectory prediction module to generate predicted trajectories for each of one or more proximate vehicles near the host vehicle, the predicted trajectories for each of one or more proximate vehicles being in reaction to the proposed host vehicle trajectory, determine if the proposed trajectory for the host vehicle will conflict with any of the predicted trajectories of the proximate vehicles, and modify the proposed trajectory for the host vehicle based on the determined conflicts until the conflicts are eliminated.
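The operational phase here is a closed loop: propose a host trajectory, predict proximate vehicles' reactions with the trained module, check for conflicts, and modify until conflicts are eliminated. The skeleton below abstracts each stage as a callable; the iteration cap is an added safeguard, not part of the claim:

```python
def plan_trajectory(propose, predict, conflicts, modify, max_iters=10):
    """Iteratively refine the host trajectory until no predicted trajectory of
    a proximate vehicle conflicts with it (sketch of the claimed loop)."""
    traj = propose()
    for _ in range(max_iters):
        predicted = predict(traj)           # reactions to the proposed trajectory
        if not conflicts(traj, predicted):
            return traj                     # conflicts eliminated
        traj = modify(traj, predicted)
    return traj
```

In a toy 1-D lane model, if a proximate vehicle is predicted to hold lane 1, a host proposal of lane 1 is modified to lane 2 and then accepted.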

US Pat. No. 10,223,806

SYSTEM AND METHOD FOR CENTIMETER PRECISION LOCALIZATION USING CAMERA-BASED SUBMAP AND LIDAR-BASED GLOBAL MAP

TUSIMPLE, San Diego, CA ...

1. A method of localization for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform by one or more autonomous vehicle driving modules execution of processing of images from a camera and data from a LiDAR using the following steps comprising:constructing a 3D submap based on the images from the camera;
constructing a global map based on the data from the LiDAR, wherein the camera and the LiDAR share a common field-of-view;
extracting features from the 3D submap and the global map;
matching features extracted from the 3D submap against features extracted from the global map;
refining feature correspondence; and
refining location of the 3D submap.

US Pat. No. 10,710,592

SYSTEM AND METHOD FOR PATH PLANNING OF AUTONOMOUS VEHICLES BASED ON GRADIENT

TUSIMPLE, INC., San Dieg...

1. A system comprising:a data processor;
a path planning module, executable by the data processor, the path planning module being configured to perform a path planning operation for autonomous vehicles based on gradient, the path planning operation being configured to:
generate and score a first suggested trajectory for an autonomous vehicle, the first suggested trajectory being defined by a first plurality of waypoints;
generate a trajectory gradient based on the first suggested trajectory, the trajectory gradient defining scale and direction data corresponding to each of the first plurality of waypoints of the first suggested trajectory, the scale and direction data defining a second plurality of waypoints;
generate and score a second suggested trajectory defined by the second plurality of waypoints for the autonomous vehicle, the second suggested trajectory being based on the first suggested trajectory, the trajectory gradient, the scale and direction data, and a human driving model; and
output the second suggested trajectory if the score corresponding to the second suggested trajectory is within a score differential threshold relative to the score corresponding to the first suggested trajectory; and
an autonomous vehicle control subsystem in data communication with the data processor, the path planning module, and vehicle control devices, the autonomous vehicle control subsystem configured to receive a trajectory output from the path planning module and to manipulate the vehicle control devices to cause the autonomous vehicle to follow a path conforming to the trajectory received from the path planning module.
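The trajectory-gradient step of this claim (scale and direction per waypoint, used to derive a second set of waypoints) can be sketched with a numeric gradient of the scoring function over 1-D waypoints; the finite-difference scheme, step size, and score-differential acceptance are illustrative choices:

```python
def gradient_step(waypoints, score_fn, step=0.1, eps=1e-3):
    """Numeric trajectory gradient: per-waypoint scale and direction of score
    improvement, applied to produce the second plurality of waypoints."""
    grad = []
    for i in range(len(waypoints)):
        bumped = list(waypoints)
        bumped[i] += eps
        grad.append((score_fn(bumped) - score_fn(waypoints)) / eps)
    return [w + step * g for w, g in zip(waypoints, grad)]

def plan(waypoints, score_fn, score_threshold=0.0):
    """Output the second trajectory only if its score is within the threshold
    of the first trajectory's score; otherwise keep the first."""
    second = gradient_step(waypoints, score_fn)
    if score_fn(second) >= score_fn(waypoints) - score_threshold:
        return second
    return waypoints
```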

US Pat. No. 10,649,458

DATA-DRIVEN PREDICTION-BASED SYSTEM AND METHOD FOR TRAJECTORY PLANNING OF AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

1. A system comprising:an electronic data processor;
one or more vehicle sensor subsystems to generate input object data related to objects detected in proximity to an autonomous vehicle, the input object data including speed and trajectory data corresponding to proximate agents, the proximate agents being vehicles in the vicinity of the autonomous vehicle and detectable by the vehicle sensor subsystems;
a vehicle control subsystem to control the autonomous vehicle; and
a trajectory planning module, executable by the data processor, the trajectory planning module being configured to perform a data-driven prediction-based trajectory planning operation for autonomous vehicles, the trajectory planning operation being configured to:
generate a first suggested trajectory for the autonomous vehicle, the first suggested trajectory configured to comply with pre-defined goals for the autonomous vehicle;
generate a first distribution of predicted resulting trajectories for each of the proximate agents using a prediction module, the first distributions of predicted resulting trajectories being based on the input object data corresponding to the proximate agents and the first suggested trajectory, the predicted resulting trajectories of each of the first distributions having an associated confidence level;
score the first suggested trajectory based on the first distributions of predicted resulting trajectories of the proximate agents, the score corresponding to a level to which the first suggested trajectory complies with the pre-defined goals;
generate a second suggested trajectory for the autonomous vehicle to comply with the pre-defined goals and generate a corresponding second distribution of predicted resulting trajectories for each of the proximate agents using the prediction module, if the score of the first suggested trajectory is below a minimum acceptable threshold, the second distributions of predicted resulting trajectories being based on the input object data corresponding to the proximate agents and the second suggested trajectory, the predicted resulting trajectories of each of the second distributions having an associated confidence level;
output a suggested trajectory for the autonomous vehicle wherein the score corresponding to the suggested trajectory is at or above the minimum acceptable threshold; and
cause the vehicle control subsystem to manipulate the autonomous vehicle to follow the output suggested trajectory.

US Pat. No. 10,528,823

SYSTEM AND METHOD FOR LARGE-SCALE LANE MARKING DETECTION USING MULTIMODAL SENSOR DATA

TUSIMPLE, San Diego, CA ...

1. A system comprising:a data processor; and
a multimodal lane detection module, executable by the data processor, the multimodal lane detection module being configured to perform a multimodal lane detection operation configured to:
receive image data from an image generating device mounted on a vehicle;
fit piecewise lines for each lane marking object detected in the received image data;
receive point cloud data from a distance and intensity measuring device mounted on the vehicle;
fuse the image data and the point cloud data to produce a set of lane marking points in three-dimensional (3D) space that correlate to the image data and the point cloud data, wherein the multimodal lane detection operation is configured to project 3D point cloud data on to two-dimensional (2D) image data, and to add a 3D point cloud point to the set of lane marking points if a distance between the position of the projected 3D point cloud point in 2D space and the position of at least one of the piecewise lines is within a pre-determined threshold, the pre-determined threshold being linearly decreased based on perspective depth; and
generate a lane marking map from the set of lane marking points.
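The fusion test in this claim is concrete enough to sketch: project each 3D point into the image, measure its distance to the nearest fitted lane line, and keep it if that distance is under a threshold that decreases linearly with depth. For brevity each "piecewise line" below is a constant image row, and the focal length and decay rate are illustrative:

```python
def project(p):
    """Toy pinhole projection (focal length 100) of (x, y, z) to pixels."""
    x, y, z = p
    return (100.0 * x / z, 100.0 * y / z)

def add_lane_points(cloud, lane_rows, base_threshold=5.0, decay=0.05):
    """Keep a 3D point when its 2D projection lies within the depth-dependent
    pixel threshold of some lane line (here: a constant image row v = c)."""
    kept = []
    for p in cloud:
        u, v = project(p)
        depth = p[2]
        threshold = max(0.0, base_threshold - decay * depth)  # shrinks with depth
        d = min(abs(v - row) for row in lane_rows)
        if d <= threshold:
            kept.append(p)
    return kept
```

The shrinking threshold reflects perspective: at large depth a fixed pixel error corresponds to a large metric error, so distant points must project more precisely onto the line to count as lane markings.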

US Pat. No. 10,360,257

SYSTEM AND METHOD FOR IMAGE ANNOTATION

TuSimple, San Diego, CA ...

1. A system comprising:a data processor; and
an image annotation module, executable by the data processor, the image annotation module being configured to perform an image annotation operation, the image annotation operation being configured to:
register a plurality of labelers to which annotation tasks are assigned;
assign annotation tasks to the plurality of labelers;
randomly assign evaluation tasks to the plurality of labelers, the evaluation tasks configured to evaluate the quality of the annotations provided by the plurality of labelers;
enable the plurality of labelers in an annotation verification chain to add, delete, or modify annotations provided by sequentially prior labelers;
determine if the annotation tasks can be closed or re-assigned to the plurality of labelers;
aggregate annotations provided by the plurality of labelers as a result of the closed annotation tasks;
evaluate a level of performance of the plurality of labelers in providing the annotations; and
calculate payments for the plurality of labelers based on the quantity and quality of the annotations provided by the plurality of labelers.

US Pat. No. 10,528,851

SYSTEM AND METHOD FOR DRIVABLE ROAD SURFACE REPRESENTATION GENERATION USING MULTIMODAL SENSOR DATA

TUSIMPLE, San Diego, CA ...

1. A system comprising:a data processor; and
a multimodal drivable road surface detection module, executable by the data processor, the multimodal drivable road surface detection module being configured to perform a multimodal drivable road surface detection operation configured to:
receive image data from an image generating device mounted on a vehicle and to receive three dimensional (3D) point cloud data from a distance measuring device mounted on the vehicle;
project the 3D point cloud data onto the 2D image data to produce mapped image and point cloud data;
perform post-processing operations on the mapped image and point cloud data, the post-processing operations being configured to perform an outlier detection process to remove outlier points of the point cloud data that do not correspond to a flat road surface captured in the image data, the post-processing operations being further configured to perform a density-based spatial clustering process to remove outlier points of the point cloud data that do not correspond to a point cluster; and
perform a smoothing operation on the processed mapped image and point cloud data to produce a drivable road surface map or representation.
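The two post-processing filters in this claim can be sketched directly: an outlier test against the flat-road assumption, and a density test in the spirit of DBSCAN's core-point criterion. The median-height plane model and the eps/min_pts values are illustrative simplifications:

```python
from statistics import median

def remove_height_outliers(points, tol=0.3):
    """Outlier detection: drop points whose height deviates from the (assumed
    flat) road surface, approximated here by the median z of all points."""
    z_med = median(z for _, _, z in points)
    return [p for p in points if abs(p[2] - z_med) <= tol]

def density_filter(points, eps=1.0, min_pts=2):
    """Density-based spatial test (DBSCAN-like core-point check): keep points
    with at least min_pts neighbors within eps in the ground plane."""
    def near(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= eps * eps
    return [p for p in points
            if sum(near(p, q) for q in points if q is not p) >= min_pts]
```

A full implementation would fit an actual ground plane and run complete DBSCAN clustering (e.g. scikit-learn's) rather than only the core-point test.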

US Pat. No. 10,666,730

STORAGE ARCHITECTURE FOR HETEROGENEOUS MULTIMEDIA DATA

TUSIMPLE, INC., San Dieg...

1. A system comprising:a plurality of compute nodes being in data communication with a data network;
a plurality of physical data storage devices being in communication with the data network;
sensor logic to capture sensor input from an autonomous vehicle; and
a data storage access system enabling communication of data between the plurality of compute nodes and the plurality of physical data storage devices via an application programming interface (API) layer, a cache management layer, a server layer, and a storage layer, the data storage access system receiving a data request from at least one of the plurality of compute nodes at the API layer, the data request including an identification of a topic of a dataset, the topic including a metadata file, a data file, and an index file, the index file including at least one pointer into the data file, the dataset being a heterogeneous multimedia time-series dataset including a plurality of dataset topics, each dataset topic corresponding to a data stream created by a sensor and captured by the sensor logic of the autonomous vehicle, at least two dataset topics of the plurality of dataset topics corresponding to different data streams having different data types.

US Pat. No. 10,360,686

SPARSE IMAGE POINT CORRESPONDENCES GENERATION AND CORRESPONDENCES REFINEMENT SYSTEM FOR GROUND TRUTH STATIC SCENE SPARSE FLOW GENERATION

TUSIMPLE, San Diego, CA ...

1. A system for generating a ground truth dataset for motion planning, the system comprising:an internet server, comprising:
an I/O port, configured to transmit and receive electrical signals to and from a client device;
a memory;
one or more processing units; and
one or more programs stored in the memory and configured for execution by the one or more processing units, the one or more programs including instructions for:
a static point extracting module configured to generate a LiDAR static-scene point cloud based on undistorted LiDAR scans, wherein the static point extracting module is configured to execute the following steps in order to generate a LiDAR static-scene point cloud:
a) identify points belonging to a static object in an environment for a pair of undistorted LiDAR scans;
b) align close points based on GNSS-inertial estimates;
c) transform a reference scan that is close in time to a target undistorted LiDAR scan so as to align the reference scan with the target undistorted LiDAR scan;
d) determine that a distance between a point in the target undistorted LiDAR scan and its closest point in the aligned reference scan is smaller than a threshold; and
e) extract the point from the target undistorted LiDAR scan in order to generate the LiDAR static-scene point cloud;
a corresponding module configured to correspond, for each pair of images, a first image of the pair to the LiDAR static-scene point cloud for the motion planning of a vehicle; and
a computing module configured to compute a camera pose associated with the pair of images in the coordinate of the LiDAR static-scene point cloud.
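Steps (d) and (e) of the static point extracting module reduce to a nearest-neighbor distance test between the target scan and the aligned reference scan. A brute-force sketch (a real pipeline would use a k-d tree; the threshold value is illustrative):

```python
def extract_static_points(target_scan, aligned_ref_scan, threshold=0.1):
    """Keep a target point when its closest point in the aligned reference
    scan is nearer than the threshold: the point persisted across scans, so it
    likely belongs to a static object (claim steps d and e)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    static = []
    for p in target_scan:
        nearest_sq = min(d2(p, q) for q in aligned_ref_scan)
        if nearest_sq < threshold ** 2:
            static.append(p)
    return static
```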

US Pat. No. 10,768,626

SYSTEM AND METHOD FOR PROVIDING MULTIPLE AGENTS FOR DECISION MAKING, TRAJECTORY PLANNING, AND CONTROL FOR AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

1. A system comprising:a data processor; and
a memory for storing a multiple agent autonomous vehicle control module, executable by the data processor, the multiple agent autonomous vehicle control module being a data processing module configured to perform an autonomous vehicle control operation for an autonomous vehicle, the multiple agent autonomous vehicle control module being partitioned into a plurality of subsystem agents, the plurality of subsystem agents including a deep computing vehicle control subsystem and a fast response vehicle control subsystem, the fast response vehicle control subsystem configured to preempt the deep computing vehicle control subsystem if the fast response vehicle control subsystem and the deep computing vehicle control subsystem are executed by the same data processor, the autonomous vehicle control operation being configured to:
receive a task request from a vehicle subsystem;
determine, by use of the data processor, if the task request is appropriate for the deep computing vehicle control subsystem or the fast response vehicle control subsystem based on content of the task request or a context of the autonomous vehicle;
dispatch, by use of the data processor, the task request to the deep computing vehicle control subsystem or the fast response vehicle control subsystem based on the determination;
cause execution of the deep computing vehicle control subsystem or the fast response vehicle control subsystem by use of the data processor to produce a vehicle control output; and
provide the vehicle control output to a vehicle control subsystem of the autonomous vehicle.
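The dispatch decision in this claim (route a task to the deep computing or fast response subsystem based on the task's content or the vehicle's context) is a small amount of control flow. The "urgent" flag and deadline cutoff below are assumed criteria for illustration:

```python
def dispatch(task, deep_agent, fast_agent, deadline_ms):
    """Route the task request: time-critical tasks go to the fast response
    subsystem (which may preempt deep computing); the rest get deep computing."""
    if task.get("urgent") or deadline_ms < 50:
        return fast_agent(task)
    return deep_agent(task)
```

Usage: with `deep_agent` and `fast_agent` as the two subsystem entry points, a collision-avoidance task marked urgent bypasses the deep planner, while a routine route-refinement task does not.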

US Pat. No. 10,762,673

3D SUBMAP RECONSTRUCTION SYSTEM AND METHOD FOR CENTIMETER PRECISION LOCALIZATION USING CAMERA-BASED SUBMAP AND LIDAR-BASED GLOBAL MAP

TUSIMPLE, INC., San Dieg...

1. A method for localization of an autonomous vehicle, comprising:receiving images from a camera, generating a 3D submap from the images, and converting the 3D submap into first voxels;
receiving data from a light detection and ranging (LiDAR), generating a global map from the data, and converting the global map into second voxels;
estimating a distribution of 3D points within the first and second voxels using a probabilistic model;
extracting features from the 3D submap and the global map;
classifying the extracted features into classes; and
matching the features extracted from the 3D submap against features extracted from the global map.

US Pat. No. 10,812,589

STORAGE ARCHITECTURE FOR HETEROGENEOUS MULTIMEDIA DATA

TUSIMPLE, INC., San Dieg...

1. A system comprising:a plurality of compute nodes being in data communication with a data network;
a plurality of physical data storage devices being in communication with the data network;
a sensor logic to capture sensor data from an autonomous vehicle; and
a data storage access system enabling communication of data between the plurality of compute nodes and the plurality of physical data storage devices via the data network, the data storage access system receiving a data request from at least one of the plurality of compute nodes, the data request comprising an identification of a topic of a dataset, the dataset being a heterogeneous multimedia time-series dataset comprising a plurality of dataset topics, each dataset topic corresponding to a data stream of the sensor data, at least two dataset topics of the plurality of dataset topics corresponding to different data streams having different data types.

US Pat. No. 10,771,726

IMAGE TRANSMISSION DEVICE AND METHOD INCLUDING AN IMAGE DATA RECEIVER AND A PROCESSOR

TUSIMPLE, INC., San Dieg...

1. An image transmission device comprising a receiver and a processor, wherein:the receiver is configured to:
receive pixel data in image data from a camera in sequence and buffer the pixel data into a memory, and
determine, upon reception of a line of pixel data, a line number of the line of pixel data in the image data, and a frame number of the image data,
buffer correlatively the line of pixel data, the line number of the line of pixel data, and the frame number of the image data into a preset location in the memory,
generate, upon reception of the line of pixel data, reading indication information that indicates that the line of pixel data is available to be read; and
the processor is configured to:
perform an obtain operation in which the line of pixel data, the line number of the line of pixel data, and the frame number of the image data are obtained from the receiver, wherein the processor is configured to perform the obtain operation by being configured to:
obtain the reading indication information, and
read the line of pixel data, the line number of the line of pixel data, and the frame number of the image data from the preset location of the memory according to the reading indication information,
package the obtained line of pixel data, line number of the line of pixel data, and frame number of the image data into a data packet, and
transmit the data packet to a server.
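The package step of this claim bundles a line of pixel data with its line number and frame number into a data packet. The wire layout below (big-endian frame number, line number, payload length, then pixels) is an assumed format for illustration; the patent does not specify one:

```python
import struct

def package_line(frame_no, line_no, pixel_bytes):
    """Pack frame number, line number, and payload length as a 10-byte header
    (>IIH: two uint32 plus one uint16), followed by the pixel data."""
    header = struct.pack(">IIH", frame_no, line_no, len(pixel_bytes))
    return header + pixel_bytes

def unpack_line(packet):
    """Inverse of package_line, as a receiver/server would apply it."""
    frame_no, line_no, length = struct.unpack(">IIH", packet[:10])
    return frame_no, line_no, packet[10:10 + length]
```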

US Pat. No. 10,762,635

SYSTEM AND METHOD FOR ACTIVELY SELECTING AND LABELING IMAGES FOR SEMANTIC SEGMENTATION

TUSIMPLE, INC., San Dieg...

1. A system comprising: a processor configured to:
receive image data corresponding to an image from an image generating device;
perform object detection on the received image data to produce semantic label image data by identifying and labeling objects in a plurality of regions of the image;
determine prediction probabilities associated with the plurality of regions of the image, wherein the prediction probabilities indicate likelihood that the objects in the plurality of regions are identified relative to training data;
identify a region of the image for manual labeling in response to determining that a prediction probability associated with the region of the image is below a pre-determined threshold; and
generate a map that shows portions of the image data having prediction probabilities below the pre-determined threshold, and wherein the portions include the identified region.
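The thresholding and map-generation steps reduce to a few lines at the region level. The probability values and threshold are hypothetical; a real system would derive the probabilities from the segmentation model's outputs.

```python
def select_regions_for_labeling(region_probs, threshold=0.6):
    """Mark regions whose prediction probability falls below the
    pre-determined threshold as candidates for manual labeling.
    Returns a binary map over the regions (1 = needs manual labeling)."""
    return [[1 if p < threshold else 0 for p in row] for row in region_probs]
```

For example, with probabilities `[[0.9, 0.4], [0.55, 0.8]]` and threshold 0.6, the map flags the two low-confidence regions.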

US Pat. No. 10,739,775

SYSTEM AND METHOD FOR REAL WORLD AUTONOMOUS VEHICLE TRAJECTORY SIMULATION

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor in a computing system;
a data collection system, executable by the data processor of the computing system, to collect training data comprising perception or sensor data and ground truth data, and perform a training phase to train a plurality of trajectory prediction models with the training data to produce a plurality of trained trajectory prediction models, the data collection system comprising sensor devices, the sensor devices comprising at least one image generating device, the sensor devices, installed at various traffic locations or installed on moving test vehicles, collecting perception or sensor data from the various traffic locations or from the moving test vehicles, the plurality of trained trajectory prediction models comprising at least one trained trajectory prediction model configured to model a variable level of simulated driver aggressiveness, the collected training data being transferred to the data collection system;
a vicinal scene data generator to generate, by the data processor, a different vicinal scenario for each simulated vehicle in an iteration of a simulation;
a memory for storage of vehicle intention data representing simulated vehicle or driver intentions for a plurality of different vehicle actions and behaviors; and
a trajectory simulation module, executable by the data processor of the computing system, the trajectory simulation module being configured to perform a simulation or operational phase to:
generate, by the data processor, a trajectory corresponding to the collected perception data and the vehicle intention data, execute, by the data processor, at least one of the plurality of trained trajectory prediction models to generate a predicted simulated vehicle trajectory for each of a plurality of simulated vehicles of the simulation based on a corresponding generated vicinal scenario and the vehicle intention data, the predicted simulated vehicle trajectory comprising data indicative of a degree of likelihood or probability that a particular simulated vehicle will traverse the corresponding predicted simulated vehicle trajectory, use the predicted simulated vehicle trajectory, and update, by the data processor, a state and trajectory of each of the plurality of simulated vehicles based on the predicted simulated vehicle trajectory, the plurality of trained trajectory prediction models being used for configuring a control system in an autonomous vehicle.

US Pat. No. 10,953,880

SYSTEM AND METHOD FOR AUTOMATED LANE CHANGE CONTROL FOR AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
a lane change control module, executable by the data processor, the lane change control module being configured to perform a lane change trajectory planning operation for autonomous vehicles, the lane change trajectory planning operation being configured to:
receive perception data associated with a host vehicle, the perception data comprising data received from one or more vehicle sensor subsystems;
use the perception data to determine a state of the host vehicle and a state of proximate vehicles detected near to the host vehicle;
determine a first target position within a safety zone between proximate vehicles detected in a roadway lane adjacent to a lane in which the host vehicle is positioned;
determine a second target position in the lane in which the host vehicle is positioned; and
generate, by fitting a Dubin's curve, a lane change trajectory to direct the host vehicle toward the first target position in the adjacent lane after directing the host vehicle toward the second target position in the lane in which the host vehicle is positioned,
wherein the curve is characterized by a set of parameters including a first parameter specifying a first time when the curve starts, and a second parameter specifying a second time when the curve ends, and wherein the set of parameters allows generating the lane change trajectory that avoids collisions.

US Pat. No. 10,953,881

SYSTEM AND METHOD FOR AUTOMATED LANE CHANGE CONTROL FOR AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
a lane change control module, executable by the data processor, the lane change control module being configured to perform a lane change trajectory planning operation for autonomous vehicles, wherein the lane change trajectory planning operation comprises:
receiving perception data associated with a host vehicle, the perception data comprising data received from one or more vehicle sensor subsystems, the one or more vehicle sensor subsystems comprising at least one of:
an image capture device or
a laser range finder;
using the perception data to determine a state of the host vehicle and a state of proximate vehicles detected near to the host vehicle;
determining a first target position within a safety zone between proximate vehicles detected in a roadway lane adjacent to a lane in which the host vehicle is positioned;
determining a second target position in the lane in which the host vehicle is positioned;
generating a lane change trajectory to direct the host vehicle toward the first target position in the adjacent lane after directing the host vehicle toward the second target position in the lane in which the host vehicle is positioned;
determining a gradient of the lane change trajectory; and
comparing the gradient of the lane change trajectory with a speed profile of the host vehicle to determine if a curvature of the lane change trajectory is within the speed profile of the host vehicle.
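The final comparison checks whether the trajectory's curvature is compatible with the host vehicle's speed. A minimal sketch, assuming the gradient is approximated by a three-point curvature estimate and the speed profile reduces to a lateral-acceleration limit (both assumptions; the patent does not fix these details):

```python
import math

def curvature(p0, p1, p2):
    """Approximate curvature at p1 from three consecutive (x, y)
    waypoints via the circumscribed circle: kappa = 4*area / (a*b*c)."""
    a, b, c = math.dist(p0, p1), math.dist(p1, p2), math.dist(p0, p2)
    area = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
               - (p2[0] - p0[0]) * (p1[1] - p0[1])) / 2.0
    return 0.0 if a * b * c == 0 else 4.0 * area / (a * b * c)

def trajectory_within_speed_profile(waypoints, speed, max_lateral_accel=2.0):
    """Accept the trajectory only if the lateral acceleration v^2 * kappa
    stays under the limit at every interior waypoint. The limit value
    (2 m/s^2) is an illustrative assumption, not from the patent."""
    return all(speed * speed * curvature(waypoints[i - 1], waypoints[i],
                                         waypoints[i + 1]) <= max_lateral_accel
               for i in range(1, len(waypoints) - 1))
```

A straight trajectory passes at any speed, while a sharply curved one fails at highway speed.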

US Pat. No. 10,973,152

COOLING SYSTEM

TUSIMPLE, INC., San Dieg...

1. A cooling system, comprising: a first set of fans mounted on an inward-facing side of an air inlet on an outer shell of a case;
a second set of fans mounted on an inward-facing side of an air outlet on the outer shell of the case, for generating, in cooperation with the first set of fans, a high-pressure airflow from the air inlet to the air outlet;
a first heat sink connected to a heat generating component in the case, for absorbing heat from the heat generating component and transferring the absorbed heat to a second heat sink; and
the second heat sink mounted on an inward-facing side of the second set of fans and cooled by the high-pressure airflow,
wherein the first heat sink comprises at least one water cooling device each corresponding to a plurality of heat generating components and comprising a water tank and a water cooling pipe which goes through the plurality of heat generating components,
wherein the water cooling pipe has a water inlet and a water outlet each connected to the water tank, and
wherein the water cooling pipe is structured so as to allow water to pass from the water tank to one heat generating component to the second heat sink, and then pass from the second heat sink to a next heat generating component.

US Pat. No. 10,942,771

METHOD, APPARATUS AND SYSTEM FOR MULTI-MODULE SCHEDULING

TUSIMPLE, INC., San Dieg...

1. A method for multi-module scheduling, comprising: reading, by a master process, a pre-stored configuration file storing a directed computation graph associated with a computing task, the computing task comprising a plurality of slave processes each comprising a plurality of computing modules grouped in accordance with a computation direction, the directed computation graph comprising a plurality of nodes each corresponding to one computing module in one slave process, at least two of the nodes having a connecting edge therebetween, an incoming connecting edge of a node being an input edge and an outgoing connecting edge of a node being an output edge;
initializing, by the master process, states of the nodes and connecting edges in a current computing period, the initializing including:
determining, by a master thread, whether there is any free storage space in a control queue based on a predetermined time interval, and if so, storing, by the master thread, one directed computation graph in one free storage space in the control queue, or otherwise setting, by the master thread, a state of the master thread to wait;
creating, when more than one directed computation graph is stored in the control queue, an output edge from a node corresponding to each serial computing module in the directed computation graph in the (i-1)th storage space adjacent to a node in the directed computation graph in the ith storage space in accordance with a direction of the queue, where 2 ≤ i; and
initializing the states of the nodes and connecting edges in the directed computation graph that is newly stored in the control queue;
determining a node to be called based on the computation direction of the directed computation graph and the states of the nodes, the node to be called comprising a node having all of its input edges in a complete state;
transmitting, to the computing module in the slave process corresponding to the node to be called, a call request of Remote Process Call (RPC) to execute the computing module;
updating the state of the node and the state of each output edge of the node upon receiving a response to the call request; and
proceeding with a next computing period after determining that the states of all the nodes in the directed computation graph have been updated;
wherein the computing modules in each slave process comprise parallel computing modules and serial computing modules.
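The claim's readiness rule, under which a node may be called once all of its input edges are in a complete state, can be sketched directly. The state names and graph encoding are assumptions for illustration:

```python
def ready_nodes(graph, edge_state, node_state):
    """Return nodes whose input edges are all 'complete' and that have
    not yet been called.

    graph: dict mapping node -> list of predecessor nodes (input edges).
    edge_state: dict mapping (src, dst) -> 'complete' or 'pending'.
    node_state: dict mapping node -> 'idle' or 'done'.
    """
    out = []
    for node, preds in graph.items():
        if node_state.get(node) != "idle":
            continue
        # A node with no input edges is trivially ready.
        if all(edge_state.get((p, node)) == "complete" for p in preds):
            out.append(node)
    return out
```

After each RPC response the scheduler would mark the node done, mark its output edges complete, and recompute the ready set.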

US Pat. No. 10,800,645

METHOD, DEVICE AND SYSTEM FOR AUTOMATIC OILING OF LONG-DISTANCE TRANSPORT VEHICLE

TUSIMPLE, INC., San Dieg...

1. A method for automatic oiling of a long-distance transport vehicle performed by a computer associated with a transport planning system, the method comprising: obtaining vehicle status information, transport mission information, and highway port information of a vehicle before the vehicle starts from a highway port of departure,
wherein the vehicle status information describes one or more characteristics of the vehicle,
wherein the transport mission information indicates at least a transport route and information about cargo that is loaded or unloaded along the transport route, and
wherein the highway port information indicates one or more highway ports along the transport route;
generating a transport plan according to the vehicle status information, the transport mission information, and the highway port information, wherein the transport plan comprises at least one highway port to be stopped at, cargo quantity to be loaded at each highway port to be stopped at, and oil mass to be filled at each highway port to be stopped at,
wherein the generating comprises determining the oil mass to be filled at each highway port to be stopped at based on at least the vehicle status information, the cargo quantity at each highway port to be stopped at, and a position of each highway port to be stopped at; and
sending the transport plan to another computer associated with the vehicle, wherein the transport plan is configured to prompt the vehicle to: stop at the highway port, perform loading and/or unloading of cargo according to the cargo quantity to be loaded at the stopped-at highway port, and fill with oil according to the oil mass to be filled at the stopped-at highway port.

US Pat. No. 10,752,246

SYSTEM AND METHOD FOR ADAPTIVE CRUISE CONTROL WITH PROXIMATE VEHICLE DETECTION

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
an adaptive cruise control module, executable by the data processor, being configured to:
receive input object data from a subsystem of a host vehicle, the input object data comprising distance data and velocity data relative to a target vehicle, wherein the input object data comprises camera image data processed by an image data processor to detect the target vehicle or other objects proximate to the host vehicle;
detect the presence of any target vehicle within a sensitive zone in front of the host vehicle, to the left of the host vehicle, and to the right of the host vehicle;
determine a relative speed and a separation distance between the target vehicle relative to the host vehicle; and
generate a velocity command to adjust a speed of the host vehicle based on the relative speed and the separation distance between the host vehicle and the target vehicle to maintain a safe separation between the host vehicle and the target vehicle, wherein the velocity command comprises a linear weighted sum of a speed of the target vehicle, the relative speed, and the separation distance.
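The velocity command in the last limitation is a linear weighted sum of the target vehicle's speed, the relative speed, and the separation distance. A sketch with illustrative weights (the claim does not publish specific gain values):

```python
def velocity_command(target_speed, relative_speed, separation,
                     weights=(0.8, 0.5, 0.1)):
    """Linear weighted sum of the target vehicle's speed, the relative
    speed, and the separation distance, as in the claim. The weight
    values are hypothetical placeholders."""
    w_speed, w_rel, w_sep = weights
    return w_speed * target_speed + w_rel * relative_speed + w_sep * separation
```

A larger separation or a positive relative speed raises the commanded speed; a closing gap lowers it.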

US Pat. No. 10,737,695

SYSTEM AND METHOD FOR ADAPTIVE CRUISE CONTROL FOR LOW SPEED FOLLOWING

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
an adaptive cruise control module, executable by the data processor, being configured to:
receive input object data from a subsystem of an autonomous vehicle, the input object data including image data from a video stream generated by an image generating device, the input object data also including distance data and velocity data relative to a lead vehicle;
process the image data by semantic segmentation to identify the lead vehicle in the image data as a vehicle object;
generate a weighted distance differential corresponding to a weighted difference between an actual distance between the autonomous vehicle and the lead vehicle and a desired distance between the autonomous vehicle and the lead vehicle;
generate a weighted velocity differential corresponding to a weighted difference between a velocity of the autonomous vehicle and a velocity of the lead vehicle;
combine the weighted distance differential and the weighted velocity differential with the velocity of the lead vehicle to produce a velocity command for the autonomous vehicle;
adjust the velocity command using a dynamic gain, the dynamic gain being a function of a measured acceleration of the autonomous vehicle; and
control the autonomous vehicle to conform to the adjusted velocity command.
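The command pipeline of this claim — a weighted distance differential, a weighted velocity differential, combination with the lead vehicle's speed, and a dynamic gain driven by measured acceleration — can be sketched as one function. All gain values and the gain shape are assumptions:

```python
def low_speed_follow_command(v_ego, v_lead, d_actual, d_desired,
                             accel, k_d=0.4, k_v=0.6, k_a=0.1):
    """Combine weighted distance and velocity differentials with the lead
    vehicle's speed, then adjust with a dynamic gain that is a function
    of the measured acceleration (gain shape assumed for illustration)."""
    distance_term = k_d * (d_actual - d_desired)
    velocity_term = k_v * (v_lead - v_ego)
    command = v_lead + distance_term + velocity_term
    dynamic_gain = 1.0 / (1.0 + k_a * abs(accel))
    return dynamic_gain * command
```

At steady state (matched speeds, desired gap, zero acceleration) the command equals the lead vehicle's speed.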

US Pat. No. 10,970,564

SYSTEM AND METHOD FOR INSTANCE-LEVEL LANE DETECTION FOR AUTONOMOUS VEHICLE CONTROL

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
a memory for storing an autonomous vehicle instance-level lane detection system, executable by the data processor, the autonomous vehicle instance-level lane detection system being configured to:
receive training image data from a training image data collection system, the training image data collection system comprising sensors installed in a moving test vehicle navigated through real-world traffic scenarios, the training image data comprising data collected from real-world traffic scenarios;
obtain ground truth data corresponding to the training image data, the ground truth data corresponding to the data collected from real-world traffic scenarios;
perform a training phase to train a plurality of tasks each associated with different features of the training image data, at least one task of the plurality of tasks corresponding to a specific feature of the training image data, the plurality of tasks configured to execute concurrently, the training phase comprising extracting roadway lane marking features from the training image data, associating similar extracted features with a corresponding task of the plurality of tasks, associating different extracted features with different other tasks of the plurality of tasks, causing the plurality of tasks to generate task-specific predictions of feature characteristics based on the training image data, determining a bias between the task-specific prediction for each task and corresponding task-specific ground truth data, and adjusting parameters of each of the plurality of tasks to cause the bias to meet a pre-defined confidence level;
receive image data from an image data collection system associated with an autonomous vehicle; and
perform an operational phase comprising extracting roadway lane marking features from the image data, causing the plurality of trained tasks to execute concurrently to generate instance-level lane detection results based on the image data, and providing the instance-level lane detection results to an autonomous vehicle subsystem of the autonomous vehicle.

US Pat. No. 10,970,873

METHOD AND DEVICE TO DETERMINE THE CAMERA POSITION AND ANGLE

TUSIMPLE, INC., San Dieg...

1. A method for determining an attitude angle of a camera, the camera being fixed to one and the same rigid object in a vehicle along with an Inertial Measurement Unit (IMU), the method comprising: obtaining IMU attitude angles outputted from the IMU and images captured by the camera;
determining a target IMU attitude angle corresponding to each frame of image based on respective capturing time of the frames of images and respective outputting time of the IMU attitude angles; and
determining an attitude angle of the camera corresponding to each frame of image based on a predetermined conversion relationship between a camera coordinate system for the camera and an IMU coordinate system for the IMU and the target IMU attitude angle corresponding to each frame of image, wherein determining the target IMU attitude angle corresponding to each frame of image based on the respective capturing time of the frames of images and the respective outputting time of the IMU attitude angles comprises, for each frame of image:
selecting from the IMU attitude angles at least one IMU attitude angle whose outputting time matches the capturing time of a current frame of image; and
determining the target IMU attitude angle corresponding to the current frame of image based on the selected at least one IMU attitude angle.
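The time-matching step can be illustrated with a lookup over timestamped IMU outputs; linear interpolation between the two bracketing samples is an assumption, since the claim only requires selecting angles whose output time matches the capture time:

```python
import bisect

def target_imu_attitude(capture_time, imu_samples):
    """Pick the target IMU attitude angle for a frame from timestamped
    samples, interpolating linearly when the capture time falls between
    two outputs. imu_samples: (time, angle) pairs sorted by time."""
    times = [t for t, _ in imu_samples]
    i = bisect.bisect_left(times, capture_time)
    if i == 0:
        return imu_samples[0][1]
    if i == len(imu_samples):
        return imu_samples[-1][1]
    (t0, a0), (t1, a1) = imu_samples[i - 1], imu_samples[i]
    return a0 + (capture_time - t0) / (t1 - t0) * (a1 - a0)
```

The result would then be converted into the camera frame through the predetermined camera-IMU coordinate relationship.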

US Pat. No. 10,962,979

SYSTEM AND METHOD FOR MULTITASK PROCESSING FOR AUTONOMOUS VEHICLE COMPUTATION AND CONTROL

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
a memory for storing an autonomous vehicle computation and control system, executable by the data processor, the autonomous vehicle computation and control system being configured to:
receive training image data from a training image data collection system, the training image data collection system comprising sensors installed in a moving test vehicle navigated through real-world traffic scenarios, the training image data comprising data collected from real-world traffic scenarios;
obtain ground truth data corresponding to the training image data, the ground truth data corresponding to the data collected from real-world traffic scenarios;
perform a training phase to train a plurality of tasks each associated with different features of the training image data, at least one task of the plurality of tasks corresponding to a specific feature of the training image data, the plurality of tasks configured to execute concurrently, the training phase comprising extracting features from the training image data, associating similar extracted features with a corresponding task of the plurality of tasks, associating different extracted features with different other tasks of the plurality of tasks, causing the plurality of tasks to generate task-specific predictions of feature characteristics based on the training image data, determining a bias between the task-specific prediction for each task and corresponding task-specific ground truth data, and adjusting parameters of each of the plurality of tasks to cause the bias to meet a pre-defined confidence level;
receive image data from an image data collection system associated with an autonomous vehicle; and
perform an operational phase comprising extracting features from the image data, causing the plurality of trained tasks to execute concurrently to generate task-specific predictions of feature characteristics based on the image data, and output the task-specific predictions to an autonomous vehicle subsystem of the autonomous vehicle.

US Pat. No. 10,942,271

DETERMINING AN ANGLE BETWEEN A TOW VEHICLE AND A TRAILER

TUSIMPLE, INC., San Dieg...

1. A system, comprising: one or more ultrasonic sensors, wherein each ultrasonic sensor is mountable to a vehicle to determine a distance from the ultrasonic sensor to a front-end of a trailer attached to the vehicle, wherein the one or more ultrasonic sensors are arranged in a two-dimensional pattern in a plane that is perpendicular to an axis between the vehicle and the trailer; and
an ultrasonic control unit configured to receive the distance from each of the one or more ultrasonic sensors via a communications interface, wherein the ultrasonic control unit determines one or more angles, each angle corresponding to a distance received from the one or more ultrasonic sensors, wherein each angle is between the vehicle and the trailer, and wherein the ultrasonic control unit determines the trailer angle from the one or more angles, wherein a first angle corresponding to a first ultrasonic sensor is determined based on one or more geometrical relationships between a position of the first ultrasonic sensor and the front-end of the trailer, wherein the first angle is determined from:
a first neutral distance between the first ultrasonic sensor and the front-end of the trailer when the trailer is in line with the vehicle,
a first angled distance when the trailer is angled with respect to the vehicle, and
a first offset distance between the center of the first ultrasonic sensor and the center of the vehicle, wherein the first neutral distance is determined when a steering angle of the vehicle is about zero degrees and the vehicle is travelling at about 10 kilometers per hour or more.
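The geometric relationship in the claim can be approximated with a simple model: as the trailer rotates, the sensor-to-trailer distance changes by roughly the offset distance times the tangent of the angle. This is an illustrative small-geometry model, not the patent's exact construction:

```python
import math

def trailer_angle(neutral_distance, angled_distance, offset_distance):
    """Estimate the vehicle-trailer angle from one ultrasonic sensor's
    readings: angle = atan((angled - neutral) / offset). Illustrative
    approximation of the claimed geometrical relationship."""
    return math.atan2(angled_distance - neutral_distance, offset_distance)
```

With an unchanged reading the angle is zero; a longer reading on an offset sensor implies the trailer has swung away from that side.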

US Pat. No. 10,889,237

LIGHTING CONTROL FOR AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

1. A method for controlling one or more lights of a vehicle, comprising: receiving, from an autonomous driving system (ADS) of the vehicle, an input to control one or more exterior lights that are part of a lighting system of the vehicle;
transmitting, based on the input, an ADS message to a controller area network (CAN) of the lighting system;
receiving, by a vehicle electronic control unit (VECU) of the vehicle, a driver lighting command;
transmitting the driver lighting command to the CAN; and
transmitting a lighting command from the CAN to a chassis node of the vehicle, the chassis node configured to operate lighting functions of the vehicle,
wherein the lighting system of the vehicle further comprises a plurality of dashboard lights, the dashboard lights comprising visual distinctions configured to enable a driver or supervisor to discern between driver intent and ADS intent.

US Pat. No. 10,877,476

AUTONOMOUS VEHICLE SIMULATION SYSTEM FOR ANALYZING MOTION PLANNERS

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor;
a map data simulation module, executable by the data processor, and configured to receive map data corresponding to a real world driving environment;
a dynamic vehicle simulation module, executable by the data processor, and configured to
obtain perception data and configuration instructions and data including pre-defined parameters and executables, wherein the configuration instructions and data define a specific driving behavior for each of a plurality of simulated dynamic vehicles,
generate simulated perception data for each of the plurality of simulated dynamic vehicles based on the map data, the perception data, and the configuration instructions and data,
generate a target position, speed, and heading for each of the plurality of simulated dynamic vehicles at a specific point in time using the simulated perception data and the configuration instructions and data,
generate, for each of the plurality of simulated dynamic vehicles, a trajectory to transition the vehicle from its current position, speed, and heading to the target position, speed, and heading, and
provide, for each of the generated trajectories, simulated perception data messages to an autonomous vehicle perception data processing module; and
an autonomous vehicle simulation module, executable by the data processor, and configured to receive vehicle control messages from an autonomous vehicle control system, and simulate the operation and behavior of a real world autonomous vehicle based on the vehicle control messages received from the autonomous vehicle control system.

US Pat. No. 10,878,282

SEGMENTATION PROCESSING OF IMAGE DATA FOR LIDAR-BASED VEHICLE TRACKING SYSTEM AND METHOD

TUSIMPLE, Inc., San Dieg...

1. A method of LiDAR-based vehicle tracking for a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the following steps comprising: projecting points in a processed point cloud to a unit sphere;
for each of the projected points, determining that a distance between one and each of its neighboring points is smaller than a threshold and forming a point cluster of the one and its neighboring points, so that each point cluster represents a vehicle on a road;
forming cluster pairs from the point clusters;
calculating a probability that a cluster pair belongs to a same vehicle using tracking information from a previous frame and spatial information from a current frame; and
for each cluster pair, determining whether the clusters of the cluster pair are to be merged into a single cluster.
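The projection and neighbour-threshold clustering steps can be sketched with a greedy single-linkage pass. The distance test stands in for the claim's neighbouring-point rule, and the cluster-pair probability scoring is omitted:

```python
import math

def project_to_unit_sphere(points):
    """Normalize each 3-D point to unit length."""
    out = []
    for x, y, z in points:
        n = math.sqrt(x * x + y * y + z * z) or 1.0
        out.append((x / n, y / n, z / n))
    return out

def cluster_points(points, threshold):
    """Assign cluster labels: two points share a cluster when a chain of
    neighbours closer than the threshold connects them (single linkage,
    an illustrative stand-in for the claimed neighbour test)."""
    labels = [-1] * len(points)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:
            j = stack.pop()
            for k, q in enumerate(points):
                if labels[k] == -1 and math.dist(points[j], q) < threshold:
                    labels[k] = next_label
                    stack.append(k)
        next_label += 1
    return labels
```

Projecting to the unit sphere first makes the threshold angular rather than metric, so near and far returns from the same bearing group together.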

US Pat. No. 10,878,580

POINT CLUSTER REFINEMENT PROCESSING OF IMAGE DATA FOR LIDAR-BASED VEHICLE TRACKING SYSTEM AND METHOD

TUSIMPLE, INC., San Dieg...

1. A method of Lidar-based vehicle tracking, comprising: calculating a probability that clusters of a cluster pair belong to a same vehicle using tracking information from previous frames and spatial information from a current frame, wherein each cluster in the cluster pair represents a point cloud obtained using a LIDAR;
determining that the clusters of the cluster pair are to be merged by taking both the spatial information from the current frame and the tracking information from the previous frames into consideration; and
merging, based on the determining, the clusters of the cluster pair together.

US Pat. No. 10,866,101

SENSOR CALIBRATION AND TIME SYSTEM FOR GROUND TRUTH STATIC SCENE SPARSE FLOW GENERATION

TUSIMPLE, INC., San Dieg...

1. A system for motion planning, the system comprising: a memory;
a set of sensors including a light detection and ranging (LiDAR) sensor; and
at least one processor in communication with the set of sensors and the memory and configured to:
perform a LiDAR scan resulting in a pair of images, each image of the pair of images containing observed points and each image of the pair of images performed at a different time;
generate a LiDAR static-scene point cloud by extracting static points using a transformed reference scan; and
generate an image point correspondence for the pair of images using the LiDAR static-scene point cloud,
wherein the at least one processor is further configured to i) identify, for the pair of images, the static points belonging to static objects based on a comparison between a point from the LiDAR scan and a point in the transformed reference scan and ii) align the observed points based on an identification of the static points.

US Pat. No. 10,867,188

SYSTEM AND METHOD FOR IMAGE LOCALIZATION BASED ON SEMANTIC SEGMENTATION

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor; and
an image processing and localization module, executable by the data processor, the image processing and localization module being configured to perform an image processing and localization operation configured to:
receive image data from an image generating device mounted on an autonomous vehicle;
perform semantic segmentation on the received image data to identify and label objects in the image data and produce semantic label image data, wherein the semantic segmentation assigns an object label to each pixel in the image data;
identify extraneous objects in the semantic label image data using the object labels included therein, the extraneous objects being dynamic or transitory objects in the image data;
remove the extraneous objects from the semantic label image data;
compare the semantic label image data to a baseline semantic label map created from semantic segmentation, wherein the semantic segmentation assigns an object label to each pixel in baseline image data obtained from an image generating device and the object labels are included in the baseline semantic label map; and
determine a vehicle location of the autonomous vehicle based on information in a matching baseline semantic label map.
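The filtering and map-matching steps can be sketched at the label level: dynamic classes are removed, and each candidate baseline map is scored by agreement over the remaining static pixels. The class names are hypothetical:

```python
def filter_dynamic_objects(label_image,
                           dynamic_labels=frozenset({"car", "pedestrian"})):
    """Blank out pixels labeled with dynamic or transitory classes."""
    return [[None if lbl in dynamic_labels else lbl for lbl in row]
            for row in label_image]

def match_score(label_image, baseline_map):
    """Fraction of retained static pixels whose label agrees with a
    baseline semantic label map; the best-scoring baseline map would
    supply the vehicle location."""
    hits = total = 0
    for row_a, row_b in zip(label_image, baseline_map):
        for a, b in zip(row_a, row_b):
            if a is None:
                continue
            total += 1
            hits += (a == b)
    return hits / total if total else 0.0
```

Removing the dynamic pixels before scoring keeps a passing car from penalizing an otherwise correct location match.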

US Pat. No. 10,860,018

SYSTEM AND METHOD FOR GENERATING SIMULATED VEHICLES WITH CONFIGURED BEHAVIORS FOR ANALYZING AUTONOMOUS VEHICLE MOTION PLANNERS

TUSIMPLE, INC., San Dieg...

1. A system comprising: a data processor;
a perception data collection module, executable by the data processor, to receive perception data from a plurality of perception data sensors;
a dynamic vehicle configuration module, executable by the data processor, to obtain configuration instructions and data comprising pre-defined parameters and executables defining a specific driving behavior for each of a plurality of simulated dynamic vehicles;
a dynamic vehicle simulation module to generate a target position and a target speed for each of the plurality of simulated dynamic vehicles, the target positions and the target speeds being based on the perception data and the configuration instructions and data; and
a trajectory generator to generate a plurality of trajectories and acceleration profiles as waypoints to transition each of the plurality of simulated dynamic vehicles from a current position and a current speed to the corresponding target position and the target speed as generated by the dynamic vehicle simulation module, wherein each of the waypoints comprises the target speed and acceleration of a respective simulated dynamic vehicle at a corresponding time.

US Pat. No. 10,839,216

SYSTEM, METHOD AND APPARATUS FOR OBJECT IDENTIFICATION

TUSIMPLE, INC., San Dieg...

1. A system for object identification, comprising a sensing device, a control device and one or more unmanned vehicles, wherein the sensing device is configured to sense a vehicle running condition and a road condition to obtain sensed data;
the control device is configured to determine an object not belonging to a predetermined category as an unknown object by performing object identification based on the sensed data; mark the unknown object in the sensed data including the unknown object; determine an unmanned vehicle within a predetermined range from the unknown object; transmit the sensed data with the marked unknown object and an instruction to identify the unknown object to the determined unmanned vehicle; receive a feedback message from the unmanned vehicle, and when the feedback message carries information on an object category, save the information on the object category and mark a category of the unknown object as the saved object category; and
the unmanned vehicle is configured to receive the instruction and the sensed data from the control device; identify an object based on sensed data obtained at the unmanned vehicle; compare the object in the sensed data from the control device with the identified object to determine whether the unknown object marked in the sensed data has been identified, and if so, transmit to the control device the feedback message carrying the information on the object category of the identified unknown object.
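One step in the control device's flow above is selecting an unmanned vehicle within a predetermined range of the unknown object. A minimal sketch of that selection, assuming 2-D positions and a hypothetical helper name (the patent does not specify how the vehicle is chosen beyond the range condition):

```python
import math

def nearest_vehicle_in_range(obj_xy, vehicles, max_range):
    """Pick the closest unmanned vehicle within max_range of the unknown
    object. vehicles: dict of vehicle_id -> (x, y). Returns an id or None."""
    best_id, best_d = None, float("inf")
    for vid, (x, y) in vehicles.items():
        d = math.hypot(x - obj_xy[0], y - obj_xy[1])
        if d <= max_range and d < best_d:
            best_id, best_d = vid, d
    return best_id

chosen = nearest_vehicle_in_range((0.0, 0.0),
                                  {"uv1": (30.0, 40.0), "uv2": (3.0, 4.0)},
                                  max_range=100.0)
```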

US Pat. No. 10,839,234

SYSTEM AND METHOD FOR THREE-DIMENSIONAL (3D) OBJECT DETECTION

TUSIMPLE, INC., San Dieg...

1. A system comprising:
a data processor; and
a 3D image processing system, executable by the data processor, the image processing system being configured to:
receive image data from at least one camera associated with an autonomous vehicle, the image data representing at least one image frame;
use a trained deep learning module to determine pixel coordinates of a two-dimensional (2D) bounding box around an object detected in the image frame;
use the trained deep learning module to determine vertices of a three-dimensional (3D) bounding box around the object;
obtain geological information related to a particular environment associated with the image frame;
obtain camera calibration information associated with the at least one camera, wherein the camera calibration information comprises camera calibration matrices with a camera extrinsic matrix and a camera intrinsic matrix; and
determine 3D attributes of the object using the 3D bounding box, the geological information, and the camera calibration information, wherein the 3D attributes of the object comprise a length, height, width, 3D spatial location, and heading of the object.
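The last step above combines a bounding box with camera calibration matrices to recover 3-D attributes such as spatial location. As a rough illustration only (flat-ground assumption, identity extrinsic rotation, and illustrative intrinsics — not the patented method), the 3-D ground position under an object can be estimated by back-projecting the bottom-center pixel of its 2-D box through the intrinsic matrix:

```python
import numpy as np

def ground_point_from_pixel(u, v, K, cam_height):
    """Back-project pixel (u, v) onto the ground plane y = cam_height
    (camera coordinates: x right, y down, z forward)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    scale = cam_height / ray[1]          # intersect the ray with the ground plane
    return scale * ray                   # 3-D point in camera coordinates

# Illustrative pinhole intrinsics: focal length 1000 px, principal point (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
# Bottom-center of a detected box, camera mounted 1.5 m above the road.
p = ground_point_from_pixel(u=640.0, v=460.0, K=K, cam_height=1.5)
```

With a full extrinsic matrix the same ray-plane intersection is done in world coordinates; the sketch keeps the camera frame for brevity.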

US Pat. No. 10,837,795

VEHICLE CAMERA CALIBRATION SYSTEM

TUSIMPLE, INC., San Dieg...

1. A method of performing camera calibration, comprising:
emitting, by a laser emitter located on a vehicle and pointed towards a road at a first angle, a first laser pulse group towards a first location on a road;
emitting, by the laser emitter pointed towards the road at a second angle, a second laser pulse group towards a second location on the road, wherein each of the first laser pulse group and the second laser pulse group comprises one or more laser spots;
for each of the first laser pulse group and the second laser pulse group emitted at the first location and the second location, respectively:
detecting, by a laser receiver located on the vehicle, the one or more laser spots;
calculating a first set of distances from a location of the laser receiver to the one or more laser spots;
obtaining, from a camera located on the vehicle, an image comprising the one or more laser spots; and
determining, from the image, a second set of distances from a location of the camera to the one or more laser spots; and
determining two camera calibration parameters of the camera by solving two predetermined equations, wherein each predetermined equation includes two unknown camera calibration parameters, and a first value associated with the first set of distances and a second value associated with the second set of distances for a same laser pulse group;
determining, based on a speed at which the vehicle is moving, the first angle and the second angle formed in between a direction in which the laser emitter is pointed towards the road and an imaginary horizontal plane that is at least partly parallel to the road and that includes at least a portion of the laser emitter;
adjusting the laser emitter according to each of the first angle and the second angle to emit the first laser pulse group and the second laser pulse group at the first pre-determined distance and the second pre-determined distance, respectively; and
determining the two camera calibration parameters of the camera while the vehicle is moving.
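The claim above determines two calibration parameters by solving two predetermined equations, each containing the two unknowns plus values derived from the laser-measured and image-measured distances of one pulse group. Assuming, purely for illustration, that each equation is linear in the unknowns (the patent does not publish the equation form), the solve reduces to a 2x2 linear system:

```python
import numpy as np

def solve_calibration(eq1, eq2):
    """Each equation is (a, b, c) representing a*p1 + b*p2 = c, where p1 and
    p2 are the unknown calibration parameters and a, b, c are values derived
    from the measured distances of one laser pulse group."""
    A = np.array([eq1[:2], eq2[:2]], dtype=float)
    rhs = np.array([eq1[2], eq2[2]], dtype=float)
    return np.linalg.solve(A, rhs)

# Illustrative values: first pulse group yields 2*p1 + 1*p2 = 5,
# the second yields 1*p1 + 3*p2 = 10.
p1, p2 = solve_calibration((2.0, 1.0, 5.0), (1.0, 3.0, 10.0))
```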

US Pat. No. 10,830,669

PERCEPTION SIMULATION FOR IMPROVED AUTONOMOUS VEHICLE CONTROL

TUSIMPLE, INC., San Dieg...

1. A system comprising:
a data processor; and
a perception simulation module, executable by the data processor, the perception simulation module being configured to perform a perception simulation operation for autonomous vehicles, the perception simulation operation being configured to:
receive perception data from a plurality of sensors of an autonomous vehicle;
configure the perception simulation operation based on a comparison of the perception data against ground truth data;
generate simulated perception data by simulating errors related to the physical constraints of one or more of the plurality of sensors, and by simulating noise in data provided by a sensor processing module corresponding to one or more of the plurality of sensors, wherein simulating errors related to the physical constraints of one or more of the plurality of sensors includes applying a maximum range beyond which the one or more of the plurality of sensors cannot detect objects; and
provide the simulated perception data to a motion planning system for the autonomous vehicle.
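The two error sources named in the claim — a hard maximum detection range (a physical sensor constraint) and noise in the processed sensor output — can be sketched as follows. The function name, detection format, and Gaussian noise model are assumptions for illustration:

```python
import random

def simulate_perception(detections, max_range, noise_sigma, rng):
    """detections: list of (object_id, range_m). Drop objects beyond
    max_range and jitter the surviving ranges with Gaussian noise."""
    simulated = []
    for obj_id, range_m in detections:
        if range_m > max_range:
            continue                      # the sensor cannot detect this object
        simulated.append((obj_id, range_m + rng.gauss(0.0, noise_sigma)))
    return simulated

rng = random.Random(0)  # seeded for reproducibility
out = simulate_perception([("car", 40.0), ("truck", 250.0)],
                          max_range=120.0, noise_sigma=0.5, rng=rng)
```

The distant truck is dropped by the range constraint; the car survives with a noisy range, which is the kind of degraded input the motion planner is then tested against.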

US Pat. No. 10,816,354

VERIFICATION MODULE SYSTEM AND METHOD FOR MOTION-BASED LANE DETECTION WITH MULTIPLE SENSORS

TUSIMPLE, INC., San Dieg...

1. A method of lane detection, comprising:
receiving images of a road including images of one or more lane markings;
generating a ground truth by annotating the received images to identify the one or more lane markings, wherein the ground truth is used to train a lane detection algorithm;
receiving a hit-map image associated with a current view of the road, wherein the hit-map image identifies pixels in the hit-map image that hit the one or more lane markings;
receiving a first lane template associated with a previous view of the road previous to the current view of the road;
generating a fitted lane marking using the first lane template and the hit-map image; and
training a confidence module based on the ground truth and at least one of the hit-map image or the fitted lane marking, wherein the confidence module is configured to determine a confidence level of the fitted lane marking using parameters of an arc of a circle fitted into the fitted lane marking.
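The confidence module above consumes the parameters of a circular arc fitted to the lane marking. One standard way to obtain such parameters (a sketch only — the patent does not specify the fitting method) is the algebraic Kåsa least-squares circle fit:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kåsa) circle fit. points: Nx2 array of lane points.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c, then r = sqrt(c + cx^2 + cy^2).
    Returns (cx, cy, r)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Points on a circle of radius 100 m centered at the origin: a gently curving lane.
theta = np.linspace(-0.1, 0.1, 20)
pts = np.column_stack([100 * np.sin(theta), 100 * np.cos(theta)])
cx, cy, r = fit_circle(pts)
```

The recovered radius (curvature) and center are the sort of arc parameters a confidence module could take as features.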

US Pat. No. 10,803,635

METHOD AND SYSTEM FOR MAP CONSTRUCTION

TUSIMPLE, INC., San Dieg...

1. A method of constructing a map comprising a plurality of lanes, the method comprising:
for each of the plurality of lanes, constructing corresponding lane geometry data based on a plurality of polyline segments, wherein the lane geometry data of each of the lanes comprises a left boundary data formed of a first plurality of connected data points in the polyline segments and a right boundary data formed of a second plurality of connected data points in the polyline segments; and
generating a lane content for the respective lane based on the lane geometry data, comprising: generating a plurality of waypoints for each of the lanes in a space defined by the left boundary data and the right boundary data.
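The waypoint-generation step above places points in the space between the left and right boundary polylines. A minimal sketch, assuming paired boundary points of equal count (real boundaries would typically need resampling first):

```python
def generate_waypoints(left_boundary, right_boundary):
    """Place one waypoint midway between each paired left/right boundary
    point. Boundaries are equal-length lists of (x, y) polyline points."""
    return [((lx + rx) / 2.0, (ly + ry) / 2.0)
            for (lx, ly), (rx, ry) in zip(left_boundary, right_boundary)]

# A straight 4 m-wide lane segment, 10 m long.
left = [(0.0, 0.0), (0.0, 10.0)]
right = [(4.0, 0.0), (4.0, 10.0)]
wps = generate_waypoints(left, right)
```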

US Pat. No. 10,798,303

TECHNIQUES TO COMPENSATE FOR MOVEMENT OF SENSORS IN A VEHICLE

TUSIMPLE, INC., San Dieg...

1. A method, comprising:
receiving, from a first set of sensors, a first set of sensor data of an area towards which a semi-trailer truck is being driven, wherein the first set of sensors are located on a roof of a cab of the semi-trailer truck;
receiving, from a second set of sensors, a second set of sensor data of the area, wherein the second set of sensors are located on a hood of the semi-trailer truck;
receiving, from a height sensor, a measured value indicative of a height of a rear portion of a cab of a semi-trailer truck relative to a chassis of the semi-trailer truck, wherein the height sensor is located at the rear portion of the cab;
determining, based on the measured value, a first correction value for the first set of sensor data and a second correction value for the second set of sensor data; and
compensating for movements of the first set of sensors and the second set of sensors by generating a first set of compensated sensor data and a second set of compensated sensor data,
wherein the first set of compensated sensor data is generated by adjusting the first set of sensor data based on the first correction value and
wherein the second set of compensated sensor data is generated by adjusting the second set of sensor data based on the second correction value.
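One plausible way to turn the measured cab-height deviation into a correction value, sketched here under assumptions the claim does not fix (a small-angle pitch model, a nominal height, a cab lever-arm length, and elevation-angle sensor data):

```python
import math

def pitch_correction(measured_height, nominal_height, cab_length):
    """Approximate cab pitch (radians) from the rear-cab height deviation:
    a higher-than-nominal rear of the cab tilts roof/hood sensors forward."""
    return math.atan2(measured_height - nominal_height, cab_length)

def compensate(sensor_elevations, correction):
    """Subtract the pitch correction from each reported elevation angle."""
    return [e - correction for e in sensor_elevations]

# Rear of the cab rides 5 cm above nominal over a 2.5 m lever arm.
corr = pitch_correction(measured_height=1.55, nominal_height=1.50, cab_length=2.5)
compensated = compensate([0.10, 0.12], corr)
```

In the claim, distinct correction values are derived for the roof and hood sensor sets; the sketch shows one such correction applied to one set.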

US Pat. No. 10,796,402

SYSTEM AND METHOD FOR FISHEYE IMAGE PROCESSING

TUSIMPLE, INC., San Dieg...

1. A system comprising:
a data processor; and
a fisheye image processing system, executable by the data processor, the fisheye image processing system being configured to:
receive fisheye image data from at least one fisheye lens camera associated with an autonomous vehicle, the fisheye image data representing at least one fisheye image frame;
partition the fisheye image frame into a plurality of image portions representing portions of the fisheye image frame;
warp each of the plurality of image portions to map an arc of a camera projected view into a line corresponding to a mapped target view, the mapped target view being generally orthogonal to a line between a camera center and a center of the arc of the camera projected view;
combine the plurality of warped image portions to form a combined resulting fisheye image data set representing recovered or distortion-reduced fisheye image data corresponding to the fisheye image frame;
generate auto-calibration data representing a correspondence between pixels in the at least one fisheye image frame and corresponding pixels in the combined resulting fisheye image data set; and
provide the combined resulting fisheye image data set as an output for other autonomous vehicle subsystems.
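Two ingredients of the claim above — partitioning the fisheye frame into portions and warping each portion from the fisheye projection toward a straight-line (pinhole-like) target view — can be hinted at with a heavily simplified 1-D sketch. It assumes an equidistant fisheye model (r = f·θ) and equal angular sectors; the real system warps full 2-D image portions:

```python
import math

def partition_angles(fov_deg, n_parts):
    """Split the fisheye field of view into n equal angular sectors,
    returning (start, end) angles in degrees for each image portion."""
    step = fov_deg / n_parts
    return [(i * step, (i + 1) * step) for i in range(n_parts)]

def rectify_radius(r_fisheye, f):
    """Map an equidistant-fisheye radial pixel distance (r = f*theta) to the
    pinhole radial distance (r' = f*tan(theta)) for the same viewing ray,
    i.e. straighten the curved fisheye projection toward a linear view."""
    theta = r_fisheye / f
    return f * math.tan(theta)

sectors = partition_angles(fov_deg=180.0, n_parts=3)
r_rectified = rectify_radius(r_fisheye=10.0, f=100.0)
```

The per-pixel correspondence implied by `rectify_radius` is the kind of mapping the claimed auto-calibration data would record between fisheye pixels and pixels of the combined result.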

US Pat. No. 10,782,693

PREDICTION-BASED SYSTEM AND METHOD FOR TRAJECTORY PLANNING OF AUTONOMOUS VEHICLES

TUSIMPLE, INC., San Dieg...

1. A system comprising:
a data processor; and
a prediction-based trajectory planning module, executable by the data processor, the prediction-based trajectory planning module being configured to perform a prediction-based trajectory planning operation for autonomous vehicles, the prediction-based trajectory planning operation being configured to:
receive training data and ground truth data from a training data collection system, the training data including perception data and context data corresponding to human driving behaviors;
perform a training phase to train a trajectory prediction module using the training data;
receive perception data associated with a host vehicle; and
perform an operational phase configured to extract host vehicle feature data and proximate vehicle context data from the perception data, use the trained trajectory prediction module to generate predicted trajectories for each of one or more proximate vehicles near the host vehicle, generate a first proposed trajectory for the host vehicle, determine if the predicted trajectories cause the first proposed trajectory to violate a pre-defined goal, upon determination that the first proposed trajectory violates the pre-defined goal, reject the first proposed trajectory and generate a second proposed trajectory for the host vehicle, use the trained trajectory prediction module to generate new predicted trajectories for each of one or more proximate vehicles based on a current context of the host vehicle, and determine if the new predicted trajectories cause the second proposed trajectory to violate the pre-defined goal.
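The operational phase above is a propose-predict-check loop: generate a trajectory, predict proximate-vehicle trajectories, and re-plan once if the prediction makes the proposal violate a goal. A control-flow sketch with stand-in callables (not the trained prediction module from the patent):

```python
def plan_trajectory(propose, predict, violates_goal, context):
    """Propose a host-vehicle trajectory; if predicted neighbor trajectories
    make it violate the goal, reject it and try a second proposal with fresh
    predictions. Returns an accepted trajectory or None."""
    first = propose(context)
    predicted = predict(context)
    if not violates_goal(first, predicted):
        return first
    # First proposal rejected: re-propose and re-predict in the new context.
    second = propose(context)
    predicted = predict(context)
    if not violates_goal(second, predicted):
        return second
    return None  # no acceptable trajectory found this planning cycle

# Toy stand-ins: the first proposal conflicts with predictions, the second does not.
proposals = iter(["swerve_left", "slow_down"])
result = plan_trajectory(
    propose=lambda ctx: next(proposals),
    predict=lambda ctx: ["neighbor_keeps_lane"],
    violates_goal=lambda traj, pred: traj == "swerve_left",
    context={},
)
```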

US Pat. No. 10,783,381

SYSTEM AND METHOD FOR VEHICLE OCCLUSION DETECTION

TUSIMPLE, INC., San Dieg...

1. A system comprising:
a data processor; and
an autonomous vehicle occlusion detection system, executable by the data processor, the autonomous vehicle occlusion detection system being configured to perform an autonomous vehicle occlusion detection operation for autonomous vehicles, the autonomous vehicle occlusion detection operation being configured to:
receive image data from an image data collection system associated with an autonomous vehicle; and
perform an operational phase including performing feature extraction on the image data, determine a presence of an extracted feature instance in multiple image frames of the image data by tracing the extracted feature instance back to a previous plurality of N frames relative to a current frame, apply a first trained classifier to the extracted feature instance if the extracted feature instance cannot be determined to be present in multiple image frames of the image data, and apply a second trained classifier to the extracted feature instance if the extracted feature instance can be determined to be present in multiple image frames of the image data.
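The classifier-selection logic above — trace an extracted feature instance back over the previous N frames, then route it to one of two trained classifiers depending on whether it persists — can be sketched as follows (feature tracking is reduced to id-set membership for illustration):

```python
def classify_feature(feature_id, frame_history, n, single_frame_clf, multi_frame_clf):
    """Apply multi_frame_clf if the feature instance appears in each of the
    previous n frames, otherwise fall back to single_frame_clf.
    frame_history: list of sets of feature ids, most recent frame last."""
    recent = frame_history[-n:]
    tracked = len(recent) == n and all(feature_id in frame for frame in recent)
    return multi_frame_clf(feature_id) if tracked else single_frame_clf(feature_id)

# "f1" persists across all three frames; "f2" does not.
history = [{"f1"}, {"f1", "f2"}, {"f1"}]
label = classify_feature("f1", history, n=3,
                         single_frame_clf=lambda f: "first_classifier",
                         multi_frame_clf=lambda f: "second_classifier")
```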

US Pat. No. 10,777,001

METHOD AND DEVICE OF LABELING LASER POINT CLOUD

TUSIMPLE, INC., San Dieg...

1. A method of labeling a laser point cloud, comprising:
receiving, by a device for labeling laser point clouds, data of the laser point cloud;
constructing a 3D scene and establishing a 3D coordinate system corresponding to the 3D scene;
converting a coordinate of each laser point in the laser point cloud into a 3D coordinate in the 3D coordinate system;
mapping laser points in the laser point cloud into the 3D scene using the 3D coordinates of the laser points; and
labeling the laser points in the 3D scene, including:
determining types to which the laser points in the 3D scene belong, including:
using a first mode or a second mode,
wherein the first mode comprises:
generating a 3D selecting box according to a first size information set by a user;
moving a position of the 3D selecting box in the 3D scene in response to a first moving instruction input by the user; and
in response to receiving a labeling instruction comprising a target type, taking laser points within the 3D selecting box as target laser points and taking the target type in the labeling instruction as a type to which the target laser points belong;
and wherein the second mode comprises:
generating a ray between a start point and an end point;
taking laser points meeting the following conditions as target laser points: a distance of a laser point from the start point is between a first distance threshold and a second distance threshold, and a distance of the laser point from the ray is less than or equal to a third distance threshold; and
in response to receiving a labeling instruction comprising a target type, taking the target type in the labeling instruction as a type to which the target laser points belong.
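The second mode's selection test — keep laser points whose distance from the start point lies between two thresholds and whose distance from the ray is within a third threshold — is straightforward geometry. A sketch, assuming 3-tuple points and perpendicular distance to the ray (with points behind the ray origin measured to the origin itself):

```python
import math

def select_points_by_ray(points, start, end, d_min, d_max, ray_tol):
    """Second-mode selection: keep points whose distance from `start` lies in
    [d_min, d_max] and whose distance from the ray start->end is <= ray_tol."""
    sx, sy, sz = start
    dx, dy, dz = end[0] - sx, end[1] - sy, end[2] - sz
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    ux, uy, uz = dx / norm, dy / norm, dz / norm   # unit ray direction
    selected = []
    for p in points:
        vx, vy, vz = p[0] - sx, p[1] - sy, p[2] - sz
        dist_start = math.sqrt(vx * vx + vy * vy + vz * vz)
        t = vx * ux + vy * uy + vz * uz            # projection along the ray
        if t < 0:
            dist_ray = dist_start                  # point lies behind the ray origin
        else:
            dist_ray = math.sqrt(max(dist_start ** 2 - t ** 2, 0.0))
        if d_min <= dist_start <= d_max and dist_ray <= ray_tol:
            selected.append(p)
    return selected

pts = [(5.0, 0.1, 0.0), (5.0, 3.0, 0.0), (0.5, 0.0, 0.0)]
hits = select_points_by_ray(pts, start=(0, 0, 0), end=(1, 0, 0),
                            d_min=1.0, d_max=10.0, ray_tol=0.5)
```

Only the first point passes: the second is too far from the ray, and the third is closer to the start point than the first distance threshold.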