3D Folder structure
Updated on January 3rd, 2025
A well-structured data and folder system is essential for ensuring that 3D sensor fusion projects run smoothly and effectively. This document provides a clear overview and detailed description of the data and folder structures necessary for successful 3D sensor fusion.
We will explore how to properly organize and store data from various sensors, such as LiDARs and cameras, within a coherent and accessible folder structure. Topics include the management of PCD files for point cloud data, the integration of calibration files for both cameras and LiDARs, and the correct handling of vehicle pose information in relation to both vehicle and world coordinate systems.
What's inside each folder?
The file structure is organized into a root folder named Task/Scene, where a Task represents the unit of work being carried out. In this context, a task is a specific project or activity that involves collecting and processing data from multiple sources, such as cameras and LiDAR sensors.
Camera
Each camera has its own folder which stores the images captured by that camera.
Calibration
This image represents the folder containing the calibration file for the "front" camera. Inside this folder, there is a JSON file that includes the camera's calibration data. In sensor fusion projects, camera calibration is crucial for ensuring the accuracy of data integration from various sensors. Proper calibration allows the system to align and combine data from different sensors accurately.
Transformation
Transformation can be represented as a quaternion or as a matrix. See the examples below.
Quaternion
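For example, a quaternion-based transformation uses the same position and rotation fields as the other calibration files in this document (mock values):
{
  "x": 0.058,
  "y": 0.093,
  "z": 0.0057,
  "rotation_w": 0.9999,
  "rotation_x": 0.0005,
  "rotation_y": -0.0004,
  "rotation_z": 0.1609
}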
Matrix
The transformation value can also be given by a transformation matrix, which can be either a 3x3 or a 4x4 matrix.
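For example, a 3x3 rotation matrix given in row-major order, in the same format as the LIDAR transform example later in this document (mock values):
[
  { "matrix": [1, 2, 3, 4, 5, 6, 7, 8, 9] }
]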
Camera Type
This table lists the different camera types that are compatible with our platform.
Name | Required calibration data |
pinhole | f_x, f_y, c_x, c_y |
deformed_cylindrical | f_x, f_y, c_x, c_y, cut_angle_lower, cut_angle_upper |
cylinder | f_x, f_y, c_x, c_y |
💡 Note that cut_angle_lower and cut_angle_upper have default values of -20 and 20, respectively.
JSON calibration file plus camera type
The default values for all parameters, including x, y, z, rotation_w, rotation_x, rotation_y, rotation_z, f_x, f_y, c_x, and c_y, will be set to 0.
Mock values example
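A minimal sketch, assuming the camera type is declared alongside the position, rotation, and intrinsic parameters; the camera_type key name is illustrative, so please confirm the exact schema with the Sama team (mock values):
{
  "x": 0.058,
  "y": 0.093,
  "z": 0.0057,
  "rotation_w": 0.9999,
  "rotation_x": 0.0005,
  "rotation_y": -0.0004,
  "rotation_z": 0.1609,
  "f_x": 1266.417,
  "f_y": 1266.417,
  "c_x": 816.267,
  "c_y": 491.507,
  "camera_type": "deformed_cylindrical",
  "cut_angle_lower": -20,
  "cut_angle_upper": 20
}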
Focal length
The f_x and f_y parameters specify the camera's focal length, and c_x and c_y its optical center.
📘 Note
By default, the camera type is set to pinhole. If you plan to use a different camera type, you need to enter its value.
Distortion model
Distortion is optional. The Sama platform supports distortion parameters, including k1, k2, k3, k4, p1, and p2, which account for lens distortions. These parameters correct image deformations caused by lens imperfections and are used to improve the accuracy of annotation projections.
The following table provides an overview of the supported distortion models for sensor fusion projects.
Name | Distortion model data |
none | Default value |
mei | Requires xi, k, and p |
kannala_brandt | Requires k coefficients (usually 4) |
brown_conrady | Requires k and p parameters (usually 2 of each) |
JSON calibration file with camera type plus distortion model
The default values for all parameters, including position (x, y, z), rotation (rotation_w, rotation_x, rotation_y, rotation_z), intrinsic camera parameters (f_x, f_y, c_x, c_y), and distortion coefficients (k1, k2, k3, k4, p1, p2), will be set to 0.
📘 Note
The default distortion model is none. If you plan to use a different distortion model, please specify it as shown in the next example.
Mock values example
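A minimal sketch combining a camera type with the kannala_brandt distortion model described below; the camera_type and distortion_model key names are illustrative, so please confirm the exact schema with the Sama team (mock values):
{
  "x": 0.058,
  "y": 0.093,
  "z": 0.0057,
  "rotation_w": 0.9999,
  "rotation_x": 0.0005,
  "rotation_y": -0.0004,
  "rotation_z": 0.1609,
  "f_x": 1266.417,
  "f_y": 1266.417,
  "c_x": 816.267,
  "c_y": 491.507,
  "camera_type": "pinhole",
  "distortion_model": "kannala_brandt",
  "k1": -0.0128,
  "k2": 0.0203,
  "k3": -0.0086,
  "k4": 0.0011,
  "p1": 0.0,
  "p2": 0.0
}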
This camera calibration file specifies the Kannala-Brandt distortion model, which is designed to correct both radial and tangential distortions in images captured by the camera. The coefficients (k1, k2, k3, k4, p1, p2) are used to correct the distortions introduced by the lens, ensuring that the images are as geometrically accurate as possible. This calibration is critical in sensor fusion projects where precise image data is essential for integrating information from multiple sensors.
Tangential distortion coefficients
The p1 and p2 parameters are the tangential distortion coefficients, while the k coefficients correct radial distortion.
📘 Note
If you have a distortion model that we do not reference in our documentation, please provide undistorted images.
Lidar
Single Lidar
This image represents the structure of a “single-lidar” folder. Inside the main lidar folder, there is a PCD (Point Cloud Data) file that stores the 3D point cloud data captured by the LiDAR sensor. Additionally, there is a transformation file, which can have any name but must end with _transform.json.
What's inside the LIDAR calibration file (transform file)?
Quaternion
Mock values example
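A minimal sketch of a quaternion-based transform file (mock values):
{
  "x": 1.2,
  "y": -0.35,
  "z": 1.8,
  "rotation_w": 0.9999,
  "rotation_x": 0.0008,
  "rotation_y": -0.003,
  "rotation_z": 0.0161
}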
Quaternions consist of four components: x, y, z (the vector part) and w (the scalar part), which are given in the rotation_x, rotation_y, rotation_z, and rotation_w fields. The x, y, and z fields specify the position.
Matrix
The rotation value can also be given by a transformation matrix, which can be either a 3x3 or a 4x4 matrix.
[
  { "matrix": [1, 2, 3, 4, 5, 6, 7, 8, 9] }
]
Multi Lidar
This image represents the structure of a "multi-lidar" folder. Inside the main "lidar" folder, there are subfolders for each LIDAR sensor, such as "lidar_front" and "lidar_back." Each of these subfolders contains a PCD (Point Cloud Data) file that stores the 3D point cloud data captured by the respective sensor.
In addition to the sensor-specific folders, the "lidar" folder also contains transform files, such as "lidar_front_transform.json" and "lidar_back_transform.json." These JSON files contain the transformation data necessary to correctly align the point cloud data from each LIDAR sensor within the overall 3D coordinate system.
📘 Note
The LIDAR calibration files (the transform files) must be stored at the same level as the other LIDAR folders within the main "lidar" directory.
Naming convention
Additionally, the name of each transform file must match the name of the corresponding LIDAR folder, following the format: <name_of_folder>_transform.json. This structure ensures that the data from each sensor is properly organized and can be accurately integrated into the overall project.
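For example, a multi-lidar layout with sensors named lidar_front and lidar_back (the PCD file names are illustrative) looks like this:
lidar/
├── lidar_front/
│   └── frame_0001.pcd
├── lidar_back/
│   └── frame_0001.pcd
├── lidar_front_transform.json
└── lidar_back_transform.json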
Supported point cloud formats
Sama supports these 3D point cloud data formats:
- .pcd
- .las
- .ply
- .pts
- .txt (csv)
💡 Our platform is optimized for PCD files, and we highly recommend using this format for best results. While other data formats, including custom formats, can also be supported, please reach out to the Sama team for assistance with these.
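For reference, here is a minimal ASCII PCD file with three points; this is a sketch of the standard PCD v0.7 layout rather than a Sama-specific requirement:
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 3
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 3
DATA ascii
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0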
Dynamic Object Motion Compensation
Dynamic Object Motion Compensation adjusts for the movement of dynamic objects as they are projected from 3D or camera space into the image. This compensation relies on the timestamps of lidar frames and camera images, which allow us to linearly interpolate the cuboid's velocity to match the timing of the camera frame for accurate projection.
The calibration file is essentially the same as before, but now includes a new timestamp field.
📘 Note
We support a single JSON calibration file containing an array of objects, with each element representing a frame.
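A minimal sketch of such a file, with one object per frame; all values are mock, and the timestamps are UNIX epoch milliseconds:
[
  {
    "x": 0.058,
    "y": 0.093,
    "z": 0.0057,
    "rotation_w": 0.9999,
    "rotation_x": 0.0005,
    "rotation_y": -0.0004,
    "rotation_z": 0.1609,
    "f_x": 1266.417,
    "f_y": 1266.417,
    "c_x": 816.267,
    "c_y": 491.507,
    "timestamp": 1714003200000
  },
  {
    "x": 0.059,
    "y": 0.094,
    "z": 0.0058,
    "rotation_w": 0.9999,
    "rotation_x": 0.0005,
    "rotation_y": -0.0004,
    "rotation_z": 0.1611,
    "f_x": 1266.417,
    "f_y": 1266.417,
    "c_x": 816.267,
    "c_y": 491.507,
    "timestamp": 1714003200100
  }
]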
Vehicle pose data
Providing vehicle pose data
The vehicle_poses folder contains transformation files in .json format. These files provide the necessary transformations from various coordinate systems, such as the vehicle or sensor coordinate system, to the world coordinate system required by the Sama platform. This vehicle pose data, also referred to as world odometry data, enables accurate mapping and alignment within the global frame.
📘 Note
The vehicle_poses folder and its transformation files are only required if vehicle poses (or odometry data) are present. If your data does not include vehicle poses, this folder and its contents are unnecessary.
Vehicle pose transformation JSON file
{
"x":4.4224746916688495,
"y":1.439882718607162,
"z":0.0851939640708257,
"rotation_w":0.986960465714749,
"rotation_x":0.0008172231937295278,
"rotation_y":-0.003002087633395137,
"rotation_z":0.16093277705992529
}
Camera Calibration Files and Vehicle Poses
The calibration files include both intrinsic and extrinsic parameters.
- Intrinsic parameters: These transform 3D points into 2D image points and describe the camera's internal properties like focal length and distortion.
- Extrinsic parameters: These define the camera's position and orientation relative to the vehicle's pose.
The diagram illustrates how intrinsic parameters project 3D points onto the 2D image plane, while extrinsic parameters position and orient the camera in relation to the vehicle and the world frame.
Providing your data in the world coordinate system
If the provided data is in the world coordinate system, you need to include the timestamp field in the calibration JSON file.
Camera Calibration files
The calibration files contain both intrinsic and extrinsic parameters.
- Intrinsic parameters: These are responsible for transforming points from the 3D world into the 2D image plane of the camera, defining properties like the focal length and optical center.
- Extrinsic parameters: These describe the camera's position and orientation within the 3D world, helping to align the camera's perspective with the global coordinate system.
This diagram demonstrates how intrinsic and extrinsic parameters function when there is no vehicle pose data, emphasizing the transformation from the 3D world to the 2D image space.
Camera Calibration JSON plus timestamp
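A minimal sketch with mock values:
{
  "x": 0.058,
  "y": 0.093,
  "z": 0.0057,
  "rotation_w": 0.9999,
  "rotation_x": 0.0005,
  "rotation_y": -0.0004,
  "rotation_z": 0.1609,
  "f_x": 1266.417,
  "f_y": 1266.417,
  "c_x": 816.267,
  "c_y": 491.507,
  "timestamp": 1714003200000
}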
⚠️ The timestamp value must be expressed as a UNIX epoch in milliseconds.
📘 Note
This example does not include the camera type or distortion model, but these fields can be added if needed.
Providing your data in the local coordinate system
If the provided data is in the local coordinate system, you don't need to include the timestamp field in the calibration JSON file.
Camera Calibration JSON example
{
"x":0.058176723826541744,
"y":0.09375544726948734,
"z":0.005762090254080327,
"rotation_w":0.9999996542476844,
"rotation_x":0.0005840110577795655,
"rotation_y":-0.000483065574605817,
"rotation_z":0.16093277705992529
}
SLAM technique
If you provide your data in a local coordinate system, you can ask our team to use SLAM (Simultaneous Localization and Mapping) to determine the vehicle poses for you. This approach allows us to calculate the necessary transformations, so a lack of vehicle pose data won't be an obstacle; we have a solution ready to meet your needs.