3D Folder structure

Updated at February 19th, 2025

 

A well-structured data and folder system is essential for ensuring that 3D sensor fusion projects run smoothly and effectively. This document is designed to provide a clear vision and detailed description of the data and folder structures necessary for successful 3D sensor fusion.

We will explore how to properly organize and store data from various sensors, such as LiDARs and cameras, within a coherent and accessible folder structure. Topics include the management of PCD files for point cloud data, the integration of calibration files for both cameras and LiDARs, and the correct handling of vehicle pose information in relation to both vehicle and world coordinate systems.

What's inside each folder?

The file structure is organized into a root folder named Task/Scene, where a Task represents the unit of work being carried out. In this context, a task is a specific project or activity that involves collecting and processing data from multiple sources, such as cameras and LiDAR sensors.
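For illustration, a task with one camera and two LiDAR sensors might be laid out as in the hypothetical tree below (all folder and file names here are examples rather than requirements; each part is described in the sections that follow):

task_scene/
├── camera_front/
│   ├── 0001.jpg
│   └── 0002.jpg
├── calibration/
│   └── camera_front.json
├── lidar/
│   ├── lidar_front/
│   │   └── 0001.pcd
│   ├── lidar_front_transform.json
│   ├── lidar_back/
│   │   └── 0001.pcd
│   └── lidar_back_transform.json
└── vehicle_poses/
    └── 0001.json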

Camera 

Each camera has its own folder, which stores the images captured by that camera.

Calibration

This image represents the folder containing the calibration file for the "front" camera. Inside this folder, there is a JSON file that includes the camera's calibration data. In sensor fusion projects, camera calibration is crucial for ensuring the accuracy of data integration from various sensors. Proper calibration allows the system to align and combine data from different sensors accurately.

Transformation

Transformation can be represented as a quaternion or as a matrix. See the examples below.

Quaternion

Mock values example

{
   "x":0.058176723826541744,
   "y":0.09375544726948734,
   "z":0.005762090254080327,
   "rotation_w":0.9999996542476844,
   "rotation_x":0.0005840110577795655,
   "rotation_y":-0.000483065574605817,
   "rotation_z":0.00034217429263552914
}

The rotation_w, rotation_x, rotation_y, and rotation_z fields hold the four components of the quaternion, while x, y, and z give the position (translation).
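For reference, a unit quaternion maps to a 3x3 rotation matrix. The Python sketch below is purely illustrative (it is not part of the platform) and shows how the rotation_* fields relate to a matrix:

import numpy as np

def quaternion_to_matrix(w, x, y, z):
    # Normalize first so slightly denormalized inputs still yield a valid rotation.
    n = (w*w + x*x + y*y + z*z) ** 0.5
    w, x, y, z = w/n, x/n, y/n, z/n
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# The rotation_* values from the mock example above, in (w, x, y, z) order.
R = quaternion_to_matrix(0.9999996542476844, 0.0005840110577795655,
                         -0.000483065574605817, 0.00034217429263552914)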

⚠️ Warning  
All calibration files should be provided. If a particular calibration is not critical or requires less attention, you may use the default JSON provided to complete the calibration document. This ensures that all calibration files are accounted for and the documentation remains consistent.

 

If the calibration changes between images, please add a calibration file per frame (image sequence).

Matrix

The transformation value can also be given by a transformation matrix, which can either be a 3x3 matrix or a 4x4 matrix.

[
   { "matrix": [1, 2, 3, 4, 5, 6, 7, 8, 9] }
]
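The example above shows the 9 values of a 3x3 matrix. For a 4x4 matrix we would expect 16 values; the shape below is our assumption (row-major order, translation in the last column), so please confirm the expected ordering with the Sama team if in doubt:

[
   { "matrix": [1, 0, 0, 2.5,
                0, 1, 0, 0.0,
                0, 0, 1, 0.3,
                0, 0, 0, 1.0] }
]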

Camera Type

This chart lists the different camera types that are compatible with our platform.

Name                  Required calibration data
pinhole               f_x, f_y, c_x, c_y
deformed_cylindrical  f_x, f_y, c_x, c_y, cut_angle_lower, cut_angle_upper
cylinder              f_x, f_y, c_x, c_y

💡 Note that cut_angle_lower and cut_angle_upper have default values of -20 and 20, respectively.

JSON calibration file plus camera type 

The default values for all parameters, including x, y, z, rotation_w, rotation_x, rotation_y, rotation_z, f_x, f_y, c_x, and c_y, will be set to 0.

Mock values example

{
   "x":0.058176723826541744,
   "y":0.09375544726948734,
   "z":0.005762090254080327,
   "rotation_w":0.9999996542476844,
   "rotation_x":0.0005840110577795655,
   "rotation_y":-0.000483065574605817,
   "rotation_z":0.00034217429263552914,
   "camera_type":"deformed_cylindrical",
   "f_x":3255.72,
   "f_y":3255.72,
   "c_x":1487.64,
   "c_y":1014.38
}

Focal Length
Expressed in pixels, the focal length represents the distance between the camera lens and the image sensor. It is typically provided as two values, f_x and f_y, corresponding to the focal lengths in the horizontal and vertical directions, respectively.

Principal Point
Also specified in pixels, the principal point denotes the image coordinate corresponding to the camera's optical center. It is represented by the values c_x and c_y, indicating the offsets in the horizontal and vertical directions from the image's top-left corner.
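To illustrate how these intrinsics are used, here is a minimal pinhole projection sketch in Python. It is illustrative only, not the platform's implementation, and assumes a 3D point already expressed in the camera frame with Z pointing forward:

def project_pinhole(X, Y, Z, f_x, f_y, c_x, c_y):
    # Perspective divide, then scale by the focal length and
    # shift by the principal point.
    u = f_x * (X / Z) + c_x
    v = f_y * (Y / Z) + c_y
    return u, v

# Mock intrinsics from the example above; the 3D point is hypothetical.
u, v = project_pinhole(1.0, 0.5, 10.0, 3255.72, 3255.72, 1487.64, 1014.38)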

📘 Note

The default camera type is pinhole. If you plan to use a different camera type, please specify it as shown in the example above.

Distortion model

Distortion is optional. The Sama platform supports distortion parameters, including k1, k2, k3, k4, p1, and p2, which account for lens distortions. These parameters correct for image deformations caused by lens imperfections and are used to improve the accuracy of annotation projections.

The following charts provide an overview of the supported distortion models for sensor fusion projects.

Name            Distortion model data
none            Default value
mei             Requires xi, k, and p
kannala_brandt  Requires k coefficients (usually 4)
brown_conrady   Requires k and p parameters (usually 2 of each)

JSON calibration file with camera type plus distortion model

The default values for all parameters, including position (x, y, z), rotation (rotation_w, rotation_x, rotation_y, rotation_z), intrinsic camera parameters (f_x, f_y, c_x, c_y), and distortion coefficients (k1, k2, k3, k4, p1, p2), will be set to 0.

📘 Note

The default distortion model is none. If you plan to use a different distortion model, please specify it as shown in the next example.

 

Mock values example

{
   "x":0.058176723826541744,
   "y":0.09375544726948734,
   "z":0.005762090254080327,
   "rotation_w":0.9999996542476844,
   "rotation_x":0.0005840110577795655,
   "rotation_y":-0.000483065574605817,
   "rotation_z":0.00034217429263552914,
   "f_x":3255.72,
   "f_y":3255.72,
   "c_x":1487.64,
   "c_y":1014.38,
   "distortion_model":"kannala_brandt",
   "k1":0.03016,
   "k2":-0.27366,
   "k3":0.00109,
   "k4":-0.00192,
   "p1":0.01,
   "p2":0.001
}

This camera calibration file specifies the Kannala-Brandt distortion model, which is designed to correct both radial and tangential distortions in images captured by the camera. The coefficients (k1, k2, k3, k4, p1, p2) are used to correct the distortions introduced by the lens, ensuring that the images are as geometrically accurate as possible. This calibration is critical in sensor fusion projects where precise image data is essential for integrating information from multiple sensors.  

Radial distortion parameters

  • "k1": First radial distortion coefficient
  • "k2": Second radial distortion coefficient
  • "k3": Third radial distortion coefficient
  • "k4": Fourth radial distortion coefficient

Tangential distortion coefficients

  • "p1": First tangential distortion coefficient
  • "p2": Second tangential distortion coefficient

📘 Note

If you use a distortion model that is not referenced in our documentation, please provide undistorted images.

Lidar

Single Lidar

This image represents the structure of a “single-lidar” folder. Inside the main lidar folder, there is a PCD (Point Cloud Data) file that stores the 3D point cloud data captured by the LiDAR sensor. Additionally, there is a transformation file, which can have any name but must end with _transform.json.

What's inside the LIDAR calibration file (transform file)?

Quaternion

 

Mock values example

[
   {
      "rotation_x":0,
      "rotation_y":0,
      "rotation_z":-0.706825181105366,
      "rotation_w":0.7073882691671998,
      "x":-2.53,
      "y":0.008,
      "z":0.303
   }
]

Quaternions consist of four components: rotation_x, rotation_y, rotation_z (the vector part) and rotation_w (the scalar part).

Position  
These values represent the position of the LIDAR sensor in 3D space relative to the reference frame.

  • "x": Position along the X-axis.
  • "y": Position along the Y-axis.
  • "z": Position along the Z-axis.

📘 Note

The timestamp is an optional field accepting a UNIX epoch time.
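As an illustration, the Python sketch below applies such a transform to lidar points using the mock values above (SciPy is our choice for the quaternion math, not a platform requirement):

import numpy as np
from scipy.spatial.transform import Rotation

# Rotation from the mock example; SciPy expects scalar-last [x, y, z, w] order.
R = Rotation.from_quat([0.0, 0.0, -0.706825181105366, 0.7073882691671998]).as_matrix()
t = np.array([-2.53, 0.008, 0.303])

# Hypothetical lidar-frame points of shape (N, 3): rotate, then translate.
points = np.array([[1.0, 2.0, 0.5]])
points_ref = points @ R.T + t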


Matrix

The rotation value can also be given by a transformation matrix, which can either be a 3x3 matrix or a 4x4 matrix.

[
   { "matrix": [1, 2, 3, 4, 5, 6, 7, 8, 9] }
]

Multi Lidar

This image represents the structure of a "multi-lidar" folder. Inside the main "lidar" folder, there are subfolders for each LIDAR sensor, such as "lidar_front" and "lidar_back." Each of these subfolders contains a PCD (Point Cloud Data) file that stores the 3D point cloud data captured by the respective sensor.

In addition to the sensor-specific folders, the "lidar" folder also contains transform files, such as "lidar_front_transform.json" and "lidar_back_transform.json." These JSON files contain the transformation data necessary to correctly align the point cloud data from each LIDAR sensor within the overall 3D coordinate system.

📘 Note

The LIDAR calibration files (the transform files) must be stored at the same level as the other LIDAR folders within the main "lidar" directory.

Naming convention  
Additionally, the name of each transform file must match the name of the corresponding LIDAR folder, following the format: <name_of_folder>_transform.json. This structure ensures that the data from each sensor is properly organized and can be accurately integrated into the overall project.
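Before uploading, you can sanity-check this convention with a small script such as the Python sketch below (the lidar root path is hypothetical):

from pathlib import Path

lidar_root = Path("task_scene/lidar")  # hypothetical path to the main "lidar" folder

# Each lidar subfolder must have a sibling <name>_transform.json file.
for folder in sorted(p for p in lidar_root.iterdir() if p.is_dir()):
    transform = lidar_root / f"{folder.name}_transform.json"
    if not transform.exists():
        print(f"Missing transform file: expected {transform.name}")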

 

Supported point cloud formats

Sama supports these 3D point cloud data formats:

  • .pcd
  • .las
  • .ply
  • .pts
  • .txt (csv)

💡 Our platform is optimized for PCD files, and we highly recommend using this format for best results. While other data formats, including custom formats, can also be supported, please reach out to the Sama team for assistance with these.
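If your clouds are in one of the other formats, one possible way to convert them to PCD is with the open-source Open3D library (our suggestion, not a platform requirement; note that Open3D does not read .las directly):

import open3d as o3d

# Read a cloud (.ply, .pts, and others) and rewrite it as .pcd.
cloud = o3d.io.read_point_cloud("scan_0001.ply")  # hypothetical input file
o3d.io.write_point_cloud("scan_0001.pcd", cloud)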


Dynamic Object Motion Compensation

Dynamic Object Motion Compensation adjusts for the movement of dynamic objects as they are projected from 3D space into the camera image. This compensation relies on the timestamps of the lidar frames and camera images, which allow us to linearly interpolate the cuboid's pose, based on its velocity, to match the timing of the camera frame for accurate projection.

[
   {
      "x":-1.5131402791467292,
      "y":-0.3024533144787727,
      "z":2.08064533234873,
      "rotation_w":-0.451111949393,
      "rotation_x":-0.008355095746999998,
      "rotation_y":0.012084912164000004,
      "rotation_z":0.892346022379,
      "timestamp":1712615518000,
      "distortion_model":"pinhole",
      "k1":0.033714,
      "k2":-0.363271,
      "k3":0,
      "k4":0,
      "k5":0,
      "k6":0,
      "p1":0,
      "p2":0,
      "p3":0,
      "p4":0,
      "camera_type":"",
      "f_x":2067.719432,
      "f_y":2067.719432,
      "c_x":972.760831,
      "c_y":650.54285
   },
   "..."
]

The calibration file is essentially the same as before but now includes a new timestamp field.  
⚠️ The timestamp value must be expressed as a UNIX epoch in milliseconds

📘 Note

Note that we support a single JSON calibration file containing an array of objects, with each element representing a frame.
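As a minimal sketch of the idea (the platform's actual interpolation may differ), the Python snippet below linearly interpolates a cuboid's position between two lidar timestamps to the time of the camera image:

import numpy as np

def lerp_position(t_cam, t0, p0, t1, p1):
    # Interpolation weight: where the camera timestamp falls
    # between the two lidar frames.
    alpha = (t_cam - t0) / (t1 - t0)
    return (1 - alpha) * np.asarray(p0) + alpha * np.asarray(p1)

# Hypothetical cuboid centers at two lidar frames (epoch milliseconds),
# sampled at the camera image's timestamp.
p_cam = lerp_position(1712615518050,
                      1712615518000, [10.0, 2.0, 0.5],
                      1712615518100, [10.4, 2.1, 0.5])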


Vehicle pose data

Providing vehicle pose data

The vehicle_poses folder contains transformation files in .json format. These files provide the necessary transformations from various coordinate systems, such as the vehicle or sensor coordinate system, to the world coordinate system required by the Sama platform. This vehicle pose data, also referred to as world odometry data, enables accurate mapping and alignment within the global frame.

📘 Note

The vehicle_poses folder and its transformation files are only required if vehicle poses (or odometry data) are present. If your data does not include vehicle poses, this folder and its contents are unnecessary. 

 

Vehicle pose transformation JSON file

{
   "x":4.4224746916688495,
   "y":1.439882718607162,
   "z":0.0851939640708257,
   "rotation_w":0.986960465714749,
   "rotation_x":0.0008172231937295278,
   "rotation_y":-0.003002087633395137,
   "rotation_z":0.16093277705992529
}

Camera Calibration Files and Vehicle Poses

The calibration files include both intrinsic and extrinsic parameters.

  • Intrinsic parameters: These transform 3D points into 2D image points and describe the camera's internal properties like focal length and distortion.
  • Extrinsic parameters: These define the camera's position and orientation relative to the vehicle frame.

The diagram illustrates how intrinsic parameters project 3D points onto the 2D image plane, while extrinsic parameters position and orient the camera in relation to the vehicle and the world frame.
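To make the chain concrete, here is a minimal Python sketch that composes the two transforms as 4x4 homogeneous matrices. The conventions are our assumption (camera-to-vehicle extrinsics and a vehicle-to-world pose), and the rotations are left as identity to keep the example short:

import numpy as np

def to_homogeneous(R, t):
    # Pack a 3x3 rotation and a translation into a 4x4 homogeneous matrix.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical transforms; translations are rounded from the JSON examples
# above, and in practice the rotations come from the rotation_* fields.
T_cam_to_vehicle = to_homogeneous(np.eye(3), [0.058, 0.094, 0.006])
T_vehicle_to_world = to_homogeneous(np.eye(3), [4.422, 1.440, 0.085])

# Chaining the two takes a camera-frame point directly to world coordinates.
T_cam_to_world = T_vehicle_to_world @ T_cam_to_vehicle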


Providing your data in the world coordinate system

If the provided data is in the world coordinate system, you need to include the timestamp field in the calibration JSON file.

Camera Calibration files 

The calibration files contain both intrinsic and extrinsic parameters.

  • Intrinsic parameters: These are responsible for transforming points from the 3D world into the 2D image plane of the camera, defining properties like the focal length and optical center.
  • Extrinsic parameters: These describe the camera's position and orientation within the 3D world, helping to align the camera's perspective with the global coordinate system.


This diagram demonstrates how intrinsic and extrinsic parameters function when there is no vehicle pose data, emphasizing the transformation from the 3D world to the 2D image space.

Camera Calibration JSON plus timestamp

 

{
   "x":0.058176723826541744,
   "y":0.09375544726948734,
   "z":0.005762090254080327,
   "rotation_w":0.9999996542476844,
   "rotation_x":0.0005840110577795655,
   "rotation_y":-0.000483065574605817,
   "rotation_z":0.16093277705992529,
   "timestamp":1712615518000
}

⚠️ The timestamp value must be expressed as a UNIX epoch in milliseconds

📘 Note

Note that this example does not include the camera type or distortion model, but these fields can be added if needed.


Providing your data in the local coordinate system

If the provided data is in the local coordinate system, you don't need to include the timestamp field in the calibration JSON file.

 

Camera Calibration files 

The calibration files contain both intrinsic and extrinsic parameters.

  • Intrinsic parameters: These are responsible for transforming points from the 3D world into the 2D image plane of the camera, defining properties like the focal length and optical center.
  • Extrinsic parameters: These describe the camera's position and orientation within the 3D world, helping to align the camera's perspective with the global coordinate system.


This diagram demonstrates how intrinsic and extrinsic parameters function when there is no vehicle pose data, emphasizing the transformation from the 3D world to the 2D image space.

Camera Calibration JSON example

{
   "x":0.058176723826541744,
   "y":0.09375544726948734,
   "z":0.005762090254080327,
   "rotation_w":0.9999996542476844,
   "rotation_x":0.0005840110577795655,
   "rotation_y":-0.000483065574605817,
   "rotation_z":0.16093277705992529
}

SLAM technique

If you provide your data in the local coordinate system, you can ask our team to use SLAM (Simultaneous Localization and Mapping) to determine the vehicle poses for you. This approach allows us to calculate the necessary transformations, so a lack of vehicle pose data won't be an obstacle; we have a solution ready to meet your needs.
