Sama Glossary
Updated at March 4th, 2024
This is an auto-generated article of all your definitions within the glossary.
Glossary
-
Attribute Propagation
Attribute propagation refers to the case in which an attribute’s value set in one frame is carried over to the next frame by default, without requiring the labeller to (re-)set it. The labeller should nevertheless check that the propagated value is correct: propagation can introduce wrong labels when the ground truth value changes in a frame and the labeller does not review it.
-
Cartesian coordinate system
Defines a position as the linear distance from the origin on two or three mutually perpendicular axes. The origin is the point where the axes intersect; points along the axes are specified by a pair (x, y) or triplet (x, y, z) of numbers, and both positive and negative directions (relative to the origin) can be specified in each axis. In a right-handed Cartesian coordinate system, such as the one defined in ISO 8855, the x-axis points forward in the car’s direction of travel, the y-axis points left, and the z-axis points up from the ground. This coordinate system may be local to the vehicle or device sensing the surroundings, or may be a world or global coordinate system.
-
CLI
Command Line Interface
-
extrinsic matrix
Extrinsic parameters define the location and orientation of the camera with respect to the world frame. The sensor's extrinsic matrix, made up of a rotation matrix and translation vector, is used to convert point cloud data to a world coordinate system or global frame of reference.
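As an illustration (not a description of any particular sensor pipeline), the sketch below assembles a 4x4 homogeneous extrinsic matrix from a rotation matrix and a translation vector, then uses it to map a point from the sensor frame into the world frame. All values are made up for the example; real extrinsics come from sensor calibration.

```python
import math

def extrinsic_matrix(rotation, translation):
    """Assemble a 4x4 homogeneous extrinsic matrix from a 3x3
    rotation matrix and a 3-element translation vector."""
    return [
        rotation[0] + [translation[0]],
        rotation[1] + [translation[1]],
        rotation[2] + [translation[2]],
        [0.0, 0.0, 0.0, 1.0],
    ]

def transform_point(matrix, point):
    """Apply the extrinsic matrix to a 3D point (sensor frame -> world frame)."""
    x, y, z = point
    p = [x, y, z, 1.0]  # homogeneous coordinates
    return [sum(matrix[r][c] * p[c] for c in range(4)) for r in range(3)]

# Example pose: a 90-degree rotation about the z-axis plus a translation
# of (10, 0, 0). These numbers are illustrative only.
theta = math.pi / 2
R = [[math.cos(theta), -math.sin(theta), 0.0],
     [math.sin(theta),  math.cos(theta), 0.0],
     [0.0,              0.0,             1.0]]
t = [10.0, 0.0, 0.0]

E = extrinsic_matrix(R, t)
world_point = transform_point(E, [1.0, 0.0, 0.0])  # approximately (10, 1, 0)
```

The same matrix, applied to every point in a point cloud frame, converts the whole frame into the world coordinate system.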
-
H3ML
Hub 3 Markup Language
-
LIDAR intensity
Recorded as the return strength of a laser beam. In the Geo-MMS LIDAR systems, it is a byproduct, provided as an integer between 1 and 256. This number varies with the composition of the surface object reflecting the laser beam: a low number indicates low reflectivity, while a high number indicates high reflectivity. The intensity of the laser beam return can also be affected by the angle of arrival (scan angle), range, surface composition, roughness, and moisture content. Features at the nadir of the LIDAR sensor usually have higher intensity than the same features along the edges (tilted further), because less of the emitted energy returns at oblique angles. For these reasons, LIDAR intensity does not always lead to consistent results and must be used as a relative measurement. An advantage is that, unlike passive vision sensors (cameras), it is indifferent to shadows.
-
Mutable: False
An attribute marked as False is constant for one physical object. The value of the attribute must not change from frame to frame.
-
Mutable: FalseWithExceptionUnsure
An attribute marked as FalseWithExceptionUnsure is constant for one physical object in a sequence. Therefore, the value of the object’s attribute must not change across the sequence. However, the value “unsure” may be used in between to express that the attribute’s value cannot be inferred from the current frame alone.
-
Mutable: True
The value of the attribute might change from frame to frame.
-
primitive
Primitive values are basic, largely standalone data items within the dataset. Examples of primitive values are integers, floating-point numbers, and strings (ASCII characters).
-
quaternion
A representation of orientation that can be used to describe rotations in 3D space. Orientation, or heading, data can be given in quaternions. They are simpler to compose than Euler angles, avoid the problem of gimbal lock, and interpolate smoothly. Compared to rotation matrices, they are more compact, more numerically stable, and more efficient. A quaternion is a four-component number with w as the real part and x, y, z as the imaginary parts.
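To make the composition property concrete, here is a minimal pure-Python sketch (not tied to any particular labeling tool) that multiplies quaternions and rotates a vector via q · v · q*, using the (w, x, y, z) component order described above. The example rotation is made up.

```python
import math

def quat_multiply(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    )

def rotate_vector(q, v):
    """Rotate 3D vector v by unit quaternion q using q * v * q_conjugate."""
    q_conj = (q[0], -q[1], -q[2], -q[3])
    qv = (0.0,) + tuple(v)  # embed the vector as a pure quaternion
    w, x, y, z = quat_multiply(quat_multiply(q, qv), q_conj)
    return (x, y, z)

# A 90-degree rotation about the z-axis: w = cos(45 deg), z = sin(45 deg)
half = math.pi / 4
q = (math.cos(half), 0.0, 0.0, math.sin(half))
rotated = rotate_vector(q, (1.0, 0.0, 0.0))  # approximately (0, 1, 0)
```

Note that composing two rotations is a single `quat_multiply` call, which is what makes quaternions convenient for chaining sensor orientations across frames.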
-
sensor location metadata
Describes the LIDAR sensor’s position and the direction it is pointing (e.g. from a GPS system) for each point cloud frame. This is also known as the LIDAR extrinsic matrix. The metadata is given for each frame and contains Position [x, y, z] values (the position of the sensor with respect to a world frame) and Rotation [rotation_x, rotation_y, rotation_z, rotation_w] values (the orientation of the device with respect to a world frame). Rotation values should be given as a quaternion.
-
world coordinate system
A fixed universal coordinate system in which the vehicle and sensor coordinate systems are placed. If multiple point cloud frames are in different coordinate systems because they were collected from two sensors, a world coordinate system can translate all those coordinates into a single coordinate system where all frames share the same origin (0, 0, 0). This is done by translating the origin of each frame to the origin of the world coordinate system using a translation vector and rotating the three axes (typically x, y, and z) to the right orientation using a rotation matrix. This rigid body transformation is called a homogeneous transformation. If your point cloud data was collected in a local coordinate system, you can use the extrinsic matrix of the sensor that collected the data to convert it to a world coordinate system.
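The idea of unifying frames from two sensors can be sketched as follows. This is an illustrative pure-Python example with made-up sensor poses, using only a yaw rotation for brevity; a full implementation would use the complete 3D rotation from each sensor’s extrinsic matrix.

```python
import math

def to_world(points, yaw, origin):
    """Rigid-body transform: rotate each local point by the sensor's yaw
    (rotation about the z-axis), then translate by the sensor's origin
    expressed in world coordinates."""
    c, s = math.cos(yaw), math.sin(yaw)
    ox, oy, oz = origin
    return [(c * x - s * y + ox, s * x + c * y + oy, z + oz)
            for x, y, z in points]

# Two sensors observe the same physical point from different poses
# (all pose values are invented for the example).
sensor_a = to_world([(1.0, 0.0, 0.0)], yaw=0.0, origin=(5.0, 0.0, 0.0))
sensor_b = to_world([(0.0, -6.0, 0.0)], yaw=math.pi / 2, origin=(0.0, 0.0, 0.0))
# Both local observations map to the same world point, (6, 0, 0)
```

Once every frame is expressed in the world coordinate system, annotations such as cuboids can be compared and tracked across sensors and frames.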