
Error Calculation For Standard SLA

Updated at October 1st, 2025

Service Level Agreements (SLAs) define the minimum acceptable quality thresholds that ensure consistency and accountability across all project types. This document explains the standard approach for calculating annotation accuracy against SLA targets using the error count and opportunity count method. SLA error calculation applies to sequences, objects, data categorization, taxonomy, and GenAI responses (GenAI responses can additionally be measured with a word-level error metric for internal diagnostics).

  • Quick checks: Test batch quality instantly without extra review steps.
  • Consistent scoring: Apply the same standards across all work.
  • Less rework: Catch issues early and avoid failed reviews.

Understanding the Error Calculation for Standard SLA

The Standard SLA uses the error count and opportunity count method to measure accuracy across all project types. Each evaluated unit, such as a shape, object, non-annotation output, or response, is reviewed against defined quality requirements. If it meets all requirements, it counts as one correct opportunity; if it does not, every error it contains is counted as a separate opportunity.

Error Opportunities

An error opportunity is any unit under evaluation, such as a shape, object, non-annotation output, or response. Every unit reviewed is considered an opportunity to make an error. If the unit is correct, it is recorded as such; if it contains errors, it is marked incorrect.

💡 Important

Multiple errors within the same unit are recorded individually so that every issue is tracked. The unit itself is not “rolled up” into a single pass/fail score; instead, each error counts toward the batch’s total error sum, and the final scoring reflects that sum.

As defined in the quality framework:

Opportunity Count = Total Perfect Units + Error Count

This method means the reported opportunity count may be slightly higher than the actual number of units reviewed. This is intentional, since it allows the calculation to capture more than one error type per unit. For example, a shape with multiple attributes may contain several independent errors. Each of these errors is treated as a separate opportunity, ensuring that all potential mistakes are included in the calculation.
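To make the counting concrete, here is a minimal Python sketch (the data layout and names are illustrative, not part of any Sama API). Each reviewed unit is represented by the list of feedback records it produced; a perfect unit has an empty list:

```python
# Illustrative only: each unit is the list of errors found in it.
units = [
    [],                                            # perfect shape: 1 opportunity, 0 errors
    ["Incorrect Label"],                           # 1 error: 1 opportunity
    ["Inaccurate Shape", "Incorrect Attribute"],   # 2 errors: 2 opportunities
]

error_count = sum(len(errors) for errors in units)        # 3
perfect_units = sum(1 for errors in units if not errors)  # 1
opportunity_count = perfect_units + error_count           # 4

print(error_count, perfect_units, opportunity_count)      # 3 1 4
```

Note that the opportunity count (4) is higher than the number of units reviewed (3), exactly as described above.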

Scoring a Batch

Batches are evaluated in a consistent way across all project types: as tables of feedback records. Each reviewed unit, such as a shape, non-annotation output, or response, produces feedback records: perfect units appear as empty records, while erroneous units generate one record per error. When aggregated, the batch is scored as a pool of these records rather than as individually evaluated units. This means a single unit with multiple errors can heavily impact the overall score; for example, one shape with five errors would require five perfect shapes to balance its effect.

For projects using the opportunities system, accuracy is calculated as:

Score = 1 − (Error Count ÷ Opportunity Count)

Where:

  • Error Count = Total number of feedback records (errors) identified.
  • Opportunity Count = Total perfect units (error-free) + error count.

The resulting score is compared against the SLA threshold defined for the project. If the score falls below the threshold, the batch fails and may require rework.
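As a minimal sketch of this rule (function names are illustrative, and it assumes the batch’s perfect-unit and error counts are already tallied):

```python
def sla_score(perfect_units: int, error_count: int) -> float:
    """Score = 1 − (Error Count ÷ Opportunity Count)."""
    opportunity_count = perfect_units + error_count
    if opportunity_count == 0:
        raise ValueError("no units were evaluated")
    return 1 - error_count / opportunity_count

def batch_passes(perfect_units: int, error_count: int, threshold: float) -> bool:
    """Compare the score against the project's SLA threshold (e.g. 0.99 for 99%)."""
    return sla_score(perfect_units, error_count) >= threshold
```

This also shows the weighting noted in the previous section: one shape with five errors alongside five perfect shapes gives sla_score(5, 5) = 0.5, i.e. a 50% score.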

Try It Yourself: SLA Score Calculator

Use the interactive table below to calculate your SLA score based on the number of Perfect Units and Errors in your batch.
Enter the count of Perfect Units, the Error Count, and the SLA Threshold for your project. The table will automatically calculate the Opportunity Count (Perfect Units + Errors) and show whether the batch passes or fails.

Quick instructions:

  • Enter Perfect Units and Errors for each batch.
  • Enter your project’s SLA Threshold (number or %).
  • The table automatically calculates Opportunities and shows the Result (Pass/Fail with score).
  • Empty fields stay quiet; warnings only appear if an input is invalid.
| Batch | Perfect Units | Error Count | Opportunity Count | SLA Threshold | Result |
|---|---|---|---|---|---|
| Batch 1 | | | | | |
| Batch 2 | | | | | |

Opportunity Count = Perfect Units + Error Count

Score = 1 − (Error Count ÷ Opportunity Count)

Example SLA Calculation

| Batch | Perfect Units | Error Count | Opportunity Count | SLA Threshold | Result |
|---|---|---|---|---|---|
| Batch 1 | 293 | 7 | 300 | 99% | Fail (97.67%) |
| Batch 2 | 495 | 5 | 500 | 99% | Pass (99.00%) |
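
Using the sla_score and batch_passes sketches from the Scoring a Batch section, the two rows above can be reproduced:

```python
for name, perfect_units, error_count in [("Batch 1", 293, 7), ("Batch 2", 495, 5)]:
    score = sla_score(perfect_units, error_count)
    result = "Pass" if batch_passes(perfect_units, error_count, 0.99) else "Fail"
    print(f"{name}: {result} ({score:.2%})")
# Batch 1: Fail (97.67%)
# Batch 2: Pass (99.00%)
```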

Quality Metric Definitions

To ensure clarity and consistency when calculating SLA accuracy, the following metrics are used across all project types. These definitions apply whether the unit of evaluation is a shape, object, output, or response.

| Metric | Definition | Formula |
|---|---|---|
| Error Count | Total number of feedback records (errors) identified across the evaluated units (from automated checks, internal QA, or client feedback). | Error Count = Total Feedback Records |
| Opportunity Count | Total number of evaluated units, computed as perfect (error-free) units plus error records. | Opportunity Count = Perfect Units + Error Count |
| Error Ratio | Proportion of errors relative to total opportunities. | Error Ratio = Error Count ÷ Opportunity Count |
| Score | Final quality score (proportion of correct work), compared against the SLA threshold. | Score = 1 − (Error Count ÷ Opportunity Count) |

SLA Calculation Across Project Types

The formula for scoring remains consistent: Score = 1 − (Error Count ÷ Opportunity Count)

The difference lies in what counts as an opportunity and how errors are identified for each type of project:

| Project Type | Unit of Evaluation | Opportunity Definition | Common Error Types |
|---|---|---|---|
| Computer Vision (CV) with Shapes | Shape (polygon, cuboid, etc.) linked to an object | One opportunity per shape; errors within a shape are recorded individually and summed in scoring | Missing Shape, Extra Shape, Incorrect Label, Inaccurate Shape, Incorrect Attribute |
| Data Categorization | Non-annotation output (e.g., text classification or metadata) | One opportunity per output; each error is recorded separately and summed in scoring | Incorrect Metadata, Missing Metadata, Extra Metadata |
| Complex Categorization with Attributes (Object-centric) | Object with attributes (simple or complex taxonomy) | One opportunity per object; attribute-level errors are recorded individually and summed in scoring. An object is perfect only if all required attributes are correct. | Incorrect Attribute Value, Missing Attribute, Extra Attribute, Incorrect Object Class (if applicable) |
| GenAI Evaluation | Response object (model output) | External reporting: one opportunity per response, errors recorded separately and summed in scoring. Internal analysis (optional): may compute a word-level error ratio for diagnostics. | Instruction Gap, Factuality Error, Grammar Issue |
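
The word-level error ratio for internal GenAI diagnostics is mentioned but not specified in detail. One plausible sketch, assuming reviewers flag individual words and the ratio is flagged words over total words (the function name and whitespace tokenization are assumptions, not a documented metric):

```python
def word_error_ratio(response_text: str, flagged_word_count: int) -> float:
    """Hypothetical diagnostic: flagged words ÷ total words in one response.

    Assumes simple whitespace tokenization; the article does not define one.
    """
    total_words = len(response_text.split())
    if total_words == 0:
        raise ValueError("empty response")
    return flagged_word_count / total_words
```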

 

 
