
Writing

Updated at November 28th, 2025

Writing in the Workspace

Writing helps teams produce high-quality, on-brand text with the speed of AI and the assurance of expert human review. In a single workspace, you can review model outputs, refine responses, compare changes, and track quality, so content is accurate, consistent, and ready for production.

Workspace overview

What is Writing?

Writing is a workflow for generating and validating short- and long-form text (e.g., product copy, support replies, descriptions). AI produces initial drafts; Sama experts refine and validate them against your instructions, style, and compliance needs. You get fast scale and reliable quality.

📘 Results You Can Expect

  • Higher content quality from human-in-the-loop review and guided edits.
  • Consistent brand voice through structured prompts, definitions, and examples.
  • Faster turnaround with bulk review tools and clear diff comparisons.
  • Measurable accuracy using transparent quality metrics and audit trails.

Understanding the Writing Workflow

Sama reviews and validates client data generated by AI models to ensure accuracy and quality across images, videos, and text. This includes evaluating model outputs (e.g., checking that images are free from unwanted text/logos, that AI-generated backgrounds are contextually relevant, and that prompts/responses follow predefined criteria).

Prompt

In the Workspace, every task centers around a prompt that can include an image, a video, and/or text.

Workspace prompt

The prompt is an object that may contain:

  • Media (e.g., an image or video)
  • A text instruction or question (e.g., text under the prompt)
  • One or more responses generated by a model or an annotator

Below the media and question, you’ll see a response; this is where the annotation/answer lives.

Objects list view

Response Changes

% Change shows how much a response was modified versus a comparison point. It’s computed as: (additions + deletions) ÷ (additions + deletions + original word count).

Use the arrow icon to expand and view the exact diffs. If you are on Step B, you can compare the current answer with the Original (customer-provided) or the Latest (the most recent Step-A answer). Clicking Compare reveals what changed between the selected pair.
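
As a quick worked example of the formula above (a sketch, not product code; the function name and sample numbers are illustrative):

def percent_change(additions: int, deletions: int, original_word_count: int) -> float:
    # % Change = (additions + deletions) / (additions + deletions + original word count)
    changed = additions + deletions
    return 100 * changed / (changed + original_word_count)

# 5 words added and 3 deleted against a 40-word original:
print(round(percent_change(5, 3, 40), 1))  # 16.7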

% change comparison UI

How to Review Writing Tasks in the Sampling Portal

In the Sampling Portal, you can review an annotated task, compare it against the original response, and leave feedback on specific responses. Learn more in the Sampling Portal guide.

Quality Metrics

Internal quality metrics are computed by comparing QA and annotator answers. Reporting includes Answer Word Count, Additions, and Deletions, from which a quality metric can be inferred.

External (customer-facing) quality metrics are based on errors flagged by customers using highlights. We surface Answer Word Count, Highlight Count, and Highlight Word Count.

For all feedback contexts, the quality metric indicates the percentage of the response that needs to change:

Quality metric formula

This calculation is word-count-based and shows the extent of required modifications.
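
The exact formula appears in the image above; as a word-count-based sketch of the idea (an assumption drawn from the definitions in this section, not the verbatim product formula):

def quality_metric(changed_word_count: int, answer_word_count: int) -> float:
    # Percentage of the response that needs to change, by word count.
    # Internally, changed_word_count would come from Additions + Deletions;
    # externally, from Highlight Word Count. This mapping is illustrative.
    return 100 * changed_word_count / answer_word_count

# e.g., 12 flagged words in a 60-word answer -> 20.0% needs to change
print(quality_metric(12, 60))  # 20.0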


Writing Delivery Experience

This section explains the JSON format used to deliver Writing tasks. It covers the object model, parent/child relationships, and how to locate annotations.

Understanding the JSON Structure

Each task record contains an answers.objects array. Every entry is an object with:

  • id: unique identifier for the object
  • parent_id: the parent object’s id (if any)
  • sort_index: initial UI order (optional)
  • class_name: object type (e.g., prompt, response, text, image, video, metadata, skip)
  • tags: content/attributes (e.g., text for annotations, url for media)

Parent → Child Pattern

prompt (id: X)
├── image (parent_id: X)
├── text  [instruction/question] (parent_id: X)
└── response (id: Y, parent_id: X)
    └── text  [annotation/answer] (parent_id: Y)

In practice, this means: the annotation you care about is usually a text object whose parent_id is the id of a response object.

Minimal Example

[
   {
      "id":"task-001",
      "answers":{
         "objects":[
            {
               "id":1,
               "class_name":"prompt"
            },
            {
               "id":2,
               "parent_id":1,
               "class_name":"response"
            },
            {
               "id":9,
               "parent_id":1,
               "class_name":"image",
               "tags":{
                  "url":"https://example.com/dog.png"
               }
            },
            {
               "id":10,
               "parent_id":1,
               "class_name":"text",
               "tags":{
                  "text":"Describe what you see in the image."
               }
            },
            {
               "id":11,
               "parent_id":2,
               "class_name":"text",
               "tags":{
                  "text":"This is a happy dog."
               }
            }
         ]
      }
   }
]

Where is the annotation? It’s the text object with parent_id: 2 (the response’s id). Its value is in tags.text: “This is a happy dog.”
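
A minimal sketch of extracting that annotation in Python (the file name is illustrative; the traversal assumes the structure shown above):

import json

with open("tasks.json") as f:
    tasks = json.load(f)

objects = tasks[0]["answers"]["objects"]

# Collect the ids of response objects (here: {2})
response_ids = {o["id"] for o in objects if o["class_name"] == "response"}

# The annotation is the text object whose parent_id is a response id
for o in objects:
    if o["class_name"] == "text" and o.get("parent_id") in response_ids:
        print(o["tags"]["text"])  # "This is a happy dog."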

Richer Example 

The same logic holds even when tasks include additional information (e.g., metadata, multiple images). Below is a trimmed, realistic example showing how to find the annotation:

[
   {
      "id":"123",
      "project_id":78965,
      "state":"completed",
      "answers":{
         "objects":[
            {
               "id":1,
               "sort_index":1,
               "class_name":"prompt",
               "tags":{}
            },
            {
               "id":2,
               "parent_id":1,
               "sort_index":0,
               "class_name":"response",
               "tags":{}
            },
            {
               "id":9,
               "parent_id":1,
               "sort_index":2,
               "class_name":"image",
               "tags":{
                  "url":"https://...",
                  "assetId":"333"
               }
            },
            {
               "id":10,
               "parent_id":1,
               "sort_index":1,
               "class_name":"text",
               "tags":{
                  "text":"Given the following two dog images, describe the image."
               }
            },
            {
               "id":11,
               "parent_id":2,
               "sort_index":1,
               "class_name":"text",
               "tags":{
                  "text":"This is a happy dog."
               }
            },
            {
               "id":12,
               "parent_id":1,
               "sort_index":3,
               "class_name":"image",
               "tags":{
                  "url":"https://...",
                  "assetId":"1111"
               }
            },
            {
               "id":18,
               "sort_index":1,
               "class_name":"metadata",
               "tags":{
                  "before_datetime":"2021-11-09 16:52:31",
                  "after_datetime":"2021-11-13 16:36:55",
                  "texts_assistant":"Looking at this small dog, what can you describe about the hair?"
               }
            },
            {
               "id":20,
               "sort_index":1,
               "class_name":"skip",
               "tags":{
                  "skip_task":"No"
               }
            }
         ]
      }
   }
]

Where is the annotation? Find the response (id: 2), then find its children. The child text with parent_id: 2 holds the annotation: “This is a happy dog.”

How to Locate Annotations

  1. Find the prompt object (often the first object for a task).
  2. Find the response whose parent_id equals the prompt’s id.
  3. Find the text object(s) whose parent_id equals that response’s id.
  4. Read tags.text. That string is the annotation/answer (e.g., “This is a happy dog.” or “In the image you can see a sunny day”).

💡To find the final written description, look for a text object whose parent_id equals the id of a response object.
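
Putting those steps together, a reusable sketch in Python (it assumes delivery records shaped like the examples above; the file name is illustrative):

import json

def locate_annotations(task: dict) -> list[str]:
    # Steps 1-4: walk the objects and return every annotation string.
    objects = task["answers"]["objects"]
    by_id = {o["id"]: o for o in objects}

    annotations = []
    for o in objects:
        parent = by_id.get(o.get("parent_id"))
        # A text object parented to a response holds the final answer.
        if o["class_name"] == "text" and parent and parent["class_name"] == "response":
            annotations.append(o["tags"]["text"])
    return annotations

with open("delivery.json") as f:
    for task in json.load(f):
        print(task["id"], locate_annotations(task))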

 

Quick Reference

Element               Class name     Parent    What it contains
Prompt                prompt         —         Root container for a question/task and its media.
Response              response       Prompt    The model/annotator’s answer; its children include the final text annotation.
Instruction/Question  text           Prompt    The user-visible instruction or question for the task.
Annotation/Answer     text           Response  The description you want, e.g., “This is a happy dog.”
Media                 image / video  Prompt    Asset URLs and metadata under tags.url, etc.
Metadata              metadata       Varies    Additional task data (timestamps, filenames, confidence, etc.).

Fields Reference

  • id: unique identifier for an object
  • parent_id: id of the parent object (optional)
  • sort_index: initial ordering in the UI (optional)
  • tags: static attributes/content of the object (e.g., text, url, metadata)
  • class_name: object type (e.g., prompt, response, text, image, video, metadata, skip)
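
For reference, the same fields as a Python TypedDict (a sketch for type-checking your own parsing code; the class names are illustrative):

from typing import TypedDict

class _RequiredFields(TypedDict):
    id: int
    class_name: str          # "prompt", "response", "text", "image", ...

class WorkspaceObject(_RequiredFields, total=False):
    parent_id: int           # id of the parent object, if any
    sort_index: int          # initial ordering in the UI
    tags: dict               # content/attributes, e.g., {"text": ...} or {"url": ...}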