Writing
Updated at November 28th, 2025
Writing in the Workspace
Writing helps teams produce high-quality, on-brand text with the speed of AI and the assurance of expert human review.
In a single workspace, you can review model outputs, refine responses, compare changes, and track quality, so content is accurate, consistent, and ready for production.

What is Writing?
Writing is a workflow for generating and validating short- and long-form text (e.g., product copy, support replies, descriptions). AI produces initial drafts; Sama experts refine and validate them against your instructions, style, and compliance needs. You get fast scale and reliable quality.
📘 Results You Can Expect
- Higher content quality from human-in-the-loop review and guided edits.
- Consistent brand voice through structured prompts, definitions, and examples.
- Faster turnaround with bulk review tools and clear diff comparisons.
- Measurable accuracy using transparent quality metrics and audit trails.
Understanding the Writing Workflow
Sama reviews and validates client data generated by AI models to ensure accuracy and quality across images, videos, and text. This includes evaluating model outputs (e.g., checking that images are free from unwanted text/logos, that AI-generated backgrounds are contextually relevant, and that prompts/responses follow predefined criteria).
Prompt
In the Workspace, every task centers around a prompt that can include an image, a video, and/or text.

The prompt is an object that may contain:
- Media (e.g., an `image` or `video`)
- A text instruction or question (e.g., `text` under the prompt)
- One or more responses generated by a model or an annotator
Below the media and question, you’ll see a response; this is where the annotation/answer lives.

Response Changes
% Change shows how much a response was modified versus a comparison point. It’s computed as: (additions + deletions) ÷ (additions + deletions + original word count).
Use the arrow icon to expand and view the exact diffs. If you are on Step B, you can compare the current answer with the Original (customer-provided) or the Latest (the most recent Step-A answer). Clicking Compare reveals what changed between the selected pair.
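As a minimal sketch, the % Change ratio above can be reproduced directly from word counts; the function name and inputs below are illustrative, not part of the delivery format:

```python
def percent_change(additions: int, deletions: int, original_word_count: int) -> float:
    """Share of the response that was modified, per the % Change formula above."""
    denominator = additions + deletions + original_word_count
    if denominator == 0:
        return 0.0
    return (additions + deletions) / denominator

# Example: 12 words added and 8 deleted against a 100-word original response.
print(f"{percent_change(12, 8, 100):.0%}")  # 17%
```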

How to Review Writing Tasks in the Sampling Portal
In the Sampling Portal, you can review an annotated task, compare it against the original response, and leave feedback on specific responses. Learn more in the Sampling Portal guide.
Quality Metrics
Internal quality metrics are computed by comparing QA and annotator answers. Reporting includes Answer Word Count, Additions, and Deletions, from which a quality metric can be inferred.
External (customer-facing) quality metrics are based on errors flagged by customers using highlights. We surface Answer Word Count, Highlight Count, and Highlight Word Count.
For all feedback contexts, the quality metric indicates the percentage of the response that needs to change. The calculation is word-count-based and shows the extent of required modifications.
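For the external, highlight-based metric, here is a minimal sketch under the assumption that the percentage is simply the highlighted word count over the answer word count; the article reports the counts but does not spell out the exact formula, so treat this as illustrative only:

```python
def external_quality_metric(highlight_word_count: int, answer_word_count: int) -> float:
    """Assumed formula: share of the answer flagged by customer highlights."""
    return highlight_word_count / answer_word_count if answer_word_count else 0.0

# Example: 15 highlighted words in a 150-word answer -> 10% of the response needs changes.
print(f"{external_quality_metric(15, 150):.0%}")  # 10%
```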
Writing Delivery Experience
This section explains the JSON format used to deliver Writing tasks. It covers the object model, parent/child relationships, and how to locate annotations.
Understanding the JSON Structure
Each task record contains an answers.objects array. Every entry is an object with:
- `id`: unique identifier for the object
- `parent_id`: the parent object’s `id` (if any)
- `sort_index`: initial UI order (optional)
- `class_name`: object type (e.g., `prompt`, `response`, `text`, `image`, `video`, `metadata`, `skip`)
- `tags`: content/attributes (e.g., `text` for annotations, `url` for media)
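For downstream parsing, the same object model can be expressed as a lightweight type. This is a minimal Python sketch based only on the fields listed above; the name `TaskObject` is purely illustrative:

```python
from typing import Any, TypedDict

class TaskObject(TypedDict, total=False):
    """One entry of the answers.objects array, per the fields listed above."""
    id: int                  # unique identifier (always present)
    class_name: str          # "prompt", "response", "text", "image", "video", "metadata", "skip"
    parent_id: int           # id of the parent object, if any
    sort_index: int          # optional initial UI order
    tags: dict[str, Any]     # content/attributes such as "text" or "url"
```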
Parent → Child Pattern
prompt (id: X)
├── image (parent_id: X)
├── text [instruction/question] (parent_id: X)
└── response (id: Y, parent_id: X)
└── text [annotation/answer] (parent_id: Y)

In practice, this means: the annotation you care about is usually a text object whose parent_id is the id of a response object.
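One convenient way to work with this pattern is to group objects by their parent_id once and then walk down from the prompt. A small sketch, with an illustrative helper name:

```python
from collections import defaultdict

def index_children(objects: list[dict]) -> dict[int, list[dict]]:
    """Map each parent id to the list of objects that point at it via parent_id."""
    children = defaultdict(list)
    for obj in objects:
        if "parent_id" in obj:
            children[obj["parent_id"]].append(obj)
    return children
```

With that index, `children[Y]` (the response’s id) contains the text object carrying the final annotation.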
Minimal Example
[
{
"id":"task-001",
"answers":{
"objects":[
{
"id":1,
"class_name":"prompt"
},
{
"id":2,
"parent_id":1,
"class_name":"response"
},
{
"id":9,
"parent_id":1,
"class_name":"image",
"tags":{
"url":"https://example.com/dog.png"
}
},
{
"id":10,
"parent_id":1,
"class_name":"text",
"tags":{
"text":"Describe what you see in the image."
}
},
{
"id":11,
"parent_id":2,
"class_name":"text",
"tags":{
"text":"This is a happy dog."
}
}
]
}
}
]

Where is the annotation? It’s the text object with parent_id: 2 (the response’s id). Its value is in tags.text: “This is a happy dog.”
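Reading that record programmatically is a two-step filter: collect the response ids, then pick the text objects that point at them. A sketch, where delivery.json is a hypothetical file holding the array above:

```python
import json

with open("delivery.json") as f:   # hypothetical file containing the array shown above
    tasks = json.load(f)

objects = tasks[0]["answers"]["objects"]
response_ids = {o["id"] for o in objects if o["class_name"] == "response"}
annotations = [o["tags"]["text"] for o in objects
               if o["class_name"] == "text" and o.get("parent_id") in response_ids]
print(annotations)  # ['This is a happy dog.']
```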
Richer Example
The same logic holds even when tasks include additional information (e.g., metadata, multiple images). Below is a trimmed, realistic example showing how to find the annotation:
[
{
"id":"123",
"project_id":78965,
"state":"completed",
"answers":{
"objects":[
{
"id":1,
"sort_index":1,
"class_name":"prompt",
"tags":{
}
},
{
"id":2,
"parent_id":1,
"sort_index":0,
"class_name":"response",
"tags":{
}
},
{
"id":9,
"parent_id":1,
"sort_index":2,
"class_name":"image",
"tags":{
"url":"https://...",
"assetId":"333"
}
},
{
"id":10,
"parent_id":1,
"sort_index":1,
"class_name":"text",
"tags":{
"text":"Given the following two dog images, describe the image."
}
},
{
"id":11,
"parent_id":2,
"sort_index":1,
"class_name":"text",
"tags":{
"text":"This is a happy dog."
}
},
{
"id":12,
"parent_id":1,
"sort_index":3,
"class_name":"image",
"tags":{
"url":"https://...",
"assetId":"1111"
}
},
{
"id":18,
"sort_index":1,
"class_name":"metadata",
"tags":{
"before_datetime":"2021-11-09 16:52:31",
"after_datetime":"2021-11-13 16:36:55",
"texts_assistant":"Looking at this small dog, what can you describe about the hair?"
}
},
{
"id":20,
"sort_index":1,
"class_name":"skip",
"tags":{
"skip_task":"No"
}
}
]
}
}
]

Where is the annotation? Find the response (id: 2), then find its children. The child text with parent_id: 2 holds the annotation: “This is a happy dog.”
How to Locate Annotations
- Find the `prompt` object (often the first object for a task).
- Find the `response` whose `parent_id` equals the prompt’s `id`.
- Find the `text` object(s) whose `parent_id` equals that response’s `id`.
- Read `tags.text`. That string is the annotation/answer (e.g., “This is a happy dog.” or “In the image you can see a sunny day”).
💡To find the final written description, look for a text object whose parent_id equals the id of a response object.
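Putting those steps together into a reusable helper, here is a minimal sketch that assumes only the fields described in this article; the function name find_annotations is illustrative:

```python
def find_annotations(task: dict) -> list[str]:
    """Return the final written answer(s) for one delivered task record."""
    objects = task["answers"]["objects"]

    # 1. Find the prompt object(s).
    prompt_ids = {o["id"] for o in objects if o["class_name"] == "prompt"}

    # 2. Find responses whose parent_id points at a prompt.
    response_ids = {o["id"] for o in objects
                    if o["class_name"] == "response" and o.get("parent_id") in prompt_ids}

    # 3 and 4. Find text children of those responses and read tags.text.
    return [o["tags"]["text"] for o in objects
            if o["class_name"] == "text" and o.get("parent_id") in response_ids]
```

Both the minimal and the richer example in this section return ['This is a happy dog.'] when passed through this helper.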
Quick Reference
| Element | Class Name | Parent | What it contains |
|---|---|---|---|
| Prompt | prompt | — | Root container for a question/task and its media. |
| Response | response | Prompt | The model/annotator’s answer; its children include the final text annotation. |
| Instruction/Question | text | Prompt | The user-visible instruction or question for the task. |
| Annotation/Answer | text | Response | The description you want, e.g., “This is a happy dog.” |
| Media | image / video | Prompt | Asset URLs and metadata under tags.url, etc. |
| Metadata | metadata | Varies | Additional task data (timestamps, filenames, confidence, etc.). |
Fields Reference
- `id`: unique identifier for an object
- `parent_id`: id of the parent object (optional)
- `sort_index`: initial ordering in the UI (optional)
- `tags`: static attributes/content of the object (e.g., `text`, `url`, `metadata`)
- `class_name`: object type (e.g., `prompt`, `response`, `text`, `image`, `video`, `metadata`, `skip`)