What drives data annotation costs?
Annotation cost is a function of several variables: data type, task complexity, required accuracy, and volume, among others. A simple yes/no image classification task costs far less than pixel-level semantic segmentation of a surgical scene, and a one-pass label with no quality review costs less than a three-pass QA workflow with expert review. The full set of cost drivers is listed below, followed by a back-of-envelope estimator showing how they compound.
- Data type: text and image annotation are cheapest; 3D point cloud and video annotation command a premium.
- Complexity: simple classification vs. polygon segmentation vs. multi-attribute labeling.
- Domain expertise: general annotation vs. medical, legal, or engineering domain knowledge.
- Accuracy requirements: single-pass labeling vs. multi-pass QA with inter-annotator agreement checks.
- Volume: high-volume projects benefit from tiered pricing; one-off batches cost more per unit.
- Language: English annotation is cheapest; rare languages or Southeast Asian languages require specialist teams.
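To make the compounding concrete, here is a minimal back-of-envelope estimator. Every rate, multiplier, and discount tier in it is an illustrative assumption for a hypothetical project, not actual vendor pricing; the point is only that domain expertise and QA passes multiply the base rate while volume discounts it.

```python
# Back-of-envelope annotation cost estimator.
# All rates, multipliers, and tiers below are illustrative assumptions,
# not real vendor pricing.

BASE_RATE_PER_ITEM = {            # rough per-item midpoints in USD
    "text_classification": 0.05,
    "bounding_box": 0.15,
    "semantic_segmentation": 2.50,
}
DOMAIN_MULTIPLIER = {"general": 1.0, "legal": 2.5, "medical": 3.0}  # assumed
QA_MULTIPLIER = {1: 1.0, 2: 1.4, 3: 1.8}  # QA passes -> cost factor (assumed)

def volume_discount(items: int) -> float:
    """Assumed tiered discount: bigger batches earn a lower per-item rate."""
    if items >= 1_000_000:
        return 0.70
    if items >= 100_000:
        return 0.85
    return 1.00

def estimate_cost(task: str, items: int, domain: str = "general", qa_passes: int = 1) -> float:
    per_item = BASE_RATE_PER_ITEM[task] * DOMAIN_MULTIPLIER[domain] * QA_MULTIPLIER[qa_passes]
    return items * per_item * volume_discount(items)

# Example: 100,000 bounding boxes, general domain, two QA passes
print(f"${estimate_cost('bounding_box', 100_000, qa_passes=2):,.0f}")   # -> $17,850
```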
Typical annotation pricing by data type (2026)
These are market-rate ranges based on professional annotation vendors in 2026. Low-cost crowdsourcing platforms (like Mechanical Turk) can be cheaper, but come with significant quality trade-offs that most production AI teams cannot accept. A short worked example converting these rates into a project budget follows the list.
- Text classification (per item): $0.02 – $0.10
- Named entity recognition (per 1,000 tokens): $1 – $8
- Sentiment analysis (per item): $0.03 – $0.12
- Image bounding box (per box): $0.05 – $0.30
- Image semantic segmentation (per image): $0.50 – $5.00
- Video annotation with tracking (per frame): $0.10 – $2.00
- Audio transcription (per audio minute): $0.50 – $3.00
- 3D point cloud annotation (per scene): $5 – $50+
- Medical/legal specialist annotation (per hour): $40 – $120
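As a quick illustration of how per-item rates translate into a project budget, the snippet below prices a hypothetical dataset of 50,000 images averaging four bounding boxes each against the $0.05 – $0.30 per-box range above.

```python
# Hypothetical project: 50,000 images, ~4 bounding boxes per image,
# priced against the $0.05–$0.30 per-box range quoted above.
images, boxes_per_image = 50_000, 4
low_rate, high_rate = 0.05, 0.30

total_boxes = images * boxes_per_image
print(f"{total_boxes:,} boxes -> ${total_boxes * low_rate:,.0f} to ${total_boxes * high_rate:,.0f}")
# 200,000 boxes -> $10,000 to $60,000, before QA passes and the hidden costs below
```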
Per-item vs. per-hour vs. fixed-project pricing
Vendors typically offer three pricing models. Per-item pricing works best for well-defined, repeatable tasks where the scope is clear. Per-hour pricing suits exploratory or complex tasks where throughput is hard to predict. Fixed-project pricing works for end-to-end engagements where the vendor takes responsibility for delivery.
Be cautious of per-item pricing for complex tasks – vendors may rush labels to maximize throughput, degrading quality. For anything requiring specialist knowledge or multi-pass QA, per-hour or fixed-project pricing aligns incentives better.
Hidden costs to budget for
The line item on the quote rarely captures everything you will pay over the life of a real annotation program. Plan for these (a worked budget calculation follows the list):
- Rework and corrections: budget 10–20% for re-annotation if initial quality falls short.
- Tooling and setup: some vendors charge onboarding or platform fees ($500–$5,000).
- Data transfer and storage: large video or 3D datasets require secure transfer infrastructure.
- Project management: dedicated PM time adds 10–15% to hourly projects.
- Quality audits: third-party accuracy audits can add value but cost extra.
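To see how these overheads stack up, here is a small calculation layering mid-range assumptions for rework and project management onto a hypothetical $60,000 labeling quote, plus an assumed one-time platform fee. All figures are illustrative.

```python
# Layer hidden costs onto a raw labeling quote (all figures hypothetical).
labeling_quote = 60_000      # raw per-item cost quoted by the vendor
rework_rate = 0.15           # mid-range of the 10–20% re-annotation budget
pm_rate = 0.12               # mid-range of the 10–15% project management overhead
platform_fee = 2_500         # assumed one-time onboarding/platform fee

total = labeling_quote * (1 + rework_rate + pm_rate) + platform_fee
print(f"Estimated all-in budget: ${total:,.0f}")   # -> $78,700
```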
How to get an accurate quote
The fastest way to get accurate pricing is to share a sample dataset (50–200 items) with a few annotation vendors and request a scoped quote. A good vendor will annotate the sample, provide a per-item rate, and give a realistic project timeline. Avoid vendors who quote without seeing your data – annotation complexity varies too much for blind quotes to be reliable.
Offshore vs. onshore annotation pricing
Offshore annotation teams (Vietnam, Philippines, India) typically cost 60–80% less than onshore teams (US, UK, Australia) for equivalent tasks. The quality gap has narrowed significantly as offshore vendors have invested in QA infrastructure and specialist training. For most standard annotation tasks, offshore teams at reputable vendors deliver production-quality output.
DataXanno operates from Hanoi, Vietnam, combining offshore cost efficiency with enterprise-grade QA processes. Our clients in Australia, Singapore, and the US consistently report annotation costs 50–70% lower than comparable onshore vendors with no compromise on accuracy.