WeaveOps is composed of three layers: a Salesforce-side orchestration and runtime layer, an Azure-hosted AI service layer, and the AI model provider. Each layer has distinct responsibilities and security boundaries.
The sections below describe what happens when a Salesforce user triggers a WeaveOps use case on a record page.
The WeaveOps AI service is designed to serve multiple Salesforce orgs from a single deployment. Each org is identified by a tenant ID (the Salesforce org ID) and authenticated with an independently issued API key.
POST /admin/tenant/register
X-Admin-Key: ••••••••••••
{
"tenantId": "00D000000000001",
"apiKey": "wops-tenant-key",
"label": "Acme Corp - Production"
}
→ 201 Created
{
"tenantId": "00D000000000001",
"registered": true
}

LLM calls are retried up to 3 times on rate-limit errors and network timeouts, with exponential backoff (5s, 10s, 20s). Batch record processing retries each failed record up to 3 times before skipping it and writing an error log entry.
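The retry policy above can be sketched as follows. This is a minimal illustration, not the service's actual code: `RetryableError` and `call_with_retries` are hypothetical names standing in for the service's rate-limit and timeout handling.

```python
import time

# Backoff schedule from the policy above: 3 retries at 5s, 10s, 20s.
BACKOFF_SCHEDULE = [5, 10, 20]  # seconds before each retry

class RetryableError(Exception):
    """Stands in for rate-limit errors and network timeouts."""

def call_with_retries(call, sleep=time.sleep):
    """Invoke `call`; retry up to 3 times on retryable errors."""
    for attempt, delay in enumerate([0] + BACKOFF_SCHEDULE):
        if delay:
            sleep(delay)  # exponential backoff before each retry
        try:
            return call()
        except RetryableError:
            if attempt == len(BACKOFF_SCHEDULE):
                raise  # retries exhausted; caller logs and skips the record
```

The `sleep` parameter is injected only so the schedule can be tested without real delays.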
Large-scale record processing uses Salesforce Queueable Apex with a chain pattern: one record per queue execution, self-re-enqueuing until all records are processed. Progress is tracked in real time on the Batch Run record.
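The chain pattern can be illustrated conceptually. The real implementation is Salesforce Queueable Apex; the Python below only simulates the platform's queue loop, and all names (`run_chain`, `process_record`, the `progress` dict) are illustrative.

```python
from collections import deque

def run_chain(record_ids, process_record, progress):
    """Simulate the Queueable chain: one record per execution,
    self-re-enqueuing with the remainder until all records are done."""
    queue = deque([list(record_ids)])   # one pending "job" holding remaining IDs
    while queue:                        # stands in for the platform queue loop
        remaining = queue.popleft()
        if not remaining:
            continue
        head, rest = remaining[0], remaining[1:]
        process_record(head)
        progress["processed"] += 1      # real-time progress on the Batch Run record
        if rest:
            queue.append(rest)          # self-re-enqueue with the remainder
```

Processing one record per execution keeps each queue job well inside Salesforce governor limits at the cost of more queue hops.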
The AI service includes a JSON repair pipeline that handles truncated or bracket-mismatched model output. It escalates max tokens and retries when the model's response is structurally incomplete.
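A bracket-repair step of this kind might look like the sketch below. This is an assumption about the approach, not the service's actual pipeline: it attempts a parse, appends any missing closing quote and brackets, and signals the caller when the output is unrecoverable (the point at which the real pipeline escalates max tokens and retries).

```python
import json

def repair_json(text):
    """Return (parsed, needed_repair); (None, True) if unrecoverable."""
    try:
        return json.loads(text), False
    except json.JSONDecodeError:
        pass
    # Track unclosed braces/brackets, ignoring any inside string literals.
    stack = []
    in_string = False
    prev = ""
    for ch in text:
        if in_string:
            if ch == '"' and prev != "\\":
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append("}" if ch == "{" else "]")
        elif ch in "}]" and stack:
            stack.pop()
        prev = ch
    # Close a dangling string, then close brackets in reverse order of opening.
    candidate = text + ('"' if in_string else "") + "".join(reversed(stack))
    try:
        return json.loads(candidate), True
    except json.JSONDecodeError:
        return None, True  # still incomplete: escalate max tokens and retry
```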
All user-supplied text fields are validated against a 50,000-character maximum before reaching the AI model. List inputs (conversation history) are capped at 200 items. Oversized inputs return a 400 error immediately.
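The limits above can be sketched as a validation gate. The field names and error shape are illustrative assumptions; only the limits (50,000 characters, 200 items, 400 status) come from the text.

```python
MAX_TEXT_CHARS = 50_000   # per user-supplied text field
MAX_LIST_ITEMS = 200      # conversation history cap

def validate_request(text_fields, conversation_history):
    """Return None if valid, else (status, message) for an immediate 400."""
    for name, value in text_fields.items():
        if len(value) > MAX_TEXT_CHARS:
            return 400, f"Field '{name}' exceeds {MAX_TEXT_CHARS} characters"
    if len(conversation_history) > MAX_LIST_ITEMS:
        return 400, f"Conversation history exceeds {MAX_LIST_ITEMS} items"
    return None
```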
WeaveOps respects Salesforce's synchronous callout limits. Document analysis and batch processing use asynchronous Queueable Apex to avoid blocking user transactions. Timeout is configurable per endpoint.
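Per-endpoint timeout configuration could be as simple as a lookup with a fallback default. The endpoint names and values below are assumptions for illustration, not the shipped defaults.

```python
DEFAULT_TIMEOUT_SECONDS = 30  # assumed fallback, not a documented value

# Hypothetical per-endpoint overrides.
ENDPOINT_TIMEOUTS = {
    "/analyze-document": 120,  # long-running document analysis
    "/chat": 60,
}

def timeout_for(endpoint):
    """Resolve the configured timeout for an endpoint, with a default."""
    return ENDPOINT_TIMEOUTS.get(endpoint, DEFAULT_TIMEOUT_SECONDS)
```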
The WeaveOps service exposes a /health endpoint returning service status and configuration flags (Anthropic connected, Blob connected). No authentication is required; the endpoint is used for monitoring and support diagnostics.
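A response of this shape might look like the following. The flag names and the rule that status degrades when a dependency is down are assumptions; the text only specifies that the endpoint reports status and the two connectivity flags.

```python
def health(anthropic_ok: bool, blob_ok: bool) -> dict:
    """Build a /health payload: overall status plus configuration flags."""
    return {
        "status": "ok" if (anthropic_ok and blob_ok) else "degraded",
        "anthropicConnected": anthropic_ok,  # flag names are illustrative
        "blobConnected": blob_ok,
    }
```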