Serverless background jobs on the same platform as your APIs
Respond in milliseconds to browsers and mobile clients, then run the slow work as async jobs through pipelines and the job queue, with retries, idempotent writes, and traces that mirror synchronous invokes.
Last updated: 2026-04-20
Answer first
Run background jobs on the same serverless platform that serves your APIs. Functions become pipeline steps, and the platform tracks their executions the way it tracks synchronous invokes, so background jobs stay searchable.
When it fits
- Work that takes more than a few seconds
- Spiky workloads
- External APIs with variable latency
Tradeoffs
- Every service inventing its own Redis consumer group diverges operationally from your serverless functions story.
- Without shared observability, background jobs and pipelines become a black box next to REST API endpoints.
Workload and what breaks
Why long HTTP requests break background jobs
Keeping slow work on the request path runs into gateway timeouts and frustrates users held on an open connection.
Retries make it worse: a retry storm duplicates side effects unless every background job handler is idempotent.
Where shortcuts fail
Why ad-hoc job queues hide failures
When every service invents its own Redis consumer group, operations drift away from your serverless functions story: separate retry semantics, separate dashboards, separate failure modes to learn.
And without shared observability, those background jobs and pipelines become a black box sitting next to well-instrumented REST API endpoints.
How Inquir helps
One surface for HTTP, async jobs, and pipelines
Functions double as pipeline steps, and the platform records each background execution the same way it records a synchronous invoke, so async jobs stay searchable alongside your HTTP traffic.
Reuse secrets and networking decisions across online traffic and offline pipelines.
What you get
Background job patterns to standardize
Fan-out
Split one event into many tasks with clear ownership.
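Fan-out can be sketched as a handler that enqueues one job per downstream concern. The `enqueue` function below is a hypothetical stand-in for your platform's job-start API, and the job names are illustrative; the point is that each task has one clear owner.

```javascript
// Fan-out: one "order placed" event becomes several independent jobs,
// each owned by a different concern, so one failure does not block the rest.
// `enqueue` is a hypothetical stand-in for the platform's job-start API.
const queued = [];
async function enqueue(jobName, payload) {
  queued.push({ jobName, payload });
}

async function onOrderPlaced(order) {
  await Promise.all([
    enqueue('send-receipt-email', { orderId: order.id, email: order.email }),
    enqueue('update-inventory', { orderId: order.id, items: order.items }),
    enqueue('refresh-analytics', { orderId: order.id }),
  ]);
  return queued.length; // number of jobs fanned out
}
```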
Compensation
Model rollback or alerting paths for partial failures.
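A minimal compensation sketch: run steps that each carry an undo action, and on a partial failure roll back completed steps in reverse order. The `runWithCompensation` helper and its step shape are illustrative, not a platform API; an alerting call could sit alongside the undo loop.

```javascript
// Compensation: each step pairs a forward action with an undo action.
// On failure, undo completed steps in reverse, then rethrow for retries/alerts.
async function runWithCompensation(steps) {
  const done = [];
  try {
    for (const step of steps) {
      await step.run();
      done.push(step);
    }
  } catch (err) {
    // Roll back in reverse order; alerting on the partial failure goes here too.
    for (const step of done.reverse()) await step.undo();
    throw err;
  }
}
```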
Backpressure
Tune concurrency when downstream systems are fragile.
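Backpressure can be approximated with a small concurrency limiter in front of the fragile downstream. This hand-rolled `limiter` is a sketch; in a real deployment you would usually tune the platform's own concurrency controls instead of shipping your own.

```javascript
// Concurrency limiter: at most `maxConcurrent` wrapped calls run at once;
// the rest queue up, shielding a fragile downstream from bursts.
function limiter(maxConcurrent) {
  let active = 0;
  const waiting = [];
  const acquire = () =>
    new Promise((resolve) => {
      if (active < maxConcurrent) { active++; resolve(); }
      else waiting.push(resolve);
    });
  const release = () => {
    active--;
    const next = waiting.shift();
    if (next) { active++; next(); }
  };
  return async (fn) => {
    await acquire();
    try { return await fn(); } finally { release(); }
  };
}
```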
What to do next
How to design background jobs on Inquir Compute
Define payload
Version schemas so upgrades do not break in-flight jobs.
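One way to version payloads is to tag each with a schema version and upgrade old shapes on read, so jobs enqueued before a deploy still run after it. The v1-to-v2 change below (adding an explicit `format` field) is hypothetical; the pattern, not the fields, is the point.

```javascript
// Versioned payloads: migrate on read so in-flight jobs survive a deploy.
// The concrete v1 -> v2 migration shown here is a made-up example.
function migratePayload(payload) {
  switch (payload.version ?? 1) {
    case 1:
      // v1 had no `format` field; default it instead of breaking old jobs
      return { version: 2, reportId: payload.reportId, format: 'csv' };
    case 2:
      return payload;
    default:
      throw new Error(`Unknown payload version: ${payload.version}`);
  }
}
```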
Make idempotent
Guard writes with stable keys.
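A stable key can be derived from the payload itself, so a retried job replays the prior result instead of writing twice. The in-memory `store` below is a stand-in for a database table with a unique key on that value.

```javascript
// Idempotent write guard: the same payload always maps to the same key,
// so retries return the existing row instead of duplicating the side effect.
const store = new Map(); // stand-in for a table with a unique key

function stableKey({ reportId, userId }) {
  return `${reportId}:${userId}`;
}

async function recordExport(payload, url) {
  const key = stableKey(payload);
  const existing = store.get(key);
  if (existing) return existing; // replay: prior result, no second write
  const row = { ...payload, url, status: 'done' };
  store.set(key, row);
  return row;
}
```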
Observe
Alert on DLQ-like states if your deployment exposes them.
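If your deployment surfaces per-job failure outcomes, alerting on DLQ-like states can start as a threshold on consecutive failures. The `recordResult` helper below is a sketch under that assumption, not a platform metrics API.

```javascript
// DLQ-style alerting sketch: count consecutive failures per job name and
// fire an alert once a threshold is crossed; a success resets the count.
const failures = new Map();

function recordResult(jobName, ok, { threshold = 3, alert = console.error } = {}) {
  const count = ok ? 0 : (failures.get(jobName) ?? 0) + 1;
  failures.set(jobName, count);
  if (count >= threshold) alert(`job ${jobName} failed ${count} times in a row`);
  return count;
}
```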
Code example
HTTP timeout → async job handoff
The HTTP handler returns 202 immediately so the client is not held open. The job handler picks up the work with the same secrets and observability as the gateway route.
export async function handler(event) {
  const { reportId, userId } = JSON.parse(event.body || '{}');
  if (!reportId) {
    return { statusCode: 400, body: JSON.stringify({ error: 'reportId required' }) };
  }

  // Enqueue the slow export job — returns immediately
  const { instanceId: jobId } = await global.durable.startNew('export-report', undefined, {
    reportId,
    userId,
  });

  // Client polls GET /export-status/:jobId or receives a webhook when done
  return { statusCode: 202, body: JSON.stringify({ jobId }) };
}
export async function handler(event) {
  const { reportId, userId } = event.payload;

  // Idempotency key — safe to retry
  const existing = await db.exports.findByReportAndUser(reportId, userId);
  if (existing?.status === 'done') return { url: existing.url };

  const rows = await buildReport(reportId);
  const url = await storage.upload(rows, { key: `reports/${reportId}.csv` });
  await db.exports.upsert({ reportId, userId, url, status: 'done' });
  await notify(userId, { url });
  return { url };
}
When it fits
Choose async when…
- Work that takes more than a few seconds
- Spiky workloads
- External APIs with variable latency
When to skip it
- Truly instantaneous reads that fit comfortably in SLA
FAQ
Is exactly-once delivery realistic for background jobs?
Aim for idempotent handlers and deduplication keys; true exactly-once across networks and storage is rare—design for at-least-once with safe replays.
When should HTTP return 202 Accepted?
When the user-facing work is enqueued and you can point to a job or execution ID—better than holding a socket open until a long export finishes.
How do pipelines relate to schedules and webhooks?
Pipelines can start from schedule, HTTP, manual, or event triggers. A webhook handler can return quickly and enqueue async jobs or start a pipeline—different entry points, same orchestration code.