
Progress Reporting

Long-running jobs can report their progress as a value from 0.0 to 1.0 using ctx.ReportProgress. In the current SDK beta, ReportProgress is a no-op delegate (the call is accepted but not sent to the API). This will be wired to a progress endpoint in a future release. Calling it now is forward-compatible, has zero performance cost, and ensures your job code is ready when progress tracking goes live.

Basic usage

Call ctx.ReportProgress inside your job's ExecuteAsync method. Pass a double between 0.0 (not started) and 1.0 (complete):

public class ImportUsers : IJob<ImportPayload>
{
    private readonly IUserService _users;

    public ImportUsers(IUserService users) => _users = users;

    public async Task ExecuteAsync(ImportPayload payload, JobContext ctx)
    {
        var records = await LoadRecords(payload.FileUrl, ctx.CancellationToken);

        for (int i = 0; i < records.Count; i++)
        {
            await _users.ImportAsync(records[i], ctx.CancellationToken);
            await ctx.ReportProgress((double)(i + 1) / records.Count);
        }
    }
}

Dashboard integration

The job detail page (GET /v1/jobs/{id}) returns a progress field. In the current release this value is set server-side: null while the job is processing, and 1.0 when it succeeds. Once ReportProgress is wired to the API, the field will reflect real-time progress reported by your job code, as in this example:

{
  "id": "job_01JAXBKM3N4P5Q6R7S8T9UVWXY",
  "state": "processing",
  "progress": 0.45,
  "job_type": "ImportUsers",
  "attempt": 1,
  "max_attempts": 3
}

The Zeridion dashboard renders this as a progress bar on the job detail page, giving you a visual indicator of how far along a running job is.

When a job completes successfully, the server automatically sets progress to 1.0 during the success acknowledgement — even if the job never called ReportProgress.
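For example, once a job finishes, the response might look like the sketch below (the terminal state name "succeeded" is an assumption; the examples on this page only show "processing"):

```json
{
  "id": "job_01JAXBKM3N4P5Q6R7S8T9UVWXY",
  "state": "succeeded",
  "progress": 1.0,
  "job_type": "ImportUsers",
  "attempt": 1,
  "max_attempts": 3
}
```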

note

In SDK version 0.1.0-beta.1, ReportProgress is still the no-op delegate described above: the value is accepted but not sent to the API until the progress endpoint ships.

Batch processing pattern

For jobs that process items in batches, report progress per batch rather than per item to reduce overhead:

[JobConfig(MaxAttempts = 3, TimeoutSeconds = 3600)]
public class ProcessLargeDataset : IJob<DatasetPayload>
{
    private readonly IDataStore _store;

    public ProcessLargeDataset(IDataStore store) => _store = store;

    public async Task ExecuteAsync(DatasetPayload payload, JobContext ctx)
    {
        var totalCount = await _store.GetCountAsync(payload.DatasetId, ctx.CancellationToken);
        var processed = 0;
        const int batchSize = 100;

        ctx.Logger.LogInformation(
            "Processing dataset {DatasetId} with {Total} records",
            payload.DatasetId, totalCount);

        while (processed < totalCount)
        {
            ctx.CancellationToken.ThrowIfCancellationRequested();

            var batch = await _store.GetBatchAsync(
                payload.DatasetId, processed, batchSize, ctx.CancellationToken);

            foreach (var record in batch)
            {
                await _store.ProcessAsync(record, ctx.CancellationToken);
            }

            processed += batch.Count;
            await ctx.ReportProgress((double)processed / totalCount);

            ctx.Logger.LogInformation(
                "Processed {Processed}/{Total} records", processed, totalCount);
        }
    }
}

CSV import example

A complete real-world example that reads a CSV file from blob storage, imports it line by line, and reports progress every 50 records:

[JobConfig(MaxAttempts = 3, TimeoutSeconds = 1800, Queue = "imports")]
public class ImportCsvFile : IJob<CsvImportPayload>
{
    private readonly IBlobStorage _blobs;
    private readonly ICustomerRepository _customers;

    public ImportCsvFile(IBlobStorage blobs, ICustomerRepository customers)
    {
        _blobs = blobs;
        _customers = customers;
    }

    public async Task ExecuteAsync(CsvImportPayload payload, JobContext ctx)
    {
        ctx.Logger.LogInformation(
            "Starting CSV import {ImportId}, attempt {Attempt}/{Max}",
            payload.ImportId, ctx.AttemptNumber, ctx.MaxAttempts);

        await using var stream = await _blobs.OpenReadAsync(payload.BlobPath, ctx.CancellationToken);
        using var reader = new StreamReader(stream);

        var lines = new List<string>();
        while (await reader.ReadLineAsync(ctx.CancellationToken) is { } line)
            lines.Add(line);

        var dataLines = lines.Skip(1).ToList(); // skip header row

        for (int i = 0; i < dataLines.Count; i++)
        {
            ctx.CancellationToken.ThrowIfCancellationRequested();

            // Naive comma split — use a CSV parsing library if fields can
            // contain quoted commas.
            var fields = dataLines[i].Split(',');
            await _customers.UpsertAsync(new Customer
            {
                Email = fields[0].Trim(),
                Name = fields[1].Trim(),
            }, ctx.CancellationToken);

            // Report every 50 records (and on the final record) to avoid
            // excessive reporting overhead.
            if (i % 50 == 0 || i == dataLines.Count - 1)
            {
                await ctx.ReportProgress((double)(i + 1) / dataLines.Count);
            }
        }

        ctx.Logger.LogInformation(
            "CSV import {ImportId} complete — {Count} records imported",
            payload.ImportId, dataLines.Count);
    }
}

Checking progress from client code

Query a job's progress via the SDK or API:

var status = await jobs.GetStatusAsync(jobId);

if (status is not null)
{
    Console.WriteLine($"State: {status.State}");

    // Progress is null until the job reports (or completes), so guard the format call.
    Console.WriteLine(status.Progress is { } p
        ? $"Progress: {p:P0}"
        : "Progress: not reported");
}

Or via the API:

GET /v1/jobs/{id}

The response includes "progress": 0.72 (or null if never reported).
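If you want to watch a job until it finishes, the status call above can be wrapped in a polling loop. This is a sketch, not SDK code: IJobClient, the 2-second interval, and the "succeeded"/"failed" terminal state names are assumptions; GetStatusAsync, State, and Progress are the members shown above.

```csharp
// Polls a job until it disappears or reaches a terminal state,
// printing progress along the way.
async Task PollProgressAsync(IJobClient jobs, string jobId, CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        var status = await jobs.GetStatusAsync(jobId);
        if (status is null) return; // job not found

        var progress = status.Progress is { } p ? p.ToString("P0") : "n/a";
        Console.WriteLine($"{status.State}: {progress}");

        // Assumed terminal state names — adjust to the states your API returns.
        if (status.State is "succeeded" or "failed") return;

        await Task.Delay(TimeSpan.FromSeconds(2), ct);
    }
}
```

Polling every couple of seconds is plenty for a dashboard-style view; tighten or loosen the delay to match how often your jobs actually report.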

Best practices

  1. Throttle reporting frequency — for jobs processing thousands of items, report every N items or every batch, not every single item. Once per second is a reasonable upper bound.

  2. Use meaningful granularity — report at natural boundaries (per batch, per file, per page) rather than arbitrary percentages.

  3. Combine with structured logging — pair ReportProgress with ctx.Logger messages so you can correlate progress with log output when debugging.

  4. Handle cancellation between progress reports — check ctx.CancellationToken.ThrowIfCancellationRequested() in your processing loop so the job can be interrupted promptly.
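The throttling advice in point 1 can be captured in a small helper. This is a sketch, not part of the SDK: ThrottledProgressReporter is a hypothetical class that rate-limits calls to ctx.ReportProgress by wall-clock time.

```csharp
// Hypothetical helper: forwards progress to ctx.ReportProgress at most
// once per interval, so tight loops don't flood the reporting channel.
public sealed class ThrottledProgressReporter
{
    private readonly JobContext _ctx;
    private readonly TimeSpan _minInterval;
    private DateTime _lastReport = DateTime.MinValue;

    public ThrottledProgressReporter(JobContext ctx, TimeSpan? minInterval = null)
    {
        _ctx = ctx;
        _minInterval = minInterval ?? TimeSpan.FromSeconds(1);
    }

    public async Task ReportAsync(double progress)
    {
        var now = DateTime.UtcNow;

        // Always forward 1.0 so the completion report is never dropped.
        if (progress < 1.0 && now - _lastReport < _minInterval) return;

        _lastReport = now;
        await _ctx.ReportProgress(progress);
    }
}
```

Inside ExecuteAsync you would create one reporter per run (`var reporter = new ThrottledProgressReporter(ctx);`) and call `reporter.ReportAsync(...)` freely from your loop, letting the helper decide which calls actually go through.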

See also

  • JobContext — ReportProgress property and other runtime context
  • Monitoring — metrics API for tracking job health