Jobs
Jobs are the core scheduling units in Scheduler0. They represent individual tasks that need to be executed on a schedule, similar to cron jobs but with enhanced features like retry logic, timezone support, and flexible execution environments.
Overview
A job defines what needs to be executed, when it should run, and how it should be executed. Jobs are organized within projects and can be executed using various executor types including webhooks and cloud functions.
Job Structure
{
"id": 1,
"accountId": 123,
"projectId": 456,
"executorId": 789,
"spec": "0 2 * * *",
"data": "{\"action\": \"cleanup\", \"target\": \"logs\"}",
"startDate": "2024-01-15T00:00:00Z",
"endDate": "2024-12-31T23:59:59Z",
"retryMax": 3,
"timezone": "UTC",
"timezoneOffset": 0,
"status": "active",
"executionId": "abc123...",
"dateCreated": "2024-01-15T10:30:00Z",
"dateModified": null,
"createdBy": "user123",
"modifiedBy": null,
"deletedBy": null
}
Fields
- id: Unique identifier for the job
- accountId: ID of the account that owns this job
- projectId: ID of the project this job belongs to
- executorId: ID of the executor that will run this job
- spec: Cron expression defining when the job should run
- data: JSON string payload containing job-specific data (max 3KB by default, or 1MB with feature upgrade)
- startDate: When the job should start executing
- endDate: When the job should stop executing (optional)
- retryMax: Maximum number of retry attempts on failure
- timezone: Timezone for the job execution
- timezoneOffset: Timezone offset in minutes
- status: Current status of the job (active/inactive)
- executionId: Unique execution ID for the job run
- dateCreated: Timestamp when the job was created
- dateModified: Timestamp when the job was last modified (optional)
- createdBy: Identifier of the user who created the job
- modifiedBy: Identifier of the user who last modified the job (optional)
- deletedBy: Identifier of the user who deleted the job (optional)
Cron Specification
Scheduler0 uses the robfig/cron library which supports 6-field cron expressions:
┌───────────── second (0 - 59)
│ ┌───────────── minute (0 - 59)
│ │ ┌───────────── hour (0 - 23)
│ │ │ ┌───────────── day of the month (1 - 31)
│ │ │ │ ┌───────────── month (1 - 12)
│ │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │ │
│ │ │ │ │ │
* * * * * *
Predefined Schedules
You can use predefined schedules for convenience:
| Expression | Description | Equivalent To |
|---|---|---|
| @yearly (or @annually) | Run once a year, midnight, Jan. 1st | 0 0 0 1 1 * |
| @monthly | Run once a month, midnight, first of month | 0 0 0 1 * * |
| @weekly | Run once a week, midnight between Sat/Sun | 0 0 0 * * 0 |
| @daily (or @midnight) | Run once a day, midnight | 0 0 0 * * * |
| @hourly | Run once an hour, beginning of hour | 0 0 * * * * |
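In client code, these predefined schedules can be expanded to their 6-field equivalents, for example to display or validate what a job will actually do. A minimal sketch: the mapping below simply mirrors the table above and is not part of any official client.

```javascript
// Predefined robfig/cron schedules mapped to their 6-field equivalents,
// taken directly from the table above. Illustrative helper only.
const PREDEFINED_SCHEDULES = {
  "@yearly":   "0 0 0 1 1 *",
  "@annually": "0 0 0 1 1 *",
  "@monthly":  "0 0 0 1 * *",
  "@weekly":   "0 0 0 * * 0",
  "@daily":    "0 0 0 * * *",
  "@midnight": "0 0 0 * * *",
  "@hourly":   "0 0 * * * *",
};

// Expand a predefined schedule; a raw cron expression passes through unchanged.
function expandSpec(spec) {
  return PREDEFINED_SCHEDULES[spec] || spec;
}
```

For example, `expandSpec("@daily")` yields `0 0 0 * * *`, while a raw spec like `0 0 2 * * *` is returned as-is.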
Interval Scheduling
You can also use interval-based scheduling:
| Expression | Description |
|---|---|
| @every 1h30m10s | Every 1 hour, 30 minutes, and 10 seconds |
| @every 5m | Every 5 minutes |
| @every 1h | Every 1 hour |
| @every 24h | Every 24 hours |
Common Cron Expression Examples
- `0 0 2 * * *` - Every day at 2:00 AM
- `0 */15 * * * *` - Every 15 minutes
- `0 0 0 * * 1` - Every Monday at midnight
- `0 * 9-17 * * 1-5` - Every minute from 9 AM to 5 PM, Monday to Friday
- `0 0 0 1 * *` - First day of every month at midnight
Creating Jobs
The /jobs endpoint accepts an array of jobs. You can create a single job or multiple jobs in one request.
Required Fields: projectId, timezone, and createdBy are required for all jobs. The executorId and spec fields are also typically required for a job to be executed.
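Since job creation is a batch call, a lightweight pre-flight check for the required fields can catch mistakes before the request is sent. A sketch: the field list comes from the note above; the helper itself is not part of Scheduler0 or its client.

```javascript
// Fields the docs list as required for every job, plus the two that are
// typically needed for a job to actually execute.
const REQUIRED_FIELDS = ["projectId", "timezone", "createdBy"];
const TYPICALLY_REQUIRED = ["executorId", "spec"];

// Return the names of any missing required fields for one job payload.
function missingFields(job, fields = REQUIRED_FIELDS) {
  return fields.filter((f) => job[f] === undefined || job[f] === null);
}
```

For example, `missingFields({ projectId: 456 })` returns `["timezone", "createdBy"]`, so the payload can be rejected locally instead of round-tripping to the API.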
curl -X POST "https://api.scheduler0.com/v1/jobs" \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID" \
-d '[
{
"projectId": 456,
"executorId": 789,
"spec": "0 0 2 * * *",
"data": "{\"action\": \"backup\", \"database\": \"production\"}",
"startDate": "2024-01-15T00:00:00Z",
"retryMax": 3,
"timezone": "UTC",
"createdBy": "user123"
}
]'
Response: Job creation is asynchronous. The API returns HTTP 202 Accepted with a request ID in the response body and a Location header pointing to the async task endpoint:
{
"success": true,
"data": "request-id-abc123"
}
The Location header will be: /async-tasks/request-id-abc123
Note: The /api/v1/async-tasks/{id} endpoint is only available for users self-hosting Scheduler0 and requires Basic Authentication. For managed service users, job creation status is typically handled automatically, and you can verify job creation by listing or retrieving the job directly.
Node.js Client Example
const Scheduler0Client = require('@scheduler0/node-client');
const client = new Scheduler0Client({
apiKey: 'your_api_key',
apiSecret: 'your_secret_key',
baseURL: 'https://api.scheduler0.com/v1'
});
// Create a new job (or multiple jobs)
const requestId = await client.jobs.create([
{
projectId: 456,
executorId: 789,
spec: "0 0 2 * * *",
data: JSON.stringify({
action: "backup",
database: "production"
}),
startDate: "2024-01-15T00:00:00Z",
retryMax: 3,
timezone: "UTC",
createdBy: "user123"
}
]);
// Check the async task status
const task = await client.asyncTasks.get(requestId);
console.log('Task status:', task.data);
Managing Jobs
Listing Jobs
Retrieve jobs with pagination, filtering, and sorting:
# List all jobs for the account
curl -X GET "https://api.scheduler0.com/v1/jobs?limit=10&offset=0&orderBy=date_created&orderByDirection=desc" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
# List jobs by project (optional filter)
curl -X GET "https://api.scheduler0.com/v1/jobs?projectId=456&limit=10&offset=0&orderBy=date_created&orderByDirection=desc" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Query Parameters:
- limit (required): Maximum number of results (1-100, default: 10)
- offset (required): Number of results to skip (default: 0)
- projectId (optional): Filter by specific project ID. If omitted, returns all jobs for the account.
- orderBy (optional): Field to sort by (date_created, date_modified, created_by, modified_by, deleted_by). Default: date_created
- orderByDirection (optional): Sort direction (asc, desc). Default: desc
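To retrieve every job for an account, page through the endpoint with limit/offset until a page comes back short. A sketch with the listing call injected as a function, since the exact client method name is an assumption and not documented above:

```javascript
// Collect all results from a limit/offset-paginated list endpoint.
// listFn({ limit, offset }) is assumed to resolve to an array of jobs,
// e.g. a thin wrapper around GET /v1/jobs.
async function fetchAllJobs(listFn, pageSize = 100) {
  const all = [];
  let offset = 0;
  for (;;) {
    const page = await listFn({ limit: pageSize, offset });
    all.push(...page);
    if (page.length < pageSize) break; // short page means no more results
    offset += pageSize;
  }
  return all;
}
```

Keeping pageSize at the documented maximum of 100 minimizes the number of requests.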
Updating Jobs
Update job properties. Note: The cron spec (spec) cannot be updated. To change the schedule, delete and recreate the job.
curl -X PUT "https://api.scheduler0.com/v1/jobs/1" \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID" \
-d '{
"retryMax": 5,
"data": "{\"action\": \"updated_work\"}",
"modifiedBy": "user123"
}'
Node.js Client Example
// Update a job
const updatedJob = await client.jobs.update(1, {
retryMax: 5,
data: JSON.stringify({ action: "updated_work" }),
modifiedBy: "user123"
});
console.log('Updated job:', updatedJob.data);
Deleting Jobs
Remove a job from the scheduler:
curl -X DELETE "https://api.scheduler0.com/v1/jobs/1" \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID" \
-d '{
"deletedBy": "user123"
}'
Node.js Client Example
// Delete a job
await client.jobs.delete(1, {
deletedBy: "user123"
});
console.log('Job deleted successfully');
Job Execution
Execution Flow
- Scheduling: Jobs are queued based on their cron specification
- Execution: Jobs are executed by the specified executor
- Retry Logic: Failed jobs are retried up to the specified limit
- Logging: Execution results are logged for monitoring
Execution States
Every job execution has one of three states (represented as integers):
| State | Value | Description |
|---|---|---|
| scheduled | 0 | The job has been scheduled but not yet executed |
| success | 1 | The job was executed successfully |
| failed | 2 | The job execution failed |
Note: The state field in execution logs is an integer, not a string.
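When reading execution logs, the integer state can be mapped back to a label in client code. A one-line sketch based on the table above:

```javascript
// Execution state integers, per the table above: 0=scheduled, 1=success, 2=failed.
const EXECUTION_STATES = ["scheduled", "success", "failed"];

// Translate an integer state from an execution log into its label.
function stateName(state) {
  return EXECUTION_STATES[state] ?? "unknown";
}
```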
Idempotency with Unique IDs
Each job execution has a unique ID (executionId) that serves as an idempotency key. This ID is generated based on:
- Job ID
- Project ID
- Last execution date
- Next execution time
The ID is computed as a SHA-256 hash of these values to ensure uniqueness:
uniqueId = SHA256(projectId + "-" + jobId + "-" + lastExecutionDate + "-" + nextExecutionTime)
How Idempotency Works
The system prevents duplicate executions through a multi-layered approach:
- Pre-execution: Before scheduling, a uniqueId is generated based on the specific execution time
- State tracking: Execution logs store this uniqueId along with transaction state (scheduled, success, or failed)
- Dual tables: Execution logs are maintained in:
  - Committed table: Permanent execution history
  - Uncommitted table: Pending executions that transition to committed after completion
- Recovery mechanism: On restart, the scheduler examines execution logs to determine each job's last known state
- State-based rescheduling: Jobs are only rescheduled after checking their last execution state, preventing duplicate processing
This ensures that:
- The same execution won't be processed twice
- System can recover from crashes without re-executing jobs
- Retry attempts can be tracked accurately
- Execution history remains consistent across system restarts
Execution Versioning: Each job execution has an executionVersion number that increments when a job is rescheduled after success or after exhausting retries. This helps track retry attempts within the same execution cycle.
Retry Behavior
When a job fails, Scheduler0 will automatically retry it based on the retryMax setting:
- retryMax: 0 - No retries, job fails immediately
- retryMax: 3 - Up to 3 retry attempts (default for free tier)
- retryMax: 15 - Up to 15 retry attempts (requires feature upgrade)
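The retry rule reduces to a small predicate. Scheduler0 applies this logic server-side; the function below is only a sketch for reasoning about what a given retryMax setting means.

```javascript
// Decide whether a failed execution gets another attempt.
// attemptsSoFar counts retries already made, not the initial run.
function shouldRetry(attemptsSoFar, retryMax) {
  return attemptsSoFar < retryMax;
}
```

With retryMax: 0 the first failure is final; with retryMax: 3 the job can fail the initial run plus up to three retries before being marked failed.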
Timezone Support
Jobs support timezone-aware scheduling:
- UTC: "timezone": "UTC"
- Local Time: "timezone": "America/New_York"
- Custom Offset: "timezoneOffset": -300 (5 hours behind UTC, in minutes)
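IANA timezone names such as America/New_York can be validated locally before submitting a job, using the standard Intl API, which throws a RangeError for unknown zones. A sketch:

```javascript
// Return true if tz is a valid IANA timezone name.
// Intl.DateTimeFormat throws a RangeError for unrecognized time zones.
function isValidTimezone(tz) {
  try {
    new Intl.DateTimeFormat("en-US", { timeZone: tz });
    return true;
  } catch {
    return false;
  }
}
```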
Job Data
The data field allows you to pass custom information to your job execution. The data must be a JSON-encoded string. Maximum size is 3KB by default or 1MB with feature upgrade.
Example Data Structures
When creating a job, you need to pass the data as a JSON string:
// Simple action (pass as a string)
{
"action": "cleanup",
"target": "logs"
}
// Complex configuration (pass as a string)
{
"action": "data_processing",
"config": {
"source": "database",
"destination": "s3://bucket/data",
"format": "parquet",
"compression": "gzip"
},
"filters": {
"date_range": "last_30_days",
"status": "completed"
}
}
In your API request or code, these should be JSON-encoded strings, for example:
data: JSON.stringify({
action: "cleanup",
target: "logs"
})
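Since the data field is capped at 3KB by default (1MB with the feature upgrade), checking the encoded size before submission avoids a rejected request. A sketch; the helper is illustrative, not part of the client:

```javascript
const MAX_DATA_BYTES = 3 * 1024; // default limit; 1MB with feature upgrade

// JSON-encode a payload and verify it fits within the data size limit.
function encodeJobData(payload, maxBytes = MAX_DATA_BYTES) {
  const encoded = JSON.stringify(payload);
  const size = Buffer.byteLength(encoded, "utf8");
  if (size > maxBytes) {
    throw new Error(`job data is ${size} bytes, limit is ${maxBytes}`);
  }
  return encoded;
}
```

Note that Buffer.byteLength measures the UTF-8 byte length, not the character count, which matters for payloads containing non-ASCII text.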
Job Status
Jobs can have different statuses:
- active: Job is scheduled and will execute
- inactive: Job is paused and will not execute
Status Management
# Pause a job
curl -X PUT "https://api.scheduler0.com/v1/jobs/1" \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-d '{
"status": "inactive",
"modifiedBy": "user123"
}'
# Resume a job
curl -X PUT "https://api.scheduler0.com/v1/jobs/1" \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-d '{
"status": "active",
"modifiedBy": "user123"
}'
Node.js Client Example
// Pause a job
await client.jobs.update(1, {
status: "inactive",
modifiedBy: "user123"
});
// Resume a job
await client.jobs.update(1, {
status: "active",
modifiedBy: "user123"
});
Best Practices
Job Design
- Single Responsibility: Each job should perform one specific task
- Idempotent Operations: Jobs should be safe to run multiple times
- Error Handling: Design jobs to handle failures gracefully
- Resource Management: Consider resource usage and execution time
Scheduling Best Practices
- Avoid Peak Hours: Schedule resource-intensive jobs during off-peak hours
- Staggered Execution: Avoid scheduling many jobs at the same time
- Dependencies: Use job data to coordinate dependent tasks
- Monitoring: Set up alerts for job failures
Data Management
- Structured Data: Use consistent JSON structure for job data
- Sensitive Information: Avoid storing secrets in job data
- Size Limits: Keep job data reasonably sized
- Versioning: Include version information in job data
Monitoring and Debugging
Execution Logs
Monitor job execution through the Scheduler0 dashboard or API. You can query execution logs with optional date filtering, state filtering, and ordering.
Execution Log Structure
Each execution log tracks a single execution attempt of a job:
{
"id": 123,
"accountId": 123,
"uniqueId": "a1b2c3d4e5f6...",
"state": 1,
"nodeId": 42,
"jobId": 789,
"lastExecutionDatetime": "2024-01-15T10:00:00Z",
"nextExecutionDatetime": "2024-01-15T12:00:00Z",
"jobQueueVersion": 5,
"executionVersion": 2,
"dateCreated": "2024-01-15T10:00:01Z",
"dateModified": "2024-01-15T10:00:02Z"
}
Execution Log Fields
- id: Unique identifier for this execution log entry (integer)
- accountId: The account that owns this execution (integer)
- uniqueId: The idempotency key for this execution (used to prevent duplicate executions)
- state: The execution state as an integer (0=scheduled, 1=success, 2=failed)
- nodeId: The scheduler node that handled this execution (integer)
- jobId: The ID of the job that was executed (integer)
- lastExecutionDatetime: The actual time when execution started (RFC3339 format)
- nextExecutionDatetime: The planned time for the next execution (for recurring jobs, RFC3339 format)
- jobQueueVersion: Internal version number for the job queue at execution time (integer)
- executionVersion: Version number for tracking retries (increments when rescheduled, integer)
- dateCreated: When this execution log was created (RFC3339 format)
- dateModified: When this execution log was last modified (RFC3339 format, optional)
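Logs returned by the API can be summarized per state for a quick health check. A sketch over the integer state field described above:

```javascript
// Count execution logs by state (0=scheduled, 1=success, 2=failed),
// using the integer state field from the execution log structure above.
function summarizeExecutions(logs) {
  const summary = { scheduled: 0, success: 0, failed: 0 };
  for (const log of logs) {
    if (log.state === 0) summary.scheduled++;
    else if (log.state === 1) summary.success++;
    else if (log.state === 2) summary.failed++;
  }
  return summary;
}
```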
Query Parameters
- limit (required): Maximum number of results to return (default: 10, max: 100)
- offset (required): Number of results to skip for pagination (default: 0)
- startDate (optional): Start of the date range to query (RFC3339 format)
- endDate (optional): End of the date range to query (RFC3339 format)
- jobId (optional): Filter by specific job ID
- projectId (optional): Filter by specific project ID
- state (optional): Filter by execution state ("scheduled", "completed", or "failed")
- orderBy (optional): Field to order results by ("dateCreated", "lastExecutionDateTime", or "nextExecutionDateTime")
- orderDirection (optional): Direction to order results ("ASC" or "DESC")
Example Queries
Get all execution logs (without date filtering):
curl -X GET "https://api.scheduler0.com/v1/executions?limit=10&offset=0" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Get all execution logs for a date range:
curl -X GET "https://api.scheduler0.com/v1/executions?startDate=2024-01-01T00:00:00Z&endDate=2024-12-31T23:59:59Z&limit=10&offset=0" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Get execution logs for a specific job with state filtering:
curl -X GET "https://api.scheduler0.com/v1/executions?startDate=2024-01-01T00:00:00Z&endDate=2024-12-31T23:59:59Z&jobId=789&state=completed&limit=50&offset=0" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Get execution logs with ordering:
curl -X GET "https://api.scheduler0.com/v1/executions?orderBy=dateCreated&orderDirection=DESC&limit=100&offset=0" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Get execution logs for all jobs in a project:
curl -X GET "https://api.scheduler0.com/v1/executions?startDate=2024-01-01T00:00:00Z&endDate=2024-12-31T23:59:59Z&projectId=456&limit=100&offset=0" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Node.js Client Example
// Get all execution logs (without date filtering)
const executions = await client.listExecutions({
limit: 10,
offset: 0
});
console.log(`Found ${executions.data.executions.length} executions`);
// Get execution logs for a date range
const dateRangeExecutions = await client.listExecutions({
startDate: "2024-01-01T00:00:00Z",
endDate: "2024-12-31T23:59:59Z",
limit: 10,
offset: 0
});
// Get execution logs with state filtering and ordering
const filteredExecutions = await client.listExecutions({
startDate: "2024-01-01T00:00:00Z",
endDate: "2024-12-31T23:59:59Z",
jobId: 789,
state: "completed",
orderBy: "dateCreated",
orderDirection: "DESC",
limit: 50,
offset: 0
});
// Filter by state in client code
const failedExecutions = filteredExecutions.data.executions.filter(
execution => execution.state === 2
);
console.log(`Found ${failedExecutions.length} failed executions`);
Note: Execution logs are automatically cleaned up based on your account's retention policy:
- Standard accounts: Logs are retained for 30 days
- Accounts with the increased retention feature: Logs are retained for 90 days
Execution Analytics
Get execution counts grouped by minute buckets for a date range. This endpoint is useful for visualizing execution patterns over time.
Endpoint: GET /api/v1/executions/analytics
Query Parameters:
- startDate (required): Start date for analytics (YYYY-MM-DD format)
- startTime (required): Start time for analytics (HH:MM:SS or HH:MM format)
Example:
curl -X GET "https://api.scheduler0.com/v1/executions/analytics?startDate=2024-01-01&startTime=00:00:00" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Response:
{
"success": true,
"data": {
"accountId": 123,
"timezone": "UTC",
"startDate": "2024-01-01",
"startTime": "00:00:00",
"endDate": "2024-01-01",
"endTime": "23:59:59",
"points": [
{
"date": "2024-01-01",
"time": "00:00:00",
"scheduled": 10,
"success": 8,
"failed": 2
}
]
}
}
Node.js Client Example:
const analytics = await client.getDateRangeAnalytics({
startDate: "2024-01-01",
startTime: "00:00:00",
accountId: 123
});
console.log(`Analytics for ${analytics.data.points.length} time buckets`);
Execution Totals
Get total counts of scheduled, success, and failed executions for an account.
Endpoint: GET /api/v1/executions/totals
Example:
curl -X GET "https://api.scheduler0.com/v1/executions/totals" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID"
Response:
{
"success": true,
"data": {
"accountId": 123,
"scheduled": 1000,
"success": 950,
"failed": 50
}
}
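The totals lend themselves to a quick failure-rate computation. A sketch using the response shape above; the helper is illustrative, not a client method:

```javascript
// Compute the fraction of completed executions that failed,
// from the data object of the /executions/totals response.
function failureRate(totals) {
  const completed = totals.success + totals.failed;
  return completed === 0 ? 0 : totals.failed / completed;
}
```

For the example response above, 50 failures out of 1000 completed executions gives a rate of 0.05, a useful input for alerting thresholds.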
Node.js Client Example:
const totals = await client.getExecutionTotals(123);
console.log(`Scheduled: ${totals.data.scheduled}`);
console.log(`Success: ${totals.data.success}`);
console.log(`Failed: ${totals.data.failed}`);
Cleanup Old Execution Logs
Manually trigger cleanup of old execution logs based on a retention period. This endpoint is useful for managing storage and compliance requirements.
Endpoint: POST /api/v1/executions/cleanup-old-logs
Request Body:
{
"accountId": "123",
"retentionMonths": 6
}
Example:
curl -X POST "https://api.scheduler0.com/v1/executions/cleanup-old-logs" \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-Secret-Key: YOUR_API_SECRET" \
-H "X-Account-ID: YOUR_ACCOUNT_ID" \
-d '{
"accountId": "123",
"retentionMonths": 6
}'
Response:
{
"success": true,
"data": {
"message": "Old execution logs cleaned up successfully for account 123"
}
}
Node.js Client Example:
const result = await client.cleanupOldExecutionLogs("123", 6);
console.log(result.data.message);
Note: This endpoint requires peer authentication (Basic Auth) when self-hosting. For managed service users, this operation is typically handled automatically based on your account's retention policy.
Common Issues
- Cron Expression Errors: Validate cron expressions before creating jobs
- Executor Failures: Ensure executors are properly configured
- Timezone Issues: Verify timezone settings match your requirements
- Data Format: Ensure job data is valid JSON
API Reference
For complete API documentation, see the Scheduler0 API Reference.
Related Components
- Projects - Organizational units for jobs
- Executors - Execution environments for jobs
- Credentials - Authentication for API access