A Job runs a command to completion on a specified GPU or CPU resource. Unlike workspaces, jobs are non-interactive: you submit a fine-tuning script or evaluation pipeline and let it run to completion. Jobs are ideal for:
- Model training and fine-tuning
- Batch inference and evaluation
- Data preprocessing pipelines
- Hyperparameter sweeps (submit multiple jobs in parallel)
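For sweeps, a common pattern is to generate the parameter grid locally and submit one job per combination. A minimal sketch of the grid-expansion step, assuming a hypothetical `train.py` entrypoint and flag names (substitute your actual script and submission command):

```python
from itertools import product

# Hyperparameter grid for the sweep; values are illustrative.
grid = {
    "lr": [1e-4, 3e-4, 1e-3],
    "batch_size": [16, 32],
}

def sweep_commands(grid):
    """Yield one training command per grid combination.

    The script name and flag names are hypothetical placeholders,
    not a documented interface; swap in your own entrypoint.
    """
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        args = " ".join(f"--{k} {v}" for k, v in zip(keys, values))
        yield f"python train.py {args}"

commands = list(sweep_commands(grid))
# 3 learning rates x 2 batch sizes -> 6 parallel jobs
```

Each generated command would then be submitted as its own job, so all six combinations run in parallel.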

## Jobs vs Workspaces
| | Job | Workspace |
|---|---|---|
| Interaction | Non-interactive (runs a command) | Interactive (SSH, JupyterLab) |
| Lifecycle | Starts → runs → completes automatically | Stays running until you pause or terminate |
| Billing | Only while running | While running (GPU + storage); while paused (storage only) |
| Best for | Training, batch processing, sweeps | Development, debugging, exploration |
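The billing difference in the table can be made concrete with a small cost model. The rates below are invented placeholders, not real pricing; the point is the structure: a job bills only for its run time, while a workspace keeps accruing storage charges even when paused.

```python
def job_cost(run_hours, gpu_rate):
    """Jobs bill only while running."""
    return run_hours * gpu_rate

def workspace_cost(run_hours, paused_hours, gpu_rate, storage_rate):
    """Workspaces bill GPU + storage while running, storage only while paused."""
    running = run_hours * (gpu_rate + storage_rate)
    paused = paused_hours * storage_rate
    return running + paused

# Illustrative rates only: $2.50/h GPU, $0.10/h storage.
print(job_cost(4, 2.50))                  # 10.0
print(workspace_cost(4, 20, 2.50, 0.10))  # 12.4
```

A 4-hour training run costs the same compute either way; the workspace's extra $2.40 here comes from storage billed during 4 running hours plus 20 paused hours.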
## Job statuses
| Status | Meaning |
|---|---|
| `scheduling` | Waiting for resources to become available. The job shows a reason like "Waiting for GPU capacity" while it queues. |
| `running` | Your command is actively executing on the allocated resources. |
| `succeeded` | The command exited successfully (exit code 0). Output in mounted volumes is preserved. |
| `failed` | The command exited with a non-zero code, or the container crashed (for example, OOMKilled). Check logs to debug. |
| `terminated` | You manually cancelled the job before it finished. |
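The terminal statuses follow directly from how the container exits. A sketch of those semantics as a plain function (an illustration of the table above, not the platform's actual implementation):

```python
def final_status(exit_code=None, cancelled=False, crashed=False):
    """Map a job's container outcome to its terminal status.

    Mirrors the status table: manual cancellation wins, any crash
    or non-zero exit fails the job, and exit code 0 succeeds.
    """
    if cancelled:
        return "terminated"  # user cancelled before completion
    if crashed or (exit_code is not None and exit_code != 0):
        return "failed"      # non-zero exit or container crash (e.g. OOMKilled)
    if exit_code == 0:
        return "succeeded"   # exit code 0; output in mounted volumes is preserved
    return "running"         # no exit yet

# final_status(exit_code=0) -> "succeeded"
# final_status(exit_code=137, crashed=True) -> "failed"
```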
## Next steps
- List batch jobs — browse, filter, and search jobs
- Create a batch job — submit from the console or CLI
- View job details — inspect command, image, environment, volumes, and tags
- View logs — stream container output in real time
- Monitor metrics — track GPU, VRAM, CPU, and memory usage
- Clean up jobs — terminate a running job and hide finished ones from the list
- Full CLI reference: `vesslctl job`
