Documentation Index

Fetch the complete documentation index at: https://docs.cloud.vessl.ai/llms.txt

Use this file to discover all available pages before exploring further.

VESSL Cloud Documentation

On-demand GPUs, Docker-based environments, and team collaboration, built for ML.
VESSL Cloud gives you on-demand access to GPUs without managing infrastructure. Launch a workspace, connect via JupyterLab or SSH, and collaborate through shared team storage.
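For the SSH path mentioned above, a local `~/.ssh/config` entry makes reconnecting easy. This is a generic sketch: the alias, address, user, and key path below are hypothetical placeholders, not values VESSL defines; copy the real connection details from your workspace's page.

```
# ~/.ssh/config: hypothetical entry for a GPU workspace
Host my-workspace                      # local alias of your choice
    HostName <workspace-ssh-address>   # from the workspace's connection details
    Port 22                            # replace if the workspace exposes another port
    User root                          # container user; check your workspace image
    IdentityFile ~/.ssh/id_ed25519     # private key registered with the workspace
```

With this in place, `ssh my-workspace` opens a shell in the container, and editors that support Remote-SSH (for example VS Code) can reuse the same alias.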

Why VESSL Cloud

  • Instant: launch GPU instances with JupyterLab and SSH in a few clicks
  • Consistent: Docker-based images ensure identical environments across teams and projects
  • Scalable: scale GPU specs or count as needed; pause or terminate when done
  • Collaborative: share data and models with team volumes
  • Transparent: clear billing states for running, paused, and terminated workspaces

Quickstart

Create your first Workspace in minutes.

Organization

Admin scope, policies, billing, and responsibilities.

Team

Collaboration model and shared resources.

Workspace (Run GPU instances)

GPU/CPU containers and cost controls.

vesslctl CLI

Manage workspaces, jobs, storage, and more, all from your terminal.

More in Introduction

Roles & permissions

Admin vs Member responsibilities and scopes.

Cluster

GPU/CPU execution environments for workspaces.

Storage

Cluster storage vs Object storage.

Billing

Credit model and cost states.