Documentation Index

Fetch the complete documentation index at: https://docs.woodwide.ai/llms.txt

Use this file to discover all available pages before exploring further.

Datasets are the foundation of every model in Wood Wide AI. Upload a CSV or Parquet file to create a dataset, then reference it when training models.

Create Dataset

Upload a file (up to 30 MB) to create a new dataset.
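A direct upload can be sketched with Python's requests library. This is a minimal sketch, assuming a multipart body; the field names "dataset_name" and "file" are illustrative assumptions, not confirmed parameters — check the Create Dataset endpoint reference for the exact fields.

```python
import os
import requests

base_url = "https://api.woodwide.ai"
headers = {"Authorization": f"Bearer {os.getenv('WOODWIDE_API_KEY', '')}"}

def create_dataset(path: str, name: str) -> dict:
    # POST /datasets with a multipart upload (files <= 30 MB).
    # The "dataset_name" and "file" field names are assumptions.
    with open(path, "rb") as f:
        resp = requests.post(
            f"{base_url}/datasets",
            headers=headers,
            data={"dataset_name": name},
            files={"file": (os.path.basename(path), f, "text/csv")},
        )
    resp.raise_for_status()
    return resp.json()
```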

List Datasets

Retrieve all datasets in your account.
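A sketch of the listing call, assuming a GET on the same /datasets path used for creation; the path and response shape are assumptions to verify against the endpoint reference.

```python
import os
import requests

base_url = "https://api.woodwide.ai"
headers = {"Authorization": f"Bearer {os.getenv('WOODWIDE_API_KEY', '')}"}

def list_datasets():
    # GET /datasets — path assumed to mirror the create endpoint.
    resp = requests.get(f"{base_url}/datasets", headers=headers)
    resp.raise_for_status()
    return resp.json()
```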

Get Dataset

Fetch details for a specific dataset.
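A sketch of fetching one dataset, assuming a GET on /datasets/{dataset_id}; the path parameter name is an assumption for illustration.

```python
import os
import requests

base_url = "https://api.woodwide.ai"
headers = {"Authorization": f"Bearer {os.getenv('WOODWIDE_API_KEY', '')}"}

def get_dataset(dataset_id: str) -> dict:
    # GET /datasets/{dataset_id} — assumed path; raises on 404
    # if the dataset does not exist.
    resp = requests.get(f"{base_url}/datasets/{dataset_id}", headers=headers)
    resp.raise_for_status()
    return resp.json()
```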

Delete Dataset

Permanently remove a dataset and all its versions.
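Deletion can be sketched the same way, assuming a DELETE on /datasets/{dataset_id}; since the operation is permanent and removes all versions, a sketch like this is often gated behind a confirmation in real tooling.

```python
import os
import requests

base_url = "https://api.woodwide.ai"
headers = {"Authorization": f"Bearer {os.getenv('WOODWIDE_API_KEY', '')}"}

def delete_dataset(dataset_id: str) -> None:
    # DELETE /datasets/{dataset_id} — assumed path. Permanently
    # removes the dataset and every version; there is no undo.
    resp = requests.delete(f"{base_url}/datasets/{dataset_id}", headers=headers)
    resp.raise_for_status()
```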

Large File Uploads

Direct file uploads to POST /datasets, POST /models/{model_id}/infer, and POST /models/{model_id}/infer-async are limited to 30 MB. For larger files, use the three-step signed-URL upload flow:
  1. Prepare the upload with the Prepare Signed-URL Upload endpoint to get a signed URL.
  2. Upload the file directly to the signed URL.
  3. Complete the upload to trigger ingestion.
import os
import requests

api_key = os.getenv("WOODWIDE_API_KEY")
base_url = "https://api.woodwide.ai"
headers = {"Authorization": f"Bearer {api_key}"}

# Step 1: Prepare Signed-URL Upload
prepare_response = requests.post(
    f"{base_url}/datasets/upload",
    headers={**headers, "Content-Type": "application/json"},
    json={
        "dataset_name": "large_dataset",
        "file": {
            "filename": "large_data.csv",
            "bytes": os.path.getsize("large_data.csv"),
            "content_type": "text/csv",
        },
    },
)
prepare_response.raise_for_status()

prepare = prepare_response.json()
upload_url = prepare["upload"]["upload_url"]
dataset_version_id = prepare["dataset"]["version_id"]

# Step 2: Upload the file to the signed URL
with open("large_data.csv", "rb") as f:
    upload_response = requests.put(
        upload_url, data=f, headers={"Content-Type": "text/csv"}
    )
upload_response.raise_for_status()

# Step 3: Complete the upload to trigger ingestion
complete_response = requests.post(
    f"{base_url}/datasets/{dataset_version_id}/complete",
    headers=headers,
)
complete_response.raise_for_status()

print(complete_response.json())