There are two ways to upload data to Canary Edge:
- Dashboard — upload a CSV file directly through the web UI
- API — send JSON payloads programmatically
Both methods support baseline creation (training) and anomaly detection.
## Create a Baseline via Dashboard
A baseline teaches Canary Edge what “normal” looks like for your machine. To create one from the dashboard:
- Go to Machines in the sidebar and click + Add Machine
- Choose Univariate (single sensor) or Multivariate (multiple correlated sensors)
- Enter a Machine ID (e.g. pump-47-vibX)
- Select the Granularity that matches your data sampling rate
- Upload a CSV file with your historical normal-operation data
- Set Sensitivity (0-99) — higher values flag more anomalies
- Click Create Baseline
Univariate — two columns with a header row:

```csv
timestamp,value
2026-01-01T00:00:00Z,10.5
2026-01-01T01:00:00Z,11.2
2026-01-01T02:00:00Z,10.8
```
Multivariate — first column is timestamp, remaining columns are channels:

```csv
timestamp,vibration_x,vibration_y,temperature
2026-01-01T00:00:00Z,0.42,0.38,72.1
2026-01-01T01:00:00Z,0.45,0.41,72.3
2026-01-01T02:00:00Z,0.43,0.39,72.0
```
Channel names are automatically read from the column headers.
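If you are assembling the multivariate CSV programmatically, a minimal sketch with Python's csv module (the channel names here are illustrative, taken from the sample above):

```python
import csv
import io

# Build a multivariate CSV in the expected layout: a header row,
# a timestamp column first, then one column per channel.
rows = [
    ("2026-01-01T00:00:00Z", 0.42, 0.38, 72.1),
    ("2026-01-01T01:00:00Z", 0.45, 0.41, 72.3),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["timestamp", "vibration_x", "vibration_y", "temperature"])
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text)
```

Writing to a real file works the same way with `open("machine.csv", "w", newline="")` in place of the StringIO buffer.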
Baseline creation takes up to 30 seconds. The model fine-tunes a predictor specifically for your machine’s normal behavior patterns.
## Run Detection via Dashboard
Once a machine has a baseline, you can upload new data to check for anomalies:
- Go to the machine’s detail page
- Click Run Detection
- Upload a CSV file with the same column layout as your baseline data
- Set Sensitivity and click Detect Anomalies
The results show total points analyzed, anomaly count, and regime classification (HEALTHY, ACTIVE, TRANSITION, SHOCK) for each data point.
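If you export those per-point regime labels for further analysis, tallying them is straightforward; the `regimes` list below is made-up sample data, not real detection output:

```python
from collections import Counter

# Hypothetical per-point regime labels from a detection run
regimes = ["HEALTHY", "HEALTHY", "ACTIVE", "SHOCK", "HEALTHY", "TRANSITION"]

counts = Counter(regimes)
print(counts.most_common())    # most frequent regime first
print(counts.get("SHOCK", 0))  # number of points classified SHOCK
```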
## Create a Baseline via API
```bash
curl -X POST https://api.canaryedge.com/v1/baseline \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: YOUR_API_KEY" \
  -d '{
    "machine_id": "pump-47-vibX",
    "series": [
      {"timestamp": "2026-01-01T00:00:00Z", "value": 10.5},
      {"timestamp": "2026-01-01T01:00:00Z", "value": 11.2},
      ...
    ],
    "granularity": "minutely",
    "sensitivity": 95
  }'
```
The response includes baseline statistics and fine-tuning results:
```json
{
  "machine_id": "pump-47-vibX",
  "status": "created",
  "stats": {
    "num_windows": 45,
    "series_length": 2048,
    "energy_mean": 0.0012,
    "energy_p99": 0.0089
  },
  "predictor_finetuning": {
    "status": "completed",
    "epochs_run": 23,
    "final_loss": 0.000142,
    "duration_seconds": 8.3,
    "early_stopped": true
  }
}
```
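In a client script you can inspect those fields before proceeding to detection. A minimal sketch using the field names from the sample response above (treat the exact schema as illustrative; the threshold is an arbitrary example, not an official cutoff):

```python
# Response shaped like the sample baseline-creation response
resp = {
    "machine_id": "pump-47-vibX",
    "status": "created",
    "predictor_finetuning": {"status": "completed", "final_loss": 0.000142},
}

assert resp["status"] == "created", "baseline was not created"

ft = resp.get("predictor_finetuning", {})
finetuned = ft.get("status") == "completed"
if finetuned:
    print(f"Baseline ready for {resp['machine_id']} (loss {ft['final_loss']})")
```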
## Run Detection via API
Every detection request needs a series array of objects with timestamp and value:
```json
{
  "series": [
    {"timestamp": "2026-01-01T00:00:00Z", "value": 10.5},
    {"timestamp": "2026-01-01T01:00:00Z", "value": 11.2},
    {"timestamp": "2026-01-01T02:00:00Z", "value": 10.8}
  ],
  "granularity": "hourly",
  "sensitivity": 85
}
```
## API Constraints
| Field | Constraint |
|---|---|
| series | 12 to 8,640 points |
| timestamp | ISO 8601 format, sorted ascending, no duplicates |
| value | Numeric (integer or float) |
| granularity | minutely, hourly, daily, weekly, monthly, yearly |
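You can check these constraints client-side before sending a request and fail fast with a clear message. A sketch of such a validator (the helper name and error wording are our own, not part of the API):

```python
from datetime import datetime

def validate_series(series, min_len=12, max_len=8640):
    """Check the series constraints above; raise ValueError on the first violation."""
    if not (min_len <= len(series) <= max_len):
        raise ValueError(f"series must have {min_len} to {max_len} points, got {len(series)}")
    timestamps = []
    for point in series:
        # Parse ISO 8601; replace trailing 'Z' for compatibility with older Pythons
        ts = datetime.fromisoformat(point["timestamp"].replace("Z", "+00:00"))
        timestamps.append(ts)
        float(point["value"])  # must be numeric
    if timestamps != sorted(timestamps):
        raise ValueError("timestamps must be sorted ascending")
    if len(set(timestamps)) != len(timestamps):
        raise ValueError("duplicate timestamps are not allowed")
    return True
```

Call it on the `series` list right before the `requests.post(...)` in any of the examples below.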
## From CSV
```python
import csv
import requests

# Read CSV with columns: timestamp, value
series = []
with open("sensor_data.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        series.append({
            "timestamp": row["timestamp"],
            "value": float(row["value"])
        })

# Send to Canary Edge
response = requests.post(
    "https://api.canaryedge.com/anomalydetector/v1.1/timeseries/entire/detect",
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": "YOUR_API_KEY"
    },
    json={
        "series": series,
        "granularity": "minutely",
        "sensitivity": 85
    }
)

result = response.json()
anomalies = [i for i, flag in enumerate(result["isAnomaly"]) if flag]
print(f"Found {len(anomalies)} anomalies at indices: {anomalies}")
```
## From Excel
```python
import openpyxl
import requests

wb = openpyxl.load_workbook("sensor_data.xlsx")
ws = wb.active

series = []
for row in ws.iter_rows(min_row=2, values_only=True):  # skip the header row
    timestamp, value = row[0], row[1]
    series.append({
        # openpyxl returns datetime objects for date cells; fall back to str()
        "timestamp": timestamp.isoformat() + "Z" if hasattr(timestamp, "isoformat") else str(timestamp),
        "value": float(value)
    })

response = requests.post(
    "https://api.canaryedge.com/anomalydetector/v1.1/timeseries/entire/detect",
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": "YOUR_API_KEY"
    },
    json={"series": series, "granularity": "hourly", "sensitivity": 85}
)
print(response.json())
```
## From Pandas DataFrame
```python
import pandas as pd
import requests

# Your existing DataFrame with a datetime index and a 'value' column
df = pd.read_csv("data.csv", parse_dates=["timestamp"], index_col="timestamp")

series = [
    {"timestamp": ts.isoformat() + "Z", "value": float(val)}
    for ts, val in zip(df.index, df["value"])
]

# Keep only the most recent 8,640 points (API maximum)
if len(series) > 8640:
    series = series[-8640:]

response = requests.post(
    "https://api.canaryedge.com/anomalydetector/v1.1/timeseries/entire/detect",
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": "YOUR_API_KEY"
    },
    json={"series": series, "granularity": "minutely", "sensitivity": 85}
)
```
## From Database (SQL)
```python
import psycopg2
import requests

conn = psycopg2.connect("postgresql://user:pass@host/db")
cur = conn.cursor()
cur.execute("""
    SELECT timestamp, value
    FROM sensor_readings
    WHERE machine_id = 'pump-1'
    ORDER BY timestamp DESC
    LIMIT 8640
""")

series = [
    {"timestamp": row[0].isoformat() + "Z", "value": float(row[1])}
    for row in cur.fetchall()
]
series.reverse()  # query returned newest-first; API expects ascending order

response = requests.post(
    "https://api.canaryedge.com/anomalydetector/v1.1/timeseries/entire/detect",
    headers={
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": "YOUR_API_KEY"
    },
    json={"series": series, "granularity": "minutely", "sensitivity": 85}
)
```
## Batch Processing
For large datasets, split into chunks of up to 8,640 points:
```python
import requests

def detect_in_chunks(series, chunk_size=8640, **kwargs):
    """POST each chunk separately; kwargs carries granularity, sensitivity, etc."""
    all_results = []
    for i in range(0, len(series), chunk_size):
        chunk = series[i:i + chunk_size]
        if len(chunk) < 12:  # API minimum; skip a too-short trailing chunk
            continue
        resp = requests.post(
            "https://api.canaryedge.com/anomalydetector/v1.1/timeseries/entire/detect",
            headers={
                "Content-Type": "application/json",
                "Ocp-Apim-Subscription-Key": "YOUR_API_KEY"
            },
            json={"series": chunk, **kwargs}
        )
        all_results.append(resp.json())
    return all_results
```
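To report anomaly positions relative to the full series rather than each chunk, offset the per-chunk indices by the chunk's starting position. A sketch assuming each response carries an `isAnomaly` boolean array, as in the CSV example earlier:

```python
def merge_anomaly_indices(results, chunk_size=8640):
    """Map per-chunk isAnomaly flags back to indices in the original series."""
    indices = []
    for chunk_no, result in enumerate(results):
        offset = chunk_no * chunk_size
        indices.extend(offset + i for i, flag in enumerate(result["isAnomaly"]) if flag)
    return indices
```

Pass the same `chunk_size` used when splitting, so offsets line up with the original positions.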
## Common Issues
| Issue | Solution |
|---|---|
| Series must contain at least 12 points | Your data has fewer than 12 rows; check for empty/null filtering |
| Series timestamps must be sorted | Sort your data by timestamp ascending before sending |
| Duplicate timestamps | Remove duplicate timestamp rows from your dataset |
| Series exceeds maximum length | Split into chunks of 8,640 or fewer points |
| Excel dates showing as numbers | Use pd.to_datetime() or openpyxl to parse dates properly |
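The sorting and duplicate issues above can be fixed in one pass before upload. A small helper sketch (the function name is our own; it assumes uniformly formatted ISO 8601 timestamp strings, which sort correctly as text):

```python
def clean_series(series):
    """Sort points by timestamp ascending and keep the first point per timestamp."""
    seen = set()
    cleaned = []
    # Uniform ISO 8601 strings sort chronologically when sorted lexicographically
    for point in sorted(series, key=lambda p: p["timestamp"]):
        if point["timestamp"] not in seen:
            seen.add(point["timestamp"])
            cleaned.append(point)
    return cleaned
```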