State-of-the-Art Detection Engine
Canary Edge uses an advanced deep learning architecture purpose-built for time-series anomaly detection. It consistently outperforms traditional ML approaches, statistical methods, and competing cloud services in both accuracy and latency.

How It Works
- Encode — Time-series windows are converted into rich latent-space representations using a Temporal Transformer Encoder
- Predict — A neural predictor estimates the expected next state from the current state
- Measure — The prediction error (energy score) between expected and actual behavior determines anomaly severity
- Classify — The system automatically classifies the operational regime: HEALTHY, ACTIVE, TRANSITION, or SHOCK
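The four stages above can be sketched end to end. This is an illustrative outline only: `encoder` and `predictor` stand in for the Temporal Transformer Encoder and neural predictor, and the z-score regime cutoffs are assumptions, not the product's actual thresholds.

```python
import numpy as np

def classify_regime(energy: float, sigma: float = 1.0) -> str:
    """Map a prediction-error z-score to an operational regime (cutoffs illustrative)."""
    z = energy / sigma
    if z < 1.0:
        return "HEALTHY"
    if z < 2.0:
        return "ACTIVE"
    if z < 3.0:
        return "TRANSITION"
    return "SHOCK"

def detect(window: np.ndarray, encoder, predictor) -> str:
    latent = encoder(window[:-1])          # Encode: window -> latent representation
    expected = predictor(latent)           # Predict: expected next latent state
    actual = encoder(window[-1:])          # Encode the observed next state
    energy = float(np.linalg.norm(expected - actual))  # Measure: energy score
    return classify_regime(energy)         # Classify: regime label
```

With a toy mean-pooling "encoder" and identity "predictor", a flat window classifies as HEALTHY while a window ending in a large spike classifies as SHOCK.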
Why Canary Edge Wins
| Traditional ML / Cloud Services | Canary Edge |
|---|---|
| Fixed thresholds that need manual tuning | Learns what “normal” looks like automatically |
| Misses gradual regime shifts | Detects subtle behavioral changes in real-time |
| Requires large labeled training datasets | Works with minimal unlabeled data (12+ points) |
| One model per metric, no cross-signal awareness | Multi-resolution models that understand temporal dynamics |
| Azure Anomaly Detector: ~220ms, AWS Lookout: ~350ms | Sub-50ms p95 latency |
| Black-box anomaly scores | Interpretable regime classification with z-scores |
Benchmark Comparisons
We regularly benchmark Canary Edge against industry alternatives, and the full benchmark methodology and results will be published transparently.

Benchmark comparison reports are coming soon. We are running standardized evaluations against Azure Anomaly Detector, AWS Lookout for Equipment, PyOD, ADTK, and Luminaire across the following datasets:
- NAB (Numenta Anomaly Benchmark) — 58 real-world time series
- NASA IMS Bearing — vibration sensor data with known failure points
- Yahoo S5 — synthetic and real anomaly benchmarks
- KPI (AIOps) — internet service KPI anomaly dataset
Benchmark Metrics We Report
| Metric | What It Measures |
|---|---|
| F1 Score | Balance of precision and recall |
| Detection Latency | Time from anomaly onset to detection |
| API Latency (p50/p95/p99) | End-to-end request latency |
| False Positive Rate | Incorrect anomaly flags per 1000 points |
| Cost per Million Points | Normalized pricing comparison |
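For reference, the headline F1 score combines precision and recall as follows. These are the standard metric definitions, not Canary Edge-specific code, and the counts in the example are made up for illustration.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall over true/false positives and misses."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 90 true detections, 10 false flags, 10 missed anomalies:
print(round(f1_score(90, 10, 10), 2))  # 0.9
```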
Collapse Prevention (SIGReg)
Deep learning models can “collapse” — mapping all inputs to the same output, making everything look normal. Canary Edge uses SIGReg (Signature Regularization), based on the Cramér-Wold theorem, to mathematically guarantee the latent space remains expressive and anomalies stay detectable. This is a key differentiator over simpler autoencoder approaches.

Sensitivity
The sensitivity parameter (0-99) controls detection strictness:
- Low (0-30): Only flag extreme deviations. Best for noisy industrial sensors.
- Medium (40-70): Balanced detection. Default for most use cases.
- High (71-99): Catch subtle shifts. Best for financial metrics and SLAs.
threshold = 4.0 - (sensitivity / 99.0) * 3.0
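The mapping above can be sketched directly; only the formula itself comes from the docs, the function name and range check are illustrative.

```python
def anomaly_threshold(sensitivity: int) -> float:
    """Map sensitivity (0-99) to an energy-score threshold per the formula above."""
    if not 0 <= sensitivity <= 99:
        raise ValueError("sensitivity must be in [0, 99]")
    return 4.0 - (sensitivity / 99.0) * 3.0

# Lower sensitivity -> higher threshold -> fewer anomalies flagged:
print(anomaly_threshold(0))   # 4.0 (least strict)
print(anomaly_threshold(99))  # 1.0 (most strict)
```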
Per-Machine Fine-Tuning
Canary Edge ships with a powerful generic model trained on diverse time-series patterns. For even better accuracy on your specific equipment or metrics, you can fine-tune a per-machine predictor:

- Set a baseline via the /v1/baseline endpoint with your machine’s healthy operational data
- The system automatically fine-tunes a dedicated predictor for that machine
- Fine-tuning takes seconds, not hours — no GPU required on your end
- Fine-tuned models typically improve detection accuracy from 82% to 97%+
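A baseline submission might look like the following sketch. Only the /v1/baseline endpoint comes from the text above; the host, field names (`machine_id`, `values`), and payload shape are assumptions to illustrate the flow.

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/baseline"  # placeholder host

def build_baseline_request(machine_id: str, points: list) -> urllib.request.Request:
    """Build a POST request carrying a machine's healthy operational data."""
    payload = json.dumps({"machine_id": machine_id, "values": points}).encode()
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_baseline_request("pump-07", [0.91, 0.93, 0.90, 0.92])
# urllib.request.urlopen(req) would submit the baseline;
# fine-tuning then runs server-side in seconds, no GPU needed locally.
```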
Multi-Resolution Architecture
Canary Edge automatically selects the right model based on your data characteristics:

| Model | Window Size | Best For |
|---|---|---|
| Sensor Model (2.3M params) | 2,048 points | High-frequency sensor data (vibration, temperature, pressure) |
| Metrics Model (1.8M params) | 128 points | Low-frequency business metrics (KPIs, SLAs, throughput) |
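The routing can be pictured as a simple rule on available history; this selection logic is an assumption inferred from the window sizes in the table, not the product's actual dispatch code.

```python
def select_model(num_points: int) -> str:
    """Route to the sensor model when enough history exists for its 2,048-point window."""
    return "sensor" if num_points >= 2048 else "metrics"

print(select_model(5000))  # high-frequency stream -> sensor model (2,048-point window)
print(select_model(300))   # sparse business metric -> metrics model (128-point window)
```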