Industry-standard datasets, honest splits, reproducible results. Every number on this page comes from publicly available datasets with documented methodology.
Cross-load split: trained on 0-1 HP, tested on 2-3 HP (never-seen operating conditions). Competitive with CNN and TCN baselines, in under 20 KB Flash.
Our lightest engine fits in the Flash memory of a chip that costs less than a coffee. No GPU needed, no cloud, no framework dependencies.
From vibration analysis to acoustic anomaly detection — each engine is optimized for a specific signal type. One platform, every edge use case.
We believe in transparency. Here's our honest view of where Luviner engines excel and where there's room to improve.
Bearing fault detection (CWRU): up to 99.6% accuracy (KAN), competitive with CNN (99.8%) and TCN (98.9%) baselines. Unified classification (sklearn): 96% average accuracy across 4 datasets. Turbofan RUL (C-MAPSS): RMSE 19.3, better than a BiLSTM baseline. All in under 140 KB Flash.
Time-series anomaly detection (TSAD): F1 0.56-0.80, vs SOTA 0.90+. Acoustic anomaly (DCASE): AUC 0.75, vs SOTA 0.90. HAR: 86%, vs SOTA 95%+. The gap is expected — we optimize for MCU deployment, not unlimited compute.
Auto-updated from the latest benchmark runs. Each cell shows the best result for that engine on that benchmark.
Not every engine fits on every MCU. Flash size depends on the engine architecture and configuration. Here's what fits where.
| Target MCU | Flash | RAM | Compatible Engines | Chip Cost |
|---|---|---|---|---|
| ARM Cortex-M0 | 32 KB | 4 KB | -- | from ~€1 |
| ARM Cortex-M4 | 256 KB | 64 KB | -- | from ~€3 |
| ESP32 | 4 MB | 520 KB | -- | from ~€2 |
| ESP32-S3 | 8 MB | 512 KB | -- | from ~€3 |
Every result on this page is reproducible. We use industry-standard datasets with documented train/test splits.
CWRU (Case Western Reserve), C-MAPSS (NASA), UCI HAR, TSAD (MSL/SMAP/SMD), DCASE 2020, sklearn (Iris/Wine/Cancer/Digits). All publicly available, no proprietary data.
Cross-load splits where applicable (train and test on different operating conditions). No data leakage. Pure NumPy — no GPU, no ML frameworks. Results include system info for reproducibility.
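To make the split concrete, here is a minimal pure-NumPy sketch of the cross-load rule: every window from a given motor load lands on exactly one side of the split. The arrays below are synthetic stand-ins; in a real run they come from parsing the public CWRU .mat files.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in arrays for the public CWRU data; in a real run these come from
# parsing the published .mat files (windows, fault labels, motor load in HP).
X = rng.normal(size=(4000, 1024))           # vibration windows
y = rng.integers(0, 10, size=4000)          # fault classes
load_hp = rng.integers(0, 4, size=4000)     # operating condition: 0, 1, 2, or 3 HP

# Cross-load split: the model never sees the test operating conditions.
train = np.isin(load_hp, [0, 1])
test = np.isin(load_hp, [2, 3])
X_train, y_train = X[train], y[train]
X_test, y_test = X[test], y[test]

assert not np.any(train & test)             # disjoint by construction: no leakage
```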
Every decision traceable to specific neurons. Full causal chain from input to output. Self-healing engines that never degrade silently. All running on a microcontroller.
When the engine flags an anomaly, it tells you exactly which neurons were responsible and what input features drove them. No black box.
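As a toy illustration of the principle (not the engine's actual tracer): for a small dense network, per-neuron responsibility for one decision can be read off as activation times outgoing weight.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))   # stand-in weights
x = rng.normal(size=4)                                      # one input frame

h = np.maximum(x @ W1, 0.0)                 # hidden activations (ReLU)
logits = h @ W2
cls = int(logits.argmax())

# Which hidden neurons drove this decision, and by how much?
contrib = h * W2[:, cls]                    # per-neuron contribution to the winning logit
top = np.argsort(-np.abs(contrib))[:3]
print(f"class {cls}; top neurons: {top}, contributions: {contrib[top]}")

# And which input features drove those neurons?
feature_drive = np.abs(x[:, None] * W1[:, top]).sum(axis=1)   # per-input drive
```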
The engine monitors its own neurons for degradation, concept drift, and dead units. It signals when retraining is needed — before performance drops.
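A minimal sketch of how such self-monitoring can work, assuming a window of recorded hidden activations; names and thresholds here are illustrative, not the engine's internals.

```python
import numpy as np

def neuron_health(activations, eps=1e-6):
    """Flag dead units from a window of hidden activations.
    activations: (n_samples, n_neurons) recorded during inference."""
    std = activations.std(axis=0)
    dead = std < eps                        # neuron never responds to input
    return dead, std

def drift_score(baseline_mean, current_mean, baseline_std):
    """Per-neuron drift: distance from the commissioning baseline,
    measured in baseline standard deviations."""
    return np.abs(current_mean - baseline_mean) / (baseline_std + 1e-6)

# Usage: signal retraining when drift crosses a threshold, before accuracy drops.
acts = np.random.default_rng(1).normal(size=(512, 32))
dead, std = neuron_health(acts)
z = drift_score(np.zeros(32), acts.mean(axis=0), std)
needs_retraining = dead.any() or (z > 3.0).any()
```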
Visualize the hidden state trajectory in real-time. See normal basins, anomaly trajectories, and predict problems before they cross the threshold.
| Metric | Value | Note |
|---|---|---|
| Decision traceability | 100% | Every neuron attribution tracked |
| Consciousness footprint | <11 KB | Header + source, Q8 fixed-point |
| Cloud dependency | 0 | Runs entirely on-device |
| Test coverage | 50 tests | All 10 organelles + C export |
| Neural organelles | 10 | Tracer, Map, Lysosome, Ribosome, Membrane, Debugger, Amygdala, Hippocampus, Pituitary, NCL |
Combine multiple sensor types into a single AI model. Per-modality encoders, cross-modal attention, and root cause analysis — all running on a microcontroller with less than 2 KB overhead.
Each sensor type gets a specialized encoder (FFT for vibration, statistical features for slow sensors). The model knows what each input means, not just its value.
Linear attention fuses modality embeddings in O(n) time. The model learns which sensor combinations matter — like vibration changing before temperature rises.
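A minimal pure-NumPy sketch of both steps, per-modality encoders feeding an O(n) linear-attention fusion. Every function name, dimension, and weight below is an illustrative stand-in, not the shipped implementation.

```python
import numpy as np

def encode_vibration(x, n_bins=16):
    """Hypothetical vibration encoder: log-energy in coarse FFT bands."""
    spec = np.abs(np.fft.rfft(x))
    return np.log1p(np.array([b.sum() for b in np.array_split(spec, n_bins)]))

def encode_slow(x):
    """Hypothetical slow-sensor encoder: simple summary statistics."""
    return np.array([x.mean(), x.std(), x.min(), x.max()])

def linear_attention_fuse(embeddings, w_score):
    """O(n) fusion: one scalar score per modality, softmax-weighted sum."""
    E = np.stack(embeddings)                # (n_modalities, d)
    s = E @ w_score                         # one pass over modalities: O(n)
    w = np.exp(s - s.max()); w /= w.sum()
    return w @ E, w                         # fused embedding + per-modality weights

# Toy usage with stand-in projection and scoring weights (trained in practice).
rng = np.random.default_rng(0)
d = 8
P_vib, P_tmp = rng.normal(size=(16, d)) * 0.1, rng.normal(size=(4, d)) * 0.1
w_score = rng.normal(size=d)

vib_emb = encode_vibration(rng.normal(size=1024)) @ P_vib
tmp_emb = encode_slow(25.0 + rng.normal(size=32)) @ P_tmp
fused, weights = linear_attention_fuse([vib_emb, tmp_emb], w_score)
```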
When the model detects a fault, it tells you which sensor modality is responsible. No guesswork: 'temperature caused 75% of this anomaly'.
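Continuing the sketch above, the per-modality fusion weights already provide a crude responsibility readout; the engine's actual root cause analysis is presumably richer, but the principle is the same.

```python
# Continuing the sketch above: read the fusion weights as responsibilities.
for name, share in zip(["vibration", "temperature"], weights):
    print(f"{name}: {share:.0%} of this decision")
```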
| Metric | Value | Note |
|---|---|---|
| Fault detection | 100% | Bearing fault + overcurrent, 3-sensor industrial demo |
| False alarm rate | 2% | On normal operation data |
| Multimodal overhead | <2 KB | Encoder + fusion weights, Q8 fixed-point |
| Fusion complexity | O(n) | Linear attention, scales to any number of sensors |
| Compatible engines | 13 | Works with any of the 13 Luviner engines |
Classify sounds in real time on any microcontroller. Industrial sound monitoring, machine health via audio, environmental sound classification — all running locally, no cloud required.
Your machine tells you when something is wrong — before a human can hear it. Detects abnormal bearing noise, pump cavitation, and airflow changes in real time.
Continuous audio classification at sensor speed. No buffering, no upload delays. Decisions happen on the device as sound arrives.
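The streaming loop itself is simple. In this minimal sketch, frame_features and the classify callback are illustrative stand-ins for the engine's on-device feature extraction and inference call.

```python
import numpy as np

def frame_features(frame, n_bins=8):
    """Hypothetical per-frame features: log-energy in coarse FFT bands."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.log1p(np.array([b.sum() for b in np.array_split(spec, n_bins)]))

def stream_classify(samples, classify, frame=1024, hop=512):
    """One decision per hop as audio arrives; no full-signal buffering."""
    for start in range(0, len(samples) - frame + 1, hop):
        yield start, classify(frame_features(samples[start:start + frame]))

# Toy usage: a threshold "classifier" over 1 s of audio at 16 kHz.
audio = np.random.default_rng(2).normal(size=16000)
decisions = list(stream_classify(audio, lambda f: int(f.sum() > 20.0)))
```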
AutoML selects the best engine for your audio task. All 13 Luviner engines support audio classification out of the box.
| Metric | Value | Note |
|---|---|---|
| Flash footprint | <60 KB | Complete model including feature extraction |
| Synthetic benchmark (Vivi 4 / Focus 4) | 100% | Controlled classification test, clean audio |
| DCASE 2020 fan anomaly (AUC) | 0.749 | Industrial fan anomaly detection, public benchmark |
| Streaming mode | Real-time | Classifies in real time as audio arrives |
| Cloud dependency | 0 | All processing on-device |
Classify gestures and activities from a 6-axis IMU (accelerometer + gyroscope). Wearables, robotics, gesture control — all running locally on commodity hardware.
Walking, running, sitting, climbing stairs — classified in real time. Validated on the UCI HAR benchmark: 95% accuracy, official train/test split.
Recognize custom gestures from wrist or hand movement. Build gesture-controlled interfaces without cloud processing or connectivity.
From prosthetic limb control to industrial exoskeleton monitoring, motion classification runs entirely on the embedded processor.
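A minimal sketch of the windowed feature extraction in pure NumPy. The exact 51-feature set is engine-specific and not reproduced here; the statistics below are representative stand-ins.

```python
import numpy as np

def imu_features(window):
    """Features from one window of a 6-axis IMU (3-axis accel + 3-axis gyro).
    window: (n_samples, 6). The production 51-feature set differs; this
    sketch shows the kind of per-axis statistics involved."""
    feats = []
    for axis in window.T:
        feats += [axis.mean(), axis.std(), axis.min(), axis.max(),
                  np.abs(np.diff(axis)).mean()]            # mean absolute first difference
    accel_mag = np.linalg.norm(window[:, :3], axis=1)      # orientation-free magnitude
    feats += [accel_mag.mean(), accel_mag.std(), accel_mag.max()]
    return np.array(feats)

# 2.56 s window at 50 Hz (128 samples), as in the UCI HAR protocol.
window = np.random.default_rng(3).normal(size=(128, 6))
f = imu_features(window)    # 6 axes x 5 stats + 3 magnitude stats = 33 values
```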
| Metric | Value | Note |
|---|---|---|
| UCI HAR accuracy | 95.0% | 6 activities, 30 volunteers, official split |
| Features extracted | 51 | 6-axis IMU (accelerometer + gyroscope) |
| Flash footprint | <20 KB | Complete model, pure C, no dependencies |
| Input axes | 6 | 3-axis accelerometer + 3-axis gyroscope |
| Inference mode | Real-time | Streaming, per-sample classification |
Train models across distributed devices without sharing raw sensor data. Each device improves locally, then shares only what it learned — not what it measured. Privacy by design.
Each machine on the production line learns from its own vibration data. The fleet improves collectively. No raw sensor data leaves the equipment.
Patient monitoring devices improve from population patterns without exposing individual readings. Regulatory-compliant, no cloud data lake required.
Faulty or compromised devices cannot corrupt the global model. Robust median aggregation ensures bad actors are automatically excluded.
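A minimal sketch of one aggregation round, assuming norm-clipped local updates with Gaussian noise (the DP mechanism) and a coordinate-wise median at the aggregator. Parameter values are illustrative, not the product's defaults.

```python
import numpy as np

def local_update(weights, grad, lr=0.01, clip=1.0, sigma=0.5, rng=None):
    """One device's contribution: clipped step plus Gaussian noise.
    clip, sigma, and the number of rounds set the DP epsilon."""
    rng = rng or np.random.default_rng()
    delta = -lr * grad
    delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))   # norm clipping
    return delta + rng.normal(0.0, sigma * clip, size=delta.shape)

def robust_median_aggregate(updates):
    """Coordinate-wise median across devices: a handful of faulty or
    malicious updates cannot drag the result, unlike a plain mean."""
    return np.median(np.stack(updates), axis=0)

# Toy round: five honest devices plus one Byzantine device sending garbage.
rng = np.random.default_rng(4)
w = np.zeros(10)
updates = [local_update(w, rng.normal(size=10), rng=rng) for _ in range(5)]
updates.append(np.full(10, 1e6))            # outlier, excluded by the median
w = w + robust_median_aggregate(updates)
```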
| Metric | Value | Note |
|---|---|---|
| Raw data shared | 0 | Only model updates are exchanged |
| Convergence — IID scenario | Fast | Devices share similar data distributions |
| Convergence — non-IID scenario | Stable | Devices have heterogeneous local distributions |
| Privacy mechanism | Gaussian DP | Gaussian differential privacy, configurable epsilon |
| Byzantine tolerance | Robust median | Robust median excludes faulty/malicious nodes |
Go from CSV to compiled C in minutes. Start free.