Frequently Asked Questions

Everything you need to know about our edge AI prediction technology based on Reservoir Computing.

1. Key Characteristics of Our AI
What kinds of processes can your AI predict?
Reservoir Computing can be applied to a wide variety of phenomena that can be represented as time-series data — including mechanical vibrations, chemical processes, natural phenomena, and biological signals.
How much training data is needed?
Unlike deep learning algorithms, which often require vast amounts of data, Reservoir Computing can achieve accurate predictions with as few as 500–1,000 time points. For reference: if temperature is measured once per hour, a single day produces 24 data points.
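To make the idea concrete, here is a minimal echo state network, the best-known form of Reservoir Computing, sketched in Python with NumPy: a fixed random reservoir plus a linear readout trained by ridge regression on a toy sine-wave task of roughly 800 time points. All sizes, scalings, and hyperparameters are illustrative assumptions, not Entrox's production values.

```python
import numpy as np

# Minimal echo state network (ESN) sketch: a fixed random "reservoir"
# plus a linear readout trained by ridge regression. Every constant
# below is illustrative, chosen only to make the toy example work.

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 100

# Fixed random weights; only the readout below is trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_inputs)."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)  # leakless state update
        states.append(x)
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave, ~800 points,
# within the 500-1,000 time points mentioned above.
t = np.linspace(0, 40 * np.pi, 800)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])   # states from inputs 0..T-2
y = u[1:, 0]                # targets: the next input value

# Ridge-regression readout: the only trained part of the model.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
pred = X @ W_out
rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Because only the linear readout is trained, training amounts to a single least-squares solve, which is why so little data and compute suffice compared with backpropagation-based deep learning.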
Are there any requirements for the training data?
The training dataset should cover a sufficient variety of operating conditions and dynamics that you expect to encounter during actual operation. If the training data only reflects a narrow range of conditions, the model may not generalize well to situations outside that range.
Is manual feature engineering required?
In most cases, no. Reservoir Computing is far more robust than deep learning in this regard and typically requires little to no manual feature engineering. Some basic preprocessing such as standardization may be needed depending on the data, but our team handles this as part of the project.
What signal frequencies can be handled?
Reservoir Computing can handle a wide range of signal frequencies — from fast mechanical vibrations above 10 kHz down to slow processes in the sub-Hz range.
Is the model trained on the device itself?
Training is performed by Entrox before the model is deployed to a device. This keeps the on-device algorithm light and compact, which is the recommended approach. On-device retraining is also possible for applications that require periodic model updates.
How long does a trained model remain valid?
This depends on the application. Our AI learns the relationships between input variables and the target variable. Over time, factors such as equipment aging, wear and tear, or sensor drift can change these relationships. If these changes were already accounted for in the original training, the model can remain effective over its entire lifetime. If not, periodic or even continuous online learning is advisable.
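One common way to realize continuous online learning for a model with a linear readout is a recursive least squares (RLS) update, sketched below. The dimensions, forgetting factor, and drifting-target setup are assumptions for demonstration, not a description of Entrox's actual update scheme.

```python
import numpy as np

# Illustrative online adaptation of a linear readout via recursive
# least squares (RLS). The forgetting factor lam < 1 gradually
# down-weights old samples, letting the model track slow drift
# such as equipment aging or sensor degradation.

n = 4
w = np.zeros(n)            # readout weights, updated sample by sample
P = np.eye(n) * 100.0      # running inverse-correlation estimate
lam = 0.99                 # forgetting factor (illustrative value)

def rls_step(x, y):
    """Update the weights w with one new (features x, target y) sample."""
    global w, P
    k = P @ x / (lam + x @ P @ x)      # gain vector
    w = w + k * (y - w @ x)            # correct by the prediction error
    P = (P - np.outer(k, x @ P)) / lam

# Feed samples from a known linear target to watch the weights converge.
rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0, 0.5, 0.0])
for _ in range(500):
    x = rng.normal(size=n)
    rls_step(x, true_w @ x)
```

Each update costs only a few small matrix-vector products, which is why this style of online learning fits on microcontroller-class hardware.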
How can we evaluate the technology before committing?
If you can send us a dataset, we can typically show initial test results within one week. No special infrastructure is needed — our AI is extremely lightweight and requires no GPU or cloud setup.
How does the model run on our hardware?
Our trained model is designed to run as optimized code directly on your device, customized to work within your specific environment — including constraints such as limited computational power on microcontrollers or embedded systems.
Which programming languages do you support?
We develop our models in Python and can convert them to languages suited to your target platform — typically C for microcontrollers and embedded devices, or Python for more powerful edge devices.
2. Performance and Accuracy
How does the accuracy compare to deep learning?
For the types of industrial time-series problems Reservoir Computing is designed for — soft sensors, anomaly detection, and remaining useful life prediction — it achieves accuracy comparable to deep learning methods such as LSTMs, while requiring significantly less data, computational power, and training time.
Which metrics do you use to evaluate performance?
We use standard metrics depending on the task: R² and RMSE for regression and soft-sensor tasks, AUC-ROC and F1 score for anomaly detection, and MAE/RMSE for remaining useful life prediction. We can also work with your own metrics on request.
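For readers unfamiliar with the regression metrics named above, here are hand-rolled definitions of R², RMSE, and MAE; the example values are toy numbers, not results from any real project, and in practice any standard metrics library computes the same quantities.

```python
import numpy as np

# Hand-rolled versions of the regression metrics named above.

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors strongly."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error: average error magnitude in original units."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1.0 means a perfect fit."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy values for illustration only.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.9])
```

RMSE and MAE are in the same units as the target variable, which makes them easy to interpret against process tolerances; R² is unitless and compares the model against simply predicting the mean.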
3. Deployment and Integration
What hardware is required to run the model?
Reservoir Computing runs on standard microcontrollers and embedded processors — no GPU, no cloud connection, and no special hardware required. Inference typically completes in under a millisecond, making it suitable even for real-time control loops where speed is critical.
Does our data need to leave our premises?
No. Because the model runs entirely on local devices, your data stays on-site. There is no need for cloud connectivity or external data transfer — making it suitable for environments with strict data security requirements.
What data formats do you support?
Any standard time-series format works — CSV, database exports, or direct sensor feeds. The key requirement is time-stamped measurements from your sensors. We handle all preprocessing and formatting as part of the project.
How does the model integrate with our existing systems?
The trained model integrates as a software module into existing infrastructure — control systems, PLCs, SCADA systems, microcontrollers, or edge devices.
4. Explainability
Can you explain why the model makes a given prediction?
Yes. Our models are not a black box: the entire structure is visible as mathematical expressions. For soft-sensor applications, we can identify which mathematical terms in the model, including nonlinear interactions, are most influential in driving the prediction. We call this capability Sensitivity Analysis.
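To illustrate the idea behind this kind of analysis (a hypothetical sketch, not Entrox's actual implementation), consider a model that is a weighted sum of mathematical terms, including a nonlinear interaction term; ranking terms by the magnitude of their contribution identifies the most influential ones:

```python
# Hypothetical model: prediction = sum of weight * term over all terms.
# Term names and weights are made up purely for illustration.
weights = {
    "temperature":          2.0,   # linear term
    "pressure":            -0.5,   # linear term
    "temperature*pressure": 1.2,   # nonlinear interaction term
}

def term_contributions(weights, values):
    """Return each term's share of the total absolute contribution."""
    contrib = {k: weights[k] * values[k] for k in weights}
    total = sum(abs(c) for c in contrib.values())
    return {k: abs(c) / total for k, c in contrib.items()}

# One operating point, again with made-up values.
values = {"temperature": 1.0, "pressure": 2.0, "temperature*pressure": 2.0}
shares = term_contributions(weights, values)
```

Because the model is a sum of explicit terms, the decomposition is exact rather than an approximation, which is what makes the explanation trustworthy for engineers.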
Can anomaly detection pinpoint which sensor is responsible?
Yes. Our anomaly detection architecture naturally decomposes the overall anomaly score by sensor. When an anomaly is flagged, engineers can immediately see which specific sensor or component is behaving abnormally — providing directly actionable information.
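A simple way to picture such a decomposition (a hypothetical sketch, not the actual architecture) is an overall anomaly score built as a sum of per-sensor terms, each measuring how far that sensor's reading deviates from the model's expectation:

```python
# Hypothetical per-sensor anomaly decomposition: the overall score is
# a sum of variance-normalized squared residuals, one per sensor.
# Sensor names and all numbers are made up for illustration.

def per_sensor_scores(residuals, sigmas):
    """Score each sensor by its normalized squared prediction residual."""
    return {s: (residuals[s] / sigmas[s]) ** 2 for s in residuals}

residuals = {"vibration": 0.1, "temperature": 3.0, "current": 0.2}
sigmas    = {"vibration": 0.5, "temperature": 0.5, "current": 0.5}

scores = per_sensor_scores(residuals, sigmas)
total_score = sum(scores.values())          # overall anomaly score
flagged = max(scores, key=scores.get)       # worst-behaving sensor
```

Because the overall score is literally the sum of the per-sensor terms, attributing an alarm to a sensor requires no extra post-hoc explanation step.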
5. Getting Started
What do we need to provide to get started?
A dataset from your process, typically historical sensor measurements in any standard format. We can discuss specific data requirements during an initial consultation.
How long does a project take?
An initial feasibility assessment can typically be completed within one week of receiving your data. A full pilot project, including model optimization and performance evaluation, typically takes 2–4 weeks depending on complexity.
What are the deliverables of a pilot project?
The output of a pilot project includes a trained model as optimized code (C for microcontrollers, or Python for more powerful edge devices), along with performance metrics, documentation, and recommendations for real-world use.