This is a big change. We reworked the core of Hastic Server and fixed deep problems in the execution model: hanging analytics and never-ending learning. Hastic Server is much more stable now.
Analytics could get stuck during learning or detection; now it always returns a result or throws an error. Analytics could also appear dead and stop responding, because the main execution thread could be blocked by learning and detection.
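The "always return or fail" behavior can be sketched as the following pattern (a minimal illustration, not Hastic's actual code; `learn` and `run_with_timeout` are hypothetical names): run a potentially long learning task in a worker and bound it with a timeout, so the caller either gets a result or a clear error instead of hanging forever.

```python
# Minimal sketch: bound a long-running task with a timeout so the
# caller always gets a result or an error (illustrative, not Hastic code).
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def learn(payload):
    # hypothetical stand-in for an expensive learning task
    return {"model": "trained", "points": len(payload)}

def run_with_timeout(task, payload, timeout_sec=10.0):
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(task, payload)
    try:
        return future.result(timeout=timeout_sec)
    except FutureTimeout:
        raise RuntimeError("learning timed out")
    finally:
        # don't wait for a stuck worker when shutting down the pool
        pool.shutdown(wait=False)
```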
In this release we:
- improved how data flows into detection during webhook processing
- moved connectivity with the node-server into a separate thread
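The separate-thread change follows a common pattern, sketched below with the standard library only (Hastic itself talks to the node-server over ZeroMQ via pyzmq; the names here are illustrative): connectivity lives in its own thread with message queues, so long learning or detection work can no longer block the request/response loop.

```python
# Minimal sketch: a dedicated connectivity thread fed by queues,
# so heavy work elsewhere cannot block message handling.
import queue
import threading

inbox = queue.Queue()
outbox = queue.Queue()

def connectivity_loop():
    # hypothetical loop: receive messages, acknowledge each one
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown signal
            break
        outbox.put(f"ack:{msg}")

thread = threading.Thread(target=connectivity_loop, daemon=True)
thread.start()

def request(msg, timeout_sec=1.0):
    """Send a message to the connectivity thread and wait for its reply."""
    inbox.put(msg)
    return outbox.get(timeout=timeout_sec)
```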
Thanks to @matthiasendler for the helpful tips.
NOTE: don’t forget to reinstall the analytics dependencies if you build hastic-server from source: one of the key dependencies (pyzmq) needs an update.
Every change you make in the “Analytics” / “Webhook” tab is sent to hastic-server immediately, and the server stores it in the DB. So we don’t rely on the “Save” button anymore.
Visit our Wiki page to learn more.
Hastic-server is a service that can consume a lot of resources and produce errors. We introduced some metrics you can monitor, and you can export them with the new exporting tool. We will write more about that in a future post.
- Validate connection to Hastic Server #145
- Synchronization with Hastic Server #64
- Hastic datasource doesn’t work with Grafana at sub-url #227
- Errors from the beginning #212
- Hastic Info isn’t updated on datasource switch #205
- Missing statuses and updates on saving #230
- Hastic server at “” is not available #166
- Analytic units’ fields are not persisted #240
- Error 500 should not set disconnect state #229
- Learning timeout #481
- Server info: number of task resolvers #510
- Server info: detections number #516
- Hastic-exporter for Prometheus #520
- Synchronization with panel #455
- Add tasks to queue when analytics is not ready #468
- Send data to detection in chunks #489
- Batch detection #500
- Find start and end of peaks and troughs #506
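Several of the items above (sending data to detection in chunks, batch detection) share one idea: split a long series into bounded windows so each detection call stays small. A minimal sketch with illustrative names (not Hastic's API):

```python
# Minimal sketch: slice a long series into fixed-size chunks and
# run detection per chunk instead of on the whole series at once.
def chunks(series, size):
    """Yield consecutive slices of `series` with at most `size` points."""
    for start in range(0, len(series), size):
        yield series[start:start + size]

def detect(chunk):
    # hypothetical detector: flag values above a fixed threshold
    return [v for v in chunk if v > 0.9]

series = [0.1, 0.95, 0.3, 0.99, 0.2, 0.97]
anomalies = []
for chunk in chunks(series, size=2):
    anomalies.extend(detect(chunk))
```

Chunking keeps memory bounded and lets the server report progress (and time out) per chunk rather than per whole-series run.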