Running analytics at the edge for observability data
with Zachary Quiring, Edge Delta
There is tremendous value in reducing latency in any decision-making process, particularly in stream processing that supports system and application observability. By running Metrics, Events, Logs, and Traces (MELT) data through an analytics algorithm at the time of creation, on the very devices emitting the signals, organizations can set parameters on which information is worth sending to an observability platform and which is not.
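As a minimal sketch of the idea of deciding at the source what is worth forwarding, the toy filter below keeps a rolling window of a metric stream and forwards only points that deviate sharply from recent history. The z-score heuristic, window size, and threshold here are illustrative assumptions, not Edge Delta's actual algorithm:

```python
from collections import deque
from statistics import mean, stdev


class EdgeFilter:
    """Toy edge-side filter: forward a data point only when it deviates
    from the rolling window by more than a z-score threshold.
    (Hypothetical example; not Edge Delta's implementation.)"""

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def should_forward(self, value: float) -> bool:
        # With too little history, forward everything while warming up.
        if len(self.window) < 10:
            self.window.append(value)
            return True
        mu, sigma = mean(self.window), stdev(self.window)
        self.window.append(value)
        if sigma == 0:
            # Flat history: forward anything that breaks the pattern.
            return value != mu
        return abs(value - mu) / sigma > self.threshold


# A steady stream with one spike: the spike is forwarded, the rest
# of the steady-state points (after warm-up) are dropped at the edge.
f = EdgeFilter(window=30, threshold=3.0)
stream = [100.0] * 20 + [5000.0] + [100.0] * 5
forwarded = [v for v in stream if f.should_forward(v)]
```

Of the 26 points in the stream, only the warm-up points and the single spike leave the device; everything else is discarded before it ever reaches a central platform.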
In this episode, we speak with Zachary Quiring, Director of Product at Edge Delta, about how they view observability and why distributed queries are the most economical approach to scale and efficiency for modern architectures. This is a truly novel approach and, as Edge Delta puts it, the only way to achieve "observability without compromise".
About Edge Delta
Edge Delta is a stream processing platform that allows enterprises to use a novel distributed analytics approach to identify and remediate incidents.
Founded in 2018 by Ozan Unlu and Fatih Yildiz, Edge Delta posited that centralizing, indexing, and storing all raw observability data is a model that needs to be improved upon to support real-time and operational use cases. The founders' experience working on projects together as engineers at Microsoft brought them to this challenge. They now partner with engineering teams across various industries to share how the Edge Delta platform can most effectively solve prevalent problems in logs, traces, and metrics data sets.