telemetry data pipeline

How to visualize telemetry data flow and volume with NXLog Platform

As organizations collect more telemetry data, their pipelines grow in complexity and scale. Telemetry pipelines are dynamic, continually adjusted to improve data quality, reduce costs, and meet evolving observability requirements. At this scale, even a small configuration change can significantly affect how much data moves through your pipeline. Without clear visibility, you rely on assumptions: did the new filtering rule actually reduce the amount of data you’re sending to the SIEM?

comparison

Fluent Bit vs Filebeat: Architecture, trade-offs, and the better default

If you are choosing between Fluent Bit and Filebeat, the real question is where you want routing, parsing, and failure handling to live. Pick the wrong default, and you create config sprawl, brittle pipelines, and extra work every time your backend or deployment model changes. Choose Fluent Bit when the agent itself needs to behave like a small pipeline; choose Filebeat when your log path ends inside Elastic and you want the shipper to match Elastic’s operating model.

telemetry data pipeline  |  telemetry data  |  observability

What is telemetry data? A practical guide for modern systems

Telemetry data is the stream of measurements that instrumented devices, applications, and services continuously emit to a central system so engineers can monitor behavior, diagnose problems, and make informed decisions in real time and over the long term. In this article, we’ll look at what telemetry data means in practice for modern software, networks, and cloud platforms: how it’s produced, what kinds of signals it carries (logs, metrics, traces, and more), and why it has become essential for observability, performance, and security at scale.

opentelemetry  |  telemetry data pipeline  |  NXLog Platform

Beyond basic ingestion: Advanced OpenTelemetry data processing with NXLog

Most discussions about OpenTelemetry pipelines focus on getting data from point A to point B. Collect telemetry, maybe convert the format, forward it to a backend. That’s the minimum viable pipeline, and it’s where most tooling stops. But a pipeline that only moves data is a pipe, not a processing layer. The telemetry arriving at your observability platform or SIEM is only as useful as the context it carries. A raw log entry saying "connection from 198.

opentelemetry  |  telemetry data pipeline  |  NXLog Platform

How NXLog simplifies your OpenTelemetry journey

OpenTelemetry has become the de facto standard for telemetry data. Nearly 50% of surveyed cloud-native end-user companies have adopted it, and the project ranks as the second-highest-velocity initiative in the CNCF, behind only Kubernetes. The direction is clear: if your infrastructure doesn’t speak OpenTelemetry, it will increasingly be left out of the observability conversation. But adopting OpenTelemetry across an entire infrastructure is a different problem than adopting it in a greenfield application.

comparison

Fluent Bit vs Fluentd: How to choose the right tool for your log pipeline

Choosing between Fluent Bit and Fluentd is an architecture decision, not a product shootout. Both projects live under the CNCF Fluent umbrella and share a common lineage at Treasure Data, but they target different roles in a logging pipeline. Fluent Bit is a C-based telemetry agent designed for low-overhead collection at the edge. Fluentd is a Ruby-and-C data collector built for aggregation, transformation, and multi-destination routing. The practical question is not which one is better — it’s where each one belongs in your stack, and whether you need both.

More

Data format chaos costs you weeks of visibility

Security dashboards go dark: why visibility isn't optional, even when your defenses keep running

Building a practical OpenTelemetry pipeline with NXLog Platform

Centralized log management: What it is, how centralized logging works, and how to choose the right system
