September 26, 2024 | security, deployment, announcement

What is a telemetry pipeline? Understanding and building effective telemetry data pipelines

By Tamás Burtics


Back in the day, Gordon Moore made a remarkably accurate projection about the exponential growth of transistors on semiconductors. What still amazes me is that very few predicted the equally incredible growth of system interconnectedness and the vast amount of data it generates. It is estimated that 90% of all data was created in the last two years. With everything connected, the need for telemetry is growing at an unprecedented rate, and so is the need to channel and manage telemetry data efficiently.

What is telemetry?

Telemetry is the ecosystem built around collecting metrics remotely. The data it gathers is telemetry data, and the systems, or chains of systems, that collect and move this data are called telemetry pipelines (or telemetry data pipelines). In this post, we look at how they work.

The English word telemetry is a loanword derived from the French télémètre, a compound of télé (meaning far) and mètre (meaning a measuring device). The essence of the word is remote observation and the collection of metrics. Applied to the modern IT world, telemetry means a set of tools or a framework that collects and transmits data from sensors and devices in remote or inaccessible places.

What is a telemetry pipeline?

A telemetry pipeline (also known as a telemetry data pipeline) is a system for collecting, processing, and routing telemetry data (logs, metrics, and traces) from sources to destinations in order to enable real-time analysis and monitoring.

Figure: Telemetry pipelines

Within a metrics collection system, the telemetry pipeline is responsible for collecting and moving data to its destination. You can consider it a subset of the observability pipeline, which also includes tools for data visualization, analysis, alerting, and so on.

Telemetry pipelines can handle diverse data types and provide real-time or near-real-time data processing. A telemetry pipeline works on the same principles as a "conventional" log data pipeline; the main differences are the type of data it moves and the time sensitivity of that data, which calls for different handling, priorities, and routes.

Benefits of telemetry pipelines

The main benefits of deploying a telemetry pipeline are:

  • Unified and consistent observability data — Telemetry pipelines normalize and enrich logs, metrics, and traces before forwarding the information. This ensures consistent, structured data regardless of the original source, improving analysis and reducing noise.

  • Cost reduction — Telemetry data pipelines can discard low-value data, sample high-volume sources, and compress or transform valuable data. These processes lower storage and ingestion costs in observability platforms.

  • Vendor flexibility — A well-designed telemetry data pipeline can seamlessly route data to multiple destinations, avoiding lock-in to a specific solution.

  • Improved performance and reliability — A telemetry pipeline can reduce application performance overhead and provide a more reliable collection of data. Telemetry pipelines also help decouple your systems from your monitoring tools.

  • Faster, more actionable insights — By filtering, transforming, and enriching the collected data, telemetry pipelines empower support teams to act faster, detecting issues and improving the analysis process.

What is the importance of collecting metrics?

Imagine you operate a complex industrial control system, for example in a crude oil refinery or an offshore wind farm, to mention a couple of cases where telemetry and the data gathered from it can be paramount to your operation. In such environments, the smallest resonance in the system matters, and the pressure in even the smallest pipe can be critical to the safety of those working there. Or you might operate a long production line where downtime is expensive and you want to minimize the risk of losing money.

According to a Gartner report, telemetry pipelines are projected to be used in 40% of logging solutions by 2026 due to the complexity of distributed architectures. Read more on Gartner.

A well-designed and implemented telemetry strategy and a thoughtfully crafted telemetry pipeline are the two fundamental pillars of your metrics collection. They also play a crucial role in your IT security and operational continuity. A telemetry data pipeline enables real-time monitoring and analysis of system performance so you can identify and resolve issues swiftly. It also:

  • Helps optimize system resources and improve overall reliability and efficiency.

  • Aids mission-critical elements of your operational chain.

  • Facilitates proactive maintenance and observability by providing timely insights with relevant, fresh data.

What is the secret to building a telemetry pipeline?

A dedicated toolchain for your telemetry operation, one that provides a safe and swift route for your telemetry data, has many advantages. When planning and building a telemetry pipeline, there are a few principles to keep in mind.

Toolchain selection for collection

Choose tools that enable efficient data collection and support filtering mechanisms so that only relevant telemetry data is collected. The following are two examples:

  • NXLog Platform is an on-premises solution for centralized log management without complexity or cost surprises, built to handle the challenges of telemetry data management, making it a great telemetry pipeline solution.

  • OpenTelemetry is an open-source framework that provides unified APIs, SDKs, and tools for generating, collecting, and exporting logs, metrics, and traces to enable standard, vendor-neutral observability. OpenTelemetry is supported seamlessly by NXLog Platform.
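
Purely as an illustration of the OpenTelemetry side, here is a minimal sketch that records a counter metric with the OpenTelemetry Python SDK and prints it to the console instead of shipping it to a collector. The meter and metric names are made up for the example, and module paths may vary slightly between SDK versions.

```python
# Requires: pip install opentelemetry-sdk
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export collected metrics to stdout every five seconds; a real pipeline
# would use an OTLP exporter pointing at a collector or agent instead.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter(),
                                       export_interval_millis=5000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("example.telemetry")        # illustrative name
requests_counter = meter.create_counter(
    "app.requests", unit="1", description="Processed requests"
)

# Record a few data points with attributes that downstream stages
# can later use for filtering and routing.
for route in ("/login", "/api/data", "/login"):
    requests_counter.add(1, {"http.route": route})
```

In this setup the SDK plays the role of the collection agent at the very start of the pipeline; everything after this point is the pipeline's job.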

Data filtering and trimming

Early in the telemetry pipeline, filter and trim the telemetry data to eliminate redundant information entering the rest of the "journey". This step reduces log noise and improves the efficiency of data flow. In addition, many SIEM and analytics systems charge their customers by the amount of data ingested, so this step can also significantly lower costs.
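
To make this concrete, here is a rough Python sketch of a filter-and-trim stage. It is not tied to NXLog Platform or any other product; the severity values, sampling ratio, and size limit are assumptions chosen for the example.

```python
import random

KEEP_RATIO = 0.2          # keep ~20% of routine events (illustrative)
MAX_MESSAGE_LEN = 2048    # trim very long payloads before forwarding

def filter_and_trim(records):
    """Yield only the records worth forwarding, trimmed down."""
    for record in records:
        severity = record.get("severity", "info")
        if severity == "debug":
            continue                          # drop low-value noise outright
        if severity == "info" and random.random() > KEEP_RATIO:
            continue                          # sample high-volume routine events
        message = record.get("message", "")
        if len(message) > MAX_MESSAGE_LEN:
            record["message"] = message[:MAX_MESSAGE_LEN] + "...[trimmed]"
        yield record

sample = [
    {"severity": "debug", "message": "cache hit"},
    {"severity": "info", "message": "user login"},
    {"severity": "error", "message": "disk failure on node-3"},
]
print(list(filter_and_trim(sample)))
```

Dropping debug noise and sampling routine events this early means every later stage, and every per-gigabyte ingestion bill, sees far less data.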

Data normalization

Standardize the collected telemetry data by normalizing it into a common format that is easier to compare and analyze. Normalization also helps in later stages by easing correlation.
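
For example, the hedged sketch below maps two differently shaped records onto one common schema; the target field names and the ISO 8601 timestamp choice are assumptions for illustration, not a mandated standard.

```python
from datetime import datetime, timezone

def _to_iso8601(value):
    """Accept epoch seconds or an existing timestamp string."""
    if isinstance(value, (int, float)):
        return datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
    return value or datetime.now(timezone.utc).isoformat()

# The target schema (timestamp, host, severity, message) is an assumption
# made for this example, not a format required by any particular tool.
def normalize(record):
    return {
        "timestamp": _to_iso8601(record.get("time") or record.get("@timestamp")),
        "host": record.get("host") or record.get("hostname") or "unknown",
        "severity": (record.get("severity") or record.get("level") or "info").lower(),
        "message": record.get("message") or record.get("msg") or "",
    }

print(normalize({"time": 1727347200, "hostname": "web-1",
                 "level": "WARN", "msg": "high latency"}))
print(normalize({"@timestamp": "2024-09-26T10:00:00+00:00", "host": "db-1",
                 "severity": "error", "message": "replication lag"}))
```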

Relaying and packet prioritization

Relay the telemetry data with efficient routing and packet prioritization to ensure that real-time and critical data is shifted first, keeping latency minimal when dealing with time-sensitive data.
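
One way to picture prioritization is a small in-memory priority queue that always forwards critical events before bulk telemetry. The sketch below is illustrative only; the severity-to-priority mapping is an assumption, and a real relay would send over the network rather than print.

```python
import heapq
import itertools

# Lower number = higher priority; this mapping is an assumption for the example.
PRIORITY = {"critical": 0, "error": 1, "warning": 2, "info": 3}
_counter = itertools.count()   # tie-breaker keeps insertion order within a priority

queue = []

def enqueue(event):
    priority = PRIORITY.get(event.get("severity", "info"), 3)
    heapq.heappush(queue, (priority, next(_counter), event))

def drain():
    """Forward events in priority order; a real relay would ship them over the network."""
    while queue:
        _, _, event = heapq.heappop(queue)
        print("forwarding:", event)

enqueue({"severity": "info", "message": "heartbeat"})
enqueue({"severity": "critical", "message": "pressure sensor out of range"})
enqueue({"severity": "warning", "message": "queue depth rising"})
drain()   # the critical event is shipped first
```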

Data formatting for destinations

Format your data so that the output is compatible with the target analytics or visualization platform. This step is key to seamless data processing.
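
As a simple illustration, the same normalized record can be rendered differently per destination. The JSON-lines and key=value layouts below are placeholders for whatever format your analytics or SIEM platform actually expects.

```python
import json

record = {"timestamp": "2024-09-26T10:00:00+00:00", "host": "web-1",
          "severity": "error", "message": "disk failure"}

def to_json_lines(rec):
    """Many analytics backends ingest newline-delimited JSON."""
    return json.dumps(rec, separators=(",", ":"))

def to_kv_pairs(rec):
    """Some SIEMs prefer flat key=value text."""
    return " ".join(f'{key}="{value}"' for key, value in rec.items())

print(to_json_lines(record))
print(to_kv_pairs(record))
```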

How does a telemetry pipeline work?

Figure: Telemetry pipeline architecture showing data collection, filtering, and routing

A telemetry pipeline architecture must support the three main steps of a telemetry pipeline workflow:

  1. Data collection — Multiple sources such as applications, infrastructure, and services like cloud platforms generate logs, metrics, and traces. That telemetry data is gathered by local collection agents and sent to the telemetry pipeline.

  2. Data processing — Once the data has been collected, the pipeline processes it in preparation for analysis. This includes cleansing and aggregation to reduce noise and simplify the data.

  3. Data routing and delivery — Finally, the telemetry pipeline determines where the data should go, with destinations such as data storage systems, metrics backends, APM platforms, and SIEM tools. Those systems receive and ingest the data for visualization, alerting, querying, and analysis.
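
To tie the three steps together, here is a hedged, in-memory sketch of the whole flow. The records, field names, and routing rule are invented for the example; a real pipeline replaces the lists and print calls with collection agents, buffers, and network senders.

```python
def collect():
    """Step 1: sources emit raw telemetry (stubbed here as a static list)."""
    return [
        {"source": "app", "severity": "debug", "message": "cache hit"},
        {"source": "app", "severity": "error", "message": "timeout calling db"},
        {"source": "infra", "severity": "info", "message": "cpu 72%"},
    ]

def process(records):
    """Step 2: cleanse and reduce noise before analysis."""
    for rec in records:
        if rec["severity"] == "debug":
            continue                      # cleansing: drop noise
        rec["message"] = rec["message"].strip().lower()
        yield rec

def route(records):
    """Step 3: deliver each record to a destination based on its content."""
    for rec in records:
        destination = "siem" if rec["severity"] == "error" else "metrics-backend"
        print(f"-> {destination}: {rec}")

route(process(collect()))
```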

When is collecting telemetry data beneficial?

Telemetry data collection is particularly valuable in environments with complex, distributed, or mission-critical systems. Here are some scenarios where telemetry pipelines shine:

Monitoring distributed systems
  • Without telemetry pipelines — Monitoring each part of a distributed system separately leads to gaps in visibility. A microservice failure may not be linked to a database issue, causing delayed fixes.

  • Risks — Increased downtime, missed issues, and inefficient system management.

  • With telemetry pipelines — Centralized monitoring across services allows fast detection and resolution of issues, improving system uptime and performance.

For more insights into the challenges of handling telemetry data, check out this DevOps article.

Managing critical infrastructure
  • Without telemetry pipelines — Independent monitoring systems in healthcare, finance, or IT security can miss critical failures, risking safety and compliance.

  • Risks — Delayed detection of failures, potential safety risks, and regulatory penalties.

  • With telemetry pipelines — Centralized data collection ensures real-time issue detection and compliance, reducing risks and improving reliability.

Optimizing high-volume data processing
  • Without telemetry pipelines — High data volumes overwhelm traditional systems, making it hard to filter relevant insights and increasing costs.

  • Risks — Wasted resources, missed insights, and delayed responses.

  • With telemetry pipelines — Telemetry pipelines filter and route relevant data efficiently, reducing costs and enabling faster decision-making.

In summary, telemetry pipelines are essential for ensuring the smooth operation of complex systems, enhancing performance, and enabling proactive management.

Telemetry pipeline versus observability pipeline

The two concepts are often confused and treated as interchangeable, but they are not the same. Let’s see why.

Telemetry pipelines focus on collecting and processing data like logs, metrics, and traces from different parts of a system. Telemetry pipelines are a key part of observability pipelines, providing the data needed to understand and monitor the system effectively.

Observability pipelines do more by combining this data with other information to give a full picture of how the system is doing.

Figure: Observability pipeline

Simply put, telemetry pipelines gather the data, and observability pipelines work with that data to help you visualize and understand what’s happening in your system.

Table 1. Comparing telemetry pipelines versus observability pipelines

Purpose
  • Telemetry pipeline: Collect and transport raw telemetry data (logs, metrics, traces) from sources to storage or analysis tools.
  • Observability pipeline: Transform, enrich, correlate, and route telemetry data to enable deeper system understanding and troubleshooting.

Data processing
  • Telemetry pipeline: Minimal processing, primarily forwarding data.
  • Observability pipeline: Advanced processing, including enrichment, sampling, filtering, redaction, and normalization.

Flexibility
  • Telemetry pipeline: Usually fixed or limited in how data is modified before delivery.
  • Observability pipeline: Highly configurable; supports dynamic routing, schema changes, and policy-driven manipulation.

Outcome
  • Telemetry pipeline: Ensures data reaches its destination reliably.
  • Observability pipeline: Ensures data is meaningful, cost-efficient, and actionable for observability platforms.

Main advantage
  • Telemetry pipeline: Simple, reliable, lightweight, and easy to operate at scale.
  • Observability pipeline: Delivers high-quality, curated data that reduces noise and observability costs.

Main disadvantage
  • Telemetry pipeline: Raw data can be high-volume, costly, and less useful without downstream processing.
  • Observability pipeline: More complex to design, maintain, and tune due to transformations and policies.

Read more about choosing the right observability pipeline.

Frequently asked questions

This FAQ covers some of the most commonly asked questions about telemetry pipelines:

When do you need a telemetry pipeline?

You need a telemetry pipeline when your organization is managing very high telemetry data volumes, operating in a highly regulated environment, or relying on multiple monitoring and analytics tools that require consistent, centralized data handling. It becomes essential whenever you need to reduce observability costs, standardize data across diverse systems, or gain flexibility in how and where your telemetry is routed.

What are the cost implications of telemetry pipelines?

Telemetry pipelines may introduce some new infrastructure and operational overhead, but they significantly reduce downstream monitoring and storage costs by filtering, sampling, and optimizing high-volume telemetry data. Organizations can also balance expenses by selecting telemetry tools with cost efficiency in mind and by monitoring the pipeline itself.

How do telemetry pipelines enhance data security?

Telemetry pipelines enhance data security by ensuring important security logs are collected and delivered in real time while reducing noise so analysts can identify threats faster. They also prevent sensitive data leaks by filtering or transforming data internally before it leaves the organization’s controlled environment.

How do telemetry pipelines support regulatory compliance?

Telemetry pipelines support regulatory compliance by enforcing consistent data handling policies, such as redacting sensitive information, standardizing formats, enforcing retention policies, and ensuring auditable data flows across systems. They also enhance data security by enabling encryption, access controls, and controlled routing of sensitive telemetry to approved destinations only.
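
As a small illustration of the redaction point, the sketch below masks e-mail addresses and IPv4 addresses in a log message before it leaves the controlled environment. It is only a sketch; production deployments would rely on the pipeline product's own masking and policy features, and the patterns shown are simplified.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def redact(message):
    """Mask personal identifiers before the record is routed externally."""
    message = EMAIL.sub("[REDACTED_EMAIL]", message)
    message = IPV4.sub("[REDACTED_IP]", message)
    return message

print(redact("login failed for alice@example.com from 203.0.113.42"))
```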

Conclusion

Telemetry pipelines are an important part of modern observability systems because they provide a safe and swift route for your telemetry data from your remote sources. They are responsible for collecting and transmitting the information needed to keep everything running smoothly and to identify and fix issues early. While they’re just one piece of the puzzle, they’re crucial for keeping your systems in their best shape. As IT systems become more connected and complex, having a solid telemetry strategy for collecting remote metrics will only become more important.

NXLog Platform is an on-premises solution for centralized log management with versatile processing, forming the backbone of security monitoring. With our industry-leading expertise in log collection and agent management, we comprehensively address your security log-related tasks, including collection, parsing, processing, enrichment, storage, management, and analytics.
