March 5, 2026 | Strategy

How NXLog simplifies your OpenTelemetry journey

By João Correia


OpenTelemetry has become the de facto standard for telemetry data. Nearly 50% of surveyed cloud-native end-user companies have adopted it, and the project ranks as the second-highest-velocity initiative in the CNCF, behind only Kubernetes. The direction is clear: if your infrastructure doesn’t speak OpenTelemetry, it will increasingly be left out of the observability conversation.

But adopting OpenTelemetry across an entire infrastructure is a different problem than adopting it in a greenfield application. Most organizations don’t have the luxury of starting from scratch. They have legacy applications that predate modern instrumentation, third-party systems with no source access, network appliances that emit syslog and nothing else, and mainframes that have been running since before "observability" was a word. Mandating a single telemetry format across all of these systems sounds reasonable in a planning meeting. Making it happen is where projects stall, budgets evaporate, and timelines collapse.

Why standardization projects stall

The textbook approach to OpenTelemetry adoption starts with instrumenting applications. Add the SDK, configure exporters, update your build pipeline, test, deploy. For a modern microservice written in Go or Python, this might take a sprint. For a 15-year-old Java monolith running a payroll system, it might take a year — if it’s possible at all.

This is the gap that derails standardization efforts. IT teams tasked with moving to OpenTelemetry quickly discover that their environment splits into three categories: systems that support OpenTelemetry natively, systems that could be updated but at significant risk and cost, and systems that will never support it. The third category is larger than anyone expects.

The result is predictable. A handful of newer services emit OpenTelemetry data. Everything else continues generating telemetry in its own format — Windows Event Logs, syslog, CEF, LEEF, vendor-specific JSON, flat text files. The organization ends up running parallel pipelines: one for the systems that speak OpenTelemetry and another (or several others) for everything else. Correlation across these pipelines requires custom parsers, translation layers, and manual effort. The standardization project that was supposed to simplify operations has added complexity instead.

As Tom Wilkie, CTO at Grafana Labs, put it in The New Stack: legacy systems stick around in organizations for decades, and the cost of re-instrumentation is prohibitive. OpenTelemetry makes greenfield work easier, but teams still need to weave together new and old systems into a coherent picture.

The cost of format fragmentation

Format fragmentation isn’t just an architectural inconvenience. It has direct consequences for security, operations, and compliance.

Consider the federal agency breach disclosed by CISA in September 2025. Attackers exploited CVE-2024-36401 in GeoServer and moved laterally through the network for three weeks before detection. The agency had endpoint detection and response tools that generated alerts. Those alerts weren’t continuously reviewed. Part of the reason is structural: when telemetry arrives in dozens of formats — GeoServer’s custom logs, Windows Event Logs in XML, EDR alerts in vendor-specific schemas, network traffic data in yet another structure — correlating events across systems requires significant manual effort. By the time analysts can reconstruct an attack chain from incompatible data sources, the damage is done.

This pattern repeats outside of security. Operations teams troubleshooting a production incident need to correlate application logs, infrastructure metrics, and trace data. If each source uses different field names, timestamp formats, and severity scales, that correlation happens slowly or not at all. Compliance teams need to demonstrate that all systems are monitored and auditable. Parallel telemetry pipelines with inconsistent schemas make that demonstration harder and more expensive.

The May 2025 SentinelOne outage illustrated a related risk. Endpoint protection kept running, but security teams lost access to their management consoles for several hours. When your visibility layer depends on a single platform and your telemetry isn’t standardized enough to route elsewhere, an outage in that platform leaves you blind — even if your defenses are technically still active.

Translating instead of rewriting

The assumption behind most OpenTelemetry adoption plans is that standardization must happen at the source. Each application must be instrumented to emit OpenTelemetry data natively. This is the ideal, but it’s not the only path.

The alternative is to translate telemetry at the collection layer. Applications keep emitting data in whatever format they already use. A collection agent sitting alongside each system reads that output, parses it, maps it to OpenTelemetry’s data model, enriches it with context like host information and environment metadata, and forwards the result in OTLP format. The receiving end sees clean OpenTelemetry data regardless of what the source application produced.

This approach has several properties that matter for real deployments. There are no application code changes. There is no risk of regressions in business-critical systems. There is no dependency on vendor update cycles for third-party software. The translation happens externally, and the source application doesn’t know or care that its output is being converted. Teams can roll out OpenTelemetry compliance incrementally — one system at a time — without waiting for a coordinated infrastructure-wide migration.
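The translation step described above can be sketched in a few lines. The following is a minimal illustration, not NXLog's implementation: it parses a classic BSD-syslog line and maps it onto the shape of the OpenTelemetry log data model. The regex, the severity mapping, and the resource metadata are assumptions made for the example.

```python
import re
from datetime import datetime, timezone

# Hypothetical parser for a classic BSD-syslog line (RFC 3164 style).
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d+)>(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<msg>.*)"
)

# Approximate syslog-severity (PRI % 8) to OTel SeverityNumber mapping;
# real agents follow the OTel specification's severity appendix.
SEVERITY_MAP = {0: 21, 1: 19, 2: 18, 3: 17, 4: 13, 5: 10, 6: 9, 7: 5}

def to_otel_log(line: str, resource: dict) -> dict:
    m = SYSLOG_RE.match(line)
    if not m:
        # Unparseable input still becomes a valid record, just unenriched.
        return {"body": line, "severity_number": 0, "resource": resource}
    sev = int(m.group("pri")) % 8
    return {
        "body": m.group("msg"),
        "severity_number": SEVERITY_MAP[sev],
        "attributes": {"host.name": m.group("host")},
        "resource": resource,  # environment metadata added at the edge
        "observed_time": datetime.now(timezone.utc).isoformat(),
    }

record = to_otel_log(
    "<34>Oct 11 22:14:15 appliance01 su: auth failure",
    resource={"service.name": "legacy-appliance", "deployment.environment": "prod"},
)
```

The source appliance never changes: the agent reads its output, and everything downstream sees a consistent OTel-shaped record.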

NXLog Platform operates as this translation layer. It deploys agents alongside existing systems that collect telemetry in its native format — syslog, Windows Event Logs, flat files, JSON, CEF, LEEF, or any of the other formats that accumulate in real environments — and transforms it into OpenTelemetry-compatible data in real time. The transformation includes parsing, field mapping, normalization, and enrichment before the data is forwarded via OTLP to whatever backend the organization uses. For organizations that need to standardize on OpenTelemetry but can’t rewrite their applications to get there, this conversion path cuts the timeline from months or years to weeks.

What a practical migration looks like

A realistic OpenTelemetry migration doesn’t start with a mandate to re-instrument everything. It starts with the collection layer.

Inventory your telemetry sources. Identify every system that generates logs, metrics, or events. Categorize them: which already emit OpenTelemetry, which could be instrumented with reasonable effort, and which can’t be modified. The third group — the mainframes, the legacy middleware, the proprietary appliances — is your starting point, because those are the systems blocking full standardization.
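The three-way split above can be captured as a simple data model during the inventory phase. The system names and flags here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical inventory: each system flagged by whether it already emits
# OTel natively and whether it can reasonably be modified.
inventory = [
    {"name": "checkout-api",     "otel_native": True,  "modifiable": True},
    {"name": "payroll-monolith", "otel_native": False, "modifiable": True},
    {"name": "core-switch",      "otel_native": False, "modifiable": False},
    {"name": "mainframe-batch",  "otel_native": False, "modifiable": False},
]

def categorize(systems):
    buckets = defaultdict(list)
    for s in systems:
        if s["otel_native"]:
            buckets["native"].append(s["name"])
        elif s["modifiable"]:
            buckets["instrumentable"].append(s["name"])
        else:
            # The starting point for collection-layer translation.
            buckets["translate_at_collection"].append(s["name"])
    return dict(buckets)

plan = categorize(inventory)
```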

Deploy collection agents alongside non-native sources. Configure each agent to read the source’s native output format and transform it to OpenTelemetry. This doesn’t require changes to the source system. The agent reads log files, listens on syslog ports, or tails event streams the same way any log collector would.

Normalize and enrich at the edge. Apply consistent field naming, timestamp normalization, and severity mapping at the collection point. Add resource attributes — service name, environment, region, asset criticality — that downstream systems need for correlation. This processing happens once, and every downstream consumer benefits from it.
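A sketch of what "normalize once at the edge" means in practice, under stated assumptions: the input field names ("Hostname", "EventTime", "sev") stand in for the kind of per-source variation real environments produce, and the resource attributes are invented for the example.

```python
from datetime import datetime, timezone

# Map free-text severity labels to OTel SeverityNumber values (partial,
# illustrative mapping).
SEVERITY_TEXT = {"warn": 13, "warning": 13, "err": 17, "error": 17, "info": 9}

# Resource attributes attached to every record this agent forwards.
RESOURCE = {
    "service.name": "billing-gateway",
    "deployment.environment": "prod",
    "cloud.region": "eu-west-1",
}

def normalize(event: dict) -> dict:
    # Accept source-local timestamps, emit one canonical UTC ISO-8601 form.
    raw_ts = event.get("EventTime") or event.get("timestamp")
    ts = datetime.fromisoformat(raw_ts).astimezone(timezone.utc)
    return {
        "time": ts.isoformat(),
        "severity_number": SEVERITY_TEXT.get(str(event.get("sev", "")).lower(), 0),
        "attributes": {"host.name": event.get("Hostname", "unknown")},
        "resource": RESOURCE,
        "body": event.get("Message", ""),
    }

rec = normalize({
    "Hostname": "web-03",
    "EventTime": "2026-03-05T10:15:00+01:00",
    "sev": "Warning",
    "Message": "disk usage above threshold",
})
```

Because this runs once at the collection point, every downstream consumer receives the same field names, the same timestamp form, and the same severity scale.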

Route to your existing backends. Forward the transformed data via OTLP to your observability platform, SIEM, data lake, or all three. Because the output is standard OpenTelemetry, you’re not locked into any particular backend. If you switch platforms later, the collection and transformation layer stays the same.
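The routing idea reduces to a fan-out over destinations. This in-memory sketch is illustrative only; a real deployment forwards via OTLP exporters, and the destination names here are placeholders.

```python
# Stand-in for a backend that receives OTLP data.
class Destination:
    def __init__(self, name):
        self.name = name
        self.received = []

    def send(self, record):
        self.received.append(record)

siem = Destination("siem")
observability = Destination("observability")
data_lake = Destination("data-lake")

ROUTES = [siem, observability, data_lake]

def route(record, destinations=ROUTES):
    # The record is already standard OTel, so every destination gets the
    # same payload: no per-backend transformation pipeline to maintain.
    for dest in destinations:
        dest.send(record)

route({"body": "login failed", "severity_number": 17})
```

Swapping or adding a backend means editing the destination list, not rebuilding parsers.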

This approach delivers OpenTelemetry compliance across the full infrastructure without the multi-year re-instrumentation effort. It also produces a secondary benefit: because all telemetry passes through a normalization layer, data quality improves. Inconsistent field names, ambiguous timestamps, and missing context get fixed before they reach your analysis tools.

Processing telemetry before it moves downstream

Raw telemetry data — even when standardized to OpenTelemetry format — still creates problems at scale. Debug logs, verbose attributes, and low-value metrics increase storage and query costs without improving visibility. A collection layer that only translates and forwards pushes this problem downstream, where it becomes somebody else’s expensive storage bill.

Processing telemetry close to the source addresses this. Filtering removes debug-level noise and duplicate events. Aggregation condenses high-frequency metrics into meaningful summaries. Deduplication catches repeated alerts from the same underlying event. The NXLog Platform processing pipeline applies these operations before data leaves the collection point, so downstream systems receive clean, relevant telemetry rather than raw volume.
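A minimal sketch of that edge-processing pass: drop debug-level noise and collapse repeated events before anything is forwarded. The severity threshold and the dedup key are assumptions chosen for the example.

```python
def process(events, min_severity=9):
    """Filter and deduplicate a batch of normalized events at the edge."""
    seen = set()
    kept = []
    dropped = 0
    for e in events:
        if e["severity_number"] < min_severity:   # filter debug-level noise
            dropped += 1
            continue
        key = (e["body"], e.get("host"))          # dedup identical alerts
        if key in seen:
            dropped += 1
            continue
        seen.add(key)
        kept.append(e)
    return kept, dropped

events = [
    {"body": "cache miss", "severity_number": 5,  "host": "a"},  # debug: dropped
    {"body": "disk full",  "severity_number": 17, "host": "a"},
    {"body": "disk full",  "severity_number": 17, "host": "a"},  # duplicate: dropped
    {"body": "disk full",  "severity_number": 17, "host": "b"},  # new host: kept
]
kept, dropped = process(events)
```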

This matters for cost control. Observability platforms typically charge based on ingestion volume. Industry benchmarks suggest organizations spend 10-20% of their infrastructure costs on observability, with much of that driven by data volume rather than data value. Reducing the volume at the source — by filtering noise and aggregating metrics before forwarding — directly reduces that cost without sacrificing the signal you need.
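The arithmetic is straightforward. Every number below is invented for illustration, not a benchmark, but the shape of the calculation holds for any ingestion-priced platform.

```python
# Back-of-the-envelope: how edge filtering changes an ingestion-priced bill.
daily_gb = 500          # raw telemetry per day (hypothetical)
price_per_gb = 0.10     # hypothetical ingestion price in USD
noise_fraction = 0.40   # share removed by filtering/dedup/aggregation

raw_monthly = daily_gb * 30 * price_per_gb
filtered_monthly = daily_gb * (1 - noise_fraction) * 30 * price_per_gb
savings = raw_monthly - filtered_monthly
```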

When the destination changes

One underappreciated advantage of standardizing on OpenTelemetry at the collection layer is destination flexibility. Vendor-specific telemetry formats tie you to vendor-specific backends. If your SIEM expects CEF and your application logs arrive in CEF, switching to a different SIEM means rebuilding your parsing and routing rules.

When telemetry is already normalized to OpenTelemetry before it reaches any backend, switching destinations is a routing change, not a re-architecture. You can send the same data to multiple destinations simultaneously — SIEM for security correlation, an observability platform for application performance, a data lake for long-term retention — without maintaining separate transformation pipelines for each.

This also provides resilience. If your primary analysis platform goes down, telemetry can be rerouted to a backup destination with no data loss and no format conversion required. The SentinelOne outage mentioned earlier would have had a different impact on organizations that route standardized telemetry to multiple backends rather than depending on a single platform for all visibility.

The bottom line

OpenTelemetry adoption doesn’t require re-instrumenting every application in your infrastructure. It requires getting all telemetry into OpenTelemetry format, and the most practical way to do that — especially for legacy systems, third-party software, and infrastructure components that can’t be modified — is to translate at the collection layer.

The organizations that succeed with OpenTelemetry standardization are the ones that stop treating it as an application instrumentation project and start treating it as a data pipeline problem. Instrument what you can. Translate everything else. Process and enrich at the edge. Route to wherever you need. The result is a single, consistent telemetry format across the full infrastructure, achieved in weeks rather than years, without touching application code.

NXLog Platform is an on-premises solution for centralized log management, with versatile processing that forms the backbone of security monitoring.

With industry-leading expertise in log collection and agent management, we comprehensively address your security log-related tasks, including collection, parsing, processing, enrichment, storage, management, and analytics.
