February 26, 2026 · Security

Security dashboards go dark: why visibility isn't optional, even when your defenses keep running

By João Correia


The SentinelOne outage showed why visibility isn’t optional—​even when your defenses keep running.

On May 29, 2025, organizations running SentinelOne experienced something unsettling: their security controls kept working, but they couldn’t see what was happening.

A software flaw in SentinelOne’s infrastructure control system caused a global service disruption that lasted several hours. According to reports, the incident significantly impacted customers' ability to manage their security operations and access important data.

Endpoint protection continued functioning. Agents kept monitoring. But security teams lost access to their management consoles and related services.

For those hours, defenders were flying blind.

The hidden risk in centralized visibility

Modern security operations depend on centralized visibility platforms. When those platforms fail, even functioning security controls become nearly useless.

During the SentinelOne outage, security teams couldn’t:

  • Investigate alerts or anomalies

  • Adjust policies or response actions

  • Access historical data for threat hunting

  • Confirm whether endpoints were still protected

  • Respond to potential incidents with full context

The agents themselves kept working, but without visibility into what they were detecting or the ability to manage responses, security operations ground to a halt.

This incident highlights a critical dependency: your security architecture is only as reliable as your ability to see what’s happening.

What telemetry resilience looks like

The SentinelOne outage was resolved within hours, and to their credit, endpoint protection continued operating. But the incident raises an important question: how should organizations design telemetry infrastructure to maintain visibility even when primary systems fail?

Multiple telemetry destinations

Sending security telemetry to a single platform creates a single point of failure. A resilient pipeline routes data to multiple destinations based on use case—​SIEM for correlation, data lake for long-term analysis, specialized tools for specific threat detection.
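As an illustration, the fan-out idea can be sketched in a few lines of Python. The sink names and callables here are hypothetical stand-ins for a SIEM forwarder, a data lake writer, and an EDR console, not any vendor's API:

```python
import logging

def route_event(event, sinks):
    """Deliver one event to every configured sink. A failure in one
    sink must not block delivery to the others -- that is the whole
    point of removing the single point of failure."""
    delivered = []
    for name, sink in sinks.items():
        try:
            sink(event)
            delivered.append(name)
        except Exception:
            logging.warning("sink %s failed; event still sent elsewhere", name)
    return delivered

def down(_event):
    raise ConnectionError("simulated platform outage")

received = []
sinks = {
    "siem": received.append,       # correlation
    "datalake": received.append,   # long-term analysis
    "edr_console": down,           # the platform that just went dark
}
ok = route_event({"msg": "failed login", "host": "web-01"}, sinks)
# ok == ["siem", "datalake"]: two destinations still kept the event
```

The same pattern scales down or up: the essential property is that delivery to each destination is attempted independently, so an outage at one platform degrades visibility there without erasing it everywhere.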

Local buffering and retention

When network connectivity fails or a cloud service goes down, telemetry shouldn’t disappear. Local buffering ensures data gets captured and forwarded once connectivity resumes. You don’t lose the visibility you need for incident reconstruction.
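A minimal store-and-forward sketch in Python shows the shape of this, assuming at-least-once delivery is acceptable (duplicates are possible on retry). The spool format and class are illustrative, not a real agent's implementation:

```python
import json
import os
import tempfile

class BufferedForwarder:
    """Minimal store-and-forward sketch: when the destination is down,
    events are spooled to local disk and replayed once it recovers."""

    def __init__(self, send, spool_path):
        self.send = send            # delivery callable; raises on failure
        self.spool = spool_path

    def emit(self, event):
        try:
            self.flush()            # drain any backlog first, preserving order
            self.send(event)
        except ConnectionError:
            with open(self.spool, "a") as f:
                f.write(json.dumps(event) + "\n")

    def flush(self):
        if not os.path.exists(self.spool):
            return
        with open(self.spool) as f:
            backlog = [json.loads(line) for line in f]
        for ev in backlog:
            self.send(ev)           # a raise here keeps the spool for next try
        os.remove(self.spool)

# Simulated destination that goes down and comes back
state = {"up": False}
delivered = []
def send(ev):
    if not state["up"]:
        raise ConnectionError("destination unreachable")
    delivered.append(ev)

spool = os.path.join(tempfile.mkdtemp(), "spool.jsonl")
fwd = BufferedForwarder(send, spool)
fwd.emit({"seq": 1})                # destination down: spooled, not lost
fwd.emit({"seq": 2})
state["up"] = True
fwd.emit({"seq": 3})                # recovery: backlog replays, then live event
# delivered == [{"seq": 1}, {"seq": 2}, {"seq": 3}]
```

Production agents add bounded spool sizes, checkpointing, and crash safety on top of this, but the core guarantee is the same: an outage delays delivery instead of destroying the record.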

Independent monitoring of monitoring systems

If your primary security dashboard fails, how do you know? Independent health checks and metrics collection from your telemetry pipeline itself can alert you to visibility gaps before you discover them during an incident.
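One simple form of such a check is an independent heartbeat watchdog on the pipeline itself. The class and threshold below are illustrative; the important design point is that the check runs somewhere that does not share the pipeline's fate:

```python
import time

class PipelineWatchdog:
    """Independent health check for a telemetry pipeline: if no data
    has been seen within `max_silence` seconds, report unhealthy."""

    def __init__(self, max_silence, clock=time.monotonic):
        self.max_silence = max_silence
        self.clock = clock          # injectable clock, handy for testing
        self.last_seen = clock()

    def record_event(self):
        self.last_seen = self.clock()

    def healthy(self):
        return self.clock() - self.last_seen <= self.max_silence

# Exercise it with a fake clock instead of waiting in real time
now = [0.0]
wd = PipelineWatchdog(max_silence=60, clock=lambda: now[0])
wd.record_event()
now[0] = 30.0    # half the allowed silence: still healthy
assert wd.healthy()
now[0] = 120.0   # two minutes of silence: visibility gap detected
assert not wd.healthy()
```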

Context preservation across platforms

When you need to shift from one analysis tool to another—​whether due to an outage or tactical needs—​contextual data should follow. Enriched telemetry that includes asset information, user context, and threat intelligence remains valuable regardless of which platform you’re viewing it in.

Metrics data: the overlooked visibility layer

Traditional security monitoring focuses on discrete events: alerts, logs, authentication attempts. But metrics data provides a different kind of visibility that can persist even when event-based systems fail.

Metrics answer questions like:

  • Is CPU usage on critical systems normal?

  • Are network traffic patterns consistent with baseline behavior?

  • Is disk I/O suggesting data exfiltration or encryption activity?

  • Are services responding within expected timeframes?

During the SentinelOne outage, organizations with separate metrics collection could still monitor system health and spot anomalies, even without access to their primary security console.
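A baseline check of this kind can be as simple as flagging samples that deviate sharply from recent history. The sketch below is a stand-in for whatever anomaly model your metrics stack actually uses; the values are made up:

```python
from statistics import mean, stdev

def is_anomalous(history, value, k=3.0):
    """Flag a metric sample more than k standard deviations away from
    its recent baseline. Deliberately simple: the point is that this
    check needs no security console to keep running."""
    if len(history) < 2:
        return False                # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) > k * sigma

cpu_baseline = [12, 14, 13, 15, 14, 13, 12, 14]   # % CPU in normal hours
print(is_anomalous(cpu_baseline, 14))   # False: within normal variation
print(is_anomalous(cpu_baseline, 95))   # True: e.g. mass encryption activity
```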

This layered approach to visibility—​events plus metrics, multiple destinations, independent collection paths—​creates resilience that single-platform strategies can’t match.

The operational impact of visibility loss

Several hours without security visibility might sound manageable, but consider the operational reality:

Incident response delays

If an alert fired during the outage, teams couldn’t investigate. By the time visibility returned, critical forensic data might have aged out or been overwritten.

Compliance concerns

Many regulatory frameworks require continuous monitoring and timely incident detection. Extended visibility outages create documentation challenges and potential compliance gaps.

Decision-making paralysis

Security leaders faced a difficult choice during the outage: continue operations without visibility, or pause activities until monitoring returns? Neither option is ideal.

Stakeholder confidence

Explaining to executives that "the security tools are working, but we can’t see what they’re doing" doesn’t inspire confidence.

Building visibility that lasts

The SentinelOne incident wasn’t a security breach—​it was an availability issue that affected visibility. But that distinction matters less than you might think. Whether visibility disappears due to an outage or an attacker disabling monitoring, the result is the same: security teams operating without the information they need.

Organizations can reduce this risk by treating telemetry management as critical infrastructure:

Design for redundancy

Route telemetry to multiple destinations. If one platform fails, others remain available.

Enrich data before routing

Add context at collection time, not just at analysis time. Enriched telemetry remains valuable even if you need to analyze it with different tools than originally planned.
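What collection-time enrichment might look like, sketched in Python. The lookup tables here are hypothetical; in practice they would come from a CMDB, an identity provider, or a threat-intelligence feed:

```python
# Hypothetical context sources -- illustrative data, not a real feed
ASSETS = {"web-01": {"owner": "payments", "criticality": "high"}}
THREAT_INTEL = {"203.0.113.7": "known-scanner"}

def enrich(event):
    """Attach asset and threat context at collection time, so the
    event stays meaningful in whichever tool ends up analyzing it."""
    out = dict(event)
    out["asset"] = ASSETS.get(event.get("host"), {})
    out["ti_tag"] = THREAT_INTEL.get(event.get("src_ip"))
    return out

evt = enrich({"host": "web-01", "src_ip": "203.0.113.7", "msg": "port scan"})
# evt carries its own context: owner, criticality, and a threat tag
```

Because the context travels with the event, an analyst can pivot to a backup tool during an outage without losing the "who owns this box and why does it matter" information.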

Monitor your monitoring

Track the health of your telemetry pipeline itself. Know immediately when data stops flowing or platforms become unavailable.

Reduce dependence on single vendors

While platforms like SentinelOne provide significant value, your visibility architecture shouldn’t collapse if any single vendor experiences issues.

The broader lesson

The SentinelOne outage was resolved quickly, with no reported security incidents resulting from the visibility gap. That's fortunate: a window like this is exactly the kind of opening attackers look for. But it's also a reminder that availability matters as much as functionality.

Your security controls can be working perfectly, but if you can’t see what they’re doing, you’re taking risks you might not intend to take.

A well-designed telemetry management pipeline creates visibility that persists across platform failures, routes data where it’s needed most, and provides multiple layers of insight—​events, metrics, and context—​so your security operations don’t depend on any single point of failure.

If you’re thinking about how to reduce dependence on single platforms or want to improve your telemetry infrastructure, our team can walk you through how resilient telemetry management keeps your visibility online when it matters most.

NXLog Platform is an on-premises solution for centralized log management, with versatile processing that forms the backbone of security monitoring.

With our industry-leading expertise in log collection and agent management, we comprehensively address your security log-related tasks, including collection, parsing, processing, enrichment, storage, management, and analytics.

  • awareness
  • cybersecurity
  • opentelemetry