The choice here is not between two interchangeable log tools. It is a choice about where you want parsing, routing, and failure handling to live. Filebeat runs close to the source and keeps collection small. Logstash sits in the middle of the flow and takes on filtering, enrichment, and fan-out.
That architectural difference matters more than a feature checklist. Pick the narrower tool when your logs have one destination and your parsing rules are modest. Pick the heavier one when the edge agent would otherwise turn into a tangle of custom processors, retries, and backend-specific logic.
The Filebeat vs Logstash decision also changes your operational burden. A host-level shipper is easy to stamp out across nodes. A JVM pipeline with queues, filters, and multiple outputs buys you control, but it also adds a service tier you need to size, monitor, and upgrade.
Filebeat vs Logstash at a glance
| Decision point | Filebeat | Logstash |
|---|---|---|
| Design center | Agent that tails files and forwards events through libbeat. | Server-side pipeline that ingests from inputs, runs filters, and writes to one or more outputs. |
| Runtime model | Starts inputs and one harvester per file; common deployments are packages, containers, and Kubernetes DaemonSets. | Runs one or more pipelines with worker threads and a central queue; common deployments are services, containers, and small central clusters. |
| Output behavior | Allows only one configured output, so fan-out must happen downstream. | A pipeline can send events to multiple outputs, and separate pipelines can isolate flows. |
| Parsing and enrichment | Uses processors, multiline handling, and a JavaScript script processor for edge cleanup. | Uses filters such as grok, dissect, and mutate plus conditionals. |
| Buffering and recovery | Memory queue defaults to 3200 events; disk queue is available. | Memory queue capacity depends on workers × batch size; persistent queues and dead letter queues are available but off by default. |
| Operational footprint | Elastic describes Beats as small-footprint shippers that use fewer system resources than Logstash. | JVM-based service with a bundled JDK 21 by default. |
| Time to first Kibana view | filebeat setup can load templates and sample dashboards when Elasticsearch is the destination. | No built-in log search or dashboard UI; the pipeline viewer covers the pipeline, not stored log data. |
| Cleanest fit | Per-host collection, Kubernetes DaemonSets, one destination, low transformation needs. | Central parsing tier, several inputs or outputs, heavy enrichment, replay, and routing. |
Architecture and deployment
As of April 2026, Elastic lists Filebeat 9.3.3 and Logstash 9.3.3 on the current download pages. The Elastic support matrix remains the source of truth for supported operating systems, JVM combinations, Kubernetes distros, and product compatibility.
Filebeat keeps the edge role narrow
Filebeat monitors files or locations, starts one harvester for each file, hands those events to libbeat, and publishes them to the output you configured.
Elastic documents package, container, and Kubernetes deployment models, including a DaemonSet pattern for Kubernetes that mounts /var/log/containers on every node.
Linux DEB and RPM packages also ship with a systemd unit.
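The DaemonSet pattern can be sketched roughly as follows. This is only the shape of the manifest, not Elastic's full reference version: the namespace, image tag, and config handling are placeholders you would take from the official manifest.

```yaml
# Minimal sketch of the Filebeat DaemonSet pattern: one pod per node,
# with the node's container log directory mounted read-only.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system          # placeholder; use your logging namespace
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:9.3.3   # match your stack version
          volumeMounts:
            - name: varlog
              mountPath: /var/log/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log/containers
```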
The constraints are just as important as the layout.
Filebeat allows only one output, so any dual-write, fan-out, or destination-specific branching has to happen later.
Elastic deprecated the old log input in 7.16 and turned it off in 9.0, so new file tailing work should use filestream.
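In practice that means a new edge configuration looks something like the sketch below. The paths and the Elasticsearch host are illustrative placeholders; note that each filestream input requires a unique id, and that only one output block can be active at a time.

```yaml
# filebeat.yml sketch: filestream input (the old log input is gone in 9.0).
filebeat.inputs:
  - type: filestream
    id: app-logs                  # a unique id is required per filestream input
    paths:
      - /var/log/myapp/*.log     # illustrative path

# Filebeat supports exactly one output; pick one destination.
output.elasticsearch:
  hosts: ["https://elasticsearch.example.internal:9200"]
```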
Elastic still supports Filebeat modules, but it recommends Elastic Agent integrations for new work.
Elastic also publishes Filebeat in two licensing forms. The default download is under the Elastic License, and there is an OSS-only build for teams that want the Apache 2.0 distribution.
Logstash is a processing tier, not a sidecar replacement
Logstash runs a server-side event pipeline with inputs, filters, and outputs. Its execution model is explicit: inputs write to a central queue, pipeline workers pull batches from that queue, filters transform the events, and outputs ship them onward. Elastic ships it as tar.gz, zip, deb, rpm, and Docker images, and the current line bundles JDK 21 by default.
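A minimal pipeline makes that model concrete. The sketch below is illustrative, not prescriptive: the port, the dissect mapping, and the Elasticsearch host are all placeholder choices.

```
# Sketch of a single Logstash pipeline: input -> filter -> output.
input {
  beats {
    port => 5044
  }
}

filter {
  dissect {
    mapping => { "message" => "%{ts} %{level} %{msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["https://elasticsearch.example.internal:9200"]
  }
}
```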
That design gives you more room to shape traffic. A single instance can run multiple pipelines, and pipeline-to-pipeline communication supports distributor, forked-path, and output-isolator patterns inside one process. If you turn on centralized pipeline management, Elastic marks it as a subscription feature, and local pipeline files stop applying.
Logstash follows the same packaging split as Filebeat. The default packages are under the Elastic License, and Elastic also provides an OSS-only Logstash build.
Measurable comparison
These products do not compete on query latency because neither one stores or queries log data. Search speed, dashboards, and retention belong to the destination system. The hard differences you can model before deployment are footprint, queue behavior, fan-out, and the number of steps required to get from raw logs to a usable view.
Footprint and queue math
Elastic’s own comparison page says Beats have a small footprint and use fewer system resources than Logstash. Filebeat’s memory queue defaults to 3200 events, and its disk queue can preserve pending data across restarts. If you point Filebeat at Logstash, the Logstash output batches up to 2048 events by default.
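Those defaults correspond to settings you can pin explicitly in filebeat.yml. The sketch below simply restates the documented defaults so they are visible and tunable; the Logstash host is a placeholder.

```yaml
# filebeat.yml sketch of the buffering defaults discussed above.
queue.mem:
  events: 3200                  # default in-memory queue size

output.logstash:
  hosts: ["logstash.example.internal:5044"]
  bulk_max_size: 2048           # default batch size toward Logstash
```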
Logstash exposes more sizing knobs because it does more work.
Its memory queue is not configured as a single event count.
The upper bound is pipeline.workers × pipeline.batch.size, with defaults of the CPU core count and 125 events.
On an 8-core host, that means 8 × 125 = 1,000 in-flight events per pipeline before tuning.
If you need disk-backed buffering, persistent queues exist, but they are disabled by default and sized per pipeline.
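Those knobs live in logstash.yml (or per pipeline in pipelines.yml). The values below restate the documented defaults or mark illustrative choices; size the persisted queue to your own disk budget.

```yaml
# logstash.yml sketch of the sizing knobs discussed above.
pipeline.workers: 8             # defaults to the number of CPU cores
pipeline.batch.size: 125        # default events per worker batch
# Memory-queue ceiling: 8 x 125 = 1,000 in-flight events per pipeline.

queue.type: persisted           # disk-backed buffering; default is "memory"
queue.max_bytes: 1gb            # illustrative per-pipeline cap
```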
Fan-out and scaling behavior
Filebeat scales by repeating a simple unit: one agent per host or one pod per Kubernetes node. That stays easy to reason about when each instance reads local files and forwards to one destination.
Logstash scales in a different direction. One pipeline can accept several inputs and send to several outputs, and multiple pipelines let you give different flows their own workers, queues, and durability settings. Elastic also notes that a blocked output in one pipeline does not backpressure another pipeline when the flows are separated this way. That is valuable during migrations, during partial outages, and when one destination has stricter latency than another.
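That isolation is configured in pipelines.yml. The sketch below gives two flows their own queues so a blocked output in one does not backpressure the other; the pipeline ids and config paths are illustrative.

```yaml
# pipelines.yml sketch: two pipelines with separate workers and queues.
- pipeline.id: search
  path.config: "/etc/logstash/conf.d/search.conf"
  queue.type: memory

- pipeline.id: archive
  path.config: "/etc/logstash/conf.d/archive.conf"
  queue.type: persisted         # durable buffering for the slower destination
```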
Steps to a first working view
Filebeat has the shortest path into Elastic.
The quick-start flow shows filebeat setup loading the recommended index template and sample dashboards, and dashboard loading is built into the product.
If Filebeat modules connect directly to Elasticsearch, ingest pipelines are set up automatically.
That path gets longer when you insert Logstash. Elastic states that if your output is Logstash instead of Elasticsearch, you must load the index template, dashboards, and ingest pipelines manually. Logstash gives you a central place to parse and route logs, but it is not the shortest route to a first dashboard.
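The difference shows up directly on the command line. The first command is the documented quick-start path; the second is one commonly used workaround when Logstash sits in the middle, temporarily pointing setup at Elasticsearch so templates and dashboards get loaded. The host is a placeholder, and you should verify the flags against your Filebeat version.

```shell
# Direct-to-Elasticsearch: one command loads the template and dashboards.
filebeat setup -e

# With Logstash in the middle: load assets by overriding the output
# just for the setup run (illustrative; assumes output.logstash is configured).
filebeat setup --index-management --dashboards \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["https://elasticsearch.example.internal:9200"]'
```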
Features and capabilities
Search and visualization
Neither Filebeat nor Logstash is a log store. If you need ad hoc search, saved investigations, or dashboards, those features come from Elasticsearch and Kibana or from another backend. Filebeat gets you there with less setup because modules package input settings, ingest pipeline definitions, fields, and sample dashboards for common log types.
Logstash has visibility of its own, but that visibility is operational rather than analytical. The monitoring APIs expose node info, JVM stats, plugin inventory, pipeline runtime stats, and hot threads. The pipeline viewer highlights topology, throughput, CPU anomalies, and plugin latency so you can find ingestion bottlenecks. None of that replaces a search UI for your application logs.
Parsing, enrichment, and routing
Filebeat can do more than plain forwarding. Processors handle field cleanup, JSON decoding, metadata enrichment, event dropping, and multiline assembly. The script processor runs JavaScript when the built-in processors are not enough. That is enough for edge normalization and for trimming noise before transport.
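A typical edge-cleanup block looks like the sketch below. The field names, the target namespace, and the drop condition are illustrative assumptions, not recommended values.

```yaml
# filebeat.yml sketch: edge normalization with processors.
processors:
  - decode_json_fields:
      fields: ["message"]
      target: ""                # merge decoded keys into the event root
  - add_fields:
      target: service
      fields:
        name: myapp             # illustrative enrichment
  - drop_event:                 # trim noise before transport
      when:
        regexp:
          message: "DEBUG"
```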
Logstash takes over when transformation becomes the main design problem. Grok parses variable text with regular expressions. Dissect tokenizes stable formats without regex. Mutate handles field renames, replacements, and conversions, and standard pipeline configuration supports conditionals plus several outputs. Add pipeline-to-pipeline communication, and Logstash becomes a routing layer rather than just a parser.
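The sketch below shows that combination: grok parses, mutate normalizes, and a conditional splits events across two destinations, which Filebeat's single output cannot do. The grok pattern, field names, hosts, and topic are all illustrative.

```
# Sketch of filter-plus-routing in one Logstash pipeline.
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  mutate {
    lowercase => ["level"]
  }
}

output {
  if [level] == "error" {
    elasticsearch {
      hosts => ["https://elasticsearch.example.internal:9200"]
      index => "errors-%{+YYYY.MM.dd}"
    }
  } else {
    kafka {
      bootstrap_servers => "kafka.example.internal:9092"
      topic_id => "app-logs"
    }
  }
}
```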
Alerting, safeguards, and paid-tier boundaries
Neither product offers first-class alerting on stored log data by itself. Filebeat depends on the destination for rules and notifications. What it does offer is a direct path to Elastic assets when you keep the pipeline simple.
Logstash answers a different operational problem. Persistent queues reduce loss during restarts and downstream interruptions. Dead letter queues capture selected failures for later inspection instead of dropping them silently. The main paid-tier boundary in this comparison is centralized pipeline management, which Elastic lists as a subscription feature.
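Both safeguards are opt-in settings in logstash.yml. The sketch below enables them; the path is an illustrative placeholder.

```yaml
# logstash.yml sketch: capture selected failures instead of dropping them.
# Both settings are off by default.
dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dlq"   # illustrative location
```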
Use-case scenarios
- **Elastic-first application logging (150 GB/day, 120 Linux nodes, two platform engineers).** Pick Filebeat. The per-node footprint stays small, deployment maps cleanly to packages or a DaemonSet, and filebeat setup gets you to templates and sample dashboards quickly when Elasticsearch is the destination. Adding Logstash here creates a second service tier without solving a problem the environment actually has.
- **Central parsing hub (700 GB/day, Beats plus syslog plus Kafka, three downstream destinations).** Pick Logstash. This design needs several inputs, several outputs, and a central place for parsing rules. Filebeat can collect the file-based sources, but it cannot become the branching layer because it only allows one output.
- **Kubernetes platform with tight node budgets (250 nodes, per-node collectors limited to 200m CPU).** Pick Filebeat on the nodes. Elastic documents the DaemonSet pattern, and the product was built for host-local collection rather than central filter graphs. Add Logstash upstream only if shared parsing or multi-destination routing becomes a real requirement.
- **Compliance-heavy ingest (1 TB/day, PII masking, audit retention, replay after downstream failures).** Pick Logstash. Central filters, persistent queues, dead letter queues, monitoring APIs, and the pipeline viewer fit controlled processing much better than scattered edge scripts. This is also the point where one central processing tier is easier to audit than dozens of independent host agents with custom logic.
- **Heterogeneous fleet with lifecycle management requirements (1,500 systems across Windows servers, Linux hosts, and network appliances).** Neither Filebeat nor Logstash solves the whole job. When the real requirement is centralized fleet control plus data collection and storage, NXLog Platform covers a different layer by combining telemetry collection, log storage, and agent management, and it supports both agent-based and agentless collection modes.
Conclusion
Choose Filebeat when logs come from local files or containers, the destination is stable, and you want the collector to stay small and narrow. It is the cleaner fit for direct-to-Elastic shipping and for Kubernetes node collection where the agent should do as little as possible.
Choose Logstash when collection is the easy part and the harder work is transformation. Several inputs, several outputs, central parsing, queue-backed recovery, and better failure isolation all point to a real processing tier.
There is a clear dividing line. Put Filebeat at the edge when you want collection. Put Logstash in the middle when you need a pipeline. If your architecture already needs both, keep Filebeat on the hosts and let Logstash own the shared rules, retries, and routing rather than duplicating that logic across every node.