
Logstash Integration Guide for 12.5

Tags: Documentation, Installation & Upgrade, PDF Documentation, Version 12.5.0

The following article contains a summary of the Logstash Integration Guide for 12.5. To see the full guide, go to Attachments on this article and download the associated PDF.

Overview

This section provides an overview of how Logstash integrates with NetWitness® and explains the scope and purpose of the guide. It introduces Logstash as an open‑source data collection and processing engine capable of normalizing logs from multiple sources and forwarding them to different destinations. The section also introduces Managed Logstash, which is bundled and supported with the NetWitness® Log Collector or Virtual Log Collector, removing the need for a standalone Logstash server and simplifying deployment and maintenance. From a NetWitness® perspective, the integration supports custom or unsupported event sources as well as existing Logstash deployments that can forward logs to NetWitness® with minimal output configuration changes. 

Configuration Process

This section explains the end‑to‑end workflow for integrating Logstash with NetWitness®. It outlines a logical sequence that guides users through installing Logstash, configuring the NetWitness® codec, setting up event source inputs, applying enrichment filters, and deploying JSON parsers. The section also describes the complete data flow lifecycle, starting from event generation at the source, collection via plugins such as Beats, processing and enrichment within Logstash, encoding with the NetWitness® codec, transmission through output plugins, and final metadata extraction on the NetWitness® Log Decoder.
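The workflow above can be sketched as a minimal Logstash pipeline skeleton. This is illustrative only: the port, host address, and codec name are assumptions, and the actual options are defined in the guide.

```
input {
  beats {
    port => 5044                          # receive events from Filebeat/Auditbeat
  }
}

filter {
  # enrichment filters go here (NetWitness meta is added in a later section)
}

output {
  tcp {
    host  => "logdecoder.example.com"     # placeholder Log Decoder address
    port  => 514
    codec => netwitness                   # assumed codec name; encodes RFC-5424
  }
}
```

Each stage maps to a step in the data flow described above: collection at the input, processing in the filter block, and encoding plus transmission at the output.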

Install Logstash

This section covers how to install and prepare Logstash for use with NetWitness®. It explains that users can deploy either the open‑source or licensed Elastic version of Logstash and emphasizes following Logstash security and operational best practices. The section describes configuring Logstash to run as a service on Linux or Windows, enabling it to start automatically on system boot, and identifying default log file locations for troubleshooting. It also highlights platform‑specific considerations, such as ensuring correct service ownership when installing via RPM on CentOS systems.
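On a systemd-based Linux host, the service setup described above typically reduces to a few commands. These are standard Logstash operations, shown here as a sketch; consult the guide for platform-specific steps such as RPM service ownership on CentOS.

```
# Enable Logstash to start automatically on boot, then start it now
sudo systemctl enable logstash
sudo systemctl start logstash

# Verify the service is running and tail the default log file for troubleshooting
sudo systemctl status logstash
sudo tail -f /var/log/logstash/logstash-plain.log
```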

Install and Configure the NetWitness® Codec

This section explains how to install and configure the NetWitness® Logstash codec, which enables Logstash to format events in RFC‑5424 syslog format for ingestion by NetWitness®. It walks through downloading the offline codec installer, stopping the Logstash service, removing any previously installed codec versions, and installing the updated codec package. The section also identifies the default directories where Logstash input, filter, and output configuration files reside and explains restarting the Logstash service to activate the codec after installation.
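The install sequence above might look like the following on a default Linux install. The codec plugin name and the installer filename are placeholders; use the names given in the guide and the actual file you downloaded.

```
# Stop Logstash before changing plugins
sudo systemctl stop logstash

# Remove any previously installed codec version (plugin name is a placeholder)
sudo /usr/share/logstash/bin/logstash-plugin remove logstash-codec-netwitness

# Install the offline codec package (filename is a placeholder)
sudo /usr/share/logstash/bin/logstash-plugin install file:///tmp/logstash-codec-netwitness-offline.zip

# Restart Logstash to activate the codec
sudo systemctl start logstash
```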

Configure Logstash Output Plugins

This section covers how to configure Logstash output plugins to send processed events to NetWitness®. It explains using the TCP output plugin together with the NetWitness® codec to forward events to a Log Decoder or Virtual Log Collector. The section also describes enabling TLS for secure communication, including configuring encrypted connections and verifying NetWitness® receivers using trusted CA certificates. These configurations ensure that log data is transmitted securely and reliably from Logstash to NetWitness®.
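A TCP output with TLS enabled can be sketched as follows. The host, port, certificate path, and codec name are assumptions; the SSL options shown are standard settings of the Logstash TCP output plugin.

```
output {
  tcp {
    host       => "logdecoder.example.com"      # placeholder receiver address
    port       => 6514
    codec      => netwitness                    # assumed codec name
    ssl_enable => true                          # encrypt the connection with TLS
    ssl_verify => true                          # verify the receiver's certificate
    ssl_cacert => "/etc/logstash/certs/ca.pem"  # trusted CA certificate (placeholder path)
  }
}
```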

Configure the Event Source

This section explains how to configure event sources that feed logs into Logstash before forwarding them to NetWitness®. It describes using Filebeat to collect file‑based logs such as Apache logs and using Auditbeat to collect operating system audit logs, particularly from CentOS. The section also explains how to configure Filebeat or Auditbeat to enable desired inputs and redirect output from Elasticsearch to Logstash by updating the appropriate configuration files. This ensures consistent and centralized log ingestion into the Logstash pipeline.
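For Filebeat, redirecting output from Elasticsearch to Logstash amounts to an edit like this in `filebeat.yml`. The log path and Logstash host are placeholders; Auditbeat follows the same output pattern.

```
# filebeat.yml -- enable a file input and send events to Logstash
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/httpd/access_log     # example Apache log path

# Comment out the default Elasticsearch output:
# output.elasticsearch:
#   hosts: ["localhost:9200"]

output.logstash:
  hosts: ["logstash.example.com:5044"]   # placeholder Logstash host
```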

Configure Logstash Filters to Add NetWitness® Meta

This section explains how to enrich events in Logstash with required NetWitness® metadata so that events can be properly parsed and categorized on the Log Decoder. It describes mandatory metadata fields such as the NetWitness® device type, message ID, and source host, along with optional collection host metadata. The section also explains naming requirements for device types and discusses how the NetWitness® codec handles JSON payloads by default. An example filter configuration illustrates how metadata can be conditionally added based on the event source.
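A conditional enrichment filter of the kind the guide illustrates might look like the sketch below. The field names here are illustrative placeholders only; the guide defines the exact metadata field names the NetWitness® codec expects.

```
filter {
  if [agent][type] == "filebeat" {
    mutate {
      add_field => {
        "nw_device_type" => "linuxapache"        # placeholder for the required device type field
        "nw_msg_id"      => "APACHE_ACCESS"      # placeholder for the required message ID field
        "nw_source_host" => "%{[host][name]}"    # source host taken from Beats metadata
      }
    }
  }
}
```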

Advanced NetWitness® Configuration

This section covers advanced configuration options that enhance log processing and resilience. It explains how Grok filters can be used to extract structured metadata from logs and how additional Logstash input and filter plugins can expand collection capabilities. The section also describes filtering unwanted logs at both the Logstash and Beats levels, using the Heartbeat plugin to validate connectivity to NetWitness®, enabling persistent queues to prevent data loss during failures, and configuring advanced NetWitness® codec payload mappings for customized event formatting.
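As one example of the Grok-based extraction mentioned above, a standard pattern can pull structured fields out of an Apache access log line; `COMBINEDAPACHELOG` is a pattern shipped with the Grok filter plugin.

```
filter {
  grok {
    # Extract client IP, verb, request path, status, etc. from an Apache-style line
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```

Persistent queues, by contrast, are enabled in `logstash.yml` (for example `queue.type: persisted`) rather than in the pipeline configuration.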

Configure NetWitness® to Collect Events

This section explains how to configure NetWitness® Log Decoders to receive events sent from Logstash. It describes starting or restarting capture services, verifying that capture is active, and adjusting Log Decoder settings to support larger event sizes when necessary. The section also explains how to modify Log Decoder configuration parameters using REST endpoints and recommends reducing event size through filtering when oversized logs risk truncation.
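The REST-based configuration changes mentioned above generally follow the pattern below. The host, port, and parameter node path are placeholders; the guide identifies the exact endpoints and parameter names to use.

```
# List Log Decoder configuration nodes over the REST interface
curl -u admin "https://logdecoder.example.com:50102/decoder/config?msg=ls"

# Set a configuration parameter -- the node path and value shown are placeholders
curl -u admin "https://logdecoder.example.com:50102/decoder/config/<parameter>?msg=set&value=<new-value>"
```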

Linux Event Source Example

This section provides a practical example of collecting Linux system and audit events using Logstash and forwarding them to NetWitness®. It explains sample input, filter, and output configurations using Beats plugins, including opening required firewall ports for log ingestion. The section also demonstrates how metadata is applied differently for system and audit logs and recommends organizing configurations into separate pipelines for scalability and easier management.
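The recommendation to split configurations into separate pipelines can be expressed in `pipelines.yml`; the pipeline IDs and paths below are placeholders.

```
# pipelines.yml -- one pipeline per log category for easier management
- pipeline.id: linux-system
  path.config: "/etc/logstash/conf.d/system/*.conf"
- pipeline.id: linux-audit
  path.config: "/etc/logstash/conf.d/audit/*.conf"
```

The ingestion port used by the Beats input (for example 5044/tcp) must also be opened in the host firewall, as the section describes.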

Build Custom JSON Parser

This section covers how to build a custom JSON parser for NetWitness® when default parsing is insufficient. Using a Linux device example, it explains analyzing a sample JSON log, extracting metadata from the RFC‑5424 header, mapping JSON payload elements to NetWitness® datatypes, and parsing message strings, arrays, nested objects, and dynamically keyed structures. The section also illustrates how parsed metadata appears within the NetWitness® Log Decoder and provides a complete example parser configuration.
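As a sketch of the kind of event such a parser analyzes, a JSON payload wrapped in the RFC-5424 syslog header emitted by the codec might look like the line below. Every value here is invented for illustration; the header fields (priority, timestamp, hostname, app-name, message ID) precede the JSON payload that the parser maps to NetWitness® meta.

```
<13>1 2024-01-15T10:32:00.000Z host01 linuxexample - LINUX_AUDIT - {"user":"alice","action":"login","result":"success","tags":["auth","ssh"]}
```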

Deploy JSON Parser

This section explains how to deploy and activate custom JSON parsers on a NetWitness® Log Decoder. It describes uploading the parser file to the correct device directory on the Log Decoder, reloading parsers using REST APIs, and performing the same action through the NetWitness® user interface. These steps ensure that new or updated parsers take effect immediately without requiring a full system restart.
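The upload-and-reload steps might look like the following. The parser filename, device directory path, and REST endpoint are placeholders based on common NetWitness® layouts; confirm the exact values in the guide.

```
# Copy the parser file to the device directory on the Log Decoder
scp v20_linuxexamplemsg.xml root@logdecoder.example.com:/etc/netwitness/ng/envision/etc/devices/linuxexample/

# Reload parsers via REST so the change takes effect without a full restart
curl -u admin "https://logdecoder.example.com:50102/decoder/parsers?msg=reload"
```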




Attachments:
nw_12.5_logstash_guide.pdf