Export Connector - Installation and Configuration Guide for 12.5.1
Tags: Documentation, Installation & Upgrade, PDF Documentation, Version 12.5.1
The following article contains a summary of the Export Connector - Installation and Configuration Guide for 12.5.1. To see the full guide, go to Attachments on this article and download the associated PDF.
Overview
This section provides an introduction to the NetWitness Export Connector, which is a Logstash input plugin used to continuously export events from the NetWitness Platform to downstream systems. It integrates with NetWitness Decoders and Log Decoders to collect metadata and, optionally, raw log data, converting this information into Logstash JSON objects. These events can then be routed to multiple destinations such as Kafka, Elastic, AWS S3, TCP endpoints, and other supported consumers. The connector is installed on Logstash and becomes active after restarting the Logstash service, with support for both standalone Logstash and NetWitness Managed Logstash deployments.
Work Flow of NetWitness Export Connector
This section explains the end‑to‑end workflow of how data moves through the NetWitness Export Connector using the Logstash pipeline model. Data ingestion begins with the input plugin, which retrieves events from NetWitness Decoders or Log Decoders through NetWitness APIs. The data can optionally be processed by Logstash filter plugins to enrich, modify, or remove fields before being forwarded to the output plugin. Finally, the output plugin delivers the processed events to one or more external destinations, enabling flexible and scalable data export architectures.
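The pipeline model described above can be sketched as a minimal Logstash configuration. The input plugin name and its parameter are illustrative placeholders (the actual names are documented in the full guide), and the output simply prints events for initial verification:

```
input {
  netwitness_export_connector {    # hypothetical plugin name
    decoder_ip => "10.0.0.5"       # hypothetical parameter: Decoder / Log Decoder address
  }
}

filter {
  # optional: enrich, modify, or remove fields here
}

output {
  stdout { codec => rubydebug }    # print events while validating the pipeline
}
```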
Configuration Process
This section covers the overall configuration flow required to deploy the NetWitness Export Connector successfully. The process begins by ensuring Logstash is installed and running, followed by installing and configuring the NetWitness Export Connector input plugin. Optional steps include configuring Logstash filter plugins and output plugins based on the desired data transformation and destination. Once configuration is complete, the deployment can be validated by monitoring data flow and system health.
VM Sizing Recommendations
This section provides guidance on sizing virtual machines for Logstash and the NetWitness Export Connector based on expected throughput. It recommends deploying Logstash and the connector on a dedicated virtual machine and sizing resources according to cumulative Events Per Second (EPS) or network throughput. CPU, JVM memory, and total VM memory requirements vary depending on whether metadata only or metadata plus raw logs are exported, as well as the number of meta keys involved. The guidance also emphasizes setting JVM heap values consistently and allocating additional CPU resources for operating system services.
Install Logstash
This section explains how to install and prepare Logstash for use with the NetWitness Export Connector. It highlights the importance of following Logstash security best practices and confirms that Logstash version 7.6.2 is supported. The section outlines installing Logstash, configuring it to run as a service, enabling it at system startup, and identifying key file locations such as logs. It also notes that CentOS is the recommended operating system for optimal results and provides references for troubleshooting common Logstash issues.
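As a sketch of the installation steps on CentOS, the supported 7.6.2 release can be installed from the standard Elastic download site and enabled as a service (paths shown are the Logstash RPM defaults; consult the full guide for the authoritative procedure):

```
# Download and install the supported Logstash version (7.6.2) on CentOS
curl -O https://artifacts.elastic.co/downloads/logstash/logstash-7.6.2.rpm
sudo rpm -ivh logstash-7.6.2.rpm

# Run Logstash as a service and enable it at system startup
sudo systemctl enable logstash
sudo systemctl start logstash

# Key file locations (RPM defaults):
#   /etc/logstash/      - configuration
#   /var/log/logstash/  - logs
```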
Install NetWitness Export Connector
This section provides step‑by‑step instructions for installing the NetWitness Export Connector plugin into an existing Logstash environment. It explains how to obtain the offline installer, copy it to the Logstash host, stop the Logstash service, install the plugin, and ensure required configuration files are in place. After installation, Logstash is restarted so the connector can begin collecting events from configured Decoders or Log Decoders.
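The sequence above can be sketched with standard Logstash plugin-management commands; the offline installer filename below is a placeholder, not the actual artifact name:

```
# Stop Logstash before installing the plugin
sudo systemctl stop logstash

# Install the offline plugin package copied to the Logstash host
# (filename is a placeholder for the actual offline installer)
cd /usr/share/logstash
sudo bin/logstash-plugin install file:///tmp/<export-connector-offline-installer>.zip

# Restart Logstash so the connector begins collecting events
sudo systemctl start logstash
```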
Configure NetWitness Export Connector
This section explains how to configure the NetWitness Export Connector within a Logstash configuration file. It describes the structure of a Logstash pipeline, including input, optional filter, and output sections, and explains how to define one or multiple NetWitness input plugin instances for different Decoders. The section also covers the use of the Logstash keystore for securely managing sensitive credentials and references support for multiple pipelines when required.
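A hedged sketch of a pipeline with two connector instances, one per Decoder, follows; the plugin and parameter names are illustrative placeholders, while the `${VAR}` syntax for pulling secrets from the Logstash keystore is standard Logstash behavior:

```
input {
  netwitness_export_connector {      # hypothetical plugin name
    decoder_ip => "10.0.0.5"         # Log Decoder
    password   => "${DECODER_PWD}"   # resolved from the Logstash keystore
  }
  netwitness_export_connector {
    decoder_ip => "10.0.0.6"         # Packet Decoder
    password   => "${DECODER_PWD}"
  }
}

output {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topic_id          => "netwitness-events"
  }
}
```

A keystore entry such as `DECODER_PWD` is created with `bin/logstash-keystore add DECODER_PWD` so the credential never appears in plain text in the configuration file.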
Position Tracking and Start Session
This section explains how position tracking is used to ensure reliable and continuous data export. The connector maintains a tracking file that records the last processed session ID and timestamp, allowing it to resume ingestion without duplication or data loss. The start session parameter can be used to control where ingestion begins, either from the last known session, a stored position, or a specific session ID defined by the user, with clear rules governing precedence.
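As an illustration only (the parameter name and accepted values here are placeholders, not taken from the guide), the start-session behavior might be expressed in the input block like this:

```
input {
  netwitness_export_connector {    # hypothetical plugin name
    decoder_ip    => "10.0.0.5"
    start_session => 0             # placeholder: resume from the stored position in
                                   # the tracking file; a specific session ID would
                                   # instead begin ingestion at that session
  }
}
```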
Configure SSL
This section covers how to enable and configure SSL to establish trusted, encrypted communication between Logstash and NetWitness Decoders or Log Decoders. It explains the use of SSL parameters within the Logstash configuration file, including enabling SSL and referencing keystore paths and passwords. The section also recommends using the Logstash keystore to protect sensitive information and highlights the requirement to use encrypted ports for trusted connections.
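The SSL parameters might look like the following sketch; the parameter names are illustrative placeholders, while the `${KEYSTORE_PWD}` reference uses the standard Logstash keystore substitution recommended for sensitive values:

```
input {
  netwitness_export_connector {    # hypothetical plugin name
    decoder_ip        => "10.0.0.5"
    ssl_enabled       => true                          # placeholder parameter names
    keystore_path     => "/etc/logstash/keystore.p12"
    keystore_password => "${KEYSTORE_PWD}"             # from the Logstash keystore
  }
}
```

Note that with SSL enabled the connector must target the Decoder's encrypted port rather than the plain port.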
Certificate and Keystore
This section explains the process for creating and managing certificates required for SSL communication. It covers generating a Certificate Authority certificate, creating server private keys and certificate signing requests, packaging certificates into a PKCS12 keystore, and importing trust certificates. The section also explains how to upload certificates to NetWitness Decoders using REST APIs or CLI commands and emphasizes that the same keystore can be reused across multiple NetWitness hosts.
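The certificate workflow above can be sketched with standard OpenSSL commands; subject names, validity periods, and the keystore password are illustrative, and the full guide should be followed for the exact procedure and for uploading the certificates to the Decoders:

```shell
# 1. Generate a Certificate Authority key and self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 365 -subj "/CN=Example-CA"

# 2. Create a server private key and certificate signing request
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=logstash-host"

# 3. Sign the server certificate with the CA
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 365

# 4. Package the server key, server certificate, and CA certificate
#    into a PKCS12 keystore (reusable across multiple NetWitness hosts)
openssl pkcs12 -export -in server.crt -inkey server.key -certfile ca.crt \
  -name logstash -out keystore.p12 -passout pass:changeit
```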
Health and Wellness
This section provides an overview of monitoring capabilities using the New Health & Wellness feature in NetWitness. It explains how Logstash metrics are sent to Elastic and visualized through Kibana dashboards, allowing administrators to monitor connector status, data flow, and system health. The section distinguishes between plugin metrics, which track the NetWitness Export Connector itself, and host metrics, which provide visibility into system resources such as CPU, memory, and I/O.
Download and Install Dashboard
This section explains how to deploy predefined Health and Wellness dashboards from NetWitness Live Content. It walks through selecting the appropriate dashboard resources, deploying them to the Metrics Server service, and verifying successful deployment. These dashboards provide visual insight into Logstash performance and NetWitness Export Connector activity.
Create a User in New Health & Wellness (Kibana)
This section explains how to create a dedicated user account in Kibana for sending Logstash metrics to the New Health & Wellness service. It describes navigating the NetWitness interface, creating the user in the internal database, assigning the appropriate service role, and mapping permissions so metrics can be successfully indexed and displayed.
Enable the Logstash Plugin Metrics
This section covers how to enable plugin‑level metrics for the NetWitness Export Connector. It explains the required configuration parameters, including enabling metrics and specifying Elastic host connection details and credentials. These metrics allow visibility into connector behavior and event processing performance.
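Purely as an illustration (every parameter name below is a placeholder, not taken from the guide), the metrics settings would sit alongside the other connector parameters in the input block:

```
input {
  netwitness_export_connector {    # hypothetical plugin name
    decoder_ip       => "10.0.0.5"
    metrics_enabled  => true                       # placeholder parameter names
    elastic_host     => "<elastic-host>"           # New Health & Wellness Elastic host
    elastic_username => "<metrics-user>"           # user created in Kibana
    elastic_password => "${ELASTIC_PWD}"           # from the Logstash keystore
  }
}
```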
Enable the Logstash Host Metrics
This section explains how to enable host‑level monitoring for Logstash using Metricbeat. It covers installing Metricbeat, disabling default Logstash monitoring, enabling the appropriate Metricbeat modules, and configuring Metricbeat to send data to Elastic. This provides deeper insight into system‑level performance and resource utilization.
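A sketch of the Metricbeat setup using standard Metricbeat commands follows; the Elastic host and credentials are placeholders, and the exact settings for disabling default Logstash monitoring are in the full guide:

```
# Install Metricbeat (same 7.6.x line as Logstash) and enable modules
curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.6.2-x86_64.rpm
sudo rpm -ivh metricbeat-7.6.2-x86_64.rpm
sudo metricbeat modules enable logstash system

# Point Metricbeat at Elastic in /etc/metricbeat/metricbeat.yml, e.g.:
#   output.elasticsearch:
#     hosts: ["https://<elastic-host>:9200"]
#     username: "<metrics-user>"
#     password: "<password>"

sudo systemctl enable --now metricbeat
```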
Configuring Custom Multi‑valued Meta
This section explains how to define custom multi‑valued metadata beyond the default supported fields. It describes using a JSON configuration file to specify generic meta keys applied to all hosts or host‑specific meta keys applied selectively. The section emphasizes correct meta naming conventions, file permissions, and the ability to combine default and custom multi‑valued metadata.
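As a purely illustrative sketch of the idea (the actual file name, location, and schema are defined in the full guide), such a JSON file might map generic meta keys to all hosts and additional keys to specific hosts:

```json
{
  "generic": ["alias.host", "checksum"],
  "10.0.0.5": ["custom.meta"]
}
```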
Configure Logstash Filter Plugin (Optional)
This section covers optional configuration of Logstash filter plugins to transform events before they are exported. It explains how filters can be used to add, remove, or modify fields and provides an example of using a mutate filter to remove unwanted metadata. Standard Logstash filter plugins can be used depending on processing requirements.
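For example, a standard `mutate` filter can drop meta keys that are not needed downstream (the field names here are illustrative):

```
filter {
  mutate {
    remove_field => ["did", "rid"]   # drop unwanted meta keys before export
  }
}
```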
Configure Logstash Output Plugin
This section explains how to configure Logstash output plugins to send exported events to external destinations such as Kafka. It describes defining output plugins in the Logstash configuration file and highlights that standard Logstash output plugins are supported. The section also introduces performance tuning considerations for Kafka deployments, including tuning both Logstash Kafka output parameters and Kafka broker settings to improve throughput.
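A sketch of a Kafka output using the standard Logstash Kafka output plugin follows; the broker addresses and topic are placeholders, and the tuning values are examples of the kind of parameters the guide discusses, not recommended settings:

```
output {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topic_id          => "netwitness-events"
    compression_type  => "lz4"     # example tuning knobs: compression,
    linger_ms         => 50        # batching delay, and
    batch_size        => 65536     # batch size can improve throughput
  }
}
```

Broker-side settings (partition count, replication, socket buffers) also affect throughput and should be tuned together with these producer parameters.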
Attachments:
nw_12.5.1_export_connector_guide.pdf