
Content Quick Start Guide


This topic describes the configuration procedures for the initial setup of RSA NetWitness Platform in your environment.

Configuring Services

Throughout this document, you may come across the following:

  • Services that your installation does not have or use. For example, if you only use RSA NetWitness Platform to capture packet data, you may not have any Log Decoders. In this case, skip the sections that do not apply to you.
  • Services for which your installation has multiple instances. For example, you may have several Log Decoders. In this case, repeat the instructions so that you have set up each of your individual services.

If you have all of the following services, this is the preferred order for configuring your system:

  1. Decoder(s)
  2. Log Decoder(s)
  3. Concentrator(s)
  4. Broker(s)
  5. Reporting Engine
  6. ESA
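This preferred order follows the aggregation hierarchy: configure each service before the services that consume its data. Purely as an illustration (this is not an RSA tool, and the service names are hypothetical), the same ordering can be derived from the "aggregates from" relationships with a topological sort:

```python
from graphlib import TopologicalSorter

# Map each service to the services it aggregates or draws data from.
# Hypothetical single-instance deployment.
aggregates_from = {
    "Concentrator": ["Decoder", "Log Decoder"],
    "Broker": ["Concentrator"],
    "Reporting Engine": ["Broker"],
    "ESA": ["Concentrator"],
}

def configuration_order(edges):
    """Return services ordered so every data source precedes its consumer."""
    return list(TopologicalSorter(edges).static_order())

print(configuration_order(aggregates_from))
```

The capture-tier services (Decoders and Log Decoders) come out first, with the Reporting Engine and ESA last, matching the list above.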

Decoder

The Decoder service captures network data in packet form. RSA recommends that you begin setup with your Decoder.

  1. Assign a capture interface. For details, see Assign Capture Interface in the Appendix.
  2. Enable Capture Autostart. For details, see Capture Autostart in the Appendix.

For more details, see the "Configure Capture Settings" topic in the Decoder and Log Decoder Configuration Guide.
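Core services can also be managed outside the UI through their REST interfaces. The sketch below only builds the request URL; the node path /decoder/config/capture.autostart, the msg=set convention, and port 50104 are assumptions about a typical Decoder deployment, so verify them against your own service before use:

```python
from urllib.parse import urlencode

def build_set_url(host: str, port: int, node: str, value: str) -> str:
    """Build a URL asking a core service's REST interface to set a config node."""
    query = urlencode({"msg": "set", "value": value})
    return f"http://{host}:{port}{node}?{query}"

# Hypothetical host; 50104 is assumed to be the Decoder's REST port.
url = build_set_url("decoder.example.com", 50104,
                    "/decoder/config/capture.autostart", "on")
print(url)
```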

Log Decoder

The Log Decoder service captures log data as events. Setup for your Log Decoder is similar to setting up your Decoder:

  1. Assign a capture interface. For details, see Assign Capture Interface in the Appendix.
  2. Enable Capture Autostart. For details, see Capture Autostart in the Appendix.

For more details, see the "Configure Capture Settings" topic in the Decoder and Log Decoder Configuration Guide.

Concentrator

Concentrators aggregate data captured by Decoders and Log Decoders. This allows you to investigate, query, and alert on both log and packet metadata in real time. You need to add your Decoder and Log Decoder services to the Concentrator to begin the aggregation process.
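Conceptually, aggregation is a time-ordered merge of several meta streams into one stream you can query. The toy model below only illustrates that idea in Python; it is not how the Concentrator is actually implemented:

```python
import heapq

# Each upstream service yields (timestamp, source, meta) tuples in time order.
decoder_meta = [
    (1, "decoder", {"ip.src": "10.0.0.1"}),
    (4, "decoder", {"ip.src": "10.0.0.2"}),
]
log_decoder_meta = [
    (2, "log_decoder", {"device.type": "winevent_nic"}),
    (3, "log_decoder", {"device.type": "ciscoasa"}),
]

# heapq.merge lazily merges already-sorted streams by timestamp.
aggregated = list(heapq.merge(decoder_meta, log_decoder_meta, key=lambda e: e[0]))
print([ts for ts, _, _ in aggregated])
```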

Note: If you are capturing both log and packet data, RSA recommends that you dedicate a Concentrator service to each data type, and use a Broker service to aggregate data across the two Concentrators.

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Concentrator, and select View > Config.

    Note: If you have both Decoder and Log Decoder services, you can add them in any order.

  3. To add a service, perform the following steps:

    1. In the Aggregate Services toolbar, click the add (+) icon.
    2. Select and add your service (for example, a Decoder).
    3. Enter the administrator credentials for the Decoder service.
  4. Repeat step 3 until you have added all of the Decoder and Log Decoder services from which you want to aggregate.

    Note: Optionally, you can configure your Concentrator to aggregate from both your Log Decoders and Decoders. For details, see the "Configure Aggregate Services" topic in the Broker and Concentrator Configuration Guide.

  5. In the Aggregation Configuration panel, under Aggregation Settings, select Aggregate Autostart.

    When the Concentrator starts up, it automatically begins aggregating data if Aggregate Autostart is enabled. You can always start and stop aggregation manually.

  6. Click Apply, then click Start Aggregation.
  7. The Aggregate Autostart takes effect on the next service restart. To restart the service:

    1. From the toolbar, open the View menu (currently showing Config) and select System.
    2. From the toolbar, select Reboot.
    3. The system displays a message asking you to confirm the reboot. Click Yes; the service restarts.

For more details, see the "Broker and Concentrator Configuration" topic in the Broker and Concentrator Configuration Guide.

Broker

The Broker service aggregates metadata from configured Concentrators. This allows you to investigate and monitor data from multiple Concentrators. You need to add your Concentrator service to your Broker.

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Broker, and select View > Config.
  3. In the Aggregate Services toolbar, click the add (+) icon.
  4. Select and add your Concentrator.
  5. Enter the administrator credentials for the Concentrator service.
  6. In the Aggregation Configuration panel, under Aggregation Settings, select Aggregate Autostart.

    Note: This option determines whether aggregation starts automatically each time the Broker is started. Checked means yes, unchecked means no.

  7. Click Apply, then click Start Aggregation.

    Note: Changes take effect immediately.

For more details, see the "Broker and Concentrator Configuration" topic in the Broker and Concentrator Configuration Guide.

Reporting Engine

A Reporting Engine runs reports and alerts based on the data drawn from a data source, so you must associate a data source, or multiple data sources, to a Reporting Engine. There are three types of data sources:

  • NWDB Data Sources—The NetWitness Database (NWDB) data sources are Decoders, Log Decoders, Brokers, Concentrators, Archiver, and Collection.
  • IPDB Data Sources—The Internet Protocol Database (IPDB) data source contains both normalized and raw event messages. It stores all collected messages in a file system organized by event source (device), IP address, and time (year/month/day) with index files to facilitate searches (report and queries).
  • Warehouse Data Sources—The Warehouse data sources are Pivotal and MapR.

To associate a data source with a Reporting Engine:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Reporting Engine, and select View > Config.
  3. Select the Sources tab.
  4. Click the add (+) icon, then select Available Services to display the list of available services.

    Note: The UI presents a list of all services that have already been configured and that can be used as a source for the Reporting Engine. Depending on your NetWitness installation, this may include any of the following services: Archivers, Brokers, Concentrators, Log Decoders, Malware Analysis, Network Decoders, Incident Management, or IPDB Extractor.

  5. Select a Concentrator or Broker service and click OK.

  6. Enter the administrator credentials for the service and click OK.

For more details, see the "Configure Data Sources" topic in the Reporting Engine Configuration Guide.

Event Stream Analysis (ESA)

The RSA NetWitness Platform Event Stream Analysis (ESA) service provides advanced stream analytics such as correlation and complex event processing of disparate event data from Concentrators, Decoders, and Log Decoders, which results in incident detection and alerting.

To associate a data source with the ESA service:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select an ESA service, and select View > Config.
  3. Select the Data Sources tab.
  4. Click the add (+) icon to display the list of available services.
  5. Select the Concentrator (or Broker, if it is being used) and click OK.

    Note: RSA recommends Concentrators as the data source for ESA. For more details, see the "Add a Data Source to an ESA Service" topic in the Event Stream Analysis Configuration Guide.

  6. Click the edit icon.

    The Edit Service dialog is displayed.

  7. Enter the administrator credentials for the service and click Save.
  8. Click Enable.
  9. Click Apply for your changes to take effect.

For more details, see the "Configure ESA" and "Add a Data Source to an ESA Service" topics in the Event Stream Analysis Configuration Guide.

Deploying Content

Content developed by the RSA team may be found in RSA Live within the RSA NetWitness Platform. See the Live Services Guide for deploying the content from Live. Content may also be created and deployed through a Professional Services engagement or directly by RSA customers.

The following list shows, for each type of content, the guide and topic where you can find more information.

  • RSA Log Collector: Log Collector Configuration Guide, "Configure Event Source Types"
  • RSA Log Device (i.e. Log Parser): Log Parser Tool User Guide, "Parser Structure"
  • RSA Lua Parser: Decoder and Log Decoder Configuration Guide, "Use Custom Parsers"
  • RSA Feeds: Decoder and Log Decoder Configuration Guide, "Create and Deploy Custom Feed Using Wizard"
  • RSA Application Rules: Decoder and Log Decoder Configuration Guide, "Configure Decoder Rules"
  • RSA Event Stream Analysis Rule: Alerting Using ESA Guide, "Alerting: Add Rules to the Rule Library"
  • RSA Security Analytics Reports, Charts, Alerts and Lists: Reporting Guide, "Working with Reports in the Reporting Module"


Developing Use Cases

RSA recommends that you begin with a conversation about your required use cases and the desired outcomes. Once you and your team have determined what you want to achieve, understand the components of RSA NetWitness Platform that you have purchased and how they can help meet your use cases. Review the content in RSA Live, and understand what each resource type does, how it is used, and what output it produces. If existing content does not meet your use case requirements, you may need to develop custom content. For details, see Developing Content.

Inventory Customer System

RSA recommends that you begin with an inventory of the customer system. Information that may be helpful when addressing customer use cases includes the following:

  • What are their critical assets?
  • What parsing capabilities does the customer have configured? Are they a packet or log customer or do they have both?
  • What are their alerting capabilities? Will their resources and product licensing support reporting alerts and ESA?
  • What protocols are configured within the environment?
  • What event sources does the customer forward to RSA NetWitness Platform?
  • Do they have an Endpoint product that will forward logs to RSA NetWitness Platform?
  • What are their vulnerabilities and needs related to business-driven security?

Gather Use Cases

Once you have an inventory of the customer environment, gather the use cases they need. Addressing the use cases may follow this general flow:

  1. Look at existing bundled content to determine if the content within it aligns with the use cases.
  2. Review Tags and Medium to search for other content.

    Note: Tags catalog existing content according to an incident response approach, for example attack phase or authentication. Medium categorizes content based on whether it applies to log or packet customers (or both).

  3. Read the descriptions and requirements for the rules to determine whether they match the environment; for example, whether the required Windows logs are being collected.
  4. Consider custom content if you do not find any existing content that matches the use cases.

For details of RSA Live Content, see RSA Live Content. For details on developing custom content, see Developing Content.

Example Use Case: Detection of Malware

Use Case: I want to detect malware within my network.

Environment: Packet and log customer. Full alerting capabilities including ESA. Log sources include an IDS, Firewall, Anti-virus and web logs.

Potential Implementation:

With a search of RSA Live Content for Bundle types, you can see that the Known Threats Pack and Hunting Pack exist and are tagged with a Medium of packet.

The customer environment supports this and the pack descriptions match the use case. Additionally, the customer has ESA and can enable the Advanced Threat Detection module (see the "Configure Automated Threat Detection" topic in the Alerting Using ESA guide) to be alerted to command and control traffic within their environment using their packet capture and web logs. This is a good start for content for the detection of malware. Between the parsers, app rules, feeds, and the C2 Automated Threat Detection module, this content provides coverage for multiple malware delivery and infection vectors, as well as beaconing and C2 behavior. This enables you to catch a potential infection, as well as existing incidents, before they worsen.

Since the bundle is geared mostly towards packet content, you search within RSA Live for a Medium of log, log and packet, and with a Category (Tag in Security Analytics 10.x) of malware.

This search returns ESA rules that the bundles do not deploy by default, such as Backdoor Activity Detected or Windows Worm Activity Detected Logs. Since the customer has ESA, you read the descriptions, determine that their environment and use case match these rules, and deploy them to the service.

Finally, you review the logs for events that may indicate malware signatures, and decide you want them to appear in the malware report. At this point, you may decide to customize the log parser to generate the metadata that aligns with the Malware Activity report (that is, add static tags to messages: inv.category = 'threat' && inv.context = 'malware').

RSA Live Content

The following is a brief overview of the content types available within RSA NetWitness Platform and links to existing RSA Live content.

  • Resource Type: Bundle
    Supported Medium: Log, Packet, Log and Packet
    Description: A container for a themed or related set of content. Each piece of content is specified as a dependency within the bundle. The list of supported content is in Content Bundles or Packs.
    Note: Content Bundles do not support subscription. You can view the list of content in each bundle in the documentation (RSA Link Content Space). You can then periodically redeploy these pieces of content.


  • Resource Type: RSA Log Collector
    Supported Medium: Log
    Description: Event sources are the assets on the network, such as servers, switches, routers, storage arrays, operating systems, and firewalls. In most cases, your Information Technology (IT) team configures event sources to send their logs to the Log Collector, and the Security Analytics administrator configures the Log Collector to poll event sources and retrieve their logs.
    List of supported content:



  • Resource Type: RSA Lua Parser
    Supported Medium: Packet (with a few exceptions for Log)
    Description: Packet parsers identify the application-layer protocol of sessions seen by the Decoder, and extract metadata from the packet payloads of the session.
    List of supported content: Packet Parsers


  • Resource Type: RSA Feeds
    Supported Medium: Log, Packet
    Description: A feed is a list of data that is compared to sessions as they are captured or processed. For each match, additional metadata is created. This data could identify and classify malicious IPs, or incorporate additional information such as department and location based on internal network assignments.
    List of supported content: In Depth Feeds Information
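A feed is, at its core, a lookup table: an indexed value plus the meta to write on a match. The CSV below is purely illustrative (hypothetical columns and values, not a prescribed RSA feed schema); real custom feeds are built through the wizard described in the Decoder and Log Decoder Configuration Guide:

```python
import csv
import io

# Hypothetical enrichment rows: indexed IP, then meta values to add on match.
rows = [
    ("10.1.2.3", "finance", "boston"),
    ("10.1.2.4", "engineering", "berlin"),
]

def feed_csv(rows):
    """Render feed rows as CSV: the index column first, then enrichment columns."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

print(feed_csv(rows))
```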


  • Resource Type: RSA Application Rule
    Supported Medium: Log, Packet
    Description: Application layer rules are applied at the session level on a Log Decoder or a Decoder, and output a single meta key and value once the rule logic has been matched.
    List of supported content: RSA Application Rules
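To make the idea concrete, here is a toy evaluator of an app-rule-like condition. This is a conceptual illustration in Python only, not the Decoder's actual rule engine; it supports just key=value clauses joined by &&, in the spirit of conditions such as inv.category = 'threat' && inv.context = 'malware':

```python
def rule_matches(session, rule):
    """Evaluate a simplified rule: key=value clauses joined by '&&'."""
    for clause in rule.split("&&"):
        key, _, value = clause.strip().partition("=")
        if str(session.get(key.strip())) != value.strip().strip("'"):
            return False
    return True

def apply_rule(session, rule, meta_key, meta_value):
    """On a match, write the rule's output meta into the session."""
    if rule_matches(session, rule):
        session.setdefault(meta_key, []).append(meta_value)
    return session

session = {"device.type": "winevent_nic", "inv.category": "threat"}
apply_rule(session, "inv.category = 'threat'", "alert", "tagged")
print(session["alert"])
```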


  • Resource Type: RSA Event Stream Analysis Rule
    Supported Medium: Log, Packet, Log and Packet
    Description: ESA's advanced Event Processing Language allows you to express filtering, aggregation, joins, pattern recognition and correlation across multiple disparate event streams. Event Stream Analysis helps to perform powerful incident detection and alerting.
    List of supported content: RSA ESA Rules
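EPL patterns such as "more than 10 matching events from one source inside a time window" can be hard to visualize. The sliding-window counter below models those semantics in Python purely for illustration; actual ESA rules are written in EPL, not Python:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 600   # model of a 10-minute EPL time window
THRESHOLD = 10         # fire when more than 10 events per source

class WindowCounter:
    """Per-ip_src event counts over a sliding window, mimicking an EPL
    time window with GROUP BY ip_src HAVING count(*) > THRESHOLD."""
    def __init__(self):
        self.events = defaultdict(deque)  # ip_src -> timestamps in window

    def on_event(self, ip_src, timestamp):
        q = self.events[ip_src]
        q.append(timestamp)
        while q and q[0] <= timestamp - WINDOW_SECONDS:  # expire old events
            q.popleft()
        return len(q) > THRESHOLD  # True means the rule would alert

counter = WindowCounter()
fired = [counter.on_event("10.1.2.3", t) for t in range(11)]
print(fired[-1])  # the 11th event inside the window crosses the threshold
```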


  • Resource Type: RSA Security Analytics Report
    Supported Medium: Log, Packet, Log and Packet
    Description: A container for RSA Security Analytics rules. A rule represents a unique query that detects and summarizes the requested information within a collection of network data.
    List of supported content:


Using Bundles

Bundles group related content around a theme, so that you can deploy the whole set at one time instead of individually selecting and deploying each piece of content. For the list of available bundles and their dependencies, see Content Bundles or Packs.

Note: Check the medium attribute assigned to the bundle to ensure that it matches your RSA NetWitness Platform deployment: packet (Decoder), log (Log Decoder), or log and packet (Decoder + Log Decoder).

Filtering Content with Live Search

You can filter the content for deployment by Resource Type, Medium and Tags.

  • Resource Type lets you deploy only the content types that your environment supports. For example, if you are not using Event Stream Analysis (ESA), there is no reason to deploy that content type.
  • Medium filters content by log (content that uses meta derived from log data), packet (content that uses meta derived from network packets), or log and packet (content that correlates meta derived across log and packet data).
  • Depending on your version, you can filter by tag or category:

    • In Security Analytics 10.x, you can filter by tag; see Live Content Search Tags.
    • In NetWitness 11.x, you can use Categories to narrow results. Categories offer a richer set of items than was available as Tags. This is a hierarchical model, four levels deep. Each category catalogs content with an Incident Response service-based approach. For details, see the NetWitness Investigation Model.
    For more information on the search criteria, see the topic "Search Criteria Panel" in the Live Services Guide.

Discontinued Content

Discontinued resources identify content that is no longer relevant or useful. For the list, see Discontinued Content. To remove discontinued content from your system, use the Discontinued Resources tab in the Live > Configure view. See the "Live: Discontinued Resources Tab" topic in the Live Services guide for more details.

  • RSA developers mark the content as discontinued in RSA Live. Users can view discontinued content or hide it from view in Live.
  • Discontinued content is added to the Discontinued Content list in the documentation.
  • Users should disable discontinued parsers (see below).
Disable System Parsers

Even after deploying Lua parsers, you may still see the equivalent system parsers. Many system parsers have a Lua parser equivalent that is more comprehensive in terms of metadata output. If you deploy a Lua parser, RSA recommends that you disable the corresponding system parser. For details on this mapping, see Mapping of System to Lua Parsers.

To disable a parser:

  1. In the Security Analytics menu, select Administration > Services.
  2. Select a Decoder service, and select View > Config. The system parsers are listed in the Parsers Configuration section of the screen.
  3. Click Enabled in the Config Value column for the parser, and select Disabled from the drop-down menu.

Note: If you disable a parser entirely, its rules are not evaluated at all. If you instead disable individual meta keys, the rules are evaluated, but meta for those keys is not registered (the keys are not indexed, and thus not seen in investigations). The same distinction applies to the following:
  • FeedParser: This parser enables or disables feeds. If you disable it entirely, feeds are not evaluated at all. If you disable a key, feeds are evaluated, but meta going to that key from a feed is not registered (the key is not indexed, and thus not seen in investigations).
  • GeoIP: This parser generates geographic metadata based on source and destination information (ip.src, ip.dst).

You can likewise disable Flex parsers after you have identified equivalent Lua parsers and deployed them through Live. For mappings, see Mapping of Flex to Lua Parsers. To do so, choose a Decoder, select View > Config, and disable the parser in the Parsers Configuration section.
Developing Content

When developing content, you want to get the most accurate results with the best performance. Accuracy can be achieved by adhering to the data model and careful query construction. Before you begin developing content, however, it is important to understand what type of content fits your use case:

  • To parse events, you can deploy either a log parser or a packet Lua parser, depending on the source of the data.
  • Use feeds and application rules to examine the data parsed from the event sources and generate additional metadata when conditions are met. You can use this additional metadata during the investigation process, for alerting, or for integration with incident management tools.
  • If you want to be notified about a single type of event, create a Reporting Engine alert. If you want to be notified about multiple different events across disparate sources within a timeframe, use ESA.
  • For summaries, you can simply create a report. If you want a report that has more enhanced aggregation criteria, create a report against the warehouse. For timeframes, you can create a report on the last day or week against warehouse data, the last month up to a year against Archiver data, and the last hour up to a day against Concentrator or Broker (NWDB) data.

The Data Model

The RSA NetWitness Platform data model spans logs, packets and the endpoint. RSA recommends that you keep within this model as much as possible when you create custom content, so that the existing rules may apply to it as well. As a best practice, document any custom meta keys and their possible values, if applicable, with clear definitions for future content developers and researchers.

If you create custom meta keys, configuration files for the Log Decoder and Concentrator may need to be updated. (The Decoder does not need manual updates to the meta model, as new meta keys are automatically stored.) See Customize the Meta Framework for information about updating the meta model within the product.

The data model defines the meta keys, aliases and data types that are used across the different content types. The individual services control and manage this data model differently:

  • With Decoders, any new meta keys generated by the packet parsers or feeds are dynamically added to the data model. The only time the data model needs to be updated is if aliases are needed for a particular key.
  • With Log Decoders, you need to explicitly set the meta keys to store for use with content analysis.
  • Concentrator services need to be explicitly configured with the keys to store in the index and make available on the Investigation page for an analyst.
  • Brokers automatically sync with the Concentrator’s index file and do not need to be manually adjusted.
  • ESA updates its data model dynamically, based on a poll of the Concentrator(s) set as its data source.
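Concentrator index configuration (for example, index-concentrator.xml) is an XML list of <key> elements. The sketch below parses such a fragment with the Python standard library; the attribute set shown (name, level, description) is an assumption about the file format here, so compare against a file from your own deployment:

```python
import xml.etree.ElementTree as ET

# Fragment shaped like a Concentrator custom index file (assumed format).
SAMPLE = """
<language>
  <key description="Source IP Address" level="IndexValues" name="ip.src"/>
  <key description="Action Event" level="IndexKeys" name="action"/>
</language>
"""

def keys_at_level(xml_text, level):
    """Return the meta key names indexed at the given level."""
    root = ET.fromstring(xml_text)
    return [k.get("name") for k in root.iter("key") if k.get("level") == level]

print(keys_at_level(SAMPLE, "IndexValues"))
```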
To view the data model files within the product:

  • Decoder: view the index-decoder.xml file. From the NetWitness UI, navigate to Administration > Services > Decoder service > File, and select the file from the drop-down menu.
  • Log Decoder: view the table-map.xml file. From the NetWitness UI, navigate to Administration > Services > Log Decoder service > File, and select the file from the drop-down menu.
  • Concentrator: view the index-concentrator.xml file. From the NetWitness UI, navigate to Administration > Services > Concentrator service > File, and select the file from the drop-down menu.

The Investigation Feed

For RSA content, additional metadata is generated using the Investigation Feed in order to describe the logic being detected. This additional metadata assists analysts during the investigation process, and helps with the accuracy of content development across disparate data sources. The Investigation Context key not only catalogs content, but also describes the literal intent or functional objective of the resource itself. This key may contain any of the subcategories listed within the Investigation Model, except the highest level, which is reserved for use in the Investigation Category key.

Developing Log Parsers

The Event Source Integrator (ESI) tool has been developed to create or update log parsers. The blog post RSA NetWitness ESI 1.0 Beta 3 discusses how to download the tool, and provides videos for using it, more detailed usage information, and release notes. Look for other recent blog posts regarding the tool and updates in the RSA NetWitness Platform blog space. See also the topic "Create Custom Typespec for ODBC Collection."

Developing Lua Parsers

For details, see the Parsers Book on RSA Link. The book covers the development and debugging of parsers based on the Lua programming language. There are also sample parsers available for review.

Developing Feeds

See the "Create and Deploy Custom Feed Using Wizard" topic in the Decoder and Log Decoder Configuration Guide, which explains how to manage and create a custom feed.

Developing Application Rules

In the Decoder and Log Decoder Configuration Guide, see the topic "Configure Decoder Rules." The Capture Rule Syntax section of this topic describes the rule syntax; for examples, see the Application Rules Cheat Sheet. If a session does not match a rule, the Decoder then attempts to match the next rule listed, until a match is found.

To add an application rule:

  1. In the Security Analytics menu, select Administration > Services.
  2. Select a Decoder or Log Decoder service, select View > Config, and then select the App Rules tab.
  3. Fill in the rule values (see App Rule Example 1 and App Rule Example 2 for details on how to fill in the values for this dialog box), and click OK. For example, to truncate all Server Message Block (SMB) traffic, enter service=139. RSA NetWitness Platform displays syntax errors and warnings as you edit.
  4. If necessary, move the rule to change the order in which the rules are evaluated.
  5. To apply the updated rule set to the Decoder or Log Decoder, click Apply. The system applies the updated set and removes the pending indicator from the rules that were pending.

Keep the following rule behaviors in mind:

  • If a matching rule stops rule processing, no further rules are processed for this session, even if more are listed after this rule in the App Rules tab.
  • Truncate options let you keep a session but not save the packet payload, not save the packet payload after a specified number of first bytes, or preserve the SSL exchange while not saving the rest of the payload (for use with SSL parsers).
  • To generate alert metadata when a rule matches, enable the Alert flag and select the name of the alert meta from the Alert On drop-down list. You can also prevent the alert metadata that is created from being written to the disk.
  • To forward matching events, enable the Forward flag.

Developing Reports

For guidelines, see the "Reporting Guidelines" topic in the Reporting Guide, and the "Core DB: Optimization Techniques" topic in the Core Database Tuning Guide. To build a custom report, select the Custom report type and select up to 6 meta keys within the Select statement. Keep in mind that an event description for log messages may be different for each log message within each parser. Keep regex usage to a minimum, and use regex operators only when required; commonly used regex operators can instead be created as application rules on the respective Log Decoders.

Developing ESA Rules

ESA provides a rule builder and pre-built rules from RSA Live, so in many cases you should not need to know the EPL syntax used within the rules. However, if your use case exceeds the capabilities of either of these, you should become familiar with at least the basics of the EsperTech EPL language used with ESA. To get started, download EPL Essentials from RSA Link here: https://community.rsa.com/docs/DOC-59978.

EPL statements work against a stream, which is just a sequence of events; within NetWitness, this stream is called the Event stream. EPL looks similar to SQL. Keep the following in mind:

  • Data windows bound events by the time they arrive at the engine, rather than the time within the event itself. There are a variety of these data window types available to use.
  • Metadata is case sensitive. You may need to navigate to the Events View to see the true case of the metadata in question; if the case of the metadata is not and will not be known, EPL comes equipped with functions you can use to manage case.
  • Rules load in a defined order, and it is important to define the loading order if there are dependencies between them. You can use the EPL statement "uses" to force the pre-loading of the required rules.
  • To generate an alert, you must add the RSA-specific annotation, @RSAAlert, before the EPL statements upon which you want to alert.
  • Bound all statements, either by a time window or an event count (unless they only match on a single event). Boundaries ensure that the EPL rules do not consume more and more memory over time, and clean up old data that is no longer required. This can be achieved by using EPL views.
  • Be careful with patterns: a new thread of pattern matching can be created for every 'a' event, which means that multiple 'a' events may match with the same 'b' event.
  • For help with regular expressions, refer to the Regular Expressions Reference Table of Contents.

As an example, a statement with a time window outputs everything within the window that matches an ip_src. Adding a HAVING count clause instructs the engine to output only if, after the time window, the count of events is greater than 10, and a GROUP BY ip_src aggregation instructs the count to apply to a single ip_src instead of across all ip_src values that match the filter criteria.

You can also correlate packet and log data. For example, alert when a source receives 10 unique attacks to a single destination within 10 minutes, but only if within those 10 minutes we do not see a TCP RST sent from the destination IP: our F5s send a TCP RST on inbound web requests for unknown paths, so in this instance we only want to be alerted when the destination has not responded to the web requests.

Additional recommendations:

  • Named windows and aggregation functions let you compute values such as percentages, ratios, averages, counts, min and max within a given time window. Note that this stores events in memory, which may cause issues if storing over a long period or a large number of events. For details, see the topic "Alerting: ESA Enablement Guide" in the Alerting Using ESA guide.
  • Put rules into trial mode on the ESA service prior to enabling them in production. See the topic "Work with Trial Rules" in the Alerting Using ESA guide.
  • EsperTech provides an online tool to test advanced EPL statements with schema-based scenarios. This is a good resource for when you start the development process outside of RSA NetWitness.

Maintaining Content

Content can be discontinued, or can become irrelevant due to changes in technology or attack techniques and tools. You can identify content with available updates by subscription, or by searching through RSA Live by date range since last deployed. Be sure to subscribe to any content for which you want to receive update notifications. See the RSA Security Analytics Live Services guide for more information about subscriptions.

Appendix

Assign Capture Interface

To select the capture interface:

  1. In the Security Analytics menu, select Administration > Services.
  2. Select a Decoder or Log Decoder service, and select View > Config.
  3. Select Capture Interface Selected.
  • For Log Decoders, there is a single choice: log_events,Log Events.
  • For Decoders, the available choices depend upon your environment.
Capture Autostart

When a Decoder or Log Decoder starts up, it automatically begins capturing data if Capture Autostart is enabled. You can always start and stop data capture manually. However, if the service goes down or is restarted, capture is automatically restarted when Capture Autostart is enabled. You still need to start capture manually the first time, though.

To enable Capture Autostart:

  1. In the Security Analytics menu, select Administration > Services.
  2. Select a Decoder or Log Decoder service, and select View > Config.
  3. Select Capture Autostart.

The setting takes effect on the next service restart. To restart the service:

  1. From the toolbar, open the View menu (currently showing Config) and select System.
  2. From the toolbar, select Reboot.
  3. The system displays a message asking you to confirm the reboot. Click Yes; the service restarts.