Getting started capturing live network traffic
The `yanadump` tool provides an alternative to PCAP analysis. `yanadump` listens on a network interface and captures handshake information in a trace file (`.bin`) formatted for analysis in AQtive Guard.
`yanadump` functions as a network probe, enabling live network traffic monitoring through two deployment methods. It must be deployed on a machine that either sends and receives traffic itself or receives forwarded traffic for analysis:
- Direct traffic monitoring: Listen to traffic from the network interface of an endpoint, such as a Linux server.
- Traffic mirroring: Use cloud-native traffic mirroring to monitor forwarded traffic from cloud-based assets or in a hybrid environment.
Direct traffic monitoring (Linux)
To dump handshake information from a live interface, run:
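A minimal invocation might look like the following sketch. The `-i` interface flag appears later in this document, but the `-o` output flag and the interface name `interface0` are illustrative assumptions; check `yanadump --help` for the exact options of your build:

```shell
# Capture handshake traffic on interface0 into a .bin trace file
# (-o as the output flag is an assumption, not a documented option)
$ ./yanadump -i interface0 -o capture.bin
```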
The `yanadump` binary requires both the `CAP_NET_ADMIN` and `CAP_NET_RAW` Linux capabilities to capture packets. This can be achieved by doing one of the following:
- Run the `yanadump` tool as root (not recommended)
- Add the following capabilities to the `yanadump` binary:
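On most Linux distributions this can be done with the standard `setcap` utility from the libcap package (the binary path is illustrative):

```shell
# Grant the packet-capture capabilities named above to the binary
$ sudo setcap cap_net_admin,cap_net_raw+ep ./yanadump
# Confirm the capabilities were applied
$ getcap ./yanadump
```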
Info
The specified interface is put in promiscuous mode by default. However, some network interfaces don't support promiscuous mode, and an error such as `Failed to set promiscuous mode` can be encountered. Promiscuous mode can be disabled with the `--disable-promiscuous` argument.
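For example, assuming the interface name `interface0`:

```shell
# Capture without switching the interface to promiscuous mode
$ ./yanadump -i interface0 --disable-promiscuous
```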
Stopping a live capture
To run the `yanadump` tool for a predetermined amount of time, use the Unix `timeout` command:
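For example (the `-o` output flag and interface name are illustrative assumptions; see your build's `--help`):

```shell
# Terminate the capture automatically after one hour
$ timeout 1h ./yanadump -i interface0 -o capture.bin
```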
In this example, the `yanadump` command is terminated after running for one hour.
Continuous monitoring
Deploy `yanadump` to push protobuf files to AQtive Guard on a recurring schedule (such as daily) for ongoing monitoring of your network cryptography.
To continuously analyze machine traffic, first ensure the machine uploading the trace has access to the AQtive Guard API.
The following command line streams handshake information to AQtive Guard continuously and creates a new AQtive Guard session every day:
```shell
$ export AQG_API_TOKEN='xxxxx'
$ ./yanadump -i interface0 --api-session-renew 1d --api-url https://API.AQG.DOMAIN/
```
In this example:
- `xxxxx` is the API token generated from the settings page of the AQtive Guard web UI.
- `https://API.AQG.DOMAIN/` is the base URL of the AQtive Guard instance.
The API token can also be passed through the `--api-token` argument; however, this should be considered insecure, as the token ends up in `ps` output.
Info
In the case of high data volume, handshake information may be dropped if the upload to the AQtive Guard API takes too long. The channel capacity can be increased with the `--channel-capacity` argument, but this will increase memory usage.
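As a sketch, the capacity could be raised like this; the value `10000` is purely illustrative, and the unit (assumed here to be the number of buffered handshake records) should be confirmed against the tool's help output:

```shell
# Buffer more handshake records while uploads are in flight (value illustrative)
$ ./yanadump -i interface0 --api-url https://API.AQG.DOMAIN/ --channel-capacity 10000
```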
How to use a custom CA
`yanadump` verifies the TLS certificate of the server specified with `--api-url`. This verification can fail if the API server uses a custom CA certificate. In that case, the location of a custom certificate can be specified through the `--ca-file` argument, as in the following example:
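For instance, with an illustrative certificate path:

```shell
# Trust a custom CA bundle (PEM) when connecting to the AQtive Guard API
$ ./yanadump -i interface0 --api-url https://API.AQG.DOMAIN/ --ca-file /path/to/custom-ca.pem
```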
The file may contain multiple CA certificates, and all certificates must be in PEM format. If this argument is not set, the platform-specific certificate source is used:
- On Windows, certificates are loaded from the system certificate store.
- On macOS, certificates are loaded from the keychain. The user, admin and system trust settings are merged together as documented by Apple.
- On Linux and other UNIX-like operating systems, CA files are searched within the default directories unless otherwise specified through the environment variables `SSL_CERT_FILE` and `SSL_CERT_DIR`.
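On Linux, for instance, a specific bundle can be selected for a single run via the standard OpenSSL-style variable instead of `--ca-file` (the path is illustrative):

```shell
# Override the default CA lookup for this invocation only
$ SSL_CERT_FILE=/etc/ssl/certs/custom-ca.pem ./yanadump -i interface0 --api-url https://API.AQG.DOMAIN/
```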
Session renewal
By default, `yanadump` uploads handshake information within a single session for a given live capture.
The `--api-session-renew` argument can be used to create a new AQtive Guard session at a regular interval. The format is `<number><unit>`, where `<unit>` can be `d` (days), `h` (hours), `m` (months) or `y` (years). For example, to renew the session every 2 days:
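The following sketch reuses the interface and API URL placeholders from the continuous-monitoring example:

```shell
# Start a new AQtive Guard session every two days
$ ./yanadump -i interface0 --api-url https://API.AQG.DOMAIN/ --api-session-renew 2d
```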
Network interface deletion
Some networking services (e.g. VPNs) delete a network interface when they are stopped and create a new one when they are launched. `yanadump` transparently handles this situation and resumes traffic capture on a network interface that has been deleted and created again within a given amount of time. This allows such network services to be restarted without having to restart `yanadump`.
By default, this duration is set to 2 minutes and can be modified with the `--if-check-duration` argument. For instance, to wait up to 5 seconds for the interface `interface0` to be created again after it has been deleted:
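A sketch, assuming `--if-check-duration` accepts a `<number><unit>` duration such as `5s` (the value format is not stated in this document):

```shell
# Wait up to 5 seconds for interface0 to reappear after deletion
$ ./yanadump -i interface0 --if-check-duration 5s
```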
Inactive stream collection
In live capture mode, memory consumption can grow continuously in some situations, for instance if TCP connections aren't closed properly. To prevent this, an inactive stream collector runs every 2 minutes by default and drops streams that have been inactive (that is, with no packets exchanged) for longer than 1 minute. These values can be changed with the `--stream-collector-timeout` and `--stream-collector-interval` arguments.
To disable the inactive stream collector:
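The exact mechanism for disabling the collector is not stated here; one plausible sketch is to pass a zero interval, on the assumption that `0` acts as an off switch. Verify against `yanadump --help` before relying on this:

```shell
# Assumption: a zero interval disables the periodic collector
$ ./yanadump -i interface0 --stream-collector-interval 0
```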
To run the inactive stream collector every 4 minutes and drop streams inactive for longer than 30 seconds:
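A sketch, assuming the duration syntax `4m`/`30s` is accepted for these arguments:

```shell
# Run the collector every 4 minutes; drop streams idle for more than 30 seconds
$ ./yanadump -i interface0 --stream-collector-interval 4m --stream-collector-timeout 30s
```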
Yanadump traffic mirroring strategy
Using cloud-native traffic mirroring, `yanadump` extends monitoring to cloud assets, enabling seamless coverage in hybrid environments.
By analyzing traffic from virtual instances, containers, and other cloud resources, it centralizes monitoring for both on-premises and cloud networks, delivering consistent security insights and comprehensive visibility across the entire enterprise infrastructure.
Yanadump implementation plan
We recommend strategically deploying `yanadump` at key aggregation points, such as core routers or switches, rather than on individual endpoints. This placement enables it to efficiently monitor traffic from multiple sources, providing broad visibility with minimal deployment effort.
- Assess traffic mirroring and forwarding options. Identify current traffic mirroring, forwarding, and monitoring configurations (such as SPAN ports, TAPs, AWS VPC Traffic Mirroring, or Azure Network Watcher) across your on-premises and cloud environments. Aligning with current configurations enables centralized data collection and streamlines analysis across the network.
- Identify key network aggregation points. Locate core routers, switches, or other points where traffic from multiple endpoints converges. These will be prioritized for `yanadump` deployment, providing maximum visibility from minimal infrastructure.
- Select critical cloud assets. Identify essential cloud resources, such as virtual machines and containers, for monitoring. Configure traffic mirroring to forward traffic from these cloud assets to `yanadump`, ensuring a unified view across hybrid environments.
- Deploy yanadump at aggregation points. Install `yanadump` at selected on-premises convergence points to capture and analyze aggregated network traffic efficiently. This placement enables `yanadump` to monitor multiple sources with minimal deployment footprint.
- Enable cloud traffic mirroring. For comprehensive hybrid network coverage, configure traffic mirroring for the selected cloud assets to direct relevant traffic to `yanadump` for analysis. This setup allows AQtive Guard to receive both on-premises and cloud data in one central location.
- Automatic analysis. Set `yanadump` to upload its data to AQtive Guard. This streaming design allows for time-based cryptographic insights without large data storage requirements, keeping analysis efficient.
Refer to the GCP packet mirroring tutorial for an example of configuring traffic mirroring on Google Cloud Platform.
Important
To ensure proper analysis in AQtive Guard, PCAP network captures must include TCP handshake packets (SYN/SYN-ACK) and, if applicable, RST packets. If your network device pre-filters traffic or limits captured data (for example, truncating after a certain number of bytes), verify that these critical packets are retained. Omitting them prevents tools like `yanadump` from identifying and analyzing TCP streams, resulting in incomplete or unusable analysis.