Back Story
So there I was with a pcap in front of me, converting oxygen to carbon dioxide while I scoured the internet for ways to visualize bandwidth consumption and protocol usage over the duration of a vulnerability scan. All I wanted was a simple visualization of transmit and receive bandwidth over the course of the scan. (Evidently, this is not as popular as I thought, or I lacked the Google-Fu to find a utility to help me in this endeavor.) This was all in an effort to validate that the vulnerability scan (operated by a third party) was honoring the bandwidth limitations we had outlined. Given the performance issues observed at the remote site, I suspected that the bandwidth threshold was being exceeded.
The following tools are available, but they either did not provide what I was looking for or did not run well on my VM. There are many more out there, some of which probably do exactly what I ended up accomplishing.
- TNV (Did not run on Ubuntu 12.04)
- Flowtag
- Wireshark [Statistics] > [IO Graph]
- RUMINT (made by the author of Security Data Visualization; great book, BTW! I recommend getting the Google Books version if you don't have $200 to throw around)
Setup
Below is how everything was set up. Most notably, our capture box was on location with the vulnerability scanner. We can see everything the scanner is sending out, but anything dropped by the remote site's WAN connection obviously won't make it back to our scanner.

Solution
Not one to give up easily, I resorted to using what I had available: tshark and R.

Tshark
Below is the command I used to squeeze the pertinent information out of the pcap. (192.168.1.0/24 is the remote site and 192.168.0.15 is the scanner.) I'll leave it to you to look at the tshark man page and display filter reference.
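The original command block didn't survive the export, so here is a reconstruction based on the field names and delimiter visible in the sample output below. The pcap and output filenames are placeholders, and depending on your tshark version the filter flag may be -Y instead of -R.

```shell
# Extract per-packet time, protocol, length, and endpoints as tilde-delimited text.
# -E header=y emits the field names as the first line.
tshark -r scan.pcap \
  -R "ip.addr == 192.168.0.15 && ip.addr == 192.168.1.0/24" \
  -T fields -E header=y -E 'separator=~' \
  -e frame.time -e ip.proto -e ip.len -e ip.src -e ip.dst > scan.csv
```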
This command outputs a tilde (~) delimited file containing the pertinent information. Why a tilde? Tildes work well for delimiting text fields since they are much rarer than commas. The frame.time field uses a comma in the date format. Using a tilde is just a good habit when delimiting anything with text.
Sample Output
frame.time~ip.proto~ip.len~ip.src~ip.dst
Aug 17, 2013 01:54:09.555897000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.555916000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.555928000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.555986000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.555995000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.556009000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.556012000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.556068000~6~40~192.168.0.15~192.168.1.38
Aug 17, 2013 01:54:09.556077000~6~40~192.168.0.15~192.168.1.38
There were over 1.8 million rows in this data set (one per packet). That immediately ruled out Excel, since a worksheet tops out at 1,048,576 rows.
How to work with a >1.8 million line CSV
I'm a fan of Perl, but looping through a CSV and doing calculations takes a lot of control structures and logic (something in short supply these days). Perl has some great graphing modules available (GDGraph is one), and I'm sure <insert your favorite language here> does as well. I'm not knocking any language, but R has excellent built-in functions that make graphing a snap.

Teaching the syntax of R is out of the scope of this post, but it is well worth the time spent getting familiar with it. Looking at the code below, you should be able to decipher the long and short of what's going on. If not, start Googling! I'm sure R gurus will have some criticisms here. I am by no means an expert in R, just a big fan, and I welcome any suggestions.
R loading the CSV
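The original snippet is missing here, so this is a minimal sketch of how the tilde-delimited tshark output can be loaded. The filename scan.csv is a placeholder, and the time format string is inferred from the sample output above (%OS handles the fractional seconds).

```r
# Read the tilde-delimited tshark output; header row supplies the column names
pcap <- read.csv("scan.csv", sep = "~", stringsAsFactors = FALSE)

# frame.time looks like "Aug 17, 2013 01:54:09.555897000"
pcap$time <- as.POSIXct(pcap$frame.time, format = "%b %d, %Y %H:%M:%OS")
```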
R Script and Output
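The script itself didn't make it into this version of the post either. Below is a sketch of the per-second aggregation that would produce a dataframe like the zagg mentioned below, assuming the CSV has been read into a dataframe pcap with a parsed POSIXct time column and the columns from the tshark output.

```r
# Tag each packet by direction relative to the scanner (192.168.0.15)
pcap$dir <- ifelse(pcap$ip.src == "192.168.0.15", "tx", "rx")

# Bucket packets into one-second bins, then sum bytes per bin per direction
pcap$sec <- as.POSIXct(trunc(pcap$time, "secs"))
zagg <- aggregate(ip.len ~ sec + dir, data = pcap, FUN = sum)
```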
Now we can do some great things with our zagg dataframe ... like graphing. You can see very clearly that our bandwidth limitations were not honored. Time to chase down the vendor and make them correct their settings.
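For reference, the graphing step can be sketched roughly as follows, assuming a per-second dataframe zagg with sec, dir, and ip.len columns as above. The 512 kbit/s cap drawn on the plot is a made-up example, not the actual contracted limit.

```r
tx <- zagg[zagg$dir == "tx", ]
rx <- zagg[zagg$dir == "rx", ]

# Convert bytes/sec to kbit/s and plot both directions over time
plot(tx$sec, tx$ip.len * 8 / 1000, type = "l", col = "blue",
     xlab = "Time", ylab = "kbit/s", main = "Scanner bandwidth")
lines(rx$sec, rx$ip.len * 8 / 1000, col = "red")
legend("topright", legend = c("Transmit", "Receive"),
       col = c("blue", "red"), lty = 1)

# A dashed line at the agreed cap makes violations jump out
abline(h = 512, lty = 2)  # hypothetical 512 kbit/s limit
```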
Obviously, this does not have to be limited to packet length. We can also get a protocol distribution, graph the number of ICMP error codes, etc. Let me know what you think of my first blog post below. I look forward to your feedback.