A collection of tools to assist with monitoring the performance and behaviour of your active firewall.
Your packets don’t go through as you expect: some are dropped, and some are delayed significantly.
File extract: ~/.vimrc
set guifont=9x15bold
set ruler
syntax on
set tabstop=4
set shiftwidth=4
filetype on
File extract: ~/.vim/filetypes.vim
augroup filetype
  au!
  au BufRead,BufNewFile *.c set filetype=c
  au BufRead,BufNewFile pf.* set filetype=pf
  au BufRead,BufNewFile pf.conf set filetype=pf
  au BufRead,BufNewFile pf.conf.* set filetype=pf
augroup END
[Ref: Hitting the PF state table limit]
The PF state table limit caps the number of connections the firewall will track, and thus limits the number of new connections the firewall will accept. You may have excess bandwidth available, but if there is no free capacity in the state table, then your firewall becomes a bottleneck.
The configured limits for state information are accessible through pfctl:
# pfctl -sm
states        hard limit    10000
src-nodes     hard limit    10000
frags         hard limit     5000
tables        hard limit     1000
table-entries hard limit   200000
The above limits pre-allocate memory for the defined structures, so that it is always available, and they also cap the growth of those data structures. If your firewall traffic exceeds the above settings, then performance will be affected.
It is now important to monitor the effects of your traffic on the counters for the above limits. The generic “-s info” output gives us clues as to where to further investigate potential bottlenecks in our firewall.
# pfctl -si
Status: Enabled for XXXXXXXXXXXXXXXX              Debug: Urgent

State Table                          Total             Rate
  current entries                       34
  searches                        96379206           15.2/s
  inserts                           726196            0.1/s
  removals                          726162            0.1/s
On the above gateway, connected to two infrequently used laptops, the current entries count is very low relative to the hard limit of 10000 above. Obviously, the current entries will fluctuate with use, and on a busier gateway may fluctuate significantly.
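If you want to see how the state count behaves over time, a simple shell loop will print it with a timestamp every minute. This is only a rough sketch; adjust the interval to taste.

# while sleep 60; do date; pfctl -si | grep "current entries"; done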
# systat states
From the manpage: systat(1)
states       Display pf states statistics, similar to the output of
             pfctl -s states.  Available orderings are: none, bytes,
             expiry, packets, age, source address, source port,
             destination address, destination port, rate, and peak
             columns.
[Ref: Hitting the PF state table limit, OpenBSD state hard-limit reached]
An important counter to monitor from pfctl -si is the “memory” counter. The same details should be available through systat pf.
From an active gateway linking our 6 sites, we get the following from a standard install, with no modification to the state tables.
# pfctl -si | grep memory
memory 209230 0.1/s
The counter highlights how often PF has failed to allocate from one of its pool(9) memory pools. The higher the number, the more frequently packets arriving at the firewall have most likely been dropped because one of the hard limits above was reached.
Our example above shows that the memory limit was hit 209,230 times.
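If the memory counter keeps climbing, the usual response is to raise the relevant limits in pf.conf and reload the ruleset. The values below are purely illustrative assumptions, not recommendations; pick numbers appropriate to your RAM and traffic.
File extract: /etc/pf.conf
# illustrative limits only -- size these to your own memory and traffic
set limit states 100000
set limit frags 10000
set limit table-entries 400000
Reload with pfctl -f /etc/pf.conf and confirm the new limits with pfctl -sm.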
[Ref: States by rule, pfctl]
Sometimes you want to observe the effects of a single rule on the ‘states’ it creates and maintains. Documented since OpenBSD 4.9 is the [-R id] option for showing states.
# pfctl -ss -R rule-number
Snippets from the manpage (since OpenBSD 4.9)
pfctl ... [-s modifier [-R id]]

-s modifier
        Show the filter parameters specified by modifier (may be
        abbreviated):

        -s states
                Show the contents of the state table.  If -R id is
                specified as well, only states created by the rule
                with the specified numeric ID are shown.
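To find the numeric ID to pass to -R, list the loaded ruleset verbosely; each rule is printed with an @N prefix, which is the number -R expects. The two rules below are only an illustration of the output format, not taken from a real configuration.

# pfctl -vvsr | grep '^@'
@0 block drop all
@1 pass in on em0 proto tcp from any to any port 22
# pfctl -ss -R 1

The last command then lists only the states created by rule 1.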
The next step is to review the kernel memory allocations, using vmstat. To narrow our search down to the effects on the pf state table, we check the entry for pfstatepl.
Below, we grab the lines containing state or Fail (so we also get the column headers).
# vmstat -m | grep -E "state|Fail"
Name           Size  Requests    Fail  InUse  Pgreq  Pgrel  Npage  Hiwat  Minpg  Maxpg  Idle
pfstatepl       296  213123877 209235   5075   1050      0   1050   1050      0   2308   526
pfstatekeypl
pfstateitempl
pfstatepl is the label for the memory pool allocated for struct pf_state (see /usr/src/sys/net/pf_ioctl.c). The failures do seem to be significant.
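Since both the pool Fail column and the pfctl “memory” counter count failed state allocations, they should track each other closely. A rough sketch for putting the two side by side (the awk field numbers follow the column layouts shown above):

# vmstat -m | awk '/pfstatepl/ {print "pool fails:", $4}'
# pfctl -si | awk '/memory/ {print "pf memory counter:", $2}'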
From the manpage: vmstat(8)
vmstat - report statistics about kernel activities

-m      Report on the usage of kernel dynamic memory listed first by
        size of allocation and then by type of usage.
[Ref: pf question about tables derived from interface groups, pfctl]
From the manpage: pfctl
-g Include output helpful for debugging.
The -g command-line option displays additional debug information. One useful case is seeing hidden table entries, which can help in identifying more about your configuration.
Combining -g with -sT will show all the tables, including “internal” tables used by PF.
# pfctl -g -sT
bnx0:network
carp0:0
carp1
carp1:0
carp2
The display of “internal/hidden” tables is very interesting, and helps us answer questions we’ve always had but never really considered asking, such as which addresses PF has associated with an interface or interface group table.
To access the ‘internal/hidden’ table entries, you have to use the hidden “_pf” anchor, such as:
# pfctl -a _pf -T show -t carp0:0
   192.168.0.1
   IPv6-Address
It’s nice to have PF queues, but how do you know that your theory is actually operational? systat queues displays a screen of your queues and statistics relating to their use.
# systat queues
From the manpage: systat(1)
queues Display statistics about the active altq(9) queues, similar to the output of pfctl -s queue
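If you would rather have a one-shot, non-interactive listing of the same queue statistics, for example to capture in a log, pfctl can print them directly:

# pfctl -vsq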
The OpenBSD network stack has been tuned by the OpenBSD developers. If, after careful analysis of your network performance, you determine there is a bottleneck, please attempt to understand how the various knobs relate to each other before tweaking them. After all, it works better for your system if you understand why features work.
# pfctl -vvsi | grep congestion
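The sort of line to expect is sketched below; the counter name is real, but the figures are only placeholders. A value that keeps climbing indicates that PF is detecting congestion on the interface input queue.

  congestion                           0            0.0/s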
The network devices maintain quite a number of counters which we can query to get an overview of some of the network traffic. One issue that is important to keep track of is dropped packets.
# netstat -s | grep -E "(output packets dropped|:$)" | grep -v " .*:"
ip:
        XY output packets dropped due to no bufs, etc.
...
ip6:
        MN output packets dropped due to no bufs, etc.
To find out what we think is the cause of the dropped packets:
# netstat -s | grep -E "(dropped|:$)" | grep -v " .*:"
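The exact counter names vary a little between releases, but the output will contain lines of roughly this sort, which are the ones worth watching (the values here are placeholders):

ip:
        XY fragments dropped after timeout
        XY output packets dropped due to no bufs, etc.
udp:
        XY dropped due to full socket buffers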
Refer to src:/usr/include/net/if.h for definitions of the above.
/*
 * On input, each interface unwraps the data received by it, and either
 * places it on the input queue of an internetwork datagram routine
 * and posts the associated software interrupt, or passes the datagram
 * to a raw packet input routine.
 */
Another useful iteration of netstat is:
# netstat -s -p carp
carp:
        xy packets received (IPv4)
        xy packets received (IPv6)
        4 transitions to master
From the manpage: netstat(1)
-p protocol
        Restrict the output to protocol, which is either a well-known
        name for a protocol or an alias for it.  Some protocol names
        and aliases are listed in the file /etc/protocols.  The
        program will complain if protocol is unknown.  If the -s
        option is specified, the per-protocol statistics are
        displayed.  Otherwise the states of the matching sockets are
        shown.
Interesting information from the output is the number of transitions to master.
# netstat -m
From the manpage: netstat(1)
-m Show statistics recorded by the memory management routines (the network manages a private pool of memory buffers).
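The lines of most interest for spotting mbuf exhaustion are the failure counters near the end of that output, for example (the values here are placeholders):

XY requests for memory denied
XY requests for memory delayed
XY calls to protocol drain routines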
To take a look at how our network interface input queue is doing:
# sysctl net.inet.ip.ifq
net.inet.ip.ifq.len=
net.inet.ip.ifq.maxlen=
net.inet.ip.ifq.drops=
This gives us the current length, the maximum length, and the number of packets dropped since the queue was reset.
/* sysctl for ifq (per-protocol packet input queue variant of ifqueue) */
Refer to src:/usr/include/net/if.h for definitions of the above.
/*
* Structure defining a queue for a network interface.
* XXX keep in sync with struct ifaltq
*/
Increasing net.inet.ip.ifq.maxlen can be beneficial if you are seeing an increasing value for net.inet.ip.ifq.congestion. Note that the congestion value is returned as a signed int, so once it overflows it shows up as a negative number; a negative value that keeps growing indicates things are even worse.
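A minimal sketch of raising the queue length at run time and making the change survive a reboot; 512 is only an illustrative value, not a recommendation:

# sysctl net.inet.ip.ifq.maxlen=512
# echo 'net.inet.ip.ifq.maxlen=512' >> /etc/sysctl.conf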
# netstat -i
From the manpage: netstat(8)
Show the state of interfaces which have been auto-configured (interfaces statically configured into a system but not located at boot-time are not shown).
# netstat -I$interface
From the manpage: netstat(8)
Show information about the specified interface;
From the ports: net/iftop
# iftop -ni $interface
Some keyboard shortcuts
t => toggle traffic display mode
s => source IP resolution
p => port resolution
S => display source port
D => display destination port
d => destination IP resolution
P => pause the display
h => keyboard shortcut help
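iftop also accepts a pcap-style filter expression, which is handy when you only care about one host or service. The interface name and addresses below are assumptions for illustration only:

# iftop -ni em0 -f 'host 192.0.2.1'
# iftop -ni em0 -f 'port 443'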