Frequency Spectrums

The Bands

Way back in 1947, the International Telecommunication Union (ITU) reserved a band of frequencies for what are commonly referred to as Industrial, Scientific, and Medical (ISM) uses other than telecommunications. Those devices include things like microwave ovens, which can create interference for radio-based telecommunications. Basically, these types of devices were so noisy that the ITU gave them their own frequency band. Since anything operating in the ISM band must tolerate interference from ISM devices, it was an easy piece of spectrum to open up for unlicensed use. The change was first proposed in 1980 and finally authorized in 1985, when the FCC began to allow the use of the unlicensed ISM bands for communications purposes.

Two of the ISM ranges commonly used for wireless networks are 2.400 – 2.500 GHz and 5.725 – 5.875 GHz. In addition to the ISM bands, there are also the UNII bands in the United States. The FCC refers to the 5 GHz band as the Unlicensed National Information Infrastructure (U-NII) band. The UNII bands are further broken down into four ranges:
UNII-1 from 5.150 GHz to 5.250 GHz, aka Lower Band
UNII-2 from 5.250 GHz to 5.350 GHz, aka Middle Band
UNII-2 Extended from 5.470 GHz to 5.725 GHz, aka H Band
UNII-3 from 5.725 GHz to 5.825 GHz, aka Upper Band

The Channels

In the US, the 2.4 GHz ISM band is broken down into 22 MHz wide channels whose center frequencies are separated by 5 MHz. With those rules we get 11 available channels in the US; however, most of them overlap one another. To avoid interference we should stick to the non-overlapping channels, which means using channels 1, 6, and 11 in the US.
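
To make the overlap easier to see, here is a quick shell loop (my own back-of-the-napkin illustration, not from any standard) that prints where each 2.4 GHz channel sits. Channel 1 is centered at 2412 MHz, each subsequent channel sits 5 MHz higher, and each channel occupies roughly 22 MHz (center +/- 11 MHz):

# Print the center frequency and approximate edges of each US 2.4 GHz channel.
for ch in $(seq 1 11); do
  center=$((2412 + 5 * (ch - 1)))
  echo "Channel ${ch}: center ${center} MHz, occupies $((center - 11))-$((center + 11)) MHz"
done

Running it shows that 1, 6, and 11 are the only set of three channels whose edges never touch.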

That seems kind of strange, doesn't it? With so many channels available, why are we "limited" to only three? If two access points use channels that are different but too close together, they suffer overlapping-channel interference. Basically, when devices are on the same channel they can hear each other, know how long each transmission will last, and stay silent for that duration to limit collisions. When devices are separated by only two or three channels they can still collide, but they can no longer hear the messages stating how long the other devices will be transmitting. That is why this overlapping (adjacent-channel) interference is much worse than co-channel interference: the radios can at least coordinate around co-channel interference.

The 5 GHz spectrum is broken down into the different U-NII bands listed earlier. The channels are 20 MHz wide and are considered non-interfering, though there is some slight overlap at the edges of the RF spectrum. This gives us roughly 25 channels available by default, which makes the 5 GHz band much more attractive for wireless networks.

Summary

This article was meant to be a brief introduction to the frequencies, bands, and channels which are used in 802.11 wireless networks. I will dig into some of these topics in greater detail, possibly touching on encoding mechanisms and the various 802.11 standards. If you found any of this information useful, please drop me a comment!

Getting nfacctd to InfluxDB via Curl

I wanted to highlight the process I used to get Netflow data gathered by nfacctd imported into InfluxDB. I found very little of this information online, so I'm sharing it here as my own notes; if someone else needs it, that's great too.

The first step in my process was to get some temporary data coming in from nfacctd. To do this I set up a config file that writes to the memory plugin, with debug mode enabled and daemonize disabled. My nfacctd-test.conf file looks like:

!
debug: true
daemonize: false
plugins: memory
aggregate: src_host, dst_host, timestamp_start, timestamp_end, src_port, dst_port, proto, tos, tcpflags
nfacctd_port: 2100
!
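
With that file in place, starting the collector in the foreground is just a matter of pointing nfacctd at the config with its -f flag; the path below is simply where I would keep the file, so adjust to taste:

# Run the collector in the foreground with the test config.
nfacctd -f /etc/pmacct/nfacctd-test.conf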

From there I opened another SSH session and used the pmacct client to see what my traffic would look like:

robert@debian-netflow:~$ pmacct -s -e
SRC_IP           DST_IP           SRC_PORT  DST_PORT  TCP_FLAGS  PROTOCOL    TOS    TIMESTAMP_START                TIMESTAMP_END                  PACKETS               BYTES
192.168.1.100    155.70.42.251    52579     443       26         tcp         0      2016-11-21 06:24:31.0          2016-11-22 04:23:25.0          6                     400

For a total of: 1 entries

That line may be a little too wide to display in the browser, but it helped me visualize what I was working with as I set up my awk statements later. I also made sure to use the -e flag with the pmacct client so I wouldn't import records multiple times into InfluxDB.

The first thing I did was use sed to remove the first and last lines, and then awk to print a test line with some of the values:

pmacct -s -e | \
sed '1d;$d' | \
awk '{print "SRC_IP value="$1" DST_IP value="$2" BYTES value="$13}'

Which gave me a result of:

SRC_IP value=192.168.1.107 DST_IP value=8.8.8.8 BYTES value=90
SRC_IP value= DST_IP value= BYTES value=

So I added another sed pass to remove the blank line that previously sat just above the last line:

pmacct -s -e | \
sed '1d;$d' | \
sed '/^\s*$/d' | \
awk '{print "SRC_IP value="$1" DST_IP value="$2" BYTES value="$13}'

The next thing I wanted to do was pass my Netflow timestamp to InfluxDB. To do this I had to reformat the timestamp value using gsub and then convert that string into a Unix timestamp with the mktime() function of gawk. (I had to install gawk to make this work on my Debian machine).

pmacct -s -e | \
sed '1d;$d' | \
sed '/^\s*$/d' | \
awk '{gsub(/[-:]/, FS); print "traffic,src_ip="$1",dst_ip="$2" value="$21" "mktime($14" "$15" "$16" "$17" "$18" "$19)}'

The final step was to pipe this output to curl and write it to the InfluxDB HTTP API. I also went ahead and added the variables for the rest of the Netflow tags I'm capturing in nfacctd.
When I first attempted this I noticed my timestamps were not coming through correctly. After a little research I discovered that the mktime() function was creating a timestamp with seconds precision while InfluxDB was expecting nanosecond precision. For now I've just adjusted the curl statement to specify a precision of seconds:

pmacct -s -e | \
sed '1d;$d' | \
sed '/^\s*$/d' | \
awk '{gsub(/[-:]/, FS); print "traffic,src_ip="$1",dst_ip="$2",src_port="$3",dst_port="$4",tcp_flags="$5",proto="$6",tos="$7" value="$21" "mktime($14" "$15" "$16" "$17" "$18" "$19)}' | \
curl -i -XPOST 'http://192.168.1.73:8086/write?db=nfacctd&precision=s' --data-binary @-
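
This isn't part of the pipeline itself, but a quick sanity check that the points actually landed is to read a few of them back through InfluxDB's query endpoint (a successful write also comes back as an HTTP 204 No Content, which is why I keep the -i flag on curl):

# Pull the most recent points back out of the 'traffic' measurement.
curl -G 'http://192.168.1.73:8086/query?db=nfacctd' \
  --data-urlencode 'q=SELECT * FROM traffic ORDER BY time DESC LIMIT 5'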

So for now I'm going to pause here and get some Grafana charts set up. I'll also come back later to tweak nfacctd to run as a daemon and to get the curl script running on a regular schedule. That's for another day.

Netflow via nfacctd

I've been experimenting with some network management systems at home, and one piece that I keep getting hung up on is Netflow. I think there is great value in being able to see what type of traffic is leaving (or entering) your network, even at a macro scale. To test these systems I've been using my home Juniper SRX100 and various open source solutions to try to grab this Netflow data and do something useful with it. I've discovered that grabbing the data isn't particularly difficult, but making it useful has been extremely difficult.

I'm currently using nfacctd, from the pmacct project. Within the pmacct project there is pmacctd, which can grab traffic via libpcap and then aggregate it to make it useful for analysis. There is also nfacctd, which is the daemon that can listen for and process Netflow/sFlow/IPFIX data. As far as my research has found, the pmacct project is the most actively supported and developed open source option.

It is quite simple to set up nfacctd; however, there are no front ends for this data. There is a plethora of plugins available to export the data, everything from an in-memory database to MySQL, Kafka, or AMQP. I've tried building some custom charts in PHP against the MySQL data, but I'm not a developer and that turned out to be an inefficient use of my time. My latest attempt has been to get the data into InfluxDB, which I can then graph with Grafana. So far this appears to be the most promising solution.
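
As a rough illustration of how little changes between back ends, pointing the collector at MySQL instead of the in-memory table is mostly a matter of swapping the plugins line and adding the sql_* directives. The snippet below is a sketch from memory, so double-check the directive names against pmacct's CONFIG-KEYS documentation:

!
plugins: mysql
aggregate: src_host, dst_host, src_port, dst_port, proto
nfacctd_port: 2100
sql_host: 127.0.0.1
sql_db: pmacct
sql_table: acct
sql_user: pmacct
sql_passwd: changeme
sql_refresh_time: 60
!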

As I complete stages in this project I plan on building documentation and sharing it via my Projects page. So far I’ve done the base Debian install and the simple InfluxDB install. My next step is to create a bash script that will use the pmacct client and curl to send the data to the InfluxDB HTTP API.

Installing Cisco VIRL using ESXi Host Client

I recently gave away my last Windows PC and vowed never to return. As a result I'm left with a MacBook Air for our general home use. I've wanted to get a VIRL lab set up since I first heard about it; I even purchased some modest hardware to use as a host. However, I was hesitant to proceed with VIRL because I was concerned about being able to manage my lone ESXi host without a Windows PC for the vSphere client. I recently learned about a great new tool to manage standalone ESXi servers: the ESXi Embedded Host Client.

The Host Client is a VIB that you install directly on your ESXi host, which then offers a web interface to manage the host. You're able to configure most items on the host, access VM consoles, and view some limited performance monitoring. It really is a great tool, although it doesn't have the full functionality of the vSphere client. It is also only a Fling from VMware Labs, so its future is not certain. With this new tool I decided to plow ahead and get my VIRL lab up and running.

The first issue I ran into was getting the ESXi .iso file onto a bootable USB drive, since my host hardware does not have an optical drive (neither does my MBA). Disk Utility on OS X was recently changed with El Capitan, so most of the how-to articles I found didn't apply or wouldn't fully work. I finally found a working solution from the Mac Repairs London blog. I've summarized it here for posterity.

To get started I first installed ESXi (I used version 6.0 Update 1). After the installation completed I configured a static IP and enabled SSH. I then proceeded to install the ESXi Host Client VIB; the instructions are on the Host Client website. I used the SCP + SSH option to copy the file over and then ran the esxcli commands to install it. Now I'm able to view a web interface for the host.
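
The exact VIB file name changes between Host Client releases, but the install itself boils down to copying the file to the host and running a single esxcli command, along these lines (the file name and host address below are placeholders):

# Copy the Host Client VIB to the ESXi host, then install it over SSH.
scp esxui-signed.vib root@esxi-host:/tmp/
ssh root@esxi-host esxcli software vib install -v /tmp/esxui-signed.vib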

The next step was to proceed with the VIRL installation. When purchasing my license I selected the VM option. The VIRL website has instructions for importing the OVA file using either the vSphere Client or the vSphere Web Client. I was able to follow the instructions for the vSphere client with only a few slight modifications:

Using the ESXi Host Client I was able to build the required port groups (Flat, Flat1, SNAT, INT); however, I had no option to enable promiscuous mode. I dug around the esxcli documentation and found out how to set this via SSH. I connected to the host over SSH and issued the following commands (after creating the port groups in the Host Client):

esxcli network vswitch standard portgroup policy security set -o true -p Flat
esxcli network vswitch standard portgroup policy security set -o true -p Flat1
esxcli network vswitch standard portgroup policy security set -o true -p SNAT
esxcli network vswitch standard portgroup policy security set -o true -p INT

After doing so, running esxcli network vswitch standard portgroup policy security get -p Flat (or checking the Host Client) will display the new, correct settings.

I then proceeded to import the .ova file per the VIRL instructions. However, after a few failed attempts I finally noticed a message in the Host Client stating that if you're attempting to import an .ova over 1 GB, you should extract and import the individual .ovf and .vmdk files. So in my OS X terminal I issued the command tar -zxvf virl.ova and then repeated the process, but instead of selecting the .ova file I selected the individually extracted files, and the import job completed successfully.
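
For reference, the extraction step is just untarring the OVA (an .ova is really a tar archive); the file names inside depend on the VIRL release, so the comment below is only illustrative:

# Unpack the OVA into its component files.
tar -zxvf virl.ova
# This leaves something like virl.ovf, virl.mf, and one or more virl-disk*.vmdk
# files in the current directory; point the import wizard at those instead.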

Now I’ve got an ESXi host and a VIRL lab all of which I can manage from my MBA!

Redistribute Connected

I've been building out a lab scenario for my next Pluralsight course and I discovered some interesting (or at least new to me) behavior with Cisco OSPF. I already knew about the different OSPF router types (ABR, ASBR), but it never really clicked for me until I saw it in real life.

The lab setup is very simple: a Juniper EX4200 connected to two Cisco 1721s running OSPF. Since I'm doing the labs on the Juniper EX4200, I preconfigured the Cisco routers with OSPF. I configured the L3 interfaces and a network statement for the physical interface and the RID loopback. I also configured an additional loopback interface so I could generate some extra routes toward the Juniper. I thought I would keep things simple, so I just added the 'redistribute connected' command under the OSPF configuration without adding more network statements. (This is me being lazy.)
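
For reference, the relevant piece of the Cisco config looked roughly like this. The interface addresses and process ID are recreated from memory, so treat them as illustrative, and I've added the subnets keyword here, which is typically needed for non-classful prefixes like a /32 to be redistributed:

interface Loopback0
 ip address 2.2.2.2 255.255.255.255
!
interface Loopback1
 ip address 192.168.10.1 255.255.255.255
!
interface FastEthernet0
 ip address 10.10.2.2 255.255.255.252
!
router ospf 1
 network 2.2.2.2 0.0.0.0 area 0
 network 10.10.2.0 0.0.0.3 area 0
 redistribute connected subnets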

When I logged into the EX4200 to verify the routes were working, I noticed that one of the loopback routes I had configured was showing up as an OSPF external route [OSPF/150]:
192.168.10.1/32    *[OSPF/150] 00:00:17, metric 20, tag 0
                     to 10.10.2.2 via ge-0/0/0.0
192.168.50.0/24    *[OSPF/150] 00:00:12, metric 20, tag 0
                     to 10.10.3.2 via ge-0/0/23.0

So I started digging into the Cisco router to see why the RID loopback, which had a network statement, was showing up as an OSPF internal route while the redistributed loopback was showing up as external. A quick show ip ospf command and I had my lightbulb moment:
Cisco-RTR#sh ip ospf 1
Routing Process "ospf 1" with ID 2.2.2.2
Supports only single TOS(TOS0) routes
Supports opaque LSA
Supports Link-local Signaling (LLS)
Supports area transit capability
It is an autonomous system boundary router
Redistributing External Routes from,
   connected
Initial SPF schedule delay 5000 msecs

That's when it hit me that redistributed routes are automatically advertised as external, which made perfect sense once I started thinking about it. The OSPF 'network' statement enables OSPF on the matching interface(s) and adds them to the topology. My redistributed loopback interface wasn't actually part of the OSPF topology; it was just a network being advertised into the topology. I guess I was thinking that since it was still a local interface on a router running OSPF, it wouldn't be considered 'external'. That's what I get for thinking. The ASBR's function is to redistribute routes from other sources into OSPF, even if that source is a directly connected interface.

When I was doing research into this behavior I found this Cisco link. I added a network statement to cover my additional loopback interface, which corrected the route preference on the Juniper EX4200, but I noticed that the Cisco router still considered itself an ASBR. So even if network statements cover all of the interfaces on the router, as long as 'redistribute connected' is configured the router will act as an ASBR.
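
For completeness, the fix was just one more network statement under the OSPF process, along these lines (192.168.10.1 being the loopback that had been showing up as external):

router ospf 1
 network 192.168.10.1 0.0.0.0 area 0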

This might be a ‘duh’ moment for most other engineers, and when I look back, it is. But it was a lightbulb moment for me to really understand the Type 5 LSA and the function of the ASBR in OSPF.

JNCIA-Junos Study Notes

I recently took (and passed) the JN0-101 JNCIA-Junos exam. I spent my time studying and taking notes, but when it gets to crunch time I like to condense my notes into a quick study sheet. I review this study sheet on the day of the exam and right before I walk into the testing center. These aren't complete notes; they're only a summary that's useful for last-minute studying and staying fresh. I've got a few notebooks in my office from various certifications, but this time I thought I would put my study sheet into an electronic format and share it with the world. A few thoughts on my notes first:

I like to write notes in the long hierarchy form because I am a visual creature. Every indentation level I see could be a new edit or set hierarchy. If I start in a particular hierarchy, I either call it out in my notes or specify the [edit] level just as you would see in the CLI. Sometimes my curly brackets do not align exactly with the configuration; it makes no sense to waste five lines just for curly brackets.

I use < and > to identify variable names.
I use '/' to identify multiple options.
If there is an optional command completion, I list it in '( )'.

You can download the PDF for free here:
JNCIA-Junos Notes

If you see any mistakes or something needs clarification, please comment or reach out to me. I always appreciate feedback.

EDIT 12-20-16: I’ve updated the download to a Dropbox link since it was still pointing to my old WordPress content. Please use the contacts below if you have any issues. I’d like to keep it up to date and relevant.

Bits on a Wire

Bits, frames, packets, streams, datagrams: we have many terms for what basically amounts to getting information across a transmission medium. However, I like details, and I think the terms should be used correctly. As somebody once told me, "Words have meanings!" So let's get a few things straight:

Bits – The last logical unit of data before the information is turned into an electrical or optical signal.

Frames – Layer 2 data units used to transfer data between adjacent network nodes. Provides frame synchronization (framing) and can provide error detection.

Packets – Used to establish the path that data travels from host to host. In the world of networking and the OSI model, this strictly refers to Layer 3 (IPv4 or IPv6).

Let's pause right here to make a special note about frames and packets and their uses. Frames are used at L2 to transfer data between adjacent nodes, while packets are used at L3 to transfer data end to end between hosts. What this means is that L2 frame headers (src/dst addresses) are rewritten at each node, while L3 packet headers (src/dst addresses) do not change.

Segments/Streams/Datagrams – Used by Layer 4 protocols to provide end-to-end communication for applications. Datagram is the term used when delivery, timing, or order are not guaranteed by the protocol, as with UDP. Streams and segments are used by protocols such as TCP to guarantee end-to-end, error-free communication. A TCP stream is a data stream of bytes, not individual messages. If this single flow of bytes is too large to cross the network in a single message, it is broken into smaller pieces; a TCP segment is an individual piece of the larger TCP stream. TCP streams offer many "services" for applications such as ordered delivery, guaranteed delivery, flow control, congestion avoidance, and even multiplexing.

As network engineers, it's our job to ensure that end-to-end communication is available for the higher layers of our protocol stacks. Now that we know the correct terms for how data is encapsulated so that it can be transported across the network, we should start to look at how this data fits across our network. How big are these TCP segments, datagrams, packets, and frames flowing across our network? At first it may look like this doesn't matter, but when a higher-level protocol tries to cram too much information into the pipes it can cause problems. Just picture connecting a garden hose to a fire hydrant. This is where MTU and MSS come into play.

MTU, Maximum Transmission Unit, is the maximum size of data that a network layer can forward. This means the MTU is (or can be) different at each layer (Ethernet MTU, IPv4 MTU). TCP uses MSS, Maximum Segment Size, to define the maximum amount of data it will include in an individual TCP segment of a TCP stream. We will talk about MSS in a moment; for now let's focus on MTU.

Differentiating between the physical interface MTU (PHY MTU) and the IP MTU can get a little confusing. The PHY MTU is the maximum allowed frame size not counting the frame header, in other words the payload size. The IP MTU is the maximum packet size allowed before fragmentation occurs. Increasing the PHY MTU will also increase the IP MTU; however, decreasing the IP MTU will not affect the interface's PHY MTU. Why does this matter? It matters in scenarios where we are adding additional headers, such as GRE or MPLS. To accommodate the extra protocol overhead of something like GRE, you could lower the IP MTU by 24 bytes to leave room for the GRE overhead without exceeding the PHY MTU.
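
For example, with a standard 1500-byte Ethernet PHY MTU, GRE adds a 4-byte GRE header plus a 20-byte outer IP header. Setting the tunnel's IP MTU to 1476 (1500 - 24) keeps the encapsulated packet within the physical MTU without touching the PHY MTU itself.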

TCP MSS is the maximum amount of data that TCP is willing to RECEIVE in a single segment. When a TCP connection is established between two hosts, they announce their MSS values to each other. It's interesting to note that TCP MSS isn't negotiated and the two values don't have to match.
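
As a concrete example, a host on a standard 1500-byte MTU Ethernet segment will typically announce an MSS of 1460 bytes: 1500 minus 20 bytes of IP header and 20 bytes of TCP header.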

I'm going to leave you with this little graphic I've borrowed, and I want to give full credit to @PacketLife's blog post on MTU Manipulation:

[MTU comparison graphic from the PacketLife post]

I may write some follow-on posts to talk about PMTUD or TCP Window Scaling, but for now I just wanted to get some definitions out of the way.