Building Blocks

Over the last year, I’ve been working on some automation efforts. I’ve had a few projects lately that involved data-gathering and inventory work. This type of work is tedious and boring, the exact type of stuff people like to automate. I’ve always been scripting and coding in the background. I remember one of my first computers ever, and the first thing I did with it was start writing code in Apple BASIC. So I’ve always kept a pulse on programming, but it sways in and out of my work agenda. With my latest projects I’ve decided to pick it back up.

I’ve had to blow the dust off some of my old scripts and update some notes. I’m finally getting to a place where I want to start sharing some pieces of the scripts I’ve been creating. I’m a proponent of not re-inventing the wheel, but at the same time if I don’t write the code myself I never really grasp what it’s doing. So what I’ve been doing lately is commenting my code in greater detail so that I can pull out what I need when building a new script.

I started out creating a script that is basically a search-and-replace function for a CSV file. I created this to help sanitize inventory lists. I had a list of IP addresses and I just wanted to generalize them. I could have accomplished the very same thing manually in Excel with Search+Replace, but then I wouldn’t have built on my scripting skills. Instead I started with a script that opens a CSV file, reads the data into an array, makes changes, and then writes the data into a new CSV file. Those are building blocks I can use for future scripts. I then went back and added some capability for command-line arguments and input validation. The script itself was more useful as a training exercise than as a functional tool. I think I’ve used it as a base for more scripts than I’ve actually run it as intended.
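
A minimal sketch of those building blocks looks something like this (the file paths and the replacement map are hypothetical; the actual script on GitHub is more elaborate):

```python
import csv

def sanitize_csv(in_path, out_path, replacements):
    """Read a CSV, swap any cell found in the replacements map, write a new CSV."""
    with open(in_path, newline="") as src:
        # Build the whole table in memory as a list of rows (the "array").
        rows = [
            [replacements.get(cell, cell) for cell in row]
            for row in csv.reader(src)
        ]
    with open(out_path, "w", newline="") as dst:
        csv.writer(dst).writerows(rows)
    return rows  # returned so the result is easy to inspect or reuse
```

From here it’s a short step to wrap the call with `sys.argv` handling and input validation, which is roughly the progression the original script followed.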

If you want to take a look at my CSV experiment check it out on GitHub.

Real quick, I want to offer a few tips to help get you started.
1. Don’t re-invent the wheel.
   You are not unique, and there is a very high chance that someone else has already tried and solved what you’re trying to do. Assimilate that code to help you out.

2. Use print to figure out what is happening in your code.
   As you move data through variables and iterations things can get confusing. You will also likely run into issues from using the wrong variable type, or from not knowing what type is returned by a function you called. To wade your way through this you can use print(type(x)) to print out the type of the variable you are using. Add prints at every step as you code, then go back and clean them up once your code works as desired.

3. Comment everything.
   Your code can be short, or it can be well understood. Do yourself a favor and comment everything. If coding isn’t your primary job, you’ll need to know why you did something when you pick your script back up in a year’s time. If you want to re-use something in a future script it will help to know exactly the purpose of the snippet of code you developed.
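
The print tip in action, using a made-up CSV row as the data being traced:

```python
# Hypothetical debugging of a CSV row as it moves through a script.
row = "sw1,10.1.1.1".split(",")

print(type(row))      # split() returns a list, not a string
print(len(row), row)  # how many fields did we actually get?

ip = row[1]
print(type(ip))       # indexing the list gives back a single string
```

Once the script behaves, those prints come out again, per the cleanup advice above.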


Vulnerability Tracking

Do you know how vulnerable the equipment you manage is to attack?

You manage a lot of equipment: disparate hardware types from multiple vendors. It’s messy, I get it. You are expected to maintain this equipment, and part of that duty is ensuring that you are plugging any security holes in order to limit the attack surface. This is one of those situations where ignorance is definitely not bliss. Vulnerability management is a system, and it is a system many organizations have not even established. If you do not have a system, consider creating your own, or at the very least start asking yourself questions to ensure that you are doing your best to secure your environment.

How are you tracking your vulnerabilities? Are you even tracking your vulnerabilities?

When a vulnerability is announced by a vendor, how are you tracking the vendor’s status and fix? How are you documenting the vulnerability itself? My suggestion here is to begin documenting these announced vulnerabilities so that you can perform an analysis of each one and add your findings to the documentation. You should then also be able to track the remediation (patches, upgrades, workarounds) or document the risk acceptance. Consider this level of documentation something to strive for. Some people use their ticketing system, some use dedicated or custom tools, and some poor souls use a spreadsheet. It’s hard to consume and process all of the security announcements occurring today, but if you can just get started then you have something to build on and improve.
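
As a sketch of what that documentation could look like if you script it, here is a hypothetical record structure (every field name and the CVE number are my own illustration, not a standard schema):

```python
from dataclasses import dataclass, field

@dataclass
class VulnRecord:
    advisory_id: str                  # vendor advisory or CVE identifier
    affected: list = field(default_factory=list)  # equipment from your inventory
    analysis: str = ""                # your findings after reviewing it
    remediation: str = ""             # patch, upgrade, or workaround applied
    risk_accepted: bool = False       # documented acceptance instead of a fix

# Hypothetical entry for a single announcement.
record = VulnRecord(
    "CVE-2024-0000",
    affected=["core-sw1"],
    analysis="Not exposed; management interface is on an isolated VLAN",
)
```

Even a structure this small captures the analysis, the remediation, and the risk-acceptance decision in one place, which is the minimum the section above argues for.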

Automate What You Can

The next level of vulnerability management is to create automation so that you can quickly perform the analysis. You can then re-run the checks as you add equipment to ensure the vulnerabilities are not reintroduced. There are lots of options here: Group Policies, config auditors, asset management tools, Ansible playbooks. The possibilities are endless, and you will need to find the right tool for the job, possibly using more than one.

Just Start

At the very least don’t be afraid to start by crafting your process. You can always refine the process and bolt onto it as you grow.


IEEE 802.11n was proposed in 2009 to help scale throughput of WLANs using a few different techniques known as high throughput (HT) in either the 2.4 or the 5 GHz band. 802.11n was designed to be backwards compatible with the OFDM used in the 802.11g and 802.11a standards. The primary advantage of 802.11n was its ability to leverage multiple radios. Instead of using a single Tx/Rx radio pair (or radio chain), 802.11n devices could use multiple antennas, transmitters, and receivers; a system known as multiple-input, multiple-output (MIMO). The transmitters and receivers are described in the format TxR, and 802.11n requires at least two radio chains (2×2) and supports up to a maximum of four (4×4).

In addition to the MIMO functionality, 802.11n introduced a few features to improve throughput including:

  • Channel Aggregation
  • Spatial Multiplexing (SM)
  • MAC Layer Efficiency

802.11n also introduced some features to improve the reliability of RF signals:

  • Transmit Beam Forming (TxBF)
  • Maximal-Ratio Combining (MRC)

Channel Aggregation

The 802.11n amendment first increased 20 MHz channel throughput by increasing the number of data sub-carriers in OFDM from 48 to 52. 802.11n then allows the use of either a single 20 MHz channel or a single 40 MHz channel. The aggregated channels always bond two adjacent 20 MHz channels. By bonding the channels, the quiet space between the two original channels is freed up for additional bandwidth, while the quiet space on each end is left alone to separate the 40 MHz channels. This increases the number of data sub-carriers from 52 to 108.

When the channels are aggregated it also lowers the total number of available channels. Channel aggregation shrinks the 5 GHz band from 23 non-overlapping 20 MHz channels to 11 non-overlapping 40 MHz channels. Since the 2.4 GHz band only has 3 non-overlapping channels it is not recommended nor usually attempted on that band.
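
The sub-carrier numbers above can be sanity-checked quickly; bonding better than doubles the data sub-carriers precisely because the reclaimed quiet space between the two original channels contributes extra ones:

```python
# Data sub-carrier counts described in the 802.11n channel aggregation text.
subcarriers_20mhz = 52    # single 20 MHz channel under 802.11n
subcarriers_40mhz = 108   # bonded 40 MHz channel

# More than 2x because the gap between the two channels is reclaimed.
gain = subcarriers_40mhz / subcarriers_20mhz
print(round(gain, 2))  # 2.08
```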

Spatial Multiplexing

Channel aggregation allows for increased throughput by increasing the channel width that can be used by a single radio chain. With the advent of MIMO, the 802.11n device could have multiple radio chains waiting to be used. To further increase throughput we can multiplex, or distribute, the data across two or more radio chains, while still operating on the same channel. This is known as spatial multiplexing because the radio chains are separated by spatial diversity (they are predictably spaced out).

The spatial diversity will ultimately cause slight changes in each signal as it makes its way across the free space to the receiver. If the radio signals don’t all start at the same location, they naturally take different paths. The 802.11n devices can also distribute the data across the multiple radio chains in a known fashion. These separate data streams can be processed as spatial streams and demultiplexed on the receiving end. The number of spatial streams a device can support is designated with a colon at the end of the MIMO designation: a 3×3:2 MIMO device has 3 transmitters, 3 receivers, and supports two unique spatial streams. Since not all devices in an environment may support the same number of spatial streams, capabilities are advertised and the lowest common denominator is negotiated prior to transmitting data.
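
The TxR:S notation can be captured in a toy parser; the designations and the fallback of assuming streams equal the smaller radio count are my own illustration, not anything defined by the standard:

```python
def parse_mimo(designation):
    """Parse a TxR:S designation such as '3x3:2' into (tx, rx, streams)."""
    radios, _, streams = designation.partition(":")
    tx, rx = (int(n) for n in radios.split("x"))
    # If no stream count is given, assume the smaller radio count (illustrative).
    return tx, rx, int(streams) if streams else min(tx, rx)

# Devices advertise capabilities; the lowest common denominator wins.
ap_streams = parse_mimo("3x3:2")[2]
client_streams = parse_mimo("2x2:1")[2]
print(min(ap_streams, client_streams))  # 1
```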

MAC Layer Efficiency

Additional improvements in 802.11n include block acknowledgement. In traditional 802.11 networks each frame of data transmitted must be acknowledged by the receiver. If no acknowledgement is received, it is assumed that the receiver did not get the frame and it must be resent. Acknowledging each frame wastes communication time. With 802.11n, all the data frames can be transmitted in one burst, and only one acknowledgement is expected from the receiver. This is more efficient and helps increase throughput.

With 802.11, as OFDM symbols are transmitted they can take different paths to the receiver. If two symbols arrive too close together they can actually interfere with each other, a problem known as intersymbol interference (ISI). The 802.11 standard requires a guard interval of 800 nanoseconds between transmissions to alleviate this problem. With 802.11n devices you can configure this interval down to 400 nanoseconds. Doing so increases throughput, since less time is wasted in the guard interval, but it puts you at greater risk of data corruption.

Transmit Beamforming

As data is transmitted across the multiple radio chains of a MIMO device, the signals will ultimately take separate paths to the receiver. To help ensure that the data arrives at the receiver in the same relative time frame, transmit beamforming (TxBF) is used. Transmit beamforming adjusts the phase of each signal as it leaves the transmitter so that as the signals travel across the free space they arrive at relatively the same time. The receiver sends TxBF data back as feedback so that the transmitter can continually track the required adjustments and send focused transmissions to each receiver dynamically.

Maximal-Ratio Combining

If you’re familiar with digital photography, you may be familiar with the concept of HDR, or High Dynamic Range. With HDR, multiple images are combined to provide a single image with the best contrast. Maximal-Ratio Combining (MRC) does something very similar with RF signals. It takes multiple received copies of a signal and combines them to provide one signal with an improved Signal-to-Noise Ratio and better receiver sensitivity.

OFDM Deep Dive

With DSSS we spread the chips of a single data stream into one wide 22 MHz channel, and because of the constant chip rate of 11 MHz we are restricted to 11 Mbps of data throughput. Orthogonal Frequency-Division Multiplexing (OFDM), on the other hand, sends data bits in parallel over multiple frequencies, all contained within a single 20 MHz channel. Each channel is divided into 64 sub-carriers (hence the Frequency-Division) which are spaced 312.5 kHz apart. There are 3 different types of sub-carriers:

  • Guard – 12 sub-carriers which are used to separate each channel and help receivers lock onto a channel. These actually aren’t transmitted but stay silent as spacing.
  • Pilot – 4 sub-carriers which are equally spaced and always transmitted to help receivers determine the noise state of the channel.
  • Data – 48 sub-carriers which are devoted to carrying data.
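
The sub-carrier accounting above adds up neatly to the channel width:

```python
spacing_khz = 312.5            # spacing between adjacent sub-carriers
guard, pilot, data = 12, 4, 48 # the three sub-carrier types listed above
total = guard + pilot + data

print(total)                       # 64 sub-carriers per channel
print(total * spacing_khz / 1000)  # 20.0 MHz of channel width
```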

Since OFDM transmits data in parallel it is able to achieve high aggregate throughput through its relatively low-throughput sub-carriers. Since the data is sent in parallel, we can also modify how much of it is unique data versus redundant data for error prevention. The coding schemes in OFDM are named using fractions to identify the ratio of data bits to total transmitted bits (the coding rate); BPSK 1/2 indicates that one half of the bits carry data and the other half are redundant. BPSK 3/4 therefore indicates that three-fourths of the bits carry data and only one-fourth are redundant.

At the lower speeds, BPSK modulation can be used with two different coding rates. OFDM with BPSK 1/2 results in 6 Mbps of throughput, and with BPSK 3/4 it achieves 9 Mbps. If we combine OFDM with QPSK 1/2 we can achieve 12 Mbps, and QPSK 3/4 can achieve 18 Mbps. If you recall the DSSS post, we introduced QPSK and the fact that it uses 2 binary bits to give four possible phase shifts. Therefore, to break through 18 Mbps we need additional modulation options.

Quadrature Amplitude Modulation (QAM) combines QPSK phase shifting with multiple amplitude levels to give an even greater number of modulation options. As an example, 16-QAM uses 2 bits for the QPSK modulation and an additional 2 bits for the amplitude, for a total of 4 bits used for modulation changes. 4 binary bits give us 16 unique modulation options (hence the name). The coding rates still apply when we move to QAM, so the names still carry a fraction indicating the ratio. The currently supported OFDM modulation options include:

  • OFDM QPSK 1/2 – 12 Mbps
  • OFDM QPSK 3/4 – 18 Mbps
  • OFDM 16-QAM 1/2 – 24 Mbps
  • OFDM 16-QAM 3/4 – 36 Mbps
  • OFDM 64-QAM 2/3 – 48 Mbps
  • OFDM 64-QAM 3/4 – 54 Mbps
  • OFDM 256-QAM 3/4 – 78 Mbps
  • OFDM 256-QAM 5/6 – 86 Mbps
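
Most of the rates in that list fall out of one simple formula, assuming the 4-microsecond OFDM symbol time of 802.11a/g and 48 data sub-carriers (the 256-QAM rows come from later amendments, which use 52 data sub-carriers per 20 MHz channel):

```python
def ofdm_rate_mbps(bits_per_subcarrier, coding_rate, data_subcarriers=48, symbol_us=4):
    """Data rate = data sub-carriers x bits per sub-carrier x coding rate / symbol time."""
    # Bits per symbol divided by microseconds per symbol gives Mbps directly.
    return data_subcarriers * bits_per_subcarrier * coding_rate / symbol_us

print(ofdm_rate_mbps(1, 1/2))   # BPSK 1/2   ->  6.0 Mbps
print(ofdm_rate_mbps(2, 3/4))   # QPSK 3/4   -> 18.0 Mbps
print(ofdm_rate_mbps(4, 1/2))   # 16-QAM 1/2 -> 24.0 Mbps
print(ofdm_rate_mbps(6, 3/4))   # 64-QAM 3/4 -> 54.0 Mbps
print(ofdm_rate_mbps(8, 3/4, data_subcarriers=52))  # 256-QAM 3/4 -> 78.0 Mbps
```

Note how each step up the list is just more bits per sub-carrier (BPSK 1, QPSK 2, 16-QAM 4, 64-QAM 6, 256-QAM 8) or a less redundant coding rate.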


The IEEE 802.11g amendment was introduced in 2003 and is also commonly called Extended Rate PHY (ERP) or ERP-OFDM. ERP is just another name for 802.11g in the 2.4 GHz band. Since 802.11g was based on OFDM, as opposed to the DSSS of the previous standard, the devices cannot directly understand each other’s RF signals. 802.11g was intended to be backwards compatible with legacy 802.11b devices by downgrading and using DSSS; however, the reverse is not true. To allow both OFDM and DSSS devices to coexist, a protection mechanism was included. When using 802.11g Protection Mode, a device sends a warning message with DSSS before transmitting its data with OFDM. Protection mode is enforced automatically if an 802.11b device is detected on the WLAN, and once that device leaves the network the protection is lifted. Since protection mode adds the additional DSSS warning messages, it greatly reduces network throughput.


So we’ve already covered 802.11b and 802.11g, why are we just now circling around to 802.11a (yes, they do go in order)? Actually, almost as soon as 802.11 was ratified, the need was recognized to limit interference. 802.11a was introduced in 1999, earlier in the same year as 802.11b; however, since migrating from 2.4 GHz to 5 GHz required new hardware, it was never widely adopted. IEEE 802.11a restricts devices to OFDM only and is based on channels that are 20 MHz wide. Since it only supported OFDM it was not backwards compatible with any earlier devices, and depending on the modulation scheme any of the supported data rates were available.

DSSS Deep Dive

1 Mbps DSSS

To achieve 1 Mbps throughput with DSSS, each bit of data is encoded into a sequence of 11 chips. This is called the Barker 11 code. In the Barker code, a 0 data bit is always represented as (10110111000) and a 1 data bit is represented as (01001000111). With these 11-chip symbols, up to 9 of the chips can be lost before the original data bit cannot be restored. To transmit each chip, Differential Binary Phase Shift Keying (DBPSK) modulation is used; binary being the key word in that scheme, since 1 or 0 gives us two options. With DBPSK the carrier signal is shifted, or rotated, depending on the bit: a 0 bit results in no change to the carrier signal, and a 1 bit rotates the signal 180 degrees so that it is suddenly upside down. DSSS always uses a chipping rate of 11 million chips per second, so when each symbol (original bit) contains 11 chips, we get a transmitted data rate of 1 Mbps.
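
A quick sketch of the Barker encoding and the resulting rate arithmetic (the chip strings are the two sequences given above):

```python
# Each data bit is spread into the fixed 11-chip Barker sequence.
BARKER = {0: "10110111000", 1: "01001000111"}

def barker_encode(bits):
    """Spread a list of data bits into a string of Barker chips."""
    return "".join(BARKER[b] for b in bits)

chips = barker_encode([1, 0])
print(len(chips))  # 22 chips for 2 data bits

# A constant 11 Mchip/s rate with 11 chips per symbol yields 1 Mbps.
chip_rate_mcps = 11
print(chip_rate_mcps / 11)  # 1.0 Mbps
```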

2 Mbps DSSS

To double our initial throughput we keep the original 11-chip Barker code, but this time we modulate the symbols using Differential Quadrature Phase Shift Keying (DQPSK); quadrature (4) being the key word in this scheme. With DQPSK two chips are modulated at a time, and since we have 2 binary bits that gives us 4 possible options:

  • 00 – The phase is not changed
  • 01 – The phase is rotated 90 degrees
  • 11 – The phase is rotated 180 degrees
  • 10 – The phase is rotated 270 degrees

Since the chips are modulated in pairs we are able to transmit twice as much data in the same amount of time compared to DBPSK, which gives DQPSK twice the throughput at 2 Mbps.
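
The phase table above can be sketched as a small accumulator (purely illustrative; real DQPSK rotates the carrier waveform itself, not strings of bits):

```python
# Phase shift applied for each pair of chips, per the table above.
DQPSK_SHIFT = {"00": 0, "01": 90, "11": 180, "10": 270}

def dqpsk_phases(chips):
    """Modulate chips two at a time, accumulating the carrier phase in degrees."""
    phase, phases = 0, []
    for i in range(0, len(chips), 2):
        phase = (phase + DQPSK_SHIFT[chips[i:i + 2]]) % 360
        phases.append(phase)
    return phases

print(dqpsk_phases("0011"))  # '00' keeps the phase, '11' then flips it
```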


As we briefly mentioned in our initial modulation post, IEEE 802.11-1997 was the first standard. It included FHSS and DSSS using either DBPSK or DQPSK in the 2.4 GHz band. The 802.11-1997 standard only supported Barker coding for the maximum throughput of 2 Mbps.

5.5 Mbps DSSS

To increase our throughput, Complementary Code Keying (CCK) was introduced to replace the Barker code. CCK takes 4 bits of original data at a time to create a unique 6-chip symbol. After the original bits are encoded, 2 more chips are added to the symbol to indicate the phase orientation per DQPSK, making a total symbol of 8 chips. So where Barker coding gave us a 1:11 coding ratio, CCK gives us 4:8. Given the steady chipping rate of 11 MHz with DSSS and each symbol containing 8 chips, we get a symbol rate of 1.375 MHz (11 MHz / 8). Since each symbol is based on 4 original data bits, we get an effective data rate of 5.5 Mbps (1.375 MHz × 4).

11 Mbps DSSS

By making an adjustment to the encoder, we can take 8 original data bits to create the 8-chip symbols. By doubling the amount of original data in each symbol we double the throughput. Since we’re still using 8-chip symbols and the constant 11 MHz chipping rate, we still have a symbol rate of 1.375 MHz, but with 8 data bits in each symbol we can now reach 11 Mbps (1.375 MHz × 8). Increasing the number of data bits per symbol means we lose some of the resiliency to recover information. While we’ve increased throughput, we are more sensitive to interference and therefore require a stronger, less noisy signal.
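
The 5.5 and 11 Mbps arithmetic above can be checked directly:

```python
chip_rate_mhz = 11    # DSSS always chips at 11 million chips per second
chips_per_symbol = 8  # CCK symbols are 8 chips long

symbol_rate_mhz = chip_rate_mhz / chips_per_symbol
print(symbol_rate_mhz)      # 1.375 million symbols per second

print(symbol_rate_mhz * 4)  # 5.5 Mbps with 4 data bits per symbol
print(symbol_rate_mhz * 8)  # 11.0 Mbps with 8 data bits per symbol
```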


IEEE 802.11b was introduced in 1999 and standardized the use of CCK, supporting a maximum throughput of 11 Mbps. Since 802.11b was based on DSSS and the 2.4 GHz band, it was also backwards compatible with the original standard, and devices could select their speed by simply changing the modulation or coding schemes.

RF Modulation and Standards

So we’ve talked about the frequency bands and transmit power, but how are these things utilized to actually carry our network data? Since computers communicate in binary bits, we have to be able to differentiate a 1 from a 0 on an RF signal. Since RF isn’t a closed circuit we can’t use on/off to signal 1 or 0; the only thing we can do is modify the RF signal in some way to make it slightly different. Modifying the RF signal to indicate the data it is carrying is known as modulation. Given the physical properties of an RF signal, modulation can only alter a few attributes of the signal. We can modify the frequency, but only slightly above or below the carrier frequency. We can modify the phase of the signal, which is its timing relative to the start of the cycle. Or we can modify the amplitude, which is the strength or height of the signal.

Since our wireless networks require sending data at high bit rates (fast), we require more bandwidth to modulate this data. This additional bandwidth is distributed across a range of frequencies as opposed to using a single carrier signal. We call this distribution Spread Spectrum since we are spreading the signal across multiple frequencies. There are 3 primary categories of spread spectrum used for wireless data networks: Frequency-Hopping Spread Spectrum (FHSS), Direct-Sequence Spread Spectrum (DSSS), and Orthogonal Frequency-Division Multiplexing (OFDM). We will expand on DSSS and OFDM in future posts.

I also want to take a moment to introduce the wireless standards bodies. The first body we need to be concerned with is the ITU-R, which was set up by the United Nations to manage RF spectrum globally. In the United States the Federal Communications Commission (FCC) regulates frequencies, RF channels, and transmission power. A similar body called the European Telecommunications Standards Institute (ETSI) manages the same things in the European region. On top of the RF standards we have the familiar IEEE, which manages a majority of our computer standards. The IEEE 802 standards all deal with local area and metro area networks, and IEEE 802.11 specifically is responsible for wireless networks. As we work through the different RF transmitting schemes I will mention which 802.11 standard introduced or maintained the technology.


The initial wireless network standards utilized an idea called frequency hopping to avoid interference with other devices in the ISM band. In a frequency hopping system the transmitter and receiver have to be synchronized so they know which frequency they are supposed to be on at any given time. To accomplish this, they switch between channels at regular intervals. To limit the effect of interference, small channels are used so that if interference does occur it will not have a large impact on the data being transmitted. FHSS utilized 1 MHz channels spread across the entire band. These smaller channels meant that only so much data could be transmitted at a time, which limited bandwidth to 1 or 2 Mbps. Also, multiple transmitters (access points) in an area would eventually collide with each other on the same channels. For these reasons FHSS was fairly quickly replaced with DSSS.


Instead of using many small channels, DSSS utilized a smaller number of wider channels. With DSSS each channel is 22 MHz wide with a maximum supported throughput of 11 Mbps. DSSS was designated for use in the 2.4 GHz band. As noted in previous posts, this is where we run into the problem of overlapping channels, since the ISM band and its 5 MHz channel spacing existed before the wireless standard which dictated the 22 MHz wide channels. The non-overlapping channels available in the US are 1, 6, and 11. As the name indicates, DSSS transmits data in a direct sequence, or a serial stream. Instead of frequency hopping to avoid interference, DSSS relies on a few methods to try to alleviate any interference problems:

  • Scrambling – Instead of transmitting long sequences of 1s or 0s (think in binary), the data is first sent through a scrambler to generate a randomized sequence of 0s or 1s.
  • Coding – Each bit of data is converted into multiple bits using special patterns that help protect against errors. Think of using the phonetic alphabet for radio transmissions. Instead of saying each letter individually we use a word to describe the letter. ‘A’ becomes ‘alpha’, ‘B’ becomes ‘bravo’, ‘C’ becomes ‘charlie’, and so on. This requires more data to transmit the original data however it helps eliminate errors and the need for re-transmission. Error correction is more costly than error prevention. Each of the newly coded bits is called a Chip, and the complete group of chips representing a data bit is called a Symbol. DSSS utilizes two encoding techniques, either Barker Codes or Complementary Code Keying (CCK).
  • Interleaving – The encoded data is then spread out into separate blocks so that temporary interference would only affect a smaller number of blocks.
  • Modulation – Finally the bits in each symbol are used to modulate the phase of the carrier signal.


The original 802.11 standard was ratified in 1997. It included two main transmission types, FHSS and DSSS, for use only in the 2.4 GHz band.

RF Power – dB vs dBm vs dBi vs dBd

In my last post we discussed comparing RF power between 2 transmitters. In that comparison we used a logarithmic function called the decibel (dB) to compare two absolute power values. When we compare two absolute values one is considered the source of interest and one is called the reference or the source of comparison. This works simply enough if we just want to compare transmitter A to transmitter B, but what if we need to see the bigger picture? When working with wireless networks we always have a transmitter and a receiver, as well as the path in between. To ensure that the receiver has enough signal strength we could just compare the absolute transmit power to the absolute receive power to find the total loss. However if we need to make a change to this signal path we may want to change transmit power, or we may want to make a change to the path in between. To really see the big picture we need to know what the change is at each point on the signal path.

The dBm

To find the change at each point along the signal path it would be easiest to compare them all against a common reference so that the values could be added or subtracted along the path. In wireless networks the reference power level is usually 1 mW, which is where we get dBm (decibel-milliwatt). Instead of comparing the absolute power of each end of the path, we compare each end to a common value so that we can see the change which is occurring.
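
The dBm conversion is just the decibel formula with 1 mW as the reference:

```python
import math

def mw_to_dbm(mw):
    """dBm = 10 * log10(P / 1 mW)."""
    return 10 * math.log10(mw)

def dbm_to_mw(dbm):
    """Invert the formula to get back to absolute milliwatts."""
    return 10 ** (dbm / 10)

print(mw_to_dbm(1))    # 0.0 -- 1 mW is the reference itself
print(mw_to_dbm(100))  # 20.0
print(dbm_to_mw(3))    # ~2 mW, the familiar "3 dB doubles the power" rule
```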

To expound on this further, we’ve always discussed the transmitter and receiver as though they were each a single unit. However, in reality there exists the transmitter, the antenna, and usually an antenna cable. When we connect an antenna to a transmitter it provides a certain amount of gain, or amplification. However, when we remove the antenna it does not generate any power by itself. That is to say, the antenna itself does not generate any absolute power, and therefore we cannot measure its gain in milliwatts or dBm.

The dBi

Since we can’t compare milliwatts for an antenna, what can we compare it to? We could compare it to another antenna, and to keep everything equal we should always use the same antenna as the comparison. This is where the isotropic antenna was created, or at least the idea of it. The isotropic antenna does not actually exist because it is the perfect antenna: a tiny point which sends RF power out equally in all directions. It is a concept which can be worked out mathematically. So when we compare our actual antenna to this ideal isotropic antenna we get a value in decibels-isotropic (dBi).


An important concept to note here is Effective Isotropic Radiated Power (EIRP). EIRP is the combination of the transmitter’s absolute power, the loss from the antenna cable, and the gain from the antenna. This is the first point where the dBm makes itself useful. When measuring EIRP we are not comparing two transmitters; we want to add up the gain and loss for the components of a single transmitter, so we have to rely on a common reference point. We measure the transmitter power in dBm, the antenna gain in dBi, and the antenna cable manufacturers give us the cable loss in dB per foot of cable. We can then start with the transmitter’s dBm value, subtract the dB loss of the cable, and add the dBi value of the antenna gain to give us the EIRP. The EIRP value is important because it is regulated by government bodies such as the FCC.
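
A quick sketch of the EIRP arithmetic (the power, loss, and gain numbers here are hypothetical):

```python
def eirp_dbm(tx_dbm, cable_loss_db, antenna_gain_dbi):
    """EIRP = transmitter power - cable loss + antenna gain."""
    return tx_dbm - cable_loss_db + antenna_gain_dbi

# Hypothetical setup: a 20 dBm radio, 3 dB of cable loss, a 6 dBi antenna.
print(eirp_dbm(20, 3, 6))  # 23 dBm radiated
```

This only works because every term is referenced to the same baseline, which is the whole point of the section above.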

The dBd

We can’t leave well enough alone, can we? The dBi was a comparison of our actual antenna to the conceptual isotropic antenna. However, sometimes antenna gain is measured against a real antenna. This value is called the decibel-dipole (dBd), since the comparison is between our actual antenna and a simple dipole antenna. The dipole antenna itself has a gain of 2.14 dBi. So when we describe our antenna gain in dBd we are comparing it to the dipole antenna. However, we can’t add up EIRP values using dBd directly, because we have to take into consideration the gain of the dipole antenna it was originally compared against.

In other words,  when calculating EIRP or the total signal path, we always have to convert dBd to dBi (by adding 2.14 dBi).
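
That conversion is simple enough to capture in a one-line helper:

```python
DIPOLE_GAIN_DBI = 2.14  # gain of the reference dipole antenna

def dbd_to_dbi(gain_dbd):
    """Convert a dipole-referenced gain to an isotropic-referenced gain."""
    return gain_dbd + DIPOLE_GAIN_DBI

# A hypothetical 3 dBd antenna is really about 5.14 dBi for EIRP math.
print(dbd_to_dbi(3))
```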

All of these measurements can be confusing, but if you take the time to slow down and consider what you are comparing, it will begin to make sense. And remember, when we are adding things up we need to make sure we are adding values that are all compared to the same reference. In other words, we can only add/subtract with dB, dBm, and dBi. If we introduce dBd we have to take into account its own gain.