As discussed in the first post of this series about our ICS firing range, we came to the conclusion that we had to build a lab ourselves. This turned out to be quite a tricky task, and in this blog post I am going to tell you why: which challenges we faced and which choices we made on our way to building our very own lab.
This was a rather long project and involved quite a few steps. To structure this post, I will guide you through our design process, divided into the tasks we worked on, in chronological order.
Requirements Analysis
Well, we knew that we were going to build this lab, but we needed more information about the exact requirements it should meet so that we could focus on them specifically. During internal discussions and meetings with the client we worked out this list of initial key requirements:
- The lab shall feature IT and OT components that represent a realistic bridge-operation scenario.
- The lab shall be mobile so it can be transported, set up and worked with on different sites.
- The lab shall be extensible: scenarios and both hardware and software components can be added in the future.
These requirements were intentionally left rather broad so that we could work out different feasible concepts for individual challenges and decide with the client which way to go. This approach allowed us to stay in close contact with the client and make sure that we met their needs.
Designing an ICS Firing Range
In order to build great things, you will need great plans. In this phase, we worked out said plans, starting with our very first concept.
1. The First Concept
Once we knew our key requirements, we started researching the operation of bascule bridges. This was certainly easier said than done: it turned out that publicly available information about critical infrastructure, such as bridges, was not that easy to find.
Eventually we found an excellent resource, the “Bridge Maintenance Reference Manual” provided by the Florida Department of Transportation (FDOT). This manual was a great find since it detailed the general structure of bascule bridges and explained the most relevant components. Using it, we developed our first, simplified concept:

It features the core components of bascule bridges:
- Structural components such as the leaves, counterweights and pits (the bottom part of the bridge)
- Drives that move the bridge leaves, road barriers and counterweight locks
- LEDs that indicate STOP/GO signals for maritime and road traffic
- An alarm (buzzer) that is turned on and beeps while the leaves are moving

We translated this into a first, early 3D model in Blender to get an idea of the dimensions and looks. While it was very much simplified, it allowed us to work out some ideas about the shapes and placement of components.
This way we found out that a modular setup might provide much-needed flexibility for assembly and maintenance: the pits would provide a strong foundation to mount the remaining components onto, while the upper part (shown in a greenish color in the picture) would be made of two halves set atop the pit. Resting inside the pit would be a large stepper motor driving a pinion gear, which in turn drives a rack installed underneath the bridge leaf.
Satisfied with this concept, we moved on to working on the underlying infrastructure of the lab.
2. Blocking out the Infrastructure
From the start we knew that it would take a good number of systems and networks to represent a somewhat realistic ICS environment: we expected a bridge operator to have an enterprise network that their regular office workstations, probably domain joined, are connected to. Furthermore, they would have a SCADA network that contains operator workstations for monitoring and controlling the remote bridge sites, historians to record operational data, and engineering workstations to program PLCs and HMIs. These networks would be routed via a public demilitarized zone (DMZ) over the internet to a remote bridge site. All of these networks would have their own subnets and feature a router that allows adequate routing between the networks, as well as a firewall that specifies individual rules for incoming and outgoing traffic (with a DENY ALL default rule). We decided that virtualizing these machines and networks would be a good compromise between the resources required to implement them and the physical space they would take up.
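To give an idea of what such a segmentation with a default-deny policy might look like, here is a minimal sketch in Python. The subnet addresses, rule entries and port numbers are illustrative assumptions, not our actual lab configuration.

```python
# Hypothetical subnet plan and default-deny rule set for the virtualized lab
# networks -- addresses, ports and rules are illustrative, not our real values.
from ipaddress import ip_network, ip_address

SUBNETS = {
    "enterprise": ip_network("10.0.10.0/24"),    # office workstations, domain controller
    "scada":      ip_network("10.0.20.0/24"),    # operator/engineering workstations, historian
    "dmz":        ip_network("10.0.30.0/24"),    # public-facing services towards the "internet"
    "bridge_ot":  ip_network("192.168.0.0/24"),  # remote bridge site: PLCs, HMIs, CCTV
}

# Explicit allow rules; anything not matched falls through to DENY ALL.
ALLOW_RULES = [
    ("scada", "bridge_ot", 102),   # S7 communication from SCADA to the bridge site
    ("enterprise", "dmz", 443),    # office users reaching DMZ web services
]

def is_allowed(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Return True only if an explicit rule matches (default deny)."""
    for src_net, dst_net, port in ALLOW_RULES:
        if (ip_address(src_ip) in SUBNETS[src_net]
                and ip_address(dst_ip) in SUBNETS[dst_net]
                and dst_port == port):
            return True
    return False

print(is_allowed("10.0.20.15", "192.168.0.10", 102))  # True: SCADA -> bridge OT
print(is_allowed("10.0.10.7", "192.168.0.10", 102))   # False: enterprise -> OT is denied
```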

In addition to the IT infrastructure, we also designed the OT part. We intended the diagram below to roughly represent the lower levels of the Purdue model: there are three substations that represent individual cells for the traffic lights, gates and leaf-lifting operation. They contain sensors and drives (our level 0 devices, e.g. limit switches and motors), which are controlled by individual PLCs per cell. These PLCs are instructed by one central PLC that is connected to the SCADA network.
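As a rough illustration of how the SCADA side could talk to that central PLC, the following sketch polls a status data block over S7 communication using the python-snap7 library. The IP address, rack/slot, DB number and bit layout are assumptions made up for this example.

```python
# Minimal sketch: an operator workstation in the SCADA network polling the
# central PLC for cell status over S7comm. All addresses and the DB layout
# are hypothetical; reading a DB this way requires it to be non-optimized.
import snap7
from snap7.util import get_bool

PLC_IP = "192.168.0.10"   # assumed address of the central PLC
STATUS_DB = 1             # assumed data block holding cell status bits

client = snap7.client.Client()
client.connect(PLC_IP, 0, 1)          # rack 0, slot 1

data = client.db_read(STATUS_DB, 0, 1)  # read one status byte from the DB

status = {
    "leaves_moving": get_bool(data, 0, 0),
    "barriers_down": get_bool(data, 0, 1),
    "traffic_stop":  get_bool(data, 0, 2),
}
print(status)

client.disconnect()
```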

In addition to these rather traditional OT components, our client asked us to include CCTV functionality in the lab. For this we planned to use Raspberry Pis with PiCams.
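The sketch below shows the rough idea: a Pi periodically capturing stills that could later be served towards the SCADA network. It uses the legacy picamera library; the output path and capture interval are placeholders.

```python
# Rough sketch of the CCTV idea: a Raspberry Pi with a PiCam periodically
# capturing stills. Paths and timing are illustrative placeholders.
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1280, 720)
camera.start_preview()
sleep(2)  # give the sensor time to adjust exposure

# Capture a numbered series of images, one every few seconds.
for filename in camera.capture_continuous("/tmp/bridge_cam_{counter:04d}.jpg"):
    print(f"captured {filename}")
    sleep(5)
```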
This network design represented a sufficiently realistic ICS network and allowed for future additions. Time to move on!
3. Figuring Out Which OT Hardware to Use
Now that we knew what things we wanted to interact with physically (those being mainly motors and LEDs) and how to connect them to our lab networks, we started doing our research on suitable OT hardware.
Naturally, we were soon overwhelmed by the sheer number of devices to choose from: there were plenty of manufacturers offering loads of devices for a variety of use cases, requirements and budgets, with an equally wide variety of features, dependencies and compatibilities.
Confronted with this challenge (and severely lacking expertise in building OT environments), we decided to make assumptions that helped us trim down the selection of manufacturers and devices and make educated decisions:
- We assumed that, for a single operation site, one would stick to devices from a single manufacturer. This allowed us to largely ignore cross-manufacturer compatibility. We chose SIEMENS for their significant European market share in ICS hardware.
- In order to reduce the complexity of building and interconnecting the OT devices, we decided to implement communication exclusively via Ethernet and to ignore other communication media and interfaces.
- To represent a realistic and “historically grown” (i.e. occasionally updated and maybe upgraded across decades) operation site, we would use devices of varying EOL (end of life). We decided to include legacy PLCs (S7-300), all-rounder “standard” PLCs (S7-1200), modern high-end PLCs (S7-1500) and standard HMIs (TP700).

At this point, all that was left to do was to figure out which PLC to use for which task. This required digging through quite a few datasheets of the above-mentioned PLCs, mainly to find out how many and which digital inputs and outputs each PLC features. For example, we learned that, to control our stepper motors, we needed to generate pulse signals (in our case pulse train outputs, PTOs) of up to 100 kHz. During our research, a rather cheap signal board for the S7-1200 turned up that could generate PTOs of up to 200 kHz. We ended up using the S7-1200 PLCs to drive the leaves and barriers, the S7-300 to control the LEDs and the buzzer, and the S7-1500 for orchestration and outbound communication to the virtualized IT environment.
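As a quick back-of-the-envelope check of why the PTO frequency matters, the snippet below relates pulse frequency to the maximum motor speed it can command. The steps-per-revolution and microstepping values are illustrative assumptions, not the exact settings of our motors and drivers.

```python
# How pulse frequency limits stepper speed (illustrative numbers only).
def max_rpm(pulse_hz: float, full_steps_per_rev: int = 200, microsteps: int = 8) -> float:
    """Maximum shaft speed (RPM) a given pulse frequency can command."""
    pulses_per_rev = full_steps_per_rev * microsteps
    return pulse_hz / pulses_per_rev * 60

for freq in (100_000, 200_000):   # built-in PTO vs. the cheap signal board
    print(f"{freq / 1000:.0f} kHz -> {max_rpm(freq):.0f} RPM")
# 100 kHz -> 3750 RPM, 200 kHz -> 7500 RPM (at 1/8 microstepping)
```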
4. Our Vision
With all the information we had worked out, we came up with a vision of what we wanted our lab to look like:
It’s essentially an aluminium frame on wheels, featuring a 3D-printed bridge on top and a steel plate mounted vertically inside it. The OT components are mounted onto the front-facing side of the steel plate, and the virtualization server running the IT systems and networks is located in the back. The black panels are made of wood and hide the power distribution and the server. It may be hard to pick up visually, but there are acrylic glass panels on the sides and the front to provide a look inside.
With this vision in mind, we set out to build it!
We are going to cover this in the next blog post about our ICS firing range – Stay tuned!