Introduction To Control Systems

Posted on February 17 • 7 minutes 55 seconds

Once a system has been installed, it often becomes necessary to develop a way to control its operation. Control systems, in their simplest form, are present all around us in everyday life. Consider a lighting system for a room: once the wiring is complete and the lightbulb is installed, there is still a need to control when the light turns on (at night) or off (during the day). A common solution is a light switch, which forms the basis of a type of control system known as open-loop control.

In open-loop control, the user sets a desired output—in this case, turning the light on or off—and the system executes this action. The light switch directly dictates the state of the lightbulb, which relies solely on the user's input. For basic lighting needs, this open-loop approach is often sufficient. However, open-loop systems operate without feedback; they don't monitor the actual output (light being on or off) or the surrounding environment to adjust their operation.

In more complex scenarios, it can be beneficial for the lighting system to be "aware" of its surroundings. For instance, large institutions with many rooms face unpredictable occupancy: rooms may be in use at odd hours, and people may forget to turn the lights off when leaving. In such cases, relying solely on human "control" is inefficient and leads to significant energy waste. When consistent day and night lighting must be ensured without human intervention, closed-loop control systems become increasingly valuable.

Closed-loop control systems incorporate sensors to gather information about their surroundings. Using an ambient light sensor as an example, the system can sense the level of natural light. This sensor information provides feedback, allowing the system to monitor the actual output or environmental conditions. Based on this feedback, the control system can then automatically adjust the output to achieve the desired outcome. For instance, if the ambient light sensor detects low light levels (e.g., at night or on a cloudy day), the system can automatically turn the lights on. Conversely, if sufficient natural light is present, the lights can be turned off, even without direct user input at that moment. This reliance on sensor feedback is the defining characteristic of closed-loop control, enabling more automated and responsive system behavior.

At its core, a closed-loop control system involves three key components: a sensor, a controller, and an actuator. The sensor measures the current state of the system (e.g., light level), the controller processes this information and determines the appropriate action, and the actuator executes the action (e.g., turning on a light). While this process may seem straightforward, its complexity depends on the dynamics of the system being controlled.
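As a sketch, the three roles can be separated in code. This is a minimal illustration rather than a real device driver; the light threshold and the simulated sensor and actuator are hypothetical stand-ins:

```python
# Minimal closed-loop skeleton for the lighting example.
# The threshold and the sensor/actuator stand-ins are hypothetical.

LIGHT_THRESHOLD = 300  # ambient-light level below which the lamp turns on (arbitrary units)

def controller(ambient_light):
    """Controller: decide the actuator state from the sensor reading."""
    return ambient_light < LIGHT_THRESHOLD

def control_step(read_sensor, drive_actuator):
    """One iteration of the sense -> decide -> act loop."""
    reading = read_sensor()        # sensor: measure the environment
    lamp_on = controller(reading)  # controller: choose an action
    drive_actuator(lamp_on)        # actuator: apply it
    return lamp_on

# Simulated hardware: a dark reading turns the lamp on.
state = {"lamp": False}
control_step(lambda: 120, lambda on: state.update(lamp=on))
```

In a real installation, `read_sensor` and `drive_actuator` would wrap hardware I/O, and the loop would run continuously; the structure of sense, decide, act stays the same.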


Control systems are simpler to design and implement when dealing with high-inertia systems—systems that change slowly over time. For example, the transition from daylight to darkness occurs over several hours, making it relatively easy for a lighting control system to adjust. Similarly, a car’s cruise control operates in a high-inertia environment because the vehicle’s mass makes it (relatively) slow to accelerate or decelerate. These slow changes give the control system ample time to respond, reducing the risk of instability.

In contrast, low-inertia systems—where changes can occur almost instantaneously—present a greater challenge for control systems. For example, in electronic circuits, voltage or current can change in microseconds. If the control system reacts too aggressively, it may overshoot the desired value, causing oscillations or instability. Similarly, in robotics, controlling the position of a lightweight robotic arm requires careful tuning because the arm can move very quickly, and any delay or overcorrection by the controller can lead to erratic behavior. In such cases, the control system must be carefully designed to ensure stability, often using advanced algorithms like PID control or model predictive control.

To illustrate the difference, imagine trying to steer a massive oil tanker versus a jet ski. The oil tanker, a high-inertia system, responds very slowly to changes, giving the captain plenty of time to make adjustments. The jet ski, a low-inertia system, is highly responsive, so even a small adjustment can cause a sharp turn. If you overreact or make sudden corrections, you risk losing control.


The light-bulb closed-loop control system we discussed earlier is an example of a system with extremely high inertia. The transition between day and night happens over hours, giving the system plenty of time to respond. This makes the control system relatively simple. However, to illustrate more advanced types of control, such as PID control, we need a different example—one that involves faster dynamics and requires more precise control.

Let’s consider a room where we want to maintain a temperature of 20°C using a thermostat. This is a classic example of a closed-loop control system because it continuously measures the room’s temperature and adjusts the heater to maintain the desired setpoint.

Bang-bang control operates on a basic binary principle—the heater runs at full power when the temperature drops below 20°C and completely shuts off when it rises above this setpoint.

  • When the temperature is below 20°C, the heater is fully on.
  • When the temperature is above 20°C, the heater is fully off.

This results in oscillation around the setpoint because of the delay between reading the temperature and actuating the heater. While the heater is on, the room heats up and overshoots 20°C; once the temperature exceeds 20°C, the heater turns off, but the room continues to cool below 20°C before the heater kicks back in. Bang-bang control is simple and effective for some applications, but these oscillations make it a poor choice for precise temperature control. To achieve smoother and more accurate control, we can use proportional control.
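As a sketch, bang-bang control reduces to a single comparison. The room model below is a hypothetical first-order approximation (heat added by the heater minus heat lost to a 15°C ambient), included only to make the oscillation visible:

```python
def bang_bang(setpoint, temperature):
    """Bang-bang control: full power below the setpoint, off above it."""
    return 1.0 if temperature < setpoint else 0.0

def simulate(controller, steps=200, dt=1.0):
    """Toy room: the heater adds heat, the room leaks heat toward 15 C ambient."""
    temp, history = 15.0, []
    for _ in range(steps):
        power = controller(20.0, temp)
        temp += dt * (0.5 * power - 0.05 * (temp - 15.0))  # hypothetical dynamics
        history.append(temp)
    return history

trace = simulate(bang_bang)
# The trace keeps crossing 20 C: the heater overshoots, shuts off, undershoots, repeats.
```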


In proportional control, the heater’s output is adjusted proportionally to the difference between the current temperature and the setpoint. This difference is called the error.

Error = Setpoint − Current Temperature

The heater’s output is determined by multiplying the error by a design constant called the proportional gain (Kp):

Heater Output = Kp × Error
  • If the error is large (e.g., the room is much colder than 20°C), the heater will output more power.
  • If the error is small (e.g., the room is close to 20°C), the heater will output less power.

This results in smoother control compared to bang-bang, with smaller oscillations. However, proportional control alone often leads to a steady-state error, where the system settles at a temperature slightly below the setpoint because the heater’s power becomes too small to maintain the exact target as the error diminishes.
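A sketch of proportional control on the same kind of toy room model (the dynamics and the gain Kp = 0.3 are hypothetical choices) makes the steady-state error visible:

```python
def proportional(setpoint, temperature, kp=0.3):
    """Proportional control: heater power proportional to the error, clamped to 0..1."""
    error = setpoint - temperature
    return min(1.0, max(0.0, kp * error))

def simulate(controller, steps=400, dt=1.0):
    """Toy room: the heater adds heat, the room leaks heat toward 15 C ambient."""
    temp = 15.0
    for _ in range(steps):
        power = controller(20.0, temp)
        temp += dt * (0.5 * power - 0.05 * (temp - 15.0))  # hypothetical dynamics
    return temp

final = simulate(proportional)
# Settles near 18.75 C, below the 20 C setpoint.
```

At equilibrium the heater's output exactly offsets the heat loss, and in this toy model that balance occurs about 1.25°C below the setpoint: only a nonzero error can keep the heater running at all.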


The integral term accounts for the accumulated error over time. It ensures that even small errors are corrected over time, eliminating steady-state errors. The integral term is calculated as:

Integral Term = Ki × ∫(Error)dt

Where:

  • Ki is the integral gain.
  • ∫(Error)dt represents the sum of all past errors, an accumulation over time.

The integral term adds up all past errors and adjusts the heater's power accordingly. If the room temperature is consistently below 20°C, the accumulated error grows over time, increasing the heater's output until the temperature reaches the setpoint. Once the setpoint is reached, the error falls to zero and the integral term stops growing, so the system stabilizes.

  • It increases the heater's power if the temperature has been below the setpoint for a long time.
  • Once the setpoint is reached, the integral term stops growing, stabilizing the system.

This ensures the system eventually reaches and maintains the exact setpoint. However, if the integral gain is too high, the accumulated error can drive the system to overshoot excessively or oscillate.
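Adding an integral term to the toy proportional controller from before removes the offset. This is a sketch under the same assumptions (the gains and the room dynamics are arbitrary illustrative choices):

```python
def make_pi(kp=0.3, ki=0.02, dt=1.0):
    """Build a PI controller that keeps its accumulated error between calls."""
    integral = 0.0
    def pi(setpoint, temperature):
        nonlocal integral
        error = setpoint - temperature
        integral += error * dt  # accumulate past errors over time
        return min(1.0, max(0.0, kp * error + ki * integral))
    return pi

def simulate(controller, steps=2000, dt=1.0):
    """Toy room: the heater adds heat, the room leaks heat toward 15 C ambient."""
    temp = 15.0
    for _ in range(steps):
        power = controller(20.0, temp)
        temp += dt * (0.5 * power - 0.05 * (temp - 15.0))  # hypothetical dynamics
    return temp

final = simulate(make_pi())
# Unlike pure proportional control, the temperature settles at the 20 C setpoint.
```

The integral keeps growing as long as any error persists, so the heater's steady output is supplied by the accumulated term even when the instantaneous error has shrunk to zero.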


The derivative term considers the rate of change of the error, or how quickly the temperature is approaching or moving away from the setpoint. The derivative term is calculated as:

Derivative Term = Kd × (d(Error)/dt)

Where:

  • Kd is the derivative gain.
  • d(Error)/dt is the rate of change of the error.

The derivative term acts on how fast the error is changing:

  • If the temperature is rising quickly toward the setpoint, the derivative term reduces the heater’s output to prevent overshooting.
  • If the temperature is falling quickly, the derivative term increases the heater’s output to counteract the change.

A PID controller combines all three terms—proportional, integral, and derivative—to achieve precise and stable control:

Heater Output = Kp × Error + Ki × ∫(Error)dt + Kd × (d(Error)/dt)

How PID Control Works:

  • The proportional term provides immediate response to the current error.
  • The integral term eliminates steady-state error over time.
  • The derivative term reduces overshoot and stabilizes the system.
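The combined law above translates directly into a discrete-time controller. The following is a minimal textbook-style sketch; it omits output clamping and anti-windup, which a production controller would need:

```python
class PID:
    """Discrete PID: output = Kp*error + Ki*integral(error) + Kd*d(error)/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt  # integral term: accumulated error
        if self.prev_error is None:
            derivative = 0.0              # no rate of change on the first sample
        else:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

heater = PID(kp=0.3, ki=0.02, kd=0.1, dt=1.0)
power = heater.update(setpoint=20.0, measurement=18.0)  # positive: room too cold
```

The controller is called once per sampling interval `dt`; the only state it carries between calls is the accumulated integral and the previous error used for the derivative.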

The animation below shows how different control strategies—bang-bang, proportional, and PID—impact the temperature control system. You can observe the temperature response, the control output, and the effect of each strategy on the system’s behavior. The green line is the setpoint, the purple is the system output, and the orange is the actuating signal. Click "Start" to begin the simulation and "Stop" to pause it. You can also reset the simulation to its initial state by clicking "Reset."


Tuning a PID Controller:

  • The gains Kp, Ki, and Kd must be carefully tuned to balance responsiveness, stability, and accuracy.
  • Too much Kp can cause oscillations; too little Kp results in sluggish response.
  • Too much Ki can make the system unstable; too little Ki leaves steady-state error.
  • Too much Kd amplifies noise; too little Kd allows overshoot.

In conclusion, the design of a control system is a meticulous and iterative process, often requiring a trial-and-error approach to achieve optimal performance. The general design steps provide a structured framework for tackling the challenges involved:

  1. Problem Definition: Clearly understanding the system and defining what you want to control—be it temperature, speed, position, or any other variable—is the critical first step. Without a clear objective, the rest of the design process lacks direction.
  2. System Modeling: Developing a mathematical model of the system is essential. This could involve differential equations, transfer functions, or state-space representations, allowing engineers to predict how the system will behave under different conditions.
  3. Control Strategy Selection: Choosing the right control strategy is crucial. For simpler systems, PID controllers may suffice, while more complex systems might require advanced techniques like optimal control, model predictive control, or adaptive control. The choice depends on the complexity of the system and the desired performance.
  4. Controller Design and Tuning: Once the strategy is selected, the controller must be implemented and its parameters (such as gains) tuned to achieve the desired behavior. This step often involves simulation tools and real-world experimentation to ensure the system performs as expected.
  5. Implementation: The controller is then physically integrated into the system, which may involve programming microcontrollers, integrating sensors, and ensuring real-time responsiveness. This step bridges the gap between theory and practice.
  6. Testing and Iteration: Finally, the system must be rigorously tested under various conditions to ensure robustness. If issues arise, adjustments are made, and the design process begins anew. This "trial and error" phase is vital for refining the system and achieving reliable performance.

While these foundational steps are essential, it’s also important to recognize that more advanced control schemes are continually being developed, offering enhanced precision, adaptability, and efficiency. Control systems play a pivotal role in numerous industries—from manufacturing and robotics to aerospace and energy management—and their ability to regulate processes, maintain stability, and improve productivity cannot be overstated.

Understanding and implementing robust control strategies is not just a technical necessity; it’s a strategic advantage. Whether you're an industry professional, a researcher, or simply someone interested in technology, recognizing the importance of control systems can help you appreciate their impact on modern innovation and why they matter in shaping the future.

© 2025 Fanny Fushayi