C has been the backbone of embedded systems for decades, with an estimated 80% of embedded software still written in it. Its direct access to hardware, deterministic execution, and minimal runtime overhead have made it ideal for systems constrained by memory, power, and performance. From microcontroller firmware to certified industrial controllers, C has enabled developers to build highly efficient and reliable products with tight real-time constraints.
But as embedded systems become more connected, modular, and long-lived, the limitations of C are becoming harder to ignore. Devices are no longer static or isolated; they are dynamic, networked platforms that must support remote updates, integrate third-party software, and defend against unpredictable inputs. In this new context, C’s manual memory management, lack of built-in safety mechanisms, and monolithic structure increase the risk of faults and vulnerabilities, especially as codebases grow and maintenance burdens intensify.
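The kind of hazard left entirely to programmer discipline can be seen in a small sketch. The frame format and function below are hypothetical, but the pattern is typical of embedded parsers: a length field arrives from an untrusted source, and nothing in the language forces the bounds checks that stand between correct behavior and an out-of-bounds access.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical TLV-style frame: [1-byte length][payload ...].
 * Both bounds checks are purely manual: delete either one and the
 * memcpy compiles just as cleanly, but can read past the input
 * buffer or write past the output buffer at runtime. */
int parse_payload(const uint8_t *frame, size_t frame_len,
                  uint8_t *out, size_t out_len)
{
    if (frame_len < 1)
        return -1;                   /* no length header at all */

    size_t claimed = frame[0];       /* attacker-controlled value */

    if (claimed > frame_len - 1)     /* manual check #1: input bound */
        return -1;
    if (claimed > out_len)           /* manual check #2: output bound */
        return -1;

    memcpy(out, frame + 1, claimed);
    return (int)claimed;
}
```

A caller that forgets either check, or a maintainer who removes one years later, gets no diagnostic from the compiler; the failure surfaces only when a malformed frame arrives in the field.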
A helpful way to understand this shift is the Chromium project’s Rule of 2, which says a component should never combine more than two of the following: code written in an unsafe language, untrustworthy inputs, and execution without isolation. Any two of these together make security hard to maintain; all three at once, the combination typical of embedded C code, make it nearly impossible. As the embedded industry moves toward software-defined, security-aware products, it is worth asking whether C still fits, or whether it is time to manage it differently.