Rethinking C in Embedded Systems: Sandboxing C Modules for Secure and Flexible Software Development

C has been the backbone of embedded systems for decades, with an estimated 80% of embedded software still written in it. Its direct access to hardware, deterministic execution, and minimal runtime overhead have made it ideal for systems constrained by memory, power, and performance. From microcontroller firmware to certified industrial controllers, C has enabled developers to build highly efficient and reliable products with tight real-time constraints.

But as embedded systems become more connected, modular, and long-lived, the limitations of C are becoming harder to ignore. Devices are no longer static or isolated; they are dynamic, networked platforms that must support remote updates, integrate third-party software, and defend against unpredictable inputs. In this new context, C’s lack of safety mechanisms, its manual memory management, and its monolithic structure increase the risk of faults and vulnerabilities, especially as codebases grow and maintenance burdens intensify.

A helpful way to understand this shift is the Chromium project’s Rule of 2, which says a software component should never combine more than two of three risk factors: processing untrustworthy inputs, being written in an unsafe language, and running without sandboxing. Combining any two of these is already hard to secure; combining all three, as embedded C code often does, makes security nearly impossible. As the embedded industry moves toward software-defined, security-aware products, it’s worth asking whether C still fits or whether it’s time to manage it differently.

Chromium Rule of 2


The Embedded Industry’s Quiet Violation of the Rule of 2

In many embedded systems, we routinely violate all three pillars of the Rule of 2. Code is written in C or C++, which provides direct access to memory but no guarantees against buffer overflows, use-after-free errors, or type confusion. Inputs come from sources that cannot be fully trusted, such as radio interfaces, wireless protocols, external peripherals, and user-driven applications. Finally, this code often runs without any form of runtime isolation. It is directly linked to a flat memory space and executes with full access to the system.

This architecture was not a problem in the past when embedded systems were isolated and monolithic. But now that devices operate in dynamic, connected environments, the risk profile has changed significantly. Vulnerabilities are no longer theoretical. A minor flaw in a communication stack or peripheral driver can compromise the entire system and, in some cases, propagate that compromise across a network.

Why Rewriting Everything Is Not a Viable Solution

It is tempting to imagine that the solution lies in abandoning C entirely. Languages like Rust offer memory safety guarantees and modern tooling, and many isolation techniques promise better separation between components. However, for most embedded teams, these approaches are not feasible in practice.

The embedded ecosystem is still heavily dependent on vendor-supplied SDKs and third-party middleware written in C. These components are often the product of years of investment, validation, and certification. Rewriting them introduces additional costs, complexity, and regulatory risks. Even where new languages are used for application logic, the underlying platform still depends on native code for device drivers, communication layers, and security primitives.

Moreover, many embedded platforms lack the hardware features necessary to implement true process isolation. There may be no memory management unit, no dynamic memory allocator, and no real-time operating system. In these environments, even if isolation were theoretically desirable, it would not be technically achievable.

A More Realistic Path: Sandboxing Legacy or Untrusted Code

Given these constraints, what embedded developers need is a strategy that preserves the strengths of C, acknowledges its risks, and introduces mechanisms to contain those risks without forcing a complete system redesign. This is where the concept of sandboxing C modules within a virtualized runtime, as implemented in MicroEJ’s Virtual Execution Environment (VEE), becomes relevant.

VEE brings virtualization to embedded systems, enabling multiple software components (including legacy C code) to run within isolated sandboxes on a single hardware platform. VEE app containers allow developers to integrate and execute native C code within a controlled and observable environment. Rather than running freely within the application’s memory space, native code is encapsulated within a sandbox that emulates the structure of a virtual machine.

Interactions between sandboxed C modules and the host system are strictly governed by interface contracts, with access to system resources mediated by the containerized runtime. This virtualization layer enforces boundaries between components, introducing a clear separation between native code and system-critical logic.

Benefits That Extend Beyond Security

By introducing structure, isolation, and virtualization into the execution of C code, sandboxed C modules enable a more robust and flexible architecture. This supports a modular development model where different parts of the system can be developed, tested, and maintained independently without compromising safety.

When a bug or vulnerability is discovered in a native module, it can be addressed in isolation. Updates become more targeted, regression risks decrease, and certification cycles shorten. Partner-developed components can be safely integrated without exposing the rest of the system to undue risk.

This level of modularity, enabled by virtualization, also supports software-defined product strategies. Manufacturers can deliver tailored product variants, remote feature upgrades, and frequent security patches without rebuilding or retesting the full stack. Virtualization enables the decoupling of software development from hardware timelines, allowing for faster iteration and a shorter time-to-market.

Beyond improving modularity and security, virtualization also unlocks new possibilities in software expression and productivity. MicroEJ VEE enables native C code to run alongside higher-level languages, such as Java, allowing for a hybrid approach that combines low-level efficiency with modern programming paradigms. Teams can leverage best practices from enterprise software (object-oriented design, reusable components, powerful tooling) while preserving their investment in existing C codebases. The result is a versatile development environment that fosters innovation, simplifies long-term maintenance, and scales across product lines.

Towards a Sustainable Embedded Software Architecture

Embedded development has always involved trade-offs. We optimize for performance, footprint, power, and timelines, often at the expense of architecture. But the cost of those shortcuts is now coming due. Vulnerabilities, maintenance burdens, and integration risks are growing.

Sandboxing native C modules within a virtualized execution environment is a practical, scalable way to regain architectural control. It introduces modularity and containment to systems that traditionally lacked them, without abandoning the native performance, compatibility, and extensive ecosystem that make C so valuable.

Ready to Modernize Your Software Development?

Speak with our experts to explore how sandboxing your C modules within app containers can bring safety, modularity, and long-term flexibility to your software assets.

Contact us

Additional Resources

FEATURES

Exploring the Potential of Multi-Sandboxed Containerized Apps for IoT Devices


FEATURES

Enforce Security and Reliability by Design with Managed Code for Embedded Systems


VIDEO

MicroEJ Managed-C Demo: Seamlessly Combine Java and C for Secure, Agile Embedded Development

Semir Haddad

Chief Product and Strategy Officer