The goal of secure boot is to prevent unauthorized changes to the operating system. For example, secure boot can protect against attempts to push malicious software updates to devices.
Anyone maintaining a computer wants to keep control of the software on the device. A personal computer is usually maintained by its user, who can even change the operating system. For an attacker, the ability to modify the operating system is a very powerful capability. Secure boot helps the user keep that control.
In embedded devices, the device vendor usually maintains the software and the secure boot.
Use of secure boot in embedded devices is now gaining popularity. When an embedded device serves a critical function in society, secure boot is an important piece of the security solution. Secure boot is also emerging as a requirement in security standards, such as the industrial automation security standard IEC 62443-4-2.
My first experience with embedded secure boot dates to around 2002, when I was forging the architecture for Nokia phone hardware security. It seems the result was quite good, as Nokia used the security architecture in over a billion phones. A nice phone security history write-up can be found in Historical insight into the development of Mobile TEEs.
Understanding Secure Boot
A basic secure boot implementation is not complex. When any software loads another piece of software, the loading software verifies the signature of the loaded software. This is called a chain of trust. Everything starts with the ROM inside the system chip, which is trusted implicitly. The ROM code trusts a key programmed into one-time programmable eFuses and uses that key to verify the first loaded software.
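The logic of one link in the chain can be sketched in a few lines. This is a conceptual model, not any chip's actual ROM code: real chips use an asymmetric signature such as RSA or ECDSA, for which an HMAC stands in here, and the fused value is typically a hash of the vendor's public key.

```python
import hashlib
import hmac

FUSED_KEY_HASH = None  # set below, modeling the one-time programmable eFuses

def sign(key: bytes, image: bytes) -> bytes:
    # Stand-in for an asymmetric signature (real chips use RSA/ECDSA).
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_load(key: bytes, image: bytes, signature: bytes) -> bytes:
    # Step 1: check the verification key against the hash fused into the chip.
    if hashlib.sha256(key).digest() != FUSED_KEY_HASH:
        raise RuntimeError("key does not match eFuse hash")
    # Step 2: verify the image signature before handing over control.
    if not hmac.compare_digest(sign(key, image), signature):
        raise RuntimeError("bad signature, refusing to boot")
    return image  # a real chip would now jump to the verified image

# "Manufacturing": burn the key hash into eFuses and sign the bootloader.
root_key = b"example-root-key"
FUSED_KEY_HASH = hashlib.sha256(root_key).digest()
bootloader = b"bootloader-stage-1"
boot_sig = sign(root_key, bootloader)

# "Boot": ROM verifies the first stage; each stage repeats this for the next.
loaded = verify_and_load(root_key, bootloader, boot_sig)

# Any modification of the image must fail verification.
try:
    verify_and_load(root_key, b"tampered-bootloader", boot_sig)
    raise AssertionError("tampered image was accepted")
except RuntimeError:
    pass
```

Each later stage repeats the same verify-then-jump pattern on the next image, which is what makes the trust a chain rather than a single check.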
Sometimes a separate Trusted Platform Module (TPM) chip is used to provide disk encryption at the operating system level. Sometimes the chain of trust is not extended to software loaded from the encrypted disk. This is a somewhat less secure design, as it does not stop all scenarios where an attacker could modify the software.
The mechanisms described above are well known, and practically all implementations are similar. But there are further important requirements that complicate things, and this is where implementations start to differ.
Software Development Work
How can development be done if the device only runs authorized software? Do we have to authorize every work-in-progress version of the software individually? Is every developer allowed to authorize any software at will?
Any differences between development devices and production devices are a potential source of problems. It would be nice to keep the devices as similar as possible.
Conceptually the simplest solution is to use two distinct sets of devices: one for released software and another for development work. As a downside, developers may not be able to run released software on their devices. If production software is encrypted, this can be a hard problem to avoid.
An alternative is to permit development software on production devices in a controlled way. Developers can then enable development software use on their own production devices. When done properly, this can be both secure and convenient.
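One way to make this controlled is a per-device development token: the vendor signs a token bound to one device's serial number, and the boot code accepts development software only when a valid token for its own serial is present. The sketch below is hypothetical (no specific vendor's mechanism), and again an HMAC stands in for the asymmetric signature a real design would use.

```python
import hashlib
import hmac

# Hypothetical vendor key; in practice an asymmetric key kept in an HSM.
VENDOR_KEY = b"vendor-dev-enable-key"

def issue_dev_token(device_serial: bytes) -> bytes:
    # Vendor-side: sign a token bound to exactly one device.
    return hmac.new(VENDOR_KEY, b"dev-enable:" + device_serial, hashlib.sha256).digest()

def dev_mode_allowed(device_serial: bytes, token: bytes) -> bool:
    # Device-side: the boot code checks the token against its own serial,
    # so a leaked token cannot unlock any other device.
    expected = hmac.new(VENDOR_KEY, b"dev-enable:" + device_serial, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token)

token = issue_dev_token(b"SN-0001")
assert dev_mode_allowed(b"SN-0001", token)
assert not dev_mode_allowed(b"SN-0002", token)  # token is device-specific
```

Binding the token to the serial number is the key design choice: it keeps the blast radius of a leaked token to a single device.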
Secure boot is often combined with disabling JTAG and any other hardware interfaces for software debugging, because an attacker could use those to circumvent the secure boot. This means that when a device misbehaves or fails, it cannot be investigated in the usual ways.
Building and Signing the Software
Protecting the secure boot signing keys and their use is important. If keys leak or are misused, secure boot is compromised. Restoring security after a compromise can be difficult, even if technical means exist to change the keys. Keeping the critical signing keys in dedicated Hardware Security Modules (HSMs) would be nice, but the related costs may be prohibitive.
Rotating cryptographic keys periodically is generally considered good security practice, but it is almost impossible for the root key of embedded secure boot. Most chips have a low limit (from 1 to 9) on how many times the root key can be rotated. The limit may be sufficient for rotation after incidents, but it is too low to permit any regular rotation.
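The reason for the low limit is that rotation is typically implemented with a fixed number of root-key hash slots plus a one-way revocation bitmask in eFuses. The model below is hypothetical (slot counts and layouts vary per chip), but it shows why each rotation permanently consumes a slot.

```python
import hashlib

# Hypothetical three-slot chip: eFuses hold hashes of the candidate root keys.
KEY_SLOTS = [hashlib.sha256(k).digest()
             for k in (b"root-key-0", b"root-key-1", b"root-key-2")]
revocation_mask = 0b000  # eFuse bits: can only ever be set, never cleared

def revoke_slot(mask: int, slot: int) -> int:
    # One-way operation, modeling the burning of a revocation eFuse.
    return mask | (1 << slot)

def key_accepted(mask: int, key: bytes) -> bool:
    # A key is accepted if it matches a slot hash and the slot is not revoked.
    h = hashlib.sha256(key).digest()
    return any(h == slot_hash and not (mask >> i) & 1
               for i, slot_hash in enumerate(KEY_SLOTS))

assert key_accepted(revocation_mask, b"root-key-0")
# Incident response: rotate away from key 0 by revoking its slot forever.
revocation_mask = revoke_slot(revocation_mask, 0)
assert not key_accepted(revocation_mask, b"root-key-0")
assert key_accepted(revocation_mask, b"root-key-1")
```

With only a handful of slots, every rotation is a scarce resource, which is exactly why regular scheduled rotation does not fit this scheme.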
It may be impossible to isolate signing into a separate step, which is a common practice in Windows software signing. As a result, software building requires use of the release keys. This places additional requirements on the release build environment.
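For contrast, the separated flow one would like looks roughly like this: the build environment emits only an image digest, and an access-controlled signing service turns the digest into a signature, so the release key never enters the build machines. This is a sketch under that assumption, with an HMAC standing in for the asymmetric signature; the point of the paragraph above is that many SoC signing tools instead require the key at image-build time.

```python
import hashlib
import hmac

# Hypothetical release key; it would live only in the signing service / HSM.
RELEASE_KEY = b"release-signing-key"

def build_step(image: bytes) -> bytes:
    # Build environment never sees the release key: it only emits a digest.
    return hashlib.sha256(image).digest()

def signing_service(digest: bytes) -> bytes:
    # Separate, access-controlled step that produces the detached signature.
    return hmac.new(RELEASE_KEY, digest, hashlib.sha256).digest()

digest = build_step(b"firmware-1.2.3")
signature = signing_service(digest)
```

When the chip's tooling cannot consume a detached signature like this, the release key has to be available during the build itself, and the build environment must be hardened accordingly.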
Manufacturing the Device
Manufacturing a secure device requires trust in the manufacturing environment. This may be challenging when manufacturing is outsourced.
If the boot uses encrypted images, the manufacturing line must program the encryption key. Sometimes manufacturing also generates a device certificate (such as an IDevID) for the device. Both require trust in the physical programming environment; certificate creation may even need an online connection.
Initially, most chips do not enforce secure boot, so it must be turned on by programming an eFuse.
Does the chip matter?
Every system chip (SoC) vendor has created their own implementation of secure boot. These may offer good solutions to some of the issues, but they may also introduce problems specific to the manufacturer. All phone SoCs offer secure boot and their features are well aligned, probably due to early Nokia influence. Among industrial devices, two very common chip families are NXP i.MX and Xilinx UltraScale+.
The NXP i.MX series has a mature secure boot with reasonably good tools and features. As an example of the advanced features, i.MX offers a secure way to enable investigation of devices that have failed in the field.
The Xilinx UltraScale+ is a rather unique chip with an integrated FPGA. The chip offers extensive features for tamper-resistant designs. The documentation tends to focus on these advanced security features, leaving basic security use less well documented. At the same time, the chip has a number of peculiarities, such as the use of a SHA-3 hash for signatures, which limits the available code signing solutions.
Succeeding With Secure Boot
It may be tempting to think of secure boot as a simple chip feature to be enabled. In reality, implementing secure boot at scale requires attention to multiple areas. When implementing a first secure boot solution, the implications of secure boot for different processes and parts of the system can be surprising and difficult to predict.
I wish some friendly time traveler would send this write-up back to 2002 for me to read; it would have saved a lot of analysis work. But as recent studies show, it is not so easy to find time travelers on the Internet.
Implementing secure boot often makes some existing security gaps more visible. Fixing these gaps may be important, but it is good to remember that it may not be reasonable to fix them all at once. Keeping a balanced view of the risks is important!
I guess many readers are either involved with or planning an embedded secure boot project. I hope this write-up gives you some ideas, or confidence, on the road towards booting securely!