By Guillaume D

2019-06-12 08:17:39 8 Comments

By reading this question, I understood, for instance, why dynamic allocation or exceptions are not recommended in environments where radiation is high, like in space or in a nuclear power plant. Concerning templates, I don't see why. Could you explain it to me?

Considering this answer, it seems that they are quite safe to use.

Note: I'm not talking about complex standard library stuff, but purpose-made custom templates.


@Basile Starynkevitch 2019-06-12 08:38:00

Notice that space-compatible (radiation-hardened, aeronautics-compliant) computing devices are very expensive (including to launch into space, since their weight exceeds kilograms), and that a single space mission costs perhaps a hundred million € or US$. Losing a mission because of software or computer concerns generally has a prohibitive cost, so it is unacceptable, and that justifies costly development methods and procedures you would not even dream of using to develop a mobile-phone applet. Using probabilistic reasoning and engineering approaches is recommended, since cosmic rays are still a somewhat "unusual" event. From a high-level point of view, a cosmic ray and the bit flip it produces can be considered as noise in some abstract form of signal or input. You could look at that "random bit flip" problem as a signal-to-noise-ratio problem; then randomized algorithms may provide a useful conceptual framework (notably at the meta level, i.e. when analyzing your safety-critical source code or compiled binary, but also, at critical-system run time, in some sophisticated kernel or thread scheduler), together with an information-theory viewpoint.

Why C++ template use is not recommended in space/radiated environment?

That recommendation is a generalization, to C++, of the MISRA C coding rules, of the Embedded C++ rules, and of the DO-178C recommendations, and it is related not to radiation but to embedded systems. Because of radiation and vibration constraints, the embedded hardware of any space rocket computer has to be very small (e.g. for economic and energy-consumption reasons; in computing power it is closer to a Raspberry Pi-like system than to a big x86 server system). Space-hardened chips cost 1000x as much as their civilian counterparts. And computing the WCET on space-embedded computers is still a technical challenge (e.g. because of CPU-cache-related issues). Hence, heap allocation is frowned upon in safety-critical embedded software-intensive systems (how would you handle out-of-memory conditions there? Or how would you prove that you have enough RAM for all real run-time cases?).

Remember that in the safety-critical software world, you not only somehow "guarantee" or "promise", and certainly assess (often with some clever probabilistic reasoning), the quality of your own software, but also of all the software tools used to build it (in particular: your compiler and your linker; Boeing or Airbus won't change their version of GCC cross-compiler used to compile their flight control software without prior written approval from e.g. FAA or DGAC). Most of your software tools need to be somehow approved or certified.

Be aware that, in practice, most C++ (but certainly not all) templates internally use the heap. And standard C++ containers certainly do. Writing templates which never use the heap is a difficult exercise. If you are capable of that, you can use templates safely (assuming you do trust your C++ compiler and its template expansion machinery, which is the trickiest part of the C++ front-end of most recent C++ compilers, such as GCC or Clang).

I guess that for similar (toolset reliability) reasons, it is frowned upon to use many source code generation tools (doing some kind of metaprogramming, e.g. emitting C++ or C code). Observe, for example, that if you use bison (or RPCGEN) in some safety critical software (compiled by make and gcc), you need to assess (and perhaps exhaustively test) not only gcc and make, but also bison. This is an engineering reason, not a scientific one. Notice that some embedded systems may use randomized algorithms, in particular to cleverly deal with noisy input signals (perhaps even random bit flips due to rare-enough cosmic rays). Proving, testing, or analyzing (or just assessing) such random-based algorithms is a quite difficult topic.

Look also into Frama-Clang and CompCert and observe the following:

  • C++11 (and later) is a horribly complex programming language. It has no complete formal semantics. Only a few dozen people worldwide are expert enough in all of C++ (probably most of them sit on its standards committee). I am capable of coding in C++, but not of explaining all the subtle corner cases of move semantics or of the C++ memory model. Also, in practice C++ requires many optimizations to be used efficiently.

  • It is very difficult to make an error-free C++ compiler, in particular because C++ practically requires tricky optimizations, and because of the complexity of the C++ specification. But current compilers (like recent GCC or Clang) are in practice quite good, with few (but still some) residual bugs. There is no CompCert++ for C++ yet, and making one would require several million € or US$ (but if you can collect such an amount of money, please contact me by email, e.g. at [email protected], my work email). And the space software industry is extremely conservative.

  • It is difficult to make a good C or C++ heap memory allocator. Coding one is a matter of trade-offs. As a joke, consider adapting this C heap allocator to C++.

  • Proving safety properties (in particular, lack of race conditions or of undefined behavior such as buffer overflow at run time) of template-related C++ code is still, in 2Q2019, slightly ahead of the state of the art of static program analysis of C++ code. My draft Bismon technical report (a draft H2020 deliverable, so please skip the pages meant for European bureaucrats) has several pages explaining this in more detail. Be aware of Rice's theorem.

  • A whole-system test of C++ embedded software could require a rocket launch (à la Ariane 5 test flight 501), or at least complex and heavy experimentation in a lab. It is very expensive. Even testing a Mars rover on Earth costs a lot of money.

Think of it: you are coding some safety-critical embedded software (e.g. for train braking, autonomous vehicles, autonomous drones, a big oil platform or oil refinery, missiles, etc.). You naively use some C++ standard container, e.g. some std::map<std::string,long>. What should happen under out-of-memory conditions? How do you "prove", or at least "convince", the people in organizations funding a 100M€ space rocket, that your embedded software (including the compiler used to build it) is good enough? A decades-old rule is to forbid any kind of dynamic heap allocation.

I'm not talking about complex standard library stuff but purpose-made custom templates.

Even these are difficult to prove, or more generally to assess the quality of (and you'll probably want to use your own allocator inside them). In space, code size is a strong constraint, so you would compile with, for example, g++ -Os -Wall or clang++ -Os -Wall. But how do you prove - or simply test - all the subtle optimizations done by -Os (which are specific to your version of GCC or of Clang)? Your space funding organization will ask you that, since any run-time bug in embedded C++ space software can crash the mission (read again about the Ariane 5 first-flight failure, coded in some dialect of Ada which at that time had a "better" and "safer" type system than C++17 has today; but don't laugh too much at Europeans: Boeing's 737 MAX with its MCAS is a similar mess).

My personal recommendation (but please don't take it too seriously; in 2019 it is more a pun than anything else) would be to consider coding your space embedded software in Rust, because it is slightly safer than C++. Of course, you would have to spend 5 to 10 M€ (or MUS$) over 5 to 7 years to get a fine Rust compiler suitable for space computers (again, please contact me professionally if you are capable of spending that much on a free-software CompCert/Rust-like compiler). But that is just a matter of software engineering and software project management (read both The Mythical Man-Month and Bullshit Jobs for more; be also aware of the Dilbert principle: it applies as much to the space software industry, or the embedded compiler industry, as to anything else).

My strong and personal opinion is that the European Commission should fund (e.g. through Horizon Europe) a free-software CompCert++ (or even better, a CompCert/Rust) project (and such a project would need more than 5 years and more than five top-class PhD researchers). But, at the age of 60, I sadly know it is not going to happen (because the E.C. ideology - mostly inspired by German policies, for obvious reasons - is still the illusion of the End of History, so H2020 and Horizon Europe are, in practice, mostly a way to implement tax optimizations for corporations in Europe through European tax havens), and I say that after several private discussions with several members of the CompCert project. I sadly expect DARPA or NASA to be much more likely than the E.C. to fund some future CompCert/Rust project.

NB. The European avionics industry (mostly Airbus) uses much more of a formal-methods approach than the North American one (Boeing). Hence some (not all) unit tests are avoided, since they are replaced by formal proofs of source code, perhaps with tools like Frama-C or Astrée (neither has been certified for C++, only for a subset of C forbidding dynamic memory allocation and several other features of C). And this is permitted by DO-178C (not by its predecessor DO-178B) and approved by the French regulator, DGAC (and, I guess, by other European regulators).

Also notice that many SIGPLAN conferences are indirectly related to the OP's question.

@Guillaume D 2019-06-12 08:42:18

True for standard-library templates and complex features (I once used lambdas in a project and it doubled my code size), but I was thinking more of custom-made ones. If you make your own templates, you should know what you are doing, right? I mean, if you don't know what you are coding, that is a pretty big problem.

@Basile Starynkevitch 2019-06-12 08:43:19

But how do you prove, to the people paying 100M€ a space mission, that your software is "bug-free"?

@Guillaume D 2019-06-12 08:49:08

Templates are just a way of writing. For instance, if I make a template function that can be used with 2 classes, I unit-test the 2 uses, plus a third use with another class, to be sure everything is correct.

@Tarick Welling 2019-06-12 08:50:36

This is still only tangentially related to templates. Could you elaborate more on the question's specific problem: templates?

@Tarick Welling 2019-06-12 13:13:11

"since any run-time bug in embedded C++ space software can crash the mission (read again about Ariane 5 first flight failure" - that ain't an argument in favour of C in the embedded space, though. C++ has stronger type checking, which would have helped in this instance.

@Basile Starynkevitch 2019-06-12 14:20:44

AFAIK, that is not true, because Ariane 5 was coded in some dialect of Ada which, at the time, had a better type system than C++17 has today.

@T.E.D. 2019-06-12 18:30:29

The Ariane thing has been much discussed. They used fairly bog-standard Ada, but turned off some unnecessary bounds checks to save CPU cycles. That worked great. Then they ported the same code to the later Ariane 5, which had different specs, and never revisited whether those checks should be turned back on or the bounds reassessed. It was a human error, and likely would have happened in C++ as well (since by default it has no bounds checks to start with), but it has nothing to do with templates or language complexity.

@Basile Starynkevitch 2019-06-12 18:38:57

Ariane was a management error. I gave a few references regarding management

@Mark 2019-06-12 19:52:55

For certain reasonable values of "error-free", it is not merely very difficult to make an error-free C++ compiler: it's provably impossible. Turing-completeness of the template metaprogramming system means that, in order to accept all well-formed C++ programs while rejecting all ill-formed ones, you would need to solve the Halting Problem.

@Basile Starynkevitch 2019-06-12 19:53:48

It is still very difficult to make an efficient, optimizing C++ compiler

@Joshua 2019-06-12 20:32:34

Unfortunately, your answer really explains why not to use the STL, not why not to use developer-provided templates.

@Samuel Liew 2019-06-12 22:51:26

Comments are not for extended discussion; this conversation has been moved to chat.

@Barmar 2019-06-12 23:36:15

I find these arguments about the complexity of the C++ language unconvincing. If the language of choice were C, they would be valid. But I read somewhere that Ada is their preferred language, and it's also a complex language, I think comparable to C++ (although I admit that I've never actually used it, I only read the specifications back in the 80's when it was being developed).

@Peter Cordes 2019-06-13 00:37:04

I find it suspicious that your example of a C++ template was std::map<std::string,long>, and then you argue against it for dynamic allocation reasons, not because it's a template. I guess you wanted to go into detail about dynamic alloc since the OP mentioned it, too, after covering templates for code-bloat and as part of the general complexity that makes verification maybe harder. It is possible to use templates safely if you think about what you're doing, but sure it's easy to get code bloat.

@Graham 2019-06-13 08:00:01

@Barmar If it convinces the C++ standards committee, it should convince you. In C, there are a fairly limited number of areas of undefined behaviour, and they are explicitly undefined in the standard. (Nasal demons, etc..) In C++, the standards committee literally stopped counting because there were too many to track.

@Barmar 2019-06-13 08:03:15

Undefined behavior is not a measure of complexity, IMHO.

@Sebastian Redl 2019-06-13 08:11:12

Re: Rust on safety-critical systems:

@Violet Giraffe 2019-06-13 12:33:04

1. Terms like "horribly complex language" don't look good in a technical discussion/argument. 2. I don't see how it's so big a deal that a compiler is not verified/certified. You need to test your actual program either way, even if your compiler is formally proven, and a good test suite does not care whether a bug comes from the compiler or the programmer.

@Basile Starynkevitch 2019-06-13 12:59:46

@VioletGiraffe: that is your opinion, but Airbus avoids several (not all) unit tests through a formal-methods approach, and that is possible under DO-178C. I know that Boeing does things differently. IIRC, this is the major difference between the European and North American software-safety approaches in avionics. Even if I might know more, I don't feel allowed to speak of it. However, I did hear talks by qualified Airbus and DGAC personnel explaining all that in great detail.

@Basile Starynkevitch 2019-06-13 13:39:21

@VioletGiraffe: If you know of any formal semantics - e.g. a denotational, axiomatic, or operational semantics - of a very large subset of C++17 (including the memory model & multi-threading aspects), please share it with us. Since I know of none, I stand by my position: C++17 is a horribly complex language.

@Basile Starynkevitch 2019-06-13 13:43:49

@VioletGiraffe: even a formal proof of some particular implementation of the C++ standard containers would interest me. AFAIK, such proofs are really incomplete, but I am curious whether you know them better than I do. Please share some references with us.

@Reuven Abliyev 2019-06-16 07:54:25

How is all this related to templates?

@MikeMB 2019-06-18 12:55:36

@Reuven: That's also what I'd like to know. Most of this post and many of the comments talk about things completely unrelated to templates.

@Basile Starynkevitch 2019-06-18 16:34:45

I addressed that by adding two paragraphs

@Yves Daoust 2019-06-19 07:14:31

@ReuvenAbliyev: IMO, these "displaced" comments are due to the fact that the OP is a pretty void topic. Is there any connection between the use of templates and robustness to radiation? Or between camel casing and resistance to fire? :-)

@Yves Daoust 2019-06-19 07:05:28

This statement that templates are a cause of vulnerability seems completely surrealistic to me, for two main reasons:

  • templates are "compiled away", i.e. instantiated and code-generated like any other function/member, and there is no run-time behavior specific to them, just as if they never existed;

  • no construction in any language is inherently safe or vulnerable; if an ionizing particle changes a single bit of memory, be it in code or in data, anything is possible (from no noticeable problem occurring up to a processor crash). The way to shield a system against this is by adding hardware memory error detection/correction capabilities, not by modifying the code!

@Basile Starynkevitch 2019-06-19 14:21:35

So you trust both the most complex part of the C++ compiler front-end and the code defining the templates. How do you exhaustively test both of them? Of course, this is unrelated to any cosmic ray flipping a bit.

@Basile Starynkevitch 2019-06-19 14:22:53

BTW, this is more a comment (quite an interesting one) than an answer.

@Yves Daoust 2019-06-20 06:36:09

@BasileStarynkevitch: no, this is a clear answer that templates have nothing to do with cosmic rays. Nor do loops, unsafe casts, lack of documentation, or the age of the programmer.

@Basile Starynkevitch 2019-06-20 07:51:21

I might disagree with the second point. I remember having read some academic papers claiming to detect bit changes in kernel code. I really forget the details, because that topic does not interest me. BTW, Guillaume D.'s understanding of the relation between radiation-hardened embedded systems and dynamic allocation is too simplistic (and we both agree on that, I hope).

@Yves Daoust 2019-06-20 08:00:23

@BasileStarynkevitch: this makes little sense. When vital parts of the kernel are corrupt, say the scheduler, the kernel doesn't work anymore.

@Basile Starynkevitch 2019-06-20 08:07:09

These papers duplicate the most critical code and check its validity from time to time. Again, I forget the details, because they don't interest me that much. And of course, they take a probabilistic approach (e.g. assume at most 1 random bit flip per second).

@Basile Starynkevitch 2019-06-20 08:47:02

I downvoted because you ignore probabilistic approaches. It makes sense to design a system which fails only one time out of a billion every minute, assuming a bit-flip frequency of less than 1 random bit flip per second.

@Yves Daoust 2019-06-20 08:50:26

@BasileStarynkevitch: the meaning of a downvote is "this answer is not useful".

@Basile Starynkevitch 2019-06-20 08:51:12

But it is not even an answer; it is a long and insightful comment about my answer.

@Yves Daoust 2019-06-20 08:51:43

@BasileStarynkevitch: not at all, it is addressing the OP.

@Basile Starynkevitch 2019-06-20 08:57:06

Then you forgot clever probabilistic approaches (and that is a good enough reason to downvote). And these do exist (even if I don't understand them, because they are out of my area of expertise). Any good book on randomized algorithms would explain the relevant concepts and approaches.

@Yves Daoust 2019-06-20 09:05:30

@BasileStarynkevitch: sorry, I didn't notice that the OP was focused on randomization. (By the way, randomization is an algorithmic technique to achieve good expected complexity and has nothing to do with kernel robustness.)

@Basile Starynkevitch 2019-06-20 09:09:08

As I am suggesting, there is some indirect relation. But I don't have that much time to chat about it. Read some SIGPLAN related papers or conferences, and also TACO, TAAS, ...

@Basile Starynkevitch 2019-06-23 12:08:45

Randomized algorithms are very well suited to handling a signal (in the information-theoretic sense, not the Unix one) with noise, and kernel robustness to e.g. random bit flips due to cosmic radiation is an instance of "signal with noise" handling.

@Yves Daoust 2019-06-24 07:58:44

@BasileStarynkevitch: you are misinformed, randomized algorithms are not suited to handle noise. And signal processing has absolutely nothing to do with kernel design.

@Yves Daoust 2019-06-24 07:59:14

@BasileStarynkevitch: by the way, you forgot to mention quantum computing, this is so trendy.

@Basile Starynkevitch 2019-06-24 08:06:58

But it is not yet used in embedded computing. And I even heard a talk explaining that quantum computing has no practical application (outside of quantum-chemistry simulation) before my expected death, the point being that practical quantum computers have very few qubits. And yes, signal processing (at the math level, not in coding) has an indirect relation to OS-kernel reliability techniques.

@Yves Daoust 2019-06-24 08:09:07

@BasileStarynkevitch: yes but they require robustification means because a fraction of the computations are wrong. I am sure you will find an indirect way.

@Basile Starynkevitch 2019-06-24 08:10:33

I am not interested at all in quantum computing. I'm leaving that to the next generation of software developers. At 60 years of age, I am too old for quantum computing. I never saw a quantum computer (in real life), and I don't even expect to see one.

@Yves Daoust 2019-06-24 08:12:59

@BasileStarynkevitch: we are not discussing your personal interests, but the way to help the OP deal with radiations.

@Basile Starynkevitch 2019-06-24 08:17:08

And I understand that quantum computing is not relevant for that goal, within the next 20 years. But statistical and probabilistic techniques (including randomized algorithms) are relevant. Both for static source code analysis of the program, and even for radiation-proofing of the kernel (e.g. by duplicating cleverly its scheduler code). Of course, all this is indirect, as you mention

@Yves Daoust 2019-06-24 08:18:22

@BasileStarynkevitch: I said indirectly, don't you remember?

@user6556709 2019-06-12 09:20:40

The argumentation against the usage of templates in safety code is that they are considered to increase the complexity of your code without real benefit. This argumentation is valid if you have bad tooling and a classic idea of safety. Take the following example:

template<class T> void fun(T t) { /* ... */ }

In the classic way to specify a safety system, you have to provide a complete description of each and every function and structure of your code; you are not allowed to have any code without a specification. That means you would have to give a complete description of the functionality of the template in its general form, which, for obvious reasons, is not possible. That is, BTW, the same reason why function-like macros are also forbidden. If you change the approach so that you describe every actual instantiation of the template instead, you overcome this limitation, but then you need proper tooling to prove that you really described all of them.

The second problem is this one:

fun(b);

This line is not self-contained: you need to look up the type of b to know which function is actually called. Proper tooling which understands templates helps here, but it is true that this makes the code harder to check manually.

@Basile Starynkevitch 2019-06-12 09:25:40

Agreed, but my answer suggested that before yours did. And manual testing of embedded C++ software is really too expensive: you cannot afford many Ariane 5 test flights like flight 501.

@Lundin 2019-06-12 09:26:11

"The argumentation against the usage of templates in safety code is that they are considered to increase the complexity of your code without real benefit." No, that's the argument against using templates in embedded systems overall. The argument against using templates in safety code is that there is no use whatsoever for templates in 100% deterministic code. In such systems, there's no generic programming anywhere. You can't use stuff like std::vector, because you are unlikely to find a std lib compliant with safety standards. Or if you do, it will cost lots of cash.

@user6556709 2019-06-12 09:35:02

@Lundin Generic programming in the embedded world is a thing, even down to the deep embedded stuff, and for the same reason it became a thing at other levels: well-tested algorithms are a nice thing.

@Lundin 2019-06-12 10:37:03

@user6556709 Yes, you use drivers + HAL, but you don't use type generic programming.

@Graham 2019-06-13 08:05:53

@user6556709 But if you change the data type, or change ranges within the data type, it's very easy for the algorithm to stop behaving correctly. Ariane 501 was lost to that exact cause. So if you change anything, you no longer have a "well tested algorithm".

@user6556709 2019-06-13 08:48:34

@Graham The algorithm is still well tested, as opposed to what you write from scratch. You always have to check whether you can use it, but that is a different story.

@MikeMB 2019-06-18 13:05:23

@Lundin: Templates have nothing to do with deterministic or non-deterministic code. In the end, they are just a way to reuse code without dynamic dispatch (virtual functions or function pointers) and without copy-pasting, while being a tad safer than macros - e.g. reusing the same sort algorithm to sort an array of ints and an array of shorts. And the fact that std::vector is unsuitable for safety-critical real-time code has nothing to do with it being a template.

@Lundin 2019-06-18 13:07:15

@MikeMB Explain that to the C++ programmers, who must also let the same function handle sorting a Foo and a Bar, despite no such types being relevant to the application itself.

@MikeMB 2019-06-18 13:38:04

Who does? That may be the case for the author of a general-purpose algorithm library, but when we are talking about safety-critical real-time code we have left the "general purpose" domain anyway; also, the OP was explicitly talking about purpose-made custom templates.
