The algorithms used in compiler backends are written up in textbooks, and they are correct.
Understanding such foundations inoculates against foolishness - with register targeting, one sees that tail-recursive function calls in (say) Standard ML are exactly as efficient as for loops in a language with mutable variables.
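As a small illustration (my own sketch, not from any particular textbook), consider summing the integers from 1 to n in Standard ML. The recursive call is in tail position, so a backend with register targeting compiles it to a jump: the parameters i and acc live in registers and are updated in place, exactly as the loop variables of a for loop would be.

```sml
fun sumTo n =
  let
    (* go is tail-recursive: the recursive call is the last thing
       it does, so no stack frame is pushed - the call becomes a
       jump, and i and acc are mutated in registers. *)
    fun go (i, acc) =
      if i > n then acc
      else go (i + 1, acc + i)
  in
    go (1, 0)
  end

(* sumTo 100 evaluates to 5050 in constant stack space;
   the compiled code is the same loop one would write with
   mutable variables in an imperative language. *)
```

The point is that nothing about the functional style forces an overhead here; the "cost" of recursion is an artifact of backends that do not do this, not of the language.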
Nowadays a number of programmers learn ML, but they often think: "I will not use it, since I do not understand how it works." Or, worse, many of them make incorrect assumptions based on arbitrary misunderstandings of implementation methods.
Adding support for a new architecture to GHC took about 5000 lines of code.
To take my own kempe project as an example: compile times went from 18.7s to 36.2s when switching from GHC's NCG to its LLVM backend - nearly twice as long.
Performance on benchmarks was slightly worse when compiled via LLVM.
Putatively, GHC's implementation of laziness confounds LLVM's optimizations, but in any case one cannot simply bolt an LLVM backend onto a compiler and expect it to work well. This surely falls short of the promise of a general-purpose compiler backend.
This has hobbled Rust, for instance: its attempts to use LLVM's noalias annotations have uncovered several miscompilation bugs over the past 6 years:
LLVM is honed on Clang; as a general-purpose compiler backend it is in fact limited.
LLVM need not be used in compilers, particularly hobby compilers.