I have seen "strict data structures, lazy functions" bandied about among Haskellers. This is bad advice. Preferable is "know what you are doing."

Here is a type from my Dickinson project:

data Expression a
  = Literal  { exprAnn :: a, litText :: T.Text }
  | StrChunk { exprAnn :: a, chunkText :: T.Text }
  | Choice   { exprAnn :: a, choices :: NonEmpty (Double, Expression a) }
  | Let      { exprAnn :: a, letBinds :: NonEmpty (Name a, Expression a), letExpr :: Expression a }
  ⋮

...and some benchmarks involving this type:

benchmarking pipeline/examples/shakespeare.dck
time                 280.5 μs   (277.3 μs .. 285.3 μs)
                     0.998 R²   (0.998 R² .. 0.999 R²)
mean                 279.3 μs   (276.9 μs .. 282.2 μs)
std dev              8.440 μs   (6.815 μs .. 10.05 μs)
variance introduced by outliers: 25% (moderately inflated)

benchmarking pipeline/examples/fortune.dck
time                 141.3 μs   (140.7 μs .. 142.3 μs)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 144.0 μs   (142.6 μs .. 145.6 μs)
std dev              4.808 μs   (4.036 μs .. 5.646 μs)
variance introduced by outliers: 31% (moderately inflated)

We can add strictness annotations in a couple places:

data Expression a
  = Literal  { exprAnn :: a, litText :: !T.Text }
  | StrChunk { exprAnn :: a, chunkText :: !T.Text }
  | Choice   { exprAnn :: a, choices :: NonEmpty (Double, Expression a) }
  | Let      { exprAnn :: a, letBinds :: NonEmpty (Name a, Expression a), letExpr :: Expression a }
  ⋮

This slows things down.

benchmarking pipeline/examples/shakespeare.dck
time                 317.6 μs   (316.3 μs .. 319.1 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 316.2 μs   (315.0 μs .. 317.6 μs)
std dev              4.493 μs   (3.435 μs .. 6.425 μs)

benchmarking pipeline/examples/fortune.dck
time                 164.7 μs   (164.1 μs .. 165.4 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 165.0 μs   (164.4 μs .. 165.8 μs)
std dev              2.294 μs   (1.860 μs .. 2.849 μs)
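To see why forcing a field can cost time, consider a standalone sketch (the types here are hypothetical, not from Dickinson): a strict field is evaluated as soon as its constructor is, even if no consumer ever looks at it, whereas a lazy field stays a thunk until demanded.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Lazy field: the second component is stored as an unevaluated thunk.
data LazyPair = LazyPair Int Int

-- Strict field: the second component is forced when the constructor is evaluated.
data StrictPair = StrictPair Int !Int

firstLazy :: Int
firstLazy = case LazyPair 1 undefined of LazyPair x _ -> x

firstStrict :: Int
firstStrict = case StrictPair 1 undefined of StrictPair x _ -> x

main :: IO ()
main = do
    print firstLazy    -- prints 1; the undefined field is never demanded
    r <- try (evaluate firstStrict) :: IO (Either SomeException Int)
    case r of
        Left _  -> putStrLn "strict field forced undefined"
        Right x -> print x
```

One plausible reading of the slowdown above is along these lines: a `!T.Text` field makes every `Literal` and `StrChunk` pay to evaluate its text at construction, whether or not the pipeline later demands it.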

Conversely, we can add strictness annotations to the Name type:

data Name a = Name
  { name   :: NonEmpty T.Text
  , unique :: !Unique
  , loc    :: a
  }

Have a look at benchmarks for the renamer and scope checker:

benchmarking renamer/bench/data/nestLet.dck
time                 6.089 μs   (6.069 μs .. 6.114 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 6.086 μs   (6.072 μs .. 6.104 μs)
std dev              52.70 ns   (41.34 ns .. 69.76 ns)

benchmarking scope checker/bench/data/nestLet.dck
time                 1.588 μs   (1.578 μs .. 1.601 μs)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 1.582 μs   (1.576 μs .. 1.589 μs)
std dev              21.32 ns   (15.98 ns .. 29.41 ns)
variance introduced by outliers: 12% (moderately inflated)

Here is the lazier version of the Name type:

data Name a = Name
  { name   :: NonEmpty T.Text
  , unique :: Unique
  , loc    :: a
  }

The benchmarks have become slightly slower, viz.

benchmarking renamer/bench/data/nestLet.dck
time                 6.275 μs   (6.251 μs .. 6.301 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 6.275 μs   (6.253 μs .. 6.306 μs)
std dev              82.70 ns   (61.11 ns .. 126.0 ns)

benchmarking scope checker/bench/data/nestLet.dck
time                 1.707 μs   (1.702 μs .. 1.714 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 1.715 μs   (1.709 μs .. 1.720 μs)
std dev              18.64 ns   (15.61 ns .. 24.13 ns)

Conclusion

While the performance boost gained by using strictness annotations carefully is modest, it should not be left on the table.

GHC is one of the few compilers with first-class support for laziness. Laziness is not "free": we cannot use it everywhere and expect the best results, but we can guide evaluation if we know what we are doing.
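The same trade-off shows up at the function level, not just in data declarations. A standalone sketch (not Dickinson code): `foldl` defers every addition as a thunk, while `foldl'` forces the accumulator at each step, trading laziness for constant space.

```haskell
import Data.List (foldl')

-- foldl builds a chain of (+) thunks as long as the list before forcing any
-- of them; foldl' evaluates the accumulator at every step, so it stays a
-- plain Int throughout.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0

main :: IO ()
main = do
    print (sumStrict [1 .. 1000000])   -- constant-space accumulation
    print (sumLazy   [1 .. 1000])      -- same result, via a chain of thunks
```

Neither version is "the right one" in the abstract: `foldl'` wins for a strict accumulator like `Int`, while laziness can win when the result is only partially demanded. Again, the useful rule is "know what you are doing."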