In calculus, there's a technique for solving constrained optimization problems called Lagrange multipliers. You can use it to find the maximum of one function subject to another function as a constraint, e.g. maximizing non-linear utility given a non-linear resource constraint.
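As a refresher, here's a minimal sketch of the mechanics using sympy, with a made-up utility function and budget (a toy example of mine, not from the posts below):

```python
# Toy Lagrange-multiplier problem (hypothetical numbers):
# maximize the non-linear utility f(x, y) = x*y subject to the
# budget constraint x + 2*y = 10.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)

f = x * y               # utility to maximize
g = x + 2 * y - 10      # constraint, written so that g == 0

# Lagrangian: stationary points of L satisfy grad f = lam * grad g
L = f - lam * g
solutions = sp.solve(
    [sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True
)
print(solutions)  # [{lam: 5/2, x: 5, y: 5/2}] -> utility x*y = 25/2
```

The multiplier itself has a nice interpretation: `lam` is the shadow price, i.e. how much the optimal utility improves per unit of constraint you relax. The constraint is literally part of the answer.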
On a more philosophical level, Lagrange multipliers show the relationship between constraints and optimization. Often, you can't have one without the other. Without constraints, many functions can't be "optimized" – they lack a global (or local) maximum (or minimum). Given enough constraints, every function can be optimized (in the extreme, a trivial constraint that pins the domain to a single point).
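Both halves of that claim in one line (my illustration):

```latex
% no constraints: no maximum.  add constraints: a maximum appears.
\max_{x \in \mathbb{R}} x \ \text{does not exist}, \qquad
\max_{x \in [0,1]} x = 1, \qquad
\max_{x \in \{c\}} f(x) = f(c) \ \text{for any } f.
```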
Optimization is often seen as the highest good. Programs that run more efficiently. Processes that run faster. But optimization is a trade-off, and optimized systems are rigid. Especially early on, optimization should be an anti-goal. Instead, solve for optionality and eschew constraints.
Three posts related to the optionality/optimization trade-off:
- The U-shaped Utility of Monorepos: Delineating service boundaries too early is premature optimization and causes more issues down the road. Instead, start with a monorepo and gradually split out services.
- On Centralization: Centralization == optimization. Once decentralized protocols and ideas have been sufficiently proven out in the open, the best use cases often end up centralized and optimized.
- Antifragile in 2022: The other side of optimization is antifragility. Systems that survive shocks often come out stronger. Nassim Nicholas Taleb wrote a great book on the topic.