11-17-2025, 11:22 AM
Thread 6 — Optimisation
How Mathematics Finds the Best Possible Solution
Every engineering design, every machine-learning model, every financial system, and every scientific simulation relies on one idea:
Finding the best possible solution under constraints.
That process is called optimisation, and it sits at the heart of modern mathematics and computation.
This thread explores the major optimisation methods and how they drive real-world systems.
1. What Is Optimisation?
Optimisation is the process of finding:
• the minimum value
• the maximum value
• or the best configuration
…of a function or system.
Examples:
• designing the strongest but lightest bridge
• finding the fastest rocket trajectory
• training a neural network
• minimising fuel consumption
• choosing the best investment strategy
• locating faults in engineering models
At its core, optimisation asks:
“What is the best possible outcome within the rules of this system?”
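As a concrete (toy) illustration of "finding the minimum", here is the most naive approach imaginable: sample a function on a grid and keep the best point. The function is made up for this example; the method works in one dimension but becomes hopeless as dimensions grow, which is exactly why the cleverer methods below exist.

```python
# Toy illustration: approximate the minimiser of f(x) = (x - 3)**2 + 1
# by brute-force sampling on a grid. Fine in 1-D; useless in high dimensions.

def f(x):
    return (x - 3) ** 2 + 1

grid = [i / 100 for i in range(0, 1001)]   # x in [0, 10], step 0.01
best_x = min(grid, key=f)                  # grid point with smallest f

print(best_x, f(best_x))   # → 3.0 2.0... no: prints 3.0 1.0
```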
2. Types of Optimisation Problems
• Unconstrained Optimisation
Find minimum/maximum with no restrictions.
Solved with methods such as gradient descent.
• Constrained Optimisation
Solutions must follow rules (called constraints).
E.g. aircraft design, economics, structural loads.
• Linear Optimisation (Linear Programming)
When both the objective and the constraints are linear.
Extremely fast and widely used.
• Nonlinear Optimisation
Much harder — real-world engineering is almost always nonlinear.
• Discrete / Combinatorial Optimisation
When choices are limited or integer-based.
E.g. scheduling, routing, resource allocation, cryptography.
• Global vs Local Optimisation
Local = best nearby point.
Global = absolute best point (very hard to find).
3. Core Optimisation Methods
• Gradient Descent
The foundation of machine learning.
Move in the direction of steepest descent until you reach a minimum.
Variants:
• stochastic gradient descent (SGD)
• momentum
• Adam
• RMSProp
Variants of it train virtually all modern neural networks.
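The core loop is a few lines. This sketch minimises a made-up 2-D quadratic bowl with an analytic gradient; real ML frameworks compute gradients automatically, and the learning rate here is hand-picked for this function.

```python
# Plain gradient descent on a toy function f(x, y) = x**2 + 10*y**2.
# Minimum is at (0, 0). Learning rate chosen to be stable for this f.

def grad(p):
    x, y = p
    return (2 * x, 20 * y)                     # analytic gradient of f

def gradient_descent(p, lr=0.05, steps=200):
    for _ in range(steps):
        gx, gy = grad(p)
        p = (p[0] - lr * gx, p[1] - lr * gy)   # step against the gradient
    return p

x, y = gradient_descent((5.0, 5.0))
print(x, y)   # both coordinates end up very close to 0
```

SGD, momentum, Adam and RMSProp all modify how that update step is computed, not the basic "follow the negative gradient" idea.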
• Newton’s Method
Uses curvature (second derivative) information.
Converges very quickly near a solution, but each step is expensive because it needs the full second-derivative (Hessian) information.
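In one dimension the method is transparent. This sketch minimises a made-up function f(x) = x − ln(x) (minimum at x = 1), dividing the first derivative by the second at each step; note how few iterations it needs.

```python
# Newton's method for minimising f(x) = x - ln(x) on x > 0.
# f'(x) = 1 - 1/x, f''(x) = 1/x**2; the minimum is at x = 1.

def newton_minimise(x, steps=8):
    for _ in range(steps):
        f1 = 1 - 1 / x        # first derivative
        f2 = 1 / x ** 2       # second derivative (curvature)
        x = x - f1 / f2       # Newton step: rescale the gradient by curvature
    return x

x_star = newton_minimise(0.5)
print(x_star)   # ≈ 1.0 after only a handful of steps
```

In n dimensions the curvature becomes an n×n Hessian matrix that must be formed and inverted, which is where the cost comes from.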
• Quasi-Newton Methods (BFGS, L-BFGS)
Approximate Newton’s method without needing second derivatives.
Used in physics simulations, finance, and large-scale optimisation.
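The core trick is easiest to see in one dimension: estimate the curvature from the change in gradients between successive points (a secant), rather than computing the second derivative. This is a hand-rolled 1-D illustration of that idea on a made-up quadratic, not BFGS itself, which maintains a matrix version of the same approximation.

```python
# Quasi-Newton idea in 1-D: approximate f'' from successive gradients.
# Toy objective: f(x) = (x - 2)**2, so grad(x) = 2*(x - 2), minimum at x = 2.

def grad(x):
    return 2 * (x - 2)

def secant_minimise(x0, x1, steps=20):
    g0, g1 = grad(x0), grad(x1)
    for _ in range(steps):
        h = (g1 - g0) / (x1 - x0)   # secant estimate of the curvature f''
        x0, g0 = x1, g1
        x1 = x1 - g1 / h            # Newton-like step with estimated curvature
        g1 = grad(x1)
        if abs(x1 - x0) < 1e-12:    # converged
            break
    return x1

x_min = secant_minimise(0.0, 1.0)
print(x_min)   # → 2.0
```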
• Linear Programming — Simplex Method
A legendary algorithm.
Handles problems with millions of variables efficiently in practice.
Used in:
• supply chains
• airline scheduling
• logistics
• resource optimisation
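A tiny made-up linear programme makes the geometry visible. This is not the simplex method itself (which pivots cleverly between vertices), but it demonstrates the fact simplex exploits: a linear optimum always lies at a corner of the feasible region, so only vertices need checking.

```python
# Solve: maximise 3x + 2y  subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0
# by enumerating the corner points of the feasible region.

from itertools import combinations

# Each constraint written as a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Candidate vertices: intersections of pairs of constraint boundary lines.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) > 1e-12:                      # skip parallel boundaries
        x = (c1 * b2 - c2 * b1) / det         # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if feasible(x, y):
            vertices.append((x, y))

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)   # → (3.0, 1.0), objective value 11
```

Checking every vertex pair is exponential in general; simplex's achievement is walking between adjacent vertices so that, in practice, only a tiny fraction are ever visited.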
• Quadratic Programming (QP)
When the objective is quadratic (usually with linear constraints); standard in control theory, e.g. model predictive control.
• Genetic Algorithms & Evolutionary Methods
Inspired by natural selection.
Useful when the search space is chaotic or discontinuous.
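A minimal sketch of the selection / crossover / mutation loop, on a made-up 1-D problem with a fixed random seed. Real genetic algorithms encode candidates as bit strings or structured genomes; here each "individual" is just a number, which keeps the three operators easy to see.

```python
import random

random.seed(0)   # fixed seed so the run is repeatable

def fitness(x):
    return -(x - 7) ** 2          # maximise fitness = minimise (x - 7)**2

def evolve(pop, generations=100, mut=0.5):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]         # selection: best half survives
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                # crossover: blend two parents
            child += random.gauss(0, mut)      # mutation: random perturbation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve([random.uniform(-50, 50) for _ in range(30)])
print(best)   # lands close to 7
```

No gradients are used anywhere, which is why this family works on discontinuous or chaotic search spaces where calculus-based methods fail.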
• Simulated Annealing
Escapes local minima by occasionally accepting worse solutions, with a probability that shrinks as the "temperature" cools.
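The acceptance rule is the whole trick. This sketch runs on a made-up wavy function with many local minima, tracking the best point ever seen; the seed is fixed for repeatability, and the temperature schedule is hand-picked rather than tuned.

```python
import math
import random

random.seed(1)

def f(x):
    # Many local minima: a parabola with a strong sine ripple on top.
    return x ** 2 + 10 * math.sin(3 * x)

def anneal(x, temp=10.0, cooling=0.995, steps=5000):
    fx = f(x)
    best_x, best_f = x, fx
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)       # random nearby move
        fc = f(candidate)
        # Always accept improvements; accept worse moves with prob e^(-delta/T).
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
        if fx < best_f:
            best_x, best_f = x, fx
        temp *= cooling                             # cool down gradually
    return best_x, best_f

x_best, f_best = anneal(5.0)
print(x_best, f_best)   # a deep minimum, well below the shallow local ones
```

Plain gradient descent started at x = 5 would get stuck in the nearest dip; the random accept-worse moves are what let the search cross barriers early on.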
• Particle Swarm Optimisation
Mimics swarm behaviour (birds, fish).
Great for global optimisation problems.
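Each particle's velocity feels two pulls: towards its own best-seen position and towards the swarm's best-seen position. This sketch uses a made-up 1-D objective, a fixed seed, and commonly used (but here unverified-for-your-problem) coefficients.

```python
import random

random.seed(2)

def f(x):
    return (x - 4) ** 2            # toy objective, minimum at x = 4

n = 20
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                     # each particle's best position so far
gbest = min(pos, key=f)            # the swarm's best position so far

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]                          # inertia
                  + 1.5 * r1 * (pbest[i] - pos[i])      # pull to personal best
                  + 1.5 * r2 * (gbest - pos[i]))        # pull to swarm best
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
        if f(pos[i]) < f(gbest):
            gbest = pos[i]

print(gbest)   # converges close to 4
```

Like genetic algorithms, it needs only function evaluations, no derivatives, so it applies to rough or black-box objectives.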
4. Optimisation in the Real World
Engineering:
• aircraft wing shape
• safer car frames
• stronger buildings
• reduced vibration systems
Physics & Cosmology:
• fitting cosmological parameters
• simulating minimal-energy configurations
• solving inverse problems
Computer Graphics & Games:
• animation
• pathfinding
• physics engines
• inverse kinematics
AI / Machine Learning:
• training neural networks
• hyperparameter tuning
• reinforcement learning
Economics & Finance:
• optimal portfolio construction
• risk minimisation
• economic equilibria
Medicine & Biology:
• optimal drug dosage models
• protein folding algorithms
• imaging reconstruction
5. The Big Idea
Optimisation is the science of improvement.
It transforms vague goals (“make this better”) into mathematical problems we can solve.
Without it, modern science and technology simply wouldn’t work.
Every design, every simulation, every AI model, every engineering structure
depends on powerful optimisation algorithms running behind the scenes.
Written by Leejohnston & Liora — The Lumin Archive Research Division