Lecture 23
November 25, 2024
Most search algorithms look for critical points to identify candidate optima; the "best" of these critical points is then taken as the global optimum.
Two common approaches:
1. Gradient-based search: find an estimate of the gradient near the current point and step in the positive or negative direction (depending on whether we are maximizing or minimizing):
\[x_{n+1} = x_n \pm \alpha_n \nabla f(x_n)\]
Obtaining the gradient of a simulation model can be problematic: in some cases, methods like stochastic gradient approximation or automatic differentiation can be used (a minimal sketch of this approach follows the list below).
2. Sampling-based search: use a sampling strategy to find a new proposal, evaluate it, and keep it if it improves on the current solution. Evolutionary algorithms fall into this category, and constraints can also be incorporated into the search.
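As an illustrative sketch of the gradient-based approach (not code from the lecture; the helper names, step size, and test function are our own choices), the snippet below estimates the gradient with central finite differences and applies the update rule above to minimize a smooth function:

# estimate ∇f(x) with central finite differences
function finite_diff_grad(f, x; h=1e-6)
    g = similar(x)
    for i in eachindex(x)
        xp = copy(x); xm = copy(x)
        xp[i] += h
        xm[i] -= h
        g[i] = (f(xp) - f(xm)) / (2h)
    end
    return g
end

# apply x_{n+1} = x_n - α ∇f(x_n) to (locally) minimize f
function gradient_descent(f, x0; α=0.1, maxiter=1_000)
    x = copy(x0)
    for _ in 1:maxiter
        x .-= α .* finite_diff_grad(f, x)
    end
    return x
end

# example: minimize a smooth quadratic; the iterates converge to [3.0, 3.0]
gradient_descent(x -> sum((x .- 3.0).^2), [0.0, 0.0])

For a noisy simulation objective like the one below, this finite-difference estimate would itself be noisy, which is one reason sampling-based methods are often preferred.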
Two main packages are available in Julia; we will use Metaheuristics.jl in our examples:
using Random, Distributions, Statistics
using Metaheuristics

Random.seed!(1)

# define the (noisy) function to optimize
function f(x)
    lnorm = LogNormal(0.25, 1)
    y = rand(lnorm)
    return sum(x .- y)^2
end

# average over 1,000 Monte Carlo samples to smooth out the noise
fobj(x) = mean([f(x) for i in 1:1000])

# bounds: first row is the lower bound, second row is the upper bound
bounds = [0.0 1000.0]'

# optimize with differential evolution
results = Metaheuristics.optimize(fobj, bounds, DE())
results.best_sol   # displayed output: (f = 3.7256e+00, x = [1.57087])
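A brief usage note: Metaheuristics.jl also provides accessor functions for the returned state, which can be more convenient than reaching into best_sol directly (names as in the Metaheuristics.jl documentation; check your installed version):

# accessor functions on the result state
Metaheuristics.minimum(results)     # best objective value found
Metaheuristics.minimizer(results)   # decision variable(s) achieving it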
Metaheuristics.jl Algorithms
Metaheuristics.jl contains a number of algorithms, covering a variety of single-objective and multi-objective problems.
We won’t go into details here, and will just stick with DE()
(differential evolution) in our examples.
These methods work pretty well, but can require many function evaluations and are not guaranteed to find a global optimum.
Simulation-Optimization involves the use of a simulation model to map decision variables and other inputs to system outputs.
What kinds of methods can we use for simulation-optimization?
Challenge: the underlying structure of the simulation model \(f(x)\) is unknown and can be complex.
We can use a search algorithm to navigate the response surface of the model and find an “optimum”.
We usually can't guarantee that we can find an optimum (or even quantify an optimality gap), because the underlying structure of \(f(x)\) is unknown and its response surface can be complex and noisy. But keep in mind:
The optimal solution of a model is not an optimal solution of a problem unless the model is a perfect representation of the problem, which it never is.
— Ackoff, R. L. (1979). “The Future of Operational Research is Past.” The Journal of the Operational Research Society, 30(2), 93–104. https://doi.org/10.1057/jors.1979.22
Simulation-optimization methods typically rely on heuristics to decide that a solution is good enough. These can include stopping after a fixed computational budget (a maximum number of function evaluations or iterations) or stopping when further iterations yield little improvement; see the snippet below for how such budgets can be set.
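For example, budget-style stopping rules can be passed to Metaheuristics.jl through an Options object (the keyword names here are taken from the Metaheuristics.jl Options type as we understand it; treat them as assumptions and check the documentation for your version):

using Metaheuristics

# cap the computational budget for the solver
opts = Options(f_calls_limit=5_000,   # maximum number of objective evaluations
               iterations=200)        # maximum number of solver iterations
algorithm = DE(options=opts)          # pass the options to the algorithm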
\[\begin{align*} X_{t+1} &= X_t + a_t + y_t \\ &\quad + \frac{X_t^q}{1 + X_t^q} - bX_t,\\[0.5em] y_t &\sim \text{LogNormal}(\mu, \sigma^2) \end{align*}\]
| Parameter | Definition |
|---|---|
| \(X_t\) | P concentration in the lake |
| \(a_t\) | point source P input |
| \(y_t\) | non-point source P input |
| \(q\) | P recycling rate |
| \(b\) | rate at which P is lost |
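As a sketch of these dynamics (the values of \(q\), \(b\), and the non-point source distribution below are illustrative assumptions, not necessarily the values used in class), the following simulates the P concentration forward in time under a given sequence of point-source inputs:

using Distributions, Random

const q = 2.5                               # P recycling rate (assumed)
const b = 0.4                               # P loss rate (assumed)
const y_dist = LogNormal(log(0.03), 0.25)   # non-point source inputs (assumed)

# simulate T steps of lake P concentration given point-source inputs a[1:T]
function lake_sim(a, T; X0=0.0)
    X = zeros(T + 1)
    X[1] = X0
    for t in 1:T
        y = rand(y_dist)
        X[t + 1] = X[t] + a[t] + y + X[t]^q / (1 + X[t]^q) - b * X[t]
    end
    return X
end

Random.seed!(1)
trajectory = lake_sim(fill(0.02, 100), 100)  # constant release of 0.02 per step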
Try optimizing this problem using DE() (with a maximum of 5,000 function evaluations). What do you observe?
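One way this exercise might be set up is sketched below, continuing from the lake_sim sketch above. Everything problem-specific here is an assumption for illustration, not necessarily the formulation used in class: the decision variable is a constant release \(a\), and the objective rewards larger releases while penalizing Monte Carlo replications whose final concentration exceeds an assumed eutrophication threshold.

using Metaheuristics, Statistics

# assumed objective: prefer larger constant releases a, penalize replications
# whose final P concentration exceeds an (assumed) threshold of 1.5
function lake_obj(x; nsamples=100, threshold=1.5)
    a = x[1]
    Xfinal = [lake_sim(fill(a, 100), 100)[end] for _ in 1:nsamples]
    return -a + 10.0 * mean(Xfinal .> threshold)   # minimize: -release + penalty
end

bounds = [0.0 0.1]'                                 # assumed bounds on a
opts = Metaheuristics.Options(f_calls_limit=5_000)  # 5,000-evaluation budget
results = Metaheuristics.optimize(lake_obj, bounds, DE(options=opts))
results.best_sol

Because the objective is itself a Monte Carlo estimate, repeated optimizer runs can return somewhat different solutions; this is worth keeping in mind when deciding whether a solution is good enough.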
Now we can simulate the system to learn how the solution performs.
Many choices were made in this optimization problem: the number of Monte Carlo samples used to evaluate the objective, the bounds on the decision variables, the choice of algorithm, and the maximum number of function evaluations.
All of these affect the tradeoff between solution quality and computational expense.
Happy Thanksgiving!
After Break: no class; class time will be office hours for check-ins.
HW5: Due 12/5
Project: Presentation due 12/9 (cannot be late)