Lecture 22
November 20, 2024
Pros:
Cons:
Systems models often have:
A bifurcation occurs when the qualitative behavior of a system changes as a parameter crosses a threshold.
Feedback loops can be reinforcing or dampening.
Dampening feedback loops are associated with stable equilibria, while reinforcing feedback loops are associated with instability.
We often have multiple objectives that we want to analyze.
We could formulate constraints based on a priori assessments of acceptability:
Another common approach is to combine multiple objectives \(Z_i\) together into a weighted sum:
\[\sum_i w_i Z_i,\]
where \(\sum_i w_i = 1\).
This requires normalizing the \(Z_i\).
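As a sketch (function and variable names are illustrative, not from the lecture), a weighted sum with min-max normalization might look like:

```python
import numpy as np

def weighted_sum(Z, w):
    """Combine objectives via a weighted sum of min-max normalized values.

    Z : (n_solutions, n_objectives) array of objective values Z_i
    w : (n_objectives,) weights w_i summing to 1
    """
    Z = np.asarray(Z, dtype=float)
    w = np.asarray(w, dtype=float)
    assert np.isclose(w.sum(), 1.0), "weights must sum to 1"
    # Normalize each objective to [0, 1] so different units are comparable
    Z_norm = (Z - Z.min(axis=0)) / (Z.max(axis=0) - Z.min(axis=0))
    return Z_norm @ w

# Illustrative values: two objectives on very different scales
scores = weighted_sum([[1.0, 100.0], [2.0, 50.0], [3.0, 0.0]], [0.5, 0.5])
```

Without the normalization step, the second objective's larger magnitude would dominate the sum regardless of the chosen weights.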
Can you think of problems or limits with these approaches to handling multiple objectives?
Recall the shallow lake problem from earlier in the semester.
\[\begin{align*} X_{t+1} &= X_t + a_t + y_t \\ &\quad + \frac{X_t^q}{1 + X_t^q} - bX_t,\\[0.5em] y_t &\sim \text{LogNormal}(\mu, \sigma^2) \end{align*}\]
| Parameter | Definition |
|---|---|
| \(X_t\) | P concentration in lake |
| \(a_t\) | point source P input |
| \(y_t\) | non-point source P input |
| \(q\) | P recycling rate |
| \(b\) | rate at which P is lost |
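A minimal simulation sketch of these dynamics (parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np

def simulate_lake(a, q=2.5, b=0.4, mu=-3.5, sigma=0.5, X0=0.0, seed=1):
    """Simulate lake P concentration X_t under point-source inputs a_t.

    Each step applies: X_{t+1} = X_t + a_t + y_t + X_t^q/(1 + X_t^q) - b*X_t,
    with y_t ~ LogNormal(mu, sigma^2) the stochastic non-point input.
    """
    rng = np.random.default_rng(seed)
    T = len(a)
    X = np.zeros(T + 1)
    X[0] = X0
    for t in range(T):
        y = rng.lognormal(mean=mu, sigma=sigma)  # non-point source input
        X[t + 1] = X[t] + a[t] + y + X[t]**q / (1 + X[t]**q) - b * X[t]
    return X

# Constant point-source input over a 50-step horizon (illustrative)
X = simulate_lake([0.02] * 50)
```

Re-running with different seeds gives the ensemble of trajectories needed to estimate quantities like the probability of eutrophication.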
What complicates formulating a mathematical program for the lake problem?
Our objective might be to maximize \(\sum_{t=1}^T a_t\) (as a proxy for economic activity), while keeping a low probability of eutrophication.
How can we represent uncertainty in inputs \(y_t\) in an MP?
What are some relevant objectives for the lake problem?
Many systems problems with nonlinear dynamics, uncertain inputs, and multiple objectives don't lend themselves well to mathematical programming.
So how can we make decisions?
Often have multiple objectives at play when designing or managing environmental systems:
Some examples:
Approaches are method specific!
Key Question: what does it mean to “optimize” multiple objectives?
Linear Programming:
Simulation-Optimization:
What does it mean to “optimize” multiple objectives?
Straightforward with weighting, but requires a priori elicitation of weights.
What about if we leave the objectives unaggregated?
Simulated 100 random decisions with two objectives:
There is a tradeoff between these two objectives:
Greater releases typically mean lower reliability.
What does it mean to find an “optimum” across these two objectives?
We say that a decision \(\mathbf{x}\) is dominated if there exists another solution \(\mathbf{y}\) such that for every objective metric \(Z_i(\cdot)\) (assuming all objectives are minimized), \[Z_i(\mathbf{x}) > Z_i(\mathbf{y}).\]
\(\mathbf{x}\) is non-dominated if it is not dominated by any \(\mathbf{y} \neq \mathbf{x}\).
The set of non-dominated solutions is called the Pareto front (solutions are Pareto-optimal).
Every member of a Pareto front represents a different tradeoff between objectives.
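A sketch of identifying the non-dominated set among a sample of decisions, matching the dominance definition above (all objectives minimized; the example values are illustrative):

```python
import numpy as np

def nondominated_mask(Z):
    """Return a boolean mask marking the non-dominated rows of Z.

    Z : (n_solutions, n_objectives) array; all objectives are minimized.
    Row i is dominated if some row j is strictly better in every objective.
    """
    Z = np.asarray(Z, dtype=float)
    n = len(Z)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(Z[j] < Z[i]):
                mask[i] = False  # j dominates i
                break
    return mask

# Three mutually non-dominated points plus one dominated by (2, 2)
mask = nondominated_mask([[1, 3], [2, 2], [3, 1], [3, 3]])
```

The rows where the mask is `True` form the Pareto front; applying this filter to the 100 simulated decisions would leave only the solutions representing genuine tradeoffs.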
This gives us two frameworks for evaluating tradeoffs:
In higher dimensions, manual screening of a Pareto front is difficult.
Can use multi-objective optimization with (certain) evolutionary algorithms.
Wednesday: Limits of Mathematical Programming
Monday: Prelim 2 Review, Wrap-Up
HW5: Will be due last week of class (12/5).