Chance-constrained Optimization Using Scenario Approach

Optimization
Uncertainty
Author

Wangkun Xu

Published

April 14, 2025

Modified

April 14, 2025

Reference: X. Geng and L. Xie, ‘Data-driven decision making in power systems with probabilistic guarantees: Theory and applications of chance-constrained optimization’, Annual Reviews in Control, vol. 47, pp. 341–363, 2019, doi: 10.1016/j.arcontrol.2019.05.005.

Formulation

Consider the following chance-constrained optimization problem: \[ \begin{aligned} & \text { (CCO): } \min _x c^{\top} x \\ & \qquad \begin{array}{l} \text { s.t. } \mathbb{P}_{\xi}(f(x, \xi) \leq 0) \geq 1-\epsilon \\ \quad x \in \mathcal{X} \end{array} \end{aligned} \] The chance constraint should be understood as a joint constraint over the vector \(f\), e.g., \[ \mathbb{P}\left(f_1(x, \xi) \leq 0, f_2(x, \xi) \leq 0, \cdots, f_m(x, \xi) \leq 0\right) \geq 1-\epsilon \]

Connection to Value at Risk (VaR). VaR: \(VaR(\xi,1-\epsilon):=\inf\{\gamma:\mathbb{P}(\xi\leq \gamma)\geq 1-\epsilon\}\), i.e., the smallest value \(\gamma\) (the \((1-\epsilon)\)-quantile) such that the probability of \(\xi\) being at most \(\gamma\) is at least \(1-\epsilon\).

Therefore, the chance constraint is \[ \mathbb{P}_{\xi}(f_{i}(x, \xi) \leq 0) \geq 1-\epsilon \Leftrightarrow VaR(f_{i}(x,\xi),1-\epsilon)\leq {0} \]
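
This equivalence is easy to check numerically. The sketch below is illustrative only (the scalar constraint \(f(x,\xi)=\xi-x\), the Gaussian uncertainty, and the sample size are our assumptions, not from the reference); it estimates both sides of the equivalence from the same Monte Carlo samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar constraint f(x, xi) = xi - x with xi ~ N(0, 1):
# P(f(x, xi) <= 0) >= 1 - eps should hold exactly when VaR(f(x, xi), 1 - eps) <= 0.
eps = 0.05
x = 2.0                                # a candidate decision
xi = rng.normal(size=100_000)          # Monte Carlo samples of the uncertainty

f = xi - x
var = np.quantile(f, 1 - eps)          # empirical (1 - eps)-quantile, i.e. VaR
prob = np.mean(f <= 0)                 # empirical chance-constraint level

# The two feasibility checks agree: VaR <= 0 iff P(f <= 0) >= 1 - eps.
feasible_via_var = var <= 0
feasible_via_prob = prob >= 1 - eps
```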

Violation Probability of \(x^\diamond\): \(\mathbb{V}\left(x^{\diamond}\right):=\mathbb{P}_{\xi}\left(f\left(x^{\diamond}, \xi\right) > 0\right)\)

Assumption.

  • Function \(f(x, \xi)\) is convex in \(x\) for every instance of \(\xi\).
  • The deterministic constraints define a convex set \(\mathcal{X}\).

Special Cases \[ \begin{aligned} & \min _{x \in \mathcal{X}} c^{\top} x \\ & \text { s.t. } \mathbb{P}\left(a^{\top} x+b^{\top} \xi+\xi^{\top} D x \leq e\right) \geq 1-\epsilon \end{aligned} \]

  • \(\xi \sim \mathcal{N}(\mu, \Sigma)\) is equivalent to, \[ \begin{aligned} & \min _{x \in \mathcal{X}} c^{\top} x \\ & \text { s.t. } e-b^{\top} \mu-\left(a+D^{\top} \mu\right)^{\top} x \geq \Phi^{-1}(1-\epsilon) \sqrt{(b+D x)^{\top} \Sigma(b+D x)} \end{aligned} \] which is a second-order cone program whenever \(\epsilon \leq 1/2\) (so that \(\Phi^{-1}(1-\epsilon)\geq 0\)). Note that \(\mu,\Sigma\) are constants.
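
A quick sanity check of this reformulation for a fixed candidate \(x\): the analytic SOCP condition should agree with a Monte Carlo estimate of the original probability. All problem data below are made-up illustrations, not from the reference:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Made-up problem data for illustration.
a = np.array([1.0, -0.5])
b = np.array([0.3, 0.2, -0.1])
D = rng.normal(size=(3, 2)) * 0.1
e = 3.0
mu = np.zeros(3)
Sigma = 0.2 * np.eye(3)
eps = 0.05
x = np.array([0.5, 0.5])               # a fixed candidate decision

# Analytic (SOCP) form of the Gaussian chance constraint.
lhs = e - b @ mu - (a + D.T @ mu) @ x
rhs = norm.ppf(1 - eps) * np.sqrt((b + D @ x) @ Sigma @ (b + D @ x))
analytic_feasible = lhs >= rhs

# Monte Carlo estimate of the original constraint P(a'x + b'xi + xi'Dx <= e).
xi = rng.multivariate_normal(mu, Sigma, size=200_000)
mc_prob = np.mean(a @ x + xi @ (b + D @ x) <= e)
mc_feasible = mc_prob >= 1 - eps
```

Both checks should agree for any candidate \(x\) (up to Monte Carlo error near the feasibility boundary).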

Scenario-based Approach

Definition

A dataset with \(N\) scenarios \(\left\{\xi^i\right\}_{i=1}^N\).
Assumption:

  • The scenarios are independent and identically distributed (i.i.d.).
  • The convexity assumption above holds.
  • No assumption is made on the distribution of the uncertainty \(\xi\) itself.
  • The solution of the scenario program exists and is unique (feasibility and uniqueness assumption).

The chance-constrained optimization problem can be formulated as, \[ \begin{aligned} & (\mathrm{SP})_N: \min _{x \in \mathcal{X}} c^{\top} x \\ & \quad \text { s.t. } f\left(x, \xi^1\right) \leq 0, \cdots, f\left(x, \xi^N\right) \leq 0 \end{aligned} \]

  • \((SP)_{N}\) is a random program: the \(N\) i.i.d. samples are drawn at random, so the optimal solution \(x_N^\star\) is itself a random variable (there are infinitely many possible draws of \(N\) samples).
  • Sample Complexity. The choice of \(N\) matters. If \(N\) is too large, the solution is overly conservative; if \(N\) is too small, the solution may not be feasible for the original chance constraint. We therefore seek the smallest \(N\) such that the chance constraint is satisfied with high confidence: “What is the smallest \(N\) such that \(\mathbb{V}(x_{N}^\star)\leq\epsilon\) with high probability?”
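
As a toy illustration of \((SP)_N\) (the one-dimensional problem and its data are our assumptions for this sketch, not from the reference): minimize \(-x\) subject to \(x\leq\xi^i\) for each scenario, whose scenario solution is simply \(x_N^\star=\min_i \xi^i\):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Toy (SP)_N: min -x  s.t.  x - xi^i <= 0 for every scenario i,
# whose solution is x_N^* = min_i xi^i (illustrative data).
N = 200
xi = rng.uniform(1.0, 2.0, size=N)      # N i.i.d. scenarios

res = linprog(c=[-1.0],                 # maximize x
              A_ub=np.ones((N, 1)),     # one row per scenario: x <= xi^i
              b_ub=xi,
              bounds=[(0.0, None)])
x_star = res.x[0]
```

Only one scenario (the smallest \(\xi^i\)) is binding here, which previews the support-scenario idea below.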

Structural Property

Support Scenario. Scenario \(\xi^i\) is a support scenario for \((SP)_N\) if its removal changes the solution of \((\mathrm{SP})_N\). The set of support scenarios of \((\mathrm{SP})_N\) is denoted by \(\mathcal{S}\). Under the convexity assumption, \(|\mathcal{S}|\leq n\), where \(n\) is the number of decision variables. If \(|\mathcal{S}|=n\), \((SP)_{N}\) is called fully-supported. For non-convex problems, the number of support scenarios can be larger than \(n\).

Non-degenerate Problem. Problem \((\mathrm{SP})_N\) is said to be non-degenerate if \(o^*(\mathcal{N})=o^*(\mathcal{S})\), i.e., \((SP)_{N}\) (built from all \(N\) scenarios) has the same optimal value as the program \((SP)_{\mathcal{S}}\) built from the support scenarios alone.

A Procedure to obtain \((SP)_{N}\)

  1. Find an upper bound \(\bar{h}\geq|\mathcal{S}|\) on the number of support scenarios.
  2. Find a suitable sample complexity \(N(\bar{h},\epsilon,\beta)\), where \(\epsilon\) is the desired violation probability and \(\beta\) is the confidence level.
  3. Solve the resultant \((SP)_{N}\).

Under the convexity, i.i.d., and non-degeneracy assumptions, we have \[ \mathbb{P}^N\left(\mathbb{V}\left(x_N^*\right)>\epsilon\right) \leq \sum_{i=0}^{n-1}\binom{N}{i} \epsilon^i(1-\epsilon)^{N-i} \]

  • This is a probability over probabilities: the probability (over the \(N\) random samples) that the violation probability of the optimal solution of \((SP)_{N}\) exceeds \(\epsilon\) is at most the RHS.
  • \(n\): dimension of the decision variable.
  • The inequality is tight if \(|\mathcal{S}|=n\) (the fully-supported case).
  • The RHS is the lower tail (CDF) of a binomial distribution.
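
The RHS is straightforward to evaluate with `scipy.stats.binom` (the helper name `scenario_bound` is ours, chosen for this sketch):

```python
from scipy.stats import binom

def scenario_bound(n, N, eps):
    """RHS of the bound: sum_{i=0}^{n-1} C(N, i) eps^i (1 - eps)^(N - i),
    i.e. the Binomial(N, eps) CDF evaluated at n - 1."""
    return binom.cdf(n - 1, N, eps)

# The bound shrinks as N grows and inflates with the number of decision variables n.
loose = scenario_bound(n=5, N=200, eps=0.1)
tight = scenario_bound(n=5, N=500, eps=0.1)
```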

The Main Results

Let \(\epsilon\in(0,1)\) be the maximum violation probability and \(\beta \in(0,1)\) be the confidence level, and define \[ F(n, N, \epsilon) := \sum_{i=0}^{n-1}\binom{N}{i} \epsilon^i(1-\epsilon)^{N-i} \]

Note that \(F(n, N, \epsilon)\) is the cumulative distribution function of a binomial distribution with parameters \(N\) and \(\epsilon\), evaluated at \(n-1\).

Then find the minimum \(N\) such that \[ \mathbb{P}^N\left(\mathbb{V}\left(x_N^*\right)>\epsilon\right) \leq \sum_{i=0}^{n-1}\binom{N}{i} \epsilon^i(1-\epsilon)^{N-i} \leq \beta \] which means the probability of exceeding the target violation rate, i.e., the risk of failure, is bounded by \(\beta\), or equivalently \[ \mathbb{P}^N\left(\mathbb{V}\left(x_N^*\right) \leq \epsilon\right) \geq 1-\beta \] which means the probability of satisfying the violation rate is at least \(1-\beta\).
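
In practice one inverts the bound: scan for the smallest \(N\) with \(F(n,N,\epsilon)\leq\beta\). A minimal sketch (the helper name `min_scenarios` and the linear scan are our choices; since \(F\) is decreasing in \(N\), bisection would be faster):

```python
from scipy.stats import binom

def min_scenarios(n, eps, beta, N_max=1_000_000):
    """Smallest N with F(n, N, eps) = binom.cdf(n - 1, N, eps) <= beta.
    Linear scan for clarity; F(n, N, eps) is decreasing in N."""
    for N in range(n, N_max):
        if binom.cdf(n - 1, N, eps) <= beta:
            return N
    raise ValueError("increase N_max")

N_req = min_scenarios(n=5, eps=0.05, beta=1e-3)
```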

Moreover, if we can find an upper bound \(h\) on the number of support scenarios with \(n>h\geq|\mathcal{S}|\), then \(n\) can be replaced by \(h\): \[ \mathbb{P}^N\left(\mathbb{V}\left(x_N^*\right)>\epsilon\right) \leq \sum_{i=0}^{h-1}\binom{N}{i} \epsilon^i(1-\epsilon)^{N-i} \leq \beta \]

In the non-convex case, it is likely that \(h>n\), but as long as an upper bound \(h\geq|\mathcal{S}|\) can be found, the above results remain valid. Consequently, we can draw \(N\) i.i.d. samples, formulate \((SP)_{N}\), and solve it to obtain a solution that satisfies the chance constraint with confidence at least \(1-\beta\).
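
Finally, the guarantee can be checked empirically on a case where everything is known in closed form. For the toy problem \(\min -x\) s.t. \(x\leq\xi^i\) with \(\xi\sim U(0,1)\) (our illustration, with \(n=1\) decision variable), we have \(x_N^\star=\min_i\xi^i\) and \(\mathbb{V}(x_N^\star)=x_N^\star\) exactly, and the bound \((1-\epsilon)^N\) is tight:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(3)

# Toy check with n = 1: x_N^* = min_i xi^i, xi ~ U(0, 1),
# so V(x_N^*) = P(xi < x_N^*) = x_N^* exactly and the bound is tight.
N, eps, trials = 50, 0.05, 20_000
xi = rng.uniform(size=(trials, N))
x_star = xi.min(axis=1)                 # scenario solution in each trial
emp = np.mean(x_star > eps)             # empirical P^N(V(x_N^*) > eps)
bound = binom.cdf(0, N, eps)            # = (1 - eps)^N for n = 1
```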