
Differentiating Solutions of DC Optimal Power Flow

In this post, we will write the DC optimal power flow (DC OPF) problem in matrix form and show how to differentiate the optimal solution with respect to the network parameters. This post largely follows the notation from our recent e-Energy paper, except that we will use the "$B\theta$" formulation of the DC power flow approximation.

Let $d \in \mathbb{R}^n$ be a vector of nodal demands, and let $g \in \mathbb{R}^k$ be the vector of outputs of the $k$ generators. Let $\theta \in (-\pi,\pi]^n$ be the bus voltage phase angles, and let $f \in \mathbb{R}^m$ be the line flows. The DC approximation of the power flow equations is $p = Cg - d = -B\theta$, where $C \in \{0,1\}^{n \times k}$ is a node-to-generator incidence matrix such that $C_{i,u} = 1$ if generator $u$ is connected to node $i$ and $0$ otherwise.

The susceptance matrix can be written as $B = A^\top \operatorname{diag}(b) A$, where $A \in \{-1,0,1\}^{m \times n}$ is a line-to-node incidence matrix, and $b_{ij} := \operatorname{Im}(1/z_{ij}) = \frac{-x_{ij}}{r_{ij}^2 + x_{ij}^2}$ is the imaginary component of the inverse of the series impedance of line $(i,j)$. Hereafter, we set $W = -\operatorname{diag}(b)$, so that $L = -B = A^\top W A$ is the weighted Laplacian matrix that describes the DC power flow approximation.
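
To make this concrete, here is a minimal numpy sketch of these matrices for a hypothetical 3-bus, 3-line network; all line parameters are made up for illustration.

```python
import numpy as np

# Line-to-node incidence matrix A (m x n): one row per line (i, j),
# with +1 at the from-bus and -1 at the to-bus. Toy 3-bus example.
A = np.array([[ 1, -1,  0],    # line (1, 2)
              [ 0,  1, -1],    # line (2, 3)
              [ 1,  0, -1]])   # line (1, 3)

r = np.array([0.01, 0.02, 0.01])   # series resistances (p.u., made up)
x = np.array([0.10, 0.20, 0.15])   # series reactances (p.u., made up)

b = -x / (r**2 + x**2)             # b_ij = Im(1/z_ij), nonpositive
W = np.diag(-b)                    # W = -diag(b), nonnegative weights
B = A.T @ np.diag(b) @ A           # susceptance matrix B = A' diag(b) A
L = A.T @ W @ A                    # Laplacian L = -B
```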

The decision variables are collected in the vector $x = [\theta^\top, g^\top]^\top \in \mathbb{R}^{n+k}$. The parameterized DC OPF program defines a solution map $x_\star : \mathbb{R}^n \to \mathbb{R}^{n+k}$, which takes the form

\begin{align*} x_\star(d) \in \argmin_{x = [\theta^\top, g^\top]^\top} \ &c(x) = \frac{1}{2}x^\top Q x + w^\top x\\ \mathsf{s.t.} \quad -&B \theta = Cg - d\\ -&\overline{f} \leq WA\theta \leq \overline{f}\\ &\underline{g} \leq g \leq \overline{g}. \end{align*}

The cost function $c(\cdot)$ is a convex quadratic. Let us define the matrices

Q = \begin{bmatrix} \tau I_{n \times n} & 0_{n \times k}\\ 0_{k \times n} & \operatorname{diag}(\alpha) \end{bmatrix}, \qquad G = \begin{bmatrix} WA & 0_{m \times k}\\ -WA & 0_{m \times k}\\ 0_{k \times n} & I_{k \times k}\\ 0_{k \times n} & -I_{k \times k} \end{bmatrix}, \qquad F = \begin{bmatrix} B & C \end{bmatrix},

where $Q \in \mathbb{R}^{(n+k) \times (n+k)}$, $G \in \mathbb{R}^{2(m+k) \times (n+k)}$, and $F \in \mathbb{R}^{n \times (n+k)}$. The vector $\alpha \in \mathbb{R}^k$ holds the generators' quadratic cost coefficients, and $\tau > 0$ is a small regularization weight that, together with entrywise-positive $\alpha$, yields a strongly convex objective. In a similar manner, we define the vectors

w = \begin{bmatrix} 0_{n}\\ \beta \end{bmatrix}, \qquad h = \left[ \overline{f}^\top, \overline{f}^\top, \overline{g}^\top, -\underline{g}^\top\right]^\top,

where $\beta \in \mathbb{R}^k$ holds the generators' linear cost coefficients. The DC OPF program above is then equivalent to the standard-form quadratic program

\begin{align*} \min_{x} \ &\frac{1}{2}x^\top Q x + w^\top x\\ \mathsf{s.t.} \quad &Fx = d\\ \quad &Gx \leq h. \end{align*}
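
As a worked sketch, the blocks above can be assembled with `np.block` and the QP solved with cvxpy, continuing the toy example; the generator data below (two generators, at buses 1 and 3, with hypothetical costs and limits) are made up. Note that dual-variable sign conventions differ across solvers and modeling layers, so the recovered multipliers are worth checking against the KKT conditions derived next.

```python
import cvxpy as cp  # https://www.cvxpy.org

# Hypothetical generator data: k = 2 generators at buses 1 and 3.
n, m, k = 3, 3, 2
tau = 1e-6                                  # small regularization on theta
alpha = np.array([2.0, 1.0])                # quadratic generation costs
beta = np.array([10.0, 15.0])               # linear generation costs
C = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])                  # node-to-generator incidence

f_bar = np.array([1.0, 1.0, 1.0])           # line flow limits
g_bar = np.array([3.0, 3.0])                # generation upper bounds
g_low = np.zeros(k)                         # generation lower bounds

# Assemble the block matrices defined above.
Q = np.block([[tau * np.eye(n),  np.zeros((n, k))],
              [np.zeros((k, n)), np.diag(alpha)]])
G = np.block([[ W @ A,            np.zeros((m, k))],
              [-W @ A,            np.zeros((m, k))],
              [ np.zeros((k, n)),  np.eye(k)],
              [ np.zeros((k, n)), -np.eye(k)]])
F = np.block([[B, C]])
w = np.concatenate([np.zeros(n), beta])
h = np.concatenate([f_bar, f_bar, g_bar, -g_low])

# Solve the standard-form QP and pull out the multipliers.
d = np.array([0.5, 1.0, 0.5])               # illustrative nodal demands
x = cp.Variable(n + k)
eq = F @ x == d
ineq = G @ x <= h
prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + w @ x), [eq, ineq])
prob.solve()

x_star = x.value                            # optimal (theta, g)
mu_star = ineq.dual_value                   # inequality multipliers (>= 0)
nu_star = eq.dual_value                     # equality multipliers: the LMPs
```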

Let $z = (x, \mu, \nu)$ be the vector of primal and dual variables associated with the DC OPF program, where $x \in \mathbb{R}^{n+k}$, $\mu \in \mathbb{R}^{2(m+k)}$, and $\nu \in \mathbb{R}^n$; the entries of $\nu$ are the locational marginal prices (LMPs). The entries of $\mu$ are the inequality multipliers, ordered consistently with the rows of $G$ and $h$:

\mu = \left( \overline{\mu}^f, \underline{\mu}^f, \overline{\mu}^g, \underline{\mu}^g \right).
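
In the sketch above, these blocks are contiguous slices of `mu_star`:

```python
mu_f_up = mu_star[:m]             # upper line-flow limits
mu_f_lo = mu_star[m:2*m]          # lower line-flow limits
mu_g_up = mu_star[2*m:2*m + k]    # generation upper bounds
mu_g_lo = mu_star[2*m + k:]       # generation lower bounds
```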

In what follows, $\theta$ denotes the vector of problem parameters, with a slight abuse of notation since we also used $\theta$ for the phase angles; in the example at the end of the post, the parameters are the demands, $\theta = d$. The Lagrangian of the DC OPF program is

L(z;\theta) = \frac{1}{2}x^\top Q x + w^\top x + \mu^\top(Gx - h) + \nu^\top(Fx - d).

Next, we construct a nonlinear system of equations that describes the Karush-Kuhn-Tucker (KKT) conditions for optimality of the standard-form program. Let $z_\star$ be a solution to the program. The stationarity, complementary slackness, and equality-constraint feasibility conditions form a system of equations $k(\cdot)$ whose root is $z_\star$. It is an exercise to show that the KKT operator $k(\cdot)$ is then

k(z;\theta) = \begin{bmatrix} \nabla_{x}L(z;\theta)\\ \operatorname{diag}(\mu)(G x - h) \\ F x - d \end{bmatrix} = \begin{bmatrix} Q x + w + G^\top \mu + F^\top \nu\\ \operatorname{diag}(\mu)(G x - h)\\ B\theta + Cg - d \end{bmatrix},
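
As a numerical sanity check on the toy example, the KKT residual evaluated at the solver output should be near zero, up to solver tolerance. (If the stationarity block comes out large, flipping the sign of `nu_star` is the usual fix, since equality-dual sign conventions vary across tools.)

```python
def kkt_residual(x, mu, nu, d):
    """Evaluate the KKT operator k(z; d) at a candidate primal-dual point."""
    stationarity = Q @ x + w + G.T @ mu + F.T @ nu
    comp_slack = mu * (G @ x - h)       # diag(mu)(Gx - h), elementwise
    primal_eq = F @ x - d
    return np.concatenate([stationarity, comp_slack, primal_eq])

print(np.linalg.norm(kkt_residual(x_star, mu_star, nu_star, d)))
```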

which satisfies $k(z_\star;\theta) = 0$.

The Jacobian of the KKT conditions with respect to the primal and dual variables is the following block matrix:

\begin{align*} \frac{\partial k}{\partial z}(z_\star;\theta) &= \begin{bmatrix} Q & G^\top & F^\top\\ \operatorname{diag}(\mu_\star)G & \operatorname{diag}(Gx_\star - h) & 0_{2(m+k) \times n}\\ F & 0_{n \times 2(m+k)} & 0_{n \times n} \end{bmatrix}\\ &=\begin{bmatrix} \tau I_{n \times n} & 0_{n \times k} & A^\top W & -A^\top W & 0_{n \times k} & 0_{n \times k} & B\\ 0_{k \times n} & \operatorname{diag}(\alpha) & 0_{k \times m} & 0_{k \times m} & I_{k \times k} & -I_{k \times k} & C^\top\\ \operatorname{diag}(\overline{\mu}^f)WA & 0_{m \times k} & \operatorname{diag}(f - \overline{f}) & 0_{m \times m} & 0_{m \times k} & 0_{m \times k} & 0_{m \times n} \\ -\operatorname{diag}(\underline{\mu}^f)WA & 0_{m \times k} & 0_{m \times m} & -\operatorname{diag}(f +\overline{f}) & 0_{m \times k} & 0_{m \times k} & 0_{m \times n}\\ 0_{k \times n} & \operatorname{diag}(\overline{\mu}^g) & 0_{k \times m} & 0_{k \times m} & \operatorname{diag}(g - \overline{g}) & 0_{k \times k} & 0_{k \times n}\\ 0_{k \times n} & -\operatorname{diag}(\underline{\mu}^g) & 0_{k \times m} & 0_{k \times m} & 0_{k \times k} & \operatorname{diag}(\underline{g} -g) & 0_{k \times n}\\ B & C & 0_{n \times m} & 0_{n \times m} & 0_{n \times k} & 0_{n \times k} & 0_{n \times n} \end{bmatrix}, \end{align*}
where $f = WA\theta_\star$ denotes the vector of optimal line flows and the primal quantities are evaluated at $x_\star$.
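
The first, block form is the convenient one to implement. Continuing the sketch:

```python
n_ineq = 2 * (m + k)   # number of inequality constraints

dk_dz = np.block([
    [Q,                    G.T,                      F.T],
    [np.diag(mu_star) @ G, np.diag(G @ x_star - h),  np.zeros((n_ineq, n))],
    [F,                    np.zeros((n, n_ineq)),    np.zeros((n, n))],
])
```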

At an optimal solution $z_\star$ where this Jacobian is nonsingular (which holds under standard regularity conditions, e.g., strict complementarity together with linearly independent active constraints), the implicit function theorem gives the Jacobian of the optimal solution with respect to the problem parameters $\theta$ in closed form:

\frac{\partial z_\star}{\partial \theta} = -\left(\frac{\partial k}{\partial z}(z_\star;\theta) \right)^{-1} \frac{\partial k}{\partial \theta}(z_\star;\theta).

For example, if the parameters are the demands, $\theta = d$, then the partial derivative of the KKT operator with respect to the parameters is

\frac{\partial k}{\partial \theta}(z_\star;\theta) = \begin{bmatrix} 0_{(n+k) \times n}\\ 0_{2(m+k) \times n}\\ -I_{n \times n} \end{bmatrix};

thus, for an infinitesimal change in demand $\Delta d$ and the corresponding changes in the primal and dual variables $\Delta x, \Delta \mu, \Delta \nu$, we solve the following linear system of equations:

\begin{bmatrix} Q & G^\top & F^\top\\ \operatorname{diag}(\mu_\star)G & \operatorname{diag}(Gx_\star - h) & 0_{2(m+k) \times n}\\ F & 0_{n \times 2(m+k)} & 0_{n \times n} \end{bmatrix} \begin{bmatrix} \Delta x\\ \Delta \mu\\ \Delta \nu \end{bmatrix} = \begin{bmatrix} 0_{n+k}\\ 0_{2(m+k)}\\ \Delta d \end{bmatrix}.
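
Putting the pieces together, the full sensitivity matrix $\partial z_\star / \partial d$ follows from a single linear solve. This sketch assumes the KKT Jacobian is nonsingular at $z_\star$; near degenerate solutions, a least-squares solve may be needed instead.

```python
# Right-hand side: only the power-balance block of k depends on d.
dk_dd = np.vstack([np.zeros((n + k, n)),
                   np.zeros((n_ineq, n)),
                   -np.eye(n)])

# Implicit function theorem: dz/dd = -(dk/dz)^{-1} (dk/dd).
dz_dd = -np.linalg.solve(dk_dz, dk_dd)

# First-order prediction of the change in (x, mu, nu) for a demand bump.
delta_d = np.array([0.0, 0.05, 0.0])   # hypothetical extra demand at bus 2
delta_z = dz_dd @ delta_d
delta_nu = delta_z[-n:]                # predicted change in the LMPs
```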