Writing DC Optimal Power Flow in Matrix Form

The DC optimal power flow (DC OPF) approximation is a widely used model for power system optimization. In this post, we will write the DC OPF problem in matrix form. This post follows the notation from our recent e-Energy paper.

Parameterized DC OPF program formulation

The program parameters are fixed problem data collected in the vector $\boldsymbol{\theta} := (\boldsymbol{d}^\top, \boldsymbol{\alpha}^\top, \boldsymbol{\beta}^\top)^\top$. The parameters $\boldsymbol{d} \in \mathbb{R}^{N}$ are the transmission-level active power demands $d_k = \sum_{i \in \mathcal{D}_k} d_{k,i}$; $\boldsymbol{\alpha} \in \mathbb{R}^K$ and $\boldsymbol{\beta} \in \mathbb{R}^K$ are the quadratic and linear generator cost coefficients, respectively. The parameterized DC OPF program is then

$$
\begin{aligned}
\min_{\boldsymbol{g},\,\boldsymbol{p}} \quad & \sum_{i=1}^K \alpha_i g_i^2 + \beta_i g_i \\
\text{s.t.} \quad & \boldsymbol{F}(\boldsymbol{B}\boldsymbol{g} - \boldsymbol{d}) = \boldsymbol{p}, \\
& \mathbf{1}^\top \boldsymbol{B}\boldsymbol{g} = \mathbf{1}^\top \boldsymbol{d}, \\
& -\boldsymbol{p}^{\sf max} \leq \boldsymbol{p} \leq \boldsymbol{p}^{\sf max}, \\
& \mathbf{0} \leq \boldsymbol{g} \leq \boldsymbol{g}^{\sf max},
\end{aligned}
$$

where $\boldsymbol{g} \in \mathbb{R}^K$ are the power generation set points and $\boldsymbol{p} \in \mathbb{R}^M$ are the line power flows. The matrix $\boldsymbol{B} \in \{0,1\}^{N \times K}$ is a generator-to-node incidence matrix with entries of the form

$$
B_{i,j} = \begin{cases} 1 & \text{generator } j \text{ at node } i, \\ 0 & \text{otherwise,} \end{cases}
$$

and the matrix $\boldsymbol{F} \in \mathbb{R}^{M \times N}$ is the power transfer distribution factor (PTDF) matrix.
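To make these objects concrete, here is a minimal NumPy sketch that assembles $\boldsymbol{B}$ and computes $\boldsymbol{F}$ for a hypothetical 3-bus, 3-line network with two generators. The topology, susceptances, and slack choice below are illustrative assumptions, not data from the paper; the PTDF is built via the standard reduced nodal susceptance construction.

```python
import numpy as np

# Hypothetical 3-bus network: N = 3 buses, M = 3 lines, K = 2 generators.
N, M, K = 3, 3, 2

# Generator-to-node incidence B (N x K): generator 0 at bus 0, generator 1 at bus 1.
B = np.zeros((N, K))
B[0, 0] = 1.0
B[1, 1] = 1.0

# Line-to-bus incidence C (M x N) for lines (0,1), (0,2), (1,2): +1 at "from", -1 at "to".
C = np.array([[1.0, -1.0,  0.0],
              [1.0,  0.0, -1.0],
              [0.0,  1.0, -1.0]])

# Branch susceptances (assumed 1 p.u. each for illustration).
b = np.ones(M)

# PTDF with bus 0 as slack: F = diag(b) C_r (C_r^T diag(b) C_r)^{-1}, zero slack column.
slack = 0
keep = [i for i in range(N) if i != slack]
C_r = C[:, keep]
B_red = C_r.T @ np.diag(b) @ C_r           # reduced nodal susceptance matrix
F_r = np.diag(b) @ C_r @ np.linalg.inv(B_red)
F = np.zeros((M, N))
F[:, keep] = F_r

print(F.round(3))
```

Injecting 1 p.u. at a bus and withdrawing it at the slack reproduces the corresponding column of $\boldsymbol{F}$ as line flows.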

Quadratic program formulation of DC OPF

With some algebra, the parameterized DC OPF program (1) can be written as a compact, general-form quadratic program (QP). To this end, define the constraint matrices

$$
\boldsymbol{A} \triangleq \begin{pmatrix} \boldsymbol{F}\boldsymbol{B} & -\mathbf{I}_M \\ \mathbf{1}^\top \boldsymbol{B} & \mathbf{0}_{M}^\top \end{pmatrix}, \qquad \boldsymbol{G} \triangleq \begin{pmatrix} -\mathbf{I}_K & \mathbf{0}_{K \times M} \\ \mathbf{I}_K & \mathbf{0}_{K \times M} \\ \mathbf{0}_{M \times K} & -\mathbf{I}_M \\ \mathbf{0}_{M \times K} & \mathbf{I}_M \end{pmatrix},
$$

and the constraint vectors

$$
\boldsymbol{x} \triangleq \begin{pmatrix} \boldsymbol{g} \\ \boldsymbol{p} \end{pmatrix}, \qquad \boldsymbol{y} \triangleq \begin{pmatrix} \boldsymbol{F}\boldsymbol{d} \\ \mathbf{1}^\top \boldsymbol{d} \end{pmatrix}, \qquad \boldsymbol{h} \triangleq \begin{pmatrix} \mathbf{0} \\ \boldsymbol{g}^{\sf max} \\ \boldsymbol{p}^{\sf max} \\ \boldsymbol{p}^{\sf max} \end{pmatrix}.
$$
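The block structure of $\boldsymbol{A}$, $\boldsymbol{G}$, $\boldsymbol{y}$, and $\boldsymbol{h}$ maps directly to NumPy. A sketch on a hypothetical 3-bus instance (the PTDF, demands, and limits below are illustrative assumptions):

```python
import numpy as np

# Dimensions of a hypothetical small instance: K generators, M lines, N buses.
N, M, K = 3, 3, 2
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # generator incidence (N x K)
F = np.array([[0.0, -2/3, -1/3],                      # PTDF (M x N), slack at bus 0
              [0.0, -1/3, -2/3],
              [0.0,  1/3, -1/3]])
d = np.array([0.0, 0.0, 1.0])                         # demands (illustrative)
g_max = np.array([2.0, 2.0])
p_max = np.array([1.0, 1.0, 1.0])

# Equality-constraint matrix A ((M+1) x (K+M)) and right-hand side y.
A = np.block([[F @ B, -np.eye(M)],
              [np.ones(N) @ B, np.zeros(M)]])
y = np.concatenate([F @ d, [np.ones(N) @ d]])

# Inequality-constraint matrix G ((2K+2M) x (K+M)) and right-hand side h.
G = np.block([[-np.eye(K), np.zeros((K, M))],
              [ np.eye(K), np.zeros((K, M))],
              [np.zeros((M, K)), -np.eye(M)],
              [np.zeros((M, K)),  np.eye(M)]])
h = np.concatenate([np.zeros(K), g_max, p_max, p_max])

print(A.shape, G.shape)
```

Any balanced dispatch with its induced flows satisfies $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{y}$ by construction, which makes a convenient unit test for the assembly.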

Furthermore, define the quadratic cost coefficient matrix and linear cost vector as

$$
\boldsymbol{Q} \triangleq \begin{pmatrix} 2\operatorname{diag}(\boldsymbol{\alpha}) & \mathbf{0}_{K \times M} \\ \mathbf{0}_{M \times K} & \tau \mathbf{I}_{M} \end{pmatrix}, \qquad \boldsymbol{w} \triangleq \begin{pmatrix} \boldsymbol{\beta} \\ \mathbf{0}_M \end{pmatrix},
$$

where $\tau > 0$ is a small regularization parameter that yields a strongly convex objective. The program (1) can now be written as

$$
\begin{aligned}
\min_{\boldsymbol{x}} \quad & \frac{1}{2}\boldsymbol{x}^\top \boldsymbol{Q}\boldsymbol{x} + \boldsymbol{w}^\top \boldsymbol{x} \\
\text{s.t.} \quad & \boldsymbol{G}\boldsymbol{x} \preceq \boldsymbol{h}, \\
& \boldsymbol{A}\boldsymbol{x} = \boldsymbol{y}.
\end{aligned}
$$
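When the limits are loose enough that no inequality constraint is active at the optimum, the QP reduces to its equality-constrained part, which a single linear saddle-point solve handles; a general-purpose QP solver is needed otherwise. A sketch on a hypothetical 3-bus instance (all data are illustrative; the factor of 2 in the quadratic block makes $\frac{1}{2}\boldsymbol{x}^\top\boldsymbol{Q}\boldsymbol{x}$ reproduce $\sum_i \alpha_i g_i^2$):

```python
import numpy as np

# Hypothetical 3-bus instance: 2 generators, 3 lines (illustrative data).
K, M, N = 2, 3, 3
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, -2/3, -1/3],
              [0.0, -1/3, -2/3],
              [0.0,  1/3, -1/3]])
d = np.array([0.0, 0.0, 1.0])
alpha = np.array([1.0, 2.0])          # quadratic cost coefficients
beta = np.array([10.0, 10.0])         # linear cost coefficients
tau = 1e-6                            # small flow regularization

Q = np.block([[2 * np.diag(alpha), np.zeros((K, M))],
              [np.zeros((M, K)), tau * np.eye(M)]])
w = np.concatenate([beta, np.zeros(M)])
A = np.block([[F @ B, -np.eye(M)],
              [np.ones(N) @ B, np.zeros(M)]])
y = np.concatenate([F @ d, [np.ones(N) @ d]])

# With no active inequalities (mu = 0), stationarity and primal feasibility give
# the saddle-point system  [Q A^T; A 0] [x; nu] = [-w; y].
n, m = K + M, M + 1
KKT = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(KKT, np.concatenate([-w, y]))
x, nu = sol[:n], sol[n:]
g, p = x[:K], x[K:]
print("generation:", g)
```

With binding limits, an active-set or interior-point QP solver (e.g., OSQP) would be used instead of the direct linear solve.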

Let $\boldsymbol{\mu} \in \mathbb{R}^{2K + 2M}$ and $\boldsymbol{\nu} \in \mathbb{R}^{M+1}$ be the dual variables associated with the inequality constraints $\boldsymbol{G}\boldsymbol{x} \preceq \boldsymbol{h}$ and the equality constraints $\boldsymbol{A}\boldsymbol{x} = \boldsymbol{y}$, respectively. Let $\boldsymbol{z} := (\boldsymbol{x}^\top, \boldsymbol{\mu}^\top, \boldsymbol{\nu}^\top)^\top$ be the concatenation of the primal and dual variables of the program (6). The Lagrangian of the program is

$$
\mathcal{L}(\boldsymbol{z};\boldsymbol{\theta}) \triangleq \frac{1}{2}\boldsymbol{x}^\top \boldsymbol{Q}\boldsymbol{x} + \boldsymbol{w}^\top \boldsymbol{x} + \boldsymbol{\mu}^\top(\boldsymbol{G}\boldsymbol{x} - \boldsymbol{h}) + \boldsymbol{\nu}^\top(\boldsymbol{A}\boldsymbol{x} - \boldsymbol{y}).
$$

Next, we construct a system of equations that encodes the Karush-Kuhn-Tucker (KKT) optimality conditions of the program (6). Let $\boldsymbol{z}^*$ be a solution to the program (6). The stationarity, complementary slackness, and equality-constraint feasibility conditions form a system of equations $\boldsymbol{k}(\cdot)$ whose root is $\boldsymbol{z}^*$. The KKT operator $\boldsymbol{k}(\cdot)$ is then

$$
\boldsymbol{k}(\boldsymbol{z};\boldsymbol{\theta}) = \begin{pmatrix} \nabla_{\boldsymbol{x}}\mathcal{L}(\boldsymbol{z};\boldsymbol{\theta}) \\ \operatorname{diag}(\boldsymbol{\mu})(\boldsymbol{G}\boldsymbol{x} - \boldsymbol{h}) \\ \boldsymbol{A}\boldsymbol{x} - \boldsymbol{y} \end{pmatrix} = \begin{pmatrix} \boldsymbol{Q}\boldsymbol{x} + \boldsymbol{w} + \boldsymbol{G}^\top \boldsymbol{\mu} + \boldsymbol{A}^\top \boldsymbol{\nu} \\ \operatorname{diag}(\boldsymbol{\mu})(\boldsymbol{G}\boldsymbol{x} - \boldsymbol{h}) \\ \boldsymbol{A}\boldsymbol{x} - \boldsymbol{y} \end{pmatrix},
$$

which satisfies $\boldsymbol{k}(\boldsymbol{z}^*;\boldsymbol{\theta}) = \mathbf{0}$.
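The KKT operator is straightforward to evaluate numerically, which is useful as a solution check. A sketch on a hypothetical 3-bus instance where no inequality is active at the optimum (so $\boldsymbol{\mu}^* = \mathbf{0}$ and the primal-dual point comes from one equality-constrained solve); all data are illustrative:

```python
import numpy as np

def kkt_operator(x, mu, nu, Q, w, G, h, A, y):
    """Evaluate k(z; theta): stationarity, complementary slackness, equality feasibility."""
    return np.concatenate([
        Q @ x + w + G.T @ mu + A.T @ nu,   # gradient of the Lagrangian in x
        mu * (G @ x - h),                  # diag(mu) (Gx - h)
        A @ x - y,                         # Ax - y
    ])

# Hypothetical 3-bus instance (limits loose enough to be inactive at the optimum).
K, M, N = 2, 3, 3
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, -2/3, -1/3],
              [0.0, -1/3, -2/3],
              [0.0,  1/3, -1/3]])
d = np.array([0.0, 0.0, 1.0])
g_max, p_max = np.array([2.0, 2.0]), np.ones(M)
Q = np.block([[2 * np.diag([1.0, 2.0]), np.zeros((K, M))],
              [np.zeros((M, K)), 1e-6 * np.eye(M)]])
w = np.concatenate([[10.0, 10.0], np.zeros(M)])
A = np.block([[F @ B, -np.eye(M)],
              [np.ones(N) @ B, np.zeros(M)]])
y = np.concatenate([F @ d, [np.ones(N) @ d]])
G = np.block([[-np.eye(K), np.zeros((K, M))],
              [ np.eye(K), np.zeros((K, M))],
              [np.zeros((M, K)), -np.eye(M)],
              [np.zeros((M, K)),  np.eye(M)]])
h = np.concatenate([np.zeros(K), g_max, p_max, p_max])

# Solve the equality-constrained QP, then check that k(z*; theta) = 0 with mu* = 0.
n, m = K + M, M + 1
sol = np.linalg.solve(np.block([[Q, A.T], [A, np.zeros((m, m))]]),
                      np.concatenate([-w, y]))
x_star, nu_star = sol[:n], sol[n:]
mu_star = np.zeros(2 * K + 2 * M)
residual = kkt_operator(x_star, mu_star, nu_star, Q, w, G, h, A, y)
print("KKT residual norm:", np.linalg.norm(residual))
```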

Linear Programming Representation of DC OPF

An even simpler model can be obtained by eliminating the quadratic terms in the cost function, i.e., setting $\boldsymbol{\alpha} = \mathbf{0}$ and dropping the regularization so that $\boldsymbol{Q} = \mathbf{0}$. The program (6) then reduces to the linear program (LP)

$$
\begin{aligned}
\min_{\boldsymbol{x}} \quad & \boldsymbol{w}^\top \boldsymbol{x} \\
\text{s.t.} \quad & \boldsymbol{A}\boldsymbol{x} = \boldsymbol{y}, \\
& \boldsymbol{G}\boldsymbol{x} \preceq \boldsymbol{h}.
\end{aligned}
$$
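With linear costs, off-the-shelf LP solvers apply directly; since $\boldsymbol{G}$ only encodes box constraints, they can be passed as variable bounds. A sketch using `scipy.optimize.linprog` on a hypothetical 3-bus instance (cost coefficients and limits are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-bus instance; linear costs only (alpha = 0).
K, M, N = 2, 3, 3
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, -2/3, -1/3],
              [0.0, -1/3, -2/3],
              [0.0,  1/3, -1/3]])
d = np.array([0.0, 0.0, 1.0])
g_max = np.array([2.0, 2.0])
p_max = np.array([1.0, 1.0, 1.0])
w = np.concatenate([np.array([5.0, 12.0]), np.zeros(M)])  # beta: generator 0 cheaper

A = np.block([[F @ B, -np.eye(M)],
              [np.ones(N) @ B, np.zeros(M)]])
y = np.concatenate([F @ d, [np.ones(N) @ d]])

# Bounds encode G x <= h directly: 0 <= g <= g_max and -p_max <= p <= p_max.
bounds = [(0.0, gm) for gm in g_max] + [(-pm, pm) for pm in p_max]
res = linprog(c=w, A_eq=A, b_eq=y, bounds=bounds, method="highs")
g, p = res.x[:K], res.x[K:]
print("generation:", g, "cost:", res.fun)
```

As expected for a linear objective, the solution is a vertex: the cheaper generator serves the entire load until one of its limits (or a line limit) binds.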