Samuel Talkington

Interactive Visualization: Randomized Switching for Network Reconfiguration

This interactive visualization demonstrates the core ideas from our paper Efficient Network Reconfiguration by Randomized Switching, joint work with Dmitrii M. Ostrovskii and Daniel K. Molzahn.

Reconfigure your own network now at the live demo page:

Launch Visualization →

The Problem

Consider a network $G=(N,E)$ with $n = |N|$ nodes and $m = |E|$ edges. There is a fixed vector of edge weights $\boldsymbol{w} \in \mathbb{R}^m$ and a fixed vector of nodal demands $\boldsymbol{d} \in \mathbb{R}^n$ that sums to zero, $\boldsymbol{d}^\top \mathbf{1} = 0$. The fixed demand $d_i$ of each node $i \in N$ is positive for sinks and negative for sources, and each edge $e \in E$ has a weight $w_e$ that represents some fixed, physical quantity. For example, in power systems, $w_e$ is the susceptance of a transmission line.

A network reconfiguration problem is to determine a vector of switching statuses $\boldsymbol{s} \in \{0,1\}^m$. The goal is to select which edges to "close" (activate) to minimize the congestion:

$$\varphi(\boldsymbol{s}) = \boldsymbol{d}^\top \boldsymbol{L}_{\boldsymbol{s}}^\dagger \boldsymbol{d},$$

where

$$\boldsymbol{L}_{\boldsymbol{s}} = \sum_{e \in E} \boldsymbol{a}_e \boldsymbol{a}_e^\top w_e s_e$$

is the weighted graph Laplacian with $s_e \in \{0,1\}$ indicating whether edge $e$ is closed, and $\boldsymbol{a}_e\boldsymbol{a}_e^\top = (e_i - e_j)(e_i - e_j)^\top$ is the elementary Laplacian for edge $e = (i,j) \in E$. See my blog post for more on working with Laplacians in Julia.
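To make these objects concrete, here is a minimal Python/NumPy sketch; the 4-node instance (`edges`, `w`, `d`) is made up for illustration, and a dense pseudoinverse stands in for a proper Laplacian solver:

```python
import numpy as np

# Hypothetical 4-node example; edges, weights, and demands are made up.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]  # e = (i, j)
w = np.array([1.0, 2.0, 1.5, 1.0, 0.5])           # edge weights w_e
d = np.array([-2.0, 1.0, 0.5, 0.5])               # demands, sum to zero
n, m = 4, len(edges)

def laplacian(s):
    """Weighted switched Laplacian L_s = sum_e a_e a_e^T w_e s_e."""
    L = np.zeros((n, n))
    for e, (i, j) in enumerate(edges):
        a = np.zeros(n)
        a[i], a[j] = 1.0, -1.0                    # incidence vector a_e
        L += w[e] * s[e] * np.outer(a, a)
    return L

def congestion(s):
    """phi(s) = d^T L_s^dagger d (dense pseudoinverse, for illustration)."""
    return float(d @ np.linalg.pinv(laplacian(s)) @ d)

print(congestion(np.ones(m)))                     # all edges closed
```

Note that opening an edge can only increase the congestion (effective-resistance monotonicity), which is what makes the budgeted selection problem nontrivial.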

This is a challenging mixed-integer nonlinear optimization problem. Our key insight is the following: instead of finding a single best configuration, we design a distribution over configurations, and then sample from it.

The Algorithm

We assume a fixed set $S_0 \subseteq E$ of edges that must remain closed (a "backbone," e.g., a spanning tree that guarantees connectivity) and an edge budget $q$ on the total number of closed edges.

Then, we relax $\boldsymbol{s} \in \{0,1\}^m$ to $\boldsymbol{s} \in [0,1]^m$ and solve the convex optimization problem

\begin{align*} \min_{\boldsymbol{s} \in [0,1]^m} \quad &\boldsymbol{d}^\top \boldsymbol{L}_{\boldsymbol{s}}^\dagger \boldsymbol{d} = \varphi(\boldsymbol{s}) \\ \text{s.t.} \quad &\sum_{e \in E} s_e \leq q, \\ &s_e = 1 \quad \forall e \in S_0, \end{align*}

where $q$ is the edge budget. The relaxed problem has a generalized self-concordant structure, which we exploit using a Frank-Wolfe style algorithm.

After optimization, we obtain a fractional solution $\boldsymbol{s}_\star \in [0,1]^m$, where $s_e^\star$ represents the probability that edge $e$ should be closed. We then sample configurations by randomized rounding: independently close edge $e$ with probability $s_e^\star$.
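The rounding step is just independent Bernoulli draws. A sketch, with a made-up $\boldsymbol{s}_\star$ whose first three entries model backbone edges pinned at 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical relaxed solution s_star in [0,1]^m (values made up);
# the first three entries model backbone edges pinned at 1.
s_star = np.array([1.0, 1.0, 1.0, 0.7, 0.3])

def sample_config(s_star, rng):
    """Randomized rounding: close edge e independently w.p. s_e^star."""
    return (rng.random(s_star.shape) < s_star).astype(int)

samples = np.array([sample_config(s_star, rng) for _ in range(1000)])
print(samples.mean(axis=0))  # empirical closing frequencies approach s_star
```

Averaging many samples recovers $\boldsymbol{s}_\star$ entrywise, which is a quick sanity check on the sampler.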

The visualization below shows:

  1. Network diagram: A 7-node network with randomly assigned edge weights and balanced demands. After optimization, edges are colored by their switching probability $s_e^\star$. Green nodes are loads (positive demand), red nodes are generators (negative demand).

  2. Congestion histogram: Distribution of congestion values $\varphi(\tilde{\boldsymbol{s}})$ from sampled configurations $\tilde{\boldsymbol{s}} \in \{0,1\}^m$. The red dashed line shows the optimal relaxed congestion $\varphi(\boldsymbol{s}_\star)$.

Try It

Try with Your Own Network →

Instructions

  1. Click Optimize Switching to run the Frank-Wolfe algorithm

  2. Click Generate Config to sample a binary configuration via randomized rounding

  3. Use Generate Batch to sample many configurations at once and see the distribution concentrate around the optimum

What You Should Observe

As you generate more samples, the histogram should show the sampled congestion values $\varphi(\tilde{\boldsymbol{s}})$ concentrating around the relaxed optimum $\varphi(\boldsymbol{s}_\star)$.

This demonstrates the power of randomization: instead of searching over $2^m$ binary configurations, we optimize a continuous relaxation and then sample.

Technical Details

Our Frank-Wolfe type algorithm proceeds as follows:

  1. Initialize with a spanning tree (the "backbone") to guarantee connectivity

  2. At each iteration, compute the gradient entries $(\nabla \varphi(\boldsymbol{s}))_e = -w_e (x_i - x_j)^2$ for each edge $e = (i,j) \in E$, where $\boldsymbol{x} = \boldsymbol{L}_{\boldsymbol{s}}^\dagger \boldsymbol{d}$

  3. Compute a minimizer of the linear approximation of the objective over the constraint polytope, i.e., invoke the "linear minimization oracle"

  4. Take a convex combination step, but only accept it if the objective decreases (monotonicity)
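The four steps can be sketched in Python/NumPy as follows. The instance (edges, weights, demands, backbone, budget) is made up, the step size is the textbook $2/(t+2)$ schedule, and dense pseudoinverses stand in for fast Laplacian solvers; this is an illustrative sketch, not the paper's implementation:

```python
import numpy as np

# Made-up 4-node instance for illustration only.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3), (0, 2)]
w = np.array([1.0, 2.0, 1.5, 1.0, 0.5, 0.8])
d = np.array([-2.0, 1.0, 0.5, 0.5])          # balanced demands
n, m = 4, len(edges)
S0 = [0, 1, 2]                               # backbone spanning tree 0-1-2-3
q = 5                                        # edge budget

def laplacian(s):
    L = np.zeros((n, n))
    for e, (i, j) in enumerate(edges):
        a = np.zeros(n)
        a[i], a[j] = 1.0, -1.0
        L += w[e] * s[e] * np.outer(a, a)
    return L

def phi(s):
    return float(d @ np.linalg.pinv(laplacian(s)) @ d)

def grad(s):
    # Step 2: (grad phi)_e = -w_e (x_i - x_j)^2 with x = L_s^dagger d.
    x = np.linalg.pinv(laplacian(s)) @ d
    return np.array([-w[e] * (x[i] - x[j]) ** 2
                     for e, (i, j) in enumerate(edges)])

def lmo(g):
    # Step 3: linear minimization oracle. Gradient entries are nonpositive,
    # so close the backbone plus the most-negative free edges within budget.
    v = np.zeros(m)
    v[S0] = 1.0
    free = sorted((e for e in range(m) if e not in S0), key=lambda e: g[e])
    for e in free[: q - len(S0)]:
        v[e] = 1.0
    return v

s = np.zeros(m)
s[S0] = 1.0                                  # step 1: spanning-tree start
phi0 = phi(s)
for t in range(50):
    v = lmo(grad(s))
    gamma = 2.0 / (t + 2)                    # standard Frank-Wolfe step size
    s_new = (1 - gamma) * s + gamma * v
    if phi(s_new) < phi(s):                  # step 4: monotone acceptance
        s = s_new
print(phi0, phi(s))
```

The backbone entries stay at 1 because both iterates and oracle vertices pin them there, so every convex combination does too.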

The gradient computation only requires solving a Laplacian system, which we analyze using the pseudoinverse formula

$$\boldsymbol{L}_{\boldsymbol{s}}^\dagger = \left(\boldsymbol{L}_{\boldsymbol{s}} + \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top\right)^{-1} - \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top.$$
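Because the Laplacian of a connected graph has $\mathbf{1}$ as its only null direction, this identity is straightforward to verify numerically; the small graph and weights below are arbitrary made-up data:

```python
import numpy as np

# Numerical sanity check of the pseudoinverse identity on a small,
# arbitrary connected graph (graph and weights are made up).
rng = np.random.default_rng(1)
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
L = np.zeros((n, n))
for i, j in edges:
    a = np.zeros(n)
    a[i], a[j] = 1.0, -1.0
    L += rng.uniform(0.5, 2.0) * np.outer(a, a)

J = np.ones((n, n)) / n                      # (1/n) 1 1^T
L_dagger = np.linalg.inv(L + J) - J          # right-hand side of the identity
assert np.allclose(L_dagger, np.linalg.pinv(L))
```

The identity works because adding $\tfrac{1}{n}\mathbf{1}\mathbf{1}^\top$ shifts only the zero eigenvalue (to 1) while leaving the rest of the spectrum untouched, making the sum invertible.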

Fast solvers are available to do this efficiently (see references below). For the full theoretical analysis and large-scale experiments (up to 10,000 nodes), see our paper.

References

The Frank-Wolfe type algorithm used here is based on:

For fast Laplacian system solvers that enable scaling to large networks:

The Laplacians.jl Julia package implements these solvers.

Citation

@misc{talkington2025efficientnetworkreconfigurationrandomized,
      title={{Efficient Network Reconfiguration by Randomized Switching}},
      author={Samuel Talkington and Dmitrii M. Ostrovskii and Daniel K. Molzahn},
      year={2025},
      eprint={2510.24458},
      archivePrefix={arXiv},
      primaryClass={math.OC},
      url={https://arxiv.org/abs/2510.24458}
}

Acknowledgments

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039655. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.