This tutorial provides a basic setup for working with multigraphs in Julia, including sampling a classical random graph model and efficiently computing the sparse matrices associated with it.
Graphs are a fundamental combinatorial object for modeling relationships between entities (and also a great tool for modeling power grids). In the simplest setting, a graph consists of a set of vertices (nodes) and a set of edges connecting them. A multigraph generalizes this notion by allowing multiple edges between the same pair of vertices and, in some cases, self-loops (edges from a vertex to itself; power engineers will know these as "shunts"). This generalization is not just a mathematical curiosity: it appears naturally in many applied settings, such as parallel transmission lines in a power grid, multiple transportation routes between cities, or multiplex social networks.
From a linear-algebraic perspective, one of the most useful representations of a graph is through its incidence matrix and graph Laplacian; we'll be dealing with these two matrices throughout this tutorial:
An (edge-to-node) incidence matrix $A$ encodes the relationship between edges and vertices. For an undirected graph, each row corresponds to an edge, with a $+1$ and a $-1$ in the columns of its two endpoints (the orientation is chosen arbitrarily) and zeros elsewhere. If self-loops are included, their rows contain a single nonzero entry.
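For example, the triangle graph on three nodes, with edges $(1,2)$, $(1,3)$, and $(2,3)$ oriented so that the lower-numbered endpoint gets the $+1$ (the same convention as the code below), has incidence matrix
$$
A = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & -1 \end{pmatrix}.
$$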
Given a vector of edge weights $w \in \mathbb{R}^m$, the weighted Laplacian matrix is defined as
$$
L = A^\top W A,
$$
where $W = \operatorname{diag}(w)$ is a diagonal matrix of edge weights. For an unweighted graph, $W$ is simply the identity; in weighted settings, the diagonal entries can represent physical properties (e.g., conductances/susceptances in an electrical network). The Laplacian is symmetric, positive semidefinite, and encodes a wealth of combinatorial and spectral information about the graph, such as connectivity, clustering, and diffusion processes. It is quite literally my favorite matrix of all time.
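Continuing the triangle example with unit weights ($W = I$), the Laplacian is
$$
L = A^\top A = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix},
$$
where each diagonal entry is a node's (weighted) degree and each off-diagonal entry is minus the total weight between the corresponding pair of nodes. Note that $L\mathbf{1} = 0$, so $0$ is always an eigenvalue; connectivity is governed by the next smallest one, which we will compute later.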
In this tutorial, we focus on random multigraphs, where the set of edges (and optionally, their weights) is generated by a probabilistic rule. A particularly simple and well-studied model is the Erdős–Rényi model $G(n, p)$, where each possible edge between the $n$ vertices appears independently with probability $p$, so the expected number of edges is $p\binom{n}{2}$. Extending this to weighted and/or multigraph settings is straightforward: we can assign weights according to some distribution, and allow multiple edges to share the same endpoints.
Why random graphs? Studying the Laplacians of random graphs provides insight into average-case behavior of algorithms, robustness properties of networks, and probabilistic analogues of classical spectral graph theory results. Moreover, in scientific computing and numerical linear algebra, generating such random instances is an important step in benchmarking algorithms for large-scale sparse systems.
In what follows, we will:
Construct incidence matrices for complete graphs in Julia, with optional self-loops.
Sample weighted Erdős–Rényi multigraphs by selecting a subset of edges and assigning random weights.
Compute and visualize the associated Laplacian matrices, and verify connectivity using spectral properties.
This sets the stage for exploring not just random graph models, but also the interplay between combinatorial structure and matrix computations in a high-level, efficient language like Julia (which you should use).
With this context in place, we can now move from definitions to a concrete implementation. In the next section, we will see how to build these matrices simply and efficiently in Julia, starting with a complete graph (as a convenient base case) and then randomly thinning its edges and assigning weights to produce an Erdős–Rényi multigraph. Along the way, we will use Julia’s sparse array capabilities to keep our representations memory-efficient and suitable for large-scale experiments.
using LinearAlgebra     # dense eigenvalue routines (eigvals)
using SparseArrays      # sparse constructors (sparse, spzeros, spdiagm)
using Random            # randsubseq and the Xoshiro RNG
import FreeType, FileIO # only needed if exporting UnicodePlots figures as images
using UnicodePlots      # terminal plots (spy)
"""
complete_incidence_matrix(n::Integer; self_edges=false)
Return the edge-to-node incidence matrix **A** of the complete graph on `n`
nodes (undirected, one orientation chosen arbitrarily).
* `size(A) == (n*(n-1)//2, n)` when `self_edges == false`;
otherwise the last `n` rows form an identity block so
`size(A) == (n*(n-1)//2 + n, n)`.
Each row has at most two non-zero entries: **+1** and **−1**
(or just **+1** for a self-edge).
"""
function complete_incidence_matrix(n::Integer; self_edges::Bool = false)
    # number of (unordered) edges
    nCh2 = n*(n-1) ÷ 2
    # total rows (add n if we keep self-edges)
    m = self_edges ? nCh2 + n : nCh2
    # non-zeros: 2 per ordinary edge + 1 per self-edge (if kept)
    nnz_total = 2nCh2 + (self_edges ? n : 0) # avoid shadowing SparseArrays.nnz
    I = Vector{Int}(undef, nnz_total)  # row indices
    J = Vector{Int}(undef, nnz_total)  # column indices
    V = Vector{Int8}(undef, nnz_total) # values (+1 / −1)
    nz  = 1 # running index into I, J, V
    row = 1 # current row in the matrix
    @inbounds for i in 1:n-1
        for j in i+1:n
            I[nz] = row; J[nz] = i; V[nz] = 1;  nz += 1
            I[nz] = row; J[nz] = j; V[nz] = -1; nz += 1
            row += 1
        end
    end
    if self_edges
        @inbounds for i in 1:n
            I[nz] = row; J[nz] = i; V[nz] = 1
            nz += 1; row += 1
        end
    end
    return sparse(I, J, V, m, n)
end
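As a quick usage example, the triangle from the worked example above comes out exactly as written there (expected output shown in comments):
# dense view of the triangle's incidence matrix (n = 3)
Matrix(complete_incidence_matrix(3))
# 3×3 Matrix{Int8}:
#  1  -1   0
#  1   0  -1
#  0   1  -1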
### Generate the incidence matrix of a complete graph
# number of nodes
n = 20
# number of edges in the complete graph
m = (n*(n-1)) ÷ 2
A = complete_incidence_matrix(n; self_edges=false)
@show spy(A)
Considering $n = 20$ nodes yields the following incidence matrix:
spy(A) = ┌─────┐
1 │⡏⠲⢄⡀⠀│ > 0
│⡧⣄⠀⠉⠒│ < 0
│⣇⠀⠙⠢⢄│
│⡇⠙⠢⣄⠀│
│⢹⠲⢄⡀⠉│
│⢸⠦⣀⠉⠒│
│⠸⣄⡈⠑⠦│
│⠀⣇⡈⠓⠤│
│⠀⢇⡉⠓⠤│
│⠀⢸⣌⠓⠤│
│⠀⠸⣤⣙⠢│
│⠀⠀⡗⢬⡑│
│⠀⠀⢹⣗⠮│
│⠀⠀⠈⣶⡭│
190 │⠀⠀⠀⠹⣿│
└─────┘
⠀1⠀⠀20⠀
380 ≠ 0
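As a cheap structural check (valid only without self-loops), every row of $A$ contains exactly one $+1$ and one $-1$, so the row sums must vanish, i.e. $A\mathbf{1} = 0$:
# every edge contributes a +1 and a −1, so all row sums are zero
@assert iszero(A * ones(n))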
"""
Samples a random sparse complete graph with n nodes by selecting edges weights
w = abs.
"""
function sample_erdos_renyi_graph(
        n, p;
        seed = 2030, # random seed, for reproducibility
    )
    nCh2 = n*(n-1) ÷ 2 # number of possible edges
    w = spzeros(nCh2)  # sparse weight vector, one slot per possible edge
    rng = Xoshiro(seed)
    # keep each edge independently with probability p ...
    kept = randsubseq(rng, 1:nCh2, p)
    # ... and give each kept edge a squared-Gaussian weight
    for ij in kept
        w[ij] = abs(randn(rng))^2
    end
    return w
end
### Sample a weighted Erdős–Rényi multigraph
n = 82 # number of nodes
# edge probability: 2*log(n)/n sits above the log(n)/n connectivity threshold,
# so the sampled graph is connected with high probability;
# see Tropp's monograph
p = 2*log(n)/n
# sample the edge weights
A = complete_incidence_matrix(n)
w = sample_erdos_renyi_graph(n, p)
W = spdiagm(w)
# assemble the weighted Laplacian L = AᵀWA
L = A'*W*A
f = spy(L)
@show f
This yields the following Laplacian matrix:
f = ┌──────────────────────────────┐
1 │⠛⢄⠀⠀⠤⡇⠄⠈⠛⠨⢇⠤⠁⠂⠅⠐⠀⠍⠅⠀⠭⠂⡠⡈⢀⡊⡍⡌⠤⠀│ > 0
│⠀⠀⠵⢇⡂⡆⠄⠄⠀⡒⡡⡂⡀⠃⡒⢀⣐⠀⠄⢂⡒⢐⡀⠌⠴⢀⢂⠂⡐⠀│ < 0
│⠤⠧⠨⠬⠑⢄⠈⢐⢂⣄⠌⠠⡂⢂⡄⠀⠤⡀⠄⢀⡠⠈⠎⠁⠀⠄⠀⠂⠀⠂│
│⡀⠁⠀⠅⢂⢀⠱⢆⠀⠠⡣⠰⢀⡀⠐⠨⡁⠂⣃⡀⠠⢐⢥⢂⡀⠀⡒⠀⠰⡄│
│⡛⡀⢠⠠⠈⢴⠀⡀⡑⢌⢀⠰⠀⠀⠄⠄⡂⢀⠂⣁⠀⠈⢠⠑⠠⢀⡄⡌⠈⠄│
│⠉⡕⠡⠪⠂⡁⢉⡊⢀⡐⡵⢏⠘⡨⠁⢥⡀⠂⠁⡀⠁⠈⡀⠌⠤⢀⠂⡆⠀⠀│
│⠡⠀⠤⠈⠨⢈⠀⠰⠀⠀⡒⡠⠑⢄⠂⠖⡉⠡⢤⢰⢀⠀⠀⠁⠂⠀⡓⢂⠠⠀│
│⢁⠁⠘⢈⠀⠉⡐⡀⠀⠅⠅⣄⢨⠄⢑⢔⠉⡈⠉⠈⡠⡀⢰⡂⣚⠀⡉⢂⣡⠀│
│⡄⠄⠐⠘⠀⠣⠡⠈⠈⢈⠠⠈⠇⡈⡃⠠⠛⢄⠀⠃⡒⠂⠤⠀⢉⠔⡂⡋⣐⡁│
│⠁⠁⠠⢁⠀⢁⠉⠸⠌⢠⠁⠠⢀⣓⡃⠀⠤⠀⠛⢄⠀⠀⠉⠴⠉⡀⡁⣀⠉⠀│
│⠣⠃⢘⢈⡀⠊⢀⢂⡀⠀⡁⠀⠀⠐⠀⠪⠸⠈⠀⠀⢑⢔⡘⢌⡒⠀⠓⢄⠚⠁│
│⡀⠪⡀⠌⠎⠁⠡⢓⢄⠒⡀⠌⠄⠀⠰⠲⠀⠃⢃⡄⡒⢌⠑⢄⠁⠢⠜⡐⠀⠀│
│⡠⠰⠐⢃⠀⠄⠀⠈⠀⢂⠀⢃⠈⠀⠚⠘⢃⠔⠃⠠⠘⠈⠡⡀⢑⢔⠃⠌⡀⠀│
│⡃⠭⠨⠐⠠⠀⠘⠈⡀⠭⠨⠤⠹⢈⠣⢈⡬⠨⠁⢨⠙⢄⢒⠡⡉⠄⠑⢄⠀⠀│
82 │⠀⠃⠐⠈⠠⠀⠐⠦⠂⠄⠀⠀⠀⠂⠁⠚⠔⠸⠃⠀⠞⠀⠀⠀⠀⠈⠀⠀⠑⠄│
└──────────────────────────────┘
⠀1⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀82⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀826 ≠ 0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
Additionally, we can check whether the graph is connected by testing whether the second smallest eigenvalue (the Fiedler value) is positive; unfortunately, we'll need to convert back to a dense matrix to do this:
fiedler = eigvals(Matrix(L))[2] # second smallest eigenvalue
println("Fiedler value: ", fiedler)
Fiedler value: 1.3019488070274103
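For larger graphs, converting to a dense matrix quickly becomes wasteful. One alternative, sketched here assuming the Arpack.jl package (which is not used anywhere else in this tutorial), is to ask an iterative eigensolver for just the two smallest-magnitude eigenvalues of the sparse Laplacian:
using Arpack
# two smallest-magnitude eigenvalues of the sparse Laplacian; the smaller is
# ≈ 0 (the all-ones nullspace) and the other is the Fiedler value
λ, _ = eigs(L; nev=2, which=:SM)
println("Fiedler value (sparse solver): ", sort(real.(λ))[2])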
In this tutorial, we built a lightweight framework that combines basic concepts from spectral graph theory with Julia’s high-performance sparse linear algebra tools to generate and analyze random multigraphs. We saw how to:
Construct incidence matrices for complete graphs, optionally with self-loops.
Sample random subsets of edges in the Erdős–Rényi model and assign weights.
Assemble the corresponding weighted Laplacian and inspect its structure.
Use spectral properties—such as the Fiedler value—to assess connectivity.
These techniques provide a foundation for a wide range of applications, from testing numerical solvers to exploring probabilistic models of networks in physics, engineering, and computer science. By leveraging Julia’s expressive syntax and efficient sparse matrix handling, it is straightforward to scale up these experiments to large graphs and specialized random models, enabling both theoretical exploration and practical benchmarking.