The Schrödinger equation on a finite graph

One of the most important discoveries in the history of science is the structure of the periodic table. This structure is a consequence of how electrons cluster around atomic nuclei and is essentially quantum-mechanical in nature. Most of it (the part not having to do with spin) can be deduced by solving the Schrödinger equation by hand, but it is conceptually cleaner to use the symmetries of the situation and representation theory. Deducing these results using representation theory has the added benefit that it identifies which parts of the situation depend only on symmetry and which parts depend on the particular form of the Hamiltonian. This is nicely explained in Singer’s Linearity, symmetry, and prediction in the hydrogen atom.

For a while now I’ve been interested in finding a toy model to study the basic structure of the arguments involved, as well as more generally to get a feel for quantum mechanics, while avoiding some of the mathematical difficulties. Today I’d like to describe one such model involving finite graphs, which replaces the infinite-dimensional Hilbert spaces and Lie groups occurring in the analysis of the hydrogen atom with finite-dimensional Hilbert spaces and finite groups. This model will, among other things, allow us to think of representations of finite groups as particles moving around on graphs.

Physics

I am going to be vague about the mathematics here because it’s all about to get a lot simpler anyway. I am also not going to justify the physics because I am not particularly knowledgeable in this area.

In quantum mechanics, the states of a quantum system in a configuration space X are described by unit vectors in the Hilbert space L^2(X) up to phase, that is, up to multiplication by a complex number of norm 1. The system does not have a definite position in X; instead, if the state is \psi \in L^2(X), the probability that the position of the system lies in a region U \subset X is the integral of |\psi(x)|^2 over U.

Observables in a quantum system are given by self-adjoint operators A : L^2(X) \to L^2(X), and their values can be predicted as follows: if A is sufficiently nice, it has an orthonormal basis of eigenvectors \psi_1, \psi_2, ... with corresponding real eigenvalues \lambda_1, \lambda_2, .... If our quantum system is in a state \psi \in L^2(X), then A takes the value \lambda_k with probability |\langle \psi, \psi_k \rangle|^2, and the state \psi is then projected to the \lambda_k-eigenspace (wave function collapse). In particular, the expected value of A is given by \langle \psi, A \psi \rangle. (If A has continuous spectrum, as is the case with the position operator, then one must do more work. Discrete spectrum, however, occurs in many applications and is responsible for the discreteness after which quantum mechanics is named, and we will only be dealing with discrete spectrum anyway.)
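
To make the measurement rule concrete, here is a minimal numpy sketch; the Hermitian matrix A and the state psi below are made-up two-dimensional examples, nothing more.

```python
import numpy as np

# A made-up self-adjoint "observable" on C^2 and a made-up unit state.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 1 and 3
psi = np.array([1.0, 0.0])

eigvals, eigvecs = np.linalg.eigh(A)  # orthonormal eigenbasis, real eigenvalues
probs = np.abs(eigvecs.conj().T @ psi) ** 2   # P(A = lambda_k) = |<psi_k, psi>|^2

print(dict(zip(eigvals.round(6), probs.round(6))))  # {1.0: 0.5, 3.0: 0.5}
print(psi.conj() @ A @ psi)           # expected value <psi, A psi> = 2.0
```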

Time evolution \psi \mapsto U(t) \psi of a quantum system is described by a strongly continuous one-parameter unitary group, that is, a collection of unitary operators U(t) : L^2(X) \to L^2(X), t \in \mathbb{R} such that U(t+s) = U(t) U(s) and such that \lim_{t \to t_0} U(t) = U(t_0) in the strong operator topology. By Stone’s theorem, there exists a self-adjoint operator A such that U(t) = e^{it A}. (Since U(t) can stand for a more general one-parameter group of symmetries of our quantum system, this is a form of Noether’s theorem in quantum mechanics.) An appropriate multiple of A is then the observable corresponding to energy; it is called the Hamiltonian H, we label its eigenvalues E_k, and the dynamics of our quantum system are then described by the Schrödinger equation

\displaystyle i \hbar \frac{d}{dt} \psi(t) = H \psi(t)

where \psi(t) = U(t) \psi(0) : \mathbb{R} \to L^2(X) describes the time evolution of our system (and should perhaps be written \psi(t, x)) and \hbar is the reduced Planck constant. Note that this is equivalent to \psi(t) having the form

\displaystyle \psi(t) = e^{- i \frac{H}{\hbar} t} \psi(0).

Given the eigenvalues and eigenvectors of H, we can write down the family of solutions

\displaystyle e^{-i \frac{H}{\hbar} t} \psi_k = e^{-i \frac{E_k}{\hbar} t} \psi_k

to the Schrödinger equation, and since the \psi_k are an orthonormal basis, these solutions span the space of solutions (in the Hilbert space sense). These are the solutions for which the energy H takes the definite value E_k, so they are called the base states relative to H. A generic state in L^2(X) is a linear combination of base states, so energy does not take a definite value for such a state. However, time evolution preserves the probability distribution on the set of possible energies of a state, which is a form of conservation of energy. More generally, an observable (a self-adjoint operator) A : L^2(X) \to L^2(X) is preserved by time evolution if and only if it commutes with the Hamiltonian H.
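
A minimal sketch of this spectral picture in finite dimensions, with \hbar = 1 and a made-up Hermitian matrix standing in for H:

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, 1.0]])               # made-up Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)

E, P = np.linalg.eigh(H)                 # columns of P are the base states psi_k
c = P.conj().T @ psi0                    # coefficients <psi_k, psi(0)>

def psi(t):
    # psi(t) = sum_k c_k e^{-i E_k t / hbar} psi_k
    return P @ (np.exp(-1j * E * t / hbar) * c)

# time evolution is unitary, so the norm is conserved
assert np.isclose(np.linalg.norm(psi(3.7)), 1.0)
# the probability distribution over energies is conserved too
assert np.allclose(np.abs(P.conj().T @ psi(3.7)), np.abs(c))
```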

If the eigenspaces of H are all one-dimensional, as one would expect to happen for generic choices of H, then knowing the energy of a base state uniquely determines it up to phase. However, in many quantum systems of interest, H is far from generic, and the eigenspaces are larger; this means that more than one base state will have the same energy in general, which physicists refer to as degeneracy. In the degenerate case, one must use additional observables other than energy to name states.

A general mechanism which produces degeneracy is the existence of a sufficiently large group G of symmetries of the Hamiltonian, that is, a group G of unitary operators on L^2(X) which commute with H. The reason is that G acts on the eigenspaces of H, so if the representation of G on L^2(X) is nontrivial then the eigenspaces of H must carry nontrivial irreducible subrepresentations (and in particular some eigenspace must be degenerate if these subrepresentations have dimension greater than 1, which can only happen when G is non-abelian). This is certainly the case for an important example, the case of a single electron in a hydrogen atom, where X = \mathbb{R}^3 and G includes the group \text{SO}(3) (but, as it turns out, actually includes the larger group \text{SO}(4)). Here, the decomposition of the eigenspaces of H into irreducible representations of G is responsible for the list of possible states of the electron. The idea here is that as long as we are going to attempt to label different parts of each energy eigenspace, we might as well do it in a G-invariant way. In any case, the decomposition of L^2(X) into irreducible representations of G, which exists independent of the form of the Hamiltonian, strongly constrains the energy eigenspaces of any Hamiltonian with G-symmetry.

The Laplacian

Solutions to the Schrödinger equation are called wave functions; the historical reason is related to wave-particle duality. Informally, one can think of an eigenvector \psi_k of the Hamiltonian with eigenvalue E_k as vibrating at angular frequency \omega = \frac{E_k}{\hbar}, a manifestation of the de Broglie relations connecting energy and frequency. This vibration is not noticeable if the state of the system is \psi_k, since time evolution will only change its phase, but if the state of the system is a superposition of states then they will interfere with one another roughly as if they were waves with the corresponding angular frequencies, as one can readily verify. (It’s not healthy to take this intuition too seriously, though. Quantum mechanics is quantum mechanics, regardless of one’s intuitions about either waves or particles.)

Part of wave-particle duality is a mathematical similarity between the eigenfunctions of Hamiltonians which occur in nature and solutions to the wave equation. A particle in \mathbb{R}^n moving under the influence of a potential V(x) : \mathbb{R}^n \to \mathbb{R} has quantum Hamiltonian

\displaystyle H = - \frac{\hbar^2}{2m} \Delta + V

where \Delta is the Laplacian, m is the mass of the particle, and V denotes the operator L^2(\mathbb{R}^n) \to L^2(\mathbb{R}^n) corresponding to multiplication by V. This is true more generally for a particle in a Riemannian manifold, where \Delta is the Laplace-Beltrami operator. Here the first term is the kinetic energy of the particle and the second term is the potential energy. It follows that solutions to the Schrödinger equation in this case involve eigenfunctions of the Laplacian, which are the same functions which correspond to the standing wave solutions of the wave equation.

This suggests that in order to find finite toy models of quantum systems, we should find a finite analogue of the Laplacian. The usual choice is as follows. Our configuration space X will be the vertices of an undirected finite graph with no self-loops, but possibly with multiple edges. We denote by V and E its vertex and edge sets, respectively, and by L^2(V) the Hilbert space of functions V \to \mathbb{C} with inner product

\displaystyle \langle \psi, \phi \rangle = \sum_{v \in V} \overline{\psi(v)} \phi(v).

The state of our particle is then described by a unit vector in L^2(V). To construct a Hamiltonian we define the Laplacian (or combinatorial Laplacian)

\displaystyle \Delta \psi(v) = \sum_{(w, v) \in E} (\psi(w) - \psi(v)).

(There is a sign convention to choose here, and I am choosing the one which gives the better analogy to the usual Laplacian.) In other words, if we define D : L^2(V) \to L^2(V) by D \psi(v) = d_v \psi(v) (the degree operator) and A : L^2(V) \to L^2(V) by A \psi(v) = \sum_{(v,w) \in E} \psi(w) (the adjacency operator), then \Delta = A - D. (In particular, if X is regular then the Laplacian differs from the adjacency matrix by a multiple of the identity matrix, so talking about the eigenvalues of one is equivalent to talking about the eigenvalues of the other.)
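
In matrix terms this is easy to build; here is a sketch for an arbitrary edge list (the path graph at the end is a made-up example):

```python
import numpy as np

def graph_laplacian(n, edges):
    """Combinatorial Laplacian Delta = A - D of an undirected graph on
    vertices 0..n-1; repeated pairs in `edges` model multiple edges."""
    A = np.zeros((n, n))
    for v, w in edges:
        A[v, w] += 1
        A[w, v] += 1
    return A - np.diag(A.sum(axis=1))

# made-up example: the path graph on 4 vertices
L = graph_laplacian(4, [(0, 1), (1, 2), (2, 3)])
print(np.linalg.eigvalsh(L))  # all eigenvalues <= 0, with 0 simple (connected graph)
```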

The Laplacian naturally arises in the study of random walks and electrical networks (see, for example, Doyle and Snell). Informally it measures the extent to which the value of \psi at v differs from the average of the value at its neighbors, and if we take X = \mathbb{Z}^n with its natural graph structure then \Delta can be written as a sum of discrete second finite differences which approximates the Laplacian on \mathbb{R}^n. For example, the combinatorial Laplacian on \mathbb{Z} is just

\displaystyle \Delta f(n) = f(n+1) - 2f(n) + f(n-1).

Before looking at the Schrödinger equation, it is perhaps a good idea to look at two other natural differential equations one can define using the Laplacian on a finite graph, which on \mathbb{Z}^n give well-known partial differential equations in the limit. First, the heat equation

\displaystyle \frac{d}{dt} \psi(t) = \Delta \psi(t)

can be motivated as follows: we want a function \psi : \mathbb{R} \to L^2(V) describing the evolution of a heat distribution on the vertices over time. Heat should tend to propagate from hot regions to cool regions, so a given vertex should lose heat in proportion to how much hotter it is than its neighbors. (There should be a constant in there describing how quickly heat propagates, but it is not particularly important for the discussion that follows.) Since heat is the result of Brownian motion of particles, we expect that the heat equation on a finite graph is related to the behavior of random walks on the graph.

What are the possible stable heat distributions on a finite graph? These are precisely the (real-valued) solutions to \Delta \psi = 0, that is, the harmonic functions.

Proposition: A (real-valued) function on a finite graph is harmonic if and only if it is constant on connected components.

Proof. Verify that

\displaystyle -\langle \psi, \Delta \psi \rangle = \sum_{(v, w) \in E} |\psi(v) - \psi(w)|^2

hence that \Delta \psi = 0 implies \psi(v) = \psi(w) for every (v, w) \in E. On the other hand, any such function is harmonic. One can also argue by a finite form of the maximum modulus principle: \psi is harmonic if and only if its value at any vertex is the average of its values at the neighboring vertices, and since there are only finitely many vertices in each connected component, there must be some vertex v at which \psi(v) is maximal. Since this maximal value is the average of the values at the neighbors of v, the neighbors of v must share that value, and repeating the argument shows that \psi is constant on the connected component of v. In physical terms, there cannot be a hottest point on any connected component, since heat would have flowed away from it to some other point.

The argument above establishes more than we asked for: it shows not only that the Laplacian is self-adjoint (hence has all real eigenvalues), but also that it is negative-semidefinite (since its negative represents a positive-semidefinite quadratic form), that the multiplicity of the eigenvalue 0 is the number of connected components of the graph, and that the remaining eigenvalues are all negative. The general solution to the heat equation is a linear combination of the solutions

\displaystyle e^{\Delta t} \psi_k = e^{\lambda_k t} \psi_k

corresponding to each (real) eigenvector of the Laplacian \psi_k with eigenvalue \lambda_k; since the nonzero eigenvalues are negative, these solutions have temperature decaying to zero, as is physically reasonable. In fact these solutions of the heat equation correspond to modes of “pure decay,” where the temperature decays exponentially in the simplest possible way, and the general solution is a linear combination of these modes. The rate of decay from generic initial conditions is controlled by the nonzero eigenvalue closest to zero, and if the graph is connected, this is the eigenvalue second-smallest in absolute value, \lambda_1, a fundamental invariant called (up to a sign convention) the algebraic connectivity in graph theory. It is related to several other measures of connectivity in graphs and is important in the theory of expander graphs.

Similarly, the wave equation

\displaystyle \frac{d^2}{dt^2} \psi(t) = \Delta \psi(t)

can be motivated as follows: imagine that our graph X describes a system of point masses connected by springs, one for each edge. Hooke’s law then tells us that, if we displace these point masses very slightly, the restoring force that pulls a given vertex back to equilibrium is proportional to the sum of the differences in the displacements between that vertex and its neighbors. The general solution to the wave equation is a linear combination of the solutions

\displaystyle \cos (\sqrt{|\lambda_k|} t) \psi_k, \sin (\sqrt{|\lambda_k|} t) \psi_k

(keeping in mind that the eigenvalues \lambda_k are negative), and it is because of this that the eigenvectors \psi_k of the Laplacian on anything are called its harmonics. The solutions above are the standing waves, and so we know we can express any solution to the wave equation as a superposition of standing waves.

I think the following would be a very fun project for a graph theory class: for various interesting graphs, plot using mathematical software the solutions to the heat and wave equations from various initial conditions. I would do this myself if I had more time.
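
In that spirit, here is a minimal sketch of what such an experiment could look like, assuming numpy and matplotlib; the 6-cycle and the initial spike at one vertex are made-up choices.

```python
import numpy as np
import matplotlib.pyplot as plt

n = 6
g = np.roll(np.eye(n), 1, axis=0)            # the rotation of the cycle C_6
L = g + g.T - 2 * np.eye(n)                  # Delta = A - D on C_6
lam, P = np.linalg.eigh(L)
c = P.T @ np.eye(n)[0]                       # initial condition: a spike at vertex 0

ts = np.linspace(0, 6, 300)
heat = np.array([P @ (np.exp(lam * t) * c) for t in ts])       # pure-decay modes
freq = np.sqrt(np.maximum(-lam, 0.0))                          # guard against roundoff
wave = np.array([P @ (np.cos(freq * t) * c) for t in ts])      # standing waves

fig, (ax1, ax2) = plt.subplots(2, sharex=True)
ax1.plot(ts, heat); ax1.set_ylabel("heat")
ax2.plot(ts, wave); ax2.set_ylabel("wave")
ax2.set_xlabel("t")
plt.show()
```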

The Schrödinger equation (without potential)

Replacing the continuous Laplacian by a discrete Laplacian, and in the absence of a potential, the Schrödinger equation on a finite graph X (which we will assume to be connected) reads

\displaystyle i \hbar \frac{d}{dt} \psi(t) = - \frac{\hbar^2}{2m} \Delta \psi(t).

The energy eigenvalues E_k of the Hamiltonian are related to the eigenvalues \lambda_k of the Laplacian by E_k = - \frac{\hbar^2}{2m} \lambda_k; in particular, they are non-negative, and the vacuum state, which has energy zero, has multiplicity 1. The corresponding eigenvector, the all-ones vector, corresponds to a state with position smeared out evenly over all of X. The general solution is a superposition of the solutions

\displaystyle e^{-i \frac{H}{\hbar} t} \psi_k = e^{i \frac{\hbar}{2m} \lambda_k t} \psi_k

which are the states in which the energy has a definite value E_k. And for a general state \psi, the probability that a particle moving around X will be measured at location v \in X is given by \frac{|\psi(v)|^2}{\sum_v |\psi(v)|^2}.
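
Here is a sketch of this evolution (sometimes called a continuous-time quantum walk) on a made-up graph, with \hbar = m = 1:

```python
import numpy as np

n = 5
A = np.zeros((n, n))
for v, w in [(0, 1), (1, 2), (2, 3), (3, 4)]:  # made-up example: the path graph
    A[v, w] = A[w, v] = 1.0
H = -0.5 * (A - np.diag(A.sum(axis=1)))        # H = -(hbar^2/2m) Delta, hbar = m = 1

E, P = np.linalg.eigh(H)                       # E_k >= 0, with one zero eigenvalue
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                                  # particle observed at vertex 0

def position_probs(t):
    psi = P @ (np.exp(-1j * E * t) * (P.conj().T @ psi0))
    return np.abs(psi) ** 2                    # psi0 is a unit vector, so no renormalizing

print(position_probs(1.0))
```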

If the eigenvalues of the Laplacian all have multiplicity 1, then as mentioned above, the energy constitutes a completely satisfying label for base states. Consider, therefore, what happens when the graph X has a nontrivial automorphism group G. In that case, the energy eigenspaces decompose into direct sums of irreducible representations of G, at least one of which will be nontrivial, so if these irreducible representations have dimension greater than 1 some eigenspace must be degenerate. (There is a purely graph-theoretic consequence of this argument: if the eigenvalues of the Laplacian of a graph all have multiplicity 1, then the automorphism group of the graph is abelian.) One of these eigenspaces, the one spanned by the vacuum state, always contains a copy of the trivial representation. In general the character of the representation of G on L^2(V) is \text{Fix}(g), so the number of copies of the trivial representation is

\displaystyle \frac{1}{|G|} \sum_{g \in G} \text{Fix}(g)

which is also, by Burnside’s lemma, the number of orbits of the action of G on X. (The subspace spanned by copies of the trivial representation is precisely the subspace spanned by functions constant on orbits.) In particular, if G acts transitively then there are no other copies of the trivial representation. In any case, this suggests that we should restrict our attention to the positive subspace, the orthogonal complement of the all-ones vector (where the Hamiltonian has positive eigenvalues).
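
For a small graph one can brute-force all of this; the sketch below finds the automorphism group of a made-up example (the star K_{1,3}) by checking every vertex permutation, then averages \text{Fix}(g):

```python
import numpy as np
from itertools import permutations

A = np.zeros((4, 4))
for leaf in (1, 2, 3):                         # the star K_{1,3}: center 0, three leaves
    A[0, leaf] = A[leaf, 0] = 1.0

G = [p for p in permutations(range(4))
     if np.array_equal(A[np.ix_(p, p)], A)]    # adjacency-preserving permutations
fix = [sum(p[v] == v for v in range(4)) for p in G]

print(len(G))                # 6: the copy of S_3 permuting the leaves
print(sum(fix) / len(G))     # 2 orbits (center, leaves) = copies of the trivial rep
```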

Example. Let X be the complete graph K_n. Then G = S_n acts, not only transitively, but doubly transitively, on X. This implies (this is a nice exercise) that the positive subspace is an irreducible representation of S_n, which means that it must correspond to a single energy eigenspace of dimension n-1. By computing the trace of the Laplacian, or otherwise, we see that the corresponding eigenvalue is \lambda = -n, hence corresponds to energy E = \frac{n \hbar^2}{2m}. We can further slice up this eigenspace by picking out permutations in S_n, but I don’t see a particularly good reason to do this in this case.

Time evolution starting from the state \psi(0) = \langle 1, 0, 0, ... \rangle (where the particle is at some vertex with probability 1; we can prepare this state by observing the position of the particle) is as follows. We write

\displaystyle \psi(0) = \frac{1}{n} \left( \langle 1, 1, 1, ... \rangle + \langle n-1, -1, -1, ... \rangle \right)

where the first vector is in the zero eigenspace and the second is in the positive subspace, and the Schrödinger equation then gives

\displaystyle \psi(t) = \frac{1}{n} \left( \langle 1, 1, 1, ... \rangle + e^{ - i \frac{n \hbar}{2m} t} \langle n-1, -1, -1, ... \rangle \right).

Thus at time t, the probability that the particle is at its starting vertex is

\displaystyle \frac{1}{n^2} \left( n^2 - 2(n-1) + 2(n-1) \cos \frac{n \hbar}{2m} t \right)

and the probability that it is at any particular other vertex is

\displaystyle \frac{1}{n^2} \left( 2 - 2 \cos \frac{n \hbar}{2m} t \right).

This particle is in a superposition of two base states: the vacuum state, in which it is uniformly distributed over all vertices, and a positive-energy state, in which it is found at its original vertex with high probability and at the other vertices with low probability. The observed behavior is “interference” between these two states. One can think of these results as a very simple manifestation of the uncertainty principle: the particle resists being located entirely at its starting vertex, so it distributes itself a little across all the other vertices. (We will give a slightly better manifestation of the uncertainty principle later.)
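
These formulas are easy to check numerically; here is a sketch for n = 4, again with \hbar = m = 1:

```python
import numpy as np

n = 4
A = np.ones((n, n)) - np.eye(n)               # the complete graph K_4
H = -0.5 * (A - (n - 1) * np.eye(n))          # H = -(1/2) Delta; eigenvalues 0 and n/2
E, P = np.linalg.eigh(H)
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0

t = 0.9
psi = P @ (np.exp(-1j * E * t) * (P.conj().T @ psi0))
w = n / 2.0                                    # angular frequency n hbar / 2m
assert np.isclose(abs(psi[0])**2, (n**2 - 2*(n-1) + 2*(n-1)*np.cos(w*t)) / n**2)
assert np.isclose(abs(psi[1])**2, (2 - 2*np.cos(w*t)) / n**2)
```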

The complete graph is somewhat anomalous in how degenerate its positive subspace is: the only way the automorphism group of a graph can act doubly transitively on it is if the graph is either complete or empty, so in any other case the positive subspace decomposes into two or more irreducible representations. We can get a grip on how many of these there are as follows. By character theory, if L^2(V) decomposes into a direct sum of n_k copies of the irreducible representation V_k of G, then

\displaystyle \sum n_k^2 = \frac{1}{|G|} \sum_{g \in G} \text{Fix}(g)^2.

By Burnside’s lemma, this is just the number of orbits of G acting on V \times V. There is always at least one orbit consisting of pairs (v, v) of identical vertices (exactly one if G acts transitively), and the number of remaining orbits constrains how many representations can occur in the positive subspace. In particular, since n_k^2 \ge 1, it follows that the number of orbits of G on pairs of distinct vertices is an upper bound on the number of irreducible representations the positive subspace decomposes into (hence on the number of distinct nonzero eigenvalues the Laplacian can have). One can think of the number of orbits as “the number of ways two distinct vertices can stand in relation to each other up to automorphism,” and for structured graphs that makes it relatively easy to count directly.

Example. Consider the Kneser graph KG_{n,2} of 2-element subsets of [n] = \{ 1, 2, ... n \}, where two subsets are joined by an edge if they are disjoint. The graph has automorphism group G = S_n acting in the obvious way, and two distinct vertices can only be related to each other in two ways up to the action of G: they can be connected or not connected. It follows that the positive subspace decomposes into two irreducible representations of S_n of total dimension {n \choose 2} - 1. Given an element a \in [n], consider the vector v_a in the positive subspace equal to n-2 on subsets containing a and equal to -2 on subsets not containing a. The vectors v_1, ... v_n add to zero and S_n acts on them via its standard permutation representation, hence they span an (n-1)-dimensional irreducible representation corresponding to the positive subspace of the representation of S_n on K_n. The orthogonal complement of this representation is a \left( {n \choose 2} - n \right)-dimensional irreducible representation whose character is now easy to write down. I will leave the computation of the corresponding energy eigenvalues as an exercise.
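
For concreteness, here is a numerical check of this decomposition for n = 5, where KG_{5,2} is the Petersen graph; the multiplicities 1, n - 1 = 4 and {5 \choose 2} - 5 = 5 appear as expected (the eigenvalues themselves are the exercise):

```python
import numpy as np
from collections import Counter
from itertools import combinations

n = 5
V = list(combinations(range(n), 2))            # 2-element subsets of [n]
A = np.array([[1.0 if set(v).isdisjoint(w) else 0.0 for w in V] for v in V])
L = A - np.diag(A.sum(axis=1))                 # Delta on the Kneser graph KG_{5,2}

vals = np.round(np.linalg.eigvalsh(L), 8)
print(Counter(vals))                           # eigenvalue multiplicities 1, 4 and 5
```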

Momentum

In ordinary quantum mechanics, linear momentum comes from the self-adjoint operators which generate linear translations via Stone’s theorem, and angular momentum comes from the self-adjoint operators which generate rotations. So momentum is generally related to spatial symmetries. In the finite graph model, spatial symmetries – the automorphisms G of the graph – are discrete, so Stone’s theorem doesn’t apply. But we can still give a definition of a quantity which behaves something like momentum.

Definition: Let g \in G and suppose that a state \psi \in L^2(V) is an eigenvector for g with eigenvalue e^{ix}, x \in S^1 \simeq \mathbb{R}/2\pi\mathbb{Z}. Then x is the g-momentum of \psi.

In other words, it’s what the eigenvalue of a self-adjoint operator generating g would be if it existed. The g-momentum does not come from a self-adjoint operator and so is not an observable as we have narrowly defined it, but nevertheless it is still a number which behaves like ordinary momentum; one might think of it as the momentum the particle has in the “direction” given by g. And it is still possible, for an arbitrary \psi \in L^2(V), to write down the probability that the g-momentum is one of its several possible values, and to decompose the energy eigenspaces of the Hamiltonian by the eigenspaces of g, each one corresponding to a particular value for the g-momentum. And since g commutes with the Laplacian, g-momentum is conserved by time evolution.

Example. Let X be the cycle graph C_n, with vertices V = \{ 0, 1, 2, ... n-1 \}. Then G is the dihedral group D_n. Letting g be the rotation v \mapsto v+1 \bmod n, the decomposition of L^2(V) into eigenspaces of g is straightforward, since it is the same as the decomposition of the regular representation of \mathbb{Z}/n\mathbb{Z}: the eigenspace spanned by the eigenvector \psi_k = \langle 1, \zeta_n^k, \zeta_n^{2k}, ... \zeta_n^{(n-1)k} \rangle where \zeta_n = e^{ \frac{2 \pi i}{n} } is the one where the g-momentum has value \frac{2 \pi k}{n}.

In this particular case the Laplacian can be written in terms of g: it is precisely equal to g + g^{-1} - 2, hence \psi_k is an eigenvector of the Laplacian with eigenvalue \lambda_k = 2 \cos \frac{2 \pi k}{n} - 2, hence energy E_k = \frac{\hbar^2}{m} \left( 1 - \cos \frac{2 \pi k}{n} \right). Since E_k = E_{-k} it follows that eigenvectors \psi_k pair up into energy eigenspaces (and in fact, pair up into irreducible representations of D_n), and we can slice up each pair according to whether the g-momentum is directed clockwise or counterclockwise (as measured by whether it is closer to 0 in the clockwise or counterclockwise direction).
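
A quick numerical sanity check of these formulas, with n = 8 and k = 3 as made-up choices:

```python
import numpy as np

n, k = 8, 3
g = np.roll(np.eye(n), 1, axis=0)              # the rotation v -> v + 1 (mod n)
L = g + g.T - 2 * np.eye(n)                    # Delta = g + g^{-1} - 2 on C_n

psi_k = np.exp(2j * np.pi * k * np.arange(n) / n)
lam_k = 2 * np.cos(2 * np.pi * k / n) - 2
assert np.allclose(L @ psi_k, lam_k * psi_k)
# the eigenvalue of g is the phase e^{-2 pi i k/n} with this matrix convention
assert np.allclose(g @ psi_k, np.exp(-2j * np.pi * k / n) * psi_k)
```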

This situation is the discrete analogue of the particle in a ring, and indeed note that the eigenvectors are very similar and that if k is very small compared to n we have 2 - 2 \cos x \approx x^2, hence

\displaystyle E_k \approx \frac{\hbar^2}{2m} \left( \frac{2 \pi k}{n} \right)^2

which is very close to the lower energy levels of the spectrum in the continuous case. (One should think of \frac{2\pi}{n} as a conversion factor between the discrete and continuous Laplacians.)

The cycle graph is also not a bad place to demonstrate another form of the uncertainty principle: position and g-momentum operators do not commute. Here by a position operator we mean a self-adjoint operator x_v which projects \psi \in L^2(V) onto the subspace of functions which vanish away from a particular vertex v \in V. In particular, as is already clear from our computations above, there is no state in L^2(V) in which both position and g-momentum are uniquely determined (a common eigenvector). Furthermore, a state in which position is completely determined is a state with maximum ambiguity in g-momentum:

\displaystyle \langle 1, 0, 0, ... \rangle = \sum_{k=0}^{n-1} \frac{1}{n} \langle 1, \zeta_n^k, \zeta_n^{2k}, ... \zeta_n^{(n-1)k} \rangle.

Conversely, a state in which g-momentum is completely determined is a state with maximum ambiguity in position. One should think of this in terms of the discrete Fourier transform, since the story parallels perfectly the story of the Fourier transform on \mathbb{R}^d which describes the relationship between position and momentum for free particles in \mathbb{R}^d.
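
The discrete Fourier transform makes this tradeoff easy to see numerically; a sketch (n = 8 is a made-up choice):

```python
import numpy as np

n = 8
delta = np.zeros(n)
delta[0] = 1.0                                 # position completely determined

# the displayed identity: the delta state is a uniform superposition of momenta
momenta = np.array([np.exp(2j * np.pi * k * np.arange(n) / n) for k in range(n)])
assert np.allclose(delta, momenta.sum(axis=0) / n)

# equivalently, its discrete Fourier transform is flat: maximal momentum ambiguity
assert np.allclose(np.abs(np.fft.fft(delta)), 1.0)
```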

Representations, duals, and tensor products

There is a philosophy going back at least as far as Wigner that irreducible representations of the symmetry group G of a quantum system should be identified with the possible types of particles in that system. This seems pretty reasonable: if \psi is a wave function describing some particle then g \psi for g \in G should be thought of as the same particle, just moving in a different direction, so it is natural to collect all of these wave functions into the same representation of G. Thus an arbitrary wave function is a superposition of different types of particles corresponding to the different irreducible representations of G occurring in the decomposition of L^2(V).

This gives us a physical language for making sense of basic operations on representations. For example, if W is an irreducible subrepresentation of L^2(V), then since the character of L^2(V) is real, it follows that the dual representation W^{\ast} is also an irreducible subrepresentation. (Concretely, it is obtained from W by taking the complex conjugate of the corresponding wave functions. If W has a basis of wave functions with real coordinates, then W \simeq W^{\ast}.) In physical language, one might call W^{\ast} the antiparticle of W. Note that if \psi_k \in W has energy eigenvalue E_k, then \psi_k evolves as

\displaystyle e^{- i \frac{E_k}{\hbar} t} \psi_k

whereas its antiparticle \overline{\psi_k} evolves as

\displaystyle e^{-i \frac{E_k}{\hbar} t} \overline{\psi_k}.

The probabilities one computes from this evolution are the same as the probabilities one computes from its complex conjugate

\displaystyle e^{i \frac{E_k}{\hbar} t} \psi_k

and so one might say that an antiparticle is “the same thing” as the original particle going backwards in time. But in the finite graph model one should be a little more careful about this, since the above object is not, strictly speaking, a solution to the Schrödinger equation when E_k is positive.

Particles and antiparticles are probably best known for colliding. To give this operation some kind of meaning in the finite graph model, we must first discuss the following general construction. If one quantum system is given by a Hilbert space K_1 and another is given by a Hilbert space K_2, then to study them together, we work in the Hilbert space tensor product K = K_1 \otimes K_2, and we call the corresponding quantum system the composite system. This is the natural thing to do if, for example, K_1 and K_2 are the Hilbert spaces of states of two particles and you want to study the particles together. (But it is also the natural thing to do if K_1 and K_2 describe two aspects of the same object, e.g. the x and y-coordinates of a particle in the plane.) Given any self-adjoint operator A on K_1 we get a self-adjoint operator A \otimes 1 on K, and similarly for K_2, so one can make all of the same observations on each subsystem as before. In addition, for every symmetry g of K_1 there is a corresponding symmetry g \otimes 1 of K, and similarly for K_2. If K_1, K_2 are both equipped with Hamiltonians H_1, H_2, then the operators giving the energy of each subsystem are H_1 \otimes 1, 1 \otimes H_2, so the total energy is a new Hamiltonian

\displaystyle H = H_1 \otimes 1 + 1 \otimes H_2

(assuming there is no interaction between the subsystems; otherwise we need a third term to describe that interaction.) The time evolution operator given by the Schrödinger equation for K is then

\displaystyle U(t) = e^{-i \frac{H}{\hbar} t} = e^{-i \frac{H_1}{\hbar} t} \otimes e^{-i \frac{H_2}{\hbar} t}

since H_1 \otimes 1 and 1 \otimes H_2 commute. It follows that if \psi, \psi' are states in K_1, K_2, then the time evolution of the state \psi \otimes \psi' in K is given by the individual evolution of each state in each subsystem. Of course, in general a state in K is a superposition of states of the form \psi \otimes \psi'; this is the mechanism behind quantum entanglement.
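
A sketch verifying this factorization for two made-up two-level subsystems, with \hbar = 1:

```python
import numpy as np

def U(H, t):
    """e^{-iHt} for Hermitian H, via the spectral theorem."""
    E, P = np.linalg.eigh(H)
    return P @ np.diag(np.exp(-1j * E * t)) @ P.conj().T

H1 = np.array([[0.0, 1.0], [1.0, 0.0]])        # made-up subsystem Hamiltonians
H2 = np.array([[1.0, 0.5], [0.5, 2.0]])
H = np.kron(H1, np.eye(2)) + np.kron(np.eye(2), H2)   # H1 x 1 + 1 x H2

t = 0.7
assert np.allclose(U(H, t), np.kron(U(H1, t), U(H2, t)))
```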

If K_1, K_2 are finite graph models L^2(V_1), L^2(V_2) on graphs X_1, X_2, then their tensor product can be identified with L^2(V_1 \times V_2), and the composite Hamiltonian is the Hamiltonian of a new finite graph model given by a new graph structure on V_1 \times V_2, the Cartesian product (or box product) of graphs X_1 \Box X_2. The box product has Laplacian

\displaystyle \Delta = \Delta_1 \otimes 1 + 1 \otimes \Delta_2

so it defines the correct composite Hamiltonian. There is even a Wikipedia article about this construction, in the special case of the usual discrete Laplacian on \mathbb{Z}. (Note that the Cartesian product of n copies of \mathbb{Z} is \mathbb{Z}^n with the obvious graph structure, a nice property not shared by the tensor product of graphs. More generally, it does the obvious thing to Cayley graphs.) Thus one can identify a pair of free particles traveling on X_1 and X_2 with a single free particle traveling on X_1 \Box X_2.

Remark: Philosophically, it seems to me that the distinction between the box product and the tensor product comes from whether one wants to treat the Laplacian as a one-dimensional Lie algebra acting on the graph or whether one wants to treat, say, the adjacency matrix as a monoid acting on the graph.

For every pair of eigenvectors v, w of the Laplacians \Delta_1, \Delta_2 with eigenvalues \lambda, \mu, there is an eigenvector v \otimes w of the composite Laplacian \Delta with eigenvalue \lambda + \mu, and these eigenvectors form a complete orthonormal basis. The energy eigenspaces of \Delta decompose into irreducible representations of G_1 \times G_2 where G_i is the symmetry group of X_i, and these irreducible representations are exactly what you would expect: tensor products of irreducible representations of the groups G_1, G_2. (Note that X_1 \Box X_2 may have additional symmetries which allow us to group some of these representations together into larger irreducible representations.)
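
A sketch checking the additivity of the spectrum for a made-up box product C_3 \Box C_4:

```python
import numpy as np

def cycle_laplacian(n):
    g = np.roll(np.eye(n), 1, axis=0)
    return g + g.T - 2 * np.eye(n)

L1, L2 = cycle_laplacian(3), cycle_laplacian(4)
L = np.kron(L1, np.eye(4)) + np.kron(np.eye(3), L2)    # Laplacian of C_3 box C_4

sums = sorted(l1 + l2 for l1 in np.linalg.eigvalsh(L1)
                      for l2 in np.linalg.eigvalsh(L2))
assert np.allclose(np.linalg.eigvalsh(L), sums)        # spectrum = all lambda + mu
```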

Now suppose that X_1 = X_2 = X. Then one can think of a pair of free particles on X_1, X_2 as traveling on the same copy of X (although not interacting), and instead of allowing G_1 \times G_2 = G \times G as the symmetry group we should only consider symmetries we can perform on both particles at once, that is, the diagonal copy (g, g), g \in G of G inside G \times G. Then L^2(V \times V) is the tensor square of the representation of G given by L^2(V), so one can think of a tensor product W_1 \otimes W_2 of two subrepresentations W_1, W_2 of L^2(V) as describing the behavior of a pair of particles on X, one of which is of type W_1 and one of which is of type W_2. Since W_1 \otimes W_2 is not usually itself irreducible, this representation breaks up into irreducible components, which correspond to the different types of ways (up to symmetry) that a particle of type W_1 and a particle of type W_2 can behave together on X.

A collision between a particle of type W_1 and a particle of type W_2 can then be thought of as a morphism W_1 \otimes W_2 \to W of representations of G, where W is some subrepresentation of L^2(V). That is, it is a linear and G-invariant process for obtaining a new particle. And if W_2 = W_1^{\ast}, there is a canonical G-invariant morphism W_1 \otimes W_1^{\ast} \to 1 (where 1 denotes the trivial representation) given by the dual pairing. Physicists call this morphism particle-antiparticle annihilation, since its result is the vacuum state. The dual morphism 1 \to W_1 \otimes W_1^{\ast} (whose image corresponds to the identity operator W_1 \to W_1) is particle-antiparticle creation.

If one wants to think of all these morphisms as taking place in the same Hilbert space, then the natural thing to do is to set up a Hilbert space which is large enough to accommodate any finite number of particles. The operation on Hilbert spaces corresponding to studying a system which is either K_1 or K_2 (in contrast to the tensor product, where we study the first and the second together) is the direct sum K_1 \oplus K_2, so we construct the tensor algebra

\displaystyle T(L^2(V)) = \bigoplus_{k \ge 0} L^2(V^k).

Since the tensor powers of a faithful representation of a finite group contain all irreducible representations of that group, it follows that we can get all irreducible representations of G by considering enough different particles moving around on X.

In practice, people don’t work with the full tensor algebra. This is because particles of the same type in physics are actually identical (for example, every pair of electrons is identical, as is every pair of photons, etc.), so switching the identities of two particles is a physical symmetry. The two simplest cases are bosons, where switching the two particles gives exactly the same composite wave function, and fermions, where switching the two particles gives the negative of the original wave function. This means that instead of working in the full tensor square L^2(V) \otimes L^2(V), we work in either the symmetric square (for bosons) or the exterior square (for fermions). Similarly, for n particles, instead of working in the full tensor power we work in either the symmetric power or the exterior power. So to work with a variable number of particles, instead of working in the tensor algebra, we work in either the symmetric algebra or the exterior algebra; in either case the corresponding construction is called Fock space. Note that the need to work in exterior powers for fermions is responsible for the Pauli exclusion principle. Fermions and bosons are not easy to explain in the context of the finite graph model itself; they are a consequence of the spin-statistics theorem, which is relativistic. In any case, we now have a physical interpretation of the symmetric and exterior powers, and of the symmetric and exterior algebras.
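
To see the symmetric/exterior distinction concretely, here is a sketch of the two-particle swap on L^2(V) \otimes L^2(V) and the projectors onto its \pm 1 eigenspaces; the antisymmetrizer kills \psi \otimes \psi, a toy form of the Pauli exclusion principle (n = 4 is a made-up choice):

```python
import numpy as np

n = 4                                          # number of vertices of X
S = np.zeros((n * n, n * n))                   # swap operator on the tensor square
for i in range(n):
    for j in range(n):
        S[i * n + j, j * n + i] = 1.0          # (S psi)(i, j) = psi(j, i)

sym = (np.eye(n * n) + S) / 2                  # projector onto the symmetric square
alt = (np.eye(n * n) - S) / 2                  # projector onto the exterior square

psi = np.random.randn(n)
assert np.allclose(alt @ np.kron(psi, psi), 0.0)       # fermions exclude
assert np.isclose(np.trace(sym), n * (n + 1) / 2)      # dim Sym^2 = n(n+1)/2
assert np.isclose(np.trace(alt), n * (n - 1) / 2)      # dim Lambda^2 = n(n-1)/2
```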

(Regardless of whether one cares about fermions and bosons in the finite graph model, the Hilbert space L^2(V^n) describing n identical particles on X still has an S_n-symmetry relative to which it decomposes in terms of irreducible representations of S_n, so one still needs to understand the corresponding Schur functors to understand the ways in which n particles can behave up to symmetry.)

