# Introduction

This site describes an incomplete and ongoing project to build a **discrete, computational framework for geometry**, using "off the shelf" components from a range of mathematical fields, including graph theory, group theory, abstract algebra, representation theory, theoretical computer science, differential geometry, calculus, and others.

It asks the questions: how are we to think about geometries that have a "smallest scale"? And how do we represent geometries that are produced by computational processes?

### Motivation

For thousands of years we have encoded our intuitions of physical space into our mathematical and mental models for geometry. These continuous geometries – which assume infinite subdivisibility of space – are the foundation for much of modern mathematics and its applied branches. The resulting abstractions are now highly refined and immensely successful in their applications.

Nevertheless, much of our internal and external world is of a discrete and distributed nature, whether we consider the structures of our computer networks, our social networks, our economies, our software, our languages, our genetic lineages, even the symbolic operations of our thoughts. I strongly believe there is a "missing theory" to express and model these "discrete worlds": a theory of geometry that rebases our intuitions on a computational footing and that can unlock new perspectives on all these phenomena.

Creating this theory will be slow work, since it must metabolize the ideas of continuous geometry piecemeal – creating new intuition, notation, and software as it goes. Rather than working privately for years, I am inspired by the spirit of open source to maintain this site as a kind of "continuous beta", updated as ideas are clarified and elaborated.

### What is geometry?

From an abstract point of view, we may ask what geometry is fundamentally *about*.

One appealing answer is that it is the *logic of movement*: the rules by which an object or agent may be moved from place to place, or a more abstract system may transition from state to state, and the rules used to reason about these transitions. There are only two fundamental ingredients: the *states / places* that an agent or system can be in, and the *movements / transitions* available at each such state or place.

Both of these ingredients are discrete, taking only a finite number of values in any given instantiation: an agent may move from state \(\vert{x}\) to \(\vert{y}\) to \(\vert{z}\), but it does so in discrete jumps, and describes these with discrete labels: \(\tde{\vert{x}}{\vert{y}}{\card{a}}\), \(\tde{\vert{y}}{\vert{z}}{\card{b}}\). Time, if it is involved in such descriptions, is also discrete, passing like the ticking of a CPU clock.

### Novelty

None of the conceptual tools developed in this work are on their own very *novel*, although their assembly into a unifying whole, and the emphasis placed on them, may prove novel to some readers. A theory is as much a way of seeing as it is a body of theorems, and so the approach I am pursuing hopes to offer a way of recasting and re-interpreting existing structures to emphasize a more computational and geometric perspective.

### Visualization

One of my aims is to render intuitive this approach to discrete geometry through careful visualization. Visualization is fundamental to the theory and practice of geometry, and many arguments or explanations are simpler, or indeed redundant, in a visual medium.

### Prerequisites

The perspective I advocate is cross-disciplinary, and this makes an "ideal reader" hard to gauge. Instead, I have assumed my reader to be *myself* at age 17: a curious undergraduate interested in various computer science and pure mathematics fields, but not deeply versed in any, and willing to read other references where necessary.

I use an informal style that attempts to be more engaging, rather than maximizing the information density of the text as is typical of academic writing. Eventually there will be more terse, academic-style summaries of relevant "chunks" of work as they are completed and become worthy of publication outside of this medium.

### Format

This site represents a "living document" of a work in progress, detailing the topics within quiver geometry that I am confident enough to explain and illustrate.

Some pages contain missing subsections marked "under construction" – these are conceptually complete but require writeup that I haven't got around to yet. Gray sections on the navigation menu on the left represent ideas that are, at best, partially developed, and require future work.

### Prior work and references

Unfortunately, this document doesn't currently have inline references. A full index and bibliography is forthcoming.

The closest existing body of ideas lies in geometric group theory, which roughly speaking uses Cayley graphs of groups as a way to view groups as geometric objects. In a different direction, discrete differential geometry aims to discretize continuous manifolds for use in computational geometry. Lastly, topological crystallography studies graphs associated with crystallographic lattices.

### Physics

While quiver geometry doesn’t make immediate connections to physics, it does aim to populate the toolbox of ideas and methods we might use to build and analyze models of fundamental physics that *are* discrete, finite, and computable, such as those proposed by the Wolfram Physics Project, Gerard 't Hooft, and others.

If the universe *does* turn out to be operating on fundamentally discrete, computational principles, we will in retrospect see continuous geometry as an *enticing and useful illusion*, but one that never had the physical basis that was its historical justification and inspiration.

### The continuum

From a philosophical point of view, I believe we should attempt to construct continuous geometries as the limiting cases of particular discrete geometries. I'm not yet certain how to accomplish this. But I believe it will yield more insight than trying to construct discrete geometries by "discretizing" continuous geometries, by forcing us to embrace combinatorial and computational approaches from the very beginning.

# Summary and roadmap

## Overall summary

Here is a brief summary of what I've achieved so far.

The main object we consider is the [[[cardinal quiver:Graphs and quivers]]], a locally finite directed multigraph with shared oriented edge labels called *cardinals*. (We abbreviate *cardinal quiver* as *quiver*.)

Important [[[examples:Transitive quivers]]] of quivers are given by the [[[Cayley quivers]]] of Abelian groups, which can also be constructed as [[[quotients:Lattice quivers]]]. We can generalize these constructions by considering [[[group actions:Action groupoids]]], more complex [[[quotients:Intransitive lattices]]], [[[cyclic groups:Toroidal lattices]]], and [[[non-Abelian groups:Noncommutativity]]].

Notice that the connection with groups via action quivers is especially important, because it provides a kind of [[[Rosetta stone:Action groupoids#Introduction]]] to port intuition back and forth between quivers and groups. The general idea will be that quivers provide a way of handling partial symmetries of structures that elude modelling by groups.

Additional examples of quivers come directly from the causal structure of a non-deterministic computation, as expressed by a [[[rewriting system:Rewriting systems]]].

The paths on a quiver form a groupoid called the [[[path groupoid:Path groupoids]]]. Maps between path groupoids yield [[[path homomorphisms:Path homomorphisms]]]. Surjective path homomorphisms yield a notion of [[[covering:Coverings]]]; coverings form a [[[lattice:Contraction lattices]]] and, in certain cases, are associated with transitive [[[vertex colorings:Vertex colorings]]].

We can form [[[products of quivers:Quiver products]]], allowing us to derive new quivers from old ones. We can form some intransitive quivers in [[[surprising ways:Exceptional products]]]. Using these products we can define the local trivializations needed to define a discrete notion of a [[[fiber bundle:Fiber bundles]]].

More complex quivers can be partitioned into [[[charts:Cardinal transport#Charts]]] in which particular path relations hold; the connectivity of these charts is described by [[[another quiver:Cardinal transport#Transport atlas]]]. A notion of [[[parallel transport:Cardinal transport#Cardinal transport]]] analogous to the Ehresmann connection can be defined that relates cardinals on different charts. Curvature is described by holonomy on this derived quiver. We can compute the curvature of a quiver for the [[[Möbius strip:Cardinal transport#Curvature quiver]]] and the [[[cube:The cube]]].

A non-commutative, associative algebra can be [[[defined:Path algebra]]] on paths in a quiver. Doing this provides a substrate for a [[[calculus:Path calculus]]] of finite differences.

## Roadmap

This project continues to evolve and certain sections are either out of date or underdeveloped. Here I list some limitations and planned improvements for certain existing sections, and new sections that are on the roadmap.

#### Hypergraphs and hyperquivers

Hypergraphs lack a nice ontology in the traditional mathematical literature. One can be given in terms of type theory – [[[multisets:Multisets]]] play a crucial role.

Why do we need hypergraphs? They are, I suspect, far more pervasive than graphs in pure and applied contexts – but the lack of mathematical maturity in the abstract theory of hypergraphs means the common structure has been left latent more often than not.

#### Path homomorphisms

The section [[[Path homomorphisms]]] gives a good introduction to homomorphisms between quivers, but fails to explain the notion of topology that they induce. There is also low-hanging fruit to pick, like explaining the role of exact sequences.

#### Topology

As mentioned above, interpreting path homomorphisms between quivers as continuous maps implies a natural topological structure on the associated quivers. A section is needed to unpack this fully.

#### Symmetry

I am missing the obvious section on symmetry that would build on [[[affine path homomorphisms:Path homomorphisms#Affine path homomorphisms]]] to recover the isometry and isotropy groups of the associated crystallographic lattices in the case of lattice quivers. This will be straightforward, but the connection with **hypergroups**, especially for non-transitive lattices, is much more interesting, and unexplored.

#### Non-commutativity

The section [[[Noncommutativity]]] is currently just a stub. Careful study needs to be made of action quivers associated to non-Abelian groups, and a link forged with a forthcoming section on Lie algebras.

#### Quiver products

The section [[[Quiver products]]] could do with some more discussion of the algebraic aspects of arrow polynomials. In particular I need to explain the operad structure precisely, as well as formalize the polynomials themselves as a ring acting on quivers that are also valued in a (semi)ring – this perspective demands a proper treatment of quivers (and hyperquivers) as multiset-based data structures.

#### Fiber bundles

This section is incomplete; it mostly just recaps the traditional notion of fiber bundles of topological spaces. The description of discrete fiber bundles is still under construction. It is crucial for other definitions, in particular of curvature, which will be similar to the approach used by Manton.

#### Rewriting systems

The section [[[Rewriting systems]]] develops some of the ideas of how to formalize the lattice of states, but needs many more examples before engaging in this formalization, which may be premature anyway. It should be split into two sections: one for examples and another for partial state.

Nevertheless, the overall theoretical program is as follows: a rewriting system generates a rewrite quiver that describes its transitions, with the cardinal structure identifying rewrites of unique substates of the global state. Forward paths through this quiver correspond to possible evolutions of the system. Re-ordering of causally distinct rewrites (“rewrites that commute”) defines a homotopy relation between such paths, taking us between evolutions that differ in the time foliation of something called the rewrite hypergraph – which factors local states.

The generators of these reorderings form the cardinals of a quiver analogous to the Lorentz group. In this sense we can construct a computational analogue of Special Relativity, but with the appropriate quiver-theoretic tools to model the *partial* symmetry that discrete systems manifest.

#### Contraction lattices

The section [[[Contraction lattices]]] describes how contractions of a quiver form an order-theoretic lattice, but doesn't probe any of the interesting properties of this lattice, either in general or for particular quivers. Crucially, it doesn't answer when the lattices are distributive or modular, or when they satisfy the descending or ascending chain conditions, nor does it link these properties to the associated algebraic properties of the path ring.

#### Path algebra and calculus

The sections [[[Path algebra]]] and [[[Path calculus]]] are missing the ambient algebra in which path algebra is to be embedded as a subalgebra. This turns out to be the [[[plan ring:Adjacency#Plan ring]]], but as that section is developed it will retread the same ground as path algebra. I'll have to restructure these sections.

#### Curvature

The section [[[Cardinal transport]]] proposes a definition of discrete curvature that is not as precise as it should be. It suffers from the lack of a quiver-theoretic notion of a (principal) fiber bundle and tangent bundle. Having these would allow a definition of curvature in terms of homology. The fiber we require is the Cayley quiver of a signed permutation group that represents possible rewrites.

#### Defects

Curvature in the quiver-theoretic picture is fully determined by the presence of "defects". I haven't yet touched on these defects in any meaningful way: enumerated them, created a language to describe them computationally, demonstrated how certain computations produce them, performed calculus in their presence, and so on.

#### Lie algebras

The section [[[The cube]]] measures the curvature of a particular cardinal structure on the cube. But we can also measure the curvature a different way: by measuring the [[[covariant difference:Path algebra#Covariant differences]]] of a well-chosen set of path vectors. The structure constants of these vectors give us the Lie algebra \(\specialOrthogonalAlgebra{3}{\ring{\mathbb{Z}}}\). I still need to write this up.

#### Tangent bundles

As mentioned [[[above:#Curvature]]], the definition of curvature on a quiver is unsatisfactory. Currently, the transport atlas is defined as a quiver whose edges describe transitions between charts, where these edges are labeled with cardinal rewrites that capture cardinal transport. A more flexible construction would be to define the curvature quiver as a subquiver that corresponds to a section of a fibre bundle, where the fibre quiver is the Cayley quiver of a signed permutation group. This would allow us to define curvature via homology.

#### Metrics

The familiar graph distance metric, which for [[[Cayley quivers]]] corresponds to the word metric in the associated group presentation, has many undesirable properties. For example, it is not isotropic. Intriguingly, the theory of random walks on graphs can recover the Euclidean metric defined by the obvious embedding of the associated quivers into \(\realVectorSpace{\sym{n}}\). Developing this idea, particularly in connection with information theory and quiver contractions, is a high priority.

#### Geometric algebra

Geometric algebra provides a novel way to model many familiar constructs in geometry and physics by way of bivectors, multivectors, etc. I'm not sure what form geometric algebra would naturally take in quiver geometry, but I'm highly motivated to pursue this.

#### Abstract algebra

There is reason to believe that abstract algebra will have a lot to say about the path ring, but I haven't devoted enough time to thinking about this.

## Presentation

I plan to improve the overall presentation of these ideas in the following ways:

#### Video

I'd like to make some short videos that introduce the main ideas in an easy-going way, with plenty of illustrative examples.

#### Symbols

I use KaTeX in such a way that the roles of individual symbols are known: whether a given symbol represents a group, ring, quiver, path homomorphism, etc. I'd like to surface this information using some kind of interactive mechanism or legend, to make it easier for readers to parse complicated expressions when they are in doubt.

#### Material

This website is generated from a series of Mathematica notebooks using custom tooling. While all the underlying software tools I use are available on GitHub, the storage format of notebooks is ill-suited to version control, so the notebooks themselves are not public. I'd like to fix this by moving to a purely markdown-based storage format.

#### References

I don't have any references *anywhere*! Most urgently I need to root into the literature on crystallographic groups, geometric group theory, and differential geometry.

#### Category theory

I do not meaningfully engage with category theory yet. Using category theory widely would bring simplifications to the overall conceptual structure, but make the material more alienating to those without the requisite background. I'd like to experiment with *gated content*, in which you can flip between versions of the site that are tailored for different audiences.

# Table of contents

In [[[Graphs and quivers]]] we introduce the notion of a **cardinal quiver** and describe the **local uniqueness property**. We also enumerate some small quivers.

[[[Graphs and quivers#Hero@SmallFrame]]]

In [[[Transitive quivers]]] we introduce families of simple **transitive quivers**: the **line**, **square**, **triangular**, **cubic**, **grid**, **bouquet**, and **tree** quivers.

[[[Transitive quivers#Hero@SmallFrame]]]

In [[[Path groupoids]]] we define paths on a quiver, consider their path words, and show the paths form a **path groupoid**.

[[[Path groupoids#Hero@SmallFrame]]]

In [[[Word groups]]] we define the **word group**, which describes how we can build path words out of individual cardinals.

In [[[Path homomorphisms]]] we define **path homomorphisms** between quivers, list some properties, and examine some examples to build intuition. We also consider **affine path homomorphisms** and **automorphisms** of quivers.

[[[Path homomorphisms#Hero@SmallFrame]]]

In [[[Multisets]]] we set the stage for later developments by defining the useful data-structure known as the **multiset**. We explain how to form sums, unions, and intersections of multisets, and explain their connection to images of functions. We generalize the multiset to the **signed multiset**, and show how it combines with a semigroup operation to yield a **ring**.

In [[[Word rings]]] we define the **word ring**, which is the **group ring** of the word group. The word ring models linear combinations of words. We show how to interpret elements of the word ring as modelling **multiwords**: signed multisets of words.

In [[[Adjacency]]] we contemplate how to extend the concept of an **adjacency matrix** to model the cardinal structure of a quiver. We do this by defining the **plan ring**: a matrix ring over the word ring. Under multiset duality, matrices in this ring are dual to **plans**, which are multisets of (hypothetical) paths.

[[[Adjacency#Hero@SmallFrame]]]

In [[[Cayley quivers]]] we define how groups and their presentations give rise to **Cayley quivers** and show how the simple quivers we examined in [[[Transitive quivers]]] are thus generated as presentations of \(\group{\power{\group{\mathbb{Z}}}{\sym{n}}}\).

[[[Cayley quivers#Hero@SmallFrame]]]

In [[[Action groupoids]]] we recall **group actions** and understand Cayley quivers as instances of **action quivers** associated with actions of groups on themselves.

[[[Action groupoids#Hero@SmallFrame]]]

In [[[Path quivers]]] we define the **forward**, **backward**, and **affine path quiver**, which describe the possible paths of a quiver using another quiver.

[[[Path quivers#Hero@SmallFrame]]]

In [[[Path quotients]]] we define path valuations, and use these to construct **quotients** of path quivers.

[[[Path quotients#Hero@SmallFrame]]]

In [[[Lattice quivers]]] we generate some of the familiar transitive quivers as the quotients of one-vertex **fundamental quivers** by affine path valuations associated with group presentations of \(\group{\power{\group{\mathbb{Z}}}{\sym{n}}}\). We explain how such a quotient can be seen both as a breadth-first-search and as a Cayley quiver.

[[[Lattice quivers#Hero@SmallFrame]]]

In [[[Intransitive lattices]]], we use path quotients of fundamental quivers that have two or more vertices to generate **intransitive quivers** such as the **hexagonal** and **rhombille quivers**.

[[[Intransitive lattices#Hero@SmallFrame]]]

In [[[Toroidal lattices]]], we generate square, triangular, and hexagonal **toroidal quivers** as the Cayley quivers of presentations of *finite* Abelian groups, and plot these in three dimensions and on the modular plane.

[[[Toroidal lattices#Hero@SmallFrame]]]

In [[[Noncommutativity]]], we consider quivers obtained as the Cayley quivers of non-Abelian groups.

[[[Noncommutativity#Hero@SmallFrame]]]

In [[[Rewriting systems]]], we define **rewriting systems** and the quivers they generate.

[[[Rewriting systems#Hero@SmallFrame]]]

In [[[Coverings]]], we define **graph** and **quiver coverings**, and explain their connection to **path homomorphisms**.

[[[Coverings#Hero@SmallFrame]]]

In [[[Contraction lattices]]], we examine coverings derived from contractions of vertices, and show how they possess the structure of an order-theoretic lattice.

[[[Contraction lattices#Hero@SmallFrame]]]

In [[[Vertex colorings]]], we enumerate some **regular vertex colorings** of transitive quivers, and explain their connection to contraction lattices.

[[[Vertex colorings#Hero@SmallFrame]]]

In [[[Quiver products]]], we define **arrow polynomials** which encode how to *multiply quivers*: how to form a **product quiver** out of multiple **factor quivers**. We define the **locked**, **free**, and **Cartesian** products and see how they generate particular transitive quivers.

[[[Quiver products#Hero@SmallFrame]]]

In [[[Exceptional products]]], we apply quiver products to obtain some *intransitive* quivers, and explore the structure of the connected components these products produce.

[[[Exceptional products#Hero@SmallFrame]]]

In [[[Fiber bundles]]], we introduce the classical, continuous notion of a **fiber bundle**, and then examine its quiver-theoretic incarnation.

[[[Fiber bundles#Hero@SmallFrame]]]

In [[[Path calculus]]], we define **vertex** and **edge fields** and use them to set up the machinery of finite differences on quivers.

[[[Path calculus#Hero@SmallFrame]]]

In [[[Path algebra]]], we unify vertex and edge fields into **path vectors**, and define the operations of translation, composition, and covariant difference.

[[[Path algebra#Hero@SmallFrame]]]

In [[[Cardinal transport]]], we create a Möbius quiver, equip it with **charts**, and use it to define a **transport atlas** that tracks transitions between parallel cardinals, giving us a method for **cardinal transport**. A suitable quotient yields the **curvature quiver**.

[[[Cardinal transport#Hero@SmallFrame]]]

In [[[The cube]]], we set up an atlas on a quiver representing the Platonic cube, and calculate the transport atlas and curvature quiver, obtaining the Cayley quiver of the **binary tetrahedral group**.

[[[The cube#Hero@SmallFrame]]]

# Graphs and quivers

## Graphs

We started with the intuition that geometry should be about states/places: \(\vert{x},\vert{y},\vert{z}\) and their transitions: \(\tde{\vert{x}}{\vert{y}}{\card{a}}\), \(\tde{\vert{y}}{\vert{z}}{\card{b}}\), etc. Evidently we have the main ingredients of a **graph**, specifically a **directed graph**:

The objects \(\vert{x},\vert{y},\vert{z}\) are called **vertices.** \(\tde{\vert{x}}{\vert{y}}{\card{a}}\) is a **labeled edge**, where \(\card{a}\) is the label, which we call a **cardinal**. The unlabelled form is written \(\de{\vert{x}}{\vert{y}}\). \(\vert{x}\) is called the **tail vertex** and \(\vert{y}\) is called the **head vertex** of the edge. When vertices represent states and edges represent transitions between states, we’ll use the terms vertex/state and edge/transition synonymously.

It turns out that we will have to consider specific *kinds* of directed graphs if we wish to build a notion of geometry. These graphs are **quivers**, also known as **directed multigraphs**, which are directed graphs that allow multiple edges between any pair of vertices, as well as self-loops:

## Cardinal quivers

A seemingly fundamental geometrical idea is that of a **direction**.

For discrete geometry, a direction will take the form of a label for an entire *set* of transitions: these are the transitions that, although they are between different pairs of vertices, are all in the same direction (in whatever sense). We’ll call this label a **cardinal**, by analogy with the cardinals of a compass \(\list{\card{N},\card{E},\card{S},\card{W}}\). We'll write cardinals in a typewriter font to distinguish them from symbols representing vertices, etc.

A **cardinal quiver** is a quiver whose edges are labeled with symbols called **cardinals**, with some additional properties. First, while we will allow the number of vertices and edges of the quiver to be countably infinite, we will restrict the number of cardinals to be finite. Additionally, the labeling of edges by cardinals must satisfy the property of **local uniqueness**:

the list of cardinals on the edges entering a given vertex cannot contain duplicates, and likewise for the edges leaving it

For example, the situation \(\tde{\vert{y}}{\vert{x}}{\card{c}},\tde{\vert{y}}{\vert{z}}{\card{c}}\) is forbidden, as the cardinal \(\card{c}\) occurs twice when leaving \(\vert{y}\). In contrast, the situation \(\tde{\vert{x}}{\vert{y}}{\card{c}},\tde{\vert{y}}{\vert{z}}{\card{c}}\) is permitted, as \(\card{c}\) occurs entering \(\vert{y}\) once and leaving \(\vert{y}\) once. Some more examples of forbidden and permitted situations are shown below:

This uniqueness property has a very important purpose: it ensures that a pair of a vertex and a cardinal \(\tuple{\vert{v},\card{c}}\) will *unambiguously identify an edge* (if one exists), since there can be only one edge starting at \(\vert{v}\) that is labeled by \(\card{c}\). This corresponds to the basic requirement of the notion of a *direction*: it unambiguously identifies the navigational choices an agent should make. As motivation: for "north" to be a direction, there cannot be *two ways to go north* from one location that lead to different locations.
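
To make the property concrete, here is a minimal Python sketch (the representation and function name are my own, not part of the text) that stores a quiver as (tail, cardinal, head) triples and checks local uniqueness:

```python
# A minimal sketch: a cardinal quiver stored as (tail, cardinal, head) triples.
# The representation and names are illustrative, not from the text.

def locally_unique(edges):
    """Check local uniqueness: no cardinal may appear twice among the
    edges leaving a vertex, nor twice among the edges entering one."""
    leaving, entering = set(), set()
    for tail, cardinal, head in edges:
        if (tail, cardinal) in leaving or (head, cardinal) in entering:
            return False
        leaving.add((tail, cardinal))
        entering.add((head, cardinal))
    return True

# Forbidden: cardinal 'c' leaves vertex 'y' twice.
assert not locally_unique([("y", "c", "x"), ("y", "c", "z")])

# Permitted: 'c' enters 'y' once and leaves 'y' once.
assert locally_unique([("x", "c", "y"), ("y", "c", "z")])
```

The two assertions reproduce the forbidden and permitted situations described above.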

Cardinal quivers will be our focus, so I’ll refer to them as **quivers** to avoid cumbersome language. If we need to refer to the *classical* notion of a quiver as a directed multigraph, we’ll use the term **unlabelled quiver**.

Since cardinals will almost always label multiple edges in a quiver, we will use color to represent the edges labeled by each cardinal, and choose cardinal letters e.g. \(\rform{\card{r}}\), \(\gform{\card{g}}\), \(\bform{\card{b}}\) to reflect these colors:

Here are more examples – since the arrowhead color indicates the name of the cardinal, we'll drop the legends. The cardinals label multiple edges, but still obey local uniqueness:

Note that the right-most example has some of the intuitive aspects of an actual discrete geometry: the cardinal \(\rform{\card{r}}\) can be taken (meaning a move in the direction \(\rform{\card{r}}\) can be made) for any of the states \(\vert{x},\vert{y},\vert{z}\). Taking \(\rform{\card{r}}\) repeatedly simply cycles through these states: a discrete analog of a circle.
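
Local uniqueness means a (vertex, cardinal) pair determines at most one edge, so "taking" a cardinal is a well-defined lookup. A small Python sketch of this circle-like example (names are mine, not from the text):

```python
# Sketch: the 3-cycle quiver x -r-> y -r-> z -r-> x.
# Local uniqueness guarantees the (vertex, cardinal) lookup is unambiguous.
edges = [("x", "r", "y"), ("y", "r", "z"), ("z", "r", "x")]
step = {(tail, card): head for tail, card, head in edges}

v = "x"
orbit = [v]
for _ in range(3):
    v = step[(v, "r")]   # take cardinal r from the current state
    orbit.append(v)

print(orbit)  # ['x', 'y', 'z', 'x'] - back where we started
```

Taking \(\rform{\card{r}}\) three times returns the agent to its starting state, which is exactly the discrete-circle behavior described above.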

For simplicity, we can depict quivers that contain multiple edges between the same vertices by combining the edges, using multiple arrowheads instead. Using this convention the first two quivers above look as follows:

## Enumeration

Since finite, connected quivers will play a major role in this series, it is interesting to enumerate all the finite connected quivers of a given size.

One way to organize this enumeration is in terms of what we’ll call **quiver skeletons**, which are undirected simple graphs (graphs without multiple edges between vertices) that serve as *templates* for quivers. We can enumerate these skeletons by fixing the number of vertices, then taking all possible sets of undirected edges between these vertices that include each vertex at least once and still produce a connected graph. We count these quiver skeletons up to graph isomorphism, in other words, the labeling of the vertices of these graphs is not significant.

Here is a gallery that shows the quiver skeletons for each vertex count:

The number of skeletons for \(n\) vertices is the sequence \(\{1,3,10,50,354,3883,...\}\), which is not yet in the Online Encyclopedia of Integer Sequences (OEIS).
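
This enumeration can be reproduced by brute force for small vertex counts. The following Python sketch (all names are mine) treats a skeleton as a set of undirected edges drawn from the vertex pairs and self-loops, keeps the edge sets that touch every vertex and form a connected graph, and canonicalizes each under vertex relabeling:

```python
from itertools import combinations, permutations

def count_skeletons(n):
    """Count quiver skeletons on n vertices: undirected graphs with
    self-loops but no multiple edges, connected, every vertex touched
    by at least one edge, counted up to graph isomorphism."""
    pairs = list(combinations(range(n), 2))
    loops = [(v, v) for v in range(n)]
    candidates = pairs + loops
    seen = set()
    for r in range(1, len(candidates) + 1):
        for edge_set in combinations(candidates, r):
            if covers(n, edge_set) and connected(n, edge_set):
                seen.add(canonical(n, edge_set))
    return len(seen)

def covers(n, edge_set):
    # every vertex must appear in at least one edge
    return {v for e in edge_set for v in e} == set(range(n))

def connected(n, edge_set):
    # self-loops never join distinct vertices, so walk only proper edges
    adj = {v: set() for v in range(n)}
    for a, b in edge_set:
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == n

def canonical(n, edge_set):
    # minimum relabeling over all vertex permutations
    return min(
        tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in edge_set))
        for p in permutations(range(n))
    )

print([count_skeletons(n) for n in range(1, 5)])  # [1, 3, 10, 50]
```

This recovers the first four terms of the sequence; the later terms require a smarter canonicalization than trying all vertex permutations.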

Once we have chosen a particular skeleton, we can then go about attaching **cardinal structure** to it to turn it into a quiver (which recall means a *cardinal* quiver). Typically there are multiple possible choices, although there is an obvious symmetry we must take into account: we can perform any *global renaming* of cardinals and we’ll still have the same essential cardinal quiver. More generally, the action of the relevant symmetry group is the simultaneous relabeling of vertices, edges, and cardinals:

Here, the vertices have been rewritten \(\list{\rewrite{1}{2},\rewrite{2}{3},\rewrite{3}{1}}\) and the cardinals have been rewritten \(\list{\rewrite{\rform{\card{r}}}{\bform{\card{b}}},\rewrite{\bform{\card{b}}}{\rform{\inverted{\card{r}}}}}\) – note the underbar on \(\rform{\inverted{\card{r}}}\), which indicates we flip the cardinal direction when rewriting \(\bform{\card{b}}\) to \(\rform{\card{r}}\). We say two such quivers are **strongly isomorphic**, or just **isomorphic**. We distinguish this from **weakly isomorphic** quivers, which are isomorphic as directed graphs but whose cardinal structure is *not* necessarily preserved – in other words, no rewriting of cardinals need relate the cardinal structure of the first quiver to that of the second.

Let’s look at some examples of the quivers associated with skeletons. We’ll only visualize the cases involving 3 or fewer cardinals, since the number of quivers rapidly increases as a function of the number of cardinals.

Taking the skeleton on the first row, which is a one-vertex graph with a self-loop, there is exactly one possible quiver for a given number of cardinals:

These quivers are called **bouquet quivers**.

Taking the 2nd skeleton on the 2nd row as another example, there are 0 quivers with 1 cardinal, 3 quivers with 2 cardinals, and 9 quivers with 3 cardinals. (The ‘flat’ arrowheads below indicate that a cardinal is present in both orientations on that edge).

The number of quivers for this skeleton as a function of the number of cardinals is \(\{0,3,9,19,34,55,83,119,164,219,...\}\). The empirically-found closed form for this sequence appears to be \(6^{-1}n^3+2^{-1}n^2+3^{-1}n-1\).

For the 1st skeleton on the 2nd row, there are 2 quivers with 1 cardinal, 4 quivers with 2 cardinals, and 6 quivers with 3 cardinals:

The number of quivers for this skeleton as a function of the number of cardinals is \(\{2,4,6,9,12,16,20,25,30,36,42,49,...\}\), with empirically-found closed form \(4^{-1}n^2+n+8^{-1}(7+(-1)^n)\).
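
The two empirical closed forms above are easy to check against their listed terms. A quick Python sketch (function names are mine) using exact rational arithmetic:

```python
from fractions import Fraction as F

# The closed form quoted for the previous skeleton's sequence:
# 6^-1 n^3 + 2^-1 n^2 + 3^-1 n - 1
def quiver_count_a(n):
    return F(n**3, 6) + F(n**2, 2) + F(n, 3) - 1

# The closed form quoted for this skeleton's sequence:
# 4^-1 n^2 + n + 8^-1 (7 + (-1)^n)
def quiver_count_b(n):
    return F(n**2, 4) + n + F(7 + (-1)**n, 8)

assert [quiver_count_a(n) for n in range(1, 11)] == \
       [0, 3, 9, 19, 34, 55, 83, 119, 164, 219]
assert [quiver_count_b(n) for n in range(1, 13)] == \
       [2, 4, 6, 9, 12, 16, 20, 25, 30, 36, 42, 49]
```

Both formulas match every listed term exactly (the `Fraction` values compare equal to the integer terms), though of course this is only empirical evidence, not a proof.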

For the 1st skeleton on the 3rd row, there is only 1 quiver with 1 cardinal, 16 quivers with 2 cardinals, and 64 quivers with 3 cardinals:

The number of quivers for this skeleton as a function of the number of cardinals is \(\{1,16,64,211,551,1276,2672,...\}\). No simple closed form is apparent, though one probably exists.

The following table summarizes the number of quivers that can be obtained from the skeletons with fewer than 4 vertices. The columns indicate skeletons, and the rows numbers of cardinals, and the entries indicate the number of quivers:

Only the first 3 columns of this table are in the OEIS.

The numbers of 5-cardinal quivers for the final 3 skeletons are too expensive to compute in a straightforward way; these entries are marked with a ? in the table.

Moving on to skeletons with 4 vertices, of which there are 50, we have this table:

Finally, by summing over the tables above, we can obtain a bird’s eye view that captures how many quivers there are as a function of number of vertices \(n\) and cardinals \(c\):

Clearly, beyond \(\sym{n} = 1\), even for modest values of \(\sym{c}\), we find our armory overflowing with quivers!

# Transitive quivers

In this short section we introduce some common families of quivers that are simple and ubiquitous in quiver geometry. Their most important feature is that they obey a property called **vertex transitivity**, though some fine print applies to this statement that we'll get into. We will name these families, visualize them, and give some simple properties. In later sections we will see that some of them are the simplest examples of **lattice quivers**, generated in a particular way by smaller quivers via **linear representations**. We'll also see how some of them can be formed from the **quiver products** of others.

These families are parameterized by positive integers (and infinity).

## Lattice quiver families

The first set of families, the lattice quiver families, are parameterized by a single size \(1 \le \sym{n} \le \infty\), which in some sense measures their "linear size": the number of vertices visited by the longest possible path that consists of a single repeated cardinal. For \(\sym{n} = \infty\) we pass to the limit of an infinitely large quiver; these are the quivers that obey vertex transitivity in the proper sense.

name | symbol | cardinals | dimension |
---|---|---|---|
line quiver | \(\subSize{\lineQuiver }{\sym{n}}\) | 1 | 1 |
cycle quiver | \(\subSize{\cycleQuiver }{\sym{n}}\) | 1 | 1 |
square quiver | \(\subSize{\squareQuiver }{\sym{n}}\) | 2 | 2 |
triangular quiver | \(\subSize{\triangularQuiver }{\sym{n}}\) | 3 | 2 |
cubic quiver | \(\subSize{\cubicQuiver }{\sym{n}}\) | 3 | 3 |

We will elaborate on the meaning of "dimension" shortly.

Below we show an example quiver from each family corresponding to \(\sym{n} = 5\), except for the cubic quiver, where we show \(\sym{n} = 3\) for size reasons:

### Terminology

We'll use the **line quiver** to illustrate these conventions, but the same conventions apply to the other quivers:

When \(\sym{n}\) is fixed at a particular finite value, e.g. \(\sym{n} = 5\), we’ll write \(\subSize{\lineQuiver }{5}\), and refer to this in words as the **5-line quiver**. When \(\sym{n}\) is fixed to be merely finite, we’ll refer to e.g. *a* **finite line quiver**. When \(\sym{n}\) is fixed at infinity, we’ll write \(\subSize{\lineQuiver }{ \infty }\) and refer to this in words as the **infinite line quiver**, or just *the* **line quiver**.

As a special case, we define \(\subSize{\cycleQuiver }{ \infty } = \subSize{\lineQuiver }{ \infty }\), since we cannot form a non-empty closed finite path in \(\subSize{\cycleQuiver }{ \infty }\).

### Infinite cases

For the line, square, triangular, and cubic quivers, we obtain transitive quivers only when \(\sym{n} = \infty\), since these are the only quivers for which there is no periphery. Here we show finite portions of these infinite quivers, with the boundary shown faded to indicate it is only a finite view of an infinite object:

### Size

The smallest meaningful sizes of the lattice families are shown below.

At size 1, the line, square, and triangular quivers collapse to a single vertex with no edges, while the cycle quiver becomes the **1-bouquet quiver**.

Here we show a sequence of increasing size for each family:

## Special quiver families

The special families are parameterized by the number of *cardinals*, \(\sym{k}\). The three special families are:

name | symbol | cardinals | dimension |
---|---|---|---|
bouquet quiver | \(\bouquetQuiver{\sym{k}}\) | \(\sym{k}\) | \(0\) |
grid quiver | \(\subSize{\gridQuiver{\sym{k}}}{\sym{n}}\) | \(\sym{k}\) | \(\sym{k}\) |
tree quiver | \(\subSize{\treeQuiver{\sym{k}}}{\sym{n}}\) | \(\sym{k}\) | \(\infty\) |

The tree quiver is only defined for *odd* \(\sym{n}\).

#### Bouquet quiver

We visualize the bouquet quiver \(\bouquetQuiver{\sym{k}}\) for \(\elemOf{\sym{k}}{\oneTo{4}}\):

#### Grid quiver

The grid quiver \(\subSize{\gridQuiver{\sym{k}}}{\sym{n}}\) generalizes the sequence \(\subSize{\lineQuiver }{\sym{n}},\subSize{\squareQuiver }{\sym{n}},\subSize{\cubicQuiver }{\sym{n}},\ellipsis\) to an arbitrary number of cardinals \(\sym{k}\).

We visualize the grid quiver \(\subSize{\gridQuiver{\sym{k}}}{\sym{n}}\) for \(\elemOf{\sym{k}}{\oneTo{3}}\), fixing \(\sym{n} = 2\):

#### Tree quiver

The tree quiver \(\subSize{\treeQuiver{\sym{k}}}{\sym{n}}\) is the "freest possible quiver" on \(\sym{k}\) cardinals, in the sense that every possible path one can form from these cardinals will reach a distinct vertex.

We fix \(\sym{n} = 5\), visualizing the tree quiver \(\subSize{\treeQuiver{\sym{k}}}{5}\) for \(\elemOf{\sym{k}}{\oneTo{4}}\):

We now fix \(\sym{k} = 2\), visualizing \(\subSize{\treeQuiver{2}}{\sym{n}}\) for \(\elemOf{\sym{n}}{\list{1,3,5,7,9}}\):

Lastly, we fix \(\sym{k} = 3\), visualizing \(\subSize{\treeQuiver{3}}{\sym{n}}\) for \(\elemOf{\sym{n}}{\list{1,3,5,7}}\):

## Cardinals

When we wish to be explicit about the cardinals present in these quivers, we will use square bracket notation, e.g. \(\bindCards{\subSize{\triangularQuiver }{\sym{n}}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\), to indicate the names of the cardinals present in the quiver. We'll refer to this in English as "the triangular quiver on \(\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}\)".

The one-dimensional quivers, the line and cycle quivers, both have only one cardinal:

The square quiver has two cardinals:

The triangular and cubic quivers have three cardinals:

## Per-cardinal dimensions

We can form a "rectangular" square quiver by specifying different dimensions for the different cardinals. We'll use the notation \(\bindCardSize{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \sym{w},\bform{\card{b}}\compactBindingRuleSymbol \sym{h}}\) in this case. Here is a rectangular square quiver \(\bindCardSize{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol 5,\bform{\card{b}}\compactBindingRuleSymbol 3}\):

When the symbols of the cardinals are not relevant, we will just refer to this quiver as \(\bindCardSize{\squareQuiver }{\sym{w},\sym{h}}\).

## Transitivity

A quiver is **transitive** if its vertices *all look the same*, or equivalently, if *no vertex is special*. We will have to develop the path groupoid to define this more precisely, but for now we can understand it as implying that all vertices have the same degree (number of incident edges) and the same cardinals available to them.

This condition is *not* sufficient to ensure a quiver is transitive, but it is necessary. Again, we'll have to wait for a more precise definition.

The quivers we consider here are all transitive in this sense, but typically only when we consider their infinite versions. \(\subSize{\lineQuiver }{3}\) for example, has a "middle" vertex and two "end vertices": all have *different* cardinals available to them. \(\subSize{\lineQuiver }{ \infty }\), in contrast, has no special vertices at all. The only exception to this pattern is \(\subSize{\cycleQuiver }{\sym{n}}\), which is transitive for any \(\sym{n}\).
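This necessary condition is easy to check mechanically. Below is a minimal sketch, assuming a quiver is encoded as a list of edges `(tail, head, cardinal)`; the function name and encoding are ours. It distinguishes the 3-line quiver (whose end and middle vertices see different cardinals) from the 3-cycle quiver (whose vertices all look alike):

```python
# Necessary condition for transitivity: every vertex sees the same
# set of (possibly inverted) incident cardinals.
from collections import defaultdict

def incident_cardinals(edges, vertices):
    seen = defaultdict(set)
    for t, h, c in edges:       # edge t -c-> h
        seen[t].add((c, 1))     # c leaves t in the forward direction
        seen[h].add((c, -1))    # c-inverse leaves h
    return {v: frozenset(seen[v]) for v in vertices}

line3 = [(1, 2, "c"), (2, 3, "c")]
cycle3 = [(1, 2, "c"), (2, 3, "c"), (3, 1, "c")]

# line: ends and middle all differ; cycle: every vertex looks the same
assert len(set(incident_cardinals(line3, [1, 2, 3]).values())) > 1
assert len(set(incident_cardinals(cycle3, [1, 2, 3]).values())) == 1
```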

As an example of an *intransitive* lattice, consider the hexagonal quiver (which we will define formally in a later section):

Although all vertices have degree 3, the vertices consist of two types: the "inward" vertices with incident cardinals \(\list{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\), and the "outward" vertices with incident cardinals \(\list{\rform{\inverted{\card{r}}},\gform{\inverted{\card{g}}},\bform{\inverted{\card{b}}}}\). We will examine these more general quivers in the section [[[Intransitive lattices]]].

## Vertex and edge counts

The number of vertices and edges as a function of the size is shown in a table below:

quiver | # vertices | # edges |
---|---|---|
\(\bouquetQuiver{\sym{k}}\) | \(1\) | \(\sym{k}\) |
\(\subSize{\lineQuiver }{\sym{n}}\) | \(\sym{n}\) | \(\sym{n} - 1\) |
\(\subSize{\cycleQuiver }{\sym{n}}\) | \(\sym{n}\) | \(\sym{n}\) |
\(\subSize{\squareQuiver }{\sym{n}}\) | \(\power{\sym{n}}{2}\) | \(2 \, \sym{n} \, \paren{\sym{n} - 1}\) |
\(\subSize{\triangularQuiver }{\sym{n}}\) | \(\frac{1}{2} \, \sym{n} \, \paren{\sym{n} + 1}\) | \(\frac{3}{2} \, \sym{n} \, \paren{\sym{n} - 1}\) |
\(\subSize{\cubicQuiver }{\sym{n}}\) | \(\power{\sym{n}}{3}\) | \(3 \, \power{\sym{n}}{2} \, \paren{\sym{n} - 1}\) |
\(\subSize{\gridQuiver{\sym{k}}}{\sym{n}}\) | \(\power{\sym{n}}{\sym{k}}\) | \(\sym{k} \, \power{\sym{n}}{\sym{k} - 1} \, \paren{\sym{n} - 1}\) |
\(\subSize{\treeQuiver{\sym{k}}}{2 \, \sym{n} + 1}\) | \(\frac{\sym{k} \, \power{\paren{2 \, \sym{k} - 1}}{\sym{n}} - 1}{\sym{k} - 1}\) | \(\frac{\sym{k} \, \paren{\power{\paren{2 \, \sym{k} - 1}}{\sym{n}} - 1}}{\sym{k} - 1}\) |
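The grid and tree rows of this table can be sanity-checked programmatically. Here is a sketch (the function names and encoding are ours): the grid with \(k = 2\) must reproduce the square quiver counts, and the tree counts must satisfy the characteristic property of a tree, edges = vertices − 1:

```python
# Closed-form vertex/edge counts from the table above.
def grid_counts(k, n):
    # grid quiver on k cardinals with side n
    return n**k, k * n**(k - 1) * (n - 1)

def tree_counts(k, n):
    # tree quiver on k cardinals, size parameter 2n + 1
    # (a ball of radius n in the 2k-regular tree)
    v = (k * (2 * k - 1)**n - 1) // (k - 1)
    e = k * ((2 * k - 1)**n - 1) // (k - 1)
    return v, e

# grid with k = 2, n = 3 is the 3-square quiver: 9 vertices, 12 edges
assert grid_counts(2, 3) == (9, 12)
# a tree is connected and acyclic: edges = vertices - 1
for k in range(2, 5):
    for n in range(1, 5):
        v, e = tree_counts(k, n)
        assert e == v - 1
```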

### Dimension

To define the dimension of a particular infinite family of quivers, we can look at the number of vertices that can be reached from any origin vertex as a function of the number of edges we are allowed to traverse.

Here we illustrate this idea for the line, square, triangular, and cubic quivers:

The number of vertices in such a discrete ball of radius \(\sym{r}\) is given by the following table:

quiver | # ball vertices | leading term | dimension |
---|---|---|---|
\(\bouquetQuiver{\sym{k}}\) | \(1\) | \(1\) | \(0\) |
\(\subSize{\lineQuiver }{ \infty }\) | \(\poly{2 \, \sym{r} - 3}\) | \(\sym{r}\) | \(1\) |
\(\subSize{\cycleQuiver }{\sym{n}}\) | \(\min(\poly{2 \, \sym{r} - 3},\sym{n})\) | \(\sym{r}\) | \(1\) |
\(\subSize{\squareQuiver }{ \infty }\) | \(\poly{2 \, \sym{r} \, \paren{\sym{r} \, -1} + 1}\) | \(\power{\sym{r}}{2}\) | \(2\) |
\(\subSize{\triangularQuiver }{ \infty }\) | \(\poly{3 \, \sym{r} \, \paren{\sym{r} \, -1} + 1}\) | \(\power{\sym{r}}{2}\) | \(2\) |
\(\subSize{\cubicQuiver }{ \infty }\) | \(\poly{2 / 3 \, \sym{r} \, \paren{\sym{r} \, \paren{2 \, \sym{r} - 3} + 4} - 1}\) | \(\power{\sym{r}}{3}\) | \(3\) |
\(\treeQuiver{\sym{k}}\) | \(\frac{\sym{k} \, \power{\paren{2 \, \sym{k} - 1}}{\sym{r} - 1} - 1}{\sym{k} - 1}\) | \(\notApplicable\) | \(\infty\) |

We now see the justification for the dimensions we listed before: a transitive quiver has dimension \(\sym{d}\) if the number of vertices in a radius-\(\sym{r}\) ball scales as \(\power{\sym{r}}{\sym{d}}\). The tree quiver \(\treeQuiver{\sym{k}}\) involves a term \(\power{\paren{2 \, \sym{k} - 1}}{\sym{r} - 1}\), which grows exponentially: its Taylor expansion has non-zero terms of all degrees in \(\sym{r}\), and we say it has infinite dimension.
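The square and triangular ball counts can be reproduced by a breadth-first search on the corresponding infinite lattices. Below is a sketch, assuming the table's \(\sym{r}\) is offset by one from graph distance (so that \(\sym{r} = 1\) is the single-vertex ball); the function name and lattice encodings are ours:

```python
# BFS ball sizes on 2D lattices, compared to the table's polynomials.
from collections import deque

def ball_size(moves, radius):
    """Number of lattice points within `radius` steps of the origin."""
    seen = {(0, 0)}
    frontier = deque([(0, 0)])
    for _ in range(radius):
        nxt = deque()
        while frontier:
            x, y = frontier.popleft()
            for dx, dy in moves:
                p = (x + dx, y + dy)
                if p not in seen:
                    seen.add(p)
                    nxt.append(p)
        frontier = nxt
    return len(seen)

# square quiver: two cardinals and their inverses
square = [(1, 0), (-1, 0), (0, 1), (0, -1)]
# triangular quiver: three cardinals (one a combination of the others)
tri = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

for m in range(6):
    r = m + 1  # table convention: r = graph distance + 1
    assert ball_size(square, m) == 2 * r * (r - 1) + 1
    assert ball_size(tri, m) == 3 * r * (r - 1) + 1
```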

## Enriched cardinal structure

We will at times wish to "enrich" the cardinal structure of the **line** and **cycle** quivers.

### Serial cardinals

The first kind of enrichment is **serial**: instead of employing a single cardinal \(\card{c}\), we employ a *series* of cardinals \(\card{c}_1,\card{c}_2,\ellipsis ,\card{c}_{\sym{m}}\) that is taken in alternation. We'll write this as \(\card{\card{c}_1}\serialCardSymbol \card{\card{c}_2}\serialCardSymbol \ellipsis \serialCardSymbol \card{\card{c}_{\sym{m}}}\).

Let's take for example \(\bindCards{\subSize{\lineQuiver }{\sym{n}}}{\rform{\card{r}}\serialCardSymbol \bform{\card{b}}}\), which we'll refer to in English as "the 2-line quiver on \(\rform{\card{r}}\,\)*then* \(\bform{\card{b}}\)".

Here we have a series of 3 cardinals:

The same construction works as you would expect for the cycle quiver:

For a series of 3 cardinals:
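The alternating assignment of serial cardinals can be sketched as follows, assuming a line or cycle quiver is given as a list of edges `(tail, head, cardinal)`; the function name is ours:

```python
# Serial enrichment: assign cardinals to consecutive edges in alternation.
def serial_edges(n, cardinals, cyclic=False):
    m = len(cardinals)
    edges = [(i, i + 1, cardinals[i % m]) for i in range(n - 1)]
    if cyclic:
        # close the cycle; consistent alternation needs m to divide n
        edges.append((n - 1, 0, cardinals[(n - 1) % m]))
    return edges

# the 6-cycle on r-then-b-then-g: each cardinal appears twice
edges = serial_edges(6, ["r", "b", "g"], cyclic=True)
assert [c for _, _, c in edges] == ["r", "b", "g", "r", "b", "g"]
```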

### Parallel cardinals

The second kind of enrichment is **parallel**: in place of a single cardinal \(\card{c}\) we use a set of cardinals \(\card{c}_1,\card{c}_2,\ellipsis ,\card{c}_{\sym{m}}\), all present on the same edge. We write this as \(\card{\card{c}_1}\parallelCardSymbol \card{\card{c}_2}\parallelCardSymbol \ellipsis \parallelCardSymbol \card{\card{c}_{\sym{m}}}\).

Here we show \(\bindCards{\subSize{\lineQuiver }{\sym{n}}}{\rform{\card{r}}\parallelCardSymbol \bform{\card{b}}}\), which we'll refer to in English as "the 2-line quiver on \(\rform{\card{r}}\parallelCardSymbol \bform{\card{b}}\)".

We can also use the multi-arrowhead convention, in which case e.g. \(\bindCards{\subSize{\lineQuiver }{3}}{\rform{\card{r}}\parallelCardSymbol \bform{\card{b}}}\) looks like:

Again, an important application is to put a cardinal in parallel with its inverse:

However, this violates the local uniqueness property for \(\sym{n}>2\):

We can however *combine* the parallel and serial enrichments to sidestep this limitation. There are two obvious ways to do this:

The situation with the cycle quiver is similar. However, we cannot form \(\bindCards{\subSize{\cycleQuiver }{\sym{n}}}{\rform{\card{r}}\parallelCardSymbol \rform{\inverted{\card{r}}}}\) for *any* \(\sym{n}\).

We can however form \(\bindCards{\subSize{\cycleQuiver }{1}}{\rform{\card{r}}},\bindCards{\subSize{\cycleQuiver }{1}}{\rform{\card{r}}\parallelCardSymbol \gform{\card{g}}},\bindCards{\subSize{\cycleQuiver }{1}}{\rform{\card{r}}\parallelCardSymbol \gform{\card{g}}\parallelCardSymbol \bform{\card{b}}},\ellipsis\) etc., obtaining the **bouquet quivers**:

# Path groupoids

## Path words

The fundamental object of study in quiver geometry is the **path**, which is what it sounds like: a *journey* in the quiver, starting at some particular initial vertex, called the **tail** vertex, and ending at some final vertex, called the **head** vertex.

For concrete examples, we'll use the following quiver:

We'll write paths using the notation \(\paren{\pathWord{\vert{tail}}{\word{\card{c}}{\card{a}}{\card{r}}{\card{d}}{\card{i}}{\card{n}}{\card{a}}{\card{l}}{\card{s}}}{\vert{head}}}\), where \(\vert{tail}\) and \(\vert{head}\) are vertices and \(\word{\card{c}}{\card{a}}{\card{r}}{\card{d}}{\card{i}}{\card{n}}{\card{a}}{\card{l}}{\card{s}}\) is a **path word**: a sequence of cardinals. The parentheses are optional and are added for clarity. Here are some examples to make the idea clear:

Notice in the second and fourth examples: when we traverse an edge in the opposite direction to its associated cardinal e.g. \(\rform{\card{r}}\), we record this cardinal in the path word **inverted**, written with an underbar \(\rform{\inverted{\card{r}}}\).

Our priority will be to describe how to *compose* paths, and how this behavior yields a path groupoid.

## Path operations

### Path composition

There is a natural operation we can perform to combine two paths \(\path{P}\) and \(\path{R}\): we can compose them "head-to-tail".

We'll call this **path composition**, which we'll write \(\pathCompose{\path{P}}{\path{R}}\) and pronounce "\(\path{P}\) composed with \(\path{R}\)". It behaves word-algebraically as:

We'll now perform the composition \(\pathCompose{\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}}{\vert{y}}}}{\paren{\pathWord{\vert{y}}{\word{\gform{\card{g}}}}{\vert{z}}}}\) to give \(\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}{\gform{\card{g}}}}{\vert{z}}}\):

As an example of a multiplication that is *not* defined, consider \(\pathCompose{\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}}{\vert{y}}}}{\paren{\pathWord{\vert{z}}{\word{\bform{\card{b}}}}{\vert{x}}}}\). The first path ends at a different vertex from where the second path starts (\(\vert{y} \neq \vert{z}\)), so we cannot form a composite path.

We indicate this with the **null result**, written \(\nullPath\).

We can however compose these paths in the reverse order:

This should make it clear, if it wasn’t already, that path composition is not a commutative operation.
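These composition rules are easy to state in code. Here is a minimal sketch, assuming a path is encoded as a `(tail, word, head)` triple; the function name and encoding are ours:

```python
# Path composition as a partial operation: defined only head-to-tail.
def compose(p, q):
    t1, w1, h1 = p
    t2, w2, h2 = q
    if h1 != t2:
        return None          # the "null result"
    return (t1, w1 + w2, h2)

p = ("x", ["r"], "y")        # x goes via r to y
q = ("z", ["b"], "x")        # z goes via b to x

assert compose(p, q) is None                        # y != z: undefined
assert compose(q, p) == ("z", ["b", "r"], "y")      # reverse order works
```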

### Inversion

We define the *inverse* of a path \(\path{P}\), written \(\inverted{\path{P}}\) or \(\inverse{\path{P}}\), to be the path followed in the *opposite* direction.

Notice that the cardinals of the path word are simultaneously reversed and inverted.

### Empty paths

It is important to emphasize that **empty paths** serve a crucial role, since they behave like **identity elements** that leave paths unchanged under composition.

For example, we can left-multiply any path having tail vertex \(\vert{x}\) with the empty path \(\identityElement{x} = \paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\).

Similarly we can right-multiply any path with head vertex \(\vert{y}\) with the empty path \(\identityElement{y}\):

Loosely speaking, the **inverse** of an element is something that multiplies with it to produce an identity element. It should be clear that the inverse of a path, when composed with it on the left or right, yields the empty path on its tail or head respectively.

Algebraically, \(\pathCompose{\path{P}}{\inverse{\path{P}}} = \identityElement{\vert{t}}\), \(\pathCompose{\inverse{\path{P}}}{\path{P}} = \identityElement{\vert{h}}\), where \(\identityElement{\vert{h}}\) and \(\identityElement{\vert{t}}\) are the empty paths at the tail vertex \(\vert{t}\) and head vertex \(\vert{h}\) of \(\path{P}\):
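These identities can be checked on a toy encoding: paths as `(tail, word, head)` triples, cardinals as `(name, sign)` pairs with sign −1 marking inversion, and composition cancelling backtracking as described above. The names are ours; this is a sketch, not the site's implementation:

```python
def invert(path):
    tail, word, head = path
    # reverse the word and invert each cardinal (sign +1 <-> -1)
    return (head, [(c, -s) for c, s in reversed(word)], tail)

def reduce_word(word):
    out = []
    for c in word:
        if out and out[-1] == (c[0], -c[1]):
            out.pop()            # cancel an adjacent inverse pair
        else:
            out.append(c)
    return out

def compose(p, q):
    if p[2] != q[0]:
        return None              # null result
    return (p[0], reduce_word(p[1] + q[1]), q[2])

P = ("x", [("r", 1), ("g", 1)], "z")
assert compose(P, invert(P)) == ("x", [], "x")   # empty path at the tail
assert compose(invert(P), P) == ("z", [], "z")   # empty path at the head
```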

### Cancellation

We’ve seen above that we cancel *backtracking* paths when composing:

Here's a more complex example:

When operating at the level of path words, we performed the **word simplification** \(\concat{\rform{\card{r}} \rform{\inverted{\card{r}}}} = 1\) in the first case and \(\concat{\rform{\card{r}} \rform{\card{r}} \rform{\inverted{\card{r}}}} = \rform{\card{r}}\) in the second case. The symbol \("1"\) here refers to the empty word, which is otherwise hard to indicate.

There is in fact only one kind of simplification we can do, which is to cancel neighboring inverted cardinals: \(\rewrite{\concat{\wordSymbol{L} \inverted{\card{c}} \card{c} \wordSymbol{R}}}{\concat{\wordSymbol{L} \wordSymbol{R}}}\) and \(\rewrite{\concat{\wordSymbol{L} \card{c} \inverted{\card{c}} \wordSymbol{R}}}{\concat{\wordSymbol{L} \wordSymbol{R}}}\), where \(\wordSymbol{L}\) and \(\wordSymbol{R}\) stand for any **sub-words** consisting of zero or more cardinals.

But while the path word \(\concat{\card{c} \inverted{\card{c}}}\) represents a *trivial loop* in some sense, it is crucial we do not cancel sub words corresponding to *non-trivial loops* that involve more than one cardinal:
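The simplification rule lends itself to a simple stack-based sketch, assuming cardinals are encoded as `(name, sign)` pairs with sign −1 marking inversion (the encoding and function name are ours). Note that a word tracing a non-trivial loop is, correctly, left untouched:

```python
# Cancel adjacent inverse pairs (the only permitted word simplification).
def reduce_word(word):
    stack = []
    for card in word:
        name, sign = card
        if stack and stack[-1] == (name, -sign):
            stack.pop()          # cancel c c-inverse or c-inverse c
        else:
            stack.append(card)
    return stack

r, R = ("r", 1), ("r", -1)
g, G = ("g", 1), ("g", -1)

assert reduce_word([r, R]) == []              # r r-inv = empty word
assert reduce_word([r, r, R]) == [r]          # r r r-inv = r
assert reduce_word([r, g, G, R]) == []        # cancellations can cascade
assert reduce_word([r, g, R]) == [r, g, R]    # non-trivial loop: untouched
```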

## Path groupoid

### Groups and groupoids

Let's recall the elementary notions of a **group** and a **groupoid.**

A **group** is a structure \(\tuple{\group{G},\Gmult }\) consisting of a set of elements \(\group{G}\) and a multiplication \(\functionSignature{\function{\Gmult }}{\tuple{\group{G},\group{G}}}{\group{G}}\). The multiplication is associative, so \(\groupElement{f}\Gmult \paren{\groupElement{g}\Gmult \groupElement{h}} = \paren{\groupElement{f}\Gmult \groupElement{g}}\Gmult \groupElement{h}\) for all \(\elemOf{\groupElement{f},\groupElement{g},\groupElement{h}}{\group{G}}\). Each element \(\groupElement{g}\) also has an inverse \(\groupInverse{\groupElement{g}}\) satisfying \(\groupInverse{\groupElement{g}}\Gmult \groupElement{g} = \groupElement{g}\Gmult \groupInverse{\groupElement{g}} = \groupElement{e}\), where \(\groupElement{e}\Gmult \groupElement{g} = \groupElement{g}\Gmult \groupElement{e} = \groupElement{g}\) defines the **identity** or **unit** element \(\elemOf{\groupElement{e}}{\group{G}}\).

A **groupoid** is, roughly speaking, a group in which the multiplication \(\Gmult\) becomes a partial function, and so need not be defined for all pairs \(\elemOf{\groupoidElement{g},\groupoidElement{h}}{\groupoid{G}}\). Importantly, a groupoid does not need to have a *unique* identity: in general, there can be multiple identity elements that satisfy the required properties for different subsets of \(\groupoid{G}\). We'll call these the **units** of the groupoid.

### Path groupoid

It is easy to check that the set of finite paths \(\pathList(\quiver{Q})\) of a cardinal quiver \(\quiver{Q}\) and the operations \(\pathComposeSymbol{}\) and \(\pathReverse{□}\), together define a **path groupoid**, written \(\pathGroupoid{\quiver{Q}}\), that describes how paths compose. The **units** of the path groupoid are the empty paths on the vertices of the quiver, and the **inverse** of a path is the reversal of that path. As you'd expect, when two paths are not head-to-tail and so cannot be composed, the groupoid multiplication \(\pathComposeSymbol{}\) between them is undefined.

# Word groups

In this short section we introduce simple objects that nevertheless play a vital role in understanding paths on quivers. These are **word groups**, which describe *possible* path words, divorced from any particular path in a quiver.

#### Notation

We write the **word group** on a set of cardinals as \(\bindCards{\wordGroupSymbol }{\card{\card{c}_1},\card{\card{c}_2},\ellipsis ,\card{\card{c}_{\sym{n}}}}\).

The word group of a quiver \(\quiver{Q}\), written \(\wordGroup{\quiver{Q}}\), is just the word group on the cardinals of that quiver.

If we wish only to specify how *many* cardinals are present, we will write this as \(\wordGroup{\quiver{\sym{n}}}\).

If it is clear from context what cardinals we are talking about, we'll just write \(\wordGroupSymbol\).

#### Elements

A group element \(\elemOf{\groupElement{ \omega }}{\wordGroupSymbol }\) is a **word**, which is a finite sequence of (possibly inverted) cardinals. For example:

We reserve the symbol \(\card{1}\) to refer to the group identity, the **empty word** consisting of zero cardinals, which would otherwise be hard to indicate textually, since it would naturally be written as a blank space.

The number of words is obviously infinite if there is at least one cardinal.

#### Reduced form

The words are subject to the identity that we can rewrite any **subword** (any contiguous subsequence of cardinals) according to \(\concat{\card{c} \inverted{\card{c}}} = \concat{\inverted{\card{c}} \card{c}} = \card{1}\). Removing such adjacent inverses is called **reduction**.

We typically prefer to write these words in **reduced form**, so that the word \(\word{\gform{\card{g}}}{\rform{\card{r}}}{\rform{\inverted{\card{r}}}}{\bform{\card{b}}}\) is reduced to \(\word{\gform{\card{g}}}{\bform{\card{b}}}\).

#### Concatenation

The group multiplication of two words \(\elemOf{\groupElement{ \upsilon },\groupElement{ \omega }}{\wordGroupSymbol }\) is simply their **concatenation**, which we will write by putting words next to each other with a small gap: \(\concat{\groupElement{ \upsilon }\,\groupElement{ \omega }}\).

For the case \(\groupElement{ \upsilon } = \word{\gform{\card{g}}}{\bform{\card{b}}}{\rform{\ncard{r}}},\groupElement{ \omega } = \word{\rform{\card{r}}}\), this looks like \(\concat{\groupElement{ \upsilon }\,\groupElement{ \omega }} = \concat{\word{\gform{\card{g}}}{\bform{\card{b}}}{\rform{\ncard{r}}}\,\word{\rform{\card{r}}}}\), which reduces to \(\word{\gform{\card{g}}}{\bform{\card{b}}}\).

Notice that if there is more than one cardinal, the group is *not* Abelian, since \(\concat{\groupElement{ \upsilon }\,\groupElement{ \omega }} \neq \concat{\groupElement{ \omega }\,\groupElement{ \upsilon }}\) in general. The order of cardinals in a word matters!
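Concatenation-followed-by-reduction can be sketched directly, assuming cardinals are `(name, sign)` pairs and both operands are already reduced; the function name and encoding are ours:

```python
# Word group multiplication: concatenate, cancelling across the seam.
# Assumes u and w are already in reduced form.
def mul(u, w):
    out = list(u)
    for c in w:
        if out and out[-1] == (c[0], -c[1]):
            out.pop()            # cancellation at the boundary
        else:
            out.append(c)
    return out

r, g, b = ("r", 1), ("g", 1), ("b", 1)
R, G, B = ("r", -1), ("g", -1), ("b", -1)

assert mul([r], [R]) == []                           # r r-inv = 1
assert mul([r, b], [g]) == [r, b, g]
assert mul([r, r, b], [B, g, g]) == [r, r, g, g]
assert mul([r], [g]) != mul([g], [r])                # not Abelian
```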

Here we list some concatenations of words in \(\bindCards{\wordGroupSymbol }{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\):

\[ \begin{csarray}{rlcl}{abeeb} \word{\card{1}} & \word{\card{1}} & = & \word{\card{1}}\\ \word{\card{1}} & \word{\rform{\card{r}}} & = & \word{\rform{\card{r}}}\\ \word{\rform{\card{r}}} & \word{\rform{\ncard{r}}} & = & \word{\card{1}}\\ \word{\rform{\card{r}}}{\bform{\card{b}}} & \word{\gform{\card{g}}} & = & \word{\rform{\card{r}}}{\bform{\card{b}}}{\gform{\card{g}}}\\ \word{\rform{\card{r}}}{\rform{\card{r}}}{\bform{\card{b}}} & \word{\bform{\ncard{b}}}{\gform{\card{g}}}{\gform{\card{g}}} & = & \word{\rform{\card{r}}}{\rform{\card{r}}}{\gform{\card{g}}}{\gform{\card{g}}} \end{csarray} \]

#### Inverses

To invert a word we reverse its individual letters and invert them.

Here we list a small number of examples of words from \(\wordGroup{\quiver{Q}}\) for \(\cardinalList(\quiver{Q}) = \{\rform{\card{r}},\bform{\card{b}}\}\), side-by-side with their inverses:

\[ \begin{csarray}{rlrlrlrl}{aeieieiei} \groupElement{ \omega } & \groupInverse{\groupElement{ \omega }} & \groupElement{ \omega } & \groupInverse{\groupElement{ \omega }} & \groupElement{ \omega } & \groupInverse{\groupElement{ \omega }} & \groupElement{ \omega } & \groupInverse{\groupElement{ \omega }}\\ \word{\card{1}} & \word{\card{1}} & \word{\rform{\card{r}}} & \word{\rform{\ncard{r}}} & \word{\rform{\card{r}}}{\rform{\card{r}}} & \word{\rform{\ncard{r}}}{\rform{\ncard{r}}} & \word{\rform{\card{r}}}{\gform{\card{g}}}{\bform{\card{b}}} & \word{\bform{\ncard{b}}}{\gform{\ncard{g}}}{\rform{\ncard{r}}}\\ & & \word{\gform{\card{g}}} & \word{\gform{\ncard{g}}} & \word{\rform{\card{r}}}{\gform{\card{g}}} & \word{\gform{\ncard{g}}}{\rform{\ncard{r}}} & & \\ & & \word{\bform{\card{b}}} & \word{\bform{\ncard{b}}} & \word{\gform{\card{g}}}{\bform{\ncard{b}}} & \word{\bform{\card{b}}}{\gform{\ncard{g}}} & & \end{csarray} \]

#### Being free

Beyond the identity \(\concat{\card{c} \inverted{\card{c}}} = \concat{\inverted{\card{c}} \card{c}} = \card{1}\), which reflects in some sense the most generic property of a group, we do not impose any further relations (hence the term "free"). This implies that if two elements of \(\wordGroupSymbol\) "look different" in reduced form – that is, they contain a different sequence of cardinals – they *are* different elements of the group.

#### Relationship to paths

As the name suggests, the path word of a path in \(\quiver{Q}\) is an element of the word group \(\wordGroup{\quiver{Q}}\), and path composition (when defined) will yield a path whose word is the concatenation (the group operation of \(\wordGroupSymbol\)) of the path words:

\[ \wordOf(\pathCompose{\path{P_1}}{\path{P_2}}) = \concat{\wordOf(\path{P_1})\,\wordOf(\path{P_2})} \]

A choice of vertex *and* a word will uniquely identify a path in a quiver, if one exists. Due to the local uniqueness property, there cannot be *more* than one path starting at a given vertex that possesses a given word. However, there may be *zero* paths starting at a vertex with a given word.

It should be obvious that \(\functionSignature{\wordOf}{\pathGroupoid{\quiver{Q}}}{\wordGroup{\quiver{Q}}}\) is a groupoid homomorphism from the path groupoid to the word group (as \(\wordGroup{\quiver{Q}}\) is a group it is naturally also a groupoid).

# Path homomorphisms

## Introduction

In this section, we define a **path homomorphism** between two quivers \(\quiver{Q}\), \(\quiver{R}\). Path homomorphisms are the natural candidate for a notion of a "continuous map" between two quivers, although we will not concentrate on defining notions of continuity or topology just yet.

## Maps between paths

First, consider a map \(\function{ \gamma }\) sending paths in a quiver \(\quiver{Q}\) to paths in a quiver \(\quiver{R}\). We can see such a map as being between the path groupoids of \(\quiver{Q}\) and \(\quiver{R}\):

\[ \functionSignature{\function{ \gamma }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{R}}} \]

A **path homomorphism** is such a map that *also* possesses the property of **compatibility**:

This states that we can compose paths in \(\quiver{Q}\) and then apply \(\function{ \gamma }\), or we can apply \(\function{ \gamma }\) and then compose the resulting paths in \(\quiver{R}\), and we will obtain the same result. This is in fact equivalent to the statement that \(\function{ \gamma }\) is a **groupoid homomorphism**.

### Properties

Before seeing any examples, we can deduce some simple properties of a path homomorphism. Firstly, let us consider a path \(\path{P}\), and the empty path starting at its tail vertex, written \(\pathTail{\path{P}}\), and head vertex, written \(\pathHead{\path{P}}\):

\[ \begin{aligned} \path{P}&= \paren{\pathWord{\vert{x}}{\wordSymbol{W}}{\vert{y}}}\\ \pathTail{\path{P}}&= \paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\\ \pathHead{\path{P}}&= \paren{\pathWord{\vert{y}}{\emptyWord{}}{\vert{y}}}\end{aligned} \]

By definition, we have that \(\pathTail{\path{P}}\) and \(\pathHead{\path{P}}\) are the left and right units of \(\path{P}\) under path composition:

\[ \pathCompose{\pathTail{\path{P}}}{\path{P}} = \path{P} = \pathCompose{\path{P}}{\pathHead{\path{P}}} \]

Applying a path homomorphism \(\function{ \gamma }\) to these equations, and using compatibility, we obtain:

\[ \begin{aligned} \function{ \gamma }(\pathCompose{\pathTail{\path{P}}}{\path{P}})&= \function{ \gamma }(\path{P})= \function{ \gamma }(\pathCompose{\path{P}}{\pathHead{\path{P}}})\\ \pathCompose{\function{ \gamma }(\pathTail{\path{P}})}{\function{ \gamma }(\path{P})}&= \function{ \gamma }(\path{P})= \pathCompose{\function{ \gamma }(\path{P})}{\function{ \gamma }(\pathHead{\path{P}})}\end{aligned} \]

Therefore we have that \(\function{ \gamma }(\pathTail{\path{P}})\) and \(\function{ \gamma }(\pathHead{\path{P}})\) act like the left and right units for \(\function{ \gamma }(\path{P})\). Since the only units are the empty paths, we can conclude that \(\graphHomomorphism{ \gamma }\) maps empty paths to empty paths, and we can write:

\[ \begin{aligned} \function{ \gamma }(\pathTail{\path{P}})&= \pathTail{\function{ \gamma }(\path{P})}\\ \function{ \gamma }(\pathHead{\path{P}})&= \pathHead{\function{ \gamma }(\path{P})}\end{aligned} \]

We can leverage this fact to understand how \(\graphHomomorphism{ \gamma }\) interacts with path reversal (the inverse of the path groupoid), since \(\pathCompose{\path{P}}{\pathReverse{\path{P}}} = \pathTail{\path{P}}\).

\[ \begin{aligned} \pathCompose{\path{P}}{\pathReverse{\path{P}}}&= \pathTail{\path{P}}\\ \function{ \gamma }(\pathCompose{\path{P}}{\pathReverse{\path{P}}})&= \function{ \gamma }(\pathTail{\path{P}})\\ \pathCompose{\function{ \gamma }(\path{P})}{\function{ \gamma }(\pathReverse{\path{P}})}&= \pathTail{\function{ \gamma }(\path{P})}\end{aligned} \]

Therefore \(\function{ \gamma }(\pathReverse{\path{P}})\) behaves as the inverse of \(\function{ \gamma }(\path{P})\), since it right-multiplies with it to produce the unit \(\pathTail{\function{ \gamma }(\path{P})}\). Therefore we can conclude that \(\function{ \gamma }\) preserves path reversal:

\[ \function{ \gamma }(\pathReverse{\path{P}}) = \pathReverse{\function{ \gamma }(\path{P})} \]

### Summary

We can summarize the properties of a path homomorphism (strictly speaking the first property is merely the definition):

\[ \begin{aligned} \function{ \gamma }(\pathCompose{\path{P_1}}{\path{P_2}})&= \pathCompose{\function{ \gamma }(\path{P_1})}{\function{ \gamma }(\path{P_2})}&\quad \textrm{preserves composition}\\ \function{ \gamma }(\pathTail{\path{P}})&= \pathTail{\function{ \gamma }(\path{P})}&\quad \textrm{preserves left units}\\ \function{ \gamma }(\pathHead{\path{P}})&= \pathHead{\function{ \gamma }(\path{P})}&\quad \textrm{preserves right units}\\ \function{ \gamma }(\pathReverse{\path{P}})&= \pathReverse{\function{ \gamma }(\path{P})}&\quad \textrm{preserves reversal}\end{aligned} \]

## Examples

To keep things simple, we'll start by considering the special case \(\quiver{R} = \quiver{Q}\); in other words, we will consider maps that send paths in \(\quiver{Q}\) to paths in \(\quiver{Q}\). We'll take \(\quiver{Q}\) to be the 3-line quiver \(\bindCards{\subSize{\lineQuiver }{3}}{\card{c}}\):

#### Non-example

Imagine we try to define a path homomorphism \(\functionSignature{\function{ \gamma }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{R}}}\), where we initially define its behavior on two particular paths \(\gbform{\path{P_1}},\rbform{\path{P_2}}\) in \(\quiver{Q}\):

There is **no** path homomorphism that behaves like this, because while we can form the composition \(\pathCompose{\gbform{\path{P_1}}}{\rbform{\path{P_2}}} = \rgform{\path{P_3}}\), we cannot compose their images under \(\function{ \gamma }\): \(\pathCompose{\function{ \gamma }(\gbform{\path{P_1}})}{\function{ \gamma }(\rbform{\path{P_2}})} = \nullElement\), as the head of \(\function{ \gamma }(\gbform{\path{P_1}})\) is not equal to the tail of \(\function{ \gamma }(\rbform{\path{P_2}})\).

#### Example

Now let us consider the map \(\functionSignature{\function{ \gamma }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{R}}}\), illustrated below for paths \(\gbform{\path{P_1}},\rbform{\path{P_2}},\rgform{\path{P_3}}\) as well as the empty paths \(\identityElement{\vert{x}} = \paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}},\identityElement{\vert{y}} = \paren{\pathWord{\vert{y}}{\emptyWord{}}{\vert{y}}},\identityElement{\vert{z}} = \paren{\pathWord{\vert{z}}{\emptyWord{}}{\vert{z}}}\).

This map *does* satisfy compatibility, making it a path homomorphism. Intuitively, \(\graphHomomorphism{ \gamma }\) "reflects paths in the left-right axis" of the line quiver.

## Path tables

We can also visualize the behavior of the above map as a **path table** showing the pairing of \(\path{P}\) and \(\function{ \gamma }(\path{P})\) for several paths \(\elemOf{\path{P}}{\pathList(\quiver{Q})}\):

To avoid having to illustrate \(\graphHomomorphism{ \gamma }\) on *every* possible path, we can exploit the fact that \(\graphHomomorphism{ \gamma }\) must preserve path reversal. The (undepicted) value of \(\function{ \gamma }(\pathWord{\vert{z}}{\word{\ncard{c}}{\ncard{c}}}{\vert{z}})\) is implied by the (depicted) value \(\function{ \gamma }(\pathWord{\vert{x}}{\word{\card{c}}{\card{c}}}{\vert{z}})\):

This symmetry implies the remaining paths:

Since these pairs are implied, we need not show these mappings in path tables.

#### Example: zero homomorphisms

Here we provide another example of a homomorphism, a **zero homomorphism** that sends every path to an empty path:

There are three such possible zero homomorphisms, because we can choose to send paths to the empty path on any of the three vertices in \(\quiver{Q}\).

#### Example: partial zero homomorphisms

We can also construct path homomorphisms that "zero" *some* paths but not all paths. For example, we can zero the portion of any path that is "left" of the central vertex, \(\vert{y}\):

#### Example: maps between two different quivers

So far, we've considered path homomorphisms from a single quiver to itself. Now let's consider a path homomorphism \(\functionSignature{\function{ \gamma }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{R}}}\) between a line quiver \(\quiver{Q} = \bindCards{\subSize{\lineQuiver }{2}}{\rform{\card{r}}}\), and a different quiver \(\quiver{R}\) on cardinals \(\gform{\card{g}},\bform{\card{b}}\):

Here we show the path table for \(\function{ \gamma }\):

## Images of edges

Let us consider the full path table for the homomorphism we just defined:

This path table is obviously highly redundant, since many of the rows are implied by others:

Rows 7, 8, and 9 are implied by 4, 5, and 6 via path reversal: \(\function{ \gamma }(\pathReverse{\path{P}}) = \pathReverse{\function{ \gamma }(\path{P})}\).

Rows 1, 2, and 3, which describe where to send empty paths on vertices \(\vert{x},\vert{y},\vert{z}\), can be extracted from the heads (or tails) of the images of any paths that start (or end) at these vertices, e.g. rows 4 and 5.

Row 6 depicts the image of path \(\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}{\rform{\card{r}}}}{\vert{z}}}\), but this path can be represented as the composition \(\pathCompose{\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}}{\vert{y}}}}{\paren{\pathWord{\vert{y}}{\word{\rform{\card{r}}}}{\vert{z}}}}\), and hence is determined by rows 4 and 5 (and compatibility).

All in all, only rows 4 and 5 are necessary to fully describe the behavior of \(\function{ \gamma }\):

But these are just the images of the edges of \(\quiver{Q}\)!

### Summary

We can see that the behavior of a path homomorphism \(\functionSignature{\function{ \gamma }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{R}}}\) is uniquely determined by a choice of path \(\elemOf{\function{ \gamma }(\pathWord{\vert{x}}{\word{\card{c}}}{\vert{y}})}{\pathList(\quiver{Q})}\) for every edge \(\elemOf{\tde{\vert{x}}{\vert{y}}{\card{c}}}{\edgeList(\quiver{Q})}\). The behavior of \(\function{ \gamma }\) on *longer* paths in \(\quiver{Q}\) is then determined by composing the images of the sequence of edges in the path:
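This determination-by-edge-images can be sketched in Python. This is a minimal sketch under my own conventions (not notation from this site): a path is a triple of tail vertex, cardinal word, and head vertex; a homomorphism is stored as a dictionary of hypothetical edge images; and longer paths are handled by composing those images.

```python
def compose(p, q):
    # path composition; None plays the role of the null element
    if p is None or q is None or p[2] != q[0]:
        return None
    return (p[0], p[1] + q[1], q[2])

def image_of_path(edge_image, edges):
    # the image of a longer path is the composite of its edge images
    result = edge_image[edges[0]]
    for e in edges[1:]:
        result = compose(result, edge_image[e])
    return result

# hypothetical edge images: each edge of Q is sent to a 1-path in R
edge_image = {
    "x->y": (0, "g", 1),   # image of the edge x -> y
    "y->z": (1, "b", 2),   # image of the edge y -> z
}
print(image_of_path(edge_image, ["x->y", "y->z"]))  # (0, 'gb', 2)
```

Note how `compose` enforces the head-to-tail condition: if the chosen edge images do not meet head-to-tail, the composite is null and no homomorphism results.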

The only requirement we must impose on these choices for individual edges is that the images of the edges can be composed in \(\quiver{R}\) when two edges are head-to-tail in \(\quiver{Q}\):

\[ \pathCompose{\function{ \gamma }(\pathWord{\vert{x}}{\word{\card{c}}}{\vert{y}})}{\function{ \gamma }(\pathWord{\vert{y}}{\word{\card{d}}}{\vert{z}})} \neq \nullElement \]## Lengthening and shortening

To get a further feeling for what kind of homomorphisms are possible, consider the following two quivers \(\quiver{Q} = \bindCards{\subSize{\lineQuiver }{3}}{\rform{\card{r}}}\) and \(\quiver{R} = \bindCards{\subSize{\lineQuiver }{4}}{\rform{\card{r}}}\):

#### Lengthening

Our path homomorphism \(\functionSignature{\function{ \gamma }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{R}}}\) is defined by the following path table:

Our homomorphism \(\function{ \gamma }\) performs a "jump" when traversing the middle edge in \(\quiver{Q}\). We say it is a **path lengthening**, because there is at least one path whose image under \(\function{ \gamma }\) is longer than the path itself:

This is equivalent to there being at least one edge that is sent to a path longer than 1:

\[ \existsForm{\elemOf{\edge{e}}{\edgeList(\quiver{Q})}}{\length(\graphHomomorphism{ \gamma }(\edge{e}))>1} \]We say a path homomorphism \(\function{ \gamma }\) is \(\sym{k}\)-**Lipschitz** if the lengths of images of edges under \(\function{ \gamma }\) are bounded above by \(\sym{k}\):
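The Lipschitz condition is easy to paraphrase computationally. In this sketch (using my own convention of an edge-image table of tail/word/head triples, with hypothetical data standing in for the path table above), a homomorphism is \(\sym{k}\)-Lipschitz exactly when no image word exceeds length \(\sym{k}\):

```python
def is_k_lipschitz(edge_image, k):
    # no edge may be sent to a path of more than k cardinals
    return all(len(word) <= k for (tail, word, head) in edge_image.values())

# hypothetical images for the "jumping" homomorphism: the middle
# edge is sent to a 2-cardinal path, the others to 1-cardinal paths
edge_image = {"e1": (1, "r", 2), "e2": (2, "rr", 4), "e3": (4, "r", 5)}
assert is_k_lipschitz(edge_image, 2)
assert not is_k_lipschitz(edge_image, 1)   # it lengthens, so not 1-Lipschitz
```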

#### Shortening

Let us now consider a path homomorphism \(\functionSignature{\pathHomomorphism{ \rho }}{\pathGroupoid{\quiver{R}}}{\pathGroupoid{\quiver{Q}}}\) with the opposite feature:

Our homomorphism \(\pathHomomorphism{ \rho }\) now "shortens" a particular edge, namely \(\paren{\pathWord{\vert{w}}{\word{\rform{\card{r}}}}{\vert{x}}}\), sending it to \(\pathHomomorphism{ \rho }(\pathWord{\vert{w}}{\word{\rform{\card{r}}}}{\vert{x}}) = \paren{\pathWord{\vert{b}}{\emptyWord{}}{\vert{b}}}\). Any longer paths that pass through this edge will similarly be shortened.

We say \(\pathHomomorphism{ \rho }\) is a **path shortening**, because there is at least one path whose image under \(\pathHomomorphism{ \rho }\) is shorter than the path itself:

This is equivalent to there being at least one edge that is sent to the empty path:

\[ \existsForm{\elemOf{\edge{e}}{\edgeList(\quiver{R})}}{\length(\pathHomomorphism{ \rho }(\edge{e})) = 0} \]### Composition

We can of course *compose* these path homomorphisms, in either order.

First, let's examine \(\functionSignature{\function{\paren{\functionComposition{\pathHomomorphism{ \rho }\functionCompositionSymbol \function{ \gamma }}}}}{\quiver{Q}}{\quiver{Q}}\), which is \(\functionSignature{\pathHomomorphism{ \rho }}{\quiver{R}}{\quiver{Q}}\) applied to the result of \(\functionSignature{\function{ \gamma }}{\quiver{Q}}{\quiver{R}}\).

This is the identity path homomorphism on \(\quiver{Q}\).

Now let's examine \(\functionSignature{\function{\paren{\functionComposition{\function{ \gamma }\functionCompositionSymbol \pathHomomorphism{ \rho }}}}}{\quiver{R}}{\quiver{R}}\), which is \(\functionSignature{\function{ \gamma }}{\quiver{Q}}{\quiver{R}}\) applied to the result of \(\functionSignature{\pathHomomorphism{ \rho }}{\quiver{R}}{\quiver{Q}}\):

This is certainly not the identity path homomorphism, and is both a shortening and a lengthening.

### Length preserving

We say a path homomorphism \(\function{ \gamma }\) is **length preserving** if it is neither a shortening nor a lengthening. This is equivalent to saying that the length of the image of every edge is exactly 1:

A length preserving homomorphism \(\functionSignature{\function{ \gamma }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{R}}}\) maps edges of \(\quiver{Q}\) to *oriented* edges of \(\quiver{R}\): a directed edge of \(\quiver{Q}\) can be sent to a directed edge of \(\quiver{R}\) traversed either with or against its ordinary orientation. The "horizontal reflection" homomorphism on \(\bindCards{\subSize{\lineQuiver }{2}}{\rform{\card{r}}}\) we considered previously was of such a type.

## Path homomorphisms into the line quiver

For any quiver \(\quiver{Q}\) there is a family of path homomorphisms that is particularly easy to characterize: the path homomorphisms into the line quiver \(\subSize{\lineQuiver }{ \infty }\).

To construct these, let us first define an **integer vertex field** to be a function \(\functionSignature{\function{f}}{\vertexList(\quiver{Q})}{\mathbb{Z}}\) that assigns to each vertex of \(\quiver{Q}\) an integer.

Below we show an example integer vertex field on the quiver \(\quiver{Q} = \subSize{\squareQuiver }{2}\), where the values of \(\function{f}\) are displayed beside each vertex:

We will now interpret the value \(\function{f}(\vert{v})\) as determining a vertex of the line quiver \(\bindCards{\subSize{\lineQuiver }{ \infty }}{\gform{\card{g}}}\):

Therefore, \(\function{f}\) induces part of the behavior of a path homomorphism \(\pathHomomorphism{ \rho }\), specifically, the behavior of the empty paths of \(\quiver{Q}\):

\[ \pathHomomorphism{ \rho }(\pathWord{\vert{v}}{\emptyWord{}}{\vert{v}})\defEqualSymbol \pathWord{\function{f}(\vert{v})}{\emptyWord{}}{\function{f}(\vert{v})} \]We now need only define \(\pathHomomorphism{ \rho }\) on the edges of \(\quiver{Q}\). To do this, we note that for any edge \(\elemOf{\tde{\vert{u}}{\vert{v}}{\card{c}}}{\edgeList(\quiver{Q})}\), we are forced by compatibility to send the corresponding path \(\paren{\pathWord{\vert{u}}{\word{\card{c}}}{\vert{v}}}\) to some path in \(\subSize{\lineQuiver }{ \infty }\) with tail vertex \(\function{f}(\vert{u})\) and head vertex \(\function{f}(\vert{v})\). However, there is only *one* path in \(\subSize{\lineQuiver }{ \infty }\) between any two given vertices, and hence we have no choice in defining the remaining behavior of \(\pathHomomorphism{ \rho }\).

The formal definition of \(\pathHomomorphism{ \rho }\) then is as follows:

\[ \pathHomomorphism{ \rho }(\pathWord{\vert{u}}{\word{\card{c}}}{\vert{v}})\defEqualSymbol \paren{\pathWord{\function{f}(\vert{u})}{\repeatedPower{\word{\gform{\card{g}}}}{\function{f}(\vert{v}) - \function{f}(\vert{u})}}{\function{f}(\vert{v})}} \]We show part of the path table to make the idea clear:

Notice how the image of each edge under \(\pathHomomorphism{ \rho }\) encodes the value of \(\function{f}\) at the head and tail of the edge, and the cardinal content of the path encodes the difference between these values. In [[[Path calculus]]] we’ll develop a more formal approach to this kind of gradient-like operation.
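The induced map on a single edge can be sketched directly. This is a minimal sketch under my own conventions (the vertex field as a dict, paths as tail/word/head triples, lowercase `g` for the line quiver's cardinal, and uppercase `G` marking its inverse); the particular field values are hypothetical.

```python
def induced_edge_image(f, u, v):
    # the edge u -> v is forced to map to the unique path in the line
    # quiver from f(u) to f(v): a run of the cardinal g, possibly inverted
    k = f[v] - f[u]
    word = "g" * k if k >= 0 else "G" * (-k)   # uppercase = inverted cardinal
    return (f[u], word, f[v])

f = {"x": 0, "y": 1, "z": 3}          # a hypothetical integer vertex field
assert induced_edge_image(f, "x", "y") == (0, "g", 1)
assert induced_edge_image(f, "y", "z") == (1, "gg", 3)
assert induced_edge_image(f, "z", "x") == (3, "GGG", 0)
```

The cardinal word is exactly the (signed) difference \(\function{f}(\vert{v}) - \function{f}(\vert{u})\), which is the gradient-like behavior described above.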

The converse direction is trivially true: any path homomorphism into the line quiver gives us an integer vertex field, by simply numbering the vertices of the line quiver and taking pre-images of empty paths.

Therefore there is a bijection between integer vertex fields on \(\quiver{Q}\) and path homomorphisms from \(\quiver{Q}\) into the line quiver.

#### Connected quivers

We have just seen that a function \(\functionSignature{\function{f}}{\vertexList(\quiver{Q})}{\vertexList(\subSize{\lineQuiver }{ \infty })}\) induces a unique path homomorphism \(\functionSignature{\pathHomomorphism{ \rho }}{\quiver{Q}}{\subSize{\lineQuiver }{ \infty }}\), defining \(\pathHomomorphism{ \rho }(\de{\vert{u}}{\vert{v}})\) to be the unique path in \(\subSize{\lineQuiver }{ \infty }\) between vertices \(\function{f}(\vert{u})\) and \(\function{f}(\vert{v})\).

We can quite easily generalize this construction, replacing \(\subSize{\lineQuiver }{ \infty }\) with an arbitrary connected quiver \(\quiver{R}\). However, for a given function \(\functionSignature{\function{f}}{\vertexList(\quiver{Q})}{\vertexList(\quiver{R})}\) there may be *more than one* path between \(\function{f}(\vert{u})\) and \(\function{f}(\vert{v})\); we may in fact pick *any* path in \(\pathList(\quiver{R},\function{f}(\vert{u}),\function{f}(\vert{v}))\) to define the behavior of the path homomorphism on an edge \(\pathHomomorphism{ \rho }(\de{\vert{u}}{\vert{v}})\). Hence, we no longer have a bijection – typically there will be an infinite number of homomorphisms corresponding to a choice of function \(\function{f}\).

## Endomorphisms and automorphisms

#### Endomorphisms

A special role is played by the set of path homomorphisms \(\functionSignature{\pathHomomorphism{ \alpha }}{\pathGroupoid{\quiver{Q}}}{\pathGroupoid{\quiver{Q}}}\) from a quiver \(\quiver{Q}\) to itself. We'll write this set as \(\endomorphisms(\quiver{Q})\). We will now attach the structure of a **semigroup** to this set. That is to say, we will define an associative binary operation that corresponds to the composition of homomorphisms.

Like all function compositions, path homomorphism composition is associative. But is the composition of two homomorphisms \(\functionComposition{\pathHomomorphism{ \alpha }\functionCompositionSymbol \pathHomomorphism{ \beta }}\) itself a homomorphism? This is easily seen to be true:

\[ \begin{aligned} \function{\paren{\functionComposition{\pathHomomorphism{ \alpha }\functionCompositionSymbol \pathHomomorphism{ \beta }}}}(\pathCompose{\path{P_1}}{\path{P_2}})&= \pathHomomorphism{ \alpha }(\pathHomomorphism{ \beta }(\pathCompose{\path{P_1}}{\path{P_2}}))\\ &= \pathHomomorphism{ \alpha }(\pathCompose{\pathHomomorphism{ \beta }(\path{P_1})}{\pathHomomorphism{ \beta }(\path{P_2})})\\ &= \pathCompose{\pathHomomorphism{ \alpha }(\pathHomomorphism{ \beta }(\path{P_1}))}{\pathHomomorphism{ \alpha }(\pathHomomorphism{ \beta }(\path{P_2}))}\\ &= \pathCompose{\function{\paren{\functionComposition{\pathHomomorphism{ \alpha }\functionCompositionSymbol \pathHomomorphism{ \beta }}}}(\path{P_1})}{\function{\paren{\functionComposition{\pathHomomorphism{ \alpha }\functionCompositionSymbol \pathHomomorphism{ \beta }}}}(\path{P_2})}\end{aligned} \]Therefore, the set \(\endomorphisms(\quiver{Q})\) is a semigroup. It would further be a **group** if path homomorphisms were invertible, but we have seen examples of path homomorphisms that, for example, send all paths to the empty path on a particular vertex.

#### Automorphisms

So how can we constrain path homomorphisms to be invertible? It will turn out to be enough to ask that they are merely *surjective*, so that every path is the image of at least one path:

Why does surjectivity imply injectivity? We prove this by contradiction. Assume that a surjective path homomorphism \(\pathHomomorphism{ \alpha }\) is not injective, so that:

\[ \existsForm{\path{P},\primed{\path{P}},\path{P} \neq \primed{\path{P}}}{\pathHomomorphism{ \alpha }(\path{P}) = \pathHomomorphism{ \alpha }(\primed{\path{P}})} \]If we decompose \(\path{P}\) and \(\primed{\path{P}}\) into sequences of individual 1-paths, we must obtain at least one pair \(\paren{\pathWord{\vert{u}}{\word{\card{c}}}{\vert{v}}},\paren{\pathWord{\vert{u}}{\word{\card{\primed{c}}}}{\primed{\vert{v}}}}\) such that \(\pathHomomorphism{ \alpha }(\pathWord{\vert{u}}{\word{\card{c}}}{\vert{v}}) = \pathHomomorphism{ \alpha }(\pathWord{\vert{u}}{\word{\card{\primed{c}}}}{\primed{\vert{v}}})\) but \(\vert{v} \neq \primed{\vert{v}}\).

## Affine path homomorphisms

You may have noticed that the discussion of path homomorphisms of quivers has not relied in any way on the cardinal structure of the quivers. In fact, everything we have said is applicable also to ordinary directed graphs and their path groupoids.

#### Word automorphisms

Recall that the [[[word group:Word groups]]] \(\wordGroup{\quiver{Q}}\) is the free group on the cardinals of \(\quiver{Q}\).

We now consider a group homomorphism \(\functionSignature{\groupHomomorphism{ \phi }}{\wordGroupSymbol }{\wordGroupSymbol }\), or **endomorphism**, as homomorphisms from a group to itself are called. We can think of such a homomorphism \(\groupHomomorphism{ \phi }\) as a way of "rewriting" words as other words, cardinal-by-cardinal.

Let's consider an example homomorphism on a quiver with cardinals \(\cardinalList(\quiver{Q}) = \list{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\). We define \(\groupHomomorphism{ \phi }\) as:

\[ \groupHomomorphism{ \phi } = \groupWordRewriting{\rewritingRule{\word{\rform{\card{r}}}}{\word{\gform{\card{g}}}{\rform{\card{r}}}},\rewritingRule{\word{\gform{\card{g}}}}{\word{1}},\rewritingRule{\word{\bform{\card{b}}}}{\word{\rform{\ncard{r}}}}} \]The empty word has no letters to rewrite, so it remains the empty word:

\[ \groupHomomorphism{ \phi }(\card{1}) = \card{1} \]The words with one cardinal are rewritten as given in the definition of \(\groupHomomorphism{ \phi }\). Because \(\groupHomomorphism{ \phi }\) is a group homomorphism, it must replace the *inverses* of these cardinals with the *inverses* of the corresponding right-hand-sides:

Longer words are rewritten cardinal-by-cardinal, since a group homomorphism must send products to products. Here we show a few examples of rewriting longer words:

\[ \begin{array}{rclcl} \groupHomomorphism{ \phi }(\word{\rform{\card{r}}}{\rform{\card{r}}}) & = & \groupHomomorphism{ \phi }(\rform{\card{r}})\iGmult \groupHomomorphism{ \phi }(\rform{\card{r}}) & = & \word{\gform{\card{g}}}{\rform{\card{r}}}{\gform{\card{g}}}{\rform{\card{r}}}\\ \groupHomomorphism{ \phi }(\word{\rform{\ncard{r}}}{\rform{\ncard{r}}}) & = & \groupHomomorphism{ \phi }(\rform{\inverted{\card{r}}})\iGmult \groupHomomorphism{ \phi }(\rform{\inverted{\card{r}}}) & = & \word{\rform{\ncard{r}}}{\gform{\ncard{g}}}{\rform{\ncard{r}}}{\gform{\ncard{g}}}\\ \groupHomomorphism{ \phi }(\word{\gform{\card{g}}}{\gform{\card{g}}}) & = & \groupHomomorphism{ \phi }(\gform{\card{g}})\iGmult \groupHomomorphism{ \phi }(\gform{\card{g}}) = \concat{\card{1}\,\card{1}} & = & \card{1}\\ \groupHomomorphism{ \phi }(\word{\bform{\card{b}}}{\bform{\card{b}}}) & = & \groupHomomorphism{ \phi }(\bform{\card{b}})\iGmult \groupHomomorphism{ \phi }(\bform{\card{b}}) & = & \word{\rform{\ncard{r}}}{\rform{\ncard{r}}}\\ \groupHomomorphism{ \phi }(\word{\rform{\card{r}}}{\bform{\card{b}}}) & = & \groupHomomorphism{ \phi }(\rform{\card{r}})\iGmult \groupHomomorphism{ \phi }(\bform{\card{b}}) = \concat{\word{\gform{\card{g}}}{\rform{\card{r}}}\,\rform{\inverted{\card{r}}}} & = & \gform{\card{g}}\\ \groupHomomorphism{ \phi }(\word{\rform{\card{r}}}{\gform{\card{g}}}{\bform{\card{b}}}) & = & \groupHomomorphism{ \phi }(\rform{\card{r}})\iGmult \groupHomomorphism{ \phi }(\gform{\card{g}})\iGmult \groupHomomorphism{ \phi }(\bform{\card{b}}) = \concat{\word{\gform{\card{g}}}{\rform{\card{r}}}\,\card{1}\,\rform{\inverted{\card{r}}}} & = & \gform{\card{g}}\\ \groupHomomorphism{ \phi }(\word{\rform{\card{r}}}{\gform{\card{g}}}{\bform{\ncard{b}}}) & = & \groupHomomorphism{ \phi }(\rform{\card{r}})\iGmult \groupHomomorphism{ \phi }(\gform{\card{g}})\iGmult \groupHomomorphism{ \phi }(\bform{\inverted{\card{b}}}) = \concat{\word{\gform{\card{g}}}{\rform{\card{r}}}\,\card{1}\,\rform{\card{r}}} & = & 
\word{\gform{\card{g}}}{\rform{\card{r}}}{\rform{\card{r}}} \end{array} \]As with the composition of path homomorphisms, we can form the endomorphism semigroup \(\endomorphisms(\wordGroupSymbol )\) of word homomorphisms under composition.
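The rewriting just illustrated can be sketched in Python. The encoding is my own: lowercase letters stand for cardinals, uppercase letters for their inverses, and `reduce` performs free cancellation of adjacent inverse pairs.

```python
def inv(w):
    # inverse of a free-group word: reverse it and flip every letter
    return w[::-1].swapcase()

def reduce(w):
    # freely reduce, cancelling adjacent inverse pairs like "rR"
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def apply_hom(phi, w):
    # a homomorphism is determined by its values on the generators;
    # inverted cardinals must map to the inverses of the images
    return reduce("".join(
        phi[c] if c.islower() else inv(phi[c.lower()]) for c in w))

# the example above: r -> g r, g -> empty word, b -> inverse of r
phi = {"r": "gr", "g": "", "b": "R"}
assert apply_hom(phi, "rr") == "grgr"
assert apply_hom(phi, "gg") == ""
assert apply_hom(phi, "rb") == "g"     # "gr" followed by "R" cancels to "g"
assert apply_hom(phi, "rgB") == "grr"
```

These assertions reproduce several rows of the table above.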

### Automorphism group

If we want the group \(\automorphisms(\wordGroupSymbol )\) rather than the semigroup \(\endomorphisms(\wordGroupSymbol )\), we need to limit ourselves to *invertible* word homomorphisms.

As a first step to characterizing these, observe that any **signed permutation** of the set of cardinals gives an invertible homomorphism. Signed permutations are permutations that can *additionally* invert each element. Since the number of permutations of \(\sym{n}\) elements is \(\factorial{\sym{n}}\), and the number of sign patterns is \(\power{2}{\sym{n}}\), there are \(\factorial{\sym{n}} \, \power{2}{\sym{n}}\) signed permutations.

For example, the 8 possible signed permutations on \(\list{\rform{\card{r}},\bform{\card{b}}}\) are \(\list{\tuple{\rform{\card{r}},\bform{\card{b}}},\tuple{\inverted{\rform{\card{r}}},\bform{\card{b}}},\tuple{\rform{\card{r}},\inverted{\bform{\card{b}}}},\tuple{\inverted{\rform{\card{r}}},\inverted{\bform{\card{b}}}},\tuple{\bform{\card{b}},\rform{\card{r}}},\tuple{\inverted{\bform{\card{b}}},\rform{\card{r}}},\tuple{\bform{\card{b}},\inverted{\rform{\card{r}}}},\tuple{\inverted{\bform{\card{b}}},\inverted{\rform{\card{r}}}}}\), yielding the following word automorphisms:

\[ \begin{array}{c} \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\card{r}}},\rewritingRule{\bform{\card{b}}}{\bform{\card{b}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\inverted{\card{r}}}},\rewritingRule{\bform{\card{b}}}{\bform{\card{b}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\card{r}}},\rewritingRule{\bform{\card{b}}}{\bform{\inverted{\card{b}}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\inverted{\card{r}}}},\rewritingRule{\bform{\card{b}}}{\bform{\inverted{\card{b}}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\card{b}}},\rewritingRule{\bform{\card{b}}}{\rform{\card{r}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\inverted{\card{b}}}},\rewritingRule{\bform{\card{b}}}{\rform{\card{r}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\card{b}}},\rewritingRule{\bform{\card{b}}}{\rform{\inverted{\card{r}}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\inverted{\card{b}}}},\rewritingRule{\bform{\card{b}}}{\rform{\inverted{\card{r}}}}} \end{array} \]Moving to the set \(\list{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\) yields 48 signed permutations: \(\tuple{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}},\tuple{\inverted{\rform{\card{r}}},\gform{\card{g}},\bform{\card{b}}},\tuple{\rform{\card{r}},\inverted{\gform{\card{g}}},\bform{\card{b}}},\tuple{\rform{\card{r}},\gform{\card{g}},\inverted{\bform{\card{b}}}}\), \(\ellipsis\), \(\tuple{\inverted{\bform{\card{b}}},\inverted{\gform{\card{g}}},\rform{\card{r}}},\tuple{\inverted{\bform{\card{b}}},\gform{\card{g}},\inverted{\rform{\card{r}}}},\tuple{\bform{\card{b}},\inverted{\gform{\card{g}}},\inverted{\rform{\card{r}}}},\tuple{\inverted{\bform{\card{b}}},\inverted{\gform{\card{g}}},\inverted{\rform{\card{r}}}}\), yielding 48 word automorphisms:

\[ \begin{array}{c} \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\card{r}}},\rewritingRule{\gform{\card{g}}}{\gform{\card{g}}},\rewritingRule{\bform{\card{b}}}{\bform{\card{b}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\inverted{\card{r}}}},\rewritingRule{\gform{\card{g}}}{\gform{\card{g}}},\rewritingRule{\bform{\card{b}}}{\bform{\card{b}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\card{r}}},\rewritingRule{\gform{\card{g}}}{\gform{\inverted{\card{g}}}},\rewritingRule{\bform{\card{b}}}{\bform{\card{b}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\rform{\card{r}}},\rewritingRule{\gform{\card{g}}}{\gform{\card{g}}},\rewritingRule{\bform{\card{b}}}{\bform{\inverted{\card{b}}}}}\\ \verticalEllipsis \\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\inverted{\card{b}}}},\rewritingRule{\gform{\card{g}}}{\gform{\inverted{\card{g}}}},\rewritingRule{\bform{\card{b}}}{\rform{\card{r}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\inverted{\card{b}}}},\rewritingRule{\gform{\card{g}}}{\gform{\card{g}}},\rewritingRule{\bform{\card{b}}}{\rform{\inverted{\card{r}}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\card{b}}},\rewritingRule{\gform{\card{g}}}{\gform{\inverted{\card{g}}}},\rewritingRule{\bform{\card{b}}}{\rform{\inverted{\card{r}}}}}\\ \groupWordRewriting{\rewritingRule{\rform{\card{r}}}{\bform{\inverted{\card{b}}}},\rewritingRule{\gform{\card{g}}}{\gform{\inverted{\card{g}}}},\rewritingRule{\bform{\card{b}}}{\rform{\inverted{\card{r}}}}} \end{array} \]Being obviously invertible, the signed permutations on \(\sym{n}\) cardinals form a group. This group is known as the **Coxeter group** \(\group{B_{\sym{n}}}\).
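The counting argument can be checked directly. A small sketch using the standard library (the tuple-of-pairs representation of a signed permutation is my own convention):

```python
from itertools import permutations, product

def signed_permutations(elems):
    # pair each permutation with an independent choice of sign
    # (+1 = keep the cardinal, -1 = invert it) for every slot
    for perm in permutations(elems):
        for signs in product((1, -1), repeat=len(elems)):
            yield tuple(zip(perm, signs))

assert len(list(signed_permutations("rb"))) == 8     # 2! * 2^2
assert len(list(signed_permutations("rgb"))) == 48   # 3! * 2^3
```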

#### Lengthening automorphisms

This exhausts the word automorphisms that do not lengthen words. But it is not immediately obvious that a word automorphism *can* lengthen words while still remaining invertible. Here we demonstrate one particular example and explain why it is invertible:

To show invertibility, we need to explicitly construct words (of any length) that map to the single-cardinal words \(\list{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\). Inverses of longer words can then be formed by composing these primitive inverses. To find these primitive inverses, we observe that:

\[ \begin{array}{rcccccl} \rform{\card{r}} & = & \concat{\word{\rform{\card{r}}}{\gform{\ncard{g}}}\,\word{\gform{\card{g}}}{\bform{\ncard{b}}}\,\bform{\card{b}}} & = & \concat{\groupHomomorphism{ \phi }(\rform{\card{r}})\,\groupHomomorphism{ \phi }(\gform{\card{g}})\,\groupHomomorphism{ \phi }(\bform{\card{b}})} & = & \groupHomomorphism{ \phi }(\word{\rform{\card{r}}}{\gform{\card{g}}}{\bform{\card{b}}})\\ \gform{\card{g}} & = & \concat{\word{\gform{\card{g}}}{\bform{\ncard{b}}}\,\bform{\card{b}}} & = & \concat{\groupHomomorphism{ \phi }(\gform{\card{g}})\,\groupHomomorphism{ \phi }(\bform{\card{b}})} & = & \groupHomomorphism{ \phi }(\word{\gform{\card{g}}}{\bform{\card{b}}})\\ \bform{\card{b}} & = & & & & = & \groupHomomorphism{ \phi }(\bform{\card{b}}) \end{array} \]Hence we have that \(\inverse{\groupHomomorphism{ \phi }} = \groupWordRewriting{\rewritingRule{\word{\rform{\card{r}}}}{\word{\rform{\card{r}}}{\gform{\card{g}}}{\bform{\card{b}}}},\rewritingRule{\word{\gform{\card{g}}}}{\word{\gform{\card{g}}}{\bform{\card{b}}}},\rewritingRule{\word{\bform{\card{b}}}}{\word{\bform{\card{b}}}}}\).
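We can verify this inverse computationally. In this sketch lowercase letters stand for cardinals and uppercase for their inverses (my own encoding); applying \(\groupHomomorphism{ \phi }\) to each rule of the claimed inverse recovers the corresponding generator:

```python
def inv(w):
    # inverse of a free-group word: reverse it and flip every letter
    return w[::-1].swapcase()

def reduce(w):
    # freely reduce, cancelling adjacent inverse pairs like "bB"
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def apply_hom(phi, w):
    # lowercase = cardinal, uppercase = inverted cardinal
    return reduce("".join(
        phi[c] if c.islower() else inv(phi[c.lower()]) for c in w))

phi     = {"r": "rG", "g": "gB", "b": "b"}   # the lengthening automorphism
phi_inv = {"r": "rgb", "g": "gb", "b": "b"}  # its claimed inverse

for c in "rgb":
    # phi applied to phi_inv's image of each generator gives it back
    assert apply_hom(phi, phi_inv[c]) == c
```

Since the composite fixes every generator, it fixes every word, so \(\groupHomomorphism{ \phi }\functionCompositionSymbol \inverse{\groupHomomorphism{ \phi }}\) is the identity.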

More generally, characterizing the group structure of the automorphism group of the free group is tricky, with the most famous result being a presentation in terms of so-called Nielsen transformations, which we will not go into here.

#### Abelian word group

If we temporarily imagine the word group to be Abelian, in other words, if \(\word{\rform{\card{r}}}{\gform{\card{g}}} = \word{\gform{\card{g}}}{\rform{\card{r}}},\word{\gform{\card{g}}}{\bform{\card{b}}} = \word{\bform{\card{b}}}{\gform{\card{g}}}\), etc., things become much easier. In this setting, we can represent a word homomorphism \(\groupHomomorphism{ \phi }\) on \(\sym{n}\) cardinals as an \(\sym{n}\times \sym{n}\) integral matrix \(\matrix{M_{\groupHomomorphism{ \phi }}}\) by choosing an ordering of the cardinals to serve as a basis. Here we depict four examples:

\[ \begin{csarray}{cccc}{aamam} \groupHomomorphism{ \phi } & \matrix{M_{\groupHomomorphism{ \phi }}} & \groupHomomorphism{ \phi } & \matrix{M_{\groupHomomorphism{ \phi }}}\\ & & & \\ \begin{array}{l} \langle \kern{2pt}\rewritingRule{\word{\rform{\card{r}}}}{\word{\rform{\card{r}}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\gform{\card{g}}}}{\word{\gform{\card{g}}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\bform{\card{b}}}}{\word{\bform{\card{b}}}}\kern{2pt} \rangle \end{array} & {\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}} & \begin{array}{l} \langle \kern{2pt}\rewritingRule{\word{\rform{\card{r}}}}{\word{\rform{\card{r}}}{\gform{\card{g}}}{\bform{\card{b}}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\gform{\card{g}}}}{\word{\gform{\card{g}}}{\bform{\card{b}}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\bform{\card{b}}}}{\word{\bform{\card{b}}}}\kern{2pt} \rangle \end{array} & {\begin{pmatrix}1&1&1\\0&1&1\\0&0&1\end{pmatrix}}\\ & & & \\ \begin{array}{l} \langle \kern{2pt}\rewritingRule{\word{\rform{\card{r}}}}{\word{\card{1}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\gform{\card{g}}}}{\word{\gform{\card{g}}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\bform{\card{b}}}}{\word{\bform{\card{b}}}}\kern{2pt} \rangle \end{array} & {\begin{pmatrix}0&0&0\\0&1&0\\0&0&1\end{pmatrix}} & \begin{array}{l} \langle \kern{2pt}\rewritingRule{\word{\rform{\card{r}}}}{\word{\rform{\ncard{r}}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\gform{\card{g}}}}{\word{\bform{\card{b}}}},\\ \phantom{ \langle \kern{2pt}}\rewritingRule{\word{\bform{\card{b}}}}{\word{\gform{\card{g}}}}\kern{2pt} \rangle \end{array} & {\begin{pmatrix}-1&0&0\\0&0&1\\0&1&0\end{pmatrix}} \end{csarray} \]Finding the inverse homomorphism is equivalent to inverting its integral matrix:

\[ \matrix{M_{\inverse{\groupHomomorphism{ \phi }}}} = \inverse{\matrix{M_{\groupHomomorphism{ \phi }}}} \]Even better, it is easily checked that an integral matrix has an integral inverse if and only if its determinant is \(\pm 1\), allowing us to characterize the automorphism group of the Abelianized word group as the integral general linear group \(\mathrm{GL}(\sym{n},\mathbb{Z})\).
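The Abelianized picture is easy to sketch. Here I again encode inverted cardinals as uppercase letters (my own convention), and follow the matrix convention of the table above, with one row per cardinal image:

```python
def abelianized_matrix(phi, gens):
    # row i: signed occurrence counts of each generator in the
    # image of gens[i], matching the rows of the table above
    def count(word, g):
        return word.count(g) - word.count(g.upper())
    return [[count(phi[g], h) for h in gens] for g in gens]

def det3(m):
    # determinant by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

phi = {"r": "rgb", "g": "gb", "b": "b"}      # the lengthening example
M = abelianized_matrix(phi, "rgb")
assert M == [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
assert det3(M) == 1                          # +-1, so invertible over Z
```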

## Visualization

We can visualize the action of various word homomorphisms on some familiar transitive quivers to gain intuition for their behavior.

# Multisets

## Introduction

### History

The multiset is a simple and natural piece of mathematical technology that has been strangely underused in the last hundred years. That the multiset has failed to live up to its potential is probably due to historical accident. One possible explanation is that as the need for foundational rigor became urgent, modern mathematics quickly standardized on the set as a kind of universal building block, starving out other data structures like the multiset. Such is the unfortunate role of fashion in mathematics. Luckily, multisets have seen renewed interest with the work of more computation-oriented mathematicians like Donald Knuth.

### Motivation

We are taking the time to explain multisets because they play an important *interpretative* role, giving us an alternative way of understanding algebraic structures. This role will only become clear in the next two sections: [[[Word rings]]], where we'll interpret elements of the word ring as multisets of words ("multiwords"), and [[[Adjacency]]], where we'll interpret matrices as multisets of paths ("multipaths"). Interleaving the explanation of multisets with those topics would be confusing, so we'll front-load the work of understanding multisets here, in one place.

## Sets and multisets

### Sets vs multisets

To contrast sets with multisets, we first repeat some basic facts about sets.

Sets are the building blocks of modern mathematics. They consist of a collection of *elements*: \(\sym{X} = \set{\sym{x_1},\sym{x_2},\ellipsis ,\sym{x_{\sym{n}}}}\). We can express that an element is a member of a set with the notation \(\elemOf{\setElementSymbol{x}}{\setSymbol{X}}\).

Among the fundamental properties of a set is that its elements have no defined *order*. Hence, the expressions \(\set{\rform{\card{r}},\bform{\card{b}}}\textAnd \set{\bform{\card{b}},\rform{\card{r}}}\) describe the same set. Furthermore, an object is either an element of a set or it is not. It is meaningless to say that it occurs *twice*, for example. Hence, the expressions \(\set{\rform{\card{r}},\rform{\card{r}},\bform{\card{b}}}\textAnd \set{\rform{\card{r}},\bform{\card{b}}}\) describe the same set – indeed, when *constructing* sets we rely on this redundancy being irrelevant.

If we instead specify that repetition *does* matter, we arrive at the idea of a **multiset**. A multiset can contain an object *more than once*. The number of times a multiset contains an object will be called the **multiplicity** of that object.

We'll write multisets with double-struck braces: \(\sym{X} = \multiset{\sym{x_1},\sym{x_2},\ellipsis ,\sym{x_{\sym{n}}}}\). Hence, \(\multiset{\rform{\card{r}},\rform{\card{r}},\bform{\card{b}}}\textAnd \multiset{\rform{\card{r}},\bform{\card{b}}}\) describe *different* multisets: the multiplicity of \(\rform{\card{r}}\) is \(2\) in the first and \(1\) in the second.

We’ll denote the multiplicity of an object \(\multisetElementSymbol{x}\) in a multiset \(\multisetSymbol{M}\) by \(\multisetSymbol{M}\multisetMultiplicitySymbol \multisetElementSymbol{x}\). For example, in the multiset \(\multisetSymbol{M} = \multiset{\rform{\card{r}},\rform{\card{r}},\bform{\card{b}}}\), we have that \(\multisetSymbol{M}\multisetMultiplicitySymbol \rform{\card{r}} = 2\), \(\multisetSymbol{M}\multisetMultiplicitySymbol \bform{\card{b}} = 1\), and \(\multisetSymbol{M}\multisetMultiplicitySymbol \gform{\card{g}} = 0\).
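In software, these definitions are directly executable. As a sketch, using Python's `collections.Counter` as a stand-in for multisets (an assumption of this illustration, not part of the theory):

```python
from collections import Counter

# The multiset {r, r, b} as a Counter: keys are objects, values are multiplicities.
M = Counter({"r": 2, "b": 1})

# Multiplicity lookups; an absent object has multiplicity 0, just as M # g = 0.
assert M["r"] == 2
assert M["b"] == 1
assert M["g"] == 0

# As with sets, order of construction doesn't matter...
assert Counter("rrb") == Counter("brr")
# ...but unlike sets, repetition does matter.
assert Counter("rrb") != Counter("rb")
```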

### The set of multisets

The powerset \(\powerSet{\multisetSymbol{B}}\) of a set \(\multisetSymbol{B}\) is the set of subsets of \(\multisetSymbol{B}\). We'll call these the **sets on** \(\multisetSymbol{B}\), and \(\multisetSymbol{B}\) the **base set**.

We can similarly form the set of *multisets* whose objects are taken from \(\multisetSymbol{B}\), written \(\multisets{\multisetSymbol{B}}\). We'll call these the **multisets on** \(\multisetSymbol{B}\).

Any particular object from the base set can occur any finite number of times in one such multiset, so \(\multisets{\multisetSymbol{B}}\) is generally an infinite set (unless \(\multisetSymbol{B}\) is empty). Let's look at some examples of this simple and natural idea.

If \(\multisetSymbol{B}\) is a **singleton**, that is, contains only one element, then the multisets in \(\multisets{\multisetSymbol{B}}\) can contain this element any finite number of times. Here we list some multisets in \(\multisets{\multisetSymbol{B}}\) for \(\multisetSymbol{B} = \set{\rform{\card{r}}}\):

Clearly, we can identify these multisets with the natural numbers, since they count the number of times that \(\rform{\card{r}}\) is present (the empty multiset corresponding to zero).

If \(\multisetSymbol{B}\) contains two elements, then the multisets in \(\multisets{\multisetSymbol{B}}\) can be identified with *pairs* of natural numbers, counting how many times the two elements are present. Here we list some elements of the set \(\multisets{\multisetSymbol{B}}\) for \(\multisetSymbol{B} = \set{\rform{\card{r}},\bform{\card{b}}}\):

More generally, we can identify *any* multiset \(\elemOf{\multisetSymbol{M}}{\multisets{\multisetSymbol{B}}}\) with the function \(\functionSignature{\function{\boundMultiplicityFunction{\multisetSymbol{M}}}}{\multisetSymbol{B}}{\mathbb{N}}\) that takes an element of \(\multisetSymbol{B}\) and returns its multiplicity in \(\multisetSymbol{M}\).

For example, we can identify the multiset \(\multisetSymbol{M} = \multiset{\rform{\card{r}},\rform{\card{r}},\bform{\card{b}}}\) with the function \(\boundMultiplicityFunction{\multisetSymbol{M}} = \assocArray{\mto{\rform{\card{r}}}{2},\mto{\bform{\card{b}}}{1}}\).

Formally, then, we have a bijection between multisets on \(\multisetSymbol{B}\) on one side and functions from \(\multisetSymbol{B}\) to the natural numbers on the other (**multiplicity functions**):

We'll shortly see how we can encode the natural operations on a multiset in terms of operations on such multiplicity functions.

### Translating between sets and multisets

We can identify a set \(\multisetSymbol{X}\) with a multiset in which each element \(\elemOf{\setElementSymbol{x}}{\multisetSymbol{X}}\) appears *exactly* once. For example:

In other words, we have a bijection between \(\powerSet{\multisetSymbol{B}}\), the sets on \(\multisetSymbol{B}\), on the one hand, and the multisets on \(\multisetSymbol{B}\) with maximum multiplicity 1 on the other:

\[ \powerSet{\setSymbol{B}}\bijectiveSymbol \setConstructor{\multisetSymbol{M}}{\elemOf{\multisetSymbol{M}}{\multisets{\setSymbol{B}}},\max(\boundMultiplicityFunction{\multisetSymbol{M}}) \le 1} \]Given a set \(\multisetSymbol{X}\), we define the corresponding multiset via its multiplicity function \(\boundMultiplicityFunction{\multisetSymbol{X}}\):

\[ \boundMultiplicityFunction{\multisetSymbol{X}}\defEqualSymbol \mto{\setElementSymbol{b}}{\begin{cases} 1 &\text{if } \elemOf{\setElementSymbol{b}}{\setSymbol{X}}\\ 0 &\text{otherwise} \end{cases} } \]Conversely, given a multiset defined by \(\boundMultiplicityFunction{\multisetSymbol{X}}\) with \(\max(\boundMultiplicityFunction{\multisetSymbol{X}}) \le 1\), we define the corresponding set \(\multisetSymbol{X}\) as follows:

\[ \setSymbol{X}\defEqualSymbol \setConstructor{\setElementSymbol{b}}{\elemOf{\setElementSymbol{b}}{\setSymbol{B}},\function{\boundMultiplicityFunction{\multisetSymbol{X}}}(\setElementSymbol{b}) = 1} \]### Projection and lift

We can define this bijection via two functions called **projection** and **lift**:

Lift takes a set and gives the multiset that contains each element exactly once:

\[ \lift(\setSymbol{A})\defEqualSymbol \multisetConstructor{\signedMultisetElementSymbol{x}}{\elemOf{\signedMultisetElementSymbol{x}}{\setSymbol{A}}} \]Projection takes a multiset and gives the set of its elements:

\[ \projection(\setSymbol{A})\defEqualSymbol \setConstructor{\signedMultisetElementSymbol{x}}{\elemOf{\signedMultisetElementSymbol{x}}{\setSymbol{A}}} \]Lifting and projecting a set yields the same set:

\[ \projection(\lift(\setSymbol{A}))\identicallyEqualSymbol \setSymbol{A} \]## Operations on multisets

How can we combine multisets? We'll look to sets for inspiration.

For sets \(\multisetSymbol{X},\multisetSymbol{Y}\), we can form the **union** \(\multisetSymbol{X}\setUnionSymbol \multisetSymbol{Y}\) – the set of elements that are in \(\multisetSymbol{X}\) and/or \(\multisetSymbol{Y}\), the **intersection** \(\multisetSymbol{X}\setIntersectionSymbol \multisetSymbol{Y}\) – the set of elements in both \(\multisetSymbol{X}\textAnd \multisetSymbol{Y}\), and the **relative complement** \(\multisetSymbol{X}\setRelativeComplementSymbol \multisetSymbol{Y}\) – the set of elements in \(\multisetSymbol{X}\) but not in \(\multisetSymbol{Y}\).

For multisets \(\multisetSymbol{M}\textAnd \multisetSymbol{N}\), we now define the equivalent operations **multiset union** \(\multisetSymbol{M}\multisetUnionSymbol \multisetSymbol{N}\), **multiset intersection** \(\multisetSymbol{M}\multisetIntersectionSymbol \multisetSymbol{N}\), and **multiset relative complement** \(\multisetSymbol{M}\multisetRelativeComplementSymbol \multisetSymbol{N}\). Notice the dot to distinguish these operations from their set counterparts.

We will define these in terms of the multiplicity functions \(\boundMultiplicityFunction{\multisetSymbol{M}}\textAnd \boundMultiplicityFunction{\multisetSymbol{N}}\):

\[ \begin{aligned} \boundMultiplicityFunction{\paren{\multisetSymbol{M}\multisetUnionSymbol \multisetSymbol{N}}}&\defEqualSymbol \max(\boundMultiplicityFunction{\multisetSymbol{M}},\boundMultiplicityFunction{\multisetSymbol{N}})\\ \boundMultiplicityFunction{\paren{\multisetSymbol{M}\multisetIntersectionSymbol \multisetSymbol{N}}}&\defEqualSymbol \min(\boundMultiplicityFunction{\multisetSymbol{M}},\boundMultiplicityFunction{\multisetSymbol{N}})\\ \boundMultiplicityFunction{\paren{\multisetSymbol{M}\multisetRelativeComplementSymbol \multisetSymbol{N}}}&\defEqualSymbol \max(\boundMultiplicityFunction{\multisetSymbol{M}} - \boundMultiplicityFunction{\multisetSymbol{N}},0)\end{aligned} \]Here, when we apply \(\max\textAnd \min\) to *functions*, we are creating a new function that applies that operation to their outputs e.g. \(\function{\max(\function{f},\function{g})}(\multisetElementSymbol{x})\defEqualSymbol \max(\function{f}(\multisetElementSymbol{x}),\function{g}(\multisetElementSymbol{x}))\).
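These three multiplicity-wise definitions happen to coincide with the operators that Python's `collections.Counter` provides (again using `Counter` as a stand-in for multisets): `|` is pointwise max, `&` is pointwise min, and `-` is subtraction truncated at zero:

```python
from collections import Counter

M = Counter({"r": 2, "g": 1})  # the multiset {r, r, g}
N = Counter({"r": 1, "b": 2})  # the multiset {r, b, b}

# Multiset union: pointwise max of multiplicities.
assert M | N == Counter({"r": 2, "g": 1, "b": 2})
# Multiset intersection: pointwise min (zero counts are dropped).
assert M & N == Counter({"r": 1})
# Multiset relative complement: max(m - n, 0).
assert M - N == Counter({"r": 1, "g": 1})
```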

### Examples

Here is an example of a multiset union and the corresponding operation on multiplicity functions, where the base set is \(\multisetSymbol{B} = \set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\):

\[ \begin{array}{cccccc} \multiset{\rform{\card{r}},\rform{\card{r}},\gform{\card{g}}} & \multisetUnionSymbol & \multiset{\rform{\card{r}},\bform{\card{b}},\bform{\card{b}}} & = & \multiset{\rform{\card{r}},\rform{\card{r}},\gform{\card{g}},\bform{\card{b}},\bform{\card{b}}} & \\ & & & & & \\ \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 2, & \\ & \gform{\card{g}} & \mtoSymbol & 1, & \\ & \bform{\card{b}} & \mtoSymbol & 0 & \rangle \end{csarray} & \multisetUnionSymbol & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 1, & \\ & \gform{\card{g}} & \mtoSymbol & 0, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & = & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 2, & \\ & \gform{\card{g}} & \mtoSymbol & 1, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & \end{array} \]Here's an example of an intersection:

\[ \begin{array}{cccccc} \multiset{\rform{\card{r}},\rform{\card{r}},\gform{\card{g}},\bform{\card{b}},\bform{\card{b}}} & \multisetIntersectionSymbol & \multiset{\rform{\card{r}},\bform{\card{b}},\bform{\card{b}}} & = & \multiset{\rform{\card{r}},\bform{\card{b}},\bform{\card{b}}} & \\ & & & & & \\ \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 2, & \\ & \gform{\card{g}} & \mtoSymbol & 1, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & \multisetIntersectionSymbol & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 1, & \\ & \gform{\card{g}} & \mtoSymbol & 0, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & = & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 1, & \\ & \gform{\card{g}} & \mtoSymbol & 0, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & \end{array} \]Here's an example of a relative complement:

\[ \begin{array}{cccccc} \multiset{\rform{\card{r}},\rform{\card{r}},\gform{\card{g}},\bform{\card{b}},\bform{\card{b}}} & \multisetRelativeComplementSymbol & \multiset{\rform{\card{r}},\bform{\card{b}},\bform{\card{b}}} & = & \multiset{\rform{\card{r}},\gform{\card{g}}} & \\ & & & & & \\ \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 2, & \\ & \gform{\card{g}} & \mtoSymbol & 1, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & \multisetRelativeComplementSymbol & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 1, & \\ & \gform{\card{g}} & \mtoSymbol & 0, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & = & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 1, & \\ & \gform{\card{g}} & \mtoSymbol & 1, & \\ & \bform{\card{b}} & \mtoSymbol & 0 & \rangle \end{csarray} & \end{array} \]### As extensions of set operations

The following elementary identities connect these operations on multisets to their counterparts on sets:

\[ \begin{aligned} \projection(\multisetSymbol{M}\multisetUnionSymbol \multisetSymbol{N})&= \projection(\multisetSymbol{M})\setUnionSymbol \projection(\multisetSymbol{N})\\ \projection(\multisetSymbol{M}\multisetIntersectionSymbol \multisetSymbol{N})&= \projection(\multisetSymbol{M})\setIntersectionSymbol \projection(\multisetSymbol{N})\\ \lift(\setSymbol{X}\setUnionSymbol \setSymbol{Y})&= \lift(\setSymbol{X})\multisetUnionSymbol \lift(\setSymbol{Y})\\ \lift(\setSymbol{X}\setIntersectionSymbol \setSymbol{Y})&= \lift(\setSymbol{X})\multisetIntersectionSymbol \lift(\setSymbol{Y})\end{aligned} \]This occurs because \(\max\textAnd \min\) behave like logical \(\orSymbol\) and \(\andSymbol\) when operating on multiplicities in \(\set{0,1}\). It's in this sense that our multiset operations generalize the set operations.
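These identities are easy to check mechanically. Here is a sketch, again using Python's `Counter` as a stand-in for multisets, with `lift` and `projection` written out as defined above:

```python
from collections import Counter

def lift(A: set) -> Counter:
    """The multiset containing each element of A exactly once."""
    return Counter({x: 1 for x in A})

def projection(M: Counter) -> set:
    """The set of objects with non-zero multiplicity in M."""
    return {x for x, n in M.items() if n > 0}

M = Counter("rrg")  # {r, r, g}
N = Counter("rbb")  # {r, b, b}

# Projection turns multiset union/intersection into set union/intersection...
assert projection(M | N) == projection(M) | projection(N)
assert projection(M & N) == projection(M) & projection(N)

# ...and lift turns set union/intersection into multiset union/intersection.
X, Y = {"r", "g"}, {"r", "b"}
assert lift(X | Y) == lift(X) | lift(Y)
assert lift(X & Y) == lift(X) & lift(Y)

# Lifting and then projecting recovers the original set.
assert projection(lift(X)) == X
```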

### Multiset sums

An additional operation is now available to us, which is simply to **sum** two multisets together, adding the multiplicities of each object:

\[ \boundMultiplicityFunction{\paren{\multisetSymbol{M}\multisetSumSymbol \multisetSymbol{N}}}\defEqualSymbol \boundMultiplicityFunction{\multisetSymbol{M}} + \boundMultiplicityFunction{\multisetSymbol{N}} \]

As with \(\max\textAnd \min\), we are adding functions together in the sense that \(\function{\paren{\function{f} + \function{g}}}(\multisetElementSymbol{x})\defEqualSymbol \function{f}(\multisetElementSymbol{x}) + \function{g}(\multisetElementSymbol{x})\).

Here's an example of a multiset sum:

\[ \begin{array}{cccccc} \multiset{\rform{\card{r}},\rform{\card{r}},\gform{\card{g}}} & \multisetSumSymbol & \multiset{\rform{\card{r}},\bform{\card{b}},\bform{\card{b}}} & = & \multiset{\rform{\card{r}},\rform{\card{r}},\rform{\card{r}},\gform{\card{g}},\bform{\card{b}},\bform{\card{b}}} & \\ & & & & & \\ \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 2, & \\ & \gform{\card{g}} & \mtoSymbol & 1, & \\ & \bform{\card{b}} & \mtoSymbol & 0 & \rangle \end{csarray} & \multisetSumSymbol & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 1, & \\ & \gform{\card{g}} & \mtoSymbol & 0, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & = & \begin{csarray}{rrcll}{abbba} \langle & \rform{\card{r}} & \mtoSymbol & 3, & \\ & \gform{\card{g}} & \mtoSymbol & 1, & \\ & \bform{\card{b}} & \mtoSymbol & 2 & \rangle \end{csarray} & \end{array} \]### Multiples of a multiset

We introduce a convenient notation for the act of adding a multiset to itself \(\sym{n}\) times:

\[ \repeatedMultiset{\sym{n}}{\multisetSymbol{M}}\syntaxEqualSymbol \overRepeated{\multisetSymbol{M}\multisetSumSymbol \multisetSymbol{M}\multisetSumSymbol \ellipsis \multisetSumSymbol \multisetSymbol{M}}{\sym{n}} \]As an example, we have: \(\repeatedMultiset{3}{\multiset{\rform{\card{r}},\bform{\card{b}}}}\syntaxEqualSymbol \multiset{\rform{\card{r}},\rform{\card{r}},\rform{\card{r}},\bform{\card{b}},\bform{\card{b}},\bform{\card{b}}}\)
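`Counter` has no built-in scalar multiple, so as a sketch we can write a hypothetical `times` helper (the name is ours) that scales every multiplicity by n, and check that it agrees with the repeated multiset sum:

```python
from collections import Counter

def times(n: int, M: Counter) -> Counter:
    """n . M: scale every multiplicity by n (n = 0 gives the empty multiset)."""
    return Counter({x: n * k for x, k in M.items()}) if n > 0 else Counter()

M = Counter("rb")  # {r, b}

# 3 . {r, b} agrees with the three-fold sum {r, b} + {r, b} + {r, b}.
assert times(3, M) == M + M + M == Counter("rrrbbb")
```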

In terms of multiplicity functions, we simply multiply the multiplicity function by \(\sym{n}\):

\[ \boundMultiplicityFunction{\paren{\repeatedMultiset{\sym{n}}{\multisetSymbol{M}}}}\defEqualSymbol \sym{n} \, \boundMultiplicityFunction{\multisetSymbol{M}} \]We have the following properties of multiples:

\[ \begin{aligned} \repeatedMultiset{\sym{n}}{\paren{\multisetSymbol{A}\multisetSumSymbol \multisetSymbol{B}}}&= \sym{n} \, \multisetSymbol{A}\multisetSumSymbol \sym{n} \, \multisetSymbol{B}\\ \repeatedMultiset{\sym{n}}{\paren{\multisetSymbol{A}\multisetUnionSymbol \multisetSymbol{B}}}&= \sym{n} \, \multisetSymbol{A}\multisetUnionSymbol \sym{n} \, \multisetSymbol{B}\\ \repeatedMultiset{\sym{n}}{\paren{\multisetSymbol{A}\multisetIntersectionSymbol \multisetSymbol{B}}}&= \sym{n} \, \multisetSymbol{A}\multisetIntersectionSymbol \sym{n} \, \multisetSymbol{B}\\ \repeatedMultiset{\paren{\sym{n} + \sym{m}}}{\multisetSymbol{A}}&= \repeatedMultiset{\sym{n}}{\multisetSymbol{A}}\multisetSumSymbol \repeatedMultiset{\sym{m}}{\multisetSymbol{A}}\\ \repeatedMultiset{\sym{n}}{\paren{\repeatedMultiset{\sym{m}}{\multisetSymbol{A}}}}&= \repeatedMultiset{\paren{\sym{n} \, \sym{m}}}{\multisetSymbol{A}}\\ \repeatedMultiset{1}{\multisetSymbol{A}}&= \multisetSymbol{A}\\ \repeatedMultiset{0}{\multisetSymbol{A}}&= \multiset{}\end{aligned} \]### Cardinality of a multiset

The **cardinality** of a multiset \(\multisetSymbol{A}\), written \(\multisetCardinality{\multisetSymbol{A}}\), is the number of elements of the multiset – in other words, the total multiplicity of all objects in the multiset. We can also express it as the total of the multiplicity function:

\[ \multisetCardinality{\multisetSymbol{A}}\defEqualSymbol \sum _{\elemOf{\multisetElementSymbol{x}}{\setSymbol{B}}}\function{\boundMultiplicityFunction{\multisetSymbol{A}}}(\multisetElementSymbol{x}) \]

We show some examples below:

\[ \begin{aligned} \multisetCardinality{\multiset{}}&= 0\\ \multisetCardinality{\multiset{\rform{\card{r}}}}&= 1\\ \multisetCardinality{\multiset{\rform{\card{r}},\rform{\card{r}}}}&= 2\\ \multisetCardinality{\multiset{\rform{\card{r}},\rform{\card{r}},\bform{\card{b}}}}&= 3\\ \multisetCardinality{\multiset{\bform{\card{b}},\bform{\card{b}},\bform{\card{b}}}}&= 3\end{aligned} \]The following identity is easily verified:

\[ \begin{aligned} \multisetCardinality{\multisetSymbol{A}\multisetSumSymbol \multisetSymbol{B}}&= \multisetCardinality{\multisetSymbol{A}} + \multisetCardinality{\multisetSymbol{B}}\end{aligned} \]So are the following inequalities:

\[ \begin{nsarray}{c} \multisetCardinality{\multisetSymbol{A}\multisetIntersectionSymbol \multisetSymbol{B}} \le \multisetCardinality{\multisetSymbol{A}},\multisetCardinality{\multisetSymbol{B}} \le \multisetCardinality{\multisetSymbol{A}\multisetUnionSymbol \multisetSymbol{B}}\\ \multisetCardinality{\multisetSymbol{A}} - \multisetCardinality{\multisetSymbol{B}} \le \multisetCardinality{\multisetSymbol{A}\multisetRelativeComplementSymbol \multisetSymbol{B}} \le \multisetCardinality{\multisetSymbol{A}} \le \multisetCardinality{\multisetSymbol{A}\multisetUnionSymbol \multisetSymbol{B}} \le \multisetCardinality{\multisetSymbol{A}} + \multisetCardinality{\multisetSymbol{B}} \end{nsarray} \]## Multiset-multiplicity duality

We saw that multisets can be represented one-to-one by multiplicity functions.

There is a bijection between \(\multisets{\setSymbol{X}}\), the set of *finite* multisets on a base set \(\setSymbol{X}\), and the **integrable** functions from \(\setSymbol{X}\) to the natural numbers. By this we mean the set of functions with finite total. We'll write this set of functions as \(\finiteTotalFunctionSpace{\setSymbol{X}}{\baseField{\mathbb{N}}}\), because it necessarily consists of those functions in \(\functionSpace{\setSymbol{X}}{\baseField{\mathbb{N}}}\) that are only non-zero on a finite subset of \(\setSymbol{X}\):

Then the bijection is:

\[ \begin{aligned} \multisets{\setSymbol{X}}& \approx \finiteTotalFunctionSpace{\setSymbol{X}}{\baseField{\mathbb{N}}}\\ \multisetSymbol{M}& \approx \boundMultiplicityFunction{\multisetSymbol{M}}\end{aligned} \]More importantly, this bijection is an *isomorphism*, because it preserves the multiset operations of union, intersection, sum, etc., which we revisit here:

We say that the multiset \(\multisetSymbol{M}\) is **dual** to the multiplicity function \(\boundMultiplicityFunction{\multisetSymbol{M}}\).

We can express this isomorphism as an **isomorphism of realms**, where a **realm** is a set with the algebraic structure of sums, unions, and intersections. We will not enter into the realm of realms, however.
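The duality is concrete: it makes no difference whether we operate on multisets directly or on their multiplicity functions pointwise. A sketch, comparing `Counter` operations against explicit pointwise operations on dictionaries over an assumed base set `B` (`pointwise` and `to_multiset` are our names):

```python
from collections import Counter

B = {"r", "g", "b"}  # the base set

def pointwise(op, f: dict, g: dict) -> dict:
    """Combine two multiplicity functions B -> N with a pointwise operation."""
    return {x: op(f.get(x, 0), g.get(x, 0)) for x in B}

def to_multiset(f: dict) -> Counter:
    """The multiset dual to a multiplicity function (dropping zero entries)."""
    return Counter({x: n for x, n in f.items() if n > 0})

fM = {"r": 2, "g": 1}  # multiplicity function of {r, r, g}
fN = {"r": 1, "b": 2}  # multiplicity function of {r, b, b}
M, N = to_multiset(fM), to_multiset(fN)

# The bijection M <-> f_M preserves union (max), intersection (min), and sum (+).
assert M | N == to_multiset(pointwise(max, fM, fN))
assert M & N == to_multiset(pointwise(min, fM, fN))
assert M + N == to_multiset(pointwise(lambda a, b: a + b, fM, fN))
```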

## Multiset constructors

Let's consider a general multiset constructor:

\[ \multisetConstructor{\function{F}(\multisetElementSymbol{x}_1,\multisetElementSymbol{x}_2,\ellipsis ,\multisetElementSymbol{x}_{\sym{n}})}{\elemOf{\multisetElementSymbol{x}_1}{\multisetSymbol{M}_1},\elemOf{\multisetElementSymbol{x}_2}{\multisetSymbol{M}_2},\ellipsis ,\elemOf{\multisetElementSymbol{x}_{\sym{n}}}{\multisetSymbol{M}_{\sym{n}}}} \]Here, \(\function{F}\) is some function that computes an output value from inputs \(\multisetElementSymbol{x}_1,\multisetElementSymbol{x}_2,\ellipsis ,\multisetElementSymbol{x}_{\sym{n}}\). Where do these inputs come from? The values of \(\multisetElementSymbol{x}_1,\multisetElementSymbol{x}_2,\ellipsis ,\multisetElementSymbol{x}_{\sym{n}}\) are drawn from multisets (or sets) \(\multisetSymbol{M}_1,\multisetSymbol{M}_2,\ellipsis ,\multisetSymbol{M}_{\sym{n}}\) via the **bindings** \(\elemOf{\multisetElementSymbol{x}_{\sym{i}}}{\multisetSymbol{M}_{\sym{i}}}\). Bindings are indications that the multiset constructor should "draw an element from \(\multisetSymbol{M}_{\sym{i}}\) and store it in \(\multisetElementSymbol{x}_{\sym{i}}\)". The multiset constructor performs bindings in *all possible ways* in order to build a multiset of results.

This is very similar to how ordinary set constructor notation works, except for one important distinction. In set constructor notation, a particular value can only be drawn from a set *once* – or more precisely, if we draw a particular value more than once, we would correspondingly collect a particular result \(\function{F}(\multisetElementSymbol{x}_1,\multisetElementSymbol{x}_2,\ellipsis ,\multisetElementSymbol{x}_{\sym{n}})\) more than once, but this would still result in the same constructed set! In contrast, multisets *care about multiplicity*. Therefore we draw an element from each \(\multisetSymbol{M}_{\sym{i}}\) exactly as many times as it appears. Here we show this difference for a simple example:

Similarly, when \(\function{F}\) produces a particular result multiple times, it will appear in the constructed multiset with the corresponding multiplicity. In the following example, \(\modFunction(\setElementSymbol{x},2)\) outputs the value \(0\) three times as \(\setElementSymbol{x}\) is bound to the integers \(\zeroTo{4}\), because \(\modFunction(0,2) = \modFunction(2,2) = \modFunction(4,2) = 0\):

\[ \multisetConstructor{\modFunction(\setElementSymbol{x},2)}{\elemOf{\setElementSymbol{x}}{\set{0,1,2,3,4}}} = \multiset{0,0,0,1,1} \]The set version of this construction doesn't capture this multiplicity:

\[ \setConstructor{\modFunction(\setElementSymbol{x},2)}{\elemOf{\setElementSymbol{x}}{\set{0,1,2,3,4}}} = \set{0,1} \]## Images and preimages of sets

We now review some basic facts about images and preimages of functions, to set the stage for analogous definitions for multisets. These ideas can be found in any undergraduate text on set theory. If this is familiar, you can jump to [[[#Images and preimages of multisets]]].

#### Definition

Consider a function \(\functionSignature{\function{f}}{\setSymbol{X}}{\setSymbol{Y}}\). We wish to apply it "set-wise", asking how it acts on a *set* of inputs (\(\setSymbol{A} \subseteq \setSymbol{X}\)) to give a set of outputs (\(\setSymbol{B} \subseteq \setSymbol{Y}\)). We similarly might wish to know what set of inputs corresponds to a given set of outputs.

To formalize these ideas, we define the **image** and **preimage** functions of \(\function{f}\), written \(\imageModifier{\function{f}}\) and \(\preimageModifier{\function{f}}\) respectively. They have the following signatures:

\[ \functionSignature{\function{\imageModifier{\function{f}}}}{\powerSet{\setSymbol{X}}}{\powerSet{\setSymbol{Y}}}\qquad \functionSignature{\function{\preimageModifier{\function{f}}}}{\powerSet{\setSymbol{Y}}}{\powerSet{\setSymbol{X}}} \]

We define them using set constructor notation: the image function \(\imageModifier{\function{f}}\) collects the outputs corresponding to the provided set of inputs, and the preimage function \(\preimageModifier{\function{f}}\) collects the inputs corresponding to the provided set of outputs:

\[ \begin{aligned} \function{\imageModifier{\function{f}}}(\setSymbol{A})&\defEqualSymbol \setConstructor{\function{f}(\setElementSymbol{x})}{\elemOf{\setElementSymbol{x}}{\setSymbol{A}}}\\ \function{\preimageModifier{\function{f}}}(\setSymbol{B})&\defEqualSymbol \setConstructor{\setElementSymbol{x}}{\elemOf{\setElementSymbol{x}}{\setSymbol{X}},\elemOf{\function{f}(\setElementSymbol{x})}{\setSymbol{B}}}\end{aligned} \]#### Example

Consider the function \(\functionSignature{\function{f}}{\set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}{\set{\rgform{\card{x}},\gbform{\card{y}}}}\) defined by \(\function{f} = \assocArray{\mto{\rform{\card{r}}}{\rgform{\card{x}}},\mto{\gform{\card{g}}}{\rgform{\card{x}}},\mto{\bform{\card{b}}}{\gbform{\card{y}}}}\).
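As a concrete check, this example function and its image and preimage can be written as short set comprehensions (a sketch; the names `f`, `image`, and `preimage` are ours):

```python
# The example function f, as a mapping from {r, g, b} to {x, y}.
f = {"r": "x", "g": "x", "b": "y"}

def image(A: set) -> set:
    """Collect the outputs of f on the input set A."""
    return {f[a] for a in A}

def preimage(B: set) -> set:
    """Collect the inputs of f whose output lands in B."""
    return {a for a in f if f[a] in B}

assert image({"r"}) == {"x"}
assert image({"r", "g", "b"}) == {"x", "y"}
assert preimage({"x"}) == {"r", "g"}
assert preimage({"x", "y"}) == {"r", "g", "b"}
```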

We’ll say that e.g. \(\set{\rgform{\card{x}}}\) is the **image** of \(\set{\rform{\card{r}}}\) under \(\function{f}\), meaning that \(\function{\imageModifier{\function{f}}}(\set{\rform{\card{r}}}) = \set{\rgform{\card{x}}}\) (for singletons, people sometimes elide the braces). For \(\function{f}\), we have the following images:

Since we've listed the images of all subsets of \(\set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\), we can collect these together to give the explicit value of \(\imageModifier{\function{f}}\):

\[ \imageModifier{\function{f}} = \begin{csarray}{rrclrclrclrcll}{abbbcbbcbbcbba} \langle & \set{} & \mtoSymbol & \set{}, & \set{\rform{\card{r}}} & \mtoSymbol & \set{\rgform{\card{x}}}, & \set{\gform{\card{g}}} & \mtoSymbol & \set{\rgform{\card{x}}}, & \set{\bform{\card{b}}} & \mtoSymbol & \set{\gbform{\card{y}}}, & \\ & \set{\rform{\card{r}},\gform{\card{g}}} & \mtoSymbol & \set{\rgform{\card{x}}}, & \set{\rform{\card{r}},\bform{\card{b}}} & \mtoSymbol & \set{\rgform{\card{x}},\gbform{\card{y}}}, & \set{\gform{\card{g}},\bform{\card{b}}} & \mtoSymbol & \set{\rgform{\card{x}},\gbform{\card{y}}}, & \set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}} & \mtoSymbol & \set{\rgform{\card{x}},\gbform{\card{y}}} & \rangle \end{csarray} \]Note the important but elementary fact that the image function \(\imageModifier{\function{f}}\) preserves unions for *any* function \(\function{f}\):

\[ \function{\imageModifier{\function{f}}}(\setSymbol{A}\setUnionSymbol \setSymbol{B}) = \function{\imageModifier{\function{f}}}(\setSymbol{A})\setUnionSymbol \function{\imageModifier{\function{f}}}(\setSymbol{B}) \]We show one example of this identity in action:

\[ \begin{csarray}{rclcccc}{acccccc} \imageModifier{\function{f}}(\set{\rform{\card{r}}} & \cup & \set{\bform{\card{b}}}) & = & \function{\imageModifier{\function{f}}}(\set{\rform{\card{r}},\bform{\card{b}}}) & = & \set{\rgform{\card{x}},\gbform{\card{y}}}\\[0.5em] \function{\imageModifier{\function{f}}}(\set{\rform{\card{r}}}) & \cup & \function{\imageModifier{\function{f}}}(\set{\bform{\card{b}}}) & = & \set{\rgform{\card{x}}}\setUnionSymbol \set{\gbform{\card{y}}} & = & \set{\rgform{\card{x}},\gbform{\card{y}}} \end{csarray} \]However, the image function does not preserve intersections or relative complements. Here are counterexamples for each case:

\[ \begin{csarray}{rclcccc}{acccccc} \imageModifier{\function{f}}(\set{\rform{\card{r}}} & \cap & \set{\gform{\card{g}}}) & = & \function{\imageModifier{\function{f}}}(\set{}) & = & \set{}\\[0.5em] \function{\imageModifier{\function{f}}}(\set{\rform{\card{r}}}) & \cap & \function{\imageModifier{\function{f}}}(\set{\gform{\card{g}}}) & = & \set{\rgform{\card{x}}}\setIntersectionSymbol \set{\rgform{\card{x}}} & = & \set{\rgform{\card{x}}}\\[1em] \imageModifier{\function{f}}(\set{\rform{\card{r}}} & \setminus & \set{\gform{\card{g}}}) & = & \function{\imageModifier{\function{f}}}(\set{\rform{\card{r}}}) & = & \set{\rgform{\card{x}}}\\[0.5em] \function{\imageModifier{\function{f}}}(\set{\rform{\card{r}}}) & \setminus & \function{\imageModifier{\function{f}}}(\set{\gform{\card{g}}}) & = & \set{\rgform{\card{x}}}\setRelativeComplementSymbol \set{\rgform{\card{x}}} & = & \set{} \end{csarray} \]In general, however, we *do* have the following inclusions for intersections and relative complements:

\[ \begin{aligned} \function{\imageModifier{\function{f}}}(\setSymbol{A}\setIntersectionSymbol \setSymbol{B})& \subseteq \function{\imageModifier{\function{f}}}(\setSymbol{A})\setIntersectionSymbol \function{\imageModifier{\function{f}}}(\setSymbol{B})\\ \function{\imageModifier{\function{f}}}(\setSymbol{A})\setRelativeComplementSymbol \function{\imageModifier{\function{f}}}(\setSymbol{B})& \subseteq \function{\imageModifier{\function{f}}}(\setSymbol{A}\setRelativeComplementSymbol \setSymbol{B})\end{aligned} \]

Let's examine the preimage function \(\preimageModifier{\function{f}}\), which again we can write as a literal mapping for our example:

\[ \preimageModifier{\function{f}} = \assocArray{\mto{\set{}}{\set{}},\mto{\set{\rgform{\card{x}}}}{\set{\rform{\card{r}},\gform{\card{g}}}},\mto{\set{\gbform{\card{y}}}}{\set{\bform{\card{b}}}},\mto{\set{\rgform{\card{x}},\gbform{\card{y}}}}{\set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}} \]The preimage function is quite nice, since it preserves intersections, unions, and relative complements for *any* function:

\[ \begin{aligned} \function{\preimageModifier{\function{f}}}(\setSymbol{A}\setUnionSymbol \setSymbol{B})&= \function{\preimageModifier{\function{f}}}(\setSymbol{A})\setUnionSymbol \function{\preimageModifier{\function{f}}}(\setSymbol{B})\\ \function{\preimageModifier{\function{f}}}(\setSymbol{A}\setIntersectionSymbol \setSymbol{B})&= \function{\preimageModifier{\function{f}}}(\setSymbol{A})\setIntersectionSymbol \function{\preimageModifier{\function{f}}}(\setSymbol{B})\\ \function{\preimageModifier{\function{f}}}(\setSymbol{A}\setRelativeComplementSymbol \setSymbol{B})&= \function{\preimageModifier{\function{f}}}(\setSymbol{A})\setRelativeComplementSymbol \function{\preimageModifier{\function{f}}}(\setSymbol{B})\end{aligned} \]

Here we show some examples of these identities for our specific function, but leave the general proofs as exercises:

\[ \begin{csarray}{rclcccc}{acccccc} \preimageModifier{\function{f}}(\set{\rgform{\card{x}}} & \cup & \set{\gbform{\card{y}}}) & = & \function{\preimageModifier{\function{f}}}(\set{\rgform{\card{x}},\gbform{\card{y}}}) & = & \set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\\[0.5em] \function{\preimageModifier{\function{f}}}(\set{\rgform{\card{x}}}) & \cup & \function{\preimageModifier{\function{f}}}(\set{\gbform{\card{y}}}) & = & \set{\rform{\card{r}},\gform{\card{g}}}\setUnionSymbol \set{\bform{\card{b}}} & = & \set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\\[1em] \preimageModifier{\function{f}}(\set{\rgform{\card{x}}} & \cap & \set{\rgform{\card{x}},\gbform{\card{y}}}) & = & \function{\preimageModifier{\function{f}}}(\set{\rgform{\card{x}}}) & = & \set{\rform{\card{r}},\gform{\card{g}}}\\[0.5em] \function{\preimageModifier{\function{f}}}(\set{\rgform{\card{x}}}) & \cap & \function{\preimageModifier{\function{f}}}(\set{\rgform{\card{x}},\gbform{\card{y}}}) & = & \set{\rform{\card{r}},\gform{\card{g}}}\setIntersectionSymbol \set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}} & = & \set{\rform{\card{r}},\gform{\card{g}}}\\[1em] \preimageModifier{\function{f}}(\set{\rgform{\card{x}},\gbform{\card{y}}} & \setminus & \set{\gbform{\card{y}}}) & = & \function{\preimageModifier{\function{f}}}(\set{\rgform{\card{x}}}) & = & \set{\rform{\card{r}},\gform{\card{g}}}\\[0.5em] \function{\preimageModifier{\function{f}}}(\set{\rgform{\card{x}},\gbform{\card{y}}}) & \setminus & \function{\preimageModifier{\function{f}}}(\set{\gbform{\card{y}}}) & = & \set{\rform{\card{r}},\gform{\card{g}}}\setRelativeComplementSymbol \set{\bform{\card{b}}} & = & \set{\rform{\card{r}},\gform{\card{g}}} \end{csarray} \]We now extend the image and preimage functions to the setting of multisets.
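These preservation identities can also be verified exhaustively for our small example, by iterating over every pair of subsets of the codomain. A sketch (the helper names are ours; `subsets` is the standard powerset recipe):

```python
from itertools import chain, combinations

# The running example function from {r, g, b} to {x, y}.
f = {"r": "x", "g": "x", "b": "y"}
codomain = {"x", "y"}

def preimage(B: set) -> set:
    """Collect the inputs of f whose output lands in B."""
    return {a for a in f if f[a] in B}

def subsets(S):
    """All subsets of S, from the empty set up to S itself."""
    S = list(S)
    return [set(c) for c in chain.from_iterable(
        combinations(S, k) for k in range(len(S) + 1))]

# Preimage preserves union, intersection, and relative complement
# for every pair of subsets of the codomain.
for A in subsets(codomain):
    for B in subsets(codomain):
        assert preimage(A | B) == preimage(A) | preimage(B)
        assert preimage(A & B) == preimage(A) & preimage(B)
        assert preimage(A - B) == preimage(A) - preimage(B)
```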

## Images and preimages of multisets

The **multi-image function** \(\multiImageModifier{\function{f}}\) and **multi-preimage function** \(\multiPreimageModifier{\function{f}}\) are functions that play the same role as the image function \(\imageModifier{\function{f}}\) and preimage function \(\preimageModifier{\function{f}}\), but operate on multisets on the domain and codomain:

\[ \functionSignature{\function{\multiImageModifier{\function{f}}}}{\multisets{\setSymbol{X}}}{\multisets{\setSymbol{Y}}}\qquad \functionSignature{\function{\multiPreimageModifier{\function{f}}}}{\multisets{\setSymbol{Y}}}{\multisets{\setSymbol{X}}} \]

They are defined as follows, where \(\elemOf{\multisetSymbol{A}}{\multisets{\setSymbol{X}}}\textAnd \elemOf{\multisetSymbol{B}}{\multisets{\setSymbol{Y}}}\):

\[ \begin{aligned} \function{\multiImageModifier{\function{f}}}(\setSymbol{A})&\defEqualSymbol \multisetConstructor{\function{f}(\setElementSymbol{x})}{\elemOf{\setElementSymbol{x}}{\setSymbol{A}}}\\ \function{\multiPreimageModifier{\function{f}}}(\setSymbol{B})&\defEqualSymbol \multisetConstructor{\setElementSymbol{x}}{\elemOf{\setElementSymbol{x}}{\setSymbol{X}},\elemOf{\function{f}(\setElementSymbol{x})}{\setSymbol{B}}}\end{aligned} \]The only difference from the definitions of the image and preimage functions is that we are now using multiset constructor notation, rather than set constructor notation.

#### Example

Let's compute the multi-image function for an example function \(\function{f}\). We'll use the same function \(\function{f}\) we examined [[[above:#Images and preimages of sets]]], which was \(\functionSignature{\function{f}}{\set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}{\set{\rgform{\card{x}},\gbform{\card{y}}}}\) defined by \(\function{f} = \assocArray{\mto{\rform{\card{r}}}{\rgform{\card{x}}},\mto{\gform{\card{g}}}{\rgform{\card{x}}},\mto{\bform{\card{b}}}{\gbform{\card{y}}}}\).

Then the explicit value of the multi-image function \(\multiImageModifier{\function{f}}\) is given by:

\[ \multiImageModifier{\function{f}} = \begin{csarray}{rrclrclrclrcll}{abbbcbbcbbcbba} \langle & \multiset{} & \mtoSymbol & \multiset{}, & \multiset{\rform{\card{r}}} & \mtoSymbol & \multiset{\rgform{\card{x}}}, & \multiset{\gform{\card{g}}} & \mtoSymbol & \multiset{\rgform{\card{x}}}, & \multiset{\bform{\card{b}}} & \mtoSymbol & \multiset{\gbform{\card{y}}}, & \\ & \multiset{\rform{\card{r}},\gform{\card{g}}} & \mtoSymbol & \multiset{\rgform{\card{x}},\rgform{\card{x}}}, & \multiset{\rform{\card{r}},\bform{\card{b}}} & \mtoSymbol & \multiset{\rgform{\card{x}},\gbform{\card{y}}}, & \multiset{\gform{\card{g}},\bform{\card{b}}} & \mtoSymbol & \multiset{\rgform{\card{x}},\gbform{\card{y}}}, & \multiset{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}} & \mtoSymbol & \multiset{\rgform{\card{x}},\rgform{\card{x}},\gbform{\card{y}}} & \rangle \end{csarray} \]Note the only differences here between \(\multiImageModifier{\function{f}}\textAnd \imageModifier{\function{f}}\) are the behaviors on \(\multiset{\rform{\card{r}},\gform{\card{g}}}\textAnd \multiset{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\), whose elements \(\rform{\card{r}}\textAnd \gform{\card{g}}\) collide under \(\function{f}\), producing \(\rgform{\card{x}}\) with multiplicity 2.
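The multi-image is exactly a multiset constructor in code: with `Counter` standing in for multisets, `A.elements()` draws each object as many times as its multiplicity, which is what the binding in the definition requires (a sketch; `multi_image` is our name):

```python
from collections import Counter

# The running example function from {r, g, b} to {x, y}.
f = {"r": "x", "g": "x", "b": "y"}

def multi_image(A: Counter) -> Counter:
    """Apply f elementwise, keeping multiplicities (elements() repeats each object)."""
    return Counter(f[a] for a in A.elements())

# Both r and g map to x, so the multi-image of {r, g} contains x twice...
assert multi_image(Counter("rg")) == Counter({"x": 2})
# ...where the ordinary image of {r, g} would just be the set {x}.
assert multi_image(Counter("rgb")) == Counter({"x": 2, "y": 1})
```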

The multi-preimage function \(\multiPreimageModifier{\function{f}}\) has explicit value:

\[ \multiPreimageModifier{\function{f}} = \assocArray{\mto{\multiset{}}{\multiset{}},\mto{\multiset{\rgform{\card{x}}}}{\multiset{\rform{\card{r}},\gform{\card{g}}}},\mto{\multiset{\gbform{\card{y}}}}{\multiset{\bform{\card{b}}}},\mto{\multiset{\rgform{\card{x}},\gbform{\card{y}}}}{\multiset{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}} \]Note there is no difference between \(\multiPreimageModifier{\function{f}}\textAnd \preimageModifier{\function{f}}\) if we replace multisets with the corresponding sets:

\[ \preimageModifier{\function{f}} = \assocArray{\mto{\set{}}{\set{}},\mto{\set{\rgform{\card{x}}}}{\set{\rform{\card{r}},\gform{\card{g}}}},\mto{\set{\gbform{\card{y}}}}{\set{\bform{\card{b}}}},\mto{\set{\rgform{\card{x}},\gbform{\card{y}}}}{\set{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}} \]What is the reason for this? First, observe that pre-images of different singleton sets are always disjoint, because \(\function{f}\) is a single-valued function: any element \(\setElementSymbol{z}\) in the intersection \(\function{\preimageModifier{\function{f}}}(\set{\setElementSymbol{x}})\setIntersectionSymbol \function{\preimageModifier{\function{f}}}(\set{\setElementSymbol{y}})\) would satisfy both \(\function{f}(\setElementSymbol{z}) = \setElementSymbol{x}\textAnd \function{f}(\setElementSymbol{z}) = \setElementSymbol{y}\), which is impossible when \(\setElementSymbol{x} \neq \setElementSymbol{y}\). Therefore, the multiplicities of objects in \(\function{\multiPreimageModifier{\function{f}}}(\setSymbol{X})\) cannot exceed 1, and so the multiset \(\function{\multiPreimageModifier{\function{f}}}(\setSymbol{X})\) projects to the set \(\function{\preimageModifier{\function{f}}}(\setSymbol{X})\):

\[ \projection(\function{\multiPreimageModifier{\function{f}}}(\setSymbol{X})) = \function{\preimageModifier{\function{f}}}(\setSymbol{X}) \]### Multiplicity functions of multi-images

Let us examine the behavior of \(\multiImageModifier{\function{f}}\) from the point of view of the multiplicity function of the output multiset \(\function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})\) for a particular input multiset \(\elemOf{\multisetSymbol{A}}{\multisets{\setSymbol{X}}}\). In other words, we wish to understand the function \(\functionSignature{\function{\boundMultiplicityFunction{\function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})}}}{\setSymbol{Y}}{\mathbb{N}}\).

Simply put, \(\boundMultiplicityFunction{\function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})}\) will count how many times any \(\elemOf{\multisetElementSymbol{y}}{\setSymbol{Y}}\) occurs in the multi-image \(\function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})\). This is easy to define as a cardinality:

\[ \function{\boundMultiplicityFunction{\function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})}}(\setElementSymbol{y}) = \function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})\multisetMultiplicitySymbol \setElementSymbol{y} = \cardinalityConstructor{\function{f}(\setElementSymbol{x}) = \setElementSymbol{y}}{\elemOf{\setElementSymbol{x}}{\multisetSymbol{A}}} \]It is clear that the number of ways that \(\function{f}(\setElementSymbol{x}) = \setElementSymbol{y}\) can be true in the multiset sum \(\multisetSymbol{A}\multisetSumSymbol \primed{\multisetSymbol{A}}\) is the sum of the number of ways it can be true in \(\multisetSymbol{A}\) and the number of ways it can be true in \(\primed{\multisetSymbol{A}}\):

\[ \boundMultiplicityFunction{\function{\multiImageModifier{\function{f}}}(\multisetSymbol{A}\multisetSumSymbol \primed{\multisetSymbol{A}})} = \boundMultiplicityFunction{\function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})} + \boundMultiplicityFunction{\function{\multiImageModifier{\function{f}}}(\primed{\multisetSymbol{A}})} \]Therefore we have that multi-image functions preserve multiset sums, or – speaking more geometrically – are "linear":

\[ \function{\multiImageModifier{\function{f}}}(\multisetSymbol{A}\multisetSumSymbol \primed{\multisetSymbol{A}}) = \function{\multiImageModifier{\function{f}}}(\multisetSymbol{A})\multisetSumSymbol \function{\multiImageModifier{\function{f}}}(\primed{\multisetSymbol{A}}) \]The same is *not* true for multiset intersection and relative complement, for the same reason that ordinary image functions do not preserve set intersection and relative complement.
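Both the linearity and its failure for intersection can be checked concretely. Here is a small sketch, again modeling multisets as `collections.Counter` (an assumption of the sketch), whose `+` operator is exactly multiset sum and whose `&` operator is pointwise minimum, i.e. multiset intersection:

```python
from collections import Counter

def multi_image(f, A):
    # Lifted f over a multiset represented as a Counter
    out = Counter()
    for x, m in A.items():
        out[f(x)] += m
    return out

f = {'r': 'x', 'g': 'x', 'b': 'y'}.get
A = Counter({'r': 2, 'b': 1})
B = Counter({'r': 1, 'g': 3})

# "Linearity": the multi-image of a multiset sum is the sum of multi-images
assert multi_image(f, A + B) == multi_image(f, A) + multi_image(f, B)

# Intersection (& = pointwise min) is NOT preserved in general
assert multi_image(f, A & B) != multi_image(f, A) & multi_image(f, B)
```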

## Multiset products

In this section we'll establish how to multiply two multisets together. Doing this will require us to choose a function with appropriate properties that combines elements from the two multisets to yield a resulting multiset.

### Images of multiple arguments

In the previous section we defined how we can "lift" a function \(\functionSignature{\function{f}}{\setSymbol{X}}{\setSymbol{Y}}\) between sets \(\setSymbol{X},\setSymbol{Y}\) to the multi-image \(\functionSignature{\function{\multiImageModifier{\function{f}}}}{\multisets{\setSymbol{X}}}{\multisets{\setSymbol{Y}}}\) between multisets on \(\setSymbol{X},\setSymbol{Y}\):

\[ \begin{aligned} \function{\multiImageModifier{\function{f}}}(\setSymbol{A})&\defEqualSymbol \multisetConstructor{\function{f}(\setElementSymbol{x})}{\elemOf{\setElementSymbol{x}}{\setSymbol{A}}}\end{aligned} \]We can easily extend this idea to a function of \(\sym{n}\) arguments \(\functionSignature{\function{g}}{\tuple{\setSymbol{X}_1,\setSymbol{X}_2,\ellipsis ,\setSymbol{X}_{\sym{n}}}}{\setSymbol{Y}}\), lifting it into a function \(\functionSignature{\function{\multiImageModifier{\function{g}}}}{\tuple{\multisets{\setSymbol{X}_1},\multisets{\setSymbol{X}_2},\ellipsis ,\multisets{\setSymbol{X}_{\sym{n}}}}}{\multisets{\setSymbol{Y}}}\):

\[ \begin{aligned} \function{\multiImageModifier{\function{g}}}(\setSymbol{A}_1,\setSymbol{A}_2,\ellipsis ,\setSymbol{A}_{\sym{n}})&\defEqualSymbol \multisetConstructor{\function{g}(\setElementSymbol{x}_1,\setElementSymbol{x}_2,\ellipsis ,\setElementSymbol{x}_{\sym{n}})}{\elemOf{\setElementSymbol{x}_1}{\setSymbol{A}_1},\elemOf{\setElementSymbol{x}_2}{\setSymbol{A}_2},\ellipsis ,\elemOf{\setElementSymbol{x}_{\sym{n}}}{\setSymbol{A}_{\sym{n}}}}\end{aligned} \]For similar reasons to the single-argument case, \(\multiImageModifier{\function{g}}\) preserves multiset sums in any of its arguments individually – it is "multilinear".
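The \(\sym{n}\)-ary lift can be sketched generically: one copy of \(\function{g}(\setElementSymbol{x}_1,\ellipsis ,\setElementSymbol{x}_{\sym{n}})\) is produced per choice of copies of the arguments, so multiplicities multiply across arguments. This is an illustrative sketch, again using `Counter` as the multiset model:

```python
from collections import Counter
from itertools import product
from math import prod

def multi_image_n(g, *As):
    # Lift n-ary g: iterate over all combinations of (element, multiplicity)
    # pairs, one from each argument multiset; multiplicities multiply.
    out = Counter()
    for combo in product(*(A.items() for A in As)):
        xs = [x for x, _ in combo]
        out[g(*xs)] += prod(m for _, m in combo)
    return out

A = Counter({1: 2, 2: 1})   # the multiset <1, 1, 2>
B = Counter({10: 3})        # the multiset <10, 10, 10>
print(multi_image_n(lambda a, b: a + b, A, B))   # Counter({11: 6, 12: 3})
```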

### Semigroups

We now focus our attention on binary functions that define the product of **semigroups**, as these will form useful ingredients in cooking up multiset products.

A **semigroup** on a set \(\setSymbol{X}\) is a pair \(\tuple{\setSymbol{X},\sgdot }\), where \(\functionSignature{\function{\sgdot }}{\tuple{\setSymbol{X},\setSymbol{X}}}{\setSymbol{X}}\) is an associative binary operation: \(\paren{\sym{a}\sgdot \sym{b}}\sgdot \sym{c} = \sym{a}\sgdot \paren{\sym{b}\sgdot \sym{c}}\) for all \(\elemOf{\sym{a},\sym{b},\sym{c}}{\setSymbol{X}}\).

### Lifting semigroup products

Let us now consider how to lift the semigroup product \(\sgdot\), defined on \(\setSymbol{X}\), to operate on multisets on \(\setSymbol{X}\). We should technically write this multi-image function as \(\multiImageModifier{\sgdot }\), but we will use the symbol \(\multiImageColorModifier{\srdot }\) instead, as we did for multiset versions of union, etc.

To recall, the multi-image function that lifts \(\sgdot\) is the function \(\functionSignature{\function{\multiImageColorModifier{\srdot }}}{\tuple{\multisets{\setSymbol{X}},\multisets{\setSymbol{X}}}}{\multisets{\setSymbol{X}}}\) defined as:

\[ \appliedRelation{\multisetSymbol{A} \multiImageColorModifier{\srdot } \multisetSymbol{B}}\defEqualSymbol \multisetConstructor{\multisetElementSymbol{a}\sgdot \multisetElementSymbol{b}}{\elemOf{\multisetElementSymbol{a}}{\multisetSymbol{A}},\elemOf{\multisetElementSymbol{b}}{\multisetSymbol{B}}} \]This function endows \(\multisets{\setSymbol{X}}\) with an interesting structure: that of a **semiring**. First, let's explain what a semiring is!
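Before that, the lifted product itself can be sketched concretely. Here string concatenation plays the role of an example semigroup product (an illustrative choice, not one made in the text), and multisets are modeled as `Counter`s:

```python
from collections import Counter
from itertools import product

def lift(dot, A, B):
    # A (.) B := <a . b : a in A, b in B>, with multiplicities multiplying
    out = Counter()
    for (a, ma), (b, mb) in product(A.items(), B.items()):
        out[dot(a, b)] += ma * mb
    return out

concat = lambda a, b: a + b          # string concatenation is a semigroup
A, B, C = Counter('ab'), Counter('c'), Counter('dd')

# Associativity of the lifted product is inherited from the semigroup
assert lift(concat, lift(concat, A, B), C) == lift(concat, A, lift(concat, B, C))
print(lift(concat, A, B))   # Counter({'ac': 1, 'bc': 1})
```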

### Semirings

A **semiring** is a tuple \(\tuple{\setSymbol{M},\srplus ,\srdot }\), where \(\functionSignature{\function{\srdot }}{\tuple{\setSymbol{M},\setSymbol{M}}}{\setSymbol{M}}\) is an associative binary operation, \(\functionSignature{\function{\srplus }}{\tuple{\setSymbol{M},\setSymbol{M}}}{\setSymbol{M}}\) is a commutative, associative binary operation, and \(\srdot\) distributes over \(\srplus\).

Additionally they must have an additive identity (the zero element), and usually also have one or more multiplicative identities.

Probably the simplest example of a semiring is given by the natural numbers under their ordinary sum and product: \(\tuple{\mathbb{N}, + ,\times }\).

Semirings are an "API" for the most general kinds of number-like objects, requiring only that they can be added and multiplied in a sensible way.

### Semigroups to semirings

We now show how an arbitrary semigroup yields a semiring via multi-images:

We define the **multiset semiring** induced by a semigroup \(\tuple{\setSymbol{X},\sgdot }\), to be the semiring \(\tuple{\multisets{\setSymbol{X}},\multisetSumSymbol ,\srdot = \multiImageModifier{\sgdot }}\), which we can denote \(\multisetSemiring{\setSymbol{X}}{\sgdot }\):

the semiring elements are the multisets on \(\setSymbol{X}\)

the semiring addition \(\srplus\) is given by multiset sum \(\multisetSumSymbol\)

the semiring product \(\srdot\) is given by the multi-image function \(\multiImageModifier{\sgdot }\) that lifts the *semigroup* product \(\sgdot\)

Let's verify this *is* actually a semiring.

The additive identity is obvious: it is the empty multiset, since \(\multisetSymbol{A}\multisetSumSymbol \multiset{} = \multisetSymbol{A}\).

Associativity of the semiring product \(\srdot\) follows from associativity of \(\sgdot\) straightforwardly.

Distributivity of \(\srdot\) over \(\srplus\) follows from the fact that *any* \(n\)-ary multi-image function is multilinear: it preserves multiset sums in each of its arguments. Spelling this out for left-distributivity (right-distributivity is proved in a similar way): \(\multisetSymbol{A}\,\srdot \,\paren{\multisetSymbol{B}\multisetSumSymbol \multisetSymbol{C}} = \paren{\multisetSymbol{A}\,\srdot \,\multisetSymbol{B}}\multisetSumSymbol \paren{\multisetSymbol{A}\,\srdot \,\multisetSymbol{C}}\).
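This distributivity can be checked concretely for any particular choice of semigroup product. A sketch, modeling multisets as `Counter`s and using string concatenation as an arbitrary illustrative semigroup:

```python
from collections import Counter
from itertools import product

def lift(dot, A, B):
    # The lifted semiring product on multisets (Counters)
    out = Counter()
    for (a, ma), (b, mb) in product(A.items(), B.items()):
        out[dot(a, b)] += ma * mb
    return out

dot = lambda a, b: a + b                 # any semigroup product will do
A, B, C = Counter('xxy'), Counter('xz'), Counter('yz')

left = lift(dot, A, B + C)                     # A (.) (B (+) C)
right = lift(dot, A, B) + lift(dot, A, C)      # (A (.) B) (+) (A (.) C)
assert left == right
```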

### Example 1: "tally" multisets

Let's flex our muscles a bit to see how these multiset semirings might model familiar parts of mathematics.

Let's consider the multisets on a singleton set \(\bform{\setSymbol{X_0}} = \set{\bform{\sym{x}}}\). We'll name the set of these multisets \(\rform{\setSymbol{X_1}}\):

\[ \rform{\setSymbol{X_1}} = \multisets{\bform{\setSymbol{X_0}}} = \set{\styledMultiset{\rform}{},\styledMultiset{\rform}{\bform{\sym{x}}},\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}}},\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}},\bform{\sym{x}}},\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}},\bform{\sym{x}},\bform{\sym{x}}},\ellipsis } \]We saw earlier that the multisets on a singleton are in bijection with the natural numbers, where we are essentially "tallying by finger counting":

\[ \begin{aligned} \rform{\setSymbol{X_1}}& \approx \mathbb{N}\\ \underRepeated{\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}},\ellipsis ,\bform{\sym{x}}}}{\sym{n}}& \approx \sym{n}\end{aligned} \]From now on, we'll express these multisets in a more compact form using integer multiples:

\[ \repeatedMultiset{\sym{n}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\syntaxEqualSymbol \underRepeated{\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}},\ellipsis ,\bform{\sym{x}}}}{\sym{n}} \]Using this notation, we have:

\[ \begin{aligned} \repeatedMultiset{0}{\styledMultiset{\rform}{\bform{\sym{x}}}}&\syntaxEqualSymbol \styledMultiset{\rform}{}\\ \repeatedMultiset{1}{\styledMultiset{\rform}{\bform{\sym{x}}}}&\syntaxEqualSymbol \styledMultiset{\rform}{\bform{\sym{x}}}\\ \repeatedMultiset{2}{\styledMultiset{\rform}{\bform{\sym{x}}}}&\syntaxEqualSymbol \styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}}}\\ \repeatedMultiset{3}{\styledMultiset{\rform}{\bform{\sym{x}}}}&\syntaxEqualSymbol \styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}},\bform{\sym{x}}}\end{aligned} \]To define the semiring structure on \(\rform{\setSymbol{X_1}} = \multisets{\bform{\setSymbol{X_0}}}\), we must choose a semigroup product \(\functionSignature{\function{\sgdot }}{\tuple{\bform{\setSymbol{X_0}},\bform{\setSymbol{X_0}}}}{\bform{\setSymbol{X_0}}}\) on \(\bform{\setSymbol{X_0}} = \set{\bform{\sym{x}}}\). But there is only one possible such product, given by \(\bform{\identity} = \assocArray{\mto{\tuple{\bform{\sym{x}},\bform{\sym{x}}}}{\bform{\sym{x}}}}\). Therefore our semiring product on \(\rform{\setSymbol{X_1}}\) is \(\rform{\msrdot } = \multiImageModifier{\bform{\identity}}\), yielding the semiring structure we'll call \(\rform{\multisetSemiringSymbol{M_1}}\):

\[ \rform{\multisetSemiringSymbol{M_1}} = \tuple{\rform{\setSymbol{X_1}},\rform{\msrplus },\rform{\msrdot }} = \tuple{\multisets{\bform{\setSymbol{X_0}}},\rform{\msrplus },\multiImageModifier{\bform{\identity}}} \]\(\rform{\multisetSemiringSymbol{M_1}}\) has the following properties:

the sum of two multisets corresponds to the sum of two natural numbers: \(\repeatedMultiset{\sym{m}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\,\rform{\msrplus }\,\repeatedMultiset{\sym{n}}{\styledMultiset{\rform}{\bform{\sym{x}}}} = \repeatedMultiset{\paren{\sym{m} + \sym{n}}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\)

the product of two multisets corresponds to the product of two natural numbers: \(\repeatedMultiset{\sym{m}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\,\rform{\msrdot }\,\repeatedMultiset{\sym{n}}{\styledMultiset{\rform}{\bform{\sym{x}}}} = \repeatedMultiset{\paren{\sym{m} \, \sym{n}}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\)
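Both correspondences can be checked with a small sketch, modeling a tally multiset as a `Counter` on the single element (an illustrative model, not the text's notation):

```python
from collections import Counter
from itertools import product

def lift(dot, A, B):
    # The lifted semiring product on multisets (Counters)
    out = Counter()
    for (a, ma), (b, mb) in product(A.items(), B.items()):
        out[dot(a, b)] += ma * mb
    return out

tally = lambda n: Counter({'x': n})   # n corresponds to <x, x, ..., x> (n copies)
m, n = 3, 4

# multiset sum models addition; the lifted trivial product models multiplication
assert tally(m) + tally(n) == tally(m + n)
assert lift(lambda a, b: 'x', tally(m), tally(n)) == tally(m * n)
```

The product works because each of the \(m\) copies of \(\bform{\sym{x}}\) pairs with each of the \(n\) copies, yielding \(m \, n\) copies.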

Hence, \(\rform{\multisetSemiringSymbol{M_1}}\) is isomorphic to the semiring of the natural numbers:

\[ \rform{\multisetSemiringSymbol{M_1}}\isomorphicSymbol \semiring{\mathbb{N}} \]We have some additional properties of \(\rform{\multisetSemiringSymbol{M_1}}\) that exploit the other multiset operations:

\[ \begin{aligned} \repeatedMultiset{\sym{m}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\multisetUnionSymbol \repeatedMultiset{\sym{n}}{\styledMultiset{\rform}{\bform{\sym{x}}}}&= \repeatedMultiset{\max(\sym{m},\sym{n})}{\styledMultiset{\rform}{\bform{\sym{x}}}}\\ \repeatedMultiset{\sym{m}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\multisetIntersectionSymbol \repeatedMultiset{\sym{n}}{\styledMultiset{\rform}{\bform{\sym{x}}}}&= \repeatedMultiset{\min(\sym{m},\sym{n})}{\styledMultiset{\rform}{\bform{\sym{x}}}}\\ \repeatedMultiset{\sym{m}}{\styledMultiset{\rform}{\bform{\sym{x}}}}\multisetRelativeComplementSymbol \repeatedMultiset{\sym{n}}{\styledMultiset{\rform}{\bform{\sym{x}}}}&= \repeatedMultiset{\max(\sym{m} - \sym{n},0)}{\styledMultiset{\rform}{\bform{\sym{x}}}}\end{aligned} \]### Example 2: univariate polynomials

We'll now form the multisets on \(\rform{\setSymbol{X_1}}\), and call this \(\gform{\setSymbol{X_2}}\):

\[ \gform{\setSymbol{X_2}} = \multisets{\rform{\setSymbol{X_1}}} = \multisets{\multisets{\bform{\setSymbol{X_0}}}} = \multisets{\set{\styledMultiset{\rform}{},\styledMultiset{\rform}{\bform{\sym{x}}},\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}}},\ellipsis }} \]Before trying to interpret \(\gform{\setSymbol{X_2}}\), we'll choose to abbreviate the elements of \(\rform{\setSymbol{X_1}}\) in a more suggestive way:

\[ \power{\rform{\sym{x}}}{\sym{n}}\syntaxEqualSymbol \underRepeated{\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}},\ellipsis ,\bform{\sym{x}}}}{\sym{n}} \]As a special case, we'll use \(\rform{1}\) to denote the empty multiset \(\elemOf{\styledMultiset{\rform}{}}{\rform{\setSymbol{X_1}}}\).

We can now list some of the infinitely many elements of \(\gform{\setSymbol{X_2}}\):

\[ \gform{\setSymbol{X_2}} = \begin{nsarray}{rlllll} \lbrace & \phantom{,}\styledMultiset{\gform}{}, & & & & \\ \phantom{\lbrace } & \phantom{,}\styledMultiset{\gform}{\rform{1}}, & \phantom{,}\styledMultiset{\gform}{\rform{1},\rform{1}}, & \phantom{,}\styledMultiset{\gform}{\rform{1},\rform{1},\rform{1}}, & \phantom{,}\ellipsis , & \\ \phantom{\lbrace } & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1}}, & \phantom{,}\ellipsis , & \\ \phantom{\lbrace } & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1},\rform{1}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1},\rform{1},\rform{1}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1},\rform{1},\rform{1}}, & \phantom{,}\ellipsis , & \\ \phantom{\lbrace } & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{2}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{2}}, & \phantom{,}\ellipsis , & \\ \phantom{\lbrace } & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{1}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1}}, & \phantom{,}\ellipsis , & & \\ \phantom{\lbrace } & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{1}}, & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1}}, & \phantom{,}\ellipsis , & & \\ \phantom{\lbrace } & \phantom{,}\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{1},\rform{1}}, & 
\phantom{,}\ellipsis , & & & \rbrace \end{nsarray} \]We can further also use integer multiple notation to write these elements of \(\gform{\setSymbol{X_2}}\) in a more compact way:

\[ \styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1},\rform{1},\rform{1},\rform{1}}\syntaxEqualSymbol \styledMultiset{\gform}{\power{\rform{\sym{x}}}{2}}\,\gform{\msrplus }\,\repeatedMultiset{2}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1}}}\,\gform{\msrplus }\,\repeatedMultiset{3}{\styledMultiset{\gform}{\rform{1}}} \]On the other hand, we can avoid using any notation at all, and write an element of \(\gform{\setSymbol{X_2}}\) directly as a multiset of multisets on \(\bform{\setSymbol{X_0}} = \set{\bform{\sym{x}}}\):

\[ \styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1},\rform{1},\rform{1},\rform{1}}\syntaxEqualSymbol \begin{nsarray}{l} \gform{\openMultiset }\kern{2pt}\styledMultiset{\rform}{\bform{\sym{x}},\bform{\sym{x}}},\\ \phantom{\openMultiset \kern{2pt}}\styledMultiset{\rform}{\bform{\sym{x}}},\styledMultiset{\rform}{\bform{\sym{x}}},\\ \phantom{\openMultiset \kern{2pt}}\styledMultiset{\rform}{},\styledMultiset{\rform}{},\styledMultiset{\rform}{}\kern{2pt}\gform{\closeMultiset } \end{nsarray} \]#### General form

A general element \(\elemOf{\multisetSymbol{M}}{\gform{\setSymbol{X_2}}}\) can be written as a sum of elements of \(\rform{\setSymbol{X_1}}\), with corresponding multiplicities \(\sym{m}_{\sym{i}}\). Recall that only finitely many of these can be non-zero.

\[ \multisetSymbol{M} = \styledIndexSum{\gform}{\repeatedMultiset{\sym{m}_{\sym{i}}}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{i}}}}}{\sym{i} \ge 0}{} \]The summation symbol \(\gform\indexSumSymbol{}\) is colored to indicate that it represents the composition of binary multiset sums \(\gform{\msrplus }\).

#### Polynomials

It should be transparent now that there is a bijection between \(\gform{\setSymbol{X_2}}\) and the polynomials in variable \(\bform{\sym{x}}\) with non-negative integer coefficients:

\[ \gform{\setSymbol{X_2}}\bijectiveSymbol \polynomialRing{\semiring{\mathbb{N}}}{\bform{\sym{x}}} \]#### Semiring product

Next, we seek a product \(\gform{\msrdot }\) so that \(\tuple{\gform{\setSymbol{X_2}},\gform{\msrplus },\gform{\msrdot }}\) forms a semiring.

But we have an obvious candidate: the lift \(\multiImageModifier{\rform{\msrdot }}\) of the product \(\rform{\msrdot }\) of \(\rform{\multisetSemiringSymbol{M_1}}\), since the semiring \(\tuple{\rform{\setSymbol{X_1}},\rform{\msrplus },\rform{\msrdot }}\) defines a semigroup \(\tuple{\rform{\setSymbol{X_1}},\rform{\msrdot }}\).

Setting \(\gform{\msrdot } = \multiImageModifier{\rform{\msrdot }}\), we can define the semiring \(\gform{\multisetSemiringSymbol{M_2}} = \tuple{\gform{\setSymbol{X_2}},\gform{\msrplus },\gform{\msrdot }}\).

As an example, let's see how the product \(\gform{\msrdot }\) acts on two elements of \(\gform{\setSymbol{X_2}}\):

\[ \begin{aligned} \paren{\repeatedMultiset{3}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{2}}}\,\gform{\msrplus }\,\multiset{\rform{1}}}\,\gform{\msrdot }\,\paren{\repeatedMultiset{2}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1}}}}&= \styledMultiset{\gform}{\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{2},\power{\rform{\sym{x}}}{0}}\,\gform{\msrdot }\,\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{1}}\\ &= \styledMultiset{\gform}{\power{\rform{\sym{x}}}{2}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{2}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{2}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{0}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{2}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{2}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{2}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{0}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{1}}\\ &= \styledMultiset{\gform}{\power{\rform{\sym{x}}}{3},\power{\rform{\sym{x}}}{3},\power{\rform{\sym{x}}}{3},\power{\rform{\sym{x}}}{1},\power{\rform{\sym{x}}}{3},\power{\rform{\sym{x}}}{3},\power{\rform{\sym{x}}}{3},\power{\rform{\sym{x}}}{1}}\\ &= \repeatedMultiset{6}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{3}}}\,\gform{\msrplus }\,\repeatedMultiset{2}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{1}}}\end{aligned} \]This corresponds to the product of polynomials:

\[ \paren{\poly{3 \, \power{\bform{\sym{x}}}{2} + 1}} \, \paren{\poly{2 \, \bform{\sym{x}}}} = \poly{6 \, \power{\bform{\sym{x}}}{3} + 2 \, \bform{\sym{x}}} \]#### Product computation

We can compute the product of two general elements \(\elemOf{\multisetSymbol{A},\multisetSymbol{B}}{\gform{\multisetSemiringSymbol{M_2}}}\) by first expressing them as finite sums of elements of \(\rform{\multisetSemiringSymbol{M_1}}\):

\[ \multisetSymbol{A} = \styledIndexSum{\gform}{\repeatedMultiset{\sym{a}_{\sym{i}}}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{i}}}}}{\sym{i} \ge 0}{}\quad \multisetSymbol{B} = \styledIndexSum{\gform}{\repeatedMultiset{\sym{b}_{\sym{i}}}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{i}}}}}{\sym{i} \ge 0}{} \]Then the product is given by:

\[ \begin{aligned} \multisetSymbol{A}\,\gform{\msrdot }\,\multisetSymbol{B}&= \paren{\styledIndexSum{\gform}{\repeatedMultiset{\sym{a}_{\sym{i}}}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{i}}}}}{\sym{i}}{}}\,\gform{\msrdot }\,\paren{\styledIndexSum{\gform}{\repeatedMultiset{\sym{b}_{\sym{j}}}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{j}}}}}{\sym{j}}{}}\\ &= \styledIndexSum{\gform}{\paren{\repeatedMultiset{\sym{a}_{\sym{i}}}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{i}}}}}\,\gform{\msrdot }\,\paren{\repeatedMultiset{\sym{b}_{\sym{j}}}{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{j}}}}}}{\substack{\sym{i},\;\sym{j}}}{}\\ &= \styledIndexSum{\gform}{\underRepeated{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{i}},\ellipsis ,\power{\rform{\sym{x}}}{\sym{i}}}}{\sym{a}_{\sym{i}}}\,\gform{\msrdot }\,\underRepeated{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{j}},\ellipsis ,\power{\rform{\sym{x}}}{\sym{j}}}}{\sym{b}_{\sym{j}}}}{\substack{\sym{i},\;\sym{j}}}{}\\ &= \styledIndexSum{\gform}{\underRepeated{\styledMultiset{\gform}{\power{\rform{\sym{x}}}{\sym{i}}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{\sym{j}},\ellipsis ,\power{\rform{\sym{x}}}{\sym{i}}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{\sym{j}}}}{\sym{a}_{\sym{i}} \, \sym{b}_{\sym{j}}}}{\substack{\sym{i},\;\sym{j}}}{}\\ &= \styledIndexSum{\gform}{\repeatedMultiset{\sym{a}_{\sym{i}} \, \sym{b}_{\sym{j}}}{\power{\rform{\sym{x}}}{\sym{i}}\,\rform{\msrdot }\,\power{\rform{\sym{x}}}{\sym{j}}}}{\substack{\sym{i},\;\sym{j}}}{}\\ &= \styledIndexSum{\gform}{\repeatedMultiset{\sym{a}_{\sym{i}} \, \sym{b}_{\sym{j}}}{\power{\rform{\sym{x}}}{\sym{i} + \sym{j}}}}{\substack{\sym{i},\;\sym{j}}}{}\end{aligned} \]Solving for the multiplicity in \(\multisetSymbol{A}\,\gform{\msrdot }\,\multisetSymbol{B}\) of a generic element \(\elemOf{\power{\rform{\sym{x}}}{\sym{i}}}{\rform{\setSymbol{X_1}}}\), we can remove one of the indices of the double summation:

\[ \begin{aligned} \paren{\multisetSymbol{A}\,\gform{\msrdot }\,\multisetSymbol{B}}\multisetMultiplicitySymbol \power{\rform{\sym{x}}}{\sym{n}}&= \indexSum{\paren{\multisetSymbol{A}\multisetMultiplicitySymbol \power{\rform{\sym{x}}}{\sym{i}}} \, \paren{\multisetSymbol{B}\multisetMultiplicitySymbol \power{\rform{\sym{x}}}{\sym{j}}}}{\sym{i} + \sym{j} = \sym{n}}{}\\ &= \indexSum{\paren{\multisetSymbol{A}\multisetMultiplicitySymbol \power{\rform{\sym{x}}}{\sym{i}}} \, \paren{\multisetSymbol{B}\multisetMultiplicitySymbol \power{\rform{\sym{x}}}{\sym{n} - \sym{i}}}}{\sym{i} \le \sym{n}}{}\end{aligned} \]These computations may seem obvious when we have the polynomial interpretation of these multisets in mind, and in a sense it is pedantic to spell them out. But it *is* perhaps surprising that we were *naturally led* to this semiring structure as soon as we chose the base set \(\bform{\setSymbol{X_0}} = \set{\bform{\sym{x}}}\).
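The convolution of multiplicities above is exactly polynomial multiplication, which we can sketch by representing an element of \(\gform{\setSymbol{X_2}}\) as a `Counter` whose keys are exponents \(\sym{i}\) (standing for \(\power{\rform{\sym{x}}}{\sym{i}}\)) – an illustrative encoding, not the text's notation:

```python
from collections import Counter
from itertools import product

def lift(dot, A, B):
    # The lifted semiring product on multisets (Counters)
    out = Counter()
    for (a, ma), (b, mb) in product(A.items(), B.items()):
        out[dot(a, b)] += ma * mb
    return out

# Keys are exponents i (standing for x^i); lifting exponent addition
# is exactly multiplication of the corresponding polynomials.
A = Counter({2: 3, 0: 1})   # 3x^2 + 1
B = Counter({1: 2})         # 2x
print(lift(lambda i, j: i + j, A, B))   # Counter({3: 6, 1: 2}), i.e. 6x^3 + 2x
```

This reproduces the worked example \(\paren{3 \, \power{\bform{\sym{x}}}{2} + 1} \, \paren{2 \, \bform{\sym{x}}} = 6 \, \power{\bform{\sym{x}}}{3} + 2 \, \bform{\sym{x}}\).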

In summary, we have the semiring isomorphism between the univariate polynomials with natural-number coefficients \(\polynomialRing{\semiring{\mathbb{N}}}{\bform{\sym{x}}}\) and the multiset semiring on the multiset semiring on \(\set{\bform{\sym{x}}}\) (phew!):

\[ \gform{\multisetSemiringSymbol{M_2}}\isomorphicSymbol \polynomialRing{\semiring{\mathbb{N}}}{\bform{\sym{x}}} \]### Summary of examples

Our last two examples have involved one semigroup and two multiset semirings; let's summarize the situation in a table:

elements | semiring | semigroup | product | models |
---|---|---|---|---|
\(\bform{\setSymbol{X_0}} = \set{\bform{\sym{x}}}\) | \(\notApplicable\) | \(\tuple{\bform{\setSymbol{X_0}},\bform{\identity}}\) | \(\bform{\identity} = \mto{\tuple{\bform{\sym{x}},\bform{\sym{x}}}}{\bform{\sym{x}}}\) | 'identity' |
\(\rform{\setSymbol{X_1}} = \multisets{\bform{\setSymbol{X_0}}}\) | \(\rform{\multisetSemiringSymbol{M_1}} = \tuple{\rform{\setSymbol{X_1}},\rform{\msrplus },\rform{\msrdot }}\) | \(\tuple{\rform{\setSymbol{X_1}},\rform{\msrdot }}\) | \(\rform{\msrdot } = \multiImageModifier{\bform{\identity}}\) | natural numbers |
\(\gform{\setSymbol{X_2}} = \multisets{\rform{\setSymbol{X_1}}}\) | \(\gform{\multisetSemiringSymbol{M_2}} = \tuple{\gform{\setSymbol{X_2}},\gform{\msrplus },\gform{\msrdot }}\) | \(\tuple{\gform{\setSymbol{X_2}},\gform{\msrdot }}\) | \(\gform{\msrdot } = \multiImageModifier{\rform{\msrdot }}\) | polynomials in \(\bform{\sym{x}}\) |

The univariate polynomials modelled by \(\gform{\multisetSemiringSymbol{M_2}}\) are specifically those that have natural-number coefficients. We will shortly see how to obtain the full ring of polynomials with integer-valued coefficients using **signed multisets**.

Next, we will extend our notion of multisets slightly, which will allow us to beef up these constructions of the semirings \(\semiring{\mathbb{N}}\) and \(\polynomialRing{\semiring{\mathbb{N}}}{\bform{\sym{x}}}\) to constructions of the rings \(\ring{\mathbb{Z}}\) and \(\polynomialRing{\ring{\mathbb{Z}}}{\bform{\sym{x}}}\).

## Signed multisets

We can extend the idea of a multiset in a very simple way to yield a **signed multiset**.

We can think of a normal, unsigned multiset \(\elemOf{\multisetSymbol{A}}{\multisets{\setSymbol{X}}}\) on a base set \(\setSymbol{X}\) as an unordered list of elements from \(\setSymbol{X}\). To form a signed multiset, we allow some of these elements to be present in **negated form** in the multiset. This is similar to the way we conceptualize negative integers as the negated forms of positive integers.

To indicate the presence of a negated form of an element \(\elemOf{\setElementSymbol{x}}{\setSymbol{X}}\), we write it with an overbar: \(\negated{\signedMultisetElementSymbol{x}}\). For example, the following signed multiset consists of three non-negated copies of \(\signedMultisetElementSymbol{x}\), and two negated copies of \(\signedMultisetElementSymbol{y}\):

\[ \signedMultiset{\signedMultisetElementSymbol{x},\signedMultisetElementSymbol{x},\signedMultisetElementSymbol{x},\negated{\signedMultisetElementSymbol{y}},\negated{\signedMultisetElementSymbol{y}}} \]If a single object is present in both negated and non-negated forms, these "cancel" each other in pairs:

\[ \signedMultiset{\signedMultisetElementSymbol{x},\signedMultisetElementSymbol{x},\signedMultisetElementSymbol{x},\negated{\signedMultisetElementSymbol{x}},\negated{\signedMultisetElementSymbol{x}}} = \signedMultiset{\signedMultisetElementSymbol{x},\signedMultisetElementSymbol{x},\negated{\signedMultisetElementSymbol{x}}} = \signedMultiset{\signedMultisetElementSymbol{x}} \]A signed multiset is in **normal form** if it is written so that no cancellable pairs are present.

Essentially, the "net" number of ordinary or negated copies of an object gives the multiplicity of that object, which can now be a *negative integer* if there are negated copies present.
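This cancellation can be sketched as a normalization procedure: count the net multiplicity of each element, a negated copy contributing \(-1\). The `(element, negated?)` pair encoding here is an illustrative assumption of the sketch:

```python
from collections import Counter

def normalize(copies):
    # copies: iterable of (element, negated?) pairs; negated copies cancel
    # ordinary ones in pairs, leaving the net (possibly negative) multiplicity
    net = Counter()
    for x, negated in copies:
        net[x] += -1 if negated else 1
    return {x: m for x, m in net.items() if m != 0}

# <x, x, x, x-bar, x-bar> normalizes to <x>:
print(normalize([('x', False)] * 3 + [('x', True)] * 2))   # {'x': 1}
```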

We write the set of signed multisets on a base set \(\setSymbol{X}\) as \(\signedMultisets{\setSymbol{X}}\).

### Signed multiplicities

For a multiset \(\elemOf{\multisetSymbol{A}}{\multisets{\setSymbol{X}}}\), the multiplicity function \(\functionSignature{\function{\boundMultiplicityFunction{\multisetSymbol{A}}}}{\setSymbol{X}}{\mathbb{N}}\) associates to each element \(\elemOf{\setElementSymbol{x}}{\setSymbol{X}}\) a natural number \(\multisetSymbol{A}\multisetMultiplicitySymbol \setElementSymbol{x}\).

For a signed multiset \(\elemOf{\multisetSymbol{B}}{\signedMultisets{\setSymbol{X}}}\), the multiplicity function \(\functionSignature{\function{\boundMultiplicityFunction{\multisetSymbol{B}}}}{\setSymbol{X}}{\mathbb{Z}}\) associates to each element of \(\setSymbol{X}\) an *integer*. Negative integers indicate negative multiplicities.
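Since a signed multiset is determined by its integer-valued multiplicity function, we can model one directly as a dict from elements to nonzero integers, with every operation acting pointwise on multiplicities (sum as addition, union as max, intersection as min, relative complement as plain subtraction). A sketch under that modeling assumption:

```python
def pointwise(op, M, N):
    # Signed multisets as dicts element -> integer multiplicity;
    # operations combine the two multiplicity functions pointwise.
    out = {}
    for x in M.keys() | N.keys():
        v = op(M.get(x, 0), N.get(x, 0))
        if v != 0:                 # keep the result in normal form
            out[x] = v
    return out

M = {'x': 3, 'y': -2}
N = {'x': 1, 'z': 5}

print(pointwise(lambda a, b: a + b, M, N))   # sum: {'x': 4, 'y': -2, 'z': 5}
print(pointwise(max, M, N))                  # union
print(pointwise(min, M, N))                  # intersection
print(pointwise(lambda a, b: a - b, M, N))   # complement: no max(., 0) needed
```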

All other operations on signed multisets are defined identically to their unsigned counterparts, with one exception – the relative complement no longer requires a \(\max(□,0)\) in its definition, since negative multiplicities are permitted:

\[ \begin{aligned} \boundSignedMultiplicityFunction{\paren{\multisetSymbol{M}\multisetSumSymbol \multisetSymbol{N}}}&= \boundMultiplicityFunction{\multisetSymbol{M}} + \boundMultiplicityFunction{\multisetSymbol{N}}\\ \boundSignedMultiplicityFunction{\paren{\multisetSymbol{M}\multisetUnionSymbol \multisetSymbol{N}}}&= \max(\boundMultiplicityFunction{\multisetSymbol{M}},\boundMultiplicityFunction{\multisetSymbol{N}})\\ \boundSignedMultiplicityFunction{\paren{\multisetSymbol{M}\multisetIntersectionSymbol \multisetSymbol{N}}}&= \min(\boundMultiplicityFunction{\multisetSymbol{M}},\boundMultiplicityFunction{\multisetSymbol{N}})\\ \boundSignedMultiplicityFunction{\paren{\multisetSymbol{M}\multisetRelativeComplementSymbol \multisetSymbol{N}}}&= \boundMultiplicityFunction{\multisetSymbol{M}} - \boundMultiplicityFunction{\multisetSymbol{N}}\\ \boundMultiplicityFunction{\paren{\repeatedMultiset{\sym{n}}{\multisetSymbol{M}}}}&= \sym{n} \, \paren{\boundMultiplicityFunction{\multisetSymbol{M}}}\end{aligned} \]## Signed sets

Analogous to the signed multisets \(\signedMultisets{\setSymbol{X}}\) on a set \(\setSymbol{X}\) are the **signed sets** \(\signedSubsets{\setSymbol{X}}\) on \(\setSymbol{X}\).

Just as for multisets, any element \(\elemOf{\setElementSymbol{x}}{\setSymbol{X}}\) can be present in a signed set \(\elemOf{\signedSetSymbol{A}}{\signedSubsets{\setSymbol{X}}}\) in unnegated or negated form. When present in negated form it is written \(\negated{\signedSetElementSymbol{x}}\). A given element \(\setElementSymbol{x}\) cannot be present in both unnegated and negated forms simultaneously. To illustrate the possibilities, we show the set of signed sets on a variety of base sets:

\[ \begin{aligned} \signedSubsets{\set{}}&= \set{\signedSet{}}\\[0.5em] \signedSubsets{\set{\rform{\setElementSymbol{a}}}}&= \set{\signedSet{},\signedSet{\rform{\setElementSymbol{a}}},\signedSet{\negated{\rform{\setElementSymbol{a}}}}}\\[0.5em] \signedSubsets{\set{\rform{\setElementSymbol{a}},\bform{\setElementSymbol{b}}}}&= \set{\signedSet{},\signedSet{\rform{\setElementSymbol{a}}},\signedSet{\negated{\rform{\setElementSymbol{a}}}},\signedSet{\bform{\setElementSymbol{b}}},\signedSet{\bform{\negated{\setElementSymbol{b}}}},\signedSet{\rform{\setElementSymbol{a}},\bform{\setElementSymbol{b}}},\signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}},\signedSet{\rform{\negated{\setElementSymbol{a}}},\bform{\setElementSymbol{b}}},\signedSet{\rform{\negated{\setElementSymbol{a}}},\bform{\negated{\setElementSymbol{b}}}}}\end{aligned} \]In general, for a base set \(\setSymbol{X}\) of cardinality \(\setCardinality{\setSymbol{X}} = \sym{n}\), the cardinality of the set of signed sets on \(\setSymbol{X}\) is:

\[ \setCardinality{\signedSubsets{\setSymbol{X}}} = \power{3}{\sym{n}} \]This is because in a given signed set \(\elemOf{\signedSetSymbol{A}}{\signedSubsets{\setSymbol{X}}}\), each element of \(\setSymbol{X}\) can either 1) not occur in \(\signedSetSymbol{A}\), 2) occur unnegated, or 3) occur negated.
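We can verify this count computationally. In the sketch below (the encoding is an assumption of this example), a signed set is a dict mapping each present element to \(+1\) if unnegated or \(-1\) if negated:

```python
# Sketch: enumerate all signed sets on a base set. Each element is either
# absent (0), present unnegated (+1), or present negated (-1).
from itertools import product

def signed_sets(base):
    base = list(base)
    for signs in product((0, 1, -1), repeat=len(base)):
        yield {x: s for x, s in zip(base, signs) if s != 0}

assert len(list(signed_sets([]))) == 1          # 3**0
assert len(list(signed_sets(['a', 'b']))) == 9  # 3**2
assert {'a': 1, 'b': -1} in list(signed_sets(['a', 'b']))
```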

#### Multiplicity functions

To help us think about signed sets, we can regard them in terms of their multiplicity functions, which can now take on values in the set \(\set{-1,0,1}\), indicating respectively presence in negated form, absence, or presence in unnegated form of any given element.

For example, here are some multiplicity functions for signed sets on base set \(\set{\rform{\setElementSymbol{a}},\bform{\signedSetElementSymbol{b}},\gform{\setElementSymbol{c}}}\):

\[ \begin{csarray}{rcrrrrrrrrrrrrr}{accbbbbbbbbbbbb} \boundMultiplicityFunction{\signedSet{\rform{\setElementSymbol{a}}}} & = & \langle & \rform{\signedSetElementSymbol{a}} & \mtoSymbol & 1, & & \bform{\signedSetElementSymbol{b}} & \mtoSymbol & 0, & & \gform{\signedSetElementSymbol{c}} & \mtoSymbol & 0 & \rangle \\[0.5em] \boundMultiplicityFunction{\signedSet{\rform{\negated{\setElementSymbol{a}}}}} & = & \langle & \rform{\signedSetElementSymbol{a}} & \mtoSymbol & -1, & & \bform{\signedSetElementSymbol{b}} & \mtoSymbol & 0, & & \gform{\signedSetElementSymbol{c}} & \mtoSymbol & 0 & \rangle \\[0.5em] \boundMultiplicityFunction{\signedSet{\rform{\setElementSymbol{a}},\bform{\setElementSymbol{b}}}} & = & \langle & \rform{\signedSetElementSymbol{a}} & \mtoSymbol & 1, & & \bform{\signedSetElementSymbol{b}} & \mtoSymbol & 1, & & \gform{\signedSetElementSymbol{c}} & \mtoSymbol & 0 & \rangle \\[0.5em] \boundMultiplicityFunction{\signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}}} & = & \langle & \rform{\signedSetElementSymbol{a}} & \mtoSymbol & 1, & & \bform{\signedSetElementSymbol{b}} & \mtoSymbol & -1, & & \gform{\signedSetElementSymbol{c}} & \mtoSymbol & 0 & \rangle \\[0.5em] \boundMultiplicityFunction{\signedSet{\rform{\negated{\setElementSymbol{a}}},\bform{\negated{\setElementSymbol{b}}},\gform{\negated{\setElementSymbol{c}}}}} & = & \langle & \rform{\signedSetElementSymbol{a}} & \mtoSymbol & -1, & & \bform{\signedSetElementSymbol{b}} & \mtoSymbol & -1, & & \gform{\signedSetElementSymbol{c}} & \mtoSymbol & -1 & \rangle \end{csarray} \]

#### Operations

We can define union and intersection operations on signed sets by applying the definition of these operations in terms of multiplicity functions that we obtained in the case of ordinary sets, which we recall below:

\[ \begin{aligned} \boundMultiplicityFunction{\paren{\setSymbol{M}\setUnionSymbol \setSymbol{N}}}&\defEqualSymbol \max(\boundMultiplicityFunction{\setSymbol{M}},\boundMultiplicityFunction{\setSymbol{N}})\\ \boundMultiplicityFunction{\paren{\setSymbol{M}\setIntersectionSymbol \setSymbol{N}}}&\defEqualSymbol \min(\boundMultiplicityFunction{\setSymbol{M}},\boundMultiplicityFunction{\setSymbol{N}})\\ \boundMultiplicityFunction{\paren{\setSymbol{M}\setRelativeComplementSymbol \setSymbol{N}}}&\defEqualSymbol \max(\boundMultiplicityFunction{\setSymbol{M}} - \boundMultiplicityFunction{\setSymbol{N}},0)\end{aligned} \]To handle signed multiplicities for signed sets \(\signedSetSymbol{A}\) and \(\signedSetSymbol{B}\), we need only modify the relative complement, which should now "clip" the resulting multiplicity to be no greater than 1 and no less than -1:

\[ \begin{aligned} \boundMultiplicityFunction{\paren{\signedSetSymbol{A}\setUnionSymbol \signedSetSymbol{B}}}&\defEqualSymbol \max(\boundMultiplicityFunction{\signedSetSymbol{A}},\boundMultiplicityFunction{\signedSetSymbol{B}})\\ \boundMultiplicityFunction{\paren{\signedSetSymbol{A}\setIntersectionSymbol \signedSetSymbol{B}}}&\defEqualSymbol \min(\boundMultiplicityFunction{\signedSetSymbol{A}},\boundMultiplicityFunction{\signedSetSymbol{B}})\\ \boundMultiplicityFunction{\paren{\signedSetSymbol{A}\setRelativeComplementSymbol \signedSetSymbol{B}}}&\defEqualSymbol \clip(\boundMultiplicityFunction{\signedSetSymbol{A}} - \boundMultiplicityFunction{\signedSetSymbol{B}},-1,1)\end{aligned} \]With these definitions in hand, we can look at some examples of unions on the signed sets on base set \(\set{\rform{\setElementSymbol{a}},\bform{\signedSetElementSymbol{b}},\gform{\setElementSymbol{c}}}\):

\[ \begin{csarray}{rclcl}{acccc} \signedSet{\rform{\setElementSymbol{a}}} & \cup & \signedSet{} & = & \signedSet{\rform{\setElementSymbol{a}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}} & \cup & \signedSet{\rform{\setElementSymbol{a}}} & = & \signedSet{\rform{\setElementSymbol{a}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}} & \cup & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & = & \signedSet{\rform{\setElementSymbol{a}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}} & \cup & \signedSet{\gform{\setElementSymbol{c}}} & = & \signedSet{\rform{\setElementSymbol{a}},\gform{\setElementSymbol{c}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}} & \cup & \signedSet{\bform{\setElementSymbol{b}},\gform{\negated{\setElementSymbol{c}}}} & = & \signedSet{\rform{\setElementSymbol{a}},\bform{\setElementSymbol{b}}} \end{csarray} \]Some intersections:

\[ \begin{csarray}{rclcl}{acccc} \signedSet{\rform{\setElementSymbol{a}}} & \cap & \signedSet{} & = & \signedSet{}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}} & \cap & \signedSet{\rform{\setElementSymbol{a}}} & = & \signedSet{\rform{\setElementSymbol{a}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}} & \cap & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & = & \signedSet{\rform{\negated{\setElementSymbol{a}}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}} & \cap & \signedSet{\gform{\setElementSymbol{c}}} & = & \signedSet{\bform{\negated{\setElementSymbol{b}}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}} & \cap & \signedSet{\bform{\setElementSymbol{b}},\gform{\negated{\setElementSymbol{c}}}} & = & \signedSet{\bform{\negated{\setElementSymbol{b}}},\gform{\negated{\setElementSymbol{c}}}} \end{csarray} \]Some relative complements:

\[ \begin{csarray}{rclclrclclrclclrc}{acccciccccciccccc} \signedSet{\rform{\setElementSymbol{a}}} & \setminus & \signedSet{} & = & \signedSet{\rform{\setElementSymbol{a}}} & & \signedSet{} & \setminus & \signedSet{} & = & \signedSet{} & & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & \setminus & \signedSet{} & = & \signedSet{\rform{\negated{\setElementSymbol{a}}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}} & \setminus & \signedSet{\rform{\setElementSymbol{a}}} & = & \signedSet{} & & \signedSet{} & \setminus & \signedSet{\rform{\setElementSymbol{a}}} & = & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & \setminus & \signedSet{\rform{\setElementSymbol{a}}} & = & \signedSet{\rform{\negated{\setElementSymbol{a}}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}} & \setminus & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & = & \signedSet{\rform{\setElementSymbol{a}}} & & \signedSet{} & \setminus & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & = & \signedSet{\rform{\setElementSymbol{a}}} & & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & \setminus & \signedSet{\rform{\negated{\setElementSymbol{a}}}} & = & \signedSet{} \end{csarray} \]

#### Sum

When we introduced the sum of multisets, as distinct from the union of multisets, the reader might wonder why there was no such distinction for ordinary sets. The reason is that sets take multiplicities in \(\set{0,1}\), and for these values there is no distinction between the \(\max(□,□)\) (the multiplicity implementation of union) and \(\min(□ + □,1)\) (the multiplicity implementation of sum). For signed multisets these implementations are distinct, as there is no longer a \(\min\) at play. For signed sets, these are again distinct, although here the multiplicity implementation of sum is now \(\clip(□ + □,-1,1)\):

\[ \paren{\signedSetSymbol{A}\multisetSumSymbol \signedSetSymbol{B}}\signedMultisetMultiplicitySymbol \signedSetElementSymbol{c}\defEqualSymbol \clip(\paren{\signedSetSymbol{A}\signedMultisetMultiplicitySymbol \signedSetElementSymbol{c}} + \paren{\signedSetSymbol{B}\signedMultisetMultiplicitySymbol \signedSetElementSymbol{c}},-1,1) \]Here we show some examples of the sums of signed sets:

\[ \begin{csarray}{rcl}{acc} \signedSet{\rform{\setElementSymbol{a}}}\multisetSumSymbol \signedSet{} & = & \signedSet{\rform{\setElementSymbol{a}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}}\multisetSumSymbol \signedSet{\rform{\setElementSymbol{a}}} & = & \signedSet{\rform{\setElementSymbol{a}}}\\[0.5em] \signedSet{\rform{\negated{\setElementSymbol{a}}}}\multisetSumSymbol \signedSet{\rform{\negated{\setElementSymbol{a}}}} & = & \signedSet{\rform{\negated{\setElementSymbol{a}}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}}}\multisetSumSymbol \signedSet{\rform{\negated{\setElementSymbol{a}}}} & = & \signedSet{}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}}\multisetSumSymbol \signedSet{\gform{\setElementSymbol{c}}} & = & \signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}},\gform{\setElementSymbol{c}}}\\[0.5em] \signedSet{\rform{\setElementSymbol{a}},\bform{\negated{\setElementSymbol{b}}}}\multisetSumSymbol \signedSet{\bform{\setElementSymbol{b}},\gform{\negated{\setElementSymbol{c}}}} & = & \signedSet{\rform{\setElementSymbol{a}},\gform{\negated{\setElementSymbol{c}}}} \end{csarray} \]An interesting aspect of signed set summation is that it is *not* associative, unlike for multiset summation. For multisets \(\multisetSymbol{X},\multisetSymbol{Y},\multisetSymbol{Z}\), we have that \(\multisetSymbol{X}\multisetSumSymbol \paren{\multisetSymbol{Y}\multisetSumSymbol \multisetSymbol{Z}}\identicallyEqualSymbol \paren{\multisetSymbol{X}\multisetSumSymbol \multisetSymbol{Y}}\multisetSumSymbol \multisetSymbol{Z}\), which justifies writing sums of three or more sets without any parentheses \(\multisetSymbol{X}\multisetSumSymbol \multisetSymbol{Y}\multisetSumSymbol \multisetSymbol{Z}\) since the order of binary operations does not matter. This is not true for signed sets because of the presence of \(\clip\): for example, \(\paren{\signedSet{\rform{\setElementSymbol{a}}}\multisetSumSymbol \signedSet{\rform{\setElementSymbol{a}}}}\multisetSumSymbol \signedSet{\rform{\negated{\setElementSymbol{a}}}} = \signedSet{}\), whereas \(\signedSet{\rform{\setElementSymbol{a}}}\multisetSumSymbol \paren{\signedSet{\rform{\setElementSymbol{a}}}\multisetSumSymbol \signedSet{\rform{\negated{\setElementSymbol{a}}}}} = \signedSet{\rform{\setElementSymbol{a}}}\).
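All of the signed-set operations above, including this failure of associativity, can be checked with a short sketch (dicts mapping present elements to \(+1\) or \(-1\); the helper names are mine):

```python
# Sketch: signed sets as dicts mapping present elements to +1 (unnegated)
# or -1 (negated); absent elements have multiplicity 0.

def _combine(a, b, op):
    support = set(a) | set(b)
    out = {x: op(a.get(x, 0), b.get(x, 0)) for x in support}
    return {x: v for x, v in out.items() if v != 0}

def union(a, b):
    return _combine(a, b, max)

def intersection(a, b):
    return _combine(a, b, min)

def difference(a, b):
    # relative complement: clip the difference of multiplicities to [-1, 1]
    return _combine(a, b, lambda p, q: max(-1, min(1, p - q)))

def ssum(a, b):
    # sum: clip the sum of multiplicities to [-1, 1]
    return _combine(a, b, lambda p, q: max(-1, min(1, p + q)))

# sum is not associative:
a, na = {'a': 1}, {'a': -1}
assert ssum(ssum(a, a), na) == {}        # left-grouped gives the empty set
assert ssum(a, ssum(a, na)) == {'a': 1}  # right-grouped keeps a
```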

#### Signed projection and lift

Just like for sets and multisets, we can **lift** and **project** between signed sets and signed multisets.

**Signed lifting** takes a signed set and gives the signed multiset that contains each element either negated or unnegated exactly once.

**Signed projection** takes a signed multiset and gives the signed set that contains an element unnegated if it occurs a positive number of times, and negated if it occurs a negative number of times.

Here we use \(\positiveSignedPart{\signedMultisetSymbol{A}}\) to denote the ordinary multiset consisting of all the unnegated elements of the signed multiset \(\signedMultisetSymbol{A}\), and \(\negativeSignedPart{\signedMultisetSymbol{A}}\) to be the same but for negated elements (where the negated elements occur unnegated in the result, of course):

\[ \begin{aligned} \positiveSignedPart{\signedMultisetSymbol{A}}\multisetMultiplicitySymbol \multisetElementSymbol{c}&\defEqualSymbol \max(\signedMultisetSymbol{A}\multisetMultiplicitySymbol \multisetElementSymbol{c},0)\\ \negativeSignedPart{\signedMultisetSymbol{A}}\multisetMultiplicitySymbol \multisetElementSymbol{c}&\defEqualSymbol \max(\minus{\signedMultisetSymbol{A}\multisetMultiplicitySymbol \multisetElementSymbol{c}},0)\end{aligned} \]As before, lifting and projecting a signed set yields the same set:

\[ \function{\signed{\projection}}(\function{\signed{\lift}}(\setSymbol{A}))\identicallyEqualSymbol \setSymbol{A} \]

## Signed multiset rings

#### Recap

In the [[[previous section:#Semirings]]] we saw how the multisets on a set with an associated semigroup operation give rise to a semiring. Specifically if we have a semigroup operation \(\functionSignature{\function{\sgdot }}{\tuple{\setSymbol{X},\setSymbol{X}}}{\setSymbol{X}}\) on a set \(\setSymbol{X}\), we can equip the set of multisets \(\multisets{X}\) with a multiplication \(\msrdot \defEqualSymbol \multiImageModifier{\sgdot }\) to yield a semiring that we will denote \(\multisetSemiring{\setSymbol{X}}{\sgdot }\).

Compactly, then, we have \(\multisetSemiring{\setSymbol{X}}{\sgdot }\defEqualSymbol \tuple{\multisets{X},\multisetSumSymbol ,\msrdot }\). To recap, the semiring operations are:

Adding two elements \(\multisetSymbol{A},\multisetSymbol{B}\) of the semiring corresponds to adding the corresponding multisets (\(\isomorphicSymbol\) adding their multiplicity functions).

Multiplying two elements of the semiring corresponds to \(\sgdot\)-multiplying all pairs that can be formed from the two multisets.

#### Multiplicity functions

We can also state the semiring sum and product in terms of multiplicity functions. The multiplicity of some element \(\multisetElementSymbol{c}\) in the sum and product is computed as:

\[ \begin{aligned} \paren{\multisetSymbol{A}\msrplus \multisetSymbol{B}}\multisetMultiplicitySymbol \multisetElementSymbol{c}&= \multisetSymbol{A}\multisetMultiplicitySymbol \multisetElementSymbol{c} + \multisetSymbol{B}\multisetMultiplicitySymbol \multisetElementSymbol{c}\\ \paren{\multisetSymbol{A}\msrdot \multisetSymbol{B}}\multisetMultiplicitySymbol \multisetElementSymbol{c}&= \indexSum{\paren{\multisetSymbol{A}\multisetMultiplicitySymbol \multisetElementSymbol{a}} \, \paren{\multisetSymbol{B}\multisetMultiplicitySymbol \multisetElementSymbol{b}}}{\multisetElementSymbol{a}\sgdot \multisetElementSymbol{b} = \multisetElementSymbol{c}}{}\end{aligned} \]We can now extend this situation to signed multisets by simply allowing the corresponding multiplicities to be negative.
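These two multiplicity formulas translate directly into code. A sketch (signed multisets as dicts from elements to nonzero integer multiplicities; `dot` stands in for the semigroup operation \(\sgdot\), and the function names are illustrative):

```python
# Sketch: sum and product in a (signed) multiset semiring/ring.

def ms_add(a, b):
    """Semiring sum: add multiplicities pointwise, dropping zeros."""
    out = dict(a)
    for x, k in b.items():
        out[x] = out.get(x, 0) + k
        if out[x] == 0:
            del out[x]
    return out

def ms_mul(a, b, dot):
    """Semiring product: dot-multiply all pairs of elements, so the
    multiplicity of c is the sum of a#x * b#y over all x . y = c."""
    out = {}
    for x, j in a.items():
        for y, k in b.items():
            c = dot(x, y)
            out[c] = out.get(c, 0) + j * k
            if out[c] == 0:
                del out[c]
    return out
```

With \(\setSymbol{X} = \mathbb{Z}\) and `dot` as integer addition, `ms_mul` behaves like multiplication of Laurent polynomials: `ms_mul({0: 1, 1: 1}, {0: 1, 1: 1}, lambda p, q: p + q)` gives `{0: 1, 1: 2, 2: 1}`.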

#### Signed multiset ring

If \(\functionSignature{\function{\sgdot }}{\tuple{\setSymbol{X},\setSymbol{X}}}{\setSymbol{X}}\) is a semigroup operation on a set \(\setSymbol{X}\), we can now define the **signed multiset ring**, written \(\signedMultisetRing{\setSymbol{X}}{\sgdot }\), to be the following tuple:

\[ \signedMultisetRing{\setSymbol{X}}{\sgdot }\defEqualSymbol \tuple{\signedMultisets{\setSymbol{X}},\multisetSumSymbol ,\msrdot } \]

Here, \(\msrdot\) is again the multi-image lift \(\multiImageModifier{\sgdot }\) of the semigroup operation \(\sgdot\), this time to operate on signed multisets. How does this work?

The most natural way to understand this is to first extend \(\sgdot\) to be a binary operation on **signed sets** on \(\setSymbol{X}\).

## Advanced topics

### Lattice structure

### Convexity

### Galois connection

# Word rings

## Definition

The **word ring**, written \(\wordRingSymbol\), is the so-called **group ring** of the word group \(\wordGroupSymbol\): the ring of formal \(\ring{\mathbb{Z}}\)-linear combinations of elements of the group \(\wordGroupSymbol\). We will explain this idea with examples.

## Examples

### Example: one cardinal

For the word group \(\bindCards{\wordGroupSymbol }{\rform{\card{r}}}\), we can only form words from the single cardinal \(\rform{\card{r}}\). All of these words \(\elemOf{\groupElement{w}}{\bindCards{\wordGroupSymbol }{\rform{\card{r}}}}\) can be described by simply counting how many times the cardinal \(\rform{\card{r}}\) is repeated in \(\groupElement{w}\) (negative counts indicate that we are repeating \(\rform{\inverted{\card{r}}}\) instead):

\[ \groupElement{w} = \overRepeated{\concat{\rform{\card{r}} \rform{\card{r}} \ellipsis \rform{\card{r}}}}{\sym{p}} = \groupPower{\rform{\card{r}}}{\sym{p}} \]Therefore we can write the set of elements compactly as:

\[ \bindCards{\wordGroupSymbol }{\rform{\card{r}}} = \setConstructor{\groupPower{\rform{\card{r}}}{\sym{p}}}{\elemOf{\sym{p}}{\group{\mathbb{Z}}}} \]Word concatenation simply adds these exponents:

\[ \groupPower{\rform{\card{r}}}{\sym{p}}\gdot \groupPower{\rform{\card{r}}}{\sym{q}} = \overRepeated{\concat{\rform{\card{r}} \ellipsis \rform{\card{r}}}}{\sym{p}}\gdot \overRepeated{\concat{\rform{\card{r}} \ellipsis \rform{\card{r}}}}{\sym{q}} = \overRepeated{\concat{\concat{\rform{\card{r}} \ellipsis \rform{\card{r}}}\,\concat{\rform{\card{r}} \ellipsis \rform{\card{r}}}}}{\sym{p} + \sym{q}} = \groupPower{\rform{\card{r}}}{\sym{p} + \sym{q}} \]Therefore \(\bindCards{\wordGroupSymbol }{\rform{\card{r}}}\) is isomorphic to the group \(\group{\mathbb{Z}}\), the integers under addition.

The **word ring** \(\bindCards{\wordRingSymbol }{\rform{\card{r}}}\) consists of integer-weighted *sums* of elements of \(\bindCards{\wordGroupSymbol }{\rform{\card{r}}}\). It is the simplest answer to the question: "how do we define addition and subtraction of group elements?". Once we allow these new operations, we can add a given group element \(\groupElement{w}\) to itself any number of times. We can keep track of the number of times we have added (or subtracted) a group element from the zero element with an integer coefficient:

\[ \overRepeated{\groupElement{w} + \ellipsis + \groupElement{w}}{\sym{m}} = \linearCombinationCoefficient{m} \, \groupElement{w} \]

Similarly, we can add two differing group elements to each other; since they are different, these two terms of a sum remain separate:

\[ \overRepeated{\groupElement{w} + \ellipsis + \groupElement{w}}{\sym{m}} + \overRepeated{\groupElement{v} + \ellipsis + \groupElement{v}}{\sym{n}} = \linearCombinationCoefficient{m} \, \groupElement{w} + \linearCombinationCoefficient{n} \, \groupElement{v} \]Therefore, a general element of \(\bindCards{\wordRingSymbol }{\rform{\card{r}}}\) can be written as an integral linear combination of words. We'll write the coefficients with the same symbol as the ring element to make keeping track of these easier, but we'll gray the coefficient symbol to distinguish it:

\[ \groupElement{w} = \indexSum{\linearCombinationCoefficient{w_p} \, \groupPower{\rform{\card{r}}}{\sym{p}}}{\elemOf{\sym{p}}{\group{\mathbb{Z}}}}{} \]

#### Sum

To add two elements \(\groupElement{w},\groupElement{v}\) of the word ring, we formally add the two sums. Since addition is commutative, we can rearrange the terms to group like powers of \(\rform{\card{r}}\), and we obtain a sum whose coefficients are the sums of the coefficients of \(\groupElement{w}\textAnd \groupElement{v}\):

\[ \groupElement{w} + \groupElement{v} = \indexSum{\paren{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} + \linearCombinationCoefficient{v_{\gbform{\sym{p}}}}} \, \groupPower{\rform{\card{r}}}{\gbform{\sym{p}}}}{\elemOf{\gbform{\sym{p}}}{\group{\mathbb{Z}}}}{} \]

#### Product

To multiply two elements \(\groupElement{w},\groupElement{v}\) of the word ring, we multiply them as sums in the normal way:

\[ \begin{aligned} \groupElement{w}\gDot \groupElement{v}&= \paren{\indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \groupPower{\rform{\card{r}}}{\gbform{\sym{p}}}}{\gbform{\sym{p}}}{}}\gDot \paren{\indexSum{\linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \groupPower{\rform{\card{r}}}{\rbform{\sym{q}}}}{\rbform{\sym{q}}}{}}\\ &= \indexSum{\paren{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \groupPower{\rform{\card{r}}}{\gbform{\sym{p}}}}\gDot \paren{\linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \groupPower{\rform{\card{r}}}{\rbform{\sym{q}}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{}\\ &= \indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \paren{\groupPower{\rform{\card{r}}}{\gbform{\sym{p}}}\gdot \groupPower{\rform{\card{r}}}{\rbform{\sym{q}}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{}\\ &= \indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \groupPower{\rform{\card{r}}}{\gbform{\sym{p}} + \rbform{\sym{q}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{}\end{aligned} \]If we "gather like powers of \(\rform{\card{r}}\)" we obtain:

\[ \begin{aligned} \groupElement{w}\gDot \groupElement{v}&= \indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \groupPower{\rform{\card{r}}}{\gbform{\sym{p}} + \rbform{\sym{q}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{}\\ &= \indexSum{\paren{\indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rbform{\sym{q}}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{\gbform{\sym{p}} + \rbform{\sym{q}} = \rgform{\sym{n}}}} \, \groupPower{\rform{\card{r}}}{\rgform{\sym{n}}}}{\rgform{\sym{n}}}{}\\ &= \indexSum{\paren{\indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rgform{\sym{n}} - \gbform{\sym{p}}}}}{\gbform{\sym{p}}}{}} \, \groupPower{\rform{\card{r}}}{\rgform{\sym{n}}}}{\rgform{\sym{n}}}{}\end{aligned} \]Sums (or integrals) of this form are generally known as **convolutions**.
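The convolution is easy to check numerically. A small sketch, representing an element of \(\bindCards{\wordRingSymbol }{\rform{\card{r}}}\) as a dict from exponents \(\sym{p}\) to coefficients (the representation and names are choices of this example):

```python
# Sketch: the coefficient of r^n in the product w . v, computed as the
# convolution sum over p of w_p * v_(n-p).

def coefficient(w, v, n):
    return sum(wp * v.get(n - p, 0) for p, wp in w.items())

# (r^-1 + r) squared is r^-2 + 2 + r^2:
w = {-1: 1, 1: 1}
assert coefficient(w, w, 0) == 2
assert coefficient(w, w, 2) == 1
assert coefficient(w, w, 1) == 0
```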

An alternate way of presenting this result is to write \(\groupElement{w}\gDot \groupElement{v}\) "coefficientwise", that is, to define the coefficient of \(\groupPower{\rform{\card{r}}}{\rgform{\sym{n}}}\) in the product \(\groupElement{w}\gDot \groupElement{v}\) in terms of \(\groupElement{w}\textAnd \groupElement{v}\):

\[ \coefficient(\groupElement{w}\gDot \groupElement{v},\groupPower{\rform{\card{r}}}{\rgform{\sym{n}}}) = \indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rgform{\sym{n}} - \gbform{\sym{p}}}}}{\gbform{\sym{p}}}{} \]

### Example: two cardinals

Like before, we'll start with the word group \(\wordGroupSymbol [\rform{\card{r}},\bform{\card{b}}]\), whose elements look like this:

\[ \bindCards{\wordGroupSymbol }{\rform{\card{r}},\bform{\card{b}}} = \list{\word{1},\word{\rform{\card{r}}},\word{\rform{\ncard{r}}},\word{\bform{\card{b}}},\word{\bform{\ncard{b}}},\word{\rform{\card{r}}}{\rform{\card{r}}},\word{\rform{\ncard{r}}}{\rform{\ncard{r}}},\word{\rform{\card{r}}}{\bform{\card{b}}},\word{\rform{\card{r}}}{\bform{\ncard{b}}},\word{\rform{\ncard{r}}}{\bform{\card{b}}},\word{\rform{\ncard{r}}}{\bform{\ncard{b}}},\word{\bform{\card{b}}}{\rform{\card{r}}},\word{\bform{\card{b}}}{\rform{\ncard{r}}},\word{\bform{\ncard{b}}}{\rform{\card{r}}},\word{\bform{\ncard{b}}}{\rform{\ncard{r}}},\word{\bform{\card{b}}}{\bform{\card{b}}},\word{\bform{\ncard{b}}}{\bform{\ncard{b}}},\word{\rform{\card{r}}}{\rform{\card{r}}}{\rform{\card{r}}},\word{\rform{\card{r}}}{\rform{\card{r}}}{\bform{\card{b}}},\ellipsis } \]Therefore, the word ring \(\wordRingSymbol [\rform{\card{r}},\bform{\card{b}}]\) consists of any \(\mathbb{Z}\)-linear combination of elements from \(\wordGroupSymbol [\rform{\card{r}},\bform{\card{b}}]\). An element \(\elemOf{\groupElement{w}}{\wordRing{\quiver{2}}}\) can be written as:

\[ \groupElement{w} = \indexSum{\linearCombinationCoefficient{w_i} \, \wordSymbol{e}_{\sym{i}}}{\sym{i}}{} \]Here, \(\wordSymbol{e} = \list{\word{1},\word{\rform{\card{r}}},\word{\rform{\ncard{r}}},\word{\bform{\card{b}}},\word{\bform{\ncard{b}}},\word{\rform{\card{r}}}{\rform{\card{r}}},\word{\rform{\ncard{r}}}{\rform{\ncard{r}}},\word{\rform{\card{r}}}{\bform{\card{b}}},\ellipsis }\) is a sequence that enumerates the elements of \(\wordGroupSymbol [\rform{\card{r}},\bform{\card{b}}]\) in *some* order. This serves as an (infinite) **basis** for the word ring, and effectively allows us to model elements of the ring as infinite-dimensional integer-valued "vectors" with respect to this basis.

#### Sum

The sum is defined by adding the coefficients elementwise, as before:

\[ \groupElement{w} + \groupElement{v} = \indexSum{\paren{\linearCombinationCoefficient{w_{\gbform{\sym{i}}}} + \linearCombinationCoefficient{v_{\gbform{\sym{i}}}}} \, \wordSymbol{e}_{\gbform{\sym{i}}}}{\gbform{\sym{i}}}{} \]

#### Product

The product in the group ring is defined by expanding the product-of-sums into a sum-of-products. Each individual product is then an ordinary group multiplication, which we know how to do. We "gather the like terms" here: the coefficient of each word in the product \(\groupElement{w}\gDot \groupElement{v}\) is, by definition, the sum of the ways it can be formed as products of terms in the sums for \(\groupElement{w}\textAnd \groupElement{v}\):

\[ \begin{aligned} \groupElement{w}\gDot \groupElement{v}&= \paren{\indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \wordSymbol{e}_{\gbform{\sym{p}}}}{\gbform{\sym{p}}}{}}\gDot \paren{\indexSum{\linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \wordSymbol{e}_{\rbform{\sym{q}}}}{\rbform{\sym{q}}}{}}\\ &= \indexSum{\paren{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \wordSymbol{e}_{\gbform{\sym{p}}}}\gDot \paren{\linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \wordSymbol{e}_{\rbform{\sym{q}}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{}\\ &= \indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \paren{\groupElement{\wordSymbol{e}_{\gbform{\sym{p}}}}\gdot \groupElement{\wordSymbol{e}_{\rbform{\sym{q}}}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{}\end{aligned} \]Again, we can gather like terms, in other words, those group elements \(\wordSymbol{e}_{\rgform{\sym{k}}}\) for which \(\groupElement{\wordSymbol{e}_{\gbform{\sym{p}}}}\gdot \groupElement{\wordSymbol{e}_{\rbform{\sym{q}}}} = \wordSymbol{e}_{\rgform{\sym{k}}}\):

\[ \begin{aligned} \groupElement{w}\gDot \groupElement{v}&= \indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rbform{\sym{q}}}} \, \paren{\groupElement{\wordSymbol{e}_{\gbform{\sym{p}}}}\gdot \groupElement{\wordSymbol{e}_{\rbform{\sym{q}}}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{}\\ &= \indexSum{\paren{\indexSum{\linearCombinationCoefficient{w_{\gbform{\sym{p}}}} \, \linearCombinationCoefficient{v_{\rbform{\sym{q}}}}}{\substack{\gbform{\sym{p}},\;\rbform{\sym{q}}}}{\groupElement{\wordSymbol{e}_{\gbform{\sym{p}}}}\gdot \groupElement{\wordSymbol{e}_{\rbform{\sym{q}}}} = \wordSymbol{e}_{\rgform{\sym{k}}}}} \, \wordSymbol{e}_{\rgform{\sym{k}}}}{\rgform{\sym{k}}}{}\end{aligned} \]

### Enumeration of basis elements

To define the 2-cardinal word ring \(\wordRing{\quiver{2}}\), we made use of the sequence of possible words \(\wordSymbol{e}_{\sym{i}}\) as a basis. It was enough that it was defined in *some* linear order so that we could index over it in the sum.

In the case of the 1-cardinal ring \(\wordRingSymbol [\rform{\card{r}}]\) we could use as basis the bi-infinite sequence \(\list{\ellipsis ,\repeatedPower{\rform{\card{r}}}{-2},\repeatedPower{\rform{\card{r}}}{-1},\repeatedPower{\rform{\card{r}}}{0},\repeatedPower{\rform{\card{r}}}{1},\repeatedPower{\rform{\card{r}}}{2},\ellipsis }\), indexed by \(\mathbb{Z}\). This order allowed us to rewrite the conditional inner sum \(\indexSum{\paren{\indexSum{\linearCombinationCoefficient{w_p} \, \linearCombinationCoefficient{v_q}}{\sym{p} + \sym{q} = \sym{n}}{}} \, \groupPower{\rform{\card{r}}}{\sym{n}}}{\sym{n}}{}\) as \(\indexSum{\paren{\indexSum{\linearCombinationCoefficient{w_p} \, \linearCombinationCoefficient{v_{n - p}}}{\sym{p}}{}} \, \groupPower{\rform{\card{r}}}{\sym{n}}}{\sym{n}}{}\) since it amounted to the requirement that \(\sym{p} + \sym{q} = \sym{n}\), allowing us to solve for \(\sym{q}\) explicitly.

But for 2 or more cardinals, a linear enumeration of the basis elements is complicated to define and doesn't allow for such a neat arithmetic form for the inner sum. The larger point is that this linear enumeration of basis elements is a kind of holdover from traditional mathematics, where questions of computation and notation aren't usually primary, and where it is useful to think about ring elements as infinite-dimensional integer "vectors".

Luckily, it turns out there is a more combinatorial and computational way of thinking about group rings that builds them on a different interpretation. To introduce this idea, we first have to define and build intuition for an underutilized structure in mathematics, the **multiset**.

### Connection to polynomials

The elements of the word ring \(\wordRingSymbol [\rform{\card{r}}]\) add and multiply in a way that looks awfully similar to how one adds and multiplies polynomials (where \(\rform{\card{r}}\) plays the role of the indeterminate), except in this case we allow negative powers. Indeed, \(\wordRingSymbol [\rform{\card{r}}]\) is a so-called **polynomial ring**.

This recalls the Laurent polynomials, although the coefficients appearing in the elements of the word ring are valued in \(\mathbb{Z}\) rather than \(\mathbb{C}\).

The word ring \(\wordRing{\quiver{\sym{k}}}\) for \(\sym{k}>1\) can be thought of as consisting of multivariate, non-commutative polynomials, with negative powers allowed.

## Word rings as multisets

A concrete interpretation of an element \(\elemOf{\ringElement{ \omega }}{\wordRingSymbol }\) can be given when the integer coefficients are all *non-negative*. In this case, we can see \(\ringElement{ \omega }\) as representing a **multiset** of words, where the number of times a given word \(\elemOf{\groupElement{w}}{\wordGroupSymbol }\) is present in the multiset is represented by the coefficient of \(\groupElement{w}\) in the sum \(\ringElement{ \omega }\). The zero element of the ring is then just the empty multiset. But since coefficients can be negative, it is natural to extend our interpretation to see a general element \(\ringElement{ \omega }\) as representing a **signed multiset**, in which words may appear in their ordinary form, or in a "negated" form, with the total multiplicity being described by the coefficient.

From now on, whenever we use the term **multiset**, it should be interpreted as **signed multiset**. This elision makes for less clunky language, but more importantly it sets up a simple and useful combinatorial interpretation of the constructions we'll consider. In each case we can still extend this interpretation to the setting of signed multisets, but doing this explicitly would bog us down for no real reward.

### Operations

Addition of two elements corresponds to merging the two corresponding multisets.

But how do we multiply two multisets of words? Group rings define multiplication by linearity, where we multiply the formal combinations by distributing the two formal sums through one another, yielding a sum of pairwise group multiplications – where the group operation in \(\wordGroupSymbol\) is the familiar **concatenation** of words. Under the multiset interpretation, this corresponds to forming a multiset of all possible concatenations of a word from the first multiset and a word from the second multiset.
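Both operations are straightforward to realize in code. In the sketch below (an illustration, not this site's implementation), a word is a tuple of cardinal names, with `'-r'` standing for the inverse of cardinal `'r'`, and a signed multiset of words is a dict from words to integer multiplicities:

```python
# Sketch: signed multisets of words, with ring sum as multiset merge and
# ring product as pairwise concatenation of words.

def _inv(c):
    """The inverse of a cardinal name: 'r' <-> '-r'."""
    return c[1:] if c.startswith('-') else '-' + c

def concat(w, v):
    """Concatenate two reduced words, cancelling inverse pairs at the seam."""
    w, v = list(w), list(v)
    while w and v and v[0] == _inv(w[-1]):
        w.pop()
        v.pop(0)
    return tuple(w + v)

def add(a, b):
    """Ring sum: merge the two multisets of words."""
    out = dict(a)
    for word, k in b.items():
        out[word] = out.get(word, 0) + k
        if out[word] == 0:
            del out[word]
    return out

def mul(a, b):
    """Ring product: concatenate every pair of words, multiplying multiplicities."""
    out = {}
    for w, j in a.items():
        for v, k in b.items():
            c = concat(w, v)
            out[c] = out.get(c, 0) + j * k
            if out[c] == 0:
                del out[c]
    return out
```

For example, `mul({('r',): 1, ('b',): 1}, {('-r',): 1})` concatenates each word of the first multiset with the single word of the second, giving `{(): 1, ('b', '-r'): 1}`.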

# Adjacency

## Introduction

In this section we review the **adjacency matrix** of a graph and consider some of its basic properties. We ask how a similar structure might be devised to represent the cardinal structure of a quiver. Motivated by this question, we'll use the **word ring** to define **routes** and **plans**, which model subjective states of knowledge about potential paths in a quiver. Multisets allow us to obtain an isomorphism between **plans** and **plan matrices**. We then construct a particular plan matrix that plays the role of an adjacency matrix: the **adjacency plan matrix** (which has a corresponding **adjacency plan**).

Note: this section is largely incomplete and should be ignored for now.

## Adjacency matrices of graphs

The adjacency matrix \(\matrix{A}\) of a finite directed graph \(\graph{G}\) with vertices \(\sym{V} = \list{\vert{v_1},\vert{v_2},\ellipsis ,\vert{v_{\sym{n}}}} = \vertexList(\graph{G})\) is an \(\sym{n}\times \sym{n}\) matrix with entries:

\[ \matrixPart{\matrix{A}}{\matrixRowPart{\sym{i}}}{\matrixColumnPart{\sym{j}}}\defEqualSymbol \begin{cases} 1 &\text{if } \elemOf{\de{\vert{v_{\sym{i}}}}{\vert{v_{\sym{j}}}}}{\graph{G}}\\ 0 &\text{otherwise} \end{cases} \]For a finite directed *multigraph*, the entries can be arbitrary non-negative integers that simply count the number of edges \(\de{\vert{v_{\sym{i}}}}{\vert{v_{\sym{j}}}}\).
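This definition can be sketched in a few lines of code (function and variable names here are illustrative, not from any library):

```python
# Sketch: adjacency matrix of a finite directed multigraph with
# vertices indexed 0..n-1. Parallel edges are counted by
# incrementing the corresponding entry.

def adjacency_matrix(n, edges):
    A = [[0] * n for _ in range(n)]
    for i, j in edges:       # a directed edge i -> j
        A[i][j] += 1
    return A

# A directed 3-cycle: 0 -> 1 -> 2 -> 0
A = adjacency_matrix(3, [(0, 1), (1, 2), (2, 0)])
```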

### Examples

Here we illustrate some example graphs and their corresponding adjacency matrices:

### Properties

#### Matrix powers and walks

### Quivers

How do we model the cardinal structure that a quiver imposes on a digraph? A famous quip in computer science holds that "all problems can be solved by an additional level of indirection". Our additional level of indirection is via the cardinals: we now consider a mapping from cardinals to adjacency matrices:

\[ \matrix{A}\defEqualSymbol \assocArray{\mto{\card{c}_1}{\matrix{A}_1},\mto{\card{c}_2}{\matrix{A}_2},\ellipsis ,\mto{\card{c}_{\sym{k}}}{\matrix{A}_{\sym{k}}}} \]Each \(\matrix{A}_{\sym{i}}\) gives the ordinary digraph adjacency matrix for the subgraph of the quiver \(\quiver{Q}\) consisting only of edges labeled with the cardinal \(\card{c}_{\sym{i}}\).

Here we show these mappings for some simple quivers:

### Back to single matrices

An idea that might have occurred to the reader is to combine the per-cardinal adjacency matrices into a single adjacency matrix:

But if this idea is to work, what *are* these matrices? More specifically, what is the meaning of sums of the form \(\gform{\card{g}} + \rform{\card{r}}\) that can appear in their entries? These are simply elements of the [[[word ring:Word rings]]] that we introduced in a previous section.

### Adjacency ring

The approach we used above was to represent the cardinal structure of a quiver by storing a separate adjacency matrix for each cardinal, gathering these into a mapping \(\assocArray{\mto{\card{c}_1}{\matrix{A}_1},\mto{\card{c}_2}{\matrix{A}_2},\ellipsis ,\mto{\card{c}_{\sym{k}}}{\matrix{A}_{\sym{k}}}}\). This allowed us to *distinguish* adjacency on a per-cardinal basis.

But there is a more algebraic way to do this, which is to treat the cardinals as the **indeterminates** of a **polynomial** (the misleading term *variable* is sometimes used).

We can then represent a mapping \(\assocArray{\mto{\card{c}_1}{\matrix{A}_1},\mto{\card{c}_2}{\matrix{A}_2},\ellipsis ,\mto{\card{c}_{\sym{k}}}{\matrix{A}_{\sym{k}}}}\) as a polynomial \(\polynomial{a} = \poly{\card{c}_1 \, \matrix{A}_1 + \card{c}_2 \, \matrix{A}_2 + \ellipsis + \card{c_{\sym{k}}} \, \matrix{A}_{\sym{k}}}\), where the \(\card{c}_{\sym{i}}\) are indeterminates and the adjacency matrices \(\matrix{A}_{\sym{i}}\) are coefficients.

Formally, \(\polynomial{a}\) is an element of a (multivariate) **polynomial ring** \(\polynomialRing{\ring{R}}{\indeterminate{\card{c}_1},\indeterminate{\card{c}_2},\indeterminate{\ellipsis },\indeterminate{\card{c}_{\sym{k}}}}\), where the coefficient ring is \(\ring{R} = \matrixRing{\ring{\mathbb{Z}}}{\sym{n}}\), the **matrix ring** of \(\sym{n}\times \sym{n}\) matrices with integer entries.

The adjacency polynomial \(\polynomial{a}\) represents the quiver in a particular sense that we will now explore.

## Routes, multiroutes, and plans

### Motivation

We will now construct a family of structures that represents subjective states of knowledge of an agent about how they might navigate a quiver \(\quiver{Q}\).

The subjective states will describe how an agent believes it can move between ordered pairs of vertices \(\vertOf{\vert{v_{\sym{i}}},\vert{v_{\sym{j}}}}{\quiver{Q}}\), which we'll call **endpoints**. We'll organize the various forms of knowledge as follows:

A **route** \(\routeSymbol{R}\) for \(\fromTo{\vert{v}\fromToSymbol \vert{w}}\) is an *imagined path* from **origin** vertex \(\vert{v}\) to **destination** vertex \(\vert{w}\) (this need not correspond to an actual path).

A **multiroute** \(\multirouteSymbol{R}\) for \(\fromTo{\vert{v}\fromToSymbol \vert{w}}\) is a multiset of routes that all share the same endpoints \(\fromTo{\vert{v}\fromToSymbol \vert{w}}\).

A **plan** \(\planSymbol{P}\) is a multiset of routes, *not necessarily* for the same endpoints.

A plan and a multiroute are said to be **empty** if they are empty as multisets.

Being a multiset of routes, a multiroute is necessarily also a plan. Conversely, a plan might contain routes with differing endpoints, and hence is not in general a multiroute. We can however refer to the multiroute for \(\fromTo{\vert{v}\fromToSymbol \vert{w}}\,\)*within* a plan, which is the multiset of all those routes with those particular endpoints.

A route is **fictitious** if it does not describe a path in the quiver. Likewise, a multiroute or plan is fictitious if it contains at least one fictitious route.

A plan does *not* have to be **consistent**: a plan might contain a route for \(\fromTo{\vert{u}\fromToSymbol \vert{v}}\) and a route for \(\fromTo{\vert{v}\fromToSymbol \vert{w}}\), but no route for \(\fromTo{\vert{u}\fromToSymbol \vert{w}}\). This doesn't match our intuition about navigation, which is that if we know how to get from \(\vert{u}\) to \(\vert{v}\) and from \(\vert{v}\) to \(\vert{w}\) then we implicitly know how to get from \(\vert{u}\) to \(\vert{w}\). We will create an exact definition of consistency later.

The **route word** for a route for \(\fromTo{\vert{v}\fromToSymbol \vert{w}}\) is a **path word** that describes the imagined path from vertex \(\vert{v}\) to vertex \(\vert{w}\) as a sequence of cardinals.

The **multiroute word** for a multiroute for \(\fromTo{\vert{v}\fromToSymbol \vert{w}}\) is a path multiword that describes the multiset of imagined paths from vertex \(\vert{v}\) to vertex \(\vert{w}\) in terms of their cardinals.

### Interpretation

As we said, plans are constructs that describe what an agent believes about how it is possible to navigate a quiver. These beliefs do not have to be true, or even consistent, but we will of course define the logic of how to build true and consistent beliefs.

### Notation

In [[[Path groupoids]]] we introduced the notation \(\paren{\pathWord{\vert{u}}{\wordSymbol{W}}{\vert{v}}}\) to refer to a path in a quiver. (Multi)routes are distinct from paths, since while paths must exist in the quiver, (multi)routes can be *fictitious*. Therefore we introduce distinct notation: \(\route{\vert{u}}{\wordSymbol{W}}{\vert{v}}\) for a route for \(\fromTo{\vert{u}\fromToSymbol \vert{v}}\) with path word \(\wordSymbol{W}\), and \(\multiroute{\vert{u}}{\multiwordSymbol{W}}{\vert{v}}\) for a multiroute for \(\fromTo{\vert{u}\fromToSymbol \vert{v}}\) with path multiword \(\elemOf{\multiwordSymbol{W}}{\wordRingSymbol }\).

Recall that a multiword \(\multiwordSymbol{W}\) is a formal sum of words \(\elemOf{\wordSymbol{W_{\sym{i}}}}{\wordGroupSymbol }\) with coefficients \(\elemOf{\sym{n_{\sym{i}}}}{\ring{\mathbb{Z}}}\):

\[ \multiwordSymbol{W} = \sym{n_1} \, \wordSymbol{W_1} + \sym{n_2} \, \wordSymbol{W_2} + \ellipsis \]The multiroute \(\multiroute{\vert{u}}{\multiwordSymbol{W}}{\vert{v}}\) with path multiword \(\multiwordSymbol{W}\) is by definition the formal sum of routes with corresponding path words from \(\multiwordSymbol{W}\):

\[ \multiroute{\vert{u}}{\multiwordSymbol{W}}{\vert{v}}\identicallyEqualSymbol \sym{n_1} \, \route{\vert{u}}{\wordSymbol{W_1}}{\vert{v}} + \sym{n_2} \, \route{\vert{u}}{\wordSymbol{W_2}}{\vert{v}} + \ellipsis \]Recall that when all \(n_{\sym{i}} \ge 0\), we can [[[interpret:Multisets#Signed multisets]]] a multiword as a multiset, where \(\sym{n_{\sym{i}}}\) are the multiplicities for each distinct word (if some \(\sym{n_{\sym{i}}}<0\), we have instead a **signed multiset**). Then the above formal sum is equivalent to building a multiroute as a multiset of routes, one for each word in the multiword:

### Representation

We've introduced the high-level concepts of routes, multiroutes, and plans. We'll now show how a plan can be represented by a matrix of a particular type.

Consider an \(\sym{n}\times \sym{n}\) matrix \(\matrix{M}\), where \(\sym{n} = \vertexCountOf{\quiver{Q}}\) is the number of vertices, and whose entries are multiwords, namely elements of the word ring \(\wordRingSymbol\).

A matrix \(\matrix{M}\,\)*represents* a plan \(\planSymbol{P}\) as follows:

The \(\tuple{\sym{i},\sym{j}}\) entry of \(\matrix{M}\), written \(\matrixPart{\matrix{M}}{\sym{i}}{\sym{j}}\), gives the multiroute word for \(\fromTo{\vert{v_{\sym{i}}}\fromToSymbol \vert{v_{\sym{j}}}}\).

Let's rephrase this in terms of individual routes, using the [[[multiset interpretation:Word rings#Word rings as multisets]]] of the word ring \(\wordRingSymbol\). Since each entry \(\matrixPart{\matrix{M}}{\sym{i}}{\sym{j}}\) is an element of the word ring \(\wordRingSymbol\), we can interpret the entry as a multiset of path words. These path words will be the route words for individual routes for \(\fromTo{\vert{v_{\sym{i}}}\fromToSymbol \vert{v_{\sym{j}}}}\).

\[ \planSymbol{P} = \multisetConstructor{\route{\vert{v_{\sym{i}}}}{\wordSymbol{W}}{\vert{v_{\sym{j}}}}}{\begin{array}{c} 1 \le \sym{i},\sym{j} \le \sym{n}\\ \elemOf{\wordSymbol{W}}{\matrixPart{\matrix{M}}{\sym{i}}{\sym{j}}} \end{array} } \]We can see now more clearly the utility of the word ring: multiwords in the word ring represent multisets of path words. Fixing an origin and destination \(\fromTo{\vert{v_{\sym{i}}}\fromToSymbol \vert{v_{\sym{j}}}}\), each path word describes a route, and hence a multiset of path words describes a multiroute. Organizing these multiwords into a matrix, we can represent an entire plan. Next, we'll consider the significance of the ring addition and multiplication of such plan matrices.
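The correspondence above between a plan matrix and its plan can be sketched in code, continuing the illustrative dictionary model of multiwords (all names here are hypothetical):

```python
# Sketch: flatten a plan matrix into a plan. Entry M[i][j] is a
# multiword: a dict mapping path words (tuples of cardinals) to
# integer multiplicities. The resulting plan is a signed multiset
# of routes (origin, word, destination).

def plan_from_matrix(M):
    plan = {}
    for i, row in enumerate(M):
        for j, multiword in enumerate(row):
            for word, n in multiword.items():
                route = (i, word, j)
                plan[route] = plan.get(route, 0) + n
    return plan

# A 2-vertex plan matrix holding a single route 0 -> 1 with word "r":
M = [[{}, {('r',): 1}],
     [{}, {}]]
```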

## Ring structure of plans

### Matrix ring

The set of \(\sym{n}\times \sym{n}\) matrices (whose entries are taken from a ring \(\ring{R}\)) forms a ring in its own right, called a **matrix ring** and written \(\matrixRing{\ring{R}}{\sym{n}}\), with addition and multiplication given by the usual formulas for matrices:

(The addition and multiplication of matrix entries on the rhs of these definitions takes place in the ring \(\ring{R}\) of matrix entries.)

### Plan ring

The matrices we used to represent plans for a quiver \(\quiver{Q}\) form a matrix ring, which we will call the **plan matrix ring**, written \(\planRing{\quiver{Q}}\):

This is the \(\sym{n}\times \sym{n}\) matrix ring (where \(\sym{n} = \vertexCountOf{\quiver{Q}}\) is the number of vertices), whose entries are elements of the word ring \(\wordRingSymbol\) over the cardinals of the quiver. We'll call a matrix from the plan ring a **plan matrix**.

### Sums of matrices

The sum \(\matrix{X} + \matrix{Y}\) of two plan matrices \(\elemOf{\matrix{X},\matrix{Y}}{\planRing{\quiver{Q}}}\) is given by merging \(\matrix{X}\) and \(\matrix{Y}\) seen as multisets of routes.

### Products of matrices

What about multiplication \(\matrix{X}\matrixDotSymbol \matrix{Y}\) of two plan matrices \(\matrix{X},\matrix{Y}\)? By the ordinary definition of matrix multiplication in a matrix ring, we have:

\[ \matrixPart{\paren{\matrix{X}\matrixDotSymbol \matrix{Y}}}{\sym{i}}{\sym{j}} = \indexSum{\matrixPart{\matrix{X}}{\sym{i}}{\sym{k}} \, \matrixPart{\matrix{Y}}{\sym{k}}{\sym{j}}}{\sym{k} \le \sym{n}}{} \]For fixed \(\sym{i},\sym{j},\sym{k}\), the product \(\matrixPart{\matrix{X}}{\sym{i}}{\sym{k}} \, \matrixPart{\matrix{Y}}{\sym{k}}{\sym{j}}\) is the product of two multiwords. Under the multiset interpretation, this product is the multiset formed from all possible concatenations of a word from \(\matrixPart{\matrix{X}}{\sym{i}}{\sym{k}}\) and a word from \(\matrixPart{\matrix{Y}}{\sym{k}}{\sym{j}}\) respectively. Each such pair of words represents a route from \(\matrix{X}\) for \(\fromTo{\vert{v_{\sym{i}}}\fromToSymbol \vert{v_{\sym{k}}}}\) and a route from \(\matrix{Y}\) for \(\fromTo{\vert{v_{\sym{k}}}\fromToSymbol \vert{v_{\sym{j}}}}\). Therefore each concatenated word in the result can be interpreted as the path word for a route for \(\fromTo{\vert{v_{\sym{i}}}\fromToSymbol \vert{v_{\sym{j}}}}\). The entire result is the multiset of such composite path words: a multiword.

For a fixed \(\sym{i}\) and \(\sym{j}\), the sum \(\indexSum{\matrixPart{\matrix{X}}{\sym{i}}{\sym{k}} \, \matrixPart{\matrix{Y}}{\sym{k}}{\sym{j}}}{\sym{k} \le \sym{n}}{}\) over *all* \(\sym{k}\) is therefore a multiset that consists of the path words of composite routes that start at \(\vert{v_{\sym{i}}}\) and end at \(\vert{v_{\sym{j}}}\), having passed through *any* \(\vert{v_{\sym{k}}}\): the first part of each composite route comes from \(\matrix{X}\), the second from \(\matrix{Y}\).

Therefore, the product \(\matrix{X}\matrixDotSymbol \matrix{Y}\) represents the plan that consists of *all* composite routes formed from *any* route in \(\matrix{X}\) followed by *any* route in \(\matrix{Y}\), wherever these can be composed. The matrix product is therefore the natural way to combine two plans in an ordered way to form another plan.
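This product can be sketched by combining ordinary matrix multiplication with the multiset product of word-ring entries (an illustrative model, with all names hypothetical):

```python
# Sketch: multiply two plan matrices whose entries are multiwords
# (dicts mapping word-tuples to integer coefficients).

def ring_add(a, b):
    out = dict(a)
    for w, n in b.items():
        out[w] = out.get(w, 0) + n
        if out[w] == 0:
            del out[w]
    return out

def ring_mul(a, b):
    out = {}
    for u, m in a.items():
        for v, n in b.items():
            out[u + v] = out.get(u + v, 0) + m * n   # concatenate words
            if out[u + v] == 0:
                del out[u + v]
    return out

def plan_matmul(X, Y):
    n = len(X)
    Z = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):                       # sum over midpoints
                Z[i][j] = ring_add(Z[i][j], ring_mul(X[i][k], Y[k][j]))
    return Z

# X has a route 0 -> 1 with word "r"; Y has a route 1 -> 0 with word "b".
X = [[{}, {('r',): 1}], [{}, {}]]
Y = [[{}, {}], [{('b',): 1}, {}]]
Z = plan_matmul(X, Y)
```

The product `Z` contains exactly one composite route \(0 \to 0\) with word \(\card{r}\card{b}\), passing through the midpoint vertex 1.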

### Plan matrices vs plans

Plan matrices are distinct from plans themselves, although we have shown how they are related under the [[[multiset interpretation of rings:Multisets#Signed multisets]]]. Specifically, a plan matrix represents a (possibly signed) multiset of routes. Is there a natural equivalence between plans and plan matrices?

The answer is *yes,* but in a similar sense to how matrices correspond to linear operators on a finite-dimensional vector space. Once we have chosen an ordered basis for a vector space, we can represent a linear operator as a matrix. Similarly, once we choose a labeling of the vertices of a quiver, we can use plan matrices to represent plans, since the entries of the plan matrix are relative to a particular ordering of vertices. In the setting of a (countably) infinite quiver, little changes, though our matrices become infinite and questions of "convergence" arise: we might require that any particular sum occurring in the definition of plan matrix multiplication involves only a finite number of non-zero terms. We will see later how this can be achieved.

### Abstract operations on plans

We have already seen how to define plan matrix multiplication and addition, and have attached multiset interpretations to these. It is worth now defining the same operations *abstractly* on plans themselves, since they are arguably simpler when we do not insist on representing plans as plan matrices:

The sum of two plans \(\planSymbol{X} + \planSymbol{Y}\) is the union of \(\planSymbol{X}\) and \(\planSymbol{Y}\) as multisets of routes:

\[ \planSymbol{X} + \planSymbol{Y}\defEqualSymbol \planSymbol{X}\setUnionSymbol \planSymbol{Y} \]The product of two plans \(\planSymbol{X} \, \planSymbol{Y}\) is the multiset of compositions of routes from \(\planSymbol{X}\) and routes from \(\planSymbol{Y}\):

\[ \planSymbol{X} \, \planSymbol{Y}\defEqualSymbol \multisetConstructor{\pathCompose{\routeSymbol{X}}{\routeSymbol{Y}}}{\begin{array}{c} \elemOf{\routeSymbol{X}}{\planSymbol{X}}\\ \elemOf{\routeSymbol{Y}}{\planSymbol{Y}} \end{array} } \]Implicitly, if two routes cannot be composed, in other words if \(\pathCompose{\routeSymbol{X}}{\routeSymbol{Y}} = \nullPath\), that term is ignored in the construction.

In fact, plans form a ring, the **plan ring**, just like plan matrices do. These rings are isomorphic, as one might imagine.

## Consistency

We mentioned earlier that plans need not be consistent, although we didn't define precisely what consistency *is*. Roughly speaking, a plan is consistent if it is closed under composition with itself, meaning that any two routes in the plan that can be composed to form another route will form a composite route that is already in the plan. We naturally expect this of intelligent agents, since if such an agent knows how to get from A to B and from B to C, it necessarily *must* know how to get from A to C – if it does not already have a route from A to C, it can simply compose a route from A to B with a route from B to C.

What is the mathematical condition that corresponds to this notion of consistency? Quite simply, if a plan \(\planSymbol{P}\) is consistent, then \(\planSymbol{P} \, \planSymbol{P} = \planSymbol{P}\) – or in ring-theory terminology, \(\planSymbol{P}\) is idempotent. We will show how to construct such consistent plans shortly.
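A tiny sketch of this condition: modeling plans as multisets of routes (origin, word, destination), the plan consisting of one empty route at each vertex is idempotent, hence consistent (illustrative code under the same hypothetical representation as before):

```python
# Sketch: abstract plan product. Routes compose only when the first
# route's destination matches the second route's origin; other terms
# are dropped, mirroring the null-path convention.

def plan_mul(P, Q):
    out = {}
    for (u, w1, v), m in P.items():
        for (v2, w2, x), n in Q.items():
            if v != v2:                  # not composable: term ignored
                continue
            route = (u, w1 + w2, x)
            out[route] = out.get(route, 0) + m * n
    return out

# The plan of empty routes at each of 3 vertices satisfies P . P = P.
P = {(v, (), v): 1 for v in range(3)}
```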

## Cardinal and adjacency plans

We are now ready to engage with the original goal, which was to represent the cardinal structure of a quiver with a suitable generalization of an adjacency matrix.

Now that we have plan matrices, we can form the adjacency plan matrix \(\matrix{P}\) for a quiver. Here we show one example:

Notice the presence of inverted cardinals below the diagonal. Why is this necessary? The reason is that we can form paths out of cardinals *or* their inverses. In other words, we can traverse an edge \(\tde{1}{2}{\rform{\card{r}}}\) in the forward direction, yielding the path \(\pathWord{1}{\word{\rform{\card{r}}}}{2}\), or we can traverse it in the backward direction, yielding the path \(\pathWord{2}{\word{\rform{\ncard{r}}}}{1}\). If we wish for *all* paths of length \(\sym{n}\) to be present in the iterated adjacency plan matrix \(\power{\matrix{P}}{\sym{n}}\), we must encode a given edge in the adjacency plan matrix in both the forward and backward orientations, corresponding to a normal and inverted cardinal present in entries mirrored around the diagonal.
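The construction can be sketched as follows: for each edge we insert the cardinal's word in one entry and the inverted cardinal's word in the mirrored entry. Here inversion is marked with a "~" prefix, a purely illustrative encoding:

```python
# Sketch: adjacency plan matrix of a quiver. Entry (i, j) holds the
# multiword of length-1 path words from i to j; traversing an edge
# backwards contributes the inverted cardinal, written "~c" here.

def adjacency_plan_matrix(n, edges):
    A = [[{} for _ in range(n)] for _ in range(n)]
    for i, j, c in edges:                       # edge i --c--> j
        fwd, bwd = (c,), ('~' + c,)
        A[i][j][fwd] = A[i][j].get(fwd, 0) + 1  # forward traversal
        A[j][i][bwd] = A[j][i].get(bwd, 0) + 1  # backward traversal
    return A

# A single edge 0 --r--> 1:
A = adjacency_plan_matrix(2, [(0, 1, 'r')])
```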

### Powers of the adjacency plan

### Eigenvectors of adjacency

### Plan Laplacian

# Cayley quivers

## Introduction

In [[[Transitive quivers]]] we defined families of transitive quivers like the **line**, **square**, and **triangular** quivers. We now define how to obtain a **Cayley quiver** from a group by choosing a set of generators, or equivalently by choosing a presentation for a group. We explain how the simple transitive quivers are generated this way from \(\group{\power{\group{\mathbb{Z}}}{\sym{n}}}\).

## Groups

We briefly recall the definition of a **group** and some related ideas for readers who need a *very* short refresher:

#### Definition

A **group** is a structure \(\tuple{\group{G},\Gmult }\) consisting of a set of elements \(\group{G}\) and a **group multiplication** (or **group operation**) \(\functionSignature{\function{\Gmult }}{\tuple{\group{G},\group{G}}}{\group{G}}\).

The multiplication is associative, so \(\groupElement{f}\Gmult \paren{\groupElement{g}\Gmult \groupElement{h}} = \paren{\groupElement{f}\Gmult \groupElement{g}}\Gmult \groupElement{h}\) for all \(\elemOf{\groupElement{f},\groupElement{g},\groupElement{h}}{\group{G}}\).

Each element \(\groupElement{g}\) also has an **inverse** \(\groupInverse{\groupElement{g}}\) satisfying \(\groupInverse{\groupElement{g}}\Gmult \groupElement{g} = \groupElement{g}\Gmult \groupInverse{\groupElement{g}} = \groupIdentity{e}\), where \(\groupIdentity{e}\Gmult \groupElement{g} = \groupElement{g}\Gmult \groupIdentity{e} = \groupElement{g}\) defines the **identity** or **unit** element \(\elemOf{\groupIdentity{e}}{\group{G}}\).

We will sometimes drop the symbol \(\Gmult\) and write \(\groupElement{g}\Gmult \groupElement{h}\) as simply \(\groupElement{g}\iGmult \groupElement{h}\).

#### Commutators

The commutator of two group elements is defined as follows:

\[ \groupCommutator{\groupElement{g}}{\groupElement{h}}\defEqualSymbol \groupElement{g}\iGmult \groupElement{h}\iGmult \groupInverse{\groupElement{g}}\iGmult \groupInverse{\groupElement{h}} \]#### Generators

A set of **generators** \(\list{\groupGenerator{g_1},\groupGenerator{g_2},\ellipsis ,\groupGenerator{g_{\sym{n}}}}\) of a group is a set of elements such that every *other* element of the group can be expressed as a product of the generators or their inverses.

#### Group words

A **group word** over a set of generators is a product of generators or their inverses: an expression of the form \(\groupElement{\groupElement{h}_1}\iGmult \groupElement{\groupElement{h}_2}\iGmult \ellipsis \iGmult \groupElement{\groupElement{h}_{\sym{m}}}\), where each \(\elemOf{\groupElement{h}_{\sym{i}}}{\list{\groupGenerator{g_1},\groupInverse{\groupGenerator{g_1}},\ellipsis ,\groupGenerator{g_{\sym{n}}},\groupInverse{\groupGenerator{g_{\sym{n}}}}}}\). We can combine neighboring generators when they are the same, expressing the word as a sequence (of some length smaller than \(\sym{m}\)) of integer powers \(\sym{p}_{\sym{j}}\) of generators \(\groupGenerator{g_{\sym{i}_{\sym{j}}}}\).
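This combining step can be sketched in code: letters are (generator, ±1) pairs, and runs of the same generator are merged into integer powers (illustrative representation, not a library API):

```python
# Sketch: collapse a group word, given as (generator, exponent)
# letters with exponent +1 or -1, into (generator, integer power)
# pairs, merging runs of the same generator and dropping zero powers.

def to_powers(word):
    out = []
    for gen, e in word:
        if out and out[-1][0] == gen:
            out[-1] = (gen, out[-1][1] + e)   # merge with previous run
            if out[-1][1] == 0:
                out.pop()                     # powers of zero vanish
        else:
            out.append((gen, e))
    return out

# g . g . g . h^-1 . h^-1  ->  g^3 . h^-2
powers = to_powers([('g', 1), ('g', 1), ('g', 1), ('h', -1), ('h', -1)])
```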

Then we can restate the earlier definition: a set of generators is a set of elements such that every other element of the group can be written as group word over the set.

#### Presentations

A **presentation** of a group \(\group{G}\) is the data:

where the \(\groupElement{g}_{\sym{i}}\) are generators of \(\group{G}\) and the \(\groupRelator{R_{\sym{j}}}\) are group words over the \(\groupElement{g}_{\sym{i}}\) called **relators**.

The relators are, by definition, equal to \(\groupElement{e}\), the identity element of the group. One can think of each relator as a concise way to encode an "identity" or "equation".

For example, consider the infinite Abelian group \(\group{\power{\group{\mathbb{Z}}}{2}}\), whose elements are the set \(\setConstructor{\tuple{\sym{a},\sym{b}}}{\elemOf{\sym{a},\sym{b}}{\mathbb{Z}}}\), identity \(\groupIdentity{e} = \tuple{0,0}\), with the group operation being elementwise addition.

Choosing generators \(\groupGenerator{x} = \tuple{1,0}\) and \(\groupGenerator{y} = \tuple{0,1}\), our presentation must now capture the fact that the group is Abelian, in other words, that the generators commute: \(\groupGenerator{x}\iGmult \groupGenerator{y} = \groupGenerator{y}\iGmult \groupGenerator{x}\). We can rewrite this identity as \(\groupCommutator{\groupGenerator{x}}{\groupGenerator{y}} = \groupGenerator{y}\iGmult \groupGenerator{x}\iGmult \groupInverse{\groupGenerator{y}}\iGmult \groupInverse{\groupGenerator{x}} = \groupIdentity{e}\), making the only relator of this group \(\groupCommutator{\groupGenerator{x}}{\groupGenerator{y}}\).
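This relator is easy to verify mechanically. A minimal sketch, with elements of \(\group{\power{\group{\mathbb{Z}}}{2}}\) as integer pairs under elementwise addition (all function names illustrative):

```python
# Sketch: check the relator [x, y] = e in Z^2, where elements are
# integer pairs, the group operation is elementwise addition, and
# inversion is negation.

def add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def neg(a):
    return (-a[0], -a[1])

def commutator(g, h):                 # g * h * g^-1 * h^-1
    return add(add(g, h), add(neg(g), neg(h)))

x, y = (1, 0), (0, 1)
e = commutator(x, y)                  # the identity (0, 0), as required
```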

Therefore we have the following presentation for \(\group{\power{\group{\mathbb{Z}}}{2}}\) (which we will name \(\translationPresentation{2}\)):

\[ \translationPresentation{2}\defEqualSymbol \groupPresentation{\groupGenerator{x},\groupGenerator{y}}{\groupCommutator{\groupGenerator{x}}{\groupGenerator{y}}} \]There are *many* presentations of a given group, since we can rewrite relations in terms of each other, and similarly with generators.

Note: when naming presentations, there is ambiguity about whether the term \(\translationPresentation{2}\) refers to the presentation itself or the group it generates. We'll resolve this ambiguity using English, saying "the group presented by \(\translationPresentation{2}\)".

#### Free groups

The **free group** on \(\sym{n}\) generators, written \(\freeGroup{\sym{n}}\), is a group with presentation \(\groupPresentation{\groupGenerator{g_1},\groupGenerator{g_2},\groupGenerator{\ellipsis },\groupGenerator{g_{\sym{n}}}}{\emptySet }\), in other words, the group with no relations. The elements of such a group are **uniquely** described by group words: two group elements are equal if and only if their words are identical. Normally, relations prevent us from easily telling when two group words describe the same group element, since applying the relations to rewrite one group word may or may not produce the other word, and it can be arbitrarily hard to determine when this is the case (this is known as the **word problem** for the group). Free groups on at least one generator have infinitely many elements, since we can form group words of arbitrary length by combining any number of \(\groupGenerator{g_{\sym{i}}}\) as we please.
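In a free group the word problem is solvable by **free reduction**: cancel adjacent inverse pairs until none remain, and compare the results. A stack-based sketch (letter encoding illustrative):

```python
# Sketch: free reduction. Repeatedly cancel adjacent letters that are
# inverse to each other; in a free group, two words denote the same
# element iff their reduced forms are identical.

def free_reduce(word):                 # letters are (generator, +-1)
    stack = []
    for gen, e in word:
        if stack and stack[-1] == (gen, -e):
            stack.pop()                # cancel g . g^-1
        else:
            stack.append((gen, e))
    return stack

# a . b . b^-1 . a^-1 reduces to the empty word (the identity):
reduced = free_reduce([('a', 1), ('b', 1), ('b', -1), ('a', -1)])
```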

The **free Abelian** group on \(\sym{n}\) generators is a group with presentation \(\groupPresentation{\groupGenerator{g_1},\groupGenerator{g_2},\groupGenerator{\ellipsis },\groupGenerator{g_{\sym{n}}}}{\groupCommutator{\groupGenerator{g_{\sym{i}}}}{\groupGenerator{g_{\sym{j}}}}}\). It is like the free group, except that the generators all commute, and hence a group element is uniquely determined by the counts of the possible \(\groupGenerator{g_{\sym{i}}}\) in the word (with negative counts indicating inverses of the \(\groupGenerator{g_{\sym{i}}}\)).

## Cayley quivers

We now give a simple way to generate a cardinal quiver from a group, which we'll call a **Cayley quiver**. This construction was originally developed by Arthur Cayley, who showed how to produce the so-called Cayley graph – but we call these Cayley quivers to emphasize that they do in fact have a natural cardinal structure on them.

Let \(\group{G}\) be a group, with generators \(\setSymbol{J} = \list{\groupGenerator{\rform{j_1}},\groupGenerator{\bform{j_2}},\ellipsis }\). Then \(\bindCayleyQuiver{\group{G}}{\rform{\card{c_1}}\bindingRuleSymbol \groupGenerator{\rform{j_1}},\bform{\card{c_2}}\bindingRuleSymbol \groupGenerator{\bform{j_2}},\ellipsis }\) is the **Cayley quiver** on these generators, where cardinal \(\card{c_{\sym{i}}}\) represents generator \(\groupGenerator{j_{\sym{i}}}\). We can also write just \(\bindCayleyQuiver{\group{G}}{\groupGenerator{\rform{j_1}},\groupGenerator{\bform{j_2}},\ellipsis }\) or \(\bindCayleyQuiver{\group{G}}{\setSymbol{J}}\) if we use the same symbols \(\card{j},\groupGenerator{j}\) for the generators and cardinals.

The vertices and edges of the Cayley quiver \(\bindCayleyQuiver{\group{G}}{\sym{J}}\) are defined as follows:

\[ \begin{aligned} \vertexList(\bindCayleyQuiver{\group{G}}{\setSymbol{J}})&\defEqualSymbol \group{G}\\ \edgeList(\bindCayleyQuiver{\group{G}}{\setSymbol{J}})&\defEqualSymbol \setConstructor{\tde{\groupElement{g}}{\groupElement{g}\Gmult \groupGenerator{j}}{\card{j}}}{\elemOf{\groupElement{g}}{\group{G}},\;\elemOf{\groupGenerator{j}}{\setSymbol{J}}}\end{aligned} \]In other words, the vertices of \(\bindCayleyQuiver{\group{G}}{\setSymbol{J}}\) are elements of the group, and the edges are the transitions between elements that occur when right-multiplying by a generator \(\groupGenerator{j}\). Each edge is labeled by the cardinal \(\card{j}\) for that generator.
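For a finite example, this definition can be sketched directly in code for the cyclic group \(\mathbb{Z}_{\sym{n}}\) under addition mod \(\sym{n}\) (names illustrative):

```python
# Sketch: Cayley quiver of Z_n. Vertices are the group elements
# 0..n-1; each generator j with cardinal c contributes an edge
# g --c--> g + j (mod n), i.e. right-multiplication by j.

def cayley_quiver_zn(n, generators):
    vertices = list(range(n))
    edges = [(g, (g + j) % n, c)      # (tail, head, cardinal)
             for g in vertices
             for c, j in generators.items()]
    return vertices, edges

# Z_3 with the single generator 1, labelled by cardinal "r":
verts, edges = cayley_quiver_zn(3, {'r': 1})
```

The resulting quiver is the 3-cycle \(0 \to 1 \to 2 \to 0\) with every edge carrying the cardinal \(\card{r}\), matching the cyclic-quiver example below.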

### Cayley quivers from presentations

A Cayley quiver \(\bindCayleyQuiver{\group{G}}{\sym{J}}\) is defined by a group \(\group{G}\,\)*and* a choice of generators \(\sym{J} = \list{\groupGenerator{\rform{j_1}},\groupGenerator{\bform{j_2}},\ellipsis }\) of \(\group{G}\). But a group *presentation* \(\presentation{Z} = \groupPresentation{\groupGenerator{\rform{j_1}},\groupGenerator{\bform{j_2}},\groupGenerator{\ellipsis }}{\groupElement{\ellipsis }}\) of \(\group{G}\) already specifies this choice of generators. Therefore we can talk about the Cayley quiver associated with such a group presentation \(\presentation{Z}\). We'll use a similar notation, writing just \(\cayleyQuiverSymbol{\presentation{Z}}\), or \(\bindCayleyQuiver{\presentation{Z}}{\groupGenerator{\card{c}_1},\groupGenerator{\card{c}_2},\ellipsis }\) if we wish to explicitly name the cardinals corresponding to the generators of the presentation.

### Example: Cyclic group

Let's explore a very simple example: the cyclic group \(\cyclicGroup{\sym{n}}\) for \(\sym{n} = 3\). We can write the elements of this group in terms of a generator \(\groupGenerator{g}\), and since 3 is prime, we can use any non-identity element to do this:

\[ \cyclicGroup{3} = \list{\groupIdentity{e},\groupGenerator{g},\groupPower{\groupGenerator{g}}{2}} \]We obtain the Cayley quiver \(\bindCayleyQuiver{\cyclicGroup{3}}{\groupGenerator{g}}\), shown below:

This is isomorphic to the 3-cycle quiver \(\subSize{\cycleQuiver }{3}\).

We now construct and name the obvious presentation of the cyclic group \(\cyclicGroup{\sym{n}}\) for arbitrary \(\sym{n}\). The name we'll use will look a little strange at first, but will make more sense from the perspective of [[[Toroidal lattices]]]:

\[ \bindCardSize{\translationPresentation{1}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{n}}}\defEqualSymbol \groupPresentation{\rform{\groupGenerator{r}}}{\groupPower{\rform{\groupElement{r}}}{\sym{n}}} \]We can now state the general fact that Cayley quivers of these presentations are isomorphic to cyclic quivers:

\[ \bindCardSize{\cayleyQuiverSymbol{\translationPresentation{1}}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{n}}}\isomorphicSymbol \bindCardSize{\subSize{\cycleQuiver }{\sym{n}}}{\rform{\card{r}}} \]### Example: integers

Consider \(\group{\mathbb{Z}}\), the group of the integers under addition. Then it's not hard to see that \(\bindCayleyQuiver{\group{\mathbb{Z}}}{\groupGenerator{1}}\), the Cayley quiver using the integer 1 as the generator, is the following infinite quiver:

Let us name the obvious presentation \(\translationPresentation{1}\) of \(\group{\mathbb{Z}}\):

\[ \bindCardSize{\translationPresentation{1}}{\groupGenerator{\rform{r}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}}}{\emptySet } \]The Cayley quiver of this presentation is isomorphic to the line quiver:

\[ \cayleyQuiverSymbol{\translationPresentation{1}}\isomorphicSymbol \subSize{\lineQuiver }{ \infty } \]### Example: integer plane

Consider \(\group{\power{\group{\mathbb{Z}}}{2}}\), the group of pairs of integers under addition. Then \(\bindCayleyQuiver{\group{\power{\group{\mathbb{Z}}}{2}}}{\tuple{1,0},\tuple{0,1}}\) is the square quiver, a fragment of which is shown below:

Let's construct a presentation of \(\groupPower{\group{\mathbb{Z}}}{2}\) and name it \(\translationPresentation{2}\). It’s easy to check that this is a presentation of the group \(\groupPower{\group{\mathbb{Z}}}{2}\) via \(\groupGenerator{\rform{r}}\isomorphicSymbol \tuple{1,0},\groupGenerator{\bform{b}}\isomorphicSymbol \tuple{0,1}\):

\[ \bindCardSize{\translationPresentation{2}}{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\bform{b}}}} \]The Cayley quiver of this presentation is isomorphic to the square quiver:

\[ \cayleyQuiverSymbol{\translationPresentation{2}}\isomorphicSymbol \subSize{\squareQuiver }{ \infty } \]### Example: triangular plane

Staying with \(\group{\power{\group{\mathbb{Z}}}{2}}\), we now examine the Cayley quiver \(\bindCayleyQuiver{\group{\power{\group{\mathbb{Z}}}{2}}}{\tuple{1,-1,0},\tuple{0,1,-1},\tuple{-1,0,1}}\), a fragment of which is shown below:

We name a presentation \(\starTranslationPresentation{3}\) of \(\group{\power{\group{\mathbb{Z}}}{2}}\) below.

\[ \bindCardSize{\starTranslationPresentation{3}}{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\gform{g}}},\groupCommutator{\groupGenerator{\gform{g}}}{\groupGenerator{\bform{b}}},\groupGenerator{\rform{r}}\iGmult \groupGenerator{\gform{g}}\iGmult \groupInverse{\groupGenerator{\bform{b}}}} \]Again, it’s easy to verify that this is a presentation of \(\group{\power{\group{\mathbb{Z}}}{2}}\) via generators \(\groupGenerator{\rform{r}}\isomorphicSymbol \tuple{1,-1,0},\groupGenerator{\gform{g}}\isomorphicSymbol \tuple{0,1,-1},\groupGenerator{\bform{b}}\isomorphicSymbol \tuple{-1,0,1}\), which satisfy the relators, since e.g. \(\tuple{1,-1,0} + \tuple{0,1,-1} = \minus{\tuple{-1,0,1}}\).

The Cayley quiver of this presentation is isomorphic to the triangular quiver:

\[ \cayleyQuiverSymbol{\starTranslationPresentation{3}}\isomorphicSymbol \subSize{\triangularQuiver }{ \infty } \]#### Dependence on generators

Notice the important fact that the two presentations of \(\group{\power{\group{\mathbb{Z}}}{2}}\) given by \(\starTranslationPresentation{3}\) and \(\translationPresentation{2}\) result in non-isomorphic Cayley quivers, being the triangular quiver and the square quiver respectively. This demonstrates that the choice of generators *does* matter.
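We can check this non-isomorphism concretely: the square Cayley quiver is 4-valent while the triangular one is 6-valent, so metric balls around the identity grow at different rates. A minimal Python sketch (the encoding of generators as integer tuples is our own choice, made for illustration):

```python
def ball_sizes(steps, radius):
    """Sizes of metric balls around the identity in a Cayley quiver,
    treating each generator and its inverse as a unit step."""
    moves = list(steps) + [tuple(-c for c in s) for s in steps]
    frontier = {tuple(0 for _ in steps[0])}
    seen = set(frontier)
    sizes = [len(seen)]
    for _ in range(radius):
        frontier = {tuple(a + b for a, b in zip(v, m))
                    for v in frontier for m in moves} - seen
        seen |= frontier
        sizes.append(len(seen))
    return sizes

square = ball_sizes([(1, 0), (0, 1)], 2)                   # generators r, b
tri = ball_sizes([(1, -1, 0), (0, 1, -1), (-1, 0, 1)], 2)  # generators r, g, b
print(square)  # [1, 5, 13]
print(tri)     # [1, 7, 19]
```

The triangular generators are realized here as 3-tuples living in the plane \(x + y + z = 0\), exactly as in the presentation above; the differing ball sizes confirm the two quivers cannot be isomorphic.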

### Example: free group

Consider \(\freeGroup{\sym{J}}\), the free group on two generators \(\sym{J} = \list{\groupGenerator{\rform{j_1}},\groupGenerator{\bform{j_2}}}\). The (infinite) Cayley quiver \(\bindCayleyQuiver{\freeGroup{\sym{J}}}{\sym{J}}\) is:

Generalizing to \(\sym{J}\) of arbitrary size, we have that:

\[ \bindCayleyQuiver{\freeGroup{\sym{J}}}{\sym{J}}\isomorphicSymbol \subSize{\treeQuiver{\setCardinality{\sym{J}}}}{ \infty } \]### Example: Sym(3)

Consider the symmetric group \(\graph{G} = \symmetricGroup{3}\). It can be generated by the transpositions \(\sym{J} = \list{\transposition{1}{2},\transposition{2}{3}}\). The Cayley quiver \(\bindCayleyQuiver{\graph{G}}{\sym{J}}\) is:

Each vertex is labeled with the cycle form of the corresponding permutation in \(\symmetricGroup{3}\).
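This small Cayley quiver is easy to build by brute force. A Python sketch, representing permutations in one-line notation (an encoding chosen here for illustration):

```python
from itertools import permutations

def cayley_edges(n, gens):
    """Directed edges of the Cayley quiver of Sym(n): one edge
    g -> g*j per group element g and generator j."""
    def rmul(g, j):  # right multiplication in one-line notation
        return tuple(g[j[i]] for i in range(n))
    return [(g, rmul(g, j)) for g in permutations(range(n)) for j in gens]

# generators (1 2) and (2 3) as one-line permutations
edges = cayley_edges(3, [(1, 0, 2), (0, 2, 1)])
print(len(edges))                          # 12 directed edges
print(len({frozenset(e) for e in edges}))  # 6 undirected edges: a hexagon
```

Since both generators are involutions, each undirected edge is traversed by a pair of directed edges, and the six vertices of degree 2 form a single hexagonal cycle.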

## Choice of generators

As we saw above with the two presentations of \(\group{\power{\group{\mathbb{Z}}}{2}}\) given by \(\starTranslationPresentation{3}\) and \(\translationPresentation{2}\), different choices of generators for the same group can produce non-isomorphic Cayley quivers. As a further example, let us revisit the presentation of the integer plane:

\[ \bindCardSize{\translationPresentation{2}}{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\bform{b}}}} \]We can obtain a different presentation by introducing an additional generator \(\groupGenerator{\gform{g}}\) with associated relator \(\groupGenerator{\gform{g}} = \groupGenerator{\rform{r}}\iGmult \groupGenerator{\bform{b}}\). In \(\group{\power{\group{\mathbb{Z}}}{2}}\), \(\groupGenerator{\gform{g}}\) is realized as \(\groupGenerator{\gform{g}}\isomorphicSymbol \tuple{1,1}\). The new presentation is:

\[ \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\bform{b}}},\groupGenerator{\rform{r}}\iGmult \groupGenerator{\bform{b}}\iGmult \groupInverse{\groupGenerator{\gform{g}}}} \]This presents the same group \(\group{\power{\group{\mathbb{Z}}}{2}}\) because we can write any word in this new presentation in terms of the original generators \(\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}\) by replacing any \(\groupGenerator{\gform{g}}\)’s with \(\groupGenerator{\rform{r}}\iGmult \groupGenerator{\bform{b}}\). So this is a *redundant* presentation.

Its associated Cayley quiver looks like this:

Another example is provided by choosing different generators of the symmetric group \(\symmetricGroup{3}\). Instead of \(\sym{J} = \list{\rform{\transposition{1}{2}},\bform{\transposition{2}{3}}}\), we shall choose \(\sym{J} = \list{\rform{\transposition{1}{2}},\bform{\permutationCycle{1\permutationCycleSymbol 2\permutationCycleSymbol 3\permutationCycleSymbol 1}}}\). This yields the following Cayley quiver \(\bindCayleyQuiver{\graph{G}}{\sym{J}}\):

This *triangular prism* is clearly distinct from the hexagon we obtained from the original choice of generators. This fact can be deduced directly from the generators themselves: a cardinal corresponding to a transposition (a 2-cycle) must have order 2, and hence will produce orbits of length 2 in the Cayley quiver. Likewise for cardinals corresponding to longer cycles.
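The order argument can be checked mechanically. A small Python sketch (one-line notation for permutations is our own encoding choice):

```python
def perm_order(p):
    """Order of a permutation given in one-line notation:
    the smallest n with p^n equal to the identity."""
    ident = tuple(range(len(p)))
    q, n = p, 1
    while q != ident:
        q = tuple(q[p[i]] for i in range(len(p)))  # compose q with p
        n += 1
    return n

print(perm_order((1, 0, 2)))  # transposition (1 2): order 2
print(perm_order((1, 2, 0)))  # 3-cycle (1 2 3): order 3
```

The order-3 generator is what produces the two triangular faces of the prism, while the order-2 transposition produces its square faces.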

## Transitive quivers as Cayley quivers

The above examples hopefully give the impression that the quivers we introduced in [[[Transitive quivers]]] arise naturally as the Cayley quivers of simple groups. Notice that we cannot do this for the finite quivers in each family (e.g. \(\subSize{\lineQuiver }{\sym{n}}\) for finite \(\sym{n}\)), since these are not actually transitive. Only for the infinite quivers such as \(\subSize{\lineQuiver }{ \infty }\) do we have Cayley quivers – the exception being \(\subSize{\cycleQuiver }{\sym{n}}\), which *is* transitive for any \(\sym{n}\).

We can list these relationships in the form of a table, showing the quiver and corresponding group and group presentation that generates it as a Cayley quiver. We attach names to some of these presentations, as they will be generally useful:

quiver | group | group presentation |
---|---|---|
\(\bindCards{\subSize{\lineQuiver }{ \infty }}{\rform{\card{r}}}\) | \(\group{\mathbb{Z}}\) | \(\bindCardSize{\translationPresentation{1}}{\groupGenerator{\rform{r}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}}}{\emptySet }\) |
\(\bindCards{\subSize{\cycleQuiver }{\sym{n}}}{\rform{\card{r}}}\) | \(\cyclicGroup{\sym{n}}\) | \(\bindCardSize{\translationPresentation{1}}{\groupGenerator{\rform{r}}\compactBindingRuleSymbol \modulo{\sym{n}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}}}{\groupPower{\groupGenerator{\rform{r}}}{\sym{n}}}\) |
\(\bindCards{\subSize{\squareQuiver }{ \infty }}{\rform{\card{r}},\bform{\card{b}}}\) | \(\groupPower{\group{\mathbb{Z}}}{2}\) | \(\bindCardSize{\translationPresentation{2}}{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\bform{b}}}}\) |
\(\bindCards{\subSize{\triangularQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\) | \(\groupPower{\group{\mathbb{Z}}}{2}\) | \(\bindCardSize{\starTranslationPresentation{3}}{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\gform{g}}},\groupCommutator{\groupGenerator{\gform{g}}}{\groupGenerator{\bform{b}}},\groupGenerator{\rform{r}}\iGmult \groupGenerator{\gform{g}}\iGmult \groupInverse{\groupGenerator{\bform{b}}}}\) |
\(\bindCards{\subSize{\cubicQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\) | \(\groupPower{\group{\mathbb{Z}}}{3}\) | \(\bindCardSize{\translationPresentation{3}}{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\gform{g}}},\groupCommutator{\groupGenerator{\gform{g}}}{\groupGenerator{\bform{b}}}}\) |
\(\subSize{\gridQuiver{\sym{k}}}{ \infty }\) | \(\groupPower{\group{\mathbb{Z}}}{\sym{k}}\) | \(\bindCardSize{\translationPresentation{\sym{k}}}{\card{\card{c}_1},\ellipsis ,\card{\card{c}_{\sym{k}}}}\defEqualSymbol \groupPresentation{\groupGenerator{\card{c}_{\sym{i}}}}{\groupCommutator{\groupElement{\card{c}_{\sym{i}}}}{\groupElement{\card{c}_{\sym{j}}}}}\) |
\(\subSize{\treeQuiver{\sym{k}}}{ \infty }\) | \(\freeGroup{\sym{C}}\) | \(\groupPresentation{\groupGenerator{\card{c}_{\sym{i}}}}{\emptySet }\) |
\(\bouquetQuiver{\sym{k}}\) | \(\list{\groupElement{e}}\) | \(\groupPresentation{\groupGenerator{\card{c}_{\sym{i}}}}{\groupElement{\card{c}_{\sym{i}}}}\) |

Some notes: in the final three rows, we write \(\card{c}_{\sym{i}}\) to denote the sequence \(\card{c}_1,\ellipsis ,\card{c}_{\sym{k}}\), etc. Also, the presentation \(\groupPresentation{\groupGenerator{\card{c}_{\sym{i}}}}{\groupElement{\card{c}_{\sym{i}}}}\) is "degenerate": the \(\card{c}_{\sym{i}}\) here are actually all the identity element \(\groupElement{e}\) of the trivial group \(\list{\groupElement{e}}\).

# Action groupoids

## Introduction

In this section, we'll further explore the connection between [[[Transitive quivers]]] and groups that we started in [[[Cayley quivers]]]. We'll see that a transitive quiver that is the Cayley quiver of a group has an important property: its path groupoid is a so-called **action groupoid** of that group corresponding to the **regular action**. We can then consider actions beyond the regular action, yielding the notion of an **action quiver**.

There are two important consequences of the ideas in this section: firstly, we obtain a dictionary to translate between the language of quiver geometry and group theory when dealing with Cayley quivers, and secondly, we can see in what exact sense quiver geometry can take us *beyond* group theory, and the spaces that group theory can describe.

## Group actions

A **right group action** \(\action{A}\) is the *binding* of a group \(\group{G}\) to an object \(\sym{X}\) on which it acts, expressed as a two-argument map \(\functionSignature{\function{\action{A}}}{\tuple{\sym{X},\group{G}}}{\sym{X}}\) that encodes how elements of \(\group{G}\) act on elements of \(\sym{X}\). By fixing the second argument of the map \(\action{A}\), we obtain a family of maps \(\setConstructor{\functionSignature{\function{\action{A}_{\groupElement{g}}}}{\sym{X}}{\sym{X}}}{\elemOf{\groupElement{g}}{\group{G}}}\), called **Cayley functions**. It is these maps that *encode* the behavior of the group \(\group{G}\), with function composition playing the role of group multiplication.

An action must have the property that we can compose two elements of the group and then act with the result, and this is the same as acting with the individual elements in sequence:

\[ \function{\action{A}_{\groupElement{g}\iGmult \groupElement{h}}}(\sym{x}) = \function{\action{A}_{\groupElement{h}}}(\function{\action{A}_{\groupElement{g}}}(\sym{x})) \]Note that we will work only with right actions, as defined above, as these will look more natural for our applications, although left actions are more common in other literature. The condition for a left action is similar but the functions \(\action{A}_{\groupElement{h}}\) and \(\action{A}_{\groupElement{g}}\) in the above condition are applied in the opposite order.

A final and obvious requirement is that the identity element of the group should leave \(\sym{X}\) unchanged:

\[ \function{\action{A}_{\groupElement{e}}}(\sym{x}) = \sym{x} \]## Self actions

Perhaps the most natural group action of \(\group{G}\) is given by the **self action**, in which we allow a group to act *on itself*, so that the \(\group{G}\)-set is \(\group{G}\) itself: \(\sym{X} = \group{G}\). The action is defined by \(\mto{\tuple{\groupElement{x},\groupElement{g}}}{\groupElement{x}\Gmult \groupElement{g}}\). The representative of a particular group element \(\groupElement{g}\) is the function that right-multiplies by \(\groupElement{g}\):

This action is **faithful**, meaning that it fully captures the behavior of the original group. We can state this formally in terms of the curried form of \(\function{A}\), which is then an **injective group homomorphism** \(\functionSignature{\function{\action{A}}}{\group{G}}{\symmetricGroup{\group{G}}}\). Cayley's famous theorem states that \(\group{G}\) is **isomorphic** to its image under \(\function{A}\): we have "embedded" \(\group{G}\) into the group of permutations of its elements. Any action that is equivalent to the self-action (under relabelling of elements of the \(\group{G}\)-set) is known as a **regular** action.
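Both the right-action law and faithfulness of the self action are easy to verify exhaustively for a small group. A Python sketch for \(\symmetricGroup{3}\), again using one-line notation as an illustrative encoding:

```python
from itertools import permutations

# Self action of Sym(3): each g is represented by "right-multiply by g"
elems = list(permutations(range(3)))

def rmul(x, g):
    """Right action of g on x (composition in one-line notation)."""
    return tuple(x[g[i]] for i in range(3))

def mult(g, h):
    """Group multiplication g * h, matching the action convention."""
    return tuple(g[h[i]] for i in range(3))

# Right-action law: acting by g*h equals acting by g, then by h
assert all(rmul(x, mult(g, h)) == rmul(rmul(x, g), h)
           for x in elems for g in elems for h in elems)

# Faithfulness: distinct elements induce distinct permutations of the G-set
images = {g: tuple(rmul(x, g) for x in elems) for g in elems}
assert len(set(images.values())) == len(elems)
print("self action of Sym(3) is faithful")
```

The dictionary `images` is exactly the curried form of the action: it sends each group element to the permutation of the \(\group{G}\)-set it induces.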

For brevity we'll sometimes write the self action of \(\group{G}\) as \(\selfAction{\group{G}}\).

## Action groupoids

With these terms defined, we can now define the **action groupoid** of a particular group action \(\action{A}\). We'll write this groupoid as \(\actionGroupoid{\action{A}}\). The elements of \(\actionGroupoid{\action{A}}\) are given below:

In other words, the elements of \(\actionGroupoid{\action{A}}\) are "acts": an element \(\elemOf{\sym{x}}{\sym{X}}\), a choice of \(\elemOf{\groupElement{g}}{\group{G}}\), and its image under \(\action{A}_{\groupElement{g}}\).

We can see \(\actionGroupoid{\action{A}}\) as encoding a family of relations \(\actionGroupoid{\action{A}}_{\groupElement{g}}\) indexed by \(\group{G}\):

\[ \actionGroupoid{\action{A}}_{\groupElement{g}}\defEqualSymbol \setConstructor{\tuple{\sym{x},\sym{y}}}{\sym{y} = \function{\action{A}_{\groupElement{g}}}(\sym{x}),\;\elemOf{\sym{x},\sym{y}}{\sym{X}}} \]To use a database analogy, we have “grouped by” the 2nd element of each 3-tuple to obtain this indexed family, and this retains the same information:

\[ \actionGroupoid{\action{A}}\isomorphicSymbol \paren{\mto{\groupElement{g}}{\actionGroupoid{\action{A}}_{\groupElement{g}}}} \]These relations \(\actionGroupoid{\action{A}}_{\groupElement{g}}\) are literally just the set-theoretic relations for the functions \(\action{A}_{\groupElement{g}}\), so we have done nothing very sophisticated here.

Groupoid multiplication of elements of \(\actionGroupoid{\action{A}}\) is defined as:

\[ \tuple{\sym{x},\groupElement{g},\sym{y}}\gmult \tuple{\sym{y},\groupElement{h},\sym{z}}\defEqualSymbol \tuple{\sym{x},\groupElement{g}\iGmult \groupElement{h},\sym{z}} \]The multiplication of \(\tuple{\sym{x},\groupElement{g},\sym{y}}\) and \(\tuple{\primed{\sym{y}},\groupElement{h},\sym{z}}\) is **not** defined when the first "act" ends at a different element of \(\sym{X}\) from where the second "act" begins (\(\sym{y} \neq \primed{\sym{y}}\)):
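This partial multiplication is simple to sketch in code. Here acts are plain 3-tuples and the group is \(\group{\mathbb{Z}}\) acting on itself by translation (an illustrative choice, not drawn from the text):

```python
def gmult(act1, act2, group_mult):
    """Partial multiplication in an action groupoid: compose two 'acts'
    (x, g, y) and (y, h, z), defined only when they are compatible."""
    x, g, y = act1
    y2, h, z = act2
    if y != y2:
        return None  # undefined: first act ends where second does not begin
    return (x, group_mult(g, h), z)

add = lambda a, b: a + b  # group multiplication in (Z, +)
print(gmult((0, 3, 3), (3, 2, 5), add))  # (0, 5, 5)
print(gmult((0, 3, 3), (4, 2, 6), add))  # None: incompatible acts
```

Returning `None` stands in for "undefined"; in the groupoid itself the product simply does not exist.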

## Action quiver

### Examples: Sym(3)

Consider the symmetric group \(\symmetricGroup{3}\) acting on the list \(\list{\rform{\filledSquareToken },\gform{\filledSquareToken },\bform{\filledSquareToken }}\). Let's visualize the action quiver, choosing the transpositions \(\transposition{1}{2}\) and \(\transposition{2}{3}\) as generators of the group, and hence as our cardinals:

This is isomorphic to the Cayley quiver of \(\symmetricGroup{3}\).

But we can also act on a list that contains two indistinguishable elements, for example, \(\list{\rform{\filledSquareToken },\rform{\filledSquareToken },\bform{\filledSquareToken }}\):

Effectively we have *glued* together the vertices of the Cayley quiver in which the green and red elements occur in the same pattern of positions. This is no longer a Cayley quiver for *any* group. This can easily be deduced because it is not transitive: the end vertices have degree 1, and the central vertex degree 2.
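We can compute this glued quiver directly by exploring the orbit of the repeated-element list. A Python sketch (self-loops, where a transposition fixes the state, are dropped, matching the picture described above):

```python
def act(state, j):
    """Right action of a permutation j (one-line notation) on a tuple."""
    return tuple(state[j[i]] for i in range(len(state)))

gens = [(1, 0, 2), (0, 2, 1)]  # transpositions (1 2) and (2 3)
start = ('r', 'r', 'b')        # two indistinguishable red elements

# Depth-first exploration of the action quiver, dropping self-loops
states, frontier, edges = {start}, [start], set()
while frontier:
    s = frontier.pop()
    for j in gens:
        t = act(s, j)
        if t != s:
            edges.add(frozenset((s, t)))
            if t not in states:
                states.add(t)
                frontier.append(t)

degree = {s: sum(s in e for e in edges) for s in states}
print(sorted(degree.values()))  # [1, 1, 2] -- not vertex-transitive
```

Only three states survive the gluing, and the unequal degrees confirm the quiver is not transitive, hence not a Cayley quiver.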

### Examples: Sym(4)

Let us repeat this idea for \(\symmetricGroup{4}\), first acting on the list \(\list{\rform{\filledSquareToken },\gform{\filledSquareToken },\bform{\filledSquareToken },\waform{\filledSquareToken }}\) via generators \(\transposition{1}{2},\transposition{2}{3},\transposition{3}{4}\):

You might notice that we have obtained the edge graph of the truncated octahedron:

This kind of object is known as a **permutohedron**.
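The combinatorics of the permutohedron are easy to confirm: a quick Python count (one-line notation for the generating transpositions, as before) recovers the vertex and edge counts of the truncated octahedron:

```python
from itertools import permutations

gens = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]  # (1 2), (2 3), (3 4)
verts = list(permutations(range(4)))
edges = {frozenset((v, tuple(v[j[i]] for i in range(4))))
         for v in verts for j in gens}
print(len(verts), len(edges))  # 24 vertices, 36 edges
```

Every vertex meets exactly three edges (one per generator), giving \(24 \times 3 / 2 = 36\) undirected edges, matching the truncated octahedron.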

We will now allow \(\symmetricGroup{4}\) to act on a list with only three distinct elements \(\list{\rform{\filledSquareToken },\rform{\filledSquareToken },\gform{\filledSquareToken },\bform{\filledSquareToken }}\):

Again we obtain a quiver that is *not* a Cayley quiver.

Lastly, we can act on a list with only two distinct elements \(\list{\rform{\filledSquareToken },\rform{\filledSquareToken },\bform{\filledSquareToken },\bform{\filledSquareToken }}\):

### Action quiver

In all of these examples, we have constructed particular *actions* of the symmetric group \(\symmetricGroup{\sym{n}}\) with the choice of generators given by the transpositions \(\transposition{1}{2},\transposition{2}{3},\ellipsis ,\transposition{\paren{\sym{n} - 1}}{\sym{n}}\). We now formalize this construction.

Recall from [[[Cayley quivers]]] that the Cayley quiver \(\bindCayleyQuiver{\group{G}}{\sym{J}}\) for group \(\group{G}\) and set of generators \(\sym{J} = \list{\groupGenerator{j}_1,\groupGenerator{j}_2,\ellipsis }\) is defined as follows:

\[ \begin{aligned} \vertexList(\bindCayleyQuiver{\group{G}}{\sym{J}})&\defEqualSymbol \group{G}\\ \edgeList(\bindCayleyQuiver{\group{G}}{\sym{J}})&\defEqualSymbol \setConstructor{\tde{\groupElement{g}}{\groupElement{g}\Gmult \groupGenerator{j}}{\card{j}}}{\elemOf{\groupElement{g}}{\group{G}},\;\elemOf{\groupGenerator{j}}{\sym{J}}}\end{aligned} \]We now extend this idea to an arbitrary action \(\action{A}\) of the group \(\group{G}\) on a \(\group{G}\)-set \(\sym{X}\), again defined in terms of a set of generators \(\sym{J} = \list{\groupGenerator{j}_1,\groupGenerator{j}_2,\ellipsis }\). Associated to each \(\elemOf{\groupGenerator{j}}{\sym{J}}\) is a **Cayley function** \(\functionSignature{\function{\action{A}_{\groupGenerator{j}}}}{\sym{X}}{\sym{X}}\). We define the **action quiver** \(\bindActionQuiver{\action{A}}{\sym{J}}\) as follows:

In other words, the edges of \(\bindActionQuiver{\action{A}}{\sym{J}}\) depict "primitive acts" of \(\group{G}\) on the elements of \(\sym{X}\): those acts associated with generators \(\elemOf{\groupGenerator{j}}{\sym{J}}\).

The Cayley quiver \(\bindCayleyQuiver{\group{G}}{\sym{J}}\) is obviously an example of an action quiver \(\bindActionQuiver{\action{A}}{\sym{J}}\), specifically in which the action is the self-action \(\selfAction{\group{G}}\) of \(\group{G}\), which is now revealed to be the motivation for the notation \(\bindCayleyQuiver{\group{G}}{\sym{J}}\) in the first place!
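A generic action quiver is a one-liner once the action is given as a function. A Python sketch (the function name `action_quiver` and the encoding of states and generators are ours):

```python
def action_quiver(states, gens, act):
    """Edges of the action quiver: a j-labelled edge x -> act(x, j)
    for every state x and generator j."""
    return [(x, act(x, j), j) for x in states for j in gens]

# The self action of the cyclic group Z_6 with generator 1: the action
# quiver is the 6-cycle quiver, i.e. the Cayley quiver of Z_6.
edges = action_quiver(range(6), [1], lambda x, j: (x + j) % 6)
print(edges)  # [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 5, 1), (5, 0, 1)]
```

Passing the self action recovers the Cayley quiver, exactly as described above; passing any other action gives the more general action quiver.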

## Path vs. action groupoid

You might notice that the groupoid multiplication in the action groupoid \(\actionGroupoid{\action{A}}\) of a group action \(\action{A}\) is very similar to the group multiplication in the path groupoid \(\pathGroupoid{\quiver{Q}}\) of a quiver \(\quiver{Q}\) that we saw in [[[Path groupoids]]].

For action groupoids, we can only compose two acts when the first "ends" at a state where the second "begins":

\[ \tuple{\sym{x},\groupElement{g},\sym{y}}\gmult \tuple{\sym{y},\groupElement{h},\sym{z}} = \tuple{\sym{x},\groupElement{g}\iGmult \groupElement{h},\sym{z}} \]For path groupoids, we can only compose two paths when the first ends at a vertex where the second begins:

\[ \pathCompose{\paren{\pathWord{\vert{x}}{\wordSymbol{V}}{\vert{y}}}}{\paren{\pathWord{\vert{y}}{\wordSymbol{W}}{\vert{z}}}} = \paren{\pathWord{\vert{x}}{\concat{\wordSymbol{V} \wordSymbol{W}}}{\vert{z}}} \]Indeed, it is a simple but useful exercise to check that this similarity is in fact a *groupoid isomorphism* in the case where the quiver is an action quiver:

In words, **acts** *are* **paths**, with all acts being formed from **primitive acts**, which are **edges** (or their inverses).

## Summary

We can now state the relationship between group actions and action quivers in the following "commutative diagram" (whose formal status will have to wait until we introduce category theory at a later stage):

In words: when a quiver is an action quiver of a group \(\group{G}\), its path groupoid coincides with the action groupoid of \(\group{G}\), summarized in the motto "acts are paths".

The relationship can be specialized for the case of a Cayley quiver:

The benefit of keeping this diagram in mind is that it will allow us to interpret various constructions on quivers in terms of groups. When the quivers are transitive quivers, we will end up describing familiar mathematical constructions on groups. But crucially, for non-transitive quivers, we will find ourselves outside the domain of group theory, but with intuitions to guide us from the transitive case.

# Path quivers

## Recap

To recap what we accomplished in the previous section: we defined the **path groupoid** \(\pathGroupoid{\quiver{Q}}\) of a (cardinal) quiver \(\quiver{Q}\), where a **path** \(\elemOf{\path{P}}{\pathGroupoid{\quiver{Q}}}\) is uniquely determined by its **path word** \(\paren{\pathWord{\vert{t}}{\wordSymbol{W}}{\vert{h}}}\).

## Path quivers

We’ll now give a way to attach a **cardinal structure** to the path groupoid \(\pathGroupoid{\quiver{Q}}\), turning it from a groupoid into a quiver, called the **path quiver**, written \(\pathQuiver{\quiver{Q}}\). We’ll refer to \(\quiver{Q}\) itself as the **base quiver**.

There will in fact be three kinds of path quiver associated to a base quiver \(\quiver{Q}\):

quiver | description |
---|---|
\(\pathQuiver{\quiver{Q}}\) | affine path quiver |
\(\forwardPathQuiver{\quiver{Q}}{\vert{v}}\) | forward path quiver based at vertex \(\vert{v}\) |
\(\backwardPathQuiver{\quiver{Q}}{\vert{v}}\) | backward path quiver based at vertex \(\vert{v}\) |

For the forward and backward path quivers, \(\vert{v}\) is an arbitrary vertex in \(\quiver{Q}\) which we will call the **origin** or **base point**. Therefore there is a family of quivers \(\forwardPathQuiver{\quiver{Q}}{\vert{v}}\), \(\backwardPathQuiver{\quiver{Q}}{\vert{v}}\) given by choices of \(\elemOf{\vert{v}}{\vertexList(\quiver{Q})}\). In contrast, there is only one affine path quiver \(\pathQuiver{\quiver{Q}}\).

## Forward path quiver

We'll start with the forward path quiver. It is defined as follows:

\[ \begin{aligned} \vertexList(\forwardPathQuiver{\quiver{Q}}{\vert{x}})&\defEqualSymbol \pathList(\quiver{Q},\vert{x},\blank)\\ \\ \edgeList(\forwardPathQuiver{\quiver{Q}}{\vert{x}})&\defEqualSymbol \setConstructor{\tde{\path{P}}{\path{R}}{\card{c}}}{\pathCompose{\path{P}}{\card{c}} = \path{R}}\end{aligned} \]Expressing this in words, we say that a vertex of \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\) is a path \(\path{P}\) in \(\quiver{Q}\) starting at the base vertex \(\vert{x}\), and an edge of \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\) is \(\tde{\path{P}}{\path{R}}{\card{c}}\), where \(\path{P}\) is a path that can be **extended** by the cardinal \(\elemOf{\card{c}}{\cardinalList(\quiver{Q})}\) to give the path \(\path{R}\).

Essentially, the forward path quiver \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\) "trees out" all paths in \(\quiver{Q}\) that can be constructed starting at vertex \(\vert{x}\).
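This "treeing out" can be sketched as a breadth-first search over reduced path words. In this Python sketch, a quiver is encoded as a list of (tail, head, cardinal) triples and a path word as a tuple of (cardinal, ±1) pairs, with −1 marking an inverted traversal; both encodings are our own:

```python
def forward_path_quiver(edges, base, depth):
    """Fragment of the forward path quiver: BFS over reduced path words,
    mapping each path word to the head vertex of the path it determines."""
    step = {}
    for t, h, c in edges:
        step.setdefault(t, {})[(c, +1)] = h
        step.setdefault(h, {})[(c, -1)] = t  # inverted traversal
    paths = {(): base}
    frontier = dict(paths)
    for _ in range(depth):
        nxt = {}
        for word, head in frontier.items():
            for card, target in step.get(head, {}).items():
                # extending by the inverse of the last cardinal is a
                # retraction: it yields a (shorter) path we already have
                if word and word[-1] == (card[0], -card[1]):
                    continue
                nxt[word + (card,)] = target
        paths.update(nxt)
        frontier = nxt
    return paths

# 2-bouquet quiver: one vertex 'x' carrying loops 'r' and 'b'
paths = forward_path_quiver([('x', 'x', 'r'), ('x', 'x', 'b')], 'x', 2)
print(len(paths))  # 17 = 2 * 3^2 - 1: radius-2 ball of a 4-valent tree
```

The ball sizes 1, 5, 17, … are those of a 4-valent infinite tree, anticipating the tree structure discussed below.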

## Example: 2-bouquet quiver

The idea is most intuitive when presented visually. We’ll show the path quiver for a base quiver \(\quiver{Q} = \bindCards{\bouquetQuiver{2}}{\rform{\card{r}},\bform{\card{b}}}\), namely a **2-bouquet quiver** on cardinals \(\rform{\card{r}}\) and \(\bform{\card{b}}\). We'll refer to the single vertex of \(\quiver{Q}\) as \(\vert{x}\).

Here we show a fragment of the infinite forward path quiver \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\). Each vertex will be illustrated with the path that it represents.

This is already a complex example, so studying this visualization carefully is instructive.

Notice that the "origin vertex" in \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\) is the empty path, shown in the center as a copy of \(\quiver{Q}\) with a teal dot, having path word \(\paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\). This empty path is the unit \(\identityElement{x}\) in the path groupoid.

To understand the nature of the path quiver, we’ll focus on the origin, representing the empty path \(\paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\):

We can extend the empty path \(\paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\) in four ways: taking \(\rform{\card{r}}\) gives us the "east" vertex \(\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}}{\vert{x}}}\), taking \(\bform{\card{b}}\) gives us the "north" vertex \(\paren{\pathWord{\vert{x}}{\word{\bform{\card{b}}}}{\vert{x}}}\). If we instead take the inverted cardinals \(\rform{\inverted{\card{r}}}\) or \(\bform{\inverted{\card{b}}}\) we obtain paths that traverse the underlying edges in the *anticlockwise direction:* taking \(\rform{\inverted{\card{r}}}\) gives us the "west" vertex \(\paren{\pathWord{\vert{x}}{\word{\rform{\ncard{r}}}}{\vert{x}}}\), taking \(\bform{\inverted{\card{b}}}\) gives us the "south vertex" \(\paren{\pathWord{\vert{x}}{\word{\bform{\ncard{b}}}}{\vert{x}}}\).

Let's now examine another section of \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\) centered on the east vertex:

Taking \(\rform{\card{r}}\) again from \(\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}}{\vert{x}}}\) gives the "east east" vertex \(\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}{\rform{\card{r}}}}{\vert{x}}}\), illustrated as a loop with a double arrowhead. Taking \(\bform{\card{b}}\) gives the "north east" vertex \(\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}{\bform{\card{b}}}}{\vert{x}}}\), in which we first loop clockwise around the \(\rform{\card{r}}\) cycle and then clockwise around the \(\bform{\card{b}}\) cycle. Taking \(\bform{\inverted{\card{b}}}\) gives the "south east" vertex \(\paren{\pathWord{\vert{x}}{\word{\rform{\card{r}}}{\bform{\ncard{b}}}}{\vert{x}}}\), which loops anticlockwise around the \(\bform{\card{b}}\) cycle instead. Taking \(\rform{\inverted{\card{r}}}\,\)**backtracks** to the empty path we've already seen.

### Extension and retraction

Notice that we *can* and *should* interpret "extend path by \(\rform{\card{r}}\)" as the inverse operation to "extend path by \(\rform{\inverted{\card{r}}}\)", since the sub-word \(\word{\rform{\card{r}}}{\rform{\ncard{r}}}\) is formally equal to the empty sub-word, and can be removed from any word. Hence when a path word ends in an \(\rform{\card{r}}\), "extend path by \(\rform{\inverted{\card{r}}}\)" is equivalent to "retract path by \(\rform{\card{r}}\)".

### Tree structure

You may have noticed that \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\) is in fact a *tree,* rooted at the empty path. This is because a vertex in \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\), which is a path in \(\quiver{Q}\), is uniquely determined by its path word, and there is only one way of extending the empty path word \(\paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\) by individual cardinals to obtain this path word. Seen as a graph, this quiver is sometimes known as a **Bethe lattice** or **regular tree**.

We can generalize this observation with the following theorem:

\[ \forwardPathQuiver{\bouquetQuiver{\sym{k}}}{\vert{}}\isomorphicSymbol \subSize{\treeQuiver{\sym{k}}}{ \infty } \]In English, this is the statement that the forward path quiver of a \(\sym{k}\)-bouquet quiver is an infinite \(\sym{k}\)-tree quiver on the same cardinals.

## Example: 2-line quiver

The path quiver need not be infinite. Let's consider the quiver \(\quiver{Q} = \bindCards{\subSize{\lineQuiver }{2}}{\rform{\card{r}}\serialCardSymbol \bform{\card{b}}}\). We'll label the vertices \(\vert{x}\), \(\vert{y}\), \(\vert{z}\).

Here is the path quiver \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\):

This path quiver is not only finite, it is isomorphic to the original quiver! In general, this will occur whenever the quiver is a tree, since there is then only one path from the origin (wherever we choose it) to every other vertex.

We’ve seen \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\), but let’s examine how it compares to \(\forwardPathQuiver{\quiver{Q}}{\vert{y}}\) and \(\forwardPathQuiver{\quiver{Q}}{\vert{z}}\):

As you can see, these are all isomorphic, and the choice of origin is in some sense irrelevant. We can summarize this observation with the following theorem (the base vertex is not shown here, as it is irrelevant):

\[ \forwardPathQuiver{\subSize{\lineQuiver }{\sym{n}}}{\vert{}}\isomorphicSymbol \subSize{\lineQuiver }{\sym{n}} \]## Example: finite tree

The fact that a tree quiver is isomorphic to its forward path quivers can be surprising at first. Let's illustrate this isomorphism for a more elaborate tree, with root vertex \(\vert{0}\):

Again, notice that the cardinal structure of the base quiver \(\quiver{Q}\) is mirrored exactly by the structure of the path quiver \(\forwardPathQuiver{\quiver{Q}}{\vert{0}}\).

This fact is also true of the transitive trees we defined in [[[Transitive quivers]]]:

\[ \forwardPathQuiver{\subSize{\treeQuiver{\sym{k}}}{\sym{n}}}{\vert{}}\isomorphicSymbol \subSize{\treeQuiver{\sym{k}}}{\sym{n}} \]## Example: 2-cycle quiver

For now, we’ll conclude this section by looking at the quiver \(\quiver{Q} = \bindCards{\subSize{\cycleQuiver }{2}}{\rform{\card{r}}\serialCardSymbol \bform{\card{b}}}\) with vertices \(\vert{x}\), \(\vert{y}\):

The path quiver structure is much simpler than for the 2-bouquet quiver, because once a path is non-empty, it can only be extended in one possible way.

This path quiver is a *bi-infinite linear chain*, starting at the empty path in the middle. Here, a path must take the cardinals \(\rform{\card{r}}\), \(\bform{\card{b}}\) in alternating fashion, but it can *wind clockwise*, giving the (infinite) left side of the chain, or anticlockwise, giving the (infinite) right side.
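We can confirm the chain structure by counting non-backtracking extensions level by level. A Python sketch, hard-coding the 2-cycle quiver with edges \(\vert{x} \to \vert{y}\) (cardinal \(\rform{\card{r}}\)) and \(\vert{y} \to \vert{x}\) (cardinal \(\bform{\card{b}}\)), using an ad-hoc (cardinal, ±1) encoding for traversals:

```python
# Movements available at each vertex of the 2-cycle quiver:
# +1 traverses a cardinal forwards, -1 traverses it inverted.
step = {'x': {('r', +1): 'y', ('b', -1): 'y'},
        'y': {('b', +1): 'x', ('r', -1): 'x'}}

frontier = [((), 'x')]       # (path word, head vertex) pairs
counts = [len(frontier)]
for _ in range(5):
    nxt = []
    for word, v in frontier:
        for card, t in step[v].items():
            if word and word[-1] == (card[0], -card[1]):
                continue     # a retraction, not a new path
            nxt.append((word + (card,), t))
    frontier = nxt
    counts.append(len(frontier))
print(counts)  # [1, 2, 2, 2, 2, 2] -- exactly two arms: a bi-infinite chain
```

After the first step there are always exactly two reduced paths of each length, one winding clockwise and one anticlockwise.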

For a general \(\sym{n}\)-cycle quiver, we have the following theorem (modulo base vertex):

\[ \forwardPathQuiver{\subSize{\cycleQuiver }{\sym{n}}}{\vert{}}\isomorphicSymbol \subSize{\lineQuiver }{ \infty } \]## Fundamental cycles

When a cycle is present in a finite quiver, the corresponding path quiver will be infinite, because we can make arbitrarily long paths by traversing the cycle repeatedly. One way to understand this is to consider only the **fundamental cycles** of the quiver: these are cycles that form a sub-path of *any* larger cycle. Any infinite path must contain these fundamental cycles as sub-paths infinitely many times (traversed in an appropriate order). So far we haven’t been too precise about what defines such a fundamental cycle, and in fact there is some freedom in how they are defined that is analogous to the choice of a basis in a vector space, so we will return to this topic when we have the requisite mathematical machinery.

#### Grid quiver

For now, we can observe that for the infinite grid quiver \(\subSize{\gridQuiver{\sym{k}}}{ \infty }\), we can extend the empty path in exactly \(2 \, \sym{k}\) ways (the cardinals and their inverses), and we can extend a non-empty path in exactly \(2 \, \sym{k} - 1\) distinct ways to avoid backtracking. Since we cannot extend two *distinct* paths to yield the *same* path, the path quiver is necessarily a tree. Therefore, we have the following theorem:

The situation for a finite grid quiver is more complex, as the number of ways we can extend a path depends on which vertex of the base quiver it ends at: there could be between \(\sym{k} - 1\) and \(2 \, \sym{k} - 1\) possible extensions. For example, on a finite line quiver, there is 1 possible extension at an interior vertex, and 0 possible extensions at the two end vertices. On a square quiver, there are 3 possible extensions at an interior vertex, 2 at an edge vertex, and only 1 at the 4 corner vertices.
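These extension counts follow from vertex degrees: a non-backtracking path ending at a vertex of degree \(d\) has \(d - 1\) extensions, since only the arriving edge is excluded. A quick Python check on a finite square quiver (the grid size \(n = 4\) is an arbitrary choice for illustration):

```python
n = 4  # a finite n x n square quiver

def degree(i, j):
    """Number of grid neighbours of vertex (i, j)."""
    return sum(0 <= i + di < n and 0 <= j + dj < n
               for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)])

# extensions of a path ending at (i, j): all edges except the arriving one
ext = {(i, j): degree(i, j) - 1 for i in range(n) for j in range(n)}
print(sorted(set(ext.values())))  # [1, 2, 3]: corner, edge, interior
```

The three values 1, 2, 3 are exactly the corner, edge, and interior counts stated above.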

## Backward path quiver

#### Intuition

We constructed the forward path quiver \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\) by successively extending the empty path \(\paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\,\)*on the right,* yielding edges of the form:

We can perform a *mirror construction* to this, where we instead extend the empty path \(\paren{\pathWord{\vert{x}}{\emptyWord{}}{\vert{x}}}\,\)*on the left*, yielding edges of the form:

Notice that the cardinal labeling this edge is \(\mcard{c}\), which we call a **mirror cardinal**. It is distinct from \(\card{c}\) itself.

The resulting path quiver is called the **backward path quiver** \(\backwardPathQuiver{\quiver{Q}}{\vert{x}}\), and shares the same vertices as \(\forwardPathQuiver{\quiver{Q}}{\vert{x}}\), since extending the empty path on the left by \(\mcard{c}\) yields the same path as extending it on the right by \(\inverted{\card{c}}\).

#### Definition

More formally, we can write \(\backwardPathQuiver{\quiver{Q}}{\vert{x}}\) as:

\[ \begin{aligned} \vertexList(\backwardPathQuiver{\quiver{Q}}{\vert{x}})&\defEqualSymbol \pathList(\quiver{Q},\blank,\vert{x})\\ \\ \edgeList(\backwardPathQuiver{\quiver{Q}}{\vert{x}})&\defEqualSymbol \setConstructor{\tde{\path{P}}{\path{R}}{\mcard{c}}}{\pathCompose{\card{c}}{\path{P}} = \path{R}}\end{aligned} \]## Affine path quiver

The utility of considering both the forward and backward path quivers is that they allow us to construct the **affine path quiver** \(\pathQuiver{\quiver{Q}}\).

While the forward and backward path quivers depend on a choice of base vertex, the affine path quiver does not. It is constructed as follows:

\[ \begin{aligned} \vertexList(\pathQuiver{\quiver{Q}})&\defEqualSymbol \pathList(\quiver{Q},\blank,\blank)\\ \\ \edgeList(\pathQuiver{\quiver{Q}})&\defEqualSymbol \begin{array}{c} \setConstructor{\tde{\path{P}}{\path{R}}{\card{c}}}{\pathCompose{\path{P}}{\card{c}} = \path{R}}\\ \cup \\ \setConstructor{\tde{\path{P}}{\path{R}}{\mcard{c}}}{\pathCompose{\card{c}}{\path{P}} = \path{R}} \end{array} \end{aligned} \]Expressing this in words, we say a vertex of \(\pathQuiver{\quiver{Q}}\) is a path \(\path{P}\) in \(\quiver{Q}\), and an edge of \(\pathQuiver{\quiver{Q}}\) is either:

- \(\tde{\path{P}}{\path{R}}{\card{c}}\), where \(\path{P}\) is a path that can be **extended on the right** by the cardinal \(\elemOf{\card{c}}{\cardinalList(\quiver{Q})}\) to give the path \(\path{R}\), or
- \(\tde{\path{P}}{\path{R}}{\mcard{c}}\), where \(\path{P}\) is a path that can be **extended on the left** by the cardinal \(\elemOf{\card{c}}{\cardinalList(\quiver{Q})}\) to give the path \(\path{R}\).

#### Connectedness

An important fact about the affine path quiver \(\pathQuiver{\quiver{Q}}\) is that it is connected iff the quiver \(\quiver{Q}\) is connected. Why is this?

Consider two vertices \(\path{P},\path{R}\) of \(\pathQuiver{\quiver{Q}}\), corresponding to paths in \(\quiver{Q}\) as follows:

\[ \path{P} = \paren{\pathWord{\tvert{p}}{\wordSymbol{P}}{\hvert{p}}}\quad \path{R} = \paren{\pathWord{\tvert{r}}{\wordSymbol{R}}{\hvert{r}}} \]Clearly there is a path in \(\pathQuiver{\quiver{Q}}\) connecting \(\path{P}\) to \(\paren{\pathWord{\hvert{p}}{\emptyWord{}}{\hvert{p}}}\), given by retractions on the left according to the cardinals of \(\wordSymbol{P}\). Similarly there is a path connecting \(\path{R}\) to \(\paren{\pathWord{\tvert{r}}{\emptyWord{}}{\tvert{r}}}\), given by retractions on the right according to the cardinals of \(\wordSymbol{R}\). Lastly, since \(\quiver{Q}\) is connected there is at least one path \(\paren{\pathWord{\hvert{p}}{\wordSymbol{S}}{\tvert{r}}}\) in \(\quiver{Q}\), which implies a path in \(\pathQuiver{\quiver{Q}}\) connecting \(\paren{\pathWord{\hvert{p}}{\emptyWord{}}{\hvert{p}}}\) to \(\paren{\pathWord{\hvert{p}}{\wordSymbol{S}}{\tvert{r}}}\) by extension on the right, or if we prefer a path connecting \(\paren{\pathWord{\tvert{r}}{\emptyWord{}}{\tvert{r}}}\) to \(\paren{\pathWord{\hvert{p}}{\wordSymbol{S}}{\tvert{r}}}\) by extension on the left. Therefore we have a path connecting \(\path{P}\) to \(\path{R}\), specifically:

\[ \paren{\pathWord{\paren{\pathWord{\tvert{p}}{\wordSymbol{P}}{\hvert{p}}}}{\concat{\inverted{\mirror{\wordSymbol{P}}} \wordSymbol{S} \wordSymbol{R}}}{\paren{\pathWord{\tvert{r}}{\wordSymbol{R}}{\hvert{r}}}}} \]

#### Visualization

We'll focus on the following base quiver:

Here we show the forward path quivers and backward path quivers side-by-side:

## Summary

We've defined the forward, backward, and affine path quivers. The forward path quiver will be most useful in the sections that follow.

We also saw examples, and inferred the following general relationships:

base quiver | forward path quiver
---|---
\(\bouquetQuiver{\sym{k}}\) | \(\subSize{\treeQuiver{\sym{k}}}{ \infty }\)
\(\subSize{\lineQuiver }{\sym{n}}\) | \(\subSize{\lineQuiver }{\sym{n}}\)
\(\subSize{\treeQuiver{\sym{k}}}{\sym{n}}\) | \(\subSize{\treeQuiver{\sym{k}}}{\sym{n}}\)
\(\subSize{\cycleQuiver }{\sym{n}}\) | \(\subSize{\lineQuiver }{ \infty }\)
\(\subSize{\gridQuiver{\sym{k}}}{ \infty }\) | \(\subSize{\treeQuiver{\sym{k}}}{ \infty }\)

# Path quotients

## Motivation

In this section, we're going to set up a construction that we'll use in the next section to build infinite but highly regular quivers from simpler finite quivers. The construction uses a **path valuation** (a function on paths) to define when two paths on a finite quiver \(\quiver{Q}\) are **equivalent**. By identifying equivalent paths, we will effectively "glue together" sets of vertices of the infinite **forward path quiver** \(\forwardPathQuiver{\quiver{Q}}{\vert{v}}\), turning it from an infinite tree into a so-called **lattice quiver**.

This gluing procedure is a very common idea in abstract algebra, where it generally goes by the name of a **quotient**. Hence we are defining a **path quotient**.

Note: if this section is too abstract for your taste, you can safely skip ahead to the next section.

## Recap

There are by now several moving pieces, so let's do a quick recap:

From [[[Path groupoids]]], we saw that a path \(\path{P} = \paren{\pathWord{\vert{h}}{\wordSymbol{W}}{\vert{t}}}\) is uniquely determined by its initial (or final) vertex and its **path word** \(\wordSymbol{W}\). We can define the function \(\wordOf(\path{P})\defEqualSymbol \wordSymbol{W}\) that sends a path to its path word. Two different paths can have the same path word. The **path groupoid** \(\pathGroupoid{\quiver{Q}}\) is the groupoid whose elements are paths in \(\quiver{Q}\), and whose multiplication is path composition.

From [[[Path quivers]]], we defined vertices of the **forward** **path quiver** \(\forwardPathQuiver{\quiver{Q}}{\vert{v}}\) to correspond to the paths of \(\quiver{Q}\) that start at \(\vert{v}\):

The vertices of the more general **affine path quiver** \(\pathQuiver{\quiver{Q}}\) correspond to *all* paths of \(\quiver{Q}\), regardless of where they start:

By moving to the more general setting of the affine path quiver, we gain an important property: the affine path quiver \(\pathQuiver{\quiver{Q}}\) is *closed under path reversal.* In comparison, reversing a path \(\elemOf{\path{P}}{\forwardPathQuiver{\quiver{Q}}{\vert{v}}}\) only yields another path in \(\forwardPathQuiver{\quiver{Q}}{\vert{v}}\) when \(\headVertex(\path{P}) = \tailVertex(\path{P}) = \vert{v}\) – in other words, when the path is a cycle.

In either case, the edges of a path quiver correspond to **retractions** *or* **extensions** of paths by following a cardinal available at the head of the path, giving us the cardinal structure of the path quiver.

We see then that \(\pathGroupoid{\quiver{Q}}\) and \(\pathQuiver{\quiver{Q}}\) are closely related: the elements of the path groupoid \(\pathGroupoid{\quiver{Q}}\) are vertices of the path quiver \(\pathQuiver{\quiver{Q}}\).

## Path valuations

We now define a **path valuation** \(\functionSignature{\pathMap{ \mu }}{\pathGroupoid{\quiver{Q}}}{\groupoid{V}}\) to be a **groupoid homomorphism** from the path groupoid \(\pathGroupoid{\quiver{Q}}\) to an arbitrary group \(\group{V}\), which we call the **value group**. We say it assigns a **path value** \(\pathMap{ \mu }(\path{P})\) to each path \(\elemOf{\path{P}}{\pathGroupoid{\quiver{Q}}}\).

A groupoid homomorphism simply means that \(\pathMap{ \mu }(\pathCompose{\path{P_1}}{\path{P_2}}) = \pathMap{ \mu }(\path{P_1})\gmult \pathMap{ \mu }(\path{P_2})\); this amounts to saying that the values of paths are “compatible” with composition: taking the values of two paths \(\path{P_1},\path{P_2}\) and multiplying them in the value group gives the same result as taking the value of the composition of the two paths. This situation is depicted in the following diagram (the vertical arrows actually refer to binary functions, but you get the idea):

### Example: word

Perhaps the simplest path valuation of all is given by \(\wordOf\), the function that takes a path to its path word, where that path word is seen as an element of the **word group**, written \(\wordGroup{\quiver{Q}}\), which we defined in [[[Path homomorphisms]]]:

We saw earlier that the function \(\wordOf\) is a groupoid homomorphism. But it is not in general surjective: we can form words \(\elemOf{\wordSymbol{W}}{\wordGroup{\quiver{Q}}}\) that might not correspond to the path word of any path in \(\quiver{Q}\).

### Example: signed length

As a less trivial example, we can take \(\groupoid{V}\) to be the group of integers under addition \((\mathbb{Z},+ )\), and the valuation \(\pathMap{ \mu }\) to be the **signed length** of a path, written \(\signedLength\), in which cardinals count as \(+1\) and inverses of cardinals count as \(-1\).

Then \(\functionSignature{\signedLength}{\pathGroupoid{\quiver{Q}}}{\group{\mathbb{Z}}}\) is indeed a homomorphism, since the signed length of the composition of two paths is the sum of their signed lengths: \(\signedLength(\pathCompose{\path{P_1}}{\path{P_2}}) = \signedLength(\path{P_1}) + \signedLength(\path{P_2})\). Empty paths are sent to \(0\), which is of course the identity element of \(\group{\mathbb{Z}}\).

As this example shows, the homomorphism can forget information about the original path, but whatever information it retains must be "composable".
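As a quick sanity check, here is a small Python sketch of this compatibility, under the assumption that we represent a path word as a list of signed cardinals and compose paths by concatenation followed by free reduction (cancelling adjacent inverse pairs); names are illustrative only:

```python
# Signed length as a valuation compatible with path composition.
# A word is a list of (cardinal, sign) pairs, sign being +1 or -1.
def compose(w1, w2):
    out = list(w1)
    for c, s in w2:
        if out and out[-1] == (c, -s):
            out.pop()            # a cardinal followed by its inverse cancels
        else:
            out.append((c, s))
    return out

def signed_length(w):
    return sum(s for _, s in w)

P1 = [('r', +1), ('g', +1)]
P2 = [('g', -1), ('g', -1)]      # composing cancels one step of P1

# Cancellation removes one +1 and one -1 together, so additivity survives
# free reduction: the signed length of the composite equals the sum.
assert signed_length(compose(P1, P2)) == signed_length(P1) + signed_length(P2)
```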

### Affine path valuations

A family of path valuations, called **affine path valuations**, can be constructed from group homomorphisms of the form \(\functionSignature{\groupHomomorphism{ \phi }}{\wordGroup{\quiver{Q}}}{\group{V}}\).

The group homomorphism \(\groupHomomorphism{ \phi }\) induces a path valuation \(\functionSignature{\pathMap{\affineModifier{\groupHomomorphism{ \phi }}}}{\pathQuiver{\quiver{Q}}}{\group{V}}\) in the following way:

\[ \pathMap{\affineModifier{\groupHomomorphism{ \phi }}}(\pathWord{\vert{x}}{\word{\card{c_1}}{\card{\ellipsis }}{\card{c_{\sym{n}}}}}{\vert{y}})\defEqualSymbol \indexProd{\groupHomomorphism{ \phi }(\card{c_i})}{i = 1}{\sym{n}} = \groupHomomorphism{ \phi }(\card{c_1})\gmult \ellipsis \gmult \groupHomomorphism{ \phi }(\card{c_{\sym{n}}}) \]In English: an affine path valuation \(\affineModifier{\groupoidFunction{ \phi }}\) calculates the value \(\pathMap{\affineModifier{\groupHomomorphism{ \phi }}}(\path{P})\) of a path \(\path{P} = \pathWord{\vert{x}}{\word{\card{c_1}}{\card{\ellipsis }}{\card{c_{\sym{n}}}}}{\vert{y}}\) as the group product of the values \(\groupoidFunction{ \phi }(\card{c_{\sym{i}}})\) in its path word \(\word{\card{c_1}}{\card{\ellipsis }}{\card{c_{\sym{n}}}}\).

What is natural about constructing value functions in this way is that they are automatically groupoid homomorphisms, thanks to the associativity of multiplication in the group \(\group{V}\). We use the word *affine* because this construction forces the valuation to ignore the head and tail of a path: it depends only on the "cardinal content" of the path.

We've already seen two examples of affine path valuations: it is easy to verify that the signed length \(\signedLength\) is an affine path valuation given by \(\groupHomomorphism{ \phi }(\card{c}) = 1\) for \(\elemOf{\card{c}}{\cardinalList(\quiver{Q})}\), which of course implies that \(\groupHomomorphism{ \phi }(\inverted{\card{c}}) = -1\). It’s also easy to see that the function \(\functionSignature{\wordOf}{\pathGroupoid{\quiver{Q}}}{\wordGroup{\quiver{Q}}}\) is an affine path valuation, induced by the identity group homomorphism on \(\wordGroup{\quiver{Q}}\).
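A hypothetical Python sketch of this induction: a map `phi` defined only on cardinals is extended over a whole path word, with inverted cardinals receiving negated values (here the value group is \((\mathbb{Z},+)\), so the group product is a sum). The map `delta_g`, which counts only occurrences of one cardinal, is the cardinal-sensitive variant mentioned below:

```python
# An affine path valuation induced by a map phi on cardinals.
# A word is a list of (cardinal, sign) pairs; the value of a word is the
# signed sum of phi over its letters.
def affine_value(phi, word):
    return sum(s * phi[c] for c, s in word)

signed_len = {'r': 1, 'g': 1}    # induces the signed length
delta_g    = {'r': 0, 'g': 1}    # counts only the cardinal 'g' (signed)

word = [('r', +1), ('g', +1), ('r', +1), ('g', -1)]
assert affine_value(signed_len, word) == 2   # +1 +1 +1 -1
assert affine_value(delta_g, word) == 0      #  0 +1  0 -1
```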

### Semivaluations

We can obtain a wider range of path valuations if we relax our requirement that a valuation be a groupoid homomorphism. Forgetting the inverse operation of our groupoids (path and value), and considering them as so-called **semigroupoids with identity**, it becomes easier to construct path valuations, since semigroupoid homomorphism is a weaker requirement. We'll call these **path semivaluations**.

An example of a path semivaluation is the intuitive function \(\functionSignature{\length}{\pathGroupoid{\quiver{Q}}}{\semiring{\mathbb{N}}}\) that measures the *ordinary* length of a path, which is the length of its path word:

And similar to the affine path valuation case, we can use a **semigroup homomorphism** \(\functionSignature{\groupHomomorphism{ \phi }}{\wordGroup{\quiver{Q}}}{\quiver{S}}\) to induce a path semivaluation \(\functionSignature{\function{\affineModifier{\groupoidFunction{ \phi }}}}{\pathGroupoid{\quiver{Q}}}{\quiver{S}}\).

Repeating the earlier story with \(\signedLength\), \(\length\) can be seen as being induced by the semigroup homomorphism \(\function{ \phi }(\card{c}) = \function{ \phi }(\inverted{\card{c}}) = 1\). Unsurprisingly, we can also construct "cardinal-sensitive" variants of \(\length\), which assign \(\function{ \phi _{\sym{j}}}(\card{c_{\sym{i}}}) = \delta _{i,j}\).

### Notation

To ease the notation, we will often write quotients involving affine path valuations in the following more compact notation:

\[ \compactQuotient{\quiver{F}}{\vert{v}}{\groupHomomorphism{ \phi }}\defEqualSymbol \quotient{\forwardPathQuiver{\quiver{F}}{\vert{v}}}{\affineModifier{\groupHomomorphism{ \phi }}} \]

## Path quotients

Path valuations will allow us to construct path quotients.

Path quotients are a simple but powerful way of deriving quivers from other quivers, whenever we have a path valuation \(\pathMap{ \mu }\). We’ll shortly use this to begin constructing discrete geometries.

Let \(\quiver{F}\) be a quiver and \(\functionSignature{\pathMap{ \mu }}{\pathQuiver{\quiver{F}}}{\groupoid{V}}\) be a path valuation on \(\quiver{F}\). The **forward quotient** \(\groupoid{G} = \quotient{\forwardPathQuiver{\quiver{F}}{\vert{x}}}{\pathMap{ \mu }}\) is the quiver defined as follows:

- a vertex \(\vert{v}\) of \(\groupoid{G}\) is the equivalence class of paths beginning at \(\vert{x}\) that have the value \(\groupoidElement{v}\) under \(\pathMap{ \mu }\), in other words the set \(\function{\inverse{\pathMap{ \mu }}}(\vert{v}) \subseteq \pathQuiver{\quiver{F}}\);
- an edge \(\tde{\vert{u}}{\vert{v}}{\card{c}}\) exists in \(\groupoid{G}\) iff one of the paths in \(\function{\inverse{\pathMap{ \mu }}}(\vert{v})\) is an extension of one of the paths in \(\function{\inverse{\pathMap{ \mu }}}(\vert{u})\) by the cardinal \(\elemOf{\card{c}}{\cardinalList(\quiver{F})}\).

#### Example: signed length

To illustrate a path quotient, we’ll use the following tree quiver with root vertex shown at the top:

We’ll construct the quotient \(\quotient{\forwardPathQuiver{\quiver{F}}{\vert{0}}}{\signedLength}\) using the **signed length** \(\signedLength\) path valuation we introduced earlier:

We’ve drawn the equivalence classes of paths by just superimposing those paths on each diagram.

By definition, there is one class for each value in \(\signedLength(\pathGroupoid{\quiver{F}})\), namely for each length of path. These values are shown in a label above each class, starting with 0 on the left and ending with 3 on the right.

A technicality worth mentioning: the cardinal \(\gform{\card{g}}\) should occur *twice* between the length 1 and length 2 classes, since there are two paths that can be extended by \(\gform{\card{g}}\), namely \(\de{\word{\rform{\card{r}}}}{\word{\rform{\card{r}}}{\gform{\card{g}}}}\) and \(\de{\word{\gform{\card{g}}}}{\word{\gform{\card{g}}}{\gform{\card{g}}}}\). But we ignore this multiplicity, because it does not violate the local uniqueness property we require of quivers.

#### Example: cardinal-sensitive length

Let’s perform a similar quotient, except using the cardinal-sensitive signed length \(\signedLength_{\gform{\card{g}}}\) we defined earlier. This will measure a path length in which *only* the cardinal \(\gform{\card{g}}\) counts towards the total.

As an exercise, I suggest checking the existence of every cardinal in the path quiver above, and making sure the equivalence classes are as you expect.

#### Example: modulo signed length

In the last example, we will look at the signed length *modulo* an integer. We'll apply this path valuation to the path quiver of a 1-bouquet \(\quiver{B} = \bindCards{\bouquetQuiver{1}}{\rform{\card{r}}}\):

As we saw before, \(\forwardPathQuiver{\quiver{B}}{\vert{0}} = \bindCards{\subSize{\lineQuiver }{ \infty }}{\rform{\card{r}}}\). Here we show a fragment of \(\forwardPathQuiver{\quiver{B}}{\vert{0}}\), with the particular paths illustrated:

We'll now form the quotient \(\quotient{\forwardPathQuiver{\quiver{B}}{\vert{0}}}{\paren{\modLabeled{\signedLength}{3}}}\):

As with any infinite path quiver, the equivalence classes shown above, namely \(\list{\word{\card{1}},\word{\rform{\card{r}}}{\rform{\card{r}}}{\rform{\card{r}}}}\), \(\list{\word{\rform{\card{r}}},\word{\rform{\ncard{r}}}{\rform{\ncard{r}}}}\), \(\list{\word{\rform{\card{r}}}{\rform{\card{r}}},\word{\rform{\ncard{r}}}}\) are just fragments of infinite sets, of the form:

\[ \setConstructor{\repeatedPower{\rform{\card{r}}}{3 \, \sym{n} + \sym{m}}}{\elemOf{\sym{n}}{\mathbb{Z}}} \]where \(\elemOf{\sym{m}}{\list{0,1,2}}\) is the value: the signed cardinal length *modulo* 3.
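This quotient is small enough to compute with a few lines of Python. The sketch below (all names hypothetical) tracks only the value class of a path – its signed length mod 3 – extending by \(\rform{\card{r}}\) (+1) or its inverse (−1), and records the \(\rform{\card{r}}\)-labeled edges between classes. The result is a 3-cycle, as the quotient predicts:

```python
from collections import deque

# Quotient of the 1-bouquet's forward path quiver by signed length mod m.
# A state is the value class in {0, ..., m-1}; extending a path by r adds
# 1 to the value, extending by the inverse of r subtracts 1.
def mod_quotient(m, depth):
    seen, edges = {0}, set()
    frontier = deque([0])
    for _ in range(depth):
        nxt = deque()
        for v in frontier:
            # forward step (edge v --r--> v+1) and reverse step (v-1 --r--> v)
            for w, edge in (((v + 1) % m, (v, (v + 1) % m, 'r')),
                            ((v - 1) % m, ((v - 1) % m, v, 'r'))):
                edges.add(edge)
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return seen, edges

vertices, edges = mod_quotient(3, 5)
assert vertices == {0, 1, 2}
assert edges == {(0, 1, 'r'), (1, 2, 'r'), (2, 0, 'r')}
```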

# Lattice quivers

## Recap

In [[[Path quivers]]] we saw how to construct the forward path quiver \(\forwardPathQuiver{\quiver{F}}{\vert{x}}\) from a quiver \(\quiver{F}\), and in [[[Path quotients]]] we saw how to define the path quotient quiver \(\quotient{\forwardPathQuiver{\quiver{F}}{\vert{x}}}{\pathMap{ \mu }}\) with respect to some path valuation \(\functionSignature{\groupoidHomomorphism{ \mu }}{\pathGroupoid{\quiver{F}}}{\group{V}}\), which maps paths to "path values" in some group \(\group{V}\). We examined the particularly simple case of affine path valuations \(\affineModifier{\groupoidHomomorphism{ \phi }}\) that are induced by a group homomorphism \(\functionSignature{\groupoidHomomorphism{ \phi }}{\wordGroup{\quiver{F}}}{\group{V}}\), yielding quotients written \(\compactQuotient{\quiver{F}}{\vert{x}}{\groupoidFunction{ \phi }}\). The affine path valuations are path valuations that depend only on the word of a path: \(\function{\affineModifier{\groupoidHomomorphism{ \phi }}}(\path{P}) = \groupoidHomomorphism{ \phi }(\wordOf(\path{P}))\).

We'll now put this construction to work to build **lattice quivers**, which are quotients of simple **fundamental quivers** \(\quiver{F}\) by affine path valuations induced by homomorphisms into Abelian groups that capture the translational symmetries of crystallographic lattices. These will give us an alternative way to view the quivers we constructed in [[[Transitive quivers]]] and [[[Cayley quivers]]].

## Representations

### Group representations

Recall from [[[Cayley quivers]]] that a **right group action** \(\function{A}\) is a "binding" of a group \(\group{G}\) to an object \(\sym{X}\) on which it acts, expressed as a structure-preserving map \(\functionSignature{\function{\action{A}}}{\tuple{\sym{X},\group{G}}}{\sym{X}}\). By fixing the second argument of the two-argument map \(\function{A}\) we obtain a family of functions \(\setConstructor{\functionSignature{\function{\action{A}_{\groupElement{g}}}}{\sym{X}}{\sym{X}}}{\elemOf{\groupElement{g}}{\group{G}}}\). It is these **Cayley functions** that *encode* the behavior of the group \(\group{G}\), with function composition playing the role of group multiplication.

Group actions are usually obtained by associating each element of \(\group{G}\) with some form of *isomorphism* of the object \(\sym{X}\). This can be formalized using a group homomorphism \(\functionSignature{\groupoidFunction{ \pi }}{\group{G}}{\automorphisms(\sym{X})}\), where \(\automorphisms(\sym{X})\) represents the group of relevant symmetries of the object \(\sym{X}\).

When \(\sym{X}\) is a set, the isomorphisms are permutations of \(\sym{X}\), so that \(\automorphisms(\sym{X}) = \symmetricGroup{\sym{X}}\), and the group actions are known as **permutation representations**.

When \(\sym{X}\) is a vector space, these group actions are known as **linear representations**. In that case, \(\automorphisms(\sym{X})\) is just the space of linear transformations of \(\sym{X}\), and if this space is also finite-dimensional, we can represent these linear transformations as square matrices living in a subgroup of \(\generalLinearGroup{n}{\field{K}}\), for some field \(\field{K}\), typically \(\field{\mathbb{R}}\) or \(\field{\mathbb{C}}\). A vast body of theory has been built around studying and classifying such representations, with many applications in physics and elsewhere.

To summarize, group representations are *models* of groups: concrete instantiations of the behavior of the group in the form of transformations of a set or vector space. These instantiations can capture *all* the behavior, meaning that different group elements are sent to unique transformations, in which case they are called **faithful**. Or they might lose information; in the case of the **trivial representation**, every group element is represented as the identity transformation, and we've lost *all* the information about the original group.

## Path representations

### Linear path representations

Having introduced linear representations of groups, it is natural to ask how we might build linear representations of the path *groupoid* of a quiver: we might call these **linear path representations**.

Clearly a linear path representation would involve a groupoid homomorphism \(\functionSignature{\groupoidHomomorphism{ \mu }}{\pathGroupoid{\quiver{F}}}{\generalLinearGroup{\sym{n}}{\field{K}}}\). But we’ve already encountered a class of such homomorphisms in the form of affine path valuations \(\functionSignature{\function{\affineModifier{\groupHomomorphism{ \phi }}}}{\pathGroupoid{\quiver{F}}}{\group{V}}\) induced by group homomorphisms \(\functionSignature{\groupHomomorphism{ \phi }}{\wordGroup{\quiver{F}}}{\group{V}}\). If we now choose a linear representation \(\functionSignature{\groupHomomorphism{ \pi }}{\group{V}}{\generalLinearGroup{\sym{n}}{\field{K}}}\) of the group \(\group{V}\), we can compose these \(\pathMap{ \mu }\defEqualSymbol \functionComposition{\groupHomomorphism{ \pi }\functionCompositionSymbol \affineModifier{\groupHomomorphism{ \phi }}}\) to obtain a linear path representation \(\functionSignature{\groupoidHomomorphism{ \mu }}{\pathGroupoid{\quiver{F}}}{\generalLinearGroup{\sym{n}}{\field{K}}}\).

#### Describing linear path representations

Recall an important property of group homomorphisms \(\functionSignature{\groupHomomorphism{ \phi }}{\wordGroup{\quiver{F}}}{\group{V}}\): they are uniquely and freely determined by their behavior on the cardinals of \(\quiver{F}\), since these are the generators of \(\wordGroup{\quiver{F}}\). This allows us to describe \(\groupHomomorphism{ \phi }\) as an association between cardinals and \(\sym{n}\times \sym{n}\) matrices:

\[ \groupHomomorphism{ \phi } = \homomorphismMapping{\mto{\card{c}_1}{\matrix{M_1}},\ellipsis ,\mto{\card{c}_{\sym{k}}}{\matrix{M_{\sym{k}}}}} \]

#### Quotients vs path representations

As before, a path representation *models* paths as elements of \(\generalLinearGroup{n}{\field{K}}\), but by taking quotients we will be deliberately discarding information in order to conflate precisely those paths sharing a tail vertex that we wish to lead to the same head vertex. For suitable representations, this conflation will "fold" the tree \(\subSize{\treeQuiver{\sym{k}}}{ \infty }\) produced by the forward path quiver into a finite-dimensional quiver. (In abstract algebra, this kind of construction is ubiquitous, generically referred to as taking a quotient of a freely generated object in order to impose some desired relations).

While we will be focused on simply using path quotients to construct lattice quivers, we will keep the path representation interpretation in mind for the remainder of this section, and the matrix machinery that goes along with group representations will be a useful "universal format" for doing computations.

## Computation

We wish to actually *compute* the quotient \(\compactQuotient{\quiver{F}}{\vert{x}}{\groupoidFunction{ \phi }}\), where \(\groupoidFunction{ \phi }\) is a map assigning a matrix \(\groupoidFunction{ \phi }(\card{c})\) to each cardinal \(\elemOf{\card{c}}{\cardinalList(\quiver{F})}\). Of course, this quiver is usually infinite, so we can only generate a finite part of it, which we'll generally call a **fragment**.

The following describes how we can use a fundamental algorithm from computer science, known as **breadth-first graph traversal**, to compute the lattice quiver.

First, define a **state** as representing an equivalence class of paths starting at the base vertex \(\vert{v}\). We'll represent these states with pairs \(\tuple{\vert{x},\matrix{M}}\), where \(\vert{x}\) is a vertex of the fundamental quiver representing the head of the path, and \(\matrix{M}\) is the value of the path as represented by a matrix.

We start with a single state \(\tuple{\vert{x},\matrix{M}} = \tuple{\vert{v},\matrix{I}}\) representing the empty path at the base vertex \(\vert{v}\), with path matrix \(\matrix{M}\) equal to the identity matrix \(\matrix{I}\). We then extend this empty path by all cardinals \(\card{c}_{\sym{i}}\) corresponding to edges \(\tde{\vert{v}}{\vert{w_i}}{\card{\card{c}_{\sym{i}}}}\) in the fundamental quiver, right-multiplying our current value of \(\matrix{M}\) by the corresponding cardinal matrices \(\groupoidFunction{ \phi }(\card{c}_{\sym{i}})\) to obtain new states \(\tuple{\vert{w_i},\matrix{M}\matrixDotSymbol \groupoidHomomorphism{ \phi }(\card{c}_{\sym{i}})}\).

We then go on to explore these new states in a first-in, first-out fashion, stopping when we reach some depth. Sometimes we will encounter a state that we have created before – this is to say, we will find two different paths in the fundamental quiver that produce the same state – but we can avoid re-exploring these old states by caching them in a suitable data structure.

By storing only states and their adjacency, and not entire paths, we avoid performing an exponential amount of computation as the number of visited states increases (this is a form of so-called **dynamic programming**).

The graph that we produce in this exploration has vertices given by states \(\tuple{\vert{v},\matrix{M}}\) and edges given by transitions \(\tde{\tuple{\vert{v},\matrix{M}}}{\tuple{\primed{\vert{v}},\primed{\matrix{M}}}}{\card{\card{c}_{\sym{i}}}}\), where \(\primed{\matrix{M}} = \matrix{M}\matrixDotSymbol \groupoidHomomorphism{ \phi }(\card{c}_{\sym{i}})\) and \(\elemOf{\tde{\vert{v}}{\primed{\vert{v}}}{\card{\card{c}_{\sym{i}}}}}{\edgeList(\quiver{F})}\).

In other contexts, a graph produced in this way is variously known as a **state-transition graph**, a **multiway graph**, or, in the group-theory context, a **Cayley graph**. In our context, these graphs are quivers, since they have an additional cardinal structure in which each edge is labeled with the cardinal we took to generate that particular state. Moreover, these quivers represent particular quotients of path quivers. When the quotient is in terms of a homomorphism into a translation group, we'll simply use the term **lattice quiver**.
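The traversal just described can be sketched in a few dozen lines of Python. This is a minimal illustration, not the project's actual implementation: it is specialized to translation matrices (so traversing an edge in reverse only requires negating the offsets), and the function and variable names are invented for this example:

```python
from collections import deque

def mat_mul(a, b):
    """Multiply two square matrices given as tuples of tuple rows."""
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def translation(*xs):
    """Homogeneous translation matrix with the offsets in the last column."""
    n = len(xs) + 1
    return tuple(tuple(1 if i == j else (xs[i] if j == n - 1 and i < n - 1 else 0)
                       for j in range(n)) for i in range(n))

def trans_inverse(m):
    """Inverse of a translation matrix: negate the offset column."""
    n = len(m)
    return translation(*(-m[i][n - 1] for i in range(n - 1)))

def lattice_bfs(quiver_edges, phi, start, depth):
    """Breadth-first generation of a lattice-quiver fragment.
    quiver_edges: fundamental quiver as (tail, head, cardinal) triples.
    phi: dict mapping each cardinal to a translation matrix.
    A state is a (vertex, matrix) pair; the initial state is the empty
    path at `start` with the identity matrix."""
    n = len(next(iter(phi.values())))
    init = (start, translation(*([0] * (n - 1))))
    seen, lattice_edges = {init}, set()
    frontier = deque([init])
    for _ in range(depth):
        nxt = deque()
        for v, M in frontier:
            for t, h, c in quiver_edges:
                found = []
                if t == v:  # traverse the edge forward
                    w = (h, mat_mul(M, phi[c]))
                    lattice_edges.add(((v, M), w, c))
                    found.append(w)
                if h == v:  # traverse the edge in reverse
                    w = (t, mat_mul(M, trans_inverse(phi[c])))
                    lattice_edges.add((w, (v, M), c))
                    found.append(w)
                for w in found:
                    if w not in seen:  # cache states to avoid re-exploration
                        seen.add(w)
                        nxt.append(w)
        frontier = nxt
    return seen, lattice_edges

# Line lattice: one vertex, one self-loop labelled r, explored to depth 3.
states, _ = lattice_bfs([('o', 'o', 'r')], {'r': translation(1)}, 'o', 3)
assert len(states) == 7   # 2n + 1 vertices at depth n
```

Running the same function with two self-loops and the two unit translations of \(\mathbb{Z}^2\) produces a fragment of the square lattice instead.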

### Notation

We will denote the quiver generated by exploring fundamental quiver \(\quiver{F}\), starting at vertex \(\vert{x}\), using the affine path valuation induced by \(\functionSignature{\groupHomomorphism{ \phi }}{\wordGroup{\quiver{F}}}{\generalLinearGroup{\sym{n}}{\field{K}}}\), to depth \(\sym{d}\), with the expression:

\[ \latticeBFS{\quiver{F},\vert{x},\groupoidFunction{ \phi },\sym{d}} \]By setting \(\sym{d} = \infty\) we represent the "ongoing breadth-first search" that typically does not ever terminate, which we interpret as computing the full forward path quiver quotient to arbitrary depth:

\[ \compactQuotient{\quiver{F}}{\vert{x}}{\groupoidFunction{ \phi }} = \limit{\latticeBFS{\quiver{F},\vert{x},\groupoidFunction{ \phi },\sym{d}}}{\sym{d} \to \infty } \]This limit holds in the sense that every vertex / edge of the quotient will eventually be generated by the search for some sufficiently large \(\sym{d}\), and conversely every vertex / edge generated by the search corresponds to some vertex / edge of the quotient.

### Translation matrices

While cardinal matrices \(\groupoidHomomorphism{ \phi }(\card{c}_{\sym{i}})\) are in general \(\sym{n}\times \sym{n}\), we will be limiting ourselves to **translation matrices**, which can be put into a simple upper triangular form as follows:

These matrices have the property that:

\[ \begin{aligned} \translationVector{\sym{x}}\matrixDotSymbol \translationVector{\primed{\sym{x}}}&= \translationVector{\sym{x} + \primed{\sym{x}}}\\ \translationVector{\sym{x},\sym{y}}\matrixDotSymbol \translationVector{\primed{\sym{x}},\primed{\sym{y}}}&= \translationVector{\sym{x} + \primed{\sym{x}},\sym{y} + \primed{\sym{y}}}\\ \translationVector{\sym{x},\sym{y},\sym{z}}\matrixDotSymbol \translationVector{\primed{\sym{x}},\primed{\sym{y}},\primed{\sym{z}}}&= \translationVector{\sym{x} + \primed{\sym{x}},\sym{y} + \primed{\sym{y}},\sym{z} + \primed{\sym{z}}}\end{aligned} \]

## Line lattice

To kick things off let’s look at probably the simplest possible fundamental quiver: a 1-bouquet quiver \(\bindCards{\bouquetQuiver{1}}{\rform{\card{r}}}\).

We'll associate its only cardinal \(\rform{\card{r}}\) with a \(2 \times 2\) matrix, representing a generator of the group \(\mathbb{Z}\).
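As an illustrative sketch (assuming the standard \(2 \times 2\) homogeneous form of \(\translationVector{\sym{x}}\), with the offset in the top-right entry), we can check both the composition property of translation matrices from the previous section and the fact that repeatedly traversing \(\rform{\card{r}}\) enumerates \(\mathbb{Z}\) in that entry:

```python
def mat_mul(a, b):
    """Multiply two square matrices given as tuples of tuple rows."""
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def T(x):
    """2x2 translation matrix: offset x in the top-right entry."""
    return ((1, x),
            (0, 1))

assert mat_mul(T(2), T(3)) == T(5)   # T(x) . T(x') = T(x + x')

# Walking the self-loop r three times from the identity lands on T(3):
# the top-right entry acts as the 1D coordinate of the lattice vertex.
M = T(0)
for _ in range(3):
    M = mat_mul(M, T(1))
assert M == T(3)
```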

Now, because this quiver only has one vertex, *all* of the paths in \(\quiver{F}\) are trivially composable, and the path groupoid \(\pathGroupoid{\quiver{F}}\) is isomorphic to the word group \(\wordGroup{\quiver{F}}\) on one cardinal, which is itself isomorphic to \(\mathbb{Z}\) – since we just count how many times that cardinal appears in a given word.

We wish to generate the lattice quiver, which is the quotient \(\compactQuotient{\bindCardSize{\bouquetQuiver{1}}{\rform{\card{r}}}}{\vert{o}}{\bindCardSize{\translationWordHomomorphism{1}}{\rform{\card{r}}}}\), where \(\bindCardSize{\translationWordHomomorphism{1}}{\rform{\card{r}}}\) is the group homomorphism \(\functionSignature{\function{\bindCardSize{\translationWordHomomorphism{1}}{\rform{\card{r}}}}}{\wordGroup{\quiver{F}}}{\generalLinearGroup{2}{\ring{\mathbb{Z}}}}\) defined by:

\[ \bindCardSize{\translationWordHomomorphism{1}}{\rform{\card{r}}}\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1}}} \]We can traverse the single fundamental quiver edge \(\tde{\vert{x}}{\vert{x}}{\rform{\card{r}}}\) in the forward *or* reverse direction, so we can reach two states from the initial state. We continue this process until we have reached the specified traversal depth.

Here’s the lattice quiver for our simple fundamental quiver (the origin vertex, corresponding to state \(\tuple{\vert{o},\matrix{I}}\), is shown larger):

Let's label these vertices with their corresponding matrices to see better what is going on:

Note that the top-right entry of each matrix is effectively acting as a 1D coordinate.

We state without proof:

\[ \bindCardSize{\subSize{\lineQuiver }{ \infty }}{\rform{\card{r}}}\isomorphicSymbol \compactQuotient{\bindCardSize{\bouquetQuiver{1}}{\rform{\card{r}}}}{\vert{o}}{\bindCardSize{\translationWordHomomorphism{1}}{\rform{\card{r}}}} \]Additionally, when explored to finite depth, we have:

\[ \bindCardSize{\subSize{\lineQuiver }{2 \, \sym{n} + 1}}{\rform{\card{r}}}\isomorphicSymbol \latticeBFS{\bindCardSize{\bouquetQuiver{1}}{\rform{\card{r}}},\vert{o},\bindCardSize{\translationWordHomomorphism{1}}{\rform{\card{r}}},\sym{n}} \]

### Animation

We can demonstrate the gradual building of the infinite line lattice by breadth-first search:

## Square lattice

Let’s move on to a two-dimensional lattice quiver. We’ve added a cardinal \(\bform{\card{b}}\) to the fundamental quiver:

Notice that the cardinal matrices are the two unit translation matrices for \(\mathbb{Z}^2\):

\[ \bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}},\bform{\card{b}}}\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,0}},\mto{\bform{\card{b}}}{\translationVector{0,1}}} \]As you would expect, the lattice quiver this generates is a 2D grid:

Again, notice the 2D “translation coordinates” that the representation matrices contain:

Note that negative integers are displayed in red, rather than with a minus sign.

Since all our matrices are translation matrices, we can extract the coordinate vector from each matrix for a cleaner graphic:
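Working directly with these extracted coordinate vectors, the same breadth-first construction can be sketched as follows (a hypothetical `square_lattice` helper; the vectors stand in for the translation matrices, whose composition is just vector addition):

```python
from collections import deque

# Cardinal -> translation vector, as extracted from the cardinal matrices.
CARDS = {"r": (1, 0), "b": (0, 1)}

def square_lattice(depth):
    """Vertices of the square lattice quiver reached by BFS to `depth`."""
    seen = {(0, 0)}
    frontier = deque([(0, 0)])
    for _ in range(depth):
        nxt = deque()
        for v in frontier:
            for dx, dy in CARDS.values():
                for s in (1, -1):          # traverse forward or in reverse
                    w = (v[0] + s * dx, v[1] + s * dy)
                    if w not in seen:
                        seen.add(w)
                        nxt.append(w)
        frontier = nxt
    return seen

# Depth-n BFS reaches the L1 ball of radius n: 2n^2 + 2n + 1 vertices.
print(len(square_lattice(3)))  # → 25
```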

Again, we state without proof:

\[ \bindCardSize{\subSize{\squareQuiver }{ \infty }}{\rform{\card{r}},\bform{\card{b}}}\isomorphicSymbol \compactQuotient{\bindCardSize{\bouquetQuiver{2}}{\rform{\card{r}},\bform{\card{b}}}}{\vert{o}}{\bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}},\bform{\card{b}}}} \]### Animation

We can demonstrate the gradual building of the infinite square lattice by breadth first search:

## Paths and relations

Although the point should be quite clear, let's *visualize* how vertices in the lattice quiver correspond to paths in the fundamental quiver that have the same value.

Here we have two paths with words \(\word{\rform{\card{r}}}{\bform{\card{b}}}\) and \(\word{\bform{\card{b}}}{\rform{\card{r}}}\) in the fundamental quiver, shown on the right. The fact that they terminate at the same vertex in the lattice quiver reflects the fact that the values of those two paths are the same: \(\matrix{M_{\rform{\card{r}}}}\matrixDotSymbol \matrix{M_{\bform{\card{b}}}} = \matrix{M_{\bform{\card{b}}}}\matrixDotSymbol \matrix{M_{\rform{\card{r}}}}\). This must be true because the cardinal matrices \(\matrix{M_i}\) represent an Abelian group (namely a translation group), and in fact this path diagram is the defining relation of this particular group.

We'll call this a **path relation** of a lattice quiver, and write it algebraically as \(\word{\rform{\card{r}}}{\bform{\card{b}}}\pathIso \word{\bform{\card{b}}}{\rform{\card{r}}}\). The symbol \(\pathIso\) is understood to mean that any two paths with the given path words that share a tail vertex also share a head vertex in the lattice quiver.
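As a quick numerical sanity check of this relation, we can multiply the two translation matrices in homogeneous coordinates and confirm that they commute (a small illustrative sketch; the matrix encodings are ours):

```python
# Unit translation matrices for r and b acting on Z^2, written in
# homogeneous coordinates (last column holds the translation vector).
M_r = ((1, 0, 1),
       (0, 1, 0),
       (0, 0, 1))
M_b = ((1, 0, 0),
       (0, 1, 1),
       (0, 0, 1))

def matmul(a, b):
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

# The path relation "rb ~ br" holds because the cardinal matrices commute:
assert matmul(M_r, M_b) == matmul(M_b, M_r)
print("M_r . M_b == M_b . M_r")
```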

## Triangular lattice

Perhaps a more interesting example of path relations comes from the following lattice quiver with *three* cardinals:

Here, the cardinal matrices are the following translations of \(\mathbb{Z}^3\):

\[ \bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,-1,0}},\mto{\gform{\card{g}}}{\translationVector{0,1,-1}},\mto{\bform{\card{b}}}{\translationVector{-1,0,1}}} \]This generates a triangular lattice, shown here to depth 3:

Although these matrices represent translations in \(\mathbb{Z}^3\), they are not linearly independent, and so their action is two dimensional, yielding a planar lattice quiver.

Let's visualize their translation vectors:

We can also look at a path diagram for the triangular lattice:

Again, the fact that \(\word{\gform{\card{g}}}{\rform{\card{r}}}\pathIso \word{\bform{\ncard{b}}}\) reflects the property of the matrices \(\matrix{M_{\rform{\card{r}}}}\), \(\matrix{M_{\gform{\card{g}}}}\), \(\matrix{M_{\bform{\card{b}}}}\) that \(\matrix{M_{\rform{\card{r}}}}\matrixDotSymbol \matrix{M_{\gform{\card{g}}}} = \inverse{\matrix{M_{\bform{\card{b}}}}}\), or more simply that \(\matrix{M_{\rform{\card{r}}}}\matrixDotSymbol \matrix{M_{\gform{\card{g}}}}\matrixDotSymbol \matrix{M_{\bform{\card{b}}}} = \matrix{I}\).
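Since the matrices are translations, this identity can be checked at the level of their translation vectors, where matrix multiplication becomes vector addition (a small sketch; the names `v_r`, `v_g`, `v_b` are ours):

```python
# Translation vectors of the triangular-lattice cardinals in Z^3.
v_r = (1, -1, 0)
v_g = (0, 1, -1)
v_b = (-1, 0, 1)

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

# M_r . M_g . M_b = I becomes, at the level of translations, r + g + b = 0.
assert add(add(v_r, v_g), v_b) == (0, 0, 0)

# Each vector's coordinates also sum to zero, so the three vectors span
# only the 2D plane x + y + z = 0: the lattice quiver is planar.
print("r + g + b = 0")
```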

So, while we pulled these matrices out of a hat, we can interpret them as *encoding* the following path relations:

The full set of relations can be illustrated as path diagrams as follows:

Lastly, we again state without proof:

\[ \bindCardSize{\subSize{\triangularQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\isomorphicSymbol \compactQuotient{\bindCardSize{\bouquetQuiver{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}{\vert{o}}{\bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}} \]## Connection to Cayley quivers

We've given a construction of some familiar quivers as **linear path representations**, which are particular path quotients \(\compactQuotient{\quiver{F}}{\vert{x}}{\groupHomomorphism{ \phi }}\) induced by group homomorphisms of the form \(\functionSignature{\groupHomomorphism{ \phi }}{\wordGroup{\quiver{F}}}{\generalLinearGroup{\sym{n}}{\field{K}}}\). These homomorphisms are uniquely determined by a choice of invertible matrix \(\matrix{M_{\sym{i}}}\) for each cardinal \(\card{c}_{\sym{i}}\) of \(\quiver{F}\), letting us write \(\groupHomomorphism{ \phi } = \assocArray{\mto{\card{c}_1}{\matrix{M_1}},\ellipsis ,\mto{\card{c}_{\sym{k}}}{\matrix{M_{\sym{k}}}}}\).

It's important to emphasize how this construction yields a Cayley quiver in the special case that \(\quiver{F}\) is a bouquet quiver. This is because the path groupoid of a bouquet quiver is just the word group (as all paths can be composed), so we can write:

\[ \pathGroupoid{\quiver{F}} = \wordGroup{\quiver{F}} \]Hence, for bouquet quivers a (linear) path representation \(\functionSignature{\groupoidHomomorphism{ \mu }}{\pathGroupoid{\quiver{F}}}{\generalLinearGroup{\sym{n}}{\field{K}}}\) is just a group homomorphism \(\functionSignature{\groupoidHomomorphism{ \mu }}{\wordGroup{\quiver{F}}}{\generalLinearGroup{\sym{n}}{\field{K}}}\). The image of \(\groupoidHomomorphism{ \mu }\) is then a subgroup \(\groupoidHomomorphism{ \mu }(\wordGroupSymbol ) \subseteq \generalLinearGroup{\sym{n}}{\field{K}}\) generated by \(\sym{J} = \setConstructor{\groupoidHomomorphism{ \mu }(\card{c}_{\sym{i}})}{1 \le \sym{i} \le \sym{k}}\). Then the regular action of this subgroup yields a Cayley quiver:

\[ \bindCayleyQuiver{\groupoidHomomorphism{ \mu }(\wordGroupSymbol )}{\sym{J}} \]Why is this quiver the same as that produced by \(\compactQuotient{\quiver{F}}{\vert{o}}{\groupoidHomomorphism{ \mu }}\congruentSymbol \latticeBFS{\quiver{F},\vert{o},\groupoidHomomorphism{ \mu }, \infty }\)? Since \(\quiver{F}\) only has one vertex \(\vert{o}\), the states \(\tuple{\vert{v},\matrix{M}}\) involved in the BFS construction all share the fundamental vertex \(\vert{v} = \vert{o}\). Hence vertices of the multiway graph are uniquely described by matrices \(\matrix{M}\), and edges \(\tde{\matrix{M}}{\primed{\matrix{M}}}{\card{c}_{\sym{i}}}\) exist iff \(\primed{\matrix{M}} = \matrix{M}\matrixDotSymbol \pathMap{ \mu }(\card{c}_{\sym{i}})\). This is by definition a right self-action of the subgroup \(\groupoidHomomorphism{ \mu }(\wordGroupSymbol )\) on itself, with generators \(\sym{J} = \setConstructor{\groupoidHomomorphism{ \mu }(\card{c}_{\sym{i}})}{1 \le \sym{i} \le \sym{k}}\), as required:

\[ \bindCayleyQuiver{\groupoidHomomorphism{ \mu }(\wordGroupSymbol )}{\sym{J}} = \compactQuotient{\quiver{F}}{\vert{o}}{\groupoidHomomorphism{ \mu }} \]## Summary

### Named group homomorphisms

We collect the group homomorphisms \(\groupoidFunction{ \phi }\) used in the linear path representations above:

\[ \begin{aligned} \bindCardSize{\translationWordHomomorphism{1}}{\rform{\card{r}}}&\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1}}}\\ \bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}},\bform{\card{b}}}&\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,0}},\mto{\bform{\card{b}}}{\translationVector{0,1}}}\\ \bindCardSize{\translationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}&\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,0,0}},\mto{\gform{\card{g}}}{\translationVector{0,1,0}},\mto{\bform{\card{b}}}{\translationVector{0,0,1}}}\\ \bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}&\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,-1,0}},\mto{\gform{\card{g}}}{\translationVector{0,1,-1}},\mto{\bform{\card{b}}}{\translationVector{-1,0,1}}}\end{aligned} \]### Homomorphisms and presentations

We can naturally associate these group homomorphisms with particular named [[[group presentations:Cayley quivers#presentations]]] that generate the same lattice quivers as their Cayley quivers (we discussed this from a slightly different perspective [[[above:Lattice quivers#Connection to Cayley quivers]]]).

For example, the homomorphism \(\functionSignature{\function{\starTranslationWordHomomorphism{3}}}{\wordGroup{\quiver{3}}}{\generalLinearGroup{3}{\ring{\mathbb{Z}}}}\) defined by \(\bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,-1,0}},\mto{\gform{\card{g}}}{\translationVector{0,1,-1}},\mto{\bform{\card{b}}}{\translationVector{-1,0,1}}}\) is associated with the presentation \(\starTranslationPresentation{3}\) defined by \(\bindCardSize{\starTranslationPresentation{3}}{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\gform{g}}},\groupCommutator{\groupGenerator{\gform{g}}}{\groupGenerator{\bform{b}}},\groupGenerator{\rform{r}}\iGmult \groupGenerator{\gform{g}}\iGmult \groupInverse{\groupGenerator{\bform{b}}}}\).

Specifically, the image \(\function{\starTranslationWordHomomorphism{3}}(\wordGroup{\quiver{3}})\) is a subgroup of \(\generalLinearGroup{3}{\ring{\mathbb{Z}}}\) that is isomorphic to the group presented by \(\starTranslationPresentation{3}\), with the isomorphism sending cardinals \(\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}\) (which are the generators) of \(\wordGroup{\quiver{3}}\) to the generators \(\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}\) used by the presentation \(\starTranslationPresentation{3}\). This isomorphism makes \(\starTranslationWordHomomorphism{3}\) into the "identity homomorphism":

\[ \bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\isomorphicSymbol \assocArray{\mto{\rform{\card{r}}}{\groupGenerator{\rform{r}}},\mto{\gform{\card{g}}}{\groupGenerator{\gform{g}}},\mto{\bform{\card{b}}}{\groupGenerator{\bform{b}}}} \]### Transitive lattice quivers

These named homomorphisms and presentations allow us to summarize succinctly, in the following table, how the various transitive lattice quivers are generated:

name | quiver | fun. quiver | homomorphism | presentation |
---|---|---|---|---|
line quiver | \(\subSize{\lineQuiver }{ \infty }\) | \(\bouquetQuiver{1}\) | \(\functionSignature{\function{\translationWordHomomorphism{1}}}{\wordGroup{\quiver{1}}}{\group{\mathbb{Z}}}\) | \(\translationPresentation{1}\) presenting \(\group{\mathbb{Z}}\) |
square quiver | \(\subSize{\squareQuiver }{ \infty }\) | \(\bouquetQuiver{2}\) | \(\functionSignature{\function{\translationWordHomomorphism{2}}}{\wordGroup{\quiver{2}}}{\group{\power{\group{\mathbb{Z}}}{2}}}\) | \(\translationPresentation{2}\) presenting \(\group{\power{\group{\mathbb{Z}}}{2}}\) |
triangular quiver | \(\subSize{\triangularQuiver }{ \infty }\) | \(\bouquetQuiver{3}\) | \(\functionSignature{\function{\starTranslationWordHomomorphism{3}}}{\wordGroup{\quiver{3}}}{\group{\power{\group{\mathbb{Z}}}{3}}}\) | \(\starTranslationPresentation{3}\) presenting \(\group{\power{\group{\mathbb{Z}}}{2}}\) |
cubic quiver | \(\subSize{\cubicQuiver }{ \infty }\) | \(\bouquetQuiver{3}\) | \(\functionSignature{\function{\translationWordHomomorphism{3}}}{\wordGroup{\quiver{3}}}{\group{\power{\group{\mathbb{Z}}}{3}}}\) | \(\translationPresentation{3}\) presenting \(\group{\power{\group{\mathbb{Z}}}{3}}\) |
grid quiver | \(\gridQuiver{\sym{k}}\) | \(\bouquetQuiver{\sym{k}}\) | \(\functionSignature{\function{\translationWordHomomorphism{\sym{k}}}}{\wordGroup{\quiver{\sym{k}}}}{\group{\power{\group{\mathbb{Z}}}{\sym{k}}}}\) | \(\translationPresentation{\sym{k}}\) presenting \(\group{\power{\group{\mathbb{Z}}}{\sym{k}}}\) |

### Why?

This interrelationship between quotients, Cayley quivers, and presentations may seem rather pedantic – hardly worth all the notation and machinery. And perhaps this is true for transitive quivers! But as we mentioned in [[[Action groupoids#summary]]], the benefit is that we can generalize beyond groups and their Cayley quivers into areas where some of these constructions may be better suited, and provide better intuition, than others.

For example, in the next section we will consider more complex fundamental quivers, which will allow us to construct the hexagonal quiver, among others.

# Intransitive lattices

## Introduction

The lattice quivers we generated in [[[Lattice quivers]]] had the property that their fundamental quivers were **bouquet quivers** \(\bouquetQuiver{\sym{k}}\), meaning quivers with only one vertex. Since any two paths on a bouquet quiver can be composed, the path groupoid \(\pathGroupoid{\bouquetQuiver{\sym{k}}}\) is isomorphic to the word group \(\wordGroup{\quiver{\sym{k}}}\) (the free group on \(\sym{k}\) letters), and the lattice quivers they generate can be seen as **Cayley quivers** of group presentations of \(\group{\power{\group{\mathbb{Z}}}{\sym{k}}}\) that we named \(\translationPresentation{\sym{k}},\starTranslationPresentation{\sym{k}}\).

By considering fundamental quivers with more than one vertex we can obtain bona-fide path groupoids – i.e. groupoids that are **not** groups. This occurs because there will be paths that cannot be composed, namely those pairs \(\path{P}\), \(\path{R}\) where \(\headVertex(\path{P}) \neq \tailVertex(\path{R})\). We’ll see that these produce more complex lattices in which the neighborhoods of vertices are not all alike, even when we generate them to "infinite depth". We'll call these **intransitive lattices**.

## Hexagonal lattice quiver

Our first two-vertex fundamental quiver is \(\bindCards{\subSize{\lineQuiver }{2}}{\rform{\card{r}}\parallelCardSymbol \gform{\card{g}}\parallelCardSymbol \bform{\card{b}}}\), which will generate the hexagonal quiver, written \(\bindCards{\subSize{\hexagonalQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\). For our path representation we'll use \(\bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,-1,0}},\mto{\gform{\card{g}}}{\translationVector{0,1,-1}},\mto{\bform{\card{b}}}{\translationVector{-1,0,1}}}\), the same homomorphism used to generate the triangular quiver \(\bindCards{\subSize{\triangularQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\). The fundamental quiver and matrix cardinals are shown below:

This gives us a hexagonal lattice quiver, shown here to depth 4, starting at fundamental vertex \(\vert{1}\).
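The breadth-first construction over this two-vertex fundamental quiver can be sketched in Python (a hypothetical `hexagonal_lattice` helper, assuming the star translation vectors above). All three cardinals run from fundamental vertex 1 to vertex 2, so at vertex 1 every cardinal is traversed forward and at vertex 2 every cardinal is traversed in reverse; each state pairs a fundamental vertex with a translation:

```python
from collections import deque

# Assumed star translation vectors for the cardinals r, g, b.
VEC = {"r": (1, -1, 0), "g": (0, 1, -1), "b": (-1, 0, 1)}

def add(u, v, sign):
    return tuple(a + sign * b for a, b in zip(u, v))

def hexagonal_lattice(depth):
    """BFS of the path quotient for the 2-vertex fundamental quiver,
    starting at fundamental vertex 1 with the identity translation."""
    start = (1, (0, 0, 0))
    seen = {start}
    frontier = deque([start])
    for _ in range(depth):
        nxt = deque()
        for fv, x in frontier:
            for v in VEC.values():
                # forward from vertex 1, reverse from vertex 2
                state = (2, add(x, v, +1)) if fv == 1 else (1, add(x, v, -1))
                if state not in seen:
                    seen.add(state)
                    nxt.append(state)
        frontier = nxt
    return seen

# Honeycomb BFS balls grow as 1, 4, 10, 19, ...
print(len(hexagonal_lattice(3)))  # → 19
```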

This construction will serve as the definition of the hexagonal lattice quiver:

\[ \bindCardSize{\subSize{\hexagonalQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\defEqualSymbol \compactQuotient{\bindCardSize{\subSize{\lineQuiver }{2}}{\rform{\card{r}}\parallelCardSymbol \gform{\card{g}}\parallelCardSymbol \bform{\card{b}}}}{1}{\bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}} \]## Rhombille lattice quiver

We can take one obvious step to generalize the previous example: we can extend the fundamental quiver to have 3 vertices: \(\bindCards{\subSize{\lineQuiver }{3}}{\rform{\card{r}}\parallelCardSymbol \gform{\card{g}}\parallelCardSymbol \bform{\card{b}}}\).

This yields the rhombille lattice \(\bindCards{\subSize{\rhombilleQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\), shown here to depth 3, starting at vertex \(\vert{2}\):
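The same sketch extends to this three-vertex fundamental quiver (again a hypothetical helper under the same assumed vectors): the middle vertex 2 can traverse every cardinal both forward and in reverse, while the end vertices 1 and 3 can only move toward the middle.

```python
from collections import deque

# Assumed star translation vectors for the cardinals r, g, b.
VEC = {"r": (1, -1, 0), "g": (0, 1, -1), "b": (-1, 0, 1)}

def rhombille_lattice(depth):
    """BFS for the 3-vertex line fundamental quiver, starting at vertex 2.
    Each cardinal labels parallel edges 1 -> 2 and 2 -> 3."""
    start = (2, (0, 0, 0))
    seen = {start}
    frontier = deque([start])
    for _ in range(depth):
        nxt = deque()
        for fv, x in frontier:
            for v in VEC.values():
                moves = []
                if fv < 3:   # an outgoing edge fv -> fv+1 exists
                    moves.append((fv + 1, tuple(a + b for a, b in zip(x, v))))
                if fv > 1:   # an incoming edge fv-1 -> fv exists
                    moves.append((fv - 1, tuple(a - b for a, b in zip(x, v))))
                for state in moves:
                    if state not in seen:
                        seen.add(state)
                        nxt.append(state)
        frontier = nxt
    return seen

print(len(rhombille_lattice(1)))  # → 7
```

Starting from the middle vertex, depth 1 reaches the degree-6 vertex's six neighbors, reflecting the mix of degree-3 and degree-6 vertices in the rhombille tiling.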

This will serve as the definition of the rhombille lattice quiver:

\[ \bindCardSize{\subSize{\rhombilleQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\defEqualSymbol \compactQuotient{\bindCardSize{\subSize{\lineQuiver }{3}}{\rform{\card{r}}\parallelCardSymbol \gform{\card{g}}\parallelCardSymbol \bform{\card{b}}}}{2}{\bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}} \]## Euclidean tilings

With suitably chosen fundamental quivers and choices of representations, we can generate lattice quivers corresponding to all of the 2D uniform Euclidean tilings. Here are a few of the simpler ones:

Here are their corresponding quiver representations:

Note that we do not show the associated translation groups here.

## Enumeration

Using the quiver enumeration discussed in [[[Graphs and quivers]]], it is relatively straightforward to simply enumerate all lattice quivers once we fix a particular group representation, that is, once we fix a particular set of matrices.

### 2-cardinal lattices

We'll start with the 9 possible 2-cardinal quivers that we can form on two-vertex skeletons:

We fix the path representation to use \(\translationWordHomomorphism{2}\), the same as that of the square lattice:

\[ \bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}},\bform{\card{b}}}\defEqualSymbol \assocArray{\mto{\rform{\card{r}}}{\translationVector{1,0}},\mto{\bform{\card{b}}}{\translationVector{0,1}}} \]Up to graph isomorphism, we obtain the following 6 lattice quivers:

Note that I am deliberately ignoring "self-intersecting" lattices – those which contain pairs of states \(\tuple{\vert{x},\matrix{M}}\), \(\tuple{\vert{y},\matrix{M}}\) with \(\vert{x} \neq \vert{y}\) – since these would produce ambiguous 2D layouts when rendered in the natural way.

We now consider the 69 2-cardinal quivers on three-vertex skeletons, the first 18 of which are shown below:

Up to isomorphism, we obtain the following 21 lattice quivers:

### 3-cardinal lattices

Next we'll consider the 26 different 3-cardinal quivers on the skeletons with two vertices:

We'll fix the path representation to use \(\starTranslationWordHomomorphism{3}\), that of the triangular lattice, so as to produce two-dimensional quivers:

\[ \bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\translationVector{1,-1,0}},\mto{\gform{\card{g}}}{\translationVector{0,1,-1}},\mto{\bform{\card{b}}}{\translationVector{-1,0,1}}} \]Up to isomorphism these generate the following 9 lattice quivers:

Notice we recover the \(\triangularQuiver\) and \(\hexagonalQuiver\), as expected.

For the 1124 different 3-cardinal quivers on skeletons with three vertices, we obtain the following 78 lattice quivers:

## Vertex colorings

Intransitive lattices by definition have "different types of vertex". These "types" naturally lead us to consider colorings of vertices. Let's look at two simple examples:

### Hexagonal lattice

We return to the hexagonal lattice, which we saw had a fundamental quiver with two vertices. Here it is again:

The vertices of this lattice quiver are also of two types, which we could call “outward” and “inward”:

These two types of vertex in the lattice quiver correspond to the vertices labeled 1 and 2 in the fundamental quiver: the cardinals available to vertex 1 are all outgoing, and the cardinals available to vertex 2 are all incoming. This motif is reflected by the outward and inward sets of vertices in the lattice quiver. This correspondence reflects a general and important fact: lattice quivers are **covering quivers** of their fundamental quivers, as we will explain in the next section.

We can illustrate these two types of vertex visually by coloring the vertices of the lattice quiver by the identity of the vertex that generated them:

### Rhombille lattice

Now let’s look at the rhombille tiling, which has *three* types of vertices in its fundamental quiver:

Again, these three vertices correspond to the types of vertex we find in the lattice quiver:

Again, we can color each vertex to indicate which of the three fundamental vertices generated it:

This coloring phenomenon is richer than it first appears, and we will address it in more detail in the section [[[Vertex colorings]]].

# Toroidal lattices

## Introduction

In [[[Lattice quivers]]], we examined **lattice quivers**, which were the **Cayley quivers** determined by particular group presentations \(\translationPresentation{\sym{n}},\starTranslationPresentation{\sym{n}}\) of infinite Abelian groups \(\group{\power{\group{\mathbb{Z}}}{\sym{n}}}\). We could also see these quivers as the path quotients of a **fundamental quiver** \(\quiver{F} = \bouquetQuiver{\sym{n}}\) by affine path valuations \(\translationPathValuation{\sym{n}},\starTranslationPathValuation{\sym{n}}\) valued in Abelian groups \(\group{\power{\group{\mathbb{Z}}}{\sym{n}}}\). The benefit of taking this more abstract path quotient perspective was that more complex choices of fundamental quiver \(\quiver{F}\) can generate the hexagonal, rhombille, etc. lattices – we developed this idea in [[[Intransitive lattices]]].

The generalization we pursue in *this* section is to swap out the infinite groups \(\group{\power{\group{\mathbb{Z}}}{\sym{n}}}\) for *finite* Abelian groups. These yield *finite* lattice quivers that we can visualize as lattices defined on a **torus**. Furthermore, by choosing direct products of groups, some of which are finite (\(\cyclicGroup{\sym{n}}\)) and others infinite (\(\group{\mathbb{Z}}\)), we can obtain "partial tori", or **cylinders**.

## Square torus

#### Square quiver

Recall that the square quiver \(\bindCardSize{\subSize{\squareQuiver }{ \infty }}{\rform{\card{r}},\bform{\card{b}}}\) can be seen in two ways:

* as the Cayley quiver \(\bindCardSize{\cayleyQuiverSymbol{\translationPresentation{2}}}{\rform{\card{r}},\bform{\card{b}}}\), where \(\bindCardSize{\translationPresentation{2}}{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\bform{b}}}}\) is a presentation of \(\group{\power{\group{\mathbb{Z}}}{2}}\);

* as the path quotient \(\compactQuotient{\quiver{F}}{}{\translationWordHomomorphism{2}}\) of the fundamental quiver \(\quiver{F} = \bouquetQuiver{2}\) by the affine path valuation \(\translationPathValuation{2}\).

Intuitively, the quotient takes the forward path quiver \(\forwardPathQuiver{\quiver{F}}{}\) (which is isomorphic to \(\subSize{\treeQuiver{2}}{ \infty }\)), and glues together vertices that share a path value under \(\translationPathValuation{2}\).

That \(\translationPathValuation{2}\) is affine is just the statement that a path value \(\function{\translationPathValuation{2}}(\path{P})\) depends only on the path word via the group homomorphism \(\functionSignature{\function{\translationWordHomomorphism{2}}}{\wordGroupSymbol }{\group{\power{\group{\mathbb{Z}}}{2}}}\):

\[ \function{\translationPathValuation{2}}(\path{P})\defEqualSymbol \function{\translationWordHomomorphism{2}}(\wordOf(\path{P})) \]The homomorphism \(\translationWordHomomorphism{2}\) just sends cardinals (the generators of \(\wordGroupSymbol\)) to the obvious generators of \(\group{\power{\group{\mathbb{Z}}}{2}}\) – the behavior of \(\translationWordHomomorphism{2}\) on longer words \(\elemOf{\wordSymbol{W}}{\wordGroupSymbol }\) is uniquely determined by these images:

\[ \bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}},\bform{\card{b}}}\defEqualSymbol \homomorphismMapping{\mto{\rform{\card{r}}}{\tuple{1,0}},\mto{\bform{\card{b}}}{\tuple{0,1}}} \]We can see that these constructions are really the *same*: the forward path quiver \(\forwardPathQuiver{\quiver{F}}{}\) first constructs paths with all possible words, and the subsequent quotient \(\quotient{\forwardPathQuiver{\quiver{F}}{}}{\translationPathValuation{2}}\) by the affine path valuation \(\translationPathValuation{2}\) then identifies those paths that *should* be the same under relator (\(\rform{\card{r}}\iGmult \bform{\card{b}} = \bform{\card{b}}\iGmult \rform{\card{r}}\)) in the presentation \(\translationPresentation{2}\).

#### Torus

We can construct a **square torus** quiver by replacing \(\group{\power{\group{\mathbb{Z}}}{2}}\) with \(\groupDirectProduct{\cyclicGroup{\sym{w}}\groupDirectProductSymbol \cyclicGroup{\sym{h}}}\) in *either* of these constructions.

Let's start with the group presentation \(\bindCardSize{\translationPresentation{2}}{\rform{\card{r}},\bform{\card{b}}}\) of \(\group{\power{\group{\mathbb{Z}}}{2}}\), which is:

\[ \bindCardSize{\translationPresentation{2}}{\rform{\card{r}},\bform{\card{b}}}\defEqualSymbol \groupPresentation{\rform{\groupGenerator{r}},\bform{\groupGenerator{b}}}{\groupCommutator{\rform{\groupElement{r}}}{\bform{\groupElement{b}}}} \]We now define the following presentation of \(\groupDirectProduct{\cyclicGroup{\sym{w}}\groupDirectProductSymbol \cyclicGroup{\sym{h}}}\), which *imposes* finitude, essentially saying that if we take the \(\rform{\card{r}}\) cardinal \(\sym{w}\) times we should return to our starting vertex, etc:

We can then define the square torus as the Cayley quiver of this presentation.

Alternatively, and equivalently, we can construct the square torus as a quotient by defining the "cyclic" affine path valuation induced by the group homomorphism we denote by \(\bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}\):

\[ \begin{array}{c} \functionSignature{\function{\bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}}}{\wordGroup{\quiver{2}}}{\groupDirectProduct{\cyclicGroup{\sym{w}}\groupDirectProductSymbol \cyclicGroup{\sym{h}}}}\\ \\ \bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}\defEqualSymbol \assocArray{\mto{\rform{\card{r}}}{\tuple{1,0}},\mto{\bform{\card{b}}}{\tuple{0,1}}} \end{array} \]The parameters \(\sym{w},\sym{h}\) are the **moduli** of the torus, where \(\sym{w}\) is the **width modulus** and \(\sym{h}\) is the **height modulus**. We generalize our notation for *finite* square quivers, which was \(\bindCardSize{\squareQuiver }{\sym{w},\sym{h}}\), to allow for these finite but *cyclic* dimensions:

We suppressed the cardinals \(\rform{\card{r}},\bform{\card{b}}\), but we can also explicitly bind them in the notation:

\[ \bindCardSize{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}\defEqualSymbol \compactQuotient{\bindCardSize{\bouquetQuiver{2}}{\rform{\card{r}},\bform{\card{b}}}}{\vert{0}}{\bindCardSize{\translationWordHomomorphism{2}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}}\congruentSymbol \bindCardSize{\cayleyQuiverSymbol{\translationPresentation{2}}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}} \]Below we show \(\bindCards{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{10},\bform{\card{b}}\compactBindingRuleSymbol \modulo{4}}\), the square torus with width modulus 10 and height modulus 4:

The \(\rform{\card{x}}\) cardinal is associated with traversal around the outer radius of the torus, and the \(\bform{\card{y}}\) cardinal with the inner radius. The outer radius spanned by \(\rform{\card{x}}\) has a width modulus of 10, and the inner radius spanned by \(\bform{\card{y}}\) has a height modulus of 4.

#### Path relations

From a path-algebraic point of view, a square torus has a simple characterization. Recall that for the infinite square lattice, we have the single **path relation** \(\word{\rform{\card{x}}}{\bform{\card{y}}}\pathIso \word{\bform{\card{y}}}{\rform{\card{x}}}\). A square torus with moduli \(\tuple{\sym{w},\sym{h}}\) extends this to the set \(\list{\word{\rform{\card{x}}}{\bform{\card{y}}}\pathIso \word{\bform{\card{y}}}{\rform{\card{x}}},\repeatedPower{\word{\rform{\card{x}}}}{\sym{w}}\pathIso \word{1},\repeatedPower{\word{\bform{\card{y}}}}{\sym{h}}\pathIso \word{1}}\).
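We can check this characterization computationally: performing the breadth-first search with modular arithmetic imposes exactly the relations \(\repeatedPower{\word{\rform{\card{x}}}}{\sym{w}}\pathIso \word{1}\) and \(\repeatedPower{\word{\bform{\card{y}}}}{\sym{h}}\pathIso \word{1}\) (a hypothetical `square_torus` helper, sketched in Python):

```python
from collections import deque

def square_torus(w, h):
    """BFS of the square torus quiver with width modulus w, height modulus h."""
    seen = {(0, 0)}
    frontier = deque([(0, 0)])
    while frontier:
        x, y = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = ((x + dx) % w, (y + dy) % h)   # imposes x^w ~ 1 and y^h ~ 1
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

# The two extra relations leave exactly w * h distinct vertices.
print(len(square_torus(10, 4)))  # → 40
```

With moduli \(\tuple{10,4}\) this recovers the 40 vertices of the square torus shown below.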

#### Square tori as generalizations of square quivers

We now compare the square torus quiver \(\bindCards{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}\) with the square quivers we defined earlier: the finite square quiver \(\bindCards{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \sym{w},\bform{\card{b}}\compactBindingRuleSymbol \sym{h}}\) and the infinite square quiver \(\bindCards{\subSize{\squareQuiver }{ \infty }}{\rform{\card{r}},\bform{\card{b}}}\congruentSymbol \bindCards{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \infty ,\bform{\card{b}}\compactBindingRuleSymbol \infty }\).

Firstly, if we adopt the usual convention that the additive group of integers \(\group{\mathbb{Z}}\,\)*is* a cyclic group of infinite order (\(\group{\mathbb{Z}} = \cyclicGroup{ \infty }\)), then we can see the infinite square quiver as a square torus with infinite moduli. The last two path relations, \(\repeatedPower{\word{\rform{\card{x}}}}{\sym{w}}\pathIso \word{1}\) and \(\repeatedPower{\word{\bform{\card{y}}}}{\sym{h}}\pathIso \word{1}\), become vacuous, leaving only the ordinary path relation of the infinite square quiver, \(\word{\rform{\card{x}}}{\bform{\card{y}}}\pathIso \word{\bform{\card{y}}}{\rform{\card{x}}}\).

Secondly, let's compare the finite square quiver \(\bindCards{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \sym{w},\bform{\card{b}}\compactBindingRuleSymbol \sym{h}}\) with the square torus quiver \(\bindCards{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}\) for e.g. \(\tuple{\sym{w},\sym{h}} = \tuple{10,4}\). We will display the torus quiver on a 2-dimensional “modular plane” rather than in 3 dimensions. We can think of this plane as “wrapping around” on its border – but we’ll number the edges that cross this border so that it is easy to trace how they connect up:

Compare this to the ordinary finite square quiver \(\bindCardSize{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol 10,\bform{\card{b}}\compactBindingRuleSymbol 4}\):

Clearly, the square torus has the same number of vertices as the regular square quiver, and differs only in that we have "glued the borders" together in the obvious way.

## Square cylinder

We've defined the square torus \(\bindCardSize{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}\), in which dimensions are cyclic. But what about the case in which only one is cyclic and the other is infinite, e.g. \(\bindCardSize{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \infty ,\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{h}}}\)?

This is already well-defined if we consider the integers to be a cyclic group with infinite order (\(\group{\mathbb{Z}} = \cyclicGroup{ \infty }\)). We'll call these special tori **cylinders**.
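A sketch of this mixed case (a hypothetical helper; only the \(\bform{\card{b}}\) coordinate is taken modulo \(\sym{h}\), so the lattice closes up in one dimension, remains infinite in the other, and must be explored to a finite depth):

```python
from collections import deque

def square_cylinder_ball(h, depth):
    """BFS ball of the square cylinder: x is unbounded, y is cyclic mod h."""
    seen = {(0, 0)}
    frontier = deque([(0, 0)])
    for _ in range(depth):
        nxt = deque()
        for x, y in frontier:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                v = (x + dx, (y + dy) % h)    # only the b direction wraps
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return seen

ball = square_cylinder_ball(5, 3)
print(len(ball))  # → 23
```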

Here we show a fragment of the square cylinder \(\bindCardSize{\squareQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \infty ,\bform{\card{b}}\compactBindingRuleSymbol \modulo{5}}\) on the modular plane:

Here is the same cylinder in 3 dimensions:

In general, we'll use the generic term *tori* to encompass both the case where all cardinals have finite moduli, yielding a torus, and the case where one (or more) cardinal has an infinite modulus, yielding a cylinder (or a higher-dimensional analogue).

## Triangular torus

The construction we saw above gives toroidal and cylindrical versions of the square, cubic, and indeed all the higher-dimensional variants \(\gridQuiver{\sym{n}}\). But how can we obtain a toroidal triangular lattice?

#### As Cayley quiver

The Cayley quiver construction is most natural. Recall the "triangular" group presentation \(\starTranslationPresentation{3}\) we introduced in [[[Cayley quivers#Transitive quivers as Cayley quivers]]]:

\[ \bindCardSize{\starTranslationPresentation{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\gform{g}}},\groupCommutator{\groupGenerator{\gform{g}}}{\groupGenerator{\bform{b}}},\groupGenerator{\rform{r}}\iGmult \groupGenerator{\gform{g}}\iGmult \groupInverse{\groupGenerator{\bform{b}}}} \]To make this group finite, we now define a related presentation that imposes additional relators that cause every element of this group to have finite order:

\[ \bindCardSize{\starTranslationPresentation{3}}{\groupGenerator{\rform{r}}\compactBindingRuleSymbol \modulo{\sym{h}},\groupGenerator{\gform{g}}\compactBindingRuleSymbol \modulo{\sym{w}},\groupGenerator{\bform{b}}\compactBindingRuleSymbol \modulo{\sym{w}}}\defEqualSymbol \groupPresentation{\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}}{\groupCommutator{\groupGenerator{\rform{r}}}{\groupGenerator{\gform{g}}},\groupCommutator{\groupGenerator{\gform{g}}}{\groupGenerator{\bform{b}}},\groupGenerator{\rform{r}}\iGmult \groupGenerator{\gform{g}}\iGmult \groupInverse{\groupGenerator{\bform{b}}},\groupPower{\groupGenerator{\rform{r}}}{\sym{h}},\groupPower{\groupGenerator{\gform{g}}}{\lcm(\sym{w},2 \, \sym{h})}} \]Two notes: first, the role of the least-common-multiple function will be explained later; second, the "missing relation" \(\groupPower{\groupGenerator{\bform{b}}}{\lcm(\sym{w},2 \, \sym{h})}\) is implied by the other relations.

#### As path quotient

We can use a similar method as before, except this time we adapt the homomorphism \(\starTranslationWordHomomorphism{3}\). We will cheat a little and define the toroidal form of \(\starTranslationWordHomomorphism{3}\) as a rather trivial map into the group presented by \(\bindCardSize{\starTranslationPresentation{3}}{\groupGenerator{\rform{r}}\compactBindingRuleSymbol \modulo{\sym{h}},\groupGenerator{\gform{g}}\compactBindingRuleSymbol \modulo{\sym{w}},\groupGenerator{\bform{b}}\compactBindingRuleSymbol \modulo{\sym{w}}}\) that we just defined above – we explained this connection in [[[Lattice quivers#Homomorphisms and presentations]]].

\[ \begin{array}{c} \functionSignature{\function{\bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{h}},\gform{\card{g}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{w}}}}}{\wordGroup{\quiver{3}}}{\bindCardSize{\starTranslationPresentation{3}}{\groupGenerator{\rform{r}}\compactBindingRuleSymbol \modulo{\sym{h}},\groupGenerator{\gform{g}}\compactBindingRuleSymbol \modulo{\sym{w}},\groupGenerator{\bform{b}}\compactBindingRuleSymbol \modulo{\sym{w}}}}\\ \\ \starTranslationWordHomomorphism{3}\defEqualSymbol \assocArray{\mto{\rform{\card{r}}}{\groupGenerator{\rform{r}}},\mto{\gform{\card{g}}}{\groupGenerator{\gform{g}}},\mto{\bform{\card{b}}}{\groupGenerator{\bform{b}}}} \end{array} \]This map sends the cardinals \(\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}\) to the generators \(\groupGenerator{\rform{r}},\groupGenerator{\gform{g}},\groupGenerator{\bform{b}}\) of the presentation. Then we define the torus to be the path quotient:

\[ \bindCardSize{\triangularQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{h}},\gform{\card{g}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{w}}}\defEqualSymbol \compactQuotient{\bindCardSize{\bouquetQuiver{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}{\vert{0}}{\bindCardSize{\starTranslationWordHomomorphism{3}}{\rform{\card{r}}\compactBindingRuleSymbol \modulo{\sym{h}},\gform{\card{g}}\compactBindingRuleSymbol \modulo{\sym{w}},\bform{\card{b}}\compactBindingRuleSymbol \modulo{\sym{w}}}} \]Here is the triangular torus \(\bindCardSize{\triangularQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{4},\gform{\card{g}}\compactBindingRuleSymbol \modulo{8},\bform{\card{b}}\compactBindingRuleSymbol \modulo{8}}\):

In this case, the dimension 4 measures the number of vertices visited before the \(\rform{\card{r}}\)-axis loops back to itself, and similarly 8 measures this count for the \(\gform{\card{g}}\)-axis (and also the \(\bform{\card{b}}\)-axis):

Here we visualize the triangular torus in 3 dimensions:

We can highlight the cardinal axes:

#### Triangular cylinder

Here is (a fragment of) the triangular cylinder \(\bindCardSize{\triangularQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{4},\gform{\card{g}}\compactBindingRuleSymbol \infty ,\bform{\card{b}}\compactBindingRuleSymbol \infty }\), cyclic in the \(\rform{\card{r}}\) dimension but infinite in the \(\gform{\card{g}}\) and \(\bform{\card{b}}\) dimensions:

Also note that we are using an artificial transparency effect to make the three-dimensional structure easier to perceive.

## Hexagonal torus

Having established the construction of the triangular torus, we can produce the hexagonal torus by adjusting the fundamental quiver from \(\quiver{F} = \bindCardSize{\bouquetQuiver{3}}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\) to \(\quiver{F} = \bindCardSize{\subSize{\lineQuiver }{2}}{\rform{\card{r}}\parallelCardSymbol \gform{\card{g}}\parallelCardSymbol \bform{\card{b}}}\) as we explored in [[[Intransitive lattices]]].

Here we plot \(\bindCardSize{\toroidalModifier{\hexagonalQuiver }}{\rform{\card{r}}\compactBindingRuleSymbol 6,\gform{\card{g}}\compactBindingRuleSymbol 12,\bform{\card{b}}\compactBindingRuleSymbol 12}\congruentSymbol \bindCardSize{\hexagonalQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{6},\gform{\card{g}}\compactBindingRuleSymbol \modulo{12},\bform{\card{b}}\compactBindingRuleSymbol \modulo{12}}\) on the modular plane:

Comparing this to the triangular torus \(\bindCardSize{\triangularQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{6},\gform{\card{g}}\compactBindingRuleSymbol \modulo{6},\bform{\card{b}}\compactBindingRuleSymbol \modulo{6}}\), we see that we can obtain the hexagonal torus by deleting from the triangular torus a periodic pattern of vertices, corresponding to the "forbidden states" that the structure of the fundamental quiver prevents us from visiting:

#### Three dimensions

We can plot the torus in three dimensions. Here we show \(\bindCardSize{\toroidalModifier{\hexagonalQuiver }}{\rform{\card{r}}\compactBindingRuleSymbol 18,\gform{\card{g}}\compactBindingRuleSymbol 48,\bform{\card{b}}\compactBindingRuleSymbol 48}\):

#### Hexagonal cylinder

A hexagonal cylinder gives a structure corresponding to the bond connectivity of a carbon nanotube. Here we show \(\bindCardSize{\hexagonalQuiver }{\rform{\card{r}}\compactBindingRuleSymbol \modulo{12},\gform{\card{g}}\compactBindingRuleSymbol \infty ,\bform{\card{b}}\compactBindingRuleSymbol \infty }\):

## Axes

Let’s consider for a moment what axis-aligned geodesics look like for toroidal lattices. First, though, let’s consider the situation for the non-toroidal, two-dimensional lattices:

The geodesics are colored by which cardinal they are aligned with. In a hexagonal lattice we cannot form path words like \(\word{\card{a}}{\card{a}}{\card{a}}\), so our “axes” are composed of alternating pairs of cardinals, e.g. \(\word{\card{a}}{\ncard{b}}{\card{a}}{\ncard{b}}{\card{a}}{\ncard{b}}\); we color these paths by blending the colors of the alternated cardinals.

Ok, now we can compare with the square torus (top) and the triangular torus (bottom):

As you probably expected, the square torus has predictable “orthogonal” axes.

But the situation with the triangular torus appears to be more interesting: the \(\gform{\card{g}}\) and \(\bform{\card{b}}\) axes twist around and meet at a point opposite the origin. This behavior turns out to be sensitive to the moduli of the torus.

If the moduli are coprime, then either of the axes \(\gform{\card{g}}\) and \(\bform{\card{b}}\) will reach every vertex:

## Modular plane

This is of course a straightforward consequence of basic number theory, but is best explained by plotting these lattices on the **modular plane** we introduced earlier. To repeat: we can think of this plane as “wrapping around” on its border – but we’ll number the edges that cross this border so that it is easy to trace how they connect up:

Let’s plot the \(\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}\) axes on the plane for \(\bindCardSize{\toroidalModifier{\triangularQuiver }}{\rform{\card{r}}\compactBindingRuleSymbol 4,\gform{\card{g}}\compactBindingRuleSymbol 8,\bform{\card{b}}\compactBindingRuleSymbol 8}\):

Plotting the \(\rform{\card{r}}\) and \(\bform{\card{b}}\) axes for a range of moduli, we can see how the greatest common divisor of the two moduli determines how many orbits of the \(\sym{w}\) dimension the \(\bform{\card{b}}\) axis will make before returning to the origin:

Notice that if \(\sym{w} / 2\) and \(\sym{h} / 2\) are coprime, the axis \(\bform{\card{b}}\) intersects *every* vertex. In general the number of times \(\rform{\card{r}}\) will intersect \(\bform{\card{b}}\) for a \(\sym{w}\times \sym{h}\) torus is \(2 \, \lcm(\sym{h} / 2,\sym{w}) / \sym{h}\).
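The underlying number theory is easy to check in isolation. As a sketch (using an idealized coordinate model on the plain product of cyclic groups of orders \(\sym{w}\) and \(\sym{h}\), not the exact triangular-torus coordinates), the orbit of a repeated unit-diagonal step has length \(\lcm(\sym{w},\sym{h})\), and therefore reaches every vertex exactly when the moduli are coprime:

```python
from math import gcd, lcm  # math.lcm requires Python >= 3.9

def orbit(step, w, h):
    """Vertices of the product of cyclic groups Z_w x Z_h visited by
    repeatedly applying `step` starting from the origin."""
    seen, v = set(), (0, 0)
    while v not in seen:
        seen.add(v)
        v = ((v[0] + step[0]) % w, (v[1] + step[1]) % h)
    return seen

# A unit diagonal step closes up after lcm(w, h) applications, so it
# covers all w * h vertices exactly when gcd(w, h) == 1.
```

This is an illustration of the general principle only; the exact intersection counts quoted above depend on the particular embedding of each axis in the torus.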

A similar situation applies for the hexagonal torus, when we use alternating axes:

Due to an arbitrary choice of orientation of these hexagons, our previously vertical axis is now horizontal, but a similar relation holds: the number of vertices in the intersection of the axis-aligned geodesics is given by \(3 \, \lcm(\sym{w},\sym{h} / 3) / \sym{h}\), where it is important to notice that the \(\tuple{\sym{w} = 6,\sym{h} = 9}\) case actually has *two* such intersecting vertices, not one as it first appears. Similarly, the \(\tuple{\sym{w} = 12,\sym{h} = 9}\) case has 4 intersecting vertices.

We are certainly not limited to these square, triangular, and hexagonal toroidal lattices. Any fundamental quiver employing the same translation groups has a toroidal version. For example, the rhombille lattice:

## Sheared tori

There is a straightforward construction we can use to introduce an intuitive kind of **shear** or **torsion** into our lattice quivers.

We'll start with the square torus (of size \(5 \times 3\)), where the torsion is easiest to see. On the left is the sheared lattice, and on the right is the normal square lattice for comparison.

Notice that in the sheared lattice, the top left vertex connects to the *second* vertex on the bottom left; in the normal lattice it connects to the first. We’ll say that this lattice has a **shear parameter** of \(\sym{z} = 1\).

Here are square lattices with shear parameters \(-2 \le \sym{z} \le 2\):

Notice the same "orbital mechanics" for the sheared square lattice as we saw for the triangular lattice: for \(\sym{z} = \pm 1\), a \(\bform{\card{y}}\) geodesic will orbit through all vertices, but for \(\sym{z} = \pm 2\), a \(\bform{\card{y}}\) geodesic effectively *skips* the neighboring geodesic on completing a circuit, so if \(\sym{w}\) is even we will have exactly two distinct \(\bform{\card{y}}\)-orbits.
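This orbit count is quick to verify in a sketch. Assuming a coordinate model matching the pictures above — a \(\bform{\card{y}}\)-step off the top row wraps to the bottom row with an \(\rform{\card{x}}\)-offset of \(\sym{z}\) (our own hypothetical encoding) — the number of distinct \(\bform{\card{y}}\)-orbits comes out to \(\gcd(\sym{z},\sym{w})\):

```python
from math import gcd

def y_orbits(w, h, z):
    """Number of distinct y-geodesic orbits on a z-sheared w-by-h square
    torus, in a hypothetical coordinate model: a y-step from row h-1
    wraps to row 0 while offsetting x by the shear z."""
    def y_step(v):
        x, y = v
        return (x, y + 1) if y + 1 < h else ((x + z) % w, 0)
    unvisited = {(x, y) for x in range(w) for y in range(h)}
    orbits = 0
    while unvisited:
        v = next(iter(unvisited))
        while v in unvisited:      # trace one orbit until it closes up
            unvisited.remove(v)
            v = y_step(v)
        orbits += 1
    return orbits

# The count equals gcd(z, w): one orbit for z = +-1, and exactly two
# orbits for z = +-2 when w is even.
```

In particular \(\gcd(0,\sym{w}) = \sym{w}\) recovers the unsheared torus, whose \(\bform{\card{y}}\)-orbits are just its \(\sym{w}\) columns.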

From a path-algebraic point of view, a \(\sym{z}\)-sheared square lattice can be expressed as the path relation set:

\[ \list{\word{\rform{\card{x}}}{\bform{\card{y}}}\pathIso \word{\bform{\card{y}}}{\rform{\card{x}}},\repeatedPower{\word{\rform{\card{x}}}}{\sym{w}}\pathIso \repeatedPower{\word{\bform{\card{y}}}}{\sym{z}},\repeatedPower{\word{\bform{\card{y}}}}{\sym{h}}\pathIso \word{1}} \]It's clear then that if \(\sym{z} = \sym{h}\), we have no shear at all, and hence the shear parameter lives in the integers modulo \(\sym{h}\).

When plotted in three dimensions, the twist of a sheared square torus resembles the twisted magnetic field used in a tokamak.

# Noncommutativity

## Introduction

The line, square, triangular, etc. quivers we have examined in [[[Transitive quivers]]], [[[Intransitive lattices]]], and [[[Toroidal lattices]]] have all been associated with finite- or infinite-order Abelian groups. Specifically, they could be seen as the Cayley quivers of presentations of these groups, or, for non-transitive quivers like the hexagonal quiver, as path quotients by affine path valuations with values in these groups.

Ultimately, the group associated with each such quiver can be interpreted as the **translation group** that describes how we can permute the vertices of the quiver, while preserving path words. This translation group is the group center of the endomorphism group of the quiver, which is the larger group of path homomorphisms that permute vertices and *rewrite* path words in an invertible way.

Let us now move beyond Abelian groups to non-commutative groups. One family of "minimally non-commutative" groups are the **dihedral groups**, which describe the symmetries of an *n*-gon. We'll examine the infinite dihedral groups, which, in the 1-dimensional case, describe the symmetries of a "polygon with an infinite number of edges".

Note: this section is a stub. See [[[here:Summary and roadmap#Noncommutativity]]] for more information.

## Dihedral lattices

Let’s look at the one-dimensional family first. The cardinals \(\rform{\card{x}}\) and \(\rform{\inverted{\card{x}}}\) move right and left, and \(\rgform{\card{f}}\) flips the direction of movement. The matrix representation used is:

The lattice so generated is a doubled version of the line lattice, with the \(\rgform{\card{f}}\) cardinal switching between the upper and lower portions:
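The doubled structure can be sketched as a tiny state machine (our own encoding, not the matrix representation above): a state is a pair (position, direction), with \(\rform{\card{x}}\) translating by the current direction and \(\rgform{\card{f}}\) flipping the direction in place.

```python
# Minimal state-machine sketch of the infinite dihedral lattice
# (hypothetical encoding): direction +1 is the upper copy of the line
# lattice, direction -1 the lower copy.

def x(state):
    n, d = state
    return (n + d, d)   # translate by the current direction

def f(state):
    n, d = state
    return (n, -d)      # flip the direction, staying in place
```

The dihedral relations are visible directly: \(\rgform{\card{f}}\) is an involution, and conjugating \(\rform{\card{x}}\) by \(\rgform{\card{f}}\) inverts it.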

We can generalize this concept to the two-dimensional infinite dihedral group, with cardinals \(\rform{\card{x}}\), \(\bform{\card{y}}\). This representation is:

The generated lattice consists of two oppositely oriented copies of the square lattice, with the \(\rgform{\card{f}}\) cardinal acting to alternate between them:

## Wreath product

This immediately suggests a general procedure, since the flip cardinal is effectively a generator of a group action on the other cardinals via \(\list{\rewrite{\rform{\card{x}}}{\rform{\inverted{\card{x}}}},\rewrite{\bform{\card{y}}}{\bform{\inverted{\card{y}}}}}\). We can therefore take any lattice quiver with an Abelian group, and add to it additional cardinals that permute and/or invert its other cardinals. I claim without proof that this is equivalent to forming a restricted wreath product whose base group is generated by the "pure translation" cardinals. The infinite dihedral group above is a wreath product with a group isomorphic to \(\mathbb{Z}_2\).

As one example of this procedure, let’s add a cardinal \(\gform{\card{r}}\) that performs a 90° rotation of the other cardinals, which corresponds to taking a wreath product with a group isomorphic to \(\mathbb{Z}_4\):

The generated lattice contains *four* copies of the square lattice:

Each application of the \(\gform{\card{r}}\) cardinal cycles us among four different square lattices in which the roles of \(\rform{\card{x}}\) and \(\bform{\card{y}}\) are cycled through the list \(\list{\rform{\card{x}},\bform{\card{y}},\rform{\inverted{\card{x}}},\bform{\inverted{\card{y}}}}\).

Unfortunately, the representation I used above only generates a cover of the true lattice quiver. I had to write the cardinal matrices directly in order to produce a tractable coordinatization, but if one abandons matrix representations and uses a finite state machine directly, the lattice can be generated quite easily.
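As a sketch of that finite-state-machine approach (our own hypothetical encoding): a state is a position in \(\mathbb{Z}^2\) together with an orientation \(k\) in \(\mathbb{Z}_4\); the cardinals \(\rform{\card{x}}\) and \(\bform{\card{y}}\) translate by the \(k\)-times-rotated basis vectors, and \(\gform{\card{r}}\) increments the orientation.

```python
# State-machine sketch of the rotation-augmented square lattice
# (hypothetical encoding; not the matrix representation used above).

ROT = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # unit x-vector after k quarter-turns

def x(state):
    (a, b), k = state
    dx, dy = ROT[k]
    return ((a + dx, b + dy), k)

def y(state):
    (a, b), k = state
    dx, dy = ROT[(k + 1) % 4]   # y is x rotated by one quarter-turn
    return ((a + dx, b + dy), k)

def r(state):
    pos, k = state
    return (pos, (k + 1) % 4)   # rotate the roles of the other cardinals
```

Applying \(\gform{\card{r}}\) four times returns to the same state, and after one application \(\rform{\card{x}}\) acts the way \(\bform{\card{y}}\) did, exhibiting the cycling of roles described above.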

A small aside: a famous example of a wreath product is the “lamplighter group”: a bi-infinite street of lamps, and a lamplighter who can either move left or right, or toggle the lamp he is at. This admits a straightforward state machine implementation. For a “circular street” of \(n\) lamps, we obtain some very symmetric polyhedra:

One way to understand these polyhedra is that they are **truncated hypercubes** (this is most obvious visually for the 2- and 3-lamp cases above). The reason for this is simple: the possible configurations of \(n\) lamps form the vertices of an \(n\)-hypercube, with each axis corresponding to a lamp. Flipping a particular lamp is equivalent to traversing an edge of this hypercube. If we are at a corner of the hypercube, we can flip any lamp we wish, since there are \(n\) edges incident to us. To prevent this, imagine “shaving off” a corner of the \(n\)-hypercube to obtain a polygon with \(n\) corners. Each corner of this polygon has one “toggle” cardinal associated with it, corresponding to toggling a single lamp. The two neighboring polygon corners represent the togglings available if we walk to the next or previous lamp. By walking along this polygon, we can reach any particular lamp in order to flip it.
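The state-machine implementation mentioned above might look like this (a sketch; the names are ours): a state is the lamplighter's position together with the on/off configuration, and the moves are step left, step right, and toggle the current lamp. A breadth-first search then enumerates all \(n \, 2^{n}\) states — the vertices of the truncated hypercube.

```python
from collections import deque

def lamplighter_states(n):
    """All states of the circular lamplighter with n lamps, found by
    breadth-first search. A state is (position, lamps), with lamps a
    tuple of n on/off bits."""
    def moves(state):
        pos, lamps = state
        toggled = lamps[:pos] + (1 - lamps[pos],) + lamps[pos + 1:]
        yield ((pos + 1) % n, lamps)   # walk to the next lamp
        yield ((pos - 1) % n, lamps)   # walk to the previous lamp
        yield (pos, toggled)           # toggle the lamp we are at
    start = (0, (0,) * n)
    seen, queue = {start}, deque([start])
    while queue:
        for t in moves(queue.popleft()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen
```

Counting states confirms the order of the finite lamplighter group: \(n\) positions times \(2^{n}\) lamp configurations.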

## Enumerations

# Rewriting systems

## Introduction

We now make our first foray into *dynamics*: quivers that describe systems that change in time. To do this we will use the lens of **rewriting systems**. A (computational) **rewriting system** is a non-deterministic automaton that successively transforms some **global state**, represented by an arbitrary data structure, by finding and applying well-defined rules that **match** particular sub-elements of this data structure, replacing them with other elements.

Rewriting systems provide a flexible and general model of computation, and any computational system can be cast into this model: Turing machines, cellular automata, graph rewriting systems, string rewriting systems, register machines, etc. In all cases these systems can be non-deterministic; deterministic systems are special cases which, from our point of view, yield rather trivial theories.

Note: this section is only partially complete. The roadmap for this section can be found [[[here:Summary and roadmap#Rewriting systems]]].

## Local vs global states

We referred to rewrite systems as involving rewrites that match and replace sub-elements of the data structure representing the full global state. What are these sub-elements? We will approach this idea slowly. For now, we'll suggest that the **local states** are the *smallest* possible sub-elements from which a **global state** can be reconstructed. We'll look at some concrete rewrite systems to build intuition for the behavior of these local states.

Here we show a table indicating reasonable notions of local state for various types of rewriting system:

| | system type | global state | local state |
|---|---|---|---|
| \(\stringRewritingSystem{}\) | string r.s. | string \(\qstring{\character{c}_1 \character{c}_2 \ellipsis \character{c}_{\sym{n}}}\) | \(\tuple{\sym{i},\character{c}_{\sym{i}}}\) |
| \(\circularStringRewritingSystem{}\) | circular string r.s. | circular string \(\qstring{\ellipsis \character{c}_{\sym{n} - 1} \character{c}_{\sym{n}} \character{c}_1 \character{c}_2 \ellipsis }\) | \(\tuple{\sym{i},\character{c}_{\sym{i}}}\) |
| \(\turingMachineRewritingSystem{}\) | Turing machine | head in state \(\sym{s}\) at cell \(\sym{i}\) on tape \(\qstring{\character{c}_1 \ellipsis \character{c}_{\sym{n}}}\) | \(\sym{s},\sym{i},\tuple{\sym{i},\character{c}_{\sym{i}}}\) |
| \(\cellularAutomatonRewritingSystem{}\) | cellular automaton | vector of cells \(\list{\character{c}_1,\character{c}_2,\ellipsis ,\character{c}_{\sym{n}}}\) | \(\tuple{\sym{i},\sym{c}_{\sym{i}}}\) |
| \(\graphRewritingSystem{}\) | graph r.s. | set of edges \(\list{\de{\vert{\sym{t}_1}}{\vert{\sym{h}_1}},\ellipsis ,\de{\vert{\sym{t}_{\sym{n}}}}{\vert{\sym{h}_{\sym{n}}}}}\) | edge \(\de{\vert{\sym{t}_{\sym{i}}}}{\vert{\sym{h}_{\sym{i}}}}\) |
| \(\hypergraphRewritingSystem{}\) | hypergraph r.s. | set of hyperedges \(\list{\sym{e}_1,\sym{e}_2,\ellipsis ,\sym{e}_{\sym{n}}}\) | hyperedge \(\sym{e}_{\sym{i}}\) |
| \(\petriNetRewritingSystem{}\) | Petri net | place occupancy \(\assocArray{\mto{\sym{t}_1}{\sym{N}_1},\ellipsis ,\mto{\sym{t}_{\sym{n}}}{\sym{N}_{\sym{n}}}}\) | \(\tuple{\sym{t}_{\sym{i}},\sym{N}_{\sym{i}}}\) |

We will limit our attention initially to string rewriting systems, as these have a simple notion of spatial locality that originates in the linear order of character positions in the string.

## Evolution

Once a particular rewriting system type has been chosen, it remains to specify the **rewriting rules** that should be applied. We'll indicate the system in a script typeface, typically \(\rewritingSystem{R}\), and list its rules as:

where \(\sym{L}_{\sym{i}}\) is the left-hand-side, or **lhs**, of the \(i^{\textrm{th}}\) rule, and \(\sym{R}_{\sym{i}}\) is the right-hand-side, or **rhs**.

The lhs specifies a **pattern** that should **match** sub-elements of the global state, and the rhs specifies how the matched state should be rewritten to yield a new local state, and hence a new global state. We will also use the word **match** as a noun, to refer to the sub-elements (the set of local states) that were matched by the rule.

The match can be **rewritten**, giving us a new global state. Multiple matches can be present in a particular global state: only one of them is rewritten in a particular evolution of the rewriting system. If multiple matches can occur in a given global state, the system is **non-deterministic**. If a maximum of one match can occur, the system is deterministic.

We will study non-deterministic systems *globally*, considering all possible evolutions that can occur: we apply all possible matches in a kind of branching process in order to explore their consequences holistically.

After specifying \(\rewritingSystem{R}\), we can now choose an **initial state**, written \(\sym{s}_0\). We write the combined data of a rewriting system and an initial state as \(\rewritingStateBinding{\rewritingSystem{R}}{\sym{s}_0}\).

At that point we can **evolve** the system, an iterative process carried out either for a fixed number of steps or in an ongoing fashion that never terminates. The matches in a particular global state yield the **successor states** of that state. We may sometimes reach a global state that cannot be rewritten because there are no matches: we say that such a global state has **halted**.

This process generates a graph, known as a **multiway graph** or **rewrite graph**, in which vertices represent global states, and edges represent **rewrites**. We'll write the process that generates this graph to depth \(\sym{n}\) as:

Computationally, this is a **breadth-first search** that yields a (finite) rewrite graph if it terminates after some finite number of steps. We'll initially avoid defining or reasoning about infinite rewrite graphs.

## String rewriting systems

To illustrate this setup, we'll focus on **string rewriting systems**, since they are easy to visualize, familiar to those with computer programming experience, and fairly easy to analyze.

Our first example will be the following extremely simple system:

\[ \rewritingSystem{R_0}\defEqualSymbol \rewritingRuleBinding{\stringRewritingSystem{}}{\rewritingRule{\lstr{\texttt{b}}}{\lstr{\texttt{a}}}} \]This rewrite system simply replaces the letter \(\lchar{\texttt{b}}\) with the letter \(\lchar{\texttt{a}}\) wherever it occurs.

Here we visualize the rewrite graph generated by \(\multiwayBFS{\rewritingSystem{R_0},\qstring{\lstr{\texttt{bab}}}, \infty }\):

As you can see, there are two matches of the rewrite rule \(\rewritingRule{\lstr{\texttt{b}}}{\lstr{\texttt{a}}}\) in the initial state \(\qstring{\lstr{\texttt{bab}}}\): one corresponding to the character \(\lchar{\texttt{b}}\) occurring in the first string position, the other to the character \(\lchar{\texttt{b}}\) occurring in the third string position. When these matches are rewritten, we obtain the two **successor global states** \(\qstring{\lstr{\texttt{aab}}}\) and \(\qstring{\lstr{\texttt{baa}}}\) respectively. Each of these two global states contains exactly one match, corresponding to the remaining \(\lchar{\texttt{b}}\). Rewriting these, we obtain the same successor global state \(\qstring{\lstr{\texttt{aaa}}}\). This global state contains *no* matches, as there are no remaining \(\lchar{\texttt{b}}\)'s.
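The evolution just traced can be reproduced with a small multiway breadth-first search. This Python sketch (the function names are ours; match positions are 0-indexed, unlike the 1-indexed positions in the text) returns the set of global states together with edges labelled by the position of the match that was rewritten:

```python
def matches(lhs, state):
    """All positions at which the pattern lhs occurs in state."""
    return [i for i in range(len(state) - len(lhs) + 1)
            if state[i:i + len(lhs)] == lhs]

def multiway_bfs(rules, init, depth):
    """Breadth-first rewrite graph to the given depth: (states, edges).
    Each edge (s, t, i) records a rewrite of s at match position i."""
    states, edges, frontier = {init}, set(), {init}
    for _ in range(depth):
        nxt = set()
        for s in frontier:
            for lhs, rhs in rules:
                for i in matches(lhs, s):
                    t = s[:i] + rhs + s[i + len(lhs):]
                    edges.add((s, t, i))
                    if t not in states:
                        states.add(t)
                        nxt.add(t)
        frontier = nxt
        if not frontier:   # no new states: the search has converged
            break
    return states, edges
```

Running it on the rule \(\rewritingRule{\lstr{\texttt{b}}}{\lstr{\texttt{a}}}\) with initial state \(\qstring{\lstr{\texttt{bab}}}\) yields exactly the four global states and four rewrites described above.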

## Rewrite quiver

Notice the important fact that the two matches, at positions 1 and 3 of the original string, do not overlap: we say that these rewrites **commute**. This implies that we can rewrite those matches in either order, and the non-rewritten match will still remain. This gives us a hint of how we might attach a cardinal structure to the rewrite graph, yielding the **rewrite quiver**.

The rewrite quiver is the quiver whose cardinals are the labels formed by the following data taken as a whole: the *rule that was matched*, and the *local states that were involved in the match*.

Here we show the rewrite quiver for this simple system, where the two cardinals correspond to the two independent rewrites:

We have named these cardinals with the string position of the match they rewrite, since this is enough to uniquely determine the match for this system.

Let's consider a slightly more complex initial condition of \(\qstring{\lstr{\texttt{bbb}}}\).

We can see already that this rewrite system yields a rewrite quiver given by \(\subSize{\gridQuiver{\sym{n}}}{2}\), a hypercube of dimension \(\sym{n}\), where \(\sym{n}\) is the count of \(\lchar{\texttt{b}}\)'s in the initial condition. The rewrite quiver is insensitive to the order of the characters in the string, only depending on the \(\lchar{\texttt{b}}\)-count. This is a generic feature of string rewriting systems in which matches do not overlap (whether because of some "cosmic conspiracy" or because the form of the rules prevents this from occurring).
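The hypercube claim is easy to check mechanically. This sketch (with the single rule \(\rewritingRule{\lstr{\texttt{b}}}{\lstr{\texttt{a}}}\) inlined to keep it self-contained) counts states and rewrites for initial conditions \(\lchar{\texttt{b}}^{\sym{n}}\):

```python
def rewrite_graph(init):
    """Full rewrite graph of the rule b -> a from the given initial
    string: returns (global states, labelled rewrite edges)."""
    states, edges, frontier = {init}, set(), {init}
    while frontier:
        nxt = set()
        for s in frontier:
            for i, c in enumerate(s):
                if c == "b":
                    t = s[:i] + "a" + s[i + 1:]
                    edges.add((s, t, i))
                    if t not in states:
                        states.add(t)
                        nxt.add(t)
        frontier = nxt
    return states, edges

# For init = "b" * n the graph is the n-hypercube: 2**n states (one per
# subset of surviving b's) and n * 2**(n - 1) rewrite edges.
```

The counts \(2^{\sym{n}}\) vertices and \(\sym{n} \, 2^{\sym{n} - 1}\) edges are exactly those of \(\subSize{\gridQuiver{\sym{n}}}{2}\).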

More generally, in any global state of a rewriting system in which we have \(\sym{n}\) commuting rewrites available, we will obtain a subquiver that is isomorphic to \(\subSize{\gridQuiver{\sym{n}}}{2}\).

#### Sorting system

We'll now focus on the "sorting system" that partially sorts substrings:

\[ \rewritingSystem{R_1}\defEqualSymbol \rewritingRuleBinding{\stringRewritingSystem{}}{\rewritingRule{\lstr{\texttt{ba}}}{\lstr{\texttt{ab}}}} \]Here we show the rewrite quiver for a variety of initial conditions:

Notice the obvious fact that all of these quivers terminate in a single final state, in which the characters of the string are in *sorted order*, with all \(\lchar{\texttt{b}}\)'s occurring to the right of all \(\lchar{\texttt{a}}\)'s. This must always happen, since a rewrite always "moves one \(\lchar{\texttt{b}}\) to the right", and will always be able to do so if any \(\lchar{\texttt{a}}\)'s are to the right of any \(\lchar{\texttt{b}}\)'s. The only halting state therefore is the state in which all \(\lchar{\texttt{b}}\)’s are to the right of all \(\lchar{\texttt{a}}\)’s.
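The argument for a unique sorted halting state can be checked exhaustively on small initial conditions. A minimal sketch (function name is ours):

```python
def halting_states(init):
    """All halting states reachable from init under the rule ba -> ab."""
    states, frontier, halted = {init}, {init}, set()
    while frontier:
        nxt = set()
        for s in frontier:
            hits = [i for i in range(len(s) - 1) if s[i:i + 2] == "ba"]
            if not hits:
                halted.add(s)      # no matches: this global state halts
            for i in hits:
                t = s[:i] + "ab" + s[i + 2:]
                if t not in states:
                    states.add(t)
                    nxt.add(t)
        frontier = nxt
    return halted
```

For every initial condition tried, the unique halting state is the sorted string, with all \(\lchar{\texttt{a}}\)'s preceding all \(\lchar{\texttt{b}}\)'s.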

## Local and global states


To ground our discussion, we'll discuss the rewriting system introduced above, initialized to global state \(\lstr{\texttt{bbaa}}\):

\[ \rewritingSystem{R_2}\defEqualSymbol \rewritingStateBinding{\rewritingRuleBinding{\stringRewritingSystem{}}{\rewritingRule{\lstr{\texttt{ba}}}{\lstr{\texttt{ab}}}}}{\lstr{\texttt{bbaa}}} \]This yields the following multiway quiver:

We've already mentioned **local states** and **global states**. We'll write the sets of local and global states for an initialized rewriting system \(\rewritingSystem{R}\) as \(\localStates{\rewritingSystem{R}}\) and \(\globalStates{\rewritingSystem{R}}\), or just \(\localStatesSymbol ,\globalStatesSymbol\) when the rewriting system \(\rewritingSystem{R}\) is fixed. Let's enumerate them for our example \(\rewritingSystem{R_2}\):

### Local key and value substates

In a string rewriting system, the local states are pairs \(\tuple{\keySubStateSymbol{p},\keySubStateSymbol{c}}\) that bind together a string position \(\keySubStateSymbol{p}\) and the character \(\keySubStateSymbol{c}\) that "lives" there. The generic terms we'll use for these are **key local substates** and **value local substates** respectively. To explain this terminology: we take the terms **key** and **value** from associative array data structures in computer science, because a global state cannot contain *two* local states that share the same key – this would correspond to two characters living at the same string position. We use the term **local substate** to indicate that these are ingredients of a local state, but do not individually constitute local states.

The set of key local substates for an initialized rewriting system \(\rewritingSystem{R}\) will be written \(\keySubStates{\rewritingSystem{R}}\), and the set of value local substates will be written \(\valueSubStates{\rewritingSystem{R}}\). Again we'll abbreviate these to \(\keySubStatesSymbol ,\valueSubStatesSymbol\) when \(\rewritingSystem{R}\) is fixed. The set of local substates is then a subset of Cartesian product of these:

\[ \localStatesSymbol \subseteq \keySubStatesSymbol \cartesianProductSymbol \valueSubStatesSymbol \]To be more concise we'll also sometimes write a local state \(\tuple{\keySubStateSymbol{p},\keySubStateSymbol{c}}\) as \(\localState{\keySubStateSymbol{p}}{\keySubStateSymbol{c}}\). Using this notation, the set of local states for \(\rewritingSystem{R_2}\) is:

\[ \localStates{\rewritingSystem{R_2}} = \set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{a}}},\localState{3}{\lchar{\texttt{a}}},\localState{4}{\lchar{\texttt{a}}},\localState{1}{\lchar{\texttt{b}}},\localState{2}{\lchar{\texttt{b}}},\localState{3}{\lchar{\texttt{b}}},\localState{4}{\lchar{\texttt{b}}}} \]We have \(\localStatesSymbol \subseteq \keySubStatesSymbol \cartesianProductSymbol \valueSubStatesSymbol\) rather than \(\localStatesSymbol = \keySubStatesSymbol \cartesianProductSymbol \valueSubStatesSymbol\) because, for a given initialized rewriting system, rewriting might not produce *all* of the theoretically possible local states. For example, the system \(\rewritingSystem{R} = \rewritingStateBinding{\rewritingRuleBinding{\stringRewritingSystem{}}{\rewritingRule{\lstr{\texttt{b}}}{\lstr{\texttt{a}}}}}{\qstring{\lstr{\texttt{aab}}}}\) never creates the local state \(\localState{1}{\lchar{\texttt{b}}}\). The set of local states is actually \(\localStates{\rewritingSystem{R}} = \set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{a}}},\localState{3}{\lchar{\texttt{a}}},\localState{3}{\lchar{\texttt{b}}}}\).

For the main example \(\rewritingSystem{R_2}\), we happen to have that every possible local state is generated by the system:

\[ \begin{aligned} \keySubStates{\rewritingSystem{R_2}}&= \set{1,2,3,4}\\ \valueSubStates{\rewritingSystem{R_2}}&= \set{\lchar{\texttt{a}},\lchar{\texttt{b}}}\\[0.75em] \localStates{\rewritingSystem{R_2}}&= \keySubStates{\rewritingSystem{R_2}}\cartesianProductSymbol \valueSubStates{\rewritingSystem{R_2}}\\ &= \set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{a}}},\localState{3}{\lchar{\texttt{a}}},\localState{4}{\lchar{\texttt{a}}},\localState{1}{\lchar{\texttt{b}}},\localState{2}{\lchar{\texttt{b}}},\localState{3}{\lchar{\texttt{b}}},\localState{4}{\lchar{\texttt{b}}}}\end{aligned} \]

### Key substates and length-preserving systems

In the definitions above, we have implicitly relied on the fact that the rewrites in the system \(\rewritingSystem{R_2} = \rewritingRuleBinding{\stringRewritingSystem{}}{\rewritingRule{\lstr{\texttt{ba}}}{\lstr{\texttt{ab}}}}\) do not change the length of the string. That is to say, the lengths of the lhs and rhs are equal, both being 2. This allows us to talk unambiguously of *global* string positions, and makes the set \(\keySubStates{\rewritingSystem{R_2}} = \set{1,2,3,4}\) well-defined.

For a rewriting system like \(\rewritingRuleBinding{\stringRewritingSystem{}}{\rewritingRule{\lstr{\texttt{ab}}}{\lstr{\texttt{a}}}}\), however, rewrites will *change* the length of the string. We might regard such rewrites as effectively *removing key substates*, making them inaccessible from all rewrites that follow. Conversely, systems like \(\rewritingRuleBinding{\stringRewritingSystem{}}{\rewritingRule{\lstr{\texttt{a}}}{\lstr{\texttt{ab}}}}\) effectively *introduce new key substates*. To handle such cases, we can expect to define equivalence classes that capture when two local states involving different key substates are *actually the same* from the point of view of a global or regional state.

There is a host of subtle issues hiding here, and so we will postpone considering such systems for now.

## Regional states

We can see local and global states as living at either end of a continuum of specificity: local states are the "minimal units" of information we have about the state of a rewrite system, whereas global states are maximal in that same sense.

This continuum contains other kinds of states, which are intermediate in the amount of information they contain about the state of the rewrite system. We'll call these **regional states**. One way to conceptualize regional states is by building them up from local states. More precisely, we can think of a regional state as a **set of compatible local states**, meaning a set of local states in which there is at most one element with any given key substate. In the case of string rewrite systems, compatibility amounts to saying that only one character can live at a given position of a string.

### Notation

We’ll write a regional state consisting of \(\sym{n}\) local states \(\localStateSymbol{l_{\sym{i}}}\) as \(\regionalStateForm{\localStateSymbol{l_1} \localStateSymbol{l_2} \ellipsis \localStateSymbol{l_{\sym{n}}}}\), and the set of *all* regional states of a system \(\rewritingSystem{R}\) as \(\regionalStates{\rewritingSystem{R}}\), abbreviating to \(\regionalStatesSymbol\) when \(\rewritingSystem{R}\) is fixed.

A regional state is a *partial description* of a global state. For string rewriting systems, in which global states are strings, a regional state is then a partial description of a string: it specifies that certain string positions hold certain character values, while leaving other positions unspecified. We can therefore use **wildcard notation** to evoke this interpretation, where a dash indicates a **wildcard** – a position in the string for which the regional state does not specify a character value.
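To make this notation concrete, here is a small Python sketch that renders wildcard notation; the representation of a regional state as a dict from 1-based positions to characters is an illustrative choice, not anything canonical.

```python
def to_wildcard(region, length):
    """Render a regional state (a dict mapping 1-based positions to
    characters) in wildcard notation, with '-' at unspecified positions."""
    return "".join(region.get(i, "-") for i in range(1, length + 1))
```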

Here we show some string regional states written in parentheses notation as well as wildcard notation:

\[ \begin{aligned} \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}}&\syntaxEqualSymbol \wstring{\lstr{\texttt{abab}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{b}}}}&\syntaxEqualSymbol \wstring{\lstr{\texttt{ab-b}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}}}&\syntaxEqualSymbol \wstring{\lstr{\texttt{a---}}}\\ \regionalStateForm{\localState{4}{\lchar{\texttt{b}}}}&\syntaxEqualSymbol \wstring{\lstr{\texttt{---b}}}\\ \emptyRegionalState &\syntaxEqualSymbol \wstring{\lstr{\texttt{----}}}\end{aligned} \]

### Gluing

We can form a regional state by **gluing** a set of local states. We'll write this function explicitly as \(\stateCompose\), or in infix form as \(\infixStateComposeSymbol\). This gluing function has the following signature:

\[ \partialFunctionSignature{\stateCompose}{\powerSet{\localStatesSymbol }}{\regionalStatesSymbol } \]

Note the symbol \(\rightharpoonup\), which indicates that \(\stateCompose\) is a *partial* function, not necessarily defined on all sets of local states. This is because we cannot glue **incompatible** local states – local states that share a key substate but differ in their value substate – since we cannot have more than one distinct character at a given string position. We indicate this failure with the notation \(\stateCompose(\ellipsis ) = \invalidRegionalState\).
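A Python sketch of the gluing operation, using `None` to play the role of the invalid regional state \(\invalidRegionalState\); the dict representation of regional states is an illustrative choice.

```python
def glue(local_states):
    """Glue a set of (position, char) local states into a regional state,
    represented as a dict from positions to characters. Returns None -- the
    invalid regional state -- when two local states place distinct
    characters at the same position."""
    region = {}
    for pos, char in local_states:
        if region.get(pos, char) != char:
            return None  # incompatible: two characters at one position
        region[pos] = char
    return region
```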

#### Examples

Here we give some examples of gluing operations and their results, both in parentheses and wildcard notation:

\[ \begin{csarray}{rllllll}{acccccc} & & \stateCompose(\set{}) & = & \wstring{\lstr{\texttt{----}}} & \syntaxEqualSymbol & \emptyRegionalState \\ & & \stateCompose(\set{\localState{1}{\lchar{\texttt{a}}}}) & = & \wstring{\lstr{\texttt{a---}}} & \syntaxEqualSymbol & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}}}\\ \localState{1}{\lchar{\texttt{a}}}\infixStateComposeSymbol \localState{2}{\lchar{\texttt{b}}} & \syntaxEqualSymbol & \stateCompose(\set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{b}}}}) & = & \wstring{\lstr{\texttt{ab--}}} & \syntaxEqualSymbol & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}}}\\ \localState{1}{\lchar{\texttt{a}}}\infixStateComposeSymbol \localState{2}{\lchar{\texttt{b}}}\infixStateComposeSymbol \localState{4}{\lchar{\texttt{b}}} & \syntaxEqualSymbol & \stateCompose(\set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{b}}},\localState{4}{\lchar{\texttt{b}}}}) & = & \wstring{\lstr{\texttt{ab-b}}} & \syntaxEqualSymbol & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{b}}}}\\ \localState{1}{\lchar{\texttt{a}}}\infixStateComposeSymbol \localState{2}{\lchar{\texttt{b}}}\infixStateComposeSymbol \localState{3}{\lchar{\texttt{b}}}\infixStateComposeSymbol \localState{4}{\lchar{\texttt{a}}} & \syntaxEqualSymbol & \stateCompose(\set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{b}}},\localState{3}{\lchar{\texttt{b}}},\localState{4}{\lchar{\texttt{a}}}}) & = & \wstring{\lstr{\texttt{abba}}} & \syntaxEqualSymbol & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{a}}}} \end{csarray} \]

### Melting

The inverse to gluing is **melting**, in which we *dissolve the glue*, obtaining the set of local states that make up a particular regional state. The function that melts a regional state into a set of local states is written \(\stateDecompose\), and has the following signature:

\[ \functionSignature{\stateDecompose}{\regionalStatesSymbol }{\powerSet{\localStatesSymbol }} \]

We show some obvious examples:

\[ \begin{aligned} \stateDecompose(\wstring{\lstr{\texttt{----}}})&= \set{}\\ \stateDecompose(\wstring{\lstr{\texttt{a---}}})&= \set{\localState{1}{\lchar{\texttt{a}}}}\\ \stateDecompose(\wstring{\lstr{\texttt{ab--}}})&= \set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{b}}}}\\ \stateDecompose(\wstring{\lstr{\texttt{ab-b}}})&= \set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{b}}},\localState{4}{\lchar{\texttt{b}}}}\\ \stateDecompose(\wstring{\lstr{\texttt{abba}}})&= \set{\localState{1}{\lchar{\texttt{a}}},\localState{2}{\lchar{\texttt{b}}},\localState{3}{\lchar{\texttt{b}}},\localState{4}{\lchar{\texttt{a}}}}\end{aligned} \]Gluing the melt of a regional state yields that regional state:

\[ \forAllForm{\elemOf{\regionalStateSymbol{R}}{\regionalStatesSymbol }}{\stateCompose(\stateDecompose(\regionalStateSymbol{R})) = \regionalStateSymbol{R}} \]

Melting the glue of a set of *compatible* local states yields that set:

\[ \stateDecompose(\stateCompose(\setSymbol{L})) = \setSymbol{L} \]
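These round-trip laws can be checked concretely. This Python sketch represents regional states as dicts and repeats the gluing function inline so that it is self-contained; the encoding is illustrative.

```python
def glue(local_states):
    """Glue compatible (position, char) pairs into a dict; None if incompatible."""
    region = {}
    for pos, char in local_states:
        if region.get(pos, char) != char:
            return None
        region[pos] = char
    return region

def melt(region):
    """Melt a regional state back into its set of local states."""
    return set(region.items())

# Round-trip laws: glue after melt, and melt after glue, are identities
# (the latter on compatible sets of local states).
```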

### Identifying global and local states with regional states

We can identify global states of \(\rewritingSystem{R}\), shown on the left, with "maximal" regional states, shown on the right, first in parentheses and then wildcard notation:

\[ \begin{aligned} \qstring{\lstr{\texttt{aabb}}}& \approx \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{b}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{aabb}}}\\ \qstring{\lstr{\texttt{abab}}}& \approx \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{abab}}}\\ \qstring{\lstr{\texttt{abba}}}& \approx \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{a}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{abba}}}\\ \qstring{\lstr{\texttt{baab}}}& \approx \regionalStateForm{\localState{1}{\lchar{\texttt{b}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{baab}}}\\ \qstring{\lstr{\texttt{baba}}}& \approx \regionalStateForm{\localState{1}{\lchar{\texttt{b}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{a}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{baba}}}\\ \qstring{\lstr{\texttt{bbaa}}}& \approx \regionalStateForm{\localState{1}{\lchar{\texttt{b}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{a}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{bbaa}}}\end{aligned} \]Note that there are no wildcards actually present in the right hand column, since global states imply a character value for *every* string position.

Similarly we can identify local states, shown on the left, with "minimal" regional states, shown on the right in set and then wildcard notation:

\[ \begin{csarray}{cc}{ai} \localState{1}{\lchar{\texttt{a}}}\bijectiveSymbol \regionalStateForm{\localState{1}{\lchar{\texttt{a}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{a---}}} & \localState{1}{\lchar{\texttt{b}}}\bijectiveSymbol \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{b---}}}\\ \localState{2}{\lchar{\texttt{a}}}\bijectiveSymbol \regionalStateForm{\localState{2}{\lchar{\texttt{a}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{-a--}}} & \localState{2}{\lchar{\texttt{b}}}\bijectiveSymbol \regionalStateForm{\localState{2}{\lchar{\texttt{b}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{-b--}}}\\ \localState{3}{\lchar{\texttt{a}}}\bijectiveSymbol \regionalStateForm{\localState{3}{\lchar{\texttt{a}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{--a-}}} & \localState{3}{\lchar{\texttt{b}}}\bijectiveSymbol \regionalStateForm{\localState{3}{\lchar{\texttt{b}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{--b-}}}\\ \localState{4}{\lchar{\texttt{a}}}\bijectiveSymbol \regionalStateForm{\localState{4}{\lchar{\texttt{a}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{---a}}} & \localState{4}{\lchar{\texttt{b}}}\bijectiveSymbol \regionalStateForm{\localState{4}{\lchar{\texttt{b}}}}\syntaxEqualSymbol \wstring{\lstr{\texttt{---b}}} \end{csarray} \]We'll denote both the functions that embed global and local states into the regional states with \(\function{ \iota }\):

\[ \begin{nsarray}{c} \functionSignature{\function{ \iota }}{\localStatesSymbol }{\regionalStatesSymbol }\\ \functionSignature{\function{ \iota }}{\globalStatesSymbol }{\regionalStatesSymbol } \end{nsarray} \]

### Conjunctions

It is possible to lift the idea of gluing *up a level*, obtaining an operation we'll call **conjunction**. Gluing applies to sets of local states; conjunction applies to sets of regional states. We'll write conjunction as \(\stateJoin\) or in infix form as \(\infixStateJoinSymbol\). Conjunction has the following signature:

\[ \partialFunctionSignature{\stateJoin}{\powerSet{\regionalStatesSymbol }}{\regionalStatesSymbol } \]

Conjunction of two regional states can be *defined* in terms of their underlying local states: we form the conjunction by gluing the union of their melts:

\[ \regionalStateSymbol{R}\infixStateJoinSymbol \regionalStateSymbol{S}\defEqualSymbol \stateCompose(\stateDecompose(\regionalStateSymbol{R}) \cup \stateDecompose(\regionalStateSymbol{S})) \]

This operation can clearly fail when there is a local state in one regional state that is incompatible with a local state in the other regional state, hence \(\stateJoin\) is a partial function.
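A Python sketch of conjunction on the dict representation of regional states (an illustrative encoding), again using `None` for the invalid regional state:

```python
def conjoin(r, s):
    """Conjunction of two regional states (dicts): glue the union of their
    melts. Returns None when some position would receive two distinct
    characters."""
    out = dict(r)
    for pos, char in s.items():
        if out.get(pos, char) != char:
            return None  # incompatible local states
        out[pos] = char
    return out
```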

Here are some examples of conjunction on regional states:

\[ \begin{csarray}{ccccc}{acccc} \emptyRegionalState & \sqcup & \emptyRegionalState & = & \emptyRegionalState \\ \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}} & \sqcup & \emptyRegionalState & = & \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}} & \sqcup & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}}} & = & \invalidRegionalState \\ \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}} & \sqcup & \regionalStateForm{\localState{2}{\lchar{\texttt{a}}}} & = & \regionalStateForm{\localState{1}{\lchar{\texttt{b}}} \localState{2}{\lchar{\texttt{a}}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{a}}}} & \sqcup & \regionalStateForm{\localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}} & = & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}}} & \sqcup & \regionalStateForm{\localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}} & = & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}}} & \sqcup & \regionalStateForm{\localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{b}}}} & = & \invalidRegionalState \end{csarray} \]We can also depict these same examples with wildcard notation, with the conjunction appearing under its operands:

\[ \begin{array}{cccccccc} \regionalStateSymbol{R} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{b---}}} & \wstring{\lstr{\texttt{b---}}} & \wstring{\lstr{\texttt{b---}}} & \wstring{\lstr{\texttt{aba-}}} & \wstring{\lstr{\texttt{ab--}}} & \wstring{\lstr{\texttt{aaa-}}}\\[1em] \regionalStateSymbol{S} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{a---}}} & \wstring{\lstr{\texttt{-a--}}} & \wstring{\lstr{\texttt{-bab}}} & \wstring{\lstr{\texttt{--ab}}} & \wstring{\lstr{\texttt{-bbb}}}\\[1em] \regionalStateSymbol{R}\infixStateJoinSymbol \regionalStateSymbol{S} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{b---}}} & \invalidRegionalState & \wstring{\lstr{\texttt{ba--}}} & \wstring{\lstr{\texttt{abab}}} & \wstring{\lstr{\texttt{abab}}} & \invalidRegionalState \end{array} \]

### Disjunctions

Similar to the operation of conjunction, we can define **disjunction**, which finds the regional state that is *common* to two or more input regional states. We'll write the disjunction function as \(\stateMeet\), and in infix form as \(\infixStateMeetSymbol\). Disjunction has the following signature:

\[ \functionSignature{\stateMeet}{\powerSet{\regionalStatesSymbol }}{\regionalStatesSymbol } \]

Unlike conjunction, disjunction is a total function: it is defined on every set of regional states. We can define the disjunction of two regional states in terms of \(\stateDecompose\) and \(\stateCompose\) as follows:

\[ \regionalStateSymbol{R}\infixStateMeetSymbol \regionalStateSymbol{S}\defEqualSymbol \stateCompose(\stateDecompose(\regionalStateSymbol{R})\setIntersectionSymbol \stateDecompose(\regionalStateSymbol{S})) \]The totality of \(\stateMeet\) is a consequence of the fact that the local states in the melt of \(\regionalStateSymbol{R}\) are, by definition, compatible, and likewise for \(\regionalStateSymbol{S}\). Hence the local states in their intersection must be compatible too.
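A matching Python sketch of disjunction on the dict representation; note that, unlike conjunction, it cannot fail:

```python
def disjoin(r, s):
    """Disjunction of two regional states (dicts): glue the intersection of
    their melts. Always defined, since a subset of a compatible set of
    local states is itself compatible."""
    return {pos: char for pos, char in r.items() if s.get(pos) == char}
```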

\[ \begin{csarray}{ccccc}{acccc} \emptyRegionalState & \sqcap & \emptyRegionalState & = & \emptyRegionalState \\ \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}} & \sqcap & \emptyRegionalState & = & \emptyRegionalState \\ \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}} & \sqcap & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}}} & = & \emptyRegionalState \\ \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}} & \sqcap & \regionalStateForm{\localState{1}{\lchar{\texttt{b}}} \localState{2}{\lchar{\texttt{b}}}} & = & \regionalStateForm{\localState{1}{\lchar{\texttt{b}}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}}} & \sqcap & \regionalStateForm{\localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{a}}}} & = & \regionalStateForm{\localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{a}}}} & \sqcap & \regionalStateForm{\localState{1}{\lchar{\texttt{b}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{b}}}} & = & \regionalStateForm{\localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}}}\\ \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{a}}}} & \sqcap & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{a}}}} & = & \regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{a}}}} \end{csarray} \]As before, we can depict these same examples with wildcard notation, with the disjunction appearing 
under its operands:

\[ \begin{array}{cccccccc} \regionalStateSymbol{R} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{b---}}} & \wstring{\lstr{\texttt{b---}}} & \wstring{\lstr{\texttt{b---}}} & \wstring{\lstr{\texttt{aaa-}}} & \wstring{\lstr{\texttt{aaaa}}} & \wstring{\lstr{\texttt{aaaa}}}\\[1em] \regionalStateSymbol{S} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{a---}}} & \wstring{\lstr{\texttt{bb--}}} & \wstring{\lstr{\texttt{-aaa}}} & \wstring{\lstr{\texttt{baab}}} & \wstring{\lstr{\texttt{aaaa}}}\\[1em] \regionalStateSymbol{R}\infixStateMeetSymbol \regionalStateSymbol{S} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{----}}} & \wstring{\lstr{\texttt{b---}}} & \wstring{\lstr{\texttt{-aa-}}} & \wstring{\lstr{\texttt{-aa-}}} & \wstring{\lstr{\texttt{aaaa}}} \end{array} \]

### Extent

Regional states represent partial descriptions of global states. Therefore it is natural to ask *which* global states a regional state describes. This is called the **extent** of the regional state, written \(\stateExtent\), with the following signature:

\[ \functionSignature{\stateExtent}{\regionalStatesSymbol }{\powerSet{\globalStatesSymbol }} \]

We can summarize the behavior of the \(\stateExtent\) function using a table whose rows are global states and whose columns are regional states. Every entry of the table contains a tick if that global state is described by that regional state. Here we show such a table for *all* global states and a *selection* of regional states:

The extent \(\stateExtent(\regionalStateSymbol{R})\) of a regional state \(\regionalStateSymbol{R}\) is then the set of global states with a tick in the column for \(\regionalStateSymbol{R}\).
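A Python sketch of extent over an explicitly enumerated set of global states; the six global states assumed here for \(\rewritingSystem{R_2}\), and the dict encoding of regional states, are illustrative.

```python
GLOBAL_STATES = {"aabb", "abab", "abba", "baab", "baba", "bbaa"}  # assumed for R_2

def extent(region, global_states=GLOBAL_STATES):
    """Extent of a regional state (dict of 1-based position -> char):
    the global states (strings) satisfying all of its constraints."""
    return {g for g in global_states
            if all(g[pos - 1] == char for pos, char in region.items())}
```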

Here we show some examples of the extent of some regional states:

\[ \begin{csarray}{rcl}{acc} \stateExtent(\emptyRegionalState )\syntaxEqualSymbol \stateExtent(\wstring{\lstr{\texttt{----}}}) & = & \set{\qstring{\lstr{\texttt{aabb}}},\qstring{\lstr{\texttt{abab}}},\qstring{\lstr{\texttt{abba}}},\qstring{\lstr{\texttt{baab}}},\qstring{\lstr{\texttt{baba}}},\qstring{\lstr{\texttt{bbaa}}}}\\ \stateExtent(\regionalStateForm{\localState{1}{\lchar{\texttt{a}}}})\syntaxEqualSymbol \stateExtent(\wstring{\lstr{\texttt{a---}}}) & = & \set{\qstring{\lstr{\texttt{aabb}}},\qstring{\lstr{\texttt{abab}}},\qstring{\lstr{\texttt{abba}}}}\\ \stateExtent(\regionalStateForm{\localState{1}{\lchar{\texttt{b}}}})\syntaxEqualSymbol \stateExtent(\wstring{\lstr{\texttt{b---}}}) & = & \set{\qstring{\lstr{\texttt{baab}}},\qstring{\lstr{\texttt{baba}}},\qstring{\lstr{\texttt{bbaa}}}}\\ \stateExtent(\regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}}})\syntaxEqualSymbol \stateExtent(\wstring{\lstr{\texttt{ab--}}}) & = & \set{\qstring{\lstr{\texttt{abab}}},\qstring{\lstr{\texttt{abba}}}}\\ \stateExtent(\regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{b}}}})\syntaxEqualSymbol \stateExtent(\wstring{\lstr{\texttt{abb-}}}) & = & \set{\qstring{\lstr{\texttt{abba}}}}\\ \stateExtent(\regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{b}}} \localState{3}{\lchar{\texttt{b}}} \localState{4}{\lchar{\texttt{a}}}})\syntaxEqualSymbol \stateExtent(\wstring{\lstr{\texttt{abba}}}) & = & \set{\qstring{\lstr{\texttt{abba}}}}\\ \stateExtent(\regionalStateForm{\localState{1}{\lchar{\texttt{a}}} \localState{2}{\lchar{\texttt{a}}} \localState{3}{\lchar{\texttt{a}}} \localState{4}{\lchar{\texttt{a}}}})\syntaxEqualSymbol \stateExtent(\wstring{\lstr{\texttt{aaaa}}}) & = & \set{} \end{csarray} \]

### Comparing regional states

The regional states \(\regionalStatesSymbol\) naturally carry a **specificity** relation \(\regionalSubstateSymbol\) that captures when a regional state is more or less *specific* than another, defined in terms of melts as follows:

\[ \regionalStateSymbol{S}\regionalSubstateSymbol \regionalStateSymbol{T}\defEqualSymbol \stateDecompose(\regionalStateSymbol{S}) \subseteq \stateDecompose(\regionalStateSymbol{T}) \]

We will use the following terminology:

Notation | Meaning
---|---
\(\regionalStateSymbol{S}\regionalSuperstateSymbol \regionalStateSymbol{R}\) | \(\regionalStateSymbol{S}\) is more specific than \(\regionalStateSymbol{R}\)
\(\regionalStateSymbol{S}\regionalSubstateSymbol \regionalStateSymbol{T}\) | \(\regionalStateSymbol{S}\) is less specific than \(\regionalStateSymbol{T}\)
\(\regionalStateSymbol{R}\comparableRegionalStatesSymbol \regionalStateSymbol{S}\) | \(\regionalStateSymbol{R}\) is compatible with \(\regionalStateSymbol{S}\)
\(\regionalStateSymbol{R}\incomparableRegionalStatesSymbol \regionalStateSymbol{S}\) | \(\regionalStateSymbol{R}\) is incompatible with \(\regionalStateSymbol{S}\)

Note a slightly confusing aspect of this terminology: every state is (weakly) more specific than itself. We can say **strictly more specific**, etc., when we wish to exclude this case.

Two regional states are **compatible** if one is more specific than the other, in other words, if they are comparable in the partial order. Two regional states can also be **incompatible** – neither more nor less specific – when a local state in one is incompatible with a local state in the other.

#### Intuition

The terminology of specificity is helpful if we think about regional states as *filters* of global states. A more specific regional state (which has more constraints in the form of local states) will *filter out* *more* / *match* *fewer* global states. The least specific regional state of all is an *empty filter* with no constraints, which matches all global states.

This avoids the confusion of talking about "larger" or "smaller" regional states, since there is an inverse (*contravariant*) relationship between the *number* of local states in a regional state, and the *number* of global states it matches: increasing the number of local states in a regional state (making it more specific) decreases the number of global states that it matches. This inverse relationship makes a consistent interpretation of the term "larger" hard to remember.
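The filter intuition can be checked directly: adding constraints to a regional state never increases the number of matching global states. A Python sketch, again assuming the six global states of \(\rewritingSystem{R_2}\) and the illustrative dict encoding:

```python
GLOBAL_STATES = {"aabb", "abab", "abba", "baab", "baba", "bbaa"}  # assumed for R_2

def matches(region, g):
    """Does global state g satisfy every (position, char) constraint?"""
    return all(g[pos - 1] == char for pos, char in region.items())

def extent_size(region):
    """How many global states the regional state, viewed as a filter, lets through."""
    return sum(matches(region, g) for g in GLOBAL_STATES)

# More local states (more specific) => fewer matching global states.
```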

### Partial order

This specificity relation defines a **partial order** on \(\regionalStatesSymbol\), since it is reflexive and transitive, and if both \(\regionalStateSymbol{R}\regionalSubstateSymbol \regionalStateSymbol{S}\) and \(\regionalStateSymbol{S}\regionalSubstateSymbol \regionalStateSymbol{R}\) hold then they contain the same local states and are therefore equal as regional states.

We can visualize this partial order for our example \(\regionalStates{\rewritingSystem{R_2}}\) via its **Hasse diagram**:

Notice that the global states \(\globalStatesSymbol\) correspond to the maximal elements of this partial order. The unique minimal element of the entire semilattice is the *empty regional state* \(\emptyRegionalState \syntaxEqualSymbol \wstring{\lstr{\texttt{----}}}\), which is less specific than any regional state. This element is written \(\semilatticeBottom\) in an order-theoretic context, and is called the **bottom element**.

Notice that the local states \(\localStatesSymbol\) correspond to the minimal non-bottom elements of this order (known as the **atoms** of the semilattice). We can also identify the local states by way of the **covering relation**: an element \(\posetElementSymbol{x}\) is said to **cover** \(\posetElementSymbol{y}\) in a poset, written \(\posetElementSymbol{x}\posetCoversSymbol \posetElementSymbol{y}\), when both \(\posetElementSymbol{x}\posetGreaterSymbol \posetElementSymbol{y}\) and there does not exist another element \(\posetElementSymbol{z}\) such that \(\posetElementSymbol{x}\posetGreaterSymbol \posetElementSymbol{z}\posetGreaterSymbol \posetElementSymbol{y}\). The local states are then exactly the elements that **cover** the bottom element \(\semilatticeBottom\):

### Meet semilattice

The set of regional states \(\regionalStatesSymbol\) has the additional order-theoretic structure of a **meet semilattice**. What is a meet semilattice? It is a partially ordered set \(\tuple{\posetSymbol{X},\posetLessEqualSymbol }\) with an additional binary operation called **meet**: \(\functionSignature{\function{\semilatticeMeetSymbol }}{\tuple{\posetSymbol{X},\posetSymbol{X}}}{\posetSymbol{X}}\) that takes a pair of elements to their greatest lower bound under the partial order \(\posetLessEqualSymbol\). In other words, the meet is defined to be the following operation:

\[ \posetElementSymbol{x}\semilatticeMeetSymbol \posetElementSymbol{y}\defEqualSymbol \indexMax{\suchThat{\posetElementSymbol{z}}{\elemOf{\posetElementSymbol{z}}{\posetSymbol{X}},\posetElementSymbol{z}\posetLessEqualSymbol \posetElementSymbol{x},\posetElementSymbol{z}\posetLessEqualSymbol \posetElementSymbol{y}}}{}{} \]

This function must be well-defined, that is, there must be a unique such maximal element for a poset to be a meet semilattice (in a *total* order, the maximum is *always* unique, but a set of elements of a poset that is not a total order can have several such maxima, which are necessarily pairwise incomparable).

In the case of regional states, the partial order is given by containment \(\regionalSubstateSymbol\), and the meet \(\regionalStateSymbol{R}\semilatticeMeetSymbol \regionalStateSymbol{S}\) of two regional states is simply the disjunction \(\regionalStateSymbol{R}\infixStateMeetSymbol \regionalStateSymbol{S}\) we have already defined, which satisfies the required property by definition.

### Illustration of meets

Here we illustrate several meet operations in the regional state semilattice.

### Relation between extent and specificity

Phrased in terms of extent, then, we have the following implication:

\[ \regionalStateSymbol{S}\regionalSubstateSymbol \regionalStateSymbol{R}\implies \stateExtent(\regionalStateSymbol{R}) \subseteq \stateExtent(\regionalStateSymbol{S}) \]In other words, if \(\regionalStateSymbol{R}\) is more specific than \(\regionalStateSymbol{S}\), the extent of \(\regionalStateSymbol{R}\) is a subset of the extent of \(\regionalStateSymbol{S}\). This makes sense: making a filter more specific will necessarily cause fewer global states to match it. This makes the function \(\stateExtent\) into a **monotone** function between the poset \(\tuple{\regionalStatesSymbol ,\regionalSubstateSymbol }\) of regional states and the poset \(\tuple{\powerSet{\globalStatesSymbol }, \subseteq }\) of sets of global states.

### Intent

There is a natural "inverse" of the extent function, which we'll call **intent**. It maps a set of global states to a regional state that attempts to describe this set. The signature of \(\stateIntent\) is:

\[ \functionSignature{\stateIntent}{\powerSet{\globalStatesSymbol }}{\regionalStatesSymbol } \]

Formally, it is defined to be the most specific regional state whose extent includes the input set of global states:

\[ \stateIntent(\setSymbol{G})\defEqualSymbol \indexMax{\suchThat{\regionalStateSymbol{R}}{\elemOf{\regionalStateSymbol{R}}{\regionalStatesSymbol },\stateExtent(\regionalStateSymbol{R}) \supseteq \setSymbol{G}}}{}{} \]The intent is bounded below by the *disjunction* of the input global states (when they are embedded into the corresponding regional states), since this disjunction matches all the input global states and so participates in the above maximization.
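For string rewriting systems this maximization has a concrete description: the most specific regional state matching every input string is their common part, i.e. the disjunction of their embeddings. A Python sketch (the dict encoding of regional states is illustrative):

```python
def intent(global_states):
    """Intent of a nonempty set of strings of equal length: keep exactly
    the (1-based position, char) constraints shared by all of them."""
    first, *rest = global_states
    return {i + 1: c for i, c in enumerate(first)
            if all(g[i] == c for g in rest)}
```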

In some cases, the intent is exactly the inverse of the extent. This is true whenever we apply intent to a set of global states that can be described *exactly* as the extent of some regional state:

\[ \setSymbol{G} = \stateExtent(\regionalStateSymbol{R})\implies \stateExtent(\stateIntent(\setSymbol{G})) = \setSymbol{G} \]

As an example, consider the set \(\setSymbol{G} = \set{\qstring{\lstr{\texttt{aabb}}},\qstring{\lstr{\texttt{baba}}}}\). The intent of \(\setSymbol{G}\) is the regional state \(\wstring{\lstr{\texttt{-ab-}}}\), because specifying any characters beyond the middle two will prevent one or both of the elements of \(\setSymbol{G}\) from matching. Moreover, this regional state specifies exactly one \(\lchar{\texttt{a}}\) and one \(\lchar{\texttt{b}}\), and so there are only two ways of fixing the unspecified string positions, meaning the regional state has exactly the extent \(\setSymbol{G}\). Summarizing this situation:

\[ \begin{aligned} \setSymbol{G}&= \set{\qstring{\lstr{\texttt{aabb}}},\qstring{\lstr{\texttt{baba}}}}\\ \stateIntent(\setSymbol{G})&= \wstring{\lstr{\texttt{-ab-}}}\\[1em] \stateExtent(\stateIntent(\setSymbol{G}))&= \stateExtent(\wstring{\lstr{\texttt{-ab-}}})\\ &= \set{\qstring{\lstr{\texttt{aabb}}},\qstring{\lstr{\texttt{baba}}}}\\ &= \setSymbol{G}\end{aligned} \]However, not all sets of global states are of this form. In particular, if there is a pair of incompatible local states within a pair of global states in the input set, this forces a *blind spot* in any regional state whose extent includes this pair of global states, and this blind spot can let other global states through that are *not* in the input set.

A simple example of this situation for \(\rewritingSystem{R_2}\) occurs for the set \(\setSymbol{G} = \set{\qstring{\lstr{\texttt{aabb}}},\qstring{\lstr{\texttt{bbaa}}}}\). These two global states are *maximally incompatible*, in that the only regional state that matches both of them is the empty regional state: their melts have empty intersection, since they disagree at every string position. Hence, the intent of \(\setSymbol{G}\) is \(\emptyRegionalState \syntaxEqualSymbol \wstring{\lstr{\texttt{----}}}\), which in turn matches *all* global states:

Notice that while we do not have *equality* between \(\setSymbol{G}\) and \(\stateExtent(\stateIntent(\setSymbol{G}))\) for an arbitrary set \(\elemOf{\setSymbol{G}}{\powerSet{\globalStatesSymbol }}\), they *are* at least comparable:

\[ \setSymbol{G} \subseteq \stateExtent(\stateIntent(\setSymbol{G})) \]

In words: taking the extent of the intent of a set of global states can either keep the set the same or enlarge it with novel states.

### Galois connection

### Diagram

This diagram illustrates the functions between local, regional, and global states. A half-arrowhead indicates a partial function. We enumerate these functions here:

Function | Description
---|---
\(\functionSignature{\function{ \iota }}{\localStatesSymbol }{\regionalStatesSymbol }\) | embeds a local state as an atomic regional state
\(\functionSignature{\function{ \iota }}{\globalStatesSymbol }{\regionalStatesSymbol }\) | embeds a global state as a maximal regional state
\(\partialFunctionSignature{\stateCompose}{\powerSet{\localStatesSymbol }}{\regionalStatesSymbol }\) | glues a set of local states into a regional state
\(\functionSignature{\stateDecompose}{\regionalStatesSymbol }{\powerSet{\localStatesSymbol }}\) | melts a regional state into a set of compatible local states
\(\functionSignature{\stateExtent}{\regionalStatesSymbol }{\powerSet{\globalStatesSymbol }}\) | gives the set of global states matching a regional state
\(\functionSignature{\stateIntent}{\powerSet{\globalStatesSymbol }}{\regionalStatesSymbol }\) | gives the most specific regional state matching a set of global states
\(\partialFunctionSignature{\stateJoin}{\powerSet{\regionalStatesSymbol }}{\regionalStatesSymbol }\) | gives the least specific regional state more specific than a set of regional states
\(\functionSignature{\stateMeet}{\powerSet{\regionalStatesSymbol }}{\regionalStatesSymbol }\) | gives the most specific regional state less specific than a set of regional states

## Rewrite rules as schemas

## Generalizing linear algebra with atrices

## Petri nets

## Causal bigraph and causal hypergraph

## Compilation

### Rewriting systems to hypergraph rewriting systems

### Hypergraph rewriting systems to Petri nets

# Coverings

## Motivation

In [[[Path groupoids]]] we defined the **path groupoid** \(\pathGroupoid{\quiver{Q}}\) of a quiver \(\quiver{Q}\), and in [[[Path homomorphisms]]] we defined maps between the **path groupoids** of two quivers that are **groupoid homomorphisms**. We will now define a relationship between quivers called a **quiver covering**, which corresponds to a surjective path homomorphism. Certain coverings can be seen as **contractions** in which we *glue together* sets of vertices.

## Graph covers

### Graph homomorphisms

Before we can define quiver covers, we'll briefly introduce **graph covers** and **graph homomorphisms**.

Formally, a (directed) graph cover \(\functionSignature{\function{\graphHomomorphism{ \pi }}}{\graph{G}}{\graph{H}}\) is a **surjective graph homomorphism** \(\graphHomomorphism{ \pi }\) between directed graphs \(\graph{G}\) and \(\graph{H}\). \(\graph{G}\) is then called a **cover** of \(\graph{H}\), written \(\graph{G}\coversSymbol \graph{H}\). We'll also write \(\graphCovering{\graphHomomorphism{ \pi }}{\graph{G}}{\graph{H}}\) to explicitly indicate the homomorphism \(\graphHomomorphism{ \pi }\) involved. But what *is* a graph homomorphism?

The homomorphism \(\graphHomomorphism{ \pi }\) tells us how to "project" elements of \(\graph{G}\) onto elements of \(\graph{H}\), and consists of a map \(\functionSignature{\graphHomomorphism{ \pi _V}}{\vertexList(\graph{G})}{\vertexList(\graph{H})}\) and a map \(\functionSignature{\graphHomomorphism{ \pi _E}}{\edgeList(\graph{G})}{\edgeList(\graph{H})}\), subject to the fairly obvious compatibility condition that the vertices of the projection of an edge must be the projection of the vertices of that edge:

\[ \begin{array}{c} \graphHomomorphism{ \pi _E}(\de{\vert{g_1}}{\vert{g_2}}) = \de{\vert{h_1}}{\vert{h_2}}\\ \implies \\ \paren{\graphHomomorphism{ \pi _V}(\vert{g_1}) = \vert{h_1}}\andSymbol \paren{\graphHomomorphism{ \pi _V}(\vert{g_2}) = \vert{h_2}} \end{array} \]

### Graph covers

A graph cover \(\graph{G}\coversSymbol \graph{H}\) is a surjective graph homomorphism. Therefore one way we can depict a cover is by illustrating which vertices and edges of \(\graph{G}\) cover which vertices and edges of \(\graph{H}\). For small graphs, we will use additive color semantics to do this. In particular, if \(\graph{G}\) has three vertices, we will depict them as \(\rform{\filledToken }\), \(\gform{\filledToken }\), \(\bform{\filledToken }\). Then we'll show a vertex of \(\graph{H}\) with a color that is the additive blend of the primary colors of its preimage under \(\graphHomomorphism{ \pi _V}\). For example, \(\inverse{\graphHomomorphism{ \pi _V}}(\rgform{\filledToken }) = \list{\rform{\filledToken },\gform{\filledToken }}\), \(\inverse{\graphHomomorphism{ \pi _V}}(\wcform{\filledToken }) = \list{\rform{\filledToken },\gform{\filledToken },\bform{\filledToken }}\). Similarly we'll color the edges: \(\inverse{\graphHomomorphism{ \pi _E}}(\waform{\barToken }) = \list{\barToken ,\wcform{\barToken }}\).

Let's look at a very simple example of an *undirected* graph \(\graph{G}\), which we'll call the **partial triangle**, and a particular \(\graph{H}\) it covers:

The \(\rgform{\filledToken }\) vertex of \(\graph{H}\) is covered by the \(\rform{\filledToken }\) and \(\gform{\filledToken }\) vertices of \(\graph{G}\), and the \(\bform{\filledToken }\) vertex of \(\graph{H}\) by the \(\bform{\filledToken }\) vertex of \(\graph{G}\). We’ll refer to these pre-images as **contractions**, so that we say \(\rgform{\filledToken }\) is the **contraction** of \(\rform{\filledToken }\) and \(\gform{\filledToken }\). We've also shown dotted lines to their original positions to indicate that the \(\rform{\filledToken }\) and \(\gform{\filledToken }\) vertices were contracted together by this covering to form the \(\rgform{\filledToken }\) vertex. The two edges are mapped uniquely here, and so remain colored \(\barToken\) and \(\wcform{\barToken }\), hence there are no edge contractions. However, the edge between \(\rform{\filledToken }\) and \(\gform{\filledToken }\) now becomes a self-loop on \(\rgform{\filledToken }\).
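The compatibility condition is easy to check mechanically. Here is a minimal sketch, assuming graphs stored as dicts from edge ids to (tail, head) pairs; the data mirrors the partial-triangle covering just described (oriented for simplicity, with made-up identifiers):

```python
# Sketch: checking that a pair of maps (pi_V, pi_E) is a graph homomorphism.
# Graphs are dicts from edge id to (tail, head); names are illustrative.

def is_graph_homomorphism(edges_G, edges_H, pi_V, pi_E):
    """True iff projecting an edge's endpoints agrees with projecting the edge."""
    for e, (g1, g2) in edges_G.items():
        h1, h2 = edges_H[pi_E[e]]
        if (pi_V[g1], pi_V[g2]) != (h1, h2):
            return False
    return True

# Partial triangle G: vertices r, g, b with edges r->g and g->b.
edges_G = {"rg": ("r", "g"), "gb": ("g", "b")}
# H contracts r and g into one vertex "rg"; the r->g edge becomes a self-loop.
edges_H = {"loop": ("rg", "rg"), "rgb": ("rg", "b")}
pi_V = {"r": "rg", "g": "rg", "b": "b"}
pi_E = {"rg": "loop", "gb": "rgb"}

assert is_graph_homomorphism(edges_G, edges_H, pi_V, pi_E)
```

Since `pi_V` and `pi_E` here are also surjective onto the vertices and edges of \(\graph{H}\), this homomorphism is indeed a cover.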

### Covering partial order

Our partial triangle covers many smaller graphs. For finite graphs, it’s clear that the covered graph has the same or fewer vertices, so we can conclude that up to isomorphism there are only finitely many covered graphs of \(\graph{G}\). Furthermore, if \(\graph{G}\coversSymbol \graph{H}\coversSymbol \graph{J}\), it's clear that \(\graph{G}\coversSymbol \graph{J}\), since the composition of two surjective homomorphisms is another. Therefore, covering forms a partial order on finite graphs, and we can depict this in the form of a larger vertical graph, with an edge between two graphs if the higher one *strictly* covers the lower one – in other words a graph is connected to one below it if it covers it, is not isomorphic, and the covering is not implied by transitivity.

Here is the **covering** **partial order**, or synonymously the **contraction partial order**, for our example:

### Square

Let's consider the covering partial order for a "partial square" with 4 vertices and 3 edges. To reduce uninteresting complexity, we will avoid the explicit contraction of edges, and perform these contractions "greedily" whenever multiple edges exist between two vertices as a result of a vertex contraction:

Notice that as in the previous example, this partial order terminates in a single vertex and self-loop, the **1-graph**, which is covered by every other graph. This single vertex and its self-loop are gray, reflecting the fact that they are the color average of *all* of the original primary-colored vertices and edges.

## Quiver covers

What about quivers? How do we extend the notion of covering to the quiver setting? We saw that for graphs, we could effect any particular covering in a sequence of atomic steps, in which we contracted together pairs of vertices and edges. It turns out that this approach does not work for quivers, for the elementary reason that we can easily violate the local uniqueness property unless we perform multiple vertex contractions *simultaneously*.

#### Simultaneous contractions

Let's take a simple example to illustrate this idea. We'll focus on coverings of \(\bindCards{\subSize{\squareQuiver }{2}}{\rform{\card{r}},\bform{\card{b}}}\):

Here we illustrate a particular *graph* covering:

Here, we have performed the contraction \(\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\), where \(\contractionProductSymbol\) represents the contraction of two vertices. However the result is *no longer a quiver*, as the contracted vertex \(\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\) now has two incoming \(\rform{\card{r}}\) cardinals. So while this is a *graph* covering, it is not a *quiver* covering.

In contrast, here is a valid quiver covering, where we have performed the gluing \(\contractionSum{\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\contractionSumSymbol \contractionProduct{\vert{S}\contractionProductSymbol \vert{W}}}\), which consists of the simultaneous gluings \(\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\) and \(\contractionProduct{\vert{S}\contractionProductSymbol \vert{W}}\):

You might object that there should be *two* edges with the cardinal \(\rform{\card{r}}\) between the glued vertices \(\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\) and \(\contractionProduct{\vert{S}\contractionProductSymbol \vert{W}}\). We simply define these two edges to be contracted together automatically and implicitly, and in fact we will only allow edge contractions that are precisely of this "deduplicating" form.

The key point here is that the requirement of preserving the local uniqueness property is what makes gluings such as \(\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\) impossible to perform in isolation: it is only when we perform \(\contractionSum{\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\contractionSumSymbol \contractionProduct{\vert{S}\contractionProductSymbol \vert{W}}}\) simultaneously that we preserve local uniqueness.
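The local uniqueness check that separates these two gluings can be sketched concretely. Below I assume an illustrative diamond layout of the size-2 square quiver, with edges \(\vert{W}\to \vert{N}\) and \(\vert{S}\to \vert{E}\) carrying \(\rform{\card{r}}\), and \(\vert{W}\to \vert{S}\) and \(\vert{N}\to \vert{E}\) carrying \(\bform{\card{b}}\); the exact layout is a guess, but the LU phenomenon is the same.

```python
# Sketch: checking local uniqueness (LU) after contracting vertices.
# Edges are (tail, head, cardinal) triples; rep maps each vertex to the
# representative of its contraction class. Names are illustrative.
from collections import Counter

def contracts_to_quiver(edges, rep):
    """True iff contracting via rep leaves at most one incoming and one
    outgoing edge per cardinal at every vertex (parallel duplicates are
    deduplicated automatically, as described above)."""
    contracted = {(rep[t], rep[h], c) for t, h, c in edges}
    out_counts = Counter((t, c) for t, h, c in contracted)
    in_counts = Counter((h, c) for t, h, c in contracted)
    return all(n == 1 for n in out_counts.values()) and \
           all(n == 1 for n in in_counts.values())

edges = [("W", "N", "r"), ("S", "E", "r"), ("W", "S", "b"), ("N", "E", "b")]

rep1 = {"W": "W", "S": "S", "N": "NE", "E": "NE"}    # contract N·E alone
rep2 = {"W": "SW", "S": "SW", "N": "NE", "E": "NE"}  # N·E and S·W together

assert not contracts_to_quiver(edges, rep1)  # two r-edges now enter NE
assert contracts_to_quiver(edges, rep2)      # simultaneous gluing restores LU
```

Contracting \(\contractionProduct{\vert{N}\contractionProductSymbol \vert{E}}\) alone fails exactly as in the figure above, while performing \(\contractionProduct{\vert{S}\contractionProductSymbol \vert{W}}\) simultaneously passes the check.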

#### Contraction order

Let's compute the complete contraction order:

In the section [[[Contraction lattices]]] we will examine this object from an order-theoretic point of view, establishing the structure of an **order-theoretic lattice** (which is a different usage of the word "lattice" than that in [[[Lattice quivers]]]).

## Path homomorphisms

Let's now turn to the question of the connection between **path homomorphisms** and **quiver coverings**.

Pictured below is a quiver \(\graph{G}\), and 5 quivers that it covers: \(\graph{H_1},\graph{H_2},\graph{H_3},\graph{H_4},\graph{H_5}\).

These form a partial order as before, which looks like this:

Let's focus on a particular covering \(\graphCovering{\graphHomomorphism{ \pi }}{\quiver{G}}{\quiver{H_1}}\), which is associated with some graph (quiver) homomorphism \(\graphHomomorphism{ \pi } = \tuple{\graphHomomorphism{ \pi _E},\graphHomomorphism{ \pi _V}}\).

An alternative way of expressing a quiver homomorphism is via a **path homomorphism** \(\functionSignature{\pathHomomorphism{ \rho }}{\pathGroupoid{\graph{G}}}{\pathGroupoid{\graph{H_1}}}\). Why is this? I'll illustrate by providing a particular \(\pathHomomorphism{ \rho }\) that corresponds to \(\graphHomomorphism{ \pi }\), and illustrating its behavior with a path table:

As for any path homomorphism, empty paths are sent to empty paths, \(\pathHomomorphism{ \rho }(\paren{\pathWord{\vert{s}}{\emptyWord{}}{\vert{s}}}) = \paren{\pathWord{\vert{t}}{\emptyWord{}}{\vert{t}}}\), giving us the vertex covering \(\graphHomomorphism{ \pi _V}(\vert{s}) = \vert{t}\).

Additionally, this homomorphism is length-preserving: 1-paths are sent to 1-paths, \(\pathHomomorphism{ \rho }(\paren{\pathWord{\vert{u}}{\word{\card{c}}}{\vert{\primed{u}}}}) = \paren{\pathWord{\vert{v}}{\word{\card{c}}}{\vert{\primed{v}}}}\), giving us the edge covering \(\graphHomomorphism{ \pi _E}(\tde{\vert{u}}{\primed{u}}{\card{c}}) = \tde{\vert{v}}{\primed{v}}{\card{c}}\).

What about the compatibility condition between \(\graphHomomorphism{ \pi _E}\) and \(\graphHomomorphism{ \pi _V}\)? This is a consequence (easily checked) of the ordinary compatibility condition of path homomorphisms: \(\pathHomomorphism{ \rho }(\pathCompose{\path{P_1}}{\path{P_2}}) = \pathCompose{\pathHomomorphism{ \rho }(\path{P_1})}{\pathHomomorphism{ \rho }(\path{P_2})}\).

Ok, so coverings can be described by path homomorphisms. But are they uniquely so described? And do all path homomorphisms yield corresponding coverings?

#### Surjective path homomorphisms to coverings

To answer the latter question, let's answer a simpler question: can a path homomorphism send an empty path to a non-empty path? As we saw in [[[Path homomorphisms]]], the answer is *no*. Therefore we can always obtain a vertex cover \(\graphHomomorphism{ \pi _V}\) from a *surjective* path homomorphism, since this will guarantee that every vertex (= empty path) of the covered quiver has at least one corresponding vertex (= empty path) of the covering quiver.

Ok, so the \(\graphHomomorphism{ \pi _V}\) story seems clear. To know that we can obtain an edge cover \(\graphHomomorphism{ \pi _E}\) from \(\pathHomomorphism{ \rho }\), let's imagine how it could go wrong: can a 1-path be sent to e.g. a 2-path? Again, we saw examples of **path-lengthening** homomorphisms already in [[[Path homomorphisms]]], but here is a path table showing a simple example:

This path homomorphism is *not* surjective, of course: the 1-paths in \(\quiver{H}\) have no preimage under \(\pathHomomorphism{ \rho }\). However, we *can* construct a more elaborate *surjective* homomorphism that is still path-lengthening. Here is such an example:

Clearly, if the covering quiver is disconnected we have considerable freedom in defining path homomorphisms.

Another counterexample to the hypothesis that surjective path homomorphisms yield coverings is given by the following surjective **path shortening** homomorphism:

Here, in moving from \(\quiver{G}\) to \(\quiver{H}\), we have **contracted** the edge labeled with cardinal \(\gform{\card{g}}\), effectively deleting that cardinal from the path word of any path that passes through it. This is not a covering, because a covering must send edges of \(\quiver{G}\) to edges of \(\quiver{H}\).

The tentative answer then seems to be that without additional restrictions, a surjective path homomorphism does not automatically yield a covering. But the requirement we need is easily guessed: we must have a **length-preserving** path homomorphism, since this defines a unique (oriented) edge in the covered graph for every edge in the covering graph in a compatible way.

#### Coverings to surjective path homomorphisms

Does a covering \(\graphCovering{\function{ \pi }}{\quiver{G}}{\quiver{H}}\) induce a *unique* path homomorphism \(\pathHomomorphism{ \rho }\)? The answer here is more satisfying: yes! The reason of course is simple: the data of \(\graphHomomorphism{ \pi _E}\) and \(\graphHomomorphism{ \pi _V}\) does uniquely determine the behavior of \(\pathHomomorphism{ \rho }\) on 0-paths and 1-paths. And since all longer paths in \(\graph{G}\) can be constructed from compositions of 1-paths, and composition must be preserved by \(\pathHomomorphism{ \rho }\), we have no choice in how to define \(\pathHomomorphism{ \rho }\) on such longer paths. Hence, a covering yields (or *generates*) a path homomorphism.
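This uniqueness argument can be sketched in a few lines: since a covering preserves path words, the induced map can only project the tail vertex, and preservation of composition then forces its value on all longer paths. Paths are modeled here as (tail, cardinal word) pairs, and the toy `compose` does not check head/tail compatibility; all names are illustrative.

```python
# Sketch: the path homomorphism generated by the vertex map of a covering.

def induced_path_homomorphism(pi_V):
    """Project the tail vertex with pi_V and keep the cardinal word."""
    def rho(path):
        tail, word = path
        return (pi_V[tail], word)
    return rho

def compose(p1, p2):
    """Concatenate two composable paths (compatibility not checked here)."""
    return (p1[0], p1[1] + p2[1])

pi_V = {"N": "x", "E": "y", "S": "x", "W": "y"}   # a 2-to-1 vertex covering
rho = induced_path_homomorphism(pi_V)
p1, p2 = ("N", ("r",)), ("E", ("b",))

# rho automatically respects composition, so it is forced on longer paths:
assert rho(compose(p1, p2)) == compose(rho(p1), rho(p2))
```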

We write this as \(\quiverCovering{\pathHomomorphism{ \gamma }}{\quiver{G}}{\quiver{H}}\), and call \(\pathHomomorphism{ \gamma }\) a **covering homomorphism**.

## Path quiver homomorphism

A more subtle kind of path homomorphism has been latent since we introduced path quivers: the path homomorphisms from a quiver to its forward path quiver. To construct these, we need to map every path in the quiver to a path in the path quiver (a “path of paths”). Let’s start with the *easy* part: if the path in the base quiver starts at the origin, we form the path in the path quiver by taking the sequence of extensions that extend the empty path into the full path:

In this example, we’ve sent a path in the base quiver with path word \(\word{\gform{\card{g}}}{\gform{\card{g}}}{\rform{\card{r}}}\) to a path in the path quiver with path word \(\word{\gform{\card{g}}}{\gform{\card{g}}}{\rform{\card{r}}}\).

But to define a path homomorphism, we also need to figure out where to send paths which do *not* start at the origin. For the tree quiver example, this is straightforward: for a path \(\path{P}\) in the base quiver starting at \(\vert{t} \neq \vert{o}\), we choose a vertex in the path quiver corresponding to the path taking us from the origin \(\vert{o}\) to \(\vert{t}\), then use the path word as before:

In the case of trees, this path homomorphism from a quiver to its path quiver is trivial, because the path quivers of trees are already isomorphic to them. But it turns out a related construction will work for all finite connected quivers, not just trees. Let’s look at a simple base quiver that is not a tree:

Because there is a cycle \(\paren{\pathWord{\vert{1}}{\word{\rform{\card{r}}}{\bform{\card{b}}}{\rform{\ncard{r}}}{\bform{\ncard{b}}}}{\vert{1}}}\), the path quiver will be infinite, so we’ll only draw the subgraph of the path quiver in which this cycle is “wound” a maximum of one time. Choosing the origin as \(\vert{1}\), we again have no problem with the “easy” part of where to send paths starting at the origin, since they just have the same cardinal word. For example, the path \(\paren{\pathWord{\vert{1}}{\word{\bform{\card{b}}}{\rform{\card{r}}}{\bform{\card{b}}}}{\vert{6}}}\):

The “hard” part consists of dealing with paths like this:

To figure out where our path homomorphism \(\pathHomomorphism{ \rho }\) sends this path \(\path{P} = \paren{\pathWord{\vert{5}}{\word{\bform{\ncard{b}}}{\rform{\card{r}}}{\bform{\card{b}}}}{\vert{6}}}\), we must first choose a vertex \(\elemOf{\vert{t}}{\forwardPathQuiver{\quiver{Q}}{\vert{1}}}\) that corresponds to the *tail* vertex of \(\path{P}\), which is \(\vert{5}\). Now \(\pathHomomorphism{ \rho }(\vert{5})\) is a path with head vertex \(\vert{5}\). Once we’ve chosen \(\pathHomomorphism{ \rho }(\vert{t})\), there is nothing more to do, since the path word of \(\path{P}\) gives the path word of \(\pathHomomorphism{ \rho }(\path{P})\). However, whatever choice we make for \(\pathHomomorphism{ \rho }(\vert{5})\) must be compatible with other choices we make for other vertices in \(\quiver{Q}\). Luckily, this isn’t as tricky as it sounds. All we need to do is choose a **spanning tree** of the base quiver.

## Fundamental quiver

We now return to the situation of a lattice quiver \(\quiver{Q}\) generated by a fundamental quiver \(\bindCards{\quiver{F}}{\card{\card{c}_1},\card{\ellipsis },\card{\card{c}_{\sym{i}}}}\) with the representation associated with a **cardinal valuation** \(\functionSignature{\groupoidFunction{ \phi }}{\cardinalList(\quiver{Q})}{\group{G}}\). That \(\quiver{Q}\) is generated by \(\quiver{F}\) is the statement:

An important fact about lattice quivers is they cover their fundamental quivers: \(\graphCovering{\graphHomomorphism{ \pi }}{\quiver{Q}}{\quiver{F}}\). To this covering is associated a path homomorphism \(\pathHomomorphism{ \rho }\), which we use to define the covering as follows:

\[ \begin{aligned} \pathHomomorphism{ \rho }(\pathWord{\primed{\vert{t}}}{\wordSymbol{W}}{\primed{\vert{h}}})&\defEqualSymbol \pathWord{\vert{t}}{\wordSymbol{W}}{\vert{h}}\\ \primed{\vert{t}}&\defEqualSymbol \tuple{\vert{t},\groupoidElement{g}_1}\\ \primed{\vert{h}}&\defEqualSymbol \tuple{\vert{h},\groupoidElement{g}_2}\end{aligned} \]Here we use the fact that the vertices \(\elemOf{\primed{\vert{v}}}{\vertexList(\quiver{Q})}\) are precisely pairs \(\tuple{\vert{v},\groupoidElement{g}}\), where \(\elemOf{\vert{v}}{\vertexList(\quiver{F})}\) and \(\groupoidElement{g}\) is an element of the group associated with the cardinal valuation \(\groupoidFunction{ \phi }\).
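For intuition, this covering can be sketched as simply forgetting the group element attached to each vertex of the lattice quiver. Below, coordinate pairs in \(\mathbb{Z}^2\) stand in for the group elements, the fundamental quiver has a single vertex, and all names are illustrative:

```python
# Sketch: a lattice quiver covers its fundamental quiver by forgetting the
# group element attached to each vertex; the path word is unchanged.

def covering_path_homomorphism(path):
    """Project a path of the lattice quiver onto the fundamental quiver."""
    (t, g1), word, (h, g2) = path
    return (t, word, h)   # drop group elements; keep tail, word, head

# A path in the square lattice quiver: the fundamental quiver has a single
# vertex v, and traversing r then b translates the group element by (1, 1).
path = (("v", (0, 0)), ("r", "b"), ("v", (1, 1)))
assert covering_path_homomorphism(path) == ("v", ("r", "b"), "v")
```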

# Contraction lattices

## Recap

In the previous section [[[Coverings]]], we introduced the idea of a **quiver covering**. Starting from a single quiver \(\quiver{Q}\) we considered the **contraction order** of \(\quiver{Q}\), whose elements are **contracted quivers** in which sets of vertices of \(\quiver{Q}\) are identified with one another. In this section we will develop this idea further, turning the contraction order into a **contraction lattice**, and study the structure of this lattice for some of the basic transitive quivers. We will also outline the connections of this lattice with group theory.

Note: this section is largely incomplete. See [[[here:Summary and roadmap#Contraction lattices]]] for more information.

## Contraction sets and terms

Let's start with the quiver \(\bindCards{\subSize{\squareQuiver }{2}}{\rform{\card{r}},\bform{\card{b}}}\), shown below:

The contraction order for this quiver is shown below, where each contracted quiver is labeled with the corresponding **contraction set**, which we will define shortly:

A **contraction set** \(\sym{S}\) expresses a contraction of the original quiver in terms of a disjoint union of **contraction terms** \(\sym{T}_{\sym{i}}\):

Each term \(\sym{T}_{\sym{i}}\) represents a set of vertices that are contracted together. For example, the contracted quiver on the right is specified by the contraction set \(\contractionSum{\contractionProduct{\vert{E}\contractionProductSymbol \vert{S}}\contractionSumSymbol \contractionProduct{\vert{N}\contractionProductSymbol \vert{W}}}\), which consists of the disjoint contraction terms \(\contractionProduct{\vert{E}\contractionProductSymbol \vert{S}}\) and \(\contractionProduct{\vert{N}\contractionProductSymbol \vert{W}}\).

## Visualizing contractions

We'll introduce three ways to visualize the individual contracted quivers in the contraction lattice. To illustrate these visualizations, we'll use the same contractions of \(\bindCards{\subSize{\squareQuiver }{2}}{\rform{\card{r}},\bform{\card{b}}}\) that we introduced above:

The **clique visualization** depicts each contraction term as a clique of interconnected vertices. The **color visualization** draws vertices belonging to the same contraction term using distinct colors (note these colors have no connection to the colors of the cardinals), and uses white for vertices that are left uncontracted. The **graph visualization** shows the contracted graphs directly.

## Lattice structure

We defined the contraction order to be the partially ordered set of **contracted quivers** of some original quiver \(\quiver{Q}\). Next we will show how to attach the structure of an **order-theoretic lattice** to this poset, which we'll call the **contraction lattice** of \(\quiver{Q}\), written \(\contractionLattice{\quiver{Q}}\).

#### Lattices

First we give a superficial summary of lattices, but we recommend reading additional material about lattices to make the most of this section.

A lattice is a poset with two additional binary operations, known as **meet** \(\latticeMeetSymbol\) and **join** \(\latticeJoinSymbol\). These operations are associative, commutative, and idempotent, and play a role analogous to intersection and union of sets – in particular the subsets of a set \(\sym{X}\) form a lattice in which \(\latticeElementSymbol{A}\latticeMeetSymbol \latticeElementSymbol{B}\defEqualSymbol \latticeElementSymbol{A}\setIntersectionSymbol \latticeElementSymbol{B}\), \(\latticeElementSymbol{A}\latticeJoinSymbol \latticeElementSymbol{B}\defEqualSymbol \latticeElementSymbol{A}\setUnionSymbol \latticeElementSymbol{B}\), and \(\latticeElementSymbol{A} \le \latticeElementSymbol{B}\iff \latticeElementSymbol{A} \subseteq \latticeElementSymbol{B}\). A **bounded lattice** is a lattice with two distinguished elements called **top** \(\latticeTop\) and **bottom** \(\latticeBottom\), which are respectively greater than and less than all other elements, and serve as the identity for meet and join respectively.

#### Contractions as equivalence relations

Recall that \(\quiver{R}\) is a contraction quiver of \(\quiver{Q}\) if the vertices of \(\quiver{R}\) are contractions of vertices of \(\quiver{Q}\) that preserve the local uniqueness property. We can then see \(\quiver{R}\) as being uniquely specified by an equivalence relation on the vertices of \(\quiver{Q}\). We’ll write this relation as \(\isContractedIn{\vert{u}}{\vert{v}}{\quiver{R}}\), which is the statement that vertices \(\vert{u},\vert{v}\) of \(\quiver{Q}\) are considered contracted together in \(\quiver{R}\).

A little thought will reveal that a contraction \(\quiver{R}\) covers another contraction \(\quiver{S}\), written \(\quiver{R}\coversSymbol \quiver{S}\) (or in lattice terminology \(\quiver{R}\latticeGreaterEqualSymbol \quiver{S}\)) if and only if the relation for \(\quiver{R}\) is a **refinement** of the relation for \(\quiver{S}\):

(Note that we use the word "cover" here in the usual sense of a quiver covering, not in the lattice-theoretic sense of a cover, which has a different meaning.)

Ok, we've expressed the elements of the contraction order in terms of equivalence relations on sets, with the partial order induced by refinement of equivalence relations. The **finest** equivalence relation regards all vertices as distinct, corresponding to the **uncontraction** \(\quiver{Q}\), and the **coarsest** equivalence relation regards all vertices as contracted, corresponding to the **complete contraction**, given by a bouquet quiver with the same cardinals as \(\quiver{Q}\). These serve as the top and bottom of the contraction order: \(\latticeTop = \quiver{Q},\latticeBottom = \bouquetQuiver{\sym{k}}\).

#### Topped intersection structure

We now show that the set of vertex equivalence relations describing valid quiver contractions forms a family of subsets of the set of vertex pairs that is closed under intersection. A set of this form is called a **topped intersection structure**, and automatically gives this set the structure of a **bounded lattice**. The meet and join are defined as you would expect: the meet of two relations is the finest relation that is still coarser than both, and the join of two relations is the coarsest relation that is still finer than both.

First, we show that if we have two quiver contraction equivalence relations \(\contractedRelation{\quiver{R}},\contractedRelation{\quiver{S}}\), the set-theoretic intersection of these two relations (as sets of vertex pairs) \(\contractedRelation{\quiver{R} \, \quiver{S}}\defEqualSymbol \contractedRelation{\quiver{R}}\setIntersectionSymbol \contractedRelation{\quiver{S}}\) is again an equivalence relation. This is straightforward:

\[ \begin{aligned} \paren{\isContractedIn{\vert{u}}{\vert{v}}{\quiver{R} \, \quiver{S}}}\andSymbol \paren{\isContractedIn{\vert{v}}{\vert{w}}{\quiver{R} \, \quiver{S}}}& \implies \paren{\isContractedIn{\vert{u}}{\vert{v}}{\quiver{R}}}\andSymbol \paren{\isContractedIn{\vert{v}}{\vert{w}}{\quiver{R}}}\andSymbol \paren{\isContractedIn{\vert{u}}{\vert{v}}{\quiver{S}}}\andSymbol \paren{\isContractedIn{\vert{v}}{\vert{w}}{\quiver{S}}}\\ & \implies \paren{\isContractedIn{\vert{u}}{\vert{w}}{\quiver{R}}}\andSymbol \paren{\isContractedIn{\vert{u}}{\vert{w}}{\quiver{S}}}\\ & \implies \isContractedIn{\vert{u}}{\vert{w}}{\quiver{R} \, \quiver{S}}\end{aligned} \]It remains to be shown that this equivalence relation corresponds to a *quiver* contraction, meaning that it preserves LU. We use proof by contradiction: assume that we have a violation of LU witnessed by \(\tde{\vert{u}}{\vert{v}}{\card{c}},\tde{\vert{u}}{\vert{w}}{\card{c}},\isNotContractedIn{\vert{v}}{\vert{w}}{\quiver{R} \, \quiver{S}}\), where \(\vert{u},\vert{v},\vert{w}\) are arbitrary representatives of equivalence classes in the relation \(\contractedRelation{\quiver{R} \, \quiver{S}}\). Since \(\contractedRelation{\quiver{R}}\) and \(\contractedRelation{\quiver{S}}\) both **do** correspond to quiver contractions, we must have both \(\isContractedIn{\vert{v}}{\vert{w}}{\quiver{R}}\) and \(\isContractedIn{\vert{v}}{\vert{w}}{\quiver{S}}\) to *prevent* these being witnesses to violation of LU in \(\quiver{R}\) and \(\quiver{S}\) respectively. But this implies that \(\isContractedIn{\vert{v}}{\vert{w}}{\quiver{R} \, \quiver{S}}\) by definition, a contradiction.
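The transitivity step can be checked concretely. Here is a small sketch with relations represented as sets of ordered vertex pairs; the partitions below are illustrative and not tied to a particular quiver:

```python
# Sketch: the intersection of two equivalence relations (as sets of ordered
# vertex pairs) is again an equivalence relation.

def relation_from_blocks(blocks):
    """Build an equivalence relation (set of pairs) from partition blocks."""
    return {(u, v) for block in blocks for u in block for v in block}

def is_equivalence(rel, elements):
    reflexive = all((x, x) in rel for x in elements)
    symmetric = all((v, u) in rel for (u, v) in rel)
    transitive = all((u, w) in rel
                     for (u, v1) in rel for (v2, w) in rel if v1 == v2)
    return reflexive and symmetric and transitive

V = {"N", "E", "S", "W"}
R = relation_from_blocks([{"N", "E"}, {"S", "W"}])   # contracts N·E and S·W
S = relation_from_blocks([{"N", "E", "S"}, {"W"}])   # contracts N·E·S
RS = R & S                                           # set-theoretic intersection

assert is_equivalence(RS, V)
assert ("N", "E") in RS       # contracted in both relations
assert ("S", "W") not in RS   # contracted only in R
```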

#### Intuition

In terms of contractions, we can provide an intuitive summary of this situation as follows:

The meet \(\quiver{R}\latticeMeetSymbol \quiver{S}\) of two contracted quivers gives the contracted quiver that performs all the vertex contractions present in either, as well as the minimum number of additional contractions required to recover LU. In other words, it is the "minimal amount of contraction" that will unify both quivers.

The join \(\quiver{R}\latticeJoinSymbol \quiver{S}\) of two contracted quivers gives the contracted quiver that has undergone as *much* contraction as possible while remaining consistent with both of them: any additional contraction would contract a pair of vertices that is *not* contracted in one or the other.

## Minimal contraction sets

We now describe a simple algorithm to find the **minimal contraction sets** of a quiver \(\quiver{Q}\). These are “minimal” in the sense that they are *forced* by a contraction of only two vertices. We'll write this set as \(\minimalContractionSets(\quiver{Q})\), and the corresponding contracted quivers as \(\minimalContractions(\quiver{Q})\).

The algorithm starts by enumerating the set of all possible contraction terms containing two distinct vertices. Each 2-term serves as a "seed" for calculating a minimal contraction set that preserves local uniqueness (abbreviated LU). We first contract the vertices of the original quiver using the seed 2-term, and then test if this produces any duplicate incoming or outgoing cardinals, as these indicate a violation of LU for one or more cardinals. If there are no duplicates, we are done, and have a new minimal contraction set containing just that single 2-term, and we can move on to the next candidate term.

If not, we "chase" all duplicate incoming or outgoing cardinals, and so obtain one or more additional 2-terms that must be contracted to re-establish LU. This chasing process might have to repeat several times, since these additional contraction terms will eliminate the original violations of LU but could introduce new violations for the new terms. When we finally obtain a set of contraction terms that introduces no new LU violations, we are done: we have a minimal contraction set, and can proceed to the next candidate pair.
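The chase can be sketched with a union-find structure: merge the heads of duplicate outgoing cardinals and the tails of duplicate incoming ones until no violation remains. The 2×3 square quiver below (vertices 1–6, bottom row 1–3, with cardinal `r` pointing east and `b` north) is my illustrative guess at the numbering used in the figures.

```python
# Sketch: growing the minimal contraction set forced by a seed 2-term.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def minimal_contraction(edges, vertices, seed):
    """Chase LU violations from a seed pair until none remain."""
    parent = {v: v for v in vertices}
    parent[find(parent, seed[0])] = find(parent, seed[1])
    changed = True
    while changed:
        changed = False
        for (t1, h1, c1) in edges:
            for (t2, h2, c2) in edges:
                if c1 != c2:
                    continue
                # same contracted tail, same cardinal => heads must merge
                if find(parent, t1) == find(parent, t2) and \
                   find(parent, h1) != find(parent, h2):
                    parent[find(parent, h1)] = find(parent, h2)
                    changed = True
                # same contracted head, same cardinal => tails must merge
                if find(parent, h1) == find(parent, h2) and \
                   find(parent, t1) != find(parent, t2):
                    parent[find(parent, t1)] = find(parent, t2)
                    changed = True
    blocks = {}
    for v in vertices:
        blocks.setdefault(find(parent, v), set()).add(v)
    # report only the non-trivial contraction terms
    return {frozenset(b) for b in blocks.values() if len(b) > 1}

edges = [(1, 2, "r"), (2, 3, "r"), (4, 5, "r"), (5, 6, "r"),
         (1, 4, "b"), (2, 5, "b"), (3, 6, "b")]

# Seeding with 1·2 chases r-duplicates along both rows:
assert minimal_contraction(edges, range(1, 7), (1, 2)) == \
       {frozenset({1, 2, 3}), frozenset({4, 5, 6})}
```

Seeding with a vertical pair instead collapses the three columns, giving the other "roll-up" of the quiver.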

#### Example

We'll use \(\bindSize{\squareQuiver }{2,3}\) to illustrate this process:

Here we show a seed term \(\contractionProduct{1\contractionProductSymbol 2}\) and the successive steps of the "growing" of the seed, each time chasing outgoing \(\rform{\card{r}}\)-duplicates:

Here we show the sequence for a different seed term \(\contractionProduct{1\contractionProductSymbol 3}\):

The sequence for \(\contractionProduct{2\contractionProductSymbol 3}\):

Notice that if we start at the seed \(\contractionProduct{4\contractionProductSymbol 5}\), we obtain the same final contraction:

With a little thought it becomes clear that we will grow the same contraction set when starting from *any* of its terms. This implies that our enumeration algorithm can avoid this redundancy by skipping any candidate 2-terms that have already appeared as a 2-term of any previously generated contraction set.

## Some simple minimal contractions

To gain intuition, we'll explore the minimal contraction sets \(\minimalContractionSets(\quiver{Q})\) for the line, cycle, square, and square tori quivers. These will turn out to be straightforward to understand, and in fact will give us a good initial understanding of the nature of the contraction lattices for these particular quivers.

### Cycle quiver

Here we show \(\minimalContractionSets(\subSize{\cycleQuiver }{12})\), using color visualization, and below each, the corresponding minimal contracted quivers \(\minimalContractions(\subSize{\cycleQuiver }{12})\):

We've also labeled each of the contracted quivers with its "closed form", that is, a named quiver that is isomorphic to the contracted quiver. These are the cycle quivers whose sizes are the prime-power divisors of 12, plus the "degenerate prime" \(\subSize{\cycleQuiver }{1}\). This turns out to be true for the minimal contraction sets of \(\subSize{\cycleQuiver }{\sym{n}}\) for arbitrary \(\sym{n}\), yielding the theorem:

\[ \minimalContractions(\subSize{\cycleQuiver }{\sym{n}}) = \setConstructor{\subSize{\cycleQuiver }{\power{\sym{p}}{\sym{m}}}}{\elemOf{\sym{m}}{\mathbb{N}^+},\elemOf{\sym{p}}{\mathbb{P}},\power{\sym{p}}{\sym{m}}\dividesSymbol \sym{n}}\setUnionSymbol \list{\subSize{\cycleQuiver }{1}} \]Here \(\mathbb{P}\) stands for the domain of the prime numbers.

### Line quiver

Here we show \(\minimalContractionSets(\subSize{\lineQuiver }{7})\):

The minimal contractions here are the ways to glue the two ends of the line together with more or less overlap, yielding cycle quivers. Again, the pattern generalizes, yielding the following theorem:

\[ \minimalContractions(\subSize{\lineQuiver }{\sym{n}}) = \setConstructor{\subSize{\cycleQuiver }{\sym{i}}}{1 \le \sym{i}<\sym{n}} \]

### Square quiver

We'll start with \(\bindSize{\squareQuiver }{3,2}\), the example we worked through earlier:

Here is \(\minimalContractionSets(\bindSize{\squareQuiver }{3,2})\):

Notice that the first 5 contractions have intuitive interpretations. The \(1^{\textrm{st}}\), \(2^{\textrm{nd}}\) and \(5^{\textrm{th}}\) have "rolled up" the square quiver to form toroidal lattices in either the \(\rform{\card{r}}\) or \(\bform{\card{b}}\) axis. The \(3^{\textrm{rd}}\) and \(4^{\textrm{th}}\) have "projected" the 2D square lattice into a 1D lattice with the two possible "parities". To explain the final two contractions, we will consider the larger example of \(\bindSize{\squareQuiver }{3,3}:\)

The twelve generating contraction sets are shown below using color visualization:

As before, we obtain several toroidal contractions and two linear contractions. A little thought shows that we can compose the remaining, mysterious contractions with each other to obtain sheared toroidal contractions with shear factors of \(1\) and \(-1\) along either the \(\rform{\card{r}}\) or \(\bform{\card{b}}\) axes.

### Square tori

Earlier, we saw that a line quiver contracted to cycle quivers, which themselves contracted to other cycle quivers according to their prime power factors. Similarly, the square quiver contracted to line quivers and (possibly sheared) square tori, which are the 2D analogues of the cycle quivers. So do the square tori contract to other square tori?

To answer this question, we compute the minimal contractions for some small square tori:

The answer here is a provisional yes. Eliding cardinal assignments for brevity, we have the following closed forms, given in the same order as the above diagrams:

\[ \begin{aligned} \minimalContractions(\bindSize{\toroidalModifier{\squareQuiver }}{3,3})&= \list{\bindSize{\lineQuiver }{3},\bindSize{\lineQuiver }{3},\bindSize{\toroidalModifier{\squareQuiver }}{3,1},\bindSize{\toroidalModifier{\squareQuiver }}{1,3}}\\ \minimalContractions(\bindSize{\toroidalModifier{\squareQuiver }}{3,4})&= \list{\bindSize{\cycleQuiver }{1},\bindSize{\toroidalModifier{\squareQuiver }}{1,2},\bindSize{\toroidalModifier{\squareQuiver }}{3,1},\bindSize{\toroidalModifier{\squareQuiver }}{2,4},\bindSize{\toroidalModifier{\squareQuiver }}{3,2}}\\ \minimalContractions(\bindSize{\toroidalModifier{\squareQuiver }}{4,4})&= \list{\bindSize{\cycleQuiver }{4},\bindSize{\toroidalModifier{\squareQuiver }}{2_1,2},\bindSize{\toroidalModifier{\squareQuiver }}{2,2_1},\bindSize{\toroidalModifier{\squareQuiver }}{4,1},\bindSize{\toroidalModifier{\squareQuiver }}{1,4},\bindSize{\toroidalModifier{\squareQuiver }}{4,2},\bindSize{\toroidalModifier{\squareQuiver }}{2,4},\bindSize{\toroidalModifier{\squareQuiver }}{2_2,2}}\end{aligned} \]

## Obtaining the contraction lattice

Having obtained the set of minimal contraction sets for a quiver, how do we go about generating the entire contraction lattice?

To gain intuition for this problem, let's look at the lattice for the quiver \(\bindSize{\squareQuiver }{2,3}\) we looked at initially. Here we highlight the vertices that correspond to the minimal contraction sets:

The minimal contractions here seem to be precisely the elements that are covered by a *single* other element. Lattice elements with this property are known as **meet-irreducible** in order theory. Meet-irreducible elements have the further property that for complete lattices, *every* element can be expressed as the meet of meet-irreducible elements (although not necessarily in a unique way).
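Picking out meet-irreducible elements is mechanical once we have the Hasse diagram of a finite lattice. A Python sketch (names hypothetical), using the divisor lattice of 12 as a stand-in example:

```python
def meet_irreducibles(covers):
    """Elements covered by exactly one other element.

    `covers` maps each element x to the set of elements that cover x,
    i.e. x's immediate parents in the Hasse diagram."""
    return {x for x, parents in covers.items() if len(parents) == 1}

# Hasse diagram of the divisor lattice of 12 (a stand-in example):
# the parents of x are its immediate multiples among the divisors.
covers = {1: {2, 3}, 2: {4, 6}, 3: {6}, 4: {12}, 6: {12}, 12: set()}
```

Here `meet_irreducibles(covers)` picks out 3, 4, and 6, each of which sits below exactly one other divisor.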

The non-uniqueness of this decomposition can be seen in the example above. The rightmost element on the third row can be formed by taking the meet of any 2 of its 3 parents (or all three). The situation can be even more complicated: the left element on the penultimate row has 4 meet-irreducible parents. There are 5 ways of taking the meet of 2 of these parents to form it, but there is *also* 1 way of taking the meet of 2 of its parents (the left 2) that does *not* form it. Taking the meet of any 3 of its parents is sufficient to form it, however.

Nevertheless, this connection with meet-irreducibility suggests a plain if inelegant way to generate all other contraction sets: we can form all possible sums of the meet-irreducible contraction sets. This corresponds to forming a valid contraction \(\quiver{R}\coveredBySymbol \quiver{Q}\) as the meet of every minimal contraction generated by all pairs of contracted vertices in \(\quiver{R}\).
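A Python sketch of this inelegant strategy, representing contraction sets as vertex partitions and their sums as partition joins. This ignores the quiver-specific validity checks, and all names are hypothetical:

```python
from itertools import combinations

def join_partitions(vertices, partitions):
    """The 'sum' of contraction sets: the finest partition coarser than
    all the given ones, computed by union-find merging."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for part in partitions:
        for block in part:
            block = list(block)
            for b in block[1:]:
                parent[find(block[0])] = find(b)
    groups = {}
    for v in vertices:
        groups.setdefault(find(v), set()).add(v)
    return frozenset(frozenset(g) for g in groups.values())

def lattice_elements(vertices, minimal_partitions):
    """All contraction sets formed as sums of the minimal ones, plus
    the trivial contraction (every vertex in its own block)."""
    out = {frozenset(frozenset({v}) for v in vertices)}
    for r in range(1, len(minimal_partitions) + 1):
        for subset in combinations(minimal_partitions, r):
            out.add(join_partitions(vertices, subset))
    return out

# A toy example: two minimal contraction sets on 4 vertices.
V = [0, 1, 2, 3]
P1 = [{0, 1}, {2}, {3}]
P2 = [{0}, {1}, {2, 3}]
elems = lattice_elements(V, [P1, P2])
```

In the toy example the closure contains four elements: the trivial partition, the two minimal ones, and their join.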

# Vertex colorings

## Introduction

Returning to colorings, we are now better equipped to understand how colorings correspond to coverings. We say a quiver covering \(\functionSignature{\graphHomomorphism{ \pi }}{\graph{G}}{\graph{H}}\,\)**induces a coloring** of \(\graph{G}\), where the “colors” are just the identities of vertices in \(\graph{H}\). In other words, the color of a vertex \(\elemOf{\vert{g}}{\vertexList(\graph{G})}\) is simply *which* vertex \(\elemOf{\vert{h}}{\vertexList(\graph{H})}\) it is sent to: specifically \(\graphHomomorphism{ \pi _V}(\vert{g})\). We can of course only use visible colors to illustrate this idea when \(\graph{H}\) has a finite number of vertices.
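In code, an induced coloring is nothing more than the vertex component of the covering map. A small Python sketch, using a hypothetical example of the 6-cycle covering the 3-cycle:

```python
def induced_coloring(pi_v):
    """The coloring of G induced by a covering pi: G -> H.
    The 'color' of a vertex g of G is simply the vertex of H
    that the vertex component of pi sends it to."""
    return {g: h for g, h in pi_v.items()}

# Hypothetical example: the 6-cycle covering the 3-cycle, g -> g mod 3.
pi_v = {g: g % 3 for g in range(6)}
colors = induced_coloring(pi_v)
```

The six vertices receive three colors, one per vertex of the covered quiver.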

What makes this covering perspective on colorings interesting is that there is a fairly mechanical process we can use to construct finite coverings of *any* fundamental quiver. These coverings will continue to generate the same lattice quiver as the original fundamental quiver, but will yield colorings of the lattice quiver containing more colors. We’ll see later that the families of colorings given by fundamental coverings are in a sense discrete *harmonics* of a particular lattice quiver, generated by discrete **partial difference equations**.

## Line lattice

Let’s look first at the 1-dimensional **line lattice**.

The fundamental coloring given by the simplest fundamental quiver, the bouquet quiver, is a 1-coloring (or constant coloring), since the bouquet only has one vertex:

Here’s the 2-coloring, given by the fundamental quiver that simply alternates between the two colors:

Higher colorings are given by cycle quivers with more vertices, where the cardinal simply cycles among the colors (the 1- and 2-colorings can be seen as instances of these):

These are the only fundamental colorings of the line lattice, since there is no remaining freedom for how to choose cardinals once the number of vertices in the fundamental quiver is decided.
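A minimal Python sketch of these fundamental colorings, which color integer position \(i\) of the line lattice by \(i \bmod n\) (note that Python's modulo already maps negative positions correctly):

```python
def line_lattice_coloring(n):
    """Fundamental n-coloring of the line lattice: the covering onto
    the n-vertex cycle quiver colors integer position i by i mod n."""
    return lambda i: i % n

# Sample a window of positions around the origin with the 3-coloring.
c3 = line_lattice_coloring(3)
window = [c3(i) for i in range(-3, 4)]
```

The sampled window cycles through the three colors with period 3, as in the diagrams above.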

## Square lattice

The square lattice brings with it more interesting behavior, since the two cardinals can interact in various ways. But let’s start with the 1-coloring given by the simplest fundamental quiver:

With two colors, we obtain “stripes” of constant color that are horizontally, vertically, or diagonally oriented:

3-colorings display a similar pattern, except the diagonal stripes can now be oriented in two different ways (whereas in the case of 2 colors, both of these orientations yield the same coloring):

4-colorings repeat these motifs, but include a third kind of behavior we haven’t encountered before, in which fixing a color yields a “decimated” form of the original square lattice at one-quarter resolution:

One way we can view these colorings is as embedding sub-lattices in the lattice quiver. The idea is to choose a color in the fundamental quiver and consider the set of (shortest) paths that begin and end at this vertex. The words for these paths then act as the primitive cardinals for the sub-lattice. The first four colorings above yield the line lattice by choosing e.g. blue, and the path words \(\word{\card{x}}\), \(\word{\card{y}}\), and \(\word{\card{x}}{\ncard{y}}\) respectively. The fifth coloring yields the square lattice, via blue and the path words \(\list{\word{\card{x}}{\card{x}},\word{\card{y}}{\card{y}}}\).

Let's call this fifth coloring a 1-decimation, since it is the smallest possible decimation. A 2-decimation is shown below, with path words \(\list{\word{\card{x}}{\card{x}}{\card{x}},\word{\card{y}}{\card{y}}{\card{y}}}\):

## Triangular lattice

Let’s now turn to colorings of the triangular lattice. As before, we’ll start with the 1-coloring induced by the simplest fundamental quiver:

The two-colorings are the familiar oriented stripes, which can now occur along the three cardinal axes. Interestingly, there is no equivalent of the diagonal stripes we saw in the square case:

The 3-colorings do include a new motif that can be seen either as 1-decimation of the triangular lattice, or as a diagonal stripe that is half-way between the orientations of the 3 cardinals. Again, as in the square case, this single coloring simultaneously yields all 3 orientations of this diagonal in a “degenerate” way:

As we saw before in the square case, moving to 4-colorings causes the single degenerate diagonal coloring to split into oriented versions:

With 5-colorings, this situation repeats unchanged:

Jumping ahead to 9-colorings, we obtain a 2-decimation of the triangular lattice to itself:

This 2-decimation is in fact almost identical to the 2-decimation of the square lattice we saw earlier, thanks to the fact that a triangular lattice can be obtained from a square lattice by introducing additional edges that connect each vertex \(\vert{v}\) to the vertex \(\vert{w}\) via path \(\pathWord{\vert{v}}{\word{\card{x}}{\card{y}}}{\vert{w}}\).

It’s not hard to see that for both triangular and square lattices, an \(n\)-decimation has \((n+1)^2\) colors.
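We can check this count with a short Python sketch that colors square-lattice positions by their coordinates modulo \(n+1\) (a simplified model of the decimation coloring; names hypothetical):

```python
def decimation_coloring(n):
    """Simplified model of the n-decimation coloring of the square
    lattice: color each position by its coordinates modulo n + 1."""
    return lambda x, y: (x % (n + 1), y % (n + 1))

def color_count(n, extent=10):
    """Count the distinct colors on an extent-by-extent patch."""
    color = decimation_coloring(n)
    return len({color(x, y) for x in range(extent) for y in range(extent)})
```

For \(n = 1, 2, 3\) this gives \(4, 9, 16\) colors, in line with the \((n+1)^2\) formula.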

## Hexagonal lattice

The hexagonal lattice, being the first lattice with a non-bouquet fundamental quiver, generally requires many more colors in its fundamental quivers, making its analysis more cumbersome.

Let's start with the smallest fundamental quiver, which is already a two-coloring:

There are no 3-colorings. 4-colorings produce “mixed diagonal” stripes composed of two colors and two cardinals taken in alternation:

There is a simple connection between \(n\)-colorings of the hexagonal lattice and \((n+1)\)-colorings of the triangular lattice: observe that the hexagonal lattice can be obtained from the triangular lattice by deleting a certain set of vertices, leaving a triangular pattern of “holes”. We can achieve this by deleting one or more vertices from the fundamental quiver of a triangular lattice if those vertices yield the right triangular pattern of holes. This will give a fundamental quiver that generates the hexagonal lattice, and hence a hexagonal lattice coloring. For example, the 2-coloring shown above can be obtained from the 1-decimation of the triangular lattice by deleting the green fundamental vertex:

Similarly, we can obtain the hexagonal 4-colorings by deleting a color from the triangular 5-colorings. Here we show one case:

# Quiver products

## Motivation

In this section we will consider how we can form **product spaces** in quiver geometry, which we call **quiver products**: quivers constructed as products of two or more factor quivers. Quiver products will be important in a later section, where we will need them to define the local trivializations used in [[[fibre bundles:Fiber bundles]]], but they'll also give us a new perspective on [[[transitive quivers:Transitive quivers]]].

## Quiver products

What is a quiver product? Speaking informally, a quiver product between two quivers \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\) is another quiver \(\quiver{\rbform{Q}}\) with the following properties:

\[ \begin{aligned} \vertexList(\quiver{\rbform{Q}})&= \vertexList(\quiver{\rform{R}})\cartesianProductSymbol \vertexList(\quiver{\bform{S}})\\ \edgeList(\quiver{\rbform{Q}})&\subseteq \pathList(\quiver{\rform{R}})\cartesianProductSymbol \pathList(\quiver{\bform{S}})\end{aligned} \]There are many possible products \(\quiver{\rbform{Q}}\), corresponding to systematic ways to choose the subsets of the Cartesian product \(\pathList(\quiver{\rform{R}})\cartesianProductSymbol \pathList(\quiver{\bform{S}})\). We will gradually introduce the formalism to do this.

Of course, we must *also* attach a cardinal structure to the product \(\quiver{\rbform{Q}}\), defining how the so-called **product cardinals** label its edges.

#### Terminology

To talk clearly about the vertices, edges, and cardinals of \(\quiver{\rbform{Q}}\), and contrast them with the vertices, edges, and cardinals of \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\), we will use the following terms:

term | meaning |
---|---|

factor quiver | \(\quiver{\rform{R}}\) or \(\quiver{\bform{S}}\) |

factor vertex | vertex of \(\quiver{\rform{R}}\) or \(\quiver{\bform{S}}\) |

factor transition | path in \(\quiver{\rform{R}}\) or \(\quiver{\bform{S}}\) |

factor cardinal | cardinal of \(\quiver{\rform{R}}\) or \(\quiver{\bform{S}}\) |

These ingredients will be combined to form the vertices, edges, and cardinals of \(\quiver{\rbform{Q}}\), as follows:

term | meaning | construction |
---|---|---|

product quiver | \(\quiver{\rbform{Q}}\) | constructed from factor quivers |

product vertex | vertex of \(\quiver{\rbform{Q}}\) | constructed from tuples of factor vertices |

product cardinal | cardinal of \(\quiver{\rbform{Q}}\) | constructed from tuples of factor transitions |

We will use \(\vertexProduct{\vert{\rform{u}},\vert{\bform{v}}}\) to represent the product vertex consisting of factor vertices \(\vert{\rform{u}}\) and \(\vert{\bform{v}}\) from factor quivers \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\), and similarly \(\cardinalProduct{\rform{\card{r}},\bform{\card{b}}}\) to represent the product cardinal consisting of factor cardinals \(\rform{\card{r}}\) and \(\bform{\card{b}}\).

#### Color

To ease the notational burden and avoid unnecessary use of subscripts, we'll use color to associate vertices, edges, and cardinals with their corresponding graphs in a product. For example, for quivers \(\quiver{\rform{R}},\quiver{\bform{S}}\) we'll refer generically to particular vertices belonging to \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\) with symbols \(\vert{\rform{r}}\) and \(\vert{\bform{s}}\). Likewise we'll refer to particular edges with \(\edge{\rform{e}},\edge{\bform{f}}\), and to particular cardinals with \(\card{\rform{c}},\card{\bform{d}}\).

We'll *also* use color to relate the product vertices with factor vertices, similar to the role of color in our explanation of [[[coverings:Coverings]]]. Using this convention, a product vertex such as \(\vertexProduct{\vert{\rform{\filledToken }},\vert{\gform{\filledToken }}}\) is displayed as the additive color blend of the two factor vertex colors, giving \(\vert{\rgform{\filledToken }}\). Similarly, the product cardinal \(\cardinalProduct{\rform{\card{r}},\gform{\card{g}}}\) will be displayed with the colored arrowhead \(\rgform{\arrowhead }\).

### Transitions

We will now define a large family of binary quiver products by choosing how to combine *transitions* in the factor quivers \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\) into product edges. Transitions in the factor quivers are *paths*, rather than edges. For the products defined in this section, we will consider paths of length at most 1.

As we stated above, the vertices of \(\quiver{\rbform{Q}}\) are all the possible pairs \(\vertexProduct{\vert{\rform{u}},\vert{\bform{v}}}\), where \(\vert{\rform{u}}\) is a vertex from \(\quiver{\rform{R}}\) and \(\vert{\bform{v}}\) is a vertex from \(\quiver{\bform{S}}\).

Our only freedom in defining a product then comes from how we construct the edges of \(\quiver{\rbform{Q}}\) from the paths in \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\), which we will call *transitions*. We will construct product edges from pairs of *transitions*, one from \(\quiver{\rform{R}}\) and one from \(\quiver{\bform{S}}\), where the \(\quiver{\rform{R}}\)-transition and \(\quiver{\bform{S}}\)-transition are each restricted to one of three types: forward, backward, and neutral:

- A **forward** \(\quiver{\rform{R}}\)-transition is a 1-path in \(\quiver{\rform{R}}\) containing a forward (non-inverted) cardinal \(\rform{\card{r}}\).
- A **backward** \(\quiver{\rform{R}}\)-transition is a 1-path in \(\quiver{\rform{R}}\) containing a backward (inverted) cardinal \(\rform{\inverted{\card{r}}}\).
- A **neutral** \(\quiver{\rform{R}}\)-transition is a 0-path in \(\quiver{\rform{R}}\), labeled with the special factor cardinal \(\rform{\card{1}}\) that is not part of the cardinal set of \(\quiver{\rform{R}}\).

The same definitions are used for \(\quiver{\bform{S}}\)-transitions, of course.

Here are the 9 possible product edges we can form that correspond to pairs of these transitions. These will form the *primitive constructors* that will generate the edges of the quiver product. Any combination of these constructors will yield a distinct quiver product.

Here we have laid out the product edges *vertically*, so that the left and right factors' vertices and cardinals now appear on the top and bottom respectively:

Let's examine some constructors from this table to understand them better.

The top left entry in the table above is \(\tde{\vertexProduct{\vert{\rform{t}},\vert{\bform{t}}}}{\vertexProduct{\vert{\rform{h}},\vert{\bform{h}}}}{\cardinalProduct{\rform{\card{r}},\bform{\card{b}}}}\), a product edge constructor that combines two factor transitions \(\parenPathWord{\vert{\rform{t}}}{\rform{\word{\card{r}}}}{\vert{\rform{h}}}\) and \(\parenPathWord{\vert{\bform{t}}}{\bform{\word{\card{b}}}}{\vert{\bform{h}}}\). This product edge is labeled with a product cardinal \(\cardinalProduct{\rform{\card{r}},\bform{\card{b}}}\) which expresses that these transitions involve forward (non-inverted) cardinals \(\rform{\card{r}}\) and \(\bform{\card{b}}\).

The bottom right entry is \(\tde{\vertexProduct{\vert{\rform{h}},\vert{\bform{h}}}}{\vertexProduct{\vert{\rform{t}},\vert{\bform{t}}}}{\cardinalProduct{\rform{\inverted{\card{r}}},\bform{\inverted{\card{b}}}}}\), a product edge constructor formed from two factor transitions \(\parenPathWord{\vert{\rform{h}}}{\rform{\word{\inverted{\card{r}}}}}{\vert{\rform{t}}}\) and \(\parenPathWord{\vert{\bform{h}}}{\bform{\word{\inverted{\card{b}}}}}{\vert{\bform{t}}}\), which correspond to edges from \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\) that have been traversed *backwards*. Again, the product cardinal \(\cardinalProduct{\rform{\inverted{\card{r}}},\bform{\inverted{\card{b}}}}\) expresses this fact.

Now consider the top-right entry \(\tde{\vertexProduct{\vert{\rform{t}},\vert{\bform{h}}}}{\vertexProduct{\vert{\rform{h}},\vert{\bform{t}}}}{\cardinalProduct{\rform{\card{r}},\bform{\inverted{\card{b}}}}}\). Here we have a forward factor transition \(\parenPathWord{\vert{\rform{t}}}{\rform{\word{\card{r}}}}{\vert{\rform{h}}}\) from \(\quiver{\rform{R}}\) and a backward factor transition \(\parenPathWord{\vert{\bform{h}}}{\bform{\word{\inverted{\card{b}}}}}{\vert{\bform{t}}}\) from \(\quiver{\bform{S}}\), hence the product cardinal \(\cardinalProduct{\rform{\card{r}},\bform{\inverted{\card{b}}}}\).

The top middle entry is \(\tde{\vertexProduct{\vert{\rform{t}},\vert{\bform{b}}}}{\vertexProduct{\vert{\rform{h}},\vert{\bform{b}}}}{\cardinalProduct{\rform{\card{r}},\bform{\card{1}}}}\), which corresponds to a forward factor transition \(\parenPathWord{\vert{\rform{t}}}{\rform{\word{\card{r}}}}{\vert{\rform{h}}}\), and an empty factor transition \(\parenPathWord{\vert{\bform{b}}}{\emptyWord{}}{\vert{\bform{b}}}\).

These constructors tell us how to *manufacture* product edges from the factor quivers. To understand the situation a bit better, we will now visualize the product quivers these constructors produce.

### Visualizing graph products

We can visualize the graph product in ways that help clarify the product structure. We'll use two methods to do this:

We will color vertices of the product by additive blending of the constituent vertices. For example, the product vertex \(\vertexProduct{\vert{\rform{\filledToken }},\vert{\gform{\filledToken }}}\) will be displayed as \(\vert{\rgform{\filledToken }}\), and \(\vertexProduct{\vert{\bform{\filledToken }},\vert{\emptyToken }}\) as \(\vert{\lbform{\filledToken }}\).

For graphs that can be laid out in one dimension, we will derive the \(\sym{y}\) coordinate of \(\vertexProduct{\vert{u},\vert{v}}\) from the one-dimensional coordinate of \(\vert{u}\) and the \(\sym{x}\) coordinate of \(\vertexProduct{\vert{u},\vert{v}}\) from the one-dimensional coordinate of \(\vert{v}\).

Let's examine the products we can form from \(\quiver{R} = \quiver{S} = \subSize{\lineQuiver }{2}\). These are the simplest factor quivers we can use that will yield a non-trivial result.

First we'll consider the product defined by \(\tde{\vertexProduct{\vert{\rform{t}},\vert{\bform{t}}}}{\vertexProduct{\vert{\rform{h}},\vert{\bform{h}}}}{\cardinalProduct{\rform{\card{r}},\bform{\card{b}}}}\):

Since there is only one edge and therefore one 1-path in each factor quiver, there is only one possible product edge. The orientation of this product edge matches the orientations of the factor edges.

Now consider the product defined by \(\tde{\vertexProduct{\vert{\rform{t}},\vert{\bform{b}}}}{\vertexProduct{\vert{\rform{h}},\vert{\bform{b}}}}{\cardinalProduct{\rform{\card{r}},\bform{\card{1}}}}\):

There are of course *two* 0-paths in \(\quiver{\bform{S}}\), one for each vertex, and hence two possible product edges.

Lastly, consider the product defined by \(\tde{\vertexProduct{\vert{\rform{r}},\vert{\bform{b}}}}{\vertexProduct{\vert{\rform{r}},\vert{\bform{b}}}}{\cardinalProduct{\rform{\card{1}},\bform{\card{1}}}}\). This product does not depend on which edges are present in either quiver! It produces only self-loops:
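The three worked examples above can be reproduced with a short Python sketch that models a quiver as an edge list and enumerates the factor transitions of each kind (all names and encodings here are hypothetical):

```python
def transitions(edges, vertices, kind):
    """Factor transitions of one kind. 'fwd': each edge as a 1-path;
    'bwd': each edge reversed, with an inverted cardinal; 'neu': a
    0-path at each vertex, labeled with the neutral cardinal '1'."""
    if kind == 'fwd':
        return [(t, h, c) for t, h, c in edges]
    if kind == 'bwd':
        return [(h, t, '-' + c) for t, h, c in edges]
    return [(v, v, '1') for v in vertices]

def product_edges(eR, vR, eS, vS, kindR, kindS):
    """One primitive constructor: pair every R-transition of kindR with
    every S-transition of kindS to form a product edge, labeled by the
    pair of factor cardinals (the product cardinal)."""
    return [((tR, tS), (hR, hS), (cR, cS))
            for tR, hR, cR in transitions(eR, vR, kindR)
            for tS, hS, cS in transitions(eS, vS, kindS)]

# R = S = the 2-vertex line quiver, with single edges labeled r and b.
vR, eR = [0, 1], [(0, 1, 'r')]
vS, eS = [0, 1], [(0, 1, 'b')]
```

As in the diagrams above, the forward-forward constructor yields a single product edge, the forward-neutral constructor yields two, and the neutral-neutral constructor yields four self-loops, one per product vertex.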

Let's revisit the original table of the 9 possible primitive product edge constructors:

\[ \begin{csarray}{ccc}{aii} \mtde{\verticalVertexProduct{\vert{\rform{t}}}{\vert{\bform{t}}}}{\verticalVertexProduct{\vert{\rform{h}}}{\vert{\bform{h}}}}{\rform{\card{r}}}{\bform{\card{b}}} & \mtde{\verticalVertexProduct{\vert{\rform{t}}}{\vert{\bform{b}}}}{\verticalVertexProduct{\vert{\rform{h}}}{\vert{\bform{b}}}}{\rform{\card{r}}}{\bform{\card{1}}} & \mtde{\verticalVertexProduct{\vert{\rform{t}}}{\vert{\bform{h}}}}{\verticalVertexProduct{\vert{\rform{h}}}{\vert{\bform{t}}}}{\rform{\card{r}}}{\bform{\inverted{\card{b}}}}\\[2em] \mtde{\verticalVertexProduct{\vert{\rform{r}}}{\vert{\bform{t}}}}{\verticalVertexProduct{\vert{\rform{r}}}{\vert{\bform{h}}}}{\rform{\card{1}}}{\bform{\card{b}}} & \mtde{\verticalVertexProduct{\vert{\rform{r}}}{\vert{\bform{b}}}}{\verticalVertexProduct{\vert{\rform{r}}}{\vert{\bform{b}}}}{\rform{\card{1}}}{\bform{\card{1}}} & \mtde{\verticalVertexProduct{\vert{\rform{r}}}{\vert{\bform{h}}}}{\verticalVertexProduct{\vert{\rform{r}}}{\vert{\bform{t}}}}{\rform{\card{1}}}{\bform{\inverted{\card{b}}}}\\[2em] \mtde{\verticalVertexProduct{\vert{\rform{h}}}{\vert{\bform{t}}}}{\verticalVertexProduct{\vert{\rform{t}}}{\vert{\bform{h}}}}{\rform{\inverted{\card{r}}}}{\bform{\card{b}}} & \mtde{\verticalVertexProduct{\vert{\rform{h}}}{\vert{\bform{b}}}}{\verticalVertexProduct{\vert{\rform{t}}}{\vert{\bform{b}}}}{\rform{\inverted{\card{r}}}}{\bform{\card{1}}} & \mtde{\verticalVertexProduct{\vert{\rform{h}}}{\vert{\bform{h}}}}{\verticalVertexProduct{\vert{\rform{t}}}{\vert{\bform{t}}}}{\rform{\inverted{\card{r}}}}{\bform{\inverted{\card{b}}}} \end{csarray} \]Notice that we can fully describe a product edge constructor by the product cardinal labelling its product edges, since this cardinal fully determines the transitions corresponding to each product edge. Here we reproduce the above table using only these product cardinals:

\[ \begin{csarray}{ccc}{aii} \cardinalProduct{\rform{\card{r}},\bform{\card{b}}} & \cardinalProduct{\rform{\card{r}},\bform{\card{1}}} & \cardinalProduct{\rform{\card{r}},\bform{\inverted{\card{b}}}}\\[2em] \cardinalProduct{\rform{\card{1}},\bform{\card{b}}} & \cardinalProduct{\rform{\card{1}},\bform{\card{1}}} & \cardinalProduct{\rform{\card{1}},\bform{\inverted{\card{b}}}}\\[2em] \cardinalProduct{\rform{\inverted{\card{r}}},\bform{\card{b}}} & \cardinalProduct{\rform{\inverted{\card{r}}},\bform{\card{1}}} & \cardinalProduct{\rform{\inverted{\card{r}}},\bform{\inverted{\card{b}}}} \end{csarray} \]Below we show the corresponding table of product quivers:

### Arrow notation

The disadvantage of using product cardinals like \(\cardinalProduct{\rform{\card{r}},\bform{\inverted{\card{b}}}}\) to describe product edge constructors is that we are relying on a redundant kind of notational convention: both the color (red) and the letter \(\rform{\card{r}}\) are used to communicate the same information about the transition being taken in the factor quiver \(\quiver{\rform{R}}\). We instead propose a simpler and more compact notation, called **arrow notation**, which uses only color.

Using arrow notation, a binary quiver product between \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\) can be written as a kind of polynomial in "variables" of two colors, namely \(\rform{\forwardFactor },\rform{\backwardFactor },\rform{\neutralFactor }\) and \(\bform{\forwardFactor },\bform{\backwardFactor },\bform{\neutralFactor }\), which denote the different kinds of possible factor transitions from \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\) respectively:

arrow | meaning | arrow | meaning |
---|---|---|---|

\(\rform{\forwardFactor }\) | forward \(\quiver{\rform{R}}\)-transition: 1-path from \(\quiver{\rform{R}}\) | \(\bform{\forwardFactor }\) | forward \(\quiver{\bform{S}}\)-transition: 1-path from \(\quiver{\bform{S}}\) |

\(\rform{\backwardFactor }\) | backward \(\quiver{\rform{R}}\)-transition: 1-path from \(\quiver{\rform{R}}\) | \(\bform{\backwardFactor }\) | backward \(\quiver{\bform{S}}\)-transition: 1-path from \(\quiver{\bform{S}}\) |

\(\rform{\neutralFactor }\) | neutral \(\quiver{\rform{R}}\)-transition: 0-path from \(\quiver{\rform{R}}\) | \(\bform{\neutralFactor }\) | neutral \(\quiver{\bform{S}}\)-transition: 0-path from \(\quiver{\bform{S}}\) |

Each term of the polynomial is a degree-two monomial with one red variable and one blue variable, and represents a primitive product edge constructor.

Let's use this new notation to label the 9 primitive products of our two 2-line quivers, which are the 9 possible degree-2 monomials:

### Sums of monomials

If we allow sums of these monomials, such as \(\quiverProdPoly{\rform{\forwardFactor }\,\bform{\neutralFactor }+\rform{\neutralFactor }\,\bform{\forwardFactor }}\), we can express quiver products that contain the *union* of the corresponding primitive products:
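A Python sketch of this evaluation, modeling an arrow polynomial as a list of monomials and taking the union of the primitive products of its terms (encodings hypothetical):

```python
def evaluate(poly, eR, vR, eS, vS):
    """Evaluate an arrow polynomial, encoded as a list of monomials
    (kindR, kindS), as the union of its primitive products."""
    def transitions(edges, vertices, kind):
        if kind == 'fwd':
            return [(t, h, c) for t, h, c in edges]
        if kind == 'bwd':
            return [(h, t, '-' + c) for t, h, c in edges]
        return [(v, v, '1') for v in vertices]  # neutral 0-paths
    edges = set()
    for kR, kS in poly:
        for tR, hR, cR in transitions(eR, vR, kR):
            for tS, hS, cS in transitions(eS, vS, kS):
                edges.add(((tR, tS), (hR, hS), (cR, cS)))
    return edges

# Two copies of the 2-vertex line quiver: the polynomial with terms
# forward-neutral and neutral-forward yields the 2 x 2 square grid.
vR, eR = [0, 1], [(0, 1, 'r')]
grid = evaluate([('fwd', 'neu'), ('neu', 'fwd')], eR, vR, eR, vR)
```

Each monomial contributes two product edges, and their union gives the four edges of the 2-by-2 square quiver.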

### Application notation

If we wish to form a quiver product defined by a particular arrow polynomial, we would like to be able to communicate which factor quivers are bound to which arrows. To do this, we will use a notation in which the factor quivers appear as the bottom row of a vertical expression, and the arrows appear as the top row. Color will then serve as the way in which we bind the factor quivers, written as symbols, to the arrows of the polynomial.

Here are several examples of products of two factor quivers named \(\quiver{R}\) and \(\quiver{S}\):

\[ \frac{\quiverProdPoly{\rform{\forwardFactor }\,\bform{\forwardFactor }}} {\quiver{\rform{R}},\quiver{\bform{S}}} \kern{20pt} \frac{\quiverProdPoly{\rform{\forwardFactor }\,\bform{\backwardFactor }}} {\quiver{\rform{R}},\quiver{\bform{S}}} \kern{20pt} \frac{\quiverProdPoly{\rform{\forwardFactor }\,\bform{\neutralFactor }+\rform{\neutralFactor }\,\bform{\forwardFactor }}} {\quiver{\rform{R}},\quiver{\bform{S}}} \]The actual colors used do not matter – they effectively act as *local variables*. The above products are identical to those shown below, which use two different colors that denote the same bindings between factor quivers and arrows:

We will also sometimes write an application in a compact form:

\[ {\quiverProdPoly{\rbform{\forwardFactor }\,\gbform{\forwardFactor }}} / {(\quiver{\rbform{R}},\quiver{\gbform{S}})} \kern{20pt} {\quiverProdPoly{\rbform{\forwardFactor }\,\gbform{\backwardFactor }}} / {(\quiver{\rbform{R}},\quiver{\gbform{S}})} \kern{20pt} {\paren{\quiverProdPoly{\rbform{\forwardFactor }\,\gbform{\neutralFactor }+\rbform{\neutralFactor }\,\gbform{\forwardFactor }}}} / {(\quiver{\rbform{R}},\quiver{\gbform{S}})} \]

### Symmetries under reversal

At this point, it is useful to observe that the 9 constructors we enumerated are related to one another under edge reversal of the resulting products.

For example, the product defined by \(\quiverProdPoly{\rform{\forwardFactor }\,\bform{\forwardFactor }}\) is the edge reversal of the product defined by \(\quiverProdPoly{\rform{\backwardFactor }\,\bform{\backwardFactor }}\):

The explicit edges of these two products are given by \(\tde{\vert{\drbform{\filledToken }}}{\vert{\lgbform{\filledToken }}}{\cardinalProduct{\rform{\card{r}},\bform{\card{b}}}}\) and \(\tde{\vert{\lgbform{\filledToken }}}{\vert{\drbform{\filledToken }}}{\cardinalProduct{\rform{\inverted{\card{r}}},\bform{\inverted{\card{b}}}}}\). This suggests a structural property of product edges: the inversion of the product cardinal \(\cardinalProduct{\rform{\card{r}},\bform{\card{b}}}\) is the product cardinal \(\cardinalProduct{\rform{\inverted{\card{r}}},\bform{\inverted{\card{b}}}}\) – in other words, inversion distributes across the product.

Here we show the 9 constructors in columns, placed above their inverses:

We can use notation to "pre-apply" inversion to the product polynomial rather than to the resulting product quiver. In other words, we can write the inversion of the product resulting from given quivers as the product of an inverted polynomial. The algebraic properties of this inversion operation on polynomials are:

\[ \begin{aligned} \inverted{\forwardFactor }&= \backwardFactor \\ \inverted{\neutralFactor }&= \neutralFactor \\ \inverted{\backwardFactor }&= \forwardFactor \\ \inverted{\quiverProdPoly{\sym{ \alpha }+\sym{ \beta }}}&= \quiverProdPoly{\inverted{\sym{ \alpha }}+\inverted{\sym{ \beta }}}\\ \inverted{\paren{\quiverProdPoly{\sym{ \alpha }\,\sym{ \beta }}}}&= \quiverProdPoly{\inverted{\sym{ \alpha }}\,\inverted{\sym{ \beta }}}\end{aligned} \]Here, \(\sym{ \alpha }\) and \(\sym{ \beta }\) represent arbitrary arrows, whether forward, backward, or neutral.
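These rules are easy to mechanize. A Python sketch, encoding the arrows \(\forwardFactor ,\backwardFactor ,\neutralFactor\) as the strings `'fwd'`, `'bwd'`, `'neu'` (an arbitrary choice):

```python
# Inversion swaps forward and backward arrows and fixes neutral ones.
FLIP = {'fwd': 'bwd', 'bwd': 'fwd', 'neu': 'neu'}

def invert(poly):
    """Invert an arrow polynomial term by term: inversion distributes
    over sums (the terms) and over products (the arrows within each
    term), per the algebraic rules above."""
    return [tuple(FLIP[arrow] for arrow in term) for term in poly]

p = [('fwd', 'neu'), ('neu', 'bwd')]
p_inv = invert(p)
```

As expected, inversion is an involution: applying it twice recovers the original polynomial.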

### Distributivity

We briefly examine the validity of the following distributivity law:

\[ \quiverProdPoly{\paren{\sym{ \alpha }+\sym{ \beta }}\,\sym{ \gamma }} = \quiverProdPoly{\quiverProdPoly{\sym{ \alpha }\,\sym{ \gamma }}+\quiverProdPoly{\sym{ \beta }\,\sym{ \gamma }}} \]As an example, taking \(\sym{ \alpha } = \rform{\forwardFactor },\sym{ \beta } = \rform{\backwardFactor },\sym{ \gamma } = \bform{\forwardFactor }\), this manifests as the identity \(\quiverProdPoly{\paren{\rform{\forwardFactor }+\rform{\backwardFactor }}\,\bform{\forwardFactor }} = \quiverProdPoly{\quiverProdPoly{\rform{\forwardFactor }\,\bform{\forwardFactor }}+\quiverProdPoly{\rform{\backwardFactor }\,\bform{\forwardFactor }}}\). What is the meaning of this identity? To unpack this, we will have to understand how we can evaluate this product in stages, first evaluating the part \(\quiverProdPoly{\rform{\forwardFactor }+\rform{\backwardFactor }}\) and then evaluating the product \(\quiverProdPoly{□\,\bform{\forwardFactor }}\), where \(□\) stands for the result of the first stage.

We can understand an arrow polynomial as a tree of evaluations – the formal mathematical structure being known as an **operad**, which we will not elucidate here. For example, the product \({\quiverProdPoly{\rform{\forwardFactor }\,\bform{\forwardFactor }}} / {\paren{\quiver{\rform{R}},\quiver{\bform{S}}}}\) can be understood as the following tree, where the leaves of the tree are the quivers that are used as inputs to the arrow polynomial:

The factorization \(\quiverProdPoly{\paren{\rform{\forwardFactor }+\rform{\backwardFactor }}\,\bform{\forwardFactor }}\) is depicted graphically below, in which we *first* evaluate a univariate product \({\paren{\quiverProdPoly{\gform{\forwardFactor }+\gform{\backwardFactor }}}} / {\quiver{\gform{R}}}\) and then use the result as the left factor for the product \(\quiverProdPoly{\rform{\forwardFactor }\,\bform{\forwardFactor }}\):

This corresponds to how we can factor the polynomial \(\sym{x} \, \sym{y} + \inverse{\sym{x}} \, \sym{y}\) into the product \(\paren{\sym{x} + \inverse{\sym{x}}} \, \sym{y}\), which can be evaluated on particular inputs in a staged fashion:
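This staged evaluation can be sanity-checked numerically for ordinary Laurent polynomials. Here is a minimal Python sketch of mine, in which \(\sym{x}\) and \(\sym{y}\) are ordinary nonzero reals standing in for the formal symbols:

```python
# Check x*y + x^-1*y == (x + x^-1)*y on sample values, evaluating the
# right-hand side in stages: first x + x^-1, then the multiplication by y.
for x, y in [(2.0, 3.0), (0.5, -1.25), (7.0, 0.2)]:
    unfactored = x * y + (1 / x) * y
    staged = (x + 1 / x) * y
    assert abs(unfactored - staged) < 1e-12
print("factorization agrees on all samples")
```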

Again, notice that the colors and symbols \(\sym{a},\sym{b},\sym{c}\) serve a *purely notational* role, depicting how to bind the inputs of each polynomial evaluation. We can replace these symbols and colors with any others we like to describe the exact same product.

### Unary products

It is interesting to examine the three products we can form from a *single* quiver using the monomials \(\forwardFactor ,\neutralFactor ,\backwardFactor\). They are shown below for a finite line quiver in the first row and a finite square quiver in the second row:

In short, the forward constructor \(\forwardFactor\) acts like the identity, the backward constructor \(\backwardFactor\) reverses all edges, and the neutral constructor \(\neutralFactor\) has the effect of disconnecting the quiver, creating self-loops for each vertex. Let's make this explicit for the line quiver:

\[ \begin{aligned} {\quiverProdPoly{\quiver{\forwardFactor }}} / {\bindCardSize{\subSize{\lineQuiver }{4}}{\rform{\card{r}}}}&= \bindCardSize{\subSize{\lineQuiver }{4}}{\cardinalProduct{\rform{\card{r}}}}\\[1em] {\quiverProdPoly{\quiver{\backwardFactor }}} / {\bindCardSize{\subSize{\lineQuiver }{4}}{\rform{\card{r}}}}&= \bindCardSize{\subSize{\lineQuiver }{4}}{\cardinalProduct{\rform{\inverted{\card{r}}}}}\\[1em] {\quiverProdPoly{\quiver{\neutralFactor }}} / {\bindCardSize{\subSize{\lineQuiver }{4}}{\rform{\card{r}}}}&= \indexGraphDisjointUnion{\bindCardSize{\subSize{\cycleQuiver }{1}}{\cardinalProduct{\rform{\card{1}}}}}{4}{}\end{aligned} \]We can write this behavior for a general quiver as:

\[ \begin{aligned} {\quiverProdPoly{\quiver{\forwardFactor }}} / {\quiver{Q}}&\homeomorphicSymbol\quiver{Q}\\[1em] {\quiverProdPoly{\quiver{\backwardFactor }}} / {\quiver{Q}}&\homeomorphicSymbol\inverted{\quiver{Q}}\\[1em] {\quiverProdPoly{\quiver{\neutralFactor }}} / {\quiver{Q}}&\homeomorphicSymbol\indexGraphDisjointUnion{\subSize{\cycleQuiver }{1}}{\vertexCountOf{\quiver{Q}}}{}\end{aligned} \]## Named binary products

Equipped with our new, compact notation for building products out of sums of primitive edge constructors, we will now enumerate and name some of the possible binary products.

### Locked product

We'll introduce a binary quiver product called the **locked quiver product**, denoted \(\lockedQuiverProductSymbol\). This product is related to the notion of the *tensor graph product* \(\otimes\) of undirected graphs.

The **locked quiver product** \(\quiver{R}\lockedQuiverProductSymbol \quiver{S}\) of two quivers \(\quiver{R},\quiver{S}\) is defined to be:

We use the word **locked** to indicate that the transitions of the two factor quivers are locked together in the same forward orientation.
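As a concrete sketch of the definition, here is a minimal Python rendering in which a quiver is represented simply as a list of directed edges (cardinal labels omitted); the representation and function name are illustrative choices of mine, not canonical notation. Each product edge pairs one forward edge of \(\quiver{R}\) with one forward edge of \(\quiver{S}\), taken simultaneously:

```python
from itertools import product

def locked_product(r_edges, s_edges):
    """Locked product on edge lists: take the forward transition
    in both factors at once."""
    return [((r1, s1), (r2, s2))
            for (r1, r2), (s1, s2) in product(r_edges, s_edges)]

# R = S = the 2-vertex line quiver, with a single forward edge 0 -> 1.
line2 = [(0, 1)]
print(locked_product(line2, line2))  # [((0, 0), (1, 1))]
```

Note that the product vertices \((0,1)\) and \((1,0)\) acquire no edges at all, which is the origin of the totally disconnected vertices visible in the locked-product examples.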

Let's examine \(\quiver{R}\lockedQuiverProductSymbol \quiver{S}\), where \(\quiver{R} = \quiver{S} = \subSize{\lineQuiver }{2}\):

Let's enlarge \(\quiver{S}\), setting \(\quiver{S} = \subSize{\lineQuiver }{3}\):

Finally let's enlarge \(\quiver{R}\), setting \(\quiver{R} = \subSize{\lineQuiver }{3}\):

### Free products

We now introduce the **right-free quiver product** \(\quiver{R}\rightFreeQuiverProductSymbol \quiver{S}\), which involves *always* taking the forward transition for the left factor and taking either the forward, neutral, or backward transition for the right factor. Explicitly:

Here we are using the notation \(\forwardBackwardNeutralFactor \syntaxEqualSymbol \paren{\quiverProdPoly{\quiver{\forwardFactor }+\quiver{\neutralFactor }+\quiver{\backwardFactor }}}\).

The **left-free quiver product** is defined analogously, with the roles of the left and right factors exchanged:

### Cartesian product

The Cartesian product \(\quiver{R}\cartesianQuiverProductSymbol \quiver{S}\) is defined by taking a forward transition in just *one* factor (either one), combined with the neutral transition in the other. Explicitly:

### Visualization

Let's visualize the free quiver products and contrast them with the locked and Cartesian quiver products. We'll start with the same products we considered earlier:

Notice that for the free products and the Cartesian product we no longer have vertices that are totally disconnected.

Let's enlarge \(\quiver{S}\), setting \(\quiver{S} = \subSize{\lineQuiver }{3}\):

Now let's enlarge \(\quiver{R}\), setting \(\quiver{R} = \subSize{\lineQuiver }{3}\):

### Summary

The following table lists the named products we have defined above:

product | notation | defining polynomial |
---|---|---|
locked | \(\quiver{\rform{R}}\lockedQuiverProductSymbol \quiver{\bform{S}}\) | \(\quiverProdPoly{\rform{\forwardFactor }\,\bform{\forwardFactor }}\) |
left free | \(\quiver{\rform{R}}\leftFreeQuiverProductSymbol \quiver{\bform{S}}\) | \(\quiverProdPoly{\rform{\forwardBackwardNeutralFactor }\,\bform{\forwardFactor }}\) |
right free | \(\quiver{\rform{R}}\rightFreeQuiverProductSymbol \quiver{\bform{S}}\) | \(\quiverProdPoly{\rform{\forwardFactor }\,\bform{\forwardBackwardNeutralFactor }}\) |
Cartesian | \(\quiver{\rform{R}}\cartesianQuiverProductSymbol \quiver{\bform{S}}\) | \(\quiverProdPoly{\rform{\forwardFactor }\,\bform{\neutralFactor }+\rform{\neutralFactor }\,\bform{\forwardFactor }}\) |
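The table above can be turned into a small executable sketch. In the Python fragment below (an illustrative encoding of mine, not canonical notation), a quiver is a pair of vertex and edge lists, a monomial is a tuple of symbols drawn from forward/neutral/backward, and a product polynomial is a list of such monomials:

```python
from itertools import product

F, N, B = 'forward', 'neutral', 'backward'

def moves(vertices, edges, kind):
    # The transitions one factor contributes for a given monomial symbol:
    # forward uses its edges as-is, backward reverses them, neutral stays put.
    if kind == F: return list(edges)
    if kind == B: return [(h, t) for (t, h) in edges]
    return [(v, v) for v in vertices]

def product_quiver(terms, r, s):
    (rv, re), (sv, se) = r, s
    return {((r1, s1), (r2, s2))
            for a, b in terms
            for (r1, r2), (s1, s2) in product(moves(rv, re, a), moves(sv, se, b))}

locked     = [(F, F)]
left_free  = [(F, F), (N, F), (B, F)]
right_free = [(F, F), (F, N), (F, B)]
cartesian  = [(F, N), (N, F)]

line3 = ([0, 1, 2], [(0, 1), (1, 2)])  # 3-vertex line quiver
line2 = ([0, 1], [(0, 1)])             # 2-vertex line quiver
grid = product_quiver(cartesian, line3, line2)
print(len(grid))  # 7 edges: the 3-by-2 grid quiver
```

The Cartesian product of the 3-line and 2-line quivers has \(2 \cdot 2 + 3 \cdot 1 = 7\) edges, as expected for the corresponding rectangular quiver.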

## Transitive quivers

### Square quiver

The first application of the products we have defined above will be to construct the square quiver as a product of two line quivers. We will examine more exotic cases in the section [[[Exceptional products]]].

Let's start with the Cartesian product \(\subSize{\lineQuiver }{ \infty }\cartesianQuiverProductSymbol \subSize{\lineQuiver }{ \infty }\) between two infinite line lattices \(\subSize{\lineQuiver }{ \infty }\):

We obtain the infinite square quiver: \(\subSize{\lineQuiver }{ \infty }\cartesianQuiverProductSymbol \subSize{\lineQuiver }{ \infty }\isomorphicSymbol \subSize{\squareQuiver }{ \infty }\)

Let's limit ourselves to the Cartesian product of two *finite* line quivers, \(\subSize{\lineQuiver }{3}\cartesianQuiverProductSymbol \subSize{\lineQuiver }{2}\):

We obtain the finite rectangular quiver \(\bindCardSize{\squareQuiver }{3,2}\). We can generalize to arbitrary sizes easily, and include the cardinal bindings:

\[ \bindCardSize{\lineQuiver }{\gbform{\card{x}}\compactBindingRuleSymbol \sym{w}}\cartesianQuiverProductSymbol \bindCardSize{\lineQuiver }{\rbform{\card{y}}\compactBindingRuleSymbol \sym{h}}\isomorphicSymbol \bindCardSize{\squareQuiver }{\gbform{\card{x}}\compactBindingRuleSymbol \sym{w},\rbform{\card{y}}\compactBindingRuleSymbol \sym{h}} \]Setting \(\sym{w} = \sym{h} = \infty\) we obtain the special case \(\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\gbform{\card{x}}}\cartesianQuiverProductSymbol \bindCardSize{\subSize{\lineQuiver }{ \infty }}{\rbform{\card{y}}}\isomorphicSymbol \bindCardSize{\subSize{\squareQuiver }{ \infty }}{\gbform{\card{x}},\rbform{\card{y}}}\).

### Triangular quiver

We need not limit ourselves to products between *two* quivers.

Let us define the **triangular product** \(\function{ \Delta }(\quiver{R},\quiver{G},\quiver{B})\) between three quivers as follows:

Here we show the *largest* connected component of the triangular product between three finite line lattices of size 5:

We have colored the product cardinals by the additive blend of the colors of the factors involved, so that \(\quiverProdPoly{\rform{\forwardFactor }\,\gform{\backwardFactor }}\) is represented by the arrowhead \(\rgform{\arrowhead }\), \(\quiverProdPoly{\gform{\forwardFactor }\,\bform{\backwardFactor }}\) by \(\gbform{\arrowhead }\), and \(\quiverProdPoly{\bform{\forwardFactor }\,\rform{\backwardFactor }}\) by \(\rbform{\arrowhead }\). The symbol \(\componentSuperQuiverOfSymbol\) means "contains the connected component".

The full set of connected components corresponds to slices through a larger object, which we will describe in the next section.
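The slice structure can be checked computationally. The sketch below (my own encoding, with product vertices as integer triples and cardinal labels omitted) builds the triangular product of three \(\sym{n}\)-vertex line quivers and counts weakly connected components with a union-find. Every term of the product preserves the coordinate sum \(x + y + z\), and each of the \(3 \sym{n} - 2\) level sets turns out to form a single component:

```python
from itertools import product

F, N, B = 1, 0, -1

def moves(n, kind):
    # Transitions of the n-vertex line quiver for one monomial symbol.
    if kind == F: return [(i, i + 1) for i in range(n - 1)]
    if kind == B: return [(i + 1, i) for i in range(n - 1)]
    return [(i, i) for i in range(n)]

def triangular_product(n):
    # The three terms of the triangular product, cycling forward/backward/neutral.
    terms = [(F, B, N), (N, F, B), (B, N, F)]
    return {((r1, s1, t1), (r2, s2, t2))
            for term in terms
            for (r1, r2), (s1, s2), (t1, t2)
            in product(*(moves(n, k) for k in term))}

def component_count(n):
    # Count weakly connected components with a simple union-find.
    parent = {v: v for v in product(range(n), repeat=3)}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in triangular_product(n):
        parent[find(u)] = find(w)
    return len({find(v) for v in parent})

print(component_count(5))  # 13 components, one per slice x + y + z = const
```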

#### Infinite product

The product between three infinite line lattices, \(\function{ \Delta }(\subSize{\lineQuiver }{ \infty },\subSize{\lineQuiver }{ \infty },\subSize{\lineQuiver }{ \infty })\), contains countably many connected components, each of which is isomorphic to the infinite triangular quiver:

\[ \function{ \Delta }(\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\rform{\card{r}}},\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\gform{\card{g}}},\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\bform{\card{b}}})\homeomorphicSymbol \indexGraphDisjointUnion{\subSize{\triangularQuiver }{ \infty }}{\mathbb{Z}}{} \]We depict this visually below:

### Non-transitive quivers

We can also construct various non-transitive quivers as products – this will be examined in the next section, [[[Exceptional products]]].

## Projections

Quiver products are explicitly constructed out of combinations of transitions of their factor quivers. This means that we have a straightforward way to *project* from a path in the product quiver to any of its factor quivers, by simply composing the transitions for that particular factor along the given path.

Let's examine an example of the square quiver \(\bindCardSize{\subSize{\squareQuiver }{3}}{\rform{\card{x}},\bform{\card{y}}}\), isomorphic to the product \(\bindCardSize{\subSize{\lineQuiver }{3}}{\rform{\card{r}}}\cartesianQuiverProductSymbol \bindCardSize{\subSize{\lineQuiver }{3}}{\bform{\card{b}}}\) by identifying the cardinals \(\rform{\card{x}}\isomorphicSymbol \cardinalProduct{\rform{\card{r}},\bform{1}},\bform{\card{y}}\isomorphicSymbol \cardinalProduct{\rform{1},\bform{\card{b}}}\).

Below we show a path \(\path{P_{\quiver{\rbform{Q}}}} = \pathWord{\vertexProduct{2,2}}{\word{\bform{\card{y}}}{\rform{\card{x}}}{\bform{\ncard{y}}}{\bform{\ncard{y}}}{\rform{\card{x}}}}{\vertexProduct{4,1}}\) in the product quiver \(\quiver{\rbform{Q}} = \quiver{\rform{R}}\cartesianQuiverProductSymbol \quiver{\bform{S}}\), and the corresponding projected paths \(\path{P_{\quiver{\rform{R}}}},\path{P_{\quiver{\bform{S}}}}\) in the two factors \(\quiver{\rform{R}},\quiver{\bform{S}}\):

The two **factor paths** are obtained by writing the path \(\path{P_{\quiver{\rbform{Q}}}}\) in terms of product cardinals \(\cardinalProduct{\rform{\card{r}},\bform{\card{1}}},\cardinalProduct{\rform{\card{1}},\bform{\card{b}}}\) rather than the aliases \(\rform{\card{x}},\bform{\card{y}}\). Then we have:

Writing this path *vertically*, we can more easily see how the projections are formed:

The projections are the *top half* and *bottom half* of this path:

Notice that we were able to apply word reduction to the *factor* path words once they had been obtained, which made them shorter than the original product path word.
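The project-then-reduce procedure is easy to express concretely. In the Python sketch below (my own encoding), a product path word is a list of pairs whose entries are a signed cardinal name or `None` for a neutral component; projecting keeps one component and drops the neutral steps, and free reduction cancels adjacent inverse pairs:

```python
def inverse(c):
    # 'b' <-> 'b^-1'
    return c[:-3] if c.endswith('^-1') else c + '^-1'

def project(word, factor):
    """Keep the chosen factor's component of each product cardinal,
    dropping neutral (None) components."""
    return [sym[factor] for sym in word if sym[factor] is not None]

def reduce_word(word):
    """Free reduction: cancel adjacent pairs c, c^-1 with a stack."""
    out = []
    for c in word:
        if out and out[-1] == inverse(c):
            out.pop()
        else:
            out.append(c)
    return out

# The path word y x y^-1 y^-1 x, with x = (r, 1) and y = (1, b):
word = [(None, 'b'), ('r', None), (None, 'b^-1'), (None, 'b^-1'), ('r', None)]
print(project(word, 0))               # ['r', 'r']
print(reduce_word(project(word, 1)))  # ['b^-1']
```

The blue projection \(\card{b}\,\inverted{\card{b}}\,\inverted{\card{b}}\) reduces to the single step \(\inverted{\card{b}}\), shorter than the original product path word, as observed above.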

There are two [[[path homomorphisms:Path homomorphisms]]], written \(\functionSignature{\pathHomomorphism{ \pi _{\quiver{\rform{R}}}}}{\pathGroupoid{\quiver{\rbform{Q}}}}{\pathGroupoid{\quiver{\rform{R}}}}\) and \(\functionSignature{\pathHomomorphism{ \pi _{\quiver{\bform{S}}}}}{\pathGroupoid{\quiver{\rbform{Q}}}}{\pathGroupoid{\quiver{\bform{S}}}}\), that project *any* path in \(\quiver{\rbform{Q}}\) to its **factor path** in \(\quiver{\rform{R}}\) and \(\quiver{\bform{S}}\) respectively. Then our examples of course illustrate \(\pathHomomorphism{ \pi _{\quiver{\rform{R}}}}(\path{P_{\quiver{\rbform{Q}}}}) = \path{P_{\quiver{\rform{R}}}}\) and \(\pathHomomorphism{ \pi _{\quiver{\bform{S}}}}(\path{P_{\quiver{\rbform{Q}}}}) = \path{P_{\quiver{\bform{S}}}}\).

This of course generalizes to higher-arity products. For a product with \(\sym{n}\) factors, we can label these projection homomorphisms \(\pathHomomorphism{ \pi _1},\pathHomomorphism{ \pi _2},\ellipsis ,\pathHomomorphism{ \pi _{\sym{n}}}\).

#### Coverings

The projections \(\pathHomomorphism{ \pi _1},\pathHomomorphism{ \pi _2},\ellipsis ,\pathHomomorphism{ \pi _{\sym{n}}}\) associated with an \(\sym{n}\)-ary product \(\quiver{Q} = {\quiverProdPoly{\sym{ \gamma }}} / {\paren{\quiver{R_1},\ellipsis ,\quiver{R_{\sym{n}}}}}\) are by definition surjective path homomorphisms. However, they are not in general length-preserving, as we saw in the above example. This implies that \(\quiver{Q}\) is not in general a [[[cover:Coverings#Quiver covers]]] of the quivers \(\quiver{R_{\sym{i}}}\), as one might first expect.

# Exceptional products

In the previous section, [[[quiver products:Quiver products]]], we defined a general family of products of quivers in which edges of the product quiver correspond to combinations of transitions in the factor quivers. We saw that we could obtain some of the familiar transitive quivers [[[as simple products:Quiver products#Transitive quivers]]].

In this section, we will use products to obtain some [[[intransitive lattices:Intransitive lattices]]], as well as other more exotic objects.

## Intransitive lattices

### Triangular quiver

Recall from the previous section that we could obtain countably many disjoint copies of the **triangular quiver** using the **triangular product**:

The triangular product is defined to be:

\[ \function{ \Delta }(\quiver{R},\quiver{G},\quiver{B})\defEqualSymbol \frac{\quiverProdPoly{\rform{\forwardFactor }\,\gform{\backwardFactor }\,\bform{\neutralFactor }+\rform{\neutralFactor }\,\gform{\forwardFactor }\,\bform{\backwardFactor }+\rform{\backwardFactor }\,\gform{\neutralFactor }\,\bform{\forwardFactor }}} {\quiver{\rform{R}},\quiver{\gform{G}},\quiver{\bform{B}}} \]The statement that we obtain countably many copies of the triangular lattice is then:

\[ \function{ \Delta }(\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\rform{\card{r}}},\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\gform{\card{g}}},\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\bform{\card{b}}})\homeomorphicSymbol \indexGraphDisjointUnion{\bindCardSize{\subSize{\triangularQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}{\sym{i}}{ \infty } \]### Hexagonal quiver

It might seem that we cannot construct *intransitive* quivers, such as the hexagonal lattice, as products of transitive quivers. And this is true. But *finite* line lattices are *not* transitive: their two end vertices are different from the others. Surprisingly, we can construct the hexagonal lattice by including one additional factor quiver, namely the 2-line lattice. Here is the product that accomplishes this, which we will call the **hexagonal product**:

Let's examine an example, taking \(\rform{\quiver{R}},\gform{\quiver{G}},\bform{\quiver{B}}\) to be copies of \(\subSize{\lineQuiver }{\sym{n}}\) as before, with \(\wbform{\quiver{X}}\) being \(\subSize{\lineQuiver }{2}\). For \(\sym{n} = 8\) we obtain:

Here is the full set of connected graph components, this time for \(\sym{n} = 6\).

We can also interpret the factorization \(\quiverProdPoly{\rform{\forwardFactor }\,\gform{\backwardFactor }\,\bform{\neutralFactor }\,\waform{\forwardFactor }+\rform{\neutralFactor }\,\gform{\forwardFactor }\,\bform{\backwardFactor }\,\waform{\forwardFactor }+\rform{\backwardFactor }\,\gform{\neutralFactor }\,\bform{\forwardFactor }\,\waform{\forwardFactor }} = \quiverProdPoly{\paren{\rform{\forwardFactor }\,\gform{\backwardFactor }\,\bform{\neutralFactor }+\rform{\neutralFactor }\,\gform{\forwardFactor }\,\bform{\backwardFactor }+\rform{\backwardFactor }\,\gform{\neutralFactor }\,\bform{\forwardFactor }}\,\waform{\forwardFactor }}\) as stating that we can obtain the hexagonal lattice quiver as the locked product of the triangular lattice quiver with the 2-line quiver. In fact, three copies of the triangular lattice are produced, shown below:

An important fact is that the number of connected components (three) does not depend on the size of the \(\rform{\quiver{R}},\gform{\quiver{G}},\bform{\quiver{B}}\) line lattices, which leads to the theorem:

\[ \bindCardSize{\subSize{\triangularQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}\lockedQuiverProductSymbol \bindCardSize{\subSize{\lineQuiver }{2}}{\waform{\card{x}}}\homeomorphicSymbol \indexGraphDisjointUnion{\bindCardSize{\subSize{\hexagonalQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}{\sym{i}}{3} \]Since the triangular product produces countably infinite copies of the triangular lattice, we also have this property of the hexagonal product:

\[ \function{\starModifier{ \Delta }}(\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\rform{\card{r}}},\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\gform{\card{g}}},\bindCardSize{\subSize{\lineQuiver }{ \infty }}{\bform{\card{b}}},\bindCardSize{\subSize{\lineQuiver }{2}}{\waform{\card{w}}})\homeomorphicSymbol \indexGraphDisjointUnion{\bindCardSize{\subSize{\hexagonalQuiver }{ \infty }}{\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}}}{\sym{i}}{ \infty } \]### Rhombille quiver

Extending \(\wbform{\quiver{X}}\) to be a 3-line lattice yields the **rhombille lattice**:

Again, the factorization gives us an alternative way of building the rhombille lattice, again showing the three connected components:

Extending \(\wbform{\quiver{X}}\) to be a 4-line lattice yields the "alternating rhombille lattice", which involves a similar motif to the rhombille lattice, in which vertices alternate between degree 3 and degree 6:

Returning to the square lattice, we can apply the same technique to obtain non-transitive versions of it. Here, we compute:

\[ \quiverProdPoly{\rform{\forwardFactor }\,\waform{\forwardFactor }+\bform{\forwardFactor }\,\waform{\forwardFactor }} \]varying \(\wbform{\quiver{X}}\) between a 2-line lattice and a 5-line lattice:

## Decomposition of products

In the previous section we decomposed the non-transitive hexagonal and rhombille lattices into their connected components. We now extend this technique to the earlier products we examined, so we can better understand how they produce fragments of corresponding lattice quivers.

As before, we'll visualize these by superimposing each connected component on top of the full union of all connected components, shown dimmed. Note that it may appear that the union itself is fully connected, but when this occurs it is an artefact of the projection onto two dimensions.

### Square decomposition

The decomposition of the Cartesian product \(\quiverProdPoly{\rform{\forwardFactor }+\bform{\forwardFactor }}\) is trivial, since the Cartesian product of two line lattices yields a *single* connected graph.

### Triangular decomposition

The triangular product \(\quiverProdPoly{\rform{\forwardFactor }\,\gform{\backwardFactor }+\gform{\forwardFactor }\,\bform{\backwardFactor }+\bform{\forwardFactor }\,\rform{\backwardFactor }}\) is more interesting, yielding a "stack" of disconnected components:

These components correspond to angled slices through the vertices of a cubic grid. Here we show a smaller fragment to avoid clutter, from an angle to emphasize the separate planes that yield the connected components:

And from another angle to emphasize the hexagonal structure of each plane:

### Hexagonal decomposition

The hexagonal product \(\quiverProdPoly{\rform{\forwardFactor }\,\gform{\backwardFactor }\,\waform{\forwardFactor }+\gform{\forwardFactor }\,\bform{\backwardFactor }\,\waform{\forwardFactor }+\bform{\forwardFactor }\,\rform{\backwardFactor }\,\waform{\forwardFactor }}\) is similar to the triangular product, except with more slices possible:

The corresponding higher-dimensional object of which the connected components are slices is *4-dimensional*, and cannot easily be visualized. Notice that the appearance of *two* single-vertex connected components above is a reflection of the fact that there are two product vertices (corresponding to the two \(\wbform{\quiver{X}}\)-vertices) whose projections into 2 (and 3) dimensions coincide.

### Square products

Finally, we can decompose the product of the square lattice with a 2-line lattice:

Unlike the case of the product of the triangular lattice with the 2-line lattice, we now have a large number of connected components, a number that depends on the size of the finite square lattice. We can see these components as slices of the following three-dimensional "slab" in which the \(\wbform{\quiver{X}}\)-axis has length 2, depicted going *into* the page:

Here the \(\wbform{\quiver{X}}\)-axis is depicted vertically, and the connected components are more easily seen:

For a 3-line lattice we obtain a "thicker stripe" that scans across the square:

This corresponds to slices of a three-dimensional slab where the \(\wbform{\quiver{X}}\)-axis has length 3, depicted going into the page.

Here the \(\wbform{\quiver{X}}\)-axis is depicted vertically, and the connected components are more easily seen as being intersections of the slab vertices with particular three-dimensional planes:

## Extended product notation

## Powers

We are now ready to consider powers of factors like \(\quiverProdPoly{\rform{\forwardFactor }},\quiverProdPoly{\rform{\backwardFactor }}\), allowing terms such as \(\quiverProdPoly{\rform{\forwardFactor }\,\rform{\forwardFactor }},\quiverProdPoly{\rform{\forwardFactor }\,\rform{\forwardFactor }\,\rform{\forwardFactor }},\ellipsis\) and \(\quiverProdPoly{\rform{\backwardFactor }\,\rform{\backwardFactor }},\quiverProdPoly{\rform{\backwardFactor }\,\rform{\backwardFactor }\,\rform{\backwardFactor }},\ellipsis\). We first describe \(\quiverProdPoly{\rform{\forwardFactor }\,\rform{\forwardFactor }}\) and consider an example.

For quiver \(\rform{\quiver{R}}\) we define the monomial \(\quiverProdPoly{\rform{\forwardFactor }\,\rform{\forwardFactor }}\) to be the quiver \(\rform{\quiver{R}_2}\) defined as follows:

\[ \begin{aligned} \vertexList(\rform{\quiver{R}_2})&\defEqualSymbol \vertexList(\rform{\quiver{R}})\\ \\ \edgeList(\rform{\quiver{R}_2})&\defEqualSymbol \setConstructor{\tde{\tvert{r}}{\hvert{r}}{\card{c_1} \cardinalSequenceSymbol \card{c_2}}}{\elemOf{\tde{\tvert{r}}{\vert{m}}{\card{c_1}},\tde{\vert{m}}{\hvert{r}}{\card{c_2}}}{\edgeList(\rform{\quiver{R}})}}\end{aligned} \]In other words, the product edges of \(\rform{\quiver{R}_2}\) are 2-paths formed from (non-inverted) factor cardinals in \(\rform{\quiver{R}}\). The corresponding product cardinals are ordered 2-lists \(\card{c_1} \cardinalSequenceSymbol \card{c_2}\) of the corresponding factor cardinals. This is essentially identical to taking the square of the **cardinal adjacency matrix**.
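A minimal computational rendering of this definition (a sketch of mine using bare edge lists, with the cardinal labels \(\card{c_1} \cardinalSequenceSymbol \card{c_2}\) omitted): the edges of \(\rform{\quiver{R}_2}\) are exactly the 2-paths of \(\rform{\quiver{R}}\), and counting weakly connected components reproduces the line-quiver and cycle-quiver observations that follow.

```python
def power2(edges):
    """Second forward power on an edge list: one edge per 2-path t -> m -> h."""
    return [(t, h2) for (t, h) in edges for (m, h2) in edges if h == m]

def weak_components(vertices, edges):
    # Union-find over the underlying undirected graph.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

line6 = [(i, i + 1) for i in range(5)]         # 6-vertex line quiver
cycle5 = [(i, (i + 1) % 5) for i in range(5)]  # 5-vertex cycle quiver
print(weak_components(range(6), power2(line6)))   # 2 components (evens, odds)
print(weak_components(range(5), power2(cycle5)))  # 1 component
```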

### Line lattice

Here is \(\rform{\quiver{R}_2}\) visualized for \(\rform{\quiver{R}} = \bindCards{\subSize{\lineQuiver }{6}}{\rform{\card{r}}}\). It splits into two isomorphic connected components:

For the odd-length \(\rform{\quiver{R}} = \bindCards{\subSize{\lineQuiver }{5}}{\rform{\card{r}}}\), the two components are not isomorphic:

For \(\rform{\quiver{R}} = \bindCards{\subSize{\cycleQuiver }{6}}{\rform{\card{r}}}\), we again obtain two connected components:

For \(\rform{\quiver{R}} = \bindCards{\subSize{\cycleQuiver }{5}}{\rform{\card{r}}}\), we instead obtain *one* connected component:

This general pattern is easy to state:

quiver | \(\quiver{Q}\) | \(\quiver{Q}_2\) |
---|---|---|
finite line quiver | \(\subSize{\lineQuiver }{\sym{n}}\) | \(\subSize{\lineQuiver }{\ceiling{\sym{n} / 2}}\graphUnionSymbol \subSize{\lineQuiver }{\floor{\sym{n} / 2}}\) |
infinite line quiver | \(\subSize{\lineQuiver }{ \infty }\) | \(\subSize{\lineQuiver }{ \infty }\graphUnionSymbol \subSize{\lineQuiver }{ \infty }\) |
even cycle quiver | \(\subSize{\cycleQuiver }{2 \, \sym{n}}\) | \(\subSize{\cycleQuiver }{\sym{n}}\graphUnionSymbol \subSize{\cycleQuiver }{\sym{n}}\) |
odd cycle quiver | \(\subSize{\cycleQuiver }{2 \, \sym{n} + 1}\) | \(\subSize{\cycleQuiver }{2 \, \sym{n} + 1}\) |

### Square lattice

For \(\quiver{Q}\) a square quiver, which has multiple cardinals \(\cardinalList(\quiver{Q}) = \list{\rform{\card{r}},\bform{\card{b}}}\), the construction for \(\quiver{Q}_2\) is a little more complex. Here, the product cardinals are constructed from all possible pairs of non-inverted cardinals:

\[ \cardinalList(\quiver{Q_2}) = \list{\rform{\card{r}} \cardinalSequenceSymbol \rform{\card{r}},\rform{\card{r}} \cardinalSequenceSymbol \bform{\card{b}},\bform{\card{b}} \cardinalSequenceSymbol \rform{\card{r}},\bform{\card{b}} \cardinalSequenceSymbol \bform{\card{b}}} \]We visualize \(\quiver{Q}_2\) for a 5,5-square lattice:

The long horizontal and vertical edges above are the product cardinals \(\rform{\card{r}} \cardinalSequenceSymbol \rform{\card{r}}\) and \(\bform{\card{b}} \cardinalSequenceSymbol \bform{\card{b}}\), and the short pairs of diagonal edges with identical head and tail are \(\rform{\card{r}} \cardinalSequenceSymbol \bform{\card{b}}\) and \(\bform{\card{b}} \cardinalSequenceSymbol \rform{\card{r}}\).

### Triangular lattice

The structure of the second power of the triangular lattice is complex and interesting, but will not be described further here.

### Interaction of \(\forwardFactor\) and \(\backwardFactor\)

### Higher powers

We can define the general \(n^{\textrm{th}}\) forward power of a quiver similarly, written as \(\quiverProdPoly{\quiverProdPower{\rform{\forwardFactor }}{\sym{n}}}\defEqualSymbol \parenLabeled{\quiverProdPoly{\rform{\forwardFactor }\,\rform{\forwardFactor }\,\quiver{\ellipsis }\,\rform{\forwardFactor }}}{\sym{n} \textrm{ times}}\). The same constructions work as you would imagine for \(\quiverProdPoly{\quiverProdPower{\rform{\backwardFactor }}{\sym{n}}}\).

## Summary

Here are the four simple products we introduced in the last section, as applied to two line lattices. We also include some obvious generalizations for reference:

These examples make it clear that, in the case of products of infinite line lattices, we choose to read the terms of the product as describing the roots of the resulting product, when seen as a *point* lattice in the sense of a subset of \(\mathbb{R}^2\), rather than as a quiver. For example, the Cartesian lattice quiver \(\quiverProdPoly{\rform{\forwardFactor }+\bform{\forwardFactor }}\) corresponds to the roots \(\list{\tuple{1,0},\tuple{0,1}}\), whereas \(\quiverProdPoly{\rform{\forwardFactor }+\bform{\forwardFactor }+\rform{\forwardFactor }\,\bform{\forwardFactor }+\rform{\forwardFactor }\,\bform{\backwardFactor }}\) corresponds to the roots \(\list{\tuple{1,0},\tuple{0,1},\tuple{1,1},\tuple{1,-1}}\).
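This reading of terms as roots can be made mechanical. In the small sketch below (names and encoding are my own), each monomial maps to a vector with \(+1\) for a forward factor, \(0\) for a neutral factor, and \(-1\) for a backward factor:

```python
STEP = {'forward': 1, 'neutral': 0, 'backward': -1}

def roots(terms):
    # One root vector per monomial term of the product polynomial.
    return [tuple(STEP[factor] for factor in term) for term in terms]

cartesian = [('forward', 'neutral'), ('neutral', 'forward')]
extended = cartesian + [('forward', 'forward'), ('forward', 'backward')]
print(roots(cartesian))  # [(1, 0), (0, 1)]
print(roots(extended))   # [(1, 0), (0, 1), (1, 1), (1, -1)]
```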

# Fiber bundles

## Introduction

In this section we will consider how to define a general notion of a *smooth function* on a quiver. To do this we will adapt a construct from continuous mathematics, the **fiber bundle**, to the quiver-geometry setting. We will therefore briefly review fiber bundles in their traditional form for those who require a refresher, and then explain how to adapt them to quiver geometry.

## Continuous fiber bundles

### Motivation

In continuous geometry, **fiber bundles** are a kind of gadget to organize and generalize the notion of a continuous function on a **topological space**.

To motivate why fiber bundles are needed, we will start by looking at three kinds of function: **functions on the unit interval**, **functions on a circle**, and **"functions on the Möbius strip"**. The last is in quotations because the story there is a little subtle, and is precisely the kind of situation that fiber bundles can clarify.

First, let \(\fiberSpaceStyle{\topologicalSpace{I_1}}\) denote the half-open interval \(\closedOpenInterval{-1}{1} \sub \mathbb{R}\), and \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\) denote the half-open interval \(\closedOpenInterval{\minus{\pi }}{\pi } \sub \mathbb{R}\). Let \(\baseSpaceStyle{\circleSpaceSymbol }\) denote the **circle** \(\circleSpaceSymbol \sub \realVectorSpace{2}\), seen as the set of points whose distance from the origin is 1:

We can parameterize \(\baseSpaceStyle{\circleSpaceSymbol }\) in terms of a real number \(\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}\) via:

\[ \baseSpaceStyle{\circleSpaceSymbol } = \setConstructor{\tuple{\sin(\baseSpaceElementStyle{\sym{ \theta }}),\cos(\baseSpaceElementStyle{\sym{ \theta }})}}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}} \]We will now consider "interval functions" of the form \(\functionType{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\), and "circle functions" of the form \(\functionType{\baseSpaceStyle{\circleSpaceSymbol }}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\). In other words, these are functions whose domain is the interval \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\) or the circle \(\baseSpaceStyle{\circleSpaceSymbol }\) respectively.

We can visualize these functions via their **graphs**. For example, here we show two functions: the function \(\functionSignature{\function{\bundleFunctionStyle{\function{f}}}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\) defined by \(\function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{ \theta }}) = \sin(6 \, \baseSpaceElementStyle{\sym{ \theta }})\), as well as the function \(\functionSignature{\function{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}}{\baseSpaceStyle{\circleSpaceSymbol }}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\) defined also as \(\function{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}(\baseSpaceElementStyle{\sym{ \theta }}) = \sin(6 \, \baseSpaceElementStyle{\sym{ \theta }})\), where we use the \(\sym{ \theta }\)-parameterization of \(\baseSpaceStyle{\circleSpaceSymbol }\).

In both cases we have visualized the graph of the function in a space that consists of a "horizontal" subspace representing the domain and an orthogonal "vertical" subspace representing the codomain. For the interval function \(\bundleFunctionStyle{\function{f}}\) the horizontal subspace is 1-dimensional and hence the full graph is a subset of \(\realVectorSpace{2}\). But for the circle function \(\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}\) the horizontal subspace is the circle \(\baseSpaceStyle{\circleSpaceSymbol }\), itself a subset of \(\realVectorSpace{2}\), and hence the full graph is a subset of \(\realVectorSpace{3}\), which is why we plot it as a 3-dimensional figure.

In either case the graph is a *subset* of a **total space** that consists of horizontal and vertical subspaces that model the domain and codomain:

The graph of the interval function \(\bundleFunctionStyle{\function{f}}\) is the set \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{f}}}} = \setConstructor{\tuple{\baseSpaceElementStyle{\sym{ \theta }},\function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{ \theta }})}}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}}\). The graph \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{f}}}}\) is a subset \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{f}}}} \sub \baseSpaceStyle{\topologicalSpace{I_{\pi }}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\) of the total space \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\).

The graph of \(\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}\) is the set \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}} = \setConstructor{\tuple{\baseSpaceElementStyle{\sym{x}},\baseSpaceElementStyle{\sym{y}},\function{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}(\tuple{\baseSpaceElementStyle{\sym{x}},\baseSpaceElementStyle{\sym{y}}})}}{\elemOf{\tuple{\baseSpaceElementStyle{\sym{x}},\baseSpaceElementStyle{\sym{y}}}}{\baseSpaceStyle{\circleSpaceSymbol }}}\). In terms of the \(\sym{ \theta }\)-parameterization, this can also be written \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}} = \setConstructor{\tuple{\sin(\baseSpaceElementStyle{\sym{ \theta }}),\cos(\baseSpaceElementStyle{\sym{ \theta }}),\function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{ \theta }})}}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}}\). The graph \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}}\) is a subset \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}} \sub \baseSpaceStyle{\circleSpaceSymbol }\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\) of the total space \(\baseSpaceStyle{\circleSpaceSymbol }\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\).

#### Continuity

Here we show interval and circle functions defined by \(\sin(\baseSpaceElementStyle{\sym{ \theta }})\) and \(\sin(\frac{1}{2} \, \baseSpaceElementStyle{\sym{ \theta }})\).

Notice in the top row that *both* of the interval functions \(\sin(\baseSpaceElementStyle{\sym{ \theta }})\) and \(\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)\) appear continuous, as they appear unbroken over the interior of the interval.

In contrast, the circle function \(\sin(\baseSpaceElementStyle{\sym{ \theta }})\) is continuous, while the circle function \(\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)\) is not, since it has a break at the far side.

The continuity of these functions can be rephrased as the corresponding graphs being connected, closed sets in the topology of the respective total space: \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\) for the interval functions, and \(\baseSpaceStyle{\circleSpaceSymbol }\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\) for the circle functions. (The real story here is more subtle, but is outside the scope of this treatment). This topological aspect is of fundamental importance for the *continuous* theory of fiber bundles, but we will not delve into it here, since for discrete fiber bundles topology manifests via the existence of path homomorphisms rather than open and closed sets.

#### Möbius strip

We just saw that the circle function \(\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}\) defined by \(\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)\) is not continuous. However, if we imagine cutting the band on which the graph of \(\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}\) is shown at the far side, twisting one end of the band, and gluing the ends together again, we will bring the two broken endpoints of the graph into coincidence, *joining the break*. This is now the graph of some *function-like object* we'll refer to as \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\). Let's visualize this situation:

For functions \(\functionSignature{\function{\bundleFunctionStyle{\function{f}}}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\) that are continuous on \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\), it's not hard to see that if \(\bundleFunctionStyle{\function{f}}\) is *even* (meaning \(\function{\bundleFunctionStyle{\function{f}}}(\minus{\baseSpaceElementStyle{\sym{x}}}) = \function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{x}})\)), then \(\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}\) is continuous (on \(\baseSpaceStyle{\circleSpaceSymbol }\)). Likewise if \(\bundleFunctionStyle{\function{f}}\) is *odd* (meaning \(\function{\bundleFunctionStyle{\function{f}}}(\minus{\baseSpaceElementStyle{\sym{x}}}) = \minus{\function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{x}})}\)), then \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) is continuous (on the "twisted" form of \(\baseSpaceStyle{\circleSpaceSymbol }\)). The only functions that can be continuous on both are those for which \(\function{\bundleFunctionStyle{\function{f}}}(\pi ) = \function{\bundleFunctionStyle{\function{f}}}(\minus{\pi }) = 0\) – where we abuse notation slightly via \(\function{\bundleFunctionStyle{\function{f}}}(\pi )\syntaxEqualSymbol \limit{\function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{x}})}{\baseSpaceElementStyle{\sym{x}} \to \pi }\).
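This parity criterion is easy to check numerically. Below is a small sketch (our own illustration, using \(\cos( \theta )\) as the even example and \(\sin( \theta  / 2)\) as the odd one) comparing the two one-sided values at the seam \( \theta  = \pm \pi \):

```python
import math

def seam_values(f, eps=1e-9):
    """Approximate the one-sided limits of f at the seam theta = +pi and theta = -pi."""
    return f(math.pi - eps), f(-math.pi + eps)

even = math.cos                       # cos(-x) = cos(x): even
odd = lambda t: math.sin(t / 2)       # sin(-x/2) = -sin(x/2): odd

# Even f: the seam values agree, so the circle function is continuous.
a, b = seam_values(even)
assert abs(a - b) < 1e-6

# Odd f: the seam values are negatives of each other, so the graph can only
# close up after the band receives a half-twist (the Möbius construction below).
a, b = seam_values(odd)
assert abs(a + b) < 1e-6 and abs(a - b) > 1.0
```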

The band we have created to continuously *host* the graph of \(\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)\) is known as the Möbius strip. This is what we hinted at when, at the beginning of this section, we said that fiber bundles allow us to define "functions on the Möbius strip". This was of course imprecise language, because \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) itself is defined *on the circle*, but its "codomain" is somehow a kind of interval that *twists around* as we move on this circle in a way that is described by the Möbius strip. We cannot think of \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) as being a function of type \(\functionType{\baseSpaceStyle{\circleSpaceSymbol }}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\), since the nature of the interval \(\fiberSpaceStyle{\topologicalSpace{I_1}}\) somehow changes as we proceed around \(\baseSpaceStyle{\circleSpaceSymbol }\) such that it is *flipped* by the time we complete one revolution around \(\baseSpaceStyle{\circleSpaceSymbol }\).
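We can preview this "twist heals the break" claim numerically. The sketch below uses an explicit Möbius parameterization (the same kind of parameterization is written out later in this section; the radii are illustrative choices) and checks that the lifted graph of \(\sin( \theta  / 2)\) closes up at the seam \( \theta  = \pm \pi \):

```python
import math

def mobius(theta, r):
    # A parameterization of a Möbius band of outer radius 5: the fiber
    # direction is rotated by theta/2 as theta advances around the circle.
    w = 5 - r * math.sin(theta / 2)
    return (math.cos(theta) * w, math.sin(theta) * w, r * math.cos(theta / 2))

def lifted_graph(theta):
    # The graph of f(theta) = sin(theta/2), drawn on the twisted band.
    return mobius(theta, math.sin(theta / 2))

# On the untwisted cylinder the graph jumps from +1 to -1 at the far side;
# on the Möbius band the two endpoints coincide.
for a, b in zip(lifted_graph(math.pi), lifted_graph(-math.pi)):
    assert abs(a - b) < 1e-12
```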

#### Conclusion

We haven't defined any of the machinery of fiber bundles yet, and so this story about the Möbius strip remains informal for now. But hopefully we have motivated a particular approach, which is to think of functions not as maps but as **graphs** living in some **total space** that serves as a host, which defines what functions count as continuous. An ordinary function is a graph living in the Cartesian product between the domain and codomain of the function. But there are more general situations for which the total space that hosts the graph of the function is *not* simply a Cartesian product. It will turn out that we need only that the total space is a Cartesian product *locally*.

### Base, fiber, and total space

As we saw above, we can think of a function \(\bundleFunctionStyle{\function{f}}\) as a **relation** between its domain and codomain: this relation is known as the graph \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{f}}}}\) of the function. We saw that this graph is a subset of the so-called **total space**, which in the case of functions on the interval and circle are the Cartesian products \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\) and \(\baseSpaceStyle{\circleSpaceSymbol }\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\). We will now introduce some notation to organize these relationships:

symbol | name | interval fn's | circle fn's | interpretation |
---|---|---|---|---|
\(\baseSpaceStyle{\topologicalSpace{B}}\) | base space | \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\) | \(\baseSpaceStyle{\circleSpaceSymbol }\) | domain; space of inputs |
\(\totalSpaceStyle{\topologicalSpace{E}}\) | total space | \(\baseSpaceStyle{\topologicalSpace{I_{\pi }}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\) | \(\baseSpaceStyle{\circleSpaceSymbol }\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{I_1}}\) | host for graph; space of input-output pairs |
\(\fiberSpaceStyle{\topologicalSpace{F}}\) | fiber space | \(\fiberSpaceStyle{\topologicalSpace{I_1}}\) | \(\fiberSpaceStyle{\topologicalSpace{I_1}}\) | codomain; space of outputs |

The **base space** \(\baseSpaceStyle{\topologicalSpace{B}}\) plays the role of the domain, the **fiber space** \(\fiberSpaceStyle{\topologicalSpace{F}}\) plays the role of the codomain, and the **total space** \(\totalSpaceStyle{\topologicalSpace{E}}\) plays the role of the host for the graph \(\bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{f}}}}\) of a function \(\bundleFunctionStyle{\function{f}}\) on the base space: the graph must be a closed, connected subset of the total space with particular properties we will shortly explain.

For the interval and circle functions, the total space is simply the Cartesian product of the base space and the fiber space: \(\totalSpaceStyle{\topologicalSpace{E}} = \baseSpaceStyle{\topologicalSpace{B}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{F}}\).

### Conditions on graphs

How do we know if a particular (closed, connected) subset \(\bundleGraphStyle{\topologicalSpace{G}} \sub \totalSpaceStyle{\topologicalSpace{E}}\) of the total space \(\totalSpaceStyle{\topologicalSpace{E}}\) corresponds to the graph of some function \(\bundleFunctionStyle{\function{f}}\)? In the special case that the total space can be written as the Cartesian product \(\totalSpaceStyle{\topologicalSpace{E}} = \baseSpaceStyle{\topologicalSpace{B}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{F}}\), we have the usual conditions for a relation to be a graph: for every point in the base space \(\elemOf{\baseSpaceElementStyle{\sym{x}}}{\baseSpaceStyle{\topologicalSpace{B}}}\) there must be *exactly one* point in the fiber space \(\elemOf{\fiberSpaceElementStyle{\sym{y}}}{\fiberSpaceStyle{\topologicalSpace{F}}}\) such that \(\elemOf{\tuple{\baseSpaceElementStyle{\sym{x}},\fiberSpaceElementStyle{\sym{y}}}}{\bundleGraphStyle{\topologicalSpace{G}}}\). Then \(\bundleGraphStyle{\topologicalSpace{G}}\) is the graph of the function \(\function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{x}}) = \fiberSpaceElementStyle{\sym{y}}\).
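For finite sets, this "exactly one" condition can be checked mechanically. Here is a toy sketch (the base set and pairs are invented purely for illustration):

```python
from collections import Counter

def is_function_graph(G, B):
    """Return True when the relation G (a set of (x, y) pairs) pairs every
    x in the base set B with exactly one y -- the graph condition."""
    counts = Counter(x for x, _ in G)
    return set(counts) <= set(B) and all(counts[x] == 1 for x in B)

B = {0, 1, 2}
assert is_function_graph({(0, 'a'), (1, 'b'), (2, 'a')}, B)      # total and single-valued
assert not is_function_graph({(0, 'a'), (0, 'b'), (2, 'a')}, B)  # multivalued at 0, undefined at 1
```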

For examples, let us consider again the functions \(\functionSignature{\function{\bundleFunctionStyle{\function{f}}}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\) and \(\functionSignature{\function{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}}{\baseSpaceStyle{\circleSpaceSymbol }}{\fiberSpaceStyle{\topologicalSpace{I_1}}}\) defined by \(\mto{\baseSpaceElementStyle{\sym{ \theta }}}{\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)}\):

The graphs of these functions are the relations:

\[ \begin{aligned} \bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{f}}}}&= \setConstructor{\tuple{\baseSpaceElementStyle{\sym{ \theta }},\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)}}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}}\\ \bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}}}&= \setConstructor{\tuple{\sin(\baseSpaceElementStyle{\sym{ \theta }}),\cos(\baseSpaceElementStyle{\sym{ \theta }}),\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)}}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}}\end{aligned} \]To write down the graph of \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\), we first parameterize the Möbius strip in terms of \(\baseSpaceElementStyle{\sym{ \theta }}\), the angle around the strip, and \(\fiberSpaceElementStyle{\sym{r}}\), the displacement above or below the "equator". We choose an outer radius of 5 and an inner radius of 1, though these are arbitrary:

\[ \function{M}(\baseSpaceElementStyle{\sym{ \theta }},\fiberSpaceElementStyle{\sym{r}})\defEqualSymbol \tuple{\cos(\baseSpaceElementStyle{\sym{ \theta }}) \, \paren{5 - \fiberSpaceElementStyle{\sym{r}} \, \sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)},\sin(\baseSpaceElementStyle{\sym{ \theta }}) \, \paren{5 - \fiberSpaceElementStyle{\sym{r}} \, \sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)},\fiberSpaceElementStyle{\sym{r}} \, \cos(\baseSpaceElementStyle{\sym{ \theta }} / 2)} \]This allows us to define the total space for the strip \(\totalSpaceStyle{\topologicalSpace{M}}\) to be:

\[ \totalSpaceStyle{\topologicalSpace{M}}\defEqualSymbol \setConstructor{\function{M}(\baseSpaceElementStyle{\sym{ \theta }},\fiberSpaceElementStyle{\sym{r}})}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}},\elemOf{\fiberSpaceElementStyle{\sym{r}}}{\fiberSpaceStyle{\topologicalSpace{I_1}}}} \]The graph of \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) is then given by:

\[ \begin{aligned} \bundleGraphStyle{\functionGraph{\bundleFunctionStyle{\function{\blackCircleModifier{f}}}}}&= \setConstructor{\function{M}(\baseSpaceElementStyle{\sym{ \theta }},\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2))}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}}\\ &= \setConstructor{\tuple{\cos(\baseSpaceElementStyle{\sym{ \theta }}) \, \paren{9 + \cos(\baseSpaceElementStyle{\sym{ \theta }})},\sin(\baseSpaceElementStyle{\sym{ \theta }}) \, \paren{9 + \cos(\baseSpaceElementStyle{\sym{ \theta }})},\sin(\baseSpaceElementStyle{\sym{ \theta }})} / 2}{\elemOf{\baseSpaceElementStyle{\sym{ \theta }}}{\baseSpaceStyle{\topologicalSpace{I_{\pi }}}}}\end{aligned} \]

#### Projection maps

Returning to the question of how we can know when a given subset of \(\totalSpaceStyle{\topologicalSpace{E}}\) could be the graph of some function, we now need to define a **projection map** \(\functionSignature{\function{\bundleProjectionStyle{\bundleProjection{ \pi }}}}{\totalSpaceStyle{\topologicalSpace{E}}}{\baseSpaceStyle{\topologicalSpace{B}}}\), which is a continuous, surjective function that takes us from a point in the total space to the *underlying* point in the base space. If we think of points in the total space as being input-output pairs, the projection map should send such an input-output pair to its input component. The projection maps for the interval and circle total spaces are therefore the following:

For example, here we show a point \(\totalSpaceElementStyle{\sym{e}} = \tuple{\baseSpaceElementStyle{\sym{ \theta }},\fiberSpaceElementStyle{\sym{y}}} = \tuple{-\frac{7}{8} \, \pi ,\frac{1}{2}}\) in the total space and its projection to a point \(\baseSpaceElementStyle{\sym{ \theta }}\) in the base space in all three cases:

We can visualize the action of the projection map as a series of arrows that show how sets of points of the total space \(\totalSpaceStyle{\topologicalSpace{E}}\) are mapped by \(\bundleProjectionStyle{\bundleProjection{ \pi }}\) to points in the base space \(\baseSpaceStyle{\topologicalSpace{B}}\):

We can then rephrase part of the condition that a subset \(\bundleGraphStyle{\topologicalSpace{G}} \sub \totalSpaceStyle{\topologicalSpace{E}}\) of the total space represent a graph. In particular, we can require that the image of the graph be the base space:

\[ \function{\imageModifier{\bundleProjectionStyle{\bundleProjection{ \pi }}}}(\bundleGraphStyle{\topologicalSpace{G}}) = \baseSpaceStyle{\topologicalSpace{B}} \]This ensures that there is *at least* one element \(\elemOf{\tuple{\baseSpaceElementStyle{\sym{x}},\fiberSpaceElementStyle{\sym{y}}}}{\bundleGraphStyle{\topologicalSpace{G}}}\) for every \(\elemOf{\baseSpaceElementStyle{\sym{x}}}{\baseSpaceStyle{\topologicalSpace{B}}}\), in other words, that the function underlying the graph is **total**.

Now, if there were *more* than one such pair for a given \(\elemOf{\baseSpaceElementStyle{\sym{x}}}{\baseSpaceStyle{\topologicalSpace{B}}}\), say \(\tuple{\baseSpaceElementStyle{\sym{x}},\fiberSpaceElementStyle{\sym{y}}}\) and \(\tuple{\baseSpaceElementStyle{\sym{x}},\primed{\fiberSpaceElementStyle{\sym{y}}}}\), then we would effectively have the graph of a **multivalued** function that takes on a *set* of values at \(\baseSpaceElementStyle{\sym{x}}\): \(\function{f}(\baseSpaceElementStyle{\sym{x}}) = \set{\fiberSpaceElementStyle{\sym{y}},\primed{\fiberSpaceElementStyle{\sym{y}}}}\).

To ensure that there is at *most one* such pair so that we have a single-valued function, we require that there exists a total function \(\functionSignature{\function{\bundleSectionStyle{\bundleSection{ \sigma }}}}{\baseSpaceStyle{\topologicalSpace{B}}}{\totalSpaceStyle{\topologicalSpace{E}}}\) that *gives* us this single pair for every \(\elemOf{\baseSpaceElementStyle{\sym{x}}}{\baseSpaceStyle{\topologicalSpace{B}}}\).

For example, for our interval function \(\function{\bundleFunctionStyle{\function{f}}}(\baseSpaceElementStyle{\sym{ \theta }}) = \sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)\) the corresponding \(\bundleSectionStyle{\bundleSection{ \sigma }}\) is:

\[ \function{\bundleSectionStyle{\bundleSection{ \sigma }}}(\baseSpaceElementStyle{\sym{ \theta }})\defEqualSymbol \tuple{\baseSpaceElementStyle{\sym{ \theta }},\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)} \]We can visualize the action of \(\bundleFunctionStyle{\function{f}},\bundleFunctionStyle{\function{\whiteCircleModifier{f}}},\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) as the sets of arrows that show how a point in the base space is mapped to a point in the total space:

Notice that this function \(\bundleSectionStyle{\bundleSection{ \sigma }}\) must send every \(\elemOf{\baseSpaceElementStyle{\sym{x}}}{\baseSpaceStyle{\topologicalSpace{B}}}\) to a point of \(\totalSpaceStyle{\topologicalSpace{E}}\) that then projects back down to the original point: \(\function{\bundleProjectionStyle{\bundleProjection{ \pi }}}(\function{\bundleSectionStyle{\bundleSection{ \sigma }}}(\baseSpaceElementStyle{\sym{x}}))\identicallyEqualSymbol \baseSpaceElementStyle{\sym{x}}\). Compactly this is the identity:

\[ \functionComposition{\bundleProjectionStyle{\bundleProjection{ \pi }}\functionCompositionSymbol \bundleSectionStyle{\bundleSection{ \sigma }}}\identicallyEqualSymbol \identity_{\baseSpaceStyle{\topologicalSpace{B}}} \]The function \(\bundleSectionStyle{\bundleSection{ \sigma }}\) contains the same information as the graph \(\bundleGraphStyle{\topologicalSpace{G}}\). In fact, the graph is simply the image of the base space under \(\bundleSectionStyle{\bundleSection{ \sigma }}\):

\[ \function{\imageModifier{\bundleSectionStyle{\bundleSection{ \sigma }}}}(\baseSpaceStyle{\topologicalSpace{B}}) = \bundleGraphStyle{\topologicalSpace{G}} \]

#### Summary

We now have a characterization for when a subset \(\bundleGraphStyle{\topologicalSpace{G}}\) of the total space \(\totalSpaceStyle{\topologicalSpace{E}}\) is the graph of a function-like object on the base space \(\baseSpaceStyle{\topologicalSpace{B}}\):

We must *first* decide on a projection map \(\functionSignature{\function{\bundleProjectionStyle{\bundleProjection{ \pi }}}}{\totalSpaceStyle{\topologicalSpace{E}}}{\baseSpaceStyle{\topologicalSpace{B}}}\) that describes how \(\totalSpaceStyle{\topologicalSpace{E}}\) covers \(\baseSpaceStyle{\topologicalSpace{B}}\), since this determines whether a subset \(\bundleGraphStyle{\topologicalSpace{G}} \sub \totalSpaceStyle{\topologicalSpace{E}}\) corresponds to the graph of a single-valued function. Once this choice is made, a subset \(\bundleGraphStyle{\topologicalSpace{G}} \sub \totalSpaceStyle{\topologicalSpace{E}}\) is the graph of a function-like object if it is the image \(\function{\imageModifier{\bundleSectionStyle{\bundleSection{ \sigma }}}}(\baseSpaceStyle{\topologicalSpace{B}}) = \bundleGraphStyle{\topologicalSpace{G}}\) of \(\baseSpaceStyle{\topologicalSpace{B}}\) under a continuous map \(\functionSignature{\function{\bundleSectionStyle{\bundleSection{ \sigma }}}}{\baseSpaceStyle{\topologicalSpace{B}}}{\totalSpaceStyle{\topologicalSpace{E}}}\), such that \(\functionComposition{\bundleProjectionStyle{\bundleProjection{ \pi }}\functionCompositionSymbol \bundleSectionStyle{\bundleSection{ \sigma }}}\identicallyEqualSymbol \identity_{\baseSpaceStyle{\topologicalSpace{B}}}\).
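This characterization can be exercised concretely for the trivial interval bundle, with the projection keeping the first factor and the section encoding \(\sin( \theta  / 2)\) (a minimal sketch, with representation choices of our own):

```python
import math

def proj(e):
    # Projection of the trivial bundle: an input-output pair keeps its input.
    theta, _y = e
    return theta

def sigma(theta):
    # Section whose image is the graph of f(theta) = sin(theta/2).
    return (theta, math.sin(theta / 2))

# The section satisfies proj ∘ sigma = identity on the base space, and its
# output component stays inside the fiber I_1, so its image is the graph of
# a total, single-valued function.
for k in range(-8, 9):
    theta = k * math.pi / 8
    assert proj(sigma(theta)) == theta
    assert -1.0 <= sigma(theta)[1] <= 1.0
```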

### Sections

We just explained how a continuous function \(\functionSignature{\function{\bundleSectionStyle{\bundleSection{ \sigma }}}}{\baseSpaceStyle{\topologicalSpace{B}}}{\totalSpaceStyle{\topologicalSpace{E}}}\) satisfying \(\functionComposition{\bundleProjectionStyle{\bundleProjection{ \pi }}\functionCompositionSymbol \bundleSectionStyle{\bundleSection{ \sigma }}}\identicallyEqualSymbol \identity_{\baseSpaceStyle{\topologicalSpace{B}}}\) can be seen as determining the graph of a function-like object \(\bundleGraphStyle{\topologicalSpace{G}}\) on a base space \(\baseSpaceStyle{\topologicalSpace{B}}\), once a projection \(\functionSignature{\function{\bundleProjectionStyle{\bundleProjection{ \pi }}}}{\totalSpaceStyle{\topologicalSpace{E}}}{\baseSpaceStyle{\topologicalSpace{B}}}\) is chosen. To represent and compute with these function-like objects, it turns out to be more convenient to work with \(\bundleSectionStyle{\bundleSection{ \sigma }}\) itself rather than its corresponding graph \(\bundleGraphStyle{\topologicalSpace{G}} \sub \totalSpaceStyle{\topologicalSpace{E}}\).

Such a function \(\functionSignature{\function{\bundleSectionStyle{\bundleSection{ \sigma }}}}{\baseSpaceStyle{\topologicalSpace{B}}}{\totalSpaceStyle{\topologicalSpace{E}}}\) is called a **section** of the **fiber bundle** \(\tuple{\totalSpaceStyle{\topologicalSpace{E}},\baseSpaceStyle{\topologicalSpace{B}},\bundleProjectionStyle{\bundleProjection{ \pi }}}\). The fiber bundle encapsulates our choice of where the graphs of functions on \(\baseSpaceStyle{\topologicalSpace{B}}\) should live; sections then represent and encode these graphs.

In the special case that \(\totalSpaceStyle{\topologicalSpace{E}} = \baseSpaceStyle{\topologicalSpace{B}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{F}}\) (such bundles are called **trivial fiber bundles**), these sections encode the graphs of ordinary functions with domain \(\baseSpaceStyle{\topologicalSpace{B}}\) and codomain \(\fiberSpaceStyle{\topologicalSpace{F}}\). This was the case with our examples \(\bundleFunctionStyle{\function{f}}\) and \(\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}\).

The sections corresponding to the functions \(\bundleFunctionStyle{\function{f}}\) and \(\bundleFunctionStyle{\function{\whiteCircleModifier{f}}}\) are:

\[ \begin{aligned} \function{\bundleSectionStyle{\bundleSection{ \sigma }}}(\baseSpaceElementStyle{\sym{ \theta }})&\defEqualSymbol \tuple{\baseSpaceElementStyle{\sym{ \theta }},\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)}\\[0.75em] \function{\bundleSectionStyle{\bundleSection{\whiteCircleModifier{ \sigma }}}}(\baseSpaceElementStyle{\sym{ \theta }})&\defEqualSymbol \tuple{\sin(\baseSpaceElementStyle{\sym{ \theta }}),\cos(\baseSpaceElementStyle{\sym{ \theta }}),\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)}\end{aligned} \]What about the section \(\bundleSectionStyle{\bundleSection{\blackCircleModifier{ \sigma }}}\)? We will explain this in the next... section.

### Möbius strip

The object \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) was a case in which we could *not* represent the total space as a Cartesian product. For this case, \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) will be a section of a fiber bundle \(\tuple{\totalSpaceStyle{\topologicalSpace{M}},\baseSpaceStyle{\circleSpaceSymbol },\bundleProjectionStyle{\bundleProjection{\blackCircleModifier{p}}}}\), where \(\totalSpaceStyle{\topologicalSpace{M}}\) is the subset of \(\realVectorSpace{3}\) (with its induced topology) that we defined earlier to be:

The projection \(\functionSignature{\function{\bundleProjectionStyle{\bundleProjection{\blackCircleModifier{p}}}}}{\totalSpaceStyle{\topologicalSpace{M}}}{\baseSpaceStyle{\circleSpaceSymbol }}\) is defined by:

\[ \function{\bundleProjectionStyle{\bundleProjection{\blackCircleModifier{p}}}}(\function{M}(\baseSpaceElementStyle{\sym{ \theta }},\fiberSpaceElementStyle{\sym{r}}))\defEqualSymbol \baseSpaceElementStyle{\sym{ \theta }} \]Then \(\bundleFunctionStyle{\function{\blackCircleModifier{f}}}\) is the function-like object whose graph is described by the section:

\[ \function{\bundleSectionStyle{\bundleSection{\blackCircleModifier{ \sigma }}}}(\baseSpaceElementStyle{\sym{ \theta }})\defEqualSymbol \function{M}(\baseSpaceElementStyle{\sym{ \theta }},\sin(\baseSpaceElementStyle{\sym{ \theta }} / 2)) \]

### Typical fibers and local trivializations

We saw that the sections \(\bundleSectionStyle{\bundleSection{ \sigma }}\) are generalizations of functions, since they encode graphs of such function-like objects in some total space of the corresponding fiber bundle \(\tuple{\totalSpaceStyle{\topologicalSpace{E}},\baseSpaceStyle{\topologicalSpace{B}},\bundleProjectionStyle{\bundleProjection{ \pi }}}\). For *ordinary* functions – which are sections of trivial fiber bundles – the total space is a Cartesian product \(\totalSpaceStyle{\topologicalSpace{E}} = \baseSpaceStyle{\topologicalSpace{B}}\cartesianProductSymbol \fiberSpaceStyle{\topologicalSpace{F}}\), where the fiber space \(\fiberSpaceStyle{\topologicalSpace{F}}\) plays the role of the codomain of the function. Do we have to completely throw out the notion of such a fiber space for non-trivial fiber bundles? The idea of a codomain seems useful, so we would prefer not to throw the baby out with the bathwater. Luckily, we can still talk about a so-called **typical fiber** \(\fiberSpaceStyle{\topologicalSpace{F}}\).

The idea is that while the total space may not break down into a straightforward Cartesian product *globally*, it does break down *locally* – meaning that for suitably small open neighborhoods \(\baseSpaceStyle{\topologicalSpace{U}} \sub \baseSpaceStyle{\topologicalSpace{B}}\) of \(\baseSpaceStyle{\topologicalSpace{B}}\) around some point \(\elemOf{\baseSpaceElementStyle{\sym{x}}}{\baseSpaceStyle{\topologicalSpace{B}}}\), we *can* represent the corresponding portion \(\totalSpaceStyle{\topologicalSpace{V}} \sub \totalSpaceStyle{\topologicalSpace{E}}\) of the total space \(\totalSpaceStyle{\topologicalSpace{E}}\) that *lives above* \(\baseSpaceStyle{\topologicalSpace{U}}\) as a Cartesian product with the typical fiber \(\fiberSpaceStyle{\topologicalSpace{F}}\):

This is called a **local trivialization** of \(\totalSpaceStyle{\topologicalSpace{E}}\) around \(\baseSpaceElementStyle{\sym{x}}\). Such a local trivialization is always possible for a fiber bundle (a fact we will not prove here). What makes a non-trivial fiber bundle *non-trivial* is that we cannot *glue together* these local trivializations to express the entire total space as such a Cartesian product.

For the interval, circle, and twisted functions we examined above, the typical fiber was in all cases \(\fiberSpaceStyle{\topologicalSpace{F}} = \fiberSpaceStyle{\topologicalSpace{I_1}}\).

## Discrete fiber bundles

How do we recapitulate the setup of continuous fiber bundles for quivers? Let us try the most straightforward approach. We define a **quiver fiber bundle** to be a tuple \(\tuple{\totalSpaceStyle{\quiver{E}},\baseSpaceStyle{\quiver{B}},\bundleProjectionStyle{\pathHomomorphism{ \pi }}}\), where:

- \(\totalSpaceStyle{\quiver{E}}\) is the **total quiver**
- \(\baseSpaceStyle{\quiver{B}}\) is the **base quiver**
- \(\functionSignature{\function{\bundleProjectionStyle{\pathHomomorphism{ \pi }}}}{\totalSpaceStyle{\quiver{E}}}{\baseSpaceStyle{\quiver{B}}}\) is the **projection map**, a [[[path homomorphism:Path homomorphisms]]] that is moreover a [[[covering map:Coverings#Path homomorphisms]]]

The special case of a **trivial quiver bundle** occurs when the total space is a [[[quiver product:Quiver products#Product notation]]] given by the so-called **fiber product**:

As an example, let us consider a total quiver \(\totalSpaceStyle{\quiver{E}}\) defined to be the fiber product of \(\baseSpaceStyle{\quiver{B}} = \bindCardSize{\subSize{\lineQuiver }{8}}{\card{\dbform{b}}}\) and \(\fiberSpaceStyle{\quiver{F}} = \bindCardSize{\subSize{\lineQuiver }{2}}{\card{\drform{f}}}\). We show the total space below, alongside its factors:

The projection map \(\bundleProjectionStyle{\pathHomomorphism{ \pi }}\) maps a path in the total quiver to the path in the base quiver that consists only of the first factor vertex of each product vertex, and whose cardinals consist of the first factor cardinal of each product cardinal. In other words, the cardinals are mapped as follows:

\[ \assocArray{\mto{\cardinalProduct{\card{\dbform{b}},\inverted{\card{\drform{f}}}}}{\card{\dbform{b}}},\mto{\cardinalProduct{\card{\dbform{b}},1}}{\card{\dbform{b}}},\mto{\cardinalProduct{\card{\dbform{b}},\card{\drform{f}}}}{\card{\dbform{b}}}} \]Below we visualize the "graph" of a section of this quiver bundle:

The corresponding section \(\functionSignature{\function{\bundleSectionStyle{\pathHomomorphism{ \sigma }}}}{\baseSpaceStyle{\quiver{B}}}{\totalSpaceStyle{\quiver{E}}}\) is then a path homomorphism that sends a path in the base quiver \(\baseSpaceStyle{\quiver{B}}\) to the corresponding "lifted" path lying in the subgraph illustrated above. For example, we show the image of two such paths in the base quiver:
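At the level of vertices, the bookkeeping of a trivial quiver bundle can be sketched in a few lines (a toy model: we track only vertices, not edges or cardinals, and the particular section is invented for illustration):

```python
# Vertices of the base quiver (a line quiver on 8 vertices) and of the fiber
# quiver (on 2 vertices); the total quiver's vertices are their product.
base = range(8)
fiber = range(2)
total = {(b, f) for b in base for f in fiber}

def proj(v):
    # The projection keeps only the base component of a product vertex.
    return v[0]

def sigma(b):
    # A section: choose one fiber vertex over each base vertex.
    choice = [0, 0, 1, 1, 1, 0, 0, 0]
    return (b, choice[b])

for b in base:
    assert sigma(b) in total     # the section's image lies in the total quiver
    assert proj(sigma(b)) == b   # and proj ∘ sigma is the identity on the base
```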

# Path calculus

## Introduction

This section will introduce **path calculus**. Path calculus is the simple formalism that extends the so-called **calculus of finite differences** to the setting of functions defined on the vertices of quivers. In the subsequent section, we will explain how to generalize path calculus into the realm of **path algebra**, where finite differences become linear operators on a vector space of paths. But for now, the constructions will be elementary and straightforward.

How do we set up calculus on cardinal quivers? Before beginning, it is obvious that we should expect something that looks more like the **calculus of finite differences** than the more familiar **infinitesimal calculus** employed for continuous functions. Philosophically, the individual edges of a lattice quiver *are* the equivalent of infinitesimals from a suitably “zoomed out” perspective.

All the following examples will be hosted on relatively simple lattice quivers. We’ll start with the one-dimensional line lattice, and move later to the square and triangular lattices to illustrate the generalization to multiple cardinals and hence dimensions.

## One dimensional lattices

To start, our base lattice will be the **line lattice**, written \(\quiver{L}\), which has the attractive pedagogical property that it will act like the discrete equivalent of the *real number line* on which continuous one-dimensional functions are usually defined.

We’ve labeled the vertices with integers, with 0 for the origin vertex. We’ll refer to the vertices of \(\quiver{L}\) as \(\vertices{\quiver{L}} = \vertexList(\quiver{L})\).

## Vertex fields

A **vertex field** on \(\quiver{L}\) is a function \(\functionSignature{\function{\vertexField{f}}}{\vertices{\quiver{L}}}{\baseField{K}}\), where \(\baseField{K}\) is a field (or commutative ring). We write the space of all vertex fields as \(\functionSpace{\vertices{\quiver{L}}}{\baseField{K}}\).

The word *field* here is serving two different roles, of course: the *field* in “vertex field” is analogous to a scalar field in physics (a single value at each point in space), whereas the *field* in “\(\baseField{K}\) is a field” refers to scalar values in the algebraic sense: a commutative ring in which nonzero elements have multiplicative inverses. Vertex fields themselves admit pointwise multiplication, addition, and (wherever they are nonzero) inversion, so the pun is in some sense accurate.

We’ll initially consider only *finite* fields \(\baseField{K} = \finiteField{p}\), which will allow us to visualize vertex fields as colorings of vertices. For \(\sym{p}=2\) we’ll use white for 0 and black for 1, and for \(\sym{p}=3\) we’ll use white for 0, red for 1, and blue for −1. For \(\sym{p}>3\), the cyclic property of these finite fields will be represented with colors that cycle through the visible spectrum.

Here are two vertex fields \(\vertexField{u}\) and \(\vertexField{v}\) we’ll use to illustrate various operations:

We can form sums and products of these vertex fields, defined vertex-wise:

## Translation operator

We now introduce the **translation** operator \(\pathTranslate{}{\vertexField{u}}\), which translates the vertex field \(\vertexField{u}\) along the cardinal \(\card{x}\). Later we’ll define this operation for quivers with more than one cardinal, and generalize it to allow translation along arbitrary paths.

Notice that the vertex fields \(\vertexField{u}\) and \(\vertexField{v}\) have been “shifted over” to the right. Similarly we can define the **backward translation** operator \(\pathBackwardTranslate{}{\vertexField{u}}\):

## Finite differences

We can use these translation operators to define the **forward finite difference** of a vertex field \(\vertexField{u}\), written \(\forwardDifference{}\,\vertexField{u}\).

The operation \(\forwardDifference{}\,\vertexField{u}\) is defined to be \(\forwardDifference{}\,\vertexField{u} = \vertexField{u} - \pathTranslate{}{\vertexField{u}}\), and can be thought of as *comparing* the translated and untranslated versions of \(\vertexField{u}\):

The single red vertex in the center indicates that the vertex field \(\vertexField{u} = \list{\ellipsis ,0,0,1,1,1,\ellipsis }\) has a “step change”, yielding \(\forwardDifference{}\,\vertexField{u} = \list{\ellipsis ,0,0,1,0,0,\ellipsis }\). In terms of 1D discrete functions, the picture is:

Similarly we can define the **backward finite difference** to be \(\backwardDifference{}\,\vertexField{u} = \pathBackwardTranslate{}{\vertexField{u}} - \vertexField{u}\), illustrated below:

The worked calculation is shown below:

We can again visualize this as a finite difference of a 1D discrete function:

Finally we can define the **central difference** \(\centralDifference{}\,\vertexField{u} = \forwardDifference{}\,\vertexField{u} + \backwardDifference{}\,\vertexField{u}\), which simplifies to \(\pathBackwardTranslate{}{\vertexField{u}} - \pathTranslate{}{\vertexField{u}}\):

The 1D discrete function interpretation:

### Second order differences

It is perfectly well defined to apply the operators \(\forwardDifference{}\), \(\backwardDifference{}\), and \(\centralDifference{}\) multiple times, which we’ll write as \(\centralDifference{}\,\centralDifference{} = \centralDifference{}_2\), etc.
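As a sanity check on these definitions, the following hypothetical sketch realizes translation and the three differences for vertex fields on the line lattice, modelled simply as integer-valued functions on \(\integers\) (integer values rather than a finite field, for readability):

```python
# Hypothetical sketch: translation and finite differences on the line
# lattice, with vertex fields represented as functions on the integers.

def translate(u):
    """Forward translation: shift the field one step along the cardinal."""
    return lambda n: u(n - 1)

def back_translate(u):
    """Backward translation: shift the field one step the other way."""
    return lambda n: u(n + 1)

def forward_diff(u):
    return lambda n: u(n) - translate(u)(n)        # u - T(u)

def backward_diff(u):
    return lambda n: back_translate(u)(n) - u(n)   # T'(u) - u

def central_diff(u):
    return lambda n: back_translate(u)(n) - translate(u)(n)

step = lambda n: 1 if n >= 0 else 0   # the field (..., 0, 0, 1, 1, 1, ...)
```

Applied to the step field of the earlier example, the forward difference produces a single nonzero value at the step, and the identity \(\centralDifference{} = \forwardDifference{} + \backwardDifference{}\) holds pointwise.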

Let’s visualize the behavior of the second order forms of the three operators:

### Leibniz’s rule

In traditional calculus, we have the famous **Leibniz** (or **product**) **rule**: \(\partialdof{\function{f} \, \function{g}} = \function{f} \, \partialdof{\function{g}} + \function{g} \, \partialdof{\function{f}}\).

Does the same rule hold for the finite difference of a product \(\vertexField{u} \, \vertexField{w}\) of vertex fields \(\vertexField{u}\), \(\vertexField{w}\) on the line lattice? The following table shows, on the first row, example vertex fields \(\vertexField{u}\), \(\vertexField{w}\) and their product \(\vertexField{u} \, \vertexField{w}\); on the second row, the central finite differences of each field individually and of their product; and on the third row, the two terms of the ordinary product rule and their sum.

As we can see, the two bolded expressions are different, so the “naive” form of the Leibniz rule does not hold.

Luckily, it is straightforward to derive a modified form of the Leibniz rule, involving translations of one or both factors of the product. There are in fact *two equivalent* versions of this modified form for each difference operator, which one might call the ‘forward’ and ‘backward’ forms:

Again, notice that the naive Leibniz rule does not give the true difference in any of the three cases, whereas both the forward and backward forms of the modified Leibniz rule *do*.
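We can check the central-difference forms numerically. The sketch below (hypothetical code; fields are integer lists on a cyclic line lattice, with `T` and `Tb` the forward and backward translations) confirms that both modified forms agree exactly with the difference of the product:

```python
import random

# Hypothetical check of the modified (central-difference) Leibniz rule on a
# cyclic line lattice, with vertex fields stored as integer lists.

def T(u):   # forward translation: T(u)[n] = u[n - 1] (cyclic)
    return u[-1:] + u[:-1]

def Tb(u):  # backward translation: Tb(u)[n] = u[n + 1] (cyclic)
    return u[1:] + u[:1]

def central(u):
    return [a - b for a, b in zip(Tb(u), T(u))]

def mul(u, w):
    return [a * b for a, b in zip(u, w)]

def add(u, w):
    return [a + b for a, b in zip(u, w)]

random.seed(0)
u = [random.randrange(-3, 4) for _ in range(12)]
w = [random.randrange(-3, 4) for _ in range(12)]

lhs = central(mul(u, w))
forward_form  = add(mul(T(w),  central(u)), mul(Tb(u), central(w)))
backward_form = add(mul(Tb(w), central(u)), mul(T(u),  central(w)))
```

Expanding either form gives \(\pathBackwardTranslate{}{\vertexField{u}}\,\pathBackwardTranslate{}{\vertexField{w}} - \pathTranslate{}{\vertexField{u}}\,\pathTranslate{}{\vertexField{w}}\), which is exactly the central difference of the product.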

It is intuitive that in the “continuum limit” of slowly changing vertex fields on very large lattices, the effect of unit translations like \(\pathTranslate{}{\vertexField{w}}\) can be ignored, replacing the translated fields with the untranslated fields \(\vertexField{w}\). Applying this to the modified Leibniz rules above, we recover the classical Leibniz rule; for example, in the case of the central difference:

\[ \pathTranslate{}{\vertexField{w}} \, \centralDifference{}\,\vertexField{u} + \pathBackwardTranslate{}{\vertexField{u}} \, \centralDifference{}\,\vertexField{w}\approxEqualSymbol \pathBackwardTranslate{}{\vertexField{w}} \, \centralDifference{}\,\vertexField{u} + \pathTranslate{}{\vertexField{u}} \, \centralDifference{}\,\vertexField{w}\approxEqualSymbol \vertexField{w} \, \centralDifference{}\,\vertexField{u} + \vertexField{u} \, \centralDifference{}\,\vertexField{w} \]

## Two dimensional lattices

To continue our recapitulation of calculus, we next move to *multivariable calculus*. Taking the place of continuous functions of two real variables, we will have vertex fields on lattices with two cardinals.

Let’s start with the most intuitive, the **square lattice**:

This time, we will take finite differences separately along the two cardinal directions \(\rbform{\card{x}}\) and \(\gbform{\card{y}}\), with the cardinal written in subscript form, e.g. for the central finite differences:

 | central | forward | backward |
---|---|---|---|
difference along \(\rbform{\card{x}}\) | \(\pathCentralDifference{\card{x}}\) | \(\pathForwardDifference{\card{x}}\) | \(\pathBackwardDifference{\card{x}}\) |
difference along \(\gbform{\card{y}}\) | \(\pathCentralDifference{\card{y}}\) | \(\pathForwardDifference{\card{y}}\) | \(\pathBackwardDifference{\card{y}}\) |

These are defined as before, except now we use per-cardinal forward and backward translations:

 | forward | backward |
---|---|---|
translation along \(\rbform{\card{x}}\) | \(\pathLeftTranslate{\card{x}}\) | \(\pathLeftBackwardTranslate{\card{x}}\) |
translation along \(\gbform{\card{y}}\) | \(\pathLeftTranslate{\card{y}}\) | \(\pathLeftBackwardTranslate{\card{y}}\) |

Here we show the finite differences of three simple vertex fields \(\vertexField{u}\), \(\vertexField{v}\), \(\vertexField{w}\), shown on successive rows:

We can also take second order differences in multiple ways, shown here in terms of the central difference applied to \(\vertexField{u}\). Note that we define second order differences to act in the order they appear in the subscript, so that e.g. \(\pathCentralDifference{\card{x},\card{y}} = \pathCentralDifference{\card{y}}\,\pathCentralDifference{\card{x}}\).

Notice that \(\pathCentralDifference{\card{y},\card{x}}\,\vertexField{u} = \pathCentralDifference{\card{x},\card{y}}\,\vertexField{u}\). In fact, this identity is true for all vertex fields. This is a consequence of the path relation \(\word{\rbform{\card{x}}}{\gbform{\card{y}}}\pathIso \word{\gbform{\card{y}}}{\rbform{\card{x}}}\), which implies that the translations \(\pathLeftTranslate{\card{x}}\) and \(\pathLeftTranslate{\card{y}}\) commute, which makes the proof of the identity elementary:

\[ \begin{aligned} \pathCentralDifference{\rbform{\card{x}},\gbform{\card{y}}}\,\vertexField{u}&= \pathBackwardTranslate{\gbform{\card{y}}}{\paren{\pathCentralDifference{\rbform{\card{x}}}\,\vertexField{u}}} - \pathTranslate{\gbform{\card{y}}}{\paren{\pathCentralDifference{\rbform{\card{x}}}\,\vertexField{u}}}\\ &= \pathBackwardTranslate{\gbform{\card{y}}}{\pathBackwardTranslate{\rbform{\card{x}}}{\vertexField{u}}} - \pathBackwardTranslate{\gbform{\card{y}}}{\pathTranslate{\rbform{\card{x}}}{\vertexField{u}}} - \pathTranslate{\gbform{\card{y}}}{\pathBackwardTranslate{\rbform{\card{x}}}{\vertexField{u}}} + \pathTranslate{\gbform{\card{y}}}{\pathTranslate{\rbform{\card{x}}}{\vertexField{u}}}\\ &= \pathBackwardTranslate{\rbform{\card{x}}}{\pathBackwardTranslate{\gbform{\card{y}}}{\vertexField{u}}} - \pathBackwardTranslate{\rbform{\card{x}}}{\pathTranslate{\gbform{\card{y}}}{\vertexField{u}}} - \pathTranslate{\rbform{\card{x}}}{\pathBackwardTranslate{\gbform{\card{y}}}{\vertexField{u}}} + \pathTranslate{\rbform{\card{x}}}{\pathTranslate{\gbform{\card{y}}}{\vertexField{u}}}\\ &= \pathBackwardTranslate{\rbform{\card{x}}}{\paren{\pathCentralDifference{\gbform{\card{y}}}\,\vertexField{u}}} - \pathTranslate{\rbform{\card{x}}}{\paren{\pathCentralDifference{\gbform{\card{y}}}\,\vertexField{u}}}\\ &= \pathCentralDifference{\gbform{\card{y}},\rbform{\card{x}}}\,\vertexField{u}\end{aligned} \]

## Edge fields

Next, we are going to consider **edge fields** on the square lattice quiver \(\quiver{S}\), constructed similarly to the vertex fields \(\functionSpace{\vertices{}}{\baseField{K}}\).

An **edge field** on \(\quiver{S}\) is a function \(\functionSignature{\function{\edgeField{h}}}{\edges{\quiver{S}}}{\baseField{K}}\), where again \(\baseField{K}\) is a field or commutative ring, and \(\edges{\quiver{S}}\) is the set of edges of \(\quiver{S}\). We write the space of all edge fields as \(\functionSpace{\edges{}}{\baseField{K}}\).

#### Orientation

It's important that we consider these edges in a *directed* sense: an edge field is a *single* value associated with a given edge \(\elemOf{\edge{e}}{\edges{}}\), namely \(\edgeField{h}(\edge{e})\), but if we consider that edge in its inverted orientation, we invert the corresponding value of the edge field: \(\edgeField{h}(\inverted{\edge{e}}) = \minus{\edgeField{h}(\edge{e})}\). This interpretation is natural if we regard the edge field as representing a **flow** of some quantity, since the flow *along* a given edge will be negative if we choose to measure it in the opposite orientation. We'll be on a firmer footing when we consider path algebra, so don't worry if this point seems vague right now.

#### Visualization

We will visualize edge fields using colored edges, in the same fashion we used for vertex fields. Because edges are naturally oriented by their corresponding cardinals, a positive weight on an edge will be represented visually as a double-headed arrow in which the arrowhead in the forward direction is colored red and the arrowhead in the backward direction is colored blue. The opposite applies when drawing a negative weight.

Here's an example of an edge field in which only two edges have non-zero weight: the horizontal edge has weight 1, and the vertical edge has weight -1. Alongside it is the original lattice quiver \(\quiver{S}\) to allow comparison with the orientation of the corresponding cardinals.

### Gradient

We can generate an edge field from a vertex field \(\functionSignature{\function{\vertexField{u}}}{\vertices{\quiver{S}}}{\baseField{K}}\) using the **gradient operator**, written \(\grad\). Formally, \(\grad\) is a map from the space of vertex fields to the space of edge fields, in other words \(\functionSignature{\function{\grad }}{\functionSpace{\vertices{}}{\baseField{K}}}{\functionSpace{\edges{}}{\baseField{K}}}\). It is defined edgewise as follows:

In other words, the gradient operator *measures the difference* between the value of \(\vertexField{u}\) at the head and tail of each edge.
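In code, the gradient is a one-liner. The sketch below (hypothetical, with edges stored as `(tail, head)` pairs on a small path quiver) computes \(\vertexField{u}(\headVertex(\edge{e})) - \vertexField{u}(\tailVertex(\edge{e}))\) for every edge:

```python
# Hypothetical sketch: the gradient operator on a tiny quiver, taking a
# vertex field to an edge field via (grad u)(e) = u(head e) - u(tail e).

edges = [(0, 1), (1, 2), (2, 3)]   # a short path quiver: 0 -> 1 -> 2 -> 3

def grad(u):
    """Map a vertex field (dict vertex -> value) to an edge field."""
    return {(t, h): u[h] - u[t] for (t, h) in edges}

u = {0: 0, 1: 0, 2: 1, 3: 1}       # a step-like vertex field
```

The single nonzero edge weight sits exactly where the vertex field changes value.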

Let's visualize the behavior of the gradient operator on some simple vertex fields:

### Divergence

Next we'll introduce the **divergence operator**, written \(\div\), a kind of complementary operator to the gradient operator. It takes an edge field \(\functionSignature{\edgeField{h}}{\edges{}}{\baseField{K}}\) and produces a vertex field \(\functionSignature{\function{\div \,\edgeField{h}}}{\vertices{}}{\baseField{K}}\); in other words, we have \(\functionSignature{\function{\div }}{\functionSpace{\edges{}}{\baseField{K}}}{\functionSpace{\vertices{}}{\baseField{K}}}\).

It is defined vertexwise as follows:

\[ \function{\paren{\div \,\edgeField{h}}}(\vert{v}) = \indexSum{\edgeField{h}(\edge{e})}{\headVertex(\edge{e}) = \vert{v}}{} - \indexSum{\edgeField{h}(\edge{e})}{\tailVertex(\edge{e}) = \vert{v}}{} \]

We can define it more elegantly using the **path Kronecker**, \(\functionSignature{\function{ \Xi }}{\tuple{\edges{},\vertices{}}}{\baseField{K}}\), defined as:

Using \(\function{ \Xi }\) we can express \(\div\) vertexwise as a kind of **convolution**:

Let's apply the divergence operator to some example edge fields:

### Laplacian

An interesting quantity is the divergence of the gradient, \(\laplacian = \div \,\grad\), called the **Laplacian**. Let's evaluate it on a simple vertex field:

The weights of the vertex field \(\laplacianOf{\vertexField{u}}\) are \(+4\) for the central red vertex and \(-1\) for the surrounding blue vertices. This can be explained by counting the incident edges visible in \(\gradOf{\vertexField{u}}\): the central vertex has 4, whereas each outer vertex has only one.
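We can confirm this count with a small hypothetical sketch on a \(5 \times 5\) patch of the square lattice, applying \(\div\) after \(\grad\) to a field supported on a single central vertex (the patch is large enough that boundary effects don't reach the center):

```python
# Hypothetical sketch: Laplacian = div . grad on a 5x5 patch of the square
# lattice, applied to a field that is 1 at the center and 0 elsewhere.

N = 5
verts = [(i, j) for i in range(N) for j in range(N)]
# edges along the two cardinals x and y, oriented toward increasing coordinate
edges = [((i, j), (i + 1, j)) for i in range(N - 1) for j in range(N)] + \
        [((i, j), (i, j + 1)) for i in range(N) for j in range(N - 1)]

def grad(u):
    """(grad u)(e) = u(head e) - u(tail e), for each (tail, head) edge."""
    return {(t, h): u[h] - u[t] for (t, h) in edges}

def div(h):
    """(div h)(v) = sum over edges into v minus sum over edges out of v."""
    out = {v: 0 for v in verts}
    for (t, hd), w in h.items():
        out[hd] += w   # flow into the head vertex...
        out[t] -= w    # ...and out of the tail vertex
    return out

center = (2, 2)
u = {v: (1 if v == center else 0) for v in verts}
lap = div(grad(u))
```

The result is \(+4\) at the center and \(-1\) at its four neighbors, as described above.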

We will have more to say about this at a later stage, particularly about its connection with vertex colorings of lattices.

### Curl

The operator analogous to **curl** in vector calculus will have to wait until we have the machinery of path algebra to guide us. This is largely due to the fact that curl itself is just the tip of the iceberg, more generically handled using the idea of the **exterior calculus** (or, largely equivalently, **geometric algebra**). We will therefore leave this topic aside for now, although we'll see hints of it below.

## Integration

We've covered some basic territory in terms of finite-difference calculus. What about integration?

The starting point here is the notion of the **path integral** of an edge field \(\elemOf{\edgeField{h}}{\functionSpace{\edges{}}{\baseField{K}}}\) along a path \(\path{P}\), written \(\pathIntegral{\path{P}}{\edgeField{h}}\).

Let's visualize this idea for an arbitrary \(\edgeField{h}\) and \(\path{P}\):

We first "split" \(\path{P}\) into its constituent edges (each edge having weight one), and then multiply them edgewise with \(\edgeField{h}\):

The summed weight of this product vector is the result of the integral, in this case, the sum of three negative weights and one positive weight.

We can write this succinctly in the following way:

\[ \pathIntegral{\path{P}}{\edgeField{h}} = \pathDot{\split(\path{P})}{\edgeField{h}} \]

Here, the **dot product** or **inner product** between two edge fields is the map \(\functionSignature{\function{\pathDotSymbol{}}}{\tuple{\functionSpace{\edges{}}{\baseField{K}},\functionSpace{\edges{}}{\baseField{K}}}}{\baseField{K}}\) defined to be the sum of products of their edge weights:

### Fundamental theorem of discrete calculus

We can now make a rather trivial observation about the gradient edge field of a vertex field \(\vertexField{u}\). Let's calculate the result of a path integral on \(\gradOf{\vertexField{u}}\):

\[ \begin{aligned} \pathIntegral{\path{P}}{\paren{\gradOf{\vertexField{u}}}}&= \pathDot{\split(\path{P})}{\paren{\gradOf{\vertexField{u}}}}\\ &= \indexSum{\suchThat{\function{\paren{\gradOf{\vertexField{u}}}}(\edge{e})}{\elemOf{\edge{e}}{\path{P}}}}{\edge{e}}{}\\ &= \indexSum{\suchThat{\vertexField{u}(\vert{y}) - \vertexField{u}(\vert{x})}{\elemOf{\de{\vert{x}}{\vert{y}}}{\path{P}}}}{\vert{x},\vert{y}}{}\end{aligned} \]

In other words, the integral is the sum of differences of values of \(\vertexField{u}\) along the edges of the path \(\path{P}\). But these will cancel in pairs, since:

\[ \support(\path{P}) = \list{\de{\vert{x_1}}{\vert{x_2}},\de{\vert{x_2}}{\vert{x_3}},\ellipsis ,\de{\vert{x_{\sym{n} - 1}}}{\vert{x_{\sym{n}}}}} \]

And hence the integral is determined by the difference between the value of \(\vertexField{u}\) at the head and tail of \(\path{P}\).

\[ \pathIntegral{\path{P}}{\paren{\gradOf{\vertexField{u}}}} = \vertexField{u}(\headVertex(\path{P})) - \vertexField{u}(\tailVertex(\path{P})) \]

This rather banal calculation amounts to a discrete version of the **fundamental theorem of calculus** for a function of one variable. Note the important consequence that if \(\path{P}\) is a closed path, then by definition \(\headVertex(\path{P}) = \tailVertex(\path{P})\), and the integral is zero.
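The telescoping argument is easy to verify in code. The hypothetical sketch below integrates \(\gradOf{\vertexField{u}}\) along an open path and around a closed square on a small grid, with a path stored as a list of `(tail, head)` edges and each traversed edge contributing the head value minus the tail value:

```python
# Hypothetical sketch of the discrete fundamental theorem of calculus:
# integrating grad(u) along a path gives u at the path head minus u at
# the path tail, and hence zero around any closed path.

def path_integral(path_edges, u):
    """Sum the grad(u) contributions u(head) - u(tail) along the path."""
    return sum(u[h] - u[t] for (t, h) in path_edges)

# an arbitrary vertex field on a 3x3 grid
u = {(i, j): 3 * i - j for i in range(3) for j in range(3)}

# an open path from (0,0) to (2,1)
open_path = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (2, 1))]
# a closed path around one square face
closed_path = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
               ((1, 1), (0, 1)), ((0, 1), (0, 0))]
```

The interior contributions cancel in pairs, leaving only the endpoint values, exactly as in the derivation above.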

You may have noticed that we have played a little fast and loose here: in particular, the operation \(\split\) seemed to produce an edge field that didn't have the customary double arrowheads, and this feature was reflected in its product with the integrand edge field. There is good reason for this, and to understand what is going on we will have to develop path algebra. This will then allow us to formulate an appropriate discrete version of the **generalized Stokes theorem**, an important milestone in "multicardinal calculus".

# Path algebra

## Introduction

We are ready at this point to understand path calculus through the lens of **path algebra**. This will allow us to generalize the finite difference approach to operate on objects more general than vertex and edge fields, and on quivers whose more complex cardinal structure manifests the phenomenon of *discrete curvature*.

We previously defined \(\functionSpace{\vertices{}}{\baseField{K}}\), the space of **vertex fields**, to be the space of functions from the set of vertices to some field \(K\). These form a vector space, and in fact a ring under pointwise multiplication. The basis of this space is of course the set of vertices.

We also defined \(\functionSpace{\edges{}}{\baseField{K}}\), the space of **edge fields**, similarly.

We now unify and extend these definitions via path algebra, by expressing vertices and edges as elements of a larger space: the set of finite paths.

## Path vectors

First, we notice that the set of vertices can be identified with the set of empty paths: to each vertex \(\vert{v}\) we associate the empty path \(\identityElement{\vert{v}} = \paren{\pathWord{\vert{v}}{\emptyWord{}}{\vert{v}}}\).

What about edge fields? We could identify an edge \(\elemOf{\edge{e}}{\edges{}} = \tde{\vert{x}}{\vert{y}}{\card{c}}\) with the 1-path \(\paren{\pathWord{\vert{x}}{\word{\card{c}}}{\vert{y}}}\). But we must take care of orientation: edge fields are antisymmetric, in that for edge field \(\elemOf{\edgeField{h}}{\functionSpace{\edges{}}{\baseField{K}}}\), we have \(\edgeField{h}(\inverted{\edge{e}}) = \minus{\edgeField{h}(\edge{e})}\). Therefore we identify the edge \(\edge{e}\) with the formal sum of paths \(\paren{\pathWord{\vert{x}}{\word{\card{c}}}{\vert{y}}} - \paren{\pathWord{\vert{y}}{\word{\inverted{\card{c}}}}{\vert{x}}}\).

Hence we have identified the basis elements for vertex and edge fields as basis elements of a larger vector space: the vector space of all *finite-length paths.*

A vector of this enlarged **path vector space**, written \(\pathVectorSpace{P}\), is a **path vector**, written \(\elemOf{\pathVector{p}}{\pathVectorSpace{P}}\).

A path vector \(\pathVector{p}\) is a \(\baseField{K}\)-linear combination of paths, which we will call **basis paths**, since they form the basis of the vector space \(\pathVectorSpace{P}\). We can write this sum formally as \(\pathVector{p} = \indexSum{\basisPathWeight{p}{i} \, \basisPath{p}{i}}{i}{}\), where \(\basisPath{p}{i}\) is a basis path and \(\basisPathWeight{p}{i}\) is the \(\baseField{K}\)-valued coefficient of the basis path in the sum, called a **weight**.

The set of \(\basisPath{p}{i}\) for which \(\basisPathWeight{p}{i} \neq 0\) will be called the **support** of the vector \(\pathVector{p}\).
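Concretely, we might model a path vector (hypothetically) as a dict from basis paths to weights, with missing keys meaning weight zero:

```python
# Hypothetical sketch: path vectors as dicts mapping basis paths (here just
# opaque hashable labels) to K-valued weights; absent keys mean weight 0.

def vadd(p, q):
    """K-linear sum of two path vectors."""
    out = dict(p)
    for b, w in q.items():
        out[b] = out.get(b, 0) + w
    return out

def vscale(k, p):
    """Scalar multiplication of a path vector."""
    return {b: k * w for b, w in p.items()}

def support(p):
    """The set of basis paths with nonzero weight."""
    return {b for b, w in p.items() if w != 0}

p = {"path1": 2, "path2": -1}
q = {"path2": 1, "path3": 5}
```

Note that adding two vectors can shrink the support, since weights may cancel to zero.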

### Visualization

We can visualize such path vectors in a familiar way: we superimpose the paths \(\basisPath{p}{i}\) in a single diagram, with the color indicating the weight \(\basisPathWeight{p}{i}\) in the linear combination. Basis paths with coefficient zero (basis paths outside the support) are of course not drawn at all. For example:

We'll note that this diagrammatic scheme is fully consistent with the way we have chosen to visualize vertex and edge fields in the previous section.

### Unit vertex field

The vertex fields \(\functionSpace{\vertices{}}{\baseField{K}}\) form a vector subspace of \(\pathVectorSpace{P}\), of course, closed under addition, multiplication, and scalar multiplication. We can identify a special vector in this subspace: \(\unitVertexField\), the **unit vertex field**, with a weight 1 for every empty path:

### Cardinal edge fields

While the vertex fields are the simplest interesting subspace of \(\pathVectorSpace{P}\), there is another natural subspace: the subspace of length-1 paths. Particular vectors in this subspace stand out: those associated with a particular cardinal. The sum of all edges (that is to say, length-1 paths) labeled with a given cardinal \(\card{c}\) is called the **forward cardinal edge field** of \(\card{c}\), defined by:

To explain this notation a bit: the summand \(\paren{\pathWord{\vert{t}}{\word{\card{c}}}{\vert{h}}}\) is dropped if it is not a valid path, in other words, if \(\tde{\vert{t}}{\vert{h}}{\card{c}}\) is not an edge of the quiver.

Here we show the two forward cardinal edge fields for the square lattice:

We can equally well construct edge fields from the inverses of these cardinals, giving the **backward cardinal edge fields:**

The forward and backward cardinal edge fields are related via \(\wordVector{\word{\card{c}}}{\backwardSymbol } = \pathReverse{\paren{\wordVector{\word{\card{c}}}{\forwardSymbol }}}\), where \(\pathReverse{}\) indicates the operation of **path reversal**, in which we simply reverse each basis path of the vector.

A final class of cardinal edge fields are the **symmetric** and **antisymmetric cardinal edge fields**:

Visualized for the square lattice:

We adopt a slight tweak to the visualization here: when a path and its reversal are *both* present, we combine them into one double-headed arrow, shading each arrowhead to indicate the weight in that direction. This reveals that the edge fields we considered in the "Path calculus" section were in fact just the antisymmetric 1-path vectors.

## Word vectors

The cardinal edge fields are themselves special cases of a more general kind of path vector called a **word vector**: the path vector consisting of the sum of all paths whose path word is \(\wordSymbol{W}\):

 | definition |
---|---|
forward word vector | \(\wordVector{\wordSymbol{W}}{\forwardSymbol }\defEqualSymbol \indexSum{\suchThat{\basisPath{p}{\sym{i}}}{\wordOf(\basisPath{p}{\sym{i}}) = \wordSymbol{W}}}{\sym{i}}{}\) |
backward word vector | \(\wordVector{\wordSymbol{W}}{\backwardSymbol }\defEqualSymbol \wordVector{\inverted{\wordSymbol{W}}}{\forwardSymbol } = \pathReverse{\paren{\wordVector{\wordSymbol{W}}{\forwardSymbol }}}\) |
symmetric word vector | \(\wordVector{\wordSymbol{W}}{\symmetricSymbol }\defEqualSymbol \wordVector{\wordSymbol{W}}{\forwardSymbol } + \wordVector{\wordSymbol{W}}{\backwardSymbol }\) |
antisymmetric word vector | \(\wordVector{\wordSymbol{W}}{\antisymmetricSymbol }\defEqualSymbol \wordVector{\wordSymbol{W}}{\forwardSymbol } - \wordVector{\wordSymbol{W}}{\backwardSymbol }\) |

As you’d expect, if the word \(\wordSymbol{W}\) consists of a single cardinal, we obtain the cardinal edge fields defined earlier.

Here are the word vectors corresponding to the word \(\word{\rbform{\card{x}}}{\gbform{\inverted{\card{y}}}}\):

As a special case, we can consider the empty word vectors, which we write with an invisible subscript. These yield the sum of empty paths, in other words, the unit vertex field \(\unitVertexField\):

#### Grades

We can organize this situation via the notion of a **grade**. The grade of a basis path \(\elemOf{\pathVector{p}}{\pathVectorSpace{P}}\) is simply the length of the path, which we’ll write \(\grade(\pathVector{p})\). The grade of a vector \(\pathVector{p} = \indexSum{\basisPathWeight{p}{i} \, \basisPath{p}{i}}{i}{}\) is the single common grade of its support, if there is such a common value. If the support of \(\pathVector{p}\) contains basis paths of different grades, we’ll say that \(\pathVector{p}\) has **mixed grade**. As a special case, we consider the zero path vector \(\pathVector{0}\) to be of *undefined* grade.

Since empty paths have length 0, we can now organize the situation as follows:

 | notation | description |
---|---|---|
vertex fields | \(\pathVectorSpace{P}_0\) | path vectors of grade 0 |
edge fields | \(\pathVectorSpace{P}_1\) | path vectors of grade 1 |
\(n\)-path fields | \(\pathVectorSpace{P}_{\sym{n}}\) | path vectors of grade \(\sym{n}\) |

Then we have the general fact that \(\pathVectorSpace{P}_{\sym{n}}\) is a subspace of \(\pathVectorSpace{P}\), closed under addition and scaling.

## Linear and bilinear operations

What is the path algebra \(\pathVectorSpace{P}\,\)*good for?* In other words what can we do with it, or rather, what can we do *in it?*

We introduce several operations, though we have seen aspects of these before:

operation | notation |
---|---|
translation of \(\pathVector{p}\) along \(\pathVector{t}\) | \(\pathTranslate{\pathVector{t}}{\pathVector{p}}\) and \(\pathBackwardTranslate{\pathVector{t}}{\pathVector{p}}\) |
composition of \(\pathVector{p}\) with \(\pathVector{q}\) | \(\pathCompose{\pathVector{p}}{\pathVector{q}}\) |
reversal of \(\pathVector{p}\) | \(\pathReverse{\pathVector{p}}\) |
head and tail of \(\pathVector{p}\) | \(\pathHeadVector{\pathVector{p}}\) and \(\pathTailVector{\pathVector{p}}\) |

To define these operations formally, we need only specify what they do on the set of basis paths \(\compactBasis{\pathVectorSpace{P}}\defEqualSymbol \basis(\pathVectorSpace{P})\). For binary operations like \(\translateSymbol\), \(\backwardTranslateSymbol\), and \(\pathComposeSymbol{}\), this means defining their behavior on pairs of basis paths. For unary operations like \(\pathReverse{}\), \(\pathHeadVector{\path{}}\), and \(\pathTailVector{\path{}}\), this means defining their behavior on individual basis paths. We’ll illustrate this process with specific examples, but for now, let’s define the linear-algebraic foundation of this approach.

For a unary operation \(\function{f}\) defined on basis paths \(\functionSignature{\function{f}}{\compactBasis{\pathVectorSpace{P}}}{\compactBasis{\pathVectorSpace{P}}}\), we define \(\boldForm{\function{f}}\), the “vectorial” version of \(\function{f}\), by linearity:

\[ \function{\boldForm{\function{f}}}(\pathVector{p})\defEqualSymbol \indexSum{\basisPathWeight{p}{\sym{i}} \, \function{f}(\basisPath{p}{\sym{i}})}{\sym{i}}{} \]

If \(\function{f}\) is a **partial function** not defined for certain basis paths \(\basisPath{p}{\sym{i}}\), we define \(\function{f}(\basisPath{p}{\sym{i}}) = \pathVector{0}\), so we can simply drop such terms from the sum. This exact same construction will work if \(\function{f}\) outputs linear combinations of basis paths, rather than single basis paths, in other words, for \(\functionSignature{\function{f}}{\compactBasis{\pathVectorSpace{P}}}{\pathVectorSpace{P}}\).

Similarly, for a binary operation \(\function{g}\) defined on basis paths \(\functionSignature{\function{g}}{\tuple{\compactBasis{\pathVectorSpace{P}},\compactBasis{\pathVectorSpace{P}}}}{\compactBasis{\pathVectorSpace{P}}}\), we define vectorial \(\boldForm{\function{g}}\) by **bilinearity**:

Again, if \(\function{g}\) is a partial function that is not defined for certain pairs \(\tuple{\basisPath{p}{\sym{i}},\basisPath{q}{\sym{j}}}\), we define \(\function{g}(\basisPath{p}{\sym{i}},\basisPath{q}{\sym{j}}) = \pathVector{0}\). Likewise, this definition works perfectly well if \(\function{g}\) outputs linear combinations rather than basis elements, in other words, for \(\functionSignature{\function{g}}{\tuple{\compactBasis{\pathVectorSpace{P}},\compactBasis{\pathVectorSpace{P}}}}{\pathVectorSpace{P}}\).
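In code, this lifting is a generic construction. The hypothetical sketch below extends a partial binary operation on basis paths to path vectors by bilinearity, using `None` to signal an undefined result (which contributes the zero vector); the toy basis paths here are just `(tail, head)` pairs:

```python
# Hypothetical sketch: extending a partial binary operation g on basis paths
# to a bilinear operation on path vectors (dicts of basis path -> weight).
# g returns None where undefined, and such terms are simply dropped.

def bilinear(g):
    def G(p, q):
        out = {}
        for bp, wp in p.items():
            for bq, wq in q.items():
                b = g(bp, bq)
                if b is None:          # partial: undefined pairs give 0
                    continue
                out[b] = out.get(b, 0) + wp * wq
        return out
    return G

# toy example: composition of paths abbreviated to (tail, head) pairs,
# defined only when the first path's head meets the second path's tail
def compose_basis(p, q):
    (t1, h1), (t2, h2) = p, q
    return (t1, h2) if h1 == t2 else None

compose = bilinear(compose_basis)
p = {(0, 1): 2}
q = {(1, 2): 3, (5, 6): 1}
```

The weight of each surviving output basis path is the product of the input weights, as bilinearity requires.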

### Translation

For a lattice quiver, the translation operation \(\pathTranslate{\pathVector{t}}{\pathVector{p}}\), or “\(\pathVector{p}\) translated along \(\pathVector{t}\)”, is defined on basis paths as follows:

\[ \constructor{\pathTranslate{\path{t}}{\path{p}} = \paren{\pathWord{\vert{y}}{\wordSymbol{P}}{\blank}}}{\begin{array}{c} \path{t} = \paren{\pathWord{\vert{x}}{\blank}{\vert{y}}}\\ \path{p} = \paren{\pathWord{\vert{x}}{\wordSymbol{P}}{\blank}} \end{array} } \]

Here, \(\_\) is the symbol for *some* vertex or cardinal, whose value is irrelevant or implied.

Put in words, \(\pathTranslate{\path{t}}{\path{p}}\) is the path that *starts* where \(\path{t}\,\)*ends*, and has the same path word \(\wordSymbol{P}\) as \(\path{p}\). To be defined, \(\path{p}\) must be “tail-to-tail” with \(\path{t}\).

Some examples on basis paths:

The operation of translation will become considerably more subtle when we move beyond perfect lattice quivers and consider the phenomenon of cardinal transport – more on that later.

### Composition

The **composition** operation \(\pathCompose{\pathVector{p}}{\pathVector{q}}\), or “\(\pathVector{p}\) composed with \(\pathVector{q}\)”, is defined on basis paths as follows:

This is just the ordinary path composition we have considered before: we can compose \(\path{p}\) with \(\path{q}\) if \(\path{p}\) is “head-to-tail” with \(\path{q}\). The result has the combined path word of \(\path{p}\) and \(\path{q}\).

Some examples on basis paths:

### Reversal

The **reversal** operation \(\pathReverse{\pathVector{p}}\) is defined on basis paths as follows:

Or more simply, if we define the inverse of a path word to be the inversion and reversal of its constituent cardinals:

\[ \constructor{\pathReverse{\path{p}} = \paren{\pathWord{\vert{y}}{\inverted{\wordSymbol{W}}}{\vert{x}}}}{\path{p} = \paren{\pathWord{\vert{x}}{\wordSymbol{W}}{\vert{y}}}} \]

### Head and tail

The **head** and **tail** are defined as:

The head and tail operations “collapse” a path to the empty path at its head or tail vertex.
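Gathering these definitions, here is a hypothetical sketch of the basis-path operations on the square lattice, with a basis path stored as a `(tail, word)` pair, words being tuples of the cardinals `'x'`, `'y'` or their inverses `'X'`, `'Y'`; `head` and `tail` return the collapse vertex, the corresponding empty path being `(v, ())`:

```python
# Hypothetical sketch: basis-path operations on the square lattice quiver.
# A basis path is (tail, word); None encodes an undefined (partial) result.

STEP = {'x': (1, 0), 'X': (-1, 0), 'y': (0, 1), 'Y': (0, -1)}
INV = {'x': 'X', 'X': 'x', 'y': 'Y', 'Y': 'y'}

def head(p):
    """Head vertex: walk the word from the tail."""
    (a, b), word = p
    for c in word:
        da, db = STEP[c]
        a, b = a + da, b + db
    return (a, b)

def tail(p):
    return p[0]

def translate(t, p):
    """p translated along t: defined when p is tail-to-tail with t."""
    return (head(t), p[1]) if tail(p) == tail(t) else None

def compose(p, q):
    """p composed with q: defined when p is head-to-tail with q."""
    return (tail(p), p[1] + q[1]) if head(p) == tail(q) else None

def reverse(p):
    """Reversal: start at the head, invert and reverse the word."""
    return (head(p), tuple(INV[c] for c in reversed(p[1])))

p = ((0, 0), ('x', 'y'))       # path from (0,0) via (1,0) to (1,1)
t = ((0, 0), ('y',))           # path from (0,0) to (0,1)
```

These basis-path operations are exactly what the vectorial versions extend by linearity and bilinearity.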

## Multiset interpretation

There is a simple interpretation of the *meaning* of path vectors and their linear operations when we take the base field \(K\) to be not a field but the semiring \(\mathbb{N}\): it is the algebra of multiset operations on paths. (Technically we have now passed to a module rather than a vector space). In this picture, a path vector is a *collection* (a multiset) of paths, and binary operations are applied *in every possible way* between the two collections being operated on, with the outputs again being placed in a collection. Some of these operations may “fail”, in which case they do not contribute to the resulting collection. The coefficient of a basis path in a vector merely counts its **multiplicity** in the collection: how many times that path occurs in the collection.

A slightly more flexible interpretation can also be used when the base field \(K\) is the ring \(\mathbb{Z}\): we allow paths in the collection to "cancel" when they occur with opposite "sign". At a stretch, we can even extend this interpretation to the field \(\mathbb{Q}\), where we can imagine sets in which multiplicities of individual basis paths are so large that we are justified in reasoning only about their multiplicity relative to some large absolute count \(N\), neglecting the “shot noise” that occurs for small but non-zero multiplicities.

These interpretations can be a helpful guide to concrete thinking about path vectors, but of course they become nonsensical for fields like \(\mathbb{R}\) or \(\mathbb{C}\), although later we will advance a possible interpretation of the "meaning" of \(\mathbb{C}\) (and its \(\mathbb{Z}\)-analog, the Gaussian integers).

## Covariant differences

Equipped with these new algebraic operations, we can generalize our previous forward, backward, and central finite differences in a straightforward way to define the **covariant difference** of a path vector \(Q\) along a path vector \(P\):

 | definition |
---|---|
forward covariant difference | \(\Delta _P^+ Q=(P_ \bullet -P ) \uarr Q\) |
backward covariant difference | \(\Delta _P^- Q=(P^ \dag -P^ \bullet ) \uarr Q\) |
central covariant difference | \(\Delta _P Q=(P^ \dag -P ) \uarr Q=( \Delta _P^+ + \Delta _P^- )Q\) |

To avoid typesetting clutter, we’ll sometimes write \(\Delta _P Q\) as \(P \Delta Q\), treating \(\Delta\) as a binary operation.

The motivation for the term *covariant difference* will become clearer when we consider curvature, but for now, the important aspect of these new differences is that they can operate on arbitrary path vectors, not just vertex fields.
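To make this concrete, the hypothetical sketch below assembles the pieces: path reversal and translation, extended (bi)linearly to path vectors, and the central covariant difference \(\Delta _P Q=(P^ \dag -P ) \uarr Q\). Taking \(P\) to be the forward cardinal edge field of \(x\) on a cyclic square lattice (cyclic so that translation never falls off an edge) recovers the central finite difference of a vertex field:

```python
# Hypothetical sketch: the central covariant difference along the forward
# cardinal edge field of x reproduces the central finite difference of a
# vertex field, on a cyclic N x N square lattice. Basis paths are
# (tail, word) pairs; path vectors are dicts of basis path -> weight.

N = 6
STEP = {'x': (1, 0), 'X': (-1, 0), 'y': (0, 1), 'Y': (0, -1)}
INV = {'x': 'X', 'X': 'x', 'y': 'Y', 'Y': 'y'}
verts = [(i, j) for i in range(N) for j in range(N)]

def head(p):
    """Head vertex of a basis path, walking the word cyclically."""
    (a, b), word = p
    for c in word:
        da, db = STEP[c]
        a, b = (a + da) % N, (b + db) % N
    return (a, b)

def reverse_vec(P):
    """Path reversal, extended linearly to path vectors."""
    out = {}
    for (t, w), k in P.items():
        b = (head((t, w)), tuple(INV[c] for c in reversed(w)))
        out[b] = out.get(b, 0) + k
    return out

def sub_vec(P, Q):
    out = dict(P)
    for b, k in Q.items():
        out[b] = out.get(b, 0) - k
    return out

def translate_vec(T, Q):
    """Bilinear extension of basis translation: defined when the basis
    paths are tail-to-tail, re-rooting Q's word at the head of T's path."""
    out = {}
    for (t, wt), kt in T.items():
        for (tq, wq), kq in Q.items():
            if tq == t:
                b = (head((t, wt)), wq)
                out[b] = out.get(b, 0) + kt * kq
    return out

def central_cov_diff(P, Q):
    """Central covariant difference: (reverse(P) - P) translated onto Q."""
    return translate_vec(sub_vec(reverse_vec(P), P), Q)

# the forward cardinal edge field of x: one length-1 basis path per vertex
ex = {(v, ('x',)): 1 for v in verts}
# an arbitrary vertex field, as a path vector supported on empty paths
u_vals = {(i, j): (i * i + 2 * j) % 7 for (i, j) in verts}
u = {(v, ()): u_vals[v] for v in verts}

Du = central_cov_diff(ex, u)
```

At every vertex \(v\) the result is \(u(v + \card{x}) - u(v - \card{x})\), mirroring the identity \(\centralDifference{}\,\vertexField{u} = \pathBackwardTranslate{}{\vertexField{u}} - \pathTranslate{}{\vertexField{u}}\) from the path calculus section.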

Let’s see how this works in practice. We’ll construct the forward cardinal edge field for cardinal \(x\) and compute the various covariant differences of an example vertex field \(u\).

The simplest case is the central covariant difference along the forward edge field of \(x\). Previously, we called this \(\Delta _x\); in our new notation it is written \(e_x^f \Delta\):

The full set of combinations is shown below, with rows corresponding to the forward, backward, symmetric, and antisymmetric edge fields of \(x\), and columns corresponding to the central, forward, and backward covariant differences:

We can see some identities manifested in this table – to point them out we’ll refer to elements of this table as \((row,column)\).

That \((1,3)=-(2,4)\) and \((1,4)=-(2,3)\) is due to the identity \(P \Delta ^ \pm =-P^ \dag \Delta ^ \mp\), which follows directly from the definition. That \((1,2)=-(2,2)\) is due to the identity \(P \Delta =-P^ \dag \Delta\), again easily verified from the definition. That \((3,3)=-(3,4)\) is due to the first identity again, giving \(S \Delta ^+=-S \Delta ^-\) for symmetric path vectors (those satisfying \(S=S^ \dag\)). That \((3,2)=0\) follows from the previous fact, since \(\Delta = \Delta ^++ \Delta ^-\).

#### Differences of edge fields

What happens when we compute the covariant difference of an *edge field* rather than a vertex field? For perfect lattice quivers, the behavior is in some sense identical:

The action of the difference \(e_x^f \Delta\) – or in fact of an arbitrary difference \(\Delta _P\) – on a path vector \(Q\) is determined by the action of \(\Delta _P\) on the tail vertices of \(Q\).

We can formalize this in the following statement: \(( \Delta _PQ)_ \bullet = \Delta _P(Q_ \bullet )\).

This is not to say that \(\Delta _PQ\) can be *recovered* from \(( \Delta _PQ)_ \bullet\), as the following example demonstrates:

Since \(p\) consists of two 1-paths that have the same tail but opposite weight, the tail vector \(p_ \bullet\) is zero.
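This cancellation is simple to model concretely. In the hypothetical sketch below, a path vector of 1-paths is a dict from \((tail, head)\) pairs to weights, and the tail vector sums weights over tails:

```python
# A path vector of 1-paths as {(tail, head): weight}; its tail vector
# accumulates the weights at each tail vertex, dropping zeros.
def tail_vector(p):
    totals = {}
    for (tail, head), weight in p.items():
        totals[tail] = totals.get(tail, 0) + weight
    return {v: w for v, w in totals.items() if w != 0}

# two 1-paths with the same tail but opposite weights: tail vector vanishes
p = {(0, 1): 1, (0, -1): -1}
assert tail_vector(p) == {}
```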

#### Differences and grades

We’ve seen the covariant differences taken along 1-paths – in other words, elements of \(\mathscr{P} _1\) – but what about other grades?

Since vertex fields (elements of \(\mathscr{P} _0\)) have the property that \(P=P^ \dag =P^ \bullet =P_ \bullet\), the differences \((P_ \bullet -P )\) etc. used in the definition of covariant differences are identically zero, meaning the result of these covariant differences will also always be zero. We can verify this on a simple example:

# Cardinal transport

## Motivation

An important task for quiver geometry is to describe the phenomenon of **curvature**. Curvature is a well-defined if somewhat intricate concept in continuous geometry – what form could it possibly take in quiver geometry? It turns out to be connected to the presence of what we could call **topological crystal defects** in a quiver. We'll leave this idea mysterious for now, but we will return to it in time. The task of this section is to explore a simple example of a curved quiver, using a well-worn object: the **Möbius strip**. Doing this will make vivid the bookkeeping we will need to model arbitrary forms of curvature in more complex cases.

Note: this section requires substantial rework. See [[[here:Summary and roadmap#Curvature]]] for more information.

## The Möbius strip

First we’ll introduce the bare connectivity / topology of the strip, without any **cardinal structure**. As we did for toroidal lattices, the edges that “wrap around” on the sides of the strip are labeled to show how they connect up:

The famous twist of the Möbius strip is captured by the fact that the top right vertex is adjacent to the bottom left vertex via the edge labeled 3, and the top left vertex is adjacent to the bottom right vertex via the edge labeled 1.

## Charts

We’ll now introduce the cardinal structure. There will be 4 cardinals present, though we will soon organize them into collections called **charts**.

The cardinal \(\card{x}\) moves “rightward” on the strip and is present on every vertex, but the \(\rform{\card{r}}\), \(\gform{\card{g}}\), and \(\bform{\card{b}}\) cardinals are defined only on portions of the strip. Notice the important fact that this quiver looks *locally* like a **square lattice** quiver: the left half of the strip obeys the path relations \(\word{\card{x}}{\rform{\card{r}}}\pathIso \word{\rform{\card{r}}}{\card{x}}\), similarly \(\word{\card{x}}{\gform{\card{g}}}\pathIso \word{\gform{\card{g}}}{\card{x}}\) and \(\word{\card{x}}{\bform{\card{b}}}\pathIso \word{\bform{\card{b}}}{\card{x}}\) where they are defined.

Note the twist of the Möbius strip is *also* manifested by the fact that the cardinal \(\bform{\card{b}}\) reverses *apparent* orientation when passing from the right side of the lattice to the left. There is nothing special about this location in the lattice; this discontinuity exists purely because the way we are drawing the lattice doesn’t match the topology of the lattice itself.

Our three charts will be \(\textcolor{#b50700}{\chart{rx}} = \list{\rform{\card{r}},\card{x}}\), \(\textcolor{#217f00}{\chart{gx}} = \list{\gform{\card{g}},\card{x}}\), \(\textcolor{#165e9d}{\chart{bx}} = \list{\bform{\card{b}},\card{x}}\), with their corresponding subgraphs visualized below:

Again, notice the important fact that *within* each chart we have the ordinary path relations of a square quiver \(\subSize{\squareQuiver }{ \infty }\): \(\word{\card{x}}{\rform{\card{r}}}\pathIso \word{\rform{\card{r}}}{\card{x}}\) for \(\textcolor{#b50700}{\chart{rx}}\), \(\word{\card{x}}{\gform{\card{g}}}\pathIso \word{\gform{\card{g}}}{\card{x}}\) for \(\textcolor{#217f00}{\chart{gx}}\), and \(\word{\card{x}}{\bform{\card{b}}}\pathIso \word{\bform{\card{b}}}{\card{x}}\) for \(\textcolor{#165e9d}{\chart{bx}}\).
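The practical content of these square-lattice relations is that cardinals can be freely reordered within a chart. A toy model (hypothetical string encoding, with a trailing `-` marking an inverted cardinal) makes this concrete: up to the relations, a path word within a chart is characterized by its signed count of each cardinal:

```python
from collections import Counter

# Within a square-lattice chart, x.r ~ r.x, so a path word is determined
# up to the path relations by how many net steps it takes along each
# cardinal; "x-" denotes the inverse of "x".
def normal_form(word):
    counts = Counter()
    for card in word:
        if card.endswith("-"):
            counts[card[:-1]] -= 1
        else:
            counts[card] += 1
    return {c: n for c, n in counts.items() if n != 0}

assert normal_form(["x", "r"]) == normal_form(["r", "x"])  # commutation
assert normal_form(["x", "r", "x-"]) == {"r": 1}           # cancellation
```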

## Transport atlas

We’ll now focus on the regions where the charts intersect, since this will tell us how the cardinals in each chart *relate* to one another.

The relations are shown below each intersection. For example, the final relation \(\word{\bform{\card{b}}}\pathIso \word{\rform{\inverted{\card{r}}}}\) represents that for any path in \(\textcolor{#165e9d}{\chart{bx}}\graphRegionIntersectionSymbol \textcolor{#b50700}{\chart{rx}}\), we can rewrite the path word by replacing any \(\bform{\card{b}}\) with \(\rform{\inverted{\card{r}}}\) (and vice versa), and the path will remain unchanged.

We can summarize these relations in a **transport atlas** for \(\quiver{Q}\), denoted \(\transportAtlas{\quiver{Q}}\):

The vertices of \(\transportAtlas{\quiver{Q}}\) are charts of \(\quiver{Q}\), and the edges of \(\transportAtlas{\quiver{Q}}\) are transitions between charts, labeled with the cardinals from \(\quiver{Q}\) that can achieve them. Additionally, edges of \(\transportAtlas{\quiver{Q}}\) are labeled with **cardinal rewrites**, which describe how to rewrite cardinals of the tail chart to become cardinals of the head chart when making a transition between two charts. Notice that these are oriented versions of the path relations \(\word{\rform{\card{r}}}\pathIso \word{\gform{\card{g}}},\word{\gform{\card{g}}}\pathIso \word{\bform{\card{b}}},\word{\bform{\card{b}}}\pathIso \word{\rform{\inverted{\card{r}}}}\) we saw above.

## Cardinal transport

Now, how do we interpret the non-orientability of the Möbius strip in this discrete setting? To do this we will now introduce the discrete analog of the notion of **parallel transport** from differential geometry. Parallel transport, which in differential geometry controls how tangent vectors are smoothly transformed as we move smoothly in the base manifold, becomes in the setting of quiver geometry **cardinal transport**: a process by which we can translate between cardinals in different charts as we follow a path in the quiver. Unlike in the continuous case, this transport is *discrete*: cardinals are replaced wholesale with other cardinals as described by the **transport atlas**.

Importantly, transport is associated with an entire *path* on the base quiver \(\quiver{Q}\), which is associated also with a path on \(\transportAtlas{\quiver{Q}}\). Let's examine an example for the Möbius strip.

We’ll start with an *empty* path, shown alongside its depiction on the transport atlas (which we’ll call an **atlas path**). We will gradually extend this path (colored teal), transporting the particular cardinal \(\rform{\card{r}}\) as we go. We’ll illustrate how the cardinal is rewritten as we transition between charts in the atlas. To do this, we’ll superimpose a second, smaller path at the head of the first path to illustrate what it means to take the *transported version* of \(\rform{\card{r}}\).

Here is the initially empty path \(\path{P_0} = \paren{\pathWord{\textcolor{#b50700}{\chart{rx}}}{\emptyWord{}}{\textcolor{#b50700}{\chart{rx}}}}\), along with the second path illustrating the direction of the (untransported) cardinal \(\rform{\card{r}}\):

We’ll now extend \(\path{P_0}\) by \(\repeatedPower{\card{x}}{2}\) to obtain \(\path{P_1} = \paren{\pathWord{\textcolor{#b50700}{\chart{rx}}}{\repeatedPower{\word{\card{x}}}{2}}{\textcolor{#217f00}{\chart{gx}}}}\).

Notice that the **cardinal rewrite**, \(\cardinalRewrite{\rform{\card{r}}}{\gform{\card{g}}}\), that occurred corresponds to the edge we traversed in the cardinal atlas.

Again we will extend \(\path{P_1}\) by \(\repeatedPower{\card{x}}{2}\) to obtain \(\path{P_2} = \paren{\pathWord{\textcolor{#b50700}{\chart{rx}}}{\repeatedPower{\word{\card{x}}}{4}}{\textcolor{#165e9d}{\chart{bx}}}}\).

And again we obtained another cardinal rewrite, this time \(\cardinalRewrite{\gform{\card{g}}}{\bform{\card{b}}}\).

Finally we can extend \(\path{P_2}\) to obtain \(\path{P_3} = \paren{\pathWord{\textcolor{#b50700}{\chart{rx}}}{\repeatedPower{\word{\card{x}}}{6}}{\textcolor{#b50700}{\chart{rx}}}}\):

The final cardinal rewrite obtained was \(\cardinalRewrite{\bform{\card{b}}}{\rform{\inverted{\card{r}}}}\).

Summarizing the rewrites that occurred, we see that transporting the cardinal \(\rform{\card{r}}\) around this closed path \(\path{P_3}\) yields the transported cardinal \(\rform{\inverted{\card{r}}}\), and hence the red path at the head is now oriented *against* the original direction of \(\rform{\card{r}}\). So, while we have a quiver formed by the union of square-lattice charts, there is a non-local form of *curvature* that prevents us from gluing these charts together into a larger chart describing the entire quiver.
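The chain of rewrites just described can be checked mechanically. In this hypothetical sketch, signed cardinals are strings with a trailing `-` marking inversion, and each rewrite is closed under inversion (\(\rewrite{\card{x}}{\card{y}}\) forces \(\rewrite{\inverted{\card{x}}}{\inverted{\card{y}}}\)) before being applied:

```python
# Signed cardinals as strings; "r-" denotes the inverse of "r".
def inv(c):
    return c[:-1] if c.endswith("-") else c + "-"

def extend(rewrite):
    """Close a rewrite under inversion: x -> y forces x- -> y-."""
    full = dict(rewrite)
    for a, b in rewrite.items():
        full[inv(a)] = inv(b)
    return full

# the three rewrites picked up while circling the Mobius strip once
steps = [extend({"r": "g"}), extend({"g": "b"}), extend({"b": "r-"})]

c = "r"
for step in steps:
    c = step[c]
assert c == "r-"       # one loop: r comes back inverted

for step in steps:     # a second loop restores the original orientation
    c = step[c]
assert c == "r"
```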

## Discrete curvature

We’ve seen how to define cardinal transport on a path in the quiver when we have a collection of charts, which we can organize into a transport atlas that summarizes the relations of their cardinals.

### Signed cardinals

Can we formalize the phenomenon of curvature we saw for the Möbius strip? To do this, we'll first introduce the notion of a **signed cardinal permutation group**, or a **cardinal group** for short, written \(\cardinalGroup{\sym{C}}\). This group is the group of permutations of a set of cardinals \(C\) and their inverses, with the restriction that if a permutation sends \(\rewrite{\card{x}}{\card{y}}\), it must also send \(\rewrite{\inverted{\card{x}}}{\inverted{\card{y}}}\).

For example, taking the cardinals \(\sym{C} = \list{\card{x},\rform{\card{r}}}\), the cardinal group \(\cardinalGroup{\sym{C}}\) is the subgroup of the symmetric group on the set of **signed cardinals** \(\signed{\sym{C}} = \list{\card{x},\inverted{\card{x}},\rform{\card{r}},\rform{\inverted{\card{r}}}}\) obeying the symmetry mentioned above.

We now introduce the function \(\signedCardinalList\) that yields such signed cardinal sets, so for example we have the aforementioned \(\signed{\sym{C}} = \signedCardinalList(\textcolor{#b50700}{\chart{rx}}) = \list{\card{x},\inverted{\card{x}},\rform{\card{r}},\rform{\inverted{\card{r}}}}\), and the set of signed cardinals of the entire quiver \(\signedCardinalList(\quiver{Q}) = \list{\card{x},\inverted{\card{x}},\rform{\card{r}},\rform{\inverted{\card{r}}},\gform{\card{g}},\gform{\inverted{\card{g}}},\bform{\card{b}},\bform{\inverted{\card{b}}}}\).

We can then refer to the cardinal group for the signed cardinals of \(\quiver{Q}\) as \(\cardinalGroup{\sym{\quiver{Q}}}\).

### Transport map

With these definitions out of the way, we can now define the **transport map**, written \(\transportMap{\path{P}}\), for a closed path \(\path{P}\) that begins (and ends) in a given chart \(\chartSymbol _{\sym{i}}\): it is the group element \(\elemOf{\transportMap{\path{P}}}{\cardinalGroup{\sym{\chartSymbol _{\sym{i}}}}}\) that sends cardinals to their transported forms under cardinal transport along \(\path{P}\): the cardinal \(\card{c}\) is sent to \(\transportMap{\path{P}}(\card{c})\).

We've already calculated \(\transportMap{\path{P_3}}(\rform{\card{r}})\) for the particular closed path \(\path{P_3}\) in the Möbius strip, and found it to be \(\transportMap{\path{P_3}}(\rform{\card{r}}) = \rform{\inverted{\card{r}}}\). And since \(\card{x}\) is shared by all charts, we know also that \(\transportMap{\path{P_3}}(\card{x}) = \card{x}\). Hence, \(\transportMap{\path{P_3}}\) is the permutation \(\rform{\card{r}} \leftrightarrow \rform{\inverted{\card{r}}}\), which is an element of \(\cardinalGroup{\sym{\textcolor{#b50700}{\chart{rx}}}}\).

But how can we generalize this definition to give us \(\transportMap{\path{P}}\) for *any* path \(\path{P}\) that starts in a given chart \(\chartSymbol _{\sym{i}}\)? It should work for paths that aren’t closed paths, in other words, paths that end on an *arbitrary* chart \(\chartSymbol _{\sym{j}}\). To do this, we might try to enlarge the cardinal group from \(\cardinalGroup{\sym{\textcolor{#b50700}{\chart{rx}}}}\) to \(\cardinalGroup{\sym{\quiver{Q}}}\).

For example, for the path \(\path{P_2} = \paren{\pathWord{\textcolor{#b50700}{\chart{rx}}}{\repeatedPower{\word{\card{x}}}{4}}{\textcolor{#165e9d}{\chart{bx}}}}\) we considered earlier, we might posit that \(\transportMap{\path{P_2}} = \cardinalRewrite{\rform{\card{r}}}{\bform{\card{b}}}\). But this isn't a permutation as such. What happens to \(\bform{\card{b}}\) and \(\gform{\card{g}}\)? These cardinals simply weren't present in the chart \(\textcolor{#b50700}{\chart{rx}}\), so they have no well-defined image under transport.

The way around this is quite simple: we will relax \(\cardinalGroup{\sym{\quiver{Q}}}\) from the status of group to the status of groupoid. What is this groupoid? It is the **action groupoid** of *partially defined permutations*, represented as rewrites. We can compose such partial permutations whenever they do not specify contradictory information.

For example, we cannot compose the partial permutations \(\cardinalRewrite{\rform{\card{r}}}{\bform{\card{b}}}\) and \(\cardinalRewrite{\gform{\card{g}}}{\bform{\card{b}}}\), since the resulting object could not be inverted. Similarly we cannot compose \(\cardinalRewrite{\rform{\card{r}}}{\gform{\card{g}}}\) and \(\cardinalRewrite{\rform{\card{r}}}{\bform{\card{b}}}\), since the image of \(\rform{\card{r}}\) would not be uniquely defined. But composing \(\cardinalRewrite{\rform{\card{r}}}{\gform{\card{g}}}\) with \(\cardinalRewrite{\gform{\card{g}}}{\bform{\card{b}}}\) would yield \(\cardinalRewrite{\rform{\card{r}}}{\bform{\card{b}}}\) as expected.
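One way to realize this composability condition in code (a hypothetical sketch, not the only possible formalization) is to chain partial permutations only when every image of the first is rewritten by the second, and to reject composites that fail to stay invertible:

```python
# A partial permutation as a dict; compose(p, q) means "apply p, then q".
def compose(p, q):
    # every cardinal p produces must be handled by q
    if not set(p.values()) <= set(q.keys()):
        raise ValueError("not composable: q does not cover the image of p")
    out = {a: q[p[a]] for a in p}
    # the composite must remain invertible (injective)
    if len(set(out.values())) != len(out):
        raise ValueError("not composable: composite is not invertible")
    return out

# r -> g followed by g -> b yields r -> b, as in the text
assert compose({"r": "g"}, {"g": "b"}) == {"r": "b"}

# the two failing examples from the text are rejected
for p, q in [({"r": "b"}, {"g": "b"}), ({"r": "g"}, {"r": "b"})]:
    try:
        compose(p, q)
        assert False, "expected composition to fail"
    except ValueError:
        pass
```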

You may notice that this condition for when it is meaningful to compose partially defined permutations is similar to the local uniqueness property of quivers themselves. We will elaborate this connection in a future section, where we will see that rewrite systems give rise to quivers in a natural way, and we'll see that the cardinal groupoid is a rich motivating example.

#### Curvature quiver

An observant reader might recognize that \(\transportMapSymbol{}\) seemed to exhibit some of the properties of a path valuation of the path quiver \(\forwardPathQuiver{\transportAtlas{}}{\vert{\chartSymbol _{\sym{i}}}}\), which is the path quiver of paths in the transport atlas \(\transportAtlas{\quiver{Q}}\) that begin in the chart \(\chartSymbol _{\sym{i}}\). Let’s proceed boldly to construct the **curvature quiver**, being the quotient \(\quotient{\forwardPathQuiver{\transportAtlas{}}{\vert{\chartSymbol _{\sym{i}}}}}{\transportMapSymbol{}}\), in which \(\chartSymbol _{\sym{i}}\) plays the role of the origin. For the Möbius strip, we'll choose \(\chartSymbol _{\sym{i}} = \textcolor{#b50700}{\chart{rx}}\).

First, notice the (non-quotiented) atlas path quiver \(\forwardPathQuiver{\transportAtlas{}}{\textcolor{#b50700}{\chart{rx}}}\) is a line lattice:

Let's now see the **curvature quiver**, which is the quotient \(\quotient{\forwardPathQuiver{\transportAtlas{}}{\textcolor{#b50700}{\chart{rx}}}}{\transportMapSymbol{}}\):

To interpret what we're seeing here: vertices of the curvature quiver are atlas paths, modulo their cardinal rewrites. In other words, all cardinal transport behavior, for a path of any length that starts in \(\textcolor{#b50700}{\chart{rx}}\), is summarized in this single quiver.

More specifically, we see that circling the strip once in either direction yields the same cardinal rewrite \(\cardinalRewrite{\rform{\card{r}}}{\rform{\inverted{\card{r}}}}\). Technically the equivalence classes in this quotient contain *infinitely many* paths, whose words are (positive or negative) powers of \(\repeatedPower{\card{x}}{6}\), since we can wind around the strip an arbitrary number of times. Only the first such winding in either direction is indicated in the diagram.

To summarize, we obtain the rewrite \(\cardinalRewrite{\rform{\card{r}}}{\rform{\card{r}}}\) if we circle the strip an even number of times, and \(\cardinalRewrite{\rform{\card{r}}}{\rform{\inverted{\card{r}}}}\) if we circle it an odd number of times, matching the familiar behavior of the Möbius strip: a tangent vector is inverted when transported around the strip an odd number of times.

This was a particularly simple example that serves well to illustrate the basic idea about cardinal transport. In the coming sections we will extend this construction to model curvature of the Platonic solids, which will be discrete stand-ins for the sphere.

# The cube

## Recap

In [[[Cardinal transport]]], we used the Möbius strip as a toy model to take the first steps towards defining curvature in the setting of quiver geometry. We defined the **transport atlas**, which measured the effects of **cardinal transport** when moving between **charts** of the strip. We saw a form of non-local curvature, corresponding to paths that circled the strip an integer number of times, and summarized this curvature in a **curvature quiver**.

In this section, we will repeat this procedure for a cardinal quiver corresponding to the most familiar of the Platonic solids, the **cube**.

Note: this section requires substantial rework. See [[[here:Summary and roadmap#Curvature]]] for more information.

## Cardinal structure

The underlying graph of a cube is very simple:

However, we will find it more intuitive to work with a subdivided cube, since it allows us to understand its behavior on the individual faces more clearly:

How do we attach a quiver structure to this skeleton? The cardinals we’ll use will correspond to the orbits of permutations of vertices that are associated with particular axes of symmetry that pass through the faces of the cube. These permutations do depend on a particular embedding of the graph into \(\mathbb{R}^3\), but once we have this cardinal structure we will immediately forget this embedding.

Putting these cardinals together we arrive at a cardinal quiver:

This is visually *busy*! We'll now organize these cardinals into **charts**.

## Charts

The charts we will use will consist of pairs of the cardinals \(\rform{\card{r}},\gform{\card{g}},\bform{\card{b}}\). We’ll pick the charts to correspond to the subgraphs on which those cardinals individually commute pairwise, in other words, in which the cardinal relations are those of \(\subSize{\squareQuiver }{3}\). These charts will correspond to *faces* of the cube. Notice that the charts split into pairs of *opposite faces*:

Here is the **transport atlas**, written \(\transportAtlas{\quiver{A}}\), plotted in 2 and 3 dimensions. Since the vertices correspond to faces of the cube, and edges to shared edges of the cube, the connectivity of this graph corresponds to the *dual polyhedron* of the cube, which is an *octahedron*:

There is a lot going on here, but don’t worry, we will unpack these diagrams!

### Cardinal transport via linear algebra

To help us perform cardinal transport calculations, we’ll introduce the notion of a **frame vector**. This is a gadget that we will use to understand how cardinals are transported as we form paths in the atlas. The frame vector contains 3 components to track the orientations of cardinals as we perform transport. To each individual cardinal we associate a basis vector: \(\hat{r}=\{1,0,0\},\hat{g}=\{0,1,0\},\hat{b}=\{0,0,1\}\), with inverses of cardinals being represented by negation of vectors: \(\hat{\underline{r}}=-\hat{r}=\{-1,0,0\},\hat{\underline{g}}=-\hat{g}=\{0,-1,0\},\hat{\underline{b}}=-\hat{b}=\{0,0,-1\}\). We can see these vectors as living in the vector space \(\mathbb{F} _3^3\), which is a 3-dimensional vector space over the finite field \(\mathbb{F} _3=\{-1,0,1\}\).

This allows us to represent cardinal transport between two compasses via linear maps from this vector space to itself, called **frame transformations**. For example, for the path \(P=\de{\textcolor{#dc841a}{C_{rg+}}}{\textcolor{#c74883}{C_{rb+}}}\), we have that the transport \(\tau _P\) is given by the parallel rewrite \(\tau _P=\{r \to r,g \to \underline{b}\}\). We’ll write the linear map for this path \(P\) as \(T_P\), giving us \(T_P(\hat{r})=\hat{r}\) and \(T_P(\hat{g})=-\hat{b}\). But where to send \(T_P(\hat{b})\), especially since there is no cardinal \(b\) present in the compass \(\textcolor{#dc841a}{C_{rg+}}\)? We’ll see that we “fill in” the behavior of this map (and all cardinal transport maps) by imposing some desirable properties on this machinery of cardinal transport maps.

We first notice that these maps *must* compose in a natural way: if we have two composable paths \(\path{P}\) and \(\path{Q}\) in the atlas, then the orientation of a given cardinal after transport along \(\pathCompose{\path{P}}{\path{Q}}\) must clearly be the same as its orientation after transport along \(\path{P}\) multiplied by its orientation after transport along \(\path{Q}\). Algebraically, we require \(T_{P \cdot Q}=T_P \cdot T_Q\). In other words, we are asking for a groupoid homomorphism \(\functionSignature{\groupoidFunction{T}}{\pathGroupoid{\transportAtlas{\quiver{A}}}}{\generalLinearGroup{3}{\finiteField{3}}}\) from the path groupoid of the atlas to the group(oid) of matrix multiplication of 3×3 matrices with entries in \(\mathbb{F} _3\) that we use to represent cardinal orientation.

At this point, we should expand on the idea that the atlas \(\transportAtlas{\quiver{A}}\) is a quiver in its own right. We’ll allow ourselves the freedom to label its edges with cardinals in whatever way we like, but we must keep distinct the two kinds of cardinals: cardinals in the atlas, which represent *transitions between compasses*, and cardinals in the base quiver, which represent the most atomic movements it is possible to make.

In the case of the cardinal atlas of the cube, we will use capital letters to denote the atlas cardinals: \(\card{R},\card{G},\card{B}\). The cardinal \(\card{R}\) will label the transition from \(\textcolor{#dc841a}{C_{rg+}}\) to \(\textcolor{#c74883}{C_{rb+}}\), and similarly for \(\card{G}\) and \(\card{B}\). Our goal will be to construct the homomorphism we seek by defining it only on the atlas cardinals, in other words, by defining \(T_R,T_G,T_B\).

To help us define \(T_R\), we exploit the fact that it must represent parallel rewrites on two distinct edges \(\de{\textcolor{#dc841a}{C_{rg+}}}{\textcolor{#c74883}{C_{rb+}}}\) and \(\de{\textcolor{#c74883}{C_{rb+}}}{\textcolor{#dc841a}{C_{rg-}}}\). The first edge gives what we saw for \(T_P\) above, which was \(T_P(\hat{r})=\hat{r}\) and \(T_P(\hat{g})=-\hat{b}\). The second edge has parallel rewrite \(\{r \to r,b \to g\}\), giving us the action on the remaining basis vector \(T_R(\hat{b})=\hat{g}\). Using the basis \(\{\hat{r},\hat{g},\hat{b}\}\), we now have a matrix representation of \(T_R\); similar calculations yield \(T_G\) and \(T_B\):

\(T_R=\begin{pmatrix}1&0&0\\0&0&1\\0&\underline{1}&0\end{pmatrix}\) \(T_G=\begin{pmatrix}0&0&\underline{1}\\0&1&0\\1&0&0\end{pmatrix}\) \(T_B=\begin{pmatrix}0&1&0\\\underline{1}&0&0\\0&0&1\end{pmatrix}\)
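We can sanity-check these matrices directly (a sketch assuming the basis order \((\hat{r},\hat{g},\hat{b})\), with the underlined entries written as \(-1\)):

```python
import numpy as np

# Frame transformations as integer matrices with entries in {-1, 0, 1};
# columns are the images of the basis vectors r, g, b.
T_R = np.array([[1, 0, 0], [0, 0, 1], [0, -1, 0]])
T_G = np.array([[0, 0, -1], [0, 1, 0], [1, 0, 0]])
T_B = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]])

r, g, b = np.eye(3, dtype=int)

# T_R fixes r and realizes the parallel rewrites g -> -b, b -> g
assert (T_R @ r == r).all()
assert (T_R @ g == -b).all()
assert (T_R @ b == g).all()

# each generator acts as a quarter-turn, so its fourth power is the identity
for T in (T_R, T_G, T_B):
    assert (np.linalg.matrix_power(T, 4) == np.eye(3, dtype=int)).all()
```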

These matrices connect to our earlier notion of a **cardinal groupoid**. In fact, they are simply a *representation* of this groupoid, in an identical way to how matrices provide linear representations of ordinary groups.

### Curvature quiver

A benefit of having these matrices is that we can compute the **curvature quiver** using the same matrix machinery that we have used previously to produce lattice quivers. The atlas serves as the fundamental quiver, and the matrices \(T_i\) give us the path value map that we use to compute the quotient \(\compactQuotient{\transportAtlas{\quiver{A}}}{\chart{\sym{i}}}{\function{T}}\), where \(\functionSignature{\function{T}}{\sym{i}}{T_i}\) acts as a cardinal valuation. We’ll use \(\textcolor{#dc841a}{C_{rg+}}\) as the origin chart, but this choice is arbitrary.

Before we construct the curvature quiver, it is interesting to note that the atlas \(\transportAtlas{\quiver{A}}\) is the fundamental quiver of the **rhombitrihexagonal lattice**, a fragment shown here:

Computing the quotient \(\compactQuotient{\transportAtlas{\quiver{A}}}{\chart{\sym{i}}}{\function{T}}\) for the cardinal atlas under the cardinal valuation \(\sym{T}\), we obtain the curvature quiver, shown below:

It turns out that this 24-vertex curvature quiver is the Cayley quiver of the so-called **binary tetrahedral group**. Each vertex in the curvature quiver represents an element of the group, and the edges correspond to the actions of a particular set of generators. Each vertex of the atlas lives on 4 cycles, each of length 4, that together make up the whole graph:
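As a numerical sanity check on the vertex count, we can enumerate the matrix group generated by \(T_R,T_G,T_B\) inside \(\generalLinearGroup{3}{\finiteField{3}}\) (a sketch; entries are reduced mod 3 with \(-1 \equiv 2\), and only the count is verified here, not the group's isomorphism type):

```python
import numpy as np

# Enumerate the matrix group generated by T_R, T_G, T_B inside GL(3, F_3)
# by closure under right multiplication, starting from the identity.
T_R = np.array([[1, 0, 0], [0, 0, 1], [0, 2, 0]])  # -1 written as 2 mod 3
T_G = np.array([[0, 0, 2], [0, 1, 0], [1, 0, 0]])
T_B = np.array([[0, 1, 0], [2, 0, 0], [0, 0, 1]])

identity = np.eye(3, dtype=int)
seen = {identity.tobytes()}
frontier = [identity]
while frontier:
    M = frontier.pop()
    for T in (T_R, T_G, T_B):
        N = (M @ T) % 3
        if N.tobytes() not in seen:
            seen.add(N.tobytes())
            frontier.append(N)

# 24 elements, matching the 24 vertices of the curvature quiver
assert len(seen) == 24
```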