Ways to think about structure in mathematics
Most of the objects that appear in mathematics can be thought of as sets with additional “structure”. For example, a group is a set together with a binary operation satisfying certain axioms, a topological space is a set together with a collection of “open” subsets, and a vector space is a set together with an addition and a scalar multiplication.
But “structure” in this sense is a somewhat elusive concept. We know it when we see it, but it’s hard to describe explicitly – which is why I just gave some examples and hoped you knew what I meant. (Sidenote: there is also a more formal notion of structure in mathematical logic, but that’s not the topic of this post.)
The goal of this post is not to give a formal definition of structure – I’m not sure how helpful that would even be – but rather to describe different perspectives that may be useful when thinking about it. To guide us, we will consider one particular question: what does it mean to say that object A has “more structure” than object B? For example, a vector space has more structure than a group, which has more structure than a simple set. We will start with more formal (but also more boring) perspectives and then work our way towards more speculative and fuzzy ones.
Notation
We’ll fix a set $X$; whenever we talk about putting a structure “on a set” below, $X$ is the set we have in mind. We will write $\operatorname{Aut}(A)$ for the automorphism group[2] of an object $A$.
Structure can be canonically removed
If an object has some structure, there is always a canonical way to remove (part of) it: we can simply forget about it. For example, any vector space can be treated as a plain abelian group by forgetting scalar multiplication,[1] and any metric space becomes a topological space by remembering only which sets are open. There is in general no such canonical way to add structure.
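As a concrete picture, here is the chain of forgetting steps from the real numbers as a field down to a bare set (writing each object as a set plus its operations):

$$(\mathbb{R}, +, \cdot) \;\longmapsto\; (\mathbb{R}, +) \;\longmapsto\; \mathbb{R}.$$

The first arrow forgets multiplication (a field becomes an abelian group), the second forgets addition (a group becomes a bare set). Neither arrow has a canonical inverse.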
Structure leads to smaller symmetry groups
If we equip $X$ with more structure, $\operatorname{Aut}(X)$ can only shrink: every map that preserves the richer structure in particular preserves the weaker one, so the automorphism group of the richer object is a subgroup of the automorphism group of the poorer one.
For example, we can treat the real numbers as a metric space or as a topological space. For a metric space, the automorphism group consists of only the isometries (i.e. maps that don’t change distances between points), which for the real numbers are exactly the maps $x \mapsto \pm x + c$, i.e. translations and reflections. If we treat them as a topological space though (which has a lot less structure), then the automorphisms are all the homeomorphisms of the real number line, which form a much larger group.
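In symbols, writing $\operatorname{Isom}$, $\operatorname{Homeo}$ and $\operatorname{Sym}$ for isometries, homeomorphisms and arbitrary bijections respectively, the symmetry groups nest in the opposite direction to the structure:

$$\operatorname{Isom}(\mathbb{R}) \;\subsetneq\; \operatorname{Homeo}(\mathbb{R}) \;\subsetneq\; \operatorname{Sym}(\mathbb{R}).$$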
More structure leads to fewer structure-preserving maps
If we consider two sets that each carry the same kind of structure, then the more structure we require maps to preserve, the fewer maps there are. Between two vector spaces, for example, there are (in general) fewer linear maps than there are homomorphisms of the underlying additive groups, and fewer of those than arbitrary functions between the underlying sets.

The previous perspective is a special case of this one, where both objects are the same and we restrict attention to the invertible structure-preserving maps.
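As a toy sanity check, here is a brute-force sketch in Python (my choice of $\mathbb{Z}_4$ is just for illustration) that counts all set maps from $\mathbb{Z}_4$ to itself versus those that also preserve the group structure:

```python
from itertools import product

n = 4                  # work in Z_4, the integers mod 4
elements = range(n)

# Every set map f: Z_4 -> Z_4, encoded as the tuple (f(0), f(1), f(2), f(3)).
all_maps = list(product(elements, repeat=n))

# A map is a group homomorphism iff f(a + b) = f(a) + f(b) (mod 4).
homs = [f for f in all_maps
        if all(f[(a + b) % n] == (f[a] + f[b]) % n
               for a in elements for b in elements)]

print(len(all_maps))  # 256 arbitrary set maps
print(len(homs))      # 4 homomorphisms: x -> k*x mod 4 for k = 0, 1, 2, 3
```

Out of 256 arbitrary maps, only the four maps $x \mapsto kx \bmod 4$ survive the homomorphism requirement.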
Structure allows more definitions and theorems
Now we start getting into slightly more hand-wavy territory. If an object has more structure, then more concepts can be defined in terms of that structure and more theorems can be proven about it. Everything that can be defined or proven for the weaker structure still makes sense for the richer one, but not the other way around.
Some examples:
- On a Riemannian manifold, we can do things like measure the angle at which two curves intersect, which is simply not a concept that makes sense for a manifold without a metric
- Rings allow us to talk about divisibility, which has no analogue if only a group structure is available
- All vector spaces have a basis but the same is not true for all modules (which have less structure than vector spaces)
While adding structure “preserves definitions and theorems”, it can sometimes make definitions trivial or collapse certain distinct concepts into one. For example, divisibility becomes very boring on fields because every element is divisible by every other element (except 0).
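To spell out the divisibility example: the definition needs multiplication, so it cannot even be stated for a bare group, and in a field it trivializes in one line:

$$a \mid b \;:\Longleftrightarrow\; \exists c\colon b = a\,c, \qquad \text{and in a field: } a \neq 0 \implies b = a \cdot (a^{-1} b), \text{ so } a \mid b.$$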
Algorithmic complexity
Now it’s getting really hand-wavy, so activate your lack-of-rigor-deflectors.
Loosely speaking, algorithmic complexity (or Kolmogorov complexity) measures how long the shortest possible description of some object is. This can be formally defined for bit sequences but I will appeal to your intuition to also apply it to other things like mathematical structures, without explicitly specifying how to encode those as bit sequences.
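For reference, the usual formal version for bit strings (with $U$ a fixed universal Turing machine and $|p|$ the length of a program $p$) is:

$$K(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}.$$

Different choices of $U$ change $K$ only by an additive constant, which is why the particular choice doesn’t matter for our purposes.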
One connection between structure and complexity is quite obvious: if a type of structure is richer, then its definition tends to be longer. The definition of a field, for example, contains the definition of an abelian group as a building block and then adds more operations and axioms on top, so the shortest description of “field” is longer than that of “group”.
There are some cases where this complexity perspective becomes a bit of a stretch. For example, it’s not obvious that defining a metric space is more complex than defining a topological space (unlike in the case of fields and groups, where the hierarchy is clear). I’d argue that it is in fact more complex because you need concepts like the real numbers which are pretty complicated compared to topological spaces. But there might be other examples where there really is a very short description of something which nevertheless has a lot of structure in terms of the other perspectives above. This is fine: our goal here is not to give a formal definition of structure but rather to list some of the properties that are typically associated with it.
There is another, more interesting way in which complexity comes into play when talking about structure: how long is an average description of a particular instance of a structure on our fixed set $X$? Some examples:
- Specifying a topological space can be extremely complex. Because there is such a large number of possible topologies on a fixed set, most of them need to have very long descriptions (the brute-force sketch after this list makes this concrete for a three-element set). Also note that those topological spaces with very simple descriptions are often those that have a natural additional structure. For example, to define the Euclidean topology on $\mathbb{R}^n$, we usually first define its vector space structure, use that to define a metric, and then use that to define a topology
- Specifying a field on a finite set is very easy: there is at most one anyway (up to isomorphism)
- If the cardinality of $X$ is prime, there is also only one group on $X$. Otherwise, there might be more, but still far fewer than there are topologies [I think, citation needed]
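Here is a small brute-force sketch (my own illustration, only feasible for very small sets) that counts all topologies on a three-element set:

```python
from itertools import chain, combinations

X = frozenset({0, 1, 2})
subsets = [frozenset(s) for r in range(len(X) + 1)
           for s in combinations(X, r)]

# Candidate open sets besides the mandatory empty set and X itself.
middle = [s for s in subsets if s and s != X]

count = 0
for family in chain.from_iterable(
        combinations(middle, r) for r in range(len(middle) + 1)):
    tau = set(family) | {frozenset(), X}
    # On a finite set, closure under pairwise unions and intersections
    # is all we need to check.
    if all((a | b) in tau and (a & b) in tau for a in tau for b in tau):
        count += 1

print(count)  # 29 distinct topologies on a 3-element set
```

Already three points carry 29 different topologies (the labeled counts grow as 1, 1, 4, 29, 355, …; OEIS A000798), while a three-element set carries exactly one group structure up to isomorphism.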
This seems to point towards more structure making it easier to specify a particular instance. But this is not always the case. For example, a Riemannian manifold has more structure than just a smooth manifold (according to all the previous perspectives). But every smooth manifold can be equipped with a Riemannian metric, and that metric is not uniquely determined by the manifold. So describing a Riemannian manifold usually takes longer than just describing a smooth manifold without a metric, because the choice of metric needs to be specified.
In general, adding structure means that there might be additional choices that need to be specified (such as a Riemannian metric) but it can also impose restrictions (for example, many topological spaces can’t be turned into metric spaces). These two factors pull the descriptive complexity of individual instances in opposite directions.
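One heuristic way to summarize these two opposing forces (this is my informal bookkeeping, not a theorem): the cost of pinning down one instance is roughly the cost of the axioms plus an index into the instances satisfying them,

$$\underbrace{K(\text{class})}_{\text{axioms}} \;+\; \underbrace{\log_2 \#\{\text{instances of the class on } X\}}_{\text{picking one out}},$$

and adding structure can grow the second term by adding choices (a metric on a manifold) or shrink it by ruling instances out (only some topologies are metrizable).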
Inherent structure of objects
This is not a new perspective for thinking about structure. Instead, I will give an example of a possible “application” of the complexity-based perspective. Hopefully that will illustrate how these perspectives can be useful to have in your mental toolkit.
There are interesting connections between what we discussed in the previous section and Kolmogorov sufficient statistics. Intuitively speaking, the Kolmogorov sufficient statistic of a bit string is the part of that string that has “structure” in the sense of not being algorithmically random. Any bit string can be efficiently described by first describing its Kolmogorov sufficient statistic (which is a list of bit strings with the same “structure”) and then specifying its algorithmically random component (by giving its index in that list).
This is exactly analogous to describing e.g. a group on $X$ as in the previous section: first describe the group up to isomorphism (the structured part), then single out the particular instance on $X$, for example by giving its index in a list of all group structures on $X$ of that isomorphism type (the “random” part).
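For bit strings, the corresponding statement in the standard two-part-code picture (with $S \ni x$ the sufficient statistic, a finite set of strings) is, informally:

$$K(x) \;\approx\; \underbrace{K(S)}_{\text{structure}} \;+\; \underbrace{\log_2 |S|}_{\text{index of } x \text{ in } S}.$$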
Perhaps this idea can be used to define the “true inherent structure” on an object as its Kolmogorov sufficient statistic. But fleshing that out is a topic for another post.
Conclusion
In summary, here are all the perspectives we talked about: If an object has more structure, …
- this structure can always be canonically removed
- it has fewer symmetries
- there are fewer maps between it and other objects that preserve all the structure
- more concepts can be defined and more theorems proven
- specifying the class of objects with that structure tends to be more complex
- specifying that particular object is often easier because the structure restricts the space of options, but there are exceptions
1. In category theory these are forgetful functors, but I’m just interested in intuition here, not formalism. In this example, there is also a natural way to turn any abelian group $A$ into a $K$-vector space, for a given field $K$, by tensoring with $K$. But the underlying group of that vector space won’t be $A$ anymore, and in many other cases there is no canonical way to add structure at all.
2. The automorphism group is the set of all isomorphisms from an object to itself (which becomes a group via composition as the group operation).