What is beauty? Can we measure it? If so, how?
When pondering such vertiginous questions, it is always useful to turn to the masters. Aquinas, following Aristotle, defines beauty as id quod visum placet: that which pleases upon being seen.
The underlying cause is that beauty reveals the ontology, the true nature of what is. That is Christopher Alexander’s fascinating thesis, most clearly explained in The Nature of Order: An essay on the art of building and the nature of the universe, book four: The Luminous Ground.
When I am part of the making of a building and examine my process, what is happening in me when I do it, myself, in my effort, is that I find that I am nearly always reaching for the same thing. In some form, it is the personal nature of existence, revealed in the building, that I am searching for.
We apply this essentialist philosophy to colours here.
Both Aquinas and Alexander agree that beauty is a subjective perception of an objective quality. Our goal is not to determine whether that is true, but to examine the ramifications of the assumption:
The definition of intelligence is still disputed today, so let us take the descriptive path instead of the prescriptive one.
Imagine that you hear a chord (here, C and G):
The piano has been kindly sampled by Alexander Holm.
Its sound wave is approximately the sum of two sines, sin(2π·f_C·t) + sin(2π·f_G·t), with f_C ≈ 262 Hz and f_G ≈ 392 Hz.
And it looks like this:
Finding the original frequencies, 262 Hz and 392 Hz, is difficult with only a glance.
When the signal reaches your ear, it is encoded through the firing intensity of your neurons: the more intense the signal (a spike on the graph), the faster they fire.
However, the next neurons transmitting the signal also have their own rates: a second neuron, firing naturally at some frequency f, will fire at a total rate that measures how much the incoming signal oscillates at f.
To find how much a neuron fires depending on its rate, let us look at the integral of the signal against the neuron's own oscillation, ∫ s(t)·sin(2π·f·t) dt, plotted as a function of f:
This is an approximation to make it visible; there should only be two vertical bars, at the frequencies of C and G.
What this graph teaches us is that a simple mechanism, neurons firing at their own rate when they receive a signal, enables them to decompose a complex signal into its exact fundamentals! It is called the Fourier transform. Each layer of neurons finds the rate at which the previous layer changes its rate, detecting ever more global patterns.
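This decomposition is easy to check numerically. Here is a minimal sketch, assuming the chord's fundamentals sit at 262 Hz (C) and 392 Hz (G):

```python
import numpy as np

# Fundamental frequencies assumed near C4 and G4 (approx. 262 Hz and 392 Hz).
rate = 4096                      # samples per second
t = np.arange(rate) / rate       # one second of signal
signal = np.sin(2 * np.pi * 262 * t) + np.sin(2 * np.pi * 392 * t)

# Each "neuron" tuned to a frequency f correlates the signal with
# its own oscillation; numpy's FFT does all frequencies at once.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two strongest responses recover the chord's fundamentals.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))    # [262.0, 392.0]
```

The two "vertical bars" of the graph reappear as the only two large FFT bins.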
Aesthetic experience as theory building

The brain reduces signals to their fundamental harmonics. Since harmonics are periodic, the mechanism is intrinsically predictive: the brain always has a theory of what is coming next. When that theory is disproven, the brain learns something new and adjusts its theory. That learning is inherently pleasant: the brain produces pleasure-inducing substances to strengthen the new neural connections.
Here are the first 30 seconds of Bach’s Kyrie as an example:
To make it simpler, let us suppose it was written in A minor instead of B minor. It starts on the fundamental of the key, A. In the second measure, it moves upwards to B. We are already building a theory: the melody is climbing the scale, and the next note will be C. However! Bach destroys our theory and moves even further upwards, outside our scale, to C sharp. This unexpected note has a great effect.
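That theory-and-surprise loop can be sketched in a few lines; a toy model, assuming the simplified A minor transposition above:

```python
a_minor = ['A', 'B', 'C', 'D', 'E', 'F', 'G']  # natural minor scale, in order

heard = ['A', 'B']
# The listener's theory: the melody climbs the scale one degree at a time.
predicted = a_minor[a_minor.index(heard[-1]) + 1]
print(predicted)              # C
actual = 'C#'
print(actual == predicted)    # False: the theory is broken...
print(actual in a_minor)      # False: ...and the note even leaves the scale
```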
Examples

To test our theory, here are two examples (both versions are romantic arrangements):
The chord progression is C → G → A → E → F → C → F → G → C. You can draw it below, as if plotting the path without lifting your pen. At every step, can you predict the next chord visually?
Once the notes stop bringing new information (after the E, the progression is very predictable: there is a vertical axis of symmetry), your emotion should decrease. Another example:
Here, the chord progression is A → D → G → C → F → B → E → A.
Much harder to predict, is it not? Towards the end, you finally discover the logic: aha! It is a star! Indeed, the symmetry is central this time. Here, you accumulated more chords before your theory was shattered, and it was more pleasant. This progression is called the circle of fifths. Personally, I call it the sublime progression. Even improvising simple arpeggios (playing the notes of each chord separately) on that very mathematical progression is pleasant:
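The star shape is no accident: on the seven-note diatonic circle, descending a fifth means moving four scale degrees down, i.e. three positions forward modulo 7, which traces a {7/3} star polygon. A minimal sketch:

```python
# The seven diatonic notes of C major, in scale order.
scale = ['C', 'D', 'E', 'F', 'G', 'A', 'B']

# Descending a diatonic fifth = advancing 3 positions on this 7-note circle.
# Starting on A (index 5), the path visits every note before returning home.
progression = [scale[(5 + 3 * i) % 7] for i in range(8)]
print(' -> '.join(progression))  # A -> D -> G -> C -> F -> B -> E -> A
```

Because 3 and 7 share no factor, the path only closes after touching all seven notes: the longest possible tour before the theory resolves.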
More rigorously, the goal is to maximise the amount of information in the minimal amount of data. To show how you do it intuitively, try to draw the most beautiful ‘H’:
There is a good chance you placed the bar in the middle, at a ratio of 1/2. Why? You can encode the bar's position in two ways: (1) the length above the bar divided by the total height, or (2) the length below the bar divided by the total height.
In other words, you must store the ratio plus its meaning: which of the two encodings you used. How can we avoid it? By making (1) and (2) equal!
This saves us the 1 bit needed to store the choice between (1) and (2). We observe that quantifying beauty is possible with information theory. Mathematically, we want to maximise Shannon's entropy, H = −Σᵢ pᵢ log₂ pᵢ.
pᵢ is the probability of observing symbol i. In our example, encodings (1) and (2) each had probability 1/2.
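The bookkeeping above is quick to verify with Shannon's formula applied to our one-bit choice:

```python
from math import log2

def entropy(probabilities):
    # Shannon entropy in bits: H = sum of p * log2(1/p).
    return sum(p * log2(1 / p) for p in probabilities if p > 0)

# Storing which encoding, (1) or (2), was used costs one full bit
# when both are equally likely:
print(entropy([0.5, 0.5]))  # 1.0
# Making (1) and (2) equal leaves a single possibility, and zero bits:
print(entropy([1.0]))       # 0.0
```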
Application to chords

How can we apply this theory to find, for example, beautiful chords? In twelve-tone equal temperament, the relative frequencies are:
Note | Frequency ratio | Fractional approx. | Bits |
---|---|---|---|
C | 1.00 | 1 / 1 | 0.0 |
C# | 1.06 | 16 / 15 | 3.9 |
D | 1.12 | 9 / 8 | 3.0 |
D# | 1.19 | 6 / 5 | 2.3 |
E | 1.26 | 5 / 4 | 2.0 |
F | 1.33 | 4 / 3 | 1.6 |
F# | 1.41 | 64 / 45 | 5.5 |
G | 1.50 | 3 / 2 | 1.0 |
G# | 1.59 | 8 / 5 | 2.3 |
A | 1.68 | 5 / 3 | 1.6 |
A# | 1.78 | 16 / 9 | 3.2 |
B | 1.89 | 15 / 8 | 3.0 |
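The table can be reproduced mechanically: each fraction is a just-intonation approximation of the equal-tempered ratio 2^(k/12), and the Bits column is log₂ of its denominator. A sketch:

```python
from fractions import Fraction
from math import log2

# Just-intonation fractions from the table above.
ratios = {'C': Fraction(1, 1), 'C#': Fraction(16, 15), 'D': Fraction(9, 8),
          'D#': Fraction(6, 5), 'E': Fraction(5, 4), 'F': Fraction(4, 3),
          'F#': Fraction(64, 45), 'G': Fraction(3, 2), 'G#': Fraction(8, 5),
          'A': Fraction(5, 3), 'A#': Fraction(16, 9), 'B': Fraction(15, 8)}

for k, (note, r) in enumerate(ratios.items()):
    # Each fraction approximates the equal-tempered ratio 2^(k/12)...
    assert abs(float(r) - 2 ** (k / 12)) < 0.02
    # ...and its cost in bits is log2 of the denominator.
    print(note, round(log2(r.denominator), 1))
```

Every printed value matches the Bits column, down to F#'s expensive log₂ 45 ≈ 5.5.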
The fraction for E is much simpler than B's. Indeed, E's denominator is 4 = 2², and its numerator, 5, requires only log₂ 5 ≈ 2.3 bits, while B's denominator is 8 = 2³, and its numerator, 15, requires log₂ 15 ≈ 3.9 bits. Same information (a note of a chord), but more data. We can therefore expect the first chord (C + E) to be more pleasant than the second (C + B).
Even without knowing the theory, you hear that it sounds odd. Does it mean that combinations like C + B are forever banished? No, on the contrary:
When we wanted a two-note chord, B was too different from C for them to be related. However, when you compose music, you want more notes, and repeating the same easy chord will quickly bore the listener (no new information!). If you combine different notes cleverly, they can be pleasant together.
Previously, we saw that storing B relative to C requires 3 bits, while storing E or G is more compact (2 bits and 1 bit). And storing B relative to E or G also requires only 1 or 2 bits. In other words, E and G bind C and B together. More notes yield more information with fewer bits each: compression is higher, and the chord is more pleasant!
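That arithmetic can be checked directly with the just-intonation fractions from the table (the helper interval_bits is our own, not a standard function):

```python
from fractions import Fraction
from math import log2

def interval_bits(a, b):
    # Cost of storing note b relative to note a: log2 of the denominator
    # of their frequency ratio (the same convention as the table above).
    return log2((b / a).denominator)

C, E, G, B = Fraction(1), Fraction(5, 4), Fraction(3, 2), Fraction(15, 8)

print(interval_bits(C, B))  # 3.0 bits: B alone sits far from C
print(interval_bits(C, E))  # 2.0
print(interval_bits(C, G))  # 1.0
print(interval_bits(E, B))  # 1.0: B/E reduces to 3/2, a simple fifth
print(interval_bits(G, B))  # 2.0: B/G reduces to 5/4, a simple third
```

Fraction reduces each ratio automatically, which is exactly how E and G serve as cheap stepping stones between C and B.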
For a deeper study of information-theoretic entropy and harmonic waves as the root of how our intellect works, see the free energy principle and connectome-specific harmonic waves: an introduction and an academic article.