Monday, April 16, 2018

Proving Goodstein's Theorem and Transfinite Methods

This is the third part of a post series on Goodstein's theorem. For the first part, see here.

The previous post introduced the reader to Peano arithmetic (PA), the archetypical example of an axiomatic system introduced to standardize the foundations of mathematics. Despite having tremendous expressive power in formulating and proving theorems about natural numbers, the system is not without limitations. Gödel's Incompleteness Theorem guarantees the existence of statements out of the reach of formal proofs in PA. Goodstein's Theorem, which states that every Goodstein sequence terminates at 0, is an example. Further, it is a "natural" example in the sense that it was not contrived to demonstrate incompleteness. In fact, Goodstein proved the theorem in 1944, decades before Laurence Kirby and Jeff Paris discovered that it is independent of the axioms of PA in 1982.

To see why this independence holds, we consider the proof of Goodstein's Theorem. As mentioned, it is not provable in PA, so the proof makes use of tools outside of arithmetic: in particular, infinite ordinal numbers. A more thorough discussion of ordinal numbers may be found elsewhere on this blog. For our purposes, the key property of ordinal numbers is that they represent order types of well-ordered sets.

A well-ordered set is simply a set of elements and an ordering relation (often called "less than" and denoted by "<") such that any two elements are comparable (each is less than, equal to, or greater than the other), and every nonempty subset has a minimal element. The set may be finite or infinite. Here are some examples:

  • The set of natural numbers itself, {0,1,2,3,...}, with the relation "less than" is well-ordered because every two elements are comparable and every nonempty subset has a smallest number
  • The set of natural numbers with the "greater than" relation is not well-ordered: with this relation, "minimal" elements are really largest elements, and the subset {3,4,5,...}, for example, has no greatest element
  • The set {A,H,M,R,Z} with the relation "comes before in the alphabet" is well-ordered because we can compare any two letters to see which comes first in the alphabet, and any subset has a first letter
  • The set of all integers {...-2,-1,0,1,2,...} is not well-ordered by either less than or greater than relations


The gist of the definition is that all set elements are listed in a particular order so that we can always tell which of a pair comes first, and that infinite ascending sequences are acceptable while infinite decreasing ones are not. To understand order type, we need a notion of when two well-ordered sets are "the same". For example, the set {1,2,3,4,5} with the less than relation and {A,H,M,R,Z} with the alphabet relation are quite similar. Using the one-to-one relabeling 1 → A, 2 → H, 3 → M, 4 → R, 5 → Z, we can move from one set to the other and preserve the ordering relation. That is, 1 < 2 in the first set and their images satisfy A < H in the second set, and so on. If there is a relabeling like the one above between two sets, they are said to be of the same order type.
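
To make the relabeling idea concrete, here is a minimal Python sketch (my own illustration; the function and variable names are invented for this example) that checks whether a proposed relabeling between two ordered lists preserves the ordering:

    # Check that a relabeling between two ordered lists preserves order.
    # Both lists are assumed to be written in increasing order under their own relations.
    def is_order_isomorphism(first, second, relabel):
        """Return True if relabel sends the i-th element of `first`
        to the i-th element of `second` for every position i."""
        if len(first) != len(second):
            return False
        return all(relabel[a] == b for a, b in zip(first, second))

    numbers = [1, 2, 3, 4, 5]              # ordered by "less than"
    letters = ["A", "H", "M", "R", "Z"]    # ordered by alphabetical position
    relabel = {1: "A", 2: "H", 3: "M", 4: "R", 5: "Z"}
    print(is_order_isomorphism(numbers, letters, relabel))   # True: same order type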

The purpose of ordinal numbers is to enumerate all possible order types for well-ordered sets; there is one ordinal for each order type. To make thinking about ordinals simpler, we often say that an ordinal is a specific set of the given order type, a particularly nice set. Specifically, we choose the set of all smaller ordinals. Since the sets in the example of the previous paragraph have five elements, their order type is the ordinal 5 = {0,1,2,3,4} (the reader may wish to show that any well-ordered five element set in fact has this order type). In fact, for finite sets, there is simply one ordinal for each size of set. For infinite sets, matters become more interesting.

The ordinal corresponding to the order type of the natural numbers is called ω. Using the canonical choice of representative set, ω = {0,1,2,3,...} (where we now view the elements as ordinals). The next ordinal is ω + 1, the order type of {0,1,2,3,...,ω} or in general the order type of a well-ordered set with an infinite ascending collection plus one element defined to be greater than all others. One can go on to define ω + n for any finite n and ω*2, the order type of {0,1,2,3,...,ω,ω + 1,ω + 2,...} (two infinite ascending chains stuck together). The precise details do not concern us here, but ω^2, ω^ω, ω^(ω^ω), and so on may be defined as well. What matters is that these ordinals exist and that the set of all ordinals expressible with ordinary operations on ω (for example, (ω^ω)*4 + (ω^3)*2 + ω*5 + 7) is a well-ordered set. In fact, the set of such ordinals is itself a larger ordinal called ε0.
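
To make the claim that all such expressions can be compared concrete, here is a minimal Python sketch (my own illustration, not part of the post's sources). It represents an ordinal below ε0 in "Cantor normal form" as a list of (exponent, coefficient) pairs, with the exponents themselves ordinals in the same form and listed from largest to smallest; comparison then proceeds term by term:

    def cmp_ordinal(a, b):
        """Return -1, 0, or 1 according to whether a < b, a = b, or a > b."""
        for (exp_a, coeff_a), (exp_b, coeff_b) in zip(a, b):
            c = cmp_ordinal(exp_a, exp_b)
            if c != 0:
                return c
            if coeff_a != coeff_b:
                return -1 if coeff_a < coeff_b else 1
        if len(a) != len(b):
            return -1 if len(a) < len(b) else 1
        return 0

    def nat(n):
        """The finite ordinal n; 0 is the empty list."""
        return [] if n == 0 else [([], n)]

    omega = [(nat(1), 1)]                            # ω
    omega_to_omega = [(omega, 1)]                    # ω^ω
    poly = [(nat(2), 2), (nat(1), 2), (nat(0), 2)]   # (ω^2)*2 + ω*2 + 2
    print(cmp_ordinal(omega_to_omega, poly))         # 1: ω^ω is the larger ordinal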

Once these preliminaries are established, the proof of Goodstein's Theorem is rather simple, and even clearer when considered intuitively. For any Goodstein sequence, the members are represented in hereditary base-n notation at every step: the first member is put into base 2, the next base 3, and so on. The idea is to take each member of the sequence and replace the base with ω to obtain a sequence of ordinals. For example, the sequence G4 generates a sequence of ordinals H4 in the following way:

G4(1) = 4 = 1*2^(1*2^1) → H4(1) = ω^ω (the 2's are replaced with ω's),
G4(2) = 26 = 2*3^2 + 2*3^1 + 2 → H4(2) = (ω^2)*2 + ω*2 + 2 (3's are replaced with ω's),
G4(3) = 41 = 2*4^2 + 2*4^1 + 1 → H4(3) = (ω^2)*2 + ω*2 + 1 (4's are replaced by ω's),
and so on.

Note that the multiplication by coefficients has been moved to the other side of the ω's for technical reasons and that some of the 1's have been removed for clarity. One may proceed in this manner to get a sequence of ordinals, the key property of which is that the sequence is strictly decreasing. In the above example, ω^ω > (ω^2)*2 + ω*2 + 2 > (ω^2)*2 + ω*2 + 1 and this downward trend would continue if we were to list more terms. This is because the H sequences "forget" about the base: it is always replaced by ω. The only change is caused by the subtraction of 1 at each step, which slowly reduces the coefficients. Intuitively, this is the point of the proof: by forgetting about the base, we replace the extreme growth of Goodstein sequences with a gradual decline. The units digit of the H sequence decreases by 1 every step. When it reaches 0, on the next step the ω coefficient is reduced by 1 and the units digit is replaced by the current base minus 1 (the highest allowed coefficient). These may become quite large, but they always reach zero eventually. Reasoning this way, it is clear that Goodstein's Theorem should be true.
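
The base-replacement idea can be carried out mechanically. The following Python sketch (my own illustration; the function names are invented) writes each Goodstein term in hereditary base-n notation, prints it with the base replaced by the symbol w (standing for ω), and then produces the next term by evaluating the same expression with the base increased by one and subtracting 1:

    def hereditary(n, base):
        """Hereditary base-`base` representation of n as a list of
        (exponent_representation, coefficient) pairs, largest exponent first."""
        terms, power = [], 0
        while n > 0:
            n, digit = divmod(n, base)
            if digit:
                terms.append((power, digit))
            power += 1
        return [(hereditary(e, base), c) for e, c in reversed(terms)]

    def to_ordinal_string(terms):
        """Render a hereditary representation with the base replaced by 'w'."""
        if not terms:
            return "0"
        parts = []
        for exp, coeff in terms:
            e = to_ordinal_string(exp)
            if e == "0":
                parts.append(str(coeff))
            elif e == "1":
                parts.append(f"w*{coeff}" if coeff > 1 else "w")
            else:
                parts.append(f"w^({e})*{coeff}" if coeff > 1 else f"w^({e})")
        return " + ".join(parts)

    def evaluate(terms, base):
        """Evaluate a hereditary representation back to an integer in `base`."""
        return sum(c * base ** evaluate(e, base) for e, c in terms)

    value = 4
    for step in range(1, 6):
        base = step + 1
        rep = hereditary(value, base)
        print(f"G4({step}) = {value},  H4({step}) = {to_ordinal_string(rep)}")
        value = evaluate(rep, base + 1) - 1   # bump the base, then subtract 1

Running this prints the terms 4, 26, 41, 60, 83 alongside their ordinals, and the ordinal column visibly decreases even as the integer column grows.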

In formal terms, the set of ordinals is well-ordered, so the set consisting of all members of an H sequence must have a minimal element, i.e., it cannot decrease forever. The only way that it can stop decreasing is if the G sequence stops, and Goodstein sequences only terminate at 0. Therefore, every Goodstein sequence terminates at 0 after a finite number of steps. We've proved Goodstein's Theorem!

Bringing in infinite ordinals to prove a statement about natural numbers is strange. So strange, in fact, that the argument is not formalizable in PA; there is simply no way to even define infinite ordinals in this language! This indicates why the given proof does not go through in PA, but it does not settle whether any proof of Goodstein's Theorem within PA is possible. It leaves open the possibility that there is a different clever approach that can succeed without infinite ordinals. A discussion of why this in fact does not occur may be found in the final post of this series (coming May 7).

Sources: http://www.cs.tau.ac.il/~nachumd/term/Kirbyparis.pdf, http://blog.kleinproject.org/?p=674, http://www.ams.org/journals/proc/1983-087-04/S0002-9939-1983-0687646-0/S0002-9939-1983-0687646-0.pdf

Monday, March 26, 2018

Goodstein's Theorem and Peano Arithmetic

This is the second part of a post series on Goodstein's theorem. For the first part, see here.

We saw last post that Goodstein sequences are certain sequences of positive integers defined using base representations. Despite their simple definition, they grow extraordinarily large. However, Goodstein's Theorem states that no matter how large the starting value is, the sequence will eventually terminate at 0 after some finite number of steps. Before discussing the proof of this remarkable theorem, we move in a completely different direction and define the axioms of Peano arithmetic.

Increasing standards of rigor were a hallmark of late 19th and early 20th century mathematics. With this movement came a need to precisely define the basic building blocks with which a given branch of mathematics worked. This was even true for simple arithmetic! By 1890, the mathematician Giuseppe Peano had published a formulation of the axioms (fundamental assumptions) of arithmetic, which are used nearly unchanged to this day. Written in plain English, the axioms state the following about the collection N of natural numbers (nonnegative integers), a function S known as the successor function, a function + known as addition, and a function * known as multiplication:

1) There exists a natural number called 0 in N
2) For every natural number n in N there is a successor S(n) in N (commonly written n + 1)
3) There is no natural number whose successor is 0
4) For any two natural numbers m and n in N, if S(m) = S(n), then m = n
5) For any natural number n, n + 0 = n
6) For any natural numbers n and m, n + S(m) = S(n + m)
7) For any natural number n, n*0 = 0
8) For any natural numbers n and m, n*S(m) = n*m + n
9) For any "well-behaved" property P, if P(0) holds and for each n in N, P(n) implies P(n + 1), then P is true for every natural number in N

The first two axioms simply say that you can start from 0 and count upwards forever through the natural numbers. The third says that 0 is not the successor of any natural number, so there is nothing immediately below 0 (this is of course false for larger sets of numbers such as the integers, but Peano's axioms are only for properties of the natural numbers). The fourth says that distinct natural numbers have distinct successors. The fifth through eighth axioms state the common properties of addition and multiplication. The ninth axiom is actually a large collection of axioms (an axiom schema) in disguise, called the "axiom schema of induction."
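
To see how little machinery these axioms presuppose, here is a small Python sketch (my own illustration) in which addition and multiplication are defined purely in terms of the successor function, following axioms 5 through 8 (the Python expression m - 1 stands in for "the number whose successor is m"):

    def S(n):
        """The successor function."""
        return n + 1

    def add(n, m):
        # Axiom 5: n + 0 = n.   Axiom 6: n + S(m) = S(n + m).
        return n if m == 0 else S(add(n, m - 1))

    def mul(n, m):
        # Axiom 7: n*0 = 0.     Axiom 8: n*S(m) = n*m + n.
        return 0 if m == 0 else add(mul(n, m - 1), n)

    print(add(3, 4), mul(3, 4))   # 7 12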



The idea of induction may be familiar to readers who recall their high school mathematics: one often wishes to prove that a given property holds for every natural number. To do so, it is sufficient to show that it is true in the "base case," that is, true for 0, and that the statement being true for a number means that it is also true for the next. Then since the property is true for 0, it is true for 1. Since it is true for 1, it is true for 2, and so on. The figure above illustrates the idea of induction, where each "statement" is that the given property is true for some number. The final axiom simply codifies the fact that this type of reasoning works; we get that the property is true for all natural numbers. In this context, a "well-behaved" property is one expressible in the language of the variety of logic being used (first-order logic in this case).

Remarkably, the above assumptions are all that is needed to perform arithmetic. In principle, though formalizing complicated proofs would be quite lengthy, true statements such as "17 is prime" and "every number is the sum of four squares" become theorems in Peano arithmetic. All of these proofs would take the form of a chain of statements beginning from axioms and concluding with the desired result, such as "17 is prime," rendered appropriately in the formal mathematical language. The progression from each statement to the next would also follow a collection of well-defined deductive rules. In this way, almost all theorems concerning natural numbers could be proven, beginning from just a small collection of axioms! However, Peano arithmetic still runs afoul of a result that vexes many axiom systems of first-order logic: Gödel's Incompleteness Theorem.



Gödel's (First) Incompleteness Theorem, originally proven by the mathematician Kurt Gödel in 1931, dashed the hopes of those who imagined that formal logical systems would provide a complete description of mathematics. It states, informally, that for any consistent first-order logical system powerful enough to encode arithmetic (Peano arithmetic is of course such a system), there exist statements in the language of the system that are neither provable nor refutable from the axioms. Further, there are explicit sentences in the logical system that are true (under the intended interpretation of the theory, more on this later) but unprovable. The above diagram illustrates the situation: there will always be things we know to be true or false that are beyond the reach of the axioms to formally prove or refute.

Some questions may spring to mind at this unintuitive result. What is the distinction between "true" and "provable"? How do we define "true" in mathematics if not as the end result of a proof? What do these unprovable statements look like, and what do they say?

The answer to the first of these depends on there being something "we mean by" the term "natural numbers". In other words, there is an intended interpretation of what natural numbers should be that the logical system fails to completely capture. Consequently, there are statements that we know to be true using methods outside the formal system but are unprovable within it. Simply bringing in additional assumptions does not resolve the incompleteness, however. For each outside axiom added, the theorem guarantees the existence of a new unprovable statement. And if the system ever does become complete through the addition of more axioms (at least axioms that can be mechanically listed), it also becomes inconsistent, that is, able to prove a contradiction (and loses all validity as a mathematical system).

As for the final question, the first known unprovable statements were those constructed in the proof of Gödel's theorem; these are known as Gödel sentences. They are highly contrived for the proof, however, and do not have any intuitive meaning. In the years following the original proof, the question remained whether any statement that "naturally" arises in the study of natural numbers is unprovable and irrefutable from the axioms of Peano arithmetic. Amazingly, such statements exist! In fact, a great example is Goodstein's Theorem. No proof exists, beginning from the Peano axioms, that has it as a conclusion. To read more about how it can be proven and why it is not a theorem of Peano arithmetic, see the next post.

Sources: https://www.cs.toronto.edu/~sacook/csc438h/notes/page96.pdf, https://plato.stanford.edu/entries/goedel-incompleteness/

Friday, March 9, 2018

Goodstein Sequences and Hereditary Base Notation

In mathematics, Goodstein sequences are certain sequences of natural numbers. Though they are fairly easy to define, their properties have important consequences in logic. Before investigating these, however, we give the definition. It depends on the concept of expressing numbers in different bases (well-known examples in addition to normal base-10 representations include binary, base 2, and hexadecimal, base 16). Recall that when writing a number, such as 4291, what we mean is 4 thousands plus 2 hundreds plus 9 tens, plus 1 one, alternatively expressed as

4291 = 4*10^3 + 2*10^2 + 9*10^1 + 1.

This decomposition uses 10 as a base. Note that the numbers multiplying the powers of 10 always vary between 0 and 9. Base 2, for example, could be used just as easily, with only digits 0 and 1 as coefficients. Expressing 4291 as powers of 2 yields

4291 = 1*4096 + 0*2048 + 0*1024 + 0*512 + 0*256 + 1*128 + 1*64 + 0*32 + 0*16 + 0*8 + 0*4 + 1*2 + 1*1
= 1*2^12 + 0*2^11 + 0*2^10 + 0*2^9 + 0*2^8 + 1*2^7 + 1*2^6 + 0*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 1.

Therefore, 4291 is typically expressed in binary as the sequence of coefficients 1000011000011. However, for our purposes, it is more convenient to explicitly express the powers of the base involved, although it will simplify matters to drop those terms with coefficient 0 since they have no contribution to the sum. The equation above then becomes

4291 = 1*2^12 + 1*2^7 + 1*2^6 + 1*2^1 + 1.
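
A short Python sketch (my own, purely illustrative) recovers this base-2 decomposition by repeated division:

    n, base = 4291, 2
    digits = []
    while n > 0:
        n, d = divmod(n, base)
        digits.append(d)        # remainders give the digits, least significant first

    print("".join(str(d) for d in reversed(digits)))      # 1000011000011
    print([p for p, d in enumerate(digits) if d == 1])    # [0, 1, 6, 7, 12]: the powers of 2 used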

The system described above is known as ordinary base notation, but the definition of Goodstein sequences requires a slightly modified version, hereditary base notation. This involves taking the exponents themselves and subjecting them to the same base decomposition as the original number. Since 12 = 1*2^3 + 1*2^2, 7 = 1*2^2 + 1*2^1 + 1, and 6 = 1*2^2 + 1*2^1, the integer 4291 now becomes

4291 = 1*2^(1*2^3 + 1*2^2) + 1*2^(1*2^2 + 1*2^1 + 1) + 1*2^(1*2^2 + 1*2^1) + 1*2^1 + 1.

This expression is quite complicated, but the process is not quite finished yet! The exponents 2 and 3 within the exponents are not yet in base-2: 3 = 1*2^1 + 1 and 2 = 1*2^1. Making the necessary replacements finally gives 4291 in hereditary base-2 notation:

4291 = 1*2^(1*2^(1*2^1 + 1) + 1*2^(1*2^1)) + 1*2^(1*2^(1*2^1) + 1*2^1 + 1) + 1*2^(1*2^(1*2^1) + 1*2^1) + 1*2^1 + 1.

In the general case, there may be many iterations of this process, which motivates the name "hereditary"; a base-2 decomposition is applied to the original integer and then the exponents that result, and then their exponents, and so on. The end result has only 2's as bases of exponents and only 1's as coefficients. The interested reader can verify that this type of process may be repeated for any positive integer in any base (using as coefficients positive integers less than the base), and that for a fixed number and base, the representation thus obtained is unique. The stage is now set for the definition of Goodstein sequence.
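
The whole hereditary rewriting process is easy to mechanize. The following Python sketch (my own illustration; the function name is invented) produces the hereditary base-b expression of any positive integer as a string in the notation used above:

    def hereditary_string(n, base):
        """Return n written in hereditary base-`base` notation."""
        if n == 0:
            return "0"
        terms, power = [], 0
        while n > 0:
            n, digit = divmod(n, base)
            if digit:
                terms.append((power, digit))
            power += 1
        parts = []
        for power, digit in reversed(terms):   # largest power first
            if power == 0:
                parts.append(str(digit))
            else:
                parts.append(f"{digit}*{base}^({hereditary_string(power, base)})")
        return " + ".join(parts)

    print(hereditary_string(12, 2))    # 1*2^(1*2^(1) + 1) + 1*2^(1*2^(1))
    print(hereditary_string(4291, 2))  # the full hereditary base-2 expression derived above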

A Goodstein sequence is simply a sequence of nonnegative numbers. We may choose any number 1, 2, 3,... to begin the sequence. Next, whatever this number is, we express it in hereditary base-2 notation, just as we did with the example 4291 above. To generate the next member of the sequence, simply change every 2 in the hereditary base-2 representation to a 3, and then subtract 1 from the resulting number. This is the second member of the sequence. After that, express this second number in hereditary base-3 notation, change the 3's to 4's, and subtract one to get the third, and so on. We denote the nth member of the Goodstein sequence beginning with m by Gm(n). The first few sequences Gm die out quickly: if the seed is 1 (whose hereditary base-2 representation is just 1), there are no 2's to change to 3's so we simply subtract 1 to find G1(2) = 0. If a sequence reaches 0, we end it there, so that the sequence

G1 = {1,0}.

G2 is scarcely more interesting: G2(1) = 2 = 1*2^1, so changing the single 2 to a 3 and subtracting 1 yields G2(2) = 1*3^1 - 1 = 2. Recall that coefficients 0-2 are allowed in hereditary base-3 notation, so 2 in this notation is simply 2. There are no 3's to change to 4's, so we subtract 1 to get G2(3) = 1. There are no 4's to change to 5's, so G2(4) = 0 and the sequence is finished:

G2 = {2,2,1,0}.

Beginning with 3 leads to a nearly identical sequence: the reader may try calculating it. The end result is G3 = {3,3,3,2,1,0}. However, at m = 4, new behavior emerges. 4 = 1*2^(1*2^1), so both 2's must be replaced by 3's to get G4(2) = 1*3^(1*3^1) - 1 = 27 - 1 = 26. In hereditary base-3, 26 = 2*3^2 + 2*3^1 + 2, so G4(3) = 2*4^2 + 2*4^1 + 2 - 1 = 41. For the next step, we get G4(4) = 2*5^2 + 2*5^1 + 1 - 1 = 60. Note that the units digit is reduced by one in each step even as the sequence increases. When it hits zero, as in this step, the coefficient of the next higher term is decreased by one: G4(5) = 2*6^2 + 2*6^1 - 1 = 83 = 2*6^2 + 1*6^1 + 5. However, the new units digit becomes one less than the base, namely 5, so it takes more steps for this to reach zero than previously. After another five steps, we arrive at G4(10) = 2*11^2 + 1*11 = 253. When changing to base 12 at the next step, we obtain G4(11) = 2*12^2 + 11 = 299. The units digit again decreases for the next 11 steps, until G4(22) = 2*23^2 = 1058.
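
The rule "rewrite in hereditary base b, change every b to b + 1, subtract 1" can also be carried out directly by a program. Here is a minimal Python sketch (my own illustration) that reproduces the early terms computed above:

    def bump_base(n, b):
        """Rewrite n in hereditary base b, then evaluate that expression with base b + 1."""
        if n == 0:
            return 0
        total, power = 0, 0
        while n > 0:
            n, digit = divmod(n, b)
            if digit:
                total += digit * (b + 1) ** bump_base(power, b)   # exponents are rewritten too
            power += 1
        return total

    def goodstein(m, steps):
        """Return the first `steps` terms of the Goodstein sequence starting at m."""
        terms, value = [], m
        for base in range(2, 2 + steps):
            terms.append(value)
            if value == 0:
                break
            value = bump_base(value, base) - 1
        return terms

    print(goodstein(3, 6))   # [3, 3, 3, 2, 1, 0]
    print(goodstein(4, 6))   # [4, 26, 41, 60, 83, 109]

Needless to say, this brute-force approach is only useful for the first handful of terms; the numbers involved soon become far too large to evaluate directly.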

The next step starts to indicate why Goodstein sequences can increase for so long: G4(23) = 2*24^2 - 1 = 1151 = 1*24^2 + 23*24^1 + 23. Since the base is 24, we get two new coefficients of 23. Each time the units digit reaches zero, the value at which it has to start the next time doubles. The square term in the base representation does not vanish until the base reaches 402653184. And at this point the sequence has barely begun. The largest value it reaches is 3*2^402653210 - 1 at base 3*2^402653209, after which the sequence remains stable for a while before finally declining to zero. This maximum value is so astronomically large that if the digits of the number were printed at a typical font size, front and back, it would fill a stack of paper over 10 feet tall! And this is just G4. Goodstein sequences with higher initial values increase much, much faster.

If we start with 18, for instance, since 18 = 1*2^(1*2^(1*2^1)) + 1*2^1, replacing all the 2's with 3's gives G18(2) = 1*3^(1*3^(1*3^1)) + 1*3^1 - 1 = 7625597484989. The third term is G18(3) = 1*4^(1*4^(1*4^1)) + 2 - 1 ~ 10^154. The values this sequence reaches quickly become difficult to even write down. However, Reuben Goodstein himself, after whom the sequences are named, proved in 1944 a statement that became known as Goodstein's Theorem. His remarkable result showed that no matter how incalculably large the sequences become, they always terminate at 0. That is, after some finite, though possibly immense, series of steps, each sequence stops increasing eventually and decreases to 0.

The theorem's proof has significance beyond demonstrating this surprising fact about Goodstein sequences. For more, see the next post.

Source: https://www.jstor.org/stable/2268019, http://mathworld.wolfram.com/GoodsteinSequence.html

Monday, February 12, 2018

Black Holes and Information

Black holes, with their extreme gravity and ability to profoundly warp space and time, are some of the most interesting objects in the universe. However, in at least one precisely defined way, they are also the least interesting.

According to general relativity, black holes are nearly featureless. Specifically, there is a result known as the "no-hair theorem" that states that stationary black holes have exactly three features that are externally observable: their mass, their electric charge, and their angular momentum (direction and magnitude of spin). There are no other attributes that distinguish them (these additional properties would be the "hair"). It follows that if two black holes are exactly identical in mass, charge, and angular momentum, there is no way, even in principle, to tell them apart from the outside.

This in and of itself is not a problem. As usual, problems arise when the principles of quantum mechanics are brought to bear in circumstances where both gravity and quantum phenomena play a large role. At the heart of the formalism of quantum mechanics is the Schrödinger equation, which governs the time-evolution of a system (at least between measurements). Fundamentally, the evolution may be computed both forwards and backwards in time. Therefore, the mathematical formalism of quantum mechanics holds that information about a physical system cannot be "lost"; that is, we may always deduce what happened in the past from the present. This argument does not take the measurement process into account, but it is believed that these processes do not destroy information either. Black holes provide some problems for this paradigm.

At first glance, it may seem that information is lost all the time. If a book is burned, for example, everything that was written on its pages is beyond our ability to reconstruct. However, in principle, some omniscient being could look at the state of every particle of the burnt book and surrounding system and deduce how they must have been arranged. As a result, the omniscient being could say what was written in the book. The situation is rather different for black holes. If a book falls into a black hole, outside observers cannot recover the text on its pages, but this poses no problem for our omniscient being: complete knowledge of the state of all particles in the universe of course includes those in the interiors of black holes as well as the exteriors. The book may be beyond our reach, but its information is still conserved in the black hole interior.

The real problem became evident in 1974, when physicist Stephen Hawking argued for the existence of what is now known as Hawking radiation. This quantum mechanism allows black holes to shed mass over time, requiring a modification to the conventional wisdom that nothing ever escapes black holes.



The principles of quantum mechanics dictate that the "vacuum" of space is not truly empty. Transient so-called "virtual" particles may spring in and out of existence. Pairs of such particles may emerge from the vacuum (a pair with opposite charges, etc. is required to preserve conservation laws) for a very short time; due to the uncertainty principle of quantum mechanics, short-lived fluctuations in energy that would result from the creation of particles do not violate energy conservation. In the presence of very strong gravitational fields, such as those around a black hole, the resulting pairs of particles sometimes do not come back together and annihilate each other (as in the closed virtual pairs above). Instead, the pairs "break" and become real particles, taking with them some of the black hole's gravitational energy. When this occurs on the event horizon, one particle may form just outside and the other just inside, so that the one on the outside escapes to space. This particle emission is Hawking Radiation.

Theoretically, therefore, black holes have a way of shedding mass (through radiation) over time. Eventually, they completely "evaporate" into nothing! This process is extremely slow: a black hole resulting from the collapse of a star would take far longer than the current age of the Universe (by many orders of magnitude) to evaporate. Larger ones take still longer. Nevertheless, a theoretical puzzle remains: if the black hole evaporates and disappears, where did its stored information go? This is known as the black hole information paradox. The only particles actually emitted from the horizon were spontaneously produced from the vacuum, so it is not obvious how these could encode information. Alternatively, the information could all be released in some way at the moment the black hole evaporates. This runs into another problem, known as the Bekenstein bound.

The Bekenstein bound, named after physicist Jacob Bekenstein, is an upper limit on the amount of information that may be stored in a finite volume using finite energy. To see why this bound arises, consider a physical system as a rudimentary "computer" that stores binary information (i.e. strings of 1's and 0's). In order to store a five-digit string such as 10011, there need to be five "switches," each of which has an "up" position for 1 and a "down" position for 0. Considering all possible binary strings, there are therefore 2^5 = 32 different physical states (positions of switches) for our five-digit string. This is a crude analogy, but it captures the basic gist: the Bekenstein bound comes about because a physical system of a certain size and energy can only occupy so many physical states, for quantum mechanical reasons. This bound is enormous; every rearrangement of atoms in the system, for example, would count as a state. Nevertheless, it is finite.

The mathematical statement of the bound gives the maximum number of bits, or the length of the longest binary sequence, that a physical system of mass m, expressed as a number of kilograms, and radius R, a number of meters, could store. It is I ≤ 2.5769*10^43 mR.
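
As a quick sense of scale, here is a short numerical sketch of the bound just quoted (my own example; the mass and radius are arbitrary choices, not from the post):

    BEKENSTEIN_COEFF = 2.5769e43   # maximum number of bits per kilogram-meter

    def max_bits(mass_kg, radius_m):
        """Upper limit on the bits storable by a system of the given mass and radius."""
        return BEKENSTEIN_COEFF * mass_kg * radius_m

    # For example, a 1 kg object with a 10 cm radius:
    print(f"{max_bits(1.0, 0.1):.3e}")   # about 2.577e+42 bits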

This is far, far greater than what any existing or foreseeable computer is capable of storing, and is therefore not relevant to current technology. However, it matters for black holes: if a black hole holds its information until the moment of evaporation, it will have shrunk to a minuscule size while still retaining all the information it had at its largest. This hypothesis for addressing the black hole information paradox seems at odds with the Bekenstein bound.

In summary, there are many possible avenues for study in resolving the black hole information paradox, nearly all of which require the sacrifice of at least one physical principle. Perhaps information is not preserved over time, due to the "collapse" of the quantum wavefunction that occurs with measurement. Perhaps there is a way for Hawking radiation to carry information. Or possibly, there is a way around the Bekenstein bound for evaporating black holes. These possibilities, as well as more exotic ones, are current areas of study. Resolving the apparent paradoxes that arise in the most extreme of environments, where quantum mechanics and relativity collide, would greatly advance our understanding of the universe.

Sources: https://physics.aps.org/articles/v9/62, https://arxiv.org/pdf/quant-ph/0508041.pdf, http://kiso.phys.se.tmu.ac.jp/thesis/m.h.kuwabara.pdf, https://plus.maths.org/content/bekenstein

Monday, January 22, 2018

Neutrinos and Their Detection 2

This is the second part of a two part post. For the first part, see here.

The discovery of neutrinos led to a rather startling realization concerning the omnipresence of these particles. Scientists have known since the early 20th century that stars such as the Sun generate energy through nuclear fusion, especially of hydrogen into helium. In addition to producing radiation that eventually leads to what we see as sunlight, every one of these reactions releases neutrinos. As a result, the Earth is continually bathed in a stream of neutrinos: every second, billions of neutrinos pass through every square centimeter of the Earth's surface. The vast, vast majority of these pass through the planet unimpeded and resume their course through space, just as discussed in the previous post. As we will see, studying the properties of these solar neutrinos later led to a revolutionary discovery.



In 1967, an experiment began that had much in common with many of the neutrino experiments to come. Known as the Homestake experiment after its location, the Homestake Gold Mine in South Dakota, the main apparatus of the experiment was a 100,000 gallon tank of perchloroethylene (a common cleaning fluid) located deep underground, nearly a mile below the Earth's surface. The purpose of holding the experiment underground was to minimize the influence of cosmic rays, which would react with the perchloroethylene and produce experimental noise. Cosmic rays do not penetrate deep underground, however, while neutrinos do. The immense volume of liquid was necessary to obtain statistically significant data from the small rate of neutrino interactions. The number of argon atoms produced in the tank (an incoming neutrino occasionally converts a chlorine atom in the fluid into an argon atom) was measured to determine how many reactions were occurring.

Simultaneously, physicists made theoretical calculations using knowledge of the Sun's composition, the process of nucleosynthesis, the Earth's distance from the Sun, and the size of the detector to estimate what the rate of interactions should have been. However, these estimates were not consistent with the data collected from the experiment. Generally, the theoretical estimates were around three times as large as the actual results. Two-thirds of the expected reactions were missing! This disagreement became known as the "solar neutrino problem."

The models of the Sun were not at fault. In fact, the cause of the problem was an incorrect assumption in the otherwise quite powerful Standard Model of particle physics: that neutrinos are massless. As far back as 1957, Italian physicist Bruno Pontecorvo had considered the implications of neutrinos having mass.



He and others realized that neutrinos with mass would undergo what is known as neutrino oscillation when traveling through space. For example, an electron neutrino emitted from nuclear fusion would become a "mix" of all three flavors of neutrinos: electron, muon, and tau. When a solar neutrino reaches Earth and interacts with matter, it only has roughly a 1 in 3 chance of "deciding" to be an electron neutrino. This would explain the observed missing neutrinos, since the Homestake detector was only sensitive to electron neutrinos.
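
For a quantitative sense of what "oscillation" means, here is a Python sketch of the standard two-flavor approximation to the oscillation probability (a simplification of the full three-flavor mixing described above, and it ignores matter effects inside the Sun; the parameter values below are round illustrative numbers, not measurements):

    import math

    def survival_probability(theta, delta_m2_ev2, length_km, energy_gev):
        """Two-flavor vacuum formula: probability that a neutrino produced in one
        flavor is still detected in that flavor after traveling length_km."""
        return 1.0 - (math.sin(2 * theta) ** 2
                      * math.sin(1.27 * delta_m2_ev2 * length_km / energy_gev) ** 2)

    # Near-maximal mixing over a few-hundred-kilometer baseline:
    print(survival_probability(theta=math.pi / 4, delta_m2_ev2=2.4e-3,
                               length_km=300, energy_gev=0.6))
    # prints a value close to 0: nearly every neutrino has changed flavor by then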

For the remainder of the 20th century, several more experiments were performed to investigate whether neutrino oscillation was in fact the solution to the solar neutrino problem. One experiment that was crucial in conclusively settling the matter was Super-Kamiokande, a neutrino observatory located in Japan. Like the Homestake experiment, it was located deep underground in a mine and consisted of a large volume of liquid (in this case, water).



When neutrinos interact with the water molecules in the detector, charged particles are produced that propagate through the chamber. These release Cherenkov radiation, which is picked up, amplified, and recorded by the photomultiplier tubes that surround the water tank on every side. The large number of photomultipliers allows a more detailed analysis of this radiation, yielding the energy and direction of origin for each neutrino interaction. It was this added precision that helped to resolve the solar neutrino problem: neutrinos indeed have mass and undergo oscillation. This discovery led to Japanese physicist Takaaki Kajita (who worked on the Super-Kamiokande detector as well as its predecessor, the Kamiokande detector) sharing the 2015 Nobel Prize in Physics.

The exact masses of the different flavors of neutrinos are not yet known, nor do we completely understand why they have mass. However, despite the mysteries of particle physics that remain, further applications of neutrino detection continue in a different field: astronomy. The use of neutrinos to observe extraterrestrial objects is known as neutrino astronomy. In theory, if one could accurately measure the direction from which every neutrino arrives at Earth, the result would be an "image" of the sky highlighting neutrino sources. In reality, the scattering that occurs within detectors such as Super-Kamiokande when incoming particles hit and change direction limits the angular resolution, and so few interactions occur that there are not enough events to construct such an image. In fact, only two extraterrestrial objects have ever been detected through neutrino emissions: the Sun, and a nearby supernova, known as SN 1987A after the year in which it took place. Theoretical calculations indicate that sufficiently bright supernovae may be located with reasonable accuracy using neutrino detectors in the future.



There is one major advantage to using neutrinos as opposed to light in making observations: neutrinos pass through nearly all matter unimpeded. The above discussion indicated that the Sun is a neutrino source. This is true, but not fully precise; the solar core is the source of the neutrinos, as it is where fusion occurs, and its radius is only about a quarter of the Sun's. There is no way to see the light emanating from the core because it interacts with other solar particles. However, we can see the core directly through neutrino imaging. In fact, the data from the Super-Kamiokande experiment should be enough to approximate the region in which certain fusion reactions take place. Future detectors could tell us even more about the Sun's interior.

Neutrino astronomy is still a nascent field and we do not yet know its full potential. Further understanding and detection of neutrinos will tell us more about the fundamental building blocks of matter, allow us to peer inside our own Sun, and measure distant supernovae.

Sources: http://www.sns.ias.edu/~jnb/SNviewgraphs/snviewgraphs.html, https://arxiv.org/pdf/hep-ph/0410090v1.pdf, http://slideplayer.com/slide/776551/, https://www.bnl.gov/bnlweb/raydavis/research.htm, https://arxiv.org/pdf/hep-ph/0202058v3.pdf, https://j-parc.jp/Neutrino/en/intro-t2kexp.html, https://arxiv.org/pdf/1010.0118v3.pdf, https://www.scientificamerican.com/article/through-neutrino-eyes/, https://arxiv.org/pdf/astro-ph/9811350v1.pdf, https://arxiv.org/pdf/1606.02558.pdf

Monday, January 1, 2018

Neutrinos and Their Detection

Neutrinos are a type of subatomic particle known both for their ubiquity and their disinclination to interact with other forms of matter. They have zero electric charge and very little mass even compared to other fundamental particles (though not zero; more on this later), so they are unaffected by electromagnetic forces and only slightly affected by gravity.



Since neutrinos are so elusive, it is not surprising that their existence was first surmised indirectly. In 1930, while studying a type of radioactive decay known as beta decay, physicist Wolfgang Pauli noticed a discrepancy. Through beta decay (shown above), a neutron is converted into a proton. This is a common process by which unstable atomic nuclei transmute into more stable ones. It was known that an electron was also released in this process. However, Pauli found that this left some momentum unaccounted for. As a result, he postulated the existence of a small, neutral particle (these properties eventually led to the name "neutrino"). The type emitted in this sort of decay is now known as an electron antineutrino (all the types will be enumerated below).

However, neutrinos remained speculative for some decades before a direct detection occurred in 1956 in the Cowan-Reines Neutrino Experiment, named after physicists Clyde Cowan and Frederick Reines.



The experiment relied upon the fact that nuclear reactors were expected to release a large flux of electron antineutrinos during their operation, providing a concentrated source with which to experiment. The main apparatus of the experiment was a volume of water that electron antineutrinos emerging from the reactor would pass through. Occasionally, one would interact with a proton in the tank, producing a neutron and a positron (or anti-electron, denoted e+) through the reaction shown on the bottom left. This positron would quickly encounter an ordinary electron and the two would mutually annihilate to form gamma rays (γ). These gamma rays would then be picked up by scintillators around the water tanks. To increase the certainty that these gamma ray signatures in fact came from neutrinos, the experimenters added a second layer of detection by dissolving the chemical cadmium chloride (CdCl2) in the water. The addition of a neutron (the other product of the initial reaction) to the common isotope Cd-108 creates an unstable state of Cd-109 which releases a gamma ray after a period of a handful of microseconds. Thus, the detection of two gamma rays simultaneously and then a third after a small delay would definitively indicate a neutrino interaction. The experiment was very successful and the rate of interactions, about three per hour, matched the theoretical prediction well. The neutrino had been discovered.

The Standard Model of particle physics predicted the existence of three "generations" of neutrinos corresponding to three types of particles called leptons.



The above diagram shows the three types of leptons and their corresponding neutrinos. In addition to this, every particle type has a corresponding antiparticle which in a way has the "opposite" properties (though some properties, such as mass, remain the same). The electron antineutrino discussed above is simply the antiparticle corresponding to the electron neutrino, for example. The discoveries of the others occurred at particle accelerators, where concentrated beams could be produced: the muon neutrino in 1962, and the tau neutrino in 2000. These results completed the expected roster of neutrino types under the Standard Model. In its original form, though, the Standard Model predicted that all neutrinos would have exactly zero mass. Note that this hypothesis (though later proved incorrect) is not disproven by the fact that neutrinos account for the "missing momentum" Pauli originally identified; massless particles, such as photons (particles of light), can still carry momentum and energy.

All of the neutrino physics described so far concerns artificially produced particles. However, these discoveries were only the beginning. Countless neutrinos also originate in the cosmos, motivating the area of neutrino astronomy. For more on this field and its value to both astronomy and particle physics, see the next post (coming January 22).

Sources: http://www.astro.wisc.edu/~larson/Webpage/neutrinos.html, http://hyperphysics.phy-astr.gsu.edu/hbase/particles/cowan.html, https://perimeterinstitute.ca/files/page/attachments/Elementary_Particles_Periodic_Table_large.jpg, http://www.scienceinschool.org/sites/default/files/articleContentImages/19/neutrinos/issue19neutrinos10_xl.jpg, http://www.fnal.gov/pub/presspass/press_releases/donut.html

Wednesday, December 20, 2017

2017 Season Summary

The 2017 Atlantic hurricane season had above-average activity, with a total of

18 cyclones attaining tropical depression status,
17 cyclones attaining tropical storm status,
10 cyclones attaining hurricane status, and
6 cyclones attaining major hurricane status.

Before the beginning of the season, I predicted that there would be

15 cyclones attaining tropical depression status,
15 cyclones attaining tropical storm status,
6 cyclones attaining hurricane status, and
3 cyclones attaining major hurricane status.

The average numbers of named storms, hurricanes, and major hurricanes for an Atlantic hurricane season (over the 30-year period 1981-2010) are 12.1, 6.4, and 2.7, respectively. The 2017 season was well above average in all categories, especially hurricanes and major hurricanes. In addition, there were several intense and long-lived hurricanes, inflating the ACE (accumulated cyclone energy) index to 223. This value, which takes into account the number, duration, and intensity of tropical cyclones, was the highest since 2005. 2017 was also the first year on record to have three storms exceeding 40 ACE units: Hurricane Jose, with 42, Hurricane Maria, with 45, and Hurricane Irma, with 67.
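
For reference, the ACE index follows a simple recipe: square each cyclone's maximum sustained winds (in knots) at every six-hour advisory while the storm is at tropical storm strength or higher, sum the squares, and scale by 10^-4. A short Python sketch (the wind record below is made up purely for illustration):

    def ace(six_hourly_winds_kt):
        """Accumulated cyclone energy contributed by one storm, in the usual units."""
        return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt if v >= 35)

    # A hypothetical storm that peaks at 115 kt (roughly 130 mph):
    sample_winds = [35, 45, 60, 80, 100, 115, 100, 80, 60, 45]
    print(f"{ace(sample_winds):.2f}")   # 5.85 units for this made-up storm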

ENSO (the El Niño-Southern Oscillation), a variation in the ocean temperature anomalies of the tropical Pacific, often plays a role in Atlantic hurricane development. At the beginning of the 2017 season, these temperatures were predicted to rise, signaling a weak El Niño event and suppressing hurricane activity. However, this event did not materialize. Though anomalies did rise briefly in the spring, they returned to neutral and even negative by the early fall, when hurricane season peaks. This contributed to the extremely active September. In addition, conditions were more favorable for development in the central Atlantic than they had been for several years, allowing the formation of long-track major hurricanes. Due to these factors, my predictions significantly underestimated the season's extreme activity.

The 2017 Atlantic hurricane season was the costliest ever recorded, with Hurricanes Harvey, Irma, and Maria contributing the lion's share to this total. Among the areas most affected were southeastern Texas (by Harvey), the Leeward Islands (from Irma and Maria), and Puerto Rico and the Virgin Islands (from Maria). Some other notable facts and records for the 2017 season include:
  • Tropical Storm Arlene formed on April 20, one of only a small handful of April storms; it also had the lowest pressure ever recorded for an Atlantic tropical cyclone in April
  • The short-lived Tropical Storm Bret formed off the coast of South America and made landfall near the northern tip of Venezuela, becoming the southernmost forming June Atlantic cyclone since 1933
  • The remnants of Hurricane Franklin regenerated in the eastern Pacific after crossing Mexico and received a new name: Jova
  • Hurricane Harvey was the first major hurricane to make landfall in the U.S. since 2005, and the strongest to do so in Texas since 1961; the peak rainfall accumulation of 51.88" in Cedar Bayou, Texas was the largest tropical cyclone rain total ever for the continental U.S.
  • Hurricane Irma spent a total of 3.25 days as a category 5 hurricane, the most in the Atlantic since 1932, and maintained incredible 185 mph winds for 37 hours, the longest any tropical cyclone has maintained that intensity on record worldwide
  • When Hurricanes Irma, Jose, and Katia were all at category 2 strength or above on September 8, it marked only the second such occurrence since 1893
  • Hurricane Maria reached a minimum pressure of 908 mb, then the tenth lowest ever for an Atlantic hurricane, and the lowest since Dean in 2007
  • Becoming a major hurricane near the Azores Islands, Hurricane Ophelia was the easternmost major hurricane ever to form in the Atlantic
  • All ten named storms from Hurricane Franklin to Ophelia became hurricanes, the first time ten consecutive names have done so in the Atlantic since 1893


Overall, the 2017 Atlantic hurricane season was exceptionally active and damaging, especially for parts of the Caribbean.

Sources: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf

Tuesday, November 7, 2017

Tropical Storm Rina (2017)

Storm Active: November 6-9

On November 3, a weak area of low pressure developed in the central tropical Atlantic, well away from any land areas. It moved slowly north and north-northeast over the following days and became better organized on November 5. Early in the morning on the 6th, the disturbance was organized enough to be classified as Tropical Depression Nineteen. The system was moving over marginal sea surface temperatures in an area of shear that was not too high, so modest strengthening occurred over the next day and the depression became Tropical Storm Rina overnight.

Rina began to accelerate northward on the 7th, passing the latitude of Bermuda almost 1000 miles to the east. Though sea surface temperatures were declining, the system's maximum winds increased somewhat as it took on some subtropical characteristics. Rina reached its peak intensity of 60 mph winds and a pressure of 997 mb on November 8. Later that day it turned toward the north-northeast and began weakening as it transitioned to an extratropical system; it completed this transition during the morning of November 9. The system then turned eastward, eventually impacting the UK as a weak low before dissipation.



This image shows Tropical Storm Rina shortly after formation.


Rina did not affect any land areas during its lifetime.

Sunday, October 29, 2017

Tropical Storm Philippe (2017)

Storm Active: October 28-29

On October 23, a broad area of low pressure formed in the southwestern Caribbean Sea. Since this is a favorable area for late-season development, it was monitored closely. The broadness of the low made organization quite slow, despite plenty of moist air and fairly favorable atmospheric conditions. In addition, the circulation spent the next few days in close proximity to the coast of Nicaragua, where it dropped heavy rains. As a result, it was not until October 28 that the disturbance became Tropical Depression Eighteen. By the time it formed, the cyclone was already accelerating toward the north and northeast under the influence of a trough over the United States. Conditions were still favorable though, and the system strengthened into Tropical Storm Philippe as it passed over western Cuba. The rain bands of Philippe extended well to the north and east of the center, so Cuba and Florida had already been experiencing heavy rains. Early on October 29, the storm crossed south Florida and emerged into the Atlantic.

As Philippe approached the cold front to its north, upper-level winds increased to enormous values. The system quickly became elongated from north to south and dissipated during that afternoon before its remnants merged with a developing extratropical system off the coast of the Carolinas. Tropical moisture from Philippe contributed to an already powerful developing nor'easter, enhancing rainfall over many of the northeast and mid-Atlantic states. The storm ultimately brought heavy snowfall to parts of eastern Canada before dissipating.



Philippe was a disorganized but large tropical storm that brought heavy rainfall to Cuba and Florida.



While Philippe's time as a tropical cyclone was short-lived, it contributed to a large storm that affected the northeast U.S.

Monday, October 9, 2017

Hurricane Ophelia (2017)

Storm Active: October 9-15

Around October 6, an area of low pressure began to form along a stationary frontal boundary located over the eastern Atlantic. The next day, the system began to separate from the remainder of the frontal boundary, although it still displayed a long, curved, front-like band of convection emanating from the center. Slowly, it developed some subtropical characteristics as it drifted in the northeast Atlantic, moving little. By October 8, the low was on the verge of tropical or subtropical cyclone status and satellite data indicated gale-force winds near the center. Overnight, a region of shower activity persisted just east of the center. Available information pointed to winds just below tropical storm strength, so the system was designated Tropical Depression Seventeen. Banding features started to appear during the morning of the 9th, and the depression was upgraded to Tropical Storm Ophelia.

Initially, Ophelia was moving slowly to the north and northeast in a region of weak steering currents. Over the next day, however, a mid-level ridge built in northwest of the cyclone and it turned toward the southeast. Meanwhile, shear was diminishing and convection was able to wrap around the center, which allowed for some strengthening. The largest inhibitor to development was some dry air inside the circulation. Interaction with this dry air caused intensity fluctuations on October 10, but deeper convection completely enclosed the center that night. At the same time, an eye feature formed, and Ophelia strengthened more rapidly. During the afternoon of October 11, Ophelia reached hurricane strength, becoming the tenth consecutive tropical cyclone of the 2017 season to develop into a hurricane.

Steering currents collapsed later that day as well, leaving the system to drift slowly eastward overnight. The appearance of deeper convection near the center suggested that additional strengthening had occurred. Sea surface temperatures remained just lukewarm, but unusually cool upper atmospheric temperatures created a steep enough gradient to support intensification. On October 12, Ophelia reached category 2 status, an unprecedented achievement for a hurricane so far northeast so late in the hurricane season. The cyclone began to gradually accelerate east-northeast overnight, reaching an intensity of 105 mph winds and a pressure of 970 mb. The eye clouded over briefly the morning of the 13th, but this was a short-lived trend. Later that day the eye cleared out and became even better defined, with deep convection completely surrounding the center. As a result, Ophelia maintained its remarkable category 2 status even farther north and east.

The system was not finished, however. A final burst of intensification on October 14 brought Ophelia to major hurricane strength, and it reached a peak intensity of 115 mph winds and a pressure of 960 mb. In doing so, it became the easternmost major hurricane ever recorded in the Atlantic. The gap between it and its predecessors was even more impressive in its latitude range, where it was 900 miles farther east than any previous major hurricane. Finally, early on October 15, much colder waters and higher shear began to weaken Ophelia and induce extratropical transition. Later that day, the storm became extratropical as it sped toward Ireland. The center made landfall in southwest Ireland during the morning of October 16, bringing damaging hurricane-force winds. Since the system was moving at over 40 mph, it quickly passed over Ireland and the UK. The post-tropical cyclone brought gale-force winds all the way to Scandinavia before finally dissipating.



Hurricane Ophelia is shown above as a major hurricane near the Azores, an unprecedented event in Atlantic hurricane records dating back to 1851.



Ophelia was no longer a tropical cyclone when it reached Europe (triangle points) but it still brought hurricane-force winds to many parts of Ireland.