Wednesday, May 16, 2018

Professor Quibb's Picks – 2018

My personal prediction for the 2018 North Atlantic hurricane season (written May 16, 2018) is as follows:

18 cyclones attaining tropical depression status,
16 cyclones attaining tropical storm status,
8 cyclones attaining hurricane status, and
4 cyclones attaining major hurricane status.

In the wake of the especially devastating 2017 season, it is difficult to predict with any certainty the outcomes for this year. Once again, models indicate that the El Niño Southern Oscillation index (or ENSO index) will be near zero or slightly positive during this hurricane season. This index, a quantitative measure of sea surface temperature anomalies in the tropical Pacific Ocean, has some ability to predict Atlantic hurricane activity. A positive index indicates an El Niño event, which tends to correlate with higher wind shear across the Atlantic basin and less tropical cyclone development. This effect is especially pronounced in the Gulf of Mexico and Caribbean Sea. The image below shows the ENSO forecast for this season (image from the International Research Institute for Climate and Society):


However, last year's forecast was qualitatively similar, yet the index ended up dipping back negative, leaving very favorable conditions for hurricane formation. Though consideration of the ENSO index alone would lead to the prediction of an average hurricane season, there is significant uncertainty. Overall, I consider the ENSO to be mainly a neutral factor this year.

Present ocean temperatures in the Atlantic are slightly above average in the Gulf of Mexico and Caribbean, and significantly above average in the subtropical Atlantic and near the U.S. east coast. However, there is a large area of below average temperatures in the tropical Atlantic which is forecast by long-term models to possibly persist for a few months. The tropical Atlantic has also been dry and stable, in contrast to elevated storm activity in the Caribbean and Gulf of Mexico. These trends also show some signs of persisting into the beginning of hurricane season. I therefore expect a slow start to the season in the main development region of the tropical Atlantic (extending from Africa to the Caribbean) and a corresponding lack of Cape Verde or long-track hurricanes, though these could appear more in late September and October. There is significant potential for formation in areas closer to land, so I expect some shorter lived hurricanes in the northern Caribbean/Gulf of Mexico and U.S. east coast regions.

My estimated risks for different parts of the Atlantic basin are as follows (with 1 indicating very low risk, 5 very high, and 3 average):

U.S. East Coast: 5

The jet stream over the U.S. has been weaker than usual so far this season, and the Bermuda high stronger. However, with a weak El Niño possibly developing, long hurricane tracks westward into the Gulf still seem unlikely. The east coast, in contrast, is at greater risk: ocean temperatures offshore are anomalously warm and the region will be very moist, suggesting a fairly high probability of tropical cyclone impacts.

Yucatan Peninsula and Central America: 3
The western Caribbean shows some signs of being a fertile area for cyclogenesis, but with prevailing upper-level patterns as they are, it is difficult to see strong systems taking due westward tracks into Central America. Compared to the last few years, strong hurricanes are less of a threat, though the potential for flooding rains may be equal or greater.

Caribbean Islands: 2
As discussed above, the main development region may remain quiet for at least the first half of hurricane season. This would insulate the Caribbean islands from the approach of Cape Verde hurricanes to the west, but does not preclude development occurring locally. Nevertheless, it is somewhat more likely this year that the islands will receive a break from intense hurricane landfalls, especially the easternmost islands.

Gulf of Mexico: 3
Factors in the Gulf point in different directions. Ocean waters are warm and will likely continue to be so, particularly in eddies originating in the northern Caribbean (which also happens to be a likely source of Gulf hurricanes). On the other side, if an El Niño does develop, the Gulf of Mexico is where the suppression of hurricane activity would be most felt. Putting this together suggests a near-average risk this year.

Overall, the 2018 season is expected to be a bit above average; it should not be a repeat of the devastating 2017 season, but many areas such as the U.S. east coast may still be at high risk. Further, this is just an informal forecast and uncertainty in the outcome remains significant. Everyone in hurricane-prone areas should still take due precautions as hurricane season approaches. Dangerous storms may still occur even in overall quiet seasons.

Sources: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf, https://www.tropicaltidbits.com/analysis/models/?model=cfs-avg, https://ocean.weather.gov/

Tuesday, May 15, 2018

Hurricane Names List – 2018

The name list for tropical cyclones forming in the North Atlantic basin for the year 2018 is as follows:

Alberto
Beryl
Chris
Debby
Ernesto
Florence
Gordon
Helene
Isaac
Joyce
Kirk
Leslie
Michael
Nadine
Oscar
Patty
Rafael
Sara
Tony
Valerie
William

This list is the same as the list for the 2012 season, with the exception of Sara, which replaced the retired name Sandy.

Monday, May 7, 2018

Goodstein's Theorem and Non-Standard Models of Arithmetic

This is the final post in a four-part series on logic and arithmetic, with a focus on Goodstein's Theorem. For the first post, see here.

In the previous post, Goodstein's Theorem, a statement about the properties of certain sequences of natural numbers, was proven using infinite ordinals. The use of a method "outside" arithmetic makes it plausible that this particular proof cannot be encoded in the language of Peano Arithmetic (PA), the formal logical system for discussing the natural numbers. In fact, a stronger statement is true: no proof of Goodstein's Theorem exists within PA at all, because the theorem cannot be deduced from the axioms of PA.

But how does one go about proving something unprovable? Certainly it is intractable to check every possible method, as the diversity of such attempts could be infinite. Mathematicians take a different approach, using tools from what is called model theory. In mathematical logic, a model of a collection of axioms is a specific structure within which the axioms (and all theorems derived from them) are interpreted to be true. Recall that the axioms of PA mentioned five specific objects that were assumed to be given from the start: a set N, a specific member 0, a function S from N to itself, and two binary operations on N, + and *. Of course, to actually do arithmetic we interpret N as the set of natural numbers, 0 as the number 0, S as the "successor" function taking in a number n and returning n + 1, and + and * as the usual addition and multiplication. Until we provide an interpretation specifying what these objects refer to, namely a model, they are just symbols! We may prove statements about them, such as the fact that S(0) and S(S(0)) are distinct members of N, but such a statement is just the end product of a series of formal deductive rules.

Any collection A = (N_A, 0_A, S_A, +_A, *_A) of a set N_A, a member 0_A of the set, a function S_A: N_A → N_A, and two binary operations +_A and *_A that satisfies the axioms is a model of PA. Of course, we know fairly well what we mean by "natural numbers", namely {0,1,2,...} with 0 the first element, S sending 0 to 1, 1 to 2, etc., and the usual addition and multiplication. The entire point of selecting axioms for PA is to study ℕ = (N,0,S,+,*), the standard natural numbers. A natural question (called the question of categoricity) arises: is the standard model the only type of model for PA, or are there others? The answer is no; there are other, non-standard models A that still satisfy every axiom of PA. These were first discovered by Norwegian mathematician Thoralf Skolem in 1934. To be clear, they are not the natural numbers, at least not as we intend them to be. Their existence exemplifies another limitation of first-order logic: axiom systems often fail to specify structures uniquely and hence fail to capture some features of the field to be studied.

Non-standard models often serve as an essential tool in independence proofs. First, we know from the previous post that the standard model ℕ of PA does satisfy Goodstein's Theorem (the standard model has the properties the natural numbers possess within the larger field of set theory, the methods of which were used in the proof). This means that the negation of Goodstein's Theorem cannot be a theorem of PA, since there is a model satisfying both the axioms and the theorem. If one could find a model of PA in which the negation of Goodstein's Theorem were true, then this would prove independence, because there would be models in which it is true and others in which it is false! Kirby and Paris used precisely this method in their 1982 proof of the result.

But what do non-standard models of natural numbers actually look like? First, we may infer what they have in common. PA axiom 1 guarantees the existence of a number 0. Axiom 2 gives it successors S(0), S(S(0)), etc. Axiom 4 says that S(n) = S(m) implies m = n. Therefore, all the successors generated from 0 are distinct from one another. This means that any model A has a set of natural numbers N_A containing the analogues of 0, 1, 2, and so on. The set of standard natural numbers N is thus contained in N_A for every A. The difference is that non-standard models have extra numbers!

At first brush, having additional "non-standard" numbers seems to contradict the Peano axioms, specifically the fifth, the axiom schema of induction. It states that if 0 has some property and any n having the property implies that n + 1 does as well, then all natural numbers have the property. The spirit of this axiom schema, if not the letter, is that beginning at 0 and knocking down the inductive dominoes will eventually reach every natural number. If we could choose the property to be "is in the set {0,1,2,...} (the standard natural numbers N)", then this would immediately rule out non-standard models: 0 is in this set, and for any n in the set, its successor is also standard, so all of N_A would be contained in {0,1,2,...} and hence we would have N_A = {0,1,2,...}. Unfortunately, it is impossible to define the set {0,1,2,...} inside the first-order logic formulation. It is also impossible to simply add an axiom "there are no other numbers besides 0, 1, 2, etc." for the same reason. Both approaches require infinitely long logical sentences to formulate, which are forbidden in the finitary system of first-order logic.



Though the axioms of PA cannot rule out non-standard natural numbers, they are forced by the axioms to satisfy some strange conditions. Any nonstandard number c must be greater than all standard numbers. Further, PA can prove that 0 is the only number without a successor, so a "predecessor" to c, which we may call c - 1, must exist. Similarly, c - 2, c - 3, etc. must exist, as must, of course, c + 1, c + 2, etc. These must all be new non-standard numbers. Therefore, the existence of one non-standard number guarantees the existence of a whole non-standard "copy" of the integers: {...,c - 2,c - 1,c,c + 1,c + 2,...}. However, it gets much, much worse. The operation of addition is part of Peano Arithmetic, so there must be a number c + c, that may be proven to be greater than all numbers c + 1, c + 2, and so on. From here, we get another new infinite collection of non-standards {...,c + c - 2,c + c - 1,c + c,c + c + 1,c + c + 2,...}. A similar story occurs for c + c + c = c*3 and larger numbers as well, but we can also go in reverse. One can prove in PA that every number is either even or odd; that is, for any n, there is an m satisfying either m + m = n (if n is even), or m + m + 1 = n (if n is odd). This theorem means that c is even or odd, so there must be a smaller non-standard d with d + d = c or d + d + 1 = c. This d has its own infinite set of non-standard neighbors. The reader may continue this type of exercise and eventually derive the type of picture illustrated above: any non-standard model of natural numbers must contain the standard numbers plus (at least) an infinite number of copies of the integers, ℤ, one for each member of the set of rational numbers, ℚ.

As strange as these models are, they cannot be ruled out in PA, nor is there a natural addition to the axioms that may do so. Rather than being just a defect of first-order logic however, non-standard models are a useful tool for examining the structure of different theories. Now that we have a non-standard model at our disposal, it seems reasonable that Goodstein's Theorem should fail for some non-standard models: "Goodstein sequences" beginning at non-standard natural numbers do not seem likely to terminate at zero. After all, they have infinitely many copies of the integers to move around in! These sequences often cannot be computed explicitly, but using other logical machinery, one can prove the fact that they do not necessarily terminate. This establishes the independence of the theorem from PA.

Goodstein sequences, interesting in their own right for their rapid growth, also offer a valuable perspective on Peano Arithmetic and its limitations. Questions of independence and non-standard models arise frequently in the foundations of mathematics, as we seek to define precisely the scope of our mathematical theories.

Sources: http://www.cs.tau.ac.il/~nachumd/term/Kirbyparis.pdf, http://blog.kleinproject.org/?p=674, http://www.ams.org/journals/proc/1983-087-04/S0002-9939-1983-0687646-0/S0002-9939-1983-0687646-0.pdf, http://settheory.net/model-theory/non-standard-arithmetic, http://www.columbia.edu/~hg17/nonstandard-02-16-04-cls.pdf, http://boolesrings.org/victoriagitman/files/2015/04/introToPAModels.pdf, http://lesswrong.com/lw/g0i/standard_and_nonstandard_numbers/

Monday, April 16, 2018

Proving Goodstein's Theorem and Transfinite Methods

This is the third part of a post series on Goodstein's theorem. For the first part, see here.

The previous post introduced the reader to Peano arithmetic (PA), the archetypical example of an axiomatic system introduced to standardize the foundations of mathematics. Despite having tremendous expressive power in formulating and proving theorems about natural numbers, the system is not without limitations. Gödel's Incompleteness Theorem guarantees the existence of statements out of the reach of formal proofs in PA. Goodstein's Theorem, which states that every Goodstein sequence terminates at 0, is an example. Further, it is a "natural" example in the sense that it was not contrived to demonstrate incompleteness. In fact, Goodstein proved the theorem in 1944, decades before Laurence Kirby and Jeff Paris discovered that it is independent of the axioms of PA in 1982.

To see why this independence holds, we consider the proof of Goodstein's Theorem. As mentioned, it is not provable in PA, so the proof makes use of tools outside of arithmetic: in particular, infinite ordinal numbers. A more thorough discussion of ordinal numbers may be found elsewhere on this blog. For our purposes, the key property of ordinal numbers is that they represent order types of well-ordered sets.

A well-ordered set is simply a set of elements together with an ordering relation (often called "less than" and denoted by "<") such that any two elements are comparable (one is less than, equal to, or greater than the other) and every non-empty subset has a least element. The set may be finite or infinite. Here are some examples:

  • The set of natural numbers itself, {0,1,2,3,...}, with the relation "less than" is well-ordered because every two elements are comparable and every non-empty subset has a smallest number
  • The set of natural numbers with the "greater than" relation is not well-ordered: with this relation, "minimal" elements are really largest elements, and the subset {3,4,5,...}, for example, has no greatest element
  • The set {A,H,M,R,Z} with the relation "comes before in the alphabet" is well-ordered because we can compare any two letters to see which comes first in the alphabet, and any subset has a first letter
  • The set of all integers {...-2,-1,0,1,2,...} is not well-ordered by either less than or greater than relations


The gist of the definition is that all set elements are listed in a particular order so that we can always tell which of a pair comes first, and that infinite ascending sequences are acceptable while infinite descending ones are not. To understand order type, we need a notion of when two well-ordered sets are "the same". For example, the set {1,2,3,4,5} with the less than relation and {A,H,M,R,Z} with the alphabet relation are quite similar. Using the one-to-one relabeling 1 → A, 2 → H, 3 → M, 4 → R, 5 → Z, we can move from one set to the other and preserve the ordering relation. That is, 1 < 2 in the first set and their images satisfy A < H in the second set, and so on. If there is a relabeling like the one above between two sets, they are said to be of the same order type.

The purpose of ordinal numbers is to enumerate all possible order types for well-ordered sets; there is one ordinal for each order type. To make thinking about ordinals simpler, we often say that an ordinal is a specific set of the given order type, a particularly nice set. Specifically, we choose the set of all smaller ordinals. Since the sets in the example of the previous paragraph have five elements, their order type is the ordinal 5 = {0,1,2,3,4} (the reader may wish to show that any well-ordered five-element set in fact has this order type). In fact, for finite sets, there is simply one ordinal for each possible size. For infinite sets, matters become more interesting.

The ordinal corresponding to the order type of the natural numbers is called ω. Using the canonical choice of representative set, ω = {0,1,2,3,...} (where we now view the elements as ordinals). The next ordinal is ω + 1, the order type of {0,1,2,3,...,ω} or in general the order type of a well-ordered set with an infinite ascending collection plus one element defined to be greater than all others. One can go on to define ω + n for any finite n and ω*2, the order type of {0,1,2,3,...,ω,ω + 1,ω + 2,...} (two infinite ascending chains stuck together). The precise details do not concern us here, but ω^2, ω^ω, ω^ω^ω, and so on may be defined as well. What matters is that these ordinals exist and that the set of all ordinals expressible with ordinary operations on ω (for example, (ω^ω)*4 + (ω^3)*2 + ω*5 + 7) is a well-ordered set. In fact, the set of such ordinals is itself a larger ordinal called ε_0.

Once these preliminaries are established, the proof of Goodstein's Theorem is rather simple, and even clearer when considered intuitively. For any Goodstein sequence, the members are represented in hereditary base-n notation at every step: the first member is put into base 2, the next base 3, and so on. The idea is to take each member of the sequence and replace the base with ω to obtain a sequence of ordinals. For example, the sequence G_4 generates a sequence of ordinals H_4 in the following way:

G_4(1) = 4 = 1*2^(1*2^1) → H_4(1) = ω^ω (the 2's are replaced with ω's),
G_4(2) = 26 = 2*3^2 + 2*3^1 + 2 → H_4(2) = (ω^2)*2 + ω*2 + 2 (3's are replaced with ω's),
G_4(3) = 41 = 2*4^2 + 2*4^1 + 1 → H_4(3) = (ω^2)*2 + ω*2 + 1 (4's are replaced by ω's),
and so on.

Note that the multiplication by coefficients has been moved to the other side of the ω's for technical reasons and that some of the 1's have been removed for clarity. One may proceed in this manner to get a sequence of ordinals, the key property of which is that the sequence is strictly decreasing. In the above example, ω^ω > (ω^2)*2 + ω*2 + 2 > (ω^2)*2 + ω*2 + 1, and this downward trend would continue if we were to list more terms. This is because the H sequences "forget" about the base: it is always replaced by ω. The only change is caused by the subtraction of 1 at each step, which slowly reduces the coefficients. Intuitively, this is the point of the proof: by forgetting about the base, we replace the extreme growth of Goodstein sequences with a gradual decline. The units digit of the H sequence decreases by 1 every step. When it reaches 0, on the next step the ω coefficient is reduced by 1 and the units digit is replaced by the current base minus 1 (the highest allowed coefficient). These may become quite large, but they always reach zero eventually. Reasoning this way, it is clear that Goodstein's Theorem should be true.
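
To see the replacement mechanically, here is a small Python sketch of my own (not part of the original post) that renders a hereditary base representation, stored as nested (coefficient, exponent) pairs, with the base replaced by ω:

def ordinal_string(terms):
    """Render a hereditary representation (a list of (coefficient, exponent) pairs,
    exponents themselves in the same nested form) with the base replaced by ω."""
    if not terms:
        return "0"
    parts = []
    for c, e in terms:
        exp = ordinal_string(e)
        if exp == "0":    # constant term
            parts.append(str(c))
        elif exp == "1":  # first power of the base
            parts.append(f"ω*{c}" if c > 1 else "ω")
        else:             # higher (possibly nested) power
            parts.append(f"(ω^{exp})*{c}" if c > 1 else f"ω^{exp}")
    return " + ".join(parts)

# 26 = 2*3^2 + 2*3^1 + 2 in hereditary base 3:
rep_26 = [(2, [(2, [])]), (2, [(1, [])]), (2, [])]
print(ordinal_string(rep_26))   # prints "(ω^2)*2 + ω*2 + 2", matching H_4(2) above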

In formal terms, the set of ordinals is well-ordered, so the set consisting of all members of an H sequence must have a minimal element, i.e., it cannot decrease forever. The only way that it can stop decreasing is if the G sequence stops, and Goodstein sequences only terminate at 0. Therefore, every Goodstein sequence terminates at 0 after a finite number of steps. We've proved Goodstein's Theorem!

Bringing in infinite ordinals to prove a statement about natural numbers is strange. So strange in fact that the argument is not formalizable in PA; there is simply no way to even define infinite ordinals in this language! This indicates why the given proof does not go through in PA, but does not settle the matter as to whether there is no possible proof of Goodstein's Theorem within PA. It leaves the possibility that there is a different clever approach that can succeed without infinite ordinals. A discussion of why this in fact does not occur may be found in the final post of this series (coming May 7).

Sources: http://www.cs.tau.ac.il/~nachumd/term/Kirbyparis.pdf, http://blog.kleinproject.org/?p=674, http://www.ams.org/journals/proc/1983-087-04/S0002-9939-1983-0687646-0/S0002-9939-1983-0687646-0.pdf

Monday, March 26, 2018

Goodstein's Theorem and Peano Arithmetic

This is the second part of a post series on Goodstein's theorem. For the first part, see here.

We saw last post that Goodstein sequences are certain sequences of positive integers defined using base representations. Despite their simple definition, they grow extraordinarily large. However, Goodstein's Theorem states that no matter how large the starting value is, the sequence will eventually terminate at 0 after some finite number of steps. Before discussing the proof to this remarkable theorem, we move in a completely different direction and define the axioms of Peano arithmetic.

Increasing standards of rigor were a hallmark of late 19th and early 20th century mathematics. With this movement came a need to precisely define the basic building blocks with which a given branch of mathematics worked. This was even true for simple arithmetic! By 1890, the mathematician Giuseppe Peano had published a formulation of the axioms (fundamental assumptions) of arithmetic, which are used nearly unchanged to this day. Written in plain English, the axioms state the following about the collection N of natural numbers (nonnegative integers), a function S known as the successor function, a function + known as addition, and a function * known as multiplication:

1) There exists a natural number called 0 in N
2) For every natural number n in N there is a successor S(n) in N (commonly written n + 1)
3) There is no natural number whose successor is 0
4) For any two natural numbers m and n in N, if S(m) = S(n), then m = n
5) For any natural number n, n + 0 = n
6) For any natural numbers n and m, n + S(m) = S(n + m)
7) For any natural number n, n*0 = 0
8) For any natural numbers n and m, n*S(m) = n*m + n
9) For any "well-behaved" property P, if P(0) holds and for each n in N, P(n) implies P(n + 1), then P is true for every natural number in N

The first two axioms simply say that you can start from 0 and count upwards forever through the natural numbers. The third says that there is no natural number below 0 (this is of course false for larger sets of numbers such as the integers, but Peano's axioms concern only the natural numbers). The fourth says that two different natural numbers cannot have the same successor; in other words, the successor function never "collides". The fifth through eighth axioms state the common properties of addition and multiplication. The ninth axiom is actually a large collection of axioms (an axiom schema) in disguise, called the "axiom schema of induction."



The idea of induction may be familiar to readers who recall their high school mathematics: one often wishes to prove that a given property holds for every natural number. To do so, it is sufficient to show that it is true in the "base case," that is, true for 0, and that the statement being true for a number implies that it is also true for the next. Then since the property is true for 0, it is true for 1. Since it is true for 1, it is true for 2, and so on. The figure above illustrates the idea of induction, where each "statement" is that the given property is true for some number. The final axiom simply codifies the fact that this type of reasoning works; we conclude that the property is true for all natural numbers. In this context, a "well-behaved" property is one expressible in the language of the variety of logic being used (first-order logic in this case).
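
For concreteness, each instance of the schema (one instance for every property φ expressible as a first-order formula, which is exactly the "well-behaved" requirement) can be written roughly as

(φ(0) ∧ ∀n (φ(n) → φ(S(n)))) → ∀n φ(n).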

Remarkably, the above assumptions are all that is needed to perform arithmetic. In principle, though formalizing complicated proofs would be quite lengthy, true statements such as "17 is prime" and "every number is the sum of four squares" become theorems in Peano arithmetic. All of these proofs would take the form of a chain of statements beginning from axioms and concluding with the desired result, such as "17 is prime," rendered appropriately in the formal mathematical language. The progression from each statement to the next would also follow a collection of well-defined deductive rules. In principle, almost all theorems concerning natural numbers could be proven this way, beginning from just a small collection of axioms! However, Peano arithmetic still runs afoul of a result that vexes many axiom systems of first-order logic: Gödel's Incompleteness Theorem.



Gödel's (First) Incompleteness Theorem, originally proven by the mathematician Kurt Gödel in 1931, dashed the hopes of those who imagined that formal logical systems would provide a complete description of mathematics. It states, informally, that for any consistent first-order logical system powerful enough to encode arithmetic (Peano arithmetic is of course such a system), there exist statements in the language of the system that are neither provable nor refutable from the axioms. Further, there are explicit sentences in the logical system that are true (under the intended interpretation of the theory; more on this later) but unprovable. The above diagram illustrates the situation: there will always be things we know to be true or false that are beyond the reach of the axioms to formally prove or refute.

Some questions may spring to mind at this unintuitive result. What is the distinction between "true" and "provable"? How do we define "true" in mathematics if not as the end result of a proof? What do these unprovable statements look like, and what do they say?

The answer to the first of these depends on there being something "we mean by" the term "natural numbers". In other words, there is an intended interpretation of what the natural numbers should be that the logical system fails to completely capture. Consequently, there are statements that we know to be true using methods outside the formal system but that are unprovable within it. Bringing in additional assumptions does not resolve the incompleteness problem, however. For each outside axiom added, the theorem guarantees the existence of a new unprovable statement (so long as the enlarged list of axioms can still be effectively described). And if the system ever does become complete through the addition of such axioms, it also becomes inconsistent, that is, able to prove a contradiction (and loses all validity as a mathematical system).

As for the final question, the first known unprovable statements were those constructed in the proof of Gödel's theorem; these are known as Gödel sentences. They are highly contrived for the purposes of the proof, however, and do not have any intuitive meaning. In the years following the original proof, the question remained whether any statement that "naturally" arises in the study of natural numbers would turn out to be neither provable nor refutable from the axioms of Peano arithmetic. Amazingly, such statements exist! In fact, a great example is Goodstein's Theorem. No proof exists, beginning from the Peano axioms, that has it as a conclusion. To read more about how it can be proven and why it is not a theorem of Peano arithmetic, see the next post.

Sources: https://www.cs.toronto.edu/~sacook/csc438h/notes/page96.pdf, https://plato.stanford.edu/entries/goedel-incompleteness/

Friday, March 9, 2018

Goodstein Sequences and Hereditary Base Notation

In mathematics, Goodstein sequences are certain sequences of natural numbers. Though they are fairly easy to define, their properties have important consequences in logic. Before investigating these, however, we give the definition. It depends on the concept of expressing numbers in different bases (well-known examples in addition to normal base-10 representations include binary, base 2, and hexadecimal, base 16). Recall that when writing a number, such as 4291, what we mean is 4 thousands plus 2 hundreds plus 9 tens, plus 1 one, alternatively expressed as

4291 = 4*10^3 + 2*10^2 + 9*10^1 + 1.

This decomposition uses 10 as a base. Note that the numbers multiplying the powers of 10 always vary between 0 and 9. Base 2, for example, could be used just as easily, with only digits 0 and 1 as coefficients. Expressing 4291 as powers of 2 yields

4291 = 1*4096 + 0*2048 + 0*1024 + 0*512 + 0*256 + 1*128 + 1*64 + 0*32 + 0*16 + 0*8 + 0*4 + 1*2 + 1*1
= 1*2^12 + 0*2^11 + 0*2^10 + 0*2^9 + 0*2^8 + 1*2^7 + 1*2^6 + 0*2^5 + 0*2^4 + 0*2^3 + 0*2^2 + 1*2^1 + 1.

Therefore, 4291 is typically expressed in binary as the sequence of coefficients 1000011000011. However, for our purposes, it is more convenient to explicitly express the powers of the base involved, although it will simplify matters to drop those terms with coefficient 0 since they have no contribution to the sum. The equation above then becomes

4291 = 1*2^12 + 1*2^7 + 1*2^6 + 1*2^1 + 1.

The system described above is known as ordinary base notation, but the definition of Goodstein sequences requires a slightly modified version, hereditary base notation. This involves taking the exponents themselves and subjecting them to the same base decomposition as the original number. Since 12 = 1*2^3 + 1*2^2, 7 = 1*2^2 + 1*2^1 + 1, and 6 = 1*2^2 + 1*2^1, the integer 4291 now becomes

4291 = 1*2^(1*2^3 + 1*2^2) + 1*2^(1*2^2 + 1*2^1 + 1) + 1*2^(1*2^2 + 1*2^1) + 1*2^1 + 1.

This expression is quite complicated, but the process is not quite finished yet! The exponents 2 and 3 within the exponents are not yet in base-2: 3 = 1*2^1 + 1 and 2 = 1*2^1. Making the necessary replacements finally gives 4291 in hereditary base-2 notation:

4291 = 1*2^(1*2^(1*2^1 + 1) + 1*2^(1*2^1)) + 1*2^(1*2^(1*2^1) + 1*2^1 + 1) + 1*2^(1*2^(1*2^1) + 1*2^1) + 1*2^1 + 1.

In the general case, there may be many iterations of this process, which motivates the name "hereditary"; a base-2 decomposition is applied to the original integer and then the exponents that result, and then their exponents, and so on. The end result has only 2's as bases of exponents and only 1's as coefficients. The interested reader can verify that this type of process may be repeated for any positive integer in any base (using as coefficients positive integers less than the base), and that for a fixed number and base, the representation thus obtained is unique. The stage is now set for the definition of Goodstein sequence.
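
To make the construction concrete, here is a short Python sketch of my own (the function names and representation are not from the original post) that computes hereditary base-b representations as nested lists of (coefficient, exponent) pairs and evaluates them back:

def hereditary(n, b):
    """Hereditary base-b representation of n as a list of (coefficient,
    exponent-representation) pairs, highest power first; exponents are
    themselves represented recursively in the same way."""
    terms = []
    power = 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            terms.append((digit, hereditary(power, b)))
        power += 1
    return list(reversed(terms))

def evaluate(terms, b):
    """Evaluate a hereditary representation back to an integer using base b."""
    return sum(c * b ** evaluate(e, b) for c, e in terms)

# The worked example from the text: 4291 in hereditary base-2 notation.
rep = hereditary(4291, 2)
assert evaluate(rep, 2) == 4291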

A Goodstein sequence is simply a sequence of nonnegative integers. We may choose any number 1, 2, 3,... to begin the sequence. Next, whatever this number is, we express it in hereditary base-2 notation, just as we did with the example 4291 above. To generate the next member of the sequence, simply change every 2 in the hereditary base-2 representation to a 3, and then subtract 1 from the resulting number. This is the second member of the sequence. After that, express this second number in hereditary base-3 notation, change the 3's to 4's, and subtract one to get the third, and so on. We denote the nth member of the Goodstein sequence beginning with m by G_m(n). The first few sequences G_m die out quickly: if the seed is 1 (whose hereditary base-2 representation is just 1), there are no 2's to change to 3's, so we simply subtract 1 to find G_1(2) = 0. If a sequence reaches 0, we end it there, so that the sequence

G_1 = {1,0}.

G_2 is scarcely more interesting: G_2(1) = 2 = 1*2^1, so changing the single 2 to a 3 and subtracting 1 yields G_2(2) = 1*3^1 - 1 = 2. Recall that coefficients 0-2 are allowed in hereditary base-3 notation, so 2 in this notation is simply 2. There are no 3's to change to 4's, so we subtract 1 to get G_2(3) = 1. There are no 4's to change to 5's, so G_2(4) = 0 and the sequence is finished:

G_2 = {2,2,1,0}.

Beginning with 3 leads to a nearly identical sequence; the reader may try calculating it. The end result is G_3 = {3,3,3,2,1,0}. However, at m = 4, new behavior emerges. 4 = 1*2^(1*2^1), so both 2's must be replaced by 3's to get G_4(2) = 1*3^(1*3^1) - 1 = 27 - 1 = 26. In hereditary base-3, 26 = 2*3^2 + 2*3^1 + 2, so G_4(3) = 2*4^2 + 2*4^1 + 2 - 1 = 41. For the next step, we get G_4(4) = 2*5^2 + 2*5^1 + 1 - 1 = 60. Note that the units digit is reduced by one in each step even as the sequence increases. When it hits zero, as in this step, the coefficient of the next higher term is decreased by one at the following step: G_4(5) = 2*6^2 + 2*6^1 - 1 = 83 = 2*6^2 + 1*6^1 + 5. However, the new units digit becomes one less than the base, namely 5, so it takes more steps for it to reach zero than previously. After another five steps, we arrive at G_4(10) = 2*11^2 + 1*11 = 253. When changing to base 12 at the next step, we obtain G_4(11) = 2*12^2 + 11 = 299. The units digit again decreases for the next 11 steps, until G_4(22) = 2*23^2 = 1058.
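
As a check on the hand computation above, a few more lines of Python (again my own sketch, reusing the hereditary helper from the sketch earlier in this post) generate Goodstein sequence terms by bumping the base in the hereditary representation and subtracting 1:

def bump(terms, new_base):
    """Evaluate a hereditary representation after replacing its base with new_base."""
    return sum(c * new_base ** bump(e, new_base) for c, e in terms)

def goodstein(m, steps):
    """First `steps` terms of the Goodstein sequence G_m (stopping early at 0).
    Requires hereditary() from the sketch above."""
    seq, base = [m], 2
    while len(seq) < steps and m > 0:
        m = bump(hereditary(m, base), base + 1) - 1
        base += 1
        seq.append(m)
    return seq

print(goodstein(4, 6))   # [4, 26, 41, 60, 83, 109]; the first five match the values above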

The next step starts to indicate why Goodstein sequences can increase for so long: G_4(23) = 2*24^2 - 1 = 1151 = 1*24^2 + 23*24^1 + 23. Since the base is 24, we get two new coefficients of 23. Each time the units digit reaches zero, the value at which it has to start the next time doubles. The square term in the base representation does not vanish until the base reaches 402653184. And at this point the sequence has barely begun. The largest value it reaches is 3*2^402653210 - 1 at base 3*2^402653209, after which the sequence remains stable for a while before finally declining to zero. This maximum value is so astronomically large that if the digits of the number were printed at a typical font size, front and back, it would fill a stack of paper over 10 feet tall! And this is just G_4. Goodstein sequences with higher initial values increase much, much faster.

If we start with 18, for instance, since 18 = 1*2^(1*2^(1*2^1)) + 1*2^1, replacing all the 2's with 3's gives G_18(2) = 1*3^(1*3^(1*3^1)) + 1*3^1 - 1 = 7625597484989. The third term is G_18(3) = 1*4^(1*4^(1*4^1)) + 2 - 1 ~ 10^154. The values this sequence reaches quickly become difficult to even write down. However, Reuben Goodstein himself, after whom the sequences are named, proved in 1944 a statement that became known as Goodstein's Theorem. His remarkable result showed that no matter how incalculably large the sequences become, they always terminate at 0. That is, after some finite, though possibly immense, series of steps, each sequence stops increasing eventually and decreases to 0.

The theorem's proof has significance beyond demonstrating this surprising fact about Goodstein sequences. For more, see the next post.

Sources: https://www.jstor.org/stable/2268019, http://mathworld.wolfram.com/GoodsteinSequence.html

Monday, February 12, 2018

Black Holes and Information

Black holes, with their extreme gravity and ability to profoundly warp space and time, are some of the most interesting objects in the universe. However, in at least one precisely defined way, they are also the least interesting.

According to general relativity, black holes are nearly featureless. Specifically, there is a result known as the "no-hair theorem" that states that stationary black holes have exactly three features that are externally observable: their mass, their electric charge, and their angular momentum (direction and magnitude of spin). There are no other attributes that distinguish them (these additional properties would be the "hair"). It follows that if two black holes are exactly identical in mass, charge, and angular momentum, there is no way, even in principle, to tell them apart from the outside.

This in and of itself is not a problem. As usual, problems arise when the principles of quantum mechanics are brought to bear in circumstances where both gravity and quantum phenomena play a large role. At the heart of the formalism of quantum mechanics is the Schrödinger equation, which governs the time-evolution of a system (at least between measurements). Fundamentally, the evolution may be computed both forwards and backwards in time. Therefore, at least the mathematical principles of quantum mechanics hold that information about a physical system cannot be "lost", that is, we may always deduce what happened in the past from the present. This argument does not take the measurement process into account, but it is believed that these processes do not destroy information either. Black holes provide some problems for this paradigm.

At first, it may seem that information is lost all the time. If a book is burned, for example, everything that was written on its pages is beyond our ability to reconstruct. However, in principle, some omniscient being could look at the state of every particle of the burnt book and surrounding system and deduce how they must have been arranged. As a result, the omniscient being could say what was written in the book. The situation is rather different for black holes. If a book falls into a black hole, outside observers cannot recover the text on its pages, but this poses no problem for our omniscient being: complete knowledge of the state of all particles in the universe includes of course those on the interior of black holes as well as the exterior. The book may be beyond our reach, but its information is still conserved in the black hole interior.

The real problem became evident in 1974, when physicist Stephen Hawking argued for the existence of what is now known as Hawking radiation. This quantum mechanism allows black holes to shed mass over time, requiring a modification to the conventional wisdom that nothing ever escapes black holes.



The principles of quantum mechanics dictate that the "vacuum" of space is not truly empty. Transient so-called "virtual" particles may spring in and out of existence. Pairs of such particles may emerge from the vacuum (a pair with opposite charges, etc. is required to preserve conservation laws) for a very short time; due to the uncertainty principle of quantum mechanics, short-lived fluctuations in energy that would result from the creation of particles do not violate energy conservation. In the presence of very strong gravitational fields, such as those around a black hole, the resulting pairs of particles sometimes do not come back together and annihilate each other (as in the closed virtual pairs above). Instead, the pairs "break" and become real particles, taking with them some of the black hole's gravitational energy. When this occurs on the event horizon, one particle may form just outside and the other just inside, so that the one on the outside escapes to space. This particle emission is Hawking Radiation.

Theoretically, therefore, black holes have a way of shedding mass (through radiation) over time. Eventually, they completely "evaporate" into nothing! This process is extremely slow: a black hole formed from the collapse of a star would take on the order of 10^67 years (incomparably longer than the current age of the Universe) to evaporate. Larger ones take still longer. Nevertheless, a theoretical puzzle remains: if the black hole evaporates and disappears, where did its stored information go? This is known as the black hole information paradox. The only particles actually emitted from the horizon were spontaneously produced from the vacuum, so it is not obvious how these could encode information. Alternatively, the information could all be released in some way at the moment the black hole evaporates. This runs into another problem, known as the Bekenstein bound.

The Bekenstein bound, named after physicist Jacob Bekenstein, is an upper limit on the amount of information that may be stored in a finite volume using finite energy. To see why this bound arises, consider a physical system as a rudimentary "computer" that stores binary information (i.e. strings of 1's and 0's). In order to store a five-digit string such as 10011, there need to be five "switches," each of which has an "up" position for 1 and a "down" position for 0. Considering all possible binary strings, there are therefore 2^5 = 32 different physical states (positions of switches) for our five-digit string. This is a crude analogy, but it captures the basic gist: the Bekenstein bound comes about because a physical system of a certain size and energy can only occupy so many physical states, for quantum mechanical reasons. This bound is enormous; every rearrangement of atoms in the system, for example, would count as a state. Nevertheless, it is finite.

The mathematical statement of the bound gives the maximum number of bits, or the length of the longest binary sequence, that a physical system of mass m, expressed as a number of kilograms, and radius R, a number of meters, could store. It is I ≤ 2.5769*10^43 mR.

This is far, far greater than what any existing or foreseeable computer is capable of storing, and is therefore not relevant to current technology. However, it matters to black holes, because if they hold information to the moment of evaporation, the black hole will have shrunk to a minuscule size and must retain the same information that it had at its largest. This hypothesis addressing the black hole information paradox seems at odds with the Bekenstein bound.
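
As a quick numerical illustration of the stated bound (my own example, with arbitrary round numbers), the following Python snippet evaluates it for a book-sized object:

# Bekenstein bound as stated above: I <= 2.5769e43 * m * R bits,
# with mass m in kilograms and radius R in meters.
def bekenstein_bound_bits(mass_kg, radius_m):
    """Upper limit on the bits storable in a system of given mass and radius."""
    return 2.5769e43 * mass_kg * radius_m

# A hypothetical 1 kg object of radius 0.1 m: at most ~2.6*10^42 bits.
print(f"{bekenstein_bound_bits(1.0, 0.1):.3e}")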

In summary, there are many possible avenues for study in resolving the black hole information paradox, nearly all of which require the sacrifice of at least one physical principle. Perhaps information is not preserved over time, due to the "collapse" of the quantum wavefunction that occurs with measurement. Perhaps there is a way for Hawking radiation to carry information. Or possibly, there is a way around the Bekenstein bound for evaporating black holes. These possibilities, as well as more exotic ones, are current areas of study. Resolving the apparent paradoxes that arise in the most extreme of environments, where quantum mechanics and relativity collide, would greatly advance our understanding of the universe.

Sources: https://physics.aps.org/articles/v9/62, https://arxiv.org/pdf/quant-ph/0508041.pdf, http://kiso.phys.se.tmu.ac.jp/thesis/m.h.kuwabara.pdf, https://plus.maths.org/content/bekenstein

Monday, January 22, 2018

Neutrinos and Their Detection 2

This is the second part of a two part post. For the first part, see here.

The discovery of neutrinos led to a rather startling realization concerning the omnipresence of these particles. Scientists have known since the early 20th century that stars such as the Sun generate energy through nuclear fusion, especially of hydrogen into helium. In addition to producing radiation that eventually leads to what we see as sunlight, every one of these reactions releases neutrinos. As a result, the Earth is continually bathed in a stream of neutrinos: every second, billions of neutrinos pass through every square centimeter of the Earth's surface. The vast, vast majority of these pass through the planet unimpeded and resume their course through space, just as discussed in the previous post. As we will see, studying the properties of these solar neutrinos later led to a revolutionary discovery.



In 1967, an experiment began that had much in common with many of the neutrino experiments to come. Known as the Homestake experiment after its location, the Homestake Gold Mine in South Dakota, its main apparatus was a 100,000 gallon tank of perchloroethylene (a common cleaning fluid) located deep underground, nearly a mile below the Earth's surface. The purpose of holding the experiment underground was to minimize the influence of cosmic rays, which would react with the perchloroethylene and produce experimental noise. Cosmic rays do not penetrate deep underground, however, while neutrinos do. The immense volume of liquid was necessary to obtain statistically significant data from the small rate of neutrino interactions. The number of argon atoms produced through the neutrino reaction (which converts chlorine atoms in the fluid into argon) was measured to determine how many reactions were occurring.

Simultaneously, physicists made theoretical calculations using knowledge of the Sun's composition, the process of nucleosynthesis, the Earth's distance from the Sun, and the size of the detector to estimate what the rate of interactions should have been. However, the results were not consistent with the data collected from the experiment. Generally, theoretical estimates were around three times as large as the actual results. Two-thirds of the expected reactions were missing! This disagreement became known as the "solar neutrino problem."

The models of the Sun were not at fault. In fact, the cause of the problem was an incorrect feature of the otherwise quite powerful Standard Model of particle physics: its assumption that neutrinos are massless. As far back as 1957, the Italian physicist Bruno Pontecorvo had considered the implications of neutrinos having mass.



He and others realized that neutrinos with mass would undergo what is known as neutrino oscillation when traveling through space. For example, an electron neutrino emitted from nuclear fusion would become a "mix" of all three flavors of neutrinos: electron, muon, and tau. When a solar neutrino reaches Earth and interacts with matter, it only has roughly a 1 in 3 chance of "deciding" to be an electron neutrino. This would explain the observed missing neutrinos, since the Homestake detector only accounts for electron neutrinos.
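
As a rough quantitative aside (this is the standard two-flavor vacuum approximation, not a formula from the original post), the probability that a neutrino created in one flavor is detected as another after traveling a distance L with energy E is

P ≈ sin^2(2θ) * sin^2(1.27 Δm^2 L / E),

with the mass-squared difference Δm^2 in eV^2, L in km, and E in GeV. The oscillation vanishes if Δm^2 = 0, which is why observing it implies that neutrinos have mass.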

For the remainder of the 20th century, several more experiments were performed to investigate whether neutrino oscillation was in fact the solution to the solar neutrino problem. One experiment that was crucial in conclusively settling the matter was Super-Kamiokande, a neutrino observatory located in Japan. Like the Homestake experiment, it was located deep underground in a mine and consisted of a large volume of liquid (in this case, water).



When neutrinos interact with the water molecules in the detector, charged particles are produced that propagate through the chamber. These release radiation (Cherenkov light) which is amplified and recorded by the photomultipliers that surround the water tank on every side. The large number of photomultipliers allows a more detailed analysis of this radiation, yielding the energy and direction of origin for each neutrino interaction. It was this added precision that helped to resolve the solar neutrino problem: neutrinos indeed have mass and undergo oscillation. This discovery led to Japanese physicist Takaaki Kajita (who worked on the Super-Kamiokande detector as well as its predecessor, the Kamiokande detector) sharing the 2015 Nobel Prize in Physics.

The exact masses of the different flavors of neutrinos are not yet known, nor do we completely understand why they have mass. However, despite the mysteries of particle physics that remain, further applications of neutrino detection continue in a different field: astronomy. The use of neutrinos to observe extraterrestrial objects is known as neutrino astronomy. In theory, if one could accurately measure the direction from which every neutrino arrives at Earth, the result would be an "image" of the sky highlighting neutrino sources. In reality, scattering within detectors such as Super-Kamiokande (incoming particles hit other particles and change direction) limits the angular resolution, and so few interactions occur that there are not enough events to construct such an image. In fact, only two extraterrestrial objects have ever been detected through their neutrino emissions: the Sun, and a nearby supernova known as SN 1987A after the year in which it took place. Theoretical calculations indicate that sufficiently bright supernovae may be located with reasonable accuracy using neutrino detectors in the future.



There is one major advantage to using neutrinos as opposed to light in making observations: neutrinos pass through nearly all matter unimpeded. The above discussion indicated that the Sun is a neutrino source. This is true, but not fully precise; the solar core is the source of the neutrinos, as it is where fusion occurs, and its radius is only about a quarter of the Sun's. There is no way to see the light emanating from the core because it interacts with other solar particles. However, we can see the core directly through neutrino imaging. In fact, the data from the Super-Kamiokande experiment should be enough to approximate the radius within which certain fusion reactions take place. Future detectors could tell us even more about the Sun's interior.

Neutrino astronomy is still a nascent field and we do not yet know its full potential. Further understanding and detection of neutrinos will tell us more about the fundamental building blocks of matter, allow us to peer inside our own Sun, and measure distant supernovae.

Sources: http://www.sns.ias.edu/~jnb/SNviewgraphs/snviewgraphs.html, https://arxiv.org/pdf/hep-ph/0410090v1.pdf, http://slideplayer.com/slide/776551/, https://www.bnl.gov/bnlweb/raydavis/research.htm, https://arxiv.org/pdf/hep-ph/0202058v3.pdf, https://j-parc.jp/Neutrino/en/intro-t2kexp.html, https://arxiv.org/pdf/1010.0118v3.pdf, https://www.scientificamerican.com/article/through-neutrino-eyes/, https://arxiv.org/pdf/astro-ph/9811350v1.pdf, https://arxiv.org/pdf/1606.02558.pdf

Monday, January 1, 2018

Neutrinos and Their Detection

Neutrinos are a type of subatomic particle known both for their ubiquity and their disinclination to interact with other forms of matter. They have zero electric charge and very little mass even compared to other fundamental particles (though not zero; more on this later), so they are not affected by electromagnetic forces and only slightly by gravity.



Since neutrinos are so elusive, it is not surprising that their existence was first surmised indirectly. In 1930, while studying a type of radioactive decay known as beta decay, physicist Wolfgang Pauli noticed a discrepancy. Through beta decay (shown above), a neutron is converted into a proton. This is a common process by which unstable atomic nuclei transmute into more stable ones. It was known that an electron was also released in this process. However, Pauli found that this left some momentum unaccounted for. As a result, he postulated the existence of a small, neutral particle (these properties eventually led to the name "neutrino"). The type emitted in this sort of decay is now known as an electron antineutrino (all the types will be enumerated below).
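
For reference (the equation below is standard nuclear physics rather than something reproduced from the post's figure), the beta decay of a neutron can be written as

n → p + e⁻ + ν̄_e,

where ν̄_e is the electron antineutrino whose existence Pauli inferred from the missing momentum.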

However, neutrinos remained purely speculative for some decades, until a direct detection occurred in 1956 in the Cowan-Reines Neutrino Experiment, named after physicists Clyde Cowan and Frederick Reines.



The experiment relied upon the fact that nuclear reactors were expected to release a large flux of electron antineutrinos during their operation, providing a concentrated source with which to experiment. The main apparatus of the experiment was a volume of water that electron antineutrinos emerging from the reactor would pass through. Occasionally, one would interact with a proton in the tank, producing a neutron and a positron (or anti-electron, denoted e+) through the reaction shown on the bottom left. This positron would quickly encounter an ordinary electron and the two would mutually annihilate to form gamma rays (γ). These gamma rays would then be picked up by scintillators around the water tanks. To increase the certainty that these gamma ray signatures in fact came from neutrinos, the experimenters added a second layer of detection by dissolving the chemical cadmium chloride (CdCl2) in the water. The addition of a neutron (the other product of the initial reaction) to the common isotope Cd-108 creates an unstable state of Cd-109, which releases a gamma ray after a delay of a handful of microseconds. Thus, the detection of two gamma rays simultaneously and then a third after a small delay would definitively indicate a neutrino interaction. The experiment was very successful, and the rate of interactions, about three per hour, matched the theoretical prediction well. The neutrino had been discovered.
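
Written out explicitly (again standard physics, summarizing the reactions described above rather than quoting the post's figure), the detection chain was

ν̄_e + p → n + e⁺, followed by e⁺ + e⁻ → 2γ,

with the neutron then captured by a cadmium nucleus to produce the delayed third gamma ray.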

The Standard Model of particle physics predicted the existence of three "generations" of neutrinos corresponding to three types of particles called leptons.



The above diagram shows the three types of leptons and their corresponding neutrinos. In addition to this, every particle type has a corresponding antiparticle which in a way has the "opposite" properties (though some properties, such as mass, remain the same). The electron antineutrino discussed above is simply the antiparticle corresponding to the electron neutrino, for example. The discoveries of the others occurred at particle accelerators, where concentrated beams could be produced: the muon neutrino in 1962, and the tau neutrino in 2000. These results completed the expected roster of neutrino types under the Standard Model. In its original form, though, the Standard Model predicted that all neutrinos would have exactly zero mass. Note that this hypothesis (though later proved incorrect) is not disproven by the fact that neutrinos account for the "missing momentum" Pauli originally identified; massless particles, such as photons (particles of light), can still carry momentum and energy.

All of the neutrino physics described so far concerns artificially produced particles. However, these discoveries were only the beginning. Countless neutrinos also originate in the cosmos, motivating the area of neutrino astronomy. For more on this field and its value to both astronomy and particle physics, see the next post (coming January 22).

Sources: http://www.astro.wisc.edu/~larson/Webpage/neutrinos.html, http://hyperphysics.phy-astr.gsu.edu/hbase/particles/cowan.html, https://perimeterinstitute.ca/files/page/attachments/Elementary_Particles_Periodic_Table_large.jpg, http://www.scienceinschool.org/sites/default/files/articleContentImages/19/neutrinos/issue19neutrinos10_xl.jpg, http://www.fnal.gov/pub/presspass/press_releases/donut.html

Wednesday, December 20, 2017

2017 Season Summary

The 2017 Atlantic hurricane season had above-average activity, with a total of

18 cyclones attaining tropical depression status,
17 cyclones attaining tropical storm status,
10 cyclones attaining hurricane status, and
6 cyclones attaining major hurricane status.

Before the beginning of the season, I predicted that there would be

15 cyclones attaining tropical depression status,
15 cyclones attaining tropical storm status,
6 cyclones attaining hurricane status, and
3 cyclones attaining major hurricane status.

The average numbers of named storms, hurricanes, and major hurricanes for an Atlantic hurricane season (over the 30-year period 1981-2010) are 12.1, 6.4, and 2.7, respectively. The 2017 season was well above average in all categories, especially hurricanes and major hurricanes. In addition, there were several intense and long-lived hurricanes, inflating the ACE (accumulated cyclone energy) index to 223. This value, which takes into account the number, duration, and intensity of tropical cyclones, was the highest since 2005. 2017 was also the first year on record to have three storms exceeding 40 ACE units: Hurricane Jose, with 42, Hurricane Maria, with 45, and Hurricane Irma, with 67.
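
For readers unfamiliar with the ACE index, here is a minimal Python sketch of how it is computed (my own illustration; the wind values in the example are hypothetical): ACE sums the squares of a cyclone's 6-hourly maximum sustained winds in knots, taken while the storm is at least tropical-storm strength, divided by 10^4.

def ace(six_hourly_winds_kt):
    """Accumulated cyclone energy contributed by one storm, from its 6-hourly
    maximum sustained winds in knots (only periods at tropical storm strength,
    roughly 35 kt or more, count)."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 1e4

# Hypothetical example: holding 100 kt for two days (8 six-hourly readings)
# contributes 8 * 100^2 / 10^4 = 8 ACE units.
print(ace([100] * 8))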

The ENSO oscillation, a variation in the ocean temperature anomalies of the tropical Pacific, often plays a role in Atlantic hurricane development. At the beginning of the 2017 season, these temperatures were predicted to rise, signaling a weak El Niño event and suppressing hurricane activity. However, this event did not materialize. Though anomalies did rise briefly in the spring, they returned to neutral and even negative by the early fall, when hurricane season peaks. This contributed to the extremely active September. In addition, conditions were more favorable for development in the central Atlantic than they had been for several years, allowing the formation of long-track major hurricanes. Due to these factors, my predictions significantly underestimated the season's extreme activity.

The 2017 Atlantic hurricane season was the costliest ever recorded, with Hurricanes Harvey, Irma, and Maria contributing the lion's share to this total. Among the areas most affected were southeastern Texas (by Harvey), the Leeward Islands (from Irma and Maria), and Puerto Rico and the Virgin Islands (from Maria). Some other notable facts and records for the 2017 season include:
  • Tropical Storm Arlene formed on April 20, one of only a small handful of April storms; it also had the lowest pressure ever recorded for an Atlantic tropical cyclone in April
  • The short-lived Tropical Storm Bret formed off the coast of South America and made landfall near the northern tip of Venezuela, becoming the southernmost forming June Atlantic cyclone since 1933
  • The remnants of Hurricane Franklin regenerated in the eastern Pacific after crossing Mexico and received a new name: Jova
  • Hurricane Harvey was the first major hurricane to make landfall in the U.S. since 2005, and the strongest to do so in Texas since 1961; the peak rainfall accumulation of 51.88" in Cedar Bayou, Texas was the largest tropical cyclone rain total ever for the continental U.S.
  • Hurricane Irma spent a total of 3.25 days as a category 5 hurricane, the most in the Atlantic since 1932, and maintained incredible 185 mph winds for 37 hours, the most recorded in the entire world
  • When Hurricanes Irma, Jose, and Katia were all at category 2 strength or above on September 8, it marked only the second such occurrence since 1893
  • Hurricane Maria reached a minimum pressure of 908 mb, then the tenth lowest ever for an Atlantic hurricane, and the lowest since Dean in 2007
  • Becoming a major hurricane near the Azores Islands, Hurricane Ophelia was the easternmost major hurricane ever to form in the Atlantic
  • All ten named storms from Hurricane Franklin to Ophelia became hurricanes, the first time ten consecutive names have done so in the Atlantic since 1893


Overall, the 2017 Atlantic hurricane season was exceptionally active and damaging, especially for parts of the Caribbean.

Sources: http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/lanina/enso_evolution-status-fcsts-web.pdf