Solution Aversion

I have had the misfortune to encounter many terms for psychological dysfunction in many venues. Cognitive dissonance, confirmation bias, the Dunning-Kruger effect – I have witnessed them all, all too often, both in the context of science and elsewhere. Those of us who are trained as scientists are still human: though we fancy ourselves immune, we are still subject to the same cognitive foibles as everyone else. Generally our training only suffices to get us past the oft-repeated ones.

Solution aversion is the knee-jerk reaction we have to deny the legitimacy of a problem when we don’t like the solution admitting said problem would entail. An obvious example in the modern era is climate change. People who deny the existence of this problem are usually averse to its solution.

Let me give an example from my own experience. To give some context requires some circuitous story-telling. We’ll start with climate change, but eventually get to cosmology.

Recently I encountered a lot of yakking on social media about an encounter between Bill Nye (the science guy) and Will Happer in a dispute about climate change. The basic gist of most of the posts was that of people (mostly scientists, mostly young enough to have watched Bill Nye growing up) cheering on Nye as he “eviscerated” Happer’s denialism. I did not watch any of the exchange, so I cannot evaluate the relative merits of their arguments. However, there is a more important issue at stake here: credibility.

Bill Nye has done wonderful work promoting science. Younger scientists often seem to revere him as a sort of Mr. Rogers of science. Which is great. But he is a science-themed entertainer, not an actual scientist. His show demonstrates basic, well known phenomena at a really, well, juvenile level. That’s a good thing – it clearly helped motivate a lot of talented people to become scientists. But recapitulating well-known results is very different from doing the cutting edge science that establishes new results that will become the fodder of future textbooks.

Will Happer is a serious scientist. He has made numerous fundamental contributions to physics. For example, he pointed out that the sodium layer in the upper atmosphere could be excited by a laser to create artificial guide stars for adaptive optics, enabling ground-based telescopes to achieve a resolution comparable to that of the Hubble Space Telescope. I suspect his work for the JASON advisory group led to the implementation of adaptive optics on Air Force telescopes long before us astronomers were doing it. (This is speculation on my part: I wouldn’t know; it’s classified.)

My point is that, contrary to the wishful thinking on social media, Nye has no more standing to debate Happer than Mickey Mouse has to debate Einstein. Nye, like Mickey Mouse, is an entertainer. Einstein is a scientist. If you think that comparison is extreme, that’s because there aren’t many famous scientists whose name I can expect everyone to know. A better analogy might be comparing Jon Hirschtick (a successful mechanical engineer, Nye’s field) to I.I. Rabi (a prominent atomic physicist like Happer), but you’re less likely to know who those people are. Most serious scientists do not cultivate public fame, and the modern examples I can think of all gave up doing real science for the limelight of their roles as science entertainers.

Another important contribution Happer made was to the study and technology of spin polarized nuclei. If you place an alkali element and a noble gas together in vapor, they may form weak van der Waals molecules. An alkali is basically a noble gas with a spare electron, so the two can become loosely bound, sharing the unwanted electron between them. It turns out – as Happer found and explained – that the wavefunction of the spare electron overlaps with the nucleus of the noble gas. By spin polarizing the electron through the well known process of optical pumping with a laser, it is possible to transfer the spin polarization to the nucleus. In this way, one can create large quantities of polarized nuclei, an amazing feat. This has found use in medical imaging technology. Noble gases are chemically inert, so safe to inhale. By doing so, one can light up lung tissue that is otherwise invisible to MRI and other imaging technologies.

I know this because I worked on it with Happer in the mid-80s. I was a first year graduate student in physics at Princeton where he was a professor. I did not appreciate the importance of what we were doing at the time. Will was a nice guy, but he was also my boss and though I respected him I did not much like him. I was a high-strung, highly stressed, 21 year old graduate student displaced from friends and familiar settings, so he may not have liked me much, or simply despaired of me amounting to anything. Mostly I blame the toxic arrogance of the physics department we were both in – Princeton is very much the Slytherin of science schools.

In this environment, there weren’t many opportunities for unguarded conversations. I do vividly recall some of the few that happened. In one instance, we had heard a talk about the potential for industrial activity to add enough carbon dioxide to the atmosphere to cause an imbalance in the climate. This was 1986, and it was the first I had heard of what is now commonly referred to as climate change. I was skeptical, and asked Will’s opinion. I was surprised by the sudden vehemence of his reaction:

“We can’t turn off the wheels of industry, and go back to living like cavemen.”

I hadn’t suggested any such thing. I don’t even recall expressing support for the speaker’s contention. In retrospect, this is a crystal clear example of solution aversion in action. Will is a brilliant guy. He leapt ahead of the problem at hand and saw its solution leading to a future he did not want. Rejecting that unacceptable solution became intimately tied, psychologically, to the problem itself. This attitude has persisted to the present day, and Happer is now known as one of the most prominent scientists who is also a climate change denier.

Being brilliant never makes us foolproof against being wrong. If anything, it sets us up for making mistakes of enormous magnitude.

There is a difference between the problem and the solution. Before we debate the solution, we must first agree on the problem. That should, ideally, be done dispassionately and without reference to the solutions that might stem from it. Only after we agree on the problem can we hope to find a fitting solution.

In the case of climate change, it might be that we decide the problem is not so large as to require drastic action. Or we might hope that we can gradually wean ourselves away from fossil fuels. That is easier said than done, as many people do not seem to appreciate the magnitude of the energy budget that needs replacing. But does that mean we shouldn’t even try? That seems to be the psychological result of solution aversion.

Either way, we have to agree and accept that there is a problem before we can legitimately decide what to do about it. Which brings me back to cosmology. I did promise you a circuitous bit of story-telling.

Happer’s is just the first example I encountered of a brilliant person coming to a dubious conclusion because of solution aversion. I have had many colleagues who work on cosmology and galaxy formation say straight out to me that they would only consider MOND “as a last resort.” This is a glaring, if understandable, example of solution aversion. We don’t like MOND, so we’re only willing to consider it when all other options have failed.

I hope it is obvious from the above that this attitude is not a healthy one in science. In cosmology, it is doubly bad. Just when, exactly, do we reach the last resort?

We’ve already accepted that the universe is full of dark matter, some invisible form of mass that interacts gravitationally but not otherwise, has no place in the ridiculously well tested Standard Model of particle physics, and has yet to leave a single shred of credible evidence in dozens of super-sensitive laboratory experiments. On top of that, we’ve accepted that there is also a distinct dark energy that acts like antigravity to drive the apparent acceleration of the expansion rate of the universe, conserving energy by the magic trick of a sign error in the equation of state that any earlier generation of physicists would have immediately rejected as obviously unphysical. In accepting these dark denizens of cosmology we have granted ourselves essentially infinite freedom to fine-tune any solution that strikes our fancy. Just what, then, could possibly constitute the last resort?

[Image: When you have a supercomputer, every problem looks like a simulation in need of more parameters.]

Being a brilliant scientist never precludes one from being wrong. At best, it lengthens the odds. All too often, it leads to a dangerous hubris: we’re so convinced by, and enamored of, our elaborate and beautiful theories that we see only the successes and turn a blind eye to the failures, or in true partisan fashion, try to paint them as successes. We can’t have a sensible discussion about what might be right until we’re willing to admit – seriously, deep-down-in-our-souls admit – that maybe ΛCDM is wrong.

I fear the field has gone beyond that, and is fissioning into multiple, distinct branches of science that use the same words to mean different things. Already “dark matter” means something different to particle physicists and astronomers, though they don’t usually realize it. Soon our languages may become unrecognizable dialects to one another; already communication across disciplinary boundaries is strained. I think Kuhn noted something about different scientists not recognizing what other scientists were doing as science, nor regarding the same evidence in the same way. Certainly we’ve got that far already, as successful predictions of the “other” theory are dismissed as so much fake news in a world unhinged from reality.

Degenerating problemshift: a wedged paradigm in great tightness

Reading Merritt’s paper on the philosophy of cosmology, I was struck by a particular quote from Lakatos:

A research programme is said to be progressing as long as its theoretical growth anticipates its empirical growth, that is as long as it keeps predicting novel facts with some success (“progressive problemshift”); it is stagnating if its theoretical growth lags behind its empirical growth, that is as long as it gives only post-hoc explanations either of chance discoveries or of facts anticipated by, and discovered in, a rival programme (“degenerating problemshift”) (Lakatos, 1971, pp. 104–105).

The recent history of modern cosmology is rife with post-hoc explanations of unanticipated facts. The cusp-core problem and the missing satellites problem are prominent examples. These are explained after the fact by invoking feedback, a vague catch-all that many people agree solves these problems even though none of them agree on how it actually works.

[Figure: Cartoon of the feedback explanation for the difference between the galaxy luminosity function (blue line) and the halo mass function (red line). From Silk & Mamon (2012).]

There are plenty of other problems. To name just a few: satellite planes (unanticipated correlations in phase space), the emptiness of voids, and the early formation of structure (see section 4 of Famaey & McGaugh for a longer list and section 6 of Silk & Mamon for a positive spin on our list). Each problem is dealt with in a piecemeal fashion, often by invoking solutions that contradict each other while buggering the principle of parsimony.

It goes like this. A new observation is made that does not align with the concordance cosmology. Hands are wrung. Debate is had. Serious concern is expressed. A solution is put forward. Sometimes it is reasonable, sometimes it is not. In either case it is rapidly accepted so long as it saves the paradigm and prevents the need for serious thought. (“Oh, feedback does that.”) The observation is no longer considered a problem through familiarity and exhaustion of patience with the debate, regardless of how [un]satisfactory the proffered solution is. The details of the solution are generally forgotten (if ever learned). When the next problem appears the process repeats, with the new solution often contradicting the now-forgotten solution to the previous problem.

This has been going on for so long that many junior scientists now seem to think this is how science is supposed to work. It is all they’ve experienced. And despite our claims to be interested in fundamental issues, most of us are impatient with re-examining issues that were thought to be settled. All it takes is one bold assertion that everything is OK, and the problem is perceived to be solved whether it actually is or not.

[Image: “Is there any more?”]

That is the process we apply to little problems. The Big Problems remain the post hoc elements of dark matter and dark energy. These are things we made up to explain unanticipated phenomena. That we need to invoke them immediately casts the paradigm into what Lakatos called degenerating problemshift. Once we’re there, it is hard to see how to get out, given our propensity to overindulge in the honey that is the infinity of free parameters in dark matter models.

Note that there is another aspect to what Lakatos said about facts anticipated by, and discovered in, a rival programme. Two examples spring immediately to mind: the Baryonic Tully-Fisher Relation and the Radial Acceleration Relation. These are predictions of MOND that were unanticipated in the conventional dark matter picture. Perhaps we can come up with post hoc explanations for them, but that is exactly what Lakatos would describe as degenerating problemshift. The rival programme beat us to it.

In my experience, this is a good description of what is going on. The field of dark matter has stagnated. Experimenters look harder and harder for the same thing, repeating the same experiments in hope of a different result. Theorists turn knobs on elaborate models, gifting themselves new free parameters every time they get stuck.

On the flip side, MOND keeps predicting novel facts with some success, so it remains in the stage of progressive problemshift. Unfortunately, MOND remains incomplete as a theory, and doesn’t address many basic issues in cosmology. This is a different kind of unsatisfactory.

In the mean time, I’m still waiting to hear a satisfactory answer to the question I’ve been posing for over two decades now. Why does MOND get any predictions right? It has had many a priori predictions come true. Why does this happen? It shouldn’t. Ever.

Cepheids & Gaia: No Systematic in the Hubble Constant

Casertano et al. have used Gaia to provide a small but important update in the debate over the value of the Hubble Constant. The ESA Gaia mission is measuring parallaxes for billions of stars. This is fundamental data that will advance astronomy in many ways, no doubt settling long-standing problems but also raising new ones – or complicating existing ones.

Traditional measurements of H0 are built on the distance ladder, in which distances to nearby objects are used to bootstrap outwards to more distant ones. This works, but it is also an invitation to the propagation of error. A mistake in the first step affects all others. This is a long-standing problem that informs the assumption that the tension between H0 = 67 km/s/Mpc from Planck and H0 = 73 km/s/Mpc from local measurements will be resolved by some systematic error – presumably in the calibration of the distance ladder.
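To make that worry concrete, here is a minimal sketch of how errors move up the ladder. The rung names and percentages below are hypothetical, chosen only to illustrate the point; they are not any survey’s actual error budget:

```python
import numpy as np

# Hypothetical fractional calibration uncertainties for three rungs.
rungs = {
    "parallax -> Cepheid zero point": 0.02,
    "Cepheids -> SN Ia host galaxies": 0.03,
    "SN Ia -> Hubble flow": 0.02,
}

# Random, independent errors on a chain of multiplicative calibrations
# add in quadrature:
frac_err = np.sqrt(sum(e**2 for e in rungs.values()))
print(f"combined distance uncertainty: {frac_err:.1%}")  # ~4%

# A systematic bias, by contrast, never averages down. A 5% overestimate
# of distances in the first rung inflates every distance above it, and
# since v = H0 * d, it deflates the inferred H0 by the same 5%:
true_H0, bias = 73.0, 1.05
print(f"H0 with a 5% first-rung bias: {true_H0 / bias:.1f} km/s/Mpc")  # ~69.5
```

The point of the sketch is the asymmetry: random errors beat down with better data, but a bias at the base of the ladder rides along undiluted into H0.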

Well, not so far. Gaia has now measured enough Cepheids in our own Milky Way to test the calibration used to measure the distances of external galaxies via Cepheids. This was one of the shaky steps where things seemed most likely to go awry. But no – the scales are consistent at the 0.3% level. For now, direct measurement of the expansion rate remains H0 = 73 km/s/Mpc.

Critical Examination of the Impossible

It has been proposal season for the Hubble Space Telescope, so many astronomers have been busy with that. I am no exception. Talking to others, it is clear that there remain many more excellent Hubble projects than available observing time.

So I haven’t written here for a bit, and I have other tasks to get on with. I did get requests for a report on the last conference I went to, Beyond WIMPs: from Theory to Detection. They have posted video from the talks, so anyone who is interested may watch.

I think this is the worst talk I’ve given in 20 years. Maybe more. I made the classic mistake of trying to give the talk the organizers asked for rather than the one I wanted to give. Conference organizers mean well, but they usually only have a vague idea of what they imagine you’ll say. You should always ignore that and say what you think is important.

When speaking or writing, there are three rules: audience, audience, audience. I was unclear what the audience would be when I wrote the talk, and it turns out there were at least four identifiably distinct audiences in attendance. There were the skeptics – particle physicists who were concerned with the state of their field and that of cosmology; the faithful – particle physicists who were not in the least concerned about this state of affairs; the innocent – grad students with little to no background in astronomy; and the experts – astroparticle physicists who have a deep but rather narrow knowledge of relevant astronomical data. I don’t think it would have been possible to address the assigned topic (a “Critical Examination of the Existence of Dark Matter”) in a way that satisfied all of these distinct audiences, and certainly not in the time allotted (or even in an entire semester).

It is tempting to give an interruption-by-interruption breakdown of the sociology, but you may judge that for yourselves. The one thing I got right was what I said at the outset: Attitude Matters. You can see that on display throughout.

[Image: This comic has been hanging on a colleague’s door for decades.]

In science as in all matters, if you come to a problem sure that you already know the answer, you will leave with that conviction. No data nor argument will shake your faith. Only you can open your own mind.

Cosmology and Convention (continued)
Note: this is a guest post by David Merritt, following on from his paper on the philosophy of science as applied to aspects of modern cosmology.

Stacy kindly invited me to write a guest post, expanding on some of the arguments in my paper. I’ll start out by saying that I certainly don’t think of my paper as a final word on anything. I see it more like an opening argument — and I say this, because it’s my impression that the issues which it raises have not gotten nearly the attention they deserve from the philosophers of science. It is that community that I was hoping to reach, and that fact dictated much about the content and style of the paper. Of course, I’m delighted if astrophysicists find something interesting there too.

My paper is about epistemology, and in particular, whether the standard cosmological model respects Popper’s criterion of falsifiability — which he argued (quite convincingly) is a necessary condition for a theory to be considered scientific. Now, falsifying a theory requires testing it, and testing it means (i) using the theory to make a prediction, then (ii) checking to see if the prediction is correct. In the case of dark matter, the cleanest way I could think of to do this was via so-called “direct detection”, since the rotation curve of the Milky Way makes a pretty definite prediction about the density of dark matter at the Sun’s location. (Although as I argued, even this is not enough, since the theory says nothing at all about the likelihood that the DM particles will interact with normal matter even if they are present in a detector.)

What about the large-scale evidence for dark matter — things like the power spectrum of density fluctuations, baryon acoustic oscillations, the CMB spectrum etc.? In the spirit of falsification, we can ask what the standard model predicts for these things; and the answer is: it does not make any definite prediction. The reason is that — to predict quantities like these — one needs first to specify the values of a set of additional parameters: things like the mean densities of dark and normal matter; the numbers that determine the spectrum of initial density fluctuations; etc. There are roughly half a dozen such “free parameters”. Cosmologists never even try to use data like these to falsify their theory; their goal is to make the theory work, and they do this by picking the parameter values that optimize the fit between theory and data.

Philosophers of science are quite familiar with this sort of thing, and they have a rule: “You can’t use the data twice.” You can’t use data to adjust the parameters of a theory, and then turn around and claim that those same data support the theory.  But this is exactly what cosmologists do when they argue that the existence of a “concordance model” implies that the standard cosmological model is correct. What “concordance” actually shows is that the standard model can be made consistent: i.e. that one does not require different values for the same parameter. Consistency is good, but by itself it is a very weak argument in favor of a theory’s correctness. Furthermore, as Stacy has emphasized, the supposed “concordance” vanishes when you look at the values of the same parameters as they are determined in other, independent ways. The apparent tension in the Hubble constant is just the latest example of this; another, long-standing example is the very different value for the mean baryon density implied by the observed lithium abundance. There are other examples. True “convergence” in the sense understood by the philosophers — confirmation of the value of a single parameter in multiple, independent experiments — is essentially lacking in cosmology.

Now, even though those half-dozen parameters give cosmologists a great deal of freedom to adjust their model and to fit the data, the freedom is not complete. This is because — when adjusting parameters — they fix certain things: what Imre Lakatos called the “hard core” of a research program: the assumptions that a theorist is absolutely unwilling to abandon, come hell or high water. In our case, the “hard core” includes Einstein’s theory of gravity, but it also includes a number of less-obvious things; for instance, the assumption that the dark matter responds to gravity in the same way as any collisionless fluid of normal matter would respond. (The latter assumption is not made in many alternative theories.) Because of the inflexibility of the “hard core”, there are going to be certain parameter values that are also more-or-less fixed by the data. When a cosmologist says “The third peak in the CMB requires dark matter”, what she is really saying is: “Assuming the fixed hard core, I find that any reasonable fit to the data requires the parameter defining the dark-matter density to be significantly greater than zero.” That is a much weaker statement than “Dark matter must exist”. Statements like “We know that dark matter exists” put me in mind of the 18th century chemists who said things like “Based on my combustion experiments, I conclude that phlogiston exists and that it has a negative mass”. We know now that the behavior the chemists were ascribing to the release of phlogiston was actually due to oxidation. But the “hard core” of their theory (“Combustibles contain an inflammable principle which they release upon burning”) forbade them from considering different models. It took Lavoisier’s arguments to finally convince them of the existence of oxygen.

The fact that the current cosmological model has a fixed “hard core” also implies that — in principle — it can be falsified. But, at the risk of being called a cynic, I have little doubt that if a new, falsifying observation should appear, even a very compelling one, the community will respond as it has so often in the past: via a conventionalist stratagem. Pavel Kroupa has a wonderful graphic, reproduced below, that shows just how often predictions of the standard cosmological model have been falsified — a couple of dozen times, according to the latest count; and these are only the major instances. Historians and philosophers of science have documented that theories that evolve in this way often end up on the scrap heap. To the extent that my paper is of interest to the astronomical community, I hope that it gets people to thinking about whether the current cosmological model is headed in that direction.

[Figure: Fig. 14 from Kroupa (2012) quantifying setbacks to the Standard Model of Cosmology (SMoC).]

Hubble constant redux

There is a new article in Science on the expansion rate of the universe, very much along the lines of my recent post. It is a good read that I recommend. It includes some of the human elements that influence the science.

When I started this blog, I recalled my experience in the ’80s moving from a theory-infused institution to a more observationally and empirically oriented one. At that time, the theory-infused cosmologists assured us that Sandage had to be correct: H0 = 50. As a young student, I bought into this. Big time. I had no reason not to; I was very certain of the transmitted lore. The reasons to believe it then seemed every bit as convincing as the reasons to believe ΛCDM today. When I encountered people actually making the measurement, like Greg Bothun, they said “looks to be about 80.”

This caused me a lot of cognitive dissonance. This couldn’t be true. The universe would be too young (at most ∼12 Gyr) to contain the oldest stars (thought to be ∼18 Gyr at that time). Worse, there was no way to reconcile this with Inflation, which demanded Ωm = 1. The large deceleration of the expansion caused by high Ωm greatly exacerbated the age problem (only ∼8 Gyr accounting for deceleration). Reconciling the age problem with Ωm = 1 was hard enough without raising the Hubble constant.
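The arithmetic of that age crunch is worth spelling out. A quick sketch using the standard textbook ages (t0 = 1/H0 for an empty, coasting universe; t0 = (2/3)/H0 for Ωm = 1):

```python
# 1/H0 in Gyr for H0 in km/s/Mpc: 977.8 / H0 (unit conversion factor).
H0 = 80.0  # km/s/Mpc, the kind of value the observers were quoting

coasting = 977.8 / H0          # empty, coasting universe: t0 = 1/H0
eds = (2.0 / 3.0) * coasting   # Omega_m = 1 (Einstein-de Sitter): t0 = (2/3)/H0

print(f"coasting age:  {coasting:.1f} Gyr")  # ~12 Gyr
print(f"Omega_m=1 age: {eds:.1f} Gyr")       # ~8 Gyr
# Both fall short of the ~18 Gyr then estimated for the oldest stars.
```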

Presented with this dissonant information, I did what most of us humans do: I ignored it. Some of my first work involved computing the luminosity function of quasars. With the huge distance scale of H0 = 50, I remember noticing how more distant quasars got progressively brighter. By a lot. Yes, they’re the most luminous things in the early universe. But they weren’t just outshining a galaxy’s worth of stars; they were outshining a galaxy of galaxies.

That was a clue that the metric I was assuming was very wrong. And indeed, since that time, every number of cosmological significance that I was assured in confident tones by Great Men that I Had to Believe has changed by far more than its formal uncertainty. In struggling with this, I’ve learned not to be so presumptuous in my beliefs. The universe is there for us to explore and discover. We inevitably err when we try to dictate how it Must Be.

The amplitude of the discrepancy in the Hubble constant is smaller now, but the same attitudes are playing out. Individual attitudes vary, of course, but there are many in the cosmological community who take the attitude that the Planck data give H0 = 67.8 so that is the right number. All other data are irrelevant; or at best flawed until brought into concordance with the right number.

It is Known, Khaleesi. 

Often these are the same people who assured us we had to believe Ωm = 1 and H0 = 50 back in the day. This continues the tradition of arrogance about how things must be. This attitude remains rampant in cosmology, and is subsumed by new generations of students just as it was by me. They’re very certain of the transmitted lore. I’ve even been trolled by some who seem particularly eager to repeat the mistakes of the past.

From hard experience, I would advocate a little humility. Yes, Virginia, there is a real tension in the Hubble constant. And yes, it remains quite possible that essential elements of our cosmology may prove to be wrong. I personally have no doubt about the empirical pillars of the Big Bang – cosmic expansion, Big Bang Nucleosynthesis, and the primordial nature of the Cosmic Microwave Background. But Dark Matter and Dark Energy may well turn out to be mere proxies for some deeper cosmic truth. If that is so, we will never recognize it if we proceed with the attitude that ΛCDM is Known, Khaleesi.

Neutrinos got mass!

In 1984, I heard Hans Bethe give a talk in which he suggested the dark matter might be neutrinos. This sounded outlandish – from what I had just been taught about the Standard Model, neutrinos were massless. Worse, I had been given the clear impression that it would screw everything up if they did have mass. This was the pervasive attitude, even though the solar neutrino problem was known at the time. This did not compute, so many of us were inclined to ignore it. But, I thought, in the unlikely event it turned out that neutrinos did have mass, surely that would be the answer to the dark matter problem.

Flash forward a few decades, and sure enough, neutrinos do have mass. Oscillations between flavors of neutrinos have been observed in both solar and atmospheric neutrinos. This implies non-zero mass eigenstates. We don’t yet know the absolute value of the neutrino mass, but the oscillations do constrain the separation between mass states (Δm²₂₁ = 7.53×10⁻⁵ eV² for solar neutrinos, and Δm²₃₁ = 2.44×10⁻³ eV² for atmospheric neutrinos).
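The splittings alone set a floor on the summed mass. A quick check, assuming the normal hierarchy with the lightest eigenstate taken to be exactly massless (an assumption, not a measurement):

```python
import numpy as np

dm2_21 = 7.53e-5  # eV^2, solar splitting
dm2_31 = 2.44e-3  # eV^2, atmospheric splitting

# Normal hierarchy with a massless lightest state:
m1 = 0.0
m2 = np.sqrt(dm2_21)  # ~0.009 eV
m3 = np.sqrt(dm2_31)  # ~0.049 eV

print(f"minimum possible sum of masses: {m1 + m2 + m3:.3f} eV")  # ~0.058 eV
```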

Though the absolute values of the neutrino mass eigenstates are not yet known, there are upper limits. These don’t allow enough mass to explain the cosmological missing mass problem. The relic density of neutrinos is

Ωνh² = ∑mν/(93.5 eV)

In order to make up the dark matter density (Ω ≈ 1/4), we need ∑mν ≈ 12 eV. The experimental upper limit on the electron neutrino mass is mν < 2 eV. There are three neutrino mass eigenstates, and the difference in mass between them is tiny, so ∑mν < 6 eV. Neutrinos could conceivably add up to more mass than baryons, but they cannot add up to be the dark matter.
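Plugging numbers into the relic density relation makes the shortfall explicit. A quick check, assuming h ≈ 0.7 for the dimensionless Hubble parameter:

```python
h = 0.7  # dimensionless Hubble parameter, assumed

# Omega_nu * h^2 = sum(m_nu) / 93.5 eV, used in both directions:
needed = 0.25 * 93.5 * h**2        # sum(m_nu) to supply Omega ~ 1/4
ceiling = 6.0 / (93.5 * h**2)      # Omega_nu at the 6 eV experimental ceiling

print(f"sum(m_nu) needed for the dark matter: {needed:.0f} eV")  # ~11-12 eV, depending on h
print(f"Omega_nu at the 6 eV ceiling:         {ceiling:.2f}")    # ~0.13, well short of 0.25
```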

In recent years, I have started to hear the assertion that we have already detected dark matter, with neutrinos given as the example. They are particles with mass that only interact with us through the weak nuclear force and gravity. In this respect, they are like WIMPs.

Here the equivalence ends. Neutrinos are Standard Model particles that have been known for decades. WIMPs are hypothetical particles that reside in a hypothetical supersymmetric sector beyond the Standard Model. Conflating the two to imply that WIMPs are just as natural as neutrinos is a false equivalency.

That said, massive neutrinos might be one of the few ways in which hierarchical cosmogony, as we currently understand it, is falsifiable. Whatever the dark matter is, we need it to be dynamically cold. This property is necessary for it to clump into dark matter halos that seed galaxy formation. Too much hot (relativistic) dark matter (neutrinos) suppresses structure formation. A nascent dark matter halo is nary a speed bump to a neutrino moving near the speed of light: if those fast neutrinos carry too much mass, they erase structure before it can form.

One of the great successes of ΛCDM is its explanation of structure formation: the growth of large scale structure from the small fluctuations in the density field at early times. This is usually quantified by the power spectrum – in the CMB at z > 1000 and from the spatial distribution of galaxies at z = 0. This all works well provided the dominant dark mass is dynamically cold, and there isn’t too much hot dark matter fighting it.

t16_galaxy_power_spectrum
The power spectrum from the CMB (low frequency/large scales) and the galaxy distribution (high frequency/”small” scales). Adapted from Whittle.

How much is too much? The power spectrum puts strong limits on the amount of hot dark matter that is tolerable. The upper limit is ∑mν < 0.12 eV. This is an order of magnitude stronger than direct experimental constraints.
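A rough way to see why the limit is so strong: in linear theory, free-streaming neutrinos suppress small-scale power by roughly ΔP/P ≈ −8fν, where fν = Ων/Ωm (the rule of thumb of Hu, Eisenstein, & Tegmark 1998). A sketch with assumed fiducial parameters:

```python
h, Omega_m = 0.67, 0.31  # fiducial values, assumed

def delta_P_over_P(sum_m_nu_eV):
    """Approximate small-scale power suppression from hot dark matter."""
    Omega_nu = sum_m_nu_eV / (93.5 * h**2)
    f_nu = Omega_nu / Omega_m
    return -8.0 * f_nu  # linear rule of thumb; breaks down as f_nu grows

for s in (0.12, 0.5, 1.0):
    print(f"sum(m_nu) = {s:4.2f} eV -> Delta P/P ~ {delta_P_over_P(s):+.0%}")
# Even ~0.1 eV shaves several percent off the small-scale power,
# which is detectable with precision data.
```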

Usually, it is assumed that the experimental limit will eventually come down to the structure formation limit. That does seem likely, but it is also conceivable that the neutrino mass has some intermediate value, say mν ≈ 1 eV. Such a result, were it to be obtained experimentally, would falsify the current CDM cosmogony.

Such a result seems unlikely, of course. Shooting for a narrow window such as the gap between the current cosmological and experimental limits is like drawing to an inside straight. It can happen, but it is unwise to bet the farm on it.

It should be noted that a circa 1 eV neutrino would have some desirable properties in a MONDian universe. MOND can form large scale structure, much like CDM, but it does so faster. This is good for clearing out the voids and getting structure in place early, but it tends to overproduce structure by z = 0. An admixture of neutrinos might help with that. A neutrino with an appreciable mass would also help with the residual mass discrepancy MOND suffers in clusters of galaxies.

If experiments measure a neutrino mass in excess of the cosmological limit, it would be powerful motivation to consider MOND-like theories as a driver of structure formation. If instead the neutrino does prove to be tiny, ΛCDM will have survived another test. That wouldn’t falsify MOND (or really have any bearing on it), but it would remove one potential “out” for the galaxy cluster problem.

Tiny though they be, neutrinos got mass! And it matters!

LCDM has met the enemy, and it is itself

David Merritt recently published the article “Cosmology and convention” in Studies in History and Philosophy of Science. This article is remarkable in many respects. For starters, it is rare that a practicing scientist reads a paper on the philosophy of science, much less publishes one in a philosophy journal.

I was initially loathe to start reading this article, frankly for fear of boredom: me reading about cosmology and the philosophy of science is like coals to Newcastle. I could not have been more wrong. It is a genuine page turner that should be read by everyone interested in cosmology.

I have struggled for a long time with whether dark matter constitutes a falsifiable scientific hypothesis. It straddles the border: specific dark matter candidates (e.g., WIMPs) are confirmable – a laboratory detection is both possible and plausible – but the concept of dark matter can never be excluded. If we fail to find WIMPs in the range of mass–cross-section parameter space where we expected them, we can change the prediction. This moving of the goal post has already happened repeatedly.

[Figure: The cross-section vs. mass parameter space for WIMPs. The original, “natural” weak interaction cross-section (10⁻³⁹) was excluded long ago, as were early attempts to map out the theoretically expected parameter space (upper pink region). Later predictions drifted to progressively lower cross-sections. These evaded experimental limits at the time, and confident predictions were made that the dark matter would be found. More recent data show otherwise: the gray region is excluded by PandaX (2016). This plot was generated with the help of DMTools hosted at Brown.]

I do not find it encouraging that the goal posts keep moving. This raises the question, how far can we go? Arbitrarily low cross-sections can be extracted from theory if we work at it hard enough. How hard should we work? That is, what criteria do we set whereby we decide the WIMP hypothesis is mistaken?

There has to be some criterion by which we would consider the WIMP hypothesis to be falsified. Without such a criterion, it does not satisfy the strictest definition of a scientific hypothesis. If at some point we fail to find WIMPs and are dissatisfied with the theoretical fine-tuning required to keep them hidden, we are free to invent some other dark matter candidate. No WIMPs? Must be axions. Not axions? Would you believe light dark matter? [Worst. Name. Ever.] And so on, ad infinitum. The concept of dark matter is not falsifiable, even if specific dark matter candidates are subject to being made to seem very unlikely (e.g., brown dwarfs).

Faced with this situation, we can consult the philosophy of science. Merritt discusses how many of the essential tenets of modern cosmology follow from what Popper would term “conventionalist stratagems” – ways to dodge serious consideration that a treasured theory is threatened. I find this a compelling terminology, as it formalizes an attitude I have witnessed among scientists, especially cosmologists, many times. It was put more colloquially by J.K. Galbraith:

“Faced with the choice between changing one’s mind and proving that there is no need to do so, almost everybody gets busy on the proof.”

Boiled down (Keuth 2005), the conventionalist stratagems Popper identifies are:

  1. ad hoc hypotheses
  2. modification of ostensive definitions
  3. doubting the reliability of the experimenter
  4. doubting the acumen of the theorist

These are stratagems to be avoided according to Popper. At the least they are pitfalls to be aware of, but as Merritt discusses, modern cosmology has marched down exactly this path, doing each of these in turn.

The ad hoc hypotheses of ΛCDM are of course Λ and CDM. Faced with the observation of a metric that cannot be reconciled with the prior expectation of a decelerating expansion rate, we re-invoke Einstein’s greatest blunder, Λ. We even generalize the notion and give it a fancy new name, dark energy, which has the convenient property that it can fit any observed set of monotonic distance-redshift pairs. Faced with an excess of gravitational attraction over what can be explained by normal matter, we invoke non-baryonic dark matter: some novel form of mass that has no place in the standard model of particle physics, has yet to show any hint of itself in the laboratory, and cannot be decisively excluded by experiment.

We didn’t accept these ad hoc add-ons easily or overnight. Persuasive astronomical evidence drove us there, but all these data really show is that something dire is wrong: General Relativity plus known standard model particles cannot explain the universe. Λ and CDM are more a first guess than a final answer. They’ve been around long enough that they have become familiar, almost beyond doubt. Nevertheless, they remain unproven ad hoc hypotheses.

The sentiment that is often asserted is that cosmology works so well that dark matter and dark energy must exist. But a more conservative statement would be that our present understanding of cosmology is correct if and only if these dark entities exist. The onus is on us to detect dark matter particles in the laboratory.

That’s just the first conventionalist stratagem. I could give many examples of violations of the other three, just from my own experience. That would make for a very long post indeed.

Instead, you should go read Merritt’s paper. There are too many things there to discuss, at least in a single post. You’re best going to the source. Be prepared for some cognitive dissonance.


Tension in the Hubble constant

There has been some hand-wringing of late about the tension between the value of the expansion rate of the universe – the famous Hubble constant, H0 – measured directly from observed redshifts and distances, and that obtained by multi-parameter fits to the cosmic microwave background. Direct determinations consistently give values in the low to mid-70s, like Riess et al. (2016): H0 = 73.24 ± 1.74 km/s/Mpc, while the latest CMB fit from Planck gives H0 = 67.8 ± 0.9 km/s/Mpc. These are formally discrepant at a modest level: enough to be annoying, but not enough to be conclusive.
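For concreteness, the formal significance of that discrepancy, treating the two error bars as independent and Gaussian:

```python
import numpy as np

local, sig_local = 73.24, 1.74  # Riess et al. (2016), km/s/Mpc
cmb, sig_cmb = 67.8, 0.9        # Planck fit, km/s/Mpc

sigma = (local - cmb) / np.hypot(sig_local, sig_cmb)
print(f"difference: {local - cmb:.2f} km/s/Mpc -> {sigma:.1f} sigma")
# ~2.8 sigma: enough to be annoying, not enough to be conclusive.
```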

The widespread presumption is that there is a subtle systematic error somewhere. Who is to blame depends on what you work on. People who work on the CMB and appreciate its phenomenal sensitivity to cosmic geometry generally presume the problem is with galaxy measurements. To people who work on local galaxies, the CMB value is a non-starter.

This subject has a long and sordid history which entire books have been written about. Many systematic errors have plagued the cosmic distance ladder. Hubble’s earliest (c. 1930) estimate of H0 = 500 km/s/Mpc was an order of magnitude off, and made the universe impossibly young compared to what was known to geologists at the time. Recalibration of the distance scale brought the number steadily down. There followed a long (1960s – 1990s) stand-off between H0 = 50 as advocated by Sandage and 100 as advocated by de Vaucouleurs. Obviously, there were some pernicious systematic errors lurking about. Given this history, it is easy to imagine that even today there persists some subtle systematic error in local galaxy distance measurements.

In the mid-90s, I realized that the Tully-Fisher method was effectively a first approximation – there should be more information in the full shape of the rotation curve. Playing around with this, I arrived at H0 = 72 ± 2. My work relied heavily on the work of Begeman, Broeils, & Sanders and in turn on the distances they had assumed. This was a much larger systematic uncertainty. To firm up my estimate would require improved calibration of those distances quite beyond the scope of what I was willing to take on at that time, so I never published it.

In 2001, the HST Key Project on the Distance Scale – the primary motivation to build the Hubble Space Telescope – reported H0 = 72 ± 8. That uncertainty was still plagued by the same systematics that had befuddled me. Since that time, the errors have been beaten down. There have been many other estimates of increasing precision, mostly in the range 72 – 75. The serious-minded cosmologist always worries about some subtle remaining systematic error, but the issue seemed finally to be settled.

One weird consequence of this was that all my extensive notes on the distance scale no longer seemed essential to teaching graduate cosmology: all the arcane details that had occupied the field for decades suddenly seemed like boring minutiae. That was OK – about that time, there finally started to be interesting data on the cosmic microwave background. Explaining that neatly displaced the class time spent on the distance scale. No longer were the physics students stopping to ask, appalled, “what’s a distance modulus?”; now it was the astronomy students who were appalled to be confronted by the spherical harmonics they’d seen but not learned in quantum mechanics.

The first results from WMAP were entirely consistent with the results of the HST key project. This reinforced the feeling that the problem was solved. In the new century, we finally knew the value of the Hubble constant!

Over the past decade, the best-fit value of H0 from the CMB has done a slow walk away from the direct measurements in the local universe. It has gotten far enough to result in the present tension. The problem is that the CMB doesn’t measure the Hubble constant directly; it constrains a multi-dimensional parameter space that approximately projects onto a constant value of the product ΩmH0³, as illustrated below.

[Figure: Best-fit values of the Hubble constant and the mass density from CMB satellite experiments (labeled). The blue lines demarcate the trench allowed by the Planck data: only the narrow space between the lines is allowed; the region above and below is excluded. The best-fit values have simply marched along the floor of this trench over time.]
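A minimal sketch of that degeneracy: hold the product ΩmH0³ fixed at a Planck-like anchor point (the numbers below are illustrative, not actual chain output) and watch Ωm compensate as H0 slides.

```python
# Approximate CMB degeneracy: Omega_m * H0^3 ~ constant along the trench.
H0_anchor, Om_anchor = 67.8, 0.31  # Planck-like best fit, for illustration
K = Om_anchor * H0_anchor**3

for H0 in (65.0, 67.8, 70.0, 73.0):
    print(f"H0 = {H0:4.1f} -> Omega_m = {K / H0**3:.3f}")
# Moving to the local H0 = 73 drags Omega_m down to ~0.25: the fit can
# slide along the floor of the trench, but cannot climb its walls.
```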

Much of the progress in cosmology has been the steady reduction in the allowed range in the above parameter space. The CMB data now allow only a narrow trench. I worry that it may wink out entirely. Were that to happen, it would falsify our current model of cosmology.

For now the only thing that seems to be happening is that the χ² for the CMB data is ever so slightly better for lower values of the Hubble constant. While the lines of the trench represent no-go zones – the data require cosmological parameters to fall between the lines – there isn’t much difference along the trench. It is like walking along the floor of the Grand Canyon: exiting by climbing up the cliffs is disfavored; meandering downstream is energetically favored.

That’s what it looks like to me. The CMB χ² has meandered a bit down the trench. It is not obvious to me that the current Planck best-fit is all that preferable to that from WMAP3. I have asked a few experts what would be so terrible about imposing the local distance scale as a strong prior. I have yet to hear a good answer, so chime in if you know one. If we put the clamps on H0 it must come out somewhere else. Where? How terrible would it be?

This is not an idle question. If one can recover the local Hubble constant with only a small tweak to, say, the baryon density, then fine – we’ve already got a huge problem there with lithium that we’re largely ignoring – why argue about the Hubble constant if this tension can be resolved where there’s already a bigger problem? If instead, it requires something more radical, like a clear difference from the standard number of neutrinos, then OK, that’s interesting and potentially a big deal.

So what is it? What does it take to reconcile to Planck with local H0? Since this is an issue of geometry, I suspect it might be something like the best fit geometry of the universe becoming ever so slightly not-flat, at the 2σ level instead of 1σ.


While I have not come across a satisfactory explanation of what it would take to reconcile Planck with the local distance scale, I have seen many joint analyses of Planck plus lots of other data. They all seem consistent, so long as you ignore the high-L (L > 600) Planck data. It is only the high-L data that are driving the discrepancy (low L appear to be OK).

So I will say the obvious, for those who are too timid: it looks like the systematic error is most likely with the high-L data of Planck itself.

Ode to Vera

Vera Rubin passed away a few weeks ago. This was not surprising: she had lived a long, positive, and fruitful life, but had faced the usual health problems of those of us who make it to the upper 80s. Though news of her death was not surprising, it was deeply saddening. It affected me more than I had anticipated, even armed with the intellectual awareness that the inevitable must be approaching. It saddens me again now trying to write this, which must inevitably be an inadequate tribute.

In the days after Vera Rubin passed away, I received a number of inquiries from the press asking me to comment on her life and work for their various programs. I did not respond. I guess I understand the need to recognize and remark on the passing of a great scientist and human being, and I’m glad the press did in fact acknowledge her many accomplishments. But I wondered if, by responding, I would be providing a tribute to Vera, or merely feeding the needs of the never-ending hyperactive news cycle. Both, I guess. At any rate, I did not feel it was my place to comment. It did not seem right to air my voice where hers would never be heard again.

I knew Vera reasonably well, but there are plenty who knew her better and were her colleagues over a longer period of time. Also, at the back of my mind, I was a tiny bit afraid that no matter what I said, someone would read into it some sort of personal scientific agenda. My reticence did not preclude other scientists who knew her considerably less well from doing exactly that. Perhaps it is unavoidable: to speak of others, one must still use one’s own voice, and that inevitably is colored by our own perspective. I mention this because many of the things recently written about Vera do not do justice to her scientific opinions as I know them from conversations with her. This is important, because Vera was all about the science.

One thing I distinctly remember her saying to me, and I’m sure she repeated this advice to many other junior scientists, was that you had to do science because you had a need to Know. It was not something to be done for awards or professional advancement; you could not expect any sort of acknowledgement and would likely be disappointed if you did. You had to do it because you wanted to find out how things work, to have even a brief moment when you felt like you understood some tiny fraction of the wonders of the universe.

Despite this attitude, Vera was very well rewarded for her science. It came late in her career – she did devote a lot of energy to raising a large family; she and her husband Bob Rubin were true life partners in the ideal sense of the term: family came first, and they always supported each other. It was deeply saddening when Bob passed, and another blow to science when their daughter Judy passed away all too early. We all die, sometimes sooner rather than later, but few of us take it well.

Professionally, Vera was all about the science. Work was like breathing. Something you just did; doing it was its own reward. Vera always seemed to take great joy in it. Success, in terms of awards, came late, but it did come, and in many prestigious forms – membership in the National Academy of Sciences, the Gold Medal of the Royal Astronomical Society, and the National Medal of Science, to name a few of her well-deserved honors. Much has been made of the fact that this list does not include a Nobel Prize, but I never heard Vera express disappointment about that, or even aspiration to it. Quite the contrary, she, like most modest people, didn’t seem to consider it to be appropriate. I think part of the reason for this was that she self-identified as an astronomer, not as a physicist (as some publications mis-report). That distinction is worthy of an entire post so I’ll leave it for now.

Astronomer though she was, her work certainly had an outsized impact on physics. I have written before as to why she was deserving of a Nobel Prize, if for slightly different reasons than others give. But I do not believe that she died in any way disappointed by the lack of a Nobel Prize. It was not her nature to fret about such things.

Nevertheless, Vera was an obvious scientist to recognize with a Nobel Prize. No knowledgeable scientist would have disputed her as a choice. And yet the history of the physics Nobel prize is incredibly lacking in female laureates (see definition 4). Only two women have been recognized in the entire history of the award: Marie Curie (1903) and Maria Goeppert-Mayer (1963). Vera was an obvious woman to have honored in this way. It is hard to avoid the conclusion that the awarding of the prize is inherently sexist. Based on two data points, it is not improving with time: the gap since the last award to a woman (54 years and counting) rivals the 60-year gap between the two awards.

Why should gender play any role in the search for knowledge? Or the recognition of discoveries made in that search? And yet women scientists face antiquated attitudes and absurd barriers all the time. Not just in the past. Now.

Vera was always a strong advocate of women in science. She has been an inspiration to many. A Nobel prize awarded to Vera Rubin would have been great for her, yes, but the greater tragedy of this missed opportunity is what it would have meant to all the women who are scientists now and who will be in the future.

Well, those are meta-issues raised by Vera’s passing. I don’t think it is inappropriate, because these were issues dear to her heart. I know the world is a better place for her efforts. But I hadn’t intended to go off on meta-tangents. Vera was a very real, warm, positive human being. So what I had meant to do was recollect a few personal anecdotes. These seem so inadequate: brief snippets in a long and expansive life. Worse, they are my memories, so I can’t see how to avoid making it at least somewhat about me when it should be entirely about her. Still. Here are a few of the memories I have of her.

I first met Vera in 1985 on Kitt Peak. In retrospect I can’t imagine a more appropriate setting. But at the time it was only my second observing run, and I had no clue as to what was normal or particularly who Vera Rubin was. She was just another astronomer at the dinner table before a night of observing.

A very curious astronomer. She kindly asked what I was working on, and followed up with a series of perceptive questions. She really wanted to know. Others have remarked on her ability to make junior people feel important, and she could indeed do that. But I don’t think she tried, in particular. She was just genuinely curious.

At the time, I was a senior about to graduate from MIT. I had to beg permission to take some finals late so I could attend this observing run. My advisor, X-ray astronomer George Whipple Clark, kindly bragged about how I had actually got my thesis in on time (most students took advantage of a default one-week grace period) in order to travel to Kitt Peak. Vera, ever curious, asked about my thesis, what galaxies were involved, how the data were obtained… all had been from a run the semester before. As this became clear, Vera got this bemused look and asked “What kind of thesis can be written from a single observing run?” “A senior thesis!” I volunteered: undergraduate observers were rare on the mountain in those days; up till that point I think she had assumed I was a grad student.

I encountered Vera occasionally over the following years, but only in passing. In 1995, she offered me a Carnegie fellowship at DTM. This was a reprieve in a tight job market. As it happened, we were both visiting the Kapteyn Institute, and Renzo Sancisi had invited us both to dinner, so she took the opportunity to explain that their initial hire had moved on to a faculty position so the fellowship was open again. She managed to do this without making me feel like an also-ran. I had recently become interested in MOND, and here was the queen of dark matter offering me a job I desperately needed. It seemed right to warn her, so I did: would she have a problem with a postdoc who worked on MOND? She was visibly shocked, but only for an instant. “Of course not,” she said. “As a Carnegie Fellow, you can work on whatever you want.”

Vera was very supportive throughout my time at DTM, and afterwards. We had many positive scientific interactions, but we didn’t really work together then. I tried to get her interested in the rotation curves of low surface brightness galaxies, but she had a full plate. It wasn’t until a couple of years after I left DTM that we started collaborating.

[Figure: Figure made by Vera Rubin from her measurements of the rotation curves of low surface brightness galaxies. Published in McGaugh, Rubin, & de Blok (2001).]

Vera loved to measure. The reason I chose the picture featured at top is that it shows her doing what she loved. By the time we collaborated, she had moved on to using a computer to measure line positions for velocities. But that is what she loved to do. She did all the measurements for the rotation curves we measured, like the ones shown above. As the junior person, I had expected to do all that work, but she wanted to do it. Then she handed it on to me to write up, with no expectation of credit. It was like she was working for me as a postdoc. Vera Rubin was an awesome postdoc!

She also loved to observe. Mostly that was a typically positive, fruitful experience. But she did have an intense edge that rarely peeked out. One night on Las Campanas, the telescope broke. This is not unusual, and we took it in stride. For a half hour or so. Then Vera started calmly but assertively asking the staff why we were not yet back up and working. Something was very wrong, and it involved calling in extra technicians who led us into the mechanical bowels of the du Pont telescope, replete with steel cables and unidentifiable steam-punk looking artifacts. Vera watched them like a hawk. She never said a negative word. But she silently, intently watched them. Tension mounted; time slowed to a crawl till it seemed that I could feel, like a hard rain, the impact of every photon that we weren’t collecting. She wanted those photons. Never said a negative word, but I’m sure the staff felt a wall of pressure that I was keenly aware of merely standing in its proximity. Perhaps like a field mouse under a raptor’s scrutiny.

Vera was not normally like that, but every good observer has in her that urgency to get on sky. This was the only time I saw it come out. Other typical instrumental guffaws she bore in stride. This one took too long. But it did get fixed, and we were back on sky, and it was as if there had never been a problem in the world.

Ultimately, Vera loved the science. She was one of the most intrinsically curious souls I ever met. She wanted to know, to find out what was going on up there. But she was also content with what the universe chose to share, reveling in the little discoveries as much as the big ones. Why does the Hα emission extend so far out in UGC 2885? What is the kinematic major axis of DDO 154, anyway? Let’s put the slit in a few different positions and work it out. She kept a cheat sheet taped on her desk for how the rotation curve changed if the position angle were missed – which never happened, because she prepared so carefully for observing runs. She was both thorough and extremely good at what she did.

Vera was very positive about the discoveries of others. Like all good astronomers, she had a good BS detector. But she very rarely said a negative word. Rarely, not never. She was not a fan of Chandrasekhar, who was the editor of the ApJ when she submitted her dissertation paper there. Her advisor, Gamow, had posed the question to her, is there a length scale in the sky? Her answer would, in the modern parlance, be called the correlation length of galaxies. Chandrasekhar declined to consider publishing this work, explaining in a letter that he had a student working on the topic, and she should wait for the right answer. The clear implication was that this was a man’s job, and the work of a woman was not to be trusted. Ultimately her work was published in the proceedings of the National Academy, of which Gamow was a member. He had predicted that this is how Chandrasekhar would behave, afterwards sending her a postcard saying only “Told you so.”

On another occasion, in the mid-90s when “standard” CDM meant SCDM with Ωm = 1, not ΛCDM, she confided to me in hushed tones that the dark matter had to be baryonic. Other eminent dynamicists have said the same thing to me at times, always in the same hushed tones, lest the cosmologists overhear. As well they might. To my ears this was an absurdity, and I know well the derision it would bring. What about Big Bang Nucleosynthesis? This was the only time I recall hearing Vera scoff. “If I told the theorists today that I could prove Ωm = 1, tomorrow they would explain that away.”

I was unconvinced. But it made clear to me that I put a lot of faith in Big Bang Nucleosynthesis, and this need not be true for all intelligent scientists. Vera – and the others I allude to, who still live so I won’t name – had good reasons for her assertion. She had already recognized that there was a connection between the baryon distribution and the dynamics of galaxies, and that this made a lot more sense if the dark and luminous component were closely related – for example, if the dark matter – or at least some important fraction of it in galaxies – were itself baryonic. Even if we believe in Big Bang Nucleosynthesis, we’re still missing a lot of baryons.

The proper interpretation of this evidence is still debated today. What I learned from this was to be more open to the possibility that things I thought I knew for sure might turn out to be wrong. After all, that pretty much sums up the history of cosmology.

It was widely reported that Vera discovered dark matter or “proved” or “confirmed” its existence. I don’t think Vera would agree with this assessment, nor would many of her colleagues at DTM. I know this because we talked about it. A lot.

To my mind, what Vera discovered is both more specific and more profound than the dark matter paradigm it helped to create. What she discovered observationally is that rotation curves are very nearly flat, and continue to be so to indefinitely large radius. Over and over again, for every galaxy in the sky. It is a law of nature for galaxies, akin to Kepler’s laws for planets. Dark matter is an inference, a subsidiary result. It is just one possible interpretation, a subset of amazing and seemingly unlikely possibilities opened up by her discovery.

The discovery itself is amazing enough without conflating it with dark matter or MOND or any other flavor of interpretation of which the reader might be fond. Like many great discoveries, it has many parents. I would give a lot of credit to Albert Bosma, but there are also others who had early results, like Mort Roberts and Seth Shostak. But it was Vera whose persistence overcame the knee-jerk conservatism of cosmologists like Sandage, who she said dismissed her early flat rotation curve of M31 (obtained in collaboration with Roberts) as “the effect of looking at a bright galaxy.” “What does that even mean?” she asked me rhetorically. She also recalled Jim Gunn gasping “But… that would mean most of the mass is dark!” Indeed. It takes time to wrap our heads around these things. She obtained rotation curve after rotation curve in excess of a hundred to ensure we realized we had to do so.

Vera realized the interpretation was never as settled as the data. Her attitude (and that of many of us, including myself) is nicely summarized by her exchange with Tohline at the end of her 1982 talk at IAU 100. One starts with the most conservative – or at least, least outrageous – possibility, which at that time was a mere factor of two in hidden mass, which could easily have been baryonic. Yet much more more recently, at the last conference I attended with her (in 2009), she reminded the audience (to some visible consternation) that it was still “early days” for dark matter, and we should not be surprised to be surprised – up to, and including, how gravity works.

At this juncture, I expect some readers will accuse me of what I warned about above: using this for my own agenda. I have found it is impossible to avoid having an agenda imputed to me by people who don’t like what they imagine my agenda to be, whether they imagine right or not – usually not. But I can’t not say these things if I want to set the record straight – these were Vera’s words. She remained concerned all along that it might be gravity to blame rather than dark matter. Not convinced, nor even giving either the benefit of the doubt. There was, and remains, so much to figure out.

“Early days.”

I suppose, in the telling, it is often more interesting to relate matters of conflict and disagreement than feelings of goodwill. In that regards, some of the above anecdotes are atypical: Vera was a very positive person. It just isn’t compelling to relate episodes like her gushing praise for Rodrigo Ibata’s discovery of the Sagittarius dwarf satellite galaxy. I probably only remember that myself because I had, like Rodrigo, encountered considerable difficulty in convincing some at Cambridge that there could be lots of undiscovered low surface brightness galaxies out there, even in the Local Group. Some of these same people now seem to take for granted that there are a lot more in the Local Group than I find plausible.

I have been fortunate in my life to have known many talented scientists. I have met many people from many nations, most of them warm, wonderful human beings. Vera was the best of the best, both as a scientist and as a human being. The world is a better place for having had her in it, for a time.