Plate of Shrimp

I should perhaps explain a little about the title of the last post. It is perfectly obvious to me. But probably not to anyone else.

Our brains work in subtly different ways. One thing that mine does, whether I like it or not, is memorize lines and make obscure links between them. It is a facility I share with a few other people. I have had entire conversations by analogy through quotes with people who share this facility, much to the annoyance of those who don’t think this way and find it disturbingly freakish.

For example, Cole Miller and I occasionally send challenges to each other: lists of quotes we have to place in context. We’re both pretty good at it. As nerds of a certain age, there is a great deal of cultural overlap between us: we know the same quotes. Still, we are less likely to miss a quote than we are to discover ones that reveal small differences in our cultural knowledge.

In the movie Repo Man, there is a scene in which the goofiest character (of many eccentrics) goes on a tear about cosmic coincidences, giving an odd example: “Suppose you’re thinkin’ about a plate o’ shrimp. Suddenly someone’ll say, like, plate, or shrimp, or plate o’ shrimp out of the blue, no explanation.” How is this relevant? you might reasonably ask. Well, it has to do with title generation.

As it happens, as I was on my way to New York, this story about the infamous Saturday Night Live performance of the punk band Fear came to my attention. In it, they perform the song New York’s Alright If You Like Saxophones. So this was in my head as I traveled to New York. Fear contributed Let’s Have a War to the Repo Man soundtrack. And I do like saxophones – or at least, I used to play one.

Plate of shrimp.

Dark Matter halo fits – today’s cut

I said I would occasionally talk about scientific papers. Today’s post is about the new paper Testing Feedback-Modified Dark Matter Haloes with Galaxy Rotation Curves: Estimation of Halo Parameters and Consistency with ΛCDM by Harley Katz et al.

I’ve spent a fair portion of my career fitting dark matter halos to rotation curves, and trying to make sense of the results.  It is a tricky business plagued on the one hand by degeneracies in the fitting (there is often room to trade off between stellar and dark mass) and on the other by a world of confirmation bias (many of us would really like to get the “right” answer – the NFW halo that emerges from numerical structure formation simulations).

No doubt these issues will come up again. For now, I’d just like to say what a great job Harley did. The MCMC has become the gold standard for parameter estimation, but it is no silver bullet to be applied naively. Harley avoided this trap and did a masterful job with the statistics.
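
To make the fitting problem concrete, here is a minimal sketch of the kind of fit involved: the two NFW parameters (V200, c) sampled with a hand-rolled Metropolis algorithm against a synthetic rotation curve. To be clear, this is a toy, not the paper’s actual pipeline; the data, priors, and step sizes below are all illustrative assumptions.

```python
# Toy version of fitting an NFW halo to a rotation curve with MCMC.
# Not the paper's pipeline: the data, priors, and step sizes are
# illustrative assumptions.
import numpy as np

H0 = 70.0  # Hubble constant in km/s/Mpc (assumed)

def v_nfw(r_kpc, v200, c):
    """NFW circular velocity; r in kpc, v200 in km/s, c dimensionless."""
    r200 = 1.0e3 * v200 / (10.0 * H0)              # kpc, from V200 = 10 H0 R200
    x = r_kpc / r200
    m_x = np.log(1 + c * x) - c * x / (1 + c * x)  # enclosed-mass factor
    m_c = np.log(1 + c) - c / (1 + c)
    return v200 * np.sqrt(m_x / (x * m_c))

# Synthetic "observed" rotation curve (illustrative, not SPARC data).
rng = np.random.default_rng(1)
r = np.linspace(2.0, 30.0, 15)     # radii in kpc
v_err = np.full(r.size, 5.0)       # velocity uncertainties in km/s
v_obs = v_nfw(r, 120.0, 8.0) + rng.normal(0.0, v_err)

def log_post(theta):
    """Log posterior: flat priors times a Gaussian (chi-squared) likelihood."""
    v200, c = theta
    if not (10.0 < v200 < 500.0 and 1.0 < c < 50.0):
        return -np.inf
    return -0.5 * np.sum(((v_obs - v_nfw(r, v200, c)) / v_err) ** 2)

# Metropolis random walk.
theta = np.array([100.0, 5.0])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [2.0, 0.3])     # proposal step
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis acceptance
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])                 # discard burn-in
print("V200 = {:.0f} +/- {:.0f} km/s, c = {:.1f} +/- {:.1f}".format(
    samples[:, 0].mean(), samples[:, 0].std(),
    samples[:, 1].mean(), samples[:, 1].std()))
```

Even on clean synthetic data like this, the posterior tends to be badly degenerate between V200 and c when the observed curve does not extend far into the halo – exactly the sort of trap a naive analysis falls into.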

The basic result is that primordial (NFW) halos do not fit the data as well as those modified by baryonic processes (we specifically fit the DC14 halo model). On the one hand, this is not surprising – it has been clear for many years that NFW doesn’t provide a satisfactory description of the data. On the other hand, it was not clear that feedback models would provide something better.

What is new is that fits of the DC14 halo profile to rotation curve data not only fit better than NFW (in terms of χ²), they also recover the stellar mass-halo mass relation expected from abundance matching and are consistent with the predicted concentration-halo mass relation.

Figure 3: The stellar mass-halo mass relation (top) and concentration-halo mass relation (bottom) for NFW (left) and DC14 (right) halos. The data are from fits to rotation curves in the SPARC database, which provides homogeneous near-IR mass models for ~150 galaxies. The grey bands are the expectation from abundance matching (top) and simulations (bottom).

The relations shown in grey in the figure have to be true in ΛCDM. Indeed, SCDM predicted much higher concentrations – this was one of the many reasons for finally rejecting it. The non-linear relation between stellar mass and halo mass was not expected, but is imposed on us by the mismatch between the steep predicted halo mass function and the flat observed luminosity function. (This is related to the missing satellite problem – a misnomer, since the same discrepancy holds everywhere in the field, not just for satellites.)

It is not at all obvious that fitting rotation curves would return the same relation found in abundance matching. With NFW halos, it does not. Many galaxies fall off the relation if we force fits with this profile. (Note also the many galaxies pegged to the lower right edge of the concentration-mass panel at lower left. This is the usual cusp-core problem.)

In contrast, the vast majority of galaxies are in agreement with the stellar mass-halo mass relation when we fit the DC14 halo. The data are also broadly consistent with the concentration-halo mass relation. This happens without imposing strong priors: it just falls out. Dark matter halos with cores have long been considered anathema to ΛCDM, but now they appear essential to it.

And then there were six

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

– John von Neumann

The simple and elegant cosmology encapsulated by the search for two numbers has been replaced by ΛCDM. This is neither simple nor elegant. In addition to the Hubble constant and density parameter, we now also require distinct density parameters for baryonic mass, non-baryonic cold dark matter, and dark energy. There is an implicit (seventh) parameter for the density of neutrinos.

Now we also include the parameters of the power spectrum (σ8, n) among the cosmological parameters. These did not use to be considered on the same level as the Big Two. They aren’t: they concern structure formation within the world model, not the nature of the world model itself. But I guess they seem more important once the Big Numbers are settled.

Here is a quick list of what we believed, then and now:

 

Parameter        SCDM      ΛCDM
H0 (km/s/Mpc)    50        70
Ωm               1.0       0.3
Ωbh²             0.0125    0.02225
ΩΛ               0.0       0.7
σ8               0.5       0.8
n                1.0       0.96

 

There are a number of “lesser” parameters, like the optical depth to reionization. Plus, the index n can run, one can invoke scale-dependent non-linear biasing (a rolling fudge factor for σ8), and people talk seriously about the time evolution of antigravity – er, the dark energy equation of state.

From the late ’80s to the early ’00s, all of these parameters (excepting only n) changed by much more than their formal uncertainty or theoretical expectation. Even big bang nucleosynthesis – by far the most robustly constrained – suffered a doubling in the mass density of baryons. This should be embarrassing, but most cosmologists assert it as a great success while quietly sweeping the lithium problem under the carpet.

The only thing that hasn’t really changed is our belief in Cold Dark Matter. That’s not because it is more robust. It is because it is much harder to detect, let alone measure.

Two Numbers

Cosmology used to be called the hunt for two numbers. It was simple and elegant. Nowadays we need at least six. It is neither simple nor elegant. So how did we get here?

The two Big Numbers are, or at least up until the early ’90s were, the Hubble constant H0 and the density parameter Ω. These told us Everything. Or so we thought.

The Hubble constant is the expansion rate of the universe. Not only does it tell us how fast the universe is expanding, it sets the size scale through the Hubble distance-velocity relation. Moreover, its inverse is the Hubble time – essentially the age of the universe. A Useful and Important Number. To seek to measure it was a noble endeavor into which much toil and treasure were invested. Getting this right was what the Hubble Space Telescope was built for.

The density parameter measures the amount of stuff in the universe. Until relatively recently, it was used exclusively to refer to the mass density – the amount of gravitating stuff normalized to the critical density. The critical density is the over/under point where there is enough gravity to counteract the expansion of the universe. If Ω < 1, there isn’t enough, and the universe will expand forever. If Ω > 1, there’s more than enough, and the universe will eventually stop expanding and collapse. It controls the fate of the universe.
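
For concreteness, the critical density is ρ_crit = 3H0²/(8πG). A quick evaluation (the value of H0 here is just an assumed example):

```python
# Critical density of the universe: rho_crit = 3 H0^2 / (8 pi G).
# The H0 value is an assumed example.
import math

H0 = 70.0 * 1.0e5 / 3.086e24   # km/s/Mpc -> 1/s (1 Mpc = 3.086e24 cm)
G = 6.674e-8                   # Newton's constant in cm^3 g^-1 s^-2

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"rho_crit = {rho_crit:.1e} g/cm^3")   # about 9.2e-30 g/cm^3
```

That works out to roughly five or six hydrogen atoms per cubic meter – a breathtakingly small density to decide the fate of the universe.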

Just two numbers controlled the size, age, and ultimate fate of the universe. The hunt was on.

Of course, the hunt had been on for a long time, ever since Hubble discovered that the universe was expanding. For the first fifty years the measured value largely shrank, then settled into a double-valued rut between two entrenched camps: Sandage and collaborators found H0 = 50 km/s/Mpc while de Vaucouleurs found a value closer to 100 km/s/Mpc.

The exact age of the universe depends a little on Ω as well as the Hubble constant. If the universe is empty, there is no gravity to retard its expansion. The age of such a ‘coasting’ universe is just the inverse of the Hubble constant – about 10 Gyr (10 billion years) for H0 = 100 and 20 Gyr for H0 = 50. If instead the universe has the critical density Ω = 1, the age is just 2/3 of the coasting value.
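
These numbers are easy to check with a back-of-the-envelope calculation (rounded conversion constants, nothing more):

```python
# Hubble time (1/H0) for a coasting universe, and 2/3 of it for Omega = 1.
KM_PER_MPC = 3.086e19    # kilometers in a megaparsec
SEC_PER_GYR = 3.156e16   # seconds in a gigayear

def hubble_time_gyr(H0):
    """The Hubble time 1/H0 in Gyr, for H0 in km/s/Mpc."""
    return KM_PER_MPC / H0 / SEC_PER_GYR

for H0 in (50, 100):
    t = hubble_time_gyr(H0)
    print(f"H0 = {H0:3d}: coasting age {t:4.1f} Gyr, critical age {2 * t / 3:4.1f} Gyr")
# H0 =  50: coasting age 19.6 Gyr, critical age 13.0 Gyr
# H0 = 100: coasting age  9.8 Gyr, critical age  6.5 Gyr
```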

The difference between the empty and critical ages is not huge by cosmic standards, but it nevertheless played an important role in guiding our thinking. Stellar evolution places a constraint on the ages of the oldest stars. These are all around a Hubble time old. That’s good – it looks like the first stars formed near the beginning of the universe. But we can’t have stars that are older than the universe they live in.

In the 80s, a commonly quoted age for the oldest stars was about 18 Gyr. That’s too old for de Vaucouleurs’s H0 = 100 – even if the universe is completely empty. Worse, Ω = 1 is the only natural scale in cosmology; it seemed to many like the most likely case – a case bolstered by the advent of Inflation. In that case, the universe could be at most 13 Gyr old, even adopting Sandage’s H0 = 50. It was easy to imagine that the ages of the oldest stars were off by that much (indeed, the modern number is closer to 12 Gyr) but not by a lot more: ages < 10 Gyr with H0 = 100 were right out.

Hence we fell into a double trap. First, there was confirmation bias: the ages of stars led to a clear preference for who must be right about the Hubble constant. Then Inflation made a compelling (but entirely theoretical) case that Ω had to be exactly 1 – entirely in mass. (There was no cosmological constant in those days. You were stupid to even consider that.) This put further pressure on the age problem. A paradigm emerged with Ω = 1 and H0 = 50.

There was a very strong current of opinion in the 80s that this had to be the case. Inflation demanded Ω = 1, in which case H0 = 50 was the only sensible possibility. You were stupid to think otherwise.

That was the attitude into which I was indoctrinated. I wouldn’t blame any particular person for this indoctrination; it was more of a communal group-think. But that is absolutely the attitude that reigned supreme in the physics departments of MIT and Princeton in the mid-80s.

I switched grad schools, having decided I wanted data. Actual observational data; hands on telescopes. When I arrived at the University of Michigan in 1987, I found a very different culture among the astronomers there. It was more open minded. Based on measurements that were current at the time, H0 was maybe 80 or so.

At first I rejected this heresy as obviously insane. But the approach was much more empirical. It would be wrong to say that it was uninformed by theoretical considerations. But it was also informed by a long tradition of things that must be so turning out to be just plain wrong.

Between 1987 and 1995, the value of the Big Numbers changed by amounts that were inconceivable. None of the things that must be so turned out to be correct. And yet now, two decades later, we are back to the new old status quo, where all the parameters are Known and Cannot Conceivably Change.

Feels like I’ve been here before.

Falsifiability and Persuadability in Science

There has been some debate of late over the role of falsifiability in science. Falsifiability is the philosophical notion advocated by Popper as an acid test to distinguish between ideas that are scientific and those that are not. In short, for a theory to be scientific, it has to be subject to falsification. It must make some prediction which, were it to fail, would irrevocably break it.

A good historical example is provided by the phases of Venus. In the geocentric cosmology of Ptolemy, Venus is always between the Earth and the Sun. Consequently, one should only observe a crescent Venus. In contrast, in the heliocentric cosmology of Copernicus, Venus can get to the other side of the sun, so we should see the full range of phases.

[Diagram: the phases of Venus as predicted by the geocentric and heliocentric models]

Galileo observed the full range of phases when he pointed his telescope at Venus. So: game over, right?

Well, yes and no. In the strict sense of falsifiability as advocated by Popper, yes, geocentrism was out. That didn’t preclude hybrid pseudo-solutions, like the Tychonic model. Worse, it didn’t convince everyone instantaneously – even among serious-minded people not impeded by religious absolutism, this was just one piece of evidence to be weighed along with many others. One might have perfectly good reason to weigh other things more heavily. Only with the benefit of hindsight can we look back and say Nailed it!

Nevertheless, this is often taught to young scientists as an example of how it is supposed to work. And it should. Ellis & Silk make an eloquent defense of the ethic of falsifiability. I largely agree with them, even if they offer a few examples which I don’t think qualify. They were motivated to mount this defense in response to the case made against falsification by Sean Carroll.

Without commenting on the merits of either argument – both sides make good points – it occurs to me that there is also a human element. One of personality and proclivity, perhaps. It has been my experience that those most eager to throw Popper (and Occam’s razor) under the bus are the same people who fancy ornate and practically unfalsifiable theories.

The debate about standards is thus also a debate about the relative merit of ideas. Should more speculative ideas have equal standing with more traditional explanations? If you’re a conservative scientist, you say Absolutely not! If you like to engage in theoretical speculation, you say Hell yes! 

Clearly there is value to both attitudes. The more conservative attitude teaches us to refrain from turning our theories into Rube Goldberg machines that look really neat but teach us nothing. (Many galaxy formation simulations are like this.) On the other hand, some speculation is absolutely necessary to progress. Indeed, sometimes the most outrageous-seeming speculations lead to the most profound advances.

In short, our attitudes matter. There is no such thing as the perfectly objective scientist as portrayed by the boring character in a white lab coat. We are human, after all, and a range of attitudes has value.

In this context, it seems that there should be a value system among scientists that parallels the standard of falsifiability for theories. We shouldn’t just hold theories to this high standard. We should also hold ourselves to a comparably high standard.

I suggest that a scientist must be persuadable. Just as a theory should subject itself to testing and potential falsification, we, as scientists, should set a standard by which we would change our minds. We all have pet ideas and tend to defend them against contrary evidence. Sometimes that is the right thing to do, as the evidence is not always airtight, or can be interpreted in multiple ways.

But – at what point does the evidence become compelling enough that we are obliged to abandon our favorite ideas? It isn’t good enough that a theory is falsifiable. We have to admit when it has been falsified. In short, we should set a standard by which we could be persuaded that an idea we had previously believed was wrong.

What the standard should be depends on the topic – some matters are more settled than others, and require correspondingly more compelling evidence to overturn. The standard also depends on the individual: each of us has to judge how to weigh the various lines of evidence. But there needs to be some standard.

In my experience, there are many scientists who are not persuadable. They are not simply hostile to speculative ideas. They are hostile to empirical data that contradicts their pet ideas. Sadly, in many cases, they do not seem to be able to distinguish between data – what is a plain fact – and contrary ideas. One sees this in the “debate” on global warming all the time: solution aversion (we don’t want to stop burning oil!) leads to cognitive dissonance and the rejection of facts: we don’t want to believe that, so the data must be faulty.

Sadly, this sort of behavior is all too common among practicing scientists. I advocate that this be considered unscientific behavior. Just as a theory should be falsifiable, we must be persuadable.

It’d be nice if we could be civil about it too. Baby steps.

Some rethinking

So I’m back from this small, convivial meeting. Many thanks to hosts Priya Natarajan and Doug Finkbeiner for putting the program together. I find it especially useful when scientists working on the same problem from different fields come together in this fashion.  It provides fresh perspective.

I had wondered whether we were capable of genuine rethinking. The opening dinner brought up a wide ranging discussion of cartoon characters (you had to be there), which put me in mind of Lucy van Pelt’s quote from A Charlie Brown Christmas:

[Image: Lucy van Pelt at her psychiatric help booth]

“The mere fact that you realize you need help indicates that you are not too far gone.”

This could be said of theories as well as people. The predictable range of responses was on display – some of us really are too far gone – but I was encouraged that this was not typical, at least at this small gathering.

What I learned was that particle physics is complicated. Not that I didn’t know this, but in the context of dark matter models, things are rarely as clear cut as they are portrayed. For example, the constraints on dark matter from experiments at the LHC are often stated as hard limits, but these are based on very particular assumptions about how dark matter particles might be produced there. Since we don’t really know what the dark matter is (or even if it is really a particle and not some scalar field or GKW – God Knows What), there is a multiplicity of possibilities that are not quite so neatly described. Consequently, the hard limits are rarely that hard, once one drops the assumption of classic WIMP dark matter.

This is both good and bad. Good, in that there is indeed some rethinking to be done. Bad, in the sense that we might step into a bottomless pit. Which I suspect we’ve done already. We’ve already passed the natural cross section for WIMPs. Twice. The original prediction of 10⁻³⁹ cm² was falsified ages ago. The next natural cross section of 10⁻⁴⁴ cm² was crossed more recently. I was not alone in asking, when do we know to stop?

The next natural threshold is apparently 10⁻⁴⁹ cm². Around that level, there are second order loop processes that are unavoidable in any WIMP-like scenario. Or so the experts said. Something has to show up there. If not, we need something genuinely new. So that is when to stop with the current approach.

What `genuinely new’ might be is another matter. There was some encouraging rethinking on this point. But it still struck me as confined within traditional disciplinary boundaries. “We’re particle physicists, so we’ll make up a new particle.” I suspect we need to think outside this box.


Let me interrupt this rant to give a shout out to Jim Peebles, who showed up for this meeting on the eve of his 81st birthday. Still sharp as ever, he had lots of spot on questions for all the participants. Best of all, he gave a classic talk, to the effect of “yes, yes, we’ve solved all these large scale problems (many thanks to him!), but what about galaxies?” He showed actual pictures of all the bright, nearby galaxies listed by Tully, and went into some detail about how these did not really look much like what you’d expect in ΛCDM. A great theoretical cosmologist who looks at actual data and takes it seriously. The field could use more like him.

Rethinking the Dark Matter Paradigm

I travel to Cambridge, MA tomorrow to participate in the workshop Rethinking the Dark Matter Paradigm (I had nothing to do with the choice of title). I went to college at MIT in the ’80s, so it is a bit back to the future for me in space as well as time. There is a lot to rethink, or nothing at all, depending on who you ask. I’m curious to see if any of us are willing to think beyond I was right all along!

One of the compelling notions that emerged in the ’80s was non-baryonic dark matter. Baryons are the massive particles (protons & neutrons) of which normal stuff is made. It was well established by that time that the light elements were produced in the early universe by Big Bang Nucleosynthesis (BBN). It became clear in the ’80s that the mass density of normal stuff produced by BBN did not add up to the mass we needed to explain a whole host of astronomical observations, in both cosmology and galaxy dynamics. In short, Einstein’s General Relativity plus the baryons we could see did not suffice to explain the universe.

There were two obvious paths forward. Modify Einstein’s theory, or invoke unseen non-baryonic matter. The latter course seemed by far the more plausible. No one had a compelling reason to challenge Einstein’s highly successful theory. On the other hand, there were plenty of reasons in particle physics to imagine new particles outside the standard model, particularly in the hypothesized supersymmetric sector.

It was quickly realized that large scale structure would only grow if this new stuff were composed of slow moving, non-relativistic particles – a condition summarized as dynamically “cold.” Hence Cold Dark Matter (CDM) was born. Weakly Interacting Massive Particles (WIMPs) from supersymmetry were a good candidate to be the CDM.

Thus began the marriage of astronomy and particle physics, two fields divided by a common interest in dark matter and cosmology. The heated embrace of the honeymoon has long since worn off, to the point that some of us are ready to rethink the whole paradigm.

This is no small step. Though I’ve come to doubt the existence of CDM, I still feel very comfortable with it.  First love, and all. More importantly, it has been the one essential item in cosmology that has remained unchanged through the turbulent ’90s and on to today. But that is a longer story that will take many posts to tell.

For now, we’ll go see how much rethinking we’re willing to do.

Hello, World

Tell me, O muse, of that ingenious scientist who travelled far and wide after he had falsified the famous cosmology of Ptolemy. Many paradigms did he visit, and many were the theories with whose manners and customs he was acquainted; moreover he suffered much from the two body problem while trying to save the soul of science and raise his family safely at home. Tell me about all these things, O daughter of Jove, from whatsoever source you may know them.

Those familiar with the Classics will recognize the above text as a paraphrase of the opening lines of Homer’s Odyssey. It seems a fitting start to this blog, as my career in science has been a long voyage beset with many storms, complete with monsters worthy of mythology, both great and petty. I have, by chance of circumstance as much as choice of will, led an epic life.

This is a bold claim. Whether it is an accurate depiction of the stories I have to tell, I leave for others to judge. But I do have stories to tell. Many stories, such that it is impossible that they should all come out. Yet they clamor to be heard, and I find myself compelled to begin to tell them, at long last overcoming my discretion and better wisdom. Though in truth I have been at it since 1997, just not in blog form.

These words may sound odd as the preamble to a science blog. The reigning stereotype of a scientist is that of a dry arbiter of facts. This could not be further from the truth. We are passionate about the science we do. We care about the paradigms we develop, often much too deeply. We want them to be accurate depictions of the Truth, and all too often convince ourselves that they are.

This blog will cover many topics in science. Primarily it will focus on my own subjects of cosmology, astronomy, and astrophysics. These words mean different things to different people, so the sociology of science will also be a frequent topic, as will the philosophy and history of science.

There is a tendency to oversimplify the history of science in order to satisfy the human need for a compelling story told in a short time. This serves an essential function: both our patience and our lifespans are finite. It is impossible to relate all the lessons of the past in their full detail. Yet sometimes the oversimplification inverts the truth, and scientists are as susceptible to this human foible as anyone.

I have no plan for how the stories will progress. They boil up, wanting to be told. I expect they will tumble out piecemeal, unstuck in time and devoid of linearity. Sometimes I will discuss current events. Sometimes I will relate what seem to be ancient anecdotes. In no circumstance will I dumb it down. Indeed, one thing I expect to do is write brief summaries of refereed science papers, which even scientists rarely manage to read.

I am a practicing scientist. I am not a science journalist. I will attempt to be clear, but I am not trying to reach a mass audience nor explain things to the lowest common denominator. I make this distinction because I am what historians would call an original source.  I am not a reporter of science: I do it.