Wherein I list some (mostly) recent happenings, ramble a bit, and provide links, in an order roughly determined by importance and relevance to particle physics. Views are my own. Content very definitely skewed by my own leanings and by papers getting coverage, and it may not even be correct. It is a blog after all...
- First point for today is hot off the press! The long baseline NOvA experiment has released a preliminary analysis of $\nu_e$ appearance in their beam. They exclude inverted ordering at >2σ, preferring normal ordering with $\delta_{CP}\approx 3\pi/2$! Slides here [pdf].
- Quite a few conferences recently: Second Conference on Heavy Ion Collisions in the LHC era and beyond (indico/hashtag), 34th International Cosmic Ray Conference (indico/hashtag), and the 2015 Meeting of the APS Division of Particles and Fields (indico/hashtag) which is still going.
- Here is a view from NASA's DSCOVR satellite (stationed at the L1 Lagrange point between the Sun and Earth) of the sunlit "dark side" of the moon.
Now something a little different...
[Note: some edits on 11th September to distinguish between a hierarchy problem and a naturalness problem].
I have been thinking a lot about the hierarchy problem and Higgs mass naturalness over this year. I have come to the (controversial?) conclusion that
the standard model with gravity does not obviously suffer from a naturalness problem. For my own benefit this week I wanted to jot down my thoughts, and also decided to share, as it seems to me to somehow be a widely misunderstood subject... [caveats from first paragraph still hold! and discussion/comments are welcome]...
The standard model Higgs potential is$$V_{SM} = \mu^2 \phi^\dagger \phi + \lambda (\phi^\dagger\phi)^2 .$$Since 2012, we have known that $\mu^2 \approx - (88\text{ GeV})^2$ (at low energies). I take the hierarchy problem to be:
why is $\mu^2$ so small compared to $M_{Pl}\sim 10^{19}\text{ GeV}$? I take a naturalness problem as:
$\mu^2$ is sensitive to very large ($\gtrsim (1\text{ TeV})^2$) and physically meaningful quantum corrections.
But let's first consider
the standard model without gravity...
Like all bare parameters in a quantum field theory, the unmeasurable bare parameter $\mu^2$ must be connected to a measurable friend, say $\mu^2(m_Z)$. We do this by renormalising the theory, i.e. we calculate quantum corrections, cancel them off with the bare parameter, and connect what we have left to some observable. In a cutoff regularisation scheme, the dominant one-loop quantum correction to $\mu^2$ comes from the top quark and goes something like$$\delta\mu^2 \sim \frac{1}{(4\pi)^2} y_t^2 \left( \Lambda^2 + ...\right),$$where $\Lambda$ is the regularisation cutoff scale. Renormalisation demands that this potentially large quantum contribution be cancelled off with the bare parameter in order to arrive at an electroweak scale $\mu^2(m_Z)$. One might worry about these "unnaturally" large cancellations. However, in the standard model without gravity this scale is completely arbitrary... it is unphysical! We should assign no physical significance to a large cancellation between an unmeasurable bare parameter and an unphysical cutoff -- this much we should have learned when we studied renormalisation. In short, we don't have to worry about quadratic corrections to $\mu^2$ that are $\propto \Lambda^2$: the standard model without gravity has only one explicit scale, so how could $\mu^2$ be corrected by anything other than $\mu^2$ itself? [Note: scale invariance is broken by quantum corrections and so this argument doesn't extend to dynamical scales: a little more later].
So what is physical? What exactly is the effect of the top quark on $\mu^2$? For me this becomes more clear in a dimensional regularisation scheme. The one-loop quantum correction to $\mu^2$ will go something like$$\delta\mu^2 \sim \frac{1}{(4\pi)^2}y_t^2\mu^2\left(\frac{1}{\epsilon}+ \text{ finite terms} +\ln\mu_R \right)$$where $\mu_R$ is a renormalisation scale and we take the limit $\epsilon\to 0$. The divergent term $\propto 1/\epsilon$ and the finite terms can be cancelled against a counterterm in the bare parameter. This is another way of saying
they are unphysical. However,
the term $\propto \ln\mu_R$ cannot always be absorbed and has an observable effect. Any observable must not depend on $\mu_R$, and (in a mass-independent renormalisation scheme) the counterterm must also be independent of $\mu_R$. After a little algebra this ends up implying that the $\mu^2$ parameter depends on the scale at which it is measured, $\mu^2=\mu^2(\mu_R)$, a familiar result of renormalisation in quantum field theories (see e.g. the 2004 Nobel Prize in Physics). In the standard model the top quark contribution turns out to be$$\frac{d\mu^2}{d\ln\mu_R}\approx\frac{1}{(4\pi)^2}6y_t^2\mu^2.$$This is called the renormalisation group equation (RGE) for $\mu^2$. You can see it's $\propto \mu^2$, which is just
another way of saying that the standard model without gravity has only one explicit scale. The only physical (and in-principle measurable) effect of the top quark on the $\mu^2$ parameter is to make it run with energy. And it doesn't run much! You can easily calculate that $\mu^2$ remains $\mathcal{O}(\mu^2)$ even up to a scale $\mu_R\sim 10^{19}\text{ GeV}$. That means that a small change in $\mu^2$ at some high scale results also in a corresponding small change in $\mu^2$ at a low scale, which is exactly the Barbieri-Giudice style fine-tuning requirement for a natural theory.
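You can verify that claim with a few lines of Python. This is only a back-of-the-envelope sketch: I solve the one-loop RGE above with a fixed $y_t\approx 0.94$ (an illustrative input of my own, neglecting the running of $y_t$ itself), which gives a simple power-law solution.

```python
import math

# One-loop top-quark contribution to the mu^2 RGE quoted above:
#   d(mu^2)/d(ln mu_R) ≈ 6 y_t^2 mu^2 / (4 pi)^2
# Holding y_t fixed, the solution is a power law:
#   mu^2(mu_R) = mu^2(m_Z) * (mu_R / m_Z)**gamma,  gamma = 6 y_t^2 / (4 pi)^2
y_t = 0.94          # illustrative top Yukawa near the weak scale
m_Z = 91.2          # GeV
mu2_mZ = -88.0**2   # GeV^2, the low-energy value quoted in the text

gamma = 6 * y_t**2 / (4 * math.pi)**2

def mu2(mu_R):
    """mu^2 at scale mu_R (GeV), one-loop top contribution only."""
    return mu2_mZ * (mu_R / m_Z)**gamma

ratio = mu2(1e19) / mu2_mZ
print(f"mu^2(10^19 GeV) / mu^2(m_Z) ≈ {ratio:.1f}")  # only an O(1) change
```

Even running all the way up to $10^{19}\text{ GeV}$, $\mu^2$ changes by only an $\mathcal{O}(1)$ factor.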
I like this RGE formulation of the hierarchy problem because it is physical: it is phrased in terms of an in-principle measurable parameter $\mu^2(\mu_R)$ and a quantifiable fine-tuning of that parameter at a high scale. If any perturbative new physics is added to the standard model one can just calculate its effect on the $\mu^2$ RGE and see if it results in fine-tuning at a high scale. In this sort of approach the requirement for a natural electroweak scale is just that $\frac{d\mu^2}{d\ln\mu_R} \lesssim (100\text{ GeV})^2$.
So in particular, and this is a fallacy I hear a lot, in the standard model without gravity
there are no top quark loop divergences that must be cancelled with new particles -- that has already been achieved for you with renormalisation.
I am not positive why this top loop quadratic divergence argument has gained traction, but I think the following is a reasonable possibility. In a generic new physics model, one fear is that the top quark, being strongly coupled to the Higgs, might also strongly couple to some other (higher) scale, and "transmit" that scale to $\mu^2$, i.e. one fears a quantum correction to $\mu^2$ that is $\propto y_t^2M_{NP}^2$. One would not have to worry if there were a new particle(s) which by some symmetry transmits an equal and opposite contribution to $\delta\mu^2$, such that they cancel. This is achieved in supersymmetry (SUSY) by the stop $\tilde{t}$. In dimensional regularisation the stop will give a $\delta\mu^2$ contribution which differs from the top contribution only by a negative sign and a factor $m_{\tilde{t}}^2/m_t^2$; they exactly cancel if $m_{\tilde{t}}=m_t$. But the fact that the divergent terms (to be associated with the quadratic divergences) cancel is beside the point, since they are unphysical anyway. What matters is the contribution to the $\mu^2$ RGE, and at one loop the top/stop contributions together will result in a term proportional to the mass splitting,$$\frac{d\mu^2}{d\ln\mu_R}\approx \frac{1}{(4\pi)^2}6y_t^2\frac{\mu^2}{m_t^2}\left(m_t^2-m_{\tilde{t}}^2\right).$$The fine-tuning argument now demands the RHS be $\lesssim (100\text{ GeV})^2$. Unless the stop is sufficiently light, $\mu^2(\mu_R)$ will run to very large values at large scales, creating a fine-tuning problem, or an unnatural theory. Now, note that if you identify the splitting with the cutoff scale $\Lambda^2$ (which makes sense if $m_t\ll m_{\tilde{t}}\sim M_{SUSY}$) then the fine-tuning condition gives roughly$$\frac{1}{(4\pi)^2} y_t^2\Lambda^2 \lesssim (100\text{ GeV})^2,$$which looks just like a quadratic cutoff correction due to the top.
Such an interpretation gives the right naturalness bound but for the wrong reasons... the correction need not have anything to do with a cutoff; it has everything to do with a strongly coupled heavy particle: the stop.
Without the stop there is no problem! Renormalisation takes care of the divergent term.
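For the curious, one can invert the top/stop fine-tuning condition above to see roughly where the naturalness bound on the stop mass lands (a quick Python sketch; the inputs $y_t\approx 0.94$ and $m_t=173\text{ GeV}$ are my own illustrative numbers):

```python
import math

# Naturalness condition from the top/stop RGE discussed above:
#   (1/(4 pi)^2) * 6 y_t^2 * (mu^2/m_t^2) * (m_stop^2 - m_t^2) <~ (100 GeV)^2
# Solve for the maximum stop mass.
y_t, m_t, mu2 = 0.94, 173.0, 88.0**2   # GeV units; |mu^2| from the text

bound = 100.0**2                        # GeV^2, the fine-tuning ceiling
delta_m2 = bound * (4 * math.pi)**2 / (6 * y_t**2 * mu2 / m_t**2)
m_stop_max = math.sqrt(delta_m2 + m_t**2)
print(f"naturalness wants m_stop <~ {m_stop_max:.0f} GeV")
```

The answer comes out at roughly a TeV, which is the familiar statement that natural SUSY wants stops around the TeV scale.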
There is one extra point to be covered to wrap up this conversation about the standard model without gravity. The standard model is not asymptotically free and therefore a very high dynamical scale is generated. In particular, the one-loop beta function for the $U(1)_Y$ gauge coupling is positive, and at $\mu_R\sim 10^{40}\text{ GeV}$ the coupling hits a Landau pole, i.e. it appears to run to infinity. So you might ask: does this introduce a dynamical scale which will correct $\mu^2$? Does it make an electroweak $\mu^2$ unnatural? The answer to this question is not obvious to me. Such a theory is clearly transitioning into a non-perturbative regime. I can't carry out a calculation here (nobody can). Certainly a hand-waving one-loop argument for contributions to $\mu^2$ no longer holds. Furthermore it is not even clear to me that the Higgs field is a sensible degree of freedom in such a regime. Anyway, the worry is moot, since the assumption of a flat spacetime at this scale is not even close to valid; one expects quantum gravitational states to come in by the Planck scale $M_{Pl}\sim 10^{19}\text{ GeV}$ at the latest, so about that...
So far we have argued that the standard model without gravity in flat spacetime suffers no obvious naturalness problem. Okay, but we
have measured another fundamental mass scale in physics: $M_{Pl}\sim 10^{19}\text{ GeV}$. [Let it be clear that $M_{Pl}$ is only a dimensional argument; it is defined as $1/M_{Pl}^2 := G_{N}$, where $G_{N}$ is Newton's constant which enters Einstein's equations for general relativity]. Should we be worried?
For
the standard model with gravity, the argument I often see goes something like the following: because of gravity, the standard model is at best an effective theory up to $M_{Pl}$, at which point we know new physics must come in, making the cutoff at $\Lambda^2\sim M_{Pl}^2$ physical and thereby making large cancellations unnatural. The argument has at least three holes.
(1) The appearance of an apparently large scale $M_{Pl}$ in an effective theory does not necessarily imply quantum states at a scale $M_{Pl}$ (see e.g. large extra dimensions).
(2) Even if it did, we don't have a quantum theory of gravity, and so we can't calculate the corrections to $\mu^2$ to convince ourselves there is a definite problem. Even naively, the one-loop flat spacetime correction is sure to be altered in some way.
(3) Perhaps the most important point:
the existence of some large mass quantum states coupled to the standard model (and therefore a large and physical cutoff to the standard model)
does not necessarily imply a naturalness problem.
Let me illustrate in particular points (1) and (3) above with an example: neutrino masses. Suppose you are convinced that neutrino masses are Majorana and generated by an effective dimension 5 Weinberg operator $ l\phi l\phi/\Lambda$ after electroweak symmetry breaking, so that$$m_\nu = v^2/\Lambda,$$where $v\approx 174\text{ GeV}$ is the Higgs vev. You then measure $m_\nu\sim 0.05\text{ eV}$ in experiment, suggesting $\Lambda \sim 10^{15}\text{ GeV}$. So the dimensional argument has led to an apparent hierarchy and you fear a naturalness problem. The argument then goes: if the effective Weinberg description of neutrino masses is true then it looks like the standard model is at best a good effective theory up to $10^{15}\text{ GeV}$, and you know the rest...
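The dimensional estimate above takes two lines to check (just inverting $m_\nu = v^2/\Lambda$ with the numbers quoted in the text):

```python
# Dimensional estimate from m_nu = v^2 / Lambda:
v = 174.0        # GeV, Higgs vev
m_nu = 0.05e-9   # GeV (i.e. 0.05 eV)
Lambda = v**2 / m_nu
print(f"Lambda ≈ {Lambda:.1e} GeV")  # ~6e14 GeV, i.e. ~10^15 GeV
```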
But now let's look at a UV-complete model: the Type I see-saw. Add a heavy right-handed neutrino $N$ of mass $M_N$, with a Yukawa term $y\ l \phi N$, and integrate it out to match onto the Weinberg operator; you find $$1/\Lambda \equiv y^2/M_N.$$The correction to $\mu^2$ can be easily calculated as$$\frac{d\mu^2}{d\ln\mu_R} \sim -\frac{1}{(4\pi)^2}y^2 M_N^2 \sim -\frac{1}{(4\pi)^2} m_\nu M_N^3 / v^2.$$Plug in the numbers yourself and see that for $M_N\lesssim 10^7\text{ GeV}$ there is no large correction to $\mu^2$ (there's not even a large finite correction). How can this be? The reason is that
as $M_N$ becomes smaller so does $y^2$, in order to reproduce the observed neutrino mass; both work together to lower the correction to $\mu^2$. For $M_N\sim 10^7\text{ GeV}$ you'll find $y \sim 10^{-4}$. One might get uncomfortable about a small coupling in the theory. However the limit $y\to 0$ increases the symmetry of the theory by decoupling $N$ (it also reinstates a $U(1)_L$ symmetry), and so corrections to $y$ can only be proportional to $y$ itself. [This is called a technically natural limit, and it is the very reason that we do not worry about a naturalness problem for the standard model fermion masses].
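If you'd rather not plug in the numbers by hand, here is the check in Python (a sketch dropping $\mathcal{O}(1)$ factors, exactly as in the estimate above):

```python
import math

# Type I see-saw correction from the text:
#   d(mu^2)/d(ln mu_R) ~ -(1/(4 pi)^2) * m_nu * M_N^3 / v^2
# together with y^2 = m_nu * M_N / v^2 from the matching condition.
v, m_nu = 174.0, 0.05e-9   # GeV
M_N = 1e7                  # GeV, the claimed borderline-natural mass

correction = m_nu * M_N**3 / v**2 / (4 * math.pi)**2   # GeV^2 per e-fold
y = math.sqrt(m_nu * M_N) / v
print(f"|d mu^2/d ln mu_R| ≈ ({math.sqrt(correction):.0f} GeV)^2,  y ≈ {y:.1e}")
```

You should find the correction sits right around $(100\text{ GeV})^2$ for $M_N\sim 10^7\text{ GeV}$, with $y\sim 10^{-4}$, as claimed.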
Anyway, I have just given an example where a dimensional argument makes you think that there is a very large scale $\sim 10^{15}\text{ GeV}$ in the theory, when a small coupling is just tricking you, and even the existence of a large scale $\sim 10^7\text{ GeV}$ in the renormalisable theory
calculably does not introduce a naturalness problem, thanks again to a small coupling (which is technically natural). These observations alone, even without point (2) I made above, are enough to convince me that the standard model with gravity does not necessarily have a naturalness problem.
So why do we often hear that it does? I am not positive, but I suspect that there are historical reasons for this. Grand unified models look so (subjectively) aesthetically pleasing that it is easy to want to believe in them. If you are set on a grand unified theory at $10^{15}\text{ GeV}$, then there are going to be strongly coupled heavy vector fields which correct $\mu^2$ in a calculable way,$$\frac{d\mu^2}{d\ln\mu_R} \sim \frac{1}{(4\pi)^2}g^2 M_{GUT}^2,$$
or at two loops. This necessarily leads to a naturalness problem (the "gauge hierarchy problem") unless you come up with some mechanism to cancel away these contributions. SUSY is a very nice mechanism for doing this (perhaps the nicest, but that is subjective) and as a bonus you also protect yourself from $M_{Pl}$ and anything else up there! But if you introduce it you have to have it come in at around the TeV scale, otherwise the new strongly coupled heavy particles (e.g. the stops) will create their own naturalness problem anyway...
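To see just how bad the GUT contribution is, one more quick numerical sketch (the unified coupling $g\sim 0.5$ is an illustrative value of my own):

```python
import math

# GUT-scale heavy vector contribution quoted above:
#   d(mu^2)/d(ln mu_R) ~ (1/(4 pi)^2) * g^2 * M_GUT^2
g, M_GUT = 0.5, 1e15   # illustrative unified coupling and scale, GeV
correction = g**2 * M_GUT**2 / (4 * math.pi)**2
print(f"correction ~ ({math.sqrt(correction):.1e} GeV)^2")  # vastly above (100 GeV)^2
```

That is some twenty-four orders of magnitude above the $(100\text{ GeV})^2$ naturalness ceiling, which is why grand unification without a protection mechanism like SUSY is in real trouble.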
And so we wait for LHC Run II to reconnect us with experiment and perhaps shed some light...