1. The long read
I've been meaning to write a more comprehensive reflection on naturalness here for a while now, ever since penning a summary in the introduction to my PhD Thesis (now conferred) and submitting a paper on the topic (now published). During my graduate studies, I spent a lot of time earnestly trying to understand the assumptions on which naturalness was predicated, and how those assumptions might be written down hierarchically as a quasi-nested set, in order to appreciate what a "strong" or "conservative" stance on naturalness would entail. I did not find this to be an easy task. Instead, I found the literature to be plagued with oversimplifications, hand-waving, a lack of distinction between physical and unphysical quantities, definitions which are imprecise and inconsistent, and a great many unmentioned and/or unconsidered assumptions. Eventually I began to piece together my own understanding, and it became clear that the subject is more nuanced than many seem to appreciate, deserving a more careful treatment than what is typical.
I offer this post as an attempt at a comprehensive summary of my current understanding, in the hope of getting people to at least step back and question what they thought they knew about naturalness. Throughout I will challenge conventional wisdoms. I am not intentionally trying to be provocative; it's just that I don't believe some of the things often said about naturalness actually pass muster, and I will try to make it clear why. The views are my own. The layperson will be able to follow for a while; however, I will not make a special effort to keep the later prose below graduate level. It is a long read, but it needs to be.
2. What do we mean by naturalness?
When we talk about naturalness we are talking about the required values of input parameters which enter a theory in order to explain observations (i.e. to basically reproduce the standard model at low energy). The descriptor "natural" is commonly used in two distinct senses in the literature.
- A theory may be called natural if the required dimensionless input parameters are of $\mathcal{O}(1)$. (If there are input parameters with mass dimension, then this criterion requires all those mass scales to be similar).
- A theory may be called natural if the required input parameters do not need to be very precisely specified.
3. What do we mean by "very precisely specified"?
Actually there is a not-so-subtle yet under-appreciated point to be made here, because there is a local and a global sense in which the parameters could be described as precisely specified. Both are used in the literature.
- In the local sense, a theory is natural if small perturbations around the required input parameters do not wildly change the observables.
- In the global sense, a theory is natural if the observables are not "improbable" when considered over the range of possible values the input parameters could have taken on.
It is tempting to think of the first sense as a weaker requirement than the second (I do tend to categorise it loosely as such), since it is easy to imagine a theory where a small but finite "island" of parameter space reproduces the low scale physics we observe; this would be a locally natural theory that is not globally natural. But it seems to me also possible, at least in principle, to have a globally natural theory that is not locally natural (think chaotic systems). So I do think these are in fact technically distinct senses and this should be acknowledged. I will use the terms "locally natural" and "globally natural" to refer to these two qualitative definitions henceforth where appropriate.
4. Why should a theory be natural, anyway?
I think it's fair to say that the primary goal of theoretical particle physics is to discover the higher scale theory (and intermediate theories) from which the standard model (plus gravity) derives. The secret hope is that it uniquely predicts the standard model at low energy, i.e. that it is "perfectly" natural by the above definitions (it may even have inputs of $\mathcal{O}(1)$, or none at all). Failing that, the next secret hope is that it (loosely) generically predicts something like the standard model, i.e. that it is in some sense globally natural. The idea, I guess, is that the standard model should not be an accident but instead the inevitable outcome of some unified mathematical description – if only we could figure it out.
Many physicists would like, or subjectively feel like, this should be the case. It would be a somehow "neat" outcome. However, from a logical point of view, there is no guarantee that nature is actually like this. It is absolutely possible that nature is ultimately described by a set of mathematical rules with some input parameters which just "are what they are," and which may even be unnatural according to the above definitions.
5. What people do
Nevertheless one is certainly free to make the assumption that the high scale physics is natural, optionally acknowledging the logical possibility that it is not, and use that assumption as a sort of guiding principle to constrain theory space. That is what people do.
6. What people actually do
What people actually do (at least in hep-ph) is use the assumption of naturalness to constrain theories which are defined at a higher scale than the standard model, but not at so high a scale as to be considered the high scale theory. That is, if you consider the standard model as the end of a chain of effective theories all the way up to the highest scale, people are checking for naturalness halfway up the chain.
Why is this useful? There seems to be an implicit assumption here. The assumption is that naturalness of the theories halfway up the chain implies something about the naturalness of the theories above. But this needn't be the case. Even if it were the case it might be only at a very high scale that the theories begin to look natural. It is a logical possibility that, when viewed one by one, the chain links actually realised in nature might appear as any combination of "natural" and "unnatural" as you work your way up. A natural effective theory can be embedded in an unnatural theory, and an unnatural effective theory can be embedded in a natural one. Indeed, the latter idea can be used as a get-out-of-jail-free card; it's essentially the trick exploited by multiverse proponents, where an unnatural observable theory is made to be natural by embedding it in a large enough ensemble of theories, then appealing to anthropics. We could turn a natural theory into an unnatural one using the same trick.
The point here is that we should acknowledge when we are testing naturalness some way up the chain and that this does not necessarily imply anything about what's above. Still, this does not preclude the possibility—and this is what theorists hope—that it may be the successful algorithmic way to climb the theory chain all the way to its natural end. It might be true in some sense that natural theories mostly beget natural theories.
7. Quantum corrections
Since the definitions of naturalness above involve observables and input parameters, we need to go into some technical detail to understand how parameters behave in quantum field theories. In particular, they are not constants. For instance, what is called the fine-structure constant $\alpha\approx 1/137$ changes value with energy scale, and even its value at a fixed scale depends on the renormalisation scheme (more on this later) and the loop order at which you are working. In other words, in quantum field theories, your parameters are "corrected" by quantum effects.
The nature of these quantum corrections is important for understanding exactly what kind of naturalness problem we're talking about. There are three general cases I want to highlight which represent different levels at which a theorist might worry.
- It might be that a parameter needs to be very precisely specified in a global sense, but nevertheless once it is specified it is stable under quantum corrections, because the corrections are proportional to that parameter.
- It might be that a parameter needs to be very precisely specified in a global sense, but nevertheless once it is specified it is stable under quantum corrections, because the corrections are sufficiently smaller than the parameter.
- It might be that a parameter needs to be very precisely specified in a global and local sense, since that parameter needs to cancel off comparatively large quantum corrections to many significant figures in order to realise the low scale observations.
8. The usual Higgs story
In the standard model the major naturalness worry is the Higgs mass-squared term $\mu^2 = -m_h^2/2 \approx -(88\text{ GeV})^2$. The usual introduction-slide story that is told is the following.
One-loop Feynman diagrams introduce a "quadratically divergent" quantum correction to this term, $$\delta\mu^2 \sim \frac{1}{(4\pi)^2} y_t^2 \Lambda^2,$$ which depends on the "cutoff scale" $\Lambda$. If $\Lambda$ is very large then this quantum correction needs to be cancelled away very precisely in order to realise the observed Higgs mass.
Ostensibly this sounds like a naturalness problem of the third kind as written just above. But we should ask: what does $\Lambda$ represent and what exactly is cancelling it away?
9. Input parameters in quantum field theories
To be true to the definition of naturalness we are working with, what we would actually like to do is see how the observables depend on the input parameters of the theory. So we can try to do just that; first we might think of taking the inputs as the parameters which enter the standard model Lagrangian, but we need to be careful, since quantum field theories are not that simple...
The parameters (and the normalisation of the fields themselves) which enter the Lagrangian of any quantum field theory are called "bare" quantities. If you try to calculate observables using these bare quantities you end up with a whole lot of infinities. Understanding and taming these infinities was an important problem of early quantum field theory. The key insight turned out to be the following: the bare quantities are not measurable. What is measured are the physical observables themselves, e.g. scattering cross-sections or decay rates, which are manifestly free from infinities. In order to proceed, from a calculational perspective, physicists learned to employ "regularisation" procedures to capture the divergences arising in the bare quantity calculations. Calculation results can then be made consistent with the finite physical observables by appending divergent counterterms to the bare quantities, which cancel with the divergences arising in the calculations. I think it is fair to say the modern interpretation is that the cancellation between divergences is just an unphysical intermediate artifact of the calculation procedure. Once the Lagrangian parameters are regularised (under some scheme) and rewritten in terms of physical observables, they are said to be "renormalised". These parameters are measurable.
10. The cutoff scale in the standard model
The quadratic divergence $\propto y_t^2 \Lambda^2$ in the standard model is due to a "cutoff regulator" which captures the divergent quantum correction to $\mu^2$. What it must be cancelled against is the bare $\mu_0^2$ contribution, in order to realise the renormalised parameter $\mu^2(m_t)\simeq -m_h^2/2$. What to make of this "unnatural" cancellation? In my opinion there is only one consistent interpretation: it is just the regularisation and renormalisation procedure. The cancellation has no physical significance, since the cutoff regulator scale $\Lambda$ has no physical significance, it is just the arbitrary scale at which we choose to renormalise, essentially just being used as a dummy variable or dictionary to convert between observables and renormalised Lagrangian parameters. Indeed, the very procedure itself mathematically demands that any observable you calculate does not depend on which $\Lambda$ you choose.
11. The standard model is natural
Again, to be true to the definition of naturalness we are working with, we want to see how the observables relate to the inputs. So what is the input parameter of interest? It can only be the renormalised parameter $\mu^2(\Lambda_h)$, defined at some high scale $\Lambda_h$ (where boundary conditions are specified), and not the bare parameter $\mu_0^2$, since after all I cannot measure $\mu_0^2$, and the regularisation procedure anyway forces me to choose a counterterm to realise the observed Higgs mass.
Next we must ask how the low scale parameter, $\mu^2(m_t)$, depends on the high scale input $\mu^2(\Lambda_h)$. An interesting implication (interesting enough for some Nobel Prizes, anyway) of the regularisation and renormalisation procedure is that the quantum parameters depend on energy scale according to the "renormalisation group equations". The renormalisation group equation for the Higgs mass-squared parameter in the standard model is $$\frac{d \mu^2}{d \log\Lambda_R} \approx \frac{6 y_t^2}{(4\pi)^2} \mu^2 ,$$ where $\Lambda_R$ is the renormalisation scale and the dependence of each parameter on $\Lambda_R$ is implied. If we take $y_t$ as a constant we can easily solve this to write $$\mu^2(m_t) = \mu^2(\Lambda_h) \left(\frac{m_t}{\Lambda_h}\right)^{\frac{6 y_t^2}{(4\pi)^2}}.$$ Even if we take $\Lambda_h\sim 10^{18}~\text{GeV}$ and $y_t\sim 1$, we have $\mu^2(m_t)\sim \mu^2(\Lambda_h)/4$. We see that the input parameter $\mu^2(\Lambda_h)$ has not at all needed to be precisely specified (in the local sense) to realise the observed Higgs mass (and in actuality the news is much better than a factor of 4, it is $\approx 1$, since the renormalised $y_t^2$ parameter shrinks with scale quite quickly).

What I am arguing here is that, when we only consider the parameters which have a physical significance, we can only conclude that the standard model is natural (in the local sense), since it is stable for small perturbations around the renormalised input parameter $\mu^2(\Lambda_h)$, i.e. we have $$\mu^2(m_t) \sim \mu^2(\Lambda_h).$$ Indeed this might have been guessed, since there is only one explicit physical scale in the theory, and no dynamic scales are generated from the electroweak scale up to the scale at which we expect the theory must be modified (which we'll come to shortly).
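As a quick numerical sanity check of the factor-of-4 estimate above, here is a minimal sketch (my own, not from any paper) of the constant-$y_t$ solution; the values of $m_t$, $\Lambda_h$ and $y_t$ are the illustrative ones used in the text.

```python
# Check of mu^2(m_t)/mu^2(Lambda_h) = (m_t/Lambda_h)^(6 y_t^2 / (4 pi)^2),
# the constant-y_t solution of the standard model RG equation quoted above.
import math

m_t      = 173.0   # GeV, roughly where the running is stopped
Lambda_h = 1e18    # GeV, scale where the boundary condition is set
y_t      = 1.0     # top Yukawa, held constant for this estimate

exponent = 6 * y_t**2 / (4 * math.pi)**2
ratio = (m_t / Lambda_h)**exponent

print(f"exponent = {exponent:.3f}")                 # ~ 0.04
print(f"mu^2(m_t) / mu^2(Lambda_h) = {ratio:.2f}")  # ~ 0.25, i.e. a factor of ~4
```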
12. A hierarchy problem
Nevertheless we have not addressed a sort of global naturalness problem. If one has $|\mu^2(\Lambda_h)| \lll \Lambda_h^2$ one might ask: why? In the standard model it is fair to argue that $\Lambda$ is an arbitrary regulator scale with no physical significance, $\Lambda_h$ is an arbitrary scale at which we specify the boundary condition, and that $\mu^2(\Lambda_h)$ is the only explicit mass scale in the theory, so why not? However many would argue that this is beside the point, since we expect that there is physics at a much higher scale, and it is actually this physics that we are concerned about. There seem to me to be two concerns here which are fundamentally distinct, although this distinction is not commonly made.
- The first concern is the violation of naive dimensional analysis.
- The second concern is that heavy new physics induces a naturalness problem for the Higgs mass.
13. The standard model with a hierarchy problem can be natural
What we should do is the following: write down the field theory with the high scale physics included, match this on to the effective field theory of the standard model at the threshold around $M$, and then write the Higgs mass-squared term $\mu^2(m_t)$ in terms of the input parameters of the high scale theory at scale $\Lambda_h>M$. When you do this you will find that an additional term appears in the $\mu^2$ renormalisation group equation above the scale $M$, i.e. $$\frac{d \mu^2}{d \log\Lambda_R} \sim \frac{1}{(4\pi)^2} y_t^2 \mu^2 + C M^2,$$where $C$ is some constant. There is also a threshold correction of similar order which appears when the theories are matched (in dimensionless schemes). The end result is you get something like $$\mu^2(m_t) \sim \mu^2(\Lambda_h) - C M^2 \log\left(\frac{\Lambda_h}{M}\right) + C_{thresh} M^2,$$where the first term is as in the pure standard model, the second term is the renormalisation group contribution, and the third term is the threshold correction contribution. This is why the Higgs has a potential naturalness problem: because it is not protected from corrections of this kind from any heavy new physics, and if $C M^2\gg \mu^2(m_t)$ then the input parameter $\mu^2(\Lambda_h)$ needs to be very precisely specified in order to realise the observed Higgs mass.
But this is exactly the point: it is $C M^2$ which is the quantity of interest, not $M^2$, and not (necessarily) anything to do with $y_t^2$. And what is $C$? Well it depends on the new physics: if it is a right-handed neutrino it is $\sim y_\nu^2 / (4\pi)^2$; if it is a particle with standard model gauge charges it is $\sim g^4/(4\pi)^4$; if it is another scalar it is $\sim \lambda_{HS}/(4\pi)^2$ (which could even be exactly zero in a technically natural way if the scalar is a gauge singlet); and if it is a stop (i.e. a top squark) it is $\sim y_t^2/(4\pi)^2$.
This brings us to a completely logical and obvious point: the correction to (and the naturalness problem for) the Higgs mass induced by any heavy new physics depends on the strength at which that new physics couples or "talks" to the Higgs. Therefore it is possible to have a field theory with a hierarchy problem, in the sense that new physics exists at a much higher scale than the electroweak scale, without having a naturalness problem (at least in the local sense), as long as that new physics couples sufficiently weakly to the Higgs.
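To put some rough numbers on the $C M^2$ discussion, here is a small sketch of my own; the couplings follow the schematic estimates listed above, while the masses $M$ are example values chosen purely for illustration.

```python
# Rough size of the heavy new physics contribution C * M^2 * log(Lambda_h / M),
# compared with |mu^2(m_t)| ~ (88 GeV)^2. The C values follow the schematic
# estimates in the text; the masses M are illustrative choices only.
import math

mu2_obs  = 88.0**2           # GeV^2
Lambda_h = 1e18              # GeV
loop     = 1 / (4 * math.pi)**2

cases = {
    # label:                      (C,                M in GeV)
    "RH neutrino, y_nu^2 = 1e-8": (1e-8 * loop,      1e7),
    "gauge-charged, g = 0.6":     (0.6**4 * loop**2, 1e10),
    "scalar, lam_HS = 0.1":       (0.1 * loop,       1e10),
    "stop-like, y_t = 1":         (1.0 * loop,       1e3),
}

for label, (C, M) in cases.items():
    delta = C * M**2 * math.log(Lambda_h / M)
    print(f"{label:30s}  delta mu^2 / |mu^2(m_t)| ~ {delta / mu2_obs:.1e}")
```

The weakly coupled or light cases land within an order of magnitude or two of the observed value even when $M$ is huge, while a strongly coupled heavy state overshoots it by many orders of magnitude, which is just the statement above in numbers.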
14. The standard model with the hierarchy problem can be natural
By the hierarchy problem I am specifically referring to the disparity between the electroweak scale and the scale at which we expect quantum gravitational effects become important, i.e. the Planck scale $\Lambda_{Pl}\sim 10^{18}~\text{ GeV}$. The hierarchy problem is often treated as synonymous with the naturalness problem for the Higgs. Certainly it is a hierarchy problem, at least as I have defined it, but does it imply a naturalness problem? I don't believe it necessarily does. Let me offer one argument.
The Planck scale, as far as I know, is only derived by dimensional analysis. The physical modes of gravity (or whatever they may be) may actually lie far below this scale, i.e. the theory of gravity might actually reside at a much lower scale (call it $\Lambda_G< \Lambda_{Pl}$). The apparent largeness of the Planck scale could then be explained by a small coupling strength (call it $C$) between standard model modes and gravitational modes, e.g. something like $\Lambda_{Pl} \sim \Lambda_G/C^2$. The physical correction to the Higgs mass should be proportional to the coupling strength, so something like $\delta \mu^2(m_t) \sim C^2 \Lambda_G^2$, which can be rewritten as $\delta \mu^2(m_t) \sim C^6 \Lambda_{Pl}^2$. Therefore, for sufficiently small $C$, there would be no naturalness problem.
To make a more precise analogy, consider if neutrino mass came from the Weinberg operator, $$ \frac{1}{\Lambda_N} \overline{(l_L)^c}\Phi\Phi^Tl_L .$$ The neutrino mass scale of $\sim 0.1\text{ eV}$ suggests $\Lambda_N\sim 10^{15} \text{ GeV}$ (which is why people immediately think of grand-unified theories). But in the simplest renormalisable neutrino mass model with right-handed neutrinos $N$ coupling to the Higgs via $y \overline{l_L} \tilde{\Phi} N$, we have $$\Lambda_N \sim \frac{M_N}{y^2}, \qquad \delta\mu^2(m_t) \sim \frac{y^2}{(4\pi)^2}M_N^2 \sim \frac{y^6}{(4\pi)^2} \Lambda_N^2.$$ If $y \lesssim 10^{-4}$, then the corrections to the Higgs mass are no larger than about 1 TeV. This small value for $y$ is even technically natural. The neutrino mass seesaw requirement $y^2=m_\nu M_N / v^2$ translates this to $M_N\lesssim 10^7\text{ GeV}$.
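The numbers here are easy to check. A minimal sketch (mine) using the seesaw relations above, with $v \approx 174\text{ GeV}$ and $m_\nu \approx 0.1\text{ eV}$:

```python
# Order-of-magnitude check of the seesaw example:
#   Lambda_N ~ v^2 / m_nu,   M_N ~ y^2 Lambda_N,   delta mu^2 ~ y^2 M_N^2 / (4 pi)^2
import math

v    = 174.0    # GeV, electroweak VEV as it enters the seesaw formula
m_nu = 1e-10    # GeV, i.e. ~0.1 eV
loop = 1 / (4 * math.pi)**2

Lambda_N = v**2 / m_nu                 # ~ 3e14 GeV, the Weinberg-operator scale
for y in (1.0, 1e-2, 1e-4):
    M_N   = y**2 * Lambda_N            # right-handed neutrino mass
    delta = loop * y**2 * M_N**2       # correction to mu^2, in GeV^2
    print(f"y = {y:.0e}:  M_N ~ {M_N:.1e} GeV,  |delta mu| ~ {math.sqrt(delta):.1e} GeV")
```

For $y \sim 10^{-4}$ this returns $M_N \sim 3\times 10^6\text{ GeV}$ and a correction of a few tens of GeV, consistent with the statements above.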
We see that in this simple neutrino mass example: dimensional analysis implies an apparent very large scale $\sim 10^{15}\text{ GeV}$ in the theory; this apparent large scale could just be due to the appearance of a technically natural small coupling; and even the existence of a physical large scale $\sim 10^7\text{ GeV}$ in the renormalisable theory calculably does not introduce a naturalness problem. I see no logical reason why something like this could not also be the case for gravity. Indeed, from what I can tell this is exactly what happens in gravitational theories like e.g. softened gravity, or large extra dimensions, and surely others.
15. The $y_t^2\Lambda^2$ correction
It is worth commenting that, in the standard model plus heavy new physics theory, the quadratic divergence $\propto y_t^2\Lambda^2$ induced by the top quark loop does not appear in the equation relating $\mu^2(m_t)$ to the renormalised input parameter $\mu^2(\Lambda_h)$. Only contributions from the heavy new physics appear. This is consistent with the interpretation of $\delta\mu^2 \propto y_t^2\Lambda^2$ as an unphysical contribution related to the regularisation procedure.
Often in the modern literature, proposed solutions to the Higgs naturalness problem (or a little naturalness problem) are motivated by showing that they contribute a term which cancels the $y_t^2\Lambda^2$ contribution. This may be a controversial statement, but I think this line of argument is unhelpful at best and misleading at worst. It mostly misses the real reason(s) why a given solution can solve a Higgs naturalness problem, and propagates the misconception that the quadratic divergence due to the standard model top quark is the problem which must be tamed by a negative contribution of the same order. The confusion is likely related to the fact that there is an actual physical contribution to the Higgs mass of the same form, $\sim y_t^2 M^2/(4\pi)^2$, coming from new particles of mass $M$ which couple to the Higgs with strength $y_t$ by construction in many of these solutions (e.g. stop squarks in supersymmetry or top partners in composite scenarios). Naturalness will demand that these particles be light because of this physical contribution, not because they need to save us from top quark divergences...
16. The $y_t^2\Lambda^2$ correction in supersymmetry
Let me be more explicit by appealing to the usual argument for why supersymmetry solves the Higgs naturalness problem, and why it needs to appear at a fairly low scale. It goes something like the following.
In a supersymmetric theory, for every fermionic quantum correction there exists an equal but opposite bosonic quantum correction (from the fermion's superpartner). For example, for the top quark we would have:$$\delta\mu^2 \sim \frac{1}{(4\pi)^2} y_t^2 \Lambda^2 - \frac{1}{(4\pi)^2} y_{\tilde{t}}^2 \Lambda^2,$$where the stop contribution cancels the top contribution by virtue of $y_t = y_{\tilde{t}}$. With no quantum correction to the Higgs mass there can be no naturalness problem.
Since supersymmetry must be broken, this cancellation is only good above the (effective) supersymmetry breaking scale (approximately the stop mass scale), call it $\Lambda_{\rm SUSY}$. Below this scale the (uncancelled) top quark contribution dominates, therefore the quantum correction is, in toto,$$\delta\mu^2 \sim \frac{1}{(4\pi)^2} y_t^2 \Lambda_{\rm SUSY}^2.$$ Naturalness demands that this contribution be not much larger than the Higgs mass, which implies that $\Lambda_{\rm SUSY}$ should be not much larger than the TeV scale.
This argument gives the right answer, i.e. that supersymmetry at or below the TeV scale can solve a Higgs naturalness problem, but it is totally misleading. The claim is that the top quark quadratic divergence is the problem that needs to be tamed, and that the taming is done by the stop. But this is not really the reason why supersymmetry solves a potential naturalness problem for the Higgs (nor the reason why there needs to be a stop).
17. Supersymmetry can solve a Higgs naturalness problem
(but probably not in the way you've been taught)
Let me offer an alternative argument which is fully consistent with the picture sketched above (in terms of the renormalised parameters). In this picture the real worry for the Higgs mass is corrections from heavy new physics which is sufficiently strongly coupled to the Higgs. In particular, a heavy particle of mass $M$ introduces a term $\sim C M^2$ into the renormalisation group equation for the Higgs mass, which results in similarly sized terms in the equation for the observed Higgs mass when written in terms of the high scale input parameters. What supersymmetry does for you is ensure that another particle exists (the supersymmetric partner) which contributes another term $\sim -C M^2$ to the renormalisation group equation, cancelling the first contribution. In fact it ensures that this property holds to all loop levels and at all energy scales. For this to work requires that every particle has a supersymmetric partner, not just the heavy particle. This is because, schematically, as you draw higher-loop Feynman diagrams involving the heavy particle you will eventually find one which also involves e.g. a top quark, and hence a loop-suppressed contribution to the Higgs mass $\propto y_t^2 M^2$, or else a contribution to some scalar quartic $\propto y_t^2$ which threatens stability under renormalisation group flow, etc. To cancel these contributions you find that you need a supersymmetric partner for the top. And so on for every other particle.

This remarkable result is embodied in the supersymmetric non-renormalisation theorem, which ensures that the renormalisation group equation for each parameter in a supersymmetric theory, including for the mass parameters, is proportional to the parameter itself, i.e. they are always technically natural. So in any supersymmetric theory, once you set a mass parameter at the high scale, it will always be safe from large quantum corrections from heavy new physics (including in theories with very heavy states which couple relatively strongly to the Higgs, like in grand unified theories). This is, for me, the much more profound and completely non-trivial reason why supersymmetry can solve a Higgs naturalness problem (and why there needs to be a stop).
Of course, in nature, supersymmetry is broken, so this can't be the entire story. If nature is fundamentally supersymmetric then the standard model superpartners must have large enough masses to have thus far evaded searches at e.g. the Large Hadron Collider. This is a potential problem for naturalness of the Higgs mass. For example, in the (softly broken) minimal supersymmetric model the Higgs mass parameter (named here $\mu_{\rm SM}$) is given at tree level by $$\mu_{\rm SM}^2 = |\mu|^2+\mu_{H_{u}}^2,$$ where $\mu$ is the supersymmetric $\mu$-parameter and $\mu_{H_{u}}^2$ is a soft-breaking parameter. Naturalness will loosely demand that the parameters on the right-hand side are not too large, so as not to require a precisely specified cancellation. But here $\mu$ is also the approximate scale for the higgsino masses, so this would imply light higgsinos. Beyond tree level there are radiative contributions primarily from the stop and gluino sectors, and renormalisation group and threshold corrections to $\mu_{H_{u}}^2$ of the order $\frac{y_t^2}{(4\pi)^2}m_{\rm stop}^2$. Naturalness will loosely demand that these corrections be sufficiently small as well. This is the argument for light stops (i.e. not because the stop is needed to cancel off a quadratic divergence from the top quark). Then gluinos correct the stop mass, so naturalness requires light enough gluinos, and so on. In any case, neither higgsinos, nor stops, nor gluinos have been seen at the LHC, threatening natural supersymmetry.
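To get a feel for the tree-level part of this, here is a toy numerical illustration of my own (the higgsino masses are example values, not fits to anything):

```python
# Toy illustration of the tree-level cancellation in mu_SM^2 = |mu|^2 + mu_Hu^2.
# As |mu| grows, mu_Hu^2 must cancel it ever more precisely to land on the
# observed (negative) value mu_SM^2 ~ -(88 GeV)^2.
mu_SM2 = -(88.0)**2   # GeV^2

for mu_higgsino in (200.0, 1000.0, 5000.0):     # GeV, illustrative higgsino masses
    mu_Hu2 = mu_SM2 - mu_higgsino**2            # required soft-breaking parameter
    tuning = mu_higgsino**2 / abs(mu_SM2)       # rough measure of the cancellation
    print(f"|mu| = {mu_higgsino:6.0f} GeV:  mu_Hu^2 = {mu_Hu2:.3e} GeV^2,"
          f"  cancellation ~ 1 part in {tuning:.0f}")
```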
I am no expert in Higgs mass calculations from high scale supersymmetric parameters, so I will stop here. Broken supersymmetry is not necessarily an impediment for supersymmetry as a solution to a potential Higgs mass naturalness problem. Still, if it is the solution, the supersymmetric standard model partners must be realised at a low enough energy for the above mentioned reasons. To know exactly how heavy they can be before violating naturalness requires actually calculating the Higgs mass in terms of the high scale input parameters, quantifying the dependence on those input parameters, and determining when/if those parameters are required to be "precisely specified". This exercise has become its own industry for papers in high energy physics. Unfortunately many approximations are made in the map from inputs to Higgs mass (e.g. constraining the parameter space), and there is not even an agreed upon approach for measuring the degree of naturalness. This causes its own set of problems, to be discussed after the next section, where I have a small bone to pick.
18. Supersymmetry does not solve the hierarchy problem
The minimal supersymmetric standard model does not explain the origin of the $\mu$ parameter (or $\mu_{H_u}^2$ for that matter). Therefore, and by my definition, one has not eliminated the hierarchy problem (i.e. why is $\mu \lll \Lambda_{Pl}$?), or any other hierarchy problem which might exist if heavy new physics is present (e.g. a grand unification scale). This has its own name: "the $\mu$ problem".
This is not to say the hierarchy problem cannot be answered within the supersymmetric paradigm. There are some very satisfactory possible explanations for the origin of $\mu$ in extended models. The point to make though is that you need such an explanation in addition to needing supersymmetry. For example, a common explanation is to have $\mu=0$ at high scale, imposed by some global or scale symmetry, only to be made non-zero when some dynamical scale is generated (e.g. by spontaneous symmetry breaking) at a comparatively high scale, which then induces a small $\mu$ term via some dimensionally suppressed operator. Something like this arises if the dynamical scale is otherwise decoupled and only communicated to $\mu$ via even higher scale physics (e.g. gravity or otherwise). This is an excellent example of how a fully natural hierarchy can exist in a quantum field theory. In this setup the $\mu$ parameter has its origin in the high dynamical scale, and its relative smallness (and the failure of naive dimensional analysis) is just explained by a suppressed coupling between the scales.
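For a rough feel of the numbers (an illustrative example of my own, not a specific model): if the dynamical scale is $\Lambda_{\rm dyn} \sim 10^{10}\text{ GeV}$ and it is communicated to $\mu$ only through a Planck-suppressed operator, then $\mu \sim \Lambda_{\rm dyn}^2/\Lambda_{Pl} \sim (10^{10}\text{ GeV})^2/10^{18}\text{ GeV} = 100\text{ GeV}$: an electroweak-sized $\mu$ arising naturally despite the enormous hierarchy of scales.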
19. The problems with measuring naturalness
So far our discussion of naturalness has been intentionally non-rigorous. We have talked about parameters being precisely specified, in both local and global senses, without really explaining what this might mean quantitatively. Indeed, many questions remain, for example: In what parameters should we measure perturbations ($x$, $x^2$, $\log(x)$, ...)? What constitutes an unacceptably large change in the observables? Do we measure perturbations in all parameters, including the already measured parameters, or just the unmeasured ones? Does the answer depend on the way in which I write down my theory? What is the "volume" of the original parameter space? Is it right to exclude theories which are non-perturbative just because I cannot calculate in them? And so on.
There is a tendency in the field to define naturalness measures intuitively. For example, the most popular is the Barbieri-Giudice fine-tuning measure, which compares fractional changes in an input (or inputs) against those in the Higgs mass-squared parameter (or the Higgs mass, or $Z$ mass, or similar): $$\Delta_{\mathcal{X}_I} = \left|\frac{\partial\log\mu^2}{\partial\log\mathcal{X}_I}\right|,$$ where $\mathcal{X}_I$ is some high scale "input" parameter, usually one of the Lagrangian parameters or a high scale observable. If this measure is sufficiently large then you would start to call the theory unnatural, since a small change in the input is inducing a large change in the observable. When simple intuitive measures return nonsensical results in special cases, or when they become too involved to fully calculate, they might be extended/patched/simplified in various ways which are again often motivated by intuitive arguments. Popular ones are, for example, $$\Delta = \max\limits_{\mathcal{X}_I}\left\{ \left|\frac{\partial\log\mu^2}{\partial\log\mathcal{X}_I}\right| \right\},\text{ and}$$ $$\Delta = \sqrt{ \sum\limits_{\mathcal{X}_I} \left(\frac{\partial\log\mu^2}{\partial\log\mathcal{X}_I}\right)^2 },$$ where the maximisation/sum is often performed over a subset of the parameters for simplicity. The end result of all this is a plethora of measures within the literature which can return very different results even for the very same model, and no clear "best" measure or even a methodology by which to categorise/compare the measures. Since they are motivated intuitively, it is not clear what the assumptions behind them are, and as such there is no clear way to address questions like those in the first paragraph of this section. Put simply, they are just rather arbitrary. A common excuse is the following: well, naturalness is a subjective concept anyway, why shouldn't our measures also be? This to me is entirely unsatisfactory – we can do better.
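To make this concrete, here is a toy numerical sketch of my own of the Barbieri-Giudice measure, applied to the schematic relation $\mu^2(m_t) \sim \mu^2(\Lambda_h) - C M^2 \log(\Lambda_h/M)$ from earlier; the masses and couplings are illustrative values only.

```python
# Barbieri-Giudice measure Delta = |d log mu^2(m_t) / d log mu^2(Lambda_h)| for
# the schematic relation mu^2(m_t) = mu^2(Lambda_h) - C * M^2 * log(Lambda_h / M).
# Here d mu^2(m_t)/d mu^2(Lambda_h) = 1, so Delta = |mu^2(Lambda_h) / mu^2(m_t)|.
import math

target   = -(88.0)**2              # GeV^2, the observed mu^2(m_t)
Lambda_h = 1e18                    # GeV
loop     = 1 / (4 * math.pi)**2

for M, C in [(1e3, loop), (1e7, 1e-8 * loop), (1e10, 1e-2 * loop)]:   # illustrative
    heavy    = C * M**2 * math.log(Lambda_h / M)
    mu2_high = target + heavy      # input value needed to reproduce the target
    Delta    = abs(mu2_high / target)
    print(f"M = {M:.0e} GeV, C = {C:.1e}:  Delta ~ {Delta:.1e}")
```

Heavy, strongly coupled new physics drives $\Delta$ up to enormous values, while light or very weakly coupled new physics keeps it modest.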
I make this claim because the last paragraph follows the arc of my own study and understanding of naturalness measures. In studying naturalness even within very minimal extensions of the standard model I found that the simple measures would sometimes fail to identify the fine-tuning in certain areas of parameter space which were clearly fine-tuned (regions which resembled what is known as a "Veltman throat"). Sometimes it can be argued that this goes away when the calculations are performed at higher loop order. Other times it remains. It doesn't take much brainstorming effort to intuitively identify a number of possible ways to extend the simple measures in order to capture the fine-tuning, but it is not at all clear which is the "proper" way. And when you're stuck, you go back to first principles. I asked: what are my assumptions? what exactly do I mean by naturalness? and so on, much of which resulted in what has already been written. The key turning point in my understanding of naturalness measures in particular was the discovery of a paper by Sylvain Fichet: "Quantified naturalness from Bayesian statistics"...
20. The Bayesian approach to measuring naturalness
Bayesian probability is a framework that has all the ingredients to capture and quantify the subjective part of naturalness and leave the rest up to the mathematics in a completely consistent way. One can write down quantities which act just like naturalness measures, but with the appealing property of having a precise meaning in terms of Bayesian statistics. This section is rather loose and a bit technical, but I want to provide a sketch in order to give you a taste of how this works, because it seems to me this is just the right way to think about naturalness.
Take some high scale model $\mathcal{M}$ of particle physics. The question of naturalness is reframed: how probable is the low scale standard model given this high scale model? Mathematically, that is, you are interested in$$Pr(\mathcal{O}=\mathcal{O}_{exp} | \mathcal{M}),$$where $\mathcal{O}$ is the set of observables. This is what is called the "Bayesian evidence" for $\mathcal{M}$ given a set of prior beliefs on the model inputs,$$B(\mathcal{M}) := Pr(\mathcal{O}=\mathcal{O}_{exp} | \mathcal{M}) = \int Pr(\mathcal{O}=\mathcal{O}_{exp} | \mathcal{I} )\; Pr( \mathcal{I} )\; d\mathcal{I},$$where $\mathcal{I}$ is the set of input parameters and $Pr( \mathcal{I} )$ the prior density over them. The Bayesian evidence is a very good place to start deriving a naturalness measure. All the subjectivity is captured and quantified in the prior densities (and implicitly the volume of the input parameter space). If you work through the mathematics (and assuming flat prior densities) you find that the Bayesian evidence becomes something like$$B(\mathcal{M}) \sim \frac{1}{\sqrt{\left|{\rm det}\left(JJ^T\right)\right|}},$$where $J$ is the matrix $J_{ij} = \partial\mathcal{O}_i/\partial\mathcal{I}_j$. (The origin of this term lies in coordinate transformations and the induced metric on the input space submanifold carved out by the requirement that observables match experiment).
Now, as argued above, we can define our input parameters with respect to the renormalised high scale model Lagrangian parameters, call them $\{\mathcal{X}_I\}$. What about our priors? Requiring that the result not be dependent on units or parameter rescalings suggests that we should take a flat prior in the $\log$ of the inputs (since $\partial\log\mathcal{X} = \partial\mathcal{X}/\mathcal{X}$); this is a sort of "maximally agnostic" prior. Similarly, we identify the set of observables with the logged set of renormalised standard model Lagrangian parameters, call them $\{\log\mathcal{X}_O\}$. The Bayesian evidence becomes$${\small
B(\mathcal{M}) \sim
\frac{1}
{\sqrt{\left|{\rm det}\left[
\left(
\begin{array}{ccc}
\frac{\partial\log\mathcal{X}_{O_1}}{\partial \log\mathcal{X}_{I_1}} & \cdots & \frac{\partial\log\mathcal{X}_{O_1}}{\partial \log\mathcal{X}_{I_n}} \\
\vdots & \ddots & \vdots \\
\frac{\partial\log\mathcal{X}_{O_m}}{\partial\log \mathcal{X}_{I_1}} & \cdots & \frac{\partial\log\mathcal{X}_{O_m}}{\partial\log \mathcal{X}_{I_n}}
\end{array}
\right)
\left(
\begin{array}{ccc}
\frac{\partial\log\mathcal{X}_{O_1}}{\partial \log\mathcal{X}_{I_1}} & \cdots & \frac{\partial\log\mathcal{X}_{O_1}}{\partial \log\mathcal{X}_{I_n}} \\
\vdots & \ddots & \vdots \\
\frac{\partial\log\mathcal{X}_{O_m}}{\partial\log \mathcal{X}_{I_1}} & \cdots & \frac{\partial\log\mathcal{X}_{O_m}}{\partial\log \mathcal{X}_{I_n}}
\end{array}
\right)^T
\right]
\right|}}
}$$ So you're now starting to see something which looks like the inverse of a generalised version of the Barbieri-Giudice measure. That is what you expect, since low evidence should correspond to high fine-tuning (or low naturalness). And this quantity is calculable (if the model is perturbative). The renormalisation group equations, together with the threshold corrections at the interfaces of effective field theories, define the map which relates the high scale inputs to the standard model observables at low scale. With this map in hand you can start turning the crank: you just calculate to the highest loop level you can (or are willing to) and solve the resulting coupled set of differential equations with boundary conditions to evaluate $\left|{\rm det}\left(JJ^T\right)\right|$ at different points in the input parameter space.
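As a toy illustration of my own (not the calculation from the paper), take a single observable, $\log\mu^2(m_t)$, and two logged inputs, $\log\mu^2(\Lambda_h)$ and $\log C$, related by the same schematic formula used earlier; the prior volume factors discussed next are set aside here.

```python
# Toy evaluation of the 1/sqrt(|det(J J^T)|) factor, with J the 1x2 matrix of
# log-derivatives of mu^2(m_t) with respect to log mu^2(Lambda_h) and log C,
# for mu^2(m_t) = mu^2(Lambda_h) - C * M^2 * log(Lambda_h / M).
import math

target   = -(88.0)**2             # GeV^2, the observed mu^2(m_t)
Lambda_h = 1e18                   # GeV
loop     = 1 / (4 * math.pi)**2

for M, C in [(1e3, loop), (1e10, 1e-2 * loop)]:    # illustrative new physics
    heavy    = C * M**2 * math.log(Lambda_h / M)
    mu2_high = target + heavy                      # input tuned to hit the target
    J1 = mu2_high / target                         # d log mu^2(m_t) / d log mu^2(Lambda_h)
    J2 = -heavy / target                           # d log mu^2(m_t) / d log C
    factor = 1 / math.sqrt(J1**2 + J2**2)          # 1/sqrt(det(J J^T)) for a 1x2 J
    print(f"M = {M:.0e} GeV:  evidence factor ~ {factor:.1e}")
```

The factor collapses by many orders of magnitude once reproducing the observed value requires a delicate cancellation between the two inputs, which is exactly the sense in which low evidence corresponds to high fine-tuning.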
How do you interpret the number you get? Actually it is not so straightforward to interpret the Bayesian evidence alone. In the above equations I left off volume factors and the integration over input space. This integration is complicated by the fact that points in the theory of interest may hit non-perturbative regimes (where we do not yet know how to calculate). I left these details off because there is a way to "bypass" the complications by taking ratios of Bayesian evidences. This can be done between models, within one model for different inputs, or for different priors, and can be set up to capture the local or global naturalness problem. Whatever is chosen, the interpretation of the ratio is a well-defined one: it is a Bayesian model comparison which captures the relative evidence for one model over another. For example, to study naturalness of the Higgs mass in the paper for which I was a co-author, we found it useful to compare the models where the Higgs mass-squared parameter was taken either as a low scale "phenomenological" input parameter, or as a high scale input parameter (with the same prior). This ratio had some nice properties where much of the matrix algebra cancelled to unity, an intuitive Barbieri-Giudice-like measure was reproduced in a relevant limit, and all the previously uncaptured areas of fine-tuning were smoothed over.
The particulars of that approach are beyond the scope of this blog post (and irrelevant, really). What I wanted to give was a flavour of the Bayesian approach to naturalness. Unfortunately this approach is still in the minority in the literature. There are probably two main reasons for this. The first reason is just that it is not as straightforward as writing down an intuitive measure. The second reason is uneasiness about dependence on the priors; the common objection is the following: clearly your answer depends on the priors, and how do you know that the high scale physics implies e.g. flat priors? But that's exactly the point – you don't and you can't. What you have done is derive a naturalness measure from an underlying framework with a well-defined meaning and with all your assumptions rolled into the priors in a clear, explicit (and optionally maximally agnostic) way. That is in my opinion much more transparent than just using a measure written down intuitively. And it makes clear that you are in fact making assumptions about the high scale physics. Naturally! For all these reasons I find the approach extremely satisfying.
21. A kind of conclusion
I don't know if anyone will read this post, but I hope that they do, and that they share it and challenge it. It seems ridiculous to me that the field of physics beyond the standard model, which (it is probably fair to say) has been motivated primarily by naturalness arguments for the past decades, is so loose and so full of misconceptions about naturalness. I guess it is just the foreseeable consequence of a lack of clear, physical, rigorous, well-defined, and agreed upon definitions. There may have been a time when the majority of the field understood the nuances, but this does not seem to be the case now, and certainly not at the graduate level, where our exposure usually amounts to the same parroted introduction slide explanation in terms of a top quark induced quadratic divergence. Not only does this undersell the more profound reason why e.g. supersymmetry is a potential solution, but it propagates the incorrect ideas that the top quark itself is the problem, and that it is the scale of new physics alone which sets the level of the problem (i.e. not involving the coupling strength).
At a time when the Large Hadron Collider is challenging the established wisdom on naturalness, surely it is time to take stock and sturdy up its foundations? I sure hope so, but maybe not. So it goes. If you agree, please share this post and start the dialogue.
When IBM's Watson gives its Nobel Prize speech in 2028, for its study of the computational complexity of anomalous dimensions in the string landscape, hopefully it will acknowledge the pioneering nature of your work :-)
Thanks for this. Somehow, in some corners of theoretical physics, we ended up in an age where vague informal talk dominates technical analysis. It is good to see you try to counteract this. I'd be interested in seeing what kind of reactions you will get.
For what it's worth, I have added a pointer to your text in the nLab page on naturalness, here ncatlab.org/nlab/show/naturalness. (This page is just a stub. If you feel like editing it, just hit the "edit" at the bottom.)
Thanks Urs, the exposure helps. I am interested in the reactions I will get as well! Hopefully it reaches some people with a healthy opposing view.
Being only young my exposure in the field is limited, but all I can know is what I can read and what I see at conferences, and what is clear to me is that people are routinely now interpreting the "top quark divergence" as the problem. This is the main thing that irks me. My guess is that, at some point, the $y_t^2\Lambda^2$ correction was widely understood to be a stand-in for actual physical corrections from new physics (or perhaps model structure). In time the story has been (inadvertently) twisted, and any assumptions that might need to hold to trust it as a stand-in have been lost along the way...
I am glad to have found this - I am a beginning grad student in astrophysics who's had some interest in field theory and particle physics for a while, and naturalness has always been a subject I have found frustrating due to the hand-wavy way it is described so frequently, especially at the low-level. I don't yet have the knowledge to follow all of your arguments as I'm only taking a first semester of QFT now, but I am looking forward to reading this again over the next year as I get more familiar with the subject. Thanks for writing it and posting it!
ReplyDelete...space-elevator (orbital station bike wheel-1g)... geostationary orbit, a huge "bike-wheel" is gyrating around its own axis for have 1g-centrifugal. Wheel held in place with 4 CABLES (each cable with a track for Train, for both trains crossing ↓↑) FORMING THE STRUCTURE OF A RHOMBUS♦ (minor diagonal of rhombus is the gyration-axis of the Station-Wheel)...rhombus below, the carbon nanotubes Track towards Earth...rhombus above, Cable towards a higher counterweight... if...WHEEL´s RADIUS = 250 mts... Wheel gyration´s Axis length = rhombus´s minor diagonal = ½ Wheel´s radius = 125 mts (axis could be a resistant hollow tube, e.g. 20 mts in diameter, with an adequate wall´s thickness, hollow which could serving how tank of anything, e.g. air, or tank of frozen water storage no totally full, for volume expansion from liquid to ice without tank, Axis, breaking, thawing using solar-heat)... Cable´s length of the rhombus´s side = Wheel diameter = 500 mts... Wheel´s ZONE-1g: habitable length = 1571 mts*50 mts wide (separation at both sides between Cable and Circumference of the gyratory Wheel, approx.= 5 mts, adjusting this separation installing an Axis with major or minor length, as much as shorter Axis...stronger and, for easy train´s passing, an angle nearer to 180º value in both rhombus´s-vertex, minor diagonal, where are the Big Soundless-♫-Bearings of Axis´s-insertion...a few Electrical Motors fed with Solar Energy maintain automatically the gyration-speed counteracting the slow braking by friction from the Bearings...besides of Bearings: main Maglev system also for the gyration and Axis´s supporting, leaving the Bearings only how secondary security system...with a slight roominess between each Bearing and its exterior subjection, into the roominess all around...little solid Pistons radial extensible and retractable... Axis´s subjection: extensible ON, with Bearings; extensible OFF, with Maglev. With a sufficient cables tension coming from the counterweight, if Axis resists: the rhombus structure is undeformable)*10 Floors, 9 with 25 mts in height each one; top with 25 - axis radius = 15 mts; gyrating 360º each 31 seconds, angular-speed = 11.61º/sec, linear-speed (tangential) = 182 kms/h... Station-Wheel GYRATION: AXIS IN PERPENDICULAR (90º) ORIENTATION TO THE ORBITAL TRAJECTORY...and so, while Station-Wheel follows its geostationary orbit, the Wheel does Not changes the spatial-orientation of its axis, and thus there are Not Precession forces actuating (and thus there is Not torsion´s force against Track)...
ReplyDelete(2)...space-elevator (orbital station bike wheel-1g)... Wheel with maneuver´s tangential rockets for gyration´s start, or...gyration emergency stop...and maneuver´s axial rockets for reorientation of Wheel´s axis, if it is necessary: because the Earth´s axis slightly and very slowly goes oscillating cyclically due to nutation and precession...that oscillation evidently produces transversal traction pulling of the Track towards aside from its anchoring on equator, thus would have an also slight pendular movement side to side and it would produce precession´s movements of the Wheel´s axis and thus a not wished torsion of the Track...but the system must supporting lateral charges, in the same orbital plane and thus without precessión´s problems on the Wheel´s axis, of acceleration against W→Track←E produced by the Coriolis effect when movement´s direction is perpendiculat to the gyration axis, that lateral charge is maximum on equatør (there, a vertical movement is perpendicular to Earth´s axis) and zero on poles (there, a vertical movement is parallel to Earth´s axis); upwards charge to West, downwards charge to East; the gyroscopic-rigidity contributes for maintaining the gyration-axis perpendicular to the orbital trajectory... When the Maglev Train slowly arrives (Train with one opened ╚╝ side in its technical maglev-zone train´s all along over Track for can passing for both tracks: main-Track/rhombus-track/again main-Track), using now cogwheels on Zipper-Track (zippers installed on the same Maglev-Track), Train stops in Geo 0g-Station placed over one Extreme of the Gyration-Axis, next to «Port for Spacecrafts»... Passengers disembark and entering into gyratory circular corridor, they take now the interior-elevator of one of the Wheel´s hollow-radius (also could be an exterior-elevator running for a simple zipper-radius), and tunnel "descending" till Hotel into the final Zone-1g...where while Station-Wheel goes turning, the immense O2 producer Hydroponics Garden receives a filtered Sun light...and there are Earth´s awesome views.
ReplyDelete(4b)...space-elevator (orbital station ramp: maglev superconductor)... Already enough far, 100 kms height climbing with maglev and retractable cogwheels by zipper-maglev-track, with some minor gravitational Earth attraction and already at outer-space cryogenic temperature (enough with Ct= -148 ºC by now...) under Train in sun´s protected shadow...superconductor magnets zero electric-resistance, Maglev super power: Cogwheels off...constant acceleration upwards...Maglev hyperspeed→... On come back to Earth, after of braking and parachute, if still so Train comes with speed´s excess is switched from already on ground horizontal-track to a lateral circuit Vertical of Big Rings-Track in spiral where alone stops by gravity.
ReplyDelete"... in nature, supersymmetry is broken ..." I have conjectured that string theory with the infinite nature hypothesis implies supersymmetry occurs in nature and the Big Bang is basically correct, but string theory with the finite nature hypothesis implies supersymmetry does not occur in nature and the Big Bang needs to be replaced by Wolfram's Reset. What empirical evidence refutes the following?
ReplyDeleteThe Seven Sagacities of String Theory with the Finite Nature Hypothesis: (1) There is a profound synergy between string theory with the infinite nature hypothesis and string theory with the finite nature hypothesis. (2) Milgrom is the Kepler of contemporary cosmology — on the basis of overwhelming empirical evidence (implying dark-matter-compensation-constant = (3.9±.5) * 10^–5). (3) The Koide formula is essential for understanding the foundations of physics. (4) Lestone's theory of virtual cross sections is essential for understanding the foundations of physics. (5) The idea of Fernández-Rañada and Tiemblo-Ramos that atomic time is different from astronomical time is correct. (6) There is genius in the ideas of Riofrio, Sanejouand, and Pipino concerning the hypothesis that the speed of light in a perfect vacuum steadily decreases as our universe ages. (7) Quantum information reduces to Fredkin-Wolfram information, which is controlled by Wolfram's cosmological automaton in a mathematical structure isomorphic to a 72-dimensional, holographic, digital computer.