There was a period when renormalization was considered a temporary remedy, one that luckily worked in a limited set of theories and was expected to disappear within a physically and mathematically better approach. P. Dirac called renormalization “doctoring numbers” and advised us to search for better Hamiltonians. J. Schwinger, too, underlined the necessity of identifying the implicit wrong hypothesis whose harm renormalization removes, in order to formulate the theory in better terms from the very beginning. Alas, many tried, but none prevailed.

In his article G. ‘t Hooft mentions the skepticism with respect to renormalization, but says that this skepticism is not justified. I read this article to understand his way of thinking about renormalization. I expected it to contain something original, insightful, clarifying. After reading it, I understood that G. ‘t Hooft had nothing to say.

Indeed, what does he propose to convince me?

Let us consider his statement: “*Renormalization is a natural feature, and the fact that renormalization counter terms diverge in the ultraviolet is unavoidable*”. That is too strong a claim to be true: an exaggeration without any proof. But probably G. ‘t Hooft has had no other experience in his research career.
“A natural feature” of what, or of whom? Let me be precise, then: it may be unavoidable in a stupid theory, but it is unnatural even there. In a clever theory everything is all right by definition. In other words, everything is model-dependent. Yet G. ‘t Hooft tries to create the impression that there may be no clever theory, that the present theory is good, ultimate, and unique.

“*The fact that mass terms in the Lagrangian of a quantized field theory do not exactly correspond to the real masses of the physical particles it describes, and that the coupling constants do not exactly correspond to the scattering amplitudes, should not be surprising.*”

I personally, as an engineering physicist, am really surprised: I am used to equations with real, physical parameters. What, then, do those parameters correspond to?

“*The interactions among particles have the effect of modifying masses and coupling strengths*.”

Here I am even more surprised! Who ordered this? I am used to the independence of masses and charges from interactions. Even in the relativistic case, the masses of the constituents are unchanged; what depends on interactions is the total mass, which is calculable. Now his interaction is reportedly such that it changes the masses and charges of the constituents, and this is OK. I used to think that masses and charges were characteristics of interactions, and now I read that, in fact, interactions modify interactions (or equations modify equations ;-)).
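For the record, the textbook relation behind the quoted claim can be written as follows (a sketch in my own notation, not taken from ‘t Hooft's article; \(\Lambda\) denotes an ultraviolet cutoff):

```latex
% Physical mass versus the bare mass parameter of the Lagrangian.
% The self-energy correction \delta m(\Lambda) diverges as the
% ultraviolet cutoff \Lambda is removed -- the very divergence
% that the counter-terms are then invoked to subtract.
m_{\mathrm{phys}} = m_0 + \delta m(\Lambda),
\qquad \delta m(\Lambda) \to \infty \ \text{as}\ \Lambda \to \infty .
```

It is precisely this cutoff-dependence of \(\delta m\) that the rest of this post objects to.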
To convince me even more, G. ‘t Hooft says that this happens “*when the dynamical laws of continuous systems, such as the equations for fields in a multi-dimensional world, are subject to the rules of Quantum Mechanics*”, i.e., not in everyday situations. What is so special about continuous systems? I, on the contrary, think that this happens every time a person is too self-confident and does something stupid, i.e., it may happen in everyday situations. Just try it if you do not believe me. Thus, when G. ‘t Hooft talks me into accepting perturbative corrections to the fundamental constants, I wonder whether he has checked his theory for stupidity (like the stupid self-induction effect) or not. I am afraid he has not. Meanwhile, the radiation reaction is different from the near-field reaction, so we make a mistake when we take the latter into account. This is not a desirable effect, which is why it is removed by hand anyway.
But let us admit he managed to talk me into accepting the naturalness of perturbative corrections to the fundamental constants. Now I read: “*that the infinite parts of these effects are somehow invisible*”. Here I am so surprised that I am screaming. Even a quiet animal would scream at these words. Because if they are invisible, why was he talking me into accepting them?
Yes, they are very visible, and yes, it is we who make them invisible, and this is called renormalization. It is our feature. Thus, it is not “somehow”, but due to our active intervention in the calculation results. And it works! To tell the truth, here I agree: if I take the liberty to modify something for my convenience, it will work without fail, believe me. But it would be better and more honest to call those corrections “unnecessary”, since we subtract them.

How does he justify this intervention of ours in our own theory's results? He speaks of bare particles as if they existed. If the mass and charge terms do not correspond to physical particles, then they correspond to bare particles, and the whole Lagrangian is a Lagrangian of interacting bare particles. Congratulations: we have figured out bare particles by postulating their interactions! What an insight!

No, frankly: P. Dirac wrote his equations for physical particles and found that this interaction was wrong, which is why we have to remove the wrong part by the corresponding subtractions. There were no bare particles in his theory project or in experiments. We cannot pretend to have guessed a correct interaction of bare particles. If one is so insightful and super-powerful, then let him write a correct interaction of physical particles; it is about time.

“*Confrontation with experimental results demonstrated without doubt that these calculations indeed reflect the real world. In spite of these successes, however, renormalization theory was greeted with considerable skepticism. Critics observed that “the infinities are just being swept under the rug”. This obviously had to be wrong; all agreements with experimental observations, according to some, had to be accidental.*”
That is a proof from a Nobelist! It cannot be an accident! G. ‘t Hooft cannot provide a more serious argument than that. In other words, he insists that in a very limited set of renormalizable theories our transformations of calculation results from wrong to right may succeed not by accident, but because this unavoidable-but-invisible stuff really exists in Nature. Then why not go farther? With the same success we can advance such a weird interaction that the corresponding bare particles will have a dick on the forehead to cancel its weirdness, and this shit will work, so what? Do they exist, those weird bare particles, in your opinion?

And he speaks of gauge invariance. Formerly it was a property of equations for physical particles, and now it has become a property of bare ones. Gauge invariance, relativistic invariance, locality, CPT, spin-statistics and all that are properties of bare particles, not of the real ones; let us face that truth if we take our theory seriously.

I like the interaction with counter-terms much better. First, it does not change the fundamental constants. Next, it shows the imperfection of our “gauge” interaction: the counter-terms subtract the unnecessary contributions. The cutoff-dependence of the counter-terms is much more natural, and it shows that we are still unaware of the right interaction; we cannot write it down explicitly, so at this stage of the theory's development we are still obliged to repair the calculation results perturbatively. In a clever theory, the Lagrangian contains only the unknown variables, not the solutions, but presently the counter-terms contain properties of the solutions, in particular the cutoff. Clearly, the theory is still underdeveloped.
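In textbook form, the counter-term scheme praised above looks like this (a sketch in my own notation, assuming a QED-like theory with cutoff \(\Lambda\); \(\delta m\) and \(\delta e\) are fixed order by order so that \(m\) and \(e\) remain the physical constants):

```latex
% The Lagrangian is written with the physical constants m and e,
% plus counter-terms that subtract the unwanted, cutoff-dependent
% contributions order by order in perturbation theory:
\mathcal{L} = \mathcal{L}_{\mathrm{phys}}(m, e)
            + \delta\mathcal{L}_{\mathrm{ct}},
\qquad
\delta\mathcal{L}_{\mathrm{ct}}
  = -\,\delta m(\Lambda)\,\bar\psi\psi
    \;-\; \delta e(\Lambda)\,\bar\psi \gamma^{\mu}\psi\, A_{\mu}
    \;+\; \dots
```

Note how \(\delta\mathcal{L}_{\mathrm{ct}}\) carries the cutoff \(\Lambda\), a property of the solutions, inside the Lagrangian itself, which is exactly the sign of an underdeveloped theory described above.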

No, this paper by G. ‘t Hooft is neither original nor accurate; that is my assessment.
