#### [SOLVED] Rigor in quantum field theory

By MBN

Quantum field theory is a broad subject and has the reputation of using methods that are mathematically dubious: for example, working with and subtracting infinities, or the use of path integrals, which in general have no mathematical meaning (at least not yet), etc. My question is a little vague, but I am interested in hearing what the status of rigor in QFT is. What is known to be mathematically rigorous and consistent, and what is known not to be? Any examples and references are welcome.

Added: Just to clarify, by rigorous I meant anything that a mathematician would find satisfactory. Also, my question wasn't for books with a rigorous (in some sense) approach, although those are welcome too. It was about specific examples of what is considered mathematically satisfactory and what is not. For example, the quantization of free fields satisfying the Klein-Gordon equation can be done rigorously, while there is no general mathematical definition of the Feynman path integral, and so on.

#### @Xiao-Gang Wen 2012-05-30 13:10:03

Here is my answer from a condensed matter physics point of view:

Quantum field theory is a theory that describes the critical point, and the neighborhood of the critical point, of a lattice model. (Lattice models do have a rigorous definition.)

So to rigorously define quantum field theories is to find their UV completion.

To classify quantum field theories is to classify all the possible critical points of lattice models, which is a very important and very hard project.

(One may replace "lattice model" in the above by "non-perturbatively regulated model")
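To make the point concrete, here is a minimal sketch (my own illustration, not part of the answer; the model and the parameter values `m2` and `lam` are arbitrary choices) of a lattice-regularized Euclidean scalar field theory. On a finite lattice the action, and hence the Boltzmann weight, is a perfectly well-defined finite-dimensional object:

```python
import numpy as np

def lattice_action(phi, m2=0.5, lam=0.1):
    """Euclidean phi^4 action on a periodic 2D lattice (spacing a = 1).

    Perfectly rigorous for any finite lattice; a continuum QFT would
    correspond to tuning (m2, lam) toward a critical point as the
    lattice spacing goes to zero.
    """
    kinetic = 0.0
    for axis in (0, 1):
        diff = np.roll(phi, -1, axis=axis) - phi   # forward difference
        kinetic += 0.5 * np.sum(diff**2)
    potential = np.sum(0.5 * m2 * phi**2 + lam * phi**4)
    return kinetic + potential

rng = np.random.default_rng(0)
phi = rng.normal(size=(8, 8))
print(lattice_action(phi))
```

Everything here is finite-dimensional probability; the hard part, as the answer says, is controlling the limit toward the critical point.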

#### @MBN 2012-05-31 09:09:33

Thanks! Can you point out a general exposition/overview article about lattice models and QFT, or any source that can give me an idea?

#### @user1504 2012-06-28 23:45:44

QFT's reputation for using methods which are mathematically unsound isn't really deserved these days. Certainly, not everything is under perfect analytic control, but the situation isn't really much worse than it is in fluid dynamics.

In particular, the 'subtraction of infinities' thing isn't really considered to be an issue anymore. The mathematicians who've looked at it recently (like Borcherds & Costello) have basically come to the conclusion that Wilsonian effective field theory resolves these difficulties. You can make all computations solely in terms of long-distance 'effective' quantities, which are the things left behind when physicists subtract infinities. Short-distance infinities therefore don't present a problem for defining correlation functions; there's nothing inconsistent about the basic path integral formalism.

This is really the same conclusion the constructive field theorists came to, studying lower dimensional examples in the 70s & 80s.

The challenge in rigorous QFT is dealing with infrared divergences. If your spacetime has infinite volume, then your field system can have degrees of freedom of arbitrarily large size. Coupling to these degrees of freedom can give you infinities. There are real mathematical problems here, but they're more like describing the solutions of an equation than describing the equation itself. (Really non-trivial things can happen. In QCD, for example, there is confinement: many of the observables you'd naively expect to be integrable with respect to the path integral measure -- like the observable representing a free quark or a free gluon -- aren't. Instead, the integrable observables are complicated mixtures of quarks and gluons, like protons, neutrons, and glueballs.) Most of the heavy lifting in Glimm & Jaffe, for example, comes not from constructing the 2d $\phi^4$ path integral measure, but from proving that its $n$-point correlation functions actually exist.

Naturally, this means that most computations of observable expectation values -- like in lattice gauge theory -- are not under tight analytic control. Convergence in simulation is mostly a matter of good judgement, for now.

Saying anything rigorously about this stuff almost certainly will require mathematicians to get a better grip on renormalization in non-perturbative settings (i.e., on the lattice). There are a good number of mathematicians actively working on this stuff. Geometers and topologists are getting more sophisticated about topological field theory, while the analysts have taken up statistical field theory.

#### @Estanislao 2012-06-28 16:11:03

I would like to point out that there are several different problems coming from different points of view on the subject. It would be very complicated to comment on all of them, so let me restrict to a particular one.

As a first remark, I have to state that nobody who works in mathematics has any doubt about what "rigorous" means. I will not comment on this, since it seems it was already explained in a clear manner.

Concerning your question, I would like to state that QFT is not a "unique" theory, but a bunch of several different ones which are more or less related to each other due to some intrinsic descriptions. For instance, the "behavior" and construction of the (real or complex) scalar field theory and of gauge theory are rather different. This is a kind of natural consequence of the fact that Classical Field Theory (ClFT) (which is completely rigorous up to some extent, even though it still contains several nontrivial problems) is also a collection of several different theories, which share a general geometrical description but have their own particular difficulties: as particular settings of ClFT we may obtain classical mechanics, electromagnetism, or even nonabelian gauge theory, etc. Let me also add that the general philosophy underlying ClFT appears, in some sense, as the only manner to construct relativistic extensions of the free situation, in contrast with classical mechanics, in which you may add any constraint to a free particle without breaking any fundamental principle of the theory. I'm only rephrasing what P. Deligne and D. Freed state in the first volume of "QFT and Strings for Mathematicians", which was already mentioned.

Concerning now the problem of the quantization of each of the particular settings you may consider in ClFT, there are several problems to deal with. Let me consider two different aspects of the problem: perturbative and nonperturbative QFT. We may say that the former is (morally) a shadow of the latter. Moreover, perturbative QFT (pQFT) can be developed in a mathematically rigorous manner in lots of situations. You may see the article by R. Borcherds on the arXiv, "Renormalization and quantum field theory" (even though some of the ideas were already present in other texts in the literature, and, in my opinion, they are lurking behind some of the constructions and proofs by the author; see for instance the articles by O. Steinmann, which were also considered by R. Brunetti, K. Fredenhagen, etc.). In this situation he defines in a rigorous manner an object which behaves like the Feynman measure ("via the Riesz theorem"), and he gives a very complete account of how pQFT should be described in several situations. The problem remains, however, of giving a correct formulation of nonperturbative QFT. This is a major problem, and only a few rigorous constructions up to dimension 2 (also dimension 3, but really few as far as I know; it would be nice to hear from the experts on this point) have been performed. You may see the book by J. Glimm and A. Jaffe, "Quantum physics – a functional integral point of view". In fact the major problem comes when trying to quantize gauge theory, as a subcollection of situations of QFT. The lack of such a general picture means in fact that we actually do not know what a Quantum Gauge Field Theory really looks like (or just is, if you want).
In particular (I state this because some people argue that the following is a consequence of having only a perturbative description), two major claims of physicists about the standard model (which are in some sense related), the mass gap and quark confinement, are not proved (the former in fact constitutes one of the Millennium Prize Problems). Needless to say, none of the physical heuristic arguments is clearly sufficient.

#### @Ron Maimon 2012-06-28 19:17:21

Mathematicians are very silly when it comes to "rigorous" regarding measure theory and this is why they are stuck. The problem starts when you have to axiomatize measure theory to define random picks. There should be no hard work involved in defining a constructive measure (a picking you can do on a computer, or a limit thereof), but there is.

#### @Estanislao 2012-06-29 14:04:46

The use of disqualifying adjectives is completely unnecessary and misleading, since the people involved here are in some sense irrelevant, meaning that what is important is the subject of discussion (i.e. rigor in QFT). On the other hand, the axioms of measure theory are completely clear and well known even to a 2nd/3rd year undergraduate student of mathematics. This is not the problem we are talking about. What is being mentioned here is the (apparent) incapability of (some of) the present mathematical tools to provide a complete and correct formulation of nonperturbative QFT in general.

#### @Ron Maimon 2012-06-30 02:15:22

The language is necessary to shame people to motivate change. The "measure theory axioms" are not the problem; the problem is that the measure theory involved needs axioms at all! You need a sigma algebra on the space, and there is no simple sigma algebra on the unknown space of field distributions a priori. This means that people define the measure in a stupid roundabout way, while there is a simple logic result (Solovay's theorem) that guarantees that this is no problem at all. Other problems remain, but the issue becomes one of probability analysis; the measure theory is trivial.

#### @Ron Maimon 2012-06-30 02:18:11

What I mean by that is the following: "A free quantum field theory: consider picking every Fourier transform value $f(k)$ of a random function to be a Gaussian with a (specific) variance $\sigma(k)$. This is the (imaginary time) quantum field." Did I just define free quantum fields? Not for mathematicians, because a random picking algorithm, no matter how convergent, does not define a measure. You need a sigma algebra to define a measure. You can't say "the measure of a set is the probability that this random function lands in the set" because this only makes sense in a Solovay universe.
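On a finite periodic lattice, the construction described in this comment is an ordinary, well-defined sampling procedure. Here is a rough sketch (my own illustration; the mass parameter `m` and the propagator variance $1/(k^2+m^2)$ are standard free-field choices, used here purely as an example of "a Gaussian with a specific variance"):

```python
import numpy as np

def sample_free_field(n=64, m=1.0, rng=None):
    """Draw one configuration of a free massive scalar field on a
    1D periodic lattice of n sites, by giving each Fourier mode a
    Gaussian weight with variance 1/(k^2 + m^2)."""
    rng = rng if rng is not None else np.random.default_rng()
    k = 2 * np.pi * np.fft.fftfreq(n)     # lattice momenta in [-pi, pi)
    var = 1.0 / (k**2 + m**2)             # free-field mode variances
    # A real field needs Hermitian Fourier coefficients; the easiest
    # route is to filter real white noise in momentum space.
    white = rng.normal(size=n)
    f = np.fft.fft(white) * np.sqrt(var)
    return np.real(np.fft.ifft(f))

field = sample_free_field(rng=np.random.default_rng(1))
print(field.shape)  # (64,)
```

For any finite `n` this is unproblematic probability; the dispute above is entirely about what happens in the continuum/infinite-volume limit.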

#### @Estanislao 2012-07-01 17:37:27

I'm not sure I understand what you are trying to say, because in my limited experience the Solovay theorem (further extended by Krivine, Shelah, etc.) is just a manner of stating that the construction of non-measurable Lebesgue sets depends on the axiom of choice. All these results are far from simple, in my opinion. In any case, this discussion seems to me to be somewhat misleading, because which measure-like objects are needed is not completely unknown: measures are in some sense too restrictive, and prodistributions seem to be much better-adapted objects, as studied by P. Cartier and C. DeWitt-Morette.

#### @Estanislao 2012-07-01 17:48:46

(2nd part) In fact, both authors have shown that the setting of prodistributions (generalizing the restricted situation given by measures or even promeasures) gives the physically desired explanations if we are working in the rather restrictive (but already interesting) situation of paths. I would like to stress, however, what I think is the main nontrivial problem: even though a measure-like theoretic formulation of npQFT may be given, it must still give an answer to the mass gap problem or quark confinement, which seems to be (really much) more than a straightforward computation.

#### @Ron Maimon 2012-07-02 15:08:45

I completely agree that the nontrivial problem is proving the properties of the measure, like the mass gap, and that what I am saying is focusing attention on something more primitive, and so might be misleading from the main point. But I am sure that any technique for proving the mass gap is one which shows the Euclidean theory has decaying correlation functions, and this is something like a probability coupling which relates the probability distribution on the fields in one regularization (lattice) to those in a coarser regularization (lattice), and takes the limit (renormalization).

#### @Ron Maimon 2012-07-02 15:12:50

I disagree that Solovay's theorem is "generalized" by Krivine and Shelah--- the later results serve as a smoke-screen to make the original result harder to digest and disguise its essential triviality (in the appropriate philosophy). The essence of Solovay is the method, not the result. Solovay's "random forcing" tells you that, aside from being inconsistent with the axiom of choice on uncountable sets, it is completely consistent to talk about random picks from a probability distribution. There is no contradiction in saying "I choose real number r randomly uniformly in [0,1]".

#### @Ron Maimon 2012-07-02 15:16:23

This idea is inconsistent with the existence of a nonmeasurable set, since a random real has a probability of landing in any pre-specified set. This probability defines the measure of every subset. If you define a process for picking a random scalar field on a lattice (like for free fields, pick each Fourier component as a Gaussian with the right variance), and then show that this process converges to picking a distribution in the limit of small lattices, you have defined a random-picking measure in the Solovay sense--- the measure of a set S is the chance that this random distribution lands in S.

#### @Estanislao 2012-07-04 19:44:54

I did not state that Shelah, Woodin, etc. generalized the result of Solovay, but that they extended his results, since they answered several questions that Solovay posed (or suggested) in the same article concerning the necessity of the hypothesis of inaccessible cardinals. There is a nice exposition by J. Raisonnier if you are interested.

#### @Ron Maimon 2012-07-05 01:22:00

I didn't mean to belittle Shelah's work (or Woodin's). Shelah's result that you need a large cardinal, though, is often used in a propagandistic way to make it seem as though Solovay's result is not 100% sound (the large cardinal involved cannot be controversial--- it is just a little stronger than asserting the existence of a model of ZFC), and this is one propaganda tool preventing mathematicians from accepting Solovay universes as the actual real universe. I live in a Solovay universe; I don't worry about sigma-algebras, and I don't want to have to say "topos" to justify probability.

#### @Ron Maimon 2012-07-05 01:29:21

The necessity of inaccessible cardinals is really saying that the two-step process Solovay used to construct the measure on all subsets is unavoidable, despite what Solovay suspected. First Solovay extended a countable ZFC universe using random reals, and this gives measure to all the sets in the previous universe, but adds a whole bunch of new non-measurable sets, since choice still holds in the extension, then he cuts out the non-measurable sets in the new model. The cutting down requires you to have access to a model of ZFC, and this is a large cardinal, but an uncontroversial tiny one.

#### @Ron Maimon 2012-07-05 01:34:48

The idea of the proof is simply that there is a logically consistent way of defining a new real at random (so not in any current countable model), and this real gives measure to everything previous, just by the probability of landing in each set (this much was already known to Cohen). Adding a new real adjoins lots of new sets, but you can choose a second random real, and again everything gets a measure, though you adjoin new sets again, etc. The point is that you can consistently end this process, something which is intuitively obvious, because the probabilistic picking is obviously consistent.

#### @Ron Maimon 2012-07-05 01:39:12

Once you know probability is consistent, so that you can choose things at random without contradiction, you can do probability on any set, even a set of distributions, just by defining an algorithm which picks distributions at random. Physicists use this implicitly all the time, for constructing the Ising model on infinite lattices (for example), something which is not obvious in mathematics, because you need a ridiculous sigma-algebra construction the moment the lattice is infinite. The baggage of measure theory is onerous; it blocks you from making intuitive arguments about field theory.
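For a finite lattice, the "random pick" alluded to here is completely unproblematic. A minimal Metropolis sampler for the 2D Ising model (a sketch of my own; the lattice size, coupling `beta`, and sweep count are arbitrary illustrative choices):

```python
import numpy as np

def metropolis_ising(L=16, beta=0.4, sweeps=200, rng=None):
    """Sample a 2D Ising configuration on an L x L periodic lattice.

    On a finite lattice the Gibbs distribution is an ordinary
    probability measure; the measure-theoretic machinery discussed
    above only becomes an issue in the infinite-volume limit.
    """
    rng = rng if rng is not None else np.random.default_rng()
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(L), rng.integers(L)
            # energy change of flipping spin (i, j), periodic boundaries
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] *= -1
    return spins

spins = metropolis_ising(L=8, sweeps=50, rng=np.random.default_rng(2))
print(abs(spins.mean()))  # magnetization per site
```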

#### @Vladimir Kalitvianski 2011-03-09 17:24:40

I think everything is sufficiently rigorous when you do it according to the math rules.

Cheating starts when they say: "The integral of the delta-function squared, although it looks infinite, must be determined from the experimental data". It is just funny.

Once I encountered a similar infinity in a simpler but exactly solvable problem. First, I wanted to do renormalizations (determining the integral value from experimental data) but fortunately I managed to choose a better initial approximation and decrease the perturbative corrections. So the problem is in the initial approximation. If it is good, then the perturbative corrections are small. Otherwise they are large.

I also found an explanation of why subtractions (discarding corrections) sometimes work. From my current point of view, QFT needs reformulating, since it is badly constructed. A reformulated QFT would not need its solutions repaired on the fly.
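The delta-squared divergence mentioned in this answer can be made concrete with a nascent delta function (a sketch of my own, not part of the answer): replacing $\delta(x)$ by a normalized Gaussian of width $\varepsilon$ keeps its integral equal to 1, while the integral of its square grows like $1/(\varepsilon\sqrt{2\pi})$ as $\varepsilon \to 0$.

```python
import numpy as np

def delta_eps(x, eps):
    """Nascent delta: a normalized Gaussian of width eps."""
    return np.exp(-(x / eps) ** 2) / (eps * np.sqrt(np.pi))

x = np.linspace(-1, 1, 200001)
dx = x[1] - x[0]
for eps in (0.1, 0.01, 0.001):
    d = delta_eps(x, eps)
    # integral of delta stays 1; integral of delta^2 grows as 1/(eps*sqrt(2*pi))
    print(eps, d.sum() * dx, (d**2).sum() * dx)
```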

#### @Tim van Beek 2011-03-09 09:15:05

First: There is no rigorous construction of the standard model, rigorous in the sense of mathematics (and no, there is not much ambivalence about the meaning of rigor in mathematics).

That's a lot of references that Daniel cited, I'll try to classify them a little bit :-)

Axiomatic (synonymous: local or algebraic) QFT tries to formulate axioms for the Heisenberg viewpoint (states are static, observables are dynamic). There are three sets of axioms known:

Roughly, the Wightman axioms describe how fields relate to observables, the Osterwalder-Schrader axioms are the Wightman axioms for Euclidean field theory, and the Haag-Kastler axioms dodge fields entirely and describe the observables per se. All three sets of axioms are roughly equivalent, meaning that the equivalence has been proven, sometimes with additional assumptions that physicists deem to be irrelevant.

"PCT, Spin and Statistics, and All That" was the first introduction to the Wightman axioms.

"Local Quantum Physics: Fields, Particles, Algebras" is an introduction to the Haag-Kastler axioms, as is "Mathematical Theory of Quantum Fields".

"Perturbative Quantum Electrodynamics and Axiomatic Field Theory" is a description of QED from the viewpoint of the Haag-Kastler axioms.

"Introduction to Algebraic and Constructive Quantum Field Theory" is about the quantization of given classical equations in the spirit of Haag-Kastler.

"Quantum Physics: A Functional Integral Point of View" uses the Osterwalder-Schrader axioms.

2D conformal field theory can be axiomatized using the Osterwalder-Schrader axioms, for example.

Functorial quantum field theory axiomatizes the Schrödinger viewpoint, see e.g. the nLab on FQFT.

This includes, for example, topological quantum field theories, which essentially describe theories with finitely many degrees of freedom. This branch has had a lot of impact in mathematics, especially with regard to differential geometry, and in particular the theory of 3D and 4D smooth manifolds. I'd put

Daniel S. Freed, Karen K. Uhlenbeck: "Geometry and Quantum Field Theory"

in this category.


Quantization of classical field theories: Note that the axiomatic approaches don't depend on classical field theories that need to be quantized; they open the door for a direct construction of quantum systems without a classical mirror. The Lagrangian approach to QFT is an example of an ansatz that starts with a classical field theory that needs to be quantized, for which different means can be used.

Ticciati: "Quantum Field Theory for Mathematicians" is actually a quite canonical introduction to Lagrangian QFT, without much ado.

There is a lot of material about the geometry of classical field theories and variants to quantize them, like "geometric quantization".

The book Welington de Melo, Edson de Faria: "Mathematical Aspects of Quantum Field Theory" is an example of this.

Much more advanced is "Quantum Fields and Strings: A Course for Mathematicians (2 vols)"

For the path integral there are two points of view:

• The path integral - along with the Feynman rules - is a bookkeeping device for a game called renormalization that lets you calculate numbers according to arcane rules,

• the path integral is a mathematical construct like a "measure" - but not a measure in the sense of measure theory known today - that needs to be discovered and defined appropriately.

AFAIK there has not been much progress with the second viewpoint, but there are people working on it, for example the authors of the book "Mathematical Theory of Feynman Path Integrals: An Introduction". You can find a lot more material about the mathematical theory of path integrals on the nLab.

#### @Alex Nelson 2012-06-28 17:42:04

I thought the Osterwalder-Schrader axioms describe the Euclidean path integral approach... not the Heisenberg picture. Also, there are ambiguities in quantizing a classical field (even in quantum mechanics there are ambiguities in the quantization procedure; see, e.g., the Groenewold-van Hove "no-go" theorem).

#### @Mr X 2011-03-09 00:20:47

> working with and subtracting infinities ... which in general have no mathematical meaning

is not really correct, and seems to contain a common misunderstanding. The technical difficulties of QFT do not come from infinities. In fact, ideas basically equivalent to renormalization and regularization have been used since the beginning of math--see, e.g., many papers by Cauchy, Euler, Riemann, etc. In fact, G.H. Hardy published a book on the topic of divergent series:

http://www.amazon.com/Divergent-AMS-Chelsea-Publishing-Hardy/dp/0821826492

There is even a whole branch of math called "integration theory" (of which things like Lebesgue integration are a special case) that generalizes these types of issues. So having infinities show up is not an issue at all; in a sense, they show up out of convenience.
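A tiny example of the kind of classical regularization meant here (my own illustration): Grandi's divergent series $1 - 1 + 1 - 1 + \cdots$ has Abel sum $1/2$, obtained by damping the terms with $x^n$ and letting $x \to 1^-$.

```python
def abel_sum_grandi(x, terms=100000):
    """Abel-regularized partial sum of Grandi's series at damping x < 1.

    For x close to 1 (and enough terms) this equals 1/(1+x), so the
    Abel sum of the divergent series is the limit 1/2.
    """
    return sum((-1) ** n * x ** n for n in range(terms))

for x in (0.9, 0.99, 0.999):
    print(x, abel_sum_grandi(x))  # approaches 1/2 as x -> 1-
```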

So the idea that infinities have anything to do with making QFT axiomatic is not correct.

The real issue, from a more formal point of view, is that you "want" to construct QFTs via some kind of path integral. But the path integral, formally (i.e., to mathematicians), is an integral (in the general sense that appears in topics like "integration theory") over a pretty pathological-looking infinite-dimensional LCSC function space.

Trying to define a reasonable measure on an infinite-dimensional function space is problematic (and the general properties of these spaces don't seem to be particularly well understood). You run into problems like all reasonable sets being "too small" to have a measure, worrying about measures of pathological sets, worrying about what properties your measure should have, worrying whether the "$\mathcal{D}\phi$" term is even a measure at all, etc...

At best, trying to fix this problem, you'd run into an issue like you have with the Lebesgue integral's definition: it defines the integral and gives you some mathematically interesting properties, but most of its utility is in letting you abuse the Riemann integral in the way you wanted to. Actually calculating integrals from the definition of the Lebesgue integral is not generally easy. This isn't really enough to attract the attention of too many physicists, since we already have a definition that works; knowing all of its formal properties would be nice, and would certainly tell us some surprising things, but it's not clear that it would be all that useful generally.

From an algebraic point of view, I believe you run into trouble with trying to define divergent products of operators that depend on renormalization scheme, so you need to have some family of $C^*$-algebras that respects renormalization group flow in the right way, but it doesn't seem like people have tried to do this in a reasonable way.

From a physics point of view, we don't care about any of this, because we can talk about renormalization, and demand that our answers have "physically reasonable" properties. You can do this mathematically, too, but the mathematicians are not interested in getting a reasonable answer; what they want is a set of "reasonable axioms" that the reasonable answers follow from, so they're doomed to run into technical difficulties like I mentioned above.

Formally, though, one can define non-interacting QFTs, and quantum mechanical path integrals. It's probably the case that formally defining a QFT is within the reach of what we could do if we really wanted to, but it's just not a compelling topic to the people who understand how renormalization fixes the solutions to physically reasonable ones (physicists), and the formal aspects aren't well enough understood that one could get the formalism "for free."

So my impression is that neither physicists nor mathematicians generally care enough to work together to solve this problem, and it won't be solved until it can be done "for free" as a consequence of understanding other stuff.

Edit:

I should also add briefly that CFTs and SCFTs are mathematically much more carefully defined, and so a reasonable alternative to the classic ideas I mentioned above might be to start with an SCFT, and define a general field theory as some kind of "small" modification of it, done in such a way as to keep just the right things well defined.

#### @Carl Brannen 2011-03-09 01:16:17

I've got Hardy's book and would quote it against what you've said (I just don't have it with me). Hardy was a good mathematician and knew that how you choose to "regularize" a divergent series drastically affects the resulting sum. The reason QFT gets away with it is that there's an underlying assumption that the functions involved are complex and analytic.
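Hardy's point about the scheme-dependence of such sums can be seen in a standard toy example (my addition, not a quote from the book): the alternating harmonic series converges to $\ln 2$, but summing the same terms in a different order converges to a different value, $\tfrac{3}{2}\ln 2$.

```python
import math

def alternating_harmonic(n_terms):
    """1 - 1/2 + 1/3 - 1/4 + ... summed in the usual order."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Same terms, rearranged: two positive (odd-denominator) terms,
    then one negative (even-denominator) term, per block."""
    total, odd, even = 0.0, 1, 2
    for _ in range(n_blocks):
        total += 1.0 / odd + 1.0 / (odd + 2) - 1.0 / even
        odd += 4
        even += 2
    return total

print(alternating_harmonic(100000), math.log(2))   # agree closely
print(rearranged(100000), 1.5 * math.log(2))       # a different limit
```

The "sum" of the terms depends on the summation scheme, which is exactly why a physical prescription is needed to pin down the answer.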

#### @Mr X 2011-03-09 01:21:57

Yes, that's part of what I meant by saying we want to fix our answers against "physically reasonable" solutions. Although complex analytic is actually too strong an analyticity property in general for us, and you do need some extra technical assumptions to make sure things are "physically reasonable." But worrying about properties in terms of analyticity is problematic from the infinite dimensional POV (think about the topological and measure theoretic properties of analytic subsets of these infinite dimensional LCSC spaces).

#### @Mr X 2011-03-09 01:24:05

Also, the space of paths you integrate over is the Brownian-motion-like one, whose paths aren't differentiable anywhere. But you still run into problems, because spaces other than the obvious one are pathological ;). I believe you can approach ODEs and PDEs from this point of view (I don't know if much has been done with this because it's a pretty perverse thing to do), but thinking about them brings up a whole host of problems that are only worse in this case from an analytic POV.

#### @lurscher 2011-07-14 15:17:43

very good summary; although I have to say that I find it extremely sad and discouraging when I hear bright physicists say stuff like "From a physics point of view, we don't care about any of this, because we can talk about renormalization, and demand that our answers have physically reasonable properties... but the mathematicians are not interested in getting a reasonable answer". This might be right from a numerical (maybe numerological?) perspective, but it's completely the wrong mindset to begin with. Mathematical consistency (or a clear pathway to it) is never a luxury. Avoiding it is.

#### @Alex Nelson 2012-06-28 17:39:47

Although, speaking as a mathematician, I feel I must correct you: the proper phrase is measure theory, not "integration theory".

#### @Daniel 2011-03-09 01:25:53

There are several books that approach QFT (and/or Gauge Theory) from different levels of 'mathematical rigor' (for some definition of "mathematical rigor" — that Moshe would approve ;-).

So, let me give you some sort of 'preliminary list'… it's by no means complete, and it's in no particular order either, but I think it may pave the way to further work.

In any case… there's much more out there, not only in terms of topics (renormalization, etc) but also in terms of articles, books and so on.

So, there's plenty of "mathematical rigor" in QFT (and String Theory, for that matter), including different 'levels' of it, that should please and suit varied tastes.

PS: There are other questions here that deal with this topic in one form or another, e.g., Haag's Theorem and Practical QFT Computations. So, don't be shy and take a look around. :-)

#### @Ron Maimon 2012-06-28 19:14:37

There is not "plenty" of mathematical rigor, as the rigorous work is utter crap, barely capable of repeating the stuff that was current in the 1950s. One central problem is that mathematicians are stupid with regard to defining measures, so that the field of "measure theory" is wrong. They need to reaxiomatize the field to make every set of reals measurable before they can do path integrals, and they won't do that, so tough luck.