In Victor’s last talk, and the last talk of ST4, we turned to the computation of partition functions for gauge theories on the two-sphere.
The 2d chiral and vector multiplets are simply dimensional reductions of their counterparts in 4d, and they correspond respectively to matter and gauge degrees of freedom. We used the constraining power of extended supersymmetry in two dimensions to write down actions for the super-Yang-Mills and matter sectors, and noted that they are Q-exact. This allows us to use the localization arguments we reviewed in earlier lectures.
We are interested in solutions to the fixed-point equations (which are easy to read off, as the action is a sum of squares!), and when the appropriate reality conditions are imposed, we find that the scalars are required to be Cartan-valued, and further that the gauge field and auxiliary scalar are proportional to them. Dirac quantization then dictates that the gauge field flux is quantized.
Now that we have BPS solutions, we would like to compute 1-loop determinants of fluctuations about these solutions. Victor explained that there are many ways to do this, and the simplest of them is to decompose wavefunctions into spin spherical (or Wu-Yang) harmonics. In doing so, one encounters a generalized notion of spin: the usual spin minus a contribution coming from the quantum of flux piercing the sphere on which these gauge theories live. In the discussion that followed, it became clear that similar flux-attachment physics occurs when studying the quantum Hall effect.
Once the fields are decomposed into these “spin” spherical harmonics, it becomes straightforward to write down their determinants. When studying the 1-loop determinant of a matter multiplet, for example, supersymmetry ensures cancellations between factors contributing to the bosonic and fermionic determinants. There are, however, terms that do not cancel, and these come from chiral zero modes of the Dirac operator. Such cancellations are at the heart of any supersymmetric theory: all positive energy modes come in pairs — this follows from the SUSY algebra — but the zero modes are under no such algebraic restriction.
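As a small illustration of that last point, here is a Python counting sketch. It assumes the standard facts about Wu-Yang monopole harmonics: for monopole charge q the allowed angular momenta are l = |q|, |q|+1, … with degeneracy 2l+1, the two chirality components of a charge-q spinor on the sphere carry effective charges q ± 1/2, and nonzero Dirac eigenmodes pair one mode of each chirality at the same l. Whatever cannot be paired is a chiral zero mode, and the count reproduces the index |2q|:

```python
from fractions import Fraction

def monopole_levels(charge, lmax):
    """Monopole harmonics Y_{q,l,m}: allowed l = |q|, |q|+1, ...,
    each level carrying degeneracy 2l + 1."""
    q = Fraction(charge)
    levels = {}
    l = abs(q)
    while l <= lmax:
        levels[l] = 2 * l + 1
        l += 1
    return levels

def dirac_zero_modes(flux, lmax=30):
    """The two chirality components of a charge-q spinor on S^2 see
    effective monopole charges q + 1/2 and q - 1/2.  Nonzero Dirac
    eigenmodes pair one mode of each chirality at the same l; the
    unpaired remainder are the chiral zero modes."""
    q = Fraction(flux)
    plus = monopole_levels(q + Fraction(1, 2), lmax)
    minus = monopole_levels(q - Fraction(1, 2), lmax)
    return sum(abs(plus.get(l, 0) - minus.get(l, 0))
               for l in set(plus) | set(minus))

# matches the index-theorem count |2q| for each flux q
for q in [Fraction(1, 2), 1, Fraction(3, 2), 3]:
    print(q, dirac_zero_modes(q))
```

Note that all the unpaired modes sit in a single chirality, which is exactly the statement that the uncancelled factors in the 1-loop determinant come from chiral zero modes.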
We played the same game with the vector multiplet and, in the end, wrote down the most general partition function for a gauge theory on the sphere with matter. Victor concluded with some remarks on how the poles of the integrands that define the partition function encode information about non-perturbative sectors of the theory.
With this, all the main talks were concluded, and the audience broke into small groups that shared coffee, and boasts of who was more exhausted at the end of the workshop.
We ended a long, gruelling and satisfying two weeks with a four-hour discussion about future directions in the various sub-fields currently of interest in the community. We first listed a number of fields, then went through them one by one, asking people to describe the directions worth exploring over the next five years. We feel safe saying that this session was extremely rewarding for all who came: it helped the experts in these fields formulate what they thought was interesting, told others what was worth expecting, and told people who wanted to do something new what could be interesting. While we won’t try to summarise this entire discussion (for reasons of sanity), we did take a photo of the blackboard on which we listed all the important points so that you, lovely reader, can look at it:
And, with that, we officially brought to an end an amazing workshop that was our deepest honour to organise and attend.
In his third lecture, Victor (ICTS) introduced the philosophy behind localization techniques, along with a few examples to demonstrate it. We learned that localization lets us exactly compute the partition function, as well as expectation values of operators that sit in a certain multiplet of the theory.
He started with a prototype action, invariant under a fermionic symmetry Q, and wrote down its partition function,
In order to localize the partition function, one can deform the action by a Q-exact functional,
where t is an arbitrary parameter. Taking a derivative with respect to t gives an expression which is Q-exact, and whose expectation value therefore vanishes. So there is no harm in taking the t → ∞ limit, which allows us to analyze the saddle points of the action. Depending on the theory, the supersymmetry involved, the background, and so on, the space of classical solutions can reduce to just a point, in which case we can say that the theory localizes particularly well. We can also look at expectation values of certain operators in that background: in the large-t limit, these can be written as a classical value plus subleading corrections in 1/t. In a general action with more fermionic and bosonic fields involved, the ratio of the fermionic and bosonic fluctuation determinants gives what is called the one-loop determinant, which he promised to calculate in his last lecture with gauge theory examples. For that, he moved on to describing theories on curved backgrounds with an R-symmetric multiplet and a gravity multiplet,
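The t-independence can be seen in a zero-dimensional toy model (our example, not Victor’s): Z = (2π)^(-1/2) ∫ dx h′(x) e^(-h(x)²/2) depends only on the asymptotics of h, so a t-dependent deformation of h leaves Z unchanged, mirroring dZ/dt = 0 above, while at large t the integrand sharpens around the zeros of h. A quick numerical check:

```python
import numpy as np
from scipy.integrate import quad

def Z(t):
    """Toy 'partition function' Z(t) = (2*pi)^(-1/2) * Int h'(x) exp(-h(x)**2/2) dx
    with h(x) = x + t*x**3.  Changing t deforms the 'action' by an exact term,
    so Z should not depend on t at all."""
    h = lambda x: x + t * x**3
    dh = lambda x: 1 + 3 * t * x**2
    val, _ = quad(lambda x: dh(x) * np.exp(-h(x)**2 / 2), -np.inf, np.inf)
    return val / np.sqrt(2 * np.pi)

for t in (0.0, 0.5, 5.0):
    print(t, Z(t))   # Z stays at 1 for every t
```

Substituting u = h(x) shows analytically that Z = 1 for any t ≥ 0: the answer is fixed entirely by the behaviour of h at infinity, which is the zero-dimensional shadow of the localization argument.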
Here, the two graviphotons are related to the central charges, and their dual field strengths were introduced similarly. Then he discussed the rigid limit of the gravity multiplet. The gravitino variations are given in terms of the spinors and the graviphoton field strengths. For a particular choice of background fields, the gauge field can be written in terms of the spin connection, and with this choice the Killing spinor equations reduce to,
Then, in a somewhat more technical way, he discussed the twisted and anti-twisted superpotentials in this theory. In the twisted case, he calculated the magnetic flux.
Then the matter and gauge field variations were written down along with the Lagrangian.
In the flat-space limit, the YM Lagrangian reduces to the usual flat-space YM Lagrangian. He schematically discussed the Q-exact form of the above Lagrangians, and then we dispersed for lunch.
After lunch, we came back for a diptych of evening talks that began in the afternoon. The first was by Atanu Bhatta (IMSc), on a proposal by himself and collaborators for calculating conformal blocks (more precisely, conformal partial waves) in a CFT using “open Wilson networks” in the bulk.
There’s already such a proposal, by Perlmutter and collaborators, where they showed that conformal blocks could be calculated by geodesic Witten diagrams with the exchange of only the field dual to the primary the block corresponds to. However, this formulation isn’t good enough for spinors, because it uses the metric formulation of gravity in the bulk.
The idea behind this work was to write down something that worked for spinors as well, by using the Hilbert-Palatini formalism, in which the gravitational dynamics can be written in terms of an auxiliary gauge field made out of the vielbeins. Specialising to the case of AdS₃, we have an SL(2,C) Chern-Simons theory in three dimensions. As is well-known, the solution to the equations of motion here is that the gauge field configuration be locally pure gauge.
In this background, suppose there are three Wilson lines coming from three different points and fusing at the same point with a Clebsch-Gordan coefficient. Because the background is pure gauge, a Wilson line in a given representation depends only on the group elements at its endpoints, and we can use a fundamental identity of the Clebsch-Gordan coefficients to remove all dependence on the fusion point y — so that an arbitrarily complicated network is completely specified by its endpoints, the associated representations, and the Clebsch-Gordans at the vertices.
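In formulas, with R a representation and the usual path-ordered exponential (standard conventions, assumed here), the statement is

```latex
W_R(x,y) = \mathcal{P}\exp\!\Big(\int_y^x A\Big),
\qquad
A = g\,\mathrm{d}g^{-1}
\;\Longrightarrow\;
W_R(x,y) = R\big(g(x)\big)\,R\big(g(y)\big)^{-1},
```

and the fundamental identity in question is just the invariance of the Clebsch-Gordan coefficients,

```latex
C_{m_1 m_2 m_3}\,R_1(g)^{m_1}{}_{n_1}\,R_2(g)^{m_2}{}_{n_2}\,R_3(g)^{m_3}{}_{n_3}
= C_{n_1 n_2 n_3},
```

which is what lets all the g(y) factors at a fusion point be absorbed into the vertex.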
While these Wilson networks are well and good, they can’t be used for conformal block calculations in their present form, because they carry representations of a non-compact group, which are generically infinite-dimensional; in particular, each such representation contains infinitely many irreps of the rotation group of the boundary. Therefore, they defined a class of “cap states” that project the end-points down to a definite irrep of the rotation group.
Then, the prescription for calculating conformal partial waves is: take a Wilson network for the four-point function with the Clebsch-Gordans chosen to have a particular representation in the internal leg, take the end-points on the boundary at the locations of the insertions, and sandwich it in the appropriate cap states. He ended his talk by showing some examples of this prescription in action.
The second evening talk of the day (and the second one to take place in the afternoon) was an overview of holographic renormalization, based on lecture notes of Skenderis, by Subramanya Hegde (IISER-Thiruvananthapuram). We can conformally compactify an asymptotically AdS spacetime so that we have a smooth, non-degenerate metric on the compactified manifold. Such a conformal compactification induces a conformal class of metrics on the boundary; for AdS spacetimes in particular, the conformal class is that of conformally flat metrics. In this setup, one can perform an isometry transformation in the bulk that corresponds to a scaling transformation in the boundary theory. This connection allows us to associate the radial direction of the bulk with different energy scales in the boundary theory.
The talk started with discussions on the UV/IR connection in holography, conformally compact manifolds, asymptotically locally AdS spacetimes, and the Fefferman-Graham expansion. With the stage set, one wants to calculate renormalized boundary correlators using bulk asymptotics via AdS/CFT. If one naively tries to make the identification
one sees that the correlation functions diverge, essentially because the on-shell action is divergent. We also noted that the variational problem in the bulk is often ill defined.
We need to introduce a cut-off along the radial direction, say at z = ε, and carefully add counterterms to extract meaningful correlation functions. We use the Fefferman-Graham expansion to write a generic field as
and then regularize and renormalize the action and the correlators, order by order. For example, for a scalar whose boundary dual has scaling dimension Δ, one has relations of the form
There are similar relations for the other fields in the theory.
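For orientation, the standard scalar-field relations (in conventions like those of Skenderis’ notes, which we assume here) are

```latex
m^2 L^2 = \Delta(\Delta - d),
\qquad
\Phi(z,x) \;\xrightarrow{\;z\to 0\;}\;
z^{\,d-\Delta}\big(\phi_{(0)}(x) + \cdots\big)
+ z^{\Delta}\big(\phi_{(2\Delta-d)}(x) + \cdots\big),
```

with the leading coefficient acting as the source of the dual operator, and the renormalized one-point function proportional to the subleading coefficient, up to scheme-dependent local terms.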
Subbu then proceeded to illustrate these general comments with a concrete example: the renormalization of a scalar field in AdS. After the example, an interesting discussion arose as to whether the CFT does indeed live at the boundary of AdS, and on the interpretation of the radial direction of the bulk as an indicator of the energy scales on the boundary. After these intriguing but inconclusive discussions, Pranjal Nayak (TIFR) proceeded to talk about a proposal of his collaboration for holographic renormalization that is more in the spirit of Wilsonian RG. The evening session then concluded after almost four hours!
Victor Ivan Giraldo-Rivera (ICTS) began the ninth day’s morning session with his second lecture on SUSY and localization. He wrote down the generalized Killing spinor (GKS) equations and introduced an integrability condition for them. Note that the two spinor parameters are related by conjugation in Minkowski signature, while in Euclidean signature they are independent. Combining them into a 4-component spinor, and using the fact that the gamma matrices and their products form a basis for complex 4×4 matrices, the coefficients of the integrability condition yield a set of constraints on the background fields. The last of these shows how the auxiliary fields of the old minimal supergravity multiplet characterize the background manifold. Victor then chose one particular solution of these equations and gave two examples, one in Minkowski signature and one in Euclidean signature.
These backgrounds, for constant auxiliary fields, are conformally flat manifolds and require the stress tensor to be traceless. From the coupling of the FZ-multiplet to the old minimal supergravity multiplet, one sees that tracelessness of the stress tensor requires the trace submultiplet to vanish, as was mentioned in the first lecture.
Then Victor moved on to solve the GKS equations, not fully, but enough to obtain some solutions, and defined the supercharge associated with them. He wrote a few tensors on the background manifold in terms of the spinors; one of these is a Killing vector, which generates generalized translations.
Then he considered a solution involving an almost complex structure, mentioning that when the almost complex structure satisfies an integrability condition, the manifold is locally a complex manifold. He noted that, locally, any even-dimensional real manifold looks like a complex manifold.
Coming back to this solution, he wrote the metric on the background manifold in terms of the above tensor bilinears.
Then he discussed two possibilities: (i) a case where the metric, written in complex coordinates, describes a torus fibration over a complex surface, and (ii) a case where the metric describes a fibration over a line. These examples demonstrate how the metric of the background manifold is constrained by the supersymmetry placed on it.
After lunch, in his last lecture, Pranjal Nayak (TIFR) completed the analysis of the soft modes contributing to the four-point function. These were eigenfunctions of the ladder kernel with eigenvalue 1, which makes the ladder sum blow up at the conformal fixed point (the strong-coupling limit).
He first showed that at the conformal fixed point these eigenfunctions are indeed generated by infinitesimal reparametrizations of the thermal circle. Since the ladder sum is singular for these modes, one also needs corrections to (the eigenvalues of) the kernel K away from the conformal point. This is first done by writing the kernel using the exact solution for the two-point function in the large-q limit. Maldacena and Stanford then show that the form of the correction at finite q is basically the same as in the large-q expansion, and they fix the q-dependent coefficient by hardcore numerics. Having thus found the change in the eigenvalue away from the conformal point for any q, they finally showed that the 4-pt function saturates the chaos bound, with Lyapunov exponent 2π/β.
We broke off for a much needed tea break and for also taking a group photo for the conference.
So the 4-pt function gets contributions from the heavy modes and a leading contribution from the soft modes in the strong-coupling expansion. The authors also find that the effective action governing the soft modes is a Schwarzian, where the field parametrizes infinitesimal diffeos away from the conformal fixed point. So, basically, it is the zero modes of the conformal fixed point, governed by the Schwarzian effective action close to the conformal fixed point, that are responsible for saturating the chaos bound in the out-of-time-ordered correlator.
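As a small sanity check of the Schwarzian story (our check, using the standard definition {f, τ} = f‴/f′ − (3/2)(f″/f′)²): the thermal saddle f(τ) = tan(πτ/β) has constant Schwarzian 2π²/β², which is the quantity that ties the soft-mode effective action to the 2π/β Lyapunov exponent:

```python
import sympy as sp

tau, beta = sp.symbols('tau beta', positive=True)

def schwarzian(f, x):
    """Schwarzian derivative {f, x} = f'''/f' - (3/2) * (f''/f')**2."""
    f1, f2, f3 = (sp.diff(f, x, n) for n in (1, 2, 3))
    return sp.simplify(f3 / f1 - sp.Rational(3, 2) * (f2 / f1)**2)

# the thermal saddle: map the line to the thermal circle via tan(pi*tau/beta)
saddle = schwarzian(sp.tan(sp.pi * tau / beta), tau)
print(saddle)   # a constant, 2*pi**2/beta**2
```

The fact that the answer is τ-independent is the statement that this reparametrization is a saddle of the Schwarzian action.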
Pranjal then discussed the possibility of having a bulk dual to such a model; the bulk model must necessarily have similar soft modes with a similar Schwarzian effective action. For AdS₂, there are so far two candidates: the Jackiw-Teitelboim theory, advocated by Maldacena and Stanford, and the Polyakov action in two dimensions, proposed by Pranjal, Gautam Mandal and Spenta Wadia. Pranjal mentioned that the key ingredient needed to yield a Schwarzian effective action at the boundary is a boundary term involving an additional parameter that must be held fixed during the variation of the bulk theory.
In the evening, Junggi Yoon (ICTS) gave a broad review of the vast and varied activity that has happened, and is currently happening, in understanding SYK-like and tensor models. The high-level overview focused mostly on the melonic aspects of the models.
The talk began by mentioning the quick proliferation of indices as one moves from vector models to tensor models, and then from lower-rank to higher-rank tensor models. This makes it very difficult to calculate correlation functions for such models, and in fact there is no known low-energy effective action for any tensor model. One necessarily has to resort to some simplified version of the theory one is interested in. The crucial realization here is that in the large-N limit of these models, only the “melonic” diagrams contribute, and these are responsible for the maximally chaotic behavior.
With this in mind, one can study many kinds of fermion lattices, depending on whether the links are colored, the sites labelled, the gauge groups distinguished, and so on and so forth. One of the better-known models is the Gurau-Witten model, which gets its maximally chaotic behavior from a simplex interaction among several species of fermions in the Lagrangian. Another popular model is the Klebanov-Tarnopolsky model, in which one distinguishes the various gauge-group links between lattice sites.
The speaker then mentioned the close relation between the two models, and proceeded to count the orders of the various melonic contributions to two-, three- and four-point functions. Here, one has to make some fine-tuning choices to ensure that the “Cooper pairing” diagrams dominate over the “pillow” diagrams (which do not give the desired maximally chaotic behavior). Doing the diagrammatics, one also needs to distinguish broken and unbroken diagrams, which are essentially disconnected and connected diagrams, respectively. The speaker also showed that summing over these diagrams does indeed give the much-coveted chaotic behavior.
Interspersed among the many colorful melons, there was also a high-level overview of the literature, divided into the old papers and the papers from the last 18-odd months. We learnt about which models are being explored and the motivations for most of them. The talk finally concluded after about 145 minutes(!) with a summary of the speaker’s own very interesting work on tensor models.
Today’s morning session began with the first of the four lectures on Supersymmetric Gauge Theories and Localization by Victor Ivan Giraldo-Rivera (ICTS). Before starting his main lectures, Victor continued the review of supersymmetry in four-dimensional flat space, which was initiated by Madhusudhan Raman (IMSc) in yesterday’s evening talk. He briefly reviewed chiral superfields and wrote down the Lagrangian for the non-linear sigma model. Then he demonstrated a non-renormalization theorem in theories of chiral superfields, which says that the superpotential does not receive perturbative quantum corrections.
Then he moved on to discuss Supersymmetry on Curved Backgrounds, in particular focusing on supersymmetry on four-dimensional curved manifolds. To get a supersymmetric theory on a curved manifold, one couples the supersymmetric theory to off-shell supergravity and takes a rigid limit, keeping the metric as some fixed background. In this limit, gravity becomes non-dynamical and one obtains a supersymmetric theory on a fixed curved manifold (i.e., a classical background).
Victor introduced the Ferrara-Zumino stress-tensor multiplet, given by a real superfield whose conservation equation involves a chiral superfield: the trace submultiplet of the FZ-multiplet. When the trace submultiplet vanishes, the FZ-multiplet reduces to the superconformal multiplet. He then introduced the old minimal supergravity multiplet, which couples to the FZ-multiplet; its auxiliary fields characterize the classical background manifold, as will be seen later. For theories with R-symmetry, there is instead an R-multiplet, which couples to the new minimal supergravity multiplet.
He then wrote down the Lagrangian for chiral superfields coupled to supergravity (chap. 23 of Wess and Bagger), which is invariant under the supergravity transformations. Here one does not integrate out the auxiliary fields, but only imposes supersymmetry of the background, i.e., the vanishing of the gravitino variations. These are called generalized Killing spinor equations, and are solved for the background supergravity fields and the spinors that parametrize the supersymmetry transformations. One way to see their origin is to take the rigid limit of the full supergravity action and demand that we work around a saddle-point that is an ordinary background manifold; these are then the equations that tell you that this saddle-point is invariant under supersymmetry transformations. He then wrote down the Lagrangian obtained in the rigid limit: a bosonic part, a fermionic part with derivatives replaced by covariant derivatives with respect to the background manifold, and additional terms coming from the background auxiliary fields. Unlike in flat-space supersymmetry, this Lagrangian for a supersymmetric theory of chiral superfields on a fixed curved background is invariant under deformed Kahler transformations, viz., the Kahler transformation of the Kahler potential together with a transformation of the superpotential. This allows one to transform away the superpotential, so it does not have a special significance here, unlike in flat-space theories.
After that crash course on how all supersymmetric theories live on manifolds that are saddle-points of the supergravity action, and lunch (one must never forget about lunch), we returned to see Pranjal Nayak (TIFR) jump through flaming hoops while juggling knives — a feat better known as working out the SYK four-point function by diagonalising the conformal Casimir and finding the correct set of eigenfunctions.
First, he reminded us where he’d left us yesterday: the four-point function was a sum of ladder diagrams, and so the main thing to do was work out the eigenfunction decomposition of each rung. Since the rung kernel commuted with the conformal group, the strategy was to diagonalise the conformal Casimir, i.e. solve the conformal Casimir equation,
This was in the complexity class MP, or Mathematica-solvable problems. But not all solutions of this equation were permissible: one had to impose various conditions to make sure that the solutions belonged to a reasonable Hilbert space, namely a symmetry condition on the eigenfunctions, the requirement that the singularity at coincident points not be strong enough to spoil normalisability, and Hermiticity of the conformal Casimir on this set. That gave an allowed spectrum
Using this and the exact form of the eigenfunctions and eigenvalues of the rung, he wrote down a beautiful expression for the four-point function,
where the contour C runs along the line Re h = 1/2 from −i∞ to +i∞, and circles all the even integers beginning from 2 in a counter-clockwise manner. Apart from the poles at the even integers coming from the tan function, the integrand has poles whenever the kernel eigenvalue equals 1.
The most important of these poles is the one at h = 2, which makes that point a double pole. For now, he just ignored that pole. Apart from that, all the poles were in the upper-half plane, as he showed simply from the fact that all the residues of the eigenvalue were positive, in this marvellously simple diagram
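The location of that double pole can be checked numerically. Assuming the Maldacena-Stanford form of the q = 4 kernel eigenvalue (our addition, not spelled out in the talk summary above), k_c(2) = 1 exactly, so the h = 2 even-integer pole collides with a pole of 1/(1 − k_c), and the next solution of k_c(h) = 1 sits near h ≈ 3.77:

```python
import numpy as np
from scipy.optimize import brentq

def kc(h):
    """q = 4 SYK ladder-kernel eigenvalue in the Maldacena-Stanford form
    (assumed here): kc(h) = -(3/2) * tan(pi*(h - 1/2)/2) / (h - 1/2)."""
    return -1.5 * np.tan(np.pi * (h - 0.5) / 2) / (h - 0.5)

print(kc(2.0))   # equal to 1 up to roundoff: the origin of the double pole
# next solution of kc(h) = 1 above h = 2, bracketed between the tan poles
h1 = brentq(lambda h: kc(h) - 1.0, 3.6, 4.0)
print(h1)
```

The roots of k_c(h) = 1 above h = 2 are precisely the operator dimensions that appear in the infinite-sum representation discussed next.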
Then, he deformed the contour to go around these poles and got an infinite-sum representation of the four-point function. Taking an “OPE limit” (the state-operator map hasn’t been made precise in 1d CFT, but we must assume it exists, because why not), he interpreted the poles that contributed as the exchange of fermion-bilinear operators with derivatives.
Finally, he was able to use this to calculate the late-time behaviour of the out-of-time-order four-point function that is used as a diagnostic of chaos (another problem which is in complexity class MP). However, this was faster growth than was allowed by the theorem of Maldacena, Shenker and Stanford, which says that the most chaotic growth is e^(2πt/β). He explained that this was because of the double-pole contribution at h = 2 that we’d dropped; the coefficient of the too-fast growth would turn out to be infinitely suppressed compared to the coefficient of the bound-saturating growth that would come from that contribution; thus, in classic saas-bahu serial fashion, he ended with a cliffhanger.
We returned for the second talk that worked around a gravitational saddle-point: the evening talk on bulk reconstruction by Nirmalya Kajuri (IIT-M). He reviewed recent developments on how to reconstruct bulk information from the CFT side. This is a relatively new perspective on, and approach to, the AdS/CFT correspondence. He nicely explained how this is done in three equivalent but different ways.
The first approach is very straightforward in some sense: give a boundary condition to the bulk fields (a relation between the CFT operator and the boundary value of the bulk field) and solve the equation of motion for the bulk field. The bulk field is then constructed from a smearing function and the boundary data. After he explained how this method works for the free scalar theory, he extended the discussion to the interacting case.
The second approach uses micro-causality and CFT properties. In this picture, a bulk field is related to higher-dimension operators as well as the CFT operator discussed in the previous method; the precise relation is fixed by the requirement of micro-causality.
The third approach is a more “symmetry-based” argument, first proposed by Ooguri-Nakayama. In this approach, the bulk field is constructed from the isometries of the bulk spacetime and is related to Ishibashi states of the CFT.
Finally, Nirmalya gave some comments on future directions that need clarification. The first problem is back-reaction: in this talk the geometry was fixed, and a simple and important generalization would be to take into account the back-reaction on the geometry. The next problem is to study corrections to these analyses: in addition to the usual perturbative corrections like cubic couplings, there are also the 1/N corrections to consider. The third is to consider more general CFT states and reconstruct the corresponding bulk. The final, very non-trivial problem is bulk reconstruction in black hole geometries; in that case, naively speaking, one cannot reconstruct some causal patches of the bulk from the CFT side. This is a nontrivial and interesting question to be studied.
With a 24-hour-long break behind us, we assembled on the afternoon of day 8 for the second in the series of lectures on the SYK model by Pranjal (TIFR). He started by reviewing some results from the first day, including the value that was set as our homework problem.
Following this, he discussed an SYK-like (tensor) model,
where the fermion is in the trifundamental representation of a product of three copies of the symmetry group. He remarked that tensor models are unitary, though their Hilbert space grows much faster than the SYK model’s because of the large number of fermionic degrees of freedom. Most importantly, the large-N physics of tensor models is the same as that of the SYK model.
Next, we moved on to the large-q limit of the SYK model, in which a large number of fermions interact at a time. It is in this limit that the model has been shown to be solvable analytically. Pranjal proposed an ansatz for the two-point function in the large-q limit,
Here, the first term is the two-point function in the UV limit, and the correction involves an unknown function. Fourier transforming this equation and substituting it into the Schwinger-Dyson equation gave the Fourier-transformed expression for the 1PI self-energy, which had been derived in the last lecture. Inverse transforming this and solving the resulting differential equation
with appropriate boundary conditions gave the expression for the unknown function; using this, one can write the two-point function. During this, we had a cameo by Rohan Poojary (TIFR), who termed his own explanation of the function on the thermal circle ‘the vaguest ever’! Though we were quite satisfied with it.
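At finite q, the same Schwinger-Dyson equations can be solved numerically by simple iteration. Here is a minimal sketch for the q = 4 Majorana model at finite temperature (our conventions, following the standard ones: G_free(τ) = 1/2 on (0, β), Σ(τ) = J²G(τ)³, G(iωₙ) = 1/(−iωₙ − Σ(iωₙ)); the grid sizes and mixing weight are arbitrary choices):

```python
import numpy as np

# Iterative Schwinger-Dyson solver for q = 4 Majorana SYK in imaginary time.
beta, J = 20.0, 1.0
Nw, M = 256, 400
w = (2 * np.arange(-Nw, Nw) + 1) * np.pi / beta   # fermionic Matsubara freqs
tau = (np.arange(M) + 0.5) * beta / M             # midpoint grid on (0, beta)
phase = np.exp(-1j * np.outer(tau, w))            # e^{-i w_n tau_k}

Gw = -1.0 / (1j * w)                              # seed with the free answer
for _ in range(500):
    # transform to tau, treating the slowly decaying free 1/(i w) tail exactly
    Gtau = 0.5 + (phase @ (Gw + 1.0 / (1j * w))).real / beta
    Sw = (beta / M) * (phase.conj().T @ (J**2 * Gtau**3))
    Gw_new = 1.0 / (-1j * w - Sw)
    if np.max(np.abs(Gw_new - Gw)) < 1e-10:
        break
    Gw = 0.5 * Gw + 0.5 * Gw_new                  # damped update for stability

print(Gtau[0], Gtau[M // 2])   # G near tau = 0+ (close to 1/2) and at beta/2
```

The damped update is just a numerical convenience; the physics is entirely in the two equations being iterated, and the converged G(τ) interpolates between the free value 1/2 at short times and the conformal power-law decay at long times.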
After this, we moved on to four-point functions, which he organized as a sum of ‘ladder diagrams’.
This equation is
where the index counts the number of rungs in the ladder.
He then rewrote the above equation in the eigenbasis of the kernel,
and started the program to evaluate its eigenvalues and eigenvectors.
He argued that the kernel acting on a conformal three-point function gives back the same three-point function in the IR limit.
Motivated by this, he wrote down the eigenvalue of the kernel using the form of the three-point function in a CFT. To find the eigenvectors of the kernel, he showed that it commutes with the Casimir of the conformal group, implying that the two can have simultaneous eigenvectors. By another argument, which he could not exactly formulate, one can show that the kernel can be written as a function of the Casimir. Thus, one can instead find the eigenvectors of the Casimir.
This is where he ended his lecture.
Madhusudhan Raman (IMSc) delivered the evening talk on the basics of supersymmetry, in order to lay the groundwork for Victor’s talks on supersymmetric localization. He started with the Coleman-Mandula theorem, which states that Poincare and internal symmetries cannot be combined in any way except trivially. Two interesting points came up during the discussion: (i) this statement is true under the assumption that the resulting theory has a non-trivial and analytic S-matrix, and (ii) it doesn’t apply to lower-dimensional quantum field theories! Coleman and Mandula also assumed that the symmetry group is a Lie group; the way out is to Z₂-grade the algebra to include both commutators and anticommutators, i.e.
He then began with the definition of the supersymmetry algebra,
where the supercharge is a spin-1/2 operator, so it transforms like
Then, he spoke about the effect of translations,
He explained with a simple logical argument that the above commutator can be fixed: let’s assume it transformed as
then, since the l.h.s. must satisfy the Jacobi identity, the coefficient is fixed, i.e.
Next came the effects of the supercharges on each other,
and, using the same arguments, one can show that the anticommutator of two undotted supercharges vanishes.
From the last relation, it is clear that the supersymmetry transformations know about the underlying space-time. He also explained why supersymmetry commutes with internal symmetries, which is clear from this relation,
Then he explained the R-symmetry
It satisfies the following relation,
so we observe that the supercharges are charged under the R-symmetry. Then, he extended the SUSY algebra to several supercharges,
where A, B = 1, 2, 3, …, N,
and where the antisymmetric term is known as the central charge.
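The structural content of this graded algebra — a nilpotent supercharge, a non-negative Hamiltonian, and boson-fermion pairing of every nonzero energy level — can be illustrated in a finite-dimensional toy model (SUSY quantum mechanics in matrix form, our example, not the 4d super-Poincare algebra itself):

```python
import numpy as np

rng = np.random.default_rng(0)
nb, nf = 6, 5                          # "bosonic" and "fermionic" state counts
A = rng.standard_normal((nb, nf))      # arbitrary rectangular supercharge data

# Q maps fermions to bosons and annihilates bosons, so Q^2 = 0 automatically
Q = np.block([[np.zeros((nb, nb)), A],
              [np.zeros((nf, nb)), np.zeros((nf, nf))]])
H = Q @ Q.T + Q.T @ Q                  # {Q, Q^dagger}: plays the role of P^0

eb = np.linalg.eigvalsh(A @ A.T)       # bosonic-sector energies
ef = np.linalg.eigvalsh(A.T @ A)       # fermionic-sector energies
print(np.allclose(Q @ Q, 0))           # nilpotency of the supercharge
print(eb)                              # one unpaired zero mode (Witten index 1)
print(ef)                              # matches the nonzero part of eb exactly
```

The nonzero eigenvalues of A A† and A†A always agree, which is the toy version of “all positive energy modes come in pairs”; the nb − nf unpaired zero modes are the algebra-evading zero modes mentioned earlier in this post.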
We know that the Casimirs of the Poincare algebra tell us how to label 1-particle states, i.e. we use the mass and spin/helicity.
Then he explained what happens if we add SUSY (for example, N=1): the mass-squared is still a Casimir, but the spin label is replaced by a superspin, defined by
Then he gave an example of N=1 supersymmetry for massless particles, where we boost to the frame p = (E, 0, 0, E). The SUSY algebra then gives
and we can define raising and lowering operators as
We know that for massless particles helicity is a good quantum number, as demonstrated by
In the last hour, he explained superspace (the fermionic directions are the soul to the bosonic directions’ body, as pictured above) and superfields.
Madhu stated that the idea of supersymmetry in superspace is a lot like momentum in spacetime: it generates translations. We can give this idea a differential-operator meaning; just as momentum generates translations in space, supercharges generate translations in superspace!
How do superfields transform under infinitesimal coordinate changes generated by supercharges? It is simply
Eventually, one would like to write down actions in superspace. However, the general superfield has far too many fields, corresponding to a reducible representation of the SUSY algebra. In order to cut down the number of components, one needs to define derivative operators in superspace that anticommute with the supercharges. These can then be used to (a) effect constraints on superfields, that will reduce the number of fields, and (b) write down superactions in superspace. These differential operators look like
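In Wess-Bagger-like conventions (our assumption, for concreteness; signs vary between textbooks), the supercharges and the covariant derivatives are realized on superspace as

```latex
Q_\alpha = \frac{\partial}{\partial\theta^\alpha}
           - i\,\sigma^\mu_{\alpha\dot\alpha}\,\bar\theta^{\dot\alpha}\,\partial_\mu,
\qquad
D_\alpha = \frac{\partial}{\partial\theta^\alpha}
           + i\,\sigma^\mu_{\alpha\dot\alpha}\,\bar\theta^{\dot\alpha}\,\partial_\mu,
```

with analogous barred operators. They satisfy \(\{D_\alpha, Q_\beta\} = \{D_\alpha, \bar Q_{\dot\beta}\} = 0\) and \(\{D_\alpha, \bar D_{\dot\alpha}\} = -2i\,\sigma^\mu_{\alpha\dot\alpha}\,\partial_\mu\), and a chiral superfield is then the constrained superfield obeying \(\bar D_{\dot\alpha}\Phi = 0\).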
At the end, he explained that any action of the form
is — by construction! — SUSY invariant! The first term is known as the Kahler potential and the second is known as the superpotential. A short discussion of the statement of non-renormalization theorems followed, and that was where we concluded for the day.
Apratim began his standalone lecture on Sunday by recapping the important points of his previous lecture: we expand the four-point function in terms of Witten blocks, which are linear combinations of conformal blocks of the form
where the dimension appearing is that of the external operator.
The Mellin transform of a Witten block is a polynomial known as a Mack polynomial, which corresponds to an exchange of given spin and twist. It turns out to be much easier to work with the double-trace case, since the corresponding polynomials, called the continuous Hahn polynomials, satisfy an orthogonality relation.
Thus, we can just take a manifestly crossing-symmetric Witten block and integrate it against one of these polynomials at the double-trace dimension to get the residue of a spurious pole (since we in general expect a double-trace operator’s dimension to get a quantum correction, and so don’t expect a pole exactly at this dimension), which we can then equate to 0!
He then specialised to a theory with three properties:
It has a Z₂ symmetry.
It lives in 4 − ε dimensions.
The dimension of the lightest primary, a scalar that we call φ, is fixed to a specific value.
This is a very minimal set of assumptions that are all true of the familiar φ⁴ theory at the Wilson-Fisher fixed point; while the last assumption looks rather specific, it comes merely from conservation of the stress tensor.
The basic point is that, with these assumptions, when we integrate the continuous Hahn polynomial of a given dimension and spin against the s-channel Witten block with the same dimension and spin, we get terms at the lowest orders in ε that come only from the lowest-twist operator. Similarly, when we integrate it against the t- or u-channel Witten block, we get terms at those orders that come only from the exchange of the scalar double-trace. This means that, at every spin, we have an algebraic equation, order by order, each of whose coefficients gets contributions from only a few exchanges.
At the next order, however, the number of contributions to each of these equations blows up. This is basically because the Mack polynomials at arbitrary twist are not orthogonal, so higher-twist double traces start mixing with the lowest-twist double traces, and there are an infinite number of these. There was some discussion about whether this blow-up was as fundamental as it seemed (it came from the non-orthogonality of the basis polynomials, after all), and he said that he was working on it and didn’t yet know whether it could be avoided.
There was also a question about why this particular order was special, and he said that it was the order at which two-loop Feynman diagrams would start contributing (I, the person writing this post, didn’t understand this very clearly and may be summarising it wrong).
Subsequently, Apratim showed us how the large-spin limit further simplifies the expressions for the Hahn polynomials. This enables one to solve the Mellin bootstrap equations analytically, order by order at large spin, and solve for the anomalous dimensions of the spinning operators.
Another simplifying regime where the bootstrap equations can be handled is the large-N limit of the O(N) models. In these cases, a general OPE between the fields can be separated into singlet, symmetric-traceless and anti-symmetric parts,
Here, the coefficients are the different OPE coefficients. The Mellin bootstrap equations can be applied independently in each of these sectors, giving us a total of 6 bootstrap equations that can be solved using the same techniques described earlier in the blog. As in the well-understood examples, the role of the small parameter in these theories is played by 1/N, which provides the expansion parameter for the bootstrap equations.
Apratim finally concluded this set of lectures with the proposal of studying and applying the Mellin bootstrap techniques to other global symmetries. And this is how Apratim and Parijat bootstrapped our knowledge of the subject.
The morning session was the third of the bootstrap lectures, this time given by Apratim Kaviraj from CHEP-IISc. He first gave a glimpse of how the conformal bootstrap is solved numerically, and later gave a quick intro to the idea behind the Mellin bootstrap. The lecture was replete with the necessary computational details.
Numerical bootstrap, or rather the bootstrap strategy in the more traditional sense, equates the s-channel OPE expansion of a 4-pt function to the t- or the u-channel one, with arbitrary positive coefficients appearing as squares of OPE coefficients:
where u and v are the conformal cross-ratios. Next comes the laborious task of solving for the spectrum while making use of the functional form of the conformal blocks (which solve the quadratic Casimir equation) in suitable domains of the cross-ratios.
But Apratim and company (which includes Aninda Sinha, Rajesh Gopakumar, Kallol Sen and Parijat Dey) had a trick up their sleeves. (Or more precisely they unfolded the sleeve of Polyakov titled Non-Hamiltonian Approach to Conformal Field Theory.) The basic idea was to construct conformal blocks which are manifestly crossing symmetric (symmetric in ) and demanded that it be consistent with an OPE expansion of a CFT.
This exercise is a bit cumbersome if done in position space, just like computing Feynman diagrams in position space is more difficult than doing it in Fourier space. Mellin space is basically a Fourier-space analogue for CFTs (the Mellin transform,
looks very much like a Fourier transform in the variable log x instead of x (caution: it’s best not to take this too seriously), reflecting the fact that the important symmetry transformation to diagonalise is not translation in x but translation in log x, also known as scaling). Here, different powers of x become poles in the complex plane,
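As a toy illustration (my own numerical aside, not from the lecture): the Mellin transform of e^{-x} is Γ(s), whose poles at non-positive integers encode the Taylor coefficients of e^{-x}, which is exactly the powers-become-poles statement above. A quick check in Python:

```python
import math

def mellin(f, s, xmax=40.0, n=200_000):
    """Crude midpoint-rule estimate of M[f](s) = integral_0^inf x^(s-1) f(x) dx."""
    dx = xmax / n
    return sum(((i + 0.5) * dx) ** (s - 1) * f((i + 0.5) * dx) * dx
               for i in range(n))

# The Mellin transform of exp(-x) is the Gamma function: check at s = 3,
# where Gamma(3) = 2! = 2.
approx = mellin(lambda x: math.exp(-x), 3.0)
print(approx, math.gamma(3.0))
```

The closed-form transform Γ(s) then continues to complex s, where its poles sit at s = 0, −1, −2, …, one for each power of x in the expansion of e^{-x}.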
Now, if we take the Mellin transform of a 4-pt function, we’ll find poles at the twists of all the exchanged operators — exactly as if it were a scattering amplitude and the twists of operators were masses of particles.
He then wrote down the formula he’d use for the Mellin transform of correlators,
where the measure is an integration over all the Mellin variables with the constraints
by scale invariance and
by conformal invariance,
and the Gamma-function factors are conventional (they contain the contributions of double-trace operators in the strict large-N limit, where the double traces don’t get an anomalous dimension).
He then worked out the Mellin transform of a conformal block and showed that it has exponential growth at large values of the Mellin variables, making it really hard to work with. So, he introduced objects called “Witten blocks,” which are inspired by the Witten diagrams with a particular exchanged field (corresponding to the choice of exchanged primary in a conformal block) that calculate four-point functions in AdS/CFT. Witten blocks are particular linear combinations of conformal blocks that one can equally well use to expand the four-point function (like we can write polynomials as linear combinations of either the monomials or the Legendre polynomials). Then, he showed us the Mellin transform of the Witten block, called the Mack polynomial — which, being a polynomial, doesn’t diverge exponentially.
Since the Witten block actually calculates something physical in AdS/CFT, there was some confusion about how it could be used in a non-holographic CFT. Apratim took great pains to explain that one shouldn’t think of it as the same sort of calculation but just a different basis of conformal blocks. Another point of contention was that Witten diagrams have contributions of certain double traces of the inserted operators apart from the contribution from the exchanged operators, but double traces develop an anomalous dimension in most CFTs, so how could he possibly be making sense. And thus did Apratim reach the punchline of his lecture: the basic idea of Mellin bootstrap is to sum over s channel, t channel and u channel Witten blocks and demand that the residues at these un-corrected dimensions vanish; this makes the bootstrap equations algebraic equations, and therefore much easier to solve.
We came back after lunch for the beginning of the lecture series that puts the trending in ST4: Pranjal Nayak (TIFR) on the Sachdev-Ye-Kitaev model. He began with a general introduction to large-N field theories: theories with a number of fields that goes to infinity as N goes to infinity.
There are many types of large-N models. The simplest type is a vector model, in which there are exactly N fields and there’s a symmetry that mixes all N fields (the existence of this symmetry is important); the canonical example is the theory of N real scalars with an O(N) symmetry under which the fields transform like an N-dimensional vector. Similarly, there are matrix models that have fields organised into an N × N matrix. The SYK model is, in this sense, a model of q-index tensors.
He took, as an example to illustrate the soothing effects of large N, an O(N) model with a quartic interaction. First, he performed a Hubbard-Stratonovich transformation: he introduced an auxiliary field whose saddle-point value is (up to some factors) the O(N)-invariant bilinear of the fundamental fields.
Then, he introduced a ‘t Hooft-like coupling constant by absorbing an N into the coupling constant; the idea of a ‘t Hooft coupling is that it is the correct combination of coupling constant and N that we must keep constant to get a good large-N limit. He explained that there is a very good organising principle at work here, which was explained to him by Prof. Shiraz Minwalla at TIFR. Consider a finite-temperature partition function, which is calculated by the Euclidean path integral. The Euclidean action is basically the energy, and the partition function is roughly the exponential of the free energy, which is the difference of energy and entropy. This means that the measure in the path integral is like the entropy. If there are N fields, this corresponds to an entropy (more precisely, a configuration-space volume) of order N. The organising principle is this: interesting theories come out of an equal fight between energy and entropy; in particular, we expect the fight to be on level ground at finite N, and so not having the same fight at large N wouldn’t be very interesting for finite-N physics. Therefore, every term in the action must also scale like N. Similarly, for matrix models, the action must scale like N².
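In equations, the organising principle for the O(N) vector model reads as follows (a sketch in standard conventions, not a transcription of the board):

```latex
% The path-integral measure contributes a "volume" ~ e^{O(N)}, so every term
% in the action must also be O(N).  With phi_i . phi_i ~ N, writing
\mathcal{L} = \tfrac{1}{2}\,(\partial_\mu \phi_i)(\partial^\mu \phi_i)
            + \frac{\lambda}{N}\,(\phi_i \phi_i)^2 ,
\qquad \lambda = g N \ \text{held fixed as } N \to \infty ,
% makes both the kinetic term and the interaction scale like N,
% so energy and entropy fight on level ground at large N.
```

The 't Hooft coupling λ is precisely the combination that survives the limit: the bare coupling g must shrink like 1/N for the interaction to stay in the game.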
After he’d fixed the large-N scaling of the action, he integrated out the fermions and showed that there is a saddle point where the auxiliary field is a constant, and pointed out that this is actually a mass shift for the original field (for basically the same reason that the self-energy correction is a mass shift; it modifies the two-point function). Ignoring fluctuations around this saddle point is what is normally called mean-field theory, but the speciality of the large-N approximation is that we can very easily solve for the dynamics of the fluctuation fields.
At this point, he recounted a cute comment of Witten’s that he’d read in Sidney Coleman’s book. If we’re perturbing in 1/N, the perturbation parameter for QCD (N = 3) is around the same size as the charge of the electron. This is the perfect answer to the ubiquitous question of how large N can possibly be useful in the real world.
Then, he went over to an SU(N) gauge theory and showed us that diagrams that can be embedded in a plane scale as N², whereas diagrams that can’t are subleading. He also outlined ‘t Hooft’s original argument for confinement at large N: if we take any diagram that calculates the correlator of two mesons (fermion bilinears) and take some subset of the intermediate lines on-shell, we’ll find that the on-shell lines collectively form a meson too. It may seem that it could have been two mesons, but he showed that such diagrams are subleading at large N.
Having introduced us to some of the magic of large-N, he went on to introduce the SYK model, which is a (0+1)-d model with N fermions and q-point interactions between them,
where the couplings are randomly chosen from a Gaussian ensemble with the properties
He tried to connect the scaling of N here with the organising principle he’d laid out earlier: the Hamiltonian has q sums taking N values each, and so the coupling constant has to scale appropriately with N to make the Hamiltonian scale as N. However, someone pointed out that the coupling constants actually scale with an extra square root, since the quantity in the above equation is the variance and not the standard deviation. Pranjal was unable to explain this and promised to explain it in the next lecture.
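For concreteness, in the standard SYK convention (an assumption on my part, since the lecture’s exact normalisation wasn’t captured) one takes ⟨J²⟩ = J²(q−1)!/N^(q−1), so the standard deviation of the couplings falls like N^(−(q−1)/2) while the variance falls like N^(−(q−1)) — exactly the square-root discrepancy raised in the question. A trivial numerical check:

```python
import math

def coupling_sd(N, q, J=1.0):
    # Standard deviation of an SYK coupling in the (assumed) convention
    # <J_{i1..iq}^2> = J^2 (q-1)! / N^(q-1).
    return J * math.sqrt(math.factorial(q - 1) / N ** (q - 1))

# Doubling N at q = 4 shrinks the *standard deviation* by 2^{3/2},
# even though the *variance* shrinks by 2^3.
ratio = coupling_sd(100, 4) / coupling_sd(200, 4)
print(ratio, 2 ** 1.5)
```

The q-dependence matters: at q = 2 (free fermions) the couplings fall like 1/√N, while at large q they are tiny, which is part of why the large-q limit is analytically tractable.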
Because the couplings are random, it makes more physical sense to look at the averaged correlator
However, this model is not that easy to work with. There’s a related model, where the couplings are spacetime-independent dynamical variables, so that a correlator is calculated by
There was a lot of confusion about how these two theories are different. Apart from the fact that the two expressions above are mathematically clearly different, we agreed that the first is an average over the correlators in different unitary theories, whereas the second is a non-unitary theory where the j fields behave like a source and sink for energy (different people agreed on different statements of this fact, however 😉 ); so, while both correlators are non-unitary, they’re that way for very different reasons.
After this discussion, Pranjal got to the main point of introducing the two theories: at large N, they’re the same. The basic fact was that, in the second theory, the j-propagator doesn’t get corrected at leading order in large N. Then, we can interpret the averaging over j in the first theory as j lines connecting vertices in each realisation, and then we see that this is exactly the same set of diagrams as the leading large N diagrams of the second theory. Therefore, it doesn’t matter which one we consider at leading order, and therefore we may as well treat the js as fields.
Then, he showed us some diagrammatics; the basic interaction structure that turns up in the calculation of the partition function is the melon diagram:
Here, the melon diagram in terms of fermion lines is
The leading contributions to the two-point and four-point functions are
Then, he explained that the basic rule for quickly deciding whether a diagram is leading order is that J lines only go between adjacent vertices.
Then, he showed us the Schwinger-Dyson equation for the two-point function,
where the self-energy correction comes from the melon diagrams. However, we also know its form, since we know the diagram,
Plugging these equations into each other, we find
Looking at the scalings of all the quantities, we can fix the scaling weight of the fermion to a definite value (finding this value was homework), and finally we can solve for the exact two-point function.
This was the punchline of his first lecture. He told us to figure the remaining constant out on our own and let us off with a reminder to do the assigned homework.
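For reference, the standard large-N Schwinger-Dyson system and the scaling argument that fixes the weight are as follows (standard results quoted from the SYK literature, so consider this a spoiler for the homework):

```latex
G(i\omega) = \frac{1}{-i\omega - \Sigma(i\omega)}, \qquad
\Sigma(\tau) = J^2\, G(\tau)^{\,q-1}.
% In the IR one drops the bare -i\omega term, so that G * \Sigma = -\delta(\tau).
% The conformal ansatz
G(\tau) = b\, \frac{\operatorname{sgn}(\tau)}{|\tau|^{2\Delta}}
% is consistent only if the powers in the convolution match:
2\Delta + 2\Delta\,(q-1) = 2 \quad \Rightarrow \quad \Delta = \frac{1}{q}.
```

The same matching fixes the constant b in terms of J and q, which was the other piece of the assigned homework.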
For the evening talk, Jahanur Hoque (IMSc) delivered an excellent lecture on the complexities involved in the study of gravitational waves in de Sitter spacetime. An important class of observables for gravitational waves consists of the fluxes of energy, momentum and angular momentum carried away by them. These notions are well understood for gravitational waves in Minkowski spacetime. However, he pointed out, in a de Sitter background future null infinity is spacelike, and this makes the meaning of these observables subtle.
He started off with a brief discussion of the ‘cleanest articulation’ of spacetime boundaries: conformal completion. Conformal completion of a physical spacetime preserves its light-cone structure and captures the notion of approaching infinity along many directions, as well as with different speeds. It naturally identifies boundary components where timelike and null geodesics ‘terminate.’ The causal nature of a spacetime boundary is determined by the asymptotic form of the ‘source-free’ equations. Future and past null infinities are null, spacelike or timelike depending on whether the cosmological constant is zero, positive or negative respectively.
For asymptotically flat spacetimes, these boundary components serve to define outgoing and incoming fields as those solutions of the asymptotic equations that have suitably finite limiting values on future and past null infinity respectively. The peeling-off theorem says that one can then examine the Weyl tensor of outgoing fields along outgoing null geodesics and see that it has a definite pattern of fall-off in inverse powers of an affine parameter along the geodesics. This enables one to identify the leading term as representing gravitational radiation, in a coordinate-invariant manner. It is conveniently described in terms of the Weyl scalars, which are defined with respect to a suitable null tetrad. When the boundary is null, a null tetrad at a point is uniquely determined, and a non-zero value for the relevant Weyl scalar is a direct indication of the presence of gravitational radiation. This feature is lost when the boundary is spacelike. In this case, Jahanur pointed out, none of the Weyl scalars is invariant, and an invariant characterization of gravitational radiation is no longer immediately available. Fortunately, however, in the Poincare patch there are seven globally defined Killing vectors, all spacelike in the vicinity of the boundary, and corresponding conserved quantities can be expected.
Motivated thus, he began to consider the problem of gravitational waves in de Sitter. Why de Sitter? Cosmological observations suggest we live in a Universe with a positive cosmological constant, meaning our spacetime could be described by a de Sitter background. He proceeded to set up the linearised Einstein equations for gravitational waves, which can be thought of as perturbations on a background metric. He fixed a gauge, worked out the retarded Green’s function, set up the definition of source moments and considered the inhomogeneous solution for gravitational waves. Within the so-called short-wave approximation, for sources which are sufficiently rapidly varying, he employed the Isaacson formalism to define an effective gravitational stress tensor for these ripples. For a vanishing cosmological constant, it is symmetric, conserved and gauge invariant. For a non-zero cosmological constant, it is not gauge invariant, but the gauge violations are suppressed by powers of the cosmological constant. He argued that it is very convenient to have such a stress tensor to define and compute fluxes of energy and momenta carried by the ripples across any hypersurface. He then used this to show that for the retarded solution, the flux of energy and momentum across the cosmological horizon exactly equals the corresponding flux across future infinity, and also equals the flux computed at a coarse-grained level. Finally, he showed that the instantaneous power received at infinity matches that crossing the horizon.
And that was the end to a long and satisfying day!
We began our day with Parijat (IISc) continuing where she left off yesterday. Building on the motivation from the mean-field-theory example that she discussed in detail yesterday, she discussed reproducing the s-channel terms in a bootstrap equation from the infinite sum of t-channel terms in a more general setting. Still working with the correlation function of 4 identical scalar operators, in the regime where operators 1 and 2 approach each other and 2 lies between 1 and 3, she argued that the s-channel sum always contains an identity contribution (coming from the ‘identity’ block in the OPE expansion of correctly normalised operators):
which needs to be reproduced from the sum over terms in the t-channel expression. That requires the expressions of the conformal blocks. While some physicists have found enough time to compute them in 4 dimensions, we chose to stay blissfully ignorant of their exact expressions, worrying ourselves only with their functional behaviour. In particular,
where the prefactor is a known function we chose not to worry about, except for the fact that it has a polynomial expansion. u and v have some close cousins that sometimes work better than they do, hence being hired to do the rest of the job. Parijat showed us that, to be able to reproduce the identity on the LHS, the set of operators exchanged in the 4-point function of identical scalars necessarily includes those that satisfy the following relation between dimension and spin:
With some success already under our belt, we next moved on to study the constraints arising from the subleading terms in the bootstrap equations. This involves considering the subleading term in the s-channel expression:
the smallest term after the identity (the existence of a lower bound on the allowed values of twist follows from the unitarity bound). The dependence of this expression on logarithmic terms serves as the strap to bootstrap, and it is through matching them between the s- and t-channel expressions that we get leverage at the subleading order. Miraculously (at least for the author), the anomalous dimensions of the operators whose existence was proved earlier in the lecture also reproduce the subleading corrections from the t-channel sum, giving rise to the following:
As was the case with yesterday’s mean-field example, the sum over large values of spin reproduces the correct powers of u to match the LHS. Moreover, Parijat showed that a particular ansatz for the anomalous dimension is required for the correct matching. There was some confusion about the motivation behind this ansatz, but Parijat told us that it is only this that works and nothing else. With some of us in the audience needing our own bootstrapping due to dwindling blood-caffeine levels, we broke off for a while, but not before computing the anomalous dimension of the operator corresponding to n = 0 above.
We resumed after our break with a discussion of the ε-expansion in CFTs based on this paper. The example for demonstration was the good old Wilson-Fisher fixed point. Parijat showed us in quite some detail how one can reproduce the scaling dimensions and critical exponents of this theory without computing Feynman diagrams, just from the knowledge of conformal invariance. But while she did this, she didn’t fail to emphasise how most of these computations become a cakewalk with Mellin-space techniques (bootstrapping the bootstrap!). Getting the expansion results relied heavily on the facts that (1) certain operators are primaries in both the free and the WF theory, (2) except for one, which is a descendant in the WF theory but not in the free theory, and (3) in the ε → 0 limit the WF operators (and by extension their correlators) approach their free values.
Parijat finally concluded her talk by setting the stage for the Mellin-space bootstrap, and for Apratim to take over as the new hero in town.
The day’s second talk was the final talk on entanglement, by Vinay Malvimat (IIT-K). He wisely chose to spend the time reviewing an involved calculation by Thomas Hartman of the entanglement entropy of two disjoint intervals with the rest of the system in a 2d CFT at large central charge.
Since the Renyi entropy in a 2d CFT is calculated by the correlation function of twist operators inserted at the endpoints of the intervals, and the twist operators used in the calculation of the n-th Renyi entropy are primaries, he spent most of the time answering the more general question of how to calculate the correlator of four primaries (two intervals means four endpoints) with dimensions that scale like the central charge.
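The twist-operator dimension that got dropped in transcription is presumably the standard replica-trick result (quoting the usual convention, not necessarily the talk’s):

```latex
h_n = \bar h_n = \frac{c}{24}\left(n - \frac{1}{n}\right),
\qquad \Delta_n = h_n + \bar h_n = \frac{c}{12}\left(n - \frac{1}{n}\right),
```

which indeed scales like the central charge c, so the heavy-operator technology of the lecture applies.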
The first simplification is that at large central charge, the conformal block corresponding to a primary with holomorphic dimension that doesn’t scale like the central charge c behaves as
This is a statement that, though widely believed, has never really been proved; post-lecture reference-following led us to a paper where it is confidently stated as if obvious, and another one which doesn’t even seem to consider the large central charge limit, let alone simplifications resulting from it. This fact caused a lot of controversy and indignation and murmurings about the American style of doing physics.
After it was agreed that everyone in the room was a physicist and not a mathematician (though, as some pointed out, one must be careful not to become the catholic church either), Vinay went on to introduce into the correlation function an operator that has a null descendant at level 2, and claimed that this five-point function is related to the original four-point function simply by multiplication by a function of the positions of the five insertions; the author confesses his ignorance of how this happens and whether it’s been proven or not.
Anyway, the fact that the inserted operator has a null descendant at level 2 means that this function satisfies a simple second-order differential equation with four undetermined coefficients related to the form of the conformal block. Demanding that the stress tensor scale appropriately with the central charge gives three equations relating these coefficients. Finally, demanding consistency with the expected behaviour of the correlator as the operator is taken around one of the twist operators (in the replica picture, from one sheet to the other) gives one more equation on these four coefficients, allowing Vinay to completely fix all four.
Then, it was just a matter of showing that these four coefficients completely fix the block as we take one of the pairs of fusing operators extremely close to each other. Now, if we are interested in the entanglement of a really small interval with another one, we can fuse the twist operators at the end-points of the first interval, and we obtain the entanglement
and taking the limit where one of the end-points of the first interval approaches an end-point of the second interval, we find
It was immediately pointed out that this could only be true up to an infinite constant, since taking the end-points at 0, 1 and infinity, this is trivially wrong. However, since no one cared much about infinite constants, it was generally agreed that this was rather a beautiful calculation.
[EDIT 16/7/2017: It turns out this answer was right after all. The problem was that the limit in which one of the operators was at infinity was very singular; if we perform a conformal transformation to bring it back to a finite point, the Jacobian gives the extra term required to make sense of it. So, if points 2 and 3 are close by, we find an answer proportional to the logarithm of their separation, which is an eminently satisfying result.]
Then, Vinay explained to us the interpretation of these results via the Ryu-Takayanagi formula: the first answer is the case where the RT surface corresponding to the two intervals is made up of two disjoint pieces (there is no mutual information), and the second answer is the case where the RT surface joins the first interval to the second. He then told us about the surprising kink at the transition point (while this calculation only worked in the limits mentioned above, he mentioned that other calculations, like the RT calculation, work more generally).
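The kink is cleanly stated in terms of the mutual information; in the large-c limit (the standard form of Hartman’s result, with x the cross-ratio of the four endpoints, quoted from memory rather than the talk):

```latex
I(A\!:\!B) \;=\;
\begin{cases}
0, & x < \tfrac{1}{2}, \\[6pt]
\dfrac{c}{3}\,\log \dfrac{x}{1-x}, & x > \tfrac{1}{2},
\end{cases}
```

so the mutual information is continuous but not smooth at x = 1/2, where the dominant RT configuration exchanges the disconnected and connected surfaces.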
He then ended with a short and sweet discussion of modular Hamiltonians, relative entropy, and black hole entropy. Black hole entropy is related to the relative entropy between the vacuum and a state corresponding to the black hole solution. Since one normally expects black hole entropy to be the thermal entropy of a state, this has some interesting implications for the entanglement structure of these thermal states.
And that is how we said our goodbyes and paid our respects to our entanglement with Vinay.
For the evening talk, Bidisha Chakraborty (IoP) gave first a nice pedagogical review of the information paradox and then a summary of the proposed fuzzball resolution.
When considering quantum theory in relation to black holes, two connected problems arise: the entropy puzzle and the information paradox. Since black holes were initially new objects with no definition of entropy, when objects were thrown into them there was no clear picture of how the second law of thermodynamics could be salvaged. The Bekenstein-Hawking definition of the black hole entropy as
set up a new generalized second law of thermodynamics,
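The two displayed equations didn’t survive transcription; their standard forms (a reconstruction, in both SI and natural units) are:

```latex
S_{BH} \;=\; \frac{k_B c^3}{4 G \hbar}\, A
\;=\; \frac{A}{4G} \quad (\hbar = c = k_B = 1),
\qquad
\delta\!\left( S_{\text{matter}} + S_{BH} \right) \;\geq\; 0,
```

the second being the generalized second law: whatever entropy falls past the horizon is compensated by the growth of the horizon area.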
thereby solving the entropy puzzle. This led to a more serious problem: one may ask if black holes have a temperature, following the usual thermodynamic relation,
And hence a black hole has temperature,
And so we go down the rabbit hole: if the black hole has a temperature, should it radiate? Clearly, if the black hole can absorb quanta of a certain wavenumber with some cross section, then it should also radiate the same quanta at a rate,
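Plugging in numbers (my own numerical aside, using CODATA-level constants rather than anything from the talk) shows why this radiation is utterly unobservable for astrophysical black holes:

```python
import math

# SI constants (approximate CODATA values)
hbar = 1.055e-34   # J s
c = 2.998e8        # m / s
G = 6.674e-11      # m^3 / (kg s^2)
k_B = 1.381e-23    # J / K
M_sun = 1.989e30   # kg

def hawking_temperature(M):
    """Hawking temperature T_H = hbar c^3 / (8 pi G M k_B) for mass M (kg)."""
    return hbar * c ** 3 / (8 * math.pi * G * M * k_B)

T = hawking_temperature(M_sun)
print(T)  # tens of nanokelvin, far below the 2.7 K CMB
```

A solar-mass black hole is colder than the cosmic microwave background, so today it absorbs more than it emits; only far lighter holes would evaporate on observable timescales.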
Quanta can fall into a black hole, implying that the absorption cross-section is non-zero. Therefore, black holes must radiate. How, though? The classical geometry of the black hole does not allow any worldlines to emerge from the horizon!
Hawking discovered that, to explain this radiation, we must consider quantum processes; more precisely, quantum fluctuations of the vacuum. Bidisha proceeded to discuss particle creation in a curved spacetime, closely following Mathur’s approach, which employs a nice physical picture. Each Fourier mode of a quantum field behaves like a harmonic oscillator, and in the n-th excited state of this oscillator there are n particles in that Fourier mode. Then, the amplitude of this Fourier mode has a Lagrangian of the form,
But as we move to later times, the spacetime can distort, and the frequency of the mode can change, so that
Let us suppose that no particles are present in this Fourier mode, i.e. we have the vacuum wavefunction for this harmonic oscillator. Then, if the change in frequency was slow, we know from the ‘adiabatic theorem’ that the vacuum wavefunction will change with the potential such that it remains the vacuum state of the potential we have at any given time. However, if the potential changes rapidly, then the wavefunction has no time to evolve to the new vacuum state, and we can expand the initial ground state in the eigenbasis of the new potential as,
Actually, since the wavefunction we have is symmetric under reflections, only the even levels (i.e. particle pairs) appear,
Thus under slow changes of the potential the fourier mode remains in a vacuum state, while if the changes are fast then the fourier mode gets populated by particle pairs.
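A minimal quantitative version of this (my own sketch, in units m = ħ = 1): for a sudden frequency jump ω₁ → ω₂, the old vacuum overlaps the new one with probability 2√(ω₁ω₂)/(ω₁+ω₂), and the remainder goes into pairs of excitations.

```python
import math

def vacuum_overlap_sq(w1, w2):
    """|<0_new|0_old>|^2 for a sudden frequency jump w1 -> w2 (closed form)."""
    return 2.0 * math.sqrt(w1 * w2) / (w1 + w2)

def numeric_overlap_sq(w1, w2, L=20.0, n=200_000):
    """Brute-force check: overlap integral of the two Gaussian ground states."""
    dx = 2 * L / n
    s = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * dx
        psi1 = (w1 / math.pi) ** 0.25 * math.exp(-w1 * x * x / 2)
        psi2 = (w2 / math.pi) ** 0.25 * math.exp(-w2 * x * x / 2)
        s += psi1 * psi2 * dx
    return s * s

# A factor-of-4 jump leaves only 80% vacuum; 20% of the time we find quanta,
# and by the reflection symmetry above they come in pairs.
print(vacuum_overlap_sq(1.0, 4.0), numeric_overlap_sq(1.0, 4.0))
```

When ω₂ → ω₁ the overlap tends to 1, recovering the adiabatic statement that a slow change keeps the mode in its vacuum.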
We now have the setup to discuss particle creation in curved spacetimes. Let the variations of the metric be characterized by some length scale, in both the space and time directions, and let the region under consideration also have that extent in the space and time directions. We assume that the metric varies significantly in this region. Then the particles produced in this region will have a comparable wavelength, and the number of produced particles will be of order unity. To see this, consider a particular foliation of the Schwarzschild spacetime. Far outside the horizon, we would like to have a spacelike slice, so we pick a surface of constant t, all the way from infinity to a point that doesn’t fall in the ‘near-horizon regime.’ We call this the outer part of the spacelike surface. For Schwarzschild, inside the horizon, space and time interchange roles; i.e., the t direction is spacelike while the r direction is timelike. Thus for the part of the slice inside the horizon we use a constant-r slice, and we call this the inner part of the spacelike surface. We must now connect these two parts of our spacelike surface, which can be done with a smooth ‘connector’ segment that is everywhere spacelike. In the plane of fig. (a), this may look like a strange set of slices, and so we redraw them a bit differently in fig. (b). The lowest slice corresponds to the time before the black hole is formed.
Thus it is essentially a flat slice of constant t all through. On later slices, the part on the right, which is in the ‘outer’ region, keeps advancing forward in time, while the part on the ‘inside’ advances very little. As a consequence, there is a lot of stretching in the part that connects the left to the right, and later and later slices have to stretch more and more in this region. Then we can see from fig. (c) that a wavemode which is a positive-frequency mode on an initial spacelike surface gets distorted when it evolves to a later spacelike surface, and this mode will not be made of purely positive frequencies after the distortion. As we have seen before, if the wavemode is distorted, there can be particle creation. Fig. (d) shows how the particle pairs created thus are entangled. Bidisha also introduced operators that create localised wavepackets to explicitly prove the entanglement.
There were comments about how this was the ‘old’ statement of the information paradox, because of the modern understanding that one cannot conclude that a system is in a mixed state merely from its two-point correlation functions looking thermal. This means that Hawking’s initial arguments are not sufficient to establish that there actually is information loss and therefore a paradox.
These comments were noted and, with some confusion on the matter remaining, we moved back to Mathur's review, where it was discussed that the essential problem is created not by the 'thermality' of the black hole but by the entangled nature of the created states. There is an order-unity entanglement entropy from the state created by each pair of creation operators, and so the entanglement entropy of the radiation is of order the number of emitted quanta. It is this entanglement that will eventually lead to information loss. By contrast, if a piece of coal burns away completely to radiation, this radiation is in a pure state. The central point is that vacuum modes evolve over smooth spacetime in the manner sketched in fig. (d), and thus create entangled particle pairs. Entangled states are not a problem by themselves. The problem arises because gravity is an attractive force with a negative potential energy. This makes the quanta inside the horizon have a net negative energy, and eventually there is no net mass left in the hole. If we assume that there cannot be an infinite number of light 'remnants' in our theory, then we are forced to conclude that the black hole disappears. Now the radiation quanta are 'entangled with nothing.'
With this in the back of our minds, we proceeded to consider a proposal to resolve the information paradox: fuzzballs. The hope here is that quantum gravity effects can change the entire interior of the hole and resolve the information paradox. Bidisha then considered a spacetime of string theory with some directions compactified, one of them on a circle $S^1$.
We can wrap a string around the $S^1$; this will look like a point mass from the viewpoint of the noncompact directions. We can take a large number $n_1$ of these strings and take their bound state; otherwise, we may end up making many small black holes rather than a single massive hole. The microscopic count of states then suggests an entropy that is essentially zero: the bound state is just a single multiwound string, with only a small ground-state degeneracy. What about the 'black hole' that it creates? The string carries 'winding charge' and radiates a corresponding 2-form gauge field $B_{\mu\nu}$. When we make the metric with the mass and charge of the string, we find that the horizon coincides with the singularity, and so the horizon area is zero. Thus the Bekenstein entropy $S_{bek} = A/4G$ vanishes, in rough agreement with the negligible microscopic count.
Alternatively we can take the massless gravitons of the theory and allow them to circle around the $S^1$; this would also look like a mass point from the viewpoint of the noncompact directions, but now the mass point will carry 'momentum charge' due to the momentum carried by the gravitons. To get a 'bound state' of these gravitons we would have to put all the momentum into one energetic graviton, so the microscopic entropy would again be essentially zero. The metric produced by this graviton carrying energy and 'momentum charge' again ends up with no horizon area, and we again get a vanishing Bekenstein entropy, in agreement with the microscopic count.
We may then combine the winding and momentum charges and make a bound state by letting the momentum be carried as traveling waves on the string. There are many states for a given winding $n_1$ and a given momentum $n_p$: we can put all the energy in the lowest harmonic, or some in the first and some in the second harmonic, etc. The number of such states turns out to give a large entropy, growing with the charges.
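The formula here is presumably the standard two-charge count: distributing the momentum among the harmonics of a string wound $n_1$ times (with 8 bosonic and 8 fermionic transverse oscillation directions in type II) gives, via the Cardy/Hardy-Ramanujan estimate,

```latex
S_{\rm micro} = \ln(\text{number of states}) \simeq 2\sqrt{2}\,\pi\sqrt{n_1 n_p}\,.
```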
The count above is for a type II theory. Bidisha then discussed that the geometry produced by a point source carrying the energy and gauge fields of the string winding and momentum can be computed, and that in this computation higher-derivative corrections to the leading Einstein action must be considered. She argued that this modifies the expression for the Bekenstein entropy (to the 'Bekenstein-Wald entropy'), and that with these needed corrections the entropy was computed for the case of K3 compactification. It was found that the microscopic count exactly reproduces the entropy from the geometry of the horizon.
The important point here is that the elementary string of string theory has no longitudinal waves; it admits only transverse oscillations. Thus when carrying momentum as traveling waves, it spreads over some transverse region instead of just sitting at a point in the noncompact space. Instead of the spherically symmetric hole with a central singularity at $r = 0$, we get a 'fuzzball,' with different states of the string creating different fuzzballs. Interestingly, the boundary of the typical fuzzball has an area $A$ that satisfies $A/4G \sim S_{micro}$.
So we see that the region occupied by the vibrating string is of order the entire horizon interior; in fact a horizon never forms. Now there is no information problem: any matter falling onto the fuzzball gets absorbed by the fuzz and is eventually re-radiated with all its information, which is just how any other body would behave. The crucial point is that we do not have a horizon whose vicinity is 'empty space.' The matter making the hole, instead of sitting at the center, spreads all the way out to the horizon scale. So it can send its information out with the radiation, just like a piece of coal would.
Bidisha then discussed a recent paper by Mathur on what prevents gravitational collapse in string theory, illustrated with two examples from classical gravity.
In each case, we have a shell with positive energy density which does not collapse inwards. Since these examples share some qualitative features with the construction of microstates in string theory, it was argued that they serve as a useful guide to the nature of fuzzballs. In each example, we have an extra dimension compactified to a circle; dimensionally reducing on this circle gives Einstein gravity in 3+1 dimensions plus a scalar field. The scalar field has positive energy density and provides the 'matter' in the 3+1 dimensional description. The pressure and density of this matter diverge at various points; however, it was argued that this is only an artifact of dimensional reduction. Results like Buchdahl's theorem are therefore bypassed. Further, such microstates account for the entropy of black holes, and so these topologically nontrivial constructions dominate the state space of quantum gravity.
The takeaway from this study was that the 3+1 dimensional description exhibits several pathologies, like divergent energy densities and pressures, while the full higher-dimensional solution always remains regular.
Day four of the meeting began with a fantastic talk on the conformal bootstrap by Parijat Dey (IISc). She started her lecture by drawing the phase diagram of water and briefly discussing the universal features at the critical point. The fact that theories as different as the 3d Ising model and water share the same universal behavior makes the study of conformal symmetry at the critical point interesting. After giving a broad overview, Parijat went on to explain the framework for calculating critical exponents in CFT for d > 2. She considered $\phi^4$ theory in $d = 4 - \epsilon$ dimensions (the well-known epsilon expansion).
From the expansion of the $\beta$ function in terms of $\epsilon$, one can see that there is a Wilson-Fisher fixed point of this theory at a coupling of order $\epsilon$. Although $\epsilon$ should be kept small for the perturbative expansion to make sense, surprisingly, if $\epsilon$ is set to one (i.e. d = 3), the conformal dimensions of the operators match closely with those calculated numerically! To illustrate this matching, Parijat gave the examples of the conformal dimensions of $\phi$ and $\phi^2$, which have been calculated up to five orders in $\epsilon$.
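For concreteness, the leading terms of these two $\epsilon$-expansions (quoted here through order $\epsilon^2$; the five-loop expressions are lengthy) are

```latex
\Delta_{\phi} = 1 - \frac{\epsilon}{2} + \frac{\epsilon^2}{108} + O(\epsilon^3)\,, \qquad
\Delta_{\phi^2} = 2 - \frac{2\epsilon}{3} + \frac{19\,\epsilon^2}{162} + O(\epsilon^3)\,,
```

and setting $\epsilon = 1$ already lands remarkably close to the numerically determined 3d Ising values $\Delta_\phi \approx 0.518$ and $\Delta_{\phi^2} \approx 1.413$.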
Critical exponents are related to critical dimensions, and the perturbative computation requires summing over innumerable Feynman diagrams, which is an extremely tedious process. However, it turns out one can use conformal symmetry at the fixed point, and the calculation becomes more convenient. That's where the philosophy of the conformal bootstrap comes in: as Parijat explained, one can focus on the CFT itself, and not on any specific microscopic realization, without even writing down a Lagrangian (and thus no Feynman diagrams!). The bootstrap program is set up using conformal symmetry alone, requiring only consistency of the operator product expansion (OPE) and crossing symmetry (associativity of the OPE).
Before going into the details of the bootstrap program, she did a quick recapitulation of the conformal symmetries, which are translations, rotations, boosts, dilatations and special conformal transformations. She also showed us the conformal algebra of these generators, and that the generators of translations ($P_\mu$) and special conformal transformations ($K_\mu$) act respectively as raising and lowering operators for conformal dimensions (eigenvalues of the dilatation operator). An interesting question came up at this point: can we keep acting with $K_\mu$ to produce negative-weight states? It turns out that because of the unitarity bound of the CFT, such states are not allowed. She then explained how to fix two-point and three-point correlation functions of scalar primaries up to an overall coefficient. The four-point correlator is fixed up to an arbitrary function of the cross ratios.
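The raising/lowering structure can be read off from the commutators of the dilatation generator $D$ with $P_\mu$ and $K_\mu$, which (in suitable conventions) read

```latex
[D, P_\mu] = +\,P_\mu\,, \qquad [D, K_\mu] = -\,K_\mu\,,
```

so acting with $P_\mu$ raises the dimension by one while acting with $K_\mu$ lowers it by one; the unitarity bound (e.g. $\Delta \geq (d-2)/2$ for scalars) is what stops us from lowering forever.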
After this she took us through the OPE of two operators. We saw how to write the four-point correlator using pairs of OPEs and construct conformal blocks. In that discussion she introduced a differential equation, coming from the quadratic Casimir, satisfied by the conformal blocks. Then we saw the bootstrap equation, obtained by equating the s and t channels of the four-point correlation function of identical scalars.
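For identical external scalars of dimension $\Delta_\phi$ and cross ratios $u, v$, the bootstrap (crossing) equation takes the standard form, after separating out the identity operator:

```latex
\sum_{\mathcal{O}} \lambda_{\mathcal{O}}^2
\left[\, v^{\Delta_\phi}\, g_{\Delta,\ell}(u,v) - u^{\Delta_\phi}\, g_{\Delta,\ell}(v,u) \,\right]
= u^{\Delta_\phi} - v^{\Delta_\phi}\,,
```

where the sum runs over the non-identity primaries $\mathcal{O}$ in the $\phi \times \phi$ OPE, with dimensions $\Delta$, spins $\ell$ and OPE coefficients $\lambda_{\mathcal{O}}$.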
From this bootstrap equation the OPE coefficients and the spectrum can be constrained, but in general solving it is difficult. It is usually done numerically, and with a lot of effort.
However, there are some general facts that are widely believed, and for good reason. One of the most basic is that one needs an infinite number of operators in one channel to reproduce even a single conformal block in the cross channel. Explicit examples of this decomposition are, however, rare.
To convince us, Parijat took the case of mean-field theory, which in this context is a theory in which any correlation function can be written as a sum of products of two-point functions — i.e., there's a Wick factorisation, even if the theory isn't free in the traditional sense.
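Concretely, Wick factorisation fixes the mean-field four-point function of a scalar $\phi$ of dimension $\Delta_\phi$ to be the sum over the three pairings, which in terms of the cross ratios $u, v$ reads

```latex
\langle \phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4) \rangle
= \frac{1}{(x_{12}^2\, x_{34}^2)^{\Delta_\phi}}
\left[\, 1 + u^{\Delta_\phi} + \left(\frac{u}{v}\right)^{\Delta_\phi} \right].
```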
For a four-point function of four identical scalars in such a mean-field theory, she took the limit in which operators 1 and 2 (the numbering isn't important except to keep track of things) come close to each other, with operator 2 between operators 1 and 3. In this limit, the s-channel conformal block (from the fusion of 1 and 2) behaves as a power of the cross ratio set by $\Delta_\phi$, the conformal dimension of the operator whose correlator we're calculating. The u-channel expansion (from the fusion of 2 and 3), however, behaves like a sum of terms with different powers. Since $\Delta_\phi$ can in general be irrational, the only way to reproduce this non-analyticity is with an infinite sum. She ended her lecture with an explicit reproduction in mean-field theory of the behaviour in the s-channel by summing over the t-channel, using the fact that the conformal blocks are well known in mean-field theory. She left us to ponder the infinite mysteries of this simple-seeming equation that needs an infinite sum to work, and also to have lunch.
Sudip Ghosh (ICTS) began his talk by describing the Kawai-Lewellen-Tye (KLT) relations, which express tree-level closed string scattering amplitudes in terms of products of tree-level open string amplitudes. As usual, the $\alpha' \to 0$ limit is expected to yield field theory results, and we find that this is the case: it is possible to write tree-level graviton scattering amplitudes in terms of products of tree-level gluon scattering amplitudes.
We want to make contact with the CHY formalism, so we consider two solutions of the scattering equations and define a bracket between them.
The bracket is built from Parke-Taylor factors evaluated on the solutions of the scattering equations. The statement — called KLT orthogonality — is that this inner product (appropriately normalized!) on the space of solutions is simply $\delta_{ij}$. Appropriately normalizing the vectors, it is possible to express the inverse of the KLT kernel — using the above orthogonality property — in a compact way that identifies it with the double partial amplitude that Arnab introduced in his talk. (As a reminder, the double partial amplitude is simply the integral of the product of the Parke-Taylor factors for the permutations in question, integrated over the moduli space of the punctured sphere.)
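For reference, the double partial amplitude mentioned here is the CHY integral of two Parke-Taylor factors for orderings $\alpha$ and $\beta$ (with $d\mu_n$ the CHY measure, which localizes on the solutions of the scattering equations):

```latex
m(\alpha\,|\,\beta) = \int d\mu_n\; \mathrm{PT}(\alpha)\, \mathrm{PT}(\beta)\,.
```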
Then, he showed that whenever the CHY integrand can be written as a product of two factors, each of whose weights is half the full integrand's (as in fact happens in gravity), we can use the same KLT kernel to multiply the factorized amplitudes that we use for converting Yang-Mills amplitudes into gravity amplitudes. This is a simple consequence of the fact that KLT orthogonality is, mathematically, just a statement about Parke-Taylor factors evaluated at the solutions of the scattering equations, which are ubiquitous in the CHY formalism.
We divine from the above discussion a general procedure that works for gravity and Yang-Mills theories: break the CHY integrand — which can in these cases be written as two Parke-Taylor factors times some rational function of cross ratios of the $\sigma_i$ — into left and right parts (take one of the Parke-Taylor factors to be the left part), use the "generalized KLT relations" to write this in terms of double partial amplitudes (again, we're doing this because these amplitudes are easier to compute), and then do the same for the right part. There appears to be some "judicious choice" of which Parke-Taylor factors to peel off, but we'll get back to that in a moment.
One is led to wonder whether such a procedure necessarily terminates, and we find that it does. At some point, we get a left- or right-integrand that matches the one we started with. What it multiplies is a rational function with no zeroes, and this factor can be dealt with using graph theory. Insisting that the left- and right-integrands have the correct SL(2,C) weights implies that this rational function has each $\sigma_i$ appearing four times. One can write down a diagrammatic representation of this rational function in which each $\sigma_i$ is a node, and the nodes of each factor $(\sigma_i - \sigma_j)$ are connected by a line. Graphs arrived at via this procedure are called 4-regular graphs, and one can decompose them in terms of 2-regular graphs. This simplifies the CHY integrands.
Regarding the judicious choice of Parke-Taylor factors to peel off, we heard from Sudip that k-regular graphs can be broken down into combinations of 2-regular graphs in a finite number of ways. It turns out that the Parke-Taylor factors we need to peel off are related to the factors constructed from these 2-regular graphs.
Sudip then turned to the question of whether loop amplitudes can be discussed within some extension of the CHY formalism. This is done in a deceptively simple way: by identifying 1-loop amplitudes in 4d theories as coming from the forward scattering limit of 5d tree-level amplitudes. Let's try and unpack this bombshell.
A 1-loop amplitude is represented by a Feynman diagram with a loop in it (yes, that’s right, physicists did manage to come up with a good, descriptive name). Now cut this loop into two propagators and lift their momenta into a higher dimension, and impose momentum conservation in this direction. This line of reasoning allows one to understand loop amplitudes as tree amplitudes in one higher dimension:
Here we have a propagator corresponding to the loop momentum, so the representation is a little closer to home.
And so he ended his and Arnab's beautiful series of lectures with reflections on how crazy this all is: how much those in this field feel like they're feeling their way around a grey pillar, wondering how it can be simultaneously leathery and hairy, without being able to see the elephant whose leg it is. Needless to say, Sudip said it in a rather less purple manner, emphasising how we might be on our way to a fundamentally new formulation and understanding of QFT in which all these hidden structures are manifest.
The evening session today was bravely led by Pratik Roy (IoP), who, among other things, tried to elucidate the relation between index theorems and chiral anomalies in non-abelian gauge theories. He worked in the simple setting of a free fermion coupled to an external non-abelian gauge field, wherein the gauge anomaly manifests itself in the Euclidean effective action being invariant only up to a phase under gauge transformations.
Here the phase $\alpha(A, g)$, depending on both the gauge field and the gauge transformation, is the anomaly term.
The Dirac operator doesn’t have a well defined eigenvalue problem. Hence Alvarez-Gaume and Ginsparg defined a new operator who’s determinant is the partition function which has a well defined eigenvalue problem. with in 2n space dimensions.
One can further prove that this determinant reproduces the same anomalous phase. The gauge group G is assumed to be a simply connected, semi-simple Lie group. The gauge transformation elements $g(\theta, x)$ depend on both space $x$ and a gauge transformation parameter $\theta$, with boundary conditions $g(0, x) = g(2\pi, x) = 1$. Now, in the enlarged space parametrized by $x$ and $\theta$, we consider a gauge field built from $g^{-1}(A + d)g$, where $d$ contains derivatives w.r.t. both $x$ and $\theta$. Here, the anomaly is manifest in the phase picked up by the determinant as $\theta$ goes from $0$ to $2\pi$: such families of gauge transformations fall into topological classes, which can be labelled by a winding number. The enlarged space is further enlarged by the addition of a second parameter taking values in $[0,1]$.
The index of any operator is defined as the difference between the dimensions of the kernels of the operator and of its adjoint. The speaker then tried hard to convince us that the index of the correspondingly enlarged operator in this (2n+2)-dimensional space is equal to the anomaly, for rather confusing reasons. Some of us found it rather helpful to look at this and this for clarification.
Nonetheless, the important point, that gauge anomalies can be related to topological invariants of gauge manifolds, was not lost on the audience, who by this time had had to recall definitions of Hopf fibrations, homotopy groups and other exotic objects without ever reaching for Nakahara. The tired audience then more or less forced the speaker to stop. So he ended with a rather surprising statement: that the mathematical "determinant bundle" he was talking about is identical to the gauge-field configuration around a monopole. None of us understood this, but we all — especially the harried and tired speaker — understood the appeal of dinner.
The third day of the conference of awesomeness featured Vinay Malvimat’s (IIT-K) third talk on entanglement, Arnab Priya Saha’s (IMSc) second talk on CHY amplitudes, and an evening talk by Madhusudhan Raman (IMSc) on Resurgent Asymptotics.
Vinay devoted his third lecture to proving various properties of entanglement entropy calculated via the holographic prescription. He first discussed the proof of strong subadditivity of entanglement entropy in the holographic context. This is a well-known property of entanglement entropy, proved by Lieb and Ruskai in the 1970s for any quantum system. The inequality is stated as $S_{AB} + S_{BC} \geq S_B + S_{ABC}$, where we consider three parties A, B and C with a product Hilbert space structure $\mathcal{H}_A \otimes \mathcal{H}_B \otimes \mathcal{H}_C$, and $S_M$ denotes the von Neumann entropy of system M with density matrix $\rho_M$. It was noted in the first lecture that strong subadditivity illustrates a trade-off in the entanglement that can be shared among multiple parties. This is referred to as the monogamy property of entanglement.
We first set out to prove it in the holographic context, where we take the systems A, B and C to be three regions in the boundary. To begin with, Vinay considered a 1+1 dimensional boundary. For simplicity, the regions A, B and C were taken to be adjacent to each other (in that order). By drawing the minimal-length geodesics that enclose the different regions and comparing their lengths, he showed that strong subadditivity follows from geometry. (The proof is originally due to Headrick and Takayanagi.) It was noted, in retrospect, that strong subadditivity of entanglement, when written in terms of mutual information, translates to $I(A:B) \leq I(A:BC)$, where $I(A:B) = S_A + S_B - S_{AB}$.
He then set out to prove a stronger condition on mutual information, satisfied by holographic systems but not necessarily by a general quantum system. The inequality, stated as $I(A:BC) \geq I(A:B) + I(A:C)$, is known as the monogamy property of mutual information. To show this, the A, B, C setup was drawn as before, this time illustrating all the minimal area surfaces enclosing the different regions. Members of the audience at this point came up with various questions about what the minimal area surfaces would look like in different situations. One of the points Vinay explained here was that when region B is very small, the minimal surface enclosing the region AC is almost the same as the minimal surface enclosing ABC plus a small surface enclosing B; it was noted that this small surface is in fact the minimal-length surface enclosing B. When region B is large, however, such a combination of surfaces is no longer the surface of minimum area enclosing AC: the minimal surface enclosing AC is instead the minimal surface enclosing A plus the minimal surface enclosing C. Moving on, the minimal surfaces enclosing the different regions were split into disjoint pieces which individually enclosed A, B, C and ABC but were not of minimal area. The statement that their areas were not the minimal areas enclosing the said regions led to the monogamy of holographic mutual information (proved originally by Headrick et al.). As Vinay thought this was a natural point to stop, and stated his own need for a chai, we decided to break for twenty minutes.
Rejuvenated by chai, Vinay returned to state the Bousso bound and the HRT conjecture. The Bousso bound is an upper bound on the amount of thermodynamic entropy that can be contained in a region enclosed by a congruence of null geodesics passing through a codimension-2 spacelike hypersurface S, with non-positive expansion parameter in the forward and backward lightcones of the hypersurface. It was pointed out that the restriction on the expansion parameter implies that the region considered is not an arbitrary region in an arbitrary spacetime. It was also noted that the condition on the expansion parameter is a local condition, as the expansion parameter, which is the trace of the extrinsic curvature, is a local object. The Bousso bound reads $S \leq A/4G$, where S is the thermodynamic entropy of the region and A is the area of the spacelike hypersurface considered.
He then stated the HRT conjecture: when the spacelike hypersurface is anchored to the boundary, the holographic entanglement entropy saturates the Bousso bound. He proceeded to examples with different boundary geometries that have corresponding dual CFTs. In the case of AdS3, he solved for the null congruence with zero expansion parameter passing through a spacelike hypersurface S in the boundary. This turned out to coincide with the minimal surface enclosing the region S. Its area gives the right-hand side of the Bousso bound, and it also equals the entanglement entropy of a 1+1 d CFT calculated in the first lecture; hence we could see that the HRT conjecture is valid here. As all solutions of pure Einstein gravity in three dimensions are locally AdS3, he then considered the cases of the rotating non-extremal and extremal BTZ black holes, where the RHS of the Bousso bound was calculated by transforming to locally AdS3 coordinates. For non-extremal BTZ, the RHS was shown to equal the entanglement entropy of a CFT on a twisted cylinder. For extremal BTZ, the RHS was shown to equal the entanglement entropy of a CFT in which the left movers have a finite temperature but the right movers have zero temperature, or vice versa.
While the discussion so far was restricted to stationary spacetimes, he then went on to consider the Vaidya solution and stated the behavior of entanglement entropy as a function of the null time coordinate: the entropy increases with time, saturating at late times. This was compared with the case where the BTZ mass is made time dependent in the adiabatic approximation, which shows a similar behavior of entropy increase and saturation. At this point, there were some parallel discussions about the possible behaviors of entanglement entropy as a function of time in different time-dependent situations, as well as the results in the HRT paper on the behavior of minimal surfaces in Vaidya as a function of time. It was noted that the minimal surface doesn't enter the horizon at any point of time. We then left for different places to wait for lunch.
After lunch, we reconvened to hear Arnab Priya Saha tell us some more details about the CHY formalism. He began by reminding us of the central result he explained in the last lecture: that the scattering amplitude of n identical particles of spin s, where s is 0, 1 or 2, can be written as
where the integral is over the moduli space of an n-punctured $S^2$ with the punctures at $\sigma_i$, $i = 1, 2, \cdots, n$, the Parke-Taylor factor is $\mathrm{PT}(\sigma) = 1/(\sigma_{12}\sigma_{23}\cdots\sigma_{n1})$ with $\sigma_{ij} = \sigma_i - \sigma_j$,
and the last factor is the "reduced Pfaffian" of a matrix built out of polarisations and momenta; this matrix has zero modes, owing to momentum conservation, that need to be removed to get a non-zero Pfaffian. One reminder for the reader: the reason these scalar amplitudes have color factors is that they are adjoint scalars.
Having reminded us of this basic result, he moved on to the same form of the amplitude for scattering of non-identical particles. These amplitudes are built out of the same building blocks, Parke-Taylor factors and Pfaffians, now involving subsets of the particles (it is only for the Pfaffian with all the particles that there are zero modes to be taken out).
For an amplitude with q scalars, r gluons and s gravitons, he told us that the amplitude has the form
It is worth noting here that the gluon variables appear only in the first Pfaffian while the graviton variables appear in both Pfaffians, so that the power counting matches the earlier identical-particle amplitude. As Arnab was explaining this power counting, a really interesting question came up: why isn't the amplitude literally just a product of a gluon-type factor and a graviton-type factor, seeing as this much more closely matches the naive power counting? Two points came up. The first, extremely heuristic, argument was that in such a form the entire interaction between gravitons and gluons would come from the moduli space integral, and that this is "very little interaction."
Much more important was the second point, which also happened to be the next thing Arnab had planned to talk about! He took the case where there is only one graviton, and took the limit in which this graviton is soft. In this limit, the exact factorisation suggested above does happen, and the Pfaffian-squared factor gives exactly the famed Weinberg soft factor that appears in his soft theorem.
Then, he restricted to amplitudes with only external gravitons, but allowed his polarisation vectors to have indices in both the d physical dimensions and M extra "fake" dimensions. If an index takes a value in the M fake dimensions, the particle behaves like a photon! Since there are M fake dimensions, there are actually M distinct photon species. He showed an elegant perfect-matching condition for a photon amplitude in this setup to be non-zero: there have to be an even number of photons of every type. This seems to be a generalisation of Furry's theorem in QED, which disallows processes with an odd number of external photons.
Then, he pointed out that using this same fake-dimension setup with a one-index polarisation allows one to treat SU(N) gluons and M adjoint scalars in a unified manner. Taking a theory with minimal coupling between the scalars and the gauge field as well as a four-scalar coupling, and restricting to amplitudes with only scalar external particles, he showed that even here there is a perfect-matching condition, and that restricting to certain perfect matchings gives the answer for the corresponding scalar theory.
Finally, he turned his attention to the connection with helicity amplitudes. The CHY answers are sums over all possible helicity configurations. He showed that each solution of the scattering equations corresponds to a different helicity configuration, and that the solutions split into classes whose sizes are given by "Eulerian numbers": the number of ways of arranging the numbers 1 to n-3 such that m of them are greater than the previous one. He then showed that these Eulerian numbers also count the ways to distribute the helicities among the particles once we've stipulated a fixed number of negative helicities. In particular, MHV and anti-MHV amplitudes correspond to classes of size 1, and this is one way of seeing why they're simple.
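As a quick sanity check of these numbers (a sketch of mine, not from the talk), one can count ascents in permutations by brute force; e.g. for an $n = 7$ point amplitude the class sizes are given by the row $E(4, m)$:

```python
from itertools import permutations

def eulerian(n, m):
    """Count permutations of 1..n with exactly m ascents
    (positions where the next entry is greater than the previous one)."""
    count = 0
    for p in permutations(range(1, n + 1)):
        ascents = sum(1 for i in range(n - 1) if p[i] < p[i + 1])
        if ascents == m:
            count += 1
    return count

# Row n=4 of the Eulerian triangle: 1, 11, 11, 1
print([eulerian(4, m) for m in range(4)])
```

The row sums to $4! = 24$, matching the total number $(n-3)!$ of solutions of the scattering equations for $n = 7$; the two classes of size 1 are the MHV and anti-MHV configurations.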
He was going to proceed to proving Weinberg’s soft theorem, but it was pointed out that he’d already done that and didn’t actually need to repeat it. And so we left to different places to wait for the next talk.
Madhusudhan Raman (IMSc) (depicted as a nose-less man with beard) delivered the evening talk about his ongoing work on Resurgent Asymptotics. The talk was mostly pedagogical in nature with a highly interactive audience. It was kind of nice to see so many enthusiastic people still talking about intense stuff after 6 hours of lectures.
Madhu started his lecture by stating the "myth" about perturbation theory, which is: perturbation theory works fine provided you know where to stop. To be a little more concrete, say we have a confining quartic potential, with Hamiltonian essentially $H = \frac{p^2}{2} + \frac{x^2}{2} + g\, x^4$.
Clearly, the quantum theory will have a discrete spectrum labelled by a quantum number $n$, i.e. energy levels $E_n(g)$, which we anyway cannot find exactly for a general $g$. Unfortunately, we live in a violent, unfriendly universe which renders us incapable of finding the exact spectrum of the Hamiltonian. To make life easy, we resort to perturbation theory and write down an approximate answer for the energy levels, $E_n(g) \simeq \sum_k c_k\, g^k$, when the coupling g is tuned to be small.
Generically this series is divergent: the general coefficient grows factorially, $c_k \sim k!$ (up to power-law factors). In fact the series seems well behaved, and appears to converge, up to a certain critical number of terms $N_* \sim 1/g$, and then blows up.
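This "converges, then blows up" behaviour is easy to see in a toy model (my own sketch, not from the talk): the series $\sum_k (-1)^k\, k!\, g^k$ diverges for any $g > 0$, but its partial sums approach the finite Borel-sum value until roughly $N_* \sim 1/g$ terms, after which they run away:

```python
import math

def borel_sum(g, n_steps=200000, t_max=50.0):
    """Numerically evaluate the Borel sum  int_0^inf e^{-t} / (1 + g t) dt
    (midpoint rule, truncated at t_max where the integrand is ~1e-22)."""
    h = t_max / n_steps
    return sum(math.exp(-(i + 0.5) * h) / (1 + g * (i + 0.5) * h) * h
               for i in range(n_steps))

def partial_sum(g, N):
    """Partial sum of the divergent series  sum_{k=0}^{N} (-1)^k k! g^k."""
    return sum((-1) ** k * math.factorial(k) * g ** k for k in range(N + 1))

g = 0.1
target = borel_sum(g)
errors = [abs(partial_sum(g, N) - target) for N in range(25)]
best = min(range(25), key=errors.__getitem__)
print(best)  # the truncation error is minimised near N* ~ 1/g
```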
Given this behaviour of the perturbation series, it is natural to ask how close we can get to the actual answer from the divergent result; in other words, can we make non-perturbative statements from our standard perturbative expansions? Perhaps a nicer way to put it is due to Abel:
Divergent series are the invention of the devil, and it is shameful to base on them any demonstration whatsoever … That most of these things [summation of divergent series] are correct, in spite of that, is extraordinarily surprising. I am trying to find a reason for this; it is an exceedingly interesting question.
[Niels H. Abel, 1802-1829.]
After this, Madhu proceeded to demonstrate certain nice features of trans-series (rather than giving a technical definition). As examples he chose Euler's equation and the Painlevé II equation.
Since there was some confusion about the exact form of Euler's equation, Madhu chose the Painlevé equation as his example. Plugging in a trans-series ansatz, we can write down the exact solution.
In general, these trans-series have some common features:
There is more than one small expansion parameter
The terms show factorial growth
The "late" terms, which appear at large n, can make a large contribution to the series.
Since Madhu finds the WKB approximation quite interesting, and wants to figure out when exactly a semi-classical approximation is a good way to describe physics, he chose to demonstrate resurgence in exponential integrals; in other words, he analysed integrals of the form $\int dz\; e^{-S(z)/\hbar}$.
In general, to analyse such a problem we use the standard saddle-point approximation. The value of the integral evaluated about the n-th saddle point gets resurgent contributions from the neighbouring saddles; such situations turn up in QM as well (e.g. the double-well potential). This point generated a bit of confusion, since in perturbation theory we expand around one of the vacua and write a perturbative series around that point. However, resurgence tells us that in spite of this, each vacuum actually knows about the other vacua and receives resurgent contributions. So the perturbative expansion, although it might seem localized around a minimum of the potential, still somehow captures the global structure.
In the case of QM, if we consider the double-well potential, the energy states are given by a trans-series of the schematic form $E(g) \sim \sum_k a_k\, g^k + \sigma\, e^{-S_I/g} \sum_k b_k\, g^k + \cdots$: the "small" perturbative part describes deviations around the vacuum saddle point, while the exponentially suppressed $e^{-S_I/g}$ term represents instanton saddle points and is in fact the non-perturbative sector.
Finally, he spoke about Borel resummation, which helps us actually make sense of the divergent perturbation series. Starting with a series of the form $f(g) \sim \sum_n c_n\, g^n$, and assuming that the coefficients display factorial growth (i.e. $c_n \sim n!$), we did a Borel transform to cancel out the factorial growth, defining $\mathcal{B}f(t) = \sum_n \frac{c_n}{n!}\, t^n$.
Further, we perform a Laplace transform and define the Borel sum as $\mathcal{S}f(g) = \int_0^\infty dt\; e^{-t}\, \mathcal{B}f(g t)$.
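The canonical example is the Euler series, with $c_n = (-1)^n n!$: the Borel transform sums to a geometric series, and the Borel sum is a perfectly finite integral,

```latex
f(g) \sim \sum_{n=0}^{\infty} (-1)^n\, n!\, g^n
\quad\Longrightarrow\quad
\mathcal{B}f(t) = \sum_{n=0}^{\infty} (-t)^n = \frac{1}{1+t}\,,
\qquad
\mathcal{S}f(g) = \int_0^\infty dt\; \frac{e^{-t}}{1+g\,t}\,.
```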
It was finally stated (without proof) that the asymptotic expansion of the Borel-resummed series is the same as the original divergent series order by order. Thus, with the enlightenment that devilish divergent series are actually nice and encode non-perturbative physics, we ended our third day of the workshop.