Today’s lecture, by Aneesh P B (CMI, Chennai), was focused on introducing the covariant phase space formalism with a view toward its use in calculating charges in de Sitter.

The phase space of a classical system is usually introduced as the space of positions and momenta of the system at a particular instant in time. However, we can think of each point in this space, given by a position $q$ and a momentum $p$, as an initial condition for classical evolution, and therefore the phase space is also the *space of classical solutions.*

The point of this reformulation is that while the first point of view depends on picking a time-slice and therefore breaks (local) boost symmetry, the second formulation doesn’t care about this. Because gravity is invariant under diffeomorphisms and therefore under local boosts, this *covariant* point-of-view of phase space is more natural.

The main problem with this point-of-view is how to calculate the symplectic form (the inverse of which is related to Poisson brackets), which is required (among other things) for calculating dynamics. There are multiple approaches to this, for example the one by Crnkovic and Witten and the one by Lee and Wald. Aneesh followed the Lee and Wald formalism.

Aneesh began with a short introduction to forms and differential geometry. For those who need a refresher: a $p$-form is a fully antisymmetric tensor with $p$ lower indices, and it’s written as,

$$\omega = \frac{1}{p!}\, \omega_{\mu_1 \cdots \mu_p}\, dx^{\mu_1} \wedge \cdots \wedge dx^{\mu_p},$$

where $\wedge$ denotes the wedge product, which is just a fancy way of saying that the different $dx^\mu$s anticommute with each other.

Also important is the exterior derivative, which is a fully antisymmetrised derivative of a form, so that the final result is a $(p+1)$-form. The reason forms are interesting is that they are the natural objects that are integrated on $p$-dimensional surfaces: the antisymmetrisation is so that the integral gives zero when two of the $dx$’s point in the same direction; this is why the volume of a parallelepiped is the fully anti-symmetrised product of its three ‘basis’ vectors.

He then introduced the symplectic form as a two-form on phase space. For example, in an $n$-particle case, it is,

$$\Omega = \sum_{i=1}^{n} dp_i \wedge dq^i = \frac{1}{2}\, \Omega_{ab}\, d\xi^a \wedge d\xi^b,$$

where $\xi^a$ is a way to label both positions and momenta with the same variable, so that $\xi = (q^1, \dots, q^n, p_1, \dots, p_n)$. The Poisson brackets are given by the inverse of the symplectic form,

$$\{ f, g \} = (\Omega^{-1})^{ab}\, \partial_a f\, \partial_b g.$$
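A minimal numerical sketch of this structure (my own toy example, not from the lecture): take one particle, so $\xi = (q, p)$, build the component matrix of the symplectic form, and check that the bracket defined through its inverse reproduces $\{q, p\} = 1$ and Hamilton’s equations for a harmonic oscillator.

```python
import numpy as np

# Toy phase space xi = (q, p). With the convention Omega = dp ^ dq, the
# component matrix Omega_ab (ordering q, p) and the bracket
#   {f, g} = (d_a f) (Omega^{-1})^{ab} (d_b g)
# give {q, p} = +1.
Omega = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
Omega_inv = np.linalg.inv(Omega)

def grad(f, xi, eps=1e-6):
    """Central-difference gradient of a function on phase space."""
    g = np.zeros_like(xi)
    for a in range(len(xi)):
        d = np.zeros_like(xi); d[a] = eps
        g[a] = (f(xi + d) - f(xi - d)) / (2 * eps)
    return g

def poisson(f, g, xi):
    return grad(f, xi) @ Omega_inv @ grad(g, xi)

q = lambda xi: xi[0]
p = lambda xi: xi[1]
H = lambda xi: 0.5 * xi[1]**2 + 0.5 * xi[0]**2   # harmonic oscillator

xi0 = np.array([1.3, -0.7])
print(poisson(q, p, xi0))   # ~ 1:  {q, p} = 1
print(poisson(H, q, xi0))   # ~ -p: {H, q} = -p, i.e. qdot = {q, H} = p
```

The same two lines of checks would work for any number of particles; only the size of `Omega` changes.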

Then, he introduced the concept of ‘Hamiltonian vector fields.’ Consider a vector field on phase space, $V = V^a \partial_a$. These can be thought of as flows on phase space, by identifying the components with the rate of variation of the coordinates, $V^a = d\xi^a / ds$ (where this derivative need not be with respect to actual time, just some continuous parameter that parametrises the amount of flow). There are two types of flows on phase space: those that can be written as

$$V^a = \{ \xi^a, f \} \quad \text{for some function } f \text{ on phase space},$$

and those that can’t be. Clearly, for consistency with Hamilton’s equations, we need time evolution to be of the first form. One might also want symmetry transformations to be of this form, for example. In a massive failure of sense and sensibility in nomenclature, flows that can be written as the Poisson brackets of a function on phase space with the coordinates are called ‘Hamiltonian vector fields,’ and the functions whose Poisson brackets they are are called the hamiltonians conjugate to those flows: the Hamiltonian is the *hamiltonian conjugate* to time translation, the momentum is the *hamiltonian conjugate* to space translation, and in general a Noether charge is the hamiltonian conjugate to a symmetry (up to possible subtleties).
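To keep the names straight, the most familiar example (standard material, spelled out here for concreteness): time evolution is the flow whose hamiltonian conjugate is the Hamiltonian $H$ itself, and Hamilton’s equations are just its Poisson-bracket form,

```latex
\dot{q}^i = \{ q^i, H \} = \frac{\partial H}{\partial p_i},
\qquad
\dot{p}_i = \{ p_i, H \} = -\frac{\partial H}{\partial q^i}.
```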

An important question, whose answer would be needed in the gravity case, was: given a flow, how does one tell whether it is Hamiltonian or not? The answer is that the Lie derivative of the symplectic form along this vector field should be zero. The Lie derivative of a tensor along a vector is the formalisation of the way the tensor changes under the flow corresponding to that vector, so this condition ensures that the symplectic form is invariant under the flow. In particular, the hamiltonian conjugate to such a flow is the conserved quantity of that transformation of the phase space coordinates.
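A compact way to see why this criterion works (a standard argument, filling in a step): the symplectic form is closed, $d\Omega = 0$, so Cartan’s magic formula gives

```latex
\mathcal{L}_V \Omega = d \left( i_V \Omega \right) + i_V \left( d \Omega \right) = d \left( i_V \Omega \right).
```

So $\mathcal{L}_V \Omega = 0$ exactly when $i_V \Omega$ is closed, and then (at least locally) $i_V \Omega = dH_V$ for some function $H_V$, which is the hamiltonian conjugate to the flow $V$.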

Moving to gravity in $d$ dimensions, he began with the space $\mathcal{F}$ of all ‘kinematically allowed’ metrics on a manifold $M$, where by ‘kinematically allowed’ he meant metrics one wouldn’t be devastated to find as solutions to one’s equations of motion. He then explained the rather involved process of defining the symplectic form as a two-form on this space of metrics (note that there are now two manifolds, the spacetime $M$ and the space of metrics $\mathcal{F}$, so we will have to be careful about where we’re differentiating).

The process is simple but unintuitive. First, take the Lagrangian $L$, which maps every point in $\mathcal{F}$ to a $d$-form on the spacetime $M$; this $d$-form is just the scalar Lagrangian we normal human beings are used to, multiplied by $\sqrt{-g}$ and the completely antisymmetric $d$-index Levi-Civita tensor $\epsilon$. Then, vary it, getting something of the form

$$\delta L = E_{ab}\, \delta g^{ab}\, \epsilon + d\theta(g, \delta g).$$

Here, $E_{ab} = 0$ are the Einstein equations, and the second term is the familiar boundary term one always finds (and willy-nilly sets to 0) while integrating by parts to find equations of motion from a Lagrangian; in this case, $\theta$ is known as the presymplectic potential. The name is analogous to a gauge potential, since $\theta$ and $\theta + dY$ are indistinguishable: there is a gauge-invariance in $\theta$, and changing gauge will correspond to canonical transformations on the phase space (which hasn’t been defined yet!). Finally, the presymplectic current is defined as the $(d-1)$-form,

$$\omega(g; \delta_1 g, \delta_2 g) = \delta_1 \theta(g, \delta_2 g) - \delta_2 \theta(g, \delta_1 g),$$

and the presymplectic form can be found by integrating the presymplectic current over a $(d-1)$-dimensional surface, for example the surface on which one would like to specify initial data. The prefix *pre* on all the symplectic objects is because they are not true symplectic objects, the reason being that they are highly degenerate (along the gauge directions) on the covariant phase space. The reason we went through all this was that

- we didn’t need to specify the initial data surface till the very end, so it can be anything as long as it is a valid initial data surface.
- this symplectic current is a conserved current, in the sense that its integral on a closed surface is 0, because of which it is the same for all spacelike surfaces that can be deformed to each other.
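The second point follows from antisymmetrising two variations of the Lagrangian (a standard manipulation, sketched here): writing $E_{ab}$ for the equations of motion and $\theta$ for the presymplectic potential, the symmetry of second variations $\delta_1 \delta_2 L = \delta_2 \delta_1 L$ gives

```latex
d\,\omega(g; \delta_1 g, \delta_2 g)
  = d\left[\, \delta_1 \theta(g, \delta_2 g) - \delta_2 \theta(g, \delta_1 g) \,\right]
  = \delta_2 E_{ab}\, \delta_1 g^{ab}\, \epsilon \;-\; \delta_1 E_{ab}\, \delta_2 g^{ab}\, \epsilon,
```

which vanishes when $g$ is a solution and $\delta_1 g$, $\delta_2 g$ solve the linearised equations; that is why the symplectic current is conserved on-shell.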

However, we still weren’t done: this symplectic form wasn’t defined on the phase space but on the space of all “kinematically allowed” metrics! So, we had to restrict from $\mathcal{F}$ to the space of solutions $\bar{\mathcal{F}}$. Then, we got the real pre-symplectic form. This was still not the true phase space, since two diffeomorphism-related metrics are two different points in $\bar{\mathcal{F}}$ despite being the same physical state, but it turned out that there were some subtleties too subtle even for our brave knight, and this was one of them.

From here, the process for calculating the charge is simple, up to some boundary term issues. Plug a particular variation, say $\delta_\xi g = \mathcal{L}_\xi g$ for a vector field $\xi$, into one slot of the symplectic form to get the generator of that infinitesimal transformation, and then try to integrate it to find the Noether charge. He showed how this worked for the specific example of the charge conjugate to translations in conformal time that Jahanur had indicated would be central to understanding gravitational waves in de Sitter.

*******

Chandan began his third lecture with the derivation of the Lindblad equation, starting from the basic assumption of Markovianity which he had covered in the second lecture. He did this by starting with the equation,

$$\rho(t + \delta t) = \sum_\alpha M_\alpha(\delta t)\, \rho(t)\, M_\alpha^\dagger(\delta t),$$

which was shown to follow from Markovianity in the previous lecture, where

$$\sum_\alpha M_\alpha^\dagger(\delta t)\, M_\alpha(\delta t) = \mathbb{1}.$$

This time, we expanded the operators $M_\alpha$ in terms of a set of orthonormal basis matrices. We then took a particular basis set in which one element is proportional to the identity and all the other basis matrices are traceless (for e.g., $\{\mathbb{1}, \sigma_x, \sigma_y, \sigma_z\}$ form a basis of $2 \times 2$ complex matrices). We then defined some new variables in terms of these basis matrices. The condition that the evolution preserves the trace, $\mathrm{Tr}\,\rho = 1$, was the final ingredient to derive the Lindblad equation,

$$\dot{\rho} = -i [H, \rho] + \sum_{i,j} c_{ij} \left( L_i\, \rho\, L_j^\dagger - \frac{1}{2} \left\{ L_j^\dagger L_i,\, \rho \right\} \right),$$

where $c_{ij}$ was shown to be a positive semi-definite matrix. Curious readers can access the handwritten notes at the resources section of the ST4 2018 website. At this point, a discussion arose as to how the trace-preserving condition was used, where it was also pointed out that we had used $\mathrm{Tr}\,[H, \rho] = 0$. The fact that a commutator has zero trace, while true for finite-dimensional systems (as $\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$), is not true for infinite-dimensional systems in general. We then wondered if this told us that the Lindblad equation applied only to a finite-dimensional Hilbert space, or if the particular trace we evaluated would still be zero by some nice properties of the Hamiltonian and the density operator. In any case, the audience, who had struggled to comprehend the jump to the Lindblad equation from the assumptions on the second day, were happier when this outline was presented and were ready for new stuff.
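As a numerical sanity check of these properties (a toy example I’m adding, not from the lecture): evolve a single qubit under amplitude damping, with one jump operator $\sigma_-$, and watch the trace and positivity of $\rho$ survive the evolution.

```python
import numpy as np

# Toy qubit Lindblad evolution (assumed example):
#   drho/dt = -i[H, rho] + gamma * (L rho L^dag - 1/2 {L^dag L, rho})
# with H = sigma_z / 2 and a single jump operator L = sigma_minus.
H = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])
L = np.array([[0.0, 1.0], [0.0, 0.0]])       # sigma_minus = |0><1|
gamma = 0.3

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    diss = gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return comm + diss

rho = np.array([[0.2, 0.3], [0.3, 0.8]], dtype=complex)  # a mixed state
dt, steps = 1e-3, 5000
for _ in range(steps):                        # simple Euler integration
    rho = rho + dt * lindblad_rhs(rho)

print(np.trace(rho).real)           # trace stays 1 (both terms are traceless)
print(np.linalg.eigvalsh(rho))      # eigenvalues stay in [0, 1]
```

Note that the trace is preserved exactly at each Euler step, because both the commutator and the dissipator are traceless in finite dimensions — precisely the point debated above.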

Chandan then introduced Veltman’s cutting rules to diagnose whether a theory is unitary. Consider the scattering matrix $S$ of a unitary quantum field theory. Then $S^\dagger S = \mathbb{1}$. Define $T$ such that $S = \mathbb{1} + iT$; then the unitarity condition can be written as,

$$-i \left( \langle f | T | i \rangle - \langle f | T^\dagger | i \rangle \right) = \sum_n \langle f | T^\dagger | n \rangle \langle n | T | i \rangle,$$

where $|n\rangle$ are some basis vectors. Chandan told us that (according to Veltman), this should be true diagram by diagram. It was pointed out by members of the audience that (if we trust Veltman) in a $\lambda \phi^4$ theory, if we apply this to the four-point function on the left side, we obtain products of three-point vertices on the right side, which are zero. Hence, in this case, the condition reduces to the imaginary part of $\lambda$ being zero. Thus $\lambda$ must be real in order to have a unitary quantum field theory.
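The operator identity behind this is easy to check numerically (a toy finite-dimensional “S-matrix” of my own, not from the lecture): for any unitary $S = \mathbb{1} + iT$, expanding $S^\dagger S = \mathbb{1}$ gives $-i(T - T^\dagger) = T^\dagger T$ exactly.

```python
import numpy as np

# Build a random unitary S = exp(iK) from a Hermitian K, then verify
#   -i (T - T^dag) = T^dag T   with   T = -i (S - 1),
# which, sandwiched between states, is the optical theorem.
rng = np.random.default_rng(0)
K = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
K = 0.5 * (K + K.conj().T)                       # make K Hermitian

lam, V = np.linalg.eigh(K)
S = V @ np.diag(np.exp(1j * lam)) @ V.conj().T   # unitary S = exp(iK)
T = -1j * (S - np.eye(4))                        # S = 1 + iT

lhs = -1j * (T - T.conj().T)
rhs = T.conj().T @ T
print(np.max(np.abs(lhs - rhs)))   # ~ machine precision
```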

We then studied the cutting rules in Schwinger-Keldysh. Chandan reminded us from the first lecture that, for $x^0 > y^0$, $G_{++}(x, y) = G_{-+}(x, y)$ and $G_{--}(x, y) = G_{+-}(x, y)$. This was recast as the largest time equation for general diagrams. We used the largest time equation to “cut” various diagrams, in the sense that on the RHS, $G_{+-}$ and $G_{-+}$ are on-shell. Therefore, the largest time equation can be used to convert the internal lines (off-shell) on the left side into external lines (on-shell) in the diagrams appearing on the right side, which can also be seen as cutting the diagram on the left in various ways to convert internal lines to external lines.

Chandan then showed us how, if we define a different basis (called the average-difference basis) in the Schwinger-Keldysh space, $\phi_a = \frac{1}{2}(\phi_+ + \phi_-)$ and $\phi_d = \phi_+ - \phi_-$, then the cutting equation reduces to the statement that correlation functions whose latest-time insertion is the difference field are zero.

*******

The last in the long line of evening lectures was presented by Kausik Ghosh (IISc, Bengaluru). He spoke to us about CFT bootstrapping at finite temperature. He started with a remark on how conformal invariance fixes all one-point functions of any operator in a CFT on $R^d$ to zero, barring the identity operator. Now, he argued, we can compactify one of the directions and study the CFT on $S^1 \times R^{d-1}$, where the length $\beta$ of the circle can be identified as an inverse temperature. But introducing a length scale in the system will cost us: we will now have nonzero one-point functions for arbitrary primary operators, in principle. However, because of translational invariance, one-point functions of any descendants will still vanish. The one-point function of a scalar primary operator is given by,

$$\langle O \rangle_\beta = \frac{b_O}{\beta^{\Delta_O}}.$$
Kausik then went on to explain how symmetries restrict one-point functions of operators with spin as well. Now, a key point: if the distance between two operators is less than the length of the circle, we can use the operator product expansion of the two operators (same as in the zero-temperature CFT). Doing this repeatedly reduces any $n$-point function to one-point functions, which are in turn fixed in terms of the coefficients $b_O$, he said. Using the OPE, a two-point function can be written as,

$$g(\tau, \vec{x}) \equiv \langle \phi(\tau, \vec{x})\, \phi(0) \rangle_\beta = \sum_O \frac{a_O}{\beta^{\Delta_O}}\, |x|^{\Delta_O - 2\Delta_\phi}\, C_J^{(\nu)}(\eta),$$

where $\nu = \frac{d-2}{2}$ and $\eta = \frac{\tau}{|x|}$, with $|x|^2 = \tau^2 + \vec{x}^2$. Here $|x|^{\Delta_O - 2\Delta_\phi}\, C_J^{(\nu)}(\eta)$ is the full conformal block for a thermal two-point function.

Here $\tau$ is the coordinate along the circle and $x$ is in $R^{d-1}$. The expression for the coefficient $a_O$, in terms of the OPE coefficient $f_{\phi\phi O}$ and the one-point function coefficient $b_O$, is,

$$a_O = \frac{f_{\phi\phi O}\, b_O}{c_O}\, \frac{J!}{2^J\, (\nu)_J},$$

where $c_O$ normalises the two-point function of $O$.
These crossing relations for two-point functions (called the KMS relations), $g(\tau, \vec{x}) = g(\beta - \tau, \vec{x})$, are due to the periodicity along the $\tau$ direction, and are like the crossing symmetry relations of conformal blocks. But since the sign of $a_O$ is not fixed, one cannot directly apply numerical bootstrap techniques. Rather, we will derive a powerful inversion formula which will enable the use of the KMS condition to do large spin analysis.
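As a toy illustration of the KMS periodicity (an example I’m adding, not from the talk): the Euclidean thermal two-point function of a harmonic oscillator is periodic in exactly this sense.

```python
import numpy as np

# Thermal correlator <x(tau) x(0)>_beta of a harmonic oscillator of
# frequency w, for 0 <= tau <= beta:
#   G(tau) = cosh(w * (beta/2 - tau)) / (2 * w * sinh(w * beta / 2)).
# KMS periodicity says G(tau) = G(beta - tau).
def G(tau, beta=2.0, w=1.5):
    return np.cosh(w * (beta / 2 - tau)) / (2 * w * np.sinh(w * beta / 2))

taus = np.linspace(0.0, 2.0, 7)
print(np.max(np.abs(G(taus) - G(2.0 - taus))))   # ~ 0: KMS satisfied
```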

We can complexify the dimension $\Delta$, and the two-point function can be written as a contour integral,

$$g(\tau, \vec{x}) = \sum_J \oint \frac{d\Delta}{2\pi i}\; a(\Delta, J)\, \frac{1}{\beta^{\Delta}}\, |x|^{\Delta - 2\Delta_\phi}\, C_J^{(\nu)}(\eta).$$

The operators have simple poles in the space of $\Delta$s and have residues given by the coefficients $a_O$,

$$a(\Delta, J) \sim -\frac{a_O}{\Delta - \Delta_O}.$$
We use the inversion (orthogonality) relation for Gegenbauer polynomials to find the expression for $a(\Delta, J)$, using some properties of Gegenbauer polynomials and a Laplace transform. The integration is in Euclidean space. Rotational invariance allows us to fix all the coordinates along a line and measure distance only in that direction. Therefore, we use coordinates $z = \tau + i x$ and $\bar{z} = \tau - i x$. For simplicity, Kausik showed the derivation of the inversion formula in 2d, but similar reasoning follows through in higher dimensions.

The Gegenbauer polynomials for 2 dimensions are given as,

$$C_J(\eta) = \frac{1}{2} \left( w^J + w^{-J} \right),$$

where $w = e^{i\theta}$ and $\eta = \cos\theta = \frac{z + \bar{z}}{2\sqrt{z \bar{z}}}$. So, in the Euclidean version, the formula for $a(\Delta, J)$ looks (schematically) like a projection of the two-point function onto these blocks,

$$a(\Delta, J) \propto \oint \frac{dw}{2\pi i\, w} \left( w^J + w^{-J} \right) \int \frac{d|z|}{|z|}\, |z|^{2\Delta_\phi - \Delta}\; g(z, \bar{z}),$$

We assume that the two-point function is analytic in the complex plane away from branch cuts. Then, Kausik explained that the contour can be deformed around the branch cuts. For our purposes, we deform the contour towards the origin for one of the two terms, and towards infinity for the other. There is a symmetry under $w \to 1/w$, which relates these deformations.

Assuming a particular fall-off for the two-point function at large values of the complexified coordinate, he showed that we can write down the final form of the inversion formula for $a(\Delta, J)$ in terms of only the discontinuity of the two-point function. While deforming the contour towards infinity, what we actually did was analytically continue the correlator, thereby obtaining a Lorentzian inversion formula starting from a Euclidean one.
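A toy numerical version of the Euclidean starting point in 2d (mock data of my own, not Kausik’s actual formula): since the 2d blocks reduce to $\cos(J\theta)$, inverting the block expansion at fixed radius is just a Fourier projection by orthogonality.

```python
import numpy as np

# Mock 2d "two-point function" built from two blocks, J = 2 and J = 5:
#   g(theta) = 3 cos(2 theta) + 0.5 cos(5 theta).
# Orthogonality of cos(J theta) on [0, pi] inverts the expansion:
#   a_J = (2/pi) * integral_0^pi g(theta) cos(J theta) dtheta,  J > 0.
theta = np.linspace(0.0, np.pi, 20001)
dtheta = theta[1] - theta[0]

def g(th):
    return 3.0 * np.cos(2 * th) + 0.5 * np.cos(5 * th)

def invert(J):
    f = g(theta) * np.cos(J * theta)
    integral = np.sum(0.5 * (f[1:] + f[:-1])) * dtheta   # trapezoid rule
    return (2.0 / np.pi) * integral

print(invert(2), invert(5), invert(1))   # recovers ~3, ~0.5, ~0
```

The full inversion formula upgrades this projection to complex $\Delta$ and a deformed contour, but the orthogonality step is the same.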

Finally, Kausik motivated us with some examples and future directions to explore. He mentioned that this type of analysis is usually used in mean field theory, where the main interest is to study critical points at finite temperature. For the critical $O(N)$ model at large $N$, the expectation value of the field involved gives the thermal mass $m_{\rm th}$. Numerically, the thermal mass has been solved for. In the finite temperature regime, the correlators of energy-momentum tensors have not been studied extensively. It would help to explore the AdS side and also to find transport coefficients.

After this, due to the shortage of time, Kausik briefly outlined how large spin perturbation theory works in this setting, and how large spin resummation can generate poles for other operators present in the theory and give corrections to the pole locations.

With Kausik signing off, another meeting is almost at a close. Just a couple of lectures remain and by this time tomorrow, we’ll be looking towards ST4 2019!