A joining of two measure preserving systems is a third measure preserving system that has the two original systems as factors. In analogy with classical arithmetic, joinings make it possible to have a notion of a common multiple of two systems, and *two* (nonequivalent) notions of relatively prime systems. In this post I will explore some basic properties of joinings and touch on the notion of disjoint systems.

Joinings were introduced by Furstenberg and have become a fundamental tool in ergodic theory. For a complete treatment of the topic see the book of Glasner.

** — 1. Factors — **

For the sake of completeness I will briefly state the notions I will use.

A *measure preserving system* (or m.p.s.) is a quadruple $(X,\mathcal{B},\mu,T)$, where $(X,\mathcal{B},\mu)$ is a probability space and $T:X\to X$ is a measurable map such that $\mu(T^{-1}A)=\mu(A)$ for all $A\in\mathcal{B}$.

Definition 1. We say that two m.p.s. $(X,\mathcal{B},\mu,T)$ and $(Y,\mathcal{C},\nu,S)$ are isomorphic if there exists a map $\phi:X\to Y$ such that

- We have $\phi\circ T=S\circ\phi$ almost everywhere.
- For every $B\in\mathcal{B}$ there exists $C\in\mathcal{C}$ such that $\mu\big(B\,\triangle\,\phi^{-1}C\big)=0$.
- For every $C\in\mathcal{C}$ we have $\phi^{-1}C\in\mathcal{B}$ and $\mu(\phi^{-1}C)=\nu(C)$.

Example 1. For any invertible m.p.s. $(X,\mathcal{B},\mu,T)$, the map $T$ is an isomorphism between $(X,\mathcal{B},\mu,T)$ and itself. More generally, for any $n\in\mathbb{Z}$ the map $T^n$ is an isomorphism.

Definition 2 (Factor map). Let $(X,\mathcal{B},\mu,T)$ and $(Y,\mathcal{C},\nu,S)$ be two m.p.s. A map $\pi:X\to Y$ is a *factor map* if:

- It is surjective (up to measure $0$),
- $\pi^{-1}C\in\mathcal{B}$ and $\mu(\pi^{-1}C)=\nu(C)$ for every $C\in\mathcal{C}$,
- $\pi\circ T=S\circ\pi$ almost everywhere.

In this situation we call $(Y,\mathcal{C},\nu,S)$ a factor of $(X,\mathcal{B},\mu,T)$ and we call $(X,\mathcal{B},\mu,T)$ an extension of $(Y,\mathcal{C},\nu,S)$.

Example 2. Any isomorphism of measure preserving systems is a factor map.

Example 3. For any m.p.s. $(X,\mathcal{B},\mu,T)$, the map $T$ is a factor map between $(X,\mathcal{B},\mu,T)$ and itself. More generally, the map $T^n$ (with $n\in\mathbb{N}$, or $n\in\mathbb{Z}$ for invertible $T$) is a factor map.

Example 4. If $(X,\mathcal{B},\mu,T)$ and $(Y,\mathcal{C},\nu,S)$ are measure preserving systems we denote by $(X\times Y,\mathcal{B}\otimes\mathcal{C},\mu\otimes\nu,T\times S)$ the product system (the $\sigma$-algebra $\mathcal{B}\otimes\mathcal{C}$ is the smallest $\sigma$-algebra over $X\times Y$ which contains all rectangles $B\times C$ with $B\in\mathcal{B}$ and $C\in\mathcal{C}$, and the measure $\mu\otimes\nu$ is the unique measure on $\mathcal{B}\otimes\mathcal{C}$ satisfying $(\mu\otimes\nu)(B\times C)=\mu(B)\nu(C)$ for every $B\in\mathcal{B}$ and $C\in\mathcal{C}$). In this situation, both projections $X\times Y\to X$ and $X\times Y\to Y$ are factor maps.

Example 5. For a more concrete example, take $X=\mathbb{Z}/(mn)\mathbb{Z}$ and $Y=\mathbb{Z}/n\mathbb{Z}$, both with the discrete $\sigma$-algebra and normalized counting measure, and let $T:x\mapsto x+1$ and $S:y\mapsto y+1$. Then the reduction $\pi:x\mapsto x\bmod n$ is a factor map.
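In a finite setting the factor map axioms can be checked mechanically. Here is a minimal sketch, instantiating the reduction example with the hypothetical choices $X=\mathbb{Z}/4\mathbb{Z}$ and $Y=\mathbb{Z}/2\mathbb{Z}$ (these particular groups are my choice for illustration):

```python
from fractions import Fraction

# X = Z/4Z with T: x -> x+1, Y = Z/2Z with S: y -> y+1,
# and pi the reduction mod 2 (hypothetical instance of Example 5).
X, Y = range(4), range(2)
T = lambda x: (x + 1) % 4
S = lambda y: (y + 1) % 2
pi = lambda x: x % 2

mu = {x: Fraction(1, 4) for x in X}   # normalized counting measure on X
nu = {y: Fraction(1, 2) for y in Y}   # normalized counting measure on Y

# equivariance: pi(T x) = S(pi(x)) for every x
assert all(pi(T(x)) == S(pi(x)) for x in X)

# pi pushes mu forward to nu: mu(pi^{-1}{y}) = nu({y}) for every y
assert all(sum(mu[x] for x in X if pi(x) == y) == nu[y] for y in Y)
print("pi is a factor map")
```

Surjectivity is obvious here, so the two assertions verify exactly the remaining conditions of Definition 2.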

Example 6. Let $Y=\mathbb{T}:=\mathbb{R}/\mathbb{Z}$ with the usual topology, the Borel $\sigma$-algebra and the Lebesgue (or Haar) measure. Let $\alpha\in\mathbb{R}$ and define $S:Y\to Y$ by $Sy=y+\alpha$. Let $X=\mathbb{T}^2$, also with the Borel $\sigma$-algebra and the Haar measure, and define $T:X\to X$ by $T(y,z)=(y+\alpha,z+y)$. Then the map $\pi:X\to Y$ defined by $\pi(y,z)=y$ is a factor map. Note that this example does not fit in Example 4: even though the set $X$ is the product of two copies of $\mathbb{T}$, $T$ is not the product of $S$ with another map on $\mathbb{T}$.
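The equivariance of the projection in a skew product can also be checked numerically. A quick sketch, assuming the skew-product formula $T(y,z)=(y+\alpha,z+y)$ on the torus (the value of $\alpha$ below is arbitrary):

```python
import random

# Skew product on X = T^2: T(y, z) = (y + alpha, z + y) mod 1,
# rotation on Y = T: S(y) = y + alpha mod 1, projection pi(y, z) = y.
alpha = 0.123456789

def T(p):
    y, z = p
    return ((y + alpha) % 1.0, (z + y) % 1.0)

def S(y):
    return (y + alpha) % 1.0

def pi(p):
    return p[0]

random.seed(0)
for _ in range(1000):
    p = (random.random(), random.random())
    # equivariance pi(T p) = S(pi(p)), up to floating-point error
    assert abs(pi(T(p)) - S(pi(p))) < 1e-12
print("pi intertwines T and S")
```

Note that the second coordinate of $T$ depends on the first, which is exactly why $T$ is not a product map even though $X$ is a product set.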

If $(X,\mathcal{B},\mu,T)$ is a m.p.s. and $\mathcal{A}\subset\mathcal{B}$ is a $\sigma$-algebra such that $T^{-1}A\in\mathcal{A}$ for every $A\in\mathcal{A}$ (in other words, if $T$ is measurable with respect to $\mathcal{A}$; we call such an $\mathcal{A}$ *invariant* under $T$), then the system $(X,\mathcal{A},\mu,T)$ is a factor of $(X,\mathcal{B},\mu,T)$ with the identity of $X$ as factor map. Conversely, given a factor map $\pi:X\to Y$, the $\sigma$-algebra $\pi^{-1}\mathcal{C}\subset\mathcal{B}$ is invariant under $T$. Therefore we can (and will) identify factors of a system with the invariant $\sigma$-subalgebras of $\mathcal{B}$.
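In a finite system the invariance of a $\sigma$-algebra is a finite check. A toy sketch (the group $\mathbb{Z}/4\mathbb{Z}$ and the even/odd partition are hypothetical choices of mine):

```python
# On X = Z/4Z with T: x -> x+1, the sigma-algebra generated by the
# partition {{0, 2}, {1, 3}} is invariant: T^{-1}A lies in it for every A.
T_pre = lambda A: frozenset((x - 1) % 4 for x in A)   # T^{-1}A for T x = x+1

algebra = {frozenset(), frozenset({0, 2}), frozenset({1, 3}), frozenset(range(4))}
assert all(T_pre(A) in algebra for A in algebra)
print("the sigma-algebra is T-invariant")
```

This invariant $\sigma$-algebra is precisely the one pulled back by the reduction mod 2 of Example 5, illustrating the identification of factors with invariant $\sigma$-subalgebras.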

An intuitive (but not completely trivial) property of factors is that if $\pi:X\to Y$ and $\sigma:X\to Z$ are factor maps such that $\sigma^{-1}\mathcal{D}\subset\pi^{-1}\mathcal{C}$ (up to sets of measure $0$), where $\mathcal{C}$ and $\mathcal{D}$ are the $\sigma$-algebras of $Y$ and $Z$ respectively, then $Z$ is also a factor of $Y$, in the sense that there exists a factor map $\rho:Y\to Z$.

Given two $\sigma$-algebras $\mathcal{A}_1$ and $\mathcal{A}_2$ on the same set $X$, we denote by $\mathcal{A}_1\vee\mathcal{A}_2$ the smallest $\sigma$-algebra on $X$ containing both $\mathcal{A}_1$ and $\mathcal{A}_2$. Note that if $\mathcal{A}_1$ and $\mathcal{A}_2$ are factors of a m.p.s. $(X,\mathcal{B},\mu,T)$, then $\mathcal{A}_1\vee\mathcal{A}_2$ is also a factor of $(X,\mathcal{B},\mu,T)$; we call this the factor generated by $\mathcal{A}_1$ and $\mathcal{A}_2$.

If $(Y,\mathcal{C},\nu,S)$ is a factor of a m.p.s. $(X,\mathcal{B},\mu,T)$, with factor map $\pi:X\to Y$, the *disintegration* of $\mu$ over $Y$ is the family $(\mu_y)_{y\in Y}$ of probability measures on $X$ defined by

$\displaystyle \int_X f\,d\mu_{\pi(x)}=\mathbb{E}\big(f\mid\pi^{-1}\mathcal{C}\big)(x)\qquad\text{for }\mu\text{-a.e. }x\in X,$

where $\mathbb{E}(\,\cdot\mid\pi^{-1}\mathcal{C})$ denotes the conditional expectation. Observe that the measure $\mu_y$ is supported in the atom of $\pi^{-1}\mathcal{C}$ which contains the fiber $\pi^{-1}(y)$. I posted about disintegration of measures before.
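In a finite setting the disintegration is just the family of conditional measures on fibers, and the identity $\mu=\int_Y\mu_y\,d\nu(y)$ can be verified directly. A sketch with hypothetical data ($X=\mathbb{Z}/4\mathbb{Z}$ and the mod-2 factor map are my choices):

```python
from fractions import Fraction

# X = Z/4Z with normalized counting measure, factor map pi(x) = x mod 2.
# The disintegration (mu_y) makes mu_y uniform on the fiber pi^{-1}(y).
X = range(4)
mu = {x: Fraction(1, 4) for x in X}
pi = lambda x: x % 2

def disintegration(y):
    fiber = [x for x in X if pi(x) == y]
    total = sum(mu[x] for x in fiber)
    # the conditional measure mu_y is supported on the atom pi^{-1}(y)
    return {x: (mu[x] / total if x in fiber else Fraction(0)) for x in X}

# nu is the pushforward of mu under pi; integrating mu_y against nu recovers mu
nu = {y: sum(mu[x] for x in X if pi(x) == y) for y in (0, 1)}
recovered = {x: sum(nu[y] * disintegration(y)[x] for y in (0, 1)) for x in X}
assert recovered == mu
print("disintegration recovers mu")
```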

** — 2. Joinings — **

Given two m.p.s. $(X,\mathcal{B},\mu,T)$ and $(Y,\mathcal{C},\nu,S)$, a *joining* (of $X$ and $Y$) is a probability measure $\lambda$ on the product measurable space $(X\times Y,\mathcal{B}\otimes\mathcal{C})$ preserved under $T\times S$ and with marginals $\mu$ and $\nu$. To be clear, we say that $\lambda$ is preserved under $T\times S$ if $\lambda\big((T\times S)^{-1}A\big)=\lambda(A)$ for all $A\in\mathcal{B}\otimes\mathcal{C}$, and we say that $\lambda$ has marginals $\mu$ and $\nu$ if for $B\in\mathcal{B}$ we have $\lambda(B\times Y)=\mu(B)$ and for $C\in\mathcal{C}$ we have $\lambda(X\times C)=\nu(C)$.

In other words, $\lambda$ is a measure on $X\times Y$ such that $(X\times Y,\mathcal{B}\otimes\mathcal{C},\lambda,T\times S)$ is a m.p.s. and the projections onto $X$ and $Y$ are factor maps. This interpretation gives the trivial example of a joining between two arbitrary systems: the product measure $\mu\otimes\nu$. We will abuse language and call both the measure $\lambda$ and the m.p.s. $(X\times Y,\mathcal{B}\otimes\mathcal{C},\lambda,T\times S)$ a joining of $X$ and $Y$.
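Beyond the product measure, a standard non-trivial example is the diagonal joining of a system with itself. A finite sketch (the rotation on $\mathbb{Z}/4\mathbb{Z}$ is a hypothetical choice), checking invariance and both marginals:

```python
from fractions import Fraction

# Diagonal measure on X x X for X = Z/4Z with T: x -> x+1:
# a joining of X with itself that is not the product measure.
X = range(4)
T = lambda x: (x + 1) % 4
lam = {(x, y): (Fraction(1, 4) if x == y else Fraction(0)) for x in X for y in X}

# invariance under T x T (checked on singletons; T is invertible)
assert all(lam[(T(x), T(y))] == lam[(x, y)] for x in X for y in X)

# both marginals equal the normalized counting measure on X
assert all(sum(lam[(x, y)] for y in X) == Fraction(1, 4) for x in X)
assert all(sum(lam[(x, y)] for x in X) == Fraction(1, 4) for y in X)
print("diagonal measure is a joining")
```

The diagonal joining is supported on a set of product measure $1/4$, so it is certainly not $\mu\otimes\nu$; this already shows a system is never disjoint from itself (unless trivial).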

A more abstract (yet equivalent) definition of a joining of two m.p.s. $(X,\mathcal{B},\mu,T)$ and $(Y,\mathcal{C},\nu,S)$ is an arbitrary m.p.s. $(Z,\mathcal{D},\rho,R)$ together with two factor maps $\pi_X:Z\to X$ and $\pi_Y:Z\to Y$ which generate the whole $\sigma$-algebra of $Z$, i.e. such that $\mathcal{D}=\pi_X^{-1}\mathcal{B}\vee\pi_Y^{-1}\mathcal{C}$.

Proposition 3. The two definitions of joining above coincide.

*Proof:* From the rewording of the first definition it is clear that such a joining satisfies the second definition. Now let $(X,\mathcal{B},\mu,T)$ and $(Y,\mathcal{C},\nu,S)$ be two factors of the m.p.s. $(Z,\mathcal{D},\rho,R)$ which generate the whole $\sigma$-algebra $\mathcal{D}$. This means that $\mathcal{D}=\pi_X^{-1}\mathcal{B}\vee\pi_Y^{-1}\mathcal{C}$, where $\pi_X:Z\to X$ and $\pi_Y:Z\to Y$ are the respective factor maps.

Let $\pi:Z\to X\times Y$ be defined by $\pi(z)=\big(\pi_X(z),\pi_Y(z)\big)$. Let $\lambda$ be the pushforward of $\rho$ under $\pi$ (this means that $\lambda(A)=\rho(\pi^{-1}A)$ for every set $A\in\mathcal{B}\otimes\mathcal{C}$). Note that, for all $B\in\mathcal{B}$ and $C\in\mathcal{C}$,

$\displaystyle \lambda\big((T\times S)^{-1}(B\times C)\big)=\rho\big(\pi^{-1}(T^{-1}B\times S^{-1}C)\big)=\rho\big(R^{-1}\pi^{-1}(B\times C)\big)=\rho\big(\pi^{-1}(B\times C)\big)=\lambda(B\times C).$

Thus $\lambda$ is invariant under $T\times S$. Since $\pi_X$ is a factor map we have, for any $B\in\mathcal{B}$,

$\displaystyle \lambda(B\times Y)=\rho\big(\pi^{-1}(B\times Y)\big)=\rho\big(\pi_X^{-1}B\big)=\mu(B),$

and analogously we have $\lambda(X\times C)=\nu(C)$ for every $C\in\mathcal{C}$. This shows that $\lambda$ is a joining of $X$ and $Y$ in the sense of the first definition.

Finally we need to show that the map $\pi$ is an isomorphism between $(Z,\mathcal{D},\rho,R)$ and $(X\times Y,\mathcal{B}\otimes\mathcal{C},\lambda,T\times S)$. The only property of Definition 1 which does not follow directly from the construction is the second, but that property holds for $\pi$ because $\mathcal{D}=\pi_X^{-1}\mathcal{B}\vee\pi_Y^{-1}\mathcal{C}$.

** — 3. Analogies with arithmetic — **

The use of the word factor and Example 4 suggest an arithmetic structure on measure preserving systems. However, one needs to be careful not to push the analogy with the arithmetic on $\mathbb{N}$ too far. For instance, in $\mathbb{N}$, if $d$ is a factor of $n$, then there exists another factor $d'$ of $n$ such that $n$ is the product of $d$ and $d'$. It is not at all clear how to do this with m.p.s.; in other words, with m.p.s. we cannot divide by a factor.

With the analogy in mind, a joining of two systems $X$ and $Y$ is a common extension (“multiple”) of both $X$ and $Y$. Thus, in analogy with the notion of relatively prime numbers, we say that the systems $X$ and $Y$ are *disjoint* if their only joining is (isomorphic to) the product $(X\times Y,\mathcal{B}\otimes\mathcal{C},\mu\otimes\nu,T\times S)$.
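A concrete instance of disjointness (my example, not from the post, and assumed only as an illustration): the rotations on $\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/3\mathbb{Z}$ are disjoint, because $T\times S$ cycles through all six points of the product, so any $T\times S$-invariant probability measure must give each point equal mass, i.e. must be the product measure. The orbit computation:

```python
# T x S on Z/2Z x Z/3Z, where T: x -> x+1 and S: y -> y+1.
# A single orbit covers the whole product, so the only invariant
# probability measure (hence the only joining) is the uniform one,
# which is exactly the product of the two counting measures.
point = (0, 0)
orbit = set()
for _ in range(6):
    orbit.add(point)
    point = ((point[0] + 1) % 2, (point[1] + 1) % 3)
assert len(orbit) == 6
print("T x S is transitive on the product; the only joining is the product")
```

The same argument works for $\mathbb{Z}/m\mathbb{Z}$ and $\mathbb{Z}/n\mathbb{Z}$ whenever $m$ and $n$ are coprime, which is a first hint that disjointness behaves like relative primality.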

The first question that pops to mind is whether being disjoint is equivalent to having no nontrivial factor in common. The answer is, unfortunately, no (the first counterexample was given by Rudolph), but one implication is still true.

Theorem 4. If $X$ and $Y$ are disjoint measure preserving systems, then they have no non-trivial factor in common.

I will give a proof of this theorem at the end of this section.

An interesting example of a non-trivial joining is the relatively independent joining over a common factor:

Definition 5. Let $(X,\mathcal{B},\mu,T)$, $(Y,\mathcal{C},\nu,S)$ and $(Z,\mathcal{D},\rho,R)$ be m.p.s. and let $\pi_X:X\to Z$ and $\pi_Y:Y\to Z$ be factor maps. Then the *relatively independent joining* of $X$ and $Y$ over $Z$ is the measure $\lambda$ on $X\times Y$ defined by

$\displaystyle \lambda(A)=\int_Z(\mu_z\otimes\nu_z)(A)\,d\rho(z)\qquad\text{for }A\in\mathcal{B}\otimes\mathcal{C},$

where $(\mu_z)_{z\in Z}$ and $(\nu_z)_{z\in Z}$ are the disintegrations of $\mu$ and $\nu$, respectively, over $Z$.

Equivalently, we can define $\lambda$ for functions of the form $(x,y)\mapsto f(x)g(y)$ with $f\in L^2(\mu)$ and $g\in L^2(\nu)$ the following way:

$\displaystyle \int_{X\times Y}f(x)g(y)\,d\lambda(x,y)=\int_Z\left(\int_Xf\,d\mu_z\right)\left(\int_Yg\,d\nu_z\right)d\rho(z).$

Since linear combinations of such functions form a dense set in $L^2(\lambda)$, this definition extends to all of $L^2(\lambda)$.
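In a finite setting the defining integral becomes a finite sum and the relatively independent joining can be computed explicitly. A sketch with hypothetical data ($X=Y=\mathbb{Z}/4\mathbb{Z}$ over the common factor $Z=\mathbb{Z}/2\mathbb{Z}$, both factor maps being reduction mod 2, are my choices):

```python
from fractions import Fraction

# lambda = sum_z rho(z) * (mu_z x nu_z), with mu_z, nu_z uniform on the
# fibers of the mod-2 reduction pi: Z/4Z -> Z/2Z.
X = Y = range(4)
Z = range(2)
pi = lambda x: x % 2
rho = {z: Fraction(1, 2) for z in Z}

def fiber_measure(z):   # disintegration of counting measure over the factor
    fiber = [x for x in range(4) if pi(x) == z]
    return {x: (Fraction(1, len(fiber)) if x in fiber else Fraction(0))
            for x in range(4)}

lam = {(x, y): sum(rho[z] * fiber_measure(z)[x] * fiber_measure(z)[y]
                   for z in Z)
       for x in X for y in Y}

# lambda is a probability measure with the correct marginals
assert sum(lam.values()) == 1
assert all(sum(lam[(x, y)] for y in Y) == Fraction(1, 4) for x in X)
assert all(sum(lam[(x, y)] for x in X) == Fraction(1, 4) for y in Y)
print("relatively independent joining computed")
```

Note that $\lambda$ gives mass only to pairs lying over the same point of the factor, which is the phenomenon formalized next.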

An important observation is that the set

$\displaystyle W:=\big\{(x,y)\in X\times Y:\pi_X(x)=\pi_Y(y)\big\} \ \ \ \ \ (1)$

has full measure with respect to the relatively independent joining:

Lemma 6. The set $W$ in (1) satisfies $\lambda(W)=1$, where $\lambda$ is the relatively independent joining defined in Definition 5.

*Proof:* For $\rho$-almost every $z\in Z$, the measure $\mu_z$ is supported on $\pi_X^{-1}(z)$ and the measure $\nu_z$ is supported on $\pi_Y^{-1}(z)$, so the product $\mu_z\otimes\nu_z$ is supported on $\pi_X^{-1}(z)\times\pi_Y^{-1}(z)\subset W$. Thus $(\mu_z\otimes\nu_z)(W)=1$ almost everywhere, and therefore

$\displaystyle \lambda(W)=\int_Z(\mu_z\otimes\nu_z)(W)\,d\rho(z)=1.$
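Lemma 6 can be checked directly in a finite toy case (the same hypothetical data as before: $X=Y=\mathbb{Z}/4\mathbb{Z}$ over $Z=\mathbb{Z}/2\mathbb{Z}$ via reduction mod 2):

```python
from fractions import Fraction

# Check lambda(W) = 1 for W = {(x, y) : pi(x) = pi(y)}, where lambda is
# the relatively independent joining over the mod-2 factor.
pi = lambda x: x % 2

def fiber_measure(z):                    # mu_z, supported on pi^{-1}(z)
    fiber = [x for x in range(4) if pi(x) == z]
    return {x: Fraction(1, len(fiber)) for x in fiber}

mass_on_W = sum(
    Fraction(1, 2) * mx * my             # rho({z}) * mu_z({x}) * nu_z({y})
    for z in range(2)
    for x, mx in fiber_measure(z).items()
    for y, my in fiber_measure(z).items()
    if pi(x) == pi(y)                    # restrict to (x, y) in W
)
assert mass_on_W == 1
print("lambda(W) = 1")
```

Each $\mu_z\otimes\nu_z$ lives entirely on pairs over the same fiber, so restricting the sum to $W$ loses no mass, exactly as in the proof above.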

We can now give a proof of Theorem 4.

*Proof:* Assume that the systems $X$ and $Y$ share a common factor $(Z,\mathcal{D},\rho,R)$, and let $\lambda$ denote the relatively independent joining over $Z$ defined in Definition 5. Since $X$ and $Y$ are disjoint, the joining $\lambda$ must be isomorphic to the product $\mu\otimes\nu$. On the other hand, we know that the subset $W$ constructed in (1) has full measure. If $Z$ is not a trivial system, there exists some $D\in\mathcal{D}$ with $\rho(D)>0$ and $\rho(Z\setminus D)>0$. Then the set $\pi_X^{-1}(D)\times\pi_Y^{-1}(Z\setminus D)$ would have positive measure $\rho(D)\,\rho(Z\setminus D)$ under $\mu\otimes\nu$ and be disjoint from $W$, which contradicts Lemma 6. Thus $Z$ is trivial, as desired.
