## Optimal intersectivity

In ergodic Ramsey theory, one often wants to prove that certain dynamically defined sets in a probability space intersect (or “recur”) in non-trivial ways. Typically, this is achieved by studying the long-term behavior of the sets as the dynamics flow. However, in certain situations, one can establish the desired intersection (or recurrence) using purely combinatorial arguments, without using the fact that the sets are dynamically defined. In such cases, one ends up obtaining a “static” (as opposed to dynamical) statement. An instance of this situation is the following intersectivity result of Bergelson, first used in this paper, which I have mentioned before in this blog.

Lemma 1 Let ${A_1,A_2,\dots}$ be sets in a probability space ${(X,\mu)}$ with ${\inf_n\mu(A_n)>\delta}$. Then there exists an infinite set ${I\subset{\mathbb N}}$ with density ${d(I)>\delta}$, such that for every non-empty finite set ${F\subset I}$ we have

$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>0$

A different kind of intersection property is the following static modification of Poincaré’s recurrence theorem.

Lemma 2 Let ${(X,\mu)}$ be a probability space and let ${A_1, A_2,\dots}$ be sets with ${\inf_n\mu(A_n)>\delta}$. Then there exists an infinite subset ${I\subset{\mathbb N}}$ such that for every ${n,m\in I}$,

$\displaystyle \mu\left(A_n\cap A_m\right)>\delta^2$

Observe that if the events ${A_1,A_2,\dots}$ are independent, then the lower bound ${\delta^2}$ is essentially achieved, and so this lemma is in that sense optimal. The purpose of this post is to present a proof of the following common strengthening of Lemmas 1 and 2.

Theorem 3 Let ${A_1,A_2,\dots}$ be sets in a probability space ${(X,\mu)}$ with ${\inf_n\mu(A_n)>\delta}$. Then there exists an infinite set ${I\subset{\mathbb N}}$ such that for every non-empty finite set ${F\subset I}$ we have

$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>\delta^{|F|}$

Observe that the bound ${\delta^{|F|}}$ is optimal, again by considering the case of independent sets.
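To make the independent case concrete (this example is not from the post; the sets and parameters below are hypothetical), one can realize independent events of measure ${\delta=1/2}$ by prescribing binary digits: take ${A_n}$ to be the set of points of ${[0,1)}$ whose ${n}$-th binary digit is ${0}$. A quick sketch in Python, counting over all ${N}$-bit expansions:

```python
from itertools import product

# Independent events via binary digits: A_n = {x in [0,1): n-th bit of x is 0},
# each of measure delta = 1/2.  Computing mu over all N-bit expansions shows
# mu(A_F) = (1/2)^|F| exactly, so the bound delta^|F| cannot be improved.
N = 10
F = {1, 4, 7}   # a hypothetical finite set of digit positions

count = sum(1 for bits in product([0, 1], repeat=N)
            if all(bits[n] == 0 for n in F))
measure = count / 2 ** N

assert measure == (1 / 2) ** len(F)   # 0.125 = delta^3
```

Here every choice of ${I}$ works, and the intersections have measure exactly ${\delta^{|F|}}$.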

I learned this strengthening, as well as its proof, from Konstantinos Tyros last December, when we were in Lyon, France, attending the conference Ultrafilters, Ramsey Theory and Dynamics.

One could ask whether something more can be said about the set ${I}$ in Theorem 3 other than being infinite. Something one cannot hope for is that the set ${I}$ has positive density, as shown in my previous post, using Forrest's theorem that not all sets of recurrence are sets of nice recurrence. On the other hand, one can indeed obtain certain combinatorial structures inside ${I}$. For instance, assuming the index set of the sets ${(A_n)}$ is given the structure of a homogeneous tree, one can choose ${I}$ to be a strong subtree; this is Theorem 3 in this paper of Dodos, Kanellopoulos and Tyros. Results of this kind are related to density versions of the Hales-Jewett and the Carlson-Simpson theorems, due respectively to Furstenberg and Katznelson, and to Dodos, Kanellopoulos and Tyros.

— 1. A truncated version —

We start with a proof of Lemma 2 which we will need to prove Theorem 3. In fact, we prove the following strengthening of Lemma 2 which I have mentioned before in this blog (but without a proof) and can be seen as a “truncated” version of Theorem 3.

Lemma 4 Let ${(X,\mu)}$ be a probability space and let ${A_1, A_2,\dots}$ be sets with ${\inf_n\mu(A_n)>\delta}$. Then for every ${k\in{\mathbb N}}$ there exists an infinite subset ${I\subset{\mathbb N}}$ such that for every ${F\subset I}$ with ${|F|= k}$ we have

$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>\delta^k$

Proof: We partition the collection ${\binom{\mathbb N} k}$ of all subsets ${F}$ of ${{\mathbb N}}$ with size ${|F|=k}$ into two pieces, according to whether ${\mu\big(\bigcap_{n\in F}A_n\big)}$ is bigger or smaller than ${\delta^k}$. We then use the infinite Ramsey theorem to find an infinite set ${I\subset{\mathbb N}}$ such that every ${F\subset I}$ with ${|F|=k}$ is in the same cell of the partition. We now only need to show that it is impossible to have ${\mu\big(\bigcap_{n\in F}A_n\big)\leq\delta^k}$ for every ${F\subset I}$ with ${|F|=k}$.

In order to do this, let ${G\subset I}$ be the set of the first ${M}$ elements of ${I}$, where ${M}$ is very large and will be determined later. Also let ${f=f_G=\sum_{n\in G}1_{A_n}}$. Then by Jensen’s inequality

$\displaystyle \begin{array}{rcl} \left(\int_Xfd\mu\right)^k &\leq& \int_Xf^kd\mu = \sum_{(n_1,\dots,n_k)\in G^k}\int_X\prod_{i=1}^k1_{A_{n_i}}d\mu \\&=& \sum_{(n_1,\dots,n_k)\in G^k}\mu\left(\bigcap_{i=1}^kA_{n_i}\right) \end{array}$

It is clear that ${G^k}$ contains ${k!\binom Mk}$ tuples with pairwise distinct entries (each ${k}$-element subset of ${G}$ appears as ${k!}$ such tuples), and hence ${G^k}$ contains ${M^k-k!\binom{M}k}$ tuples with a repeated entry. Bounding by ${1}$ the measure of each term coming from a tuple with a repeated entry, we get

$\displaystyle \left(\int_Xfd\mu\right)^k\leq M^k-k!\binom{M}k+k!\sum_{F\subset G, |F|=k}\mu\left(\bigcap_{n\in F}A_n\right)$

and so, letting ${\delta':=\inf_n\mu(A_n)>\delta}$ and using the fact that ${\int_Xfd\mu\geq M\delta'}$, for some ${F\subset G}$ with ${|F|=k}$ we have

$\displaystyle \begin{array}{rcl} \mu\left(\bigcap_{n\in F}A_n\right) &\geq& \frac1{k!\binom{M}k}\left(\left(\int_Xfd\mu\right)^k- M^k+k!\binom{M}k\right) \\& \geq& \frac{M^k}{k!\binom{M}k}\left((\delta')^k-1\right)+1 \end{array}$

Since ${M^k/k!\binom Mk\rightarrow1}$ as ${M\rightarrow\infty}$, the right-hand side converges to ${(\delta')^k>\delta^k}$, and hence if ${M}$ was chosen large enough, then

$\displaystyle \mu\left(\bigcap_{n\in F}A_n\right)>\delta^k$

as claimed, and this finishes the proof. $\Box$
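The counting in the proof can be sanity-checked mechanically; here is a small sketch in Python (the values of ${M}$ and ${k}$ are arbitrary) verifying both the count of tuples with distinct entries and the limit ${M^k/k!\binom Mk\rightarrow1}$:

```python
from itertools import product
from math import comb, factorial

# Sanity check of the counting step: G^k contains k! * C(M, k) tuples with
# pairwise distinct entries (each k-subset of G ordered in k! ways), hence
# M^k - k! * C(M, k) tuples with a repeated entry.
M, k = 7, 3
G = range(M)
distinct = sum(1 for t in product(G, repeat=k) if len(set(t)) == k)
assert distinct == factorial(k) * comb(M, k)   # 210 = 3! * C(7, 3)

# The ratio M^k / (k! * C(M, k)) decreases to 1 as M grows, which is what
# makes the final estimate in the proof go through.
ratios = [m ** k / (factorial(k) * comb(m, k)) for m in (10, 100, 1000)]
assert all(r > 1 for r in ratios) and ratios == sorted(ratios, reverse=True)
```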

Observe that this proof would give the full Theorem 3 if the following extension of Ramsey’s theorem were true.

Statement Let ${{\mathcal F}}$ denote the collection of all finite non-empty subsets of ${{\mathbb N}}$. For every finite coloring of ${{\mathcal F}}$ there exists an infinite set ${I\subset{\mathbb N}}$ such that for every ${k\in{\mathbb N}}$, the collection

$\displaystyle \big\{F\subset I:|F|=k\big\}$

is monochromatic.

Unfortunately, this statement is false, as seen by the following example. In this sense it is perhaps surprising that Theorem 3 is true.

Example 1 Let ${\chi:{\mathcal F}\rightarrow\{1,0\}}$ be the coloring given by

$\displaystyle \chi(F)=\begin{cases} 1&\text{ if }~ \min(F)<|F|\\ 0&\text{otherwise} \end{cases}$

Then given any infinite set ${I\subset{\mathbb N}}$, let ${x=\min(I)}$ and find a finite subset ${F\subset I}$ with ${|F|=x+1}$ and ${x\in F}$ (so that ${\min(F)\leq x<|F|}$ and hence ${\chi(F)=1}$) and another finite subset ${G\subset I}$ with ${|G|=x+1}$ but ${x\notin G}$ (so that ${\min(G)>x}$, whence ${\min(G)\geq|G|}$ and ${\chi(G)=0}$).
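A quick computational check of Example 1 (the set ${I}$ below is a hypothetical initial segment of an infinite set):

```python
# The coloring chi from Example 1: chi(F) = 1 if min(F) < |F|, and 0 otherwise.
def chi(F):
    return 1 if min(F) < len(F) else 0

# For any infinite I both colors appear among subsets of a single size:
# take x = min(I), then F of size x+1 containing x, and G of size x+1 avoiding x.
I = [3, 10, 20, 30, 40, 50]   # hypothetical initial segment of an infinite set
x = I[0]                      # x = min(I) = 3
F = set(I[:x + 1])            # {3, 10, 20, 30}: min(F) = 3 < 4 = |F|
G = set(I[1:x + 2])           # {10, 20, 30, 40}: min(G) = 10 >= 4 = |G|

assert chi(F) == 1 and chi(G) == 0
```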

— 2. Proof of Theorem 3 —

Since the proof of Theorem 3 involves some annoying parameters, we first outline the steps in the proof without full details.

By refining the sequence of sets ${(A_n)}$ we can assume that they all have similar measure ${\mu(A_n)\approx\delta}$ for all ${n}$. Applying Lemma 2 we find a set ${I_1\subset{\mathbb N}}$ with ${\mu(A_n\cap A_m)>\delta^2}$ for every ${n,m\in I_1}$. After refining ${I_1}$, if needed, we can assume that, in fact, for every distinct ${n,m\in I_1}$ we have ${\mu(A_n\cap A_m)\approx\delta^2}$. Let ${n_1\in I_1}$ be arbitrary.

Now comes the tricky part: we condition the measure on ${A_{n_1}}$. In other words, we consider the probability measure ${\mu_1}$ on ${X}$ defined by ${\mu_1(B)=\mu(B\cap A_{n_1})/\mu(A_{n_1})\approx\mu(B\cap A_{n_1})/\delta}$. Observe that ${\mu_1(A_n)\approx\delta}$ for every ${n\in I_1\setminus\{n_1\}}$. Now use Lemma 2 to find an infinite set ${I_{2}\subset I_1\setminus\{n_1\}}$ such that ${\mu_1(A_n\cap A_m)\approx\delta^2}$ for every distinct ${n,m\in I_2}$.

Let ${n_2\in I_2}$ be arbitrary. Now we condition the measure on ${A_{n_2}}$, letting ${\mu_2(B):=\mu(A_{n_2}\cap B)/\mu(A_{n_2})}$ and noting that, since ${n_2\in I_1}$, for every ${m\in I_2\setminus\{n_2\}}$ we have ${\mu_2(A_m)\approx\delta}$. Therefore, applying Lemma 2, we can find an infinite ${I_{3,1}\subset I_2\setminus\{n_2\}}$ such that for every distinct ${n,m\in I_{3,1}}$ we have ${\mu_2(A_n\cap A_m)\approx\delta^2}$. Before we can choose ${n_3}$, we need to also consider the situation conditional on ${A_{n_1}\cap A_{n_2}}$ (which, we recall, has measure ${\approx\delta^2}$ because both ${n_1}$ and ${n_2}$ are in ${I_1}$). Thus, letting ${\mu_{1,2}(B):=\mu(A_{n_1}\cap A_{n_2}\cap B)/\delta^2}$ we obtain that, for every ${n\in I_2}$, ${\mu_{1,2}(A_n)\approx\delta}$. Therefore, applying Lemma 2 again, we can further refine ${I_{3,1}}$ to an infinite subset ${I_3}$ such that for every distinct ${n,m\in I_3}$ we also have ${\mu_{1,2}(A_n\cap A_m)\approx\delta^2}$. We can now choose ${n_3\in I_3}$ arbitrarily.

We continue building the elements ${n_1,n_2,n_3,\dots}$ of the eventually infinite set ${I}$, at each step making sure we have an infinite set ${I_k}$ such that for every non-empty subset ${F\subset\{n_1,\dots,n_k\}}$ we have

$\displaystyle \mu\left(\bigcap_{i\in F}A_i\right)\approx\delta^{|F|},$

and that for every ${n\in I_k}$ we have

$\displaystyle \mu_F(A_n):=\frac{\mu\left(A_n\cap\bigcap_{i\in F}A_i\right)}{\mu\left(\bigcap_{i\in F}A_i\right)}\approx\delta.$

At each stage we can keep going by conditioning ${\mu}$ on each new subset of ${\{n_1,\dots,n_k\}}$ and applying Lemma 2.
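The conditioning step is elementary but worth isolating; here is a minimal sketch in Python on a toy finite probability space (the space and the events are hypothetical, chosen to be independent so that the measures ${\delta}$ and ${\delta^2}$ are exact):

```python
from fractions import Fraction

# A toy finite probability space: 8 points with uniform measure.
points = set(range(8))

def mu(B):
    return Fraction(len(B & points), len(points))

def condition(A_F):
    """The conditional measure mu_F(B) = mu(B & A_F) / mu(A_F)."""
    def mu_F(B):
        return mu(B & A_F) / mu(A_F)
    return mu_F

A1 = {0, 1, 2, 3}   # measure 1/2
A2 = {0, 1, 4, 5}   # measure 1/2, independent of A1
mu_1 = condition(A1)

# Independence: conditioning on A1 leaves the measure of A2 unchanged,
assert mu_1(A2) == mu(A2) == Fraction(1, 2)
# and mu factors through the conditional measure: mu(A1 & A2) = mu(A1) * mu_1(A2).
assert mu(A1 & A2) == mu(A1) * mu_1(A2) == Fraction(1, 4)
```

The factorization in the last line is the identity ${\mu(A_F)=\mu_S(A_x)\mu(A_S)}$ driving the whole construction.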

To make everything work out, we need to introduce a refinement step at each stage, to make sure all the sets in ${I_k}$ have similar measure, for all the conditional measures ${\mu_F}$. To this end we make use of the following version of the pigeonhole principle.

Lemma 6 Let ${(X,\mu)}$ be a probability space, let ${\delta>0}$ and let ${A_1, A_2,\dots}$ be sets with ${\mu(A_n)>\delta}$ for every ${n\in{\mathbb N}}$. Then for every ${\rho>1}$ there exists ${\theta\geq\delta}$ and an infinite subset ${I\subset{\mathbb N}}$ such that for every ${n\in I}$ we have

$\displaystyle \mu\left(A_n\right)\in[\theta,\theta\rho).$

Proof: Let ${N}$ be large enough that the intervals ${\big[\delta\rho^{k-1},\delta\rho^k\big)}$, with ${k=1,2,\dots,N}$, cover the interval ${(\delta,1]}$. Since ${\mu(A_n)\in(\delta,1]}$ for every ${n\in{\mathbb N}}$, the pigeonhole principle implies that there exists ${k\in\{1,\dots,N\}}$ such that ${\mu(A_n)\in\big[\delta\rho^{k-1},\delta\rho^k\big)}$ for infinitely many ${n}$; these ${n}$ form the desired set ${I}$, and the result holds with ${\theta=\delta\rho^{k-1}}$. $\Box$
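Here is a sketch of the pigeonhole step of Lemma 6 in Python, on a hypothetical (finite, for illustration) sequence of measures: each ${\mu(A_n)}$ is placed in the geometric window ${[\delta\rho^{k-1},\delta\rho^k)}$ containing it, and the fullest window plays the role of ${I}$:

```python
from collections import defaultdict
from math import ceil, log

# Pigeonhole step of Lemma 6: place each mu(A_n) in the geometric window
# [delta * rho^(k-1), delta * rho^k) containing it; finitely many windows
# cover (delta, 1], so one window must receive infinitely many indices.
delta, rho = 0.3, 1.1
measures = [0.35, 0.62, 0.36, 0.61, 0.34, 0.63, 0.37, 0.355]  # hypothetical mu(A_n) > delta

buckets = defaultdict(list)
for n, m in enumerate(measures):
    k = ceil(log(m / delta, rho))   # index of the window containing m
    buckets[k].append(n)

# The fullest window plays the role of I, with theta = delta * rho^(k-1):
k_star = max(buckets, key=lambda k: len(buckets[k]))
theta = delta * rho ** (k_star - 1)
assert all(theta <= measures[n] < theta * rho for n in buckets[k_star])
```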

We are now ready to prove Theorem 3.

Proof of Theorem 3: For each ${F\subset{\mathbb N}}$, denote by ${A_F}$ the intersection ${\bigcap_{n\in F}A_n}$, with the convention that ${A_\emptyset=X}$, and let ${\mu_F}$ denote the measure on ${X}$ defined by ${\mu_F(B)=\mu(B\cap A_F)/\mu(A_F)}$ whenever ${\mu(A_F)\neq0}$. Since ${\inf_n\mu(A_n)>\delta}$, we can choose ${\sigma>1}$ such that ${\mu(A_n)>\delta\sigma^2}$ for every ${n\in{\mathbb N}}$.

We will construct, for each ${k=0,1,\dots}$, a set ${F_k}$ with ${|F_k|=k}$, such that ${F_{k-1}\subset F_k}$ and

$\displaystyle \forall\ \emptyset\neq F\subset F_k\quad\exists\theta_F>\delta:\quad\mu\left(A_F\right)\in\left[\theta_F^{|F|},\theta_F^{|F|} \sigma^{1-2^{-k}}\right) \ \ \ \ \ (1)$

and an infinite set ${I_k\subset{\mathbb N}}$ such that

$\displaystyle \forall\ F\subset F_k\quad\exists\lambda_F>\delta\sigma^{2^{1-k}}\quad\forall n\in I_k,\quad\mu_F\left(A_n\right)\in\big[\lambda_F,\lambda_F\sigma^{2^{-k-1}}\big) \ \ \ \ \ (2)$

If we can construct such sequences, then taking ${I=\bigcup_k F_k}$ we obtain the desired conclusion from (1). For ${k=0}$ we set ${F_0=\emptyset}$ and apply Lemma 6 with ${\rho=\sqrt{\sigma}}$ to find ${\lambda_\emptyset}$ such that (2) holds, with ${k=0}$, for all ${n}$ in an infinite set ${I_0}$.

Suppose now that ${k\geq1}$ and that we have found ${F_{k-1}}$ and ${I_{k-1}}$ satisfying (1) and (2). Enumerate the subsets of ${F_{k-1}}$ as ${\{S_1,\dots,S_{2^{k-1}}\}}$. Let ${J_0=I_{k-1}}$ and successively apply Lemma 2 (in the probability space ${(X,\mu_{S_i})}$) for each ${i}$ in ${\{1,\dots,2^{k-1}\}}$ to obtain an infinite set ${J_{i}\subset J_{i-1}}$ such that for every distinct ${n,m\in J_{i}}$ we have

$\displaystyle \mu_{S_i}(A_n\cap A_m)\geq\lambda_{S_i}^2\sigma^{-2^{-k}}, \ \ \ \ \ (3)$

Let ${J=J_{2^{k-1}}}$, take ${x\in J}$ arbitrary and let ${F_{k}=F_{k-1}\cup\{x\}}$. Observe that for every ${i\in\{1,\dots,2^{k-1}\}}$ and ${n\in J\setminus\{x\}}$, combining (2) and (3), we have

$\displaystyle \mu_{S_i\cup x}(A_n)=\frac{\mu(A_{S_i}\cap A_x\cap A_n)}{\mu(A_{S_i}\cap A_x)}=\frac{\mu_{S_i}(A_x\cap A_n)}{\mu_{S_i}(A_x)}\geq \frac{\lambda_{S_i}^2\sigma^{-2^{-k}}}{\lambda_{S_i}\sigma^{2^{-k}}}= \lambda_{S_i}\sigma^{-2^{1-k}}>\delta\sigma^{2^{1-k}} \ \ \ \ \ (4)$

We run another refinement cycle: set ${K_0:=J\setminus\{x\}}$, enumerate the subsets of ${F_k}$ as ${\{T_1,\dots,T_{2^k}\}}$, and successively use Lemma 6 for each ${i\in\{1,\dots,2^k\}}$ to find ${\lambda_{T_i}\geq\delta\sigma^{2^{1-k}}}$ and an infinite set ${K_i\subset K_{i-1}}$ such that for every ${n\in K_i}$,

$\displaystyle \mu_{T_i}(A_n)\in\big[\lambda_{T_i},\lambda_{T_i}\sigma^{2^{-k-1}}\big) \ \ \ \ \ (5)$

(When ${x\in T_i}$ the lower bound on ${\lambda_{T_i}}$ comes from (4); when ${x\notin T_i}$ the new ${\lambda_{T_i}}$ can be taken at least as large as the old one, and we refine again because the window in (2) shrinks from ${\sigma^{2^{-k}}}$ to ${\sigma^{2^{-k-1}}}$ as we pass from step ${k-1}$ to step ${k}$.) Finally, let ${I_k=K_{2^k}}$, so that (2) follows immediately from (5).

To verify (1), let ${\emptyset\neq F\subset F_k}$. If ${x\notin F}$, then ${F\subset F_{k-1}}$ and the result follows by induction. If ${x\in F}$, let ${S:=F\setminus\{x\}}$ and notice that ${\mu(A_F)=\mu_S(A_x)\mu(A_S)}$. The fact that ${x\in I_{k-1}}$ together with (2) for ${S}$ (which is a subset of ${F_{k-1}}$) implies that

$\displaystyle \mu(A_F)\in \left[\theta_S^{|S|},\theta_S^{|S|} \sigma^{1-2^{1-k}}\right) \cdot \big[\lambda_S,\lambda_S\sigma^{2^{-k}}\big) =\left[\theta_S^{|S|}\lambda_S,\theta_S^{|S|}\lambda_S\sigma^{1-2^{-k}}\right)$

and (1) follows by setting ${\theta_F=\big(\theta_S^{|S|}\lambda_S\big)^{1/|F|}}$, which is greater than ${\delta}$ since ${\lambda_S>\delta}$ and (when ${S\neq\emptyset}$) ${\theta_S>\delta}$; when ${S=\emptyset}$ we interpret ${\theta_\emptyset^0=1}$, so that ${\theta_F=\lambda_\emptyset}$.

$\Box$
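The exponent bookkeeping in (1), (2), (4) and (5) rests on two identities, ${(1-2^{1-k})+2^{-k}=1-2^{-k}}$ and ${-2^{-k}-2^{-k}=-2^{1-k}}$, which can be double-checked mechanically:

```python
from fractions import Fraction

# The sigma-exponents in the proof of Theorem 3: combining the windows of (1)
# and (2) gives (1 - 2^(1-k)) + 2^(-k) = 1 - 2^(-k), while the estimate (4)
# uses -2^(-k) - 2^(-k) = -2^(1-k).  Exact rational arithmetic, no rounding.
for k in range(1, 20):
    assert (1 - Fraction(2) ** (1 - k)) + Fraction(2) ** (-k) == 1 - Fraction(2) ** (-k)
    assert -Fraction(2) ** (-k) - Fraction(2) ** (-k) == -Fraction(2) ** (1 - k)
```

In particular the accumulated error window ${\sigma^{1-2^{-k}}}$ in (1) stays below ${\sigma}$ at every step.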
