## Hales-Jewett and a generalized van der Waerden theorem

The Hales-Jewett theorem is one of the most fundamental results in Ramsey theory, implying the celebrated van der Waerden theorem on arithmetic progressions, as well as its multidimensional and IP versions. One interesting property of the Hales-Jewett theorem is that it is a purely set theoretical statement, involving no algebraic structure, which makes it versatile for applications. Recently I realized that there exists an equivalent formulation of this theorem using some algebraic structure, and indeed it can be seen as an analogue of van der Waerden’s theorem. The main purpose of this post is to establish that equivalence. In an initial section I present the deduction of the multidimensional van der Waerden theorem from the Hales-Jewett theorem, both to set the mood and to set the stage for the analogy between van der Waerden’s theorem and the equivalent formulation of the Hales-Jewett theorem.

In a previous post I gave some motivation for, stated and proved the Hales-Jewett theorem and derived a few consequences of it. For convenience I will recall its statement here, starting with a few definitions.

Definition 1 Let ${n,k\in{\mathbb N}}$. We denote by ${[k]}$ the set ${\{0,1,\dots,k-1\}}$. Let ${*}$ be an element not in ${[k]}$ (some authors take ${*=k}$ for concreteness, but this can become confusing at some point, so I will use an abstract element ${*}$). A variable word in ${[k]^n}$ is an element ${w\in([k]\cup\{*\})^n}$ which is not in ${[k]^n}$ (so the ${*}$ must appear somewhere in ${w}$). Given a variable word ${w}$ and a letter ${a\in[k]}$ we denote by ${w(a)}$ the element of ${[k]^n}$ obtained from ${w}$ by replacing each instance of ${*}$ with ${a}$.

For instance, if ${k=4}$ and ${n=3}$, then examples of variable words are ${w_1=(1,3,*)}$, ${w_2=(*,2,*)}$, ${w_3=(*,*,*)}$ but not ${w_4=(1,2,3)}$. Using these words we have ${w_1(2)=(1,3,2)}$, ${w_2(0)=(0,2,0)}$ and ${w_3(1)=(1,1,1)}$.

Definition 2 Let ${k,n\in{\mathbb N}}$ and let ${w}$ be a variable word in ${[k]^n}$. The combinatorial line induced by ${w}$ is the set ${\{w(0),w(1),\dots,w(k-1)\}\subset[k]^n}$.

Using the word ${w_1}$ from above, the combinatorial line it induces is

$\displaystyle \{(1,3,0),(1,3,1),(1,3,2),(1,3,3)\}$
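These definitions are easy to check mechanically. The following Python sketch (a toy illustration, using the string `"*"` as the wildcard) reproduces the examples above:

```python
STAR = "*"  # stand-in for the abstract wildcard symbol

def substitute(w, a):
    """w(a): replace each instance of the wildcard in the variable word w by the letter a."""
    return tuple(a if c == STAR else c for c in w)

def combinatorial_line(w, k):
    """The combinatorial line induced by the variable word w, as a subset of [k]^n."""
    return {substitute(w, a) for a in range(k)}

w1 = (1, 3, STAR)
print(substitute(w1, 2))                  # w1(2) = (1, 3, 2)
print(sorted(combinatorial_line(w1, 4)))  # the four points of the line above
```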

Theorem 3 (Hales-Jewett) For any ${k,r\in{\mathbb N}}$ there exists some ${n\in{\mathbb N}}$ such that for any finite partition of ${[k]^n}$ into ${r}$ pieces, one of the pieces contains a combinatorial line.

It should be clear that the set ${[k]}$ can be replaced with any other set of cardinality ${k}$.
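For very small parameters the theorem can even be verified by brute force. The sketch below (a hypothetical check, not part of any known proof strategy for general parameters) confirms that ${n=2}$ works for ${k=r=2}$, whereas ${n=1}$ does not, since coloring each of the two points of ${[2]^1}$ with its own color avoids the single line:

```python
from itertools import product

def lines(k, n):
    """All combinatorial lines of [k]^n, enumerated via variable words over [k] ∪ {'*'}."""
    out = []
    for w in product(list(range(k)) + ["*"], repeat=n):
        if "*" in w:  # variable words must contain the wildcard
            out.append([tuple(a if c == "*" else c for c in w) for a in range(k)])
    return out

def has_mono_line(coloring, k, n):
    """Does the given coloring (a dict on [k]^n) have a monochromatic line?"""
    return any(len({coloring[p] for p in line}) == 1 for line in lines(k, n))

# Every 2-coloring of [2]^2 contains a monochromatic combinatorial line:
k, n = 2, 2
points = list(product(range(k), repeat=n))
assert all(has_mono_line(dict(zip(points, col)), k, n)
           for col in product(range(2), repeat=len(points)))
```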

The second formulation of the Hales-Jewett theorem does not require much setup.

Definition 4

• For a set ${S}$, we denote by ${{\cal F}(S)}$ the family of finite subsets of ${S}$.
• Let ${\ell\in{\mathbb N}}$, ${\beta\in{\cal F}([\ell])}$ and ${\gamma\in{\cal F}({\mathbb N})}$. We define

$\displaystyle \gamma^\beta:=(\gamma_0,\dots,\gamma_{\ell-1})\in\big({\cal F}({\mathbb N})\big)^\ell$

where ${\gamma_i=\gamma}$ if ${i\in\beta}$ and ${\gamma_i=\emptyset}$ if ${i\notin\beta}$.
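In Python one might model ${\gamma^\beta}$ with tuples of frozensets, together with the coordinatewise union that appears below (a minimal sketch of the notation, nothing more):

```python
def gamma_power(gamma, beta, ell):
    """gamma^beta: the ell-tuple whose i-th coordinate is gamma if i is in beta, else empty."""
    return tuple(frozenset(gamma) if i in beta else frozenset() for i in range(ell))

def cw_union(s, t):
    """Coordinatewise union of two tuples of sets."""
    return tuple(a | b for a, b in zip(s, t))

print(gamma_power({0, 2}, {1}, 3))  # (∅, {0, 2}, ∅) as frozensets
```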

Theorem 5 (Hales-Jewett, again) For any ${\ell,r\in{\mathbb N}}$ there exists ${n\in{\mathbb N}}$ such that for any partition of ${{\mathcal F}([n])^\ell}$ into ${r}$ cells there exist ${\gamma,\alpha_0,\dots,\alpha_{\ell-1}\in{\cal F}([n])}$ with ${\gamma}$ nonempty and disjoint from each ${\alpha_i}$, such that the configuration

$\displaystyle \Big\{(\alpha_0,\dots,\alpha_{\ell-1})\cup\gamma^\beta:\beta\in{\cal F}([\ell])\Big\}$

is contained in a single cell of the partition, where the union is to be taken coordinatewise.

This formulation appears on this site as well as in this book by McCutcheon, but in both places the statement is a little misleading, making it seem like ${\beta}$ ranges only over singletons; in other words, it seems to deal with the configuration

$\displaystyle \Big\{(\alpha_0,\dots,\alpha_{\ell-1})\cup\gamma^{\{i\}}:i\in[\ell]\Big\}$

Such a result is trivially a corollary of Theorem 5, but I do not know if it implies it (or, equivalently, Theorem 3).

Another equivalent formulation of the Hales-Jewett theorem is an infinitistic version:

Theorem 6 (Infinite Hales-Jewett) Let ${k\in{\mathbb N}}$. For any finite coloring of

$\displaystyle \bigcup_{n\in{\mathbb N}}[k]^n$

there exists a monochromatic combinatorial line.

Unlike van der Waerden or Szemerédi’s theorems, where one needs a compactness argument to deduce the finitistic version from the infinitistic one, Theorems 3 and 6 are readily seen to be equivalent, since the object colored in the infinitistic version is simply the union of all ${[k]^n}$.

Similarly, there is an equivalent infinitistic formulation of Theorem 5:

Theorem 7 (Infinite Hales-Jewett, again) For any ${\ell\in{\mathbb N}}$ and any finite partition of ${{\mathcal F}({\mathbb N})^\ell}$, there exist ${\gamma,\alpha_0,\dots,\alpha_{\ell-1}\in{\cal F}({\mathbb N})}$ with ${\gamma}$ nonempty and disjoint from each ${\alpha_i}$, such that the configuration

$\displaystyle \Big\{(\alpha_0,\dots,\alpha_{\ell-1})\cup\gamma^\beta:\beta\in{\cal F}([\ell])\Big\}$

is contained in a single cell of the partition.

Again it should be clear that Theorems 5 and 7 are indeed equivalent.

— 1. Van der Waerden type theorem for commutative semigroups —

As mentioned in the introduction, the Hales-Jewett theorem is a set theoretical result and therefore it can be applied to any structure. We first deduce from it the multidimensional IP van der Waerden theorem. Recall that the IP-set generated by an infinite set ${S\subset{\mathbb N}}$ is the set of all finite sums ${\big\{\sum_{i\in\alpha}i:\emptyset\neq\alpha\in{\cal F}(S)\big\}}$.

Theorem 8 (IP-Multidimensional van der Waerden) Let ${A\subset{\mathbb N}}$ be an IP-set, let ${m,t\in{\mathbb N}}$ and let ${{\mathbb N}^m=C_1\cup\cdots\cup C_r}$ be a finite coloring. Then there exist some ${i\in\{1,\dots,r\}}$, ${b\in{\mathbb N}^m}$ and ${a\in A}$ such that

$\displaystyle \{b+j_1ae_1+\cdots+j_mae_m:j_1,\dots,j_m\in[t]\}\subset C_i$

where ${e_1,\dots,e_m}$ form the canonical basis of ${{\mathbb N}^m}$.

Proof: Let ${S=\{s_1<s_2<\cdots\}}$ be the infinite set which generates the IP-set ${A}$. Let ${k=t^m}$ and define the function ${f:[k]^{<\infty}\rightarrow{\mathbb N}^m}$, where ${[k]^{<\infty}:=\bigcup_{n\in{\mathbb N}}[k]^n}$, in the following way: if ${{\bf w}=(w_1,w_2,\cdots, w_n)\in[k]^n\subset[k]^{<\infty}}$, we can write each ${w_i\in[k]}$ uniquely as

$\displaystyle w_i=w_i^{(0)}+w_i^{(1)}t+\cdots+w_i^{(m-1)}t^{m-1}$

with ${w_i^{(j)}\in[t]}$. We let

$\displaystyle f({\bf w})=\left(\sum_{i=1}^ns_iw_i^{(0)},\sum_{i=1}^ns_iw_i^{(1)},\dots,\sum_{i=1}^ns_iw_i^{(m-1)}\right)\in{\mathbb N}^m$

This induces a finite coloring of ${[k]^{<\infty}}$ by coloring ${{\bf w}}$ with the color of ${f({\bf w})}$. Let ${{\bf c}=(c_1,c_2,\cdots,c_n)\in\big([k]\cup\{*\}\big)^n}$ be a variable word, given by Theorem 6, which generates a monochromatic combinatorial line ${L}$ with respect to this coloring. Let ${F}$ be the nonempty set ${F=\{i\in[n]:c_i=*\}}$. Let ${a=\sum_{i\in F}s_i}$ (note that ${a\in A}$) and let ${b=f\big({\bf c}(0)\big)}$ (where ${{\bf c}(0)\in[k]^n}$ is the sequence we obtain by replacing each ${*}$ in ${{\bf c}}$ with ${0}$).

I claim that the set ${\{b+j_1ae_1+\cdots+j_mae_m:j_1,\dots,j_m\in[t]\}}$ is exactly ${f(L)}$ and hence it is monochromatic. Indeed, given ${j_1,\dots,j_m\in[t]}$, let

$\displaystyle j=j_1+j_2t+\cdots+j_mt^{m-1}\in[k]$

Let ${{\bf c}(j)\in L}$ be the sequence obtained from ${{\bf c}}$ by replacing each ${*}$ with ${j}$. Then

$\displaystyle \begin{array}{rcl} \displaystyle f\big({\bf c}(j)\big)&=&\displaystyle \left(\sum_{i=1}^ns_i{\bf c}(j)_i^{(0)},\sum_{i=1}^ns_i{\bf c}(j)_i^{(1)},\dots,\sum_{i=1}^ns_i{\bf c}(j)_i^{(m-1)}\right)\\&=&\displaystyle \left(\sum_{i\notin F}s_i{\bf c}_i^{(0)},\sum_{i\notin F}s_i{\bf c}_i^{(1)},\dots,\sum_{i\notin F}s_i{\bf c}_i^{(m-1)}\right)+\left(\sum_{i\in F}s_ij_1,\sum_{i\in F}s_ij_2,\dots,\sum_{i\in F}s_ij_m\right) \\&=&\displaystyle b+a(j_1,j_2,\dots,j_m) \end{array}$

$\Box$
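The identity at the heart of this proof can be sanity-checked numerically. Here is a small sketch with hypothetical parameters ${t=m=2}$ (so ${k=4}$) and arbitrarily chosen generators ${s_1,s_2,s_3}$:

```python
def digits(w, t, m):
    """Base-t digits (w^(0), ..., w^(m-1)) of a letter w in [t^m]."""
    return [(w // t**j) % t for j in range(m)]

def f(word, S, t, m):
    """The map f from the proof: coordinate j is sum_i S[i] * (j-th digit of word[i])."""
    return tuple(sum(S[i] * digits(w, t, m)[j] for i, w in enumerate(word))
                 for j in range(m))

# Sanity check of the key identity f(c(j)) = b + a*(j_1, ..., j_m):
t, m = 2, 2                  # so k = t**m = 4
S = [1, 2, 5]                # hypothetical generators s_1, s_2, s_3 of an IP-set
c = [3, "*", "*"]            # a variable word; F = {1, 2}, so a = 2 + 5 = 7
a = S[1] + S[2]
b = f([3, 0, 0], S, t, m)    # f(c(0))
for j1 in range(t):
    for j2 in range(t):
        j = j1 + j2 * t      # j written in base t
        cj = [j if x == "*" else x for x in c]
        assert f(cj, S, t, m) == (b[0] + j1 * a, b[1] + j2 * a)
```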

A careful analysis of the previous proof reveals a much more general result which holds for any commutative semigroup, and indeed for an even more general setting. The definition below of a partial semigroup is only one among many reasonable structures weaker than semigroup. For the purposes of this post I only need partial semigroups to include the class of all semigroups, and the structure from Example 2 below.

Definition 9 (Partial semigroup) Let ${G}$ be a set, let ${X\subset G\times G}$ and let ${\bullet:X\rightarrow G}$ be an operation. The triple ${(G,X,\bullet)}$ is a partial semigroup if it satisfies the following properties:

• For any ${x,y,z\in G}$, ${(x\bullet y)\bullet z=x\bullet(y\bullet z)}$, in the sense that either both sides are undefined or both are defined and equal.
• For any ${n\in{\mathbb N}}$ and ${x_1,\dots,x_n\in G}$ there exists ${y\in G\setminus\{x_1,\dots,x_n\}}$ such that ${x_i\bullet y}$ is defined for every ${i\in\{1,\dots,n\}}$.

Observe that the second condition implies that ${G}$ is infinite. A partial semigroup is commutative if ${x\bullet y=y\bullet x}$ for every ${(x,y)\in X}$.

Example 1 Every infinite semigroup is a partial semigroup with ${X=G\times G}$.

Example 2 Let ${G={\cal F}({\mathbb N})}$, let ${X:=\{(\alpha,\beta)\in G^2:\alpha\cap\beta=\emptyset\}}$ be the family of all pairs of disjoint sets, and let ${\bullet:X\rightarrow G}$ be the union. It is easy to check that this is a commutative partial semigroup.
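A quick way to convince oneself that Example 2 satisfies both axioms is to test them on small sets. In the sketch below, `None` plays the role of "undefined":

```python
def bullet(x, y):
    """Partial operation of Example 2: union, defined only on disjoint pairs."""
    return x | y if x.isdisjoint(y) else None  # None marks "undefined"

# Associativity where defined:
a, b, c = frozenset({0}), frozenset({1}), frozenset({2})
assert bullet(bullet(a, b), c) == bullet(a, bullet(b, c)) == frozenset({0, 1, 2})

# Second axiom: given finitely many sets, any new y avoiding all of them works.
xs = [frozenset({0, 1}), frozenset({2})]
y = frozenset({max(max(x) for x in xs) + 1})
assert all(bullet(x, y) is not None for x in xs)
```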

Lemma 10 Let ${G}$ be an infinite commutative partial semigroup, denoted additively, and let ${n\in{\mathbb N}}$. Then there exist ${x_0,\dots,x_{n-1}\in G}$ such that for any ${\emptyset\neq\alpha\subset[n]}$ the sum ${\sum_{i\in\alpha}x_i}$ is defined.

Proof: We use induction on ${n}$. For ${n=1}$ the claim is trivial. Assume now we have found ${x_0,\dots,x_{n-2}}$ satisfying the claim. Let ${y_1,\dots,y_m}$ be an enumeration of all the sums of the form ${\sum_{i\in\alpha}x_i}$ with ${\emptyset\neq\alpha\subset[n-1]}$ (and so ${m=2^{n-1}-1}$). Using the second condition of the definition of partial semigroup we can find some ${x_{n-1}\in G\setminus\{y_1,\dots,y_m\}}$ such that the sum ${y_j+x_{n-1}}$ is defined for all ${1\leq j\leq m}$. Thus all the sums ${\sum_{i\in\alpha}x_i}$ with ${\emptyset\neq\alpha\subset[n]}$ are well defined, as desired. $\Box$

Definition 11 Let ${(G,\bullet),(H,\circ)}$ be partial semigroups and let ${f:G\rightarrow H}$. We say that ${f}$ is a partial semigroup homomorphism if for any ${x,y\in G}$ for which ${x\bullet y}$ is defined, also ${f(x)\circ f(y)}$ is defined and equals ${f(x\bullet y)}$.

Observe that when ${G}$ and ${H}$ are semigroups, then a partial semigroup homomorphism is just a semigroup homomorphism.

Definition 12 Let ${G}$ be a partial semigroup and let ${e\in G}$. We say that ${e}$ is the identity of ${G}$ if for every ${x\in G}$, ${e\bullet x}$ is defined and equals ${x}$. We will denote the identity by ${0}$.

Observe that if an identity exists it is unique.

Theorem 13 Let ${H}$ be a commutative partial semigroup, let ${G}$ be a commutative semigroup, both denoted additively, and let ${f_0,\dots,f_{k-1}}$ be partial semigroup homomorphisms from ${H}$ to ${G}$. For any finite coloring of ${G}$ there exist ${a\in H\setminus\{0\}}$ (if ${H}$ has no identity then simply ${a\in H}$) and ${b\in G}$ such that the set ${\{b+f_i(a):i\in[k]\}}$ is monochromatic.

Proof: Given a finite coloring ${G=C_1\cup\cdots\cup C_r}$, let ${n}$ be given by Theorem 3 for ${k}$ (the number of homomorphisms) and ${r}$ (the number of colors/cells in the partition). Let ${h_1,\dots,h_n\in H}$ be distinct elements given by Lemma 10, so that all sums among those elements are defined. Let ${b_0\in G}$ be arbitrary and let ${f:[k]^n\rightarrow G}$ be the map

$\displaystyle f:(w_1,\dots,w_n)\mapsto b_0+f_{w_1}(h_1)+f_{w_2}(h_2)+\cdots+f_{w_n}(h_n)$

Now we can partition ${[k]^n}$ into ${r}$ cells by letting ${\tilde C_i=\{w\in[k]^n:f(w)\in C_i\}}$ for each ${i=1,\dots,r}$. Applying the Hales-Jewett Theorem we can now find a variable word ${c\in([k]\cup\{*\})^n}$ whose corresponding combinatorial line lies within a single ${\tilde C_i}$.

Let ${A=\{i\in[n]:c_i=*\}}$ be the nonempty set of the positions of the wild card and let ${B=[n]\setminus A}$. Let ${b=b_0+\sum_{i\in B}f_{c_i}(h_i)}$ and let ${a=\sum_{i\in A}h_i}$, which is well defined by the construction of ${h_1,\dots,h_n}$ in Lemma 10. Finally observe that ${\{b+f_i(a):i\in[k]\}}$ is exactly the image of the monochromatic combinatorial line under the map ${f}$, and hence is contained in ${C_i}$ as desired.

$\Box$
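To make Theorem 13 concrete: taking ${H=G=({\mathbb N},+)}$ and the homomorphisms ${f_i(x)=ix}$ for ${i\in[k]}$, the monochromatic set ${\{b+f_i(a):i\in[k]\}}$ is a ${k}$-term arithmetic progression, i.e. van der Waerden's theorem. The sketch below (an illustration, not the proof) brute-forces such a configuration for a hypothetical small coloring:

```python
def find_progression(coloring, k):
    """Brute-force search for b, a with {b + i*a : i in [k]} monochromatic,
    where coloring is a dict on an initial segment of N."""
    N = max(coloring)
    for a in range(1, N + 1):
        for b in range(0, N + 1 - (k - 1) * a):
            cells = {coloring[b + i * a] for i in range(k)}
            if len(cells) == 1:
                return b, a
    return None

coloring = {n: n % 2 for n in range(20)}  # a hypothetical 2-coloring of {0,...,19}
print(find_progression(coloring, 3))      # some monochromatic 3-term progression (b, a)
```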

Observe that Theorem 8 is a direct corollary of Theorem 13. Indeed, an IP-set is the image of a partial semigroup homomorphism ${g:\alpha\mapsto\sum_{i\in\alpha}s_i}$ from the partial semigroup ${({\cal F}({\mathbb N}),\cup)}$ of Example 2 (with the identity ${\emptyset}$ removed) to ${({\mathbb N},+)}$, and the configuration ${\{b+j_1ae_1+\cdots+j_mae_m:j_1,\dots,j_m\in[t]\}}$ can be written as ${\{b+f(\tilde a):f\in F\}}$, where ${\tilde a\in{\cal F}({\mathbb N})}$ is any set with ${g(\tilde a)=a}$ and ${F}$ is the set of homomorphisms

$\displaystyle \alpha\mapsto j_1g(\alpha)e_1+\cdots+j_mg(\alpha)e_m$

with ${j_1,\dots,j_m\in[t]}$.

We could have required ${G}$ to be only a partial semigroup, but then we would need to put some constraint on the homomorphisms ${f_i}$ so that the sum of their images is defined (such a restriction is essential: otherwise one can take two disjoint copies of a semigroup ${G}$ and let ${f_1,f_2}$ be the identity maps from ${G}$ onto each of the copies; in that situation ${f_1(x)+f_2(y)}$ is never defined!).

— 2. Proof of the equivalence —

In this section we prove that Theorems 3 and 5 are equivalent.

First we see how Theorem 3 implies Theorem 5. Unfortunately one cannot simply apply Theorem 13, as we need some extra information about ${a}$ and ${b}$ beyond the conclusion of that theorem. However, the information needed can be extracted from the proof of Theorem 13 presented above.

Let ${\ell,r\in{\mathbb N}}$, let ${H={\cal F}({\mathbb N})}$ and let ${G=\big({\cal F}({\mathbb N})\big)^\ell}$. Let ${k=2^\ell}$ and identify ${[k]}$ with ${{\cal F}([\ell])}$. Let ${n}$ be given by Theorem 3.

For each ${\beta\in{\cal F}([\ell])}$ let ${f_\beta:H\rightarrow G}$ be the homomorphism ${f_\beta:\gamma\mapsto\gamma^\beta}$.

Assume we are given a finite coloring of ${G}$. We go through the proof of Theorem 13 above, letting ${h_i=\{i\}\in H}$ for each ${i\in[n]}$. Letting ${b_0=(\emptyset,\dots,\emptyset)}$ be the identity of ${G}$ we find

$\displaystyle b=\sum_{i\in B}f_{c_i}(h_i)\in G\qquad\qquad a=\sum_{i\in A}h_i=A\in H$

for some ${c_i\in{\cal F}([\ell])}$, ${A\subset[n]}$ nonempty and ${B=[n]\setminus A}$. Observe that we can write ${b=(\alpha_0,\dots,\alpha_{\ell-1})}$ with each ${\alpha_j\subset B}$. Hence, letting ${\gamma=A}$ we have that ${\gamma}$ is disjoint from each ${\alpha_j}$.

Finally, observe that

$\displaystyle \Big\{b+f_\beta(a):\beta\in{\cal F}\big([\ell]\big)\Big\}=\Big\{(\alpha_0,\dots,\alpha_{\ell-1})\cup\gamma^\beta:\beta\in{\cal F}\big([\ell]\big)\Big\}$

We now prove the other direction, i.e. that Theorem 5 implies Theorem 3. Let ${k,r\in{\mathbb N}}$ and assume first that ${k=2^\ell}$ for some ${\ell\in{\mathbb N}}$. Then we apply Theorem 5 with this ${\ell}$ and ${r}$ and we get ${n\in{\mathbb N}}$. Again we identify ${[k]}$ with the set ${{\cal F}([\ell])}$ of the same cardinality (in other words, fix an arbitrary bijection between them).

Given a partition ${[k]^n\equiv\big({\cal F}([\ell])\big)^n=C_1\cup\cdots\cup C_r}$ we can induce a partition of ${\big({\cal F}([n])\big)^\ell=\tilde C_1\cup\cdots\cup\tilde C_r}$ by putting ${{\bf\alpha}=(\alpha_0,\dots,\alpha_{\ell-1})}$ in ${\tilde C_i}$ if and only if ${(w_1,\dots,w_n)\in C_i}$, where ${w_j=\{t\in[\ell]:j\in\alpha_t\}\in{\cal F}([\ell])\equiv[k]}$.
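The correspondence ${(\alpha_0,\dots,\alpha_{\ell-1})\mapsto(w_1,\dots,w_n)}$ with ${w_j=\{t\in[\ell]:j\in\alpha_t\}}$ is a bijection between ${\big({\cal F}([n])\big)^\ell}$ and ${\big({\cal F}([\ell])\big)^n\equiv[k]^n}$, its inverse being ${\alpha_t=\{j:t\in w_j\}}$. A sketch (0-indexing the coordinates ${j}$) confirms this for hypothetical small ${\ell,n}$:

```python
from itertools import combinations, product

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def to_word(alphas, n, ell):
    """Map (alpha_0, ..., alpha_{ell-1}) in F([n])^ell to a word in F([ell])^n
    via w_j = {t : j in alpha_t} (coordinates 0-indexed)."""
    return tuple(frozenset(t for t in range(ell) if j in alphas[t]) for j in range(n))

ell, n = 2, 3
tuples = list(product(subsets(range(n)), repeat=ell))
images = {to_word(a, n, ell) for a in tuples}
assert len(images) == len(tuples) == (2 ** n) ** ell  # injective, hence a bijection
```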

Invoking Theorem 5 we can now find ${(\alpha_0,\dots,\alpha_{\ell-1})\in\big({\cal F}([n])\big)^\ell}$ and ${\gamma\in{\cal F}([n])}$ disjoint from all the ${\alpha_i}$ such that

$\displaystyle \Big\{(\alpha_0,\dots,\alpha_{\ell-1})\cup\gamma^\beta:\beta\in{\cal F}\big([\ell]\big)\Big\}\subset\tilde C_i$

for some ${i\in\{1,\dots,r\}}$. This means that for every ${\beta\in{\cal F}([\ell])}$, the word ${(w_1,\dots,w_n)\in C_i}$, where ${w_j=\{t\in[\ell]:j\in\alpha_t\}}$ for ${j\notin\gamma}$ and ${w_j=\beta}$ for ${j\in\gamma}$. But this forms the combinatorial line generated by the variable word ${(c_1,\dots,c_n)}$ where ${c_j=\{t\in[\ell]:j\in\alpha_t\}}$ for ${j\notin\gamma}$ and ${c_j=*}$ for ${j\in\gamma}$. This finishes the proof for the case when ${k=2^\ell}$.

For general ${k}$, notice that if Theorem 3 is true for some ${\tilde k>k}$, then it must be true for ${k}$ as well. To see this, simply map all the letters ${k,k+1,\dots,\tilde k-1}$ to ${k-1}$.

More precisely, let first ${\tilde k\geq k}$ be such that ${\tilde k=2^\ell}$ for some ${\ell\in{\mathbb N}}$. We have already shown that Theorem 3 is true for ${\tilde k}$. Let ${r\in{\mathbb N}}$ and let ${n\in{\mathbb N}}$ be given by the Hales-Jewett theorem for ${\tilde k}$. Given a partition ${[k]^n=C_1\cup\cdots\cup C_r}$, we can induce a partition ${[\tilde k]^n=\tilde C_1\cup\cdots\cup\tilde C_r}$ by letting ${(v_1,\dots,v_n)\in\tilde C_i}$ if and only if ${(w_1,\dots,w_n)\in C_i}$, where ${w_i=v_i}$ if ${v_i\in[k]}$ and ${w_i=k-1}$ otherwise. If ${(c_1,\dots,c_n)\in\big([\tilde k]\cup\{*\}\big)^n}$ is a variable word which produces a combinatorial line contained in ${\tilde C_i}$ for some ${i\in\{1,\dots,r\}}$ (and such a variable word exists by the previous argument), then the variable word ${(d_1,\dots,d_n)\in\big([k]\cup\{*\}\big)^n}$, defined by ${d_i=c_i}$ if ${c_i\in[k]\cup\{*\}}$ and ${d_i=k-1}$ otherwise, generates a combinatorial line contained in ${C_i}$.

This finishes the proof.