
%_ **************************************************************************
%_ * The TeX source for AMS journal articles is the publishers TeX code     *
%_ * which may contain special commands defined for the AMS production      *
%_ * environment.  Therefore, it may not be possible to process these files *
%_ * through TeX without errors.  To display a typeset version of a journal *
%_ * article easily, we suggest that you retrieve the article in DVI,       *
%_ * PostScript, or PDF format.                                             *
%_ **************************************************************************
% Author Package file for use with AMS-LaTeX 1.2
\controldates{16-APR-2001,16-APR-2001,16-APR-2001,16-APR-2001}
 
\documentclass{era-l}

\issueinfo{7}{05}{}{2001}
\dateposted{April 24, 2001}
\pagespan{28}{36}
\PII{S 1079-6762(01)00091-9}
\copyrightinfo{2001}{American Mathematical Society}
\usepackage{graphicx}


\theoremstyle{plain}
\newtheorem*{clemma}{Closing Lemma}


\theoremstyle{definition}
\newtheorem{Def}{Definition}[section]

\newtheorem*{phyper}{Perturbation of hyperbolicity}

\newcommand{\R}{\mathbb R}
\newcommand{\Z}{\mathbb Z}
\newcommand{\J}{\mathcal{J}}
\newcommand{\D}{\mathcal{D}}
\newcommand{\dt}{\delta}
\newcommand{\myr}{\mathbf r}
\newcommand{\gm}{\gamma}
\newcommand{\gmn}{\gm_n}
\newcommand{\tgmn}{\tilde{\gmn}}
\newcommand{\gmncdt}{\gm_n(C,\dt)}
\newcommand{\myeps}{\varepsilon}
\newcommand{\veps}{{\vec {\myeps}}}
\newcommand{\pa}{\partial}

\begin{document}

\title[Periodic points II]{A stretched exponential bound on the rate of growth
of the number of periodic points for prevalent diffeomorphisms II}

\author{Vadim Yu. Kaloshin}
\address{Fine Hall, Princeton University,
Princeton, NJ 08544}
\email{kaloshin@math.princeton.edu}

\author{Brian R. Hunt}
\address{Department of Mathematics and Institute for
Physical Science and Technology, University of Maryland,
College Park, MD 20742}
\email{bhunt@ipst.umd.edu}

\commby{Svetlana Katok}

\date{December 21, 2000}

\subjclass[2000]{Primary 37C20, 37C27, 37C35, 34C25, 34C27}

\keywords{Periodic points, prevalence, diffeomorphisms}

\begin{abstract}
We continue the previous article's discussion of bounds, for prevalent
diffeomorphisms of smooth compact manifolds, on the growth of the
number of periodic points and the decay of their hyperbolicity as a
function of their period $n$.  In that article we reduced the main
results to a problem, for certain families of diffeomorphisms, of
bounding the measure of parameter values for which the diffeomorphism
has (for a given period $n$) an almost periodic point that is almost
nonhyperbolic.  We also formulated our results for $1$-dimensional
endomorphisms on a compact interval.  In this article we describe some
of the main techniques involved and outline the rest of the proof.  To
simplify notation, we concentrate primarily on the $1$-dimensional case.
\end{abstract}
\maketitle

\section{Introduction}
In the previous article, we described in Section~3 bounds on the
growth (as a function of $n$) of the number of periodic points of
period $n$ and the decay of their hyperbolicity for ``almost
every'' $C^{1+\rho}$ diffeomorphism of a finite-dimensional smooth
compact manifold.  Our definition of ``almost every'' is based on the
measure-theoretic notion of prevalence, described in Section~2 of the
previous article.  In Section~5 of that article, we reduced the main
results to a problem of estimating, within a particular parametrized
family of diffeomorphisms, the measure of ``bad'' parameters --- those
for which the diffeomorphism has an almost periodic point that is
almost nonhyperbolic.  The reader should refer to the previous
article, primarily Sections~3 and 5, for notation and terminology used
below.

\section{Perturbation of recurrent trajectories by
Newton interpolation polynomials}\label{Newtonstart}

The approach we take to estimate the measure of ``bad'' parameter
values in the space of perturbations $HB^N(\myr)$ is to choose a
coordinate system for this space and for a finite subset of the
coordinates to estimate the amount that we must change a particular
coordinate to make a ``bad'' parameter value ``good''.  Actually we
will choose a coordinate system that depends on a particular point
$x_0 \in B^N$, the idea being to use this coordinate system to
estimate the measure of ``bad'' parameter values corresponding to
initial conditions in some neighborhood of $x_0$, then cover $B^N$
with a finite number of such neighborhoods and sum the corresponding
estimates.  For a particular set of initial conditions, a
diffeomorphism will be ``good'' if every point in the set is either
sufficiently nonperiodic or sufficiently hyperbolic.

In order to keep the notation and formulas simple as we formalize
this approach, we consider the case of 1-dimensional maps, but the
reader should always have in mind that our approach is designed for
multidimensional diffeomorphisms.  Let $f:I \to I$ be a
$C^1$ map of the interval $I=[-1,1]$. Recall that a trajectory
$\{x_k\}_{k \in \Z}$ of $f$ is called {\em recurrent\/} if
it returns arbitrarily close to its initial position---that 
is, for all $\gm>0$ we have $|x_0-x_n|<\gm$ for some $n>0$.
A very basic question is how much one should perturb $f$ to
make $x_0$ periodic.  Here is an elementary lemma that
gives a simple partial answer to this question.


%\noindent {\bf Closing Lemma.}\ {\it
\begin{clemma}
Let
$\{x_k=f^k(x_0)\}_{k=0}^{n}$ be a trajectory of length $n+1$ of a
map $f:I \to I$. Let $u=(x_0-x_n)/\prod_{k=0}^{n-2}(x_{n-1}-x_k)$.
Then $x_0$ is a periodic point of period $n$ of the map
\begin{equation}
\label{closing} f_u(x)=f(x)+u\prod_{k=0}^{n-2}(x-x_k).
\end{equation}
\end{clemma}
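
To verify this, note that the perturbation term in (\ref{closing})
vanishes at each of $x_0,\dots,x_{n-2}$, so $f_u(x_k)=f(x_k)=x_{k+1}$
for $0\leq k\leq n-2$ and hence $f_u^{k}(x_0)=x_k$ for $0\leq k\leq n-1$,
while
\[
f_u(x_{n-1})=f(x_{n-1})+u\prod_{k=0}^{n-2}(x_{n-1}-x_k)
=x_n+(x_0-x_n)=x_0 .
\]
Thus $f_u^n(x_0)=x_0$ (provided the points $x_0,\dots,x_{n-1}$ are
distinct, so that $u$ is well defined).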

Of course $f_u$ is close to $f$ if and only if $u$ is
sufficiently small, meaning that $|x_0-x_n|$ should be small
compared to $\prod_{k=0}^{n-2}|x_{n-1}-x_k|$. However, this
product is likely to contain small factors for recurrent
trajectories. In general, it is difficult to control the effect
of perturbations on recurrent trajectories; the simple reason is
that {\em one cannot perturb $f$ at two nearby points
independently\/}.

The Closing Lemma above also gives an idea of how much we must change
the parameter $u$ to make a point $x_0$ that is $(n,\gm)$-periodic not
be $(n,\gm)$-periodic for a given $\gm > 0$, which, as we described
above, is one way to make a map that is ``bad'' for the initial
condition $x_0$ become ``good''.  To make use of our other alternative
we must determine how much we need to perturb a map $f$ to make a
given $x_0$ be $(n,\gm)$-hyperbolic for some $\gm > 0$.

%{\bf Perturbation of hyperbolicity.}\ {\it
\begin{phyper}
Let $\{x_k=f^k(x_0)\}_{k=0}^{n-1}$ be a trajectory
of length $n$ of a $C^1$ map $f:I \to I$, and for $v\in\R$ consider the map
\begin{equation} \label{movehyperbolicity}
f_v(x)=f(x)+v(x-x_{n-1})\prod_{k=0}^{n-2}(x-x_k)^2.
\end{equation}
If
\begin{equation}
\left|\vphantom{{f'}^2}|(f^n_v)'(x_0)| - 1\right| =
\left|\vphantom{{{\prod_0^n}^2}^2}
\left|\prod_{k=0}^{n-1}f'(x_k)+
v\prod_{k=0}^{n-2}(x_{n-1}-x_k)^2\prod_{k=0}^{n-2}f'(x_k)\right|
- 1\right| > \gm,
\end{equation}
then $x_0$ is an $(n,\gm)$-hyperbolic point of $f_v$.
\end{phyper}
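
This can be verified in the same way as the Closing Lemma.  The
perturbation term in (\ref{movehyperbolicity}) vanishes at each of
$x_0,\dots,x_{n-1}$, so $f_v^k(x_0)=x_k$ for $0\leq k\leq n-1$;
moreover it has a double zero at $x_0,\dots,x_{n-2}$, so
$f_v'(x_k)=f'(x_k)$ for $k\leq n-2$, while
\[
f_v'(x_{n-1})=f'(x_{n-1})+v\prod_{k=0}^{n-2}(x_{n-1}-x_k)^2 .
\]
By the chain rule, $(f_v^n)'(x_0)=\prod_{k=0}^{n-1}f_v'(x_k)$, which is
exactly the quantity appearing inside the absolute values above.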

Once again we see that the product of distances
$\prod_{k=0}^{n-2}|x_{n-1}-x_k|$ along the trajectory
is an important quantitative characteristic of how much freedom
we have to perturb.

The perturbations (\ref{closing}) and (\ref{movehyperbolicity}) are
reminiscent of Newton interpolation polynomials.  Let us put these
formulas into a general setting using singularity theory.

Given $n > 0$ and a $C^1$ function $f : I \to \R$ we define
an associated function $j^{1,n}f: I^n \to I^n \times \R^{2n}$ by
\begin{equation}
\label{multijet}
j^{1,n}f(x_0, \dots, x_{n-1}) = \left(x_0, \dots, x_{n-1}, f(x_0),
\dots, f(x_{n-1}), f'(x_0), \dots, f'(x_{n-1})\right).
\end{equation}
In singularity theory this function is called the {\em $n$-tuple
$1$-jet\/} of $f$. The ordinary $1$-jet of $f$, usually
denoted by $j^1 f(x) = (x,f(x),f'(x))$, maps $I$ to
the {\em $1$-jet space\/} $\J^1(I,\R) \simeq I \times \R^2$.
The product of $n$ copies of $\J^1(I,\R)$, called the {\em multijet
space\/}, is denoted by
\begin{equation}
\J^{1,n}(I,\R)=\underbrace{\J^1(I,\R) \times \dots
\times \J^1(I,\R)}_{n \ \textup{times}},
\end{equation}
and is equivalent to $I^n \times \R^{2n}$ after rearranging
coordinates.
The $n$-tuple 1-jet of $f$ associates with each $n$-tuple of points
in $I^n$ all the information necessary to determine how close the
$n$-tuple is to being a periodic orbit, and if so, how close it is to
being nonhyperbolic.

The set
\begin{equation}
\Delta_n(I)=\left\{\{x_0, \dots, x_{n-1}\}\times \R^{2n} \subset
\J^{1,n}(I,\R) :
\exists\ i \neq j\ \textup{such that}\ x_i=x_j\right\}
\end{equation}
is called the {\em diagonal\/} (or sometimes the {\em generalized
diagonal\/}) in the space of multijets.
In singularity theory the space of multijets is defined outside
of the diagonal $\Delta_n(I)$ and is usually denoted by
$\J^1_n(I,\R)=\J^{1,n}(I,\R) \setminus \Delta_n(I)$ (see \cite{GG}).
It is easy to see that {\em a recurrent trajectory $\{x_k\}_{k\in \Z_+}$
is located in a neighborhood of the diagonal  $\Delta_n(I)$ in
the space of multijets for a sufficiently large $n$\/}. If
$\{x_k\}_{k=0}^{n-1}$ is a part of a recurrent
trajectory of length $n$, then the product of distances along the
trajectory
\begin{equation} \label{productformula}
\prod_{k=0}^{n-2} \left| x_{n-1}-x_k \right|
\end{equation}
measures how close $\{x_k\}_{k=0}^{n-1}$ is to the diagonal
$\Delta_n(I)$, or how independently one can perturb points of a
trajectory. One can also say that (\ref{productformula})
is a quantitative characteristic of how recurrent a trajectory
of length $n$ is.  The introduction of this {\em product of distances
along a trajectory\/} into the analysis of recurrent trajectories
is one of the new ingredients of our paper.
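
For example, since $I=[-1,1]$, each factor in (\ref{productformula}) is
at most $2$, so a single $\gm$-close return $|x_{n-1}-x_j|<\gm$ for some
$j<n-1$ already forces
\[
\prod_{k=0}^{n-2} \left| x_{n-1}-x_k \right| < 2^{n-2}\,\gm ,
\]
which sharply limits how independently the points of the trajectory can
be perturbed.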

\section{Newton interpolation and blow-up
along the diagonal in multijet space}\label{blowup}

Now we present a construction due to Grigoriev and Yakovenko
{\cite{GY}} which puts the ``Closing Lemma'' and ``Perturbation of
Hyperbolicity'' statements above into a general framework.
It is an interpretation of Newton interpolation
polynomials as an algebraic blow-up along
the diagonal in the multijet space.
In order to keep the notation and formulas simple, we continue in this
section to consider only the 1-dimensional case.

Consider the $2n$-parameter family of perturbations of a $C^1$ map
$f:I \to I$ by polynomials of degree $2n-1$,
\begin{equation} \label{2ndegree}
f_{\myeps}(x)=f(x)+\phi_{\myeps}(x), \qquad \phi_{\myeps}(x) =
\sum_{k=0}^{2n-1}{\myeps}_k x^k,
\end{equation}
where ${\myeps} = ({\myeps}_0, \dots, {\myeps}_{2n-1}) \in \R^{2n}$.  The
perturbation vector ${\myeps}$ consists of coordinates from the Hilbert
brick $HB^1(\myr)$ of analytic perturbations defined in
Section~3 of the previous article.
Our goal now is to describe how such perturbations affect the
$n$-tuple $1$-jet of $f$, and since the operator $j^{1,n}$ is linear
in $f$, for the time being we consider only the perturbations
$\phi_{\myeps}$ and their $n$-tuple $1$-jets.  For each $n$-tuple
$\{x_k\}_{k=0}^{n-1}$ there is a natural transformation
$\J^{1,n} : I^n \times \R^{2n} \to \J^{1,n}(I,\R)$ from
${\myeps}$-coordinates to jet-coordinates, given by
\begin{equation}
\label{epstojet}
\J^{1,n}(x_0, \dots, x_{n-1}, {\myeps}) =
j^{1,n} \phi_{\myeps}(x_0, \dots, x_{n-1}).
\end{equation}
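
For example, in the simplest case $n=1$ we have
$\phi_{\myeps}(x)={\myeps}_0+{\myeps}_1 x$, and
\[
\J^{1,1}(x_0,{\myeps}_0,{\myeps}_1)
=j^{1,1}\phi_{\myeps}(x_0)
=\bigl(x_0,\,{\myeps}_0+{\myeps}_1 x_0,\,{\myeps}_1\bigr),
\]
so for a fixed base point $x_0$ the jet-coordinates depend linearly on
${\myeps}$, with coefficients determined by $x_0$.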

Instead of working directly with the transformation $\J^{1,n}$, we
introduce intermediate $u$-coordinates based on Newton interpolation
polynomials.  The relation between ${\myeps}$-coordinates and
$u$-coordinates is given implicitly by
\begin{equation}\label{identity}
\phi_{\myeps}(x) = \sum_{k=0}^{2n-1}{\myeps}_k x^k=
\sum_{k=0}^{2n-1}u_k \prod_{j=0}^{k-1} (x-x_{j(\textup{mod}\ n)}).
\end{equation}
Based on this identity, we will define functions
$\D^{1,n} : I^n \times \R^{2n} \to I^n \times \R^{2n}$
and
$\pi^{1,n}:I^n \times \R^{2n}\to \J^{1,n}(I,\R)$
so that $\J^{1,n} = \pi^{1,n} \circ \D^{1,n}$, or in other words the
diagram in Figure~1 commutes.  We will show later that
$\D^{1,n}$ is invertible, while $\pi^{1,n}$ is invertible away from
the diagonal $\Delta_n(I)$ and defines a blow-up along it in the space
of multijets $\J^{1,n}(I,\R)$.
\begin{figure}[!h]
\includegraphics{era91el-fig-1}
\caption{An algebraic blow-up along the diagonal $\Delta_n(I)$.}
\end{figure}

The intermediate space, which we denote by $\mathcal{DD}^{1,n}(I,\R)$,
is called {\em the space of divided differences\/} and consists of
$n$-tuples of points $\{x_k\}_{k=0}^{n-1}$ and $2n$ real coefficients
$\{u_k\}_{k=0}^{2n-1}$.  Here are explicit coordinate-by-coordinate
formulas defining $\pi^{1,n} : \mathcal{DD}^{1,n}(I,\R) \to
\J^{1,n}(I,\R)$:
\begin{equation}
\begin{aligned}
\label{Newtonexpression}
\phi_{\myeps}(x_0) = &\, u_0, \\
\phi_{\myeps}(x_1) = &\, u_0+u_1(x_1-x_0),\\
\phi_{\myeps}(x_2) = &\, u_0+u_1(x_2-x_0)+u_2(x_2-x_0)(x_2-x_1),\\
\vdots\,&\\
\phi_{\myeps}(x_{n-1}) = &\, u_0+u_1(x_{n-1}-x_0)+\dots
+u_{n-1}(x_{n-1}-x_0)\cdots (x_{n-1}-x_{n-2}),\\
\phi_{\myeps}'(x_0) = &\, \frac{\pa}{\pa x}\left(\sum_{k=0}^{2n-1}
u_k \prod_{j=0}^{k-1} (x-x_{j(\textup{mod}\ n)})\right)\Big|_{x=x_0}, \\
\vdots\,&\\
\phi_{\myeps}'(x_{n-1}) = &\, \frac{\pa}{\pa x} \left(\sum_{k=0}^{2n-1}
u_k \prod_{j=0}^{k-1} (x-x_{j(\textup{mod}\ n)})\right)\Big|_{x=x_{n-1}}.
\end{aligned}
\end{equation}
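
For instance, when $n=2$ the right-hand side of (\ref{identity}) is
\[
\phi_{\myeps}(x)=u_0+u_1(x-x_0)+u_2(x-x_0)(x-x_1)+u_3(x-x_0)^2(x-x_1),
\]
and (\ref{Newtonexpression}) reads $\phi_{\myeps}(x_0)=u_0$,
$\phi_{\myeps}(x_1)=u_0+u_1(x_1-x_0)$,
$\phi_{\myeps}'(x_0)=u_1+u_2(x_0-x_1)$, and
$\phi_{\myeps}'(x_1)=u_1+u_2(x_1-x_0)+u_3(x_1-x_0)^2$; in this ordering
each successive jet-coordinate involves exactly one new coefficient $u_k$.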

These formulas are very useful for dynamics.  For a given base map $f$
and initial point $x_0$, the image $f_{\myeps}(x_0) = f(x_0) +
\phi_{\myeps}(x_0)$ of $x_0$ depends only on $u_0$.  Furthermore, the
image can be set to any desired point by choosing $u_0$ appropriately---we 
say then that it depends nontrivially on $u_0$.  If
$x_0$, $x_1$, and $u_0$ are fixed, the image $f_{\myeps}(x_1)$ of $x_1$
depends only on $u_1$, and as long as $x_0 \neq x_1$, it depends
nontrivially on $u_1$.  More generally, for $0 \leq k \leq n-1$, if
pairwise distinct points $\{x_j\}_{j=0}^k$ and coefficients
$\{u_j\}_{j=0}^{k-1}$ are fixed, then the image $f_{\myeps}(x_k)$ of
$x_k$ depends only and nontrivially on $u_k$.

Suppose now that an $n$-tuple of points $\{x_j\}_{j=0}^{n-1}$ not on the
diagonal $\Delta_n(I)$ and Newton coefficients $\{u_j\}_{j=0}^{n-1}$
are fixed.  Then the derivative $f'_{\myeps}(x_0)$ at $x_0$ depends only and
nontrivially on $u_n$.  Likewise for $0 \leq k \leq n-1$, if the distinct
points $\{x_j\}_{j=0}^{n-1}$ and Newton coefficients
$\{u_j\}_{j=0}^{n+k-1}$ are fixed, then the derivative $f'_{\myeps}(x_k)$
at $x_k$ depends only and nontrivially on $u_{n+k}$.

As Figure~2 illustrates, these considerations show that for any map
$f$ and any desired trajectory of distinct points with any given
derivatives along it, one can choose Newton coefficients
$\{u_k\}_{k=0}^{2n-1}$ and explicitly construct a map $f_{\myeps} = f +
\phi_{\myeps}$ with such a trajectory.  Thus we have shown that $\pi^{1,n}$ is
invertible away from the diagonal $\Delta_n(I)$ and defines a blow-up
along it in the space of multijets $\J^{1,n}(I,\R)$.
\begin{figure}[h]\label{NIP}
       \includegraphics{era91el-fig-2}
       \caption{Newton coefficients and their action.}
\end{figure}

Next we define the function $\D^{1,n} : I^n \times \R^{2n} \to
\mathcal{DD}^{1,n}(I,\R)$ explicitly using so-called divided
differences.  Let $g:\R \to \R$ be a $C^r$ function of one real
variable.
\begin{Def} \label{divdif}
The {\em first order divided difference\/} of $g$ is defined as
\begin{equation} \begin{aligned}
\Delta g(x_0, x_{1})=\frac {g(x_1)-g(x_0)}{x_1-x_0}
\end{aligned}
\end{equation}
for $x_1 \neq x_0$, and extended by its limit value $g'(x_0)$
for $x_1=x_0$.  Iterating this construction we define
divided differences of the $m$th order for $2 \leq m \leq r$,
\begin{equation} \begin{aligned}
\Delta^m g(x_0, \dots, x_m) =
\frac {\Delta^{m-1} g(x_0, \dots, x_{m-2}, x_m)-
\Delta^{m-1} g(x_0, \dots, x_{m-2}, x_{m-1})}{x_{m}-x_{m-1}}
\end{aligned}
\end{equation}
for $x_{m-1} \neq x_m$ and extended by its limit value for
$x_{m-1}=x_m$.
\end{Def}

A function loses at most one derivative of smoothness with each
application of $\Delta$, so $\Delta^m g$ is at least $C^{r-m}$ if $g$
is $C^r$.  Notice that $\Delta^m$ is linear as a function of $g$, and
one can show that it is a symmetric function of $x_0,\dots,x_m$; in
fact, by induction it follows that
\begin{equation}
\Delta^m g(x_0, \dots, x_m)=
\sum_{i=0}^m \frac{g(x_i)}{\prod_{j \neq i} (x_i - x_j)}.
\end{equation}
Another identity that is proved by induction will be more important
for us, namely
\begin{equation}
\Delta^m x^k(x_0,\dots, x_m)=p_{k,m}(x_0,\dots, x_m),
\end{equation}
where $p_{k,m}(x_0, \dots, x_m)$ is $0$ for $m > k$, and for $m \leq k$, it
is the sum of all degree $k-m$ monomials in $x_0, \dots, x_m$ with
unit coefficients,
\begin{equation} \label{homogpolyn}
p_{k,m}(x_0,\dots, x_m)=\sum_{r_0+\dots + r_m=k-m}\;
\prod_{j=0}^{m} x_j^{r_j}.
\end{equation}
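
For example, for $g(x)=x^3$ we have
$\Delta^1 x^3(x_0,x_1)=\frac{x_1^3-x_0^3}{x_1-x_0}
=x_0^2+x_0x_1+x_1^2=p_{3,1}(x_0,x_1)$, and one more application gives
\[
\Delta^2 x^3(x_0,x_1,x_2)
=\frac{(x_0^2+x_0x_2+x_2^2)-(x_0^2+x_0x_1+x_1^2)}{x_2-x_1}
=x_0+x_1+x_2=p_{3,2}(x_0,x_1,x_2),
\]
while $\Delta^3 x^3 \equiv 1 = p_{3,3}$ and $\Delta^m x^3 \equiv 0$ for
$m>3$.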

The divided differences form coefficients for the Newton interpolation
formula.  For all $C^\infty$ functions $g : \R \to \R$ we have
\begin{equation}
\begin{aligned}
\label{int}
g(x) &=  \Delta^0 g(x_0) +\Delta^1 g(x_0,x_1) (x-x_0)+ \cdots \\
&\quad + \Delta^{n-1} g(x_0,\dots, x_{n-1}) (x-x_0) \cdots (x-x_{n-2}) \\
&\quad + \Delta^{n} g(x_0,\dots, x_{n-1},x) (x-x_0) \cdots (x-x_{n-1})
\end{aligned}
\end{equation}
identically for all values of $x, x_0, \dots, x_{n-1}$.
All terms of this representation are polynomial in $x$
except for the last one, which we view as a remainder term.
The sum of the polynomial terms is the degree $(n-1)$ {\em Newton
interpolation polynomial\/} for $g$ at $\{x_k\}_{k=0}^{n-1}$.  To
obtain a degree $2n-1$ interpolation polynomial for $g$ and its
derivative at $\{x_k\}_{k=0}^{n-1}$, we simply use (\ref{int}) with
$n$ replaced by $2n$ and the $2n$-tuple of points
$\{x_{k(\textup{mod}\ n)}\}_{k=0}^{2n-1}$.
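
As a simple check, for $n=1$ the doubled set of points is $\{x_0,x_0\}$
and the degree $2n-1=1$ interpolation polynomial produced by (\ref{int})
is
\[
\Delta^0 g(x_0)+\Delta^1 g(x_0,x_0)(x-x_0)=g(x_0)+g'(x_0)(x-x_0),
\]
the first-order Taylor polynomial of $g$ at $x_0$; the repeated points
are what force the interpolating polynomial to match both $g$ and $g'$
at each $x_k$.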

Recall that $\D^{1,n}$ was defined implicitly by (\ref{identity}).
We have described how to use divided differences to construct a degree
$2n-1$ interpolating polynomial of the form on the right-hand side of
(\ref{identity}) for an arbitrary $C^\infty$ function $g$.  Our
interest then is in the case $g = \phi_{\myeps}$, which as a degree
$2n-1$ polynomial itself will have no remainder term and coincide
exactly with the interpolating polynomial.  Thus $\D^{1,n}$ is given
coordinate-by-coordinate by
\begin{equation}
\begin{aligned}\label{Newtonmap}
u_m = &\, \Delta^m \left( \sum_{k=0}^{2n-1}{\myeps}_k x^k\right)
(x_0,\dots,x_{m(\textup{mod}\ n)}) \\
= &\, {\myeps}_m+\sum_{k=m+1}^{2n-1} {\myeps}_k p_{k,m}(x_0,\dots , 
x_{m(\textup{mod}\ n)})
\end{aligned}
\end{equation}
for $m = 0, \dots, 2n-1$.  We call the transformation given by
(\ref{Newtonmap}) the {\em Newton map\/}.  Notice that for fixed
$\{x_k\}_{k=0}^{2n-1}$, the Newton map is linear and given by an upper
triangular matrix with units on the diagonal.  Hence it is Lebesgue
volume-preserving and invertible, whether or not the underlying
$n$-tuple $\{x_k\}_{k=0}^{n-1}$ lies on the diagonal $\Delta_n(I)$.
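
For example, when $n=1$ formula (\ref{Newtonmap}) gives
$u_0={\myeps}_0+{\myeps}_1x_0$ and $u_1={\myeps}_1$, i.e.,
\[
\begin{pmatrix} u_0\\ u_1 \end{pmatrix}
=\begin{pmatrix} 1 & x_0\\ 0 & 1 \end{pmatrix}
\begin{pmatrix} {\myeps}_0\\ {\myeps}_1 \end{pmatrix},
\]
an upper triangular matrix with unit diagonal and hence determinant $1$.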

We call the basis of monomials
\begin{equation} \label{Newtonbasis}
\prod_{j=0}^{k-1} (x-x_{j(\textup{mod}\ n)}) \ \ \
\textup{for}\ \ \ k=0,\dots, 2n-1
\end{equation}
in the space of polynomials of degree $2n-1$ the {\em Newton basis\/}
defined by the $n$-tuple $\{x_k\}_{k=0}^{n-1}$.  The Newton map and
the Newton basis, and their analogues in dimension $N$, are useful
tools for perturbing trajectories and estimating the measure
$\mu_n(C,\dt,\rho, M_{1+\rho})$
of ``bad'' parameter values $\veps \in HB^N(\myr)$.

\section{Discretization method}

The fundamental problem with using the Newton basis to estimate the
measure of ``bad'' parameter values, those for which there is an
almost periodic point of period $n$ that is not sufficiently
hyperbolic, is that the Newton basis depends on the almost periodic
$n$-tuple $\{x_k\}_{k=0}^{n-1}$.  For a particular ``bad'' parameter
value we can fix this $n$-tuple and the corresponding Newton basis,
then estimate the measure of the set of parameters for which a nearby
$n$-tuple is both almost periodic and not sufficiently hyperbolic.
But there is a continuum of possible $n$-tuples, so how can we
account for all of the possible cells of ``bad'' parameter values
$\veps$ within our parameter brick $HB^N(\myr)$?  At the beginning
of Section~\ref{Newtonstart}, we indicated that for a particular
initial condition
$x_0$ we would obtain an estimate on the measure of ``bad'' parameter
values corresponding to an almost periodic point in a neighborhood of
$x_0$, and thus need only to consider a discrete set of initial
conditions.  But as the parameter vector $\veps$ varies over
$HB^N(\myr)$, there is (for large $n$ at least) a wide range of
possible length-$n$ trajectories starting from a particular $x_0$, so
there is no hope of using a single Newton basis to estimate even the
measure of ``bad'' parameter values corresponding to a single $x_0$.

The solution to this problem is to discretize the entire space of
$n$-tuples $\{x_k\}_{k=0}^{n-1}$, considering only those that lie on a
particular grid.  If we choose the grid spacing small enough, then
every almost periodic orbit of period $n$ that is not sufficiently
hyperbolic will have a corresponding pseudotrajectory of length $n$ on
the grid that also has small hyperbolicity.  In this way we reduce the
problem to bounding the measure of a set of ``bad'' parameter values
corresponding to a particular length-$n$ pseudotrajectory, and then
summing the bounds over all possible length-$n$ pseudotrajectories on
the chosen grid.

Returning to the general case of $C^{1+\rho}$ diffeomorphisms on
$B^N$, where we assume $0 < \rho \leq 1$, the grid spacing we use at
stage $n$ is $\tgmn(C,\dt,\rho) \!= N^{-1} (M_{1+\rho}^{-2n}
\gmncdt)^{1/\rho}$, where $M_{1+\rho} > 1$ is a bound on the $C^{1+\rho}$
norm of the diffeomorphisms $f_\veps$ corresponding to parameters
$\veps \in HB^N(\myr)$.  This ensures that when rounded off to the
nearest grid points $\{x_k\}_{k=0}^{n-1}$, an almost periodic orbit of
length $n$ becomes an $N(M_{1+\rho} +1)\tgmn(C,\dt,\rho)$-pseudotrajectory,
meaning that $|f_\veps(x_j) -x_{j+1}| \leq N(M_{1+\rho} + 1)
\tgmn(C,\dt,\rho)$ for $j = 0, 1, \dots,
n-2$.  It also ensures that when rounding, the derivative $df_\veps$
changes by at most $M_{1+\rho}^{1-2n} \gmncdt$, which in turn implies
that the change in hyperbolicity over all $n$ points is small compared
with $\gmncdt$.  (Recall that $\gmncdt$ is our tolerance for
hyperbolicity at stage $n$.)
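
The pseudotrajectory bound can be seen via the triangle inequality:
if $y_0,\dots,y_{n-1}$ is the original orbit and each $x_j$ is a grid
point with rounding error bounded by, say, $|x_j-y_j|\leq
N\tgmn(C,\dt,\rho)$, then
\[
|f_\veps(x_j)-x_{j+1}|
\leq|f_\veps(x_j)-f_\veps(y_j)|+|y_{j+1}-x_{j+1}|
\leq (M_{1+\rho}+1)\,N\tgmn(C,\dt,\rho),
\]
since $M_{1+\rho}$ bounds the Lipschitz constant of $f_\veps$.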

Roughly speaking, in the case $N \!= \!1$ our estimate on the measure
of ``bad'' parameter values for a particular $n$-tuple
$\{x_k\}_{k=0}^{n-1}$ is then proportional to
$\left(\tgmn(C,\dt,\rho)\right)^n \cdot \gmncdt$,
whereas the number of possible $n$-tuples is proportional to
$\left(\tgmn(C,\dt,\rho)\right)^{-n}$, making our bound
$\mu_n(C,\dt,\rho,\myr,M_{1+\rho})$
on the total measure of ``bad'' parameter values at stage $n$
proportional to $\gmncdt$.  The remaining problem then is to show that
for maps satisfying the Inductive Hypothesis of order $n-1$, we can
bound the proportionality factor in such a way that
$\mu_n(C,\dt,\rho,\myr,M_{1+\rho})$ meets the conditions prescribed
in Section~5 of the previous article, namely that it is summable over
$n$ and that the sum
approaches $0$ as $C \to \infty$.  (Notice that the sequence $\gmncdt$
meets these conditions.)  The proportionality factor depends on the
product of distances described in Section~\ref{Newtonstart}, and in
\cite{K4} we proceed as follows.  At the $n$th stage we split 
length-$n$ trajectories of diffeomorphisms satisfying the Inductive
Hypothesis into three groups.  One group consists of what we call
``simple'' trajectories for which the product of distances is not
too small.  For nonsimple trajectories we show that either
the trajectory is sufficiently hyperbolic by the Inductive Hypothesis
(second group) or the trajectory returns very close to itself before
the $n$th iteration and is simple (not recurrent) up to this point
(third group).  In the latter case, perturbation
by Newton polynomials of order lower than $n$ at the point of
a close return can control the behavior of that trajectory up to
length $n$.

Notice that in the preceding paragraph, even if the product of
distances is not small, the proportionality factor in our estimate on
the measure of ``bad'' parameter values for a given $n$-tuple
$\{x_k\}_{k=0}^{n-1}$ is large because the parameter
measure is normalized to be $1$ on a brick $HB^1(\myr)$ whose sides
decay rapidly; the normalization increases the measure by a factor of
$r_0 r_1 \cdots r_{n-1} r_{2n-1}$.  However, we are able to
show that when considering only diffeomorphisms $f_\veps$ with $\veps
\in HB^1(\myr)$, the number of $n$-tuples we must regard as possible
pseudotrajectories of $f_\veps$ is reduced by the factor $r_0 r_1
\cdots r_{n-2}$.  Due to our definition of an admissible sequence $\myr$,
the remaining factor $r_{n-1} r_{2n-1}$ does not affect the necessary
summability properties for the bounds
$\mu_n(C,\dt,\rho,\myr,M_{1+\rho})$.  There is an additional
distortion of our estimates that is exponential in $n$, due to the fact that
an image of a finite-dimensional brick of ${\myeps}$-parameters
under the Newton map is a parallelepiped of $u$-parameters, but
no longer a brick.  This exponential factor is also not problematic,
because our bound $\mu_n(C,\dt,\rho,\myr,M_{1+\rho})$ decays
superexponentially in $n$.

\section{Conclusion}

In this and the previous article we have only been able to outline
some of the fundamental tools that are needed for the proof of the
main result, which will appear in \cite{K5} and \cite{KH}.  Here we
list some of the major difficulties appearing in the proof.

$\bullet$ We must handle almost periodic trajectories of length $n$
that have a close return after $k < n$ iterations.

$\bullet$ In dimension $N > 1$, the Newton interpolation
polynomials involve products of differences of coordinates
of points, which may be small even though the points themselves
are not close.  Thus we must be careful about how we construct
the Newton basis for a given $n$-tuple of points
$\{x_0,\dots,x_{n-1}\} \subset B^N$ and how to incorporate this into
the general framework of the space of Newton interpolation polynomials.

$\bullet$ At the $n$th stage of the induction we need to deal with the
$(2n)^N$-dimensional space $W_{\leq 2n-1,N}$ of polynomials of degree
$2n-1$ in $N$ variables and handle the distortion properties of
the Newton map. In a space of such a large dimension, even the ratio of
volumes of the unit ball and the unit cube is of order
$(2n)^{N(2n)^N}$ \cite{San}.

In \cite{K5}, \cite{KH}, based on \cite{K4}, we first prove the main
$1$-dimensional result for the case $N = \rho = 1$, discussed in
Section~4 of the previous article and Sections~2 and 3 of this
article, and then, using additional tools and ideas, complete the proof
in the general case.

\begin{thebibliography}{GST2}

\bibitem[GG]{GG}
M. Golubitsky and V. Guillemin,
\textit{ Stable mappings and their singularities},
Springer-Verlag, 1973.
\MR{49:6269}
\bibitem[GY]{GY}
A.\ Grigoriev and S.\ Yakovenko,
\textit{Topology of generic multijet preimages and blow-up via
Newton interpolation},
J. Differential Equations \textbf{150} (1998), 349--362.
\MR{99m:58028}
\bibitem[K4]{K4}
V. Yu.\ Kaloshin,
Ph.D. thesis, Princeton University, 2001.
\bibitem[K5]{K5}
V.\ Kaloshin,
\textit{Stretched exponential bound on growth of the number
of periodic points for prevalent diffeomorphisms, part 1},
in preparation.
\bibitem[KH]{KH}
V.\ Kaloshin and B.\ Hunt,
\textit{Stretched exponential bound on growth of the number
of periodic points for prevalent diffeomorphisms, part 2},
in preparation.
\bibitem[San]{San}
L. Santal\'o, \textit{Integral geometry and geometric probability},
Encycl. of Math. and its Appl., Vol. 1, Addison-Wesley, Reading,
MA--London--Amsterdam, 1976.
\MR{55:6340}
\end{thebibliography}

\end{document}