Formulation for the new FMM

Now we consider a source point $\vec{y}$ located at $(y_1,y_2,y_3)$ and a target point $\vec{x}$ at $(x_1,x_2,x_3)$, and denote the cell containing the source point by $C_s$ and the cell containing the target point by $C_t$. Here we introduce the following integral representation:
 
\begin{displaymath}
\frac{1}{\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2}}
= \frac{1}{2\pi} \int_{0}^{\infty} e^{-\lambda (x_3-y_3)}
\int_{0}^{2\pi} e^{ i\lambda\left((x_1-y_1)\cos\alpha + (x_2-y_2)\sin\alpha\right)}\, d\alpha\, d\lambda
\qquad (23)
\end{displaymath}

The integral in (23) converges under the assumption that the inequality $x_3 > y_3$ holds. This is why we restrict the discussion, for the present, to the case where $C_t$ is located in the $+x_3$ direction of $C_s$. Suppose that each of the cells $C_s$ and $C_t$ has a volume of $d^3$. Then the double integral in (23) is evaluated with the following double sum:
 
\begin{displaymath}
\frac{1}{\sqrt{(x_1-y_1)^2+(x_2-y_2)^2+(x_3-y_3)^2}}
= \sum_{k=1}^{s(\varepsilon)} \sum_{j=1}^{M(k)} \frac{\omega_k}{M(k)d}\,
e^{-(\lambda_k/d)(x_3-y_3)}\,
e^{ i(\lambda_k/d)\left((x_1-y_1)\cos\alpha_j(k) + (x_2-y_2)\sin\alpha_j(k)\right)}
+ \varepsilon,
\qquad (x_3 > y_3)
\qquad (24)
\end{displaymath}

where $\alpha_j(k)$ is given by

\begin{displaymath}\alpha_j(k) = \frac{2\pi j}{M(k)},
\end{displaymath}

$\varepsilon$ is the error term, and the numbers $s(\varepsilon)$ and $M(k)$, the Gaussian weights $\omega_k$ and the nodes $\lambda_k$ are given in Yarvin and Rokhlin [23]. One may determine these parameters according to the required accuracy.
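As a quick numerical sanity check of (23), and of the quadrature structure in (24), the following sketch evaluates the two integrals with stand-in rules (Gauss-Laguerre in $\lambda$, the trapezoidal rule in $\alpha$), since the generalised Gaussian quadrature of Yarvin and Rokhlin [23] is not reproduced here; the sample points and node counts are chosen only for illustration.

\begin{verbatim}
# Sanity check of the plane-wave representation (23) of 1/r with stand-in
# quadrature: Gauss-Laguerre nodes in lambda, trapezoidal rule in alpha.
# (Illustrative only; the actual method uses the Yarvin-Rokhlin rules.)
import numpy as np

x = np.array([0.3, -0.2, 1.5])      # target point, with x3 > y3
y = np.array([0.0, 0.0, 0.0])       # source point
dv = x - y

lam, w = np.polynomial.laguerre.laggauss(40)   # int_0^inf e^(-t) f(t) dt
alpha = 2.0 * np.pi * np.arange(64) / 64       # periodic trapezoidal nodes

# inner integral over alpha for each lambda node (mean times 2*pi)
inner = np.array([2.0 * np.pi * np.mean(
            np.exp(1j * t * (dv[0] * np.cos(alpha) + dv[1] * np.sin(alpha))))
            for t in lam])

# outer integral over lambda; e^(+t) compensates the Gauss-Laguerre weight
approx = np.real(np.sum(w * np.exp(lam) * np.exp(-lam * dv[2]) * inner)) / (2.0 * np.pi)

print(approx, 1.0 / np.linalg.norm(dv))        # the two values should agree
\end{verbatim}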

Noting the following formula:

\begin{displaymath}
S_{n,m}(\overrightarrow{O\vec{x}}) = (-1)^{n}
\left(\frac{\partial}{\partial x_3}\right)^{n-\vert m\vert}
\left(\frac{\partial}{\partial x_1} \pm i\,\frac{\partial}{\partial x_2}\right)^{\vert m\vert}
\frac{1}{\vert\overrightarrow{O\vec{x}}\vert}
\end{displaymath}

(the sign in the last operator being chosen according to the sign of $m$)


and (24), one can evaluate the integral in (5) in the following manner:

\begin{eqnarray*}
\lefteqn{\mbox{p.f.}\int_{S_y} \frac{\partial}{\partial x_k}
T_{ij}(\vec{x},\vec{y})\,\varphi_j(\vec{y})\, dS_y} \\
&=& \sum_{p=1}^{s(\varepsilon)} \sum_{q=1}^{M(p)}
\left( {\cal F}_{kij}(\overrightarrow{O\vec{x}})\, W^1_j(p,q)
+ {\cal G}_{ki}(\overrightarrow{O\vec{x}})\, W^2(p,q) \right) \\
& & \qquad \times\,
e^{-(\lambda_p/d)(\overrightarrow{O\vec{x}})_3
+ i(\lambda_p/d)((\overrightarrow{O\vec{x}})_1 \cos \alpha_q(p)
+ (\overrightarrow{O\vec{x}})_2 \sin \alpha_q(p))}
\end{eqnarray*}


where $W^1_j(p,q)$ and $W^2(p,q)$ are the coefficients of the exponential expansion at $O$, defined as
  
\begin{eqnarray*}
W^1_j(p,q) &=& \frac{\omega_p}{M(p)d}\sum_{m=-\infty}^{\infty}(-i)^m e^{-i m \alpha_q(p)}
\sum_{n=\vert m\vert}^{\infty} (\lambda_p/d)^n M^1_{j,n,m}(O), \qquad (25) \\
W^2(p,q) &=& \frac{\omega_p}{M(p)d}\sum_{m=-\infty}^{\infty}(-i)^m e^{-i m \alpha_q(p)}
\sum_{n=\vert m\vert}^{\infty} (\lambda_p/d)^n M^2_{n,m}(O), \qquad (26)
\end{eqnarray*}

and ${\cal F}_{kij}$, ${\cal G}_{ki}$ are differential operators with respect to $\vec{x}$, written in terms of the derivatives $\partial/\partial x_k$, $\partial/\partial x_i$ and the elastic constants.
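The conversion (25), (26) can be sketched in code as follows; the infinite sums are truncated at an order $p_t$, and the array layout for the moments $M^1_{j,n,m}$, $M^2_{n,m}$ as well as the quadrature data ($\omega_k$, $\lambda_k$, $M(k)$) are assumptions made only for illustration.

\begin{verbatim}
# Sketch of the moment-to-exponential conversion (25)-(26), truncated at p_t.
# M1 has shape (3, p_t+1, 2*p_t+1) with M1[j, n, m+p_t] = M^1_{j,n,m}(O);
# M2 has shape (p_t+1, 2*p_t+1).  omega, lam, Mk hold the quadrature data.
import numpy as np

def m2x(M1, M2, omega, lam, Mk, d, p_t):
    W1, W2 = [], []
    for k in range(len(lam)):
        alpha = 2.0 * np.pi * (np.arange(Mk[k]) + 1) / Mk[k]   # alpha_q(k)
        w1 = np.zeros((3, Mk[k]), dtype=complex)
        w2 = np.zeros(Mk[k], dtype=complex)
        for m in range(-p_t, p_t + 1):
            n = np.arange(abs(m), p_t + 1)
            r1 = (lam[k] / d) ** n @ M1[:, abs(m):, m + p_t].T   # sum over n
            r2 = (lam[k] / d) ** n @ M2[abs(m):, m + p_t]
            phase = (-1j) ** m * np.exp(-1j * m * alpha)
            w1 += np.outer(r1, phase)
            w2 += r2 * phase
        W1.append(omega[k] / (Mk[k] * d) * w1)
        W2.append(omega[k] / (Mk[k] * d) * w2)
    return W1, W2          # W1[k]: (3, M(k)) array, W2[k]: (M(k),) array
\end{verbatim}

Roughly $p^2$ exponential coefficients are produced from sums of length $O(p)$, which is the $O(p^3)$ count quoted at the end of this section.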


These formulae (25) and (26) convert the multipole moments into the coefficients of the exponential expansion. The coefficients of the exponential expansion are translated according to the following formulae when the centre of the exponential expansion is shifted from $O$ to $\vec{x}_0$:
  
\begin{eqnarray*}
V^1_j(p,q) &=& W^1_j(p,q)\,
e^{-(\lambda_p/d)(\overrightarrow{O\vec{x}_0})_3
+ i(\lambda_p/d)((\overrightarrow{O\vec{x}_0})_1 \cos \alpha_q(p)
+ (\overrightarrow{O\vec{x}_0})_2 \sin \alpha_q(p))}, \qquad (27) \\
V^2(p,q) &=& \left(W^2(p,q)-(\overrightarrow{O\vec{x}_0})_k\, W^1_k(p,q)\right)
e^{-(\lambda_p/d)(\overrightarrow{O\vec{x}_0})_3
+ i(\lambda_p/d)((\overrightarrow{O\vec{x}_0})_1 \cos \alpha_q(p)
+ (\overrightarrow{O\vec{x}_0})_2 \sin \alpha_q(p))}, \qquad (28)
\end{eqnarray*}

where $V^1_j(p,q)$ and $V^2(p,q)$ are the coefficients of the exponential expansion at $\vec{x}_0$, and $(\vec{x})_i$ stands for the $i$th component of the vector $\vec{x}$.
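The translation (27), (28) is diagonal: each exponential coefficient is multiplied by a single complex factor, with the additional $-(\overrightarrow{O\vec{x}_0})_k W^1_k$ correction in (28). A minimal sketch, reusing the (assumed) data layout of the previous sketch:

\begin{verbatim}
# Sketch of the translation (27)-(28) of exponential coefficients when the
# centre is shifted from O to x0; z is the vector from O to x0.
import numpy as np

def x2x(W1, W2, lam, Mk, d, z):
    V1, V2 = [], []
    for k in range(len(lam)):
        alpha = 2.0 * np.pi * (np.arange(Mk[k]) + 1) / Mk[k]
        shift = np.exp(-(lam[k] / d) * z[2]
                       + 1j * (lam[k] / d) * (z[0] * np.cos(alpha)
                                              + z[1] * np.sin(alpha)))
        V1.append(W1[k] * shift)                    # (27)
        V2.append((W2[k] - z @ W1[k]) * shift)      # (28)
    return V1, V2
\end{verbatim}

Since only one multiplication per coefficient is involved, this step costs $O(p^2)$, as stated at the end of the section.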

We next need to convert the coefficients of the exponential expansion into the coefficients of the local expansion. Noting that the integral in (13) is evaluated with $V^1_j(p,q)$ and $V^2(p,q)$ as follows:

\begin{eqnarray*}
\lefteqn{\mbox{p.f.}\int_{S_y} \frac{\partial}{\partial x_k}
T_{ij}(\vec{x},\vec{y})\,\varphi_j(\vec{y})\, dS_y} \\
&=& \sum_{p=1}^{s(\varepsilon)} \sum_{q=1}^{M(p)}
\left( {\cal F}_{kij}(\overrightarrow{\vec{x}_0\vec{x}})\, V^1_j(p,q)
+ {\cal G}_{ki}(\overrightarrow{\vec{x}_0\vec{x}})\, V^2(p,q) \right) \\
& & \qquad \times\,
e^{-(\lambda_p/d)(\overrightarrow{\vec{x}_0\vec{x}})_3
+ i(\lambda_p/d)((\overrightarrow{\vec{x}_0\vec{x}})_1 \cos \alpha_q(p)
+ (\overrightarrow{\vec{x}_0\vec{x}})_2 \sin \alpha_q(p))}
\end{eqnarray*}


and using (18) and the following formula:

\begin{eqnarray*}
\lefteqn{\sum_{m=-n}^{n} (-i)^m e^{- i m \alpha_j(k)}\, R_{n,m}(\overrightarrow{O\vec{x}})} \\
&=& \frac{\left((\overrightarrow{O\vec{x}})_3
- i\,((\overrightarrow{O\vec{x}})_1 \cos \alpha_j(k)
+ (\overrightarrow{O\vec{x}})_2 \sin \alpha_j(k))\right)^n}{n!}
\end{eqnarray*}


one obtains the following formulae, which convert the coefficients of the exponential expansion into the coefficients of the local expansion:
  
\begin{eqnarray*}
L^1_{j,n,m}(\vec{x}_0) &=& \sum_{p=1}^{s(\varepsilon)}\sum_{q=1}^{M(p)} V^1_j(p,q)\,
(-i)^m (-\lambda_p/d)^n e^{-i m \alpha_q(p)}, \qquad (29) \\
L^2_{n,m}(\vec{x}_0) &=& \sum_{p=1}^{s(\varepsilon)}\sum_{q=1}^{M(p)} V^2(p,q)\,
(-i)^m (-\lambda_p/d)^n e^{-i m \alpha_q(p)} \qquad (30)
\end{eqnarray*}
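A corresponding sketch of (29) and (30), again truncating the local expansion at the order $p_t$ and reusing the assumed data layout of the previous sketches:

\begin{verbatim}
# Sketch of the exponential-to-local conversion (29)-(30), truncated at p_t.
import numpy as np

def x2l(V1, V2, lam, Mk, d, p_t):
    L1 = np.zeros((3, p_t + 1, 2 * p_t + 1), dtype=complex)  # L1[j, n, m+p_t]
    L2 = np.zeros((p_t + 1, 2 * p_t + 1), dtype=complex)
    for k in range(len(lam)):
        alpha = 2.0 * np.pi * (np.arange(Mk[k]) + 1) / Mk[k]
        for m in range(-p_t, p_t + 1):
            phase = (-1j) ** m * np.exp(-1j * m * alpha)
            a1 = V1[k] @ phase            # sum over q of V^1_j(k,q) e^{-im alpha_q}
            a2 = V2[k] @ phase
            for n in range(abs(m), p_t + 1):
                L1[:, n, m + p_t] += (-lam[k] / d) ** n * a1
                L2[n, m + p_t] += (-lam[k] / d) ** n * a2
    return L1, L2
\end{verbatim}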

The procedures in (25)-(30) correspond to the M2L translation expressed by (19) and (20) in the original FMM. Suppose that one truncates the infinite series in (10) after $p$ terms. Then the computational cost for (19) and (20) is obviously $O(p^4)$. Also, if one assumes

\begin{eqnarray*}\sum_{k=1}^{s(\varepsilon)} M(k) \approx p^2 ,\quad s(\varepsilon) \approx p,
\end{eqnarray*}


the computational costs for (25) and (26), (27) and (28), and (29) and (30) are $O(p^3)$, $O(p^2)$ and $O(p^3)$, respectively. Hence, the total cost for the procedures in (25)-(30) is $O(p^3)$. This is why the new FMM is faster than the original FMM.
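To make the comparison concrete, the leading counts $p^4$ (for the direct M2L translation (19), (20)) and $p^3 + p^2 + p^3$ (for the route (25)-(30)) can be tabulated for a few truncation orders; constant factors are ignored.

\begin{verbatim}
# Leading operation counts only; constant factors are ignored.
for p in (5, 10, 20):
    print(p, p**4, p**3 + p**2 + p**3)
\end{verbatim}

For $p = 10$, for example, this gives $10000$ against $2100$.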