%%%%%%%%%%%%%%%%%%%%%%%%CUT HERE%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This is proc2e.tex, an example file for use with the SIAM LaTeX2e
% Proceedings Series macros. Please take the time to read the following
% comments, as they describe how to use these macros. This file can be
% composed and printed out for use as sample output.
%
% Any comments or questions regarding these macros should be directed to:
%
%                 Corey Gray
%                 SIAM
%                 3600 University City Science Center
%                 Philadelphia, PA 19104-2688
%                 USA
%                 Telephone: (215) 382-9800
%                 Fax: (215) 386-7999
%                 e-mail: gray@siam.org
%
% This file is to be used as an example for style only. It should not be
% read for content.
%%%%%%%%%%%%%%% PLEASE NOTE THE FOLLOWING STYLE RESTRICTIONS %%%%%%%%%%%%%%%
%%
%% 1. There are no new tags. Existing LaTeX tags have been formatted to
%%    match the Proceedings series style.
%%
%% 2. You must use \cite in the text to mark your reference citations and
%%    \bibitem in the listing of references at the end of your chapter.
%%    See the examples in the following file. The file siamproc.bst has
%%    been included for use with BibTeX.
%%
%% 3. This macro is set up for three levels of headings (\section,
%%    \subsection, and \subsubsection). The macro will automatically
%%    number the headings for you.
%%
%% 4. Theorems, Lemmas, Definitions, etc. are to be double-numbered,
%%    indicating the section and the occurrence of that element within
%%    that section. (For example, the first theorem in the second section
%%    would be numbered 2.1.) The macro will automatically do the
%%    numbering for you.
%%
%% 5. Proofs are handled with the commands \begin{proof}\end{proof}.
%%    If you wish to use an end-of-proof box, use \qed preceding
%%    \end{proof}. The example uses one. It is not required.
%%
%% 6. Figures, equations, and tables must be single-numbered. All equation
%%    numbers are to be on the left. Figure captions should be placed
%%    under the figures they pertain to. Table captions should be placed
%%    above the tables. Use existing LaTeX tags for these elements.
%%    Numbering of these elements will be done automatically.
%%
%% 7. Grant information and author affiliations.
%%    This information is included in the file with the two commands,
%%    \thanks and \footnotemark[]. (See example.) The \thanks command
%%    produces a footnote for the title or author, and places the
%%    appropriate footnote symbol with the title or author and at the
%%    bottom of the page. The \footnotemark[] command allows the use of
%%    duplicate footnote symbols. This macro follows the normal LaTeX
%%    order of footnote symbols. Below is a list of these symbols and
%%    their corresponding footnotemark:
%%
%%       asterisk                \footnotemark[1]
%%       single-dagger           \footnotemark[2]
%%       double-dagger           \footnotemark[3]
%%       section sign            \footnotemark[4]
%%       paragraph               \footnotemark[5]
%%       parallel                \footnotemark[6]
%%       double asterisk         \footnotemark[7]
%%       double single-dagger    \footnotemark[8]
%%       double double-dagger    \footnotemark[9]
%%
%%    The following general rules for grants and affiliations apply:
%%    a) If there is a single grant for the paper, then the grant
%%       information should be footnoted to the title.
%%    b) If there is more than one grant, include the grant information
%%       with each author's affiliation.
%%    c) If there are different grants for the paper but the authors share
%%       the same affiliation, footnote the grant information to the
%%       title. For example: The work of the first author was supported
%%       by xyz. The work of the second author was supported by abc.
%%       And so on.
%%    d) For authors sharing the same affiliation, use \thanks for the
%%       first author with that affiliation and the appropriate
%%       \footnotemark[] (from the list above) for all subsequent authors
%%       with that affiliation.
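%%
%%       For example (an illustrative sketch with hypothetical names, not
%%       part of the macros themselves): if the title carries a \thanks
%%       (asterisk), the first author's \thanks produces the
%%       single-dagger, and a coauthor with the same affiliation reuses
%%       that symbol via \footnotemark[2]:
%%
%%          \title{Sample Title\thanks{Grant information.}}
%%          \author{A. Author\thanks{Department of Mathematics,
%%          Some University, Some City.}
%%          \and B. Author\footnotemark[2]}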
%%
%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\documentclass[twoside,leqno,11pt]{article}
\usepackage{proc2e}

\begin{document}
\cleardoublepage
\pagestyle{headings}

\title{SIAM Proceedings Series Macros for Use With LaTeX\thanks{Any
information regarding grants should be placed here.}}

\author{J. Corey Gray\thanks{Production Manager, Society for Industrial and
Applied Mathematics, Philadelphia, PA.}
\and Tricia Manning\thanks{Publications Specialist, Society for Industrial
and Applied Mathematics, Philadelphia, PA.}
\and Vickie Kearn\thanks{Publisher, Society for Industrial and Applied
Mathematics, Philadelphia, PA.}\\
\and Nancy Abbott\thanks{Design Supervisor, Society for Industrial and
Applied Mathematics, Philadelphia, PA.}
\and Sue Ciambrano\thanks{Acquisitions Editor, Society for Industrial and
Applied Mathematics, Philadelphia, PA.}
\and Paul Duggan\thanks{Composition Specialist, Society for Industrial and
Applied Mathematics, Philadelphia, PA.}
\and Robbi Anne Albert\thanks{Production Assistant, Society for Industrial
and Applied Mathematics, Philadelphia, PA.}
\and Jean Anderson\thanks{Composition Coordinator, Society for Industrial
and Applied Mathematics, Philadelphia, PA.}}
\date{}

\maketitle
\pagenumbering{arabic}

\begin{abstract}
An equivalence is shown between realizability of input/output (i/o)
operators by rational control systems and high-order algebraic differential
equations for i/o pairs. This generalizes, to nonlinear systems, the
equivalence between autoregressive representations and finite-dimensional
linear realizability.
\end{abstract}

\section{Problem Specification}
In this paper, we consider the solution of the $N \times N$ linear system
\begin{equation} \label{e1.1}
A x = b
\end{equation}
where $A$ is large, sparse, symmetric, and positive definite. We consider
the direct solution of (\ref{e1.1}) by means of general sparse Gaussian
elimination. In such a procedure, we find a permutation matrix $P$ and
compute the decomposition
\[
P A P^{t} = L D L^{t},
\]
where $L$ is unit lower triangular and $D$ is diagonal.

\section{Design Considerations}
Several good ordering algorithms (nested dissection and minimum degree) are
available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}. Since our
interest here does not focus directly on the ordering, we assume for
convenience that $P=I$, or that $A$ has been preordered to reflect an
appropriate choice of $P$.

Our purpose here is to examine the nonnumerical complexity of the sparse
elimination algorithm given in \cite{BANKSMITH}. As was shown there, a
general sparse elimination scheme based on the bordering algorithm requires
less storage for pointers and row/column indices than more traditional
implementations of general sparse elimination. This is accomplished by
exploiting the m-tree, a particular spanning tree for the graph of the
filled-in matrix.

\begin{theorem}
The method was extended to three dimensions. For the standard multigrid
coarsening (in which, for a given grid, the next coarser grid has $1/8$ as
many points), anisotropic problems require plane relaxation to obtain a
good smoothing factor.
\end{theorem}

Several good ordering algorithms (nested dissection and minimum degree) are
available for computing $P$ \cite{GEORGELIU}, \cite{ROSE72}. Since our
interest here does not focus directly on the ordering, we assume for
convenience that $P=I$, or that $A$ has been preordered to reflect an
appropriate choice of $P$.
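For illustration (a small example chosen only to fix ideas, not taken from
\cite{BANKSMITH}), with $P=I$ the decomposition of \S 1 for a $2 \times 2$
symmetric positive definite matrix reads
\[
A = \left[ \begin{array}{cc} 4 & 2 \\ 2 & 3 \end{array} \right]
  = \left[ \begin{array}{cc} 1 & 0 \\ \frac{1}{2} & 1 \end{array} \right]
    \left[ \begin{array}{cc} 4 & 0 \\ 0 & 2 \end{array} \right]
    \left[ \begin{array}{cc} 1 & \frac{1}{2} \\ 0 & 1 \end{array} \right]
  = L D L^{t},
\]
with $L$ unit lower triangular and $D$ diagonal, as required.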
\begin{proof}
In this paper we consider two methods. The first method is basically the
method considered, with two differences: first, we perform plane relaxation
by a two-dimensional multigrid method, and second, we use a slightly
different choice of interpolation operator, which improves performance for
nearly singular problems. In the second method, coarsening is done by
successively coarsening in each of the three independent variables and then
ignoring the intermediate grids; this artifice simplifies coding
considerably. \qed
\end{proof}

Our purpose here is to examine the nonnumerical complexity of the sparse
elimination algorithm given in \cite{BANKSMITH}. As was shown there, a
general sparse elimination scheme based on the bordering algorithm requires
less storage for pointers and row/column indices than more traditional
implementations of general sparse elimination. This is accomplished by
exploiting the m-tree, a particular spanning tree for the graph of the
filled-in matrix.

\begin{Definition}{\rm
We describe the two methods in \S 1.2. In \S 1.3 we discuss some remaining
details.}
\end{Definition}

\begin{figure}
\vspace*{24pc}
\caption{This is figure 1.}
\end{figure}

Our purpose here is to examine the nonnumerical complexity of the sparse
elimination algorithm given in \cite{BANKSMITH}. As was shown there, a
general sparse elimination scheme based on the bordering algorithm requires
less storage for pointers and row/column indices than more traditional
implementations of general sparse elimination.

\begin{lemma}
We discuss first the choice for $I_{k-1}^k$, which is a generalization. We
assume that $G^{k-1}$ is obtained from $G^k$ by standard coarsening; that
is, if $G^k$ is a tensor product grid $G_{x}^k \times G_{y}^k \times
G_{z}^k$, then $G^{k-1}=G_{x}^{k-1} \times G_{y}^{k-1} \times G_{z}^{k-1}$,
where $G_{x}^{k-1}$ is obtained by deleting every other grid point of
$G_x^k$, and similarly for $G_{y}^k$ and $G_{z}^k$.
\end{lemma}

This is accomplished by exploiting the m-tree, a particular spanning tree
for the graph of the filled-in matrix. To our knowledge, the m-tree has not
previously been applied in this fashion to the numerical factorization, but
it has been used, directly or indirectly, in several optimal-order
algorithms for computing the fill-in during the symbolic factorization
phase \cite{EISENSTAT}--\cite{LIU2}, \cite{ROSE76}, \cite{SCHREIBER}.

\subsection{Robustness}
We do not attempt to present an overview here, but rather attempt to focus
on those results that are relevant to our particular algorithm. This
section assumes prior knowledge of the role of graph theory in sparse
Gaussian elimination; surveys of this role are available in \cite{ROSE72}
and \cite{GEORGELIU}. More general discussions of elimination trees are
given in \cite{LAW}--\cite{LIU2}, \cite{SCHREIBER}. Thus, at the $k$th
stage, the bordering algorithm consists of solving the lower triangular
system
\begin{equation} \label{1.2}
L_{k-1}v = c
\end{equation}
and setting
\begin{eqnarray}
\ell &=& D^{-1}_{k-1}v , \\
\delta &=& \alpha - \ell^{t} v .
\end{eqnarray}
A short sketch of how these formulas arise from the bordered factorization
is given at the end of this section.

\subsubsection{Versatility.}
We do not attempt to present an overview here, but rather attempt to focus
on those results that are relevant to our particular algorithm.
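For illustration, the bordering formulas above can be recovered from the
bordered factorization
\[
\left[ \begin{array}{cc} A_{k-1} & c \\ c^{t} & \alpha \end{array} \right]
= \left[ \begin{array}{cc} L_{k-1} & 0 \\ \ell^{t} & 1 \end{array} \right]
  \left[ \begin{array}{cc} D_{k-1} & 0 \\ 0 & \delta \end{array} \right]
  \left[ \begin{array}{cc} L^{t}_{k-1} & \ell \\ 0 & 1 \end{array} \right],
\]
a standard identity that is not specific to the implementation in
\cite{BANKSMITH}. Equating blocks gives $c = L_{k-1} D_{k-1} \ell$ and
$\alpha = \ell^{t} D_{k-1} \ell + \delta$, so the solution $v$ of
(\ref{1.2}) satisfies $v = D_{k-1} \ell$, from which
$\ell = D^{-1}_{k-1} v$ and $\delta = \alpha - \ell^{t} v$ follow.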
\section{Conclusions}
The special structure of this problem allows us to make exact estimates of
the complexity. For the old approach, we show that the complexity of the
intersection problem is $O(n^{3})$, the same as the complexity of the
numerical computations \cite{GEORGELIU}, \cite{ROSEWHITTEN}. For the new
approach, the complexity of the second part is reduced to
$O(n^{2} (\log n)^{2})$.

\begin{thebibliography}{99}

\bibitem{BANKSMITH} R.~E. Bank and R.~K. Smith, {\em General sparse
elimination requires no permanent integer storage}, SIAM J. Sci. Stat.
Comput., 8 (1987), pp.~574--584.

\bibitem{EISENSTAT} S.~C. Eisenstat, M.~C. Gursky, M.~Schultz, and
A.~Sherman, {\em Algorithms and data structures for sparse symmetric
Gaussian elimination}, SIAM J. Sci. Stat. Comput., 2 (1982), pp.~225--237.

\bibitem{GEORGELIU} A.~George and J.~Liu, {\em Computer Solution of Large
Sparse Positive Definite Systems}, Prentice Hall, Englewood Cliffs, NJ,
1981.

\bibitem{LAW} K.~H. Law and S.~J. Fenves, {\em A node addition model for
symbolic factorization}, ACM TOMS, 12 (1986), pp.~37--50.

\bibitem{LIU} J.~W.~H. Liu, {\em A compact row storage scheme for Cholesky
factors using elimination trees}, ACM TOMS, 12 (1986), pp.~127--148.

\bibitem{LIU2} \sameauthor, {\em The role of elimination trees in sparse
factorization}, Tech. Report CS-87-12, Department of Computer Science,
York University, Ontario, Canada, 1987.

\bibitem{ROSE72} D.~J. Rose, {\em A graph theoretic study of the numeric
solution of sparse positive definite systems}, in Graph Theory and
Computing, Academic Press, New York, 1972.

\bibitem{ROSE76} D.~J. Rose, R.~E. Tarjan, and G.~S. Lueker, {\em
Algorithmic aspects of vertex elimination on graphs}, SIAM J. Comput., 5
(1976), pp.~226--283.

\bibitem{ROSEWHITTEN} D.~J. Rose and G.~F. Whitten, {\em A recursive
analysis of dissection strategies}, in Sparse Matrix Computations, Academic
Press, New York, 1976.

\bibitem{SCHREIBER} R.~Schreiber, {\em A new implementation of sparse
Gaussian elimination}, ACM TOMS, 8 (1982), pp.~256--276.

\end{thebibliography}

\end{document}