Diffstat (limited to 'Master/texmf-dist/doc/latex/thuthesis/data/appendix-survey.tex')
-rw-r--r--  Master/texmf-dist/doc/latex/thuthesis/data/appendix-survey.tex | 199
1 file changed, 49 insertions(+), 150 deletions(-)
diff --git a/Master/texmf-dist/doc/latex/thuthesis/data/appendix-survey.tex b/Master/texmf-dist/doc/latex/thuthesis/data/appendix-survey.tex
index 5c09088a383..5dd97a7a084 100644
--- a/Master/texmf-dist/doc/latex/thuthesis/data/appendix-survey.tex
+++ b/Master/texmf-dist/doc/latex/thuthesis/data/appendix-survey.tex
@@ -6,168 +6,67 @@
\title{Title of the Survey}
\maketitle
-Write a survey reading report of at least 5,000 foreign-language printed characters, or produce written translations of 1-2 papers (no fewer than 20,000 foreign-language printed characters).
-
\tableofcontents
-\section{Single-Objective Programming}
-
-It is impossible to cover in a single chapter every concept of mathematical
-programming\cite{tex}. This chapter introduces only the basic concepts and techniques of
-mathematical programming so that readers can gain an understanding of them
-for use throughout the book~\cite{abrahams99tex,salomon1995advanced}.
-
-The general form of single-objective programming (SOP) is written
-as follows,
-\begin{equation*} % If you do not want a formula in the appendix to appear
-                  % in the equation index, use equation* instead
-\left\{\begin{array}{l}
-\max \,\,f(x)\\%[0.1 cm]
-\mbox{subject to:} \\%[0.1 cm]
-\qquad g_j(x)\le 0,\quad j=1,2,\cdots,p
-\end{array}\right.
-\end{equation*}
-which maximizes a real-valued function $f$ of
-$x=(x_1,x_2,\cdots,x_n)$ subject to a set of constraints.
-
-\newtheorem{mpdef}{Definition}[chapter]
-\begin{mpdef}
-In SOP, we call $x$ a decision vector, and
-$x_1,x_2,\cdots,x_n$ decision variables. The function
-$f$ is called the objective function. The set
-\begin{equation*}
-S=\left\{x\in\real^n\bigm|g_j(x)\le 0,\,j=1,2,\cdots,p\right\}
-\end{equation*}
-is called the feasible set. An element $x$ in $S$ is called a
-feasible solution.
-\end{mpdef}
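-
-For instance, with the single constraint $g_1(x)=x_1^2+x_2^2-1\le 0$ in
-$\real^2$, the feasible set $S$ is the closed unit disc, and every point on or
-inside the circle $x_1^2+x_2^2=1$ is a feasible solution.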
-
-\newtheorem{mpdefop}[mpdef]{Definition}
-\begin{mpdefop}
-A feasible solution $x^*$ is called the optimal
-solution of SOP if and only if
-\begin{equation}
-f(x^*)\ge f(x)
-\end{equation}
-for any feasible solution $x$.
-\end{mpdefop}
-
-One of the outstanding contributions to mathematical programming is the set of
-Kuhn-Tucker conditions~\eqref{eq:ktc}. In order to introduce them, let us give
-some definitions. An inequality constraint $g_j(x)\le 0$ is said to be active at
-a point $x^*$ if $g_j(x^*)=0$. A point $x^*$ satisfying $g_j(x^*)\le 0$ is said
-to be regular if the gradient vectors $\nabla g_j(x^*)$ of all active constraints
-are linearly independent.
-
-Let $x^*$ be a regular point of the constraints of SOP and assume that all the
-functions $f(x)$ and $g_j(x),j=1,2,\cdots,p$ are differentiable. If $x^*$ is a
-local optimal solution, then there exist Lagrange multipliers
-$\lambda_j,j=1,2,\cdots,p$ such that the following Kuhn-Tucker conditions hold,
-\begin{equation}
-\label{eq:ktc}
-\left\{\begin{array}{l}
- \nabla f(x^*)-\sum\limits_{j=1}^p\lambda_j\nabla g_j(x^*)=0\\%[0.3cm]
- \lambda_jg_j(x^*)=0,\quad j=1,2,\cdots,p\\%[0.2cm]
- \lambda_j\ge 0,\quad j=1,2,\cdots,p.
-\end{array}\right.
-\end{equation}
-If the objective function $f(x)$ is concave, the constraint functions
-$g_j(x),j=1,2,\cdots,p$ are convex, all of them are differentiable, and the
-point $x^*$ satisfies the Kuhn-Tucker conditions \eqref{eq:ktc}, then it has
-been proved that the point $x^*$ is a global optimal solution of SOP.
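-
-As a simple illustration, consider maximizing the concave function
-$f(x)=-(x_1^2+x_2^2)$ subject to the single convex constraint
-$g_1(x)=1-x_1-x_2\le 0$. The Kuhn-Tucker conditions \eqref{eq:ktc} become
-\begin{equation*}
-\left\{\begin{array}{l}
- -2x_1+\lambda_1=0,\quad -2x_2+\lambda_1=0\\
- \lambda_1(1-x_1-x_2)=0\\
- \lambda_1\ge 0,
-\end{array}\right.
-\end{equation*}
-whose only solution with feasible $x$ is $x^*=(1/2,1/2)$ with $\lambda_1=1$;
-hence $x^*$ is a global optimal solution.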
-
-\subsection{Linear Programming}
-\label{sec:lp}
-
-If the functions $f(x),g_j(x),j=1,2,\cdots,p$ are all linear, then SOP is called
-a \emph{linear programming}.
-
-The feasible set of linear programming is always convex. A point $x$ is called
-an extreme point of a convex set $S$ if $x\in S$ and $x$ cannot be expressed as
-a convex combination of two distinct points in $S$. It has been shown that the
-optimal solution to linear programming corresponds to an extreme point of its
-feasible set provided that the feasible set $S$ is bounded. This fact is the
-basis of the \emph{simplex algorithm}, which was developed by Dantzig as a very
-efficient method for solving linear programming.
-\begin{table}[ht]
- \centering
- \caption{This is an example of a table}
- \label{tab:badtabular2}
- \begin{tabular}[c]{|m{1.5cm}|c|c|c|c|c|c|}\hline
- \multicolumn{2}{|c|}{Network Topology} & \# of nodes &
- \multicolumn{3}{c|}{\# of clients} & Server \\\hline
- GT-ITM & Waxman Transit-Stub & 600 &
- \multirow{2}{2em}{2\%}&
- \multirow{2}{2em}{10\%}&
- \multirow{2}{2em}{50\%}&
- \multirow{2}{1.2in}{Max. Connectivity}\\\cline{1-3}
- \multicolumn{2}{|c|}{Inet-2.1} & 6000 & & & &\\\hline
- \multirow{2}{1.5cm}{Xue} & Rui & Ni &\multicolumn{4}{c|}{\multirow{2}*{\thuthesis}}\\\cline{2-3}
- & \multicolumn{2}{c|}{ABCDEF} &\multicolumn{4}{c|}{} \\\hline
-\end{tabular}
-\end{table}
+Survey reading report on foreign-language materials for undergraduate students.
+
-Roughly speaking, the simplex algorithm examines only the extreme points of the
-feasible set, rather than all feasible points. First, the simplex algorithm
-selects an extreme point as the initial point. Each successive extreme point is
-selected so as to improve the objective function value. The procedure is
-repeated until no improvement in the objective function value can be made. The
-last extreme point examined is the optimal solution.
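-
-For example, consider maximizing $f(x)=3x_1+2x_2$ subject to $x_1+x_2\le 4$,
-$x_1\le 2$, and $x_1,x_2\ge 0$. The feasible set has the extreme points
-$(0,0)$, $(2,0)$, $(2,2)$ and $(0,4)$, with objective values $0$, $6$, $10$
-and $8$ respectively, so the simplex algorithm terminates at the optimal
-extreme point $(2,2)$.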
+\section{Figures and Tables}
-\subsection{Nonlinear Programming}
+\subsection{Figures}
-If at least one of the functions $f(x),g_j(x),j=1,2,\dots,p$ is nonlinear, then
-SOP is called a \emph{nonlinear programming}.
+An example figure in the appendix (Figure~\ref{fig:appendix-survey-figure}).
-A large number of classical optimization methods have been developed to treat
-nonlinear programming with special structure, based on the mathematical theory
-concerned with analyzing the structure of problems.
-\begin{figure}[h]
+\begin{figure}
\centering
- \includegraphics{thu-lib-logo.pdf}
- \caption{This is an example of a figure.}
- \label{fig:badfigure2}
+ \includegraphics[width=0.6\linewidth]{example-image-a.pdf}
+ \caption{Example figure in the appendix}
+ \label{fig:appendix-survey-figure}
\end{figure}
-Now we consider a nonlinear programming problem that consists solely of
-maximizing a real-valued function with domain $\real^n$. Whether derivatives are
-available or not, the usual strategy is first to select a point in $\real^n$ which
-is thought to be the most likely place where the maximum exists. If there is no
-information available on which to base such a selection, a point is chosen at
-random. From this first point an attempt is made to construct a sequence of
-points, each of which yields an improved objective function value over its
-predecessor. The next point to be added to the sequence is chosen by analyzing
-the behavior of the function at the previous points. This construction continues
-until some termination criterion is met. Methods based upon this strategy are
-called \emph{ascent methods}, which can be classified as \emph{direct methods},
-\emph{gradient methods}, and \emph{Hessian methods} according to the information
-they use about the behavior of the objective function $f$. Direct methods require
-only that the function can be evaluated at each point. Gradient methods require
-the evaluation of first derivatives of $f$. Hessian methods require the evaluation
-of second derivatives. In fact, no single method is superior for all
-problems; the efficiency of a method depends very much on the objective
-function.
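-
-A gradient method, for instance, generates the sequence by the update
-\begin{equation*}
-x_{k+1}=x_k+\alpha_k\nabla f(x_k),\quad k=0,1,2,\cdots
-\end{equation*}
-where the step size $\alpha_k>0$ is chosen, for example by a line search, so
-that $f(x_{k+1})>f(x_k)$, and the iteration terminates when
-$\|\nabla f(x_k)\|$ falls below a prescribed tolerance.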
-
-\subsection{Integer Programming}
-
-\emph{Integer programming} is a special mathematical programming in which all of
-the variables are restricted to integer values. When there are not only
-integer variables but also conventional continuous variables, we call it \emph{
-  mixed integer programming}. If all the variables are restricted to either 0 or 1,
-then the problem is termed a \emph{zero-one programming}. Although integer
-programming can theoretically be solved by \emph{exhaustive enumeration}, this
-is impractical for realistically sized integer programming problems. The
-most successful algorithm so far found to solve integer programming is the
-\emph{branch-and-bound enumeration} developed by Balas (1965) and Dakin
-(1965). Another technique for integer programming is the \emph{cutting plane
-  method} developed by Gomory (1959).
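-
-A typical zero-one programming instance is the knapsack problem
-\begin{equation*}
-\left\{\begin{array}{l}
-\max\,\,\sum\limits_{i=1}^n c_ix_i\\
-\mbox{subject to:}\\
-\qquad \sum\limits_{i=1}^n w_ix_i\le W,\quad x_i\in\{0,1\},\,i=1,2,\cdots,n
-\end{array}\right.
-\end{equation*}
-for which branch-and-bound fixes some variable $x_i$ to $0$ or $1$ at each
-node and prunes a branch whenever the bound from its linear relaxation cannot
-improve on the best known feasible value.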
-
-\hfill\textit{Uncertain Programming\/}\quad(\textsl{BaoDing Liu, 2006.2})
+
+\subsection{Tables}
+
+An example table in the appendix (Table~\ref{tab:appendix-survey-table}).
+
+\begin{table}
+ \centering
+ \caption{Example table in the appendix}
+ \begin{tabular}{ll}
+ \toprule
+ File name & Description \\
+ \midrule
+ thuthesis.dtx & The source file including documentation and comments \\
+ thuthesis.cls & The template file \\
+ thuthesis-*.bst & BibTeX styles \\
+ thuthesis-*.bbx & BibLaTeX styles for bibliographies \\
+ thuthesis-*.cbx & BibLaTeX styles for citations \\
+ \bottomrule
+ \end{tabular}
+ \label{tab:appendix-survey-table}
+\end{table}
+
+
+\section{Equations}
+
+An example equation in the appendix (Equation~\eqref{eq:appendix-survey-equation}).
+\begin{equation}
+ \frac{1}{2 \symup{\pi} \symup{i}} \int_\gamma f = \sum_{k=1}^m n(\gamma; a_k) \mathscr{R}(f; a_k)
+ \label{eq:appendix-survey-equation}
+\end{equation}
+
+
+\section{Citations}
+
+Example citations in the appendix.
+\cite{abrahams99tex}
+\cite{salomon1995advanced}
+\cite{abrahams99tex,salomon1995advanced}
+
\bibliographystyle{unsrtnat}
-\bibliography{ref/refs,ref/appendix}
+\bibliography{ref/appendix}
\end{survey}