path: root/Master/texmf-dist/doc/latex/expl3
author     Karl Berry <karl@freefriends.org>  2006-01-09 00:49:07 +0000
committer  Karl Berry <karl@freefriends.org>  2006-01-09 00:49:07 +0000
commit     007f67a693e4d031fd3d792df8e4d5f43e2cb2e7 (patch)
tree       90d17e00e572ecb1e24764b6f29c80e098b08d29 /Master/texmf-dist/doc/latex/expl3
parent     950209b26f70aa87ed07c54f82a95b6f03b7c3a0 (diff)
doc/latex
git-svn-id: svn://tug.org/texlive/trunk@84 c570f23f-e606-0410-a88d-b1316a301751
Diffstat (limited to 'Master/texmf-dist/doc/latex/expl3')
-rw-r--r--  Master/texmf-dist/doc/latex/expl3/expl3.tex  754
-rw-r--r--  Master/texmf-dist/doc/latex/expl3/test1.tex   47
-rw-r--r--  Master/texmf-dist/doc/latex/expl3/test2.tex   31
-rw-r--r--  Master/texmf-dist/doc/latex/expl3/test3.tex  100
4 files changed, 932 insertions, 0 deletions
diff --git a/Master/texmf-dist/doc/latex/expl3/expl3.tex b/Master/texmf-dist/doc/latex/expl3/expl3.tex
new file mode 100644
index 00000000000..2dda01b6c3d
--- /dev/null
+++ b/Master/texmf-dist/doc/latex/expl3/expl3.tex
@@ -0,0 +1,754 @@
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+% based on version for the TUGboat proceedings
+% Copyright 1997--98 David Carlisle, Chris Rowley, Frank Mittelbach
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%
+%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+\documentclass{article}
+
+\usepackage{shortvrb}
+\MakeShortVerb{\|}
+
+% A couple of \provide.. so document runs with
+% both ltugproc and ltxguide classes
+%
+\providecommand \m [1]{$\langle$\textit{#1}$\rangle$}
+%\providecommand \netaddress {\date}
+\providecommand \acro [1]{\textsc{\MakeLowercase{#1}}}
+\providecommand \ie {i.e.,~}
+\providecommand \eg {e.g.,~}
+
+\hyphenation{para-meters para-meter}
+% I have found at least 3 other hyphenations of this in various refs,
+% but I did also find this, which is my personal favourite ---chris
+
+\hyphenation{ignore ignored ignores}
+
+\begin{document}
+
+\title{The \LaTeX3 Programming Language---\\
+a proposed system for \TeX\ macro programming}
+
+
+\author{\copyright~David Carlisle, Chris Rowley and\\ Frank Mittelbach\\
+\LaTeX3 project\\
+\texttt{latex-l@urz.uni-heidelberg.de}}
+
+
+\maketitle
+
+\begin{abstract}
+
+This paper gives a brief introduction to a new set of programming
+conventions that have been designed to meet the requirements of
+implementing large scale \TeX\ macro programming projects such as
+\LaTeX.
+
+The main features of the system described are:
+\begin{itemize}
+\item classification of the macros (or, in \LaTeX{} terminology,
+ commands) into \LaTeX{} functions and \LaTeX{} parameters, and also
+ into modules containing related commands;
+\item a systematic naming scheme based on these
+ classifications;
+\item a simple mechanism for controlling the expansion of a function's
+arguments.
+\end{itemize}
+A system such as this is being used experimentally as the basis for
+\TeX{} programming within the \LaTeX3 project.
+Note that the language is not intended for either
+document mark-up or style specification.
+
+This paper is based on a talk given by David Carlisle in San
+Francisco, July 1997, but it describes the work of several people:
+principally
+ Frank Mittelbach and
+ Denys Duchier,
+together with
+ Johannes Braams,
+ David Carlisle,
+ Michael Downes,
+ Alan Jeffrey,
+ Chris Rowley and
+ Rainer Sch\"opf.
+\end{abstract}
+
+\vspace{4pt}
+
+
+\section{Introduction}
+
+This paper describes the conventions for a \TeX-based programming
+language which is intended to provide a more consistent and rational
+environment for the construction of large scale systems, such as
+\LaTeX, using \TeX{} macros.
+
+Variants of this language have been in use by The \LaTeX3 Project Team
+since around 1990, but the syntax specification to be outlined here
+should \emph{not} be considered final. This is an experimental
+language; thus many aspects, such as the syntax conventions and naming
+schemes, may (and probably will) change as more experience is gained
+with using the language in practice.
+
+The next section shows where this language fits into a complete
+\TeX-based document processing system. We then describe the major
+features of the syntactic structure of command names, including the
+argument specification syntax used in function names.
+
+The practical ideas behind this argument syntax will be explained,
+together with the semantics of the expansion control mechanism and the
+interface used to define variant forms of functions. The paper also
+discusses some advantages of the syntax for parameter names.
+
+As we shall demonstrate, the use of a structured naming scheme and of
+variant forms for functions greatly improves the readability of the
+code and hence also its reliability. Moreover, experience has shown
+that the longer command names which result from the new syntax do not
+make the process of \emph{writing} code significantly harder
+(especially when using a reasonably intelligent editor).
+
+The final section gives some details of our plans to distribute parts
+of this system during the next year.
+More general information concerning the work of the \LaTeX3 Project
+can be found in~\cite{tub:MR98-1}.
+
+
+\section{Languages and interfaces}
+\label{sec:langs}
+
+It is possible to identify several distinct languages related to the
+various interfaces that are needed in a \TeX-based document processing
+system. This section looks at those we consider most important for
+the \LaTeX3 system.
+
+\begin{description}
+\item[Document mark-up] This comprises those commands (often called tags)
+ that are to be embedded in the document (the |.tex| file).
+
+ It is generally accepted that such mark-up should be essentially
+ \emph{declarative}.
+ It may be traditional \TeX-based mark-up such as
+ \LaTeXe, as described in~\cite{A-W:LLa94} and~\cite{A-W:GMS94},
+ or a mark-up language defined via \acro{SGML} or \acro{XML}.
+
+ One problem with more traditional \TeX\ coding conventions (as
+ described in~\cite{A-W:K-TB}) is that the names and syntax of \TeX's
+ primitive formatting commands are ingeniously designed to be
+ `natural' when used directly by the author as document mark-up or in
+ macros. Ironically, the ubiquity (and widely recognised
+ superiority) of logical mark-up has meant that such explicit
+ formatting commands are almost never needed in documents or in
+ author-defined macros. Thus they are used almost exclusively by
+ \TeX{} programmers to define higher-level commands; and their
+ idiosyncratic syntax is not at all popular with this community.
+ Moreover, many of them have names that could be very useful as
+ document mark-up tags were they not pre-empted as primitives (\eg
+ |\box| or |\special|).
+
+\item[Designer interface] This relates a (human) typographic
+ designer's specification for a document to a program that `formats
+ the document'. It should ideally use a declarative language that
+ facilitates expression of the relationship and spacing rules specified
+ for the layout of the various document elements.
+
+ This language is not embedded in document text and it will be very
+ different in form to the document mark-up language. For
+ \acro{SGML}-based systems the \acro{DSSSL} language may come to play
+ this role. For \LaTeX, this level was almost completely missing
+ from \LaTeX2.09; \LaTeXe\ made some improvements in this area but it
+ is still the case that implementing a design specification in
+ \LaTeX\ requires far more `low-level' coding than is acceptable.
+\item[Programmer interface]
+ This language is the implementation
+ language within which the basic typesetting functionality is
+ implemented, building upon the primitives of \TeX\ (or a
+ successor program).
+ It may also be used to implement the previous
+ two languages `within' \TeX, as in the current \LaTeX\ system.
+\end{description}
+
+Only the last of these three interfaces is covered by this paper,
+which describes a system aimed at providing a suitable basis for
+coding large scale projects in \TeX{} (but this should not preclude its
+use for smaller projects). Its main distinguishing features are
+summarised here.
+
+\begin{itemize}
+\item A consistent naming scheme for all commands, including \TeX\
+ primitives.
+\item The classification of commands as \LaTeX{} functions or \LaTeX{}
+ parameters, and also their division into modules according to their
+ functionality.
+\item A simple mechanism for controlling argument expansion.
+\item Provision of a set of core \LaTeX{} functions that is sufficient
+ for handling programming constructs such as queues, sets,
+ stacks, and property lists.
+\item A \TeX{} programming environment in which, for example, all
+ white space is ignored (see the short example just after this list).
+\end{itemize}
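+
+For instance, in the small test files distributed with these packages
+(see section~\ref{sec:dist}) the programming environment is entered
+with |\CodeStart| and left with |\CodeStop|; between the two, spaces
+and newlines in the code are ignored, so definitions can be laid out
+freely. A minimal sketch (the function |\foo_bar:nn| is invented for
+this example):
+\begin{verbatim}
+  \CodeStart
+    \def_new:Npn \foo_bar:nn #1#2 { (#1)(#2) }
+  \CodeStop
+\end{verbatim}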
+
+\section{The naming scheme}
+\label{sec:scheme}
+
+The naming conventions for this programming language distinguish
+between \textit{functions} and \textit{parameters}. Functions can have
+arguments and they are executed. Parameters can be assigned values
+and they are used in arguments to functions; they are not directly
+executed but are manipulated by mutator and accessor functions.
+Functions and parameters with a related functionality (for example
+accessing counters, or manipulating token-lists, etc.)\ are collected
+together into a
+\textit{module}.
+
+
+Note that all these terms are only \LaTeX{} terminology and are not,
+for example, intended to indicate that the commands have these
+properties when considered in the context of basic \TeX{} or in any
+more general programming context.
+
+
+\subsection{Examples}
+\label{sec:ex}
+
+Before giving the details of the naming scheme, here are a few typical
+examples to indicate the flavour of the scheme; first some parameter
+names.
+\begin{quote}
+|\l_tmpa_box| is a local parameter (hence the~|l_| prefix)
+corresponding to a box register.\\
+|\g_tmpa_int| is a global parameter (hence the~|g_| prefix)
+corresponding to an integer register (\ie a \TeX{} count register).\\
+|\c_empty_toks|
+is the constant~(|c_|) token register parameter that is for ever empty.
+\end{quote}
+Now here is an example of a typical function name.
+
+|\seq_push:Nn| is the function which puts the token list specified by
+its second argument onto the stack specified by its first argument.
+The different natures of the two arguments are indicated by the~|:Nn|
+suffix. The first argument must be a single token which `names'
+the stack parameter: such single-token arguments are denoted~|N|.
+The second argument is a normal \TeX\ `undelimited argument', which
+may either be a single token or a balanced, brace-delimited token
+list (which we shall here call a \textit{braced token list}): the~|n|
+denotes such a `normal' argument form.
+
+|\seq_push:cn| would be similar to the above, but in this case the~|c|
+means that the stack-name is specified in the first argument by a
+token list that expands, using |\csname...|, to a control sequence that
+is the \emph{name} of the stack parameter.
+
+\noindent
+The names of these two functions also indicate that they are in the
+module called |seq|.
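+
+By way of illustration, a call of each form might look like this (a
+sketch only: the sequence parameter |\l_input_files_seq| and its use
+as a stack of file names are invented for this example):
+\begin{verbatim}
+  \seq_push:Nn \l_input_files_seq {chapter1.tex}
+  \seq_push:cn {l_input_files_seq} {chapter2.tex}
+\end{verbatim}
+Both calls push a braced token list onto the same stack; the second
+merely names that stack indirectly, via a character string.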
+
+
+\subsection{Formal syntax of the conventions}
+\label{sec:namesyn}
+
+We shall now look in more detail at the syntax of these names.\\
+The syntax of parameter names is as follows:
+ \begin{quote}
+ |\|\m{access}|_|\m{module}|_|\m{description}|_|\m{type}
+ \end{quote}
+The syntax of function names is as follows:
+ \begin{quote}
+ |\|\m{module}|_|\m{description}|:|\m{arg-spec}
+ \end{quote}
+
+
+\subsection{Modules and descriptions}
+\label{sec:modules}
+
+The syntax of all names contains
+\begin{quote}
+ \m{module} and \m{description}:
+\end{quote}
+these both give information about the command.
+
+A \textit{module} is a collection of
+closely related functions and parameters.
+Typical module names include~|int| for integer parameters
+and related functions,~|seq| for sequences and~|box| for boxes.
+
+Packages providing new programming functionality will add new modules
+as needed; the programmer can choose any unused name, consisting
+of letters only, for a module.
+
+The \textit{description} gives more detailed information about the
+function or parameter, and provides a unique name for it. It should
+consist of letters and, possibly,~|_|~characters.
+
+\subsection{Parameters: access and type}
+\label{sec:parms}
+
+The \m{access} part of the name describes how the parameter can be
+accessed. Parameters are primarily classified as local, global or
+constant (there are further, more technical, classes). This
+\textit{access} type appears as a code at the beginning of the name;
+the codes used include:
+\begin{itemize}
+\item[\bf c]
+ constants (global parameters whose value should not be changed);
+\item[\bf g]
+ parameters whose value should only be set globally;
+\item[\bf l]
+ parameters whose value should only be set locally.
+\end{itemize}
+
+The \m{type} will normally (except when introducing a new data-type)
+be in the list of available \textit{data-types}; these include the
+primitive \TeX\ data-types, such as the various registers, but to
+these will be added data-types built within the \LaTeX{} programming
+system.
+
+Here are some typical data-type names:
+\begin{description}
+\item[int] integer-valued count register;
+\item[toks] token register;
+\item[box] box register;
+\item[fint] `fake-integer' (or fake-counter): a data-type created to
+ avoid problems with the limited number of count registers available
+ in (standard) \TeX;
+\item[seq] `sequence': a data-type used to implement lists
+ (with access at both ends) and stacks;
+\item[plist] property list.
+\end{description}
+When the \m{type} and \m{module} are identical (as often happens in
+the more basic modules) the \m{module} part is often omitted for
+aesthetic reasons.
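+
+Putting these parts together, the parameter names shown in
+section~\ref{sec:ex} decompose as follows (in all three cases the
+\m{module} part is omitted because it coincides with the \m{type}):
+\begin{verbatim}
+  \l_tmpa_box     % access l (local),    description `tmpa',  type box
+  \g_tmpa_int     % access g (global),   description `tmpa',  type int
+  \c_empty_toks   % access c (constant), description `empty', type toks
+\end{verbatim}
+A name such as |\l_file_name_toks| (invented here for illustration)
+would additionally carry an explicit module part, \texttt{file}.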
+
+
+\subsection{Functions: argument specifications}
+\label{sec:args}
+
+ Function names end with an \m{arg-spec} after a colon. This
+ gives an indication of the types of argument that a function takes,
+ and provides a convenient method of naming similar functions that
+ differ only in their argument forms (see the next section for
+ examples).
+
+ The \m{arg-spec} consists of a (possibly empty) list of characters,
+ each denoting one argument of the function. It is important to
+ understand that `argument' here refers to the effective argument of
+ the \LaTeX{} function, not to an argument at the \TeX-level. Indeed,
+ the top level \TeX\ macro that has this name typically has no
+ arguments. This is an extension of the existing \LaTeX\ convention
+ where one says that |\section| has an optional argument and a
+ mandatory argument, whereas the \TeX\ macro |\section| actually has
+ zero parameters at the \TeX\ level; it merely calls an internal \LaTeX\
+ command, which in turn calls others that look ahead for star forms and
+ optional arguments.
+
+The list of possible argument specifiers includes the following.
+\begin{itemize}
+\item[\bf n] Unexpanded token or braced token list.\\
+ This is a standard \TeX\ undelimited macro argument.
+\item[\bf o] One-level-expanded token or braced token list.\\
+ This means that the argument is expanded one level, as by
+ |\expandafter|, and the expansion is passed to the function as a braced
+ token list. Note that if the original argument is a braced
+ token list then only the first token in that list is expanded.
+\item[\bf x] Fully-expanded token or braced token list.\\
+ This means that the argument is expanded as in the replacement text of
+ an~|\edef|, and the expansion is passed to the function as a
+ braced token list.
+\item[\bf c] Character string used as a command name.\\ The argument (a
+ token or braced token list) must, when fully expanded, produce a
+ sequence of characters which is then used to construct a command
+ name (via~|\csname|, |\endcsname|). This command name is the single
+ token that is passed to the function as the argument.
+\item[\bf N] Single token (unlike~|n|, the argument must \emph{not} be
+ surrounded by braces).\\
+ A typical example of a command taking an~|N|
+ argument is~|\def|, in which the command being defined must be
+ unbraced.
+ \item[\bf O] One-level-expanded single token (unbraced).\\
+ As for~|o|, the one-level expansion is passed (as a
+ braced token list) to the function.
+ \item[\bf X] Fully-expanded single token (unbraced).\\
+ As for~|x|, the full expansion is passed (as a
+ braced token list) to the function.
+ \item[\bf C] Character string used as a command name
+ then one-level expanded.\\
+ The form of the argument is exactly as for~|c|, but the
+ resulting token is then expanded one level (as for~|O|), and
+ the expansion is passed to the function as a braced token list.
+ \item[\bf p] Primitive \TeX\ parameter specification.\\
+ This can be something simple like~|#1#2#3|, but may use arbitrary
+ delimited argument syntax such as: |#1,#2\q_stop#3|.
+ \item[\bf T,F\hspace{-10pt}]
+ \hspace{10pt}%
+ These are special cases of~|n| arguments, used for the
+ true and false code in conditional commands.
+\end{itemize}
+There are two other specifiers with more general meanings:
+\begin{itemize}
+\item[\bf D] This means: \textbf{Do not use}. This special case is used
+ for \TeX\ primitives and other commands that are provided for use
+ only while bootstrapping the \LaTeX\ kernel. If the \TeX\ primitive
+ needs to be used in other contexts it will be given an alternative,
+ more appropriate, name with a useful argument specification. The
+ argument syntax of these is often weird, in the sense described next.
+ \item[\bf w] This means that the argument syntax is `weird' in that it
+ does not follow any standard rule. It is used for functions with
+ arguments that take non-standard forms: examples are \TeX-level
+ delimited arguments and the boolean tests needed after certain
+ primitive |\if|\ldots\ commands.
+\end{itemize}
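+
+To see the effect of the most common specifiers, consider a
+hypothetical function |\demo_show:nn| that simply displays its two
+arguments (the pattern follows the file \texttt{test1.tex} distributed
+with this system; only the |\demo_show| functions are invented here):
+\begin{verbatim}
+  \def:Npn \a  {A}     \def:Npn \b  {B}
+  \def:Npn \aa {\a}    \def:Npn \bb {\b}
+
+  \demo_show:nn {\aa} {\bb}  % receives  \aa  and  \bb  unexpanded
+  \demo_show:oo  \aa   \bb   % receives {\a}  and {\b}  (one level)
+  \demo_show:xx  \aa   \bb   % receives {A}   and {B}   (full expansion)
+  \demo_show:cc {aa}  {bb}   % receives the single tokens \aa and \bb
+\end{verbatim}
+The variant forms themselves would be generated from |\demo_show:nn|
+with the |\exp_def_form:nnn| function described in
+section~\ref{sec:newfunc}.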
+
+
+\section{Expansion control}
+\label{sec:exp}
+
+\subsection{Simpler means better}
+\label{sec:simpler}
+
+Anyone who programs in \TeX\ is frustratingly familiar with the
+problem of arranging that arguments to functions are suitably expanded
+before the function is called. To illustrate how expansion control
+can bring instant relief to this problem we shall consider two
+examples copied from \texttt{latex.ltx}.
+
+\begin{verbatim}
+ \global
+ \expandafter
+ \expandafter
+ \expandafter
+ \let
+ \expandafter
+ \reserved@a
+ \csname \curr@fontshape \endcsname
+\end{verbatim}
+This first piece of code is in essence simply a
+global |\let|. However, the token to be defined is obtained by
+expanding |\reserved@a| one level; and, worse, the token to which it
+is to be let is obtained by fully expanding |\curr@fontshape| and then
+using the characters produced by that expansion to construct a command
+name. The result is a mess of interwoven |\expandafter| and~|\csname|
+beloved of all \TeX\ programmers, and the code is essentially
+unreadable.
+
+Using the conventions and functionality outlined here, the task would
+be achieved with code such as this:
+\begin{verbatim}
+ \glet:Oc \g_reserved_a_tlp
+ \l_current_font_shape_tlp
+\end{verbatim}
+The command |\glet:Oc| is a global~|\let| that expands its
+first argument once, and generates a command name out of its second
+argument, before making the definition. This produces code that
+is far more readable and more likely to be correct first time.
+
+Here is the second example.
+\begin{verbatim}
+ \expandafter
+ \in@
+ \csname sym#3%
+ \expandafter
+ \endcsname
+ \expandafter
+ {%
+ \group@list}%
+\end{verbatim}
+This piece of code is part of the definition of another function. It
+first produces two things: a token list, by expanding |\group@list| once;
+and a token whose name comes from~`|sym#3|'. Then the function~|\in@|
+is called and this tests if its first argument occurs in the token list
+of its second argument.
+
+Again we can improve enormously on the code. First we shall rename
+the function~|\in@| according to our conventions. A function such as
+this but taking two normal `\texttt{n}' arguments might reasonably be
+named |\seq_test_in:nn|; thus the variant function we need will be
+defined with the appropriate argument types and its name will be
+|\seq_test_in:cO|. Now this code fragment will be simply:
+\begin{verbatim}
+ \seq_test_in:cO {sym#3} \l_group_seq
+\end{verbatim}
+Note that, in addition to the lack of |\expandafter|, the space after
+the~|}| will be silently ignored since all white space is ignored in
+this programming environment.
+
+
+\subsection{New functions from old}
+\label{sec:newfunc}
+
+For many common functions the \LaTeX3 kernel will provide variants
+with a range of argument forms, and similarly it is expected that
+extension packages providing new functions will make them available in
+all the commonly needed forms.
+
+However, there will be occasions where it is necessary to construct a
+new such variant form; therefore the expansion module provides a
+straightforward mechanism for the creation of functions with any
+required argument type, starting from a function that takes `normal'
+\TeX\ undelimited arguments.
+
+To illustrate this let us suppose you have a `base function'
+|\demo_cmd:nnn| that takes three normal arguments, and that you need
+to construct the variant |\demo_cmd:cnx|, for which the first argument
+is used to construct the \emph{name} of a command, whilst the third
+argument must be fully expanded before being passed to
+|\demo_cmd:nnn|.
+To produce the variant form from the base form, simply use this:
+\begin{verbatim}
+ \exp_def_form:nnn {demo_cmd} {nnn} {cnx}
+\end{verbatim}
+This defines the variant form so that you can then write, for example:
+\begin{verbatim}
+ \demo_cmd:cnx {abc} {pq} {\rst \xyz }
+\end{verbatim}
+rather than \ldots\ well, something like this!
+\begin{verbatim}
+ \def \tempa {{pq}}%
+ \edef \tempb {\rst \xyz}%
+ \expandafter
+ \demo@cmd
+ \csname abc%
+ \expandafter
+ \expandafter
+ \expandafter
+ \endcsname
+ \expandafter
+ \tempa
+ \expandafter
+ {%
+ \tempb
+ }%
+\end{verbatim}
+
+Another example: you may wish to declare a function
+|\demo_cmd_b:xcxcx|, a variant of an existing function
+|\demo_cmd_b:nnnnn|, that fully
+expands arguments 1,~3 and~5, and produces commands to pass as
+arguments 2 and~4 using~|\csname|.
+The definition you need is simply
+\begin{verbatim}
+ \exp_def_form:nnn
+ {demo_cmd_b} {nnnnn} {xcxcx}
+\end{verbatim}
+
+This extension mechanism is written so that if the same new form of
+some existing command is implemented by two extension packages then the
+two definitions will be identical and thus no conflict will occur.
+
+
+\section{Parameter assignments and accessor functions}
+\label{sec:access}
+
+\subsection{Checking assignments}
+\label{sec:check}
+
+One of the advantages of having a consistent scheme is that the system
+can provide more extensive error-checking and debugging facilities.
+For example, an accessor function that makes a \emph{global}
+assignment of a value to a parameter can check that it is not passed
+the name of a \emph{local} parameter as that argument: it does this by
+checking that the name starts with~|\g_|.
+
+Such checking is probably too slow for normal use, but the code can
+have hooks built in that allow a format to be made in which all
+functions perform this kind of check.
+
+A typical section of the source%
+\footnote{This code uses the \textsf{docstrip}
+system described in~\cite{A-W:GMS94}, Section~14.3.}
+for such code might look like this
+(recall that all white space is ignored):
+
+\begin{verbatim}
+ %<*!check>
+ \let_new:NN
+ \toks_gset:Nn \tex_global:D
+ %</!check>
+ %<*check>
+ \def_new:Npn
+ \toks_gset:Nn #1
+ {
+ \chk_global:N #1
+ \tex_global:D #1
+ }
+ %</check>
+\end{verbatim}
+In the above code the function |\toks_gset:Nn| takes a single
+token~(|N|) specifying a token register, and globally sets it to the
+value passed in the second argument.
+
+A typical use of it would be:
+\begin{verbatim}
+ \toks_gset:Nn \g_xxx_toks {<some value>}
+\end{verbatim}
+In the normal definition, |\toks_gset:Nn| can be simply~|\let|
+to~|\global| because the primitive \TeX{} token register does not
+require any explicit assignment function:
+this is done by the |%<*!check>| code above.
+
+The alternative definition first checks that the argument
+passed as~|#1| is the name of a global parameter and raises an error
+if it is not. It does this by taking apart the command name passed
+as~|#1| and checking that it starts~|\g_|.
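+
+For example, with the checking code in force a call such as the
+following (the parameter name |\l_tmpa_toks| is invented for this
+illustration) would be reported as an error, because its first
+argument does not start with~|\g_|:
+\begin{verbatim}
+  \toks_gset:Nn \l_tmpa_toks {<some value>}
+\end{verbatim}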
+
+\subsection{Consistency}
+\label{sec:cons}
+
+The primitive \TeX\ syntax for register assignments has a very minimal
+syntax and, apart from box functions, there are no explicit functions
+for assigning values to these registers.
+
+This makes it impossible to implement alternative data-types with a
+syntax that is both consistent and at all similar to the syntax for
+the primitives; moreover, it encourages a coding style that is very
+error prone.
+
+As in the |\toks_gset:Nn| example given above, all \LaTeX\ data-types
+are provided with explicit functions for assignment and for use, even
+when these have essentially empty definitions. This allows for better
+error-checking as described above; it also allows the construction of
+further data-types with a similar interface, even when the
+implementation of the associated functions is very complex.
+
+For example, the `fake-counter' (\texttt{fint}) data-type mentioned
+above will appear at the \LaTeX{} programming level to be exactly like
+the data-type based on primitive count registers; however, internally
+it makes no use of count registers. Typical functions in this module
+are illustrated here.
+
+\begin{verbatim}
+\fint_new:N \l_example_fint
+\end{verbatim}
+This declares the local parameter |\l_example_fint| as a fake-counter.
+
+\begin{verbatim}
+\fint_add:Nn \l_example_fint \c_thirty_two
+\end{verbatim}
+This increments the value of this fake-counter by 32.
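+
+For comparison, the analogous operations on an ordinary integer
+register would be expected to look identical apart from the type name
+(a sketch only; |\int_add:Nn| is assumed here by analogy with the
+functions shown above):
+\begin{verbatim}
+\int_new:N  \l_example_int
+\int_add:Nn \l_example_int \c_thirty_two
+\end{verbatim}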
+
+
+\section{The experimental distribution}
+\label{sec:dist}
+
+The initial implementations of a \LaTeX\ programming language using
+this kind of syntax remain unreleased (and not completely functional);
+they partly pre-date \LaTeXe! The planned distribution will provide a
+subset of the functionality of those implementations, in the form of
+packages to be used on top of \LaTeXe.
+
+The intention is to allow experienced \TeX\ programmers to experiment
+with the system and to comment on the interface. This means that
+\textbf{\itshape the interface will change}. No part of this system,
+including the name of anything, should be relied upon as being
+available in a later release. Please do \emph{experiment} with these
+packages, but do \emph{not} use them for code that you expect to keep
+unchanged over a long period.
+
+In view of the intended experimental use for this distribution we
+shall, in the first instance, produce only a few modules for use with
+\LaTeXe. These will set up the conventions and the basic functionality
+of, for example, the expansion mechanism; they will also implement some
+of the basic programming constructs, such as token-lists and sequences.
+They are intended only to give a flavour of the code: the full \LaTeX3
+kernel will provide a very rich set of programming constructs so that
+packages can efficiently share code, in contrast with the situation in
+the current \LaTeX\ where every large package must implement its own
+version of queues, stacks, etc., as necessary.
+
+In the first release of this experimental system at least the
+following modules will be distributed.
+\begin{description}
+\item[l3names] This sets up the basic naming scheme and renames all
+the \TeX\ primitives. If it is loaded with the option
+\texttt{[removeoldnames]} then the old primitive names such as~|\box|
+become \emph{undefined} and are thus available for user
+definition. Caution: use of this option will certainly break existing
+\TeX\ code!
+
+\item [l3basics]
+This contains the basic definition modules used
+by the other packages.
+
+\item[l3chk] A module that provides functionality comparable to
+\LaTeX's |\newcommand| and |\renewcommand|, and also the extra level of
+checking described above in section~\ref{sec:check}.
+
+\item[l3tlp]
+This implements a basic data-type, called a \textit{token-list
+pointer}, used for storing named token lists: these are essentially
+\TeX{} macros with no arguments.
+
+\item[l3expan] This is the argument expansion module discussed above.
+
+\item[l3quark] A `quark' is a command that is defined to expand to
+itself! It must therefore never be expanded, as this would generate
+infinite recursion; quarks do, however, have many uses, \eg as
+special markers and delimiters within code (see the small sketch
+after this list).
+
+\item[l3seq]
+This implements data-types such as queues and stacks.
+
+\item[l3prop]
+This implements the data-type for `property lists' that are used, in
+particular, for storing key/value pairs.
+
+\item[l3int]
+This implements the integer and `fake integer' data-types.
+
+\item[l3toks]
+A data-type corresponding to \TeX's primitive token registers.
+
+\item[l3io]
+A module providing low level input and output functions.
+
+\item[l3precom]
+A `pre-compilation' module that provides functions for pointer
+creation and handling, and for using external files to record the
+state of the current definitions.
+\end{description}
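+As a tiny illustration of the quark idea mentioned under
+\texttt{l3quark} above (the name |\q_example| is invented for this
+sketch):
+\begin{verbatim}
+  \def_new:Npn \q_example { \q_example }
+\end{verbatim}
+Such a command expands to itself, so it must only ever be compared
+against or used as a delimiter, never expanded.
+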
+This distribution will also contain the \LaTeX\ source for the latest
+version of this document, a docstrip install file and three small test
+files.
+
+
+In later releases we plan to add further modules and a full-fledged
+example of the use of the new language: a prototype implementation
+of the ideas described in the article `Language Information in
+Structured Documents: A Model for Mark-up and
+Rendering'~\cite{tub:MR98-2}.
+
+\begin{thebibliography}{1}
+
+\bibitem{A-W:K-TB}
+Donald E. Knuth.
+\newblock {\em The {\TeX}book}.
+\newblock Addison-Wesley, Reading, Massachusetts, 1984.
+
+\bibitem{A-W:GMS94}
+Goossens, Mittelbach and Samarin.
+\newblock {\em The {\LaTeX} Companion}.
+\newblock Addison-Wesley, Reading, Massachusetts, 1994.
+
+\bibitem{A-W:LLa94}
+Leslie Lamport.
+\newblock {\em {\LaTeX:} A Document Preparation System}.
+\newblock Addison-Wesley, Reading, Massachusetts, second edition, 1994.
+
+\bibitem{tub:MR98-1}
+Frank Mittelbach and Chris Rowley.
+\newblock The {\LaTeX3} Project.
+\newblock {\em {TUG}boat}, ????? ??? 1998.
+
+\bibitem{tub:MR98-2}
+Frank Mittelbach and Chris Rowley.
+\newblock Language Information in
+Structured Documents: A Model for Mark-up and Rendering.
+\newblock {\em {TUG}boat}, ????? ??? 1998.
+
+\end{thebibliography}
+
+\end{document}
+
+
+
+
+
diff --git a/Master/texmf-dist/doc/latex/expl3/test1.tex b/Master/texmf-dist/doc/latex/expl3/test1.tex
new file mode 100644
index 00000000000..e419bdb37aa
--- /dev/null
+++ b/Master/texmf-dist/doc/latex/expl3/test1.tex
@@ -0,0 +1,47 @@
+
+\documentclass{article}
+
+
+\usepackage{l3expan,l3io}
+
+\begin{document}
+
+\CodeStart
+
+
+% a test function that writes its two arguments, unexpanded, to the terminal
+\def_new:Npn\test_cmd:nn#1#2{
+ \iow_unexpanded_term:n{}
+ \iow_unexpanded_term:n{Argument~1:~#1}
+ \iow_unexpanded_term:n{Argument~2:~#2}}
+
+
+
+\def:Npn\a{A}
+\def:Npn\b{B}
+
+\def:Npn\aa{\a}
+\def:Npn\bb{\b}
+
+% generate variants of \test_cmd:nn taking other argument forms
+\exp_def_form:nnn {test_cmd}{nn}{oo}
+\exp_def_form:nnn {test_cmd}{nn}{xx}
+\exp_def_form:nnn {test_cmd}{nn}{cc}
+\exp_def_form:nnn {test_cmd}{nn}{nx}
+
+\test_cmd:nn{a}{b}
+
+\test_cmd:nn{\a}{\b}
+
+\test_cmd:oo\aa\bb
+
+\test_cmd:xx\aa\bb
+
+\test_cmd:cc{a}{b}
+
+\test_cmd:nx{a}{\b}
+
+\CodeStop
+
+\LaTeX\ still works!
+
+\end{document}
+
diff --git a/Master/texmf-dist/doc/latex/expl3/test2.tex b/Master/texmf-dist/doc/latex/expl3/test2.tex
new file mode 100644
index 00000000000..1a1ed8d813b
--- /dev/null
+++ b/Master/texmf-dist/doc/latex/expl3/test2.tex
@@ -0,0 +1,31 @@
+
+
+\RequirePackage[removeoldnames]{l3names}
+\RequirePackage{l3expan}
+
+\CodeStart
+
+
+% a test function: define \xxx from the two arguments and show the result
+\def_new:Npn\foo:nn#1#2{\def:Npn\xxx{(#1)(#2)}\tex_show:D\xxx}
+
+\def:Npn\a{A}\def:Npn\b{B}
+\def:Npn\aa{\a}\def:Npn\bb{\b}
+
+% variants of \foo:nn built directly with the \exp_args:N... helpers
+\def:Npn\foo:oo{\exp_args:Noo\foo:nn}
+\def:Npn\foo:xx{\exp_args:Nxx\foo:nn}
+\def:Npn\foo:cc{\exp_args:Ncc\foo:nn}
+\def:Npn\foo:nx{\exp_args:Nnx\foo:nn}
+
+\foo:nn{a}{b}
+
+\foo:nn{\a}{\b}
+
+\foo:oo\aa\bb
+
+\foo:xx\aa\bb
+
+\foo:cc{a}{b}
+
+\foo:nx{a}{\b}
+
+\tex_end:D
diff --git a/Master/texmf-dist/doc/latex/expl3/test3.tex b/Master/texmf-dist/doc/latex/expl3/test3.tex
new file mode 100644
index 00000000000..8d3630839bb
--- /dev/null
+++ b/Master/texmf-dist/doc/latex/expl3/test3.tex
@@ -0,0 +1,100 @@
+\documentclass{article}
+
+\usepackage{l3precom}
+
+% let's dump what is known about the LaTeX internals so far.
+% this will not be much as the very basic stuff doesn't get
+% dumped and we haven't got anything else.
+%
+\dumpLaTeXstate{test1}
+
+\CodeStart
+
+% we need some variants of tlp_set which are not yet
+% defined for use in the code below.
+%
+\exp_def_form:nnn{tlp_set}{Nn}{on}
+\exp_def_form:nnn{tlp_gset}{Nn}{on}
+
+% okay, here we either load a dump file (testdump.cmp)
+% and then jump to \cs_dump: or we compile one for next time.
+% don't forget that if you change the code below it will only have any
+% effect if a new dump file is written, so you may have to remove
+% the existing one.
+%
+\cs_load_dump:n{testdump}
+
+% two test definitions
+%
+\def_new:Npn\foo{some foo}
+\def_new:Npn\baz{some baz}
+
+% we say that \foo should be dumped in the compiled style.
+% this is pretty useless as it is certainly not faster than defining
+% it in the first place, but it is done only to show that the mechanism
+% works. just assume that \foo is actually a pretty difficult
+% definition which does need a lot of static compilation due to
+% parsing, comparing values, etc., so that it is much faster to load the
+% final version than to do the compilation each time again.
+%
+% btw note that \baz is not dumped and will not be available in the
+% production run (ie the one using the cmp file)
+%
+\cs_record_name:N\foo
+
+% get ourselves a scratch register (again this will not be available in
+% the production run)
+%
+\tlp_new:Nn\l_scratch_tlp{}
+
+% now we generate a unique cs name and assign it the string "foo".
+% again a pretty useless example, but with this mechanism you can build
+% complex graph structures using these names as pointers, etc., and
+% in such a case you need to dump the state of your graph at some
+% point to be able to load it very fast in production.
+%
+\cs_gen_sym:N\l_scratch_tlp{}
+\tlp_set:on \l_scratch_tlp {foo}
+
+% ditto for a global unique name
+%
+\cs_ggen_sym:N\l_scratch_tlp{}
+\tlp_gset:on \l_scratch_tlp {bar}
+
+% and now we dump the whole rubbish. In the current implementation
+% only csnames can be precompiled; perhaps registers should be handled
+% similarly.
+%
+\cs_dump:
+
+% and some int register to show something in the second LaTeX state
+% dump.
+%
+\int_new:N\l_my_int
+\int_set:Nn\l_my_int{42}
+%
+% as the allocation routines are not distributed we have to do this
+% manually.
+%
+\register_record_name:N\l_my_int
+
+\dumpLaTeXstate{test2}
+
+% and changing something ... what happens with the LaTeX state?
+%
+\int_set:Nn\l_my_int{0}
+\def:Npn\file_not_found:nTF#1#2#3{}
+
+\dumpLaTeXstate{test3}
+
+\CodeStop
+
+\begin{document}
+
+\LaTeX\ still works!
+
+\end{document}
+
+
+
+