author     Norbert Preining <norbert@preining.info>  2019-09-02 13:46:59 +0900
committer  Norbert Preining <norbert@preining.info>  2019-09-02 13:46:59 +0900
commit     e0c6872cf40896c7be36b11dcc744620f10adf1d (patch)
tree       60335e10d2f4354b0674ec22d7b53f0f8abee672 /macros/luatex/generic/spelling
Initial commit
Diffstat (limited to 'macros/luatex/generic/spelling')
-rw-r--r--  macros/luatex/generic/spelling/CHANGES                       90
-rw-r--r--  macros/luatex/generic/spelling/LICENSE                      416
-rw-r--r--  macros/luatex/generic/spelling/README                        70
-rw-r--r--  macros/luatex/generic/spelling/spelling-doc-lst-lua.tex      84
-rw-r--r--  macros/luatex/generic/spelling/spelling-doc.bad               4
-rw-r--r--  macros/luatex/generic/spelling/spelling-doc.pdf             bin 0 -> 128505 bytes
-rw-r--r--  macros/luatex/generic/spelling/spelling-doc.tex             830
-rw-r--r--  macros/luatex/generic/spelling/spelling-main.lua            220
-rw-r--r--  macros/luatex/generic/spelling/spelling-recurse.lua         110
-rw-r--r--  macros/luatex/generic/spelling/spelling-stage-1.lua         370
-rw-r--r--  macros/luatex/generic/spelling/spelling-stage-2.lua         675
-rw-r--r--  macros/luatex/generic/spelling/spelling-stage-3.lua         301
-rw-r--r--  macros/luatex/generic/spelling/spelling-stage-4.lua         202
-rw-r--r--  macros/luatex/generic/spelling/spelling.sty                 150
14 files changed, 3522 insertions, 0 deletions
diff --git a/macros/luatex/generic/spelling/CHANGES b/macros/luatex/generic/spelling/CHANGES
new file mode 100644
index 0000000000..b96745410f
--- /dev/null
+++ b/macros/luatex/generic/spelling/CHANGES
@@ -0,0 +1,90 @@
+This material is subject to the LaTeX Project Public License. See
+<http://www.latex-project.org/lppl/> for the details of that license.
+
+
+### v0.41 (2013-05-25)
+
+Fixes:
+
+* Fixed a compatibility issue with LuaTeX 0.70.2 that caused the text
+  output file to be written empty.
+
+
+### v0.4 (2013-05-23)
+
+New features:
+
+* In addition to lists of bad and good spellings, words can be checked
+ against user-defined match rules to determine highlighting status.
+
+Changes:
+
+* Removed means to configure EOL character of text output file.
+ Standard Lua EOL character is always used, which is platform
+ dependent.
+
+* Improved compatibility with recent LuaTeX versions (v0.74 and newer).
+
+* File `<jobname>.spell.xml` is loaded before file `<jobname>.spell.bad`
+ (if both files exist).
+
+Fixes:
+
+* Bad words with surrounding punctuation were not highlighted.
+  Determining whether a word needs to be highlighted is now done by
+  checking words against the lists of bad and good spellings both as
+  they appear in the document, with possible punctuation, and with all
+  surrounding punctuation stripped ([GitHub issue 8][ghi#8]).
+
+* Macro `\spellingoutputlinelength` was broken.
+
+* Raising an error when a file cannot be opened for reading causes
+ problems when compiling a document for the first time. Now, only a
+ warning is written to the console and log file in that case.
+
+[ghi#8]: https://github.com/sh2d/spelling/issues/8
+
+
+### v0.3 (2013-02-12)
+
+New:
+
+* [LanguageTool][lt] support: LanguageTool error reports in the XML
+ format can be parsed for spelling errors (with the help of the
+ [LuaXML][luaxml] package). LanguageTool is a cross-platform style and
+ grammar checker.
+
+[lt]: http://www.languagetool.org/
+[luaxml]: http://www.ctan.org/pkg/luaxml
+
+Changes:
+
+* Default file names used by the package have been changed:
+
+ <jobname>.spb => <jobname>.spell.bad
+ <jobname>.spg => <jobname>.spell.good
+ <jobname>.txt => <jobname>.spell.txt
+
+
+### v0.2 (2012-12-04)
+
+Fixes:
+
+* File `spelling.lua` could not be found by the LaTeX style file
+ ([GitHub issue 14][ghi#14]).
+
+[ghi#14]: https://github.com/sh2d/spelling/issues/14
+
+
+### v0.1 (2012-11-30)
+
+First upload to CTAN.
+
+
+
+<!--
+%%% Local Variables:
+%%% coding: utf-8
+%%% mode: markdown
+%%% End:
+-->
diff --git a/macros/luatex/generic/spelling/LICENSE b/macros/luatex/generic/spelling/LICENSE
new file mode 100644
index 0000000000..2244313901
--- /dev/null
+++ b/macros/luatex/generic/spelling/LICENSE
@@ -0,0 +1,416 @@
+The LaTeX Project Public License
+=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
+
+LPPL Version 1.3c 2008-05-04
+
+Copyright 1999 2002-2008 LaTeX3 Project
+ Everyone is allowed to distribute verbatim copies of this
+ license document, but modification of it is not allowed.
+
+
+PREAMBLE
+========
+
+The LaTeX Project Public License (LPPL) is the primary license under
+which the LaTeX kernel and the base LaTeX packages are distributed.
+
+You may use this license for any work of which you hold the copyright
+and which you wish to distribute. This license may be particularly
+suitable if your work is TeX-related (such as a LaTeX package), but
+it is written in such a way that you can use it even if your work is
+unrelated to TeX.
+
+The section `WHETHER AND HOW TO DISTRIBUTE WORKS UNDER THIS LICENSE',
+below, gives instructions, examples, and recommendations for authors
+who are considering distributing their works under this license.
+
+This license gives conditions under which a work may be distributed
+and modified, as well as conditions under which modified versions of
+that work may be distributed.
+
+We, the LaTeX3 Project, believe that the conditions below give you
+the freedom to make and distribute modified versions of your work
+that conform with whatever technical specifications you wish while
+maintaining the availability, integrity, and reliability of
+that work. If you do not see how to achieve your goal while
+meeting these conditions, then read the document `cfgguide.tex'
+and `modguide.tex' in the base LaTeX distribution for suggestions.
+
+
+DEFINITIONS
+===========
+
+In this license document the following terms are used:
+
+ `Work'
+ Any work being distributed under this License.
+
+ `Derived Work'
+ Any work that under any applicable law is derived from the Work.
+
+ `Modification'
+ Any procedure that produces a Derived Work under any applicable
+ law -- for example, the production of a file containing an
+ original file associated with the Work or a significant portion of
+ such a file, either verbatim or with modifications and/or
+ translated into another language.
+
+ `Modify'
+ To apply any procedure that produces a Derived Work under any
+ applicable law.
+
+ `Distribution'
+ Making copies of the Work available from one person to another, in
+ whole or in part. Distribution includes (but is not limited to)
+ making any electronic components of the Work accessible by
+ file transfer protocols such as FTP or HTTP or by shared file
+ systems such as Sun's Network File System (NFS).
+
+ `Compiled Work'
+ A version of the Work that has been processed into a form where it
+ is directly usable on a computer system. This processing may
+ include using installation facilities provided by the Work,
+ transformations of the Work, copying of components of the Work, or
+ other activities. Note that modification of any installation
+ facilities provided by the Work constitutes modification of the Work.
+
+ `Current Maintainer'
+ A person or persons nominated as such within the Work. If there is
+ no such explicit nomination then it is the `Copyright Holder' under
+ any applicable law.
+
+ `Base Interpreter'
+ A program or process that is normally needed for running or
+ interpreting a part or the whole of the Work.
+
+ A Base Interpreter may depend on external components but these
+ are not considered part of the Base Interpreter provided that each
+ external component clearly identifies itself whenever it is used
+ interactively. Unless explicitly specified when applying the
+ license to the Work, the only applicable Base Interpreter is a
+ `LaTeX-Format' or in the case of files belonging to the
+ `LaTeX-format' a program implementing the `TeX language'.
+
+
+
+CONDITIONS ON DISTRIBUTION AND MODIFICATION
+===========================================
+
+1. Activities other than distribution and/or modification of the Work
+are not covered by this license; they are outside its scope. In
+particular, the act of running the Work is not restricted and no
+requirements are made concerning any offers of support for the Work.
+
+2. You may distribute a complete, unmodified copy of the Work as you
+received it. Distribution of only part of the Work is considered
+modification of the Work, and no right to distribute such a Derived
+Work may be assumed under the terms of this clause.
+
+3. You may distribute a Compiled Work that has been generated from a
+complete, unmodified copy of the Work as distributed under Clause 2
+above, as long as that Compiled Work is distributed in such a way that
+the recipients may install the Compiled Work on their system exactly
+as it would have been installed if they generated a Compiled Work
+directly from the Work.
+
+4. If you are the Current Maintainer of the Work, you may, without
+restriction, modify the Work, thus creating a Derived Work. You may
+also distribute the Derived Work without restriction, including
+Compiled Works generated from the Derived Work. Derived Works
+distributed in this manner by the Current Maintainer are considered to
+be updated versions of the Work.
+
+5. If you are not the Current Maintainer of the Work, you may modify
+your copy of the Work, thus creating a Derived Work based on the Work,
+and compile this Derived Work, thus creating a Compiled Work based on
+the Derived Work.
+
+6. If you are not the Current Maintainer of the Work, you may
+distribute a Derived Work provided the following conditions are met
+for every component of the Work unless that component clearly states
+in the copyright notice that it is exempt from that condition. Only
+the Current Maintainer is allowed to add such statements of exemption
+to a component of the Work.
+
+ a. If a component of this Derived Work can be a direct replacement
+ for a component of the Work when that component is used with the
+ Base Interpreter, then, wherever this component of the Work
+ identifies itself to the user when used interactively with that
+ Base Interpreter, the replacement component of this Derived Work
+ clearly and unambiguously identifies itself as a modified version
+ of this component to the user when used interactively with that
+ Base Interpreter.
+
+ b. Every component of the Derived Work contains prominent notices
+ detailing the nature of the changes to that component, or a
+ prominent reference to another file that is distributed as part
+ of the Derived Work and that contains a complete and accurate log
+ of the changes.
+
+ c. No information in the Derived Work implies that any persons,
+ including (but not limited to) the authors of the original version
+ of the Work, provide any support, including (but not limited to)
+ the reporting and handling of errors, to recipients of the
+ Derived Work unless those persons have stated explicitly that
+ they do provide such support for the Derived Work.
+
+ d. You distribute at least one of the following with the Derived Work:
+
+ 1. A complete, unmodified copy of the Work;
+ if your distribution of a modified component is made by
+ offering access to copy the modified component from a
+ designated place, then offering equivalent access to copy
+ the Work from the same or some similar place meets this
+ condition, even though third parties are not compelled to
+ copy the Work along with the modified component;
+
+ 2. Information that is sufficient to obtain a complete,
+ unmodified copy of the Work.
+
+7. If you are not the Current Maintainer of the Work, you may
+distribute a Compiled Work generated from a Derived Work, as long as
+the Derived Work is distributed to all recipients of the Compiled
+Work, and as long as the conditions of Clause 6, above, are met with
+regard to the Derived Work.
+
+8. The conditions above are not intended to prohibit, and hence do not
+apply to, the modification, by any method, of any component so that it
+becomes identical to an updated version of that component of the Work as
+it is distributed by the Current Maintainer under Clause 4, above.
+
+9. Distribution of the Work or any Derived Work in an alternative
+format, where the Work or that Derived Work (in whole or in part) is
+then produced by applying some process to that format, does not relax or
+nullify any sections of this license as they pertain to the results of
+applying that process.
+
+10. a. A Derived Work may be distributed under a different license
+ provided that license itself honors the conditions listed in
+ Clause 6 above, in regard to the Work, though it does not have
+ to honor the rest of the conditions in this license.
+
+ b. If a Derived Work is distributed under a different license, that
+ Derived Work must provide sufficient documentation as part of
+ itself to allow each recipient of that Derived Work to honor the
+ restrictions in Clause 6 above, concerning changes from the Work.
+
+11. This license places no restrictions on works that are unrelated to
+the Work, nor does this license place any restrictions on aggregating
+such works with the Work by any means.
+
+12. Nothing in this license is intended to, or may be used to, prevent
+complete compliance by all parties with all applicable laws.
+
+
+NO WARRANTY
+===========
+
+There is no warranty for the Work. Except when otherwise stated in
+writing, the Copyright Holder provides the Work `as is', without
+warranty of any kind, either expressed or implied, including, but not
+limited to, the implied warranties of merchantability and fitness for a
+particular purpose. The entire risk as to the quality and performance
+of the Work is with you. Should the Work prove defective, you assume
+the cost of all necessary servicing, repair, or correction.
+
+In no event unless required by applicable law or agreed to in writing
+will The Copyright Holder, or any author named in the components of the
+Work, or any other party who may distribute and/or modify the Work as
+permitted above, be liable to you for damages, including any general,
+special, incidental or consequential damages arising out of any use of
+the Work or out of inability to use the Work (including, but not limited
+to, loss of data, data being rendered inaccurate, or losses sustained by
+anyone as a result of any failure of the Work to operate with any other
+programs), even if the Copyright Holder or said author or said other
+party has been advised of the possibility of such damages.
+
+
+MAINTENANCE OF THE WORK
+=======================
+
+The Work has the status `author-maintained' if the Copyright Holder
+explicitly and prominently states near the primary copyright notice in
+the Work that the Work can only be maintained by the Copyright Holder
+or simply that it is `author-maintained'.
+
+The Work has the status `maintained' if there is a Current Maintainer
+who has indicated in the Work that they are willing to receive error
+reports for the Work (for example, by supplying a valid e-mail
+address). It is not required for the Current Maintainer to acknowledge
+or act upon these error reports.
+
+The Work changes from status `maintained' to `unmaintained' if there
+is no Current Maintainer, or the person stated to be Current
+Maintainer of the work cannot be reached through the indicated means
+of communication for a period of six months, and there are no other
+significant signs of active maintenance.
+
+You can become the Current Maintainer of the Work by agreement with
+any existing Current Maintainer to take over this role.
+
+If the Work is unmaintained, you can become the Current Maintainer of
+the Work through the following steps:
+
+ 1. Make a reasonable attempt to trace the Current Maintainer (and
+ the Copyright Holder, if the two differ) through the means of
+ an Internet or similar search.
+
+ 2. If this search is successful, then enquire whether the Work
+ is still maintained.
+
+ a. If it is being maintained, then ask the Current Maintainer
+ to update their communication data within one month.
+
+ b. If the search is unsuccessful or no action to resume active
+ maintenance is taken by the Current Maintainer, then announce
+ within the pertinent community your intention to take over
+ maintenance. (If the Work is a LaTeX work, this could be
+ done, for example, by posting to comp.text.tex.)
+
+ 3a. If the Current Maintainer is reachable and agrees to pass
+ maintenance of the Work to you, then this takes effect
+ immediately upon announcement.
+
+ b. If the Current Maintainer is not reachable and the Copyright
+ Holder agrees that maintenance of the Work be passed to you,
+ then this takes effect immediately upon announcement.
+
+ 4. If you make an `intention announcement' as described in 2b. above
+ and after three months your intention is challenged neither by
+ the Current Maintainer nor by the Copyright Holder nor by other
+ people, then you may arrange for the Work to be changed so as
+ to name you as the (new) Current Maintainer.
+
+ 5. If the previously unreachable Current Maintainer becomes
+ reachable once more within three months of a change completed
+ under the terms of 3b) or 4), then that Current Maintainer must
+ become or remain the Current Maintainer upon request provided
+ they then update their communication data within one month.
+
+A change in the Current Maintainer does not, of itself, alter the fact
+that the Work is distributed under the LPPL license.
+
+If you become the Current Maintainer of the Work, you should
+immediately provide, within the Work, a prominent and unambiguous
+statement of your status as Current Maintainer. You should also
+announce your new status to the same pertinent community as
+in 2b) above.
+
+
+WHETHER AND HOW TO DISTRIBUTE WORKS UNDER THIS LICENSE
+======================================================
+
+This section contains important instructions, examples, and
+recommendations for authors who are considering distributing their
+works under this license. These authors are addressed as `you' in
+this section.
+
+Choosing This License or Another License
+----------------------------------------
+
+If for any part of your work you want or need to use *distribution*
+conditions that differ significantly from those in this license, then
+do not refer to this license anywhere in your work but, instead,
+distribute your work under a different license. You may use the text
+of this license as a model for your own license, but your license
+should not refer to the LPPL or otherwise give the impression that
+your work is distributed under the LPPL.
+
+The document `modguide.tex' in the base LaTeX distribution explains
+the motivation behind the conditions of this license. It explains,
+for example, why distributing LaTeX under the GNU General Public
+License (GPL) was considered inappropriate. Even if your work is
+unrelated to LaTeX, the discussion in `modguide.tex' may still be
+relevant, and authors intending to distribute their works under any
+license are encouraged to read it.
+
+A Recommendation on Modification Without Distribution
+-----------------------------------------------------
+
+It is wise never to modify a component of the Work, even for your own
+personal use, without also meeting the above conditions for
+distributing the modified component. While you might intend that such
+modifications will never be distributed, often this will happen by
+accident -- you may forget that you have modified that component; or
+it may not occur to you when allowing others to access the modified
+version that you are thus distributing it and violating the conditions
+of this license in ways that could have legal implications and, worse,
+cause problems for the community. It is therefore usually in your
+best interest to keep your copy of the Work identical with the public
+one. Many works provide ways to control the behavior of that work
+without altering any of its licensed components.
+
+How to Use This License
+-----------------------
+
+To use this license, place in each of the components of your work both
+an explicit copyright notice including your name and the year the work
+was authored and/or last substantially modified. Include also a
+statement that the distribution and/or modification of that
+component is constrained by the conditions in this license.
+
+Here is an example of such a notice and statement:
+
+ %% pig.dtx
+ %% Copyright 2005 M. Y. Name
+ %
+ % This work may be distributed and/or modified under the
+ % conditions of the LaTeX Project Public License, either version 1.3
+ % of this license or (at your option) any later version.
+ % The latest version of this license is in
+ % http://www.latex-project.org/lppl.txt
+ % and version 1.3 or later is part of all distributions of LaTeX
+ % version 2005/12/01 or later.
+ %
+ % This work has the LPPL maintenance status `maintained'.
+ %
+ % The Current Maintainer of this work is M. Y. Name.
+ %
+ % This work consists of the files pig.dtx and pig.ins
+ % and the derived file pig.sty.
+
+Given such a notice and statement in a file, the conditions
+given in this license document would apply, with the `Work' referring
+to the three files `pig.dtx', `pig.ins', and `pig.sty' (the last being
+generated from `pig.dtx' using `pig.ins'), the `Base Interpreter'
+referring to any `LaTeX-Format', and both `Copyright Holder' and
+`Current Maintainer' referring to the person `M. Y. Name'.
+
+If you do not want the Maintenance section of LPPL to apply to your
+Work, change `maintained' above into `author-maintained'.
+However, we recommend that you use `maintained', as the Maintenance
+section was added in order to ensure that your Work remains useful to
+the community even when you can no longer maintain and support it
+yourself.
+
+Derived Works That Are Not Replacements
+---------------------------------------
+
+Several clauses of the LPPL specify means to provide reliability and
+stability for the user community. They therefore concern themselves
+with the case that a Derived Work is intended to be used as a
+(compatible or incompatible) replacement of the original Work. If
+this is not the case (e.g., if a few lines of code are reused for a
+completely different task), then clauses 6b and 6d shall not apply.
+
+
+Important Recommendations
+-------------------------
+
+ Defining What Constitutes the Work
+
+ The LPPL requires that distributions of the Work contain all the
+ files of the Work. It is therefore important that you provide a
+ way for the licensee to determine which files constitute the Work.
+ This could, for example, be achieved by explicitly listing all the
+ files of the Work near the copyright notice of each file or by
+ using a line such as:
+
+ % This work consists of all files listed in manifest.txt.
+
+ in that place. In the absence of an unequivocal list it might be
+ impossible for the licensee to determine what is considered by you
+ to comprise the Work and, in such a case, the licensee would be
+ entitled to make reasonable conjectures as to which files comprise
+ the Work.
+
diff --git a/macros/luatex/generic/spelling/README b/macros/luatex/generic/spelling/README
new file mode 100644
index 0000000000..a5f2ee4158
--- /dev/null
+++ b/macros/luatex/generic/spelling/README
@@ -0,0 +1,70 @@
+This material is subject to the LaTeX Project Public License. See
+<http://www.latex-project.org/lppl/> for the details of that license.
+
+
+
+### Package information
+
+Package name: spelling
+Summary description: support for spell-checking of LuaTeX documents
+Version: v0.41
+Date: 2013-05-25
+License: [LPPL v1.3c](http://www.latex-project.org/lppl/lppl-1-3c.html)
+Maintenance status: maintained
+Current maintainer: Stephan Hennig, <sh2d@arcor.de>
+
+
+
+### Description
+
+This package supports spell-checking of TeX documents compiled with the
+LuaTeX engine. It can give visual feedback in PDF output similar to
+WYSIWYG word processors. The package relies on an external
+spell-checker application that can check a plain text file and output a
+list of bad spellings. The package should work with most
+spell-checkers, even dumb, TeX-unaware ones.
+
+
+
+### Development
+
+The development repository is currently hosted at
+[GitHub](https://github.com/sh2d/spelling/). Code documentation is in
+[LuaDoc](http://keplerproject.github.com/luadoc/) format and can be
+generated via
+
+ luadoc -d API *.lua
+
+Bugs and a wish list can be found in the
+[issue tracker](https://github.com/sh2d/spelling/issues/). Patches
+welcome!
+
+
+_Happy TeXing!_
+
+
+
+### File list
+
+ CHANGES
+ LICENSE
+ README
+ spelling.sty
+ spelling-doc.bad
+ spelling-doc.tex
+ spelling-doc-lst-lua.tex
+ spelling-main.lua
+ spelling-recurse.lua
+ spelling-stage-1.lua
+ spelling-stage-2.lua
+ spelling-stage-3.lua
+ spelling-stage-4.lua
+
+
+
+<!--
+%%% Local Variables:
+%%% coding: utf-8
+%%% mode: markdown
+%%% End:
+-->
diff --git a/macros/luatex/generic/spelling/spelling-doc-lst-lua.tex b/macros/luatex/generic/spelling/spelling-doc-lst-lua.tex
new file mode 100644
index 0000000000..4fee01d300
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-doc-lst-lua.tex
@@ -0,0 +1,84 @@
+%%% spelling-doc-lst-lua.tex
+%%% Copyright 2013 Stephan Hennig
+%%
+%% This work may be distributed and/or modified under the conditions of
+%% the LaTeX Project Public License, either version 1.3 of this license
+%% or (at your option) any later version. The latest version of this
+%% license is in http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+\lstdefinelanguage[5.2]{Lua}{%
+ alsoletter={.},%
+ % language keywords
+ morekeywords=[1]{%
+ and,break,do,else,elseif,end,%
+ false,for,function,goto,if,in,%
+ local,nil,not,or,repeat,return,%
+ then,true,until,while,%
+ },%
+ % standard library identifiers
+ morekeywords=[2]{%
+ % basic library
+ assert,collectgarbage,dofile,error,_G,getmetatable,ipairs,%
+ load,loadfile,next,pairs,pcall,print,rawequal,rawget,rawlen,rawset,%
+ select,setmetatable,tonumber,tostring,type,_VERSION,xpcall,%
+ % coroutine library
+ coroutine.create,coroutine.resume,coroutine.running,%
+ coroutine.status,coroutine.wrap,coroutine.yield,%
+ % package library
+ require,package.config,package.cpath,package.loaded,%
+ package.loadlib,package.path,package.preload,package.searchers,%
+ package.searchpath,%
+ % string library
+ string.byte,string.char,string.dump,string.find,string.format,%
+ string.gmatch,string.gsub,string.len,string.lower,string.match,%
+ string.rep,string.reverse,string.sub,string.upper,%
+ % table library
+ table.concat,table.insert,table.pack,table.remove,table.sort,%
+ table.unpack,%
+ % mathematical library
+ math.abs,math.acos,math.asin,math.atan,math.atan2,math.ceil,%
+ math.cos,math.cosh,math.deg,math.exp,math.floor,math.fmod,%
+ math.frexp,math.huge,math.ldexp,math.log,math.max,math.min,%
+ math.modf,math.pi,math.pow,math.rad,math.random,math.randomseed,%
+ math.sin,math.sinh,math.sqrt,math.tan,math.tanh,%
+ % bit library
+ bit32.arshift,bit32.band,bit32.bnot,bit32.bor,bit32.btest,%
+ bit32.bxor,bit32.extract,bit32.replace,bit32.lrotate,bit32.lshift,%
+ bit32.rrotate,bit32.rshift,%
+ % io library
+ io.close,io.flush,io.input,io.lines,io.open,io.output,io.popen,%
+ io.read,io.stderr,io.stdin,io.stdout,io.tmpfile,io.type,io.write,%
+ % os library
+ os.clock,os.date,os.difftime,os.execute,os.exit,os.getenv,%
+ os.remove,os.rename,os.setlocale,os.time,os.tmpname,%
+ % debug library
+ debug.debug,debug.gethook,debug.getinfo,debug.getlocal,%
+ debug.getmetatable,debug.getregistry,debug.getupvalue,%
+ debug.getuservalue,debug.sethook,debug.setlocal,debug.setmetatable,%
+ debug.setupvalue,debug.setuservalue,debug.traceback,%
+ debug.upvalueid,debug.upvaluejoin,%
+ },%
+ % add environment
+ morekeywords=[2]{_ENV},%
+ %
+ sensitive=true,%
+ % single line comments
+ morecomment=[l]{--},%
+ % multi line comments
+ morecomment=[s]{--[[}{]]},%
+ morecomment=[s]{--[=[}{]=]},%
+ morecomment=[s]{--[==[}{]==]},%
+ morecomment=[s]{--[===[}{]===]},%
+ % backslash escaped strings
+ morestring=[b]",%
+ morestring=[b]',%
+ % multi line strings
+ morestring=[s]{[[}{]]},%
+ morestring=[s]{[=[}{]=]},%
+ morestring=[s]{[==[}{]==]},%
+ morestring=[s]{[===[}{]===]},%
+ % labels
+ moredelim=[s][keywordstyle3]{::}{::},%
+}[keywords,comments,strings]%
diff --git a/macros/luatex/generic/spelling/spelling-doc.bad b/macros/luatex/generic/spelling/spelling-doc.bad
new file mode 100644
index 0000000000..9e34486307
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-doc.bad
@@ -0,0 +1,4 @@
+Ther
+mispellings
+spellling
+foo’s
diff --git a/macros/luatex/generic/spelling/spelling-doc.pdf b/macros/luatex/generic/spelling/spelling-doc.pdf
new file mode 100644
index 0000000000..527d70d6ac
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-doc.pdf
Binary files differ
diff --git a/macros/luatex/generic/spelling/spelling-doc.tex b/macros/luatex/generic/spelling/spelling-doc.tex
new file mode 100644
index 0000000000..79e59b2aec
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-doc.tex
@@ -0,0 +1,830 @@
+%%% spelling-doc.tex
+%%% Copyright 2012, 2013 Stephan Hennig
+%%
+%% This work may be distributed and/or modified under the conditions of
+%% the LaTeX Project Public License, either version 1.3 of this license
+%% or (at your option) any later version. The latest version of this
+%% license is in http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+%% See file README for more information.
+%%
+\documentclass[11pt]{article}
+\usepackage{fontspec}
+\defaultfontfeatures{Ligatures=TeX}
+\usepackage{multicol}
+\usepackage[rgb, x11names]{xcolor}
+\usepackage{listings}
+\input{\jobname-lst-lua.tex}
+\lstset{
+ basicstyle=\ttfamily,
+ columns=spaceflexible,
+}
+% Short-cut for non-language code snippets.
+\lstMakeShortInline\|
+% Short-cut for LaTeX code snippets.
+\lstMakeShortInline[
+language={[LaTeX]TeX},
+basicstyle=\ttfamily,
+]°
+\lstdefinestyle{Lua}{
+ language=[5.2]Lua,
+ keywordstyle=\bfseries\color{Blue4},
+ keywordstyle=[2]\bfseries\color{RoyalBlue3},
+ keywordstyle=[3]\bfseries\color{Purple3},
+ stringstyle=\bfseries\color{Coral4},
+ commentstyle=\itshape\color{Green4},
+}
+\usepackage{xspace}
+\usepackage{array}
+\usepackage{booktabs}
+\usepackage[latin, UKenglish]{babel}
+\usepackage{hyperref}
+\hypersetup{
+ pdftitle={spelling},
+ pdfauthor={Stephan Hennig},
+ pdfkeywords={spell-checking, spelling, TeX, LuaTeX}
+}
+\hypersetup{
+ english,% For \autoref.
+ pdfstartview={XYZ null null null},% Zoom factor is determined by viewer.
+ colorlinks,
+ linkcolor=RoyalBlue3,
+ urlcolor=Chocolate4,
+ citecolor=DeepPink2
+}
+\usepackage{spelling}
+\spellingreadbad{\jobname.bad}
+\newcommand*{\pkg}{\textsf{spelling}}
+\newcommand*{\acr}[1]{\mbox{\scshape#1}}
+\newcommand*{\descr}[1]{〈\emph{#1}〉}
+\newcommand*{\cmd}[1]{\mbox{\ttfamily\textbackslash#1}}
+\newcommand*{\macro}[1]{\cmd{#1}\marginpar{\cmd{#1}}}
+\newcommand*{\latinphrase}[1]{\foreignlanguage{latin}{\emph{#1}}}
+\newcommand*{\lpcf}{\latinphrase{cf.}\xspace}
+\newcommand*{\lpeg}{\latinphrase{e.\,g.}\xspace}
+\newcommand*{\lpetc}{\latinphrase{etc.}\xspace}
+\newcommand*{\lpie}{\latinphrase{i.\,e.}\xspace}
+\begin{document}
+\author{Stephan Hennig\thanks{sh2d@arcor.de}}
+\title{\pkg\thanks{This document describes the \pkg\ package v0.41.}}
+\maketitle
+
+
+\begin{abstract}
+ This package supports spell-checking of \TeX\ documents compiled with
+ the Lua\TeX\ engine. It can give visual feedback in \acr{pdf} output
+ similar to \acr{wysiwyg} word processors. The package relies on an
+ external spell-checker application that can check a plain text file
+ and output a list of bad spellings. The package should work with most
+ spell-checkers, even dumb, \TeX-unaware ones.
+
+ \emph{Warning! This package is in a very early state. Everything may
+ change!}
+\end{abstract}
+
+\begin{multicols}{2}
+\small
+% Set toc entries ragged right. Trick taken from tocloft.pdf.
+\makeatletter
+\renewcommand{\@tocrmarg}{2.55em plus1fil}
+\makeatother
+\tableofcontents
+\end{multicols}
+
+
+\section{Introduction}
+\label{sec:intro}
+
+Ther%
+\footnote{A footnote containing mispellings.}
+%
+are three main approaches to spell-checking \TeX\ documents:
+
+\begin{enumerate}
+
+\item checking spelling in the |.tex| source file,
+
+\item converting a |.tex| file to another format, for which a proven
+  spell-checking solution exists,
+
+\item checking spelling after a |.tex| file has been processed by \TeX.
+
+\end{enumerate}
+
+All of these approaches have their strengths and weaknesses. This
+package follows the third approach, providing some unique features:
+
+\begin{itemize}
+
+\item In traditional solutions, text is extracted from typeset
+ \acr{dvi}, \acr{ps} or \acr{pdf} files, including hyphenated words.
+ To avoid (lots of) false positives being reported by the
+ spell-checker, hyphenation needs to be switched off during the \TeX\
+ run. That is, one doesn't work on the original document any more.
+
+ In contrast to that, the \pkg\ package works transparently on the
+ original |.tex| source file. Text is extracted \emph{during}
+ typesetting, after Lua\TeX\ has applied its catcode and macro
+ machinery, but before hyphenation takes place.
+
+\item The \pkg\ package can highlight words with known incorrect
+ spelling in \acr{pdf} output, giving visual feedback similar to
+ \acr{wysiwyg} word processors.%
+ \footnote{Currently, only colouring words is implemented.}
+
+\end{itemize}
+
+
+\section{Usage}
+\label{sec:usage}
+
+The \pkg\ package requires the Lua\TeX\ engine. All functionality of
+the package is implemented in Lua. The \LaTeX\ interface, which is
+described below, is effectively a wrapper around the Lua interface.
+
+\emph{Implementing such wrappers for other formats shouldn't be too
+ difficult. The author is a \LaTeX-only user, though, and therefore
+ grateful for contributions. By the way, the \LaTeX\ package needs
+ some polishing, too, \lpeg, a key-value interface is desirable.
+ Patches welcome!}
+
+
+\subsection{Work-flow}
+\label{sec:work-flow}
+
+Here's a short outline of how using the \pkg\ package fits into the
+general process of compiling a document with Lua\TeX:
+
+\begin{enumerate}
+
+\item After loading the package in the preamble of a |.tex| source file,
+ a list of bad spellings is read from a file (if that file exists).
+
+\item During the Lua\TeX\ run, text is extracted from pages and all
+ words are checked against the list of bad spellings. Words with a
+ known incorrect spelling are highlighted in \acr{pdf} output.
+
+\item At the end of the Lua\TeX\ run, in addition to the \acr{pdf} file,
+ a text file is written, containing most of the text of the typeset
+ document.
+
+\item The text file is then checked by your favourite external
+ spell-checker application, \lpeg, Aspell or Hunspell. The
+ spell-checker should be able to write a list of bad spellings to a
+ file. Otherwise, visual feedback in \acr{pdf} output won't work.
+
+\item Visually minded people may now compile their document a second
+  time. This time, the new list of bad spellings is read in and words
+ with incorrect spelling found by the spell-checker should now be
+ highlighted in \acr{pdf} output. Users can then apply the necessary
+ corrections to the |.tex| source file.
+
+\end{enumerate}
+
+However spell-checker output is employed, users not interested in
+visual feedback (because their spell-checker only has an interactive
+mode or because they prefer grabbing bad spellings from a file
+directly) can also benefit from this package. With it, Lua\TeX\ writes
+a plain text file that is particularly well suited as spell-checker
+input, because it contains no hyphenated words (and no macros or
+active characters). That way, any spell-checker application, even
+\TeX-unaware ones, can be used to check the spelling of \TeX\
+documents.
+
+
+\subsection{Word lists}
+\label{sec:wordlists}
+
+As described above, after loading the \pkg\ package, a list of bad
+spellings is read from a file \descr{jobname}.|spell.bad|, if that file
+exists. Words found in this file are stored in an internal list of bad
+spellings and are later used for highlighting spelling mistakes in
+\acr{pdf} output. Additionally, a list of good spellings is read from a
+file \descr{jobname}|.spell.good|, if that file exists. Words found in
+the latter file are stored in an internal list of good spellings. File
+format for both files is one word per line. Files must be in the
+\acr{utf-8} encoding. Letter case is significant.
+
+A word in the document is highlighted, if it occurs in the internal list
+of bad spellings, but not in the internal list of good spellings. That
+is, known good spellings take precedence over known bad spellings.
+
+Users can load additional files containing lists of bad or good
+spellings with macros \macro{spellingreadbad} and
+\macro{spellingreadgood}. Argument to both macros is a file name. If a
+file cannot be found, a warning is written to the console and |log| file
+and compilation continues. As an example, the command
+
+\begin{lstlisting}[language={[LaTeX]TeX}]
+\spellingreadgood{myproject.whitelist}
+\end{lstlisting}
+%
+reads words from a file |myproject.whitelist| and adds them to the list
+of good spellings.
+
+Known good spellings can be used to deal with words wrongly reported as
+bad spellings by the spell-checker (false positives). But note, most
+spell-checkers also provide means to deal with unknown words via
+additional dictionaries. It is recommended to configure your
+spell-checker to report as few false positives as possible.
+
+
+\subsection{Match rules}
+\label{sec:matchrules}
+
+\emph{This section describes an advanced feature. You may safely skip
+ this section upon first reading.}
+
+The \pkg\ package provides an additional way to deal with bad and good
+spellings: match rules. Match rules can be used to recognise regular
+patterns within certain ‘words’. A typical example is bibliographic
+references like \emph{Lin86}, which are often flagged by
+spell-checkers, but need not be highlighted as they are generated by
+\TeX.
+
+There are two kinds of rules, bad and good rules. A rule is a Lua
+function whose boolean return value indicates whether a word
+\emph{matches} the rule. A bad rule should return a true value for all
+strings identified as bad spellings, otherwise a false value. A good
+rule should return a true value for all strings identified as good
+spellings, otherwise a false value. A word in the document is
+highlighted if it matches any bad rule, but no good rule.
+
+Function arguments are a \emph{raw} string and a \emph{stripped} string.
+The raw string is a string representing a word as it is found in the
+document possibly surrounded by punctuation characters. The stripped
+string is the same string with surrounding punctuation already stripped.
+
+As an example, the rule in \autoref{lst:mr-three-letter-words} matches
+all words consisting of exactly three letters. The function matches the
+stripped string against the Lua string pattern |^%a%a%a$| via function
+|unicode.utf8.find| from the Selene Unicode library. The latter
+function is a \acr{utf-8} capable version of Lua's built-in function
+|string.find|. It returns |nil| (a false value) if there has been no
+match and a number (a true value) if there has been a match. The
+pattern |%a| represents a character class matching a single letter.
+Characters |^| and |$| are anchors for the beginning and the end of the
+string in question. Note, pattern |%a%a%a| without anchors would match
+any string containing three letters in a row. More information about
+Lua string patterns can be found in the Lua reference manual%
+\footnote{\url{http://www.lua.org/manual/5.2/manual.html\#6.4}}%
+%
+, the Selene Unicode library documentation%
+\footnote{\url{https://github.com/LuaDist/slnunicode/blob/master/unitest}}
+%
+and in the Unicode standard%
+\footnote{\url{http://www.unicode.org/Public/4.0-Update1/UCD-4.0.1.html\#General_Category_Values}}%
+.
+
+\suppressfloats[b]
+
+\begin{lstlisting}[style=Lua, float, label=lst:mr-three-letter-words, caption={Matching three-letter words.}]
+function three_letter_words(raw, stripped)
+ return unicode.utf8.find(stripped, '^%a%a%a$')
+end
+\end{lstlisting}
+
+\autoref{lst:mr-double-punctuation} shows a rule matching all ‘words’
+containing double punctuation. Note, how the raw string is examined
+instead of the stripped one.
+
+\begin{lstlisting}[style=Lua, float, label=lst:mr-double-punctuation, caption={Matching double punctuation.}]
+function double_punctuation(raw, stripped)
+ return unicode.utf8.find(raw, '%p%p')
+end
+\end{lstlisting}
+
+The rule in \autoref{lst:mr-bibtex-alpha} combines the results of three
+string searches to match bibliographic references as generated by the
+Bib\TeX\ style \emph{alpha}.
+
+\begin{lstlisting}[style=Lua, float, label=lst:mr-bibtex-alpha, caption={Matching references generated by the Bib\TeX\ style \emph{alpha}.}]
+function bibtex_alpha(raw, stripped)
+ return unicode.utf8.find(stripped, '^%u%l%l?%d%d$')
+ or unicode.utf8.find(stripped, '^%u%u%u?%u?%d%d$')
+ or unicode.utf8.find(stripped, '^%u%u%u%+%d%d$')
+end
+\end{lstlisting}
+
+Match rules have to be provided by means of a Lua module. Such modules
+can be loaded with the \macro{spellingmatchrules} command. Argument is
+a module name. To tell bad rules from good rules, the table returned by
+the module must follow this convention: function identifiers
+representing bad and good match rules are prefixed |bad_rule_| and
+|good_rule_|, respectively. The rest of an identifier is irrelevant.
+Other and non-function identifiers are ignored.
+
+\autoref{lst:mr-module} shows an example module declaring the rules from
+\autoref{lst:mr-three-letter-words} and
+\autoref{lst:mr-double-punctuation} as \emph{bad} match rules and the
+rule from \autoref{lst:mr-bibtex-alpha} as a \emph{good} match rule.
+Note, how function identifiers are made local and how exported function
+identifiers are prefixed |bad_rule_| and |good_rule_|, while local
+function identifiers have no prefixes. When the module resides in a
+file named |myproject.rules.lua|, it can be loaded in the preamble of a
+document via
+\begin{lstlisting}[language={[LaTeX]TeX}]
+\spellingmatchrules{myproject.rules}
+\end{lstlisting}
+
+\begin{lstlisting}[style=Lua, float=p, label=lst:mr-module, caption={A Lua module containing two bad and one good match rule.}]
+-- Module table.
+local M = {}
+
+-- Import Selene Unicode library.
+local unicode = require('unicode')
+-- Add short-cut.
+local Ufind = unicode.utf8.find
+
+-- Local function matching three letter words.
+local function three_letter_words(raw, stripped)
+ return Ufind(stripped, '^%a%a%a$')
+end
+-- Make this a bad rule.
+M.bad_rule_three_letter_words = three_letter_words
+
+local function double_punctuation(raw, stripped)
+ return Ufind(raw, '%p%p')
+end
+M.bad_rule_double_punctuation = double_punctuation
+
+local function bibtex_alpha(raw, stripped)
+ return Ufind(stripped, '^%u%l%l?%d%d$')
+ or Ufind(stripped, '^%u%u%u?%u?%d%d$')
+ or Ufind(stripped, '^%u%u%u%+%d%d$')
+end
+M.good_rule_bibtex_alpha = bibtex_alpha
+
+-- Export module table.
+return M
+\end{lstlisting}
+
+How are match rules and lists of bad and good spellings related?
+Internally, the lists of bad and good spellings are referred to by two
+special default match rules that look up raw and stripped strings and
+return a true value if either argument has been found in the
+corresponding list. Since good rules take precedence over bad rules, an
+entry in the list of good spellings takes precedence over any
+user-supplied bad rule. Likewise, any user-supplied good rule takes
+precedence over an entry in the list of bad spellings.
+
+\paragraph{Some final remarks on match rules} It must be stressed that
+the boolean return value of a match rule \emph{does not} indicate
+whether a spelling is bad or good, but whether a word matches a certain
+rule or not. Whether it is a bad or a good spelling depends on the name
+of the match rule in the module table.
+
+Match rules are only called upon the first occurrence of a spelling in a
+document. Whether a spelling needs to be highlighted is stored in a
+cache table. Subsequent occurrences of a spelling just need a table
+look-up to determine the highlighting status. For that reason,
+it is safe to do relatively expensive operations within a match rule
+without affecting compilation time too much. Nevertheless, match rules
+should be stated as efficiently as possible.%
+\footnote{Some Lua performance tips can be found in the book \emph{Lua
+ Programming Gems} by Figueiredo, Celes and Ierusalimschy
+ \emph{(eds.)}, 2008, ch.~2. That chapter is also available online at
+ \url{http://www.lua.org/gems/}.}
+
+When written without care, match rules can easily produce false
+positives as well as false negatives. While false positives in bad
+rules and false negatives in good rules can easily be spotted due to the
+unexpected highlighting of words, the other cases are more problematic.
+To avoid all kinds of false results, match rules should be as specific
+as possible.
+
+
+\subsection{Highlighting spellling mistakes}
+\label{sec:highlight}
+
+\paragraph{Enabling/disabling} Highlighting spelling mistakes (words
+with known incorrect spelling) in \acr{pdf} output can be toggled on and
+off with command \macro{spellinghighlight}. If the argument is |on|,
+highlighting is enabled. For other arguments, highlighting is disabled.
+Highlighting is enabled, by default.
+
+\paragraph{Colour} The colour used for highlighting bad spellings can be
+determined by command \cmd{spellinghighlightcolor}. Argument is a
+colour statement in the \acr{pdf} language. As an example, the colour
+red in the \acr{rgb} colour space is represented by the statement %
+|1 0 0 rg|. In the \acr{cmyk} colour space, a reddish colour is
+represented by |0 1 1 0 k|. Default colour used for highlighting is %
+|1 0 0 rg|, \lpie, red in the \acr{rgb} colour space.
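+
+As a minimal illustration using the colour statements quoted above
+(assuming, as with the other \pkg\ commands, that the argument is given
+in braces), the following lines enable highlighting and switch to the
+reddish \acr{cmyk} colour:
+
+\begin{lstlisting}[language={[LaTeX]TeX}]
+\spellinghighlight{on}
+\spellinghighlightcolor{0 1 1 0 k}
+\end{lstlisting}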
+
+
+\subsection{Text output}
+\label{sec:textoutput}
+
+\paragraph{Text file} After loading the \pkg\ package, at the end of the
+Lua\TeX\ run, a text file is written that contains most of the document
+text. The text file is not a close textual rendering of the typeset
+document, but serves as input for your favourite spell-checker
+application. It
+contains the document text in a simple format: paragraphs separated by
+blank lines. A paragraph is anything that, during typesetting, starts
+with a |local_par| whatsit node in the node list representing a typeset
+page of the original document, \lpeg, paragraphs in running text,
+footnotes, marginal notes, (in-lined) °\parbox° commands or cells from
+°p°-like table columns \lpetc
+
+Paragraphs consist of words separated by spaces. A word is the textual
+representation of a chain of consecutive nodes of type |glyph|, |disc|
+or |kern|. Boxes are processed transparently. That is, the \pkg\
+package (highly imperfectly) tries to recognise as a single word what in
+typeset output looks like a single word. As an example, the \LaTeX\
+code
+
+\begin{center}
+ \begin{tabular}{c}
+\begin{lstlisting}[language={[LaTeX]TeX}]
+foo\mbox{'s bar}s
+\end{lstlisting}
+ \end{tabular}
+\end{center}
+which is typeset as
+
+\begin{center}
+ foo\mbox{'s bar}s
+\end{center}
+is considered two words \textit{foo's} and \textit{bars}, instead of the
+four words \textit{foo}, \textit{'s}, \textit{bar} and~\textit{s}.%
+\footnote{This document has been compiled with a custom list of bad
+ spellings, which is why the word \emph{foo's} should be highlighted.}
+
+\paragraph{Enabling/disabling} Text output can be toggled on and off
+with command \macro{spellingoutput}. If the argument is |on|, text
+output is enabled. For other arguments, text output is disabled. Text
+output is enabled, by default.
+
+\paragraph{File name} \hspace{0pt plus 5em} Text output file name can be
+configured via command \macro{spellingoutputname}. Argument is the new
+file name. Default text output file name is
+\descr{jobname}|.spell.txt|.
+
+\paragraph{Line length} In text output, paragraphs can either be put on
+a single line or broken into lines of a fixed length. The behaviour can
+be controlled via command \macro{spellingoutputlinelength}. Argument is
+a number. If the number is less than~1, paragraphs are put on a single
+line. For larger arguments, the number specifies maximum line length.
+Note, lines are broken at spaces only. Words longer than maximum line
+length are put on a single line exceeding maximum line length. Default
+line length is~72.
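+
+As a short sketch (the file name is chosen for illustration only), the
+following preamble lines redirect text output and put every paragraph
+on a single line:
+
+\begin{lstlisting}[language={[LaTeX]TeX}]
+\spellingoutputname{myproject.spell.txt}
+\spellingoutputlinelength{0}
+\end{lstlisting}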
+
+
+\subsection{Text extraction}
+\label{sec:textextraction}
+
+\paragraph{Enabling/disabling} Text extraction can be enabled and
+disabled in the document via command \macro{spellingextract}. If the
+argument is |on|, text extraction is enabled. For other arguments, text
+extraction is disabled. The command should be used in vertical mode,
+\lpie, outside paragraphs. If text extraction is disabled in the
+document preamble, an empty text file is written at the end of the
+Lua\TeX\ run. Text extraction is enabled, by default.
+
+Note, text extraction and visual feedback are orthogonal features. That
+is, if text extraction is disabled for part of a document, \lpeg, a long
+table, words with a known incorrect spelling are still highlighted in
+that part.
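+
+As a sketch, text extraction might be switched off around a long table
+like this (the table code is only a placeholder):
+
+\begin{lstlisting}[language={[LaTeX]TeX}]
+\spellingextract{off}
+\begin{tabular}{ll}
+  % ... table rows that need not be spell-checked ...
+\end{tabular}
+\spellingextract{on}
+\end{lstlisting}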
+
+
+\subsection{Code point mapping}
+\label{sec:cp-mapping}
+
+As explained in \autoref{sec:textoutput}, the text file written at the
+end of the Lua\TeX\ run is in the \acr{utf-8} encoding. Unicode
+contains a wealth of code points with a special meaning, such as
+ligatures, alternative letters, symbols \lpetc Unfortunately, not all
+spell-checker applications are smart enough to correctly interpret all
+Unicode code points that may occur in a document. For that reason, a
+code point mapping feature has been implemented that allows for mapping
+certain Unicode code points that may appear in a node list to arbitrary
+strings in text output. A typical example is to map ligatures to the
+characters corresponding to their constituting letters. The default
+mappings applied can be found in \autoref{tab:cp-mapping}.
+
+\begin{table}
+ \begin{minipage}{1.0\linewidth}
+ \centering
+
+ \newcommand*{\coltitle}[2]{%
+ \normalfont%
+ \vbox{
+ \hbox{\strut#1}
+ \hbox{\strut#2}
+ }%
+ }
+
+ \begin{tabular}{>{\ttfamily}l>{\fontspec{Linux Libertine
+ O}}l>{\ttfamily}l>{\ttfamily}l}
+ \normalfont Unicode name & \coltitle{sample}{glyph\footnote{Sample
+ glyphs are taken from font \emph{Linux Libertine~O}.}} &
+ \coltitle{code}{point} & \coltitle{target}{characters}\\
+ \addlinespace
+ \toprule
+ \addlinespace
+
+ LATIN CAPITAL LIGATURE IJ & ^^^^0132 & 0x0132 & IJ \\
+ LATIN SMALL LIGATURE IJ & ^^^^0133 & 0x0133 & ij \\
+ LATIN CAPITAL LIGATURE OE & ^^^^0152 & 0x0152 & OE \\
+ LATIN SMALL LIGATURE OE & ^^^^0153 & 0x0153 & oe \\
+ LATIN SMALL LETTER LONG S & ^^^^017f & 0x017f & s \\
+ \addlinespace
+ LATIN SMALL LIGATURE FF & ^^^^fb00 & 0xfb00 & ff \\
+ LATIN SMALL LIGATURE FI & ^^^^fb01 & 0xfb01 & fi \\
+ LATIN SMALL LIGATURE FL & ^^^^fb02 & 0xfb02 & fl \\
+ LATIN SMALL LIGATURE FFI & ^^^^fb03 & 0xfb03 & ffi \\
+ LATIN SMALL LIGATURE FFL & ^^^^fb04 & 0xfb04 & ffl \\
+ LATIN SMALL LIGATURE LONG S T & ^^^^fb05 & 0xfb05 & st \\
+ LATIN SMALL LIGATURE ST & ^^^^fb06 & 0xfb06 & st \\
+ \end{tabular}
+
+ \caption{Default code point mappings.}
+ \label{tab:cp-mapping}
+
+ \end{minipage}
+\end{table}
+
+Additional mappings can be declared by command \macro{spellingmapping}.
+This command takes two arguments, a number that refers to the Unicode
+code point, and a sequence of arbitrary characters that is the mapping
+target. The code point number must be in a format that can be parsed by
+Lua. The characters must be in the \acr{utf-8} encoding.
+
+New mappings only have effect on the following document text. The
+command should therefore be used in the document preamble. As an
+example, the character |A| can be mapped to |Z| and \latinphrase{vice
+ versa} with the following code:
+
+\begin{lstlisting}[language={[LaTeX]TeX}]
+\spellingmapping{65}{Z}% A => Z
+\spellingmapping{90}{A}% Z => A
+\end{lstlisting}
+
+Another command \macro{spellingclearallmappings} can be used to remove
+all existing code point mappings.
+
+
+\subsection{Tables}
+\label{sec:tables}
+
+How do tables fit into the simple text file format that has only
+paragraphs and blank lines as described in \autoref{sec:textoutput}?
+What is a paragraph with regards to tables? A whole table? A row? A
+single cell?
+
+By default, only text from cells in °p°(aragraph)-like columns is put on
+their own paragraph, because the corresponding node list branches
+contain a |local_par| whatsit node (\lpcf \autoref{sec:textoutput}).
+The behaviour can be changed with the \macro{spellingtablepar} command.
+This command takes a number as its argument. If the argument is~0, the
+behaviour is as described above. If the argument is~1, a blank line is
+inserted before and after every table row (but at most once between
+table rows). If the argument is~2, a blank line is inserted before and
+after every table cell. By default, no blank lines are inserted.
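+
+For instance, to get a blank line before and after every table row in
+the text output file, the following line could be put into the
+preamble:
+
+\begin{lstlisting}[language={[LaTeX]TeX}]
+\spellingtablepar{1}
+\end{lstlisting}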
+
+
+\section{LanguageTool support}
+\label{sec:languagetool}
+
+Installing spell-checkers and dictionaries can be a difficult task if
+there are no pre-built packages available for an architecture. That's
+one reason the \pkg\ package is rather spell-checker agnostic and the
+manual doesn't recommend a particular spell-checker application.
+Another reason is that there is no single best spell-checker. The only
+recommendation the author makes is not to trust one spell-checker,
+but to use multiple spell-checkers at the same time, with different
+dictionaries or, better yet, different checking engines under the hood.
+
+Among the set of options available, LanguageTool%
+\footnote{\url{http://www.languagetool.org/}}%
+%
+, a style and grammar checker that can also check spelling since
+version~1.8, deserves some notice for its portability, ease of
+installation and active development. For these reasons, the \pkg\
+package provides explicit LanguageTool support. LanguageTool uses
+Hunspell as the spell-checking engine, augmenting results with a rule
+based engine and a morphological analyser (depending on the language).
+The \pkg\ package can parse LanguageTool's error reports in the
+\acr{xml} format, pick those errors that are spelling related and use
+them to highlight bad spellings.%
+\footnote{Highlighting style and grammar errors found by LanguageTool
+ should be possible, but requires major restructuring of the \pkg\
+ package.}
+
+
+\subsection{Installation}
+\label{sec:lt-installation}
+
+Here are some brief installation instructions for the stand-alone
+version of LanguageTool (tested with LanguageTool~2.1). The stand-alone
+version contains a \acr{gui} as well as a command-line interface. For
+the \pkg\ package, the latter is needed.
+
+\begin{enumerate}
+
+\item LanguageTool is primarily written in Java. Make sure a recent
+ Java Runtime Environment (\acr{jre}) is installed.
+
+\item\label{enum:run-java} Open a command-line and type
+
+\begin{lstlisting}
+java -version
+\end{lstlisting}
+%
+ If you get an error message, find out the full path to the Java
+ executable (called |java.exe| on Windows) for later reference.
+
+\item Download the stand-alone version of LanguageTool (should be a
+ \acr{zip} archive).
+
+\item Uncompress the downloaded archive to a location of your choice.
+
+\item Open a command-line in the directory containing file
+ |languagetool-commandline.jar| and type
+
+\begin{lstlisting}[escapeinside=°°]
+°\descr{path to}°/java -jar languagetool-commandline.jar --help
+\end{lstlisting}
+%
+ Prepending the path to the Java executable is optional, depending on
+ the result in step~\ref{enum:run-java}. If you now see a list of
+ LanguageTool's command-line options rush by, all is well.
+
+\item For easier access to LanguageTool, create a small batch script and
+ put that somewhere into the |PATH|.
+
+ \begin{itemize}
+
+ \item For users of unixoide systems, the script might look like
+
+\begin{lstlisting}[escapeinside=°°]
+#!/bin/sh
+°\descr{path to}°/java -jar °\descr{path to}°/languagetool-commandline.jar $*
+\end{lstlisting}
+%
+ where \texttt{\descr{path to}} should point to the Java executable
+ (optional) and file |languagetool-commandline.jar| (mandatory). If
+ the script is named |lt.sh|, you should be able to run LanguageTool
+ on the command shell by typing, \lpeg,
+
+\begin{lstlisting}
+sh lt.sh --version
+\end{lstlisting}
+%
+ Don't forget to put the script into the |PATH|! For other ways of
+ making scripts executable, please consult the operating system
+ documentation.
+
+ \item For Windows users, the script might look like
+
+\begin{lstlisting}[escapeinside=°°]
+@echo off
+°\descr{path to}°\java -jar °\descr{path to}°\languagetool-commandline.jar %*
+\end{lstlisting}
+%
+ where \texttt{\descr{path to}} should point to the Java executable
+ (optional) and file |languagetool-commandline.jar| (mandatory). If
+ the script is named |lt.bat|, you should be able to run LanguageTool
+ on the command-line by typing, \lpeg,
+
+\begin{lstlisting}
+lt --version
+\end{lstlisting}
+%
+ Don't forget to put the script into the |PATH|!
+
+ \end{itemize}
+
+\end{enumerate}
+
+
+\subsection{Usage}
+\label{sec:lt-usage}
+
+The results of checking a text file with LanguageTool are written to an
+error report, either in a human-readable format or in a machine-friendly
+\acr{xml} format. The \pkg\ package can only parse the latter format.
+When it was said in \autoref{sec:wordlists} that the \pkg\ package reads
+files \descr{jobname}|.spell.bad| and \descr{jobname}|.spell.good|, if
+they exist, that was not the whole truth. Additionally, a file
+\descr{jobname}|.spell.xml| is read if it exists. This file should
+contain a LanguageTool error report in the \acr{xml} format. Additional
+LanguageTool \acr{xml} error reports can be loaded via the
+\macro{spellingreadLT} command, whose argument is a file name. Macros
+|\spellingreadLT|, |\spellingreadbad| and |\spellingreadgood| can be
+used in combination in a \TeX\ file.
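+
+As an illustration, a document preamble might contain lines like the
+following, where the file name |other-report.xml| is made up for this
+example:
+
+\begin{lstlisting}
+\usepackage{spelling}
+\spellingreadLT{other-report.xml}
+\end{lstlisting}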
+
+To check a text file and create an error report in the \acr{xml} format,
+LanguageTool can be called on the command-line like this
+\begin{lstlisting}[escapeinside=°°]
+lt °\descr{options}° °\descr{input file}° > °\descr{error report}°
+\end{lstlisting}
+where \texttt{\descr{options}} is a list of options described below,
+\texttt{\descr{input file}} is the text file written by the \pkg\
+package in the first Lua\TeX\ run and \texttt{\descr{error report}} is
+the file containing the error report. Note how standard output is
+redirected to a file via the |>| operator. By default, LanguageTool
+writes error reports to standard output, that is, the command-line.
+Redirection is a feature most operating systems provide.
+
+\begin{itemize}
+
+\item Option |-l| determines the language (variant) of the file to
+ check. As an example, language variant US English can be selected via
+ |-l en-US|. The full list of languages supported by LanguageTool can
+ be requested via option |--list|.
+
+\item Option |-c| determines the encoding of the input file. Since the
+ text file written by the \pkg\ package is in the \acr{utf-8} encoding,
+  this option should be |-c utf-8|.
+
+\item By default, LanguageTool outputs error reports in a human-readable
+ format. The \pkg\ package can only parse error reports in the
+ \acr{xml} format. If the |--api| option is present, LanguageTool
+ outputs \acr{xml} data.
+
+\item Users who don't want to highlight bad spellings, but prefer to
+ study the list of bad spellings themselves, should refer to the |-u|
+  option. But note that with the latter option present, LanguageTool
+ doesn't output pure \acr{xml} any more, even if the |--api| option is
+ present. Make sure such error reports aren't read by the \pkg\
+ package.
+
+\item If the |--help| option is present, LanguageTool shows more
+ information about command-line options.
+
+\end{itemize}
+
+As an example, suppose a \LaTeX\ file |myletter.tex| written in French
+uses the \pkg\ package with standard settings to highlight bad
+spellings, and LanguageTool serves as the spell-checker. Then the
+following commands should be typed on the command-line:
+
+\begin{lstlisting}
+lualatex myletter
+lt --api -c utf-8 -l fr myletter.spell.txt > myletter.spell.xml
+lualatex myletter
+\end{lstlisting}
+
+
+\section{Bugs}
+\label{sec:bugs}
+
+Note that this package is in a very early state. Expect bugs! Package
+development is hosted at
+\href{http://github.com/sh2d/spelling/}{\bfseries GitHub}. The full
+list of known bugs and feature requests can be found in the
+\href{http://github.com/sh2d/spelling/issues/}{\bfseries issue tracker}.
+New bugs should be reported there.
+
+The most user-visible issues are listed below:
+
+\begin{itemize}
+
+\item There's no support for the Plain \TeX\ or Con\TeX t formats yet,
+  other than the \acr{api} of the package's Lua modules
+  (\href{https://github.com/sh2d/spelling/issues/1}{issue~1}).
+
+\item Macros provided by the \LaTeX\ package have very long names. A
+ \emph{key-value} package option interface would be much more
+ user-friendly
+ (\href{https://github.com/sh2d/spelling/issues/2}{issue~2}).
+
+\item There are a couple of issues with text extraction and highlighting
+ incorrect spellings:
+
+ \begin{itemize}
+
+ \item Text in head and foot lines is neither extracted nor highlighted
+ (\href{https://github.com/sh2d/spelling/issues/7}{issue~7}).
+
+ \item The first word starting right after an |hlist|, \lpeg, the first
+ word within an |\mbox|, is never highlighted. It is extracted and
+ written to the text file, though. This might affect acronyms, names
+ \lpetc (\href{https://github.com/sh2d/spelling/issues/6}{issue~6}).
+
+ \item Bad spellings that are hyphenated at a page break are not
+ highlighted
+ (\href{https://github.com/sh2d/spelling/issues/10}{issue~10}).
+
+ \end{itemize}
+
+
+\end{itemize}
+
+Patches welcome!
+
+\bigskip
+\emph{Happy \TeX ing!}
+
+
+\end{document}
+
+
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-PDF-mode: t
+%%% TeX-master: t
+%%% coding: utf-8
+%%% End:
diff --git a/macros/luatex/generic/spelling/spelling-main.lua b/macros/luatex/generic/spelling/spelling-main.lua
new file mode 100644
index 0000000000..8aaecbda0c
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-main.lua
@@ -0,0 +1,220 @@
+--- spelling-main.lua
+--- Copyright 2012, 2013 Stephan Hennig
+--
+-- This work may be distributed and/or modified under the conditions of
+-- the LaTeX Project Public License, either version 1.3 of this license
+-- or (at your option) any later version. The latest version of this
+-- license is in http://www.latex-project.org/lppl.txt
+-- and version 1.3 or later is part of all distributions of LaTeX
+-- version 2005/12/01 or later.
+--
+-- See file README for more information.
+--
+
+
+--- Main Lua file.
+--
+-- @author Stephan Hennig
+-- @copyright 2012, 2013 Stephan Hennig
+-- @release version 0.41
+--
+
+
+-- Module identification.
+if luatexbase.provides_module then
+ luatexbase.provides_module(
+ {
+ name = 'spelling',
+ date = '2013/05/25',
+ version = '0.41',
+ description = 'support for spell-checking of LuaTeX documents',
+ author = 'Stephan Hennig',
+ licence = 'LPPL ver. 1.3c',
+ }
+ )
+end
+
+
+--- Global table of modules.
+-- The work of the spelling package can be separated into four
+-- stages:<br />
+--
+-- <dl>
+--
+-- <dt>Stage 1</dt>
+-- <dd><ul>
+-- <li>Load bad strings.</li>
+-- <li>Load good strings.</li>
+-- <li>Handle match rules.</li>
+-- </ul></dd>
+--
+-- <dt>Stage 2 (call-back <code>pre_linebreak_filter</code>)</dt>
+-- <dd><ul>
+-- <li>Tag word strings in node lists before paragraph breaking
+-- takes place.</li>
+-- <li>Check spelling of strings.</li>
+-- <li>Highlight strings with known incorrect spelling in PDF
+-- output.</li>
+-- </ul></dd>
+--
+-- <dt>Stage 3 (<code>\AtBeginShipout</code>)</dt>
+-- <dd><ul>
+-- <li>Store all strings found on built page via tag nodes in text
+-- document data structure.</li>
+-- </ul></dd>
+--
+-- <dt>Stage 4 (call-back <code>stop_run</code>)</dt>
+-- <dd><ul>
+-- <li>Output text stored in text document data structure to a
+-- file.</li>
+-- </ul></dd>
+--
+-- </dl>
+--
+-- The code of the spelling package is organized in modules reflecting
+-- these stages. References to modules are stored in a table. Table
+-- indices correspond to the stages as shown above. The table of module
+-- references is shared in a global table (`PKG_spelling`) so that
+-- public module functions are accessible from within external code.<br
+-- />
+--
+-- <ul>
+-- <li><code>spelling-stage-1.lua : stage[1]</code></li>
+-- <li><code>spelling-stage-2.lua : stage[2]</code></li>
+-- <li><code>spelling-stage-3.lua : stage[3]</code></li>
+-- <li><code>spelling-stage-4.lua : stage[4]</code></li>
+-- </ul>
+--
+-- @class table
+-- @name stage
+stage = {}
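+
+-- As an illustration, once the package has been loaded, external code
+-- could access a public module function through this table, e.g., to
+-- change the highlighting colour provided by stage 2 (the colour value
+-- is chosen arbitrarily):
+--
+--   PKG_spelling.stage[2].set_highlight_color('0 0 1 rg')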
+
+
+--- Table of package-wide resources that are shared among several
+--- modules.
+--
+-- @class table
+-- @name res
+--
+-- @field rules_bad Table.<br />
+--
+-- This table contains all bad match rules. Spellings can be matched
+-- against these rules.
+--
+-- @field rules_good Table.<br />
+--
+-- This table contains all good match rules. Spellings can be matched
+-- against these rules.
+--
+-- @field text_document Table.<br />
+--
+-- Data structure that stores the text of a document. The data
+-- structure is quite simple: a text document is an ordered list (an
+-- array) of
+-- paragraphs. A paragraph is an ordered list (an array) of words. A
+-- word is a single UTF-8 encoded string.<br />
+--
+-- During the LuaTeX run, node lists are scanned for strings before
+-- hyphenation takes place. The strings found in a node list are stored
+-- in the current paragraph. After finishing scanning a node list, the
+-- current paragraph is inserted into the text document. At the end of
+-- the LuaTeX run, all paragraphs of the text document are broken into
+-- lines of a fixed length and the lines are written to a file.<br />
+--
+-- Here's the rationale of this approach:
+--
+-- <ul>
+--
+-- <li> It reduces file access <i>during</i> the LuaTeX run by delaying
+-- write operations until the end.
+--
+-- <li> It saves space. In Lua, strings are internalized. Since in a
+-- document, the same words are used over and over again, relatively
+-- few strings are actually stored in memory.
+--
+-- <li> It allows for pre-processing the text document before writing it
+-- to a file.
+--
+-- </ul>
+--
+-- @field whatsit_ids Table.<br />
+--
+-- Unique IDs for marking user-defined whatsit nodes created by this
+-- package. The IDs are generated at run-time. See this <a
+-- href="https://github.com/mpg/luatexbase/issues/8">GitHub issue</a>.
+--
+local res = {
+
+ rules_bad,
+ rules_good,
+ text_document,
+ whatsit_ids,
+
+}
+
+
+--- Global package table.
+-- This global table provides access to package-wide variables from
+-- within other chunks.
+--
+-- @class table
+-- @name PKG_spelling
+PKG_spelling = {}
+
+
+--- Determine unique IDs for user-defined whatsit nodes used by this
+-- package. Package luatexbase provides user-defined whatsit node ID
+-- allocation since version v0.6 (TL 2013). For older package versions,
+-- we start allocating at an arbitrary hard-coded value of 13^8
+-- (ca. 2^30). Note that for compatibility with LuaTeX 0.70.2, the
+-- value must be less than 2^31.
+--
+-- @return Table mapping names to IDs.
+local function __allocate_whatsit_ids()
+ local ids = {}
+ -- Allocation support present?
+ if luatexbase.new_user_whatsit_id then
+ ids.start_tag = luatexbase.new_user_whatsit_id('start_tag', 'spelling')
+ ids.end_tag = luatexbase.new_user_whatsit_id('end_tag', 'spelling')
+ else
+ local uid = 13^8
+ ids.start_tag = uid + 1
+ ids.end_tag = uid + 2
+ end
+ return ids
+end
+
+
+--- Package initialisation.
+--
+local function __init()
+ -- Create resources.
+ res.rules_bad = {}
+ res.rules_good = {}
+ res.text_document = {}
+ res.whatsit_ids = __allocate_whatsit_ids()
+  -- Provide global access to package resources during module loading.
+ PKG_spelling.res = res
+ -- Load sub-modules:
+ -- * bad and good string loading
+ -- * match rule handling
+ stage[1] = require 'spelling-stage-1'
+ -- * node list tagging
+ -- * spell-checking
+ -- * bad string highlighting
+ stage[2] = require 'spelling-stage-2'
+ -- * text storage
+ stage[3] = require 'spelling-stage-3'
+ -- * text output
+ stage[4] = require 'spelling-stage-4'
+  -- Remove global reference to package resources.
+ PKG_spelling.res = nil
+ -- Provide global access to module references.
+ PKG_spelling.stage = stage
+ -- Enable text storage.
+ stage[3].enable_text_storage()
+end
+
+
+-- Initialize package.
+__init()
diff --git a/macros/luatex/generic/spelling/spelling-recurse.lua b/macros/luatex/generic/spelling/spelling-recurse.lua
new file mode 100644
index 0000000000..70b48eea2b
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-recurse.lua
@@ -0,0 +1,110 @@
+--- spelling-recurse.lua
+--- Copyright 2012, 2013 Stephan Hennig
+--
+-- This work may be distributed and/or modified under the conditions of
+-- the LaTeX Project Public License, either version 1.3 of this license
+-- or (at your option) any later version. The latest version of this
+-- license is in http://www.latex-project.org/lppl.txt
+-- and version 1.3 or later is part of all distributions of LaTeX
+-- version 2005/12/01 or later.
+--
+-- See file README for more information.
+--
+
+
+--- Helper module for recursing into a node list.
+-- This module provides means to recurse into a node list, calling
+-- user-provided call-back functions upon certain events.
+--
+-- @author Stephan Hennig
+-- @copyright 2012, 2013 Stephan Hennig
+-- @release version 0.41
+--
+-- @trick Prevent LuaDoc from looking past here for module description.
+--[[ Trick LuaDoc into entering 'module' mode without using that command.
+module(...)
+--]]
+
+
+-- Module table.
+local M = {}
+
+
+-- Function short-cuts.
+local traverse = node.traverse
+
+
+-- Short-cuts for constants.
+local HLIST = node.id('hlist')
+local VLIST = node.id('vlist')
+
+
+--- Scan a node list and call call-back functions upon certain events.
+-- This function scans a node list. Upon certain events, user-defined
+-- call-back functions are called. Call-back functions have to be
+-- provided in a table. Call-back functions and corresponding events
+-- are:
+--
+-- <dl>
+--
+-- <dt>`vlist_pre_recurse`</dt> <dd>A vlist is about to be recursed
+-- into. Argument is the vlist node.</dd>
+--
+-- <dt>`vlist_post_recurse`</dt> <dd>Recursing into a vlist has been
+-- finished. Argument is the vlist node.</dd>
+--
+-- <dt>`hlist_pre_recurse`</dt> <dd>An hlist is about to be recursed
+-- into. Argument is the hlist node.</dd>
+--
+-- <dt>`hlist_post_recurse`</dt> <dd>Recursing into an hlist has been
+-- finished. Argument is the hlist node.</dd>
+--
+-- <dt>`visit_node`</dt> <dd>A node of a type other than `vlist` or
+-- `hlist` has been found. Arguments are the head node of the current
+-- branch and the current node.</dd>
+--
+-- </dl>
+--
+-- If a call-back entry in the table is `nil`, the corresponding event
+-- is ignored.
+--
+-- @param head Node list.
+-- @param cb Table of call-back functions.
+local function recurse_node_list(head, cb)
+ -- Make call-back functions local identifiers.
+ local cb_vlist_pre_recurse = cb.vlist_pre_recurse
+ local cb_vlist_post_recurse = cb.vlist_post_recurse
+ local cb_hlist_pre_recurse = cb.hlist_pre_recurse
+ local cb_hlist_post_recurse = cb.hlist_post_recurse
+ local cb_visit_node = cb.visit_node
+ -- Iterate over nodes in current branch.
+ for n in traverse(head) do
+ local nid = n.id
+ -- Test for vlist node.
+ if nid == VLIST then
+ -- Announce vlist pre-traversal.
+ if cb_vlist_pre_recurse then cb_vlist_pre_recurse(n) end
+ -- Recurse into 'vlist'.
+ recurse_node_list(n.head, cb)
+ -- Announce vlist post-traversal.
+ if cb_vlist_post_recurse then cb_vlist_post_recurse(n) end
+ -- Test for hlist node.
+ elseif nid == HLIST then
+ -- Announce hlist pre-traversal.
+ if cb_hlist_pre_recurse then cb_hlist_pre_recurse(n) end
+ -- Recurse into 'hlist'.
+ recurse_node_list(n.head, cb)
+ -- Announce hlist post-traversal.
+ if cb_hlist_post_recurse then cb_hlist_post_recurse(n) end
+ -- Other nodes.
+ else
+ -- Visit node.
+ if cb_visit_node then cb_visit_node(head, n) end
+ end
+ end
+end
+M.recurse_node_list = recurse_node_list
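+
+-- As an illustration (this code is not part of the package), client
+-- code could use `recurse_node_list` to write the IDs of all non-list
+-- nodes of some node list `head` to the log like this:
+--
+--   local recurse = require('spelling-recurse')
+--   recurse.recurse_node_list(head, {
+--     visit_node = function(hd, n)
+--       texio.write_nl(tostring(n.id))
+--     end,
+--   })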
+
+
+-- Return module table.
+return M
diff --git a/macros/luatex/generic/spelling/spelling-stage-1.lua b/macros/luatex/generic/spelling/spelling-stage-1.lua
new file mode 100644
index 0000000000..c54bac98eb
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-stage-1.lua
@@ -0,0 +1,370 @@
+--- spelling-stage-1.lua
+--- Copyright 2012, 2013 Stephan Hennig
+--
+-- This work may be distributed and/or modified under the conditions of
+-- the LaTeX Project Public License, either version 1.3 of this license
+-- or (at your option) any later version. The latest version of this
+-- license is in http://www.latex-project.org/lppl.txt
+-- and version 1.3 or later is part of all distributions of LaTeX
+-- version 2005/12/01 or later.
+--
+-- See file README for more information.
+--
+
+
+--- Handle lists of bad and good strings and match rules.
+--
+-- @author Stephan Hennig
+-- @copyright 2012, 2013 Stephan Hennig
+-- @release version 0.41
+--
+-- @trick Prevent LuaDoc from looking past here for module description.
+--[[ Trick LuaDoc into entering 'module' mode without using that command.
+module(...)
+--]]
+
+
+-- Module table.
+local M = {}
+
+
+-- Import external modules.
+local unicode = require('unicode')
+local xml = require('luaxml-mod-xml')
+
+
+-- Function short-cuts.
+local Sfind = string.find
+
+local tabinsert = table.insert
+
+local Ufind = unicode.utf8.find
+local Ugmatch = unicode.utf8.gmatch
+local Usub = unicode.utf8.sub
+
+
+-- Declare local variables to store references to resources that are
+-- provided by external code.
+--
+-- Table of known bad strings.
+local __is_bad
+--
+-- Table of known good strings.
+local __is_good
+--
+-- Table of bad rules.
+local __rules_bad
+--
+-- Table of good rules.
+local __rules_good
+
+
+--- Generic function for reading bad or good spellings from a file.
+-- All data from the file is read into a string, which is then parsed by
+-- the given parse function.
+--
+-- @param fname File name.
+-- @param parse_string Custom parse function.
+-- @param t Mapping table that bad or good spellings should be added to.
+-- @param hint String for info message. Either `bad` or `good`.
+local function __parse_file(fname, parse_string, t, hint)
+ local total_c = 0
+ local new_c = 0
+ local f, err = io.open(fname, 'r')
+ if f then
+ local s = f:read('*all')
+ f:close()
+ total_c, new_c = parse_string(s, t)
+ else
+ texio.write_nl('package spelling: Warning! ' .. err)
+ end
+ texio.write_nl('package spelling: Info! ' .. total_c .. '/' .. new_c .. ' total/new ' .. hint .. ' strings read from file \'' .. fname .. '\'.')
+end
+
+
+--- Generic function for parsing a string containing a plain list of
+-- strings. The input format is one string per line, i.e., strings
+-- separated by newline or carriage return characters. All lines found
+-- in the list are
+-- mapped to the boolean value `true` in the given table.
+--
+-- @param s Input string (a list of strings).
+-- @param t Table that maps strings to the value `true`.
+-- @return Number of total and new strings found.
+local function __parse_plain_list(s, t)
+ local total_c = 0
+ local new_c = 0
+ -- Iterate line-wise through input string.
+ for l in Ugmatch(s, '[^\r\n]+') do
+ -- Map string to boolean value `true`.
+ if not t[l] then
+ t[l] = true
+ new_c = new_c + 1
+ end
+ total_c = total_c + 1
+ end
+ return total_c, new_c
+end
+
+
+--- Parse a plain list of bad strings read from a file.
+-- All strings found (words with known incorrect spelling) are mapped to
+-- the boolean value `true` in table `__is_bad`. The format of the
+-- input file is one string per line.
+--
+-- @param fname File name.
+local function parse_bad_plain_list_file(fname)
+ __parse_file(fname, __parse_plain_list, __is_bad, 'bad')
+end
+M.parse_bad_plain_list_file = parse_bad_plain_list_file
+
+
+--- Parse a plain list of good strings read from a file.
+-- All strings found (words with known correct spelling) are mapped to
+-- the boolean value `true` in table `__is_good`. The format of the
+-- input file is one string per line.
+--
+-- @param fname File name.
+local function parse_good_plain_list_file(fname)
+ __parse_file(fname, __parse_plain_list, __is_good, 'good')
+end
+M.parse_good_plain_list_file = parse_good_plain_list_file
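+
+-- As an illustration, a plain list file contains nothing but one
+-- string per line; the misspellings below are made up for this
+-- example:
+--
+--   teh
+--   recieve
+--   adress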
+
+
+--- Get a custom LanguageTool XML handler.
+-- The returned XML handler scans LanguageTool XML data for incorrect
+-- spellings. For every incorrect spelling found, the given call-back
+-- function is called with the incorrect spelling string as argument.<br
+-- />
+--
+-- XML data is checked for being created by LanguageTool (via attribute
+-- <code>software</code> in tag <code>matches</code>).
+--
+-- @param cb Call-back function handling incorrect spellings found in
+-- XML data.
+-- @return XML handler.
+local function __get_XML_handler_LanguageTool(cb)
+
+ -- Some flags for checking validity of XML data. LanguageTool XML
+  -- data must declare itself as being UTF-8 encoded and advertise
+  -- itself as being created by LanguageTool.
+ local is_XML_encoding_UTF_8 = false
+ local is_XML_creator_LanguageTool = false
+ local is_XML_valid = false
+
+ --- Handler object for parsing LanguageTool XML data.
+ -- This table contains call-backs used by LuaXML when parsing XML
+ -- data.
+ --
+ -- @class table
+ -- @name XML_handler
+ -- @field decl Handle XML declaration.
+ -- @field starttag Handle all relevant tags.
+ -- @field endtag Not used, but mandatory.
+ local XML_handler = {
+
+ decl = function(self, text, attr)
+ -- Check XML encoding declaration.
+ if attr.encoding == 'UTF-8' then
+ is_XML_encoding_UTF_8 = true
+ is_XML_valid = is_XML_encoding_UTF_8 and is_XML_creator_LanguageTool
+ else
+ error('package spelling: Error! XML data not in the UTF-8 encoding.')
+ end
+ end,
+
+ starttag = function(self, text, attr)
+ -- Process <matches> tag.
+ if text == 'matches' then
+ -- Check XML creator is LanguageTool.
+ if attr and attr.software == 'LanguageTool' then
+ is_XML_creator_LanguageTool = true
+ is_XML_valid = is_XML_encoding_UTF_8 and is_XML_creator_LanguageTool
+ end
+ -- Check XML data is valid.
+ elseif not is_XML_valid then
+ error('package spelling: Error! No valid LanguageTool XML data.')
+ -- Process <error> tags.
+ elseif text == 'error' then
+ local ruleid = attr.ruleid
+ if ruleid == 'HUNSPELL_RULE'
+ or ruleid == 'HUNSPELL_NO_SUGGEST_RULE'
+ or ruleid == 'GERMAN_SPELLER_RULE'
+ or Ufind(ruleid, '^MORFOLOGIK_RULE_')
+ then
+ -- Extract misspelled word from context attribute.
+ local word = Usub(attr.context, attr.contextoffset + 1, attr.contextoffset + attr.errorlength)
+ cb(word)
+ end
+ end
+ end,
+
+ endtag = function(self, text)
+ end,
+
+ }
+
+ return XML_handler
+end
+
+
+--- Parse a string containing LanguageTool XML data.
+-- All incorrect spellings found in the given XML data are mapped to the
+-- boolean value `true` in the given table.
+--
+-- @param s String containing XML data.
+-- @param t Table mapping incorrect spellings to a boolean.
+-- @return Number of total and new incorrect spellings found.
+local function __parse_XML_LanguageTool(s, t)
+ local total_c = 0
+ local new_c = 0
+
+ -- Create call-back for custom LanguageTool XML handler that stores a
+ -- bad word in the given table and does some statistics.
+ local cb_incorrect_spelling = function(word)
+ if not t[word] then
+ t[word] = true
+ new_c = new_c + 1
+ end
+ total_c = total_c + 1
+ end
+
+ -- Create custom XML handler.
+ local XML_handler_LT = __get_XML_handler_LanguageTool(cb_incorrect_spelling)
+ -- Create custom XML parser.
+ local x = xml.xmlParser(XML_handler_LT)
+ -- Parse XML data.
+ x:parse(s)
+ return total_c, new_c
+end
+
+
+--- Parse LanguageTool XML data read from a file.
+-- All strings found in the file (words with known incorrect spelling)
+-- are mapped to the boolean value `true` in table `__is_bad`.
+--
+-- @param fname File name.
+local function parse_XML_LanguageTool_file(fname)
+ __parse_file(fname, __parse_XML_LanguageTool, __is_bad, 'bad')
+end
+M.parse_XML_LanguageTool_file = parse_XML_LanguageTool_file
+
+
+--- Parse default sources for bad and good strings.
+-- All strings found in default sources for words with known incorrect
+-- spelling are mapped to the boolean value `true` in table `__is_bad`.
+-- All strings found in default sources for words with known correct
+-- spelling are mapped to the boolean value `true` in table `__is_good`.
+-- Default sources for bad spellings are files `<jobname>.spell.xml` (a
+-- LanguageTool XML file) and `<jobname>.spell.bad` (a plain list file).
+-- Default sources for good spellings are file `<jobname>.spell.good` (a
+-- plain list file).
+local function parse_default_bad_and_good()
+ local fname, f
+ -- Try to read bad spellings from LanguageTool XML file
+ -- '<jobname>.spell.xml'.
+ fname = tex.jobname .. '.spell.xml'
+ f = io.open(fname, 'r')
+ if f then
+ f:close()
+ parse_XML_LanguageTool_file(fname)
+ end
+ -- Try to read bad spellings from plain list file
+ -- '<jobname>.spell.bad'.
+ fname = tex.jobname .. '.spell.bad'
+ f = io.open(fname, 'r')
+ if f then
+ f:close()
+ parse_bad_plain_list_file(fname)
+ end
+ -- Try to read good spellings from plain list file
+ -- '<jobname>.spell.good'.
+ fname = tex.jobname .. '.spell.good'
+ f = io.open(fname, 'r')
+ if f then
+ f:close()
+ parse_good_plain_list_file(fname)
+ end
+end
+M.parse_default_bad_and_good = parse_default_bad_and_good
+
+
+--- Default bad dictionary look-up match rule.
+-- This function looks up both arguments in the list of bad spellings.
+-- It returns `true` if either of the arguments is found in the list of
+-- bad spellings, otherwise `false`.
+--
+-- @param raw Raw string to check.
+-- @param stripped Same as `raw`, but with stripped surrounding
+-- punctuation.
+-- @return A boolean value indicating a match.
+local function __bad_rule_bad_dictionary_lookup(raw, stripped)
+ return __is_bad[stripped] or __is_bad[raw]
+end
+
+
+--- Default good dictionary look-up match rule.
+-- This function looks up both arguments in the list of good spellings.
+-- It returns `true` if either of the arguments is found in the list of
+-- good spellings, otherwise `false`.
+--
+-- @param raw Raw string to check.
+-- @param stripped Same as `raw`, but with stripped surrounding
+-- punctuation.
+-- @return A boolean value indicating a match.
+local function __good_rule_good_dictionary_lookup(raw, stripped)
+ return __is_good[stripped] or __is_good[raw]
+end
+
+
+--- Load match rule module.
+-- Match rule modules are loaded using `require`. The module table must
+-- adhere to the following convention: identifiers of bad match rules
+-- start with `bad_rule_`, identifiers of good match rules start with
+-- `good_rule_`. Other identifiers and non-function values are ignored.
+--
+-- All match rules found in a module are added to the tables of bad and
+-- good match rules. Arguments of a match rule function are a raw
+-- string and the same string with surrounding punctuation stripped.
+-- (A sketch of such a module is given after the function definition
+-- below.)
+--
+-- @param fname Module file name.
+local function read_match_rules(fname)
+ local bad_c = 0
+ local good_c = 0
+ local rules = require(fname)
+ for k,v in pairs(rules) do
+ if type(v) == 'function' then
+ if Sfind(k, '^bad_rule_') then
+ tabinsert(__rules_bad, v)
+ bad_c = bad_c + 1
+ elseif Sfind(k, '^good_rule_') then
+ tabinsert(__rules_good, v)
+ good_c = good_c + 1
+ end
+ end
+ end
+ texio.write_nl('package spelling: Info! ' .. bad_c .. '/' .. good_c .. ' bad/good match rules read from module \'' .. fname .. '\'.')
+end
+M.read_match_rules = read_match_rules
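+
+-- As an illustration, a minimal match rule module could look as
+-- follows (module and function names are made up; the file would be
+-- saved, e.g., as 'myrules.lua' and loaded via
+-- `read_match_rules('myrules')`):
+--
+--   local M = {}
+--   -- Flag every string that contains a decimal digit as bad.
+--   function M.bad_rule_contains_digit(raw, stripped)
+--     return string.find(stripped, '%d') ~= nil
+--   end
+--   -- Accept strings of two or more ASCII capitals (acronyms) as good.
+--   function M.good_rule_ascii_acronym(raw, stripped)
+--     return string.find(stripped, '^%u%u+$') ~= nil
+--   end
+--   return M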
+
+
+--- Module initialisation.
+--
+local function __init()
+  -- Get local references to package resources.
+ __rules_bad = PKG_spelling.res.rules_bad
+ __rules_good = PKG_spelling.res.rules_good
+ -- Add default dictionary look-up match rules.
+ tabinsert(__rules_bad, __bad_rule_bad_dictionary_lookup)
+ tabinsert(__rules_good, __good_rule_good_dictionary_lookup)
+  -- Create empty lists of known spellings.
+ __is_bad = {}
+ __is_good = {}
+end
+
+
+-- Initialize module.
+__init()
+
+
+-- Return module table.
+return M
diff --git a/macros/luatex/generic/spelling/spelling-stage-2.lua b/macros/luatex/generic/spelling/spelling-stage-2.lua
new file mode 100644
index 0000000000..c7cb98f1f2
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-stage-2.lua
@@ -0,0 +1,675 @@
+--- spelling-stage-2.lua
+--- Copyright 2012, 2013 Stephan Hennig
+--
+-- This work may be distributed and/or modified under the conditions of
+-- the LaTeX Project Public License, either version 1.3 of this license
+-- or (at your option) any later version. The latest version of this
+-- license is in http://www.latex-project.org/lppl.txt
+-- and version 1.3 or later is part of all distributions of LaTeX
+-- version 2005/12/01 or later.
+--
+-- See file README for more information.
+--
+
+
+--- Tag node lists with word strings before hyphenation takes place.
+-- This module provides means to tag node lists by inserting
+-- user-defined whatsit nodes before and after the first and last nodes
+-- belonging to a chain representing a string in the node list. The end
+-- tag node carries the word string as its value. Tagging is applied
+-- before hyphenation takes place.
+--
+-- @author Stephan Hennig
+-- @copyright 2012, 2013 Stephan Hennig
+-- @release version 0.41
+--
+-- @trick Prevent LuaDoc from looking past here for module description.
+--[[ Trick LuaDoc into entering 'module' mode without using that command.
+module(...)
+--]]
+
+
+-- Module table.
+local M = {}
+
+
+-- Import external modules.
+local recurse = require('spelling-recurse')
+local unicode = require('unicode')
+
+
+-- Function short-cuts.
+local tabconcat = table.concat
+local tabinsert = table.insert
+local tabremove = table.remove
+
+local node_new = node.new
+local node_insert_after = node.insert_after
+local node_insert_before = node.insert_before
+
+local recurse_node_list = recurse.recurse_node_list
+
+local Sfind = string.find
+local Sgmatch = string.gmatch
+local Smatch = string.match
+
+local Uchar = unicode.utf8.char
+local Umatch = unicode.utf8.match
+
+
+-- Short-cuts for constants.
+local DISC = node.id('disc')
+local GLYPH = node.id('glyph')
+local KERN = node.id('kern')
+local WHATSIT = node.id('whatsit')
+local LOCAL_PAR = node.subtype('local_par')
+local USER_DEFINED = node.subtype('user_defined')
+local PDF_COLORSTACK = node.subtype('pdf_colorstack')
+
+
+-- Declare local variables to store references to resources that are
+-- provided by external code.
+--
+-- Table of bad rules.
+local __rules_bad
+--
+-- Table of good rules.
+local __rules_good
+--
+-- ID of user-defined whatsit nodes marking the start of a word.
+local __uid_start_tag
+--
+-- ID of user-defined whatsit nodes marking the end of a word.
+local __uid_end_tag
+
+
+--- Module options.
+-- This table contains all module options. User functions to set
+-- options are provided.
+--
+-- @class table
+-- @name __opts
+-- @field hl_color Colour used for highlighting bad spellings in PDF
+-- output.
+local __opts = {
+ hl_color,
+}
+
+
+--- Set colour used for highlighting.
+-- Set colour used for highlighting bad spellings in PDF output. The
+-- argument is checked for a valid PDF colour statement. As an example,
+-- the string `1 0 0 rg` represents a red colour in the RGB colour
+-- space. A similar colour in the CMYK colour space would be
+-- represented by the string `0 1 1 0 k`.
+--
+-- @param col New colour.
+local function set_highlight_color(col)
+ -- Extract all colour components.
+ local components = Smatch(col, '^(%S+ %S+ %S+) rg$') or Smatch(col, '^(%S+ %S+ %S+ %S+) k$')
+ local is_valid_arg = components
+ if is_valid_arg then
+ -- Validate colour components.
+ for comp in Sgmatch(components, '%S+') do
+ -- Check number syntax.
+ local is_valid_comp = Sfind(comp, '^%d+%.?%d*$') or Sfind(comp, '^%d*%.?%d+$')
+ if is_valid_comp then
+ -- Check number range.
+ comp = tonumber(comp)
+ is_valid_comp = comp >= 0 and comp <= 1
+ end
+ is_valid_arg = is_valid_arg and is_valid_comp
+ end
+ end
+ if is_valid_arg then
+ __opts.hl_color = col
+ else
+ error('package spelling: Error! Invalid PDF colour statement: ' .. tostring(col))
+ end
+end
+M.set_highlight_color = set_highlight_color
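+
+-- As an illustration, the highlighting colour could be changed to blue
+-- (RGB colour space) or to pure black (CMYK colour space) like this
+-- (colour values chosen arbitrarily):
+--
+--   set_highlight_color('0 0 1 rg')
+--   set_highlight_color('0 0 0 1 k')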
+
+
+--- Highlighting status cache table.
+-- Determining the highlighting status of a string can be an expensive
+-- operation. To reduce average run-time penalty per string,
+-- highlighting status of all strings found in a document is cached in
+-- this table, so that determining the highlighting status of a known
+-- string requires only one table look-up.<br />
+--
+-- This table needs an `__index` meta method calculating the
+-- highlighting status of unknown keys (strings).
+--
+-- @class table
+-- @name __is_highlighting_needed
+local __is_highlighting_needed = {}
+
+
+--- Calculate and cache the highlighting status of a string.
+-- First, surrounding punctuation is stripped from the string argument.
+-- Then, the given raw as well as the stripped string are checked
+-- against all rules. Highlighting of the string is required, if any
+-- bad rule matches, but no good rule matches. That is, good rules take
+-- precedence over bad rules.
+--
+-- @param t Original table.
+-- @param raw Raw string to check.
+-- @return `true` if highlighting is required, `false` otherwise.
+local function __calc_is_highlighting_needed(t, raw)
+ -- Strip surrounding punctuation from string.
+ local stripped = Umatch(raw, '^%p*(.-)%p*$')
+ -- Check for a bad match.
+ local is_bad = false
+ for _,matches_bad in ipairs(__rules_bad) do
+ is_bad = is_bad or matches_bad(raw, stripped)
+ if is_bad then break end
+ end
+ -- Check for a good match.
+ local is_good = false
+ for _,matches_good in ipairs(__rules_good) do
+ is_good = is_good or matches_good(raw, stripped)
+ if is_good then break end
+ end
+ -- Calculate highlighting status.
+ local status = (is_bad and not is_good) or false
+ -- Store status in cache table.
+ rawset(t, raw, status)
+ -- Return status.
+ return status
+end
+
+
+-- Set-up meta table for highlighting status cache table.
+setmetatable(__is_highlighting_needed, {
+ __index = __calc_is_highlighting_needed,
+})
+
+
+--- Convert a Unicode code point to a regular UTF-8 encoded string.
+-- This function can be used as an `__index` meta method.
+--
+-- @param t Original table.
+-- @param cp Original key, a Unicode code point.
+-- @return UTF-8 encoded string corresponding to the Unicode code point.
+local function __meta_cp2utf8(t, cp)
+ return Uchar(cp)
+end
+
+
+--- Table of Unicode code point mappings.
+-- This table maps Unicode code point to strings. The mappings are used
+-- during text extraction to translate certain Unicode code points to an
+-- arbitrary string instead of the corresponding UTF-8 encoded
+-- character.<br />
+--
+-- As an example, by adding an appropriate entry to this table, the
+-- single Unicode code point U+FB00 (LATIN SMALL LIGATURE FF) can be
+-- resolved into the multi-character string 'ff' instead of being
+-- converted to the single-character string 'ff' ('&#xfb00;').<br />
+--
+-- Keys are Unicode code points. Values must be strings in the UTF-8
+-- encoding. If a key is not present in this table, the regular UTF-8
+-- character is returned by means of a meta table.<br />
+--
+-- @class table
+-- @name __codepoint_map
+local __codepoint_map = {
+
+ [0x0132] = 'IJ',-- LATIN CAPITAL LIGATURE IJ
+ [0x0133] = 'ij',-- LATIN SMALL LIGATURE IJ
+ [0x0152] = 'OE',-- LATIN CAPITAL LIGATURE OE
+ [0x0153] = 'oe',-- LATIN SMALL LIGATURE OE
+ [0x017f] = 's',-- LATIN SMALL LETTER LONG S
+
+ [0xfb00] = 'ff',-- LATIN SMALL LIGATURE FF
+ [0xfb01] = 'fi',-- LATIN SMALL LIGATURE FI
+ [0xfb02] = 'fl',-- LATIN SMALL LIGATURE FL
+ [0xfb03] = 'ffi',-- LATIN SMALL LIGATURE FFI
+ [0xfb04] = 'ffl',-- LATIN SMALL LIGATURE FFL
+ [0xfb05] = 'st',-- LATIN SMALL LIGATURE LONG S T
+ [0xfb06] = 'st',-- LATIN SMALL LIGATURE ST
+
+}
+
+
+--- Meta table for code point mapping table.
+--
+-- @class table
+-- @name __meta_codepoint_map
+-- @field __index Index operator.
+local __meta_codepoint_map = {
+ __index = __meta_cp2utf8,
+}
+
+
+-- Set meta table for code point mapping table.
+setmetatable(__codepoint_map, __meta_codepoint_map)
+
+
+--- Clear all code point mappings.
+-- After calling this function, there are no known code point mappings
+-- and no code point mapping takes place during text extraction.
+local function clear_all_mappings()
+ __codepoint_map = {}
+ setmetatable(__codepoint_map, __meta_codepoint_map)
+end
+M.clear_all_mappings = clear_all_mappings
+
+
+--- Manage Unicode code point mappings.
+-- This function can be used to set-up code point mappings. First
+-- argument must be a number, second argument must be a string in the
+-- UTF-8 encoding or `nil`.<br />
+--
+-- If the second argument is a string, after calling this function, the
+-- Unicode code point given as first argument, when found in a node list
+-- during text extraction, is mapped to the string given as second
+-- argument instead of being converted to a UTF-8 encoded character
+-- corresponding to the code point.<br />
+--
+-- If the second argument is `nil`, a mapping for the given code point,
+-- if existing, is deleted.
+--
+-- @param cp A Unicode code point, e.g., 0xfb00 for the code point LATIN
+-- SMALL LIGATURE FF.
+-- @param newt New target string to map the code point to or `nil`.
+-- @return Old target string the code point was mapped to before
+-- (possibly `nil`). If any arguments are invalid, return value is
+-- `false`. Arguments are invalid if code point is not of type `number`
+-- or not in the range 0 to 0x10ffff or if new target string is neither
+-- of type `string` nor `nil`.
+local function set_mapping(cp, newt)
+ -- Prevent from invalid entries in mapping table.
+ if (type(cp) ~= 'number') or
+ (cp < 0) or
+ (cp > 0x10ffff) or
+ ((type(newt) ~= 'string') and (type(newt) ~= 'nil')) then
+ return false
+ end
+ -- Retrieve old mapping.
+ local oldt = rawget(__codepoint_map, cp)
+ -- Set new mapping.
+ __codepoint_map[cp] = newt
+ -- Return old mapping.
+ return oldt
+end
+M.set_mapping = set_mapping
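+
+-- As an illustration, the following calls first map code point U+00DF
+-- (LATIN SMALL LETTER SHARP S) to the string 'ss' during text
+-- extraction and then delete that mapping again:
+--
+--   set_mapping(0x00df, 'ss')
+--   set_mapping(0x00df, nil)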
+
+
+-- First and last nodes known to belong to the current word and their
+-- head nodes. These nodes are logged, so that after recognizing the
+-- end of a word, they can be tagged by inserting new user-defined
+-- whatsit nodes before and after them.
+local __curr_word_start_head
+local __curr_word_start
+local __curr_word_end_head
+local __curr_word_end
+
+
+--- Tag the current word in the node list.
+-- Insert tag nodes (user-defined whatsit nodes) into the node list
+-- before and after the first and last nodes belonging to the current
+-- word. The tag marking the start of a word contains as value an empty
+-- string. The tag marking the end of a word contains as value a
+-- reference to the word string.
+--
+-- @param word Word string.
+local function __tag_word(word)
+  -- Check if start node of current word is a head node. Inserting
+ -- before head nodes needs special attention. This is not yet
+ -- implemented.
+ if (__curr_word_start ~= __curr_word_start_head) then
+ -- Create new start tag node.
+ local start_tag = node_new(WHATSIT, USER_DEFINED)
+ -- Mark whatsit node with module ID, so that we can recognize it
+ -- later.
+ start_tag.user_id = __uid_start_tag
+ -- Value is an empty string.
+ start_tag.type = 115
+ start_tag.value = ''
+ -- Insert start tag before first node belonging to current word.
+ node_insert_before(__curr_word_start_head, __curr_word_start, start_tag)
+ end
+ -- Create new end tag node.
+ local end_tag = node_new(WHATSIT, USER_DEFINED)
+ -- Mark whatsit node with module ID, so that we can recognize it
+ -- later.
+ end_tag.user_id = __uid_end_tag
+  -- Value of end tag is the word string.
+ end_tag.type = 115
+ end_tag.value = word
+ -- Insert end tag after last node belonging to current word.
+ node_insert_after(__curr_word_end_head, __curr_word_end, end_tag)
+end
+
+
+--- Highlight bad spelling by colour.
+-- Insert colour whatsits before and after the first and last nodes
+-- known to belong to the current word.
+local function __highlight_by_color()
+  -- Check if start node of current word is a head node. Inserting
+ -- before head nodes needs special attention. This is not yet
+ -- implemented.
+ if (__curr_word_start ~= __curr_word_start_head) then
+ -- Create pdf_colorstack whatsit nodes.
+ local push = node_new(WHATSIT, PDF_COLORSTACK)
+ local pop = node_new(WHATSIT, PDF_COLORSTACK)
+ push.stack = 0
+ pop.stack = 0
+ push.command = 1
+ pop.command = 2
+ push.data = __opts.hl_color
+ node_insert_before(__curr_word_start_head, __curr_word_start, push)
+ node_insert_after(__curr_word_end_head, __curr_word_end, pop)
+ end
+end
+
+
+--- Highlight bad spelling by colour (using node field `cmd`).
+-- Insert colour whatsits before and after the first and last nodes
+-- known to belong to the current word.
+-- @see function __highlight_by_color
+local function __highlight_by_color_cmd()
+  -- Check if start node of current word is a head node. Inserting
+ -- before head nodes needs special attention. This is not yet
+ -- implemented.
+ if (__curr_word_start ~= __curr_word_start_head) then
+ -- Create pdf_colorstack whatsit nodes.
+ local push = node_new(WHATSIT, PDF_COLORSTACK)
+ local pop = node_new(WHATSIT, PDF_COLORSTACK)
+ push.stack = 0
+ pop.stack = 0
+ push.cmd = 1
+ pop.cmd = 2
+ push.data = __opts.hl_color
+ node_insert_before(__curr_word_start_head, __curr_word_start, push)
+ node_insert_after(__curr_word_end_head, __curr_word_end, pop)
+ end
+end
+
+
+--- Generic function for highlighting bad spellings.
+local function __highlight_bad_word()
+ __highlight_by_color()
+end
+
+
+-- Tagging status.
+local __is_active_tagging
+
+
+-- Highlighting status.
+local __is_active_highlighting
+
+
+--- Data structure that stores the characters of a word string.
+-- The current word data structure is an ordered list (an array) of the
+-- characters of the word. The characters are collected while scanning
+-- a node list. They are concatenated to a single string only after the
+-- end of a word is detected, before inserting the current word into the
+-- current paragraph data structure.
+--
+-- @class table
+-- @name __curr_word
+local __curr_word
+
+
+--- Act upon detection of end of current word string.
+-- If a word string has been collected, tag it in the node list and, if
+-- necessary, highlight it.
+local function __finish_current_word()
+ -- Finish a word?
+ if __curr_word then
+ local word = tabconcat(__curr_word)
+    -- Check if the current word has already been tagged. This is only
+ -- a quick hack.
+ local start_prev = __curr_word_start.prev
+ local end_next = __curr_word_end.next
+ if start_prev and end_next
+ and (start_prev.id == WHATSIT)
+ and (start_prev.subtype == USER_DEFINED)
+ and (start_prev.user_id == __uid_start_tag)
+ and (end_next.id == WHATSIT)
+ and (end_next.subtype == USER_DEFINED)
+ and (end_next.user_id == __uid_end_tag)
+ and (end_next.value == word) then
+ __curr_word = nil
+ __curr_word_start_head = nil
+ __curr_word_start = nil
+ __curr_word_end_head = nil
+ __curr_word_end = nil
+ return
+ end
+ -- Tag node list with word string.
+ if __is_active_tagging then
+ __tag_word(word)
+ end
+ -- Highlighting needed?
+ if __is_highlighting_needed[word] and __is_active_highlighting then
+ __highlight_bad_word()
+ end
+ __curr_word = nil
+ end
+ __curr_word_start_head = nil
+ __curr_word_start = nil
+ __curr_word_end_head = nil
+ __curr_word_end = nil
+end
+
+
+--- Act upon detection of end of current paragraph.
+-- At a paragraph boundary, the current word, if any, is finished.
+local function __finish_current_paragraph()
+ -- Finish current word.
+ __finish_current_word()
+end
+
+
+--- Paragraph management stack.
+-- Stack of boolean flags that are used for logging the occurrence of a
+-- new paragraph within nested vlists.
+local __is_vlist_paragraph
+
+
+--- Paragraph management.
+-- This function puts a new boolean flag onto a stack that is used to
+-- log the occurrence of a new paragraph while recursing into the coming
+-- vlist. After finishing recursing into the vlist, the flag needs to
+-- be removed from the stack. Depending on the flag, the then current
+-- paragraph can be finished.
+local function __vlist_pre_recurse()
+ tabinsert(__is_vlist_paragraph, false)
+end
+
+
+--- Paragraph management.
+-- Remove flag from stack after recursing into a vlist. If necessary,
+-- finish the current paragraph.
+local function __vlist_post_recurse()
+ local p = tabremove(__is_vlist_paragraph)
+ if p then
+ __finish_current_paragraph()
+ end
+end
+
+
+--- Find paragraphs and strings.
+-- While scanning a node list, this call-back function finds nodes
+-- representing the start of a paragraph (local_par whatsit nodes) and
+-- strings (chains of nodes of type glyph, kern, disc).
+--
+-- @param head Head node of current branch.
+-- @param n The current node.
+local function __visit_node(head, n)
+ local nid = n.id
+ -- Test for word string component node.
+ if nid == GLYPH then
+ -- Save first node belonging to current word and its head for later
+ -- reference.
+ if not __curr_word_start then
+ __curr_word_start_head = head
+ __curr_word_start = n
+ end
+ -- Save latest node belonging to current word and its head for later
+ -- reference.
+ __curr_word_end_head = head
+ __curr_word_end = n
+ -- Provide new empty word, if necessary.
+ if not __curr_word then
+ __curr_word = {}
+ end
+ -- Append character to current word string.
+ tabinsert(__curr_word, __codepoint_map[n.char])
+ -- Test for other word string component nodes.
+ elseif (nid == DISC) or (nid == KERN) then
+ -- We're still within the current word string. Do nothing.
+ -- Test for paragraph start.
+ elseif (nid == WHATSIT) and (n.subtype == LOCAL_PAR) then
+ __finish_current_paragraph()
+ __is_vlist_paragraph[#__is_vlist_paragraph] = true
+ else
+ -- End of current word string detected.
+ __finish_current_word()
+ end
+end
+
+
+--- Table of call-back functions for node list recursion: tag the word
+--- strings found in a node list.
+-- The call-back functions in this table identify chains of nodes
+-- representing word strings in a node list and tag them (highlighting
+-- bad spellings where necessary). Local_par whatsit nodes are word
+-- boundaries.
+-- Nodes of type `hlist` are recursed into as if they were non-existent.
+-- As an example, the LaTeX input `a\mbox{a b}b` is recognized as two
+-- strings `aa` and `bb`.
+--
+-- @class table
+-- @name __cb_tag_words
+-- @field vlist_pre_recurse Paragraph management.
+-- @field vlist_post_recurse Paragraph management.
+-- @field visit_node Find nodes representing paragraphs and words.
+local __cb_tag_words = {
+
+ vlist_pre_recurse = __vlist_pre_recurse,
+ vlist_post_recurse = __vlist_post_recurse,
+ visit_node = __visit_node,
+
+}
+
+
+--- Process node list according to this stage.
+-- This function recurses into the given node list, tags all word
+-- strings found and highlights bad spellings.
+--
+-- @param head Node list.
+local function __process_node_list(head)
+ __curr_word_start_head = nil
+ __curr_word_start = nil
+ __curr_word_end_head = nil
+ __curr_word_end = nil
+ recurse_node_list(head, __cb_tag_words)
+ -- Clean-up left-over word and/or paragraph.
+ __finish_current_paragraph()
+end
+
+
+--- Call-back function that processes the node list.
+--
+-- @param head Node list.
+local function __cb_pre_linebreak_filter_pkg_spelling(head)
+ __process_node_list(head)
+ return head
+end
+
+
+--- Start tagging text.
+-- After calling this function, words are tagged in node lists before
+-- hyphenation takes place.
+local function enable_text_tagging()
+ __is_active_tagging = true
+end
+M.enable_text_tagging = enable_text_tagging
+
+
+--- Stop tagging text.
+-- After calling this function, no more word tagging in node lists takes
+-- place.
+local function disable_text_tagging()
+ __is_active_tagging = false
+end
+M.disable_text_tagging = disable_text_tagging
+
+
+--- Start highlighting bad spellings.
+-- After calling this function, bad spellings are highlighted in PDF
+-- output.
+local function enable_word_highlighting()
+ __is_active_highlighting = true
+end
+M.enable_word_highlighting = enable_word_highlighting
+
+
+--- Stop highlighting bad spellings.
+-- After calling this function, no more bad spellings are highlighted in
+-- PDF output.
+local function disable_word_highlighting()
+ __is_active_highlighting = false
+end
+M.disable_word_highlighting = disable_word_highlighting
+
+
+--- Try to maintain compatibility with older LuaTeX versions.
+-- Between LuaTeX 0.70.2 and 0.76.0, node field `cmd` of whatsit nodes
+-- of subtype `pdf_colorstack` has been renamed to `command`. This
+-- function checks which node field is the correct one and, if
+-- necessary, activates a fall-back function. Due to a bug in LuaTeX
+-- 0.76.0 (shipped with TL2013), function `node.has_field()` doesn't
+-- return correct results. Instead, it is tested whether an assignment
+-- to field `command` raises an error.
+local function __maintain_compatibility()
+ -- Create pdf_colorstack whatsit node.
+ local n = node.new(WHATSIT, PDF_COLORSTACK)
+ -- Function that assigns a value to node field 'command'.
+ local f = function()
+ n.command = 1
+ end
+ -- If the assignment is not successful, fall-back to node field 'cmd'.
+ if not pcall(f) then
+ __highlight_by_color = __highlight_by_color_cmd
+ end
+ -- Delete test node.
+ node.free(n)
+end
+
+
+--- Module initialisation.
+--
+local function __init()
+ -- Try to maintain compatibility with older LuaTeX versions.
+ __maintain_compatibility()
+  -- Get local references to package resources.
+ __rules_bad = PKG_spelling.res.rules_bad
+ __rules_good = PKG_spelling.res.rules_good
+ __uid_start_tag = PKG_spelling.res.whatsit_ids.start_tag
+ __uid_end_tag = PKG_spelling.res.whatsit_ids.end_tag
+ -- Create empty paragraph management stack.
+ __is_vlist_paragraph = {}
+ -- Remember tagging status.
+ __is_active_tagging = false
+ -- Remember highlighting status.
+ __is_active_highlighting = false
+ -- Set default highlighting colour.
+ set_highlight_color('1 0 0 rg')
+ -- Register call-back: Before TeX breaks a paragraph into lines, tag
+ -- and highlight strings.
+ luatexbase.add_to_callback('pre_linebreak_filter', __cb_pre_linebreak_filter_pkg_spelling, '__cb_pre_linebreak_filter_pkg_spelling')
+end
+
+
+-- Initialize module.
+__init()
+
+
+-- Return module table.
+return M
diff --git a/macros/luatex/generic/spelling/spelling-stage-3.lua b/macros/luatex/generic/spelling/spelling-stage-3.lua
new file mode 100644
index 0000000000..613e6af995
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-stage-3.lua
@@ -0,0 +1,301 @@
+--- spelling-stage-3.lua
+--- Copyright 2012, 2013 Stephan Hennig
+--
+-- This work may be distributed and/or modified under the conditions of
+-- the LaTeX Project Public License, either version 1.3 of this license
+-- or (at your option) any later version. The latest version of this
+-- license is in http://www.latex-project.org/lppl.txt
+-- and version 1.3 or later is part of all distributions of LaTeX
+-- version 2005/12/01 or later.
+--
+-- See file README for more information.
+--
+
+
+--- Store the text of a LuaTeX document in a text document data
+--- structure.
+-- This module provides means to extract text from a LuaTeX document and
+-- to store it in a text document data structure.
+--
+-- In the text document, words are stored as UTF-8 encoded strings. A
+-- mapping mechanism is provided by which, during word string
+-- recognition, individual code-points, e.g., of glyph nodes, can be
+-- translated to arbitrary UTF-8 strings, e.g., ligature glyphs to
+-- plain letter sequences.
+--
+-- @author Stephan Hennig
+-- @copyright 2012, 2013 Stephan Hennig
+-- @release version 0.41
+--
+-- @trick Prevent LuaDoc from looking past here for module description.
+--[[ Trick LuaDoc into entering 'module' mode without using that command.
+module(...)
+--]]
+
+
+-- Module table.
+local M = {}
+
+
+-- Import external modules.
+local recurse = require('spelling-recurse')
+
+
+-- Function short-cuts.
+local recurse_node_list = recurse.recurse_node_list
+
+local tabinsert = table.insert
+local tabremove = table.remove
+
+
+-- Short-cuts for constants.
+local WHATSIT = node.id('whatsit')
+local LOCAL_PAR = node.subtype('local_par')
+local USER_DEFINED = node.subtype('user_defined')
+
+
+-- Declare local variables to store references to resources that are
+-- provided by external code.
+--
+-- Text document data structure.
+local __text_document
+--
+-- ID of user-defined whatsit nodes marking the start of a word.
+local __uid_start_tag
+--
+-- ID of user-defined whatsit nodes marking the end of a word.
+local __uid_end_tag
+
+
+--- Module options.
+-- This table contains all module options. User functions to set
+-- options are provided.
+--
+-- @class table
+-- @name __opts
+-- @field table_par When processing a table, when should paragraphs be
+-- inserted into the text document?<br />
+--
+-- <ul>
+-- <li> 0 - Don't touch tables in any way.</li>
+-- <li> 1 - Insert paragraphs before and after hlists of type
+-- <i>alignment column or row</i>, i.e., before and after
+-- every table row.</li>
+-- <li> 2 - Insert paragraphs before and after hlists of type
+-- <i>alignment cell</i>, i.e., before and after every table
+-- cell.</li>
+-- </ul>
+local __opts = {
+ table_par,
+}
+
+
+--- Set table behaviour.
+-- Determine when paragraphs are inserted within tables.
+--
+-- @param value New value (0, 1 or 2).
+local function set_table_paragraphs(value)
+ __opts.table_par = value
+end
+M.set_table_paragraphs = set_table_paragraphs
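+
+-- As an illustration, paragraphs could be inserted before and after
+-- every table cell like this:
+--
+--   set_table_paragraphs(2)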
+
+
+--- Data structure that stores the word strings found in a node list.
+--
+-- @class table
+-- @name __curr_paragraph
+local __curr_paragraph
+
+
+--- Act upon detection of end of current word string.
+-- If the current word contains visible characters, store the current
+-- word in the current paragraph.
+--
+-- @param n String tag node.
+local function __finish_current_word(n)
+ -- Provide new empty paragraph, if necessary.
+ if not __curr_paragraph then
+ __curr_paragraph = {}
+ end
+ -- Append current string to current paragraph.
+ tabinsert(__curr_paragraph, n.value)
+end
+
+
+--- Act upon detection of end of current paragraph.
+-- If the current paragraph contains words, store the current paragraph
+-- in the text document.
+local function __finish_current_paragraph()
+ -- Finish a paragraph?
+ if __curr_paragraph then
+ -- Append current paragraph to document structure.
+ tabinsert(__text_document, __curr_paragraph)
+ __curr_paragraph = nil
+ end
+end
+
+
+--- Paragraph management stack.
+-- Stack of boolean flags that are used for logging the occurrence of a
+-- new paragraph within nested vlists.
+local __is_vlist_paragraph
+
+
+--- Paragraph management.
+-- This function puts a new boolean flag onto a stack that is used to
+-- log the occurrence of a new paragraph while recursing into the coming
+-- vlist. After finishing recursing into the vlist, the flag needs to
+-- be removed from the stack. Depending on the flag, the then current
+-- paragraph can be finished.
+local function __vlist_pre_recurse()
+ tabinsert(__is_vlist_paragraph, false)
+end
+
+
+--- Paragraph management.
+-- Remove flag from stack after recursing into a vlist. If necessary,
+-- finish the current paragraph.
+local function __vlist_post_recurse()
+ local p = tabremove(__is_vlist_paragraph)
+ if p then
+ __finish_current_paragraph()
+ end
+end
+
+
+--- Handle table rows and cells.
+-- Start a new paragraph before and after an hlist of subtype `alignment
+-- column or row` or `alignment cell`, depending on option `table_par`.
+--
+-- @param n hlist node.
+local function __handle_table(n)
+ local subtype = n.subtype
+ local table_par = __opts.table_par
+ if (subtype == 4) and (table_par == 1) then
+ __finish_current_paragraph()
+ elseif (subtype == 5) and (table_par == 2) then
+ __finish_current_paragraph()
+ end
+end
+
+
+--- Find paragraphs and strings.
+-- While scanning a node list, this call-back function finds nodes
+-- representing the start of a paragraph (local_par whatsit nodes) and
+-- string tags (user_defined whatsit nodes).
+--
+-- @param head Head node of current branch.
+-- @param n The current node.
+local function __visit_node(head, n)
+ local nid = n.id
+ -- Test for node containing a word string.
+ if nid == WHATSIT then
+ -- Test for word string tag.
+ if (n.subtype == USER_DEFINED) and (n.user_id == __uid_end_tag) then
+ __finish_current_word(n)
+ -- Test for paragraph start.
+ elseif n.subtype == LOCAL_PAR then
+ __finish_current_paragraph()
+ __is_vlist_paragraph[#__is_vlist_paragraph] = true
+ end
+ end
+end
+
+
+--- Table of call-back functions for node list recursion: store the
+--- word strings found in a node list.
+-- The call-back functions in this table identify chains of nodes
+-- representing word strings in a node list and store the strings in
+-- the text document. A new paragraph is started at local_par whatsit
+-- nodes and after finishing a vlist containing a local_par whatsit
+-- node. Nodes of type `hlist` are recursed into as if they were
+-- non-existent. As an example, the LaTeX input `a\mbox{a b}b` is
+-- recognized as two strings `aa` and `bb`.
+--
+-- @class table
+-- @name __cb_store_words
+-- @field vlist_pre_recurse Paragraph management.
+-- @field vlist_post_recurse Paragraph management.
+-- @field hlist_pre_recurse Table management.
+-- @field hlist_post_recurse Table management.
+-- @field visit_node Find nodes representing paragraphs and words.
+local __cb_store_words = {
+
+ vlist_pre_recurse = __vlist_pre_recurse,
+ vlist_post_recurse = __vlist_post_recurse,
+ hlist_pre_recurse = __handle_table,
+ hlist_post_recurse = __handle_table,
+ visit_node = __visit_node,
+
+}
+
+
+--- Process node list according to this stage.
+-- This function recurses into the given node list, finds strings in
+-- tags and stores them in the text document.
+--
+-- @param head Node list.
+local function __process_node_list(head)
+ recurse_node_list(head, __cb_store_words)
+ -- Clean-up left-over word and/or paragraph.
+ __finish_current_paragraph()
+end
+
+
+-- Call-back status.
+local __is_active_storage
+
+
+--- Call-back function that processes the node list.
+-- <i>This function is not made available in the module table, but in
+-- the global package table!</i>
+--
+-- @param box Number of the box register containing the page being
+-- shipped out.
+local function cb_AtBeginShipout(box)
+ if __is_active_storage then
+ __process_node_list(tex.box[box])
+ end
+end
+
+
+--- Start storing text.
+-- After calling this function, text is stored in the text document.
+local function enable_text_storage()
+ __is_active_storage = true
+end
+M.enable_text_storage = enable_text_storage
+
+
+--- Stop storing text.
+-- After calling this function, no more text is stored in the text
+-- document.
+local function disable_text_storage()
+ __is_active_storage = false
+end
+M.disable_text_storage = disable_text_storage
+
+
+--- Module initialisation.
+--
+local function __init()
+  -- Get local references to package resources.
+ __text_document = PKG_spelling.res.text_document
+ __uid_start_tag = PKG_spelling.res.whatsit_ids.start_tag
+ __uid_end_tag = PKG_spelling.res.whatsit_ids.end_tag
+ -- Make \AtBeginShipout function available in package table.
+ PKG_spelling.cb_AtBeginShipout = cb_AtBeginShipout
+ -- Create empty paragraph management stack.
+ __is_vlist_paragraph = {}
+ -- Remember call-back status.
+ __is_active_storage = false
+ -- Set default table paragraph behaviour.
+ set_table_paragraphs(0)
+end
+
+
+-- Initialize module.
+__init()
+
+
+-- Return module table.
+return M
diff --git a/macros/luatex/generic/spelling/spelling-stage-4.lua b/macros/luatex/generic/spelling/spelling-stage-4.lua
new file mode 100644
index 0000000000..ce027c8c50
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling-stage-4.lua
@@ -0,0 +1,202 @@
+--- spelling-stage-4.lua
+--- Copyright 2012, 2013 Stephan Hennig
+--
+-- This work may be distributed and/or modified under the conditions of
+-- the LaTeX Project Public License, either version 1.3 of this license
+-- or (at your option) any later version. The latest version of this
+-- license is in http://www.latex-project.org/lppl.txt
+-- and version 1.3 or later is part of all distributions of LaTeX
+-- version 2005/12/01 or later.
+--
+-- See file README for more information.
+--
+
+
+--- At the end of a LuaTeX run, write the text stored in a text document
+--- data structure to a file.
+-- This module provides means to write the text stored in a text
+-- document data structure to a file at the end of a LuaTeX run.
+--
+-- @author Stephan Hennig
+-- @copyright 2012, 2013 Stephan Hennig
+-- @release version 0.41
+--
+-- @trick Prevent LuaDoc from looking past here for module description.
+--[[ Trick LuaDoc into entering 'module' mode without using that command.
+module(...)
+--]]
+
+
+-- Module table.
+local M = {}
+
+
+-- Import external modules.
+local unicode = require('unicode')
+
+
+-- Function short-cuts.
+local tabconcat = table.concat
+local tabinsert = table.insert
+
+local Ulen = unicode.utf8.len
+
+
+-- Declare local variables to store references to resources that are
+-- provided by external code.
+--
+-- Text document data structure.
+local __text_document
+
+
+--- Module options.
+-- This table contains all module options. User functions to set
+-- options are provided.
+--
+-- @class table
+-- @name __opts
+-- @field output_file_name Output file name.
+-- @field output_line_length Line length in output.
+local __opts = {
+ output_file_name,
+  output_line_length,
+}
+
+
+--- Set output file name.
+-- Text output will be written to a file with the given name.
+--
+-- @param name New output file name.
+local function set_output_file_name(name)
+ __opts.output_file_name = name
+end
+M.set_output_file_name = set_output_file_name
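+
+-- A minimal usage sketch (mirroring the call made by command
+-- \spellingoutputname in spelling.sty; the file name is just an
+-- example):
+--
+--   -- Write the text output to 'wordlist.txt' instead of the default
+--   -- '<jobname>.spell.txt'.
+--   PKG_spelling.stage[4].set_output_file_name('wordlist.txt')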
+
+
+--- Set output line length.
+-- Set the number of columns in text output. Text output will be
+-- wrapped at spaces so that lines don't contain more than the specified
+-- number of characters per line. There's one exception: if a word is
+-- longer than the specified number of characters, the word is put on
+-- its own line and that line will be overfull.
+--
+-- @param cols New line length in output. If the argument is smaller
+-- than 1, lines aren't wrapped, i.e., all text of a paragraph is put on
+-- a single line.
+local function set_output_line_length(cols)
+ __opts.output_line_length = cols
+end
+M.set_output_line_length = set_output_line_length
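+
+-- A minimal usage sketch (mirroring the call made by command
+-- \spellingoutputlinelength in spelling.sty):
+--
+--   -- Wrap lines in the text output after at most 132 characters.
+--   PKG_spelling.stage[4].set_output_line_length(132)
+--
+--   -- Don't wrap lines at all: put each paragraph on a single line.
+--   PKG_spelling.stage[4].set_output_line_length(0)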
+
+
+--- Break a text paragraph into lines.
+-- Lines are broken at spaces only. Lines containing only one word may
+-- exceed maximum line length.
+--
+-- @param par A text paragraph (an array of words).
+-- @param max_line_len Maximum length of lines in wrapped paragraph. If
+-- the argument is less than 1, the paragraph isn't wrapped at all.
+-- @return Table containing the lines of the paragraph.
+local function __wrap_text_paragraph(par, max_line_len)
+ local wrapped_par = {}
+ -- Index of first word on current line. Initialize current line with
+ -- first word of paragraph.
+ local line_start = 1
+ -- Track current line length.
+ local line_len = Ulen(par[line_start])
+ -- Iterate over remaining words in paragraph.
+ for i = 2,#par do
+ local word_len = Ulen(par[i])
+ local new_line_len = line_len + 1 + word_len
+ -- Maximum line length exceeded?
+ if new_line_len > max_line_len and max_line_len >= 1 then
+ -- Insert current line into wrapped paragraph.
+ tabinsert(wrapped_par, tabconcat(par, ' ', line_start, i-1))
+ -- Initialize new current line.
+ line_start = i
+ new_line_len = word_len
+ end
+ -- Append word to current line.
+ line_len = new_line_len
+ end
+ -- Insert last line of paragraph.
+ tabinsert(wrapped_par, tabconcat(par, ' ', line_start))
+ return wrapped_par
+end
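+
+-- A worked example of the wrapping above (for illustration only): for
+-- the paragraph {'aa', 'bb', 'cc'} and a maximum line length of 5, the
+-- function returns {'aa bb', 'cc'}; with a maximum line length of 0,
+-- it returns {'aa bb cc'}.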
+
+
+--- Write all text stored in the text document to a file.
+--
+local function __write_text_document()
+ -- Open output file.
+ local fname = __opts.output_file_name or (tex.jobname .. '.spell.txt')
+ local f = assert(io.open(fname, 'w'))
+ local max_line_len = __opts.output_line_length
+ -- Iterate through document paragraphs.
+ for _,par in ipairs(__text_document) do
+ -- Write wrapped paragraph to file with a leading empty line.
+ f:write('\n', tabconcat(__wrap_text_paragraph(par, max_line_len), '\n'), '\n')
+ -- Delete paragraph from memory.
+ __text_document[_] = nil
+ end
+ -- Close output file.
+ f:close()
+end
+
+
+--- Call-back function that writes all document text into a file.
+local function __cb_stop_run_pkg_spelling()
+  __write_text_document()
+end
+
+
+-- Call-back status.
+local __is_active_output
+
+
+--- Enable text document output.
+-- Registers call-back `stop_run` to output the text stored in the text
+-- document at the end of the LuaTeX run.
+local function enable_text_output()
+ if not __is_active_output then
+ -- Register call-back: At the end of the LuaTeX run, output all text
+ -- stored in the text document.
+    luatexbase.add_to_callback('stop_run', __cb_stop_run_pkg_spelling, '__cb_stop_run_pkg_spelling')
+ __is_active_output = true
+ end
+end
+M.enable_text_output = enable_text_output
+
+
+--- Disable text document output.
+-- De-registers call-back `stop_run`.
+local function disable_text_output()
+ if __is_active_output then
+ -- De-register call-back.
+ luatexbase.remove_from_callback('stop_run', '__cb_stop_run_pkg_spelling')
+ __is_active_output = false
+ end
+end
+M.disable_text_output = disable_text_output
+
+
+--- Module initialisation.
+--
+local function __init()
+  -- Get local references to package resources.
+ __text_document = PKG_spelling.res.text_document
+ -- Set default output file name.
+ set_output_file_name(nil)
+ -- Set default output line length.
+ set_output_line_length(72)
+ -- Remember call-back status.
+ __is_active_output = false
+end
+
+
+-- Initialize module.
+__init()
+
+
+-- Return module table.
+return M
diff --git a/macros/luatex/generic/spelling/spelling.sty b/macros/luatex/generic/spelling/spelling.sty
new file mode 100644
index 0000000000..8e780b432f
--- /dev/null
+++ b/macros/luatex/generic/spelling/spelling.sty
@@ -0,0 +1,150 @@
+%%% spelling.sty
+%%% Copyright 2012, 2013 Stephan Hennig
+%%
+%% This work may be distributed and/or modified under the conditions of
+%% the LaTeX Project Public License, either version 1.3 of this license
+%% or (at your option) any later version. The latest version of this
+%% license is in http://www.latex-project.org/lppl.txt
+%% and version 1.3 or later is part of all distributions of LaTeX
+%% version 2005/12/01 or later.
+%%
+%% See file README for more information.
+%%
+\ProvidesPackage{spelling}
+ [2013/05/25 v0.41 Support for spell-checking of LuaTeX documents (SH)]
+\NeedsTeXFormat{LaTeX2e}[1999/12/01]
+% Test for the LuaTeX engine.
+\RequirePackage{ifluatex}
+\ifluatex
+\else
+\PackageError{spelling}{LuaTeX engine required}{You could try with the
+ `lualatex' command.}
+\fi
+% Lua module version management.
+\RequirePackage{luatexbase-modutils}
+% LuaTeX call-back management.
+\RequirePackage{luatexbase-mcb}
+% User-defined whatsit node ID allocation.
+\RequirePackage{luatexbase-attr}
+% Load main Lua file.
+\directlua name {spelling}{
+ % The main Lua file is not a module, so we must pass a full path to
+ % `dofile`.
+ local f = kpse.find_file('spelling-main.lua', 'lua')
+ dofile(f)
+}
+% Words are extracted after pages have been built. Lacking a proper Lua
+% call-back, we hook into the shipout routine from the LaTeX side.
+\RequirePackage{atbegshi}
+\AtBeginShipout{%
+ \directlua name {spelling-atbeginshipout}{
+ PKG_spelling.cb_AtBeginShipout(\the\AtBeginShipoutBox)
+ }%
+}
+% Provide command for reading-in a list of bad spellings.
+\newcommand*{\spellingreadbad}[1]{%
+ \directlua{
+ PKG_spelling.stage[1].parse_bad_plain_list_file('\luaescapestring{#1}')
+ }%
+}
+% Provide command for reading-in a list of good spellings.
+\newcommand*{\spellingreadgood}[1]{%
+ \directlua{
+ PKG_spelling.stage[1].parse_good_plain_list_file('\luaescapestring{#1}')
+ }%
+}
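+% For example (the file names are placeholders),
+% `\spellingreadbad{project.bad}' and `\spellingreadgood{project.good}'
+% read additional lists of bad and good spellings on top of the default
+% files parsed at the end of this package.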
+% Provide command for reading bad spellings from a LanguageTool XML
+% file.
+\newcommand*{\spellingreadLT}[1]{%
+ \directlua{
+ PKG_spelling.stage[1].parse_XML_LanguageTool_file('\luaescapestring{#1}')
+ }%
+}
+% Provide command for reading match rules from a file. Argument must be
+% a file name.
+\newcommand*{\spellingmatchrules}[1]{%
+ \directlua{
+ PKG_spelling.stage[1].read_match_rules('\luaescapestring{#1}')
+ }%
+}
+% Provide command for enabling/disabling visual feedback.
+\newcommand*{\spellinghighlight}[1]{%
+ \directlua{
+ if '\luaescapestring{#1}' == 'on' then
+ PKG_spelling.stage[2].enable_word_highlighting()
+ else
+ PKG_spelling.stage[2].disable_word_highlighting()
+ end
+ }%
+}
+% Provide command for setting visual feedback colour.
+\newcommand*{\spellinghighlightcolor}[1]{%
+ \directlua{
+ PKG_spelling.stage[2].set_highlight_color('\luaescapestring{#1}')
+ }%
+}
+% Provide command for enabling/disabling text output.
+\newcommand*{\spellingoutput}[1]{%
+ \directlua{
+ if '\luaescapestring{#1}' == 'on' then
+ PKG_spelling.stage[4].enable_text_output()
+ else
+ PKG_spelling.stage[4].disable_text_output()
+ end
+ }%
+}
+% Provide command for setting text output file name.
+\newcommand*{\spellingoutputname}[1]{%
+ \directlua{
+ PKG_spelling.stage[4].set_output_file_name('\luaescapestring{#1}')
+ }%
+}
+% Provide command for setting text output file line length.
+\newcommand*{\spellingoutputlinelength}[1]{%
+ \directlua{
+ PKG_spelling.stage[4].set_output_line_length(\luaescapestring{#1})
+ }%
+}
+% Provide command for enabling/disabling text extraction.
+\newcommand*{\spellingextract}[1]{%
+ \directlua{
+ if '\luaescapestring{#1}' == 'on' then
+ PKG_spelling.stage[2].enable_text_tagging()
+ else
+ PKG_spelling.stage[2].disable_text_tagging()
+ end
+ }%
+}
+% Provide command to declare code point mappings.
+\newcommand*{\spellingmapping}[2]{%
+ \directlua{
+ local r = PKG_spelling.stage[2].set_mapping(\luaescapestring{#1}, '\luaescapestring{#2}')
+ if r == false then
+ texio.write_nl('package spelling: invalid mapping: \luaescapestring{#1} => \luaescapestring{#2}')
+ end
+ }%
+}
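+% Judging from the Lua call above, the first argument is passed as a
+% number and the second as a string; a hypothetical example would be
+% `\spellingmapping{8217}{'}', mapping code point U+2019 (right single
+% quotation mark) to an ASCII apostrophe. The exact semantics are
+% defined by `set_mapping' in stage 2.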
+% Provide command to clear all existing code point mappings.
+\newcommand*{\spellingclearallmappings}{%
+ \directlua{
+ PKG_spelling.stage[2].clear_all_mappings()
+ }%
+}
+% Provide command to specify table paragraph behaviour.
+\newcommand*{\spellingtablepar}[1]{%
+ \directlua{
+ PKG_spelling.stage[3].set_table_paragraphs(\luaescapestring{#1})
+ }%
+}
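+% For example, `\spellingtablepar{1}' starts a new paragraph in the
+% text output before and after every table row, `\spellingtablepar{2}'
+% before and after every table cell (cf. option `table_par' in
+% spelling-stage-3.lua).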
+%
+% Read bad and good spellings from default sources.
+\directlua{
+ PKG_spelling.stage[1].parse_default_bad_and_good()
+}%
+%
+% Enable visual feedback.
+\spellinghighlight{on}
+% Enable text output at the end of the LuaTeX run.
+\spellingoutput{on}
+% Enable text extraction.
+\spellingextract{on}