Diffstat (limited to 'Build/source/libs/zziplib/zziplib-0.13.60/docs')
63 files changed, 15729 insertions, 0 deletions
diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/64on32.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/64on32.htm
new file mode 100644
index 00000000000..4da84140d5c
--- /dev/null
+++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/64on32.htm
@@ -0,0 +1,195 @@
+<section> <date> 13. Aug 2003 </date>
+<h2> 64on32 largefile information </h2>
+
+<section><!--border-->
+<h3> largefile problems </h3>
+
+<P>
+  Faced with real-world problems handling files larger than 2 gigabytes,
+  I had to accept that there are serious problems with the largefile
+  implementation around shared libraries. This stems from the fact that
+  64on32 platforms allow a preprocessor #define _FILE_OFFSET_BITS=64
+  which shifts the integral type off_t from a 32bit to a 64bit entity.
+</P>
+
+<P>
+  That will in fact lead to problems if an application is compiled
+  with a different off_t size than the shared library it is linked
+  against. Among the problems are different sizes of the call frame
+  for those functions that take an argument of off_t type. Note how
+  the "seek" call uses an off_t in the middle of its arguments, which
+  may be 32bit or 64bit depending on a preprocessor define. And
+  you know that zziplib wraps up "seek"-like calls as well.
+</P>
+
+<P>
+  The observations were largely made independently of zziplib,
+  however, as a zip file uses 32bit offsets in its header fields,
+  and therefore a single zip archive (or any of its wrapped files)
+  cannot be larger than 2 gigabytes anyway. For deeper information
+  I refer you to my website about this problem space at
+</P>
+<p><center><big>
+  <a href="http://ac-archive.sf.net/largefile" remap="url">
+  ac-archive.sf.net/largefile </a>
+</big></center></p>
+
+</section><section>
+<h3> zziplib related </h3>
+
+<P>
+  Still, the problems hold for zziplib usage when
+  the functions are linked dynamically as a shared library.
+  Here we can face the fact that an application (or a higher
+  level library) uses a different off_t size than the
+  underlying zziplib shared library.
+</P>
+
+<P>
+  If you read the <code>zzip/zzip.h</code> header file then
+  you will surely see a number of off_t usages around; here
+  they are wrapped in the form of zzip_off_t to cope with
+  platforms that do not predefine off_t in the first place. Those
+  functions are (at the time of writing):
+<blockquote><ul>
+<li> <code>zzip_telldir</code> (return type) </li>
+<li> <code>zzip_seekdir</code> (second param) </li>
+<li> <code>zzip_tell</code> (return type) </li>
+<li> <code>zzip_seek</code> (second param of three)</li>
+</ul></blockquote>
+</P>
+
+<P>
+  What might not be as obvious, however: you will also find
+  the off_t type being used in the plugin-handler callback
+  functions. That is based on the fact that the plugin
+  structure is filled by default with the POSIX functions
+  from the C library. A 64on32 platform, however, usually
+  offers a mixedmode C library exporting two symbols for
+  tell/seek calls to match either 32bit off_t or 64bit off_t:
+<blockquote><ul>
+<li> <code>zzip_plugin_io->seeks</code> (return type and second param) </li>
+<li> <code>zzip_plugin_io->filesize</code> (return type) </li>
+</ul></blockquote>
+</P>
+
+<P>
+  The problem here: the application might not make use of
+  zzip_seek/zzip_tell explicitly, but the internal implementation
+  of a zzip call may use io->seeks or io->filesize. When an
+  application uses plugin-io with these callbacks overridden
+  then problems will surely arise.
+</P>
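+
+<P>
+  To make the mismatch tangible, here is a minimal standalone sketch
+  (it is not part of zziplib): compile it on a 64on32 platform once
+  with and once without -D_FILE_OFFSET_BITS=64 and compare the output.
+  The seek-like typedef is only there to show that the call frame of
+  such a function depends on that very preprocessor setting.
+</P>
+<pre>
+/* offcheck.c -- print the off_t width this translation unit was built with.
+ *   cc -o offcheck32 offcheck.c
+ *   cc -D_FILE_OFFSET_BITS=64 -o offcheck64 offcheck.c
+ */
+#include &lt;stdio.h&gt;
+#include &lt;sys/types.h&gt;
+
+/* a seek-like signature in the spirit of lseek (and the default io->seeks):
+ * the off_t argument sits in the middle of the argument list */
+typedef off_t (*seek_like)(int fd, off_t offset, int whence);
+
+int main(void)
+{
+    /* 4 on a plain 32bit build, 8 when _FILE_OFFSET_BITS=64 is in effect */
+    printf("sizeof(off_t) = %u\n", (unsigned) sizeof(off_t));
+    printf("seek-like argument bytes = %u\n",
+           (unsigned) (sizeof(int) + sizeof(off_t) + sizeof(int)));
+    return 0;
+}
+</pre>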
+
+</section><section>
+<h3> zziplib mixedmode option </h3>
+
+<P>
+  I have extended the zziplib implementation to allow it to live
+  fine on 64on32 systems. The 64on32 systems are those like Linux
+  and Solaris where the default off_t is 32bit and only the
+  preprocessor hint shifts it into 64bit. The C library on these
+  systems is a mixedmode one, offering a pair for each of the
+  problematic functions - lseek <em>and</em> lseek64, for example.
+</P>
+
+<P>
+  The zziplib header file detects when it is present on a
+  64on32 system (through hints in the configured zzip/conf.h)
+  and that _FILE_OFFSET_BITS has been set to 64bit. In that
+  case it automatically issues #defines that shift the
+  symbol name from zzip_seek to zzip_seek64. Likewise,
+  <em>all</em> the *_ext_io functions are renamed to
+  *_ext_io64 calls.
+</P>
+
+<P>
+  The zziplib library itself will also pick up the renamings
+  when it is compiled with a 64bit off_t - in effect an application
+  with a 64bit-off_t dependency can only link with a zziplib
+  compiled in 64bit-off_t mode. If the application does not
+  use any call symbol with an off_t dependency then it does
+  not matter and the link will succeed. That's simply because
+  function calls without an off_t dependency are not renamed
+  and they are the same for a 32bit-off_t zziplib or a
+  64bit-off_t zziplib.
+</P>
+
+<P>
+  As an extra, zziplib exports a few of its common calls
+  like a mixedmode library when you compile it both in
+  64bit mode and as a shared library. In that case, the
+  resulting shared library will export symbol pairs for the
+  calls with an off_t dependency, i.e. both zzip_seek and
+  zzip_seek64 are present.
+</P>
+
+<P>
+  Note that, to remain a lightweight library, zziplib does not
+  export mixedmode call pairs for the *_ext_io family of
+  functions. The current generation of zziplib calls io->seeks
+  unconditionally, without any case distinction, and so far
+  there are no problems with the current design.
+</P>
+
+</section><section>
+<h3> Implementation details </h3>
+
+<P>
+  In the header file zzip/zzip.h you will find the define
+  ZZIP_LARGEFILE_RENAME which triggers the renaming process.
+  See zzip/conf.h for the conditions under which it is triggered.
+</P>
+
+<P>
+  For the implementation of the mixedmode symbol pairs, see
+  zzip/dir.c for an example of the zzip_seekdir/zzip_seekdir64
+  pair - here we use libtool's -DPIC to detect the situation of
+  being compiled as a shared library, we use the preprocessor
+  define ZZIP_LARGEFILE_RENAME to know we are on a 64on32
+  system compiled with a 64bit off_t, and we check that the
+  transitional largefile API is present by looking for the
+  EOVERFLOW errno.
+</P>
+
+<P>
+  When all three are present, we simply #undef the renaming
+  preprocessor macro, define a function symbol (without the
+  renaming), and have it call the renamed symbol already
+  compiled a few lines before. We use the pre-off_t type
+  "long" for the 32bit entity of these calls. While we mostly
+  let the compiler do the shrink/expand of these integer
+  types, we do also sometimes check for overflows of the
+  seek value.
+</P>
+
+</section><section>
+<h3> rpm extras and pkg-config </h3>
+
+<P>
+  The provided .spec file shows how to compile both variants
+  of the zziplib shared library and install them in parallel
+  on the system. We also provide doubled sets of .pc files
+  for pkg-config installation. That should make it a lot
+  easier for applications to link to the correct library
+  they want.
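+  For example, a program built with a 64bit off_t would then ask
+  pkg-config for the matching module explicitly (a hypothetical build
+  line; the module names are the ones from the listing below):
+<pre>
+$ cc -D_FILE_OFFSET_BITS=64 -o myapp myapp.c \
+      `pkg-config --cflags --libs zziplib64`
+</pre>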
+</P> + +<P> + Here are all the variants that you can find after installing + the vanilla rpm files from zziplib.sf.net: +<pre> +$ pkg-config --list-all | sort | grep zzip +zziplib32 zziplib32 - ZZipLib - libZ-based ZIP-access Library +zziplib64 zziplib64 - ZZipLib - libZ-based ZIP-access Library +zziplib zziplib - ZZipLib - libZ-based ZIP-access Library +zzip-sdl-config zzip-sdl-config - SDL Config (for ZZipLib) +zzip-sdl-rwops zzip-sdl-rwops - SDL_rwops for ZZipLib +zzipwrap zzipwrap - Callback Wrappers for ZZipLib +zzip-zlib-config zzip-zlib-config - ZLib Config (for ZZipLib) +</pre></P> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.LIB b/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.LIB new file mode 100644 index 00000000000..eb685a5ec98 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.LIB @@ -0,0 +1,481 @@ + GNU LIBRARY GENERAL PUBLIC LICENSE + Version 2, June 1991 + + Copyright (C) 1991 Free Software Foundation, Inc. + 675 Mass Ave, Cambridge, MA 02139, USA + Everyone is permitted to copy and distribute verbatim copies + of this license document, but changing it is not allowed. + +[This is the first released version of the library GPL. It is + numbered 2 because it goes with version 2 of the ordinary GPL.] + + Preamble + + The licenses for most software are designed to take away your +freedom to share and change it. By contrast, the GNU General Public +Licenses are intended to guarantee your freedom to share and change +free software--to make sure the software is free for all its users. + + This license, the Library General Public License, applies to some +specially designated Free Software Foundation software, and to any +other libraries whose authors decide to use it. You can use it for +your libraries, too. + + When we speak of free software, we are referring to freedom, not +price. Our General Public Licenses are designed to make sure that you +have the freedom to distribute copies of free software (and charge for +this service if you wish), that you receive source code or can get it +if you want it, that you can change the software or use pieces of it +in new free programs; and that you know you can do these things. + + To protect your rights, we need to make restrictions that forbid +anyone to deny you these rights or to ask you to surrender the rights. +These restrictions translate to certain responsibilities for you if +you distribute copies of the library, or if you modify it. + + For example, if you distribute copies of the library, whether gratis +or for a fee, you must give the recipients all the rights that we gave +you. You must make sure that they, too, receive or can get the source +code. If you link a program with the library, you must provide +complete object files to the recipients so that they can relink them +with the library, after making changes to the library and recompiling +it. And you must show them these terms so they know their rights. + + Our method of protecting your rights has two steps: (1) copyright +the library, and (2) offer you this license which gives you legal +permission to copy, distribute and/or modify the library. + + Also, for each distributor's protection, we want to make certain +that everyone understands that there is no warranty for this free +library. 
If the library is modified by someone else and passed on, we +want its recipients to know that what they have is not the original +version, so that any problems introduced by others will not reflect on +the original authors' reputations. + + Finally, any free program is threatened constantly by software +patents. We wish to avoid the danger that companies distributing free +software will individually obtain patent licenses, thus in effect +transforming the program into proprietary software. To prevent this, +we have made it clear that any patent must be licensed for everyone's +free use or not licensed at all. + + Most GNU software, including some libraries, is covered by the ordinary +GNU General Public License, which was designed for utility programs. This +license, the GNU Library General Public License, applies to certain +designated libraries. This license is quite different from the ordinary +one; be sure to read it in full, and don't assume that anything in it is +the same as in the ordinary license. + + The reason we have a separate public license for some libraries is that +they blur the distinction we usually make between modifying or adding to a +program and simply using it. Linking a program with a library, without +changing the library, is in some sense simply using the library, and is +analogous to running a utility program or application program. However, in +a textual and legal sense, the linked executable is a combined work, a +derivative of the original library, and the ordinary General Public License +treats it as such. + + Because of this blurred distinction, using the ordinary General +Public License for libraries did not effectively promote software +sharing, because most developers did not use the libraries. We +concluded that weaker conditions might promote sharing better. + + However, unrestricted linking of non-free programs would deprive the +users of those programs of all benefit from the free status of the +libraries themselves. This Library General Public License is intended to +permit developers of non-free programs to use free libraries, while +preserving your freedom as a user of such programs to change the free +libraries that are incorporated in them. (We have not seen how to achieve +this as regards changes in header files, but we have achieved it as regards +changes in the actual functions of the Library.) The hope is that this +will lead to faster development of free libraries. + + The precise terms and conditions for copying, distribution and +modification follow. Pay close attention to the difference between a +"work based on the library" and a "work that uses the library". The +former contains code derived from the library, while the latter only +works together with the library. + + Note that it is possible for a library to be covered by the ordinary +General Public License rather than by this special one. + + GNU LIBRARY GENERAL PUBLIC LICENSE + TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION + + 0. This License Agreement applies to any software library which +contains a notice placed by the copyright holder or other authorized +party saying it may be distributed under the terms of this Library +General Public License (also called "this License"). Each licensee is +addressed as "you". + + A "library" means a collection of software functions and/or data +prepared so as to be conveniently linked with application programs +(which use some of those functions and data) to form executables. 
+ + The "Library", below, refers to any such software library or work +which has been distributed under these terms. A "work based on the +Library" means either the Library or any derivative work under +copyright law: that is to say, a work containing the Library or a +portion of it, either verbatim or with modifications and/or translated +straightforwardly into another language. (Hereinafter, translation is +included without limitation in the term "modification".) + + "Source code" for a work means the preferred form of the work for +making modifications to it. For a library, complete source code means +all the source code for all modules it contains, plus any associated +interface definition files, plus the scripts used to control compilation +and installation of the library. + + Activities other than copying, distribution and modification are not +covered by this License; they are outside its scope. The act of +running a program using the Library is not restricted, and output from +such a program is covered only if its contents constitute a work based +on the Library (independent of the use of the Library in a tool for +writing it). Whether that is true depends on what the Library does +and what the program that uses the Library does. + + 1. You may copy and distribute verbatim copies of the Library's +complete source code as you receive it, in any medium, provided that +you conspicuously and appropriately publish on each copy an +appropriate copyright notice and disclaimer of warranty; keep intact +all the notices that refer to this License and to the absence of any +warranty; and distribute a copy of this License along with the +Library. + + You may charge a fee for the physical act of transferring a copy, +and you may at your option offer warranty protection in exchange for a +fee. + + 2. You may modify your copy or copies of the Library or any portion +of it, thus forming a work based on the Library, and copy and +distribute such modifications or work under the terms of Section 1 +above, provided that you also meet all of these conditions: + + a) The modified work must itself be a software library. + + b) You must cause the files modified to carry prominent notices + stating that you changed the files and the date of any change. + + c) You must cause the whole of the work to be licensed at no + charge to all third parties under the terms of this License. + + d) If a facility in the modified Library refers to a function or a + table of data to be supplied by an application program that uses + the facility, other than as an argument passed when the facility + is invoked, then you must make a good faith effort to ensure that, + in the event an application does not supply such function or + table, the facility still operates, and performs whatever part of + its purpose remains meaningful. + + (For example, a function in a library to compute square roots has + a purpose that is entirely well-defined independent of the + application. Therefore, Subsection 2d requires that any + application-supplied function or table used by this function must + be optional: if the application does not supply it, the square + root function must still compute square roots.) + +These requirements apply to the modified work as a whole. If +identifiable sections of that work are not derived from the Library, +and can be reasonably considered independent and separate works in +themselves, then this License, and its terms, do not apply to those +sections when you distribute them as separate works. 
But when you +distribute the same sections as part of a whole which is a work based +on the Library, the distribution of the whole must be on the terms of +this License, whose permissions for other licensees extend to the +entire whole, and thus to each and every part regardless of who wrote +it. + +Thus, it is not the intent of this section to claim rights or contest +your rights to work written entirely by you; rather, the intent is to +exercise the right to control the distribution of derivative or +collective works based on the Library. + +In addition, mere aggregation of another work not based on the Library +with the Library (or with a work based on the Library) on a volume of +a storage or distribution medium does not bring the other work under +the scope of this License. + + 3. You may opt to apply the terms of the ordinary GNU General Public +License instead of this License to a given copy of the Library. To do +this, you must alter all the notices that refer to this License, so +that they refer to the ordinary GNU General Public License, version 2, +instead of to this License. (If a newer version than version 2 of the +ordinary GNU General Public License has appeared, then you can specify +that version instead if you wish.) Do not make any other change in +these notices. + + Once this change is made in a given copy, it is irreversible for +that copy, so the ordinary GNU General Public License applies to all +subsequent copies and derivative works made from that copy. + + This option is useful when you wish to copy part of the code of +the Library into a program that is not a library. + + 4. You may copy and distribute the Library (or a portion or +derivative of it, under Section 2) in object code or executable form +under the terms of Sections 1 and 2 above provided that you accompany +it with the complete corresponding machine-readable source code, which +must be distributed under the terms of Sections 1 and 2 above on a +medium customarily used for software interchange. + + If distribution of object code is made by offering access to copy +from a designated place, then offering equivalent access to copy the +source code from the same place satisfies the requirement to +distribute the source code, even though third parties are not +compelled to copy the source along with the object code. + + 5. A program that contains no derivative of any portion of the +Library, but is designed to work with the Library by being compiled or +linked with it, is called a "work that uses the Library". Such a +work, in isolation, is not a derivative work of the Library, and +therefore falls outside the scope of this License. + + However, linking a "work that uses the Library" with the Library +creates an executable that is a derivative of the Library (because it +contains portions of the Library), rather than a "work that uses the +library". The executable is therefore covered by this License. +Section 6 states terms for distribution of such executables. + + When a "work that uses the Library" uses material from a header file +that is part of the Library, the object code for the work may be a +derivative work of the Library even though the source code is not. +Whether this is true is especially significant if the work can be +linked without the Library, or if the work is itself a library. The +threshold for this to be true is not precisely defined by law. 
+ + If such an object file uses only numerical parameters, data +structure layouts and accessors, and small macros and small inline +functions (ten lines or less in length), then the use of the object +file is unrestricted, regardless of whether it is legally a derivative +work. (Executables containing this object code plus portions of the +Library will still fall under Section 6.) + + Otherwise, if the work is a derivative of the Library, you may +distribute the object code for the work under the terms of Section 6. +Any executables containing that work also fall under Section 6, +whether or not they are linked directly with the Library itself. + + 6. As an exception to the Sections above, you may also compile or +link a "work that uses the Library" with the Library to produce a +work containing portions of the Library, and distribute that work +under terms of your choice, provided that the terms permit +modification of the work for the customer's own use and reverse +engineering for debugging such modifications. + + You must give prominent notice with each copy of the work that the +Library is used in it and that the Library and its use are covered by +this License. You must supply a copy of this License. If the work +during execution displays copyright notices, you must include the +copyright notice for the Library among them, as well as a reference +directing the user to the copy of this License. Also, you must do one +of these things: + + a) Accompany the work with the complete corresponding + machine-readable source code for the Library including whatever + changes were used in the work (which must be distributed under + Sections 1 and 2 above); and, if the work is an executable linked + with the Library, with the complete machine-readable "work that + uses the Library", as object code and/or source code, so that the + user can modify the Library and then relink to produce a modified + executable containing the modified Library. (It is understood + that the user who changes the contents of definitions files in the + Library will not necessarily be able to recompile the application + to use the modified definitions.) + + b) Accompany the work with a written offer, valid for at + least three years, to give the same user the materials + specified in Subsection 6a, above, for a charge no more + than the cost of performing this distribution. + + c) If distribution of the work is made by offering access to copy + from a designated place, offer equivalent access to copy the above + specified materials from the same place. + + d) Verify that the user has already received a copy of these + materials or that you have already sent this user a copy. + + For an executable, the required form of the "work that uses the +Library" must include any data and utility programs needed for +reproducing the executable from it. However, as a special exception, +the source code distributed need not include anything that is normally +distributed (in either source or binary form) with the major +components (compiler, kernel, and so on) of the operating system on +which the executable runs, unless that component itself accompanies +the executable. + + It may happen that this requirement contradicts the license +restrictions of other proprietary libraries that do not normally +accompany the operating system. Such a contradiction means you cannot +use both them and the Library together in an executable that you +distribute. + + 7. 
You may place library facilities that are a work based on the +Library side-by-side in a single library together with other library +facilities not covered by this License, and distribute such a combined +library, provided that the separate distribution of the work based on +the Library and of the other library facilities is otherwise +permitted, and provided that you do these two things: + + a) Accompany the combined library with a copy of the same work + based on the Library, uncombined with any other library + facilities. This must be distributed under the terms of the + Sections above. + + b) Give prominent notice with the combined library of the fact + that part of it is a work based on the Library, and explaining + where to find the accompanying uncombined form of the same work. + + 8. You may not copy, modify, sublicense, link with, or distribute +the Library except as expressly provided under this License. Any +attempt otherwise to copy, modify, sublicense, link with, or +distribute the Library is void, and will automatically terminate your +rights under this License. However, parties who have received copies, +or rights, from you under this License will not have their licenses +terminated so long as such parties remain in full compliance. + + 9. You are not required to accept this License, since you have not +signed it. However, nothing else grants you permission to modify or +distribute the Library or its derivative works. These actions are +prohibited by law if you do not accept this License. Therefore, by +modifying or distributing the Library (or any work based on the +Library), you indicate your acceptance of this License to do so, and +all its terms and conditions for copying, distributing or modifying +the Library or works based on it. + + 10. Each time you redistribute the Library (or any work based on the +Library), the recipient automatically receives a license from the +original licensor to copy, distribute, link with or modify the Library +subject to these terms and conditions. You may not impose any further +restrictions on the recipients' exercise of the rights granted herein. +You are not responsible for enforcing compliance by third parties to +this License. + + 11. If, as a consequence of a court judgment or allegation of patent +infringement or for any other reason (not limited to patent issues), +conditions are imposed on you (whether by court order, agreement or +otherwise) that contradict the conditions of this License, they do not +excuse you from the conditions of this License. If you cannot +distribute so as to satisfy simultaneously your obligations under this +License and any other pertinent obligations, then as a consequence you +may not distribute the Library at all. For example, if a patent +license would not permit royalty-free redistribution of the Library by +all those who receive copies directly or indirectly through you, then +the only way you could satisfy both it and this License would be to +refrain entirely from distribution of the Library. + +If any portion of this section is held invalid or unenforceable under any +particular circumstance, the balance of the section is intended to apply, +and the section as a whole is intended to apply in other circumstances. 
+ +It is not the purpose of this section to induce you to infringe any +patents or other property right claims or to contest validity of any +such claims; this section has the sole purpose of protecting the +integrity of the free software distribution system which is +implemented by public license practices. Many people have made +generous contributions to the wide range of software distributed +through that system in reliance on consistent application of that +system; it is up to the author/donor to decide if he or she is willing +to distribute software through any other system and a licensee cannot +impose that choice. + +This section is intended to make thoroughly clear what is believed to +be a consequence of the rest of this License. + + 12. If the distribution and/or use of the Library is restricted in +certain countries either by patents or by copyrighted interfaces, the +original copyright holder who places the Library under this License may add +an explicit geographical distribution limitation excluding those countries, +so that distribution is permitted only in or among countries not thus +excluded. In such case, this License incorporates the limitation as if +written in the body of this License. + + 13. The Free Software Foundation may publish revised and/or new +versions of the Library General Public License from time to time. +Such new versions will be similar in spirit to the present version, +but may differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Library +specifies a version number of this License which applies to it and +"any later version", you have the option of following the terms and +conditions either of that version or of any later version published by +the Free Software Foundation. If the Library does not specify a +license version number, you may choose any version ever published by +the Free Software Foundation. + + 14. If you wish to incorporate parts of the Library into other free +programs whose distribution conditions are incompatible with these, +write to the author to ask for permission. For software which is +copyrighted by the Free Software Foundation, write to the Free +Software Foundation; we sometimes make exceptions for this. Our +decision will be guided by the two goals of preserving the free status +of all derivatives of our free software and of promoting the sharing +and reuse of software generally. + + NO WARRANTY + + 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO +WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. +EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR +OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY +KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR +PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE +LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME +THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + + 16. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN +WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY +AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU +FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR +CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE +LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING +RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A +FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF +SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH +DAMAGES. + + END OF TERMS AND CONDITIONS + + Appendix: How to Apply These Terms to Your New Libraries + + If you develop a new library, and you want it to be of the greatest +possible use to the public, we recommend making it free software that +everyone can redistribute and change. You can do so by permitting +redistribution under these terms (or, alternatively, under the terms of the +ordinary General Public License). + + To apply these terms, attach the following notices to the library. It is +safest to attach them to the start of each source file to most effectively +convey the exclusion of warranty; and each file should have at least the +"copyright" line and a pointer to where the full notice is found. + + <one line to give the library's name and a brief idea of what it does.> + Copyright (C) <year> <name of author> + + This library is free software; you can redistribute it and/or + modify it under the terms of the GNU Library General Public + License as published by the Free Software Foundation; either + version 2 of the License, or (at your option) any later version. + + This library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Library General Public License for more details. + + You should have received a copy of the GNU Library General Public + License along with this library; if not, write to the Free + Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. + +Also add information on how to contact you by electronic and paper mail. + +You should also get your employer (if you work as a programmer) or your +school, if any, to sign a "copyright disclaimer" for the library, if +necessary. Here is a sample; alter the names: + + Yoyodyne, Inc., hereby disclaims all copyright interest in the + library `Frob' (a library for tweaking knobs) written by James Random Hacker. + + <signature of Ty Coon>, 1 April 1990 + Ty Coon, President of Vice + +That's all there is to it! diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.MPL b/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.MPL new file mode 100644 index 00000000000..18f8109b797 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.MPL @@ -0,0 +1,567 @@ + MOZILLA PUBLIC LICENSE + Version 1.1 + + --------------- + +1. Definitions. + + 1.0.1. "Commercial Use" means distribution or otherwise making the + Covered Code available to a third party. + + 1.1. "Contributor" means each entity that creates or contributes to + the creation of Modifications. + + 1.2. "Contributor Version" means the combination of the Original + Code, prior Modifications used by a Contributor, and the Modifications + made by that particular Contributor. + + 1.3. 
"Covered Code" means the Original Code or Modifications or the + combination of the Original Code and Modifications, in each case + including portions thereof. + + 1.4. "Electronic Distribution Mechanism" means a mechanism generally + accepted in the software development community for the electronic + transfer of data. + + 1.5. "Executable" means Covered Code in any form other than Source + Code. + + 1.6. "Initial Developer" means the individual or entity identified + as the Initial Developer in the Source Code notice required by Exhibit + A. + + 1.7. "Larger Work" means a work which combines Covered Code or + portions thereof with code not governed by the terms of this License. + + 1.8. "License" means this document. + + 1.8.1. "Licensable" means having the right to grant, to the maximum + extent possible, whether at the time of the initial grant or + subsequently acquired, any and all of the rights conveyed herein. + + 1.9. "Modifications" means any addition to or deletion from the + substance or structure of either the Original Code or any previous + Modifications. When Covered Code is released as a series of files, a + Modification is: + A. Any addition to or deletion from the contents of a file + containing Original Code or previous Modifications. + + B. Any new file that contains any part of the Original Code or + previous Modifications. + + 1.10. "Original Code" means Source Code of computer software code + which is described in the Source Code notice required by Exhibit A as + Original Code, and which, at the time of its release under this + License is not already Covered Code governed by this License. + + 1.10.1. "Patent Claims" means any patent claim(s), now owned or + hereafter acquired, including without limitation, method, process, + and apparatus claims, in any patent Licensable by grantor. + + 1.11. "Source Code" means the preferred form of the Covered Code for + making modifications to it, including all modules it contains, plus + any associated interface definition files, scripts used to control + compilation and installation of an Executable, or source code + differential comparisons against either the Original Code or another + well known, available Covered Code of the Contributor's choice. The + Source Code can be in a compressed or archival form, provided the + appropriate decompression or de-archiving software is widely available + for no charge. + + 1.12. "You" (or "Your") means an individual or a legal entity + exercising rights under, and complying with all of the terms of, this + License or a future version of this License issued under Section 6.1. + For legal entities, "You" includes any entity which controls, is + controlled by, or is under common control with You. For purposes of + this definition, "control" means (a) the power, direct or indirect, + to cause the direction or management of such entity, whether by + contract or otherwise, or (b) ownership of more than fifty percent + (50%) of the outstanding shares or beneficial ownership of such + entity. + +2. Source Code License. + + 2.1. The Initial Developer Grant. 
+ The Initial Developer hereby grants You a world-wide, royalty-free, + non-exclusive license, subject to third party intellectual property + claims: + (a) under intellectual property rights (other than patent or + trademark) Licensable by Initial Developer to use, reproduce, + modify, display, perform, sublicense and distribute the Original + Code (or portions thereof) with or without Modifications, and/or + as part of a Larger Work; and + + (b) under Patents Claims infringed by the making, using or + selling of Original Code, to make, have made, use, practice, + sell, and offer for sale, and/or otherwise dispose of the + Original Code (or portions thereof). + + (c) the licenses granted in this Section 2.1(a) and (b) are + effective on the date Initial Developer first distributes + Original Code under the terms of this License. + + (d) Notwithstanding Section 2.1(b) above, no patent license is + granted: 1) for code that You delete from the Original Code; 2) + separate from the Original Code; or 3) for infringements caused + by: i) the modification of the Original Code or ii) the + combination of the Original Code with other software or devices. + + 2.2. Contributor Grant. + Subject to third party intellectual property claims, each Contributor + hereby grants You a world-wide, royalty-free, non-exclusive license + + (a) under intellectual property rights (other than patent or + trademark) Licensable by Contributor, to use, reproduce, modify, + display, perform, sublicense and distribute the Modifications + created by such Contributor (or portions thereof) either on an + unmodified basis, with other Modifications, as Covered Code + and/or as part of a Larger Work; and + + (b) under Patent Claims infringed by the making, using, or + selling of Modifications made by that Contributor either alone + and/or in combination with its Contributor Version (or portions + of such combination), to make, use, sell, offer for sale, have + made, and/or otherwise dispose of: 1) Modifications made by that + Contributor (or portions thereof); and 2) the combination of + Modifications made by that Contributor with its Contributor + Version (or portions of such combination). + + (c) the licenses granted in Sections 2.2(a) and 2.2(b) are + effective on the date Contributor first makes Commercial Use of + the Covered Code. + + (d) Notwithstanding Section 2.2(b) above, no patent license is + granted: 1) for any code that Contributor has deleted from the + Contributor Version; 2) separate from the Contributor Version; + 3) for infringements caused by: i) third party modifications of + Contributor Version or ii) the combination of Modifications made + by that Contributor with other software (except as part of the + Contributor Version) or other devices; or 4) under Patent Claims + infringed by Covered Code in the absence of Modifications made by + that Contributor. + +3. Distribution Obligations. + + 3.1. Application of License. + The Modifications which You create or to which You contribute are + governed by the terms of this License, including without limitation + Section 2.2. The Source Code version of Covered Code may be + distributed only under the terms of this License or a future version + of this License released under Section 6.1, and You must include a + copy of this License with every copy of the Source Code You + distribute. You may not offer or impose any terms on any Source Code + version that alters or restricts the applicable version of this + License or the recipients' rights hereunder. 
However, You may include + an additional document offering the additional rights described in + Section 3.5. + + 3.2. Availability of Source Code. + Any Modification which You create or to which You contribute must be + made available in Source Code form under the terms of this License + either on the same media as an Executable version or via an accepted + Electronic Distribution Mechanism to anyone to whom you made an + Executable version available; and if made available via Electronic + Distribution Mechanism, must remain available for at least twelve (12) + months after the date it initially became available, or at least six + (6) months after a subsequent version of that particular Modification + has been made available to such recipients. You are responsible for + ensuring that the Source Code version remains available even if the + Electronic Distribution Mechanism is maintained by a third party. + + 3.3. Description of Modifications. + You must cause all Covered Code to which You contribute to contain a + file documenting the changes You made to create that Covered Code and + the date of any change. You must include a prominent statement that + the Modification is derived, directly or indirectly, from Original + Code provided by the Initial Developer and including the name of the + Initial Developer in (a) the Source Code, and (b) in any notice in an + Executable version or related documentation in which You describe the + origin or ownership of the Covered Code. + + 3.4. Intellectual Property Matters + (a) Third Party Claims. + If Contributor has knowledge that a license under a third party's + intellectual property rights is required to exercise the rights + granted by such Contributor under Sections 2.1 or 2.2, + Contributor must include a text file with the Source Code + distribution titled "LEGAL" which describes the claim and the + party making the claim in sufficient detail that a recipient will + know whom to contact. If Contributor obtains such knowledge after + the Modification is made available as described in Section 3.2, + Contributor shall promptly modify the LEGAL file in all copies + Contributor makes available thereafter and shall take other steps + (such as notifying appropriate mailing lists or newsgroups) + reasonably calculated to inform those who received the Covered + Code that new knowledge has been obtained. + + (b) Contributor APIs. + If Contributor's Modifications include an application programming + interface and Contributor has knowledge of patent licenses which + are reasonably necessary to implement that API, Contributor must + also include this information in the LEGAL file. + + (c) Representations. + Contributor represents that, except as disclosed pursuant to + Section 3.4(a) above, Contributor believes that Contributor's + Modifications are Contributor's original creation(s) and/or + Contributor has sufficient rights to grant the rights conveyed by + this License. + + 3.5. Required Notices. + You must duplicate the notice in Exhibit A in each file of the Source + Code. If it is not possible to put such notice in a particular Source + Code file due to its structure, then You must include such notice in a + location (such as a relevant directory) where a user would be likely + to look for such a notice. If You created one or more Modification(s) + You may add your name as a Contributor to the notice described in + Exhibit A. 
You must also duplicate this License in any documentation + for the Source Code where You describe recipients' rights or ownership + rights relating to Covered Code. You may choose to offer, and to + charge a fee for, warranty, support, indemnity or liability + obligations to one or more recipients of Covered Code. However, You + may do so only on Your own behalf, and not on behalf of the Initial + Developer or any Contributor. You must make it absolutely clear than + any such warranty, support, indemnity or liability obligation is + offered by You alone, and You hereby agree to indemnify the Initial + Developer and every Contributor for any liability incurred by the + Initial Developer or such Contributor as a result of warranty, + support, indemnity or liability terms You offer. + + 3.6. Distribution of Executable Versions. + You may distribute Covered Code in Executable form only if the + requirements of Section 3.1-3.5 have been met for that Covered Code, + and if You include a notice stating that the Source Code version of + the Covered Code is available under the terms of this License, + including a description of how and where You have fulfilled the + obligations of Section 3.2. The notice must be conspicuously included + in any notice in an Executable version, related documentation or + collateral in which You describe recipients' rights relating to the + Covered Code. You may distribute the Executable version of Covered + Code or ownership rights under a license of Your choice, which may + contain terms different from this License, provided that You are in + compliance with the terms of this License and that the license for the + Executable version does not attempt to limit or alter the recipient's + rights in the Source Code version from the rights set forth in this + License. If You distribute the Executable version under a different + license You must make it absolutely clear that any terms which differ + from this License are offered by You alone, not by the Initial + Developer or any Contributor. You hereby agree to indemnify the + Initial Developer and every Contributor for any liability incurred by + the Initial Developer or such Contributor as a result of any such + terms You offer. + + 3.7. Larger Works. + You may create a Larger Work by combining Covered Code with other code + not governed by the terms of this License and distribute the Larger + Work as a single product. In such a case, You must make sure the + requirements of this License are fulfilled for the Covered Code. + +4. Inability to Comply Due to Statute or Regulation. + + If it is impossible for You to comply with any of the terms of this + License with respect to some or all of the Covered Code due to + statute, judicial order, or regulation then You must: (a) comply with + the terms of this License to the maximum extent possible; and (b) + describe the limitations and the code they affect. Such description + must be included in the LEGAL file described in Section 3.4 and must + be included with all distributions of the Source Code. Except to the + extent prohibited by statute or regulation, such description must be + sufficiently detailed for a recipient of ordinary skill to be able to + understand it. + +5. Application of this License. + + This License applies to code to which the Initial Developer has + attached the notice in Exhibit A and to related Covered Code. + +6. Versions of the License. + + 6.1. New Versions. 
+ Netscape Communications Corporation ("Netscape") may publish revised + and/or new versions of the License from time to time. Each version + will be given a distinguishing version number. + + 6.2. Effect of New Versions. + Once Covered Code has been published under a particular version of the + License, You may always continue to use it under the terms of that + version. You may also choose to use such Covered Code under the terms + of any subsequent version of the License published by Netscape. No one + other than Netscape has the right to modify the terms applicable to + Covered Code created under this License. + + 6.3. Derivative Works. + If You create or use a modified version of this License (which you may + only do in order to apply it to code which is not already Covered Code + governed by this License), You must (a) rename Your license so that + the phrases "Mozilla", "MOZILLAPL", "MOZPL", "Netscape", + "MPL", "NPL" or any confusingly similar phrase do not appear in your + license (except to note that your license differs from this License) + and (b) otherwise make it clear that Your version of the license + contains terms which differ from the Mozilla Public License and + Netscape Public License. (Filling in the name of the Initial + Developer, Original Code or Contributor in the notice described in + Exhibit A shall not of themselves be deemed to be modifications of + this License.) + +7. DISCLAIMER OF WARRANTY. + + COVERED CODE IS PROVIDED UNDER THIS LICENSE ON AN "AS IS" BASIS, + WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, + WITHOUT LIMITATION, WARRANTIES THAT THE COVERED CODE IS FREE OF + DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. + THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED CODE + IS WITH YOU. SHOULD ANY COVERED CODE PROVE DEFECTIVE IN ANY RESPECT, + YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE + COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER + OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF + ANY COVERED CODE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER. + +8. TERMINATION. + + 8.1. This License and the rights granted hereunder will terminate + automatically if You fail to comply with terms herein and fail to cure + such breach within 30 days of becoming aware of the breach. All + sublicenses to the Covered Code which are properly granted shall + survive any termination of this License. Provisions which, by their + nature, must remain in effect beyond the termination of this License + shall survive. + + 8.2. If You initiate litigation by asserting a patent infringement + claim (excluding declatory judgment actions) against Initial Developer + or a Contributor (the Initial Developer or Contributor against whom + You file such action is referred to as "Participant") alleging that: + + (a) such Participant's Contributor Version directly or indirectly + infringes any patent, then any and all rights granted by such + Participant to You under Sections 2.1 and/or 2.2 of this License + shall, upon 60 days notice from Participant terminate prospectively, + unless if within 60 days after receipt of notice You either: (i) + agree in writing to pay Participant a mutually agreeable reasonable + royalty for Your past and future use of Modifications made by such + Participant, or (ii) withdraw Your litigation claim with respect to + the Contributor Version against such Participant. 
If within 60 days + of notice, a reasonable royalty and payment arrangement are not + mutually agreed upon in writing by the parties or the litigation claim + is not withdrawn, the rights granted by Participant to You under + Sections 2.1 and/or 2.2 automatically terminate at the expiration of + the 60 day notice period specified above. + + (b) any software, hardware, or device, other than such Participant's + Contributor Version, directly or indirectly infringes any patent, then + any rights granted to You by such Participant under Sections 2.1(b) + and 2.2(b) are revoked effective as of the date You first made, used, + sold, distributed, or had made, Modifications made by that + Participant. + + 8.3. If You assert a patent infringement claim against Participant + alleging that such Participant's Contributor Version directly or + indirectly infringes any patent where such claim is resolved (such as + by license or settlement) prior to the initiation of patent + infringement litigation, then the reasonable value of the licenses + granted by such Participant under Sections 2.1 or 2.2 shall be taken + into account in determining the amount or value of any payment or + license. + + 8.4. In the event of termination under Sections 8.1 or 8.2 above, + all end user license agreements (excluding distributors and resellers) + which have been validly granted by You or any distributor hereunder + prior to termination shall survive termination. + +9. LIMITATION OF LIABILITY. + + UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT + (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL + DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED CODE, + OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR + ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY + CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF GOODWILL, + WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER + COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN + INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF + LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY + RESULTING FROM SUCH PARTY'S NEGLIGENCE TO THE EXTENT APPLICABLE LAW + PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE + EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO + THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU. + +10. U.S. GOVERNMENT END USERS. + + The Covered Code is a "commercial item," as that term is defined in + 48 C.F.R. 2.101 (Oct. 1995), consisting of "commercial computer + software" and "commercial computer software documentation," as such + terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48 + C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995), + all U.S. Government End Users acquire Covered Code with only those + rights set forth herein. + +11. MISCELLANEOUS. + + This License represents the complete agreement concerning subject + matter hereof. If any provision of this License is held to be + unenforceable, such provision shall be reformed only to the extent + necessary to make it enforceable. This License shall be governed by + California law provisions (except to the extent applicable law, if + any, provides otherwise), excluding its conflict-of-law provisions. 
+ With respect to disputes in which at least one party is a citizen of, + or an entity chartered or registered to do business in the United + States of America, any litigation relating to this License shall be + subject to the jurisdiction of the Federal Courts of the Northern + District of California, with venue lying in Santa Clara County, + California, with the losing party responsible for costs, including + without limitation, court costs and reasonable attorneys' fees and + expenses. The application of the United Nations Convention on + Contracts for the International Sale of Goods is expressly excluded. + Any law or regulation which provides that the language of a contract + shall be construed against the drafter shall not apply to this + License. + +12. RESPONSIBILITY FOR CLAIMS. + + As between Initial Developer and the Contributors, each party is + responsible for claims and damages arising, directly or indirectly, + out of its utilization of rights under this License and You agree to + work with Initial Developer and Contributors to distribute such + responsibility on an equitable basis. Nothing herein is intended or + shall be deemed to constitute any admission of liability. + +13. MULTIPLE-LICENSED CODE. + + Initial Developer may designate portions of the Covered Code as + "Multiple-Licensed". "Multiple-Licensed" means that the Initial + Developer permits you to utilize portions of the Covered Code under + Your choice of the NPL or the alternative licenses, if any, specified + by the Initial Developer in the file described in Exhibit A. + +EXHIBIT A -Mozilla Public License. + + ``The contents of this file are subject to the Mozilla Public License + Version 1.1 (the "License"); you may not use this file except in + compliance with the License. You may obtain a copy of the License at + http://www.mozilla.org/MPL/ + + Software distributed under the License is distributed on an "AS IS" + basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See the + License for the specific language governing rights and limitations + under the License. + + The Original Code is ______________________________________. + + The Initial Developer of the Original Code is ________________________. + Portions created by ______________________ are Copyright (C) ______ + _______________________. All Rights Reserved. + + Contributor(s): ______________________________________. + + Alternatively, the contents of this file may be used under the terms + of the _____ license (the "[___] License"), in which case the + provisions of [______] License are applicable instead of those + above. If you wish to allow use of your version of this file only + under the terms of the [____] License and not to allow others to use + your version of this file under the MPL, indicate your decision by + deleting the provisions above and replace them with the notice and + other provisions required by the [___] License. If you do not delete + the provisions above, a recipient may use your version of this file + under either the MPL or the [___] License." + + [NOTE: The text of this Exhibit A may differ slightly from the text of + the notices in the Source Code files of the Original Code. You should + use the text of this Exhibit A rather than the text found in the + Original Code Source Code for Your Modifications.] 
+ + ---------------------------------------------------------------------- + + AMENDMENTS + + The Netscape Public License Version 1.1 ("NPL") consists of the + Mozilla Public License Version 1.1 with the following Amendments, + including Exhibit A-Netscape Public License. Files identified with + "Exhibit A-Netscape Public License" are governed by the Netscape + Public License Version 1.1. + + Additional Terms applicable to the Netscape Public License. + I. Effect. + These additional terms described in this Netscape Public + License -- Amendments shall apply to the Mozilla Communicator + client code and to all Covered Code under this License. + + II. "Netscape's Branded Code" means Covered Code that Netscape + distributes and/or permits others to distribute under one or more + trademark(s) which are controlled by Netscape but which are not + licensed for use under this License. + + III. Netscape and logo. + This License does not grant any rights to use the trademarks + "Netscape", the "Netscape N and horizon" logo or the "Netscape + lighthouse" logo, "Netcenter", "Gecko", "Java" or "JavaScript", + "Smart Browsing" even if such marks are included in the Original + Code or Modifications. + + IV. Inability to Comply Due to Contractual Obligation. + Prior to licensing the Original Code under this License, Netscape + has licensed third party code for use in Netscape's Branded Code. + To the extent that Netscape is limited contractually from making + such third party code available under this License, Netscape may + choose to reintegrate such code into Covered Code without being + required to distribute such code in Source Code form, even if + such code would otherwise be considered "Modifications" under + this License. + + V. Use of Modifications and Covered Code by Initial Developer. + V.1. In General. + The obligations of Section 3 apply to Netscape, except to + the extent specified in this Amendment, Section V.2 and V.3. + + V.2. Other Products. + Netscape may include Covered Code in products other than the + Netscape's Branded Code which are released by Netscape + during the two (2) years following the release date of the + Original Code, without such additional products becoming + subject to the terms of this License, and may license such + additional products on different terms from those contained + in this License. + + V.3. Alternative Licensing. + Netscape may license the Source Code of Netscape's Branded + Code, including Modifications incorporated therein, without + such Netscape Branded Code becoming subject to the terms of + this License, and may license such Netscape Branded Code on + different terms from those contained in this License. + + VI. Litigation. + Notwithstanding the limitations of Section 11 above, the + provisions regarding litigation in Section 11(a), (b) and (c) of + the License shall apply to all disputes relating to this License. + + EXHIBIT A-Netscape Public License. + + "The contents of this file are subject to the Netscape Public + License Version 1.1 (the "License"); you may not use this file + except in compliance with the License. You may obtain a copy of + the License at http://www.mozilla.org/NPL/ + + Software distributed under the License is distributed on an "AS + IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or + implied. See the License for the specific language governing + rights and limitations under the License. + + The Original Code is Mozilla Communicator client code, released + March 31, 1998. 
+ + The Initial Developer of the Original Code is Netscape + Communications Corporation. Portions created by Netscape are + Copyright (C) 1998-1999 Netscape Communications Corporation. All + Rights Reserved. + + Contributor(s): ______________________________________. + + Alternatively, the contents of this file may be used under the + terms of the _____ license (the "[___] License"), in which case + the provisions of [______] License are applicable instead of + those above. If you wish to allow use of your version of this + file only under the terms of the [____] License and not to allow + others to use your version of this file under the NPL, indicate + your decision by deleting the provisions above and replace them + with the notice and other provisions required by the [___] + License. If you do not delete the provisions above, a recipient + may use your version of this file under either the NPL or the + [___] License." diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.ZLIB b/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.ZLIB new file mode 100644 index 00000000000..b21b5724b39 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/COPYING.ZLIB @@ -0,0 +1,30 @@ + + THE ZLIB/LIBPNG LICENSE (copy from opensource.org) + + Copyright (c) <year> <copyright holders> + + This software is provided 'as-is', without any express + or implied warranty. In no event will the authors be + held liable for any damages arising from the use of this + software. + + Permission is granted to anyone to use this software for + any purpose, including commercial applications, and to + alter it and redistribute it freely, subject to the + following restrictions: + + 1. The origin of this software must not be + misrepresented; you must not claim that you + wrote the original software. If you use this + software in a product, an acknowledgment in + the product documentation would be + appreciated but is not required. + + 2. Altered source versions must be plainly + marked as such, and must not be + misrepresented as being the original + software. + + 3. This notice may not be removed or altered + from any source distribution. + diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/Makefile.am b/Build/source/libs/zziplib/zziplib-0.13.60/docs/Makefile.am new file mode 100644 index 00000000000..ec0dd845747 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/Makefile.am @@ -0,0 +1,281 @@ +AUTOMAKE_OPTIONS = 1.4 foreign +AUTOTOOL_VERSION=autoconf-2.52 automake-1.5 libtool-1.4.2 + +PYRUN = PYTHONDONTWRITEBYTECODE=1 $(PYDEFS) @PYTHON@ $(PYFLAGS) +PLRUN = PERL_DL_NONLAZY=1 $(PLDEFS) @PERL@ $(PLFLAGS) +DELETE = echo deleting... 
+ +doc_FILES = README.MSVC6 README.SDL COPYING.MPL COPYING.LIB COPYING.ZLIB \ + zziplib.html zzipmmapped.html zzipfseeko.html +htm_FILES = zzip-index.htm zzip-zip.htm zzip-file.htm zzip-sdl-rwops.htm \ + zzip-extio.htm zzip-xor.htm zzip-crypt.htm zzip-cryptoid.htm \ + zzip-api.htm zzip-basics.htm zzip-extras.htm zzip-parse.htm \ + 64on32.htm future.htm fseeko.htm mmapped.htm memdisk.htm \ + configs.htm sfx-make.htm developer.htm download.htm \ + history.htm referentials.htm faq.htm copying.htm notes.htm \ + functions.htm zip-php.htm +htms_FILES = changes.htm +SDL = @top_srcdir@/SDL +SDL_RWOPS = $(SDL)/SDL_rwops_zzcat.c \ + $(SDL)/SDL_rwops_zzip.c $(SDL)/SDL_rwops_zzip.h +changelog = @top_srcdir@/ChangeLog + +EXTRA_DIST = make-doc.py $(doc_FILES) $(htm_FILES) $(SDL_RWOPS) \ + make-doc.pl make-dbk.pl mksite.sh mksite.pl body.htm \ + $(zzipdoc_FILES) sdocbook.css \ + zziplib-manpages.dbk zziplib-master.dbk \ + zziplib-manpages.tar +CLEANFILES = *.pc *.omf +DISTCLEANFILES = zziplib.spec manpages.tar htmpages.tar *.html *.xml + +zzipdoc_FILES = makedocs.py zzipdoc/__init__.py \ + zzipdoc/commentmarkup.py zzipdoc/match.py \ + zzipdoc/dbk2htm.py zzipdoc/htm2dbk.py \ + zzipdoc/functionheader.py zzipdoc/options.py \ + zzipdoc/functionlisthtmlpage.py zzipdoc/textfileheader.py \ + zzipdoc/functionlistreference.py zzipdoc/textfile.py \ + zzipdoc/functionprototype.py zzipdoc/htmldocument.py \ + zzipdoc/docbookdocument.py + +html_FILES = $(htm_FILES:.htm=.html) $(htms_FILES:.htm=.html) \ + $(htm_FILES:.htm=.print.html) $(htms_FILES:.htm=.print.html) \ + site.html site.print.html + +all : all-am default +default : doc @MAINTAINER_MODE_FALSE@ mans +clean-doc clean-docs : clean-unpack + - rm $(DISTCLEANFILES) + - rm $(MAINTAINERCLEANFILES) +install-data-local : @MAINTAINER_MODE_FALSE@ install-mans + +# ------------------------------------------------------------------- +zziplib.spec : @top_srcdir@/$(PACKAGE).spec + @ cp $? 
$@ # the two zzip-doc.* will grep thru zziplib.spec +doc : $(doc_FILES) site.html +docs : doc manpages.tar htmpages.tar +# docu : docs +docu : + - rm zziplib2.html zzipmmapped.html zzipfseeko.html + $(MAKE) manpages.tar htmpages.tar DELETE=exit + +zziplib.html: zziplib.xml +zziplib.xml: zziplib.spec $(srcdir)/Makefile.am \ + $(srcdir)/zzipdoc/*.py \ + $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c + $(PYRUN) $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c $(zziplib) \ + "--package=$(PACKAGE)" "--version=$(VERSION)" \ + "--onlymainheader=zzip/lib.h" "--output=zziplib" + test -s zziplib.docbook && mv zziplib.docbook zziplib.xml +zzipmmapped.html: zzipmmapped.xml +zzipmmapped.xml: zziplib.spec $(srcdir)/Makefile.am \ + $(srcdir)/zzipdoc/*.py \ + $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c + $(PYRUN) $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c $(zziplib) \ + "--package=$(PACKAGE)" "--version=$(VERSION)" \ + "--onlymainheader=zzip/mmapped.h" "--output=zzipmmapped" + test -s zzipmmapped.docbook && mv zzipmmapped.docbook zzipmmapped.xml +zzipfseeko.html: zzipfseeko.xml +zzipfseeko.xml: zziplib.spec $(srcdir)/Makefile.am \ + $(srcdir)/zzipdoc/*.py \ + $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c + $(PYRUN) $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c $(zziplib) \ + "--package=$(PACKAGE)" "--version=$(VERSION)" \ + "--onlymainheader=zzip/fseeko.h" "--output=zzipfseeko" + test -s zzipfseeko.docbook && mv zzipfseeko.docbook zzipfseeko.xml + +omfdir=${datadir}/omf +pkgomfdir=${omfdir}/${PACKAGE} +pkgdocdir=${mandir}/../doc/${PACKAGE} +bins = @top_srcdir@/bins +DOCEXAMPLES = $(bins)/zzdir.c $(bins)/zzcat.c \ + $(bins)/zzobfuscated.c $(bins)/zziptest.c \ + $(bins)/zzxordir.c $(bins)/zzxorcat.c \ + $(bins)/zzxorcopy.c $(SDL_RWOPS) + +install-docs: $(doc_FILES) $(man_FILES) site.html htmpages.tar + $(mkinstalldirs) $(DESTDIR)$(pkgdocdir) + $(INSTALL_DATA) $(html_FILES) $(DESTDIR)$(pkgdocdir) + for i in $(doc_FILES) $(DOCEXAMPLES) $(changelog) $(srcdir)/README.* \ + ; do $(INSTALL_DATA) `test -f $$i || echo $(srcdir)/`$$i \ + $(DESTDIR)$(pkgdocdir) ; done + cd $(DESTDIR)$(pkgdocdir) && ln -sf zzip-index.html index.html + $(mkinstalldirs) $(DESTDIR)$(pkgdocdir)/man + @ echo $(PAX_TAR_EXTRACT) htmpages.tar '>>>' $(DESTDIR)$(pkgdocdir)/man/ \ + ; test -f htmpages.tar || cd "$srcdir" \ + ; P=`pwd` ; test -s htmpages.tar || exit 1 \ + ; cd $(DESTDIR)$(pkgdocdir)/man && $(PAX_TAR_EXTRACT) $$P/htmpages.tar \ + ; true + +install-doc : install-docs $(PACKAGE)-doc.omf + $(mkinstalldirs) $(DESTDIR)$(pkgomfdir) + $(INSTALL_DATA) $(PACKAGE)-doc.omf $(DESTDIR)$(pkgomfdir)/ + - test ".$(DESTDIR)" != "." || scrollkeeper-update + +SOURCEFORGE_GROUP=zziplib +SOURCEFORGE_HOST=web.sourceforge.net +SOURCEFORGE_PATH=/home/groups/z/zz/zziplib/htdocs +www: upload-sourceforge +upload-sourceforge: + $(MAKE) install-docs DESTDIR=/tmp/zziplib-htdocs-$$USER/ + echo scp ... 
$(SOURCEFORGE_HOST):$(SOURCEFORGE_PATH)/ ; sleep 4 + scp -r /tmp/zziplib-htdocs-$$USER/$(pkgdocdir)/* \ + $$USER,$(SOURCEFORGE_GROUP)@$(SOURCEFORGE_HOST):$(SOURCEFORGE_PATH) + rm -r /tmp/zziplib-htdocs-$$USER/ + +# ------------------------------------------------------------ package manpages +mans : manpages +install-mans : install-man3 + +man3 man manpages : manpages.tar +html htm htmpages : htmpages.tar + +zziplib-manpages.tar : manpages.tar + test -s "$@" || test -s "$(srcdir)/$@" +manpages.tar : zziplib.xml zzipmmapped.xml zzipfseeko.xml + : "unix man format of the manpages - goes to ../share/man/man3" + @ if test "$(XMLTO)" != ":" \ + ; then echo going to regenerate "$@" in subdir "'"man"'" \ + ; echo 'test ! -d man3 || rm man3/* ; test -d man3 || mkdir man3' \ + ; test ! -d man3 || rm man3/* ; test -d man3 || mkdir man3 \ + ; echo '$(XMLTO) -o man3 man zziplib.xml' \ + ; $(XMLTO) -o man3 man zziplib.xml \ + ; echo '$(XMLTO) -o man3 man zzipmmapped.xml' \ + ; $(XMLTO) -o man3 man zzipmmapped.xml \ + ; echo '$(XMLTO) -o man3 man zzipfseeko.xml' \ + ; $(XMLTO) -o man3 man zzipfseeko.xml \ + ; if test -d man3/man3; then mv man3 man3_ \ + ; mv man3_/man3 man3; rm -r man3_; fi \ + ; echo 'chmod 664 man3/*.3' \ + ; chmod 664 man3/*.3 \ + ; echo '$(PAX_TAR_CREATE) "$@" man3/' \ + ; $(PAX_TAR_CREATE) "$@" man3/ \ + ; echo '$(DELETE); rm man3/*.3 ; rmdir man3' \ + ; $(DELETE); rm man3/*.3 ; rmdir man3 \ + ; fi ; true + @ if test -s $@ \ + ; then echo cp $@ zziplib-$@ "(saved)"; cp $@ zziplib-$@ \ + ; else echo cp $(srcdir)/zziplib-$@ $@; cp $(srcdir)/zziplib-$@ $@ \ + ; fi + +zziplib-htmpages.tar : htmpages.tar + test -s "$@" || test -s "$(srcdir)/$@" +htmpages.tar : zziplib.xml zzipmmapped.xml zzipfseeko.xml zziplib-manpages.dbk + : "html format of the manpages - put into zziplib/htdocs/man/*" + @ if test "$(XMLTO)" != ":" \ + ; then echo going to regenerate "$@" in subdir "'"html"'" \ + ; echo 'test ! -d html || rm /* ; test -d html || mkdir html' \ + ; test ! 
-d html || rm html/* ; test -d html || mkdir html \ + ; echo 'cp $(srcdir)/zziplib-manpages.dbk zziplib-manpages.xml' \ + ; cp $(srcdir)/zziplib-manpages.dbk zziplib-manpages.xml \ + ; echo '$(XMLTO) -o html html zziplib-manpages.xml | tee written.lst' \ + ; $(XMLTO) -o html html zziplib-manpages.xml | tee written.lst \ + ; echo '$(PAX_TAR_CREATE) $@ html/*.*' \ + ; $(PAX_TAR_CREATE) $@ html/*.* \ + ; echo '$(DELETE); rm html/*.* ; rmdir html' \ + ; $(DELETE); rm html/*.* ; rmdir html \ + ; fi ; true + @ if test -s $@ \ + ; then echo cp $@ zziplib-$@ "(saved)"; cp $@ zziplib-$@ \ + ; else echo cp $(srcdir)/zziplib-$@ $@; cp $(srcdir)/zziplib-$@ $@ \ + ; fi + +install-man3 : manpages.tar + $(mkinstalldirs) $(DESTDIR)$(mandir)/man3 + P=`pwd` ; test -s manpages.tar || exit 1 \ + ; cd $(DESTDIR)$(mandir) && $(PAX_TAR_EXTRACT) $$P/manpages.tar \ + ; true + +unpack : manpages.tar htmpages.tar + test -s manpages.tar && test -s htmpages.tar + (rm -rf _htm && mkdir _htm && cd _htm && $(PAX_TAR_EXTRACT) ../htmpages.tar) + (rm -rf _man && mkdir _man && cd _man && $(PAX_TAR_EXTRACT) ../manpages.tar) +clean-unpack : + rm -rf _htm + rm -rf _man + +# --------------------------------------------------------------- OMF handling +spec_file=$(top_srcdir)/$(PACKAGE).spec +DOCSERIES= 775fb73e-1874-11d7-93e9-e18cd7ea3c2e +FROMSPEC= $(spec_file) | head -1 | sed -e 's,<,\<,g' -e 's,>,\>,g' +DATESPEC= `date +%Y-%m-%d` + +$(PACKAGE)-doc.omf : $(spec_file) Makefile + : "OMF for the html documentation - a copy of zziplib.sf.net" + echo '<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>' >$@ + echo '<omf><resource><creator> Guido Draheim </creator>' >>$@ + grep Packager $(FROMSPEC) | sed -e 's,Packager *: *, <maintainer>,' \ + -e '/<maintainer>/s,$$,</maintainer>,' >>$@ + grep Summary $(FROMSPEC) | sed -e 's,Summary *: *, <title>,' \ + -e '/<title>/s,$$,</title>,' >>$@ + echo ' <date>'$(DATESPEC)'</date>' >>$@ + echo ' <version identifier="$(VERSION)" date="'$(DATESPEC)'"/>' >>$@ + grep Group $(FROMSPEC) | sed -e 's,Group *: *, <subject category=",' \ + -e 's,/,|,g' -e '/<subject/s,$$," />,' >>$@ + echo ' <format mime="text/html"/>' >>$@ + pkgdocdir=`echo "$(pkgdocdir)" | sed -e "s|/[a-z][a-z]*/[.][.]/|/|"` \ + echo ' <identifier url="file:'"$$pkgdocdir"'/zzip-index.html"/>' >>$@ + echo ' <language code="C"/>' >>$@ + echo ' <relation seriesid="$(DOCSERIES)"/>' >>$@ + echo ' <rights type="GNU LGPL" holder="Guido Draheim"' >>$@ + pkgdocdir=`echo "$(pkgdocdir)" | sed -e "s|/[a-z][a-z]*/[.][.]/|/|"` \ + echo ' license="'"$$pkgdocdir"')/COPYING.LIB"/>' >>$@ + echo '</resource></omf>' >>$@ + +DOCBOOKDTD= -//OASIS/DTD Docbook V4.1.2//EN +MANSERIES= a302c642-1888-11d7-86f6-ba4b52ef847d +$(PACKAGE)-man.omf : $(PACKAGE)-doc.omf $(PACKAGE).xml + : "OMF for functions reference - the docbook master of the manpages" + sed -e 's,"text/html","text/xml" dtd="$(DOCBOOKDTD)",' \ + -e 's,</title>, (Function Reference)</title>,' \ + -e 's,/index.html,/xml/manpages.xml,' \ + -e 's,$(DOCSERIES),$(MANSERIES),' $(PACKAGE)-doc.omf > $@ + test -s $@ || rm $@ + +omf : $(PACKAGE)-doc.omf $(PACKAGE)-man.omf +install-omf : omf zziplib.xml zzipmmapped.xml zzipfseeko.xml + : "not installed by default anymore - 'make install-doc' has one OMF" + $(mkinstalldirs) $(DESTDIR)$(pkgomfdir) + $(INSTALL_DATA) $(PACKAGE)-doc.omf $(DESTDIR)$(pkgomfdir)/ + $(mkinstalldirs) $(DESTDIR)$(pkgdocdir) + $(INSTALL_DATA) $(srcdir)/zziplib-manpages.xml \ + $(DESTDIR)$(pkgdocdir)/xml/manpages.xml + $(INSTALL_DATA) zziplib.xml zzipmmapped.xml zzipfseeko.xml \ + 
$(DESTDIR)$(pkgdocdir)/xml/ + $(INSTALL_DATA) $(PACKAGE)-man.omf $(DESTDIR)$(pkgomfdir)/ + - test ".$(DESTDIR)" != "." || scrollkeeper-update -v + +# ----------------------------------------------- mksite.sh for the main html +site.htm : body.htm + cp $(srcdir)/body.htm site.htm +mksite_sh_args = --VERSION=$(VERSION) --xml --src-dir=$(srcdir) --print +site.html : body.htm site.htm mksite.sh $(htm_FILES) $(htms_FILES) + cp $(srcdir)/body.htm site.htm + perl $(srcdir)/mksite.pl $(mksite_sh_args) site.htm || \ + $(SHELL) $(srcdir)/mksite.sh $(mksite_sh_args) site.htm + +changes.htm : $(top_srcdir)/ChangeLog Makefile + echo "<pre>" > $@ ; cat $(top_srcdir)/ChangeLog \ + | sed -e "s,\\&,\\&\\;,g" \ + -e "s,<,\\<\\;,g" -e "s,>,\\>\\;,g" \ + -e "/^[A-Z].*[12][09][09][09]/s,\\(.*\\),<b>&</b>," \ + -e "/^[0-9]/s,\\(.*\\),<b>&</b>," >> $@ ; echo "</pre>" >>$@ + +# ----------------------------------------------- create pdf via docbook xml +# sorry, the xmlto / docbook-xsl are too broken to rebuild the PDF anymore + +zzip.xml : $(htm_FILES) zziplib.xml make-dbk.pl + : '@PERL@ make-dbk.pl $(htm_FILES) zziplib.xml >$@' + @PYTHON@ $(srcdir)/zzipdoc/htm2dbk.py $(htm_FILES) zziplib.xml >$@ + test -s "$@" || rm "$@" + +zzip.html : zzip.xml + xmlto html-nochunks zzip.xml +zzip.pdf : zzip.xml + xmlto pdf zzip.xml + +zziplib.pdf : $(htm_FILES) $(srcdir)/zziplib-master.dbk mksite.pl + cp $(srcdir)/zziplib-master.dbk zziplib.docbook + xmlto pdf zziplib.docbook ; rm zziplib.docbook + test -s zziplib.pdf + +pdfs : zziplib.pdf diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/Makefile.in b/Build/source/libs/zziplib/zziplib-0.13.60/docs/Makefile.in new file mode 100644 index 00000000000..03ab81a5738 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/Makefile.in @@ -0,0 +1,694 @@ +# Makefile.in generated by automake 1.11.1 from Makefile.am. +# @configure_input@ + +# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, +# 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, +# Inc. +# This Makefile.in is free software; the Free Software Foundation +# gives unlimited permission to copy and/or distribute it, +# with or without modifications, as long as this notice is preserved. + +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY, to the extent permitted by law; without +# even the implied warranty of MERCHANTABILITY or FITNESS FOR A +# PARTICULAR PURPOSE. 
+ +@SET_MAKE@ +VPATH = @srcdir@ +pkgdatadir = $(datadir)/@PACKAGE@ +pkgincludedir = $(includedir)/@PACKAGE@ +pkglibdir = $(libdir)/@PACKAGE@ +pkglibexecdir = $(libexecdir)/@PACKAGE@ +am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd +install_sh_DATA = $(install_sh) -c -m 644 +install_sh_PROGRAM = $(install_sh) -c +install_sh_SCRIPT = $(install_sh) -c +INSTALL_HEADER = $(INSTALL_DATA) +transform = $(program_transform_name) +NORMAL_INSTALL = : +PRE_INSTALL = : +POST_INSTALL = : +NORMAL_UNINSTALL = : +PRE_UNINSTALL = : +POST_UNINSTALL = : +build_triplet = @build@ +host_triplet = @host@ +target_triplet = @target@ +subdir = docs +DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in COPYING.LIB +ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 +am__aclocal_m4_deps = $(top_srcdir)/m4/ac_compile_check_sizeof.m4 \ + $(top_srcdir)/m4/ac_set_default_paths_system.m4 \ + $(top_srcdir)/m4/ac_sys_largefile_sensitive.m4 \ + $(top_srcdir)/m4/acx_restrict.m4 \ + $(top_srcdir)/m4/ax_cflags_gcc_option.m4 \ + $(top_srcdir)/m4/ax_cflags_no_writable_strings.m4 \ + $(top_srcdir)/m4/ax_cflags_strict_prototypes.m4 \ + $(top_srcdir)/m4/ax_cflags_warn_all.m4 \ + $(top_srcdir)/m4/ax_check_aligned_access_required.m4 \ + $(top_srcdir)/m4/ax_configure_args.m4 \ + $(top_srcdir)/m4/ax_create_pkgconfig_info.m4 \ + $(top_srcdir)/m4/ax_enable_builddir_uname.m4 \ + $(top_srcdir)/m4/ax_expand_prefix.m4 \ + $(top_srcdir)/m4/ax_maintainer_mode_auto_silent.m4 \ + $(top_srcdir)/m4/ax_not_enable_frame_pointer.m4 \ + $(top_srcdir)/m4/ax_pax_tar.m4 \ + $(top_srcdir)/m4/ax_prefix_config_h.m4 \ + $(top_srcdir)/m4/ax_set_version_info.m4 \ + $(top_srcdir)/m4/ax_spec_file.m4 \ + $(top_srcdir)/m4/ax_spec_package_version.m4 \ + $(top_srcdir)/m4/ax_warning_default_aclocaldir.m4 \ + $(top_srcdir)/m4/ax_warning_default_pkgconfig.m4 \ + $(top_srcdir)/m4/libtool.m4 $(top_srcdir)/m4/ltoptions.m4 \ + $(top_srcdir)/m4/ltsugar.m4 $(top_srcdir)/m4/ltversion.m4 \ + $(top_srcdir)/m4/lt~obsolete.m4 \ + $(top_srcdir)/m4/patch_libtool_on_darwin_zsh_overquoting.m4 \ + $(top_srcdir)/m4/patch_libtool_sys_lib_search_path_spec.m4 \ + $(top_srcdir)/m4/patch_libtool_to_add_host_cc.m4 \ + $(top_srcdir)/configure.ac +am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ + $(ACLOCAL_M4) +mkinstalldirs = $(SHELL) $(top_srcdir)/uses/mkinstalldirs +CONFIG_HEADER = $(top_builddir)/config.h +CONFIG_CLEAN_FILES = +CONFIG_CLEAN_VPATH_FILES = +SOURCES = +DIST_SOURCES = +DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) +ACLOCAL = @ACLOCAL@ +AMTAR = @AMTAR@ +AR = @AR@ +AS = @AS@ +AUTOCONF = @AUTOCONF@ +AUTOHEADER = @AUTOHEADER@ +AUTOMAKE = @AUTOMAKE@ +AWK = @AWK@ +CC = @CC@ +CCDEPMODE = @CCDEPMODE@ +CFLAGS = @CFLAGS@ +CONFIG_FILES = @CONFIG_FILES@ +CPP = @CPP@ +CPPFLAGS = @CPPFLAGS@ +CYGPATH_W = @CYGPATH_W@ +DEFS = @DEFS@ +DEPDIR = @DEPDIR@ +DLLTOOL = @DLLTOOL@ +DSYMUTIL = @DSYMUTIL@ +DUMPBIN = @DUMPBIN@ +ECHO_C = @ECHO_C@ +ECHO_N = @ECHO_N@ +ECHO_T = @ECHO_T@ +EGREP = @EGREP@ +EXEEXT = @EXEEXT@ +FGREP = @FGREP@ +GNUTAR = @GNUTAR@ +GREP = @GREP@ +GTAR = @GTAR@ +INSTALL = @INSTALL@ +INSTALL_DATA = @INSTALL_DATA@ +INSTALL_PROGRAM = @INSTALL_PROGRAM@ +INSTALL_SCRIPT = @INSTALL_SCRIPT@ +INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ +LARGEFILE_CFLAGS = @LARGEFILE_CFLAGS@ +LD = @LD@ +LDFLAGS = @LDFLAGS@ +LIBOBJS = @LIBOBJS@ +LIBS = @LIBS@ +LIBTOOL = @LIBTOOL@ +LIPO = @LIPO@ +LN_S = @LN_S@ +LTLIBOBJS = @LTLIBOBJS@ +MAINT = @MAINT@ +MAKEINFO = @MAKEINFO@ +MKDIR_P = @MKDIR_P@ +MKZIP = @MKZIP@ +NM = @NM@ +NMEDIT = @NMEDIT@ +OBJDUMP = @OBJDUMP@ 
+OBJEXT = @OBJEXT@ +OTOOL = @OTOOL@ +OTOOL64 = @OTOOL64@ +PACKAGE = @PACKAGE@ +PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ +PACKAGE_NAME = @PACKAGE_NAME@ +PACKAGE_STRING = @PACKAGE_STRING@ +PACKAGE_TARNAME = @PACKAGE_TARNAME@ +PACKAGE_URL = @PACKAGE_URL@ +PACKAGE_VERSION = @PACKAGE_VERSION@ +PATH_SEPARATOR = @PATH_SEPARATOR@ +PAX = @PAX@ +PAX_TAR_CREATE = @PAX_TAR_CREATE@ +PAX_TAR_EXTRACT = @PAX_TAR_EXTRACT@ +PERL = @PERL@ +PKG_CONFIG = @PKG_CONFIG@ +PYTHON = @PYTHON@ +RANLIB = @RANLIB@ +RELEASE_INFO = @RELEASE_INFO@ +RESOLVES = @RESOLVES@ +SDL = @top_srcdir@/SDL +SDL_GENERATE = @SDL_GENERATE@ +SED = @SED@ +SET_MAKE = @SET_MAKE@ +SHELL = @SHELL@ +STRIP = @STRIP@ +TAR = @TAR@ +THREAD_SAFE = @THREAD_SAFE@ +VERSION = @VERSION@ +VERSION_INFO = @VERSION_INFO@ +XMLTO = @XMLTO@ +ZIPTESTS = @ZIPTESTS@ +ZLIB_INCL = @ZLIB_INCL@ +ZLIB_LDIR = @ZLIB_LDIR@ +ZLIB_VERSION = @ZLIB_VERSION@ +ZZIPLIB_LDFLAGS = @ZZIPLIB_LDFLAGS@ +abs_builddir = @abs_builddir@ +abs_srcdir = @abs_srcdir@ +abs_top_builddir = @abs_top_builddir@ +abs_top_srcdir = @abs_top_srcdir@ +ac_ct_CC = @ac_ct_CC@ +ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ +aclocaldir = @aclocaldir@ +am__include = @am__include@ +am__leading_dot = @am__leading_dot@ +am__quote = @am__quote@ +am__tar = @am__tar@ +am__untar = @am__untar@ +ax_enable_builddir_sed = @ax_enable_builddir_sed@ +bindir = @bindir@ +build = @build@ +build_alias = @build_alias@ +build_cpu = @build_cpu@ +build_os = @build_os@ +build_vendor = @build_vendor@ +builddir = @builddir@ +datadir = @datadir@ +datarootdir = @datarootdir@ +docdir = @docdir@ +dvidir = @dvidir@ +exec_prefix = @exec_prefix@ +host = @host@ +host_alias = @host_alias@ +host_cpu = @host_cpu@ +host_os = @host_os@ +host_vendor = @host_vendor@ +htmldir = @htmldir@ +includedir = @includedir@ +infodir = @infodir@ +install_sh = @install_sh@ +libdir = @libdir@ +libexecdir = @libexecdir@ +localedir = @localedir@ +localstatedir = @localstatedir@ +lt_ECHO = @lt_ECHO@ +mandir = @mandir@ +mkdir_p = @mkdir_p@ +oldincludedir = @oldincludedir@ +pdfdir = @pdfdir@ +pkgconfig_libdir = @pkgconfig_libdir@ +pkgconfig_libfile = @pkgconfig_libfile@ +pkgconfigdir = @pkgconfigdir@ +prefix = @prefix@ +program_transform_name = @program_transform_name@ +psdir = @psdir@ +sbindir = @sbindir@ +sharedstatedir = @sharedstatedir@ +srcdir = @srcdir@ +sysconfdir = @sysconfdir@ +target = @target@ +target_alias = @target_alias@ +target_cpu = @target_cpu@ +target_os = @target_os@ +target_vendor = @target_vendor@ +top_build_prefix = @top_build_prefix@ +top_builddir = @top_builddir@ +top_srcdir = @top_srcdir@ +AUTOMAKE_OPTIONS = 1.4 foreign +AUTOTOOL_VERSION = autoconf-2.52 automake-1.5 libtool-1.4.2 +PYRUN = PYTHONDONTWRITEBYTECODE=1 $(PYDEFS) @PYTHON@ $(PYFLAGS) +PLRUN = PERL_DL_NONLAZY=1 $(PLDEFS) @PERL@ $(PLFLAGS) +DELETE = echo deleting... 
+doc_FILES = README.MSVC6 README.SDL COPYING.MPL COPYING.LIB COPYING.ZLIB \ + zziplib.html zzipmmapped.html zzipfseeko.html + +htm_FILES = zzip-index.htm zzip-zip.htm zzip-file.htm zzip-sdl-rwops.htm \ + zzip-extio.htm zzip-xor.htm zzip-crypt.htm zzip-cryptoid.htm \ + zzip-api.htm zzip-basics.htm zzip-extras.htm zzip-parse.htm \ + 64on32.htm future.htm fseeko.htm mmapped.htm memdisk.htm \ + configs.htm sfx-make.htm developer.htm download.htm \ + history.htm referentials.htm faq.htm copying.htm notes.htm \ + functions.htm zip-php.htm + +htms_FILES = changes.htm +SDL_RWOPS = $(SDL)/SDL_rwops_zzcat.c \ + $(SDL)/SDL_rwops_zzip.c $(SDL)/SDL_rwops_zzip.h + +changelog = @top_srcdir@/ChangeLog +EXTRA_DIST = make-doc.py $(doc_FILES) $(htm_FILES) $(SDL_RWOPS) \ + make-doc.pl make-dbk.pl mksite.sh mksite.pl body.htm \ + $(zzipdoc_FILES) sdocbook.css \ + zziplib-manpages.dbk zziplib-master.dbk \ + zziplib-manpages.tar + +CLEANFILES = *.pc *.omf +DISTCLEANFILES = zziplib.spec manpages.tar htmpages.tar *.html *.xml +zzipdoc_FILES = makedocs.py zzipdoc/__init__.py \ + zzipdoc/commentmarkup.py zzipdoc/match.py \ + zzipdoc/dbk2htm.py zzipdoc/htm2dbk.py \ + zzipdoc/functionheader.py zzipdoc/options.py \ + zzipdoc/functionlisthtmlpage.py zzipdoc/textfileheader.py \ + zzipdoc/functionlistreference.py zzipdoc/textfile.py \ + zzipdoc/functionprototype.py zzipdoc/htmldocument.py \ + zzipdoc/docbookdocument.py + +html_FILES = $(htm_FILES:.htm=.html) $(htms_FILES:.htm=.html) \ + $(htm_FILES:.htm=.print.html) $(htms_FILES:.htm=.print.html) \ + site.html site.print.html + +omfdir = ${datadir}/omf +pkgomfdir = ${omfdir}/${PACKAGE} +pkgdocdir = ${mandir}/../doc/${PACKAGE} +bins = @top_srcdir@/bins +DOCEXAMPLES = $(bins)/zzdir.c $(bins)/zzcat.c \ + $(bins)/zzobfuscated.c $(bins)/zziptest.c \ + $(bins)/zzxordir.c $(bins)/zzxorcat.c \ + $(bins)/zzxorcopy.c $(SDL_RWOPS) + +SOURCEFORGE_GROUP = zziplib +SOURCEFORGE_HOST = web.sourceforge.net +SOURCEFORGE_PATH = /home/groups/z/zz/zziplib/htdocs + +# --------------------------------------------------------------- OMF handling +spec_file = $(top_srcdir)/$(PACKAGE).spec +DOCSERIES = 775fb73e-1874-11d7-93e9-e18cd7ea3c2e +FROMSPEC = $(spec_file) | head -1 | sed -e 's,<,\<,g' -e 's,>,\>,g' +DATESPEC = `date +%Y-%m-%d` +DOCBOOKDTD = -//OASIS/DTD Docbook V4.1.2//EN +MANSERIES = a302c642-1888-11d7-86f6-ba4b52ef847d +mksite_sh_args = --VERSION=$(VERSION) --xml --src-dir=$(srcdir) --print +all: all-am + +.SUFFIXES: +$(srcdir)/Makefile.in: @MAINTAINER_MODE_TRUE@ $(srcdir)/Makefile.am $(am__configure_deps) + @for dep in $?; do \ + case '$(am__configure_deps)' in \ + *$$dep*) \ + ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \ + && { if test -f $@; then exit 0; else break; fi; }; \ + exit 1;; \ + esac; \ + done; \ + echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign docs/Makefile'; \ + $(am__cd) $(top_srcdir) && \ + $(AUTOMAKE) --foreign docs/Makefile +.PRECIOUS: Makefile +Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status + @case '$?' 
in \ + *config.status*) \ + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \ + *) \ + echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \ + cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \ + esac; + +$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh + +$(top_srcdir)/configure: @MAINTAINER_MODE_TRUE@ $(am__configure_deps) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh +$(ACLOCAL_M4): @MAINTAINER_MODE_TRUE@ $(am__aclocal_m4_deps) + cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh +$(am__aclocal_m4_deps): + +mostlyclean-libtool: + -rm -f *.lo + +clean-libtool: + -rm -rf .libs _libs +tags: TAGS +TAGS: + +ctags: CTAGS +CTAGS: + + +distdir: $(DISTFILES) + @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ + topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ + list='$(DISTFILES)'; \ + dist_files=`for file in $$list; do echo $$file; done | \ + sed -e "s|^$$srcdirstrip/||;t" \ + -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ + case $$dist_files in \ + */*) $(MKDIR_P) `echo "$$dist_files" | \ + sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ + sort -u` ;; \ + esac; \ + for file in $$dist_files; do \ + if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ + if test -d $$d/$$file; then \ + dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ + if test -d "$(distdir)/$$file"; then \ + find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ + fi; \ + if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ + cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ + find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ + fi; \ + cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ + else \ + test -f "$(distdir)/$$file" \ + || cp -p $$d/$$file "$(distdir)/$$file" \ + || exit 1; \ + fi; \ + done +check-am: all-am +check: check-am +all-am: Makefile +installdirs: +install: install-am +install-exec: install-exec-am +install-data: install-data-am +uninstall: uninstall-am + +install-am: all-am + @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am + +installcheck: installcheck-am +install-strip: + $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ + install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ + `test -z '$(STRIP)' || \ + echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install +mostlyclean-generic: + +clean-generic: + -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES) + +distclean-generic: + -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) + -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) + -test -z "$(DISTCLEANFILES)" || rm -f $(DISTCLEANFILES) + +maintainer-clean-generic: + @echo "This command is intended for maintainers to use" + @echo "it deletes files that may require special tools to rebuild." 
+clean: clean-am + +clean-am: clean-generic clean-libtool mostlyclean-am + +distclean: distclean-am + -rm -f Makefile +distclean-am: clean-am distclean-generic + +dvi: dvi-am + +dvi-am: + +html: html-am + +html-am: + +info: info-am + +info-am: + +install-data-am: install-data-local + +install-dvi: install-dvi-am + +install-dvi-am: + +install-exec-am: + +install-html: install-html-am + +install-html-am: + +install-info: install-info-am + +install-info-am: + +install-man: + +install-pdf: install-pdf-am + +install-pdf-am: + +install-ps: install-ps-am + +install-ps-am: + +installcheck-am: + +maintainer-clean: maintainer-clean-am + -rm -f Makefile +maintainer-clean-am: distclean-am maintainer-clean-generic + +mostlyclean: mostlyclean-am + +mostlyclean-am: mostlyclean-generic mostlyclean-libtool + +pdf: pdf-am + +pdf-am: + +ps: ps-am + +ps-am: + +uninstall-am: + +.MAKE: install-am install-strip + +.PHONY: all all-am check check-am clean clean-generic clean-libtool \ + distclean distclean-generic distclean-libtool distdir dvi \ + dvi-am html html-am info info-am install install-am \ + install-data install-data-am install-data-local install-dvi \ + install-dvi-am install-exec install-exec-am install-html \ + install-html-am install-info install-info-am install-man \ + install-pdf install-pdf-am install-ps install-ps-am \ + install-strip installcheck installcheck-am installdirs \ + maintainer-clean maintainer-clean-generic mostlyclean \ + mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ + uninstall uninstall-am + + +all : all-am default +default : doc @MAINTAINER_MODE_FALSE@ mans +clean-doc clean-docs : clean-unpack + - rm $(DISTCLEANFILES) + - rm $(MAINTAINERCLEANFILES) +install-data-local : @MAINTAINER_MODE_FALSE@ install-mans + +# ------------------------------------------------------------------- +zziplib.spec : @top_srcdir@/$(PACKAGE).spec + @ cp $? 
$@ # the two zzip-doc.* will grep thru zziplib.spec +doc : $(doc_FILES) site.html +docs : doc manpages.tar htmpages.tar +# docu : docs +docu : + - rm zziplib2.html zzipmmapped.html zzipfseeko.html + $(MAKE) manpages.tar htmpages.tar DELETE=exit + +zziplib.html: zziplib.xml +zziplib.xml: zziplib.spec $(srcdir)/Makefile.am \ + $(srcdir)/zzipdoc/*.py \ + $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c + $(PYRUN) $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c $(zziplib) \ + "--package=$(PACKAGE)" "--version=$(VERSION)" \ + "--onlymainheader=zzip/lib.h" "--output=zziplib" + test -s zziplib.docbook && mv zziplib.docbook zziplib.xml +zzipmmapped.html: zzipmmapped.xml +zzipmmapped.xml: zziplib.spec $(srcdir)/Makefile.am \ + $(srcdir)/zzipdoc/*.py \ + $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c + $(PYRUN) $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c $(zziplib) \ + "--package=$(PACKAGE)" "--version=$(VERSION)" \ + "--onlymainheader=zzip/mmapped.h" "--output=zzipmmapped" + test -s zzipmmapped.docbook && mv zzipmmapped.docbook zzipmmapped.xml +zzipfseeko.html: zzipfseeko.xml +zzipfseeko.xml: zziplib.spec $(srcdir)/Makefile.am \ + $(srcdir)/zzipdoc/*.py \ + $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c + $(PYRUN) $(srcdir)/makedocs.py @top_srcdir@/zzip/*.c $(zziplib) \ + "--package=$(PACKAGE)" "--version=$(VERSION)" \ + "--onlymainheader=zzip/fseeko.h" "--output=zzipfseeko" + test -s zzipfseeko.docbook && mv zzipfseeko.docbook zzipfseeko.xml + +install-docs: $(doc_FILES) $(man_FILES) site.html htmpages.tar + $(mkinstalldirs) $(DESTDIR)$(pkgdocdir) + $(INSTALL_DATA) $(html_FILES) $(DESTDIR)$(pkgdocdir) + for i in $(doc_FILES) $(DOCEXAMPLES) $(changelog) $(srcdir)/README.* \ + ; do $(INSTALL_DATA) `test -f $$i || echo $(srcdir)/`$$i \ + $(DESTDIR)$(pkgdocdir) ; done + cd $(DESTDIR)$(pkgdocdir) && ln -sf zzip-index.html index.html + $(mkinstalldirs) $(DESTDIR)$(pkgdocdir)/man + @ echo $(PAX_TAR_EXTRACT) htmpages.tar '>>>' $(DESTDIR)$(pkgdocdir)/man/ \ + ; test -f htmpages.tar || cd "$srcdir" \ + ; P=`pwd` ; test -s htmpages.tar || exit 1 \ + ; cd $(DESTDIR)$(pkgdocdir)/man && $(PAX_TAR_EXTRACT) $$P/htmpages.tar \ + ; true + +install-doc : install-docs $(PACKAGE)-doc.omf + $(mkinstalldirs) $(DESTDIR)$(pkgomfdir) + $(INSTALL_DATA) $(PACKAGE)-doc.omf $(DESTDIR)$(pkgomfdir)/ + - test ".$(DESTDIR)" != "." || scrollkeeper-update +www: upload-sourceforge +upload-sourceforge: + $(MAKE) install-docs DESTDIR=/tmp/zziplib-htdocs-$$USER/ + echo scp ... $(SOURCEFORGE_HOST):$(SOURCEFORGE_PATH)/ ; sleep 4 + scp -r /tmp/zziplib-htdocs-$$USER/$(pkgdocdir)/* \ + $$USER,$(SOURCEFORGE_GROUP)@$(SOURCEFORGE_HOST):$(SOURCEFORGE_PATH) + rm -r /tmp/zziplib-htdocs-$$USER/ + +# ------------------------------------------------------------ package manpages +mans : manpages +install-mans : install-man3 + +man3 man manpages : manpages.tar +html htm htmpages : htmpages.tar + +zziplib-manpages.tar : manpages.tar + test -s "$@" || test -s "$(srcdir)/$@" +manpages.tar : zziplib.xml zzipmmapped.xml zzipfseeko.xml + : "unix man format of the manpages - goes to ../share/man/man3" + @ if test "$(XMLTO)" != ":" \ + ; then echo going to regenerate "$@" in subdir "'"man"'" \ + ; echo 'test ! -d man3 || rm man3/* ; test -d man3 || mkdir man3' \ + ; test ! 
-d man3 || rm man3/* ; test -d man3 || mkdir man3 \ + ; echo '$(XMLTO) -o man3 man zziplib.xml' \ + ; $(XMLTO) -o man3 man zziplib.xml \ + ; echo '$(XMLTO) -o man3 man zzipmmapped.xml' \ + ; $(XMLTO) -o man3 man zzipmmapped.xml \ + ; echo '$(XMLTO) -o man3 man zzipfseeko.xml' \ + ; $(XMLTO) -o man3 man zzipfseeko.xml \ + ; if test -d man3/man3; then mv man3 man3_ \ + ; mv man3_/man3 man3; rm -r man3_; fi \ + ; echo 'chmod 664 man3/*.3' \ + ; chmod 664 man3/*.3 \ + ; echo '$(PAX_TAR_CREATE) "$@" man3/' \ + ; $(PAX_TAR_CREATE) "$@" man3/ \ + ; echo '$(DELETE); rm man3/*.3 ; rmdir man3' \ + ; $(DELETE); rm man3/*.3 ; rmdir man3 \ + ; fi ; true + @ if test -s $@ \ + ; then echo cp $@ zziplib-$@ "(saved)"; cp $@ zziplib-$@ \ + ; else echo cp $(srcdir)/zziplib-$@ $@; cp $(srcdir)/zziplib-$@ $@ \ + ; fi + +zziplib-htmpages.tar : htmpages.tar + test -s "$@" || test -s "$(srcdir)/$@" +htmpages.tar : zziplib.xml zzipmmapped.xml zzipfseeko.xml zziplib-manpages.dbk + : "html format of the manpages - put into zziplib/htdocs/man/*" + @ if test "$(XMLTO)" != ":" \ + ; then echo going to regenerate "$@" in subdir "'"html"'" \ + ; echo 'test ! -d html || rm /* ; test -d html || mkdir html' \ + ; test ! -d html || rm html/* ; test -d html || mkdir html \ + ; echo 'cp $(srcdir)/zziplib-manpages.dbk zziplib-manpages.xml' \ + ; cp $(srcdir)/zziplib-manpages.dbk zziplib-manpages.xml \ + ; echo '$(XMLTO) -o html html zziplib-manpages.xml | tee written.lst' \ + ; $(XMLTO) -o html html zziplib-manpages.xml | tee written.lst \ + ; echo '$(PAX_TAR_CREATE) $@ html/*.*' \ + ; $(PAX_TAR_CREATE) $@ html/*.* \ + ; echo '$(DELETE); rm html/*.* ; rmdir html' \ + ; $(DELETE); rm html/*.* ; rmdir html \ + ; fi ; true + @ if test -s $@ \ + ; then echo cp $@ zziplib-$@ "(saved)"; cp $@ zziplib-$@ \ + ; else echo cp $(srcdir)/zziplib-$@ $@; cp $(srcdir)/zziplib-$@ $@ \ + ; fi + +install-man3 : manpages.tar + $(mkinstalldirs) $(DESTDIR)$(mandir)/man3 + P=`pwd` ; test -s manpages.tar || exit 1 \ + ; cd $(DESTDIR)$(mandir) && $(PAX_TAR_EXTRACT) $$P/manpages.tar \ + ; true + +unpack : manpages.tar htmpages.tar + test -s manpages.tar && test -s htmpages.tar + (rm -rf _htm && mkdir _htm && cd _htm && $(PAX_TAR_EXTRACT) ../htmpages.tar) + (rm -rf _man && mkdir _man && cd _man && $(PAX_TAR_EXTRACT) ../manpages.tar) +clean-unpack : + rm -rf _htm + rm -rf _man + +$(PACKAGE)-doc.omf : $(spec_file) Makefile + : "OMF for the html documentation - a copy of zziplib.sf.net" + echo '<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>' >$@ + echo '<omf><resource><creator> Guido Draheim </creator>' >>$@ + grep Packager $(FROMSPEC) | sed -e 's,Packager *: *, <maintainer>,' \ + -e '/<maintainer>/s,$$,</maintainer>,' >>$@ + grep Summary $(FROMSPEC) | sed -e 's,Summary *: *, <title>,' \ + -e '/<title>/s,$$,</title>,' >>$@ + echo ' <date>'$(DATESPEC)'</date>' >>$@ + echo ' <version identifier="$(VERSION)" date="'$(DATESPEC)'"/>' >>$@ + grep Group $(FROMSPEC) | sed -e 's,Group *: *, <subject category=",' \ + -e 's,/,|,g' -e '/<subject/s,$$," />,' >>$@ + echo ' <format mime="text/html"/>' >>$@ + pkgdocdir=`echo "$(pkgdocdir)" | sed -e "s|/[a-z][a-z]*/[.][.]/|/|"` \ + echo ' <identifier url="file:'"$$pkgdocdir"'/zzip-index.html"/>' >>$@ + echo ' <language code="C"/>' >>$@ + echo ' <relation seriesid="$(DOCSERIES)"/>' >>$@ + echo ' <rights type="GNU LGPL" holder="Guido Draheim"' >>$@ + pkgdocdir=`echo "$(pkgdocdir)" | sed -e "s|/[a-z][a-z]*/[.][.]/|/|"` \ + echo ' license="'"$$pkgdocdir"')/COPYING.LIB"/>' >>$@ + echo '</resource></omf>' >>$@ 
+$(PACKAGE)-man.omf : $(PACKAGE)-doc.omf $(PACKAGE).xml + : "OMF for functions reference - the docbook master of the manpages" + sed -e 's,"text/html","text/xml" dtd="$(DOCBOOKDTD)",' \ + -e 's,</title>, (Function Reference)</title>,' \ + -e 's,/index.html,/xml/manpages.xml,' \ + -e 's,$(DOCSERIES),$(MANSERIES),' $(PACKAGE)-doc.omf > $@ + test -s $@ || rm $@ + +omf : $(PACKAGE)-doc.omf $(PACKAGE)-man.omf +install-omf : omf zziplib.xml zzipmmapped.xml zzipfseeko.xml + : "not installed by default anymore - 'make install-doc' has one OMF" + $(mkinstalldirs) $(DESTDIR)$(pkgomfdir) + $(INSTALL_DATA) $(PACKAGE)-doc.omf $(DESTDIR)$(pkgomfdir)/ + $(mkinstalldirs) $(DESTDIR)$(pkgdocdir) + $(INSTALL_DATA) $(srcdir)/zziplib-manpages.xml \ + $(DESTDIR)$(pkgdocdir)/xml/manpages.xml + $(INSTALL_DATA) zziplib.xml zzipmmapped.xml zzipfseeko.xml \ + $(DESTDIR)$(pkgdocdir)/xml/ + $(INSTALL_DATA) $(PACKAGE)-man.omf $(DESTDIR)$(pkgomfdir)/ + - test ".$(DESTDIR)" != "." || scrollkeeper-update -v + +# ----------------------------------------------- mksite.sh for the main html +site.htm : body.htm + cp $(srcdir)/body.htm site.htm +site.html : body.htm site.htm mksite.sh $(htm_FILES) $(htms_FILES) + cp $(srcdir)/body.htm site.htm + perl $(srcdir)/mksite.pl $(mksite_sh_args) site.htm || \ + $(SHELL) $(srcdir)/mksite.sh $(mksite_sh_args) site.htm + +changes.htm : $(top_srcdir)/ChangeLog Makefile + echo "<pre>" > $@ ; cat $(top_srcdir)/ChangeLog \ + | sed -e "s,\\&,\\&\\;,g" \ + -e "s,<,\\<\\;,g" -e "s,>,\\>\\;,g" \ + -e "/^[A-Z].*[12][09][09][09]/s,\\(.*\\),<b>&</b>," \ + -e "/^[0-9]/s,\\(.*\\),<b>&</b>," >> $@ ; echo "</pre>" >>$@ + +# ----------------------------------------------- create pdf via docbook xml +# sorry, the xmlto / docbook-xsl are too broken to rebuild the PDF anymore + +zzip.xml : $(htm_FILES) zziplib.xml make-dbk.pl + : '@PERL@ make-dbk.pl $(htm_FILES) zziplib.xml >$@' + @PYTHON@ $(srcdir)/zzipdoc/htm2dbk.py $(htm_FILES) zziplib.xml >$@ + test -s "$@" || rm "$@" + +zzip.html : zzip.xml + xmlto html-nochunks zzip.xml +zzip.pdf : zzip.xml + xmlto pdf zzip.xml + +zziplib.pdf : $(htm_FILES) $(srcdir)/zziplib-master.dbk mksite.pl + cp $(srcdir)/zziplib-master.dbk zziplib.docbook + xmlto pdf zziplib.docbook ; rm zziplib.docbook + test -s zziplib.pdf + +pdfs : zziplib.pdf + +# Tell versions [3.59,3.63) of GNU make to not export all variables. +# Otherwise a system limit (for SysV at least) may be exceeded. +.NOEXPORT: diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/README.MSVC6 b/Build/source/libs/zziplib/zziplib-0.13.60/docs/README.MSVC6 new file mode 100644 index 00000000000..ee042c35092 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/README.MSVC6 @@ -0,0 +1,137 @@ +To compile zziplib with MSVC++ 6 you can use the workspace and project +files shipped along with the zziplib tarball in the msvc6/ directory. +This will save you most of the following steps, atleast skip step 1. + +Step 1: create zziplib workspace file + +- Create a workspace and a project file for "zzip". +- Add all .c and .h files in the zzip/ directory. Yes, all. +- Add the toplevel directory (containing the zzip/ directory) as an + "Additional Include Directory" to the search path. Best do this in + -> Project -> Settings -> Tab: C/C++ + -> Category: Preprocessor -> "Additional Include Directories". + This is a comma-separated list, in the workspace files shipped along + with the zziplib tarball, you will see ".." in there where ".." (or ..\..) 
+ is the path to the toplevel directory of the unpacked sources. +- if you did choose "DLL" as a project type then you will automatically + see a define _USRDLL and ZZIP_EXPORTS in (this is just a hint)... + -> Project -> Settings -> Tab: C/C++ + -> Category: Preprocessor -> "Preprocessor Definitions". +- remove any LIB imports other than "kernel32.lib" + -> Project -> Settings -> Tab: Link + -> Category: Input -> "Object/Library Modules" + +Step 2: add zlib dependencies of zziplib.dll + +- if you do not have installed zlib in the system then you may want to + download "lib" + "bin" parts from http://gnuwin32.sf.net/packages/zlib.htm + (I found this msvcrt package via a reference at http://www.zlib.org) +- suppose you have the zlib.h file in "D:\include" and the libz.lib file + is in "D:\lib" then we need to add those dependencies to the project. +- add the path to zlib.h as an "Additional Include Path", best do this in + -> Project -> Settings -> Tab: C/C++ + -> Category: Preprocessor -> "Additional Include Directories". + This is a comma-separated list, after you have changed it, it might + look like "..,.,D:\include" (for ".." part see description in Step 1) +- That is enough to build a zziplib.lib, in order to create a zziplib.dll + we need to resolve its linker dependencies as well, best do this in + -> Project -> Settings -> Tab: Link + -> Category: Input -> "Object/Library Modules" + This is a space separated list (!!), add "libz.lib" there, so after + changing it it might look like "kernel32.lib libz.lib". Also modify + -> Project -> Settings -> Tab: Link + -> Category: Input -> "Additional Library Path" + which is usually empty. After changing it, it might contain "D:\lib". +- Also add ZZIP_DLL or ZZIP_EXPORTS for dllspec(exports), best do this in + -> Project -> Settings -> Tab: C/C++ + -> Category: Preprocessor" -> "Preprocessor Definitions". + After changing it, it might look like + "NDEBUG,WIN32,_WINDOWS,_MBCS,_USRDLL,ZZIP_EXPORTS" + +Step 3: example binaries to link with zziplib.dll + +- dynamic linking is best to avoid any copyright problems, the "Lesser GPL" + does not restrict usage of zziplib for the case of a separated zzip.dll +- the example workspace builds two zziplib libraries, where zziplib.lib + points to the staticlink variant and zzip.lib to the dynalink variant + which will add a dependency on zzip-1.dll being in the PATH +- the example binaries shipped with zziplib tarball do only have a single + .c file per output .exe, we pick zzcat.exe to guide you through. +- if you do not use our shipped project files, create a project "zzcat" + and add "bins/zzcat.c" in there. +- adjust the "Additional Include Directories": + -> Project -> Settings -> Tab: C/C++ + -> Category: Preprocessor -> "Additional Include Directories". + like in Step 1 add the path to "./zzip" owning the zziplib headers. + We do _not_ need the zlib headers to compile "zzcat.exe", so it might + just look like ".." (or "..\.." or "..\zziplib-0.10.82") +- adjust the "Object/Library Modules" + -> Project -> Settings -> Tab: Link + -> Category: Input -> "Object/Library Modules" + adding "zzip.lib libz.lib" with a space. The result might look like + "kernel32.lib zzip.lib libz.lib" or "kernel32.lib zziplib.lib libz.lib". +- adjust the "Additional Library Path" + -> Project -> Settings -> Tab: Link + -> Category: Input -> "Additional Library Path" + and add both zziplib libpath and libz libpath, separated by comma, i.e. 
+ ".\Release,D:\lib" or ".\Debug,D:\lib" + +Step 4: Customization +- have a look at the info parts that can be put into the DLL project: + -> Project -> Settings -> Tab: Link + -> Category: Output -> "Version Information" + e.g. Major: "10" Minor: "82" for Release 0.10.82 of zziplib + -> Project -> Settings -> Tab: Resources -> "Language" + or just ignore the value when no messages are there + -> Project -> Settings -> Tab: Link + -> Category: General -> "Output Filename" + e.g. "zzip-1.dll" intead of "zzip.dll" for this first generation + (needs also to rename dll dependencies from "zzip.lib" to "zzip-1.lib") +- there are a few defines that trigger extra code in zziplib, e.g. + ZZIP_HARDEN - extra sanity check for obfuscated zip files + ZZIP_CHECK_BACKSLASH_DIRSEPARATOR - to check for win32-like paths + (for the magic part within a zip archive we always assume a "/" separator) + ZZIP_USE_ZIPLIKES - not only do magic checks for ".zip" files to + be handled like directories, also do that for a few other zip documents + ZZIP_WRAPWRAP - if there problems on unusual targets then try this one. + -> Project -> Settings -> Tab: C/C++ + -> Category: Preprocessor" -> "Preprocessor Definitions". + +Step 5: Testing +- copy the *.dll and *.exe files from msvc6/Release/ to a place reachable + from your PATH (perhaps d:\bin), or even simpler, go the other way round, + copy the file test/test.zip to the msvc6/Release/ directory. +- open a command window (usually with a "MSDOS" symbol) and go the + directory containing the test.zip (e.g. cd zziplib-0.10.82/msvc6/Release) +- run `zzcat test/README` which should extract the file README from the + test.zip archive and print it to the screen. Try `zzdir test` to see + that it was really a compressed file printed to the screen. +- If it works then everything is alright round zziplib which is a good + thing to know when there are other problems +- at the time of writing (0.10.82 of zziplib), the set of bin files are + precompiled with msvc6 and pushed to the download center at sourceforge. + +cheers, -- guido + +# finally, the older description for zziplib 0.10.5x + +To build zziplib you need to add the path to zlib to the include directories +search path. You find this under + + Project, Settings, C/C++, Preprocessor, Additional Include Directories. + +Example: +You have installed zlib to D:\zlib. You then change the edit box + Additional Include Directories +from + .. +to + ..,D:\zlib + + +Included are two project files for zziplib. One that creates +zziplib as a DLL, and one that creates zziplib as a static library. +The DLL version is compiled with multi-threaded support. The static library +version is currently set to link with libc(d).lib, i.e. only single-threaded +CRT. If this does not suit your needs, you can change this under + Project, Settings, C/C++, Code Generation, Use run-time library. diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/README.SDL b/Build/source/libs/zziplib/zziplib-0.13.60/docs/README.SDL new file mode 100644 index 00000000000..f0fc66fee24 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/README.SDL @@ -0,0 +1,134 @@ +WARNING: + The following instructions are outdated. + They refer back to 16. Dezember 2002 with zziplib version 0.10.66. + Most things refer to MSVC which have a different README + (and there are msvc project files being shipped along) + The rest is mainly an example program that you can use as a + boilerplate in your souce code - may be just copy and use. 
+ +--------------------------------------------------------------------- +16122002, Thomas.Eder@nmi.at, Using the zziplib library with SDL + + +PREREQUISITES + + Tested versions: + zziplib 0.10.66 (preview), SDL 1.2.5, Win32, MSVC6 + + Homepages (download) + zziplib.sourceforge.net (zziplib-0.10.66.tar.gz) + www.libsdl.org (SDL-devel-1.2.5a-VC6.zip) + + You also have to get zlib; I used + from SDL_image-1.2.2.zip in VisualC.zip: + zlib.lib (12.7.1998, 34674 bytes) + zlib.h ( 9.7.1998, 41791 bytes, 1.1.3) + zconf.h ( 8.7.1998, 8089 bytes) + + from SDL_image-devel-1.2.2-VC6.zip: + zlib.dll ( 5.4.2001, 53760 bytes, 1.1.3.1) + + Maybe you should get the latest version (currently 1.1.4) from + http://gnuwin32.sourceforge.net/install.html + (see notes at end of page!) + + +CREATING zziplib.dll/zziplib.lib + + Copy your versions of zlib.lib, zlib.h and zconf.h to the zziplib + directory. + In MSVC (start zziplib.dsw) + Add zlib.lib to the files for the zziplib_DLL project. + Add ZLIB_DLL to the preprocessor definitions. + + Set the active project and the active configuration to create zziplib.dll + and zziplib.lib (I created and used the release version). + + +USING zziplib WITH SDL + + Include/add the following files to your SDL project + (put them in proper directories, etc.): + + Header files: + zconf.h + zlib.h + zzip.h + zzip-conf.h + zzip-io.h + zziplib.h + zzip-msvc.h + zzip-stdint.h + + Libraries: + zlib.lib + zziplib.lib + + DLLs: + zlib.dll + zziplib.dll + + You may also want to use + SDL_rwops_zzip.c + SDL_rwops_zzip.h + + + For compiling it should be sufficient to use + #include <zziplib.h> + in the files where you use zziplib functions. + + +NOTE + + It is possible to use both original (unzipped) and zipped versions of files, + and zziplib will take one of them (depending on the modes when calling + zziplib). + + But this didn't work for all of my original files, so I suggest using zipped + files only (and removing the original unzipped files, so zziplib doesn't try to + open the original version). + + +HINT + + When opening many files from a zip, it's faster to open the zip directory + only once, and not for every file access. You may want to modify + SDL_rwops_zzip for this to get code like: + + + SDL_Surface* image; + SDL_RWops* rw; + SDL_Surface* temp1 = NULL; //default > NULL > error + SDL_Surface* temp2 = NULL; //default > NULL > error + + //last param may be used for err return + ZZIP_DIR* zzipdir = zzip_dir_open( "figures.zip", NULL ); + + ZZIP_FILE* zfile = zzip_file_open(zzipdir, "f1.bmp", ZZIP_CASELESS); + + if (zfile) + { + rw = SDL_RWFromZZIP(zfile); //modified version + if (rw) + { + temp1 = IMG_Load_RW(rw, 0); + SDL_FreeRW(rw); + } + int zret = zzip_file_close( zfile ); + } + + zfile = zzip_file_open(zzipdir, "f2.bmp", ZZIP_CASELESS); + if (zfile) + { + rw = SDL_RWFromZZIP(zfile); //modified version + if (rw) + { + temp2 = IMG_Load_RW(rw, 0); + SDL_FreeRW(rw); + } + int zret = zzip_file_close( zfile ); + } + + //..
etc + + zzip_dir_close( zzipdir ); diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/body.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/body.htm new file mode 100644 index 00000000000..a2b2dba91f4 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/body.htm @@ -0,0 +1,103 @@ +<html><head><title>zziplib <!--$title?--> </title> +<link rel="stylesheet" type="text/css" href="sdocbook.css" /> + <style> + a:link { text-decoration : none ; color : #000080 ; } + a:visited { text-decoration : none ; color : #200060 ; } + .justify { text-align : justify ; } + .navlist { background-color : #F0F0F0 ; + width : 9em ; speak : none ; } + .navprint { pause-after : 200ms ; speak : spell-out ; } + body { background-color : white ; } + .P { text-align : justify ; margin-right: 1em ; + margin-left: 1em ; } + .BLOCKQUOTE { text-align : justify ; margin-right: 3em ; + margin-left: 3em ; } + .PRE { margin-right: 2em ; + margin-left: 2em ; } + .DT { font-weight : bold ; } + .DD { text-align : justify ; margin-right: 1em ; } + </style> +</head><body> +<!--mksite:sectioninfo--> <!--mksite:nosimplevars--> +<table width="100%"><tr valign="top"><td class="navlist"> +<center><sub> + <!--mksite:printerfriendly--> + <a href="${printerfriendly:=site.htm}" title="printer friendly version"> + <img alt="printer / text mode version" width="8" height="8" border="0" /></a> + </sub> + <big><big><big><b> + <font color="#800080"><sup>Z</sup>ZIP<sub>lib</sub></font> + </b></big></big></big> +<br><big><b> <!--$VERSION--> </b></big> +</center> +<hr><a href="zzip-index.html">Library</a> +<br>-<a href="zzip-zip.html"> ZIP Access</a> +<br>-<a href="zzip-file.html"> Transparently</a> +<br>-<a href="zzip-sdl-rwops.html"> SDLrwops <small>Example</small></a> +<br>-<a href="zzip-extio.html"> ext/io <small>Customization</small></a> +<br><><a href="zzip-xor.html"> xor/io <small>Obfuscation</small></a> +<br><><a href="zzip-crypt.html"> zip/no <small>Encryption</small></a> +<small><a href="zzip-cryptoid.html">(2)</a></small> +<br>-<a href="zzip-api.html"> Library API</a> +<br><><a href="zzip-basics.html"> basics</a></u> +<><a href="zzip-extras.html"> extras</a></u> +<br>=<a href="zzip-parse.html">Parsing ZIPs</a> +<br>-<a href="64on32.html"> 64on32 extras</a> +<br>-<a href="future.html"> Next To Come</a> <br> <> +<small><a href="fseeko.html"> fseeko </a></small> +<small><a href="mmapped.html"> mmapped </a></small> +<small><a href="memdisk.html"> memdisk </a></small> +<br>-<a href="configs.html"> Config Helpers</a> +<br>-<a href="sfx-make.html"> Making a zip/exe</a> +<br>=<a href="history.html">Hints And Links</a> +<br>-<a href="referentials.html"> Referentials</a> +<br>-<a href="functions.html"> Functions List..</a> +<br> -<a href="zziplib.html"> zziplib.*</a> +<br> -<a href="zzipmmapped.html"> zzipmmapped.*</a> +<br> -<a href="zzipfseeko.html"> zzipfseeko.*</a> +<!--START--> +<br> -<a href="man/index.html"><small> unix man pages</small></a> +<!--ENDS--> + +<br>  +<br> +<br><small><a href="faq.html"> faq </a></small> +<small><><a href="notes.html"> notes </a></small> +<small><><a href="zip-php.html"> zip-php </a></small> +<hr><a href="download.html"> Download Area </a> +<br><><a href="developer.html"> Developer Area </a> +<br><><a href="http://sourceforge.net/projects/zziplib"> Sourceforge Project</a> +<br><><a href="http://zziplib.sourceforge.net"> zziplib.sf.net + <small><i>Home</i></small></a> + +<br><><a href="changes.html"> ChangeLog</a> +<br><small><a href="copying.html"> LGPL/MPL license</a></small> +<br> 
+<hr> +<center><!--START--> + <a href="http://sourceforge.net/project/?group_id=6389"> + <img src="http://sourceforge.net/sflogo.php?group_id=6389&type=2" + border="0" alt="sourceforge.net" width="125" height="37"> + </a> +<br><small><a href="site.html">-sitemap-</a></small> +</center><!--ENDS--> + +<p align="right"><small> +generated <!--$today--> +</small> +<br> <small>(C)</small> Guido Draheim +<br><i> guidod<small>@</small>gmx.de</i> +</p> +<!--mksite:emailfooter:guidod@gmx.de?subject=zzip:--> + +<p align="right"><small><small> + formatted by <a href="http://zziplib.sf.net/mksite">mksite.sh</a> +</p> + +<p><b>as xml:</b></p> +<hr><a href="zzip-index.xml">Library</a> +<br>-<a href="64on32.xml"> 64on32 extras</a> +<br>-<a href="future.xml"> Next To Come</a> <br> <> +</td><td> + +</td></tr></table></body></html> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/configs.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/configs.htm new file mode 100644 index 00000000000..49631e673de --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/configs.htm @@ -0,0 +1,164 @@ +<section> <date> February 2003 </date> +<h2> Configuration </h2> of other projects using zziplib + +<!--border--> + +<P> + If using the zziplib with other project then you can use a number + of possibility to configure and link. The zziplib had been usually + included within the projects that made use of it - some did even + pick up the advantage to be allowed to staticlink in a limited + set of conditions. Recently however, the zziplib is shipped as a + standard library of various linux/freebsd distros - mostly for + the usage by the php-zip module. This allows third party software + makers to link to the preinstalled library in the system and + consequently reduce the memory consumption - even more than now + with the zziplib being a lightweight anyway (the i386 .so is + usually less than 20k) +</P> + +<section> +<h3> pkg-config --libs </h3> + +<P> + Within modern software development, one should be advised to use + pkg-config as soon as it is available. The pkg-config helper can + handle a lot of problems that can usually come up with linking + to third party libraries in case that those link again dynamically + with other libraries themselves. It does correctly order the + list of "libs", it can throw away duplicate "-L" hints, and same + for cflags "-I" hints, plus it will throw away some sys-includes + that gcc3.2 will warn about with a false positive. +</P> +<P> + There is a number of pkg-config targets installed in the system + but the one you want to use is <b>pkg-config zziplib</b>. + Therefore, a simple Makefile could read like + <pre> + PROGRAM = my_prog + CFLAGS = -Dhappy `pkg-config zziplib --cflags` + LIBS = -Wl,-E `pkg-config zziplib --libs` + + my_prog.o : my_prog.c + $(CC) $(CFLAGS) $< -o $@ + my_prog : my_prog.o + $(LINK) $< $(LIBS) + </pre> +</P> +<P> + The `pkg-config zziplibs --libs` will usually expand to + something like <code>-lzzip -lz</code> which are the + two (!!) libraries that you need to link with - in that + order. The zziplib builds on top of the z-lib algorithms + for compression of files within the zip-archive. That's + the same for other lib-parts of the zziplib project as + well, e.g. the sdl-rwops part which does also need to + link with the sdl-lib - and that's where the pkg-config + infrastructure can be of great help. 
That's the reason + why zziplib installs a few more ".pc" files, you can + get a list of them like this: + <pre> + $ pkg-config --list-all | sort | grep zzip + zziplib zziplib - ZZipLib - libZ-based ZIP-access Library + zzip-sdl-config zzip-sdl-config - SDL Config (for ZZipLib) + zzip-sdl-rwops zzip-sdl-rwops - SDL_rwops for ZZipLib + zzipwrap zzipwrap - Callback Wrappers for ZZipLib + zzip-zlib-config zzip-zlib-config - ZLib Config (for ZZipLib) + </pre> +</P><P> + The two entries like "zzip-sdl-config" and "zzip-zlib-config" + happen to be ".pc" files for the libz.so and libSDL.so that + were seen at configure-time of zziplib - you may want to reuse + these in your projects as well whenever you need to link to + either of zlib or libsdl even in places where there is no direct + need for zziplib. It basically looks like: + <pre> + $ pkg-config zzip-zlib-config --modversion + 1.1.4 + $ pkg-config zzip-zlib-config --libs + -lz + </pre> +</P> + +</section><section> +<h3> zzip-config </h3> +<P> + The pkg-config ".pc" files are relativly young in the history of + zziplib. A long time before that there was the `zzip-config` + script installed in the system. These `*-config` were common + before the pkg-config came about, and in fact the pkg-config + infrastructure was invented to flatten away the problems of + using multiple `*-config` scripts for a project. As long as you + do not combine multiple `*-config`s then it should be well okay + to use the `zzip-config` directly - it does also kill another + dependency on the `pkg-config` tool to build your project, the + zziplib is all that's needed. +</P> +<P> + In its call-structure the `zzip-config` script uses the same + options as `pkg-config`, (well they are historic cousins anyway). + and that simply means you can replace each call above like + `pkg-config zziplib...` with `zzip-config...`. + + <pre> + PROGRAM = my_prog + CFLAGS = -Dhappy `zzip-config --cflags` + LIBS = -Wl,-E `zzip-config --libs` + + my_prog.o : my_prog.c + $(CC) $(CFLAGS) $< -o $@ + my_prog : my_prog.o + $(LINK) $< $(LIBS) + </pre> +</P> +<P> + Be informed that the zzip-config script is low-maintained and + starting with 2004 it will be replaced with a one-line script + that simply reads `pkg-config zziplib $*`. By that time the + rpm/deb packages will also list "pkgconfig" as a dependency + on the zziplib-devel/zziplib-dev part. +</P> + +</section><section> +<h3> autoconf macro </h3> + +<P> + There is currently an autoconf macro installed along into + the usual /usr/share/aclocal space for making it easier for + you to pick up the configure-time cflags/libs needed to + build/link with zziplib. In any way it does look like + this: + <pre> + dnl PKG_CHECK_ZZIPLIB(ZSTUFF, min-version, action-if, action-not) + AC_DEFUN([PKG_CHECK_ZZIPLIB],[dnl + PKG_CHECK_MODULES([$1], [zziplib $2], [$3], [$4])]) + </pre> +</P> +<P> + You are strongly advised to take advantage of the pkgconfig's + macro directly - you can find the macro in + <code>/usr/share/aclocal/pkg.m4</code> and it allows to + combine the flags of a list of library modules that you + want to have. If it is only zziplib, than you could simply + use this in your configure.ac: +<pre> + <b>PKG_CHECK_MODULES</b>([<b>ZZIP</b>],[zziplib >= 0.10.75]) + </pre> +</P><P> + which will provide you with two autoconf/automake variables + named <b><code>ZZIP_CFLAGS</code></b> and <b><code>ZZIP_LIBS</code></b> + respectivly. 
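+  As a quick illustration - and only as a sketch, since the actual
+  target names depend on your own project - the two variables might be
+  consumed in an automake based build like this ("my_prog" is just the
+  example name from the Makefile above):
+  <pre>
+    # configure.ac
+    PKG_CHECK_MODULES([ZZIP],[zziplib >= 0.10.75])
+
+    # Makefile.am
+    bin_PROGRAMS = my_prog
+    my_prog_SOURCES = my_prog.c
+    AM_CFLAGS = $(ZZIP_CFLAGS)
+    my_prog_LDADD = $(ZZIP_LIBS)
+  </pre>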
+</P> +<P> + Up to 2004, the macro in zziplib.m4 will be however carry + a copy of the pkg.m4 so that you do not need another + dependency for your software project. The macro is called + like shown above PKG_CHECK_ZZIPLIB and you would call it + like +<br><code> + PKG_CHECK_ZZIPLIB([ZZIP],[0.10.75])</code><br> + which will give you the two autoconf/automake variables + as well, <code>ZZIP_CFLAGS</code> and <code>ZZIP_LIBS</code> +</P> + +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/copying.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/copying.htm new file mode 100644 index 00000000000..2f0c8c96d79 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/copying.htm @@ -0,0 +1,113 @@ +<section> <date> 2004 </date> +<h2> COPYING - license information </h2> the public license terms + +<P> + The zziplib is a small library that allows for some parts of + obfuscation. This is very handy in commercial projects which tend + to incorporate a copy into their source tree. And with + <a href="zzip-xor.htm">obfuscation</a> it is often advisable + to staticlink the zziplib part and `strip` the symbols from + the resulting binary - in order to obfuscate the usage of a + standard library for semi-`encryption` of data files. +</P> + +<P> + In the past I have been modifying the original LGPL license + with a text that allows staticlinking thereby taking over a + few paragraphs from the MPL as restrictions to do so, just to + defend against improper usage. However I kept being asked + legalese questions since most people do not want to interpret + added text either and on their own without a lawyer. However + that accounts to me as well. +</P> +<P> + The public license(s) are simply there to protect me and + my work, none of this is fixed and it is neither the only + possible way to get hold of a proper license. You can + always contact me to negotiate a special one if you do + need so. In most cases I will just say okay and you get + it for free, perhaps after some presentations I will + ask for som tax-reductable compensation sent to + a wellfare organisation (never me!). +</P> +<P> + A last hint from a friend did make me think as well, as + that the whole point of using standard public licenses + is to protect against the need to use your own lawyers + in the case that someone breaks the license rules. If + one uses a standard license then it is in the interest + of that big organization XY that the license will be + enforced and that it will be shown valid in all courts. + At the time of writing, no opensource license has + ever been discussed to an end in a court trial. +</P> +<P> + That's why at last, I decided to change the COPYING + details once again - and start shipping under a dual + MPL / LGPL license where each of them is separate + and restrictions apply alternatively. Remember that + each license is non-exclusive anyway, and I can give + out as many licenses as I want, here we have one as + MPL, then we have one as LGPL, and perhaps you ask me + for a third text to send you over. The public ones + are just there for you as a free choice which you can + pick without negotiations or a fee. +</P> +<P> + And yes, you will be on established legal grounds as + long as you restrict your usage of the library to the + details contained in either COPYING text. And better + yet, the legal possibilities have been discussed + a few hundred times before. 
You will surely find + good answers on the internet as well to guide you + to decisions in your company whether zziplib may + be adopted for a specific task. +</P> +<P> + The sources themselves are sent out under a dual license, + with both MPL and LGPL license options, and as long as + the MPL part is not removed then the recpient of some + modified sources will be entitled to the same choice + among the public licenses of LGPL / MPL. Note that some + example sources are given away under the ZLIB license + which is nothing more than asking for nice behavior + which should have been the case even without such a text. + <small><small>(However, it is just a fact that some people + happen to behave anti-social especially under pressure of + capitalist needs, said to lower the risks for commercial + success/failure of a company. You have to enforce good + behavior or it will be "forgotten". With a license it is + not just an error, it is a risk in itself to forget about it) + </small></small> +</P> +<P> + As for staticlinking, let us explore that a bit - there has + been a debate that the LGPL warrants in fact the freedom of + the final recipient as you must give him the original or + modified sources of zziplib, to allow them to modify that + part again, and then (re-)link to your own parts. Your own + parts may come in the form of precompiled objects without + sources (as opposed to the GPL restrictions). In here, it + is simply easier to use a dynamic linker that does the + re-linking job at startup-time of the whole project instead + to provide a makefile and linkage descriptions to let the + user do the staticlink it into a combined executable object. + The latter however is often needed for embedded environments + and it is quite of the original motivation to ask for a + staticlink option where in fact the LGPL does allow it anyway + as long as you ship all parts separatly as well. +</P> + +<P> + The MPL defines the area of a combined work a bit differently, + in a way it derives some ideas from BSD'ish licenses. This + part does more care to protect the `Intellectual Properties` + of the original developers. It does ask to prominently show + off that you have gone to link with the work of someone else + in your project. Take special note of <em>"3.5 Required Notices"</em>, + <em>"3.6 Distribution of Executable Versions"</em> and + <em>"3.7 Larger Works"</em> here. Or read a lawyer text on + the legal result of the whole license. +</P> + +</section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/developer.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/developer.htm new file mode 100644 index 00000000000..9c964f09933 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/developer.htm @@ -0,0 +1,75 @@ +<section> <date> 2009-08-23 </date> +<H2> Developer </H2> sourceforge SVN area + +<center> +<a href="http://zziplib.svn.sourceforge.net/viewvc/zziplib/"> + http://zziplib.svn.sourceforge.net/viewvc/zziplib/ </a> +</center> + +<P> + The zziplib is using the Subversion repository of sourceforge + for development. (originally it used the cvs service but it + is now switched over to the svn serivce) Since 0.13.x the zziplib + is using the version control system as the main area to host + the source code in the module "zzip-0". You can get a snapshot + from that area anytime, the access details can be found under the + link above. All later releases are actually snapshots of some + cvs day. 
(Prior releases had seen too many reshuffling of the + build system which cvs is not the best tool to handle gracefully. + Since the switch to svn also refactoring can be tracked just fine). +</P> +<P> + I was using the sourceforge compilefarm to do remote testing of + releses for crossplatform compatibility. This included usually + some unix compatible platforms such as Linux, Solaris, + FreeBSD, Darwin/MacOSX including i386, amd64, sparc, sparc64, + powerpc when available. Even the latest daytoday cvs snapshots + should be fine for these platforms. However sourceforge has + shut down the compilefarm just as HP closed the teamdrive + service for the public. At the moment I am using the buildservice + at build.opensuse.org to test multiple platforms atleast with + the help of a "make check" minimal unittest during the rpmbuild. +</P> +<P> + In the labs of my former employer some more platforms exist but these + were only checked once in a few months, mostly summertime and christmas + season if the general workload is low. That included hp/ux but more + importantly a real MSVC6 installation and a few PCs with win32 flavours. + I do not use win32 at home! Any help in that area is greatly appreciated, + the zziplib was written in strict ansi-C but the build system and dll hell + is sometimes very weird for win32 platforms. I feel it often requires + experienced hands to detect the source of a problem should there be one. +</P> +<P> + Note also that I do not use PHP, there is a regular flow of questions + circulating around the php-zip module which is built around zziplib. But + I can not answer any questions about the build and installation for that. + It would be greatly appreciated if I can find contact to a PHP hacker + to whom I can forward php related mails! The php zip wrapper is not part + of the zziplib in any way. +</P> +<P> + Since there is an MSVC6 system in reach now, I have not been updating my + gcc/mingw32 installations as rigidly as most other win32 developers + have. I understand there are great advancements in the msys/mingw32 area + and the provided technology which made them follow quickly. However in + the labs only some older releases are used for these are called more + "stable". Remember that I do not use win32 at home, only crosscompile to + mingw32 is checked out sometimes but I know that it is somewhat different + than the selfhosted msys environment. Any help is greatly appreciated. +</P> +<P> + Lastly, I am trying to make the zziplib as close as possible to the + thing called "industrial strength". But remember that this is still + an opensource spare time probject. <b>I take patches</b>! Many of the + interesting parts of zziplib have been introduced by submission which + I have later integrated more heavily to make the zziplib be what it + was always intended for: to be a small, fast and portable library allowing + to use a zip file as a special variant of a data filesystem. The ext/io + obfuscation stuff was one of the best feature submissions so far. Thanks. +</P> +<P> + Looking forward to hear from <em>YOU</em>. 
+ - <a href="mailto:guidod@gmx.de?subject=zzip:">guidod@gmx.de</a> +</P> +</section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/download.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/download.htm new file mode 100644 index 00000000000..d52111155fc --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/download.htm @@ -0,0 +1,56 @@ +<section> <date> 2004-05-18 </date> +<H2> Download </H2> what to get to get it + +<section> +<H4>Sourceforge File Area</H4> + +<center> +<a href="http://sourceforge.net/project/showfiles.php?group_id=6389"> + sourceforge.net/project/showfiles.php?group_id=6389 </a> +</center> + +<P> + All source releases and some binary releases are listed at the + sourceforge download area under the link show above. The sourceforge + file area is replicated all over the world and should be accessible + with highest bandwith in all corners of the world. +</P> + +</section><section> +<H4> Which Version To Download </H4> + +<P> + Do not use 0.10.x anymore! It is listed as stable since it is the + only release of zziplib tested to work on a few dozen platforms. + However there were some problematic zip files out there that can + trigger segfaults. Later zzip file decoders have extra checks and + helper routines for that. It's just that the later zziplib have not + been given as many crossplatform build tets as the 0.10.x generation. +</P> +<P> + Use a 0.12.x (proto-stable) or a 0.13.x (developer) variant of + zziplib, especially if you intend to make heavy usage of the zip + decoders in specialized environments - I will not add any fixes to + the 0.10.x series anymore (it's deep frozen) but if you hit a + problem with 0.13.x I can help you quickly with a patch and official + bugfix release. The later versions are regulary checked crossplatform + atleast for <b>Linux, Solaris, FreeBSD, Darwin/MacOSX, Win32/NT</b> + including i386, amd64, sparc, sparc64, powerpc where available. +</P> +<P> + Note that all generations 0.10.x through 0.13.x are strictly + <b>backward compatible</b>. There is a core API (file and dir + handling) being binary compatible, which is also true for most + of the helper routines (data getters). Only some rarely used + entries are made source level compatible, and so far no one had + ever any problem with binary compatibility of the zziplib DLLs. +</P> +<P> + The MSVC users are strongly advised to use a a later version as + well since I have tested the 0.12.x/0.13.x myself and making + some msc binary dll releases directly - prior versions were + thirdparty contributions which were working smoothly since I have + been preparing zziplib for win32 using the gcc/mingw compile suite. + Please check also the <a href="developer.html">developer pages</a>. +</P> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/faq.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/faq.htm new file mode 100644 index 00000000000..56d79366655 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/faq.htm @@ -0,0 +1,182 @@ +<section> <date> 2004 </date> +<H2> FAQ </H2> (non)frequently asked questions + +<BLOCKQUOTE> + While using the zziplib some people come up with questions and + problems that need a little longer to be explained. So here is + a list of these notes for your information. Keep it up. 
+</BLOCKQUOTE> +<ul> +<li> <a href="#latin-1">extended ascii characters in names of zipped files </a> +</li> +<li> <a href="#utf-8">unicode support for names of zipped files </a> +</li> +<li> <a href="#timestamps">timestamps of zipped files </a> +</li> +<li> <a href="#install">installation instructions </a> +</li> +<li> <a href="#php">php zip module installation </a> +</li> +<li> <a href="#license">commercial support </a> +</li> +</ul> + +<DL> +<DT><a name="latin-1" /> + zziplib does not support extended ascii characaters, winzip does</DT> +<DD><P> + That's somehow incorrect - the ascii range is the 7bit lower plane of + an 8bit character encoding. The upper plane had been non-standard for + decades including the age when the ZIP file format was invented. The + first instances of pkware's zip compressor were used on DOS with a + codepage 437 which has a way different encoding for the upper plane + than todays latin-1 encoding which in fact used in <em>all</em> + modern operating systems. So what really see is a mismatch of + character encodings that you are used to. +</P><P> + Even more than that the character encoding had never been specified + at all for the filenames in the central directory part. An alert + reader will however recognize that <em>each</em> file entry does + also have version-info field telling about the compressor that did + create the file entry. That version-info has an upper byte telling + about the host OS being in use while packaging. A heavy-weight + zip decoder might use that value to infer the character encoding + on the host OS (while compressing), to detect a mismatch to the + current OS (while decompressing), and going to re-code the filename + accordingly. +</P><P> + Even more than that the zip file format has seen various extensions + over time that have found their place in an extra info block. There + are info blocks telling more about the filename / codeset. However + the zziplib library does not even attempt to decode a single extra + info block as zziplib is originally meant to be a light-weight library. + However one might want to put a layer on top of the structure decoding + of zziplib that does the necessary detection of character encodings and + re-coding of name entries. Such a layer has not been written so far. +</P></DD> +<DT><a name="utf-8" /> + zziplib does not support any unicode plane for filenames </DT> +<DD><P> + The pkware's appnote.text has an extra info block (id-8) for the + unicode name of the file entry but it was never actually being + used AFAICS. This might be related to the current developments of + older systems to drop usage of latin-1 encoding in the upper plane + of 8bit characters and instead choosing the multibyte encoding + according to UTF-8. This is again highly system specific. +</P><P> + Basically, you would need to instruct the compressor to use + UTF-8 encoding for the file-entries to arrive at a zip archive + with filenames in that specific character encoding. Along with + this zip archive one can switch the application into utf-8 usage + as well and take advantage of filename matches in that encoding. + This will make it so there is not mismatch in character encoding + and implicit re-coding being needed. +</P></DD> +<DT><a name="timestamps" /> + zziplib does not return stat values for file timestamps </DT> +<DD><P> + That's correct and again a re-coding problem. The original + timestamp in each file entry is in DOS format (i.e. old-FAT). + The stat value is usually expected to be in POSIX format. 
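+ Such a conversion is not hard to write on your own, however. The
+ following sketch is not part of zziplib and the helper name is made
+ up for this FAQ entry - it only shows how the two 16bit DOS date/time
+ words of a zip entry could be re-coded into a posix time_t:
+<PRE>
+    #include <time.h>
+
+    /* dos date: bits 15..9 year-1980, 8..5 month, 4..0 day       */
+    /* dos time: bits 15..11 hours, 10..5 minutes, 4..0 seconds/2 */
+    static time_t dos2unix_time (unsigned dosdate, unsigned dostime)
+    {
+        struct tm t = { 0 };
+        t.tm_year = ((dosdate >> 9) & 0x7f) + 80;  /* years since 1900 */
+        t.tm_mon  = ((dosdate >> 5) & 0x0f) - 1;   /* 0..11 */
+        t.tm_mday =  (dosdate       & 0x1f);
+        t.tm_hour =  (dostime >> 11) & 0x1f;
+        t.tm_min  =  (dostime >>  5) & 0x3f;
+        t.tm_sec  =  (dostime        & 0x1f) * 2;
+        t.tm_isdst = -1;                           /* let mktime decide */
+        return mktime (&t);
+    }
+</PRE>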
The + win32 API has an extra function for conversion but none of the + unix compatibles has one, so it would be needed to ship a + conversion function along with zziplib. +</P><P> + However the zziplib is intended to be light-weight system and + used largely for packaging data for an application. There it + is not used strictly as a variant of Virtual File System (vfs) + that would need to map any information from the zip file system + to native host system. Of course applications are free to cut + out the DOS file timestamp and re-code it on their own. It's + just that zziplib does not provide that re-coding originally. +</P></DD> +<DT><a name="install" /> + how can one install the zziplib package </DT> +<DD><P> + The zziplib project is opensource which effectly gives two ways of + installing the package: one can download the source archive and use + a C compiler to derive a binary executable for whatever computer + it needs to be on (see the platform compatibility list). This is + the preferred way but for convenience one can download a binary + installation archive with precompiled executables. +</P><P> + The current project uses autoconf/automake for cross platform + support which includes most unix compatible systems and their + native C compilers. The derivates of the GNU C compiler (gcc) have + replaced most of these native C compilers in the past years. The + <a href="http://www.mingw.org">mingw32</a> project has ported a + unix born C compiler to win32 and zziplib can be compiled with + it for the various win32 platforms. +</P><P> + There exist some C compilers which can not be embedded easily into + a unix compilation framework. The zziplib source archive ships with + project files for MSVC6 and MSVC6 (Microsoft Visual C). Adapting + these project files might help with installation problems of the + DLL hell on win32 platforms. There exist no sufficient guidelines to + mix binary helper libraries for many applications on windows. +</P><P> + There exists win32 binary archives as zip files on the download area + of zziplib (MSI is always on my wishlist). Including the project as + a helper library however you should not use it but instead compile + from source. The general library installation on unix are better, + the zziplib download area contains regularly some linux binary + archives (rpm). Many vendors of unix compatible systems provide + precompiled binary packages of zziplib on their own. +</P></DD> +<DT><a name="php" /> + after installing zziplib the php zip module still does not work </DT> +<DD><P> + Now that is one of <b>the most</b> frequently asked questions that + I do receive. There is just one major problem with it: I did not + write the php zip module (which uses zziplib) and I have no idea + how php modules work or how to tell apache's php sandbox to make + it work. Really, I do not have the slightest clue on that. +</P><P> + I was posting to some php developer sites to spread awareness of + the fact and hopefully to find a guy that I could forward any + questions on the php zip module installation. But so far there is + nothing, it merily seems that such installation problems are in no + way related to zziplib anyway but exists <b>with any other module with a + third party library dependency</b> as well. So the answers on php forum + sites will ask for details of the current php and apache configuration. +</P><P> + Since I do not run a php zip whatever nor any other php stuff, it's + just that those hints were not quite helpful to me. 
It would be really + really great if someone with a php zip background could be so nice to + write a short roundup of the areas to check when a php zip module + installation fails, so that I could post it here. Where are you? + Yours desperatly... ;-) +</P></DD> +<DT><a name="license" /> + how to obtain a license and support contract for a commercial project </DT> +<DD><P> + The zziplib has been created as a spare time project and it is put + under a very easy free public license. Even for commercial projects + there is hardly any need to negotiate a separate license since the + restrictions of the GNU LGPL or MPL can be matched easily. As a + general hint, if the zziplib is shipped unmodified with your project + then you are right within the limits of the free public license. +</P><P> + Sometimes the question for a personal license comes up for very + different reason - the need for a support contract and/or the setting + of functionality guarantees. The free public licenses include a safeguard + clause to that end, "in the hope that it will be useful, + but <em>without any warranty</em>; without even the implied warranty of + <em>merchantability</em> or <em>fitness for a particular purpose</em>." + Since the project was developed as a spare time project however there + have never been personal licenses going beyond. +</P><P> + In general you can still try to negotiate a support contract but it + will be very costly. It is much more profitable for you to tell one + of your developers to have a look at the source code and ensure the + required functionality is there, with hands on. The source code is + written to be very readable, maintainable and extensible. Just be + reminded that the free public licenses have restrictions on shipping + modified binaries but I can give you a cheap personal license to + escape these. (Such licenses can be obtained in return for tax-deductible + donations to organisations supporting opensource software). +</P></DD> +</DL> + +<P> and as always - <em> Patches are welcome </em> - </P> +</section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/fseeko.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/fseeko.htm new file mode 100644 index 00000000000..89c3195e36d --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/fseeko.htm @@ -0,0 +1,193 @@ +<section> <date> 2005 </date> +<H2> zzip/fseeko </H2> zip access for stdio handle + +<BLOCKQUOTE> + These routines are fully independent from the traditional zzip + implementation. They assume a readonly seekable stdio handle + representing a complete zip file. The functions show how to + parse the structure, find files and return a decoded bytestream. +</BLOCKQUOTE> + +<section> +<H3> stdio disk handle </H3> + +<P> + Other than with the <a href="mmapped.html">mmapped</a> alternative + interface there is no need to build special handle for the zip + disk file. The normal stdio file handle (of type <b><code>FILE</code></b>) + serves as the disk access representation. You can open that stdio file + handle any way you want. Note however that the <code>zzipfseeko</code> + routines modify the access state of that file handle, especially the + read position. +</P> + +<P> + To get access to a zipped file, you need a zip archive entry known + under the type <code>ZZIP_ENTRY</code>. This is again modelled after + the <code>DIR_ENTRY</code> type in being a representation of a file + name inside the zip central directory. 
To get a fresh zzip entry, use + <code>zzip_entry_findfirst</code>, to get the next use + <code>zzip_entry_findnext</code>, and do not forget to free the + resource with <code>zzip_entry_free</code>. +</P> +<PRE> + extern ZZIP_ENTRY* zzip_entry_findfirst(FILE* disk); + extern ZZIP_ENTRY* zzip_entry_findnext(ZZIP_ENTRY* entry); + extern int zzip_entry_free(ZZIP_ENTRY* entry); +</PRE> +<P> + These three calls will allow to walk all zip archive members in the + order listed in the zip central directory. To actually implement a + directory lister ("zzipdir"), you need to get the name string of the + zzip entry. This is not just a pointer: the zzip disk entry is not + null terminated actually. Therefore we have a helper function that + will <code>strdup</code> the entry name as a normal C string: +</P> +<PRE> + #include <zzip/fseeko.h> + void _zzip_dir(FILE* disk) + { + for (ZZIP_ENTRY* entry = zzip_findfirst (disk); + entry ; entry = zzip_findnext (entry)) { + char* name = zzip_entry_strdup_name (entry); + puts (name); free (name); + } + } +</PRE> + +</section><section> +<H3> find a zipped file </H3> + +<P> + The central directory walk can be used to find any file in the + zip archive. The <code>zzipfseeko</code> library however provides + two convenience functions that allow to jump directly to the + zip disk entry of a given name or pattern. You are free to use + the newly allocated <code>ZZIP_ENTRY</code> for later calls on + that handle type. Do not forget to <code>zzip_entry_free</code> + the handle unless the handle is consumed by a routine, e.g. + <code>zzip_entry_findnext</code> to hit the end of directory. +</P> +<PRE> + extern ZZIP_ENTRY* zzip_entry_findfile(FILE* disk, char* filename, + ZZIP_ENTRY* _zzip_restrict entry, + zzip_strcmp_fn_t compare); + + extern ZZIP_ENTRY* zzip_entry_findmatch(FILE* disk, char* filespec, + ZZIP_ENTRY* _zzip_restrict entry, + zzip_fnmatch_fn_t compare, int flags); +</PRE> +<P> + In general only the first two arguments are non-null pointing to the + stdio disk handle and the file name to look for. The "entry" argument + is an old value and allows you to walk the zip directory similar to + <code>zzip_entry_findnext</code> but actually leaping forward. The + compare function can be used for alternate match behavior: the default + of <code>strcmp</code> might be changed to <code>strncmp</code> for + a caseless match. The "flags" of the second call are forwarded to the + posix <code>fnmatch</code> which we use as the default function. +</P> +<P> + If you do know a specific filename then you can just use + <code>zzip_entry_findfile</code> and supply the return value to + <code>zzip_entry_fopen</code> with the second argument set to "1" + to tell the function to actually consume whichever entry was given. + That allows you to skip an explicit <code>zzip_entry_free</code> + as it is included in a later <code>zzip_entry_fclose</code>. +</P> +<PRE> + #include <zzip/fseeko.h> +<small> + /* zzipfseeko already exports this convenience function: */</small> + ZZIP_ENTRY_FILE* zzip_entry_ffile(FILE* disk, char* filename) { + return zzip_entry_fopen (zzip_entry_findfile (filename, 0, 0), 1); + } + + int _zzip_read(FILE* disk, char* filename, void* buffer, int bytes) + { + ZZIP_ENTRY_FILE* file = zzip_entry_ffile (disk, filename); + if (! 
file) return -1; + int bytes = zzip_entry_fread (buffer, 1, bytes, file); + zzip_entry_fclose (file); + return bytes; + } +</PRE> + +</section><section> +<H3> reading bytes </H3> + +<P> + The example has shown already how to read some bytes off the head of + a zipped file. In general the zzipfseeko api is used to replace a few + stdio routines that access a file. For that purpose we provide three + functions that look very similar to the stdio functions of + <code>fopen()</code>, <code>fread()</code> and <code>fclose()</code>. + These work on an active file descriptor of type <code>ZZIP_ENTRY_FILE</code>. + Note that this <code>zzip_entry_fopen()</code> uses <code>ZZIP_ENTRY</code> + argument as returned by the findfile api. To open a new reader handle from + a disk archive and file name you can use the <code>zzip_entry_ffile()</code> + convenience call. +</P> + +<PRE> + ZZIP_ENTRY_FILE* zzip_entry_ffile (FILE* disk, char* filename); + ZZIP_ENTRY_FILE* zzip_entry_fopen (ZZIP_ENTRY* entry, int takeover); + zzip_size_t zzip_entry_fread (void* ptr, + zzip_size_t sized, zzip_size_t nmemb, + ZZIP_ENTRY_FILE* file); + int zzip_entry_fclose (ZZIP_ENTRY_FILE* file); + int zzip_entry_feof (ZZIP_ENTRY_FILE* file); +</PRE> + +<P> + In all of the examples you need to remember that you provide a single + stdio <code>FILE</code> descriptor which is in reality a virtual + filesystem on its own. Per default filenames are matched case + sensitive also on win32 systems. The findnext function will walk all + files on the zip virtual filesystem table and return a name entry + with the full pathname, i.e. including any directory names to the + root of the zip disk <code>FILE</code>. +</P> + +</section><section> +<H3> ZZIP_ENTRY inspection </H3> + +<P> + The <code>ZZIP_ENTRY_FILE</code> is a special file descriptor handle + of the <code>zzipfseeko</code> library - but the <code>ZZIP_ENTRY</code> + is not so special. It is actually a bytewise copy of the data inside the + zip disk archive (plus some internal hints appended). While + <code>zzip/fseeko.h</code> will not reveal the structure on its own, + you can include <code>zzip/format.h</code> to get access to the actual + structure content of a <code>ZZIP_ENTRY</code> by (up)casting it to +<br><b><code> struct zzip_disk_entry</code></b>. +</P> + +<P> + In reality however it is not a good idea to actually read the bytes + in the <code>zzip_disk_entry</code> structure unless you seriously know + the internals of a zip archive entry. That includes any byteswapping + needed on bigendian platforms. Instead you want to take advantage of + helper macros defined in <code>zzip/fetch.h</code>. These will take + care to convert any struct data member to the host native format. 
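+  As a small sketch - the archive member name and the function name are
+  made up here, and the accessors used are among those listed just
+  below - the stored and uncompressed size of an entry might be
+  inspected like this:
+<PRE>
+    #include <zzip/fseeko.h>
+    #include <zzip/format.h>
+    #include <zzip/fetch.h>
+    #include <stdio.h>
+
+    void _zzip_show_sizes (FILE* disk)
+    {
+        ZZIP_ENTRY* entry = zzip_entry_findfile (disk, "data/readme.txt", 0, 0);
+        if (! entry) return;
+        /* the ZZIP_ENTRY is upcast to the raw zip structure ... */
+        struct zzip_disk_entry* d = (struct zzip_disk_entry*) entry;
+        /* ... and the byteswapping-safe accessors read its fields */
+        printf ("%li compressed, %li uncompressed\n",
+                (long) zzip_disk_entry_csize (d),
+                (long) zzip_disk_entry_usize (d));
+        zzip_entry_free (entry);
+    }
+</PRE>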
+</P> +<PRE> +extern uint16_t zzip_disk_entry_get_flags( zzip_disk_entry* entry); +extern uint16_t zzip_disk_entry_get_compr( zzip_disk_entry* entry); +extern uint32_t zzip_disk_entry_get_crc32( zzip_disk_entry* entry); + +extern zzip_size_t zzip_disk_entry_csize( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_usize( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_namlen( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_extras( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_comment( zzip_disk_entry* entry); +extern int zzip_disk_entry_diskstart( zzip_disk_entry* entry); +extern int zzip_disk_entry_filetype( zzip_disk_entry* entry); +extern int zzip_disk_entry_filemode( zzip_disk_entry* entry); + +extern zzip_off_t zzip_disk_entry_fileoffset( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_sizeof_tail( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_sizeto_end( zzip_disk_entry* entry); +extern char* zzip_disk_entry_skipto_end( zzip_disk_entry* entry); +</PRE> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/functions.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/functions.htm new file mode 100644 index 00000000000..1ded1f47d66 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/functions.htm @@ -0,0 +1,28 @@ +<H2> Exported Functions </H2> +<date>2006-09-21</date> + +<P> + The ZZipLib Project does already provide THREE libraries. The classic + zziplib.so (via "zzip/lib.h") is the most prominent one and used in + a number of different projects. The zzipfseeko.so ("zzip/fseeko.html") + and zzipmmapped.so ("zzip/mmapped.html") are technology demonstrations. +</P> + +<ul> +<li><a href="zzip/lib.h"> zzip/lib.h </a> - Main User Library </li> +<li><a href="zzip/mmapped.h"> zzip/mmapped.h </a> - MMap Library </li> +<li><a href="zzip/fseeko.h"> zzip/fseeko.h </a> - Fseeko Library </li> +</ul> + +<P> + Additonally, there is a complete set of unpacked documentation - the + unix manual pages are translated to html manual pages. +</P> + +<ul> +<li><a href="manual">zzip man pages</a> +</ul> + +<p> </p> +<p> </p> +<p> </p> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/future.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/future.htm new file mode 100644 index 00000000000..e116429abb1 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/future.htm @@ -0,0 +1,80 @@ +<?xml-stylesheet type="text/css" href="sdocbook.css" ?> +<section> <date> 15. July 2002 </date> +<h2> ZZIP Future </h2> What next to come. + +<section><!--border--> +<h3> ZIP-Write </h3> + +<P> + Anybody out there who wants to program the write-support for the + zziplib? Actually, I just do not have the time to do it and no + real need to but I guess it would be nice for people as for + example to spit out savegame files in zipformat. The actual + programming path is almost obvious - start off with the zziplib + as it is, and let it open an existing zip-file. This will parse + the central directory into memory - including the file-offsets + for each file. Then, truncate the zip-realfile to the place that + the central-dir was found (identical with the end of the last + file). If a datafile is opened for writing, either add a new + entry or modify the start-offset of the existing entry to the + end of the zip-realfile - the old data is simply junk. Then + init zlib to do the deflation of the data and append it to the + current zip-realfile. 
When the zipdir-handle is getting closed + from write-mode, the zip's central-directory needs to be appended + to the file on disk. This coincides with creating a new zip-file + with an empty central-directory that can be spit out to disk. + During development, do not care about creating temp-files to + guard against corruption for partial writes - the usual application + will use the zziplib to create zip savegames in one turn, no + "update"-operation needs to be implemented like exists in the + standalone zip command utilities. +</P> + +</section><section> +<h3> readdir for subdir inside zip magicdir </h3> + +<P> + See the notes in the first paragraphs of <a href="zzip-api.html"> + ZZIP Programmers Interface</a> description. It would add some + complexity for something I never needed so far. The question + came up with using zziplib as the backend of a dynamic webserver + to store the content in compressed form possibly through the + incarnation of a php module - and some scripted functionality + that walks all directories to index the files hosted. I'm not + going to implement that myself but perhaps someone else wants + to do it and send me patches for free. +</P> + +</section><section> +<h3> obfuscation example project </h3> + +<P> + A subproject that shows <b>all</b> the steps from a dat-tree + to a dat-zip to an obfuscated-dat along with build-files and + source-files for all helper tools needed to obfuscate and + deobfuscate, plus a sample program to use the obfuscated + dat-file and make some use of it. Along with some extra + documentation about 20..40 hours. Don't underestimate the + amount of work for it! (otherwise a great student project). +</P> + +</section><section> +<h3> zip/unzip tool </h3> + +<P> + The infozip tools implement a full set of zip/unzip routines + based on internal code to access the zip-format. The zziplib + has its own set of zip-format routines. Still, it should be + possible to write a frontend to the library that implements + parts (if not all) of the options of the infozip zip/unzip + tools. Even without write-support in zziplib it would be + interesting to see an normal unzip-tool that does not use + the magic-wrappers thereby only going off at plain zip-files. + On the upside, such a tool would be smaller than the infozip + tools since it can use the library routines that are shared + with other tools as well. Again - don't underestimate the + amount of work for it, I guess 40..80 hours as there is a lot + of fine-tuning needed to match the infozip model. +</P> +</section> +</section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/history.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/history.htm new file mode 100644 index 00000000000..7250729e877 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/history.htm @@ -0,0 +1,92 @@ +<section><date>created 1.Jun.2000, last updated 25.Apr.2002 </date> +<h2> History and Links </h2> plus Installation and Contact Hints + +<!--border--> + +<section> +<h3> A Bit Of History </h3> + +<P> +You'll find <a href="http://www.gzip.org">gzip</a> using the same compression +that was written by Jean-loup <a href="http://gailly.net">Gailly</a> +for the <a href="http://www.info-zip.org">Info-Zip</a> Group +whose <a href="http://www.info-zip.org/pub/infozip/Zip.html">Zip</a> +program is compatible with msdos PKZIP program from +<a href="http://www.pkware.com">PK Ware</a>. 
Then, in collaboration +with <a href="http://www.alumni.caltech.edu/~madler">Mark Adler</a> +he wrote the <a href="http://www.gzip.org/zlib">zlib</a> +compression library which was later standardized in the +<a href="ftp://ftp.uu.net/graphics/png/documents/zlib/zdoc-index.html"> +zlib RFCs</a>, namely +<a href="http://www.ietf.org/rfc/rfc1950.txt">RFC 1950</a> +<a href="ftp://ftp.uu.net/graphics/png/documents/zlib/rfc-zlib.html.Z"> +zlib 3.3</a>, +<a href="http://www.ietf.org/rfc/rfc1951.txt">RFC 1951</a> +<a href="ftp://ftp.uu.net/graphics/png/documents/zlib/rfc-deflate.html.Z"> +deflate 1.3</a> and +<a href="http://www.ietf.org/rfc/rfc1952.txt">RFC 1952</a> +<a href="ftp://ftp.uu.net/graphics/png/documents/zlib/rfc-gzip.html"> +gzip 4.3</a>. The free algorithm can be found in lots of places +today including PPP packet compression and PNG picture compression. +</P> + +</section><section> +<h3> Installation </h3> + +<P> + The installation is from the source .tar.gz tarball does follow + the simple gnu style: type <tt>''configure && make install''</tt> + in the unpacked directory. This will actually perform the usual + sequence of <tt>''configure && make && make install''</tt>. The + use of <tt>''make rpm''</tt> will make rpms based on your system + setup, and using a decent mingw32 compiler (e.g. the crossgcc + from <a href="http://libsdl.org/Xmingw32">libsdl.org/Xmingw32</a>) + will allow you to create windows dlls using a gnu development + environment. MSVC and Borland support (Make-)files should be + easy to be derived from the <a href="Makefile.am">Makefile.am</a> +</P> + +</section><section> +<h3> Contact </h3> + +<P> + The library was developed by + <a href="mailto:guidod@gmx.de?subject=zziplib"> + Guido Draheim </a> based on the library + <a href="http://freshmeat.net/appindex/1999/08/02/933593367.html"> + <tt>zip08x</tt> </a> + by <a href="mailto:too@iki.fi"> Tomi Ollila </a> (many thanks + for his support of the zziplib project). He has provided + a good deal of testing rounds and very helpful comments. + It may be assumed that this library supersedes + <a href="http://www.iki.fi/too/sw/zip08x.readme"> + <tt>zip08x</tt></a>, and in April 2002, he + has even given up copyright restrictions coming from zip08x + and changed the <a href="http://www.iki.fi/too/sw/zip08x.readme">zip08x</a> + readme to point to <a href="http://zziplib.sf.net">zziplib</a>. + Anyone who wants to contribute in accessing zip-archives + with the zlib-library is hereby kindly invited to send us + comments and sourcecode. +</P> + +</section><section> +<h3> Links </h3> + +<P> +The <a href="zziplib.html">zziplib library</a> must be +linked with the free <b><a href="http://www.gzip.org/zlib/">zlib</a></b> +<a href="http://www.info-zip.org/pub/infozip/zlib">[1]</a> +<a href="http://www.lifl.fr/PRIVATE/Manuals/gnulang/zlib">[2]</a> +<a href="http://pobox.com/~newt">[3]</a> package originally developed +by the <a href="http://www.info-zip.org">Info-Zip</a> Group +and now maintained at the <a href="http://www.gzip.org">GZip</a> Group. +As of late, the pkware appnote.txt has been revised into a whitepaper +document named +<a href="http://www.pkware.com/products/enterprise/white_papers/appnote.html"> + "APPNOTE.TXT - .ZIP File Format Specification"</a>. +Be also aware of other zzip like projects, e.g. +<a href="http://zipios.sourceforge.net">zipios++</a> that +mangles zip access into C++ iostream facilities. 
+</P> + +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-dbk.pl b/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-dbk.pl new file mode 100644 index 00000000000..e1fe56683df --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-dbk.pl @@ -0,0 +1,118 @@ +#! /usr/local/bin/perl +# this file converts simple html text into a docbook xml variant. +# The mapping of markups and links is far from perfect. But all we +# want is the docbook-to-pdf converter and similar technology being +# present in the world of docbook-to-anything converters. + +use strict; + +my %o; + +my %file; +my $F; +my @order; + +for $F (@ARGV) +{ + if ($F =~ /^(\w+)=(.*)/) + { + $o{$1} = $2; + }else{ + open F, "<$F" or next; + my $T = join ("",<F>); close F; + $file{$F}{text} = $T; + $file{$F}{orig} = $F; + push @order, $F; + } +} + +$o{version} = `date` if not length $o{version}; + +for $F (keys %file) +{ + $_ = $file{$F}{text}; + s{<!--VERSION-->}{ $o{version} }gse; + s{</h2>(.*)}{</title>\n<subtitle>$1</subtitle>}mg; + s{<h2>}{<sect1 id=\"$F\"><title>}mg; + s{<[Pp]([> ])}{<para$1}mg; s{</[Pp]>}{</para>}mg; + s{<pre>}{<screen>}mg; s{</pre>}{</screen>}mg; + s{<h3>}{<sect2><title>}mg; + s{</h3>((?:.(?!<sect2>))*.?)}{</title>$1</sect2>}sg; + s{<!doctype [^<>]*>}{}sg; + s{<!DOCTYPE [^<>]*>}{}sg; + s{(<\w+\b[^<>]*\swidth=)(\d+\%)}{$1\"$2\"}sg; + s{(<\w+\b[^<>]*\s\w+=)(\d+)}{$1\"$2\"}sg; + s{&&}{\&\;\&\;}sg; + s{\$\<}{\$\<\;}sg; + s{&(\w+[\),])}{\&\;$1}sg; + s{(</?)span(\s[^<>]*)?>}{$1."phrase$2>"}sge; + s{(</?)small(\s[^<>]*)?>}{$1."note$2>"}sge; + s{(</?)(b|em|i)>}{$1."emphasis>"}sge; + s{(</?)(li)>}{$1."listitem>"}sge; + s{(</?)(ul)>}{$1."itemizedlist>"}sge; + s{(</?)(ol)>}{$1."orderedlist>"}sge; + s{(</?)(dl)>}{$1."variablelist>"}sge; + s{<dt\b([^<>]*)>}{"<varlistentry$1><term>"}sge; + s{</dt\b([^<>]*)>}{"</term>"}sge; + s{<dd\b([^<>]*)>}{"<listitem$1>"}sge; + s{</dd\b([^<>]*)>}{"</listitem></varlistentry>"}sge; + s{<table\b([^<>]*)>}{"<informaltable$1><tgroup cols=\"2\"><tbody>"}sge; + s{</table\b([^<>]*)>}{"</tbody></tgroup></informaltable>"}sge; + s{(</?)tr(\s[^<>]*)?>}{$1."row$2>"}sge; + s{(</?)td(\s[^<>]*)?>}{$1."entry$2>"}sge; + s{<informaltable\b[^<>]*>\s*<tgroup\b[^<>]*>\s*<tbody> + \s*<row\b[^<>]*>\s*<entry\b[^<>]*>\s*<informaltable\b} + {<informaltable}gsx; + s{</informaltable>\s*</entry>\s*</row> + \s*</tbody>\s*</tgroup>\s*</informaltable>} + {</informaltable>}gsx; + s{(<informaltable[^<>]*\swidth=\"100\%\")}{$1 pgwide=\"1\"}gs; + s{(<tbody>\s*<row[^<>]*>\s*<entry[^<>]*\s)(width=\"50\%\")} + {<colspec colwidth=\"1*\" /><colspec colwidth=\"1*\" />\n$1$2}gs; + + s{<nobr>([\'\`]*)<tt>}{<cmdsynopsis>$1}sg; + s{</tt>([\'\`]*)</nobr>}{$2</cmdsynopsis>}sg; + s{<nobr><(?:tt|code)>([\`\"\'])}{<cmdsynopsis>$1}sg; + s{<(?:tt|code)><nobr>([\`\"\'])}{<cmdsynopsis>$1}sg; + s{([\`\"\'])</(?:tt|code)></nobr>}{$1</cmdsynopsis>}sg; + s{([\`\"\'])</nobr></(?:tt|code)>}{$1</cmdsynopsis>}sg; + s{(</?)tt>}{$1."constant>"}sge; + s{(</?)code>}{$1."literal>"}sge; + s{>([^<>]+)<br>}{><highlights>$1</highlights>}sg; + s{<br>}{<br />}sg; + + s{(</?)date>}{$1."sect1info>"}sge; + s{<reference>}{<reference id=\"reference\">}s; + + s{<a\s+href=\"((?:http|ftp|mailto):[^<>]+)\"\s*>((?:.(?!</a>))*.)</a>} + { "<ulink url=\"$1\">$2</ulink>" }sge; + s{<a\s+href=\"zziplib.html\#([\w_]+)\"\s*>((?:.(?!</a>))*.)</a>} + { "<link linkend=\"$1\">$2</link>" }sge; + s{<a\s+href=\"(zziplib.html)\"\s*>((?:.(?!</a>))*.)</a>} + { "<link linkend=\"reference\">$2</link>" }sge; + 
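+    # local *.html cross references become docbook <link> elements; the
+    # trailing "l" is chopped so that the linkend matches the *.htm input
+    # file key, and unknown targets are reported as "bad link" on stderr.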
s{<a\s+href=\"([\w-]+[.]html)\"\s*>((?:.(?!</a>))*.)</a>} + { my $K = $1; chop $K; + if (not exists $file{$K}) { print STDERR "bad link $1\n"; } + "<link linkend=\"$K\">$2</link>" }sge; + s{<a\s+href=\"([\w-]+[.](?:h|c|am|txt))\"\s*>((?:.(?!</a>))*.)</a>} + { "<ulink url=\"file:$1\">$2</ulink>" }sge; + s{<a\s+href=\"([A-Z0-9]+[.][A-Z0-9]+)\"\s*>((?:.(?!</a>))*.)</a>} + { "<ulink url=\"file:$1\">$2</ulink>" }sge; + +# s{(</?)subtitle>}{$1."para>"}ge; + + $_ .= "</sect1>" if /<sect1[> ]/; + $file{$F}{text} = $_; +} + +my $n = "\n"; + +print '<!DOCTYPE reference PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"',$n; +print ' "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">',$n; +print '<book><chapter><title>Documentation</title>',$n; +for $F (@order) +{ + print "</chapter>" if $file{$F}{text} =~ /<reference /; + print $file{$F}{text},$n,$n; +} +print '</book>',$n; diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-doc.pl b/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-doc.pl new file mode 100644 index 00000000000..89da2f6a95c --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-doc.pl @@ -0,0 +1,580 @@ + +use strict "vars"; + +my $x; +my $F; +my @regs; +my %file; +my %func; + +my %o = ( verbose => 0 ); + +$o{version} = + `grep -i "^version *:" *.spec | sed -e "s/[Vv]ersion *: *//"`; +$o{package} = + `grep -i "^name *:" *.spec | sed -e "s/[Nn]ame *: *//"`; +$o{version} =~ s{\s*}{}gs; +$o{package} =~ s{\s*}{}gs; + +$o{version} = `date +%Y.%m.%d` if not length $o{version}; +$o{package} = "_project" if not length $o{package}; + +$o{suffix} = "-doc1"; +$o{mainheader} = "$o{package}.h"; + +my %fn; +my $id = 1000; + +for $F (@ARGV) +{ + if ($F =~ /^(\w+)=(.*)/) + { + $o{$1} = $2; + }else{ + open F, "<$F" or next; + my $T = join ("",<F>); close F; + + $T =~ s/\&/\&\;/sg; + $T =~ s/¬/\&#AC\;/sg; + $T =~ s/\*\//¬/sg; + + # cut per-function comment block + while ( $T =~ + s{ [/][*][*](?=\s) ([^¬]+) ¬ ([^\{\}\;\#]+) [\{\;] } + { per_function_comment_and_declaration($F," ".$1,$2) }gsex + ) {} + + # cut per-file comment block + if ( $T =~ m{ ^ [/][*]+(?=\s) ([^¬]+) ¬ + (\s*\#include\s*<[^<>]*>(?:\s*/[/*][^\n]*)?) 
}sx) + { + $file{$F}{comment} = $1; + $file{$F}{include} = $2; + $file{$F}{comment} =~ s/¬/\*\//sg; + $file{$F}{include} =~ s/¬/\*\//sg; + $file{$F}{include} =~ s{[/][*]}{//}s; + $file{$F}{include} =~ s{[*][/]}{\n}s; + $file{$F}{include} =~ s{<}{\<\;}sg; + $file{$F}{include} =~ s{>}{\>\;}sg; + } + elsif ( $T =~ m{ ^ [/][*]+(?=\s) ([^¬]+) ¬ }sx) + { + $file{$F}{comment} = $1; + $file{$F}{comment} =~ s/¬/\*\//sg; + } + + # throw away the rest - further processing on memorized strings only + } +} + +$o{outputfilestem}= "$o{package}$o{suffix}" if not length $o{outputfilestem}; +$o{docbookfile}= "$o{outputfilestem}.docbook" if not length $o{docbookfile}; +$o{libhtmlfile}= "$o{outputfilestem}.html" if not length $o{libhtmlfile}; +$o{dumpdocfile}= "$o{outputfilestem}.dxml" if not length $o{dumpdocfile}; + +sub per_function_comment_and_declaration +{ + my ($filename, $comment, $prototype) = @_; + + $prototype =~ s{¬}{*/}sg; + $comment =~ s{¬}{*/}sg; + $comment =~ s{<([\w\.\-]+\@[\w\.\-]+\w\w)>}{<$1>}sg; + $func{$id}{filename} = $filename; + $func{$id}{comment} = $comment; + $func{$id}{prototype} = $prototype; + $id ++; + return $prototype; +} +# ----------------------------------------------------------------------- +sub pre { # used for non-star lines in comment blocks + my $T = $_[0]; $T =~ s/\&/\&\;/g; + $T =~ s/\</\<\;/g; $T =~ s/\>/\>\;/g; $T =~ s/\"/\"\;/g; + $T =~ s/^/\ /gm; # $T =~ s/^/\| /gm; + return " <pre> $T </pre> "; +} + +# per-file comment block handling +my $name; +for $name (keys %file) +{ + $file{$name}{comment} =~ s{<([\w\.\-]+\@[\w\.\-]+\w\w)>}{<$1>}sg; + $file{$name}{comment} =~ s{ ^\s?\s?\s? ([^\*\s]+ .*) $}{&pre($1)}mgex; + $file{$name}{comment} =~ s{ ^\s*[*]\s* $}{ <p> }gmx; + $file{$name}{comment} =~ s{ ^\s?\s?\s?\* (.*) $}{ $1 }gmx; + $file{$name}{comment} =~ s{ </pre>(\s*)<pre> }{$1}gsx; + $file{$name}{comment} =~ s{ <([^<>\;]+\@[^<>\;]+)> }{<email>$1</email>}gsx; + $file{$name}{comment} =~ s{ \<\;([^<>\&\;]+\@[^<>\&\;]+)\>\; } + {<email>$1</email>}gsx; + $file{$name}{comment} .= "<p>"; + + $file{$name}{comment} =~ s{ \b[Aa]uthor\s*:(.*<\/email>) } + { + $file{$name}{author} = "$1"; + "<author>"."$1"."</author>" + }sex; + + $file{$name}{comment} =~ s{ \b[Cc]opyright[\s:]([^<>]*)<p> } + { + $file{$name}{copyright} = "$1"; + "<copyright>"."$1"."</copyright>" + }sex; +# if ($name =~ /file/) { +# print STDERR $file{$name}{comment},"\n"; +# } + if ($file{$name}{include} =~ m{//\s*(\w+)[.][.][.]\s*}) + { + if (length $o{$1}) { + $file{$name}{include} = "#include " + .$o{$1}."\n"; + $file{$name}{include} =~ s{<}{\<\;}sg; + $file{$name}{include} =~ s{>}{\>\;}sg; + } + } +} + +# ----------------------------------------------------------------------- + +# pass 1 of per-func strings: +# (a) cut prototype into prespec/namespec/callspec +# (b) sanitize comment-block into proper docbook format +# do this while copying strings from $func{$name} to $fn{name} strstrhash +my @namelist; +for $x (sort keys %func) +{ + my $name = $func{$x}{prototype}; + $name =~ s/^.*[^.]\b(\w[\w.]*\w)\b\s*\(.*$/$1/s; + push @namelist, $name; # may be you want to omit some funcs from output? 
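+    # split the prototype into the leading return type (prespec), the bare
+    # function name (namespec) and the parameter list (callspec) - the three
+    # parts are wrapped into separate html and docbook markup further down.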
+ + $func{$x}{prototype} =~ m{ ^(.*[^.]) \b(\w[\w.]*\w)\b (\s*\(.*) $ }sx; + $fn{$name}{prespec} = $1; + $fn{$name}{namespec} = $2; + $fn{$name}{callspec} = $3; + + $fn{$name}{comment} = $func{$x}{comment}; + $fn{$name}{comment} =~ s/(^|\s)\=\>\"([^\"]*)\"/$1<link>$2<\/link>/gmx; + $fn{$name}{comment} =~ s/(^|\s)\=\>\'([^\"]*)\'/$1<link>$2<\/link>/gmx; + $fn{$name}{comment} =~ s/(^|\s)\=\>\s(\w[\w.]*\w)\b/$1<link>$2<\/link>/gmx; + $fn{$name}{comment} =~ + s/(^|\s)\=\>\s([^\s\,\.\!\?\:\;\<\>\&\'\=\-]+)/$1<link>$2<\/link>/gmx; + + # cut comment in first-line (describe) and only keep the rest in comment + $fn{$name}{describe} = $fn{$name}{comment}; + $fn{$name}{describe} =~ s{^([^\n]*\n).*}{$1}gs; + $fn{$name}{comment} =~ s{^[^\n]*\n}{}gs; + if ($fn{$name}{describe} =~ /^\s*$/s) + { + $fn{$name}{describe} = "(".$func{$x}{filename}.")"; + $fn{$name}{describe} =~ s,[.][.][/],,g; + } + + $fn{$name}{comment} =~ s/ ^\s?\s?\s? ([^\*\s]+ .*) $/&pre($1)/mgex; + $fn{$name}{comment} =~ s/ ^\s?\s?\s?\* (.*) $/ <br \/> $1 /gmx; + $fn{$name}{comment} =~ s/ ^\s*<br\s*\/>\s* $/ <p> /gmx; + $fn{$name}{comment} =~ s{<<}{<}sg; + $fn{$name}{comment} =~ s{>>}{>}sg; + $fn{$name}{comment} =~ s/ (<p>\s*)<br\s*\/?>/$1/gsx; + $fn{$name}{comment} =~ s/ (<p>\s*)<br\s*\/?>/$1/gsx; + $fn{$name}{comment} =~ s/ (<br\s*\/?>\s*)<br\s*\/?>/$1/gsx; + $fn{$name}{comment} =~ s/<c>/<code>/gs; + $fn{$name}{comment} =~ s/<\/c>/<\/code>/gs; + $fn{$name}{comment} =~ s/<\/pre>(\s*)<pre>/$1/gs; + + $fn{$name}{filename} = $func{$x}{filename}; + $fn{$name}{callspec} =~ s{^ \s*}{}gsx; + $fn{$name}{prespec} =~ s{^ \s*}{}gsx; + $fn{$id} = $x; +} + +# add extra docbook markups to callspec in $fn-hash +for $name (@namelist) # <paramdef> +{ + $fn{$name}{callspec} =~ s:^([^\(\)]*)\(:$1<parameters>\(<paramdef>:s; + $fn{$name}{callspec} =~ s:\)([^\(\)]*)$:</paramdef>\)</parameters>$1:s; + $fn{$name}{callspec} =~ s:,:</paramdef>,<paramdef>:gs; + $fn{$name}{callspec} =~ s:<paramdef>(\s+):$1<paramdef>:gs; + $fn{$name}{callspec} =~ s:(\s+)</paramdef>:</paramdef>$1:gs; +} + +# html-formatting of callspec strings +for $name (@namelist) +{ + $fn{$name}{declcode} = + "<td valign=\"top\"><code>".$fn{$name}{prespec}."<\/code><\/td>" + ."<td valign=\"top\"> </td>" + ."<td valign=\"top\"><a href=\"#$name\">" + ."\n <code>".$fn{$name}{namespec}."<\/code>\n" + ."<\/a><\/td>" + ."<td valign=\"top\"> </td>" + ."<td valign=\"top\">".$fn{$name}{callspec}."<\/td>"; + + $fn{$name}{implcode} = + "<code>".$fn{$name}{prespec}."<\/code>" + ."\n <br \/><b><code>".$fn{$name}{namespec}."<\/code><\/b>" + ."\n <code>".$fn{$name}{callspec}."<\/code>"; + + $fn{$name}{declcode} =~ s{\s+<paramdef>}{\n<nobr>}gs; + $fn{$name}{implcode} =~ s{\s+<paramdef>}{\n<nobr>}gs; + $fn{$name}{declcode} =~ s{<paramdef>}{<nobr>}gs; + $fn{$name}{implcode} =~ s{<paramdef>}{<nobr>}gs; + $fn{$name}{declcode} =~ s{</paramdef>}{</nobr>}gs; + $fn{$name}{implcode} =~ s{</paramdef>}{</nobr>}gs; + $fn{$name}{declcode} =~ s{<parameters>}{\n <code>}gs; + $fn{$name}{implcode} =~ s{<parameters>}{\n <code>}gs; + $fn{$name}{declcode} =~ s{</parameters>}{</code>\n}gs; + $fn{$name}{implcode} =~ s{</parameters>}{</code>\n}gs; +} + +# whether each function should get its own page or combined with others +my $combinedstyle = 1; + +for $name (@namelist) +{ + if ($fn{$name}{describe} =~ /^ \s* <link>(\w[\w.]*\w)<\/link> /sx) + { + if ($combinedstyle and exists $fn{$1}) + { + # $into tells later steps which func-name is the leader of a man + # page and that this func should add its descriptions over there. 
+ $fn{$name}{into} = $1; + } + } + + if ($fn{$name}{describe} =~ s/(.*)also:(.*)/$1/) + { + $fn{$name}{_seealso} = $2; + } + + # and prepare items for being filled in $combinedstyle (html-mode) + # which includes adding descriptions of the leader functions firsthand + $fn{$name}{_anchors} = "<a name=\"$name\" />"; + $fn{$name}{_impcode} = "<code>".$fn{$name}{implcode}."</code>"; + $fn{$name}{_comment} = "<p> ".$fn{$name}{describe}."\n"; + $fn{$name}{_comment} .= "<p>".$fn{$name}{comment}; +} + +for $name (@namelist) # and add descriptions of non-leader entries (html-mode) +{ + next if not exists $fn{$name}{into}; # skip leader pages + my $into = $fn{$name}{into}; + $fn{$into}{_anchors} .= "<a name=\"$name\" />"; + $fn{$into}{_impcode} .= "<br />\n"; + $fn{$into}{_impcode} .= "<code>".$fn{$name}{implcode}."</code>"; + my $text = $fn{$name}{comment}; + $text =~ s/ (T|t)his \s (function|procedure) /$1."he ".$name." ".$2/gsex; + $fn{$name}{_comment} .= "<p>".$text; +} + +my $htmlTOC = ""; +my $htmlTXT = ""; + +# generate the index-block at the start of the onepage-html file +for $name (@namelist) +{ + $fn{$name}{_comment} =~ s/ (<p>\s*)<br\s*\/>/$1/gsx; + + $htmlTOC .= "<tr valign=\"top\">\n".$fn{$name}{declcode}."</tr>"; + next if $combinedstyle and exists $fn{$name}{into}; + + $htmlTXT .= "\n<dt>".$fn{$name}{_anchors}.$fn{$name}{_impcode}."<dt>"; + $htmlTXT .= "\n<dd>".$fn{$name}{_comment}; + $htmlTXT .= "\n<p align=\"right\"><small>(" + .$fn{$name}{filename}.")</small></p></dd>"; +} + +# link ref-names in this page with its endpoints on the same html page +$htmlTXT =~ s/ <link>(\w+)([^<>]*)<\/link> / &a_name($1,$2) /gsex; +sub a_name +{ + my ($n,$m) = @_; + if (exists $fn{$n}) { return "<a href=\"#$n\"><code>$n$m</code></a>"; } + else { return "<code>$n$m</code>"; } +} +$htmlTXT =~ s/ \-\> /<small>-\>\;<\/small>/gsx; # just sanitize + +# and finally print the html-formatted output +open F, ">$o{libhtmlfile}" or die "could not open '$o{libhtmlfile}': $!"; +print F "<html><head><title> $o{package} autodoc documentation </title>"; +print F "</head>\n<body>\n"; +print F "\n<h1>",$o{package}," <small><small><i>-", $o{version}; +print F "</i></small></small></h1>"; +print F "\n<table border=0 cellspacing=2 cellpadding=0>"; +print F $htmlTOC; +print F "\n</table>"; +print F "\n<h3>Documentation</h3>\n"; +print F "\n<dl>"; +print F $htmlTXT; +print F "\n</dl>"; +print F "\n</body></html>\n"; +close F; + +# =========================================================================== # +# let's go for the pure docbook, a reference type master file for all man pages +my @headerlist; # leader function pages - a file will be created for each of th +my @mergedlist; # non-leader function that end up in one of those in headerlist + +for $name (@namelist) +{ + push @headerlist, $name if not exists $fn{$name}{into}; + push @mergedlist, $name if exists $fn{$name}{into}; + + # and initialize the fields need for a man page entry - the fields are + # named after the docbook-markup that encloses (!!) the text we store + # in the strstrhash - therefore, {}{_refhint} = "hello" will be printed + # as <refhint>hello</refhint>. Names with scores at the end are only used + # as temporaries but they are memorized - perhaps they are useful later. + + $fn{$name}{_refhint} = + "\n<!--========= ".$name." 
(3) ===========-->\n"; + $fn{$name}{_refstart} = ""; + $fn{$name}{_date_} = $o{version}; + $fn{$name}{_date_} =~ s{\s*}{}gs; + $fn{$name}{_refentryinfo} + = "\n <date>".$fn{$name}{_date_}."</date>"; + $fn{$name}{_productname_} = $o{package}; + $fn{$name}{_productname_} =~ s{\s*}{}gs; + $fn{$name}{_refentryinfo} + .= "\n <productname>".$fn{$name}{_productname_}."</productname>"; +# if (exists $file{ $fn{$name}{filename} }{author}) +# { +# $H = $file{ $fn{$name}{filename} }{author}; +# $H =~ s{ \s* ([^<>]*) (<email>[^<>]*</email>) }{ +# $fn{$name}{_refentryinfo} .= "\n <author>".$1.$2."</author>"; +# "" }gmex; +# } + $fn{$name}{_refmeta} = ""; + $fn{$name}{_refnamediv} = ""; + $fn{$name}{_mainheader} = $o{mainheader}; + $fn{$name}{_includes} = $file{ $fn{$name}{filename} }{include}; + $fn{$name}{_manvolnum} = "3"; + $fn{$name}{_funcsynopsisinfo} = ""; + $fn{$name}{_funcsynopsis} = ""; + $fn{$name}{_description} = ""; + $fn{$name}{_refends} = ""; +} + +push @headerlist, @mergedlist; # aaahmm... + +# let's walk all (!!) entries... +for $name (@headerlist) +{ + # $into is the target-manpage to add descriptions to. Initially it does + # reference the name of the function itself - but it overridden in the + # next line when we see an {into} mark. The self/into state is registered + # in two vars: $into is an index into %fn-strstrhash to be used instead of + # the $name-runvar and $me just a boolean value to conditionally add texts + my $into = $name; my $me = 1; + + if (exists $fn{$name}{into}) + { + $into = $fn{$name}{into}; $me = 0; + $fn{$name}{_refhint} = + "\n <!-- see ".$fn{$name}{mergeinto}." -->\n"; + } + + $fn{$into}{_refstart} .= '<refentry id="'.$name.'">' if $me; + $fn{$into}{_refends} .= "\n</refentry>\n" if $me; + + $fn{$name}{_title_} = $name; + $fn{$name}{_title_} =~ s{\s*}{}gs; + $fn{$name}{_refentryinfo} + .= "\n <title>".$fn{$name}{_title_}."</title>" if $me; + $fn{$into}{_refmeta} + .= "\n <manvolnum>".$fn{$name}{_manvolnum}."</manvolnum>" if $me; + $fn{$into}{_refmeta} + .= "\n <refentrytitle>".$name."</refentrytitle>" if $me; + + $fn{$name}{_funcsynopsisinfo} + = "\n".' #include <'.$fn{$into}{_mainheader}.'>' if $me; + $fn{$name}{_funcsynopsisinfo} + = "\n".$fn{$into}{_includes} if $me and length $fn{$into}{_includes}; + $fn{$name}{_funcsynopsisinfo} + .= " // ".$o{synopsis} if $me and length $o{synopsis}; + + $fn{$into}{_refnamediv} .= "\n ". + "<refpurpose>".$fn{$name}{describe}." </refpurpose>" if $me; + $fn{$into}{_refnamediv} .= "\n".' <refname>'.$name.'</refname>'; + + # add to {}{_funcsynopsis}... + $fn{$into}{_funcsynopsis} .= "\n <funcprototype>\n <funcdef>"; + $fn{$into}{_funcsynopsis} .= $fn{$name}{prespec} + ." <function>".$name."</function></funcdef>"; + $fn{$name}{_callspec_} = $fn{$name}{callspec}; + $fn{$name}{_callspec_} =~ s{<parameters>\s*\(}{ }gs; + $fn{$name}{_callspec_} =~ s{\)\s*</parameters>}{ }gs; + $fn{$name}{_callspec_} =~ s{</paramdef>\s*,\s*}{</paramdef>\n }gs; + $fn{$into}{_funcsynopsis} + .= "\n".$fn{$name}{_callspec_}." </funcprototype>"; + + # add to {}{_description}... 
+ $fn{$name}{_comment_} = "<para>\n".$fn{$name}{comment}."\n</para>"; + $fn{$name}{_comment_} =~ s{ (T|t)his \s (function|procedure) } + { $1."he <function>".$name."</function> ".$2 }gsex; + $fn{$name}{_comment_} =~ s{<p>}{"\n</para><para>\n"}gsex; + $fn{$name}{_comment_} =~ s{<br\s*/?>}{}gs; + $fn{$name}{_comment_} =~ s{(</?)em>}{$1emphasis>}gs; + $fn{$name}{_comment_} =~ s{<code>}{<userinput>}gs; + $fn{$name}{_comment_} =~ s{</code>}{</userinput>}gs; + $fn{$name}{_comment_} =~ s{<link>}{<function>}gs; + $fn{$name}{_comment_} =~ s{</link>}{</function>}gs; + $fn{$name}{_comment_} =~ s{<pre>}{<screen>}gs; # only xmlto .8 and + $fn{$name}{_comment_} =~ s{</pre>}{</screen>}gs; # higher !! +# $fn{$name}{_comment_} =~ s{<ul>}{</para><itemizedlist>}gs; +# $fn{$name}{_comment_} =~ s{</ul>}{</itemizedlist><para>}gs; +# $fn{$name}{_comment_} =~ s{<li>}{<listitem><para>}gs; +# $fn{$name}{_comment_} =~ s{</li>}{</para></listitem>\n}gs; + $fn{$name}{_comment_} =~ s{<ul>}{</para><programlisting>\n}gs; + $fn{$name}{_comment_} =~ s{</ul>}{</programlisting><para>}gs; + $fn{$name}{_comment_} =~ s{<li>}{}gs; + $fn{$name}{_comment_} =~ s{</li>}{}gs; + $fn{$into}{_description} .= $fn{$name}{_comment_}; + + if (length $fn{$name}{_seealso} and not $me) + { + $fn{$into}{_seealso} .= ", " if length $fn{$into}{_seealso}; + $fn{$into}{_seealso} .= $fn{$name}{_seealso}; + } + + if (exists $file{ $fn{$name}{filename} }{author}) + { + my $authors = $file{ $fn{$name}{filename} }{author}; + $fn{$into}{_authors} = "<itemizedlist>"; + $authors =~ s{ \s* ([^<>]*) (<email>[^<>]*</email>) }{ + $fn{$into}{_authors} + .= "\n <listitem><para>".$1." ".$2."</para></listitem>"; + "" }gmex; + $fn{$into}{_authors} .= "</itemizedlist>"; + } + + if (exists $file{ $fn{$name}{filename} }{copyright}) + { + $fn{$into}{_copyright} + = "<screen>\n".$file{ $fn{$name}{filename} }{copyright}."</screen>\n"; + } +} + +# printing the docbook file is a two-phase process - we spit out the +# leader pages first - later we add more pages with _refstart pointing +# to the lader page, so that xmlto will add the functions there. Only the +# leader page contains some extra info needed for troff page processing. 
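+# Roughly, every leader function then comes out as one docbook refentry of the
+# following shape (a simplified illustration of what the print statements
+# below emit - the id is just an example name):
+#
+#   <refentry id="zzip_some_function">
+#     <refentryinfo> <date/> <productname/> <title/> </refentryinfo>
+#     <refmeta> <manvolnum>3</manvolnum> <refentrytitle/> </refmeta>
+#     <refnamediv> <refname/> <refpurpose/> </refnamediv>
+#     <refsynopsisdiv> <funcsynopsisinfo/> <funcsynopsis/> </refsynopsisdiv>
+#     <refsect1><title>Description</title> ... </refsect1>
+#   </refentry>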
+ +my %header; + +open F, ">$o{docbookfile}" or die "could not open $o{docbookfile}: $!"; +print F '<!DOCTYPE reference PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"'; +print F "\n",' "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">'; +print F "\n\n",'<reference><title>Manual Pages</title>',"\n"; +for $name (@namelist) +{ + print F $fn{$name}{_refhint}; + next if exists $fn{$name}{into}; + print F $fn{$name}{_refstart}; + print F "\n<refentryinfo>", $fn{$name}{_refentryinfo} + , "\n</refentryinfo>\n" if length $fn{$name}{_refentryinfo}; + print F "\n<refmeta>", $fn{$name}{_refmeta} + , "\n</refmeta>\n" if length $fn{$name}{_refmeta}; + print F "\n<refnamediv>", $fn{$name}{_refnamediv} + , "\n</refnamediv>\n" if length $fn{$name}{_refnamediv}; + + print F "\n<refsynopsisdiv>" if length $fn{$name}{_funcsynopsis}; + print F "\n<funcsynopsisinfo>", $fn{$name}{_funcsynopsisinfo} + , "\n</funcsynopsisinfo>" if length $fn{$name}{_funcsynopsisinfo}; + print F "\n<funcsynopsis>", $fn{$name}{_funcsynopsis} + , "\n</funcsynopsis>" if length $fn{$name}{_funcsynopsis}; + print F "\n</refsynopsisdiv>" if length $fn{$name}{_funcsynopsis}; + print F "\n<refsect1><title>Description</title>", $fn{$name}{_description} + , "\n</refsect1>" if length $fn{$name}{_description}; + print F "\n<refsect1><title>Author</title>", $fn{$name}{_authors} + , "\n</refsect1>" if length $fn{$name}{_authors}; + print F "\n<refsect1><title>Copyright</title>", $fn{$name}{_copyright} + , "\n</refsect1>" if length $fn{$name}{_copyright}; + print F "\n<refsect1><title>See Also</title>", $fn{$name}{_seealso} + , "\n</refsect1>" if length $fn{$name}{_seealso}; + + print F $fn{$name}{_refends}; + + # ------------------------------------------------------------------ + # creating the per-header manpage - a combination of function man pages + + my $H = $fn{$name}{_mainheader}; # a shorthand for the mainheader index + my $me = 0; $me = 1 if not exists $header{$H}; + my $HH = $H; $HH =~ s/[^\w\.]/-/g; + $header{$H}{_refstart} = "\n<refentry id=\"".$HH."\">" if $me; + $header{$H}{_refends} = "\n</refentry>\n" if $me; + $header{$H}{_refentryinfo} = $fn{$name}{_refentryinfo} if $me; + $header{$H}{_refentryinfo} + =~ s/(<title>)([^<>]*)(<\/title>)/$1 the library $3/s if $me; + $header{$H}{_refmeta} + = "\n <manvolnum>".$fn{$name}{_manvolnum}."</manvolnum>\n" + . 
"\n <refentrytitle>".$fn{$name}{_mainheader}."</refentrytitle>" if $me; + $header{$H}{_refnamediv} = "\n <refpurpose> library </refpurpose>"; + $header{$H}{_refnamediv} .= "\n <refname>".$HH."</refname>"; + + $header{$H}{_refsynopsisinfo} + = $fn{$name}{_refsynopsisinfo} if exists $fn{$name}{_refsynopsisinfo}; + $header{$H}{_funcsynopsis} + .= "\n".$fn{$name}{_funcsynopsis} if exists $fn{$name}{_funcsynopsis}; +# $header{$H}{_funcsynopsisdiv} .= "\n<funcsynopsis>" +# .$fn{$name}{_funcsynopsis}."</funcsynopsis>" +# if exists $fn{$name}{_funcsynopsis}; + $header{$H}{_copyright} + = $fn{$name}{_copyright} if exists $fn{$name}{_copyright} and $me; + $header{$H}{_authors} + = $fn{$name}{_authors} if exists $fn{$name}{_authors} and $me; + if ($me) + { + my $T = `cat $o{package}.spec`; + if ($T =~ /\%description\b([^\%]*)\%/s) + { + $header{$H}{_description} = $1; + }elsif (not length $header{$H}{_description}) + { + $header{$H}{_description} = "$o{package} library"; + } + } +} + +my $H; +for $H (keys %header) # second pass +{ + next if not length $header{$H}{_refstart}; + print F "\n<!-- _______ ",$H," _______ -->"; + print F $header{$H}{_refstart}; + print F "\n<refentryinfo>", $header{$H}{_refentryinfo} + , "\n</refentryinfo>\n" if length $header{$H}{_refentryinfo}; + print F "\n<refmeta>", $header{$H}{_refmeta} + , "\n</refmeta>\n" if length $header{$H}{_refmeta}; + print F "\n<refnamediv>", $header{$H}{_refnamediv} + , "\n</refnamediv>\n" if length $header{$H}{_refnamediv}; + + print F "\n<refsynopsisdiv>" if length $header{$H}{_funcsynopsis}; + print F "\n<funcsynopsisinfo>", $header{$H}{_funcsynopsisinfo} + , "\n</funcsynopsisinfo>" if length $header{$H}{_funcsynopsisinfo}; + print F "\n<funcsynopsis>", $header{$H}{_funcsynopsis} + , "\n</funcsynopsis>" if length $header{$H}{_funcsynopsis}; + print F "\n</refsynopsisdiv>" if length $header{$H}{_funcsynopsis}; + + print F "\n<refsect1><title>Description</title>", $header{$H}{_description} + , "\n</refsect1>" if length $header{$H}{_description}; + print F "\n<refsect1><title>Author</title>", $header{$H}{_authors} + , "\n</refsect1>" if length $header{$H}{_authors}; + print F "\n<refsect1><title>Copyright</title>", $header{$H}{_copyright} + , "\n</refsect1>" if length $header{$H}{_copyright}; + + print F $header{$H}{_refends}; +} +print F "\n",'</reference>',"\n"; +close (F); + +# _____________________________________________________________________ +open F, ">$o{dumpdocfile}" or die "could not open $o{dumpdocfile}: $!"; + +for $name (sort keys %fn) +{ + print F "<fn id=\"$name\"><!-- FOR \"$name\" -->\n"; + for $H (sort keys %{$fn{$name}}) + { + print F "<$H name=\"$name\">",$fn{$name}{$H},"</$H>\n"; + } + print F "</fn><!-- END \"$name\" -->\n\n"; +} +close F; diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-doc.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-doc.py new file mode 100644 index 00000000000..c58427c4a5c --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/make-doc.py @@ -0,0 +1,1028 @@ +#! /usr/bin/python +# -*- coding: UTF-8 -*- +import sys +import re +import string +import commands +import warnings + +errors = 0 +def warn(msg, error=None): + global errors + errors += 1 + if error is None: + warnings.warn("-- "+str(errors)+" --\n "+msg, RuntimeWarning, 2) + else: + warnings.warn("-- "+str(errors)+" --\n "+msg+ + "\n error was "+str(error), RuntimeWarning, 2) +#fu + +# beware, stupid python interprets backslashes in repl only partially! 
+def s(string, pattern, repl, count=0): + return re.sub(pattern, repl, string, count) +def m(string, pattern): + return re.match(pattern, string) +def sorted_keys(dict): + keys = dict.keys() + keys.sort() + return keys + +# we make up a few formatter routines to help in the processing: +def html2docbook(text): + """ the C comment may contain html markup - simulate with docbook tags """ + return ( + s(s(s(s(s(s(s(s(s(s(s(text, + r"<br\s*/?>",""), + r"(</?)em>",r"\1emphasis>"), + r"<code>","<userinput>"), + r"</code>","</userinput>"), + r"<link>","<function>"), + r"</link>","</function>"), + r"(?s)\s*</screen>","</screen>"), +# r"<ul>","</para><itemizedlist>"), +# r"</ul>","</itemizedlist><para>"), +# r"<li>","<listitem><para>"), +# r"</li>","</para></listitem>\n"), + r"<ul>","</para><programlisting>\n"), + r"</ul>","</programlisting><para>"), + r"<li>",""), + r"</li>","")) +def paramdef2html(text): + return s(s(s(s(s(text, + r"\s+<paramdef>", r"\n<nobr>"), + r"<paramdef>",r"<nobr>"), + r"</paramdef>",r"</nobr>"), + r"<parameters>",r"\n <code>"), + r"</parameters>",r"</code>\n") +def section2html(text): + mapping = { "<screen>" : "<pre>", "</screen>" : "</pre>", + "<para>" : "<p>", "</para>" : "</p>" , + "<function>" : "<link>", "</function>" : "</link>" } + for str in mapping: + text = string.replace(text, str, mapping[str]) + return text +def html(text): + return section2html(paramdef2html(text)) +def cdata1(text): + return string.replace(text, "&", "&") +def cdata31(text): + return string.replace(string.replace(text, "<","<"), ">",">") +def cdata3(text): + return cdata31(cdata1(text)) +def cdata43(text): + return string.replace(text,"\"", """) +def cdata41(text): + return cdata43(cdata31(text)) +def cdata4(text): + return cdata43(cdata3(text)) +def markup_as_screen41 (text): + """ used for non-star lines in comment blocks """ + return " <screen> " + s(cdata41(text), r"(?m)^", r" ") +" </screen> " + +def file_comment2section(text): + """ convert a C comment into a series of <para> and <screen> parts """ + return ("<para>\n"+ + s(s(s(s(s(s(s(text, + r"(?s){<([\w\.\-]+\@[\w\.\-]+\w\w)>", + r"<\1>"), + r"(?mx) ^\s?\s?\s? ([^\*\s]+ .*) $", + lambda x : markup_as_screen41 (x.group(1))), + r"(?mx) ^\s*[*]\s* $", r" \n</para><para>\n"), + r"(?mx) ^\s?\s?\s?\* (.*) $", r" \1 "), + r"(?sx) </screen>(\s*)<screen> ", r"\1"), + r"(?sx) <([^<>\;]+\@[^<>\;]+)> ", r"<email>\1</email>"), + r"(?sx) \<\;([^<>\&\;]+\@[^<>\&\;]+)\>\; ", + r"<email>\1</email>") + "\n</para>") +def func_comment2section(text): + """ convert a C comment into a series of <para> and <screen> parts + and sanitize a few markups already present in the comment text + """ + return ("<para>\n"+ + s(s(s(s(s(s(s(s(s(s(s(text, + r"<c>",r"<code>"), r"</c>", r"</code>"), + r"(?mx) ^\s?\s?\s? 
([^\*\s]+.*)", + lambda x: markup_as_screen41 (x.group(1))), + r"(?mx) ^\s?\s?\s?\* (.*) $", r" <br /> \1"), + r"(?mx) ^\s*<br\s*\/>\s* $", r"\n</para><para>\n"), + r"<<",r"<"), r">>",r">"), + r"(?sx) (</?para>\s*)<br\s*\/?>",r"\1"), + r"(?sx) (</?para>\s*)<br\s*\/?>",r"\1"), + r"(?sx) (<br\s*\/?>\s*)<br\s*\/?>",r"\1"), + r"(?sx) <\/screen>(\s*)<screen>",r"\1") + "\n</para>") +def markup_link_syntax(text): + """ markup the link-syntax ` => somewhere ` in the text block """ + return ( + s(s(s(s(text, + r"(?mx) (^|\s)\=\>\"([^\"]*)\"", r"\1<link>\2</link>"), + r"(?mx) (^|\s)\=\>\'([^\"]*)\'", r"\1<link>\2</link>"), + r"(?mx) (^|\s)\=\>\s(\w[\w.]*\w)\b", r"\1<link>\2</link>"), + r"(?mx) (^|\s)\=\>\s([^\s\,\.\!\?\:\;\<\>\&\'\=\-]+)", + r"\1<link>\2</link>")) +def this_function_link(text, name): + return s(text, r"(?sx) (T|t)his \s (function|procedure) ", lambda x + : "<function>"+x.group(1)+"he "+name+" "+x.group(2)+"</function>") + +# ----------------------------------------------------------------------- +class Options: + var = {} + def __getattr__(self, name): + if not self.var.has_key(name): return None + return self.var[name] + def __setattr__(self, name, value): + self.var[name] = value +#end + +o = Options() +o.verbose = 0 + +o.version = s( commands.getoutput( + """ grep -i "^version *:" *.spec 2>/dev/null | + sed -e "s/[Vv]ersion *: *//" """), r"\s*",r"") +o.package = s(commands.getoutput( + """ grep -i "^name *:" *.spec 2>/dev/null | + sed -e "s/[Nn]ame *: *//" """), r"\s*",r"") + +if not len(o.version): + o.version = commands.getoutput(""" date +%Y.%m.%d """) +if not len(o.package): + o.package = "_project" + +o.suffix = "-doc3" +o.mainheader = o.package+".h" + +class File: + def __init__(self, filename): + self.name = filename + self.mainheader = o.mainheader + self.authors = "" + self.copyright = "" + def __getattr__(self, name): + """ defend against program to break on uninited members """ + if self.__dict__.has_key(name): return self.__dict__[name] + warn("no such member: "+name); return None + def set_author(self, text): + if self.authors: + self.authors += "\n" + self.authors += text + return text + def set_copyright(self, text): + self.copyright = text + return text + +class InputFiles: + """ for each set of input files we can create an object + it does correspond with a single html-output page and + a single docbook <reference> master page to be output + """ + def __init__(self): + # the id will tell us in which order + # we did meet each function definition + self.id = 1000 + self.files = [] # file_list + self.funcs = [] # func_list: of hidden class FuncDeclaration + self.file = None # current file + def new_File(self, name): + self.file = File(name) + self.files.append(self.file) + return self.file + def next_id(self): + id = self.id ; self.id += 1 + return id + def add_function_declaration(self, comment, prototype): + class FuncDeclaration: # note that both decl.comment and + pass # decl.prototype are in cdata1 format + func = FuncDeclaration() + func.file = self.file + func.comment = s(comment, # need to take out email-style markups + r"<([\w\.\-]+\@[\w\.\-]+\w\w)>", r"<\1>") + func.prototype = prototype + func.id = all.next_id() + self.funcs.append(func) + # print id + return prototype + +def scan_options (options, list): + def encode(text): + return s(s(text, r"¬", r"&#AC;"), r"\*/",r"¬") + def decode(text): + return s(text, r"¬", r"*/") + + for name in options: + found = m(name, r"^(\w+)=(.*)") + if found: + o.var[found.group(1)] = found.group(2) + continue + #else + 
try: + input = open(name, "r") + except IOError, error: + warn(#...... (scan_options) ............... + "can not open input file: "+name, error) + continue + text = input.read() ; input.close() + text = encode (cdata1 (text)) + + file = list.new_File(name) + + # cut per-function comment block + text = s(text, r"(?x) [/][*][*](?=\s) ([^¬]+) ¬ ([^\{\}\;\#]+) [\{\;]", + lambda x : list.add_function_declaration( + decode(x.group(1)), decode(x.group(2)))) + + # cut per-file comment block + found = m(text, r"(?sx) [/][*]+(?=\s) ([^¬]+) ¬ " + r"(?:\s*\#define\s*\S+)*" + r"(\s*\#include\s*<[^<>]*>(?:\s*//[^\n]*)?)") + if found: + file.comment = decode(found.group(1)) + file.include = cdata31(found.group(2)) + else: + file.comment = None + file.include = None + found = m(text, r"(?sx) ^ [/][*]+(?=\s) ([^¬]+) ¬ ") + if found: + file.comment = decode(found.group(1)) + #fi + # throw away the rest - further processing on memorized strings only + + return None + +all = InputFiles() +scan_options (sys.argv[1:], all) + +if not o.docbookfile: + o.docbookfile = o.package+o.suffix+".docbook" +if not o.libhtmlfile: + o.libhtmlfile = o.package+o.suffix+".html" +if not o.dumpdocfile: + o.dumpdocfile = o.package+o.suffix+".dxml" + +# ........................................................................... +# check out information in the file.comment section + +def all_files_comment2section(list): + for file in list: + if file.comment is None: continue + file.section = file_comment2section(file.comment) + + file.section = s( + file.section, r"(?sx) \b[Aa]uthor\s*:(.*</email>) ", lambda x + : "<author>" + file.set_author(x.group(1)) + "</author>") + file.section = s( + file.section, r"(?sx) \b[Cc]opyright\s*:([^<>]*)</para> ",lambda x + : "<copyright>" + file.set_copyright(x.group(1)) + "</copyright>") + # if "file" in file.name: print >> sys.stderr, file.comment # 2.3 + #od +all_files_comment2section(all.files) + +# ----------------------------------------------------------------------- + +class Function: + " <prespec>void* </><namespec>hello</><namespec> (int) const</callspec> " + def __init__(self): + self.prespec = "" + self.namespec = "" + self.callspec = "" + self.name = "" +# def set(self, **defines): +# name = defines.keys()[0] +# self.__dict__[name] = defines[name] +# return defines[name] +# def cut(self, **defines): +# name = defines.keys()[0] +# self.__dict__[name] += defines[name] +# return "" + def __getattr__(self, name): + """ defend against program exit on members being not inited """ + if self.__dict__.has_key(name): return self.__dict__[name] + warn("no such member: "+name); return None + def dict(self): + return self.__dict__ + def dict_sorted_keys(self): + keys = self.__dict__.keys() + keys.sort() + return keys + def parse(self, prototype): + found = m(prototype, r"(?sx) ^(.*[^.]) \b(\w[\w.]*\w)\b (\s*\(.*) $ ") + if found: + self.prespec = found.group(1).lstrip() + self.namespec = found.group(2) + self.callspec = found.group(3).lstrip() + self.name = self.namespec.strip() + return self.name + return None + +# pass 1 of per-func strings ............................................... 
+# (a) cut prototype into prespec/namespec/callspec +# (b) cut out first line of comment as headline information +# (c) sanitize rest of comment block into proper docbook formatted .body +# +# do this while copying strings from all.funcs to function_list +# and remember the original order in name_list + +def markup_callspec(text): + return ( + s(s(s(s(s(text, + r"(?sx) ^([^\(\)]*)\(", r"\1<parameters>(<paramdef>",1), + r"(?sx) \)([^\(\)]*)$", r"</paramdef>)</parameters>\1",1), + r"(?sx) , ", r"</paramdef>,<paramdef>"), + r"(?sx) <paramdef>(\s+) ", r"\1<paramdef>"), + r"(?sx) (\s+)</paramdef>", r"</paramdef>\1")) + +def parse_all_functions(func_list): # list of FunctionDeclarations + """ parse all FunctionDeclarations and create a list of Functions """ + list = [] + for func in all.funcs: + function = Function() + if not function.parse (func.prototype): continue + + list.append(function) + + function.body = markup_link_syntax(func.comment) + if "\n" not in function.body: # single-line comment is the head + function.head = function.body + function.body = "" + else: # cut comment in first-line and only keep the rest as descr body + function.head = s(function.body, r"(?sx) ^([^\n]*\n).*",r"\1",1) + function.body = s(function.body, r"(?sx) ^[^\n]*\n", r"", 1) + #fi + if m(function.head, r"(?sx) ^\s*$ "): # empty head line, autofill here + function.head = s("("+func.file.name+")", r"[.][.][/]", r"") + + function.body = func_comment2section(function.body) + function.src = func # keep a back reference + + # add extra docbook markups to callspec in $fn-hash + function.callspec = markup_callspec (function.callspec) + #od + return list +function_list = parse_all_functions(all.funcs) + +def examine_head_anchors(func_list): + """ .into tells later steps which func-name is the leader of a man + page and that this func should add its descriptions over there. 
""" + for function in func_list: + function.into = None + function.seealso = None + + found = m(function.head, r"(?sx) ^ \s* <link>(\w[\w.]*\w)<\/link>") + # if found and found.group(1) in func_list.names: + if found and found.group(1): + function.into = found.group(1) + + def set_seealso(f, value): + f.seealso = value + return value + function.head = s(function.head, r"(.*)also:(.*)", lambda x + : set_seealso(function, x.group(2)) and x.group(1)) + if function.seealso and None: + print "function[",function.name,"].seealso=",function.seealso +examine_head_anchors(function_list) + +# =============================================================== HTML ===== + +def find_by_name(func_list, name): + for func in func_list: + if func.name == name: + return func + #od + return None +#fu + +class HtmlFunction: + def __init__(self, func): + self.src = func.src + self.into = func.into + self.name = func.name + self.toc_line = paramdef2html( + " <td valign=\"top\"><code>"+func.prespec+"</code></td>\n"+ + " <td valign=\"top\"> </td>\n"+ + " <td valign=\"top\"><a href=\"#"+func.name+"\">\n"+ + " <code>"+func.namespec+"</code>"+ + " </a></td>\n"+ + " <td valign=\"top\"> </td>\n"+ + " <td valign=\"top\">"+func.callspec+"</td>\n") + self.synopsis = paramdef2html( + " <code>"+func.prespec+"</code>\n"+ + " <br /><b><code>"+func.namespec+"</code></b>\n"+ + " <code>"+func.callspec+"</code>\n") + self.anchor = "<a name=\""+func.name+"\" />" + self.section = "<para><em> "+func.head+"\n"+ \ + "\n</em></para>"+section2html(func.body) +#class + +class HtmlFunctionFamily(HtmlFunction): + def __init__(page, func): + HtmlFunction.__init__(page, func) + page.toc_line_list = [ page.toc_line ] + # page.html_txt = page.synopsis + page.synopsis_list = [ page.synopsis ] + page.anchor_list = [ page.anchor ] + page.section_list = [ this_function_link(page.section, func.name) ] + +def ensure_name(text, name): + adds = "<small><code>"+name+"</code></small> -" + match = r"(?sx) .*>[^<>]*\b" + name + r"\b[^<>]*<.*" + found = m(text, match) + if found: return text + found = m(text, r".*<p(ara)?>.*") + if found: return s(text, r"(<p(ara)?>)", r"\1"+adds, 1) + return adds+text + +def combined_html_pages(func_list): + """ and now add descriptions of non-leader entries (html-mode) """ + combined = {} + + for func in func_list: # assemble leader pages + if func.into is not None: continue + combined[func.name] = HtmlFunctionFamily(func) + + for func in func_list: + if func.into is None: continue + if func.into not in combined : + warn(#......... (combine_html_pages) .............. 
+ "function '"+func.name+"'s into => '"+func.into+ + "\n: no such target function: "+func.into) + combined[func.name] = HtmlFunctionFamily(func) + continue + #fi + page = HtmlFunction(func) + into = combined[func.into] + into.toc_line_list.append( page.toc_line ) + into.anchor_list.append( page.anchor ) + into.synopsis_list.append( page.synopsis ) + into.section_list.append( + s(ensure_name(this_function_link(section2html( func.body ), + func.name), func.name), + r"(?sx) (</?para>\s*) <br\s*\/>", r"\1")) + return combined.values() +html_pages = combined_html_pages(function_list) + +def html_resolve_links_on_page(text, list): + """ link ref-names of a page with its endpoint on the same html page""" + def html_link (name , extra): + """ make <link>s to <href> of correct target or make it <code> """ + if find_by_name(list, name) is None: + return "<code>"+name+extra+"</code>" + else: + return "<a href=\"#"+name+"\"><code>"+name+extra+"</code></a>" + #fu html_link + return s(s(text, r"(?sx) <link>(\w+)([^<>]*)<\/link> ", + lambda x : html_link(x.group(1),x.group(2))), + r"(?sx) \-\> ", r"<small>-></small>") # just sanitize.. +#fu html_resolve_links + +class HtmlPage: + def __init__(self): + self.toc = "" + self.txt = "" + self.package = o.package + self.version = o.version + def page_text(self): + """ render .toc and .txt parts into proper <html> page """ + T = "" + T += "<html><head>" + T += "<title>"+self.package+"autodoc documentation </title>" + T += "</head>\n<body>\n" + T += "\n<h1>"+self.package+" <small><small><i>- "+self.version + T += "</i></small></small></h1>" + T += "\n<table border=0 cellspacing=2 cellpadding=0>" + T += self.toc + T += "\n</table>" + T += "\n<h3>Documentation</h3>\n\n<dl>" + T += html_resolve_links_on_page(self.txt, function_list) + T += "\n</dl>\n</body></html>\n" + return T + def add_page_map(self, list): + """ generate the index-block at the start of the onepage-html file """ + keys = list.keys() + keys.sort() + for name in keys: + self.toc += "<tr valign=\"top\">\n"+ \ + "\n</tr><tr valign=\"top\">\n".join( + list[name].toc_line_list)+"</tr>\n" + self.txt += "\n<dt>"+" ".join(list[name].anchor_list) + self.txt += "\n"+"\n<br />".join(list[name].synopsis_list)+"<dt>" + self.txt += "\n<dd>\n"+"\n".join(list[name].section_list) + self.txt += ("\n<p align=\"right\">"+ + "<small>("+list[name].src.file.name+")</small>"+ + "</p></dd>") + def add_page_list(self, functions): + """ generate the index-block at the start of the onepage-html file """ + mapp = {} + for func in functions: + mapp[func.name] = func + #od + self.add_page_map(mapp) +#end + +html = HtmlPage() +# html.add_function_dict(Fn) +# html.add_function_list(Fn.sort.values()) +html.add_page_list(html_pages) + +# and finally print the html-formatted output +try: + F = open(o.libhtmlfile, "w") +except IOError, error: + warn(# ............. open(o.libhtmlfile, "w") .............. + "can not open html output file: "+o.libhtmlfile, error) +else: + print >> F, html.page_text() + F.close() +#fi + +# ========================================================== DOCBOOK ===== +# let's go for the pure docbook, a reference type master for all man pages + +class RefPage: + def __init__(self, func): + """ initialize the fields needed for a man page entry - the fields are + named after the docbook-markup that encloses (!!) the text we store + the entries like X.refhint = "hello" will be printed therefore as + <refhint>hello</refhint>. 
Names with underscores are only used as + temporaries but they are memorized, perhaps for later usage. """ + self.refhint = "\n<!--========= "+func.name+" (3) ===========-->\n" + self.refentry = None + self.refentry_date = o.version.strip() # //refentryinfo/date + self.refentry_productname = o.package.strip() # //refentryinfo/prod* + self.refentry_title = None # //refentryinfo/title + self.refentryinfo = None # override + self.manvolnum = "3" # //refmeta/manvolnum + self.refentrytitle = None # //refmeta/refentrytitle + self.refmeta = None # override + self.refpurpose = None # //refnamediv/refpurpose + self.refname = None # //refnamediv/refname + self.refname_list = [] + self.refnamediv = None # override + self.mainheader = func.src.file.mainheader + self.includes = func.src.file.include + self.funcsynopsisinfo = "" # //funcsynopsisdiv/funcsynopsisinfo + self.funcsynopsis = None # //funcsynopsisdiv/funcsynopsis + self.funcsynopsis_list = [] + self.description = None + self.description_list = [] + # optional sections + self.authors_list = [] # //sect1[authors]/listitem + self.authors = None # override + self.copyright = None + self.copyright_list = [] + self.seealso = None + self.seealso_list = [] + if func.seealso: + self.seealso_list.append(func.seealso) + # func.func references + self.func = func + self.file_authors = None + if func.src.file.authors: + self.file_authors = func.src.file.authors + self.file_copyright = None + if func.src.file.copyright: + self.file_copyright = func.src.file.copyright + #fu + def refentryinfo_text(page): + """ the manvol formatter wants to render a footer line and header line + on each manpage and such info is set in <refentryinfo> """ + if page.refentryinfo: + return page.refentryinfo + if page.refentry_date and \ + page.refentry_productname and \ + page.refentry_title: return ( + "\n <date>"+page.refentry_date+"</date>"+ + "\n <productname>"+page.refentry_productname+"</productname>"+ + "\n <title>"+page.refentry_title+"</title>") + if page.refentry_date and \ + page.refentry_productname: return ( + "\n <date>"+page.refentry_date+"</date>"+ + "\n <productname>"+page.refentry_productname+"</productname>") + return "" + def refmeta_text(page): + """ the manvol formatter needs to know the filename of the manpage to + be made up and these parts are set in <refmeta> actually """ + if page.refmeta: + return page.refmeta + if page.manvolnum and page.refentrytitle: + return ( + "\n <refentrytitle>"+page.refentrytitle+"</refentrytitle>"+ + "\n <manvolnum>"+page.manvolnum+"</manvolnum>") + if page.manvolnum and page.func.name: + return ( + "\n <refentrytitle>"+page.func.name+"</refentrytitle>"+ + "\n <manvolnum>"+page.manvolnum+"</manvolnum>") + return "" + def refnamediv_text(page): + """ the manvol formatter prints a header line with a <refpurpose> line + and <refname>'d functions that are described later. 
For each of + the <refname>s listed here, a mangpage is generated, and for each + of the <refname>!=<refentrytitle> then a symlink is created """ + if page.refnamediv: + return page.refnamediv + if page.refpurpose and page.refname: + return ("\n <refname>"+page.refname+'</refname>'+ + "\n <refpurpose>"+page.refpurpose+" </refpurpose>") + if page.refpurpose and page.refname_list: + T = "" + for refname in page.refname_list: + T += "\n <refname>"+refname+'</refname>' + T += "\n <refpurpose>"+page.refpurpose+" </refpurpose>" + return T + return "" + def funcsynopsisdiv_text(page): + """ refsynopsisdiv shall be between the manvol mangemaent information + and the reference page description blocks """ + T="" + if page.funcsynopsis: + T += "\n<funcsynopsis>" + if page.funcsynopsisinfo: + T += "\n<funcsynopsisinfo>"+ page.funcsynopsisinfo + \ + "\n</funcsynopsisinfo>\n" + T += page.funcsynopsis + \ + "\n</funcsynopsis>\n" + if page.funcsynopsis_list: + T += "\n<funcsynopsis>" + if page.funcsynopsisinfo: + T += "\n<funcsynopsisinfo>"+ page.funcsynopsisinfo + \ + "\n</funcsynopsisinfo>\n" + for funcsynopsis in page.funcsynopsis_list: + T += funcsynopsis + T += "\n</funcsynopsis>\n" + #fi + return T + def description_text(page): + """ the description section on a manpage is the main part. Here + it is generated from the per-function comment area. """ + if page.description: + return page.description + if page.description_list: + T = "" + for description in page.description_list: + if not description: continue + T += description + if T: return T + return "" + def authors_text(page): + """ part of the footer sections on a manpage and a description of + original authors. We prever an itimizedlist to let the manvol + show a nice vertical aligment of authors of this ref item """ + if page.authors: + return page.authors + if page.authors_list: + T = "<itemizedlist>" + previous="" + for authors in page.authors_list: + if not authors: continue + if previous == authors: continue + T += "\n <listitem><para>"+authors+"</para></listitem>" + previous = authors + T += "</itemizedlist>" + return T + if page.authors: + return page.authors + return "" + def copyright_text(page): + """ the copyright section is almost last on a manpage and purely + optional. We list the part of the per-file copyright info """ + if page.copyright: + return page.copyright + """ we only return the first valid instead of merging them """ + if page.copyright_list: + T = "" + for copyright in page.copyright_list: + if not copyright: continue + return copyright # !!! + return "" + def seealso_text(page): + """ the last section on a manpage is called 'SEE ALSO' usally and + contains a comma-separated list of references. 
Some manpage + viewers can parse these and convert them into hyperlinks """ + if page.seealso: + return page.seealso + if page.seealso_list: + T = "" + for seealso in page.seealso_list: + if not seealso: continue + if T: T += ", " + T += seealso + if T: return T + return "" + def refentry_text(page, id=None): + """ combine fields into a proper docbook refentry """ + if id is None: + id = page.refentry + if id: + T = '<refentry id="'+id+'">' + else: + T = '<refentry>' # this is an error + + if page.refentryinfo_text(): + T += "\n<refentryinfo>"+ page.refentryinfo_text()+ \ + "\n</refentryinfo>\n" + if page.refmeta_text(): + T += "\n<refmeta>"+ page.refmeta_text() + \ + "\n</refmeta>\n" + if page.refnamediv_text(): + T += "\n<refnamediv>"+ page.refnamediv_text() + \ + "\n</refnamediv>\n" + if page.funcsynopsisdiv_text(): + T += "\n<refsynopsisdiv>\n"+ page.funcsynopsisdiv_text()+ \ + "\n</refsynopsisdiv>\n" + if page.description_text(): + T += "\n<refsect1><title>Description</title> " + \ + page.description_text() + "\n</refsect1>" + if page.authors_text(): + T += "\n<refsect1><title>Author</title> " + \ + page.authors_text() + "\n</refsect1>" + if page.copyright_text(): + T += "\n<refsect1><title>Copyright</title> " + \ + page.copyright_text() + "\n</refsect1>\n" + if page.seealso_text(): + T += "\n<refsect1><title>See Also</title><para> " + \ + page.seealso_text() + "\n</para></refsect1>\n" + + T += "\n</refentry>\n" + return T + #fu +#end + +# ----------------------------------------------------------------------- +class FunctionRefPage(RefPage): + def reinit(page): + """ here we parse the input function for its values """ + if page.func.into: + page.refhint = "\n <!-- see "+page.func.into+" -->\n" + #fi + page.refentry = page.func.name # //refentry@id + page.refentry_title = page.func.name.strip() # //refentryinfo/title + page.refentrytitle = page.func.name # //refmeta/refentrytitle + if page.includes: + page.funcsynopsisinfo += "\n"+page.includes + if not page.funcsynopsisinfo: + page.funcsynopsisinfo="\n"+' #include <'+page.mainheader+'>' + page.refpurpose = page.func.head + page.refname = page.func.name + + def funcsynopsis_of(func): + return ( + "\n <funcprototype>\n <funcdef>"+func.prespec+ + " <function>"+func.name+"</function></funcdef>"+ + "\n"+s(s(s(func.callspec, + r"<parameters>\s*\(",r" "), + r"\)\s*</parameters>",r" "), + r"</paramdef>\s*,\s*",r"</paramdef>\n ")+ + " </funcprototype>") + page.funcsynopsis = funcsynopsis_of(page.func) + + page.description = ( + html2docbook(this_function_link(page.func.body, page.func.name))) + + if page.file_authors: + def add_authors(page, ename, email): + page.authors_list.append( ename+' '+email ) + return ename+email + s(page.file_authors, + r"(?sx) \s* ([^<>]*) (<email>[^<>]*</email>) ", lambda x + : add_authors(page, x.group(1), x.group(2))) + #fi + + if page.file_copyright: + page.copyright = "<screen>\n"+page.file_copyright+"</screen>\n" + #fi + return page + def __init__(page,func): + RefPage.__init__(page, func) + FunctionRefPage.reinit(page) + +def refpage_list_from_function_list(funclist): + list = [] + mapp = {} + for func in funclist: + mapp[func.name] = func + #od + for func in funclist: + page = FunctionRefPage(func) + if func.into and func.into not in mapp: + warn (# ............ (refpage_list_from_function_list) ....... 
+ "page '"+page.func.name+"' has no target => "+ + "'"+page.func.into+"'" + "\n: going to reset .into of Function '"+page.func.name+"'") + func.into = None + #fi + list.append(FunctionRefPage(func)) + return list +#fu + +# ordered list of pages +refpage_list = refpage_list_from_function_list(function_list) + +class FunctionFamilyRefPage(RefPage): + def __init__(self, page): + RefPage.__init__(self, page.func) + self.seealso_list = [] # reset + self.refhint_list = [] + def refhint_list_text(page): + T = "" + for hint in page.refhint_list: + T += hint + return T + def refentry_text(page): + return page.refhint_list_text() + "\n" + \ + RefPage.refentry_text(page) + pass + +def docbook_pages_recombine(pagelist): + """ take a list of RefPages and create a new list where sections are + recombined in a way that their description is listed on the same + page and the manvol formatter creates symlinks to the combined + function description page - use the attribute 'into' to guide the + processing here as each of these will be removed from the output + list. If no into-pages are there then the returned list should + render to the very same output text like the input list would do """ + + list = [] + combined = {} + for orig in pagelist: + if orig.func.into: continue + page = FunctionFamilyRefPage(orig) + combined[orig.func.name] = page ; list.append(page) + + page.refentry = orig.refentry # //refentry@id + page.refentry_title = orig.refentrytitle # //refentryinfo/title + page.refentrytitle = orig.refentrytitle # //refmeta/refentrytitle + page.includes = orig.includes + page.funcsynopsisinfo = orig.funcsynopsisinfo + page.refpurpose = orig.refpurpose + if orig.refhint: + page.refhint_list.append( orig.refhint ) + if orig.refname: + page.refname_list.append( orig.refname ) + elif orig.refname_list: + page.refname_list.extend( orig.refname_list ) + if orig.funcsynopsis: + page.funcsynopsis_list.append( orig.funcsynopsis ) + elif orig.refname_list: + page.funcsynopsis_list.extend( orig.funcsynopsis_list ) + if orig.description: + page.description_list.append( orig.description ) + elif orig.refname_list: + page.description_list.extend( orig.description_list ) + if orig.seealso: + page.seealso_list.append( orig.seealso ) + elif orig.seealso_list: + page.seealso_list.extend( orig.seealso_list ) + if orig.authors: + page.authors_list.append( orig.authors ) + elif orig.authors_list: + page.authors_list.extend( orig.authors_list ) + if orig.copyright: + page.copyright_list.append( orig.copyright ) + elif orig.refname_list: + page.copyright_list.extend( orig.copyright_list ) + #od + for orig in pagelist: + if not orig.func.into: continue + if orig.func.into not in combined: + warn("page for '"+orig.func.name+ + "' has no target => '"+orig.func.into+"'") + page = FunctionFamilyRefPage(orig) + else: + page = combined[orig.func.into] + + if orig.refname: + page.refname_list.append( orig.refname ) + elif orig.refname_list: + page.refname_list.extend( orig.refname_list ) + if orig.funcsynopsis: + page.funcsynopsis_list.append( orig.funcsynopsis ) + elif orig.refname_list: + page.funcsynopsis_list.extend( orig.funcsynopsis_list ) + if orig.description: + page.description_list.append( orig.description ) + elif orig.refname_list: + page.description_list.extend( orig.description_list ) + if orig.seealso: + page.seealso_list.append( orig.seealso ) + elif orig.seealso_list: + page.seealso_list.extend( orig.seealso_list ) + if orig.authors: + page.authors_list.append( orig.authors ) + elif orig.authors_list: + 
page.authors_list.extend( orig.authors_list ) + if orig.copyright: + page.copyright_list.append( orig.copyright ) + elif orig.refname_list: + page.copyright_list.extend( orig.copyright_list ) + #od + return list +#fu + +combined_pages = docbook_pages_recombine(pagelist = refpage_list) + +# ----------------------------------------------------------------------- + +class HeaderRefPage(RefPage): + pass + +def docbook_refpages_perheader(page_list): # headerlist + " creating the per-header manpage - a combination of function man pages " + header = {} + for page in page_list: + assert not page.func.into + file = page.func.src.file.mainheader # short for the mainheader index + if file not in header: + header[file] = HeaderRefPage(page.func) + header[file].id = s(file, r"[^\w\.]","-") + header[file].refentry = header[file].id + header[file].refentryinfo = None + header[file].refentry_date = page.refentry_date + header[file].refentry_productname = ( + "the library "+page.refentry_productname) + header[file].manvolnum = page.manvolnum + header[file].refentrytitle = file + header[file].funcsynopsis = "" + if 1: # or += or if not header[file].refnamediv: + header[file].refpurpose = " library " + header[file].refname = header[file].id + + if not header[file].funcsynopsisinfo and page.funcsynopsisinfo: + header[file].funcsynopsisinfo = page.funcsynopsisinfo + if page.funcsynopsis: + header[file].funcsynopsis += "\n"+page.funcsynopsis + if not header[file].copyright and page.copyright: + header[file].copyright = page.copyright + if not header[file].authors and page.authors: + header[file].authors = page.authors + if not header[file].authors and page.authors_list: + header[file].authors_list = page.authors_list + if not header[file].description: + found = m(commands.getoutput("cat "+o.package+".spec"), + r"(?s)\%description\b([^\%]*)\%") + if found: + header[file].description = found.group(1) + elif not header[file].description: + header[file].description = "<para>" + ( + page.refentry_productname + " library") + "</para>"; + #fi + #fi + #od + return header#list +#fu + +def leaders(pagelist): + list = [] + for page in pagelist: + if page.func.into : continue + list.append(page) + return list +header_refpages = docbook_refpages_perheader(leaders(refpage_list)) + +# ----------------------------------------------------------------------- +# printing the docbook file is a two-phase process - we spit out the +# leader pages first - later we add more pages with _refstart pointing +# to the leader page, so that xmlto will add the functions there. Only the +# leader page contains some extra info needed for troff page processing. 
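+# The emitted document is a single docbook <reference> wrapping all refentry
+# pages, roughly like this (an illustration of the structure printed below):
+#
+#   <!DOCTYPE reference PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" ...>
+#   <reference><title>Manual Pages</title>
+#     <refentry id="...">...</refentry>      <!-- one per combined page -->
+#     <refentry id="...">...</refentry>      <!-- one per header file -->
+#   </reference>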
+ +doctype = '<!DOCTYPE reference PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"' +doctype += "\n " +doctype += '"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">'+"\n" + +try: + F = open(o.docbookfile,"w") +except IOError, error: + warn("can not open docbook output file: "+o.docbookfile, error) +else: + print >> F, doctype, '<reference><title>Manual Pages</title>' + + for page in combined_pages: + print >> F, page.refentry_text() + #od + + for page in header_refpages.values(): + if not page.refentry: continue + print >> F, "\n<!-- _______ "+page.id+" _______ -->", + print >> F, page.refentry_text() + #od + + print >> F, "\n",'</reference>',"\n" + F.close() +#fi + +# _____________________________________________________________________ +try: + F = open( o.dumpdocfile, "w") +except IOError, error: + warn ("can not open"+o.dumpdocfile,error) +else: + for func in function_list: + name = func.name + print >> F, "<fn id=\""+name+"\">"+"<!-- FOR \""+name+"\" -->\n" + for H in sorted_keys(func.dict()): + print >> F, "<"+H+" name=\""+name+"\">", + print >> F, str(func.dict()[H]), + print >> F, "</"+H+">" + #od + print >> F, "</fn><!-- END \""+name+"\" -->\n\n"; + #od + F.close(); +#fi + +if errors: sys.exit(errors) diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/makedocs.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/makedocs.py new file mode 100644 index 00000000000..1bc8f885e79 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/makedocs.py @@ -0,0 +1,323 @@ +import sys +from zzipdoc.match import * +from zzipdoc.options import * +from zzipdoc.textfile import * +from zzipdoc.textfileheader import * +from zzipdoc.functionheader import * +from zzipdoc.functionprototype import * +from zzipdoc.commentmarkup import * +from zzipdoc.functionlisthtmlpage import * +from zzipdoc.functionlistreference import * +from zzipdoc.dbk2htm import * +from zzipdoc.htmldocument import * +from zzipdoc.docbookdocument import * + +def _src_to_xml(text): + return text.replace("&", "&").replace("<", "<").replace(">", ">") +def _email_to_xml(text): + return text & Match("<([^<>]*@[^<>]*)>") >> "<\\1>" + +class PerFileEntry: + def __init__(self, header, comment): + self.textfileheader = header + self.filecomment = comment +class PerFile: + def __init__(self): + self.textfileheaders = [] + self.filecomments = [] + self.entries = [] + def add(self, textfileheader, filecomment): + self.textfileheaders += [ textfileheader ] + self.filecomments += [ filecomment ] + self.entries += [ PerFileEntry(textfileheader, filecomment) ] + def where_filename(self, filename): + for entry in self.entries: + if entry.textfileheader.get_filename() == filename: + return entry + return None + def print_list_mainheader(self): + for t_fileheader in self.headers: + print t_fileheader.get_filename(), t_fileheader.src_mainheader() + +class PerFunctionEntry: + def __init__(self, header, comment, prototype): + self.header = header + self.comment = comment + self.prototype = prototype + def get_name(self): + return self.prototype.get_name() + def get_titleline(self): + return self.header.get_titleline() + def get_head(self): + return self.prototype + def get_body(self): + return self.comment +class PerFunction: + def __init__(self): + self.headers = [] + self.comments = [] + self.prototypes = [] + self.entries = [] + def add(self, functionheader, functioncomment, functionprototype): + self.headers += [ functionheader ] + self.comments += [ functionprototype ] + self.prototypes += [ functionprototype ] + 
self.entries += [ PerFunctionEntry(functionheader, functioncomment, + functionprototype) ] + def print_list_titleline(self): + for funcheader in self.headers: + print funcheader.get_filename(), "[=>]", funcheader.get_titleline() + def print_list_name(self): + for funcheader in self.prototypes: + print funcheader.get_filename(), "[>>]", funcheader.get_name() + +class PerFunctionFamilyEntry: + def __init__(self, leader): + self.leader = leader + self.functions = [] + def contains(self, func): + for item in self.functions: + if item == func: return True + return False + def add(self, func): + if not self.contains(func): + self.functions += [ func ] + def get_name(self): + if self.leader is None: return None + return self.leader.get_name() +class PerFunctionFamily: + def __init__(self): + self.functions = [] + self.families = [] + self.retarget = {} + self.entries = [] + def add_PerFunction(self, per_list): + for item in per_list.entries: + add_PerFunctionEntry(item) + def add_PerFunctionEntry(self, item): + self.functions += [ item ] + def get_function(self, name): + for item in self.functions: + if item.get_name() == name: + return item + return None + def get_entry(self, name): + for item in self.entries: + if item.get_name() == name: + return item + return None + def fill_families(self): + name_list = {} + for func in self.functions: + name = func.get_name() + name_list[name] = func + for func in self.functions: + name = func.get_name() + line = func.get_titleline() + is_retarget = Match("=>\s*(\w+)") + if line & is_retarget: + into = is_retarget[1] + self.retarget[name] = is_retarget[1] + lead_list = [] + for name in self.retarget: + into = self.retarget[name] + if into not in name_list: + print ("function '"+name+"' retarget into '"+into+ + "' does not exist - keep alone") + if into in self.retarget: + other = self.retarget[into] + print ("function '"+name+"' retarget into '"+into+ + "' which is itself a retarget into '"+other+"'") + if into not in lead_list: + lead_list += [ into ] + for func in self.functions: + name = func.get_name() + if name not in lead_list and name not in self.retarget: + lead_list += [ name ] + for name in lead_list: + func = self.get_function(name) + if func is not None: + entry = PerFunctionFamilyEntry(func) + entry.add(func) # the first + self.entries += [ entry ] + else: + print "head function '"+name+" has no entry" + for func in self.functions: + name = func.get_name() + if name in self.retarget: + into = self.retarget[name] + entry = self.get_entry(into) + if entry is not None: + entry.add(func) # will not add duplicates + else: + print "into function '"+name+" has no entry" + def print_list_name(self): + for family in self.entries: + name = family.get_name() + print name, ":", + for item in family.functions: + print item.get_name(), ",", + print "" +class HtmlManualPageAdapter: + def __init__(self, entry): + """ usually takes a PerFunctionEntry """ + self.entry = entry + def get_name(self): + return self.entry.get_name() + def _head(self): + return self.entry.get_head() + def _body(self): + return self.entry.get_body() + def head_xml_text(self): + return self._head().xml_text() + def body_xml_text(self, name): + return self._body().xml_text(name) + def head_get_prespec(self): + return self._head().get_prespec() + def head_get_namespec(self): + return self._head().get_namespec() + def head_get_callspec(self): + return self._head().get_callspec() + def get_title(self): + return self._body().header.get_title() + def get_filename(self): + return 
self._body().header.get_filename() + def src_mainheader(self): + return self._body().header.parent.textfile.src_mainheader() + def get_mainheader(self): + return _src_to_xml(self.src_mainheader()) +class RefEntryManualPageAdapter: + def __init__(self, entry, per_file = None): + """ usually takes a PerFunctionEntry """ + self.entry = entry + self.per_file = per_file + def get_name(self): + return self.entry.get_name() + def _head(self): + return self.entry.get_head() + def _body(self): + return self.entry.get_body() + def _textfile(self): + return self._body().header.parent.textfile + def head_xml_text(self): + return self._head().xml_text() + def body_xml_text(self, name): + return self._body().xml_text(name) + def get_title(self): + return self._body().header.get_title() + def get_filename(self): + return self._body().header.get_filename() + def src_mainheader(self): + return self._textfile().src_mainheader() + def get_mainheader(self): + return _src_to_xml(self.src_mainheader()) + def get_includes(self): + return "" + def list_seealso(self): + return self._body().header.get_alsolist() + def get_authors(self): + comment = None + if self.per_file: + entry = self.per_file.where_filename(self.get_filename()) + if entry: + comment = entry.filecomment.xml_text() + if comment: + check = Match(r"(?s)<para>\s*[Aa]uthors*\b:*" + r"((?:.(?!</para>))*.)</para>") + if comment & check: return _email_to_xml(check[1]) + return None + def get_copyright(self): + comment = None + if self.per_file: + entry = self.per_file.where_filename(self.get_filename()) + if entry: + comment = entry.filecomment.xml_text() + if comment: + check = Match(r"(?s)<para>\s*[Cc]opyright\b" + r"((?:.(?!</para>))*.)</para>") + if comment & check: return _email_to_xml(check[0]) + return None + +def makedocs(filenames, o): + textfiles = [] + for filename in filenames: + textfile = TextFile(filename) + textfile.parse() + textfiles += [ textfile ] + per_file = PerFile() + for textfile in textfiles: + textfileheader = TextFileHeader(textfile) + textfileheader.parse() + filecomment = CommentMarkup(textfileheader) + filecomment.parse() + per_file.add(textfileheader, filecomment) + funcheaders = [] + for textfile in per_file.textfileheaders: + funcheader = FunctionHeaderList(textfile) + funcheader.parse() + funcheaders += [ funcheader ] + per_function = PerFunction() + for funcheader in funcheaders: + for child in funcheader.get_children(): + funcprototype = FunctionPrototype(child) + funcprototype.parse() + funccomment = CommentMarkup(child) + funccomment.parse() + per_function.add(child, funccomment, funcprototype) + per_family = PerFunctionFamily() + for item in per_function.entries: + per_family.add_PerFunctionEntry(item) + per_family.fill_families() + # debug output.... 
+    # per_file.print_list_mainheader()
+    # per_function.print_list_titleline()
+    # per_function.print_list_name()
+    # per_family.print_list_name()
+    #
+    html = FunctionListHtmlPage(o)
+    for item in per_family.entries:
+        for func in item.functions:
+            func_adapter = HtmlManualPageAdapter(func)
+            if o.onlymainheader and not (Match("<"+o.onlymainheader+">")
+                                         & func_adapter.src_mainheader()):
+                continue
+            html.add(func_adapter)
+    html.cut()
+    html.cut()
+    class _Html_:
+        def __init__(self, html):
+            self.html = html
+        def html_text(self):
+            return section2html(paramdef2html(self.html.xml_text()))
+        def get_title(self):
+            return self.html.get_title()
+    HtmlDocument(o).add(_Html_(html)).save(o.output+o.suffix)
+    #
+    man3 = FunctionListReference(o)
+    for item in per_family.entries:
+        for func in item.functions:
+            func_adapter = RefEntryManualPageAdapter(func, per_file)
+            if o.onlymainheader and not (Match("<"+o.onlymainheader+">")
+                                         & func_adapter.src_mainheader()):
+                continue
+            man3.add(func_adapter)
+    man3.cut()
+    man3.cut()
+    DocbookDocument(o).add(man3).save(o.output+o.suffix)
+
+
+if __name__ == "__main__":
+    filenames = []
+    o = Options()
+    o.package = "ZZipLib"
+    o.program = sys.argv[0]
+    o.html = "html"
+    o.docbook = "docbook"
+    o.output = "zziplib"
+    o.suffix = ""
+    for item in sys.argv[1:]:
+        if o.scan(item): continue
+        filenames += [ item ]
+    makedocs(filenames, o)
+
+
diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/memdisk.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/memdisk.htm
new file mode 100644
index 00000000000..d5d25c9952d
--- /dev/null
+++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/memdisk.htm
@@ -0,0 +1,116 @@
+<section> <date> 2005 </date>
+<H2> zzip/memdisk </H2> zip cache for mmapped views
+
+<BLOCKQUOTE>
+  These routines are fully independent from the traditional zzip
+  implementation. They build on top of
+  <a href="mmapped.html">zzip/mmapped</a> that uses a readonly
+  mmapped sharedmem block. These functions add additional hints on
+  how to parse extension blocks and how to cache the zip central
+  directory entries, which furthermore allows converting them
+  to any host-local format as required.
+</BLOCKQUOTE>
+
+<section>
+<H3> zzip disk handle </H3>
+
+<P>
+  Other than with the <a href="fseeko.html">fseeko</a> alternative
+  interface, there is no need to have an actual disk handle to the
+  zip archive. Instead you can use a bytewise copy of a file or
+  even use a mmapped view of a file. This is generally the fastest
+  way to get to the data contained in a zipped file. All it requires
+  is enough virtual memory space, but a desktop computer with a
+  modern operating system will easily take care of that.
+</P>
+
+<P>
+  The zzipmmapped library provides a number of calls to create a
+  disk handle representing a zip archive in virtual memory. By
+  default we use the sys/mmap.h (or MappedView) functionality
+  of the operating system. See the
+  <a href="mmapped.html">zzip/mmapped</a> description for more details.
+</P>
+<P>
+  The zzip/memdisk extensions of zzip/mmapped are made to have a
+  very similar call API - therefore you will again find open and
+  close functions for filenames or filehandles. However the
+  direct mmap interface of the underlying zzip_disk layer is not
+  re-exported under the zzip_mem_disk prefix. The "_mem_" part
+  hints that the central directory of the underlying zzip_disk
+  is preparsed to a separate memory block.
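+  As a minimal usage sketch (illustrative only: error handling is
+  omitted, the helper name is made up, and only the calls declared
+  further below are used), dumping one zipped file to stdout might
+  look like this:
+<PRE>
+  #include &lt;zzip/memdisk.h&gt;
+  #include &lt;stdio.h&gt;
+
+  /* dump one file from a zip archive via the memdisk cache layer */
+  int print_zipped(char* archive, char* filename)
+  {
+      ZZIP_MEM_DISK* disk = zzip_mem_disk_open(archive);
+      ZZIP_DISK_FILE* file;
+      if (! disk) return 1;                     /* could not map archive */
+      file = zzip_mem_disk_fopen(disk, filename);
+      if (file)
+      {
+          char buffer[1024]; _zzip_size_t n;
+          while (! zzip_mem_disk_feof(file))
+          {
+              n = zzip_mem_disk_fread(buffer, 1, sizeof buffer, file);
+              if (! n) break;
+              fwrite(buffer, 1, n, stdout);     /* decompressed content */
+          }
+          zzip_mem_disk_fclose(file);           /* close the file handle */
+      }
+      zzip_mem_disk_close(disk);                /* drop cache and mapping */
+      return 0;
+  }
+</PRE>
+  The basic open/close and load/unload calls of this layer are: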
+<PRE> + ZZIP_MEM_DISK* zzip_mem_disk_open(char* filename); + ZZIP_MEM_DISK* zzip_mem_disk_fdopen(int fd); + void zzip_mem_disk_close(ZZIP_MEM_DISK* disk); + + int zzip_mem_disk_load (ZZIP_MEM_DISK* dir, ZZIP_DISK* disk); + void zzip_mem_disk_unload (ZZIP_MEM_DISK* dir); +</PRE> + The last two functions export some parts of the underlying + interface. It is possible to bind an existing ZZIP_MEM_DISK + handle with an arbitrary ZZIP_DISK handle. Upon calling "load" + the central directory will be loaded from the underlying zip + disk content and parsed to an internal mem block. The corresponding + "unload" function will trash that central directory cache but it + leaves the handles intact. +</P> + +</section><section> +<H3> reading the central directory </H3> + +<P> + All other zzip_mem_disk functions are simply re-exporting the + underlying zzip_disk functions. Note that the first field in + the ZZIP_MEM_DISK is a "ZZIP_DISK* disk" - the header file + zzip/memdisk.h will simply export inline functions where there + is no special zzip_mem_disk function. Therefore, whenever a + function call on a ZZIP_DISK handle is appropriate one can + also use its cousin for a ZZIP_MEM_DISK handle without any + penalties but future compatibility for extra functionality in + zzip/memdisk layer of the zzip/mmapped library. +</P> + +<P><small>Note: by default the re-exports are done with the help + of the C precompiler as precompiler macros. Using USE_INLINE + will force to make them real inlines. In the future that may + change in favor of a better autodetection for inline capabilities + of the compiler and/or using a standard cpp-define that enables + the C/C++ inline functions. The inline functions do have the + added value of having strongtyped arguments provoking more + readable warning messages in user application code.</small></P> + +<PRE> + inline ZZIP_DISK_ENTRY* +zzip_mem_disk_findfirst(ZZIP_MEM_DISK* dir); + inline ZZIP_DISK_ENTRY* +zzip_mem_disk_findnext(ZZIP_MEM_DISK* dir, ZZIP_DISK_ENTRY* entry); + inline char* _zzip_restrict +zzip_mem_disk_entry_strdup_name(ZZIP_MEM_DISK* dir, + ZZIP_DISK_ENTRY* entry); + inline struct zzip_file_header* +zzip_mem_disk_entry_to_file_header(ZZIP_MEM_DISK* dir, + ZZIP_DISK_ENTRY* entry); + inline char* +zzip_mem_disk_entry_to_data(ZZIP_MEM_DISK* dir, ZZIP_DISK_ENTRY* entry); + inline ZZIP_DISK_ENTRY* +zzip_mem_disk_findfile(ZZIP_MEM_DISK* dir, + char* filename, ZZIP_DISK_ENTRY* after, + zzip_strcmp_fn_t compare); + inline ZZIP_DISK_ENTRY* +zzip_mem_disk_findmatch(ZZIP_MEM_DISK* dir, + char* filespec, ZZIP_DISK_ENTRY* after, + zzip_fnmatch_fn_t compare, int flags); + inline ZZIP_DISK_FILE* _zzip_restrict +zzip_mem_disk_entry_fopen (ZZIP_MEM_DISK* dir, ZZIP_DISK_ENTRY* entry); + inline ZZIP_DISK_FILE* _zzip_restrict +zzip_mem_disk_fopen (ZZIP_MEM_DISK* dir, char* filename); + inline _zzip_size_t +zzip_mem_disk_fread (void* ptr, _zzip_size_t size, _zzip_size_t nmemb, + ZZIP_DISK_FILE* file); + inline int +zzip_mem_disk_fclose (ZZIP_DISK_FILE* file); + inline int +zzip_mem_disk_feof (ZZIP_DISK_FILE* file); +</PRE> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/mksite.pl b/Build/source/libs/zziplib/zziplib-0.13.60/docs/mksite.pl new file mode 100644 index 00000000000..a463b3a9ce7 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/mksite.pl @@ -0,0 +1,2634 @@ +#! /usr/bin/perl +# this is the perl variant of the mksite script. 
It based directly on a +# copy of mksite.sh which is derived from snippets that I was using to +# finish doc pages for website publishing. Using only sh/sed along with +# files has a great disadvantage: it is a very slow process atleast. The +# perl language in contrast has highly optimized string, replace, search +# functions as well as data structures to store intermediate values. As +# an advantage large parts of the syntax are similar to the sh/sed variant. +# +# http://zziplib.sf.net/mksite/ +# THE MKSITE.SH (ZLIB/LIBPNG) LICENSE +# Copyright (c) 2004 Guido U. Draheim <guidod@gmx.de> +# This software is provided 'as-is', without any express or implied warranty +# In no event will the authors be held liable for any damages arising +# from the use of this software. +# Permission is granted to anyone to use this software for any purpose, +# including commercial applications, and to alter it and redistribute it +# freely, subject to the following restrictions: +# 1. The origin of this software must not be misrepresented; you must not +# claim that you wrote the original software. If you use this software +# in a product, an acknowledgment in the product documentation would be +# appreciated but is not required. +# 2. Altered source versions must be plainly marked as such, and must not +# be misrepresented as being the original software. +# 3. This notice may not be removed or altered from any source distribution. +# $Id: mksite.pl,v 1.2 2006-09-22 00:33:22 guidod Exp $ + +use strict; use warnings; no warnings "uninitialized"; +use File::Basename qw(basename); +use POSIX qw(strftime); + +# initialize some defaults +my $SITEFILE=""; +$SITEFILE="site.htm" if not $SITEFILE and -f "site.htm"; +$SITEFILE="site.html" if not $SITEFILE and -f "site.html"; +$SITEFILE="site.htm" if not $SITEFILE; +# my $MK="-mksite"; # note the "-" at the start +my $SED="sed"; + +my $DATA="~~"; # extension for meta data files +my $HEAD="~head~"; # extension for head sed script +my $BODY="~body~"; # extension for body sed script +my $FOOT="~foot~"; # append to body text (non sed) + +my $SED_LONGSCRIPT="$SED -f"; + +my $az="a-z"; # for perl +my $AZ="A-Z"; # we may assume there are +my $NN="0-9"; # char-ranges available +my $AA="_$NN$AZ$az"; # that makes the resulting +my $AX="$AA.+-"; # script more readable + +my $n = "\n"; +my $Q = "q class"; +my $QX = "/q"; + +# LANG="C" ; LANGUAGE="C" ; LC_COLLATE="C" # these are needed for proper +# export LANG LANGUAGE LC_COLLATE # lowercasing as some collate + # treat A-Z to include a-z + +my @HTMLTAGS = qw/a p h1 h2 h3 h4 h5 h6 dl dd dt ul ol li pre code + table tr td th b u i s q em strong strike cite big small sup sub tt + thead tbody center hr br nobr wbr span div img adress blockquote/; +my @HTMLTAGS2 = qw/html head body title meta http-equiv style link/; + +# ========================================================================== +my $hint=""; + +sub echo +{ + print join(" ",@_),$n; +} +sub error +{ + print STDERR "ERROR: ", join(" ",@_),$n; +} +sub warns +{ + print STDERR "WARN: ", join(" ",@_), $n; +} +sub hint +{ + print STDERR "NOTE: ", join(" ", @_), $n if $hint; +} +sub init +{ + $hint="1" if -d "DEBUG"; +} + +&init ("NOW!!!"); + +sub ls_s { + my $x=`ls -s @_`; + chomp($x); + return $x; +} + +# ========================================================================== +# reading options from the command line GETOPT +my %o = (); # to store option variables +$o{variables}="files"; +$o{fileseparator}="?"; +$o{files}=""; +$o{main_file}=""; +$o{formatter}="$0"; +my 
$opt=""; +for my $arg (@ARGV) { # this variant should allow to embed spaces in $arg + if ($opt) { + $o{$opt}=$arg; + $opt=""; + } else { + $_=$arg; + if (/^-.*=.*$/) { + $opt=$arg; $opt =~ s/-*([$AA][$AA-]*).*/$1/; $opt =~ y/-/_/; + if (not $opt) { + error "invalid option $arg"; + } else { + $arg =~ s/^[^=]*=//; + $o{$opt} = $arg; + $o{variables} .= " ".$opt; + } + $opt="";; + } elsif (/^-.*.-.*$/) { + $opt=$arg; $opt =~ s/-*([$AA][$AA-]*).*/$1/; $opt =~ y/-/_/; + if (not $opt) { + error "invalid option $arg"; + $opt=""; + } else { + # keep the option for next round + } ;; + } elsif (/^-.*/) { + $opt=$arg; $opt =~ s/^-*([$AA][$AA-]*).*/$1/; $opt =~ y/-/_/; + if (not $opt) { + error "invalid option $arg"; + } else { + $arg =~ s/^[^=]*=//; + $o{$opt} = ' '; + } + $opt="" ;; + } else { + hint "<$arg>"; + if (not $o{main_file}) { $o{main_file} = $arg; } else { + $o{files} .= $o{fileseparator} if $o{files}; + $o{files} .= $arg; }; + $opt="" ;; + }; + } +} ; if ($opt) { + $o{$opt}=" "; + $opt=""; + } + +### env | grep ^opt + +$SITEFILE=$o{main_file} if $o{main_file} and -f $o{main_file}; +$SITEFILE=$o{site_file} if $o{site_file} and -f $o{site_file}; +$hint="1" if $o{debug}; + +if ($o{help}) { + $_=$SITEFILE; + echo "$0 [sitefile]"; + echo " default sitefile = $_ ($o{main_file}) ($o{files})"; + echo "options:"; + echo " --filelist : show list of target files as ectracted from $_"; + echo " --src-dir xx : if source files are not where mksite is executed"; + echo " --tmp-dir xx : use temp instead of local directory"; + echo " --tmp : use automatic temp directory in \$TEMP/mksite.*"; + exit; + echo " internal:"; + echo "--fileseparator=x : for building the internal filelist (def. '?')"; + echo "--files xx : for list of additional files to be processed"; + echo "--main-file xx : for the main sitefile to take file list from"; +} + +if (not $SITEFILE) { + error "no SITEFILE found (default would be 'site.htm')$n"; + exit 1; +} else { + hint "sitefile: ", ls_s($SITEFILE); +} + +# we use internal hashes to store mappings - kind of relational tables +my @MK_TAGS= (); # "./$MK.tags.tmp" +my @MK_VARS= (); # "./$MK.vars.tmp" +my @MK_SPAN= (); # "./$MK.span.tmp" +my @MK_META= (); # "./$MK.meta.tmp" +my @MK_METT= (); # "./$MK.mett.tmp" +my @MK_TEST= (); # "./$MK.test.tmp" +my @MK_FAST= (); # "./$MK.fast.tmp" +my @MK_GETS= (); # "./$MK.gets.tmp" +my @MK_PUTS= (); # "./$MK.puts.tmp" +my @MK_OLDS= (); # "./$MK.olds.tmp" +my @MK_SITE= (); # "./$MK.site.tmp" +my @MK_SECT1= (); # "./$MK.sect1.tmp" +my @MK_SECT2= (); # "./$MK.sect2.tmp" +my @MK_SECT3= (); # "./$MK.sect3.tmp" +my @MK_DATA= (); # "./$MK~~" +my %DATA= (); # used for $F.$PARTs + +# ======================================================================== +# ======================================================================== +# ======================================================================== +# MAGIC VARS +# IN $SITEFILE +my $printerfriendly=""; +my $sectionlayout="list"; +my $sitemaplayout="list"; +my $attribvars=" "; # <x ref="${varname:=default}"> +my $updatevars=" "; # <!--$varname:=-->default +my $expandvars=" "; # <!--$varname--> +my $commentvars=" "; # $updatevars && $expandsvars +my $sectiontab=" "; # highlight ^<td class=...>...href="$section" +my $currenttab=" "; # highlight ^<br>..<a href="$topic"> +my $headsection="no"; +my $tailsection="no"; +my $sectioninfo="no"; # using <h2> title <h2> = info text +my $emailfooter="no"; + +for (source($SITEFILE)) { + if (/<!--multi-->/) { + warns("do not use <!--multi-->," + ." 
change to <!--mksite:multi--> $SITEFILE" + ."warning: or" + ." <!--mksite:multisectionlayout-->" + ." <!--mksite:multisitemaplayout-->"); + $sectionlayout="multi"; + $sitemaplayout="multi"; + } + if (/<!--mksite:multi-->/) { + $sectionlayout="multi"; + $sitemaplayout="multi"; + } + if (/<!--mksite:multilayout-->/) { + $sectionlayout="multi"; + $sitemaplayout="multi"; + } +} + +sub mksite_magic_option +{ + # $1 is word/option to check for + my ($U,$INP,$Z) = @_; + $INP=$SITEFILE if not $INP; + for (source($INP)) { + s/(<!--mksite:)($U)-->/$1$2: -->/g; + s/(<!--mksite:)(\w\w*)($U)-->/$1$3:$2-->/g; + /<!--mksite:$U:/ or next; + s/.*<!--mksite:$U:([^<>]*)-->.*/$1/; + s/.*<!--mksite:$U:([^-]*)-->.*/$1/; + /<!--mksite:$U:/ and next; + chomp; + return $_; + } + return ""; +} + +{ + my $x; + $x=mksite_magic_option("sectionlayout"); if + ($x =~ /^(list|multi)$/) { $sectionlayout="$x" ; } + $x=mksite_magic_option("sitemaplayout"); if + ($x =~ /^(list|multi)$/) { $sitemaplayout="$x" ; } + $x=mksite_magic_option("attribvars"); if + ($x =~ /^( |no|warn)$/) { $attribvars="$x" ; } + $x=mksite_magic_option("updatevars"); if + ($x =~ /^( |no|warn)$/) { $updatevars="$x" ; } + $x=mksite_magic_option("expandvars"); if + ($x =~ /^( |no|warn)$/) { $expandvars="$x" ; } + $x=mksite_magic_option("commentvars"); if + ($x =~ /^( |no|warn)$/) { $commentvars="$x" ; } + $x=mksite_magic_option("printerfriendly"); if + ($x =~ /^( |[.].*|[-]-.*)$/) { $printerfriendly="$x" ; } + $x=mksite_magic_option("sectiontab"); if + ($x =~ /^( |no|warn)$/) { $sectiontab="$x" ; } + $x=mksite_magic_option("currenttab"); if + ($x =~ /^( |no|warn)$/) { $currenttab="$x" ; } + $x=mksite_magic_option("sectioninfo"); if + ($x =~ /^( |no|[=:-])$/) { $sectioninfo="$x" ; } + $x=mksite_magic_option("commentvars"); if + ($x =~ /^( |no|warn)$/) { $commentvars="$x" ; } + $x=mksite_magic_option("emailfooter"); if + ($x) { $emailfooter="$x"; } +} + +$printerfriendly=$o{print} if $o{print}; +$updatevars="no" if $commentvars eq "no"; # duplicated into +$expandvars="no" if $commentvars eq "no"; # info2vars_sed + +hint "'$sectionlayout\'sectionlayout '$sitemaplayout\'sitemaplayout"; +hint "'$attribvars\'attribvars '$updatevars\'updatevars"; +hint "'$expandvars\'expandvars '$commentvars\'commentvars"; +hint "'$currenttab\'currenttab '$sectiontab\'sectiontab"; +hint "'$headsection\'headsection '$tailsection\'tailsection"; + +# ========================================================================== +# init a few global variables +# 0. INIT + +# $MK.tags.tmp - originally, we would use a lambda execution on each +# uppercased html tag to replace <P> with <p class="P">. Here we just +# walk over all the known html tags and make an sed script that does +# the very same conversion. There would be a chance to convert a single +# tag via "h;y;x" or something we do want to convert all the tags on +# a single line of course. 
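# For example, with "pre" in the @HTMLTAGS list above, the loop below
# generates rules roughly equivalent to
#      s|<PRE>|<pre class="PRE">|g;    s|</PRE>|</pre>|g;
# so an input line "<PRE>text</PRE>" comes out as
#      <pre class="PRE">text</pre>
# i.e. the uppercase editing tags survive only as css class names.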
+@MK_TAGS=(); +{ my ($M,$P); for $M (@HTMLTAGS) { + $P=uc($M); + push @MK_TAGS, "s|<$P>|<$M class=\\\"$P\\\">|g;"; + push @MK_TAGS, "s|<$P |<$M class=\\\"$P\\\" |g;"; + push @MK_TAGS, "s|</$P>|</$M>|g;"; +}} +push @MK_TAGS, "s|<>|\\ \\;|g;"; +push @MK_TAGS, "s|<->|<WBR />\\;|g;"; +push @MK_TAGS, "s|<c>|<code>|g;"; +push @MK_TAGS, "s|</c>|</code>|g;"; +push @MK_TAGS, "s|<section>||g;"; +push @MK_TAGS, "s|</section>||g;"; +push @MK_TAGS, "s|<(a [^<>]*) />|<\$1></a>|g"; +my $_ulink_="<a href=\"\$1\" remap=\"url\">\$1</a>"; +push @MK_TAGS, "s|<a>\\s*(\\w+://[^<>]*)</a>|$_ulink_|g;"; +# also make sure that some non-html entries are cleaned away that +# we are generally using to inject meta information. We want to see +# that meta ino in the *.htm browser view during editing but they +# shall not get present in the final html page for publishing. +my @DC_VARS = + ("contributor", "date", "source", "language", "coverage", "identifier", + "rights", "relation", "creator", "subject", "description", + "publisher", "DCMIType"); +my @_EQUIVS = + ("refresh", "expires", "content-type", "cache-control", + "redirect", "charset", # mapped to refresh / content-type + "content-language", "content-script-type", "content-style-type"); +{ my $P; for $P (@DC_VARS) { # dublin core embedded + push @MK_TAGS, "s|<$P>[^<>]*</$P>||g;"; +}} +{ my $P; for $P (@_EQUIVS) { + push @MK_TAGS, "s|<$P>[^<>]*</$P>||g;"; +}} +push @MK_TAGS, "s|<a sect=\\\"[$AZ$NN]\\\"|<a|g;" if not $o{keepsect}; +push @MK_TAGS, "s|<!--[$AX]*[?]-->||g;"; +push @MK_TAGS, "s|<!--\\\$[$AX]*[?]:-->||g;"; +push @MK_TAGS, "s|<!--\\\$[$AX]*:[?=]-->||g;"; +push @MK_TAGS, "s|(<[^<>]*)\\\${[$AX]*:[?=]([^<{}>]*)}([^<>]*>)|\$1\$2\$3|g;"; + +my $TRIMM=" -e 's:^ *::' -e 's: *\$::'"; # trimm away leading/trailing spaces +sub trimm +{ + my ($T,$Z) = @_; + $T =~ s:\A\s*::s; $T =~ s:\s*\Z::s; + return $T; +} +sub trimmm +{ + my ($T,$Z) = @_; + $T =~ s:\A\s*::s; $T =~ s:\s*\Z::s; $T =~ s:\s+: :g; + return $T; +} +sub timezone +{ + # +%z is an extension while +%Z is supposed to be posix + my $tz; + eval { $tz = strftime("%z", localtime()) }; + return $tz if $tz =~ /[+]/; + return $tz if $tz =~ /[-]/; + return strftime("%Z", localtime()); +} + +sub timetoday +{ + return strftime("%Y-%m-%d", localtime()); +} +sub timetodays +{ + return strftime("%Y-%m%d", localtime()); +} + +sub esc +{ + my ($TXT,$XXX) = @_; + $TXT =~ s|&|\\\\&|g; + return $TXT; +} + +my %SOURCE; +sub source # $file : @lines +{ + my ($FILE,$Z) = @_; + if (exists $SOURCE{$FILE}) { return @{$SOURCE{$FILE}}; } + my @TEXT = (); + open FILE, "<$FILE" or die "could not open $FILE: $!"; + for my $line (<FILE>) { + push @TEXT, $line; + } close FILE; + @{$SOURCE{$FILE}} = @TEXT; + return @{$SOURCE{$FILE}}; +} +sub savesource # $file \@lines +{ + my ($FILE,$LINES,$Z) = @_; + @{$SOURCE{$FILE}} = @{$LINES}; +} + +my $F; # current file during loop <<<<<<<<< +my $i = 100; +sub savelist { + if (-d "DEBUG") { + my ($script,$ext,$Z) = @_; + if (not $ext) { $ext = "_".$i; $i++; } + my $X = "$F.$ext.tmp.PL"; $X =~ s|/|:|g; + open X, ">DEBUG/$X" or die "could not open $X: $!"; + print X "#! 
/usr/bin/env perl",$n; + print X "# ",$#_," $ext files ",localtime(),$n; + my $TEXT = join("$n", @{$script}); + $TEXT =~ s|source\([^()]*\)|<>|; + print X $TEXT,$n; close X; + } +} + +sub eval_MK_LIST # $str @list +{ + my $FILETYPE = $_[0]; shift @_; + my $result = $_[0]; shift @_; + my $extra = ""; + my $script = "\$_ = \$result; my \$Z;"; + $script .= join(";$n ", @_); + $script .= "$n;\$result = \$_;$n"; + savelist([$script],$FILETYPE); + eval $script; + return $result.$extra; +} + +sub eval_MK_FILE { + my $FILETYPE = $_[0]; shift @_; + my $FILENAME = $_[0]; shift @_; + my $result = ""; + my $script = "my \$FILE; my \$extra = ''; my \$Z; $n"; + $script.= "for (source('$FILENAME')) { $n"; + $script.= join(";$n ", @_); + $script.= "$n; \$result .= \$_; "; + $script.= "$n if(\$extra){\$result.=\$extra;\$extra='';\$result.=\"\\n\"}"; + $script.= "$n} if(\$extra){\$result.=\$extra;}$n"; + savelist([$script],$FILETYPE); + eval $script; + return $result; +} +my $sed_add = "\$extra .= "; # "/r "; + +sub foo { print " '$F'$n"; } + +# ====================================================================== +# FUNCS + +my $SOURCEFILE; # current file <<<<<<<< +my @FILELIST; # <<<<<<< + +sub sed_slash_key # helper to escape chars special in /anchor/ regex +{ # currently escaping "/" "[" "]" "." + my $R = $_[0]; $R =~ s|[\"./[-]|\\$&|g; $R =~ s|\]|\\\\$&|g; + return $R; +} +sub sed_piped_key # helper to escape chars special in s|anchor|| regex +{ # currently escaping "|" "[" "]" "." + my $R = $_[0]; $R =~ s/[\".|[-]/\\$&/g; $R =~ s/\]/\\\\$&/g; + return $R; +} + +sub back_path # helper to get the series of "../" for a given path +{ + my ($R,$Z) = @_; if ($R !~ /\//) { return ""; } + $R =~ s|/[^/]*$|/|; $R =~ s|[^/]*/|../|g; + return $R; +} + +sub dir_name +{ + my $R = $_[0]; $R =~ s:/[^/][^/]*\$::; + return $R; +} + +sub info2vars_sed # generate <!--$vars--> substition sed addon script +{ + my ($INP,$Z) = @_; + $INP = \@{$DATA{$F}} if not $INP; + my @OUT = (); + my $V8=" *([^ ][^ ]*) +(.*)<$QX>"; + my $V9=" *DC[.]([^ ][^ ]*) +(.*)<$QX>"; + my $N8=" *([^ ][^ ]*) ([$NN].*)<$QX>"; + my $N9=" *DC[.]([^ ][^ ]*) ([$NN].*)<$QX>"; + my $V0="([<]*)\\\$"; + my $V1="([^<>]*)\\\$"; + my $V2="([^{<>}]*)"; + my $V3="([^<>]*)"; + my $SS="<"."<>".">"; # spacer so value="2004" dont make for s|\(...\)|\12004| + $Z="\$Z="; + $updatevars = "no" if $commentvars eq "no"; # duplicated from + $expandvars = "no" if $commentvars eq "no"; # option handling + my @_INP = (); for (@{$INP}) { + my $x=$_; $x =~ s/(>[^<>]*)'([^<>]*<)/$1\\'$2/; push @_INP, $x; # OOOOPS + } + if ($expandvars ne "no") { + for (@_INP) { + if (/^=....=formatter /) { next; } + elsif (/^<$Q='name'>$V9/){push @OUT, "\$Z='$2';s|<!--$V0$1\\?-->|- \$Z|;"} + elsif (/^<$Q='Name'>$V9/){push @OUT, "\$Z='$2';s|<!--$V0$1\\?-->|(\$Z)|;"} + elsif (/^<$Q='name'>$V8/){push @OUT, "\$Z='$2';s|<!--$V0$1\\?-->|- \$Z|;"} + elsif (/^<$Q='Name'>$V8/){push @OUT, "\$Z='$2';s|<!--$V0$1\\?-->|(\$Z)|;"} + } + } + if ($expandvars ne "no") { + for (@_INP) { + if (/^=....=formatter /) { next; } + elsif (/^<$Q='text'>$V9/){push @OUT, "\$Z='$2';s|<!--$V1$1-->|\$1$SS\$Z|;"} + elsif (/^<$Q='Text'>$V9/){push @OUT, "\$Z='$2';s|<!--$V1$1-->|\$1$SS\$Z|;"} + elsif (/^<$Q='name'>$V9/){push @OUT, "\$Z='$2';s|<!--$V1$1\\?-->|\$1$SS\$Z|;"} + elsif (/^<$Q='Name'>$V9/){push @OUT, "\$Z='$2';s|<!--$V1$1\\?-->|\$1$SS\$Z|;"} + elsif (/^<$Q='text'>$V8/){push @OUT, "\$Z='$2';s|<!--$V1$1-->|\$1$SS\$Z|;"} + elsif (/^<$Q='Text'>$V8/){push @OUT, "\$Z='$2';s|<!--$V1$1-->|\$1$SS\$Z|;"} + elsif 
(/^<$Q='name'>$V8/){push @OUT, "\$Z='$2';s|<!--$V1$1\\?-->|\$1$SS\$Z|;"} + elsif (/^<$Q='Name'>$V8/){push @OUT, "\$Z='$2';s|<!--$V1$1\\?-->|\$1$SS\$Z|;"} + } + } + if ($updatevars ne "no") { + for (@_INP) { my $H = "[^<>]*"; + if (/^=....=formatter /) { next; } + elsif (/^<$Q='name'>$V9/){push @OUT, "\$Z='$2';s|<!--$V0$1:\\?-->$H|- \$Z|;"} + elsif (/^<$Q='Name'>$V9/){push @OUT, "\$Z='$2';s|<!--$V0$1:\\?-->$H|(\$Z)|;"} + elsif (/^<$Q='name'>$V8/){push @OUT, "\$Z='$2';s|<!--$V0$1:\\?-->$H|- \$Z|;"} + elsif (/^<$Q='Name'>$V8/){push @OUT, "\$Z='$2';s|<!--$V0$1:\\?-->$H|(\$Z)|;"} + } + } + if ($updatevars ne "no") { + for (@_INP) { my $H = "[^<>]*"; + if (/^=....=formatter /) { next; } + elsif (/^<$Q='text'>$V9/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\=-->$H|\$1$SS\$Z|;"} + elsif (/^<$Q='Text'>$V9/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\=-->$H|\$1$SS\$Z|;"} + elsif (/^<$Q='name'>$V9/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\?-->$H|\$1$SS\$Z|;"} + elsif (/^<$Q='Name'>$V9/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\?-->$H|\$1$SS\$Z|;"} + elsif (/^<$Q='text'>$V8/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\=-->$H|\$1$SS\$Z|;"} + elsif (/^<$Q='Text'>$V8/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\=-->$H|\$1$SS\$Z|;"} + elsif (/^<$Q='name'>$V8/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\?-->$H|\$1$SS\$Z|;"} + elsif (/^<$Q='Name'>$V8/){push @OUT,"\$Z='$2';s|<!--$V1$1:\\?-->$H|\$1$SS\$Z|;"} + } + } + if ($attribvars ne "no") { + for (@_INP) { my $H = "[^<>]*"; + if (/^=....=formatter /) { next; } + elsif (/^<$Q='text'>$V9/){push @OUT,"\$Z='$2';s|<$V1\{$1:[=]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + elsif (/^<$Q='Text'>$V9/){push @OUT,"\$Z='$2';s|<$V1\{$1:[=]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + elsif (/^<$Q='name'>$V9/){push @OUT,"\$Z='$2';s|<$V1\{$1:[?]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + elsif (/^<$Q='Name'>$V9/){push @OUT,"\$Z='$2';s|<$V1\{$1:[?]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + elsif (/^<$Q='text'>$V8/){push @OUT,"\$Z='$2';s|<$V1\{$1:[=]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + elsif (/^<$Q='Text'>$V8/){push @OUT,"\$Z='$2';s|<$V1\{$1:[=]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + elsif (/^<$Q='name'>$V8/){push @OUT,"\$Z='$2';s|<$V1\{$1:[?]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + elsif (/^<$Q='Name'>$V8/){push @OUT,"\$Z='$2';s|<$V1\{$1:[?]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + } + for (split / /, $o{variables}) { + {push @OUT,"\$Z='$o{$_}';s|<$V1\{$_:[?]$V2}$V3>|<\$1$SS\$Z\$3>|;"} + } + } + # if value="2004" then generated sed might be "\\12004" which is bad + # instead we generate an edited value of "\\1$SS$value" and cut out + # the spacer now after expanding the variable values: + push @OUT, "s|$SS||g;"; + return @OUT; + +} + +sub info2meta_sed # generate <meta name..> text portion +{ + my ($INP,$XXX) = @_; + $INP = \@{$DATA{$F}} if not $INP; + my @OUT = (); + # http://www.metatab.de/meta_tags/DC_type.htm + my $V6=" *HTTP[.]([^ ]+) (.*)<$QX>"; + my $V7=" *DC[.]([^ ]+) (.*)<$QX>"; + my $V8=" *([^ ]+) (.*)<$QX>" ; + sub __TYPE_SCHEME { "name=\"DC.type\" content=\"$2\" scheme=\"$1\"" }; + sub __DCMI { "name=\"$1\" content=\"$2\" scheme=\"DCMIType\"" }; + sub __NAME { "name=\"$1\" content=\"$2\"" }; + sub __NAME_TZ { "name=\"$1\" content=\"$2 ".&timezone()."\"" }; + sub __HTTP { "http-equiv=\"$1\" content=\"$2\"" }; + for (@$INP) { + if (/=....=today /) { next; } + if (/<$Q='meta'>HTTP[.]/ && /<$Q='meta'>$V6/) { + push @OUT, " <meta ${\(__HTTP)} />" if $2; next; } + if (/<$Q='meta'>DC[.]DCMIType / && /<$Q='meta'>$V7/) { + push @OUT, " <meta ${\(__TYPE_SCHEME)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type Collection$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type 
Dataset$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type Event$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type Image$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type Service$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type Software$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type Sound$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]type Text$/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__DCMI)} />" if $2; next; } + if (/<$Q='meta'>DC[.]date[.].*[+]/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__NAME)} />" if $2; next; } + if (/<$Q='meta'>DC[.]date[.].*[:]/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__NAME_TZ)} />" if $2; next; } + if (/<$Q='meta'>/ && /<$Q='meta'>$V8/) { + push @OUT, " <meta ${\(__NAME)} />" if $2; next; } + } + return @OUT; +} + +sub info_get_entry # get the first <!--vars--> value known so far +{ + my ($TXT,$INP,$XXX) = @_; + $TXT = "sect" if not $TXT; + $INP = \@{$DATA{$F}} if not $INP; + for (grep {/<$Q='text'>$TXT /} @$INP) { + my $info = $_; + $info =~ s|<$Q='text'>$TXT ||; $info =~ s|<$QX>||; + chomp($info); chomp($info); return $info; + } +} + +sub info1grep # test for a <!--vars--> substition to be already present +{ + my ($TXT,$INP,$XXX) = @_; + $TXT = "sect" if not $TXT; + $INP = \@{$DATA{$F}} if not $INP; + return scalar(grep {/^<$Q='text'>$TXT /} @$INP); # returning the count +} + +sub dx_init +{ + @{$DATA{$F}} = (); + &dx_meta ("formatter", basename($o{formatter})); + for (split / /, $o{variables}) { # commandline --def=value + if (/_/) { my $u=$_; $u =~ y/_/-/; # makes for <!--$def--> override + &dx_meta ($u, $o{$_}); + } else { &dx_text ($_, $o{$_}); } + } +} + +sub dx_line +{ + my ($U,$V,$W,$Z) = @_; chomp($U); chomp($V); + push @{$DATA{$F}}, "<$Q=$U>".$V." 
".trimmm($W)."<$QX>"; +} + +sub DX_line +{ + my ($U,$V,$W,$Z) = @_; $W =~ s/<[^<>]*>//g; + &dx_line ($U,$V,$W); +} + +sub dx_text +{ + my ($U,$V,$Z) = @_; + &dx_line ("'text'",$U,$V); +} + +sub DX_text # add a <!--vars--> substition includings format variants +{ + my ($N, $T,$XXX) = @_; + $N = trimm($N); $T = trimm($T); + if ($N) { + if ($T) { + my $text=lc("$T"); $text =~ s/<[^<>]*>//g; + &dx_line ("'text'",$N,$T); + &dx_line ("'name'",$N,$text); + my $varname=$N; $varname =~ s/.*[.]//; # cut out front part + if ($N ne $varname and $varname) { + $text=lc("$varname $T"); $text =~ s/<[^<>]*>//g; + &dx_line ("'Text'",$varname,$T); + &dx_line ("'Name'",$varname,$text); + } + } + } +} + +sub dx_meta +{ + my ($U,$V,$Z) = @_; + &DX_line ("'meta'",$U,$V); +} + +sub DX_meta # add simple meta entry and its <!--vars--> subsitution +{ + my ($U,$V,$Z) = @_; + &DX_line ("'meta'",$U,$V); + &DX_text ("$U", $V); +} + +sub DC_meta # add new DC.meta entry plus two <!--vars--> substitutions +{ + my ($U,$V,$Z) = @_; + &DX_line ("'meta'","DC.$U",$V); + &DX_text ("DC.$U", $V); + &DX_text ("$U", $V); +} + +sub HTTP_meta # add new HTTP.meta entry plus two <!--vars--> substitutions +{ + my ($U,$V,$Z) = @_; + &DX_line ("'meta'","HTTP.$U",$V); + &DX_text ("HTTP.$U", $V); + &DX_text ("$U", $V); +} + +sub DC_VARS_Of # check DC vars as listed in $DC_VARS global/generate DC_meta +{ # the results will be added to .meta.tmp and .vars.tmp later + my ($FILENAME,$Z)= @_; + $FILENAME=$SOURCEFILE if not $FILENAME; + for my $M (@DC_VARS, "title") { + # scan for a <markup> of this name FIXME + my ($part,$text); + for (source($FILENAME)) { + /<$M>/ or next; s|.*<$M>||; s|</$M>.*||; + $part = trimm($_); last; + } + $text=$part; $text =~ s|^\w*:||; $text = trimm($text); + next if not $text; + # <mark:part> will be <meta name="mark.part"> + if ($text ne $part) { + my $N=$part; $N =~ s/:.*//; + &DC_meta ("$M.$N", $text); + } elsif ($M eq "date") { + &DC_meta ("$M.issued", $text); # "<date>" -> "<date>issued:" + } else { + &DC_meta ("$M", $text); + } + } +} + +sub HTTP_VARS_Of # check HTTP-EQUIVs as listed in $_EQUIV global then +{ # generate meta tags that are http-equiv= instead of name= + my ($FILENAME,$Z)= @_; + $FILENAME=$SOURCEFILE if not $FILENAME; + for my $M (@_EQUIVS) { + # scan for a <markup> of this name FIXME + my ($part,$text); + for (source($FILENAME)) { + /<$M>/ or next; s|.*<$M>||; s|</$M>.*||; + $part = trimm($_); last; + } + $text=$part; $text =~ s|^\w*:||; $text = trimm($text); + next if not $text; + if ($M eq "redirect") { + &HTTP_meta ("refresh", "5; url=$text"); &DX_text ("$M", $text); + } elsif ($M eq "charset") { + &HTTP_meta ("content-type", "text/html; charset=$text"); + } else { + &HTTP_meta ("$M", $text); + } + } +} + +sub DC_isFormatOf # make sure there is this DC.relation.isFormatOf tag +{ # choose argument for a fallback (usually $SOURCEFILE) + my ($NAME,$Z) = @_; + $NAME=$SOURCEFILE if not $NAME; + if (not &info1grep ("DC.relation.isFormatOf")) { + &DC_meta ("relation.isFormatOf", "$NAME"); + } +} + +sub DC_publisher # make sure there is this DC.publisher meta tag +{ # choose argument for a fallback (often $USER) + my ($NAME,$Z) = @_; + $NAME=$ENV{"USER"} if not $NAME; + if (not &info1grep ("DC.publisher")) { + &DC_meta ("publisher", "$NAME"); + } +} + +sub DC_modified # make sure there is a DC.date.modified meta tag +{ # maybe choose from filesystem dates if possible + my ($ZZ,$Z) = @_; # target file + if (not &info1grep ("DC.date.modified")) { + my @stats = stat($ZZ); + my $text = 
strftime("%Y-%m-%d", localtime($stats[9])); + &DC_meta ("date.modified", $text); + } +} + +sub DC_date # make sure there is this DC.date meta tag +{ # choose from one of the available DC.date.* specials + my ($ZZ,$Z) = @_; # source file + if (&info1grep ("DC.date")) { + &DX_text ("issue", "dated ".&info_get_entry("DC.date")); + &DX_text ("updated", &info_get_entry("DC.date")); + } else { + my $text=""; my $kind; + for $kind (qw/available issued modified created/) { + $text=&info_get_entry("DC.date.$kind"); + # test ".$text" != "." && echo "$kind = date = $text ($ZZ)" + last if $text; + } + if (not $text) { + my $part; my $M="date"; + for (source($ZZ)) { + /<$M>/ or next; s|.*<$M>||; s|</$M>.*||; + $part=trimm($_); last; + } + $text=$part; $text =~ s|^[$AA]*:||; + $text = &trimm ($text); + } + if (not $text) { + my $part; my $M="!--date:*=*--"; # takeover updateable variable... + for (source($ZZ)) { + /<$M>/ or next; s|.*<$M>||; s|</.*||; + $part=trimm($_); last; + } + $text=$part; $text =~ s|^[$AA]*:||; $text =~ s|\&.*||; + $text = &trimm ($text); + } + $text =~ s/[$NN]*:.*//; # cut way seconds + &DX_text ("updated", $text); + my $text1=$text; $text1 =~ s|^.* *updated ||; + if ($text ne $text1) { + $kind="modified" ; $text=$text1; $text =~ s|,.*||; + } + $text1=$text; $text1 =~ s|^.* *modified ||; + if ($text ne $text1) { + $kind="modified" ; $text=$text1; $text =~ s|,.*||; + } + $text1=$text; $text1 =~ s|^.* *created ||; + if ($text ne $text1) { + $kind="created" ; $text=$text1; $text =~ s|,.*||; + } + &DC_meta ("date", "$text"); + &DX_text ("issue", "$kind $text"); + } +} + +sub DC_title +{ + # choose a title for the document, either an explicit title-tag + # or one of the section headers in the document or fallback to filename + my ($ZZ,$Z) = @_; # target file + my ($term, $text); + if (not &info1grep ("DC.title")) { + for my $M (qw/TITLE title H1 h1 H2 h2 H3 H3 H4 H4 H5 h5 H6 h6/) { + for (source($ZZ)) { + /<$M>/ or next; s|.*<$M>||; s|</$M>.*||; + $text = trimm($_); last; + } + last if $text; + for (source($ZZ)) { + /<$M [^<>]*>/ or next; s|.*<$M [^<>]*>||; s|</$M>.*||; + $text = trimm($_); last; + } + last if $text; + } + if (not $text) { + $text=basename($ZZ,".html"); + $text=basename($text,".htm"); $text =~ y/_/ /; $text =~ s/$/ info/; + $text=~ s/\n/ /g; + } + $term=$text; $term =~ s/.*[\(]//; $term =~ s/[\)].*//; + $text =~ s/[\(][^\(\)]*[\)]//; + if (not $term or $term eq $text) { + &DC_meta ("title", "$text"); + } else { + &DC_meta ("title", "$term - $text"); + } + } +} + +sub site_get_section # return parent section page of given page +{ + my $_F_ = &sed_slash_key(@_); + for my $x (grep {/<$Q='sect'>$_F_ /} @MK_DATA) { + my $info = $x; $info =~ s|<$Q='sect'>[^ ]* ||; $info =~ s|<$QX>||; + return $info; + } +} + +sub DC_section # not really a DC relation (shall we use isPartOf ?) +{ # each document should know its section father + my $sectn = &site_get_section($F); + if ($sectn) { + &DC_meta ("relation.section", $sectn); + } +} + +sub info_get_entry_section +{ + return &info_get_entry("DC.relation.section"); +} + +sub site_get_selected # return section of given page +{ + my $_F_ = &sed_slash_key(@_); + for my $x (grep {/<$Q='[u]se.'>$_F_ /} @MK_DATA) { + my $info = $x; + $info =~ s/<$Q='[u]se.'>[^ ]* //; $info =~ s|<$QX>||; + return $info; + } +} + +sub DC_selected # not really a DC title (shall we use alternative ?) 
+{ + # each document might want to highlight the currently selected item + my $short=&site_get_selected($F); + if ($short) { + &DC_meta ("title.selected", $short); + } +} + +sub info_get_entry_selected +{ + return &info_get_entry("DC.title.selected"); +} + +sub site_get_rootsections # return all sections from root of nav tree +{ + my @OUT; + for (grep {/<$Q='[u]se1'>/} @MK_DATA) { + my $x = $_; + $x =~ s/<$Q='[u]se.'>([^ ]*) .*/$1/; + push @OUT, $x; + } + return @OUT; +} + +sub site_get_sectionpages # return all children pages in the given section +{ + my $_F_=&sed_slash_key(@_); + my @OUT = (); + for (grep {/^<$Q='sect'>[^ ]* $_F_$/} @MK_DATA) { + my $x = $_; + $x =~ s/^<$Q='sect'>//; $x =~ s/ .*//; $x =~ s|<$QX>||; + push @OUT, $x; + } + return @OUT; +} + +sub site_get_subpages # return all page children of given page +{ + my $_F_=&sed_slash_key(@_); + my @OUT = (); + for (grep {/^<$Q='node'>[^ ]* $_F_<[^<>]*>$/} @MK_DATA) { + my $x = $_; + $x =~ s/^<$Q='node'>//; $x =~ s/ .*//; $x =~ s|<$QX>||; + push @OUT, $x; + } + return @OUT; +} + +sub site_get_parentpage # ret parent page for given page (".." for sections) +{ + my $_F_=&sed_slash_key(@_); + for (grep {/^<$Q='node'>$_F_ /} @MK_DATA) { + my $x = $_; + $x =~ s/^<$Q='node'>[^ ]* //; $x =~ s|<$QX>||; + return $x; + } +} + +sub DX_alternative # detect wether page asks for alternative style +{ # which is generally a shortpage variant + my ($U,$Z) = @_; + my $x=&mksite_magic_option("alternative",$U); + $x =~ s/^ *//; $x =~s/ .*//; + if ($x) { + &DX_text ("alternative", $x); + } +} + +sub info2head_sed # append alternative handling script to $HEAD +{ + my @OUT = (); + my $have=&info_get_entry("alternative"); + if ($have) { + push @OUT, "/<!--mksite:alternative:$have .*-->/ && do {"; + push @OUT, "s/<!--mksite:alternative:$have( .*)-->/\$1/"; + push @OUT, "$sed_add \$_; last; };"; + } + return @OUT; +} +sub info2body_sed # append alternative handling script to $BODY +{ + my @OUT = (); + my $have=&info_get_entry("alternative"); + if ($have) { + push @OUT, "s/<!--mksite:alternative:$have( .*)-->/\$1/"; + } + return @OUT; +} + +sub bodymaker_for_sectioninfo +{ + if ($sectioninfo eq "no") { return ""; } + my $_x_="<!--mksite:sectioninfo::-->"; + my $_q_="([^<>]*[$AX][^<>]*)"; + $_q_="[ ][ ]*$sectioninfo([ ])" if $sectioninfo ne " "; + my @OUT = (); + push @OUT, "s|(^<[hH][$NN][ >].*</[hH][$NN]>)$_q_|\$1$_x_\$2|"; + push @OUT, "/$_x_/ and s|^|<table width=\"100%\"><tr valign=\"bottom\"><td>|"; + push @OUT, "/$_x_/ and s|</[hH][$NN]>|&</td><td align=\"right\"><i>|"; + push @OUT, "/$_x_/ and s|\$|</i></td></tr></table>|"; + push @OUT, "s|$_x_||"; + return @OUT; +} + +sub fast_href # args "$FILETOREFERENCE" "$FROMCURRENTFILE:$F" +{ # prints path to $FILETOREFERENCE href-clickable in $FROMCURRENTFILE + # if no subdirectoy then output is the same as input $FILETOREFERENCE + my ($T,$R,$Z) = @_; + my $S=&back_path ($R); + if (not $S) { + return $T; + } else { + my $t=$T; + $t =~ s/^ *$//; $t =~ s/^\/.*//; + $t =~ s/^[.][.].*//; $t =~ s/^\w*:.*//; + if (not $t) { # don't move any in the pattern above + return $T; + } else { + return "$S$T"; # prefixed with backpath + } + } +} + +sub make_back_path # "$FILE" +{ + my ($R,$Z) = @_; + my $S=&back_path ($R); + my @OUT = (); + return @OUT if $S !~ /^\.\./; + push @OUT, "s|(<[^<>]*\\shref=\\\")(\\w[^<>:]*\\\"[^<>]*>)|\$1$S\$2|g;"; + push @OUT, "s|(<[^<>]*\\ssrc=\\\")(\\w[^<>:]*\\\"[^<>]*>)|\$1$S\$2|g;"; + return @OUT; +} + +# ============================================================== SITE MAP DATA +# each 
entry needs atleast a list-title, a long-title, and a list-date +# these are the basic information to be printed in the sitemap file +# where it is bound the hierarchy of sect/subsect of the entries. + +sub site_map_list_title # $file $text +{ + my ($U,$V,$Z) = @_; chomp($U); + push @MK_DATA, "<$Q='list'>$U ".trimm($V)."<$QX>"; +} +sub info_map_list_title # $file $text +{ + my ($U,$V,$Z) = @_; chomp($U); + push @{$DATA{$U}}, "<$Q='list'>".trimm($V)."<$QX>"; +} +sub site_map_long_title # $file $text +{ + my ($U,$V,$Z) = @_; chomp($U); + push @MK_DATA, "<$Q='long'>$U ".trimm($V)."<$QX>"; +} +sub info_map_long_title # $file $text +{ + my ($U,$V,$Z) = @_; chomp($U); + push @{$DATA{$U}}, "<$Q='long'>".trimm($V)."<$QX>"; +} +sub site_map_list_date # $file $text +{ + my ($U,$V,$Z) = @_; chomp($U); + push @MK_DATA, "<$Q='date'>$U ".trimm($V)."<$QX>"; +} +sub info_map_list_date # $file $text +{ + my ($U,$V,$Z) = @_; chomp($U); + push @{$DATA{$U}}, "<$Q='date'>".trimm($V)."<$QX>"; +} + +sub site_get_list_title +{ + my ($U,$V,$Z) = @_; + for (@MK_DATA) { if (m|^<$Q='list'>$U (.*)<$QX>|) { return $1; } } return ""; +} +sub site_get_long_title +{ + my ($U,$V,$Z) = @_; + for (@MK_DATA) { if (m|^<$Q='long'>$U (.*)<$QX>|) { return $1; } } return ""; +} +sub site_get_list_date +{ + my ($U,$V,$Z) = @_; + for (@MK_DATA) { if (m|^<$Q='date'>$U (.*)<$QX>|) { return $1; } } return ""; +} + +sub siteinfo2sitemap# generate <name><page><date> addon sed scriptlet +{ # the resulting script will act on each item/line + # containing <!--"filename"--> and expand any following + # reference of <!--name--> or <!--date--> or <!--long--> + my ($INP,$Z) = @_ ; $INP= \@MK_DATA if not $INP; + my @OUT = (); + my $_list_= + sub{"s|(<!--\\\"$1\\\"-->.*)<name [^<>]*>.*</name>|\$1<name href=\\\"$1\\\">$2</name>|"}; + my $_date_= + sub{"s|(<!--\\\"$1\\\"-->.*)<date>.*</date>|\$1<date>$2</date>|"}; + my $_long_= + sub{"s|(<!--\\\"$1\\\"-->.*)<long>.*</long>|\$1<long>$2</long>|"}; + + for (@$INP) { + my $info = $_; + $info =~ s:<$Q='list'>([^ ]*) (.*)<$QX>:&$_list_:e; + $info =~ s:<$Q='date'>([^ ]*) (.*)<$QX>:&$_date_:e; + $info =~ s:<$Q='long'>([^ ]*) (.*)<$QX>:&$_long_:e; + $info =~ /^s\|/ || next; + push @OUT, $info; + } + return @OUT; +} + +sub make_multisitemap +{ # each category gets its own column along with the usual entries + my ($INPUTS,$Z)= @_ ; $INPUTS=\@MK_DATA if not $INPUTS; + @MK_SITE = &siteinfo2sitemap(); # have <name><long><date> addon-sed + my @OUT = (); + my $_form_= sub{"<!--\"$2\"--><!--use$1--><long>$3</long><!--end$1-->" + ."<br><name href=\"$2\">$3</name><date>......</date>" }; + my $_tiny_="small><small><small" ; my $_tinyX_="small></small></small "; + my $_tabb_="<br><$_tiny_> </$_tinyX_>" ; my $_bigg_="<big> </big>"; + push @OUT, "<table width=\"100%\"><tr><td> ".$n; + for (grep {/<$Q='[Uu]se.'>/} @$INPUTS) { + my $x = $_; + $x =~ />\w\w\w\w*:/ and next; # name: http: ftp: mailto: ... 
+ $x =~ s|<$Q='[Uu]se(.)'>([^ ]*) (.*)<$QX>|&$_form_|e; + $x = &eval_MK_LIST("multisitemap", $x, @MK_SITE); + $x =~ /<name/ or next; + $x =~ s|<!--[u]se1-->|</td><td valign=\"top\"><b>|; + $x =~ s|<!--[e]nd1-->|</b>|; + $x =~ s|<!--[u]se2-->|<br>|; + $x =~ s|<!--[u]se.-->|<br>|; $x =~ s/<!--[^<>]*-->/ /g; + $x =~ s|<name |<$_tiny_><a |; $x =~ s|</name>||; + $x =~ s|<date>|<small style="date">|; + $x =~ s|</date>|</small></a><br></$_tinyX_>|; + $x =~ s|<long>|<!--long-->|; + $x =~ s|</long>|<!--/long-->|; + chomp $x; + push @OUT, $x.$n; + } + push @OUT, "</td><tr></table>".$n; + return @OUT; +} + +sub make_listsitemap +{ # traditional - the body contains a list with date and title extras + my ($INPUTS,$Z)= @_ ; $INPUTS=\@MK_DATA if not $INPUTS; + @MK_SITE = &siteinfo2sitemap(); # have <name><long><date> addon-sed + my @OUT = (); + my $_form_=sub{ + "<!--\"$2\"--><!--use$1--><name href=\"$2\">$3</name><date>......</date><long>$3</long>"}; + my $_tabb_="<td>\ \;</td>"; + push @OUT, "<table cellspacing=\"0\" cellpadding=\"0\">".$n; + my $xx; + for $xx (grep {/<$Q='[Uu]se.'>/} @$INPUTS) { + my $x = "".$xx; + $x =~ />\w\w\w\w*:/ and next; + $x =~ s|<$Q='[Uu]se(.)'>([^ ]*) (.*)<$QX>|&$_form_|e; + $x = &eval_MK_LIST("listsitemap", $x, @MK_SITE); + $x =~ /<name/ or next; + $x =~ s|<!--[u]se(1)-->|<tr class=\"listsitemap$1\"><td>*</td>|; + $x =~ s|<!--[u]se(2)-->|<tr class=\"listsitemap$1\"><td>-</td>|; + $x =~ s|<!--[u]se(.)-->|<tr class=\"listsitemap$1\"><td> </td>|; + $x =~ /<tr.class=\"listsitemap3\">/ and $x =~ s|(<name [^<>]*>)|$1- |; + $x =~ s|<!--[^<>]*-->| |g; + $x =~ s|<name href=\"name:sitemap:|<name href=\"|; + $x =~ s|<name |<td><a |; $x =~ s|</name>|</a></td>$_tabb_|; + $x =~ s|<date>|<td><small style="date">|; + $x =~ s|</date>|</small></td>$_tabb_|; + $x =~ s|<long>|<td><em><!--long-->|; + $x =~ s|</long>|<!--/long--></em></td></tr>|; + push @OUT, $x.$n; + } + for $xx (grep {/<$Q='[u]se.'>/} @$INPUTS) { + my $x = $xx; + $x =~ s/<$Q='[u]se.'>name:sitemap://; $x =~ s|<$QX>||; $x =~ s:\s*::gs; + if (-f $x) { + for (grep {/<tr.class=\"listsitemap\d\">/} source($x)) { + push @OUT, $_; + } + } + } + push @OUT, "</table>".$n; + return @OUT; +} + +my $_xi_include_= + "<xi:include xmlns:xi=\"http://www.w3.org/2001/XInclude\" parse=\"xml\""; + +sub make_xmlsitemap +{ # traditional - the body contains a list with date and title extras + my ($INPUTS,$Z)= @_ ; $INPUTS=\@MK_DATA if not $INPUTS; + @MK_SITE = &siteinfo2sitemap(); # have <name><long><date> addon-sed + my @OUT = (); + my $_form_=sub{"<!--\"$2\"--><name href=\"$2\">$3</name>"}; + my $xx; + for $xx (grep {/<$Q='[Uu]se.'>/} @$INPUTS) { + my $x = "".$xx; + $x =~ />\w\w\w\w*:/ and next; + $x =~ s|<$Q='[Uu]se(.)'>([^ ]*) (.*)<$QX>|&$_form_|e; + $x = &eval_MK_LIST("listsitemap", $x, @MK_SITE); + $x =~ /<name/ or next; + $x =~ m|href="${SITEFILE}"| and next; + $x =~ m|href="${SITEFILE}l"| and next; + $x =~ s|(href="[^<>]*)\.html(")|$1.xml$2|g; + $x =~ s|.*<name|$_xi_include_$n |; + $x =~ s|>.*</name>| />|; + push @OUT, $x.$n; + } + return @OUT; +} + +sub print_extension +{ + my ($ARG,$Z)= @_ ; $ARG=$o{print} if not $ARG; + if ($ARG =~ /^([.-])/) { + return $ARG; + } else { + return ".print"; + } +} + +sub from_sourcefile +{ + my ($U,$Z) = @_; + if (-f $U) { + return $U; + } elsif (-f "$o{src_dir}/$U") { + return "$o{src_dir}/$U"; + } else { + return $U; + } +} + +sub html_sourcefile # generally just cut away the trailing "l" (ell) +{ # making "page.html" argument into "page.htm" return + my ($U,$Z) = @_; + my $_SRCFILE_=$U; $_SRCFILE_ =~ 
s/l$//; + my $_XMLFILE_=$U; $_XMLFILE_ =~ s/\.html$/.dbk/; + if (-f $_SRCFILE_) { + return $_SRCFILE_; + } elsif (-f $_XMLFILE_) { + return $_XMLFILE_; + } elsif (-f "$o{src_dir}/$_SRCFILE_") { + return "$o{src_dir}/$_SRCFILE_"; + } elsif (-f "$o{src_dir}/$_XMLFILE_") { + return "$o{src_dir}/$_XMLFILE_"; + } else { + return ".//$_SRCFILE_"; + } +} +sub html_printerfile_sourcefile +{ + my ($U,$Z) = @_; + if (not $printerfriendly) { + $U =~ s/l\$//; return $U; + } else { + my $_ext_=&sed_slash_key(&print_extension($printerfriendly)); + $U =~ s/l\$//; $U =~ s/$_ext_([.][\w]*)$/$1/; return $U; + } +} + +sub fast_html_printerfile { + my ($U,$V,$Z) = @_; + my $x=&html_printerfile($U) ; return basename($x); +# my $x=&html_printerfile($U) ; return &fast_href($x,$V); +} + +sub html_printerfile # generate the printerfile for a given normal output +{ + my ($U,$Z) = @_; + my $_ext_=&esc(&print_extension($printerfriendly)); + $U =~ s/([.][\w]*)$/$_ext_$1/; return $U; # index.html -> index.print.html +} + +sub make_printerfile_fast # generate s/file.html/file.print.html/ for hrefs +{ # we do that only for the $FILELIST + my ($U,$Z) = @_; + my $ALLPAGES=$U; + my @OUT = (); + for my $p (@$ALLPAGES) { + my $a=&sed_slash_key($p); + my $b=&html_printerfile($p); + if ($b ne $p) { + $b =~ s:/:\\/:g; + push @OUT, + "s/<a href=\\\"$a\\\"([^<>])*>/<a href=\\\"$b\\\"\$1>/;"; + } + } + return @OUT; +} + +sub echo_printsitefile_style +{ + my $_bold_="text-decoration : none ; font-weight : bold ; "; + return " <style>" + ."$n a:link { $_bold_ color : #000060 ; }" + ."$n a:visited { $_bold_ color : #000040 ; }" + ."$n body { background-color : white ; }" + ."$n </style>" + ."$n"; +} + +sub make_printsitefile_head # $sitefile +{ + my $MK_STYLE = &echo_printsitefile_style(); + my @OUT = (); + for (source($SITEFILE)) { + if (/<head>/) { push @OUT, $_; + push @OUT, $MK_STYLE; next; } + if (/<title>/) { push @OUT, $_; next; } + if (/<\/head>/) { push @OUT, $_; next; } + if (/<body>/) { push @OUT, $_; next; } + if (/<link [^<>]*rel=\"shortcut icon\"[^<>]*>/) { + push @OUT, $_; next; + } + } + return @OUT; +} + +# ------------------------------------------------------------------------ +# The printsitefile is a long text containing html href markups where +# each of the href lines in the file is being prefixed with the section +# relation. During a secondary call the printsitefile can grepp'ed for +# those lines that match a given output fast-file. 
The result is a +# navigation header with 1...3 lines matching the nesting level + +# these alt-texts will be only visible in with a text-mode browser: +my $printsitefile_square="width=\"8\" height=\"8\" border=\"0\""; +my $printsitefile_img_1="<img alt=\"|go text:\" $printsitefile_square />"; +my $printsitefile_img_2="<img alt=\"||topics:\" $printsitefile_square />"; +my $printsitefile_img_3="<img alt=\"|||pages:\" $printsitefile_square />"; +my $_SECT="mksite:sect:"; + +sub echo_current_line # $sect $extra +{ + # add the prefix which is used by select_in_printsitefile to cut out things + my ($N,$M,$Z) = @_; + return "<!--$_SECT\"$N\"-->$M"; +} +sub make_current_entry # $sect $file ## requires $MK_SITE +{ + my ($S,$R,$Z) = @_; + my $RR=&sed_slash_key($R); + my $sep=" - " ; my $_left_=" [ " ; my $_right_=" ] "; + my $name = site_get_list_title($R); + $_ = &echo_current_line ("$S", "<a href=\"$R\">$name</a>$sep"); + if ($R eq $S) { + s/<a href/$_left_$&/; + s/<\/a>/$&$_right_/; + } + return $_; +} +sub echo_subpage_line # $sect $extra +{ + my ($N,$M,$Z) = @_; + return "<!--$_SECT*:\"$N\"-->$M"; +} + +sub make_subpage_entry +{ + my ($S,$R,$Z) = @_; + my $RR=&sed_slash_key($R); + my $sep=" - " ; + my $name = site_get_list_title($R); + $_ = &echo_subpage_line ("$S", "<a href=\"$R\">$name</a>$sep"); + return $_; +} + +sub make_printsitefile +{ + # building the printsitefile looks big but its really a loop over sects + my ($INPUTS,$Z) = @_; $INPUTS=\@MK_DATA if not $INPUTS; + @MK_SITE = &siteinfo2sitemap(); # have <name><long><date> addon-sed + savelist(\@MK_SITE,"SITE"); + + my @OUT = &make_printsitefile_head ($SITEFILE); + my $sep=" - " ; + my $_sect1= + "<a href=\"#.\" title=\"section\">$printsitefile_img_1</a> ||$sep"; + my $_sect2= + "<a href=\"#.\" title=\"topics\">$printsitefile_img_2</a> ||$sep"; + my $_sect3= + "<a href=\"#.\" title=\"pages\">$printsitefile_img_3</a> ||$sep"; + + my $_SECT1="mksite:sect1"; + my $_SECT2="mksite:sect2"; + my $_SECT3="mksite:sect3"; + + @MK_SECT1 = &site_get_rootsections(); + # round one - for each root section print a current menu + for my $r (@MK_SECT1) { + push @OUT, &echo_current_line ("$r", "<!--$_SECT1:A--><br>$_sect1"); + for my $s (@MK_SECT1) { + push @OUT, &make_current_entry ("$r", "$s"); + } + push @OUT, &echo_current_line ("$r", "<!--$_SECT1:Z-->"); + } + + # round two - for each subsection print a current and subpage menu + for my $r (@MK_SECT1) { + @MK_SECT2 = &site_get_subpages ("$r"); + for my $s (@MK_SECT2) { + push @OUT, &echo_current_line ("$s", "<!--$_SECT2:A--><br>$_sect2"); + for my $t (@MK_SECT2) { + push @OUT, &make_current_entry ("$s", "$t"); + } # "$t" + push @OUT, &echo_current_line ("$s", "<!--$_SECT2:Z-->"); + } # "$s" + my $_have_children_=""; + for my $t (@MK_SECT2) { + if (not $_have_children_) { + push @OUT, &echo_subpage_line ("$r", "<!--$_SECT2:A--><br>$_sect2"); } + $_have_children_ .= "1"; + push @OUT, &make_subpage_entry ("$r", "$t"); + } + if ($_have_children_) { + push @OUT, &echo_subpage_line ("$r", "<!--$_SECT2:Z-->"); } + } # "$r" + + # round three - for each subsubsection print a current and subpage menu + for my $r (@MK_SECT1) { + @MK_SECT2 = &site_get_subpages ("$r"); + for my $s (@MK_SECT2) { + @MK_SECT3 = &site_get_subpages ("$s"); + for my $t (@MK_SECT3) { + push @OUT, &echo_current_line ("$t", "<!--$_SECT3:A--><br>$_sect3"); + for my $u (@MK_SECT3) { + push @OUT, &make_current_entry ("$t", "$u"); + } # "$t" + push @OUT, &echo_current_line ("$t", "<!--$_SECT3:Z-->"); + } # "$t" + my $_have_children_=""; + 
for my $u (@MK_SECT3) { + if (not $_have_children_) { + push @OUT, &echo_subpage_line ("$s", "<!--$_SECT3:A--><br>$_sect3"); } + $_have_children_ .= "1"; + push @OUT, &make_subpage_entry ("$s", "$u"); + } + if ($_have_children_) { + push @OUT, &echo_subpage_line ("$s", "<!--$_SECT3:Z-->"); } + } # "$s" + } # "$r" + push @OUT, "<a name=\".\"></a>"; + push @OUT, "</body></html>"; + savelist(\@OUT,"FORM"); + return @OUT; +} + +# create a selector that can grep a printsitefile for the matching entries +sub select_in_printsitefile # arg = "page" : return to stdout >> $P.$HEAD +{ + my ($N,$Z) = @_; + my $_selected_="$N" ; $_selected_="$F" if not $_selected_; + my $_section_=&sed_slash_key($_selected_); + my @OUT = (); + push @OUT, "s/^<!--$_SECT\\\"$_section_\\\"-->//;"; # sect3 + push @OUT, "s/^<!--$_SECT\[*\]:\\\"$_section_\\\"-->//;"; # children + $_selected_=&site_get_parentpage($_selected_); + $_section_=&sed_slash_key($_selected_); + push @OUT, "s/^<!--$_SECT\\\"$_section_\\\"-->//;"; # sect2 + $_selected_=&site_get_parentpage($_selected_); + $_section_=&sed_slash_key($_selected_); + push @OUT, "s/^<!--$_SECT\\\"$_section_\\\"-->//;"; # sect1 + push @OUT, "/^<!--$_SECT\\\"[^\\\"]*\\\"-->/ and next;"; + push @OUT, "/^<!--$_SECT\[*\]:\\\"[^\\\"]*\\\"-->/ and next;"; + push @OUT, "s/^<!--mksite:sect[$NN]:[$AZ]-->//;"; + return @OUT; +} + +sub body_for_emailfooter +{ + return "" if $emailfooter eq "no"; + my $_email_=$emailfooter; $_email_ =~ s|[?].*||; + my $_dated_=&info_get_entry("updated"); + return "<hr><table border=\"0\" width=\"100%\"><tr><td>" + ."$n"."<a href=\"mailto:$emailfooter\">$_email_</a>" + ."$n"."</td><td align=\"right\">" + ."$n"."$_dated_</td></tr></table>" + ."$n"; +} + +# =================================================================== CSS +# There was another project to support sitemap build from xml files. +# The source format was using .dbk+xml with embedded references to .css +# files for visual preview in a browser. An docbook xml file with semantic +# outlines is far better suited for quality documentation than any html +# source. It happens that the xml/css support in browsers is still not +# very portable - especially embedded css style blocks are a nightmare. +# Instead we (a) grab all non-html xml markup tags (b) grab all referenced +# css stylesheets (c) cut out css defs from [b] that are known by [a] and +# (d) append those to the <style> tag in the output html file as well as +# (e) reformatting the defs as well as markups from tags to tag classes. +# Input dbk/htm +# <?xml-stylesheet type="text/css" href="html.css" ?> <!-- dbk/xml --> +# <link rel="stylesheet" type="text/css" href="sdocbook.css" /> <!-- xhtml --> +# <article><para> +# Using some <command>exe</command> +# </para></article> +# Input css: +# article { .. ; display : block } +# para { .. ; display : block } +# command { .. ; display : inline } +# Output html: +# <html><style type="text/css"> +# div .article { .. } +# div .para { .. } +# span .command { .. 
} +# </style> +# <div class="article"><div class="para> +# Using some <span class="command">exe</span> +# </div></div> + +sub css_sourcefile +{ + my ($X,$XXX) = @_; + return $X if -f $X; + return "$o{src_dir}/$X" if -f "$o{src_dir}/$X"; + return "$X" if "$X" =~ m:^/:; + return "./$X"; +} + +my %XMLTAGS = (); +sub css_xmltags # $SOURCEFILE +{ + my $X=$SOURCEFILE; + my %R = (); + my $line; + foreach $line (source($SOURCEFILE)) { + $line =~ s|>[^<>]*<|><|g; + $line =~ s|^[^<>]*<|<|; + $line =~ s|>[^<>]*\$|>|; + my $item; + foreach $item (split /</, $line) { + $item =~ m:^/: and next; + $item =~ m:^\s*$: and next; + $item !~ m|>| and next; + $item =~ s|>.*||; + chomp $item; + $R{$item} = ""; + } + } + @{$XMLTAGS{$X}} = keys %R; +} + +my %XMLSTYLESHEETS = (); +sub css_xmlstyles # $SOURCEFILE +{ + my $X=$SOURCEFILE; + my %R = (); + my $text = ""; + my $line = ""; + foreach $line (source($SOURCEFILE)) { + $text .= $line; + $text =~ s|<link *rel=[\'\"]*stylesheet|<?xml-stylesheet |; + if ($text !~ m/<.xml-stylesheet/) { $text = ""; next; } + if ($text !~ m/href=/) { next; } + $text =~ s|^.*<.xml-stylesheet||; + $text =~ s|^.*href=[\"\']||; $text =~ s|[\"\'].*||s; + chomp $text; + $R{$text} = ""; + } + foreach $line (source($SITEFILE)) { + $text .= $line; + $text =~ s|<link *rel=[\'\"]*stylesheet|<?xml-stylesheet |; + if ($text !~ m/<.xml-stylesheet/) { $text = ""; next; } + if ($text !~ m/href=/) { next; } + $text =~ s|^.*<.xml-stylesheet||; + $text =~ s|^.*href=[\"\']||; $text =~ s|[\"\'].*||s; + chomp $text; + $R{$text} = ""; + } + @{$XMLSTYLESHEETS{$X}} = keys %R; +} + +my %XMLTAGSCSS = (); +sub css_xmltags_css # $SOURCEFILE +{ + my $X=$SOURCEFILE; + my @S = $XMLTAGS{$X}; + my @R = (); + my $xmlstylesheet; + foreach $xmlstylesheet (@{$XMLSTYLESHEETS{$X}}) { + my $stylesheet = css_sourcefile($xmlstylesheet); + if (-f $stylesheet) { + push @R, "/* $xmlstylesheet */"; + my $text = ""; + my $line = ""; + my $STYLESHEET = $stylesheet; + open STYLESHEET, "<$STYLESHEET" or next; + foreach $line (<STYLESHEET>) + { + $text .= $line; + if ($text =~ /^[^\{]*\}/s) { $text = ""; next; } + if ($text !~ /^[^\{]*\{.*\}/s) { next; } + $text =~ s|\r||g; + my $xmltag; my $found = 0; + foreach $xmltag (grep /^\w/, @{$XMLTAGS{$X}}) { + $xmltag =~ s| .*||; + if (grep {$_ eq $xmltag} qw/title section/) { + next if $xmltag eq "section"; + $found++ if $text =~ + /\b$xmltag\s*(?:,[^{},]*)*\s*\{/s; + my $xmlparent; + foreach $xmlparent (@{$XMLTAGS{$X}}) { + $xmlparent =~ s| .*||; + /^\w/ or next; + $found++ if $text =~ + /\b$xmlparent\s+$xmltag\s*(?:,[^{},]*)*\s*\{/s; + } + } else { + $found++ if $text =~ + /\b$xmltag\s*(?:,[^\{\},]*)*\{/s; + } + last if $found; + } + if (not $found) { $text = ""; next; } + foreach $xmltag (grep /^\w/, @{$XMLTAGS{$X}}) { + $xmltag =~ s| .*||; + if (grep {$_ eq $xmltag} @HTMLTAGS) { next; } + if (grep {$_ eq $xmltag} @HTMLTAGS2) { next; } + $text =~ s/(\b$xmltag\s*(?:,[^{},]*)*\s*\{)/.$1/gs; + } + chomp $text; + push @R, $text; $text = ""; next; + } + } else { + warn "$xmlstylesheet : ERROR, no such stylesheet $xmlstylesheet"; + } + } + @{$XMLTAGSCSS{$X}} = @R; +} + +my %XMLMAPPING = (); +sub css_xmlmapping # $SOURCEFILE +{ + my $X=$SOURCEFILE; + my %R = (); + foreach (@{$XMLTAGSCSS{$X}}) { + my $span = ""; + $span="li" if /\bdisplay\s*:\s*list-item\b/; + $span="caption" if /\bdisplay\s*:\s*table-caption\b/; + $span="td" if /\bdisplay\s*:\s*table-cell\b/; + $span="tr" if /\bdisplay\s*:\s*table-row\b/; + $span="table" if /\bdisplay\s*:\s*table\b/; + $span="div" if 
/\bdisplay\s*:\s*block\b/; + $span="span" if /\bdisplay\s*:\s*inline\b/; + $span="small" if /\bdisplay\s*:\s*none\b/; + $span="ul" if /\blist-style-type\s*:\s*disc\b/ and $span eq "div"; + $span="ol" if /\blist-style-type\s*:\s*decimal\b/ and $span eq "div"; + $span="tt" if /\bfont-family\s*:\s*monospace\b/ and $span eq "span"; + $span="em" if /\bfont-style\s*:\s*italic\b/ and $span eq "span"; + $span="b" if /\bfont-weight\s*:\s*bold\b/ and $span eq "span"; + $span="pre" if /\bwhite-space\s*:\s*pre\b/ and $span eq "div"; + my $xmltag; + for $xmltag (grep /^\w/, @{$XMLTAGS{$X}}) { + $xmltag =~ s| .*||; + if (/\.$xmltag\b/s) { + $R{$xmltag} = $span; + $R{$xmltag} = "p" if $xmltag eq "para" and $span eq "div"; + $R{$xmltag} = "a" if $xmltag eq "ulink" and $span eq "span"; + } + } + } + %{$XMLMAPPING{$X}} = %R; +} + +sub css_scan # $SOURCEFILE +{ + css_xmltags (); + css_xmlstyles (); + css_xmltags_css (); + css_xmlmapping (); +} + +sub tags2span_sed # $SOURCEFILE > $++ +{ + my $X=$SOURCEFILE; + my $xmltag; + my @R = (); + push @R, "s|<[?]xml-stylesheet[^<>]*[?]>||"; + push @R, "s|<link *rel=['\"]*stylesheet[^<>]*>||"; + push @R, "s|<section[^<>]*>||g;"; + push @R, "s|</section[^<>]*>||g;"; + for $xmltag (grep /^\w/, @{$XMLTAGS{$X}}) { + $xmltag =~ s| .*||; + if (grep {$_ eq $xmltag} @HTMLTAGS) { next; } + if (grep {$_ eq $xmltag} @HTMLTAGS2) { next; } + my $span = $XMLMAPPING{$X}{$xmltag}; + $span = "span" if $span eq ""; + push @R, "s|<$xmltag([\\n\\t ][^<>]*)url=|<$span class=\"$xmltag\"\$1href=|g;"; + push @R, "s|<$xmltag([\\n\\t >])|<$span class=\"$xmltag\"\$1|g;"; + push @R, "s|</$xmltag([\\n\\t >])|</$span\$1|g;"; + } + my $xmlstylesheet; + foreach $xmlstylesheet (@{$XMLSTYLESHEETS{$X}}) { + my $H="[^<>]*href=[\'\"]${xmlstylesheet}[\'\"][^<>]*"; + push @R, "s|<[?]xml-stylesheet$H>||;"; + push @R, "s|<link[^<>]* rel=['\"]*stylesheet['\"]$H>||;"; + } + return @R; +} + +sub tags2meta_sed # $SOURCEFILE > $++ +{ + my @R = (); + push @R, " <style type=\"text/css\"><!--"; + push @R, map {s/(^|\n)/$1 /g;$_} @{$XMLTAGSCSS{$SOURCEFILE}}; + push @R, " --></style>"; + @R = () if $#R < 3; + return @R; +} + +# ========================================================================== +# xml/docbook support is taking an dbk input file converting any html DBK +# syntax into pure docbook tagging. Each file is being given a docbook +# doctype so that an xml/docbook viewer can render it correctly - that +# is needed atleast since docbook files do not embed stylesheet infos. +# Most of the processing is related to remap html markup and some other +# shortcut markup into correct docbook markup. The result is NOT checked +# for being well-formed or even matching the docbook schema DTD at all. 
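# For example, a html fragment like
#      <h2>Intro</h2> <p>see <em>notes</em> and <a href="x.html">x</a></p>
# is remapped by make_xmlfile below into (roughly, details simplified)
#      <title>Intro</title> <para>see <emphasis>notes</emphasis>
#      and <ulink url="x.html">x</ulink></para>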
+ +sub scan_xml_rootnode +{ + my ($INF,$XXX) = @_; + $INF = \@{$DATA{$F}} if not $INF; + for my $entry (source($SOURCEFILE)) { + my $line = $entry; next if $line !~ /<\w/; + $line =~ s/<(\w*).*/$1/s; + # print ":",$line,$n; + push @{$INF}, "<!root $F>$line"; + return; + } +} + +sub get_xml_rootnode +{ + my ($INF,$XXX) = @_; + $INF = \@{$DATA{$F}} if not $INF; + my $_file_ = sed_slash_key($F); + foreach my $entry (grep /^<!root $_file_>/, @{$INF}) { + my $line=$entry; $line =~ s|.*>||; + return $line; + } +} + +sub xml_sourcefile +{ + my ($X,$XXX) = @_; + my $XMLFILE=$X; $XMLFILE =~ s/\.xml$/.dbk/; + my $SRCFILE=$X; $SRCFILE =~ s/\.xml$/.htm/; + $XMLFILE="///" if $X eq $XMLFILE; + $SRCFILE="///" if $X eq $SRCFILE; + return $XMLFILE if -f $XMLFILE; + return $SRCFILE if -f $SRCFILE; + return "$o{src_dir}/$XMLFILE" if -f "$o{src_dir}/$XMLFILE"; + return "$o{src_dir}/$SRCFILE" if -f "$o{src_dir}/$SRCFILE"; + return ".//$XMLFILE"; # $++ (not found?) +} + +sub scan_xmlfile +{ + $SOURCEFILE= &xml_sourcefile($F); + hint "'$SOURCEFILE': scanning xml -> '$F'"; + scan_xml_rootnode(); + my $rootnode=&get_xml_rootnode(); $rootnode =~ s|^(h\d.*$)|$1 <?section?>|; + hint "'$SOURCEFILE': rootnode ('$rootnode')"; +} + +sub make_xmlfile +{ + $SOURCEFILE= &xml_sourcefile($F); + my $X=$SOURCEFILE; + my $article= &get_xml_rootnode(); + $article="article" if $article eq ""; + my $text = ""; + $text .= '<!DOCTYPE '.$article. + ' PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN"'.$n; + $text .= ' "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd">' + .$n; + for my $stylesheet (@{$XMLSTYLESHEETS{$X}}) { + $text .= "<?xml-stylesheet type=\"text/css\" href=\"$stylesheet\" ?>" + .$n; + } + for (source($SOURCEFILE)) { + s!<>!\ \;!g; + s!(&)(&)!${1}amp;${2}amp;!g; + s!(<[^<>]*)(width)(=)(\d+\%*)!$1$2$3\"$4\"!g; + s!(<[^<>]*)(cellpadding)(=)(\d+\%*)!$1$2$3\"$4\"!g; + s!(<[^<>]*)(border)(=)(\d+\%*)!$1$2$3\"$4\"!g; + s!<[?]xml-stylesheet[^<>]*>!!; + s!<link[^<>]* rel=[\'\"]*stylesheet[^<>]*>!!; + s!<[hH]\d!<title!g; + s!</[hH]\d!</title!g; + s!(</title> *)([^<>]*\w[^<>\r\n]*)$!$1<sub>$2</sub>!; + s!(</title>.*)<sub>!$1<subtitle>!g; + s!(</title>.*)</sub>!$1</subtitle>!g; + s!(<section>[^<>]*)(<date>.*</date>[^<>\n]*)$!$1<sectioninfo>$2</sectioninfo>!gx; + s!<em>!<emphasis>!g; + s!</em>!</emphasis>!g; + s!<i>!<emphasis>!g; + s!</i>!</emphasis>!g; + s!<b>!<emphasis role=\"bold\">!g; + s!</b>!</emphasis>!g; + s!<u>!<emphasis role=\"underline\">!g; + s!</u>!</emphasis>!g; + s!<big>!<emphasis role=\"strong\">!g; + s!</big>!</emphasis>!g; + s!<(s|strike)>!<emphasis role=\"strikethrough\">!g; + s!</(s|strike)>!</emphasis>!g; + s!<center>!<blockquote><para>!g; + s!</center>!</para></blockquote>!g; + s!<p align=(\"\w*\")>!<para role=${1}>!g; + s!<[pP]>!<para>!g; + s!</[pP]>!</para>!g; + s!<(pre|PRE)>!<screen>!g; + s!</(pre|PRE)>!</screen>!g; + s!<a( [^<>]*)name=([^<>]*)/>!<anchor ${1}id=${2}/>!g; + s!<a( [^<>]*)name=([^<>]*)>!<anchor ${1}id=${2}/>!g; + s!<a( [^<>]*)href=!<ulink${1}url=!g; + s!</a>!</ulink>!g; + s! remap=\"url\">[^<>]*</ulink>! 
/>!g; + s!<(/?)span(\s[^<>]*)?>!<${1}phrase${2}>!g; + s!<small(\s[^<>]*)?>!<phrase role=\"small\"${1}>!g; + s!</small(\s[^<>]*)?>!</phrase${1}>!g; + s!<(/?)(sup)>!<${1}superscript>!g; + s!<(/?)(sub)>!<${1}subscript>!g; + s!(<)(li)(><)!${1}listitem${3}!g; + s!(></)(li)(>)!${1}listitem${3}!g; + s!(<)(li)(>)!${1}listitem${3}<para>!g; + s!(</)(li)(>)!</para>${1}listitem${3}!g; + s!(</?)(ul)>!${1}itemizedlist>!g; + s!(</?)(ol)>!${1}orderedlist>!g; + s!(</?)(dl)>!${1}variablelist>!g; + s!<(/?)DT>!<${1}dt>!g; + s!<(/?)DD>!<${1}dd>!g; + s!<(/?)DL>!<${1}dl>!g; + s!<BLOCKQUOTE>!<blockquote><para>!g; + s!</BLOCKQUOTE>!</para></blockquote>!g; + s!<(/?)dl>!<${1}variablelist>!g; + s!<dt\b([^<>]*)>!<varlistentry${1}><term>!g; + s!</dt\b([^<>]*)>!</term>!g; + s!<dd\b([^<>]*)><!<listitem${1}><!g; + s!></dd\b([^<>]*)>!></listitem></varlistentry>!g; + s!<dd\b([^<>]*)>!<listitem${1}><para>!g; + s!</dd\b([^<>]*)>!</para></listitem></varlistentry>!g; + s!<table[^<>]*><tr><td>(<table[^<>]*>)!$1!; + s!(</table>)</td></tr></table>!$1!; + s!<table\b([^<>]*)>!<informaltable${1}><tgroup cols=\"2\"><tbody>!g; + s!</table\b([^<>]*)>!</tbody></tgroup></informaltable>!g; + s!(</?)tr(\s[^<>]*)?>!${1}row${2}>!g; + s!(</?)td(\s[^<>]*)?>!${1}entry${2}>!g; + s!(<informaltable[^<>]*\swidth=\"100\%\")!$1 pgwide=\"1\"!g; + s!(<tgroup[<>]*\scols=\"2\">)(<tbody>) + !$1<colspec colwidth=\"1*\" /><colspec colwidth=\"1*\" />$2!gx; + s!(<entry[^<>]*\s)width=(\"\d*\%*\")!${1}remap=${2}!g; + s!<nobr>([\'\`]*)<tt>!<cmdsynopsis><command>$1!g; + s!</tt>([\'\`]*)</nobr>!$1</command></cmdsynopsis>!g; + s!<nobr><(tt|code)>([\`\"\'])!<cmdsynopsis><command>$2!g; + s!<(tt|code)><nobr>([\`\"\'])!<cmdsynopsis><command>$2!g; + s!([\`\"\'])</(tt|code)></nobr>!$1</command></cmdsynopsis>!g; + s!([\`\"\'])</nobr></(tt|code)>!$1</command></cmdsynopsis>!g; + s!(</?)tt>!${1}constant>!g; + s!(</?)code>!${1}literal>!g; + s!<br>!<br />!g; + s!<br */>!<screen role=\"linebreak\">\n</screen>!g; + $text .= $_; + } + open F, ">$F" or die "could not write $F: $!"; print F $text; close F; + echo "'$SOURCEFILE': ",&ls_s($SOURCEFILE)," >> ",&ls_s($F); +} + +sub make_xmlmaster +{ + $SOURCEFILE= &xml_sourcefile($F); + my $X=$SOURCEFILE; + my $article="section"; # book? chapter? + my $text = ""; + $text .= '<!DOCTYPE '.$article. + ' PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN"'.$n; + $text .= ' "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd">' + .$n; + for my $stylesheet (@{$XMLSTYLESHEETS{$X}}) { + $text .= "<?xml-stylesheet type=\"text/css\" href=\"$stylesheet\" ?>" + .$n; + } + # $text .= "<section><sectioninfo><date/><authorblurb/></sectioninfo>..."; + $text .= "<section><title>Documentation</title>$n"; + for (make_xmlsitemap()) { + $text .= $_; + } + $text .= "</section>$n"; + open F, ">$F" or die "could not write $F: $!"; print F $text; close F; + echo "'$SOURCEFILE': ",&ls_s($SOURCEFILE)," >*> ",&ls_s($F); +} + +# ========================================================================== +# +# During processing we will create a series of intermediate files that +# store relations. They all have the same format being +# =relationtype=key value +# where key is usually s filename or an anchor. For mere convenience +# we assume that the source html text does not have lines that start +# off with =xxxx= (btw, ye remember perl section notation...). Of course +# any other format would be usuable as well. 
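# (editor's note, illustration only) In this perl variant those records end up
# in @MK_DATA as pseudo-tags built from $Q/$QX - presumably "q class=" and
# "/q" as in the sh variant - so with hypothetical filenames the entries
# look like
#     <q class='use1'>index.html Home</q>
#     <q class='sect'>download.html index.html</q>
#     <q class='node'>download.html index.html</q>
# meaning: index.html is a top-level sitemap entry titled "Home", and
# download.html has index.html as its parenting section/node.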
+# + +# we scan the SITEFILE for href references to be converted +# - in the new variant we use a ".gets.tmp" sed script that SECTS +# marks all interesting lines so they can be checked later +# with an sed anchor of sect="[$NN]" (or sect="[$AZ]") +my $S="\\ \\;"; +# S="[&]nbsp[;]" + +# HR and EM style markups must exist in input - BR sometimes left out +# these routines in(ter)ject hardspace before, between, after markups +# note that "<br>" is sometimes used with HR - it must exist in input +sub echo_HR_EM_PP +{ + my ($U,$V,$W,$X,$Z) = @_; + my @list = ( + "s%^($U$V$W*<a) (href=)%\$1 $X \$2%;", + "s%^(<>$U$V$W*<a) (href=)%\$1 $X \$2%;", + "s%^($S$U$V$W*<a) (href=)%\$1 $X \$2%;", + "s%^($U<>$V$W*<a) (href=)%\$1 $X \$2%;", + "s%^($U$S$V$W*<a) (href=)%\$1 $X \$2%;", + "s%^($U$V<>$W*<a) (href=)%\$1 $X \$2%;", + "s%^($U$V$S$W*<a) (href=)%\$1 $X \$2%;" ); + return @list; +} + +sub echo_br_EM_PP +{ + my ($U,$V,$W,$X,$Z) = @_; + my @list = &echo_HR_EM_PP ("$U", "$V", "$W", "$X"); + my @listt = ( + "s%^($V$W*<a) (href=)%\$1 $X \$2%;", + "s%^(<>$V$W*<a) (href=)%\$1 $X \$2%;", + "s%^($S$V$W*<a) (href=)%\$1 $X \$2%;", + "s%^($V<>$W*<a) (href=)%\$1 $X \$2%;", + "s%^($V$S$W*<a) (href=)%\$1 $X \$2%;", + "s%^($V$W*<><a) (href=)%\$1 $X \$2%;", + "s%^($V$W*$S<a) (href=)%\$1 $X \$2%;" ); + push @list, @listt; + return @list; +} + +sub echo_HR_PP +{ + my ($U,$V,$W,$Z) = @_; + my @list = ( + "s%^($U<a) (href=)%\$1 $W \$2%;", + "s%^($U$V*<a) (href=)%\$1 $W \$2%;", + "s%^(<>$U$V*<a) (href=)%\$1 $W \$2%;", + "s%^($S$U$V*<a) (href=)%\$1 $W \$2%;", + "s%^($U<>$V*<a) (href=)%\$1 $W \$2%;", + "s%^($U$S$V*<a) (href=)%\$1 $W \$2%;" ); + return @list; +} +sub echo_br_PP +{ + my ($U,$V,$W,$Z) = @_; + my @list = &echo_HR_PP ("$U", "$V", "$W"); + my @listt = ( + "s%^($V*<a) (href=)%\$1 $W \$2%;", + "s%^(<>$V*<a) (href=)%\$1 $W \$2%;", + "s%^($S$V*<a) (href=)%\$1 $W \$2%;" ); + push @list, @listt; + return @list; +} +sub echo_sp_PP +{ + my ($U,$V,$Z) = @_; + my @list = ( + "s%^(<>$U*<a) (href=)%\$1 $V \$2%;", + "s%^($S$U*<a) (href=)%\$1 $V \$2%;", + "s%^(<><>$U*<a) (href=)%\$1 $V \$2%;", + "s%^($S$S$U*<a) (href=)%\$1 $V \$2%;", + "s%^(<>$U<>*<a) (href=)%\$1 $V \$2%;", + "s%^($S$U$S*<a) (href=)%\$1 $V \$2%;", + "s%^($U<><>*<a) (href=)%\$1 $V \$2%;", + "s%^($U$S$S*<a) (href=)%\$1 $V \$2%;", + "s%^($U<>*<a) (href=)%\$1 $V \$2%;", + "s%^($U$S*<a) (href=)%\$1 $V \$2%;" ); + return @list; +} +sub echo_sp_SP +{ + my ($U,$V,$Z) = @_; + my @list = ( + "s%^($U<a) (href=)%\$1 $V \$2%;", + "s%^(<>$U<a) (href=)%\$1 $V \$2%;", + "s%^($S$U<a) (href=)%\$1 $V \$2%;", + "s%^(<><>$U<a) (href=)%\$1 $V \$2%;", + "s%^($S$S$U<a) (href=)%\$1 $V \$2%;", + "s%^(<>$U<><a) (href=)%\$1 $V \$2%;", + "s%^($S$U$S<a) (href=)%\$1 $V \$2%;", + "s%^($U<><><a) (href=)%\$1 $V \$2%;", + "s%^($U$S$S<a) (href=)%\$1 $V \$2%;", + "s%^($U<><a) (href=)%\$1 $V \$2%;", + "s%^($U$S<a) (href=)%\$1 $V \$2%;" ); + return @list; +} +sub echo_sp_sp +{ + my ($U,$V,$Z) = @_; + my @list = ( + "s%^($U<a) (name=)%\$1 $V \$2%;", + "s%^(<>$U<a) (name=)%\$1 $V \$2%;", + "s%^($S$U<a) (name=)%\$1 $V \$2%;", + "s%^(<><>$U<a) (name=)%\$1 $V \$2%;", + "s%^($S$S$U<a) (name=)%\$1 $V \$2%;", + "s%^(<>$U<><a) (name=)%\$1 $V \$2%;", + "s%^($S$U$S<a) (name=)%\$1 $V \$2%;", + "s%^($U<><><a) (name=)%\$1 $V \$2%;", + "s%^($U$S$S<a) (name=)%\$1 $V \$2%;", + "s%^($U<><a) (name=)%\$1 $V \$2%;", + "s%^($U$S<a) (name=)%\$1 $V \$2%;" ); + return @list; +} + +sub make_sitemap_init +{ + # build a list of detectors that map site.htm entries to a section table + # note that the resulting .gets.tmp / 
.puts.tmp are real sed-script + my $h1="[-|[]"; + my $b1="[*=]"; + my $b2="[-|[]"; + my $b3="[\\/:]"; + my $q3="[\\/:,[]"; + @MK_GETS = (); + push @MK_GETS, &echo_HR_PP ("<hr>", "$h1", "sect=\\\"1\\\""); + push @MK_GETS, &echo_HR_EM_PP("<hr>","<em>", "$h1", "sect=\\\"1\\\""); + push @MK_GETS, &echo_HR_EM_PP("<hr>","<strong>", "$h1", "sect=\\\"1\\\""); + push @MK_GETS, &echo_HR_PP ("<br>", , "$b1$b1", "sect=\\\"1\\\""); + push @MK_GETS, &echo_HR_PP ("<br>", , "$b2$b2", "sect=\\\"2\\\""); + push @MK_GETS, &echo_HR_PP ("<br>", , "$b3$b3", "sect=\\\"3\\\""); + push @MK_GETS, &echo_br_PP ("<br>", , "$b2$b2", "sect=\\\"2\\\""); + push @MK_GETS, &echo_br_PP ("<br>", , "$b3$b3", "sect=\\\"3\\\""); + push @MK_GETS, &echo_br_EM_PP("<br>","<small>" , "$q3" , "sect=\\\"3\\\""); + push @MK_GETS, &echo_br_EM_PP("<br>","<em>" , "$q3" , "sect=\\\"3\\\""); + push @MK_GETS, &echo_br_EM_PP("<br>","<u>" , "$q3" , "sect=\\\"3\\\""); + push @MK_GETS, &echo_HR_PP ("<br>", , "$q3" , "sect=\\\"3\\\""); + push @MK_GETS, &echo_br_PP ("<u>", , "$b2" , "sect=\\\"2\\\""); + push @MK_GETS, &echo_sp_PP ( "$q3" , "sect=\\\"3\\\""); + push @MK_GETS, &echo_sp_SP ( "" , "sect=\\\"2\\\""); + push @MK_GETS, &echo_sp_sp ( "$q3" , "sect=\\\"9\\\""); + push @MK_GETS, &echo_sp_sp ("<br>", "sect=\\\"9\\\""); + @MK_PUTS = map { my $x=$_; $x =~ s/(>)(\[)/$1 *$2/; $x } @MK_GETS; + # the .puts.tmp variant is used to <b><a href=..></b> some hrefs which + # shall not be used otherwise for being generated - this is nice for + # some quicklinks somewhere. The difference: a whitspace "<hr> <a...>" +} + +my $_uses_= sub{"<$Q='use$1'>$2 $3<$QX>" }; +my $_name_= sub{"<$Q='use$1'>name:$2 $3<$QX>" }; + +sub make_sitemap_list +{ + my ($V,$Z) = @_; $V = $SITEFILE if not $V; + # scan sitefile for references pages - store as "=use+=href+ anchortext" + for (source($V)) { + my $x = $_; + local $_ = &eval_MK_LIST("sitemap_list", $x, @MK_GETS); + /<a sect=\"[$NN]\"/ or next; + chomp; + s{.*<a sect=\"([^\"]*)\" href=\"([^\"]*)\"[^<>]*>(.*)</a>.*}{&$_uses_}e; + s{.*<a sect=\"([^\"]*)\" name=\"([^\"]*)\"[^<>]*>(.*)</a>.*}{&$_name_}e; + s{.*<a sect=\"([^\"]*)\" name=\"([^\"]*)\"[^<>]*>(.*)}{&$_name_}e; + /^<$Q=/ or next; + /^<!/ and next; + push @MK_DATA, $_; + } +} + +my $_Uses_= sub{"<$Q='Use$1'>$2 $3<$QX>" }; +my $_Name_= sub{"<$Q='Use$1'>name:$2 $3<$QX>" }; + +sub make_subsitemap_list # file-to-scan +{ + my ($V,$W,$Z) = @_; $V = $SITEFILE if not $V; + # scan sitefile for references pages - store as "=use+=href+ anchortext" + for (source($V)) { + my $x = $_; + local $_ = &eval_MK_LIST("subsitemap_list", $x, @MK_GETS); + /<a sect=\"[$NN]\"/ or next; + chomp; + s{.*<a sect=\"([^\"]*)\" href=\"([^\"]*)\"[^<>]*>(.*)</a>.*}{&$_Uses_}e; + s{.*<a sect=\"([^\"]*)\" name=\"([^\"]*)\"[^<>]*>(.*)</a>.*}{&$_Name_}e; + s{.*<a sect=\"([^\"]*)\" name=\"([^\"]*)\"[^<>]*>(.*)}{&$_Name_}e; + /^<$Q=/ or next; + /^<!/ and next; + s|>([^:./][^:./]*[./])|>$W$1|; + push @MK_DATA, $_; + } +} + +sub make_sitemap_sect +{ + # scan used pages and store prime section group relation =sect= and =node= + # (A) each "use1" creates "=sect=href+ href1" for all following non-"use1" + # (B) each "use1" creates "=node=href2 href1" for all following "use2" + my $sect = ""; + for (grep {/<$Q='[u]se.'>/} @MK_DATA) { + if (/<$Q='[u]se1'>([^ ]*) .*/) { $sect = $1; } + my $x = $_; # chomp $x; + $x =~ s|<$Q='[u]se.'>([^ ]*) .*|<$Q='sect'>$1 $sect<$QX>|; + push @MK_DATA, $x; + } + for (grep {/<$Q='[u]se.'>/} @MK_DATA) { + if (/<$Q='[u]se1'>([^ ]*) .*/) { $sect = $1; } + /<$Q='[u]se[13456789]'>/ and 
next; + my $x = $_; # chomp $x; + $x =~ s|<$Q='[u]se.'>([^ ]*) .*|<$Q='node'>$1 $sect<$QX>|; + push @MK_DATA, $x; + } +} + +sub make_sitemap_page +{ + # scan used pages and store secondary group relation =page= and =node= + # the parenting =node= for use3 is usually a use2 (or use1 if none there) + my $sect = ""; + for (grep {/<$Q='[u]se.'>/} @MK_DATA) { + if (/<$Q='[u]se1'>([^ ]*) .*/) { $sect = $1; } + if (/<$Q='[u]se2'>([^ ]*) .*/) { $sect = $1; } + /<$Q='[u]se[1]'>/ and next; + my $x = $_; + $x =~ s|<$Q='[u]se.'>([^ ]*) .*|<$Q='page'>$1<$QX>|; chomp $x; + push @MK_DATA, "$x $sect"; + } + for (grep {/<$Q='[u]se.'>/} @MK_DATA) { + if (/<$Q='[u]se1'>([^ ]*) .*/) { $sect = $1; } + if (/<$Q='[u]se2'>([^ ]*) .*/) { $sect = $1; } + /<$Q='[u]se[12456789]'>/ and next; + my $x = $_; + $x =~ s/<$Q='[u]se.'>([^ ]*) .*/<$Q='node'>$1<$QX>/; chomp $x; + push @MK_DATA, "$x $sect"; ## print "(",$_,")","$x $sect", $n; + } + # and for the root sections we register ".." as the parenting group + for (grep {/<$Q='[u]se1'>/} @MK_DATA) { + my $x = $_; $x = trimm($x); + $x =~ s/<$Q='[u]se.'>([^ ]*) .*/<$Q='node'>$1 ..<$QX>/; chomp $x; + push @MK_DATA, $x; + } +} +sub echo_site_filelist +{ + my @OUT = (); + for (grep {/<$Q='[u]se.'>/} @MK_DATA) { + my $x = $_; $x =~ s/<$Q='[u]se.'>//; $x =~ s/ .*[\n]*//; + push @OUT, $x; + } + return @OUT; +} + +# ========================================================================== +# originally this was a one-pass compiler but the more information +# we were scanning out the more slower the system ran - since we +# were rescanning files for things like section information. Now +# we scan the files first for global information. +# 1.PASS + +sub scan_sitefile # $F +{ + $SOURCEFILE=&html_sourcefile($F); + hint "'$SOURCEFILE': scanning -> sitefile"; + if ($SOURCEFILE ne $F) { + dx_init "$F"; + dx_text ("today", &timetoday()); + my $short=$F; + $short =~ s:.*/::; $short =~ s:[.].*::; # basename for all exts + $short .=" ~"; + DC_meta ("title", "$short"); + DC_meta ("date.available", &timetoday()); + DC_meta ("subject", "sitemap"); + DC_meta ("DCMIType", "Collection"); + DC_VARS_Of ($SOURCEFILE) ; HTTP_VARS_Of ($SOURCEFILE) ; + DC_modified ($SOURCEFILE) ; DC_date ($SOURCEFILE); + DC_section ($F); + DX_text ("date.formatted", &timetoday()); + if ($printerfriendly) { + DX_text ("printerfriendly", fast_html_printerfile($F)); } + if ($ENV{USER}) { DC_publisher ($ENV{USER}); } + echo "'$SOURCEFILE': $short (sitemap)"; + site_map_list_title ($F, "$short"); + site_map_long_title ($F, "generated sitemap index"); + site_map_list_date ($F, &timetoday()); + } +} + +sub scan_htmlfile # "$F" +{ + my ($FF,$Z) = @_; + $SOURCEFILE=&html_sourcefile($F); # SCAN : + hint "'$SOURCEFILE': scanning -> $F"; # HTML : + if ($SOURCEFILE ne $F) { + if ( -f $SOURCEFILE) { + dx_init "$F"; + dx_text ("today", &timetoday()); + dx_text ("todays", &timetodays()); + DC_VARS_Of ($SOURCEFILE); HTTP_VARS_Of ($SOURCEFILE); + DC_title ($SOURCEFILE); + DC_isFormatOf ($SOURCEFILE); + DC_modified ($SOURCEFILE); + DC_date ($SOURCEFILE); DC_date ($SITEFILE); + DC_section ($F); DC_selected ($F); DX_alternative ($SOURCEFILE); + if ($ENV{USER}) { DC_publisher ($ENV{USER}); } + DX_text ("date.formatted", &timetoday()); + if ($printerfriendly) { + DX_text ("printerfriendly", fast_html_printerfile($F)); } + my $sectn=&info_get_entry("DC.relation.section"); + my $short=&info_get_entry("DC.title.selected"); + &site_map_list_title ($F, "$short"); + &info_map_list_title ($F, "$short"); + my $title=&info_get_entry("DC.title"); + 
&site_map_long_title ($F, "$title"); + &info_map_long_title ($F, "$title"); + my $edate=&info_get_entry("DC.date"); + my $issue=&info_get_entry("issue"); + &site_map_list_date ($F, "$edate"); + &info_map_list_date ($F, "$edate"); + css_scan(); + echo "'$SOURCEFILE': '$title' ('$short') @ '$issue' ('$sectn')"; + }else { + echo "'$SOURCEFILE': does not exist"; + site_map_list_title ($F, "$F"); + site_map_long_title ($F, "$F (no source)"); + } + } else { + echo "<$F> - skipped - ($SOURCEFILE)"; + } +} + +sub scan_subsitemap_long +{ + my ($V,$W,$ZZZ) = @_; + for (source($V)) { + my $x = $_; + if ($x =~ m|<a href="([^\"]*)">.*<small style="date">([^<>]*)</small>|) { + &site_map_list_date($W.$1,$2); + } + if ($x =~ m|<a href="([^\"]*)">.*<!--long-->([^<>]*)<!--/long-->|) { + &site_map_long_title($W.$1,$2); + } + } +} + +sub scan_namespec +{ + # nothing so far + # my ($F,$ZZZ) = @_; + if ($F =~ /^name:sitemap:/) { + my $short=$F; + $short =~ s:.*/::; $short =~ s:[.].*::; # basename for all exts + $short =~ s/name:sitemap://; + $short .=" ~"; + site_map_list_title ($F, "$short"); + site_map_long_title ($F, "external sitemap index"); + site_map_list_date ($F, &timetoday()); + echo "'$F' external sitemap index"; + } + elsif ($F =~ /^name:(.*\.html*)$/) { # assuming it is a subsitefile + my $FF=$1; + my $FFF=$FF; $FFF =~ s:/[^/]*$:/:; # dirname + $FFF="" if $FFF !~ m:/:; + make_subsitemap_list($FF, $FFF); + scan_subsitemap_long($FF, $FFF); + } +} +sub scan_httpspec +{ + # nothing so far +} + +sub skip_namespec +{ + # nothing so far +} +sub skip_httpspec +{ + # nothing so far +} + +# ========================================================================== +# and now generate the output pages +# 2.PASS + +sub head_sed_sitemap # $filename $section +{ + my ($U,$V,$Z) = @_; + my $FF=&sed_piped_key($U); + my $SECTION=&sed_slash_key($V); + my $SECTS="sect=\"[$NN$AZ]\"" ; + my $SECTN="sect=\"[$NN]\""; # lines with hrefs + my @OUT = (); + push @OUT, "s|(<a $SECTS href=\\\"$FF\\\">.*</a>)|<b>\$1</b>|;"; + push @OUT, "/ href=\\\"$SECTION\\\"/ " + ."and s|^<td class=\\\"[^\\\"]*\\\"|<td |;" if $sectiontab ne "no"; + return @OUT; +} + +sub head_sed_listsection # $filename $section +{ + # traditional.... the sitefile is the full navigation bar + my ($U,$V,$Z) = @_; + my $FF=&sed_piped_key($U); + my $SECTION=&sed_slash_key($V); + my $SECTS="sect=\"[$NN$AZ]\"" ; + my $SECTN="sect=\"[$NN]\""; # lines with hrefs + my @OUT = (); + push @OUT, "s|(<a $SECTS href=\\\"$FF\\\">.*</a>)|<b>\$1</b>|;"; + push @OUT, "/ href=\\\"$SECTION\\\"/ " + ."and s|^<td class=\\\"[^\\\"]*\\\"|<td |;" if $sectiontab ne "no"; + return @OUT; +} + +sub head_sed_multisection # $filename $section +{ + # sitefile navigation bar is split into sections + my ($U,$V,$Z) = @_; + my $FF=&sed_piped_key($U); + my $SECTION=&sed_slash_key($V); + my $SECTS="sect=\"[$NN$AZ]\"" ; + my $SECTN="sect=\"[$NN]\""; # lines with hrefs + my @OUT = (); + # grep all pages with a =sect= relation to current $SECTION and + # build foreach an sed line "s|<a $SECTS (href=$F)>|<a sect="X" $1>|" + # after that all the (still) numeric SECTNs are deactivated / killed. 
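# (editor's note, illustration only) For a hypothetical page "download.html"
# with a 'sect' relation to the current $SECTION, the loop below emits a rule
# roughly like
#     s|<a sect="[0-9A-Z]" \(href="download.html"\)|<a sect="X" $1|;
# so that link keeps a non-numeric sect= and survives the later rules which
# comment out every <a> still carrying a numeric sect= attribute.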
+ for my $section ($SECTION, $headsection, $tailsection) { + next if $section eq "no"; + for (grep {/^<$Q='sect'>[^ ]* $section/} @MK_DATA) { + my $x = $_; + $x =~ s|<$Q='sect'>||; $x =~ s| .*||; # $filename + $x =~ s/(.*)/s|<a $SECTS \(href=\\\"$1\\\"\)|<a sect=\\\"X\\\" \$1|/; + push @OUT, $x.";"; + } + for (grep {/^<$Q='sect'>name:[^ ]* $section/} @MK_DATA) { + my $x = $_; + $x =~ s|<$Q='sect'>name:||; $x =~ s| .*||; # $filename + $x =~ s/(.*)/s|<a $SECTS \(name=\\\"$1\\\"\)|<a sect=\\\"X\\\" \$1|/; + push @OUT, $x.";"; + } + } + push @OUT, "s|.*<a ($SECTN href=[^<>]*)>.*|<!-- \$1 -->|;"; + push @OUT, "s|.*<a ($SECTN name=[^<>]*)>.*|<!-- \$1 -->|;"; + push @OUT, "s|(<a $SECTS href=\\\"$FF\\\">.*</a>)|<b>\$1</b>|;"; + push @OUT, "/ href=\\\"$SECTION\\\"/ " + ."and s|^<td class=\\\"[^\\\"]*\\\"|<td |;" if $sectiontab ne "no"; + return @OUT; +} + +sub make_sitefile # "$F" +{ + $SOURCEFILE=&html_sourcefile($F); + if ($SOURCEFILE ne $F) { + if (-f $SOURCEFILE) { + # remember that in this case "${SITEFILE}l" = "$F" = "${SOURCEFILE}l" + @MK_VARS = &info2vars_sed(); # have <!--title--> vars substituted + @MK_META = &info2meta_sed(); # add <meta name="DC.title"> values + my @F_HEAD = (); my @F_FOOT = (); + push @F_HEAD, @MK_PUTS; + push @F_HEAD, &head_sed_sitemap ($F, &info_get_entry_section()); + push @F_HEAD, "/<head>/ and $sed_add join(\"\\n\", \@MK_META);"; + push @F_HEAD, @MK_VARS; push @F_HEAD, @MK_TAGS; + push @F_HEAD, "/<\\/body>/ and next;"; #cut lastline + if ( $sitemaplayout eq "multi") { + push @F_FOOT, &make_multisitemap(); # here we use ~foot~ to + } else { + push @F_FOOT, &make_listsitemap(); # hold the main text + } + + my $html = ""; # + $html .= &eval_MK_FILE("SITE", $SITEFILE, @F_HEAD); + $html .= join("", @F_FOOT); + for (source($SITEFILE)) { + /<\/body>/ or next; + $html .= &eval_MK_LIST("sitefile", $_, @MK_VARS); + } + open F, ">$F"; print F $html; close F; + echo "'$SOURCEFILE': ",ls_s($SOURCEFILE)," >-> ",ls_s($F); + savesource("$F.~head~", \@F_HEAD); + savesource("$F.~foot~", \@F_FOOT); +} else { + echo "'$SOURCEFILE': does not exist"; +} } +} + +sub make_htmlfile # "$F" +{ + $SOURCEFILE=&html_sourcefile($F); # 2.PASS + if ("$SOURCEFILE" ne "$F") { + if (-f "$SOURCEFILE") { + if (grep {/<meta name="formatter"/} source($SOURCEFILE)) { + echo "'$SOURCEFILE': SKIP, this sourcefile looks like a formatted file"; + echo "'$SOURCEFILE': (may be a sourcefile in place of a targetfile?)"; + return; } + @MK_VARS = &info2vars_sed(); # have <!--title--> vars substituted + @MK_META = &info2meta_sed(); # add <meta name="DC.title"> values + @MK_SPAN = &tags2span_sed(); # extern text/css -> intern css classes + push @MK_META, &tags2meta_sed(); # extern text/css -> intern css classes + my @F_HEAD = (); my @F_BODY = (); my $F_FOOT = ""; + push @F_HEAD, @MK_PUTS; + if ( $sectionlayout eq "multi") { + push @F_HEAD, &head_sed_multisection ($F, &info_get_entry_section()); + } else { + push @F_HEAD, &head_sed_listsection ($F, &info_get_entry_section()); + } + push @F_HEAD, @MK_VARS; push @F_HEAD, @MK_TAGS; push @F_HEAD, @MK_SPAN; + push @F_HEAD, "/<\\/body>/ and next;"; #cut lastline + push @F_HEAD, "/<head>/ and $sed_add join(\"\\n\",\@MK_META);"; #add metatags + push @F_BODY, "/<title>/ and next;"; #not that line + push @F_BODY, @MK_VARS; push @F_BODY, @MK_TAGS; push @F_BODY, @MK_SPAN; + push @F_BODY, &bodymaker_for_sectioninfo(); #if sectioninfo + push @F_BODY, &info2body_sed(); #cut early + push @F_HEAD, &info2head_sed(); + push @F_HEAD, &make_back_path($F); + if ($emailfooter ne "no") { + 
$F_FOOT = &body_for_emailfooter(); + } + my $html = ""; + $html .= eval_MK_FILE("head", $SITEFILE, @F_HEAD); + $html .= eval_MK_FILE("body", $SOURCEFILE, @F_BODY); + $html .= $F_FOOT; + for (source($SITEFILE)) { + /<\/body>/ or next; + $_ = &eval_MK_LIST("htmlfile", $_, @MK_VARS); + $html .= $_; + } + open F, ">$F" or die "could not write $F: $!"; print F $html; close F; + echo "'$SOURCEFILE': ",&ls_s($SOURCEFILE)," -> ",&ls_s($F); + savesource("$F.~head~", \@F_HEAD); + savesource("$F.~body~", \@F_BODY); + } else { + echo "'$SOURCEFILE': does not exist"; + }} else { + echo "<$F> - skipped"; + } +} + +my $PRINTSITEFILE; +sub make_printerfriendly # "$F" +{ # PRINTER + my $printsitefile="0"; # FRIENDLY + my $BODY_TXT; my $BODY_SED; + my $P=&html_printerfile ($F); + my @P_HEAD = (); my @P_BODY = (); + if ("$F" =~ /^(${SITEFILE}|${SITEFILE}l)$/) { + $printsitefile=">=>" ; $BODY_TXT="$F.~foot~" ; + } elsif ("$F" =~ /^(.*[.]html)$/) { + $printsitefile="=>" ; $BODY_TXT="$SOURCEFILE"; + } + if (grep {/<meta name="formatter"/} source($BODY_TXT)) { return; } + if ($printsitefile ne "0" and -f $SOURCEFILE) { my $x; + @MK_FAST = &make_printerfile_fast (\@FILELIST); + push @P_HEAD, @MK_VARS; push @P_HEAD, @MK_TAGS; push @P_HEAD, @MK_FAST; + @MK_METT = map { $x = $_; $x =~ + /DC.relation.isFormatOf/ and $x =~ s|content=\"[^\"]*\"|content=\"$F\"| ; + $x } @MK_META; + push @P_HEAD, "/<head>/ and $sed_add join(\"\\n\", \@MK_METT);"; + push @P_HEAD, "/<\\/body>/ and next;"; + push @P_HEAD, &select_in_printsitefile ("$F"); + my $_ext_=&print_extension($printerfriendly); +# my $line_=&sed_slash_key($printsitefile_img_2); + push @P_HEAD, "/\\|\\|topics:/" + ." and s| href=\\\"\\#\\.\\\"| href=\\\"$F\\\"|;"; + push @P_HEAD, "/\\|\\|\\|pages:/" + ." and s| href=\\\"\\#\\.\\\"| href=\\\"$F\\\"|;"; + push @P_HEAD, &make_back_path("$F"); + push @P_BODY, @MK_VARS; push @P_BODY, @MK_TAGS; push @P_BODY, @MK_FAST; + push @P_BODY, &make_back_path("$F"); + my $html = ""; + $html .= eval_MK_FILE("p_head", $PRINTSITEFILE, @P_HEAD); + $html .= eval_MK_FILE("p_body", $BODY_TXT, @P_BODY); + for (source($PRINTSITEFILE)) { + /<\/body>/ or next; + $_ = &eval_MK_LIST("printerfriendly", $_, @MK_VARS); + $html .= $_; + } + open P, ">$P" or die "could not write $P: $!"; print P $html; close P; + echo "'$SOURCEFILE': ",ls_s($SOURCEFILE)," $printsitefile ",ls_s($P); + } +} + + +# ======================================================================== +# ======================================================================== +# ======================================================================== +# ======================================================================== +# #### 0. INIT +$F=$SITEFILE; +&make_sitemap_init(); +&make_sitemap_list($SITEFILE); +&make_sitemap_sect(); +&make_sitemap_page(); +savelist(\@MK_DATA, "DATA"); + +@FILELIST=&echo_site_filelist(); +if ($o{filelist} or $o{list} eq "file" or $o{list} eq "files") { + for (@FILELIST) { echo $_; } exit; # --filelist +} +if ($o{files}) { @FILELIST=split(/ /, $o{files}); } # --files +if ($#FILELIST < 0) { warns "nothing to do (no --filelist)"; } +if ($#FILELIST == 0 and + $FILELIST[0] eq $SITEFILE) { warns "only '$SITEFILE'?!"; } + +for (@FILELIST) { #### 1. PASS + $F = $_; + if (/^(name:.*)$/) { + &scan_namespec ("$F"); + } elsif (/^(http:|https:|ftp:|mailto:|telnet:|news:|gopher:|wais:)/) { + &scan_httpspec ("$F"); + } elsif (/^(${SITEFILE}|${SITEFILE}l)$/) { + &scan_sitefile ("$F") ;; # ........... SCAN SITE + } elsif (/^(.*\@.*\.de)$/) { + echo "!! 
-> '$F' (skipping malformed mailto:-link)"; + } elsif (/^(\.\.\/.*)$/) { + echo "!! -> '$F' (skipping topdir build)"; +# */*.html) +# make_back_path # try for later subdir build +# echo "!! -> '$F' (skipping subdir build)" +# ;; +# */*/*/|*/*/|*/|*/index.htm|*/index.html) +# echo "!! -> '$F' (skipping subdir index.html)" +# ;; + } elsif (/^(.*\.html)$/) { + &scan_htmlfile ("$F"); # ........... SCAN HTML + if ($o{xml}) { + $F =~ s/\.html$/.xml/; + &scan_xmlfile ("$F"); + } + } elsif (/^(.*\.xml)$/) { + &scan_xmlfile ("$F") ;; + } elsif (/^(.*\/)$/) { + echo "'$F' : directory - skipped"; + &site_map_list_title ("$F", &sed_slash_key($F)); + &site_map_long_title ("$F", "(directory)"); + } else { + echo "?? -> '$F'"; + } +} + +if ($printerfriendly) { # .......... PRINT VERSION + my $_ext_=esc(&print_extension($printerfriendly)); + $PRINTSITEFILE=$SITEFILE; $PRINTSITEFILE =~ s/(\.\w*)$/$_ext_$1/; + $F=$PRINTSITEFILE; + my @TEXT = &make_printsitefile(); + echo "NOTE: going to create printer-friendly sitefile '$PRINTSITEFILE'" + ." $F._$i"; + savelist(\@TEXT, "TEXT"); + my @LINES = map { chomp; $_."$n" } @TEXT; + savesource($PRINTSITEFILE, \@LINES); + if (1) { + if (open PRINTSITEFILE, ">$PRINTSITEFILE") { + print PRINTSITEFILE join("", @LINES); close PRINTSITEFILE; + } + } +} + +for (@FILELIST) { #### 2. PASS + $F = $_; + if (/^(name:.*)$/) { + &skip_namespec ("$F") ;; + } elsif (/^(http:|https:|ftp:|mailto:|telnet:|news:|gopher:|wais:)/) { + &skip_httpspec ("$F") ;; + } elsif (/^(${SITEFILE}|${SITEFILE}l)$/) { + &make_sitefile ("$F") ;; # ........ SITE FILE + &make_printerfriendly ("$F") if ($printerfriendly); + if ($o{xml}) { + $F =~ s/\.html$/.xml/; + &make_xmlmaster ("$F"); + } + } elsif (/^(.*\@.*\.de)$/) { + echo "!! -> '$F' (skipping malformed mailto:-link)"; + } elsif (/^(\.\.\/.*)$/) { + echo "!! -> '$F' (skipping topdir build)"; +# */*.html) +# echo "!! -> '$F' (skipping subdir build)" +# ;; +# */*/*/|*/*/|*/|*/index.htm|*/index.html) +# echo "!! -> '$F' (skipping subdir index.html)" +# ;; + } elsif (/^(.*\.html)$/) { + &make_htmlfile ("$F") ; # .................. HTML FILES + &make_printerfriendly ("$F") if ($printerfriendly); + if ($o{xml}) { + $F =~ s/\.html$/.xml/; + &make_xmlfile ("$F"); + } + } elsif (/^(.*\.xml)$/) { + &make_xmlfile ("$F") ;; + } elsif (/^(.*\/)$/) { + echo "'$F' : directory - skipped"; + } else { + echo "?? -> '$F'"; + } + +# .............. debug .................... 
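# (editor's note) The block below is only a debugging aid: when a DEBUG/
# directory exists where the script runs, the relation data and the generated
# substitution lists are dumped per target file, with "/" mapped to ":" in
# the dump names. A hypothetical session, assuming the script is installed
# as mksite.pl:
#     mkdir DEBUG
#     perl mksite.pl site.htm
#     less DEBUG/index.html.vars.tmp.pl   # <!--$var--> substitutions used
#     less DEBUG/index.html.gets.tmp.ht   # sitemap detector patterns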
+ if (-d "DEBUG" and -f $F) { + my $INP = \@{$DATA{$F}}; + my $FFFF = $F; $FFFF =~ s,/,:,g; + if (open FFFF, ">DEBUG/$FFFF.data.tmp.ht") { + for (@{$INP}) { print FFFF $_,$n; } close FFFF; + } + if (open FFFF, ">DEBUG/$FFFF.tags.tmp.pl") { + print FFFF "# /usr/bin/env perl -p",$n; + for (@MK_TAGS) { print FFFF $_,$n; } close FFFF; + } + if (open FFFF, ">DEBUG/$FFFF.vars.tmp.pl") { + print FFFF "# /usr/bin/env perl -p",$n; + for (@MK_VARS) { print FFFF $_,$n; } close FFFF; + } + if (open FFFF, ">DEBUG/$FFFF.span.tmp.pl") { + print FFFF "# /usr/bin/env perl -p",$n; + for (@MK_SPAN) { print FFFF $_,$n; } close FFFF; + } + if (open FFFF, ">DEBUG/$FFFF.meta.tmp.ht") { + for (@MK_META) { print FFFF $_,$n; } close FFFF; + } + if (open FFFF, ">DEBUG/$FFFF.gets.tmp.ht") { + for (@MK_GETS) { print FFFF $_,$n; } close FFFF; + } + if (open FFFF, ">DEBUG/$FFFF.puts.tmp.ht") { + for (@MK_PUTS) { print FFFF $_,$n; } close FFFF; + } + if (open FFFF, ">DEBUG/$FFFF.fast.tmp.ht") { + for (@MK_FAST) { print FFFF $_,$n; } close FFFF; + } + } +} # done + +## rm ./$MK.*.tmp.* if not $o{keeptmpfiles} +exit 0 diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/mksite.sh b/Build/source/libs/zziplib/zziplib-0.13.60/docs/mksite.sh new file mode 100644 index 00000000000..12af7948d00 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/mksite.sh @@ -0,0 +1,2355 @@ +#! /bin/sh +# this is the sh/sed variant of the mksite script. It is largely +# derived from snippets that I was using to finish doc pages for +# website publishing. For the mksite project the functionaliy has +# been expanded of course. Still this one does only use simple unix +# commands like sed, date, and test. And it still works. :-)=) +# http://zziplib.sf.net/mksite/ +# THE MKSITE.SH (ZLIB/LIBPNG) LICENSE +# Copyright (c) 2004 Guido U. Draheim <guidod@gmx.de> +# This software is provided 'as-is', without any express or implied warranty +# In no event will the authors be held liable for any damages arising +# from the use of this software. +# Permission is granted to anyone to use this software for any purpose, +# including commercial applications, and to alter it and redistribute it +# freely, subject to the following restrictions: +# 1. The origin of this software must not be misrepresented; you must not +# claim that you wrote the original software. If you use this software +# in a product, an acknowledgment in the product documentation would be +# appreciated but is not required. +# 2. Altered source versions must be plainly marked as such, and must not +# be misrepresented as being the original software. +# 3. This notice may not be removed or altered from any source distribution. +# $Id: mksite.sh,v 1.5 2006-09-22 00:33:22 guidod Exp $ + +# Zsh is not Bourne compatible without the following: (seen in autobook) +if test -n "$ZSH_VERSION"; then + emulate sh + NULLCMD=: +fi + +# initialize some defaults +test ".$SITEFILE" = "." && test -f "site.htm" && SITEFILE="site.htm" +test ".$SITEFILE" = "." && test -f "site.html" && SITEFILE="site.html" +test ".$SITEFILE" = "." 
&& SITEFILE="site.htm" +MK="-mksite" # note the "-" at the start +SED="sed" +CAT="cat" # "sed -e n" would be okay too +GREP="grep" +DATE_NOW="date" # should be available on all posix systems +DATE_R="date -r" # gnu date has it / solaris date not +STAT_R="stat" # gnu linux +LS_L="ls -l" # linux uses one less char than solaris + +DATA="~~" # extension for meta data files +HEAD="~head~" # extension for head sed script +BODY="~body~" # extension for body sed script +FOOT="~foot~" # append to body text (non sed) + +NULL="/dev/null" # to divert stdout/stderr +CATNULL="$CAT $NULL" # to create 0-byte files +SED_LONGSCRIPT="$SED -f" + +Q='q class=' +QX='/q' +LOWER="abcdefghijklmnopqrstuvwxyz" +UPPER="ABCDEFGHIJKLMNOPQRSTUVWXYZ" +az="$LOWER" # some old sed tools can not +AZ="$UPPER" # use char-ranges in the +NN="0123456789" # match expressions so that +AA="_$NN$AZ$az" # we use their unrolled +AX="$AA.+-" # definition here +AP="|" # (pipe symbol in char-range) +AK="[" # (open range in char-range) + +LANG="C" ; LANGUAGE="C" ; LC_COLLATE="C" # these are needed for proper +export LANG LANGUAGE LC_COLLATE # lowercasing as some collate + # treat A-Z to include a-z + +HTMLTAGS=" a p h1 h2 h3 h4 h5 h6 dl dd dt ul ol li pre code table tr td th" +HTMLTAGS=" $HTMLTAGS b u i s q em strong strike cite big small sup sub tt" +HTMLTAGS=" $HTMLTAGS thead tbody center hr br nobr wbr" +HTMLTAGS=" $HTMLTAGS span div img adress blockquote" +HTMLTAGS2=" html head body title meta http-equiv style link" + +# ========================================================================== +if "${SHELL-/bin/sh}" -c 'foo () { exit 0; }; foo' 2>$NULL ; then : ; else +echo "!! sorry, this shell '$SHELL' does not support shell functions" ; exit 1 +fi + +error () +{ + echo "ERROR:" "$@" 1>&2 +} + +warn () +{ + echo "WARN:" "$@" 1>&2 +} + +note () +{ + echo "NOTE:" "$@" 1>&2 +} + +hint=":" + +init () +{ + if test -d DEBUG + then hint="note" + fi + if test "$SED" = "sed" ; then + if gsed --version 2>$NULL | $GREP "GNU sed" >$NULL ; then + SED="gsed" + $hint "using 'gsed' as SED" + fi + fi + if $SED --version 2>$NULL | $GREP "GNU sed" >$NULL ; then + az="a-z" # but if we have GNU sed + AZ="A-Z" # then we assume there are + NN="0-9" # char-ranges available + AA="_$NN$AZ$az" # that makes the resulting + AX="$AA.+-" # script more readable + $hint "found GNU sed - good" + elif uname -s | $GREP HP-UX >$NULL ; then + SED_LONGSCRIPT="sed_longscript" # due to 100 sed lines limit + $hint "weird sed - hpux sed has a limit of 100 lines" \ + "- using sed_longscript mode" + fi + if echo "TEST" | sed -e "s%[:[]*TEST%OK%" | grep OK 2>&1 > $NULL + then : + elif echo "TEST" | sed -e "s%[:\\[]*TEST%OK%" | grep OK 2>&1 > $NULL + then AK="\\[" ; $hint "AK=\\[" + else AK="" ; warn "buggy sed - disabled [ in char-ranges / fileref-tests" + fi + if echo "TEST" | sed -e "s%[:|]*TEST%OK%" | grep OK 2>&1 > $NULL + then : + elif echo "TEST" | sed -e "s%[:\\|]*TEST%OK%" | grep OK 2>&1 > $NULL + then AP="\\[" ; $hint "AP=\\|" + else AP="" ; warn "buggy sed - disabled | in char-ranges / fileref-tests" + fi +} + +init "NOW!!!" + +sed_debug () +{ + $note "sed" "$@" >&2 + sed "$@" +} + +# ========================================================================== +# reading options from the command line GETOPT +opt_variables="files" +opt_fileseparator="?" +opt_files="" +opt_main_file="" +opt_formatter="$0" +opt="" +for arg in "$@" # this variant should allow to embed spaces in $arg +do if test ".$opt" != "." 
; then + eval "export opt_$opt='$arg'" + opt="" + else + case "$arg" in + -*=*) + opt=`echo "$arg" | $SED -e "s/-*\\([$AA][$AA-]*\\).*/\\1/" -e y/-/_/` + if test ".$opt" = "." ; then + error "invalid option $arg" + else + arg=`echo "$arg" | $SED -e "s/^[^=]*=//"` + eval "export opt_$opt='$arg'" + opt_variables="$opt_variables $opt" + fi + opt="" ;; + -*?-*) : an option with an argument --main-file=x or --main-file x + opt=`echo "$arg" | $SED -e "s/-*\\([$AA][$AA-]*\\).*/\\1/" -e y/-/_/` + if test ".$opt" = "." ; then + error "invalid option $arg" + opt="" + else : + # keep the option for next round + fi ;; + -*) : a simple option --filelist or --debug or --verbose + opt=`echo "$arg" | $SED -e "s/^-*\\([$AA][$AA-]*\\).*/\\1/" -e y/-/_/` + if test ".$opt" = "." ; then + error "invalid option $arg" + else + arg=`echo "$arg" | $SED -e "s/^[^=]*=//"` + eval "export opt_$opt=' '" + fi + opt="" ;; + *) $hint "<$arg>" + if test ".$opt_main_file" = "." ; then opt_main_file="$arg" ; else + test ".$opt_files" != "." && opt_files="$opt_files$opt_fileseparator" + opt_files="$opt_files$arg" ; fi + opt="" ;; + esac + fi +done ; if test ".$opt" != "." ; then + eval "export opt_$opt='$arg'" + opt="" +fi +### env | grep ^opt + +test ".$opt_main_file" != "." && test -f "$opt_main_file" && \ +SITEFILE="$opt_main_file" +test ".$opt_site_file" != "." && test -f "$opt_site_file" && \ +SITEFILE="$opt_site_file" +test "$opt_debug" && \ +hint="note" + +if test ".$opt_help" != "." ; then + F="$SITEFILE" + echo "$0 [sitefile]"; + echo " default sitefile = $F"; + echo "options:"; + echo " --filelist : show list of target files as ectracted from $F" + echo " --src-dir xx : if source files are not where mksite is executed" + echo " --tmp-dir xx : use temp instead of local directory" + echo " --tmp : use automatic temp directory in ${TEMP-/tmp}/mksite.*" + exit; + echo " internal:" + echo "--fileseparator=x : for building the internal filelist (default '?')" + echo "--files xx : for list of additional files to be processed" + echo "--main-file xx : for the main sitefile to take file list from" +fi + +if test ".$SITEFILE" = "." ; then + error "no SITEFILE found (default would be 'site.htm')" + exit 1 +else + $hint "sitefile:" `ls -s $SITEFILE` +fi + +tmp="." ; if test ".$opt_tmp_dir" != "." ; then tmp="$opt_tmp_dir" ; fi +if test ".$opt_tmp_dir" = "." && test ".$opt_tmp" != "." 
; then +tmp="${TEMP-/tmp}/mksite.$$" ; fi + +# we use external files to store mappings - kind of relational tables +MK_TAGS="$tmp/$MK.tags.tmp.sed" +MK_VARS="$tmp/$MK.vars.tmp.sed" +MK_SPAN="$tmp/$MK.span.tmp.sed" +MK_META="$tmp/$MK.meta.tmp.htm" +MK_METT="$tmp/$MK.mett.tmp.htm" +MK_TEST="$tmp/$MK.test.tmp.htm" +MK_FAST="$tmp/$MK.fast.tmp.sed" +MK_GETS="$tmp/$MK.gets.tmp.sed" +MK_PUTS="$tmp/$MK.puts.tmp.sed" +MK_SITE="$tmp/$MK.site.tmp.sed" +MK_SECT1="$tmp/$MK.sect1.tmp.sed" +MK_SECT2="$tmp/$MK.sect2.tmp.sed" +MK_SECT3="$tmp/$MK.sect3.tmp.sed" +MK_STYLE="$tmp/$MK.style.tmp.sed" +MK_DATA="$tmp/$MK.$DATA.tmp.htm" + +# ======================================================================== +# ======================================================================== +# ======================================================================== +# MAGIC VARS +# IN $SITEFILE +printerfriendly="" +sectionlayout="list" +sitemaplayout="list" +attribvars=" " # <x ref="${varname:=default}"> +updatevars=" " # <!--$varname:=-->default +expandvars=" " # <!--$varname--> +commentvars=" " # $updatevars && $expandsvars +sectiontab=" " # highlight ^<td class=...>...href="$section" +currenttab=" " # highlight ^<br>..<a href="$topic"> +headsection="no" +tailsection="no" +sectioninfo="no" # using <h2> title <h2> = info text +emailfooter="no" + +if $GREP "<!--multi-->" $SITEFILE >$NULL ; then +echo \ +"WARNING: do not use <!--multi-->, change to <!--mksite:multi--> " "$SITEFILE" +echo \ +"warning: or <!--mksite:multisectionlayout--> <!--mksite:multisitemaplayout-->" +sectionlayout="multi" +sitemaplayout="multi" +fi +if $GREP "<!--mksite:multi-->" $SITEFILE >$NULL ; then +sectionlayout="multi" +sitemaplayout="multi" +fi +if $GREP "<!--mksite:multilayout-->" $SITEFILE >$NULL ; then +sectionlayout="multi" +sitemaplayout="multi" +fi + +mksite_magic_option () +{ + # $1 is word/option to check for + INP="$2" ; test ".$INP" = "." && INP="$SITEFILE" + $SED \ + -e "s/\\(<!--mksite:\\)\\($1\\)-->/\\1\\2: -->/g" \ + -e "s/\\(<!--mksite:\\)\\([$AA][$AA]*\\)\\($1\\)-->/\\1\\3:\\2-->/g" \ + -e "/<!--mksite:$1:/!d" \ + -e "s/.*<!--mksite:$1:\\([^<>]*\\)-->.*/\\1/" \ + -e "s/.*<!--mksite:$1:\\([^-]*\\)-->.*/\\1/" \ + -e "/<!--mksite:$1:/d" -e q $INP # $++ +} + +x=`mksite_magic_option sectionlayout` ; case "$x" in + "list"|"multi") sectionlayout="$x" ;; esac +x=`mksite_magic_option sitemaplayout` ; case "$x" in + "list"|"multi") sitemaplayout="$x" ;; esac +x=`mksite_magic_option attribvars` ; case "$x" in + " "|"no"|"warn") attribvars="$x" ;; esac +x=`mksite_magic_option updatevars` ; case "$x" in + " "|"no"|"warn") updatevars="$x" ;; esac +x=`mksite_magic_option expandvars` ; case "$x" in + " "|"no"|"warn") expandvars="$x" ;; esac +x=`mksite_magic_option commentvars` ; case "$x" in + " "|"no"|"warn") commentvars="$x" ;; esac +x=`mksite_magic_option printerfriendly` ; case "$x" in + " "|".*"|"-*") printerfriendly="$x" ;; esac +x=`mksite_magic_option sectiontab` ; case "$x" in + " "|"no"|"warn") sectiontab="$x" ;; esac +x=`mksite_magic_option currenttab` ; case "$x" in + " "|"no"|"warn") currenttab="$x" ;; esac +x=`mksite_magic_option sectioninfo` ; case "$x" in + " "|"no"|"[=:-]") sectioninfo="$x" ;; esac +x=`mksite_magic_option emailfooter` + test ".$x" != "." && emailfooter="$x" + +test ".$opt_print" != "." 
&& printerfriendly="$opt_print" +test ".$commentvars" = ".no" && updatevars="no" # duplicated into +test ".$commentvars" = ".no" && expandvars="no" # info2vars_sed () + + +$hint "'$sectionlayout'sectionlayout '$sitemaplayout'sitemaplayout" +$hint "'$attribvars'attribvars '$updatevars'updatevars" +$hint "'$expandvars'expandvars '$commentvars'commentvars " +$hint "'$currenttab'currenttab '$sectiontab'sectiontab" +$hint "'$headsection'headsection '$tailsection'tailsection" + +if ($STAT_R "$SITEFILE" >$NULL) 2>$NULL ; then : ; else STAT_R=":" ; fi +# ========================================================================== +# init a few global variables +# 0. INIT + +mkpathdir () { + if test -n "$1" && test ! -d "$1" ; then + echo "!! mkdir '$1'" ; mkdir "$1" + test ! -d "$1" || mkdir -p "$1" + fi +} + +mkpathfile () { + dirname=`echo "$1" | $SED -e "s:/[^/][^/]*\$::"` + if test ".$1" != ".$dirname" && test ".$dirname" != "." ; + then mkpathdir "$dirname"; fi +} + +mknewfile () { + mkpathfile "$1" + $CATNULL > "$1" +} + +tmp_dir_was_created="no" +if test ! -d "$tmp" ; then mkpathdir "$tmp" ; tmp_dir_was_created="yes" ; fi + +# $MK_TAGS - originally, we would use a lambda execution on each +# uppercased html tag to replace <P> with <p class="P">. Here we just +# walk over all the known html tags and make an sed script that does +# the very same conversion. There would be a chance to convert a single +# tag via "h;y;x" or something we do want to convert all the tags on +# a single line of course. +mknewfile "$MK_TAGS" +for M in `echo $HTMLTAGS` +do P=`echo "$M" | $SED -e "y/$LOWER/$UPPER/"` + echo "s|<$P>|<$M class=\"$P\">|g" >> "$MK_TAGS" + echo "s|<$P |<$M class=\"$P\" |g" >> "$MK_TAGS" + echo "s|</$P>|</$M>|g" >> "$MK_TAGS" +done + echo "s|<>|\\ \\;|g" >> "$MK_TAGS" + echo "s|<->|<WBR />|g" >> "$MK_TAGS" + echo "s|<c>|<code>|g" >> "$MK_TAGS" + echo "s|</c>|</code>|g" >> "$MK_TAGS" + echo "s|<section>||g" >> "$MK_TAGS" + echo "s|</section>||g" >> "$MK_TAGS" + echo "s|<\\(a [^<>]*\\) />|<\\1></a>|g" >> "$MK_TAGS" + _ulink_="<a href=\"\\1\" remap=\"url\">\\1</a>" + echo "s|<a>\\([$az$AZ][$az$AZ]*://[^<>]*\\)</a>|$_ulink_|g" >> "$MK_TAGS" +# also make sure that some non-html entries are cleaned away that +# we are generally using to inject meta information. We want to see +# that meta ino in the *.htm browser view during editing but they +# shall not get present in the final html page for publishing. +DC_VARS="contributor date source language coverage identifier" +DC_VARS="$DC_VARS rights relation creator subject description" +DC_VARS="$DC_VARS publisher DCMIType" +_EQUIVS="refresh expires content-type cache-control" +_EQUIVS="$_EQUIVS redirect charset" # mapped to refresh / content-type +_EQUIVS="$_EQUIVS content-language content-script-type content-style-type" +for P in $DC_VARS $_EQUIVS ; do # dublin core embedded + echo "s|<$P>[^<>]*</$P>||g" >> "$MK_TAGS" +done + test ".$opt_keepsect" = "." 
&& \ + echo "s|<a sect=\"[$AZ$NN]\"|<a|g" >> "$MK_TAGS" + echo "s|<!--[$AX]*[?]-->||g" >> "$MK_TAGS" + echo "s|<!--\\\$[$AX]*[?]:-->||g" >> "$MK_TAGS" + echo "s|<!--\\\$[$AX]*:[?=]-->||g" >> "$MK_TAGS" + echo "s|\\(<[^<>]*\\)\\\${[$AX]*:[?=]\\([^<{}>]*\\)}\\([^<>]*>\\)|\\1\\2\\3|g" >>$MK_TAGS + +# see overview at www.metatab.de - http-equivs are +# <refresh>5; url=target</reresh> or <redirect>target</redirect> +# <content-type>text/html; charset=koi8-r</content-type> iso-8859-1/UTF-8 +# <content-language>de</content-language> <charset>UTF-8</charset> +# <content-script-type>text/javascript</content-script-type> /jscript/vbscript +# <content-style-type>text/css</content-style-type> +# <cache-control>no-cache</cache-control> + +trimm () +{ + echo "$1" | $SED -e "s:^ *::" -e "s: *\$::"; +} +trimmm () +{ + echo "$1" | $SED -e "s:^ *::" -e "s: *\$::" -e "s:[ ][ ]*: :g"; +} + +timezone () +{ + # +%z is an extension while +%Z is supposed to be posix + _timezone=`$DATE_NOW +%z` + case "$_timezone" in + *+*) echo "$_timezone" ;; + *-*) echo "$_timezone" ;; + *) $DATE_NOW +%Z + esac +} +timetoday () +{ + $DATE_NOW +%Y-%m-%d +} +timetodays () +{ + $DATE_NOW +%Y-%m%d +} + +# ====================================================================== +# FUNCS + +sed_longscript () +{ + # hpux sed has a limit of 100 entries per sed script ! + $SED -e "100q" "$1" > "$1~1~" + $SED -e "1,100d" -e "200q" "$1" > "$1~2~" + $SED -e "1,200d" -e "300q" "$1" > "$1~3~" + $SED -e "1,300d" -e "400q" "$1" > "$1~4~" + $SED -e "1,400d" -e "500q" "$1" > "$1~5~" + $SED -e "1,500d" -e "600q" "$1" > "$1~6~" + $SED -e "1,600d" -e "700q" "$1" > "$1~7~" + $SED -e "1,700d" -e "800q" "$1" > "$1~8~" + $SED -e "1,800d" -e "900q" "$1" > "$1~9~" + $SED -f "$1~1~" -f "$1~2~" -f "$1~3~" -f "$1~4~" -f "$1~5~" \ + -f "$1~6~" -f "$1~7~" -f "$1~8~" -f "$1~9~" "$2" +} + +sed_escape_key () +{ + $SED -e "s|\\.|\\\\&|g" -e "s|\\[|\\\\&|g" -e "s|\\]|\\\\&|g" "$@" +} + +sed_slash_key () # helper to escape chars special in /anchor/ regex +{ # currently escaping "/" "[" "]" "." + echo "$1" | sed_escape_key -e "s|/|\\\\&|g" +} +sed_piped_key () # helper to escape chars special in s|anchor|| regex +{ # currently escaping "|" "[" "]" "." + echo "$1" | sed_escape_key -e "s/|/\\\\&/g" +} + +back_path () # helper to get the series of "../" for a given path +{ + echo "$1" | $SED -e "/\\//!d" -e "s|/[^/]*\$|/|" -e "s|[^/]*/|../|g" +} + +dir_name () +{ + echo "$1" | $SED -e "s:/[^/][^/]*\$::" +} + +piped_value="s/|/\\\\|/g" +amp_value="s|&|\\\\&|g" +info2vars_sed () # generate <!--$vars--> substition sed addon script +{ + INP="$1" ; test ".$INP" = "." 
&& INP="$tmp/$F.$DATA" + V8=" *\\([^ ][^ ]*\\) \\(.*\\)<$QX>" + V9=" *DC[.]\\([^ ][^ ]*\\) \\(.*\\)<$QX>" + N8=" *\\([^ ][^ ]*\\) \\([$NN].*\\)<$QX>" + N9=" *DC[.]\\([^ ][^ ]*\\) \\([$NN].*\\)<$QX>" + V0="\\\\([<]*\\\\)\\\\\\\$" + V1="\\\\([^<>]*\\\\)\\\\\\\$" + V2="\\\\([^{<>}]*\\\\)" + V3="\\\\([^<>]*\\\\)" + SS="<""<>"">" # spacer so value="2004" does not make for s|\(...\)|\12004| + test ".$commentvars" = ".no" && updatevars="no" # duplicated from + test ".$commentvars" = ".no" && expandvars="no" # option handling + test ".$expandvars" != ".no" && \ + $SED -e "/^=....=formatter /d" -e "$piped_value" \ + -e "/^<$Q'name'>/s,<$Q'name'>$V9,s|<!--$V0\\1[?]-->|- \\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V9,s|<!--$V0\\1[?]-->|(\\2)|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V8,s|<!--$V0\\1[?]-->|- \\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V8,s|<!--$V0\\1[?]-->|(\\2)|," \ + -e "/^<$Q/d" -e "/^<!/d" -e "$amp_value" $INP # $++ + test ".$expandvars" != ".no" && \ + $SED -e "/^=....=formatter /d" -e "$piped_value" \ + -e "/^<$Q'text'>/s,<$Q'text'>$V9,s|<!--$V1\\1-->|\\\\1$SS\\2|," \ + -e "/^<$Q'Text'>/s,<$Q'Text'>$V9,s|<!--$V1\\1-->|\\\\1$SS\\2|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V9,s|<!--$V1\\1[?]-->|\\\\1$SS\\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V9,s|<!--$V1\\1[?]-->|\\\\1$SS\\2|," \ + -e "/^<$Q'text'>/s,<$Q'text'>$V8,s|<!--$V1\\1-->|\\\\1$SS\\2|," \ + -e "/^<$Q'Text'>/s,<$Q'Text'>$V8,s|<!--$V1\\1-->|\\\\1$SS\\2|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V8,s|<!--$V1\\1[?]-->|\\\\1$SS\\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V8,s|<!--$V1\\1[?]-->|\\\\1$SS\\2|," \ + -e "/^<$Q/d" -e "/^<!/d" -e "$amp_value" $INP # $++ + test ".$updatevars" != ".no" && \ + $SED -e "/^=....=formatter /d" -e "$piped_value" \ + -e "/^<$Q'name'>/s,<$Q'name'>$V9,s|<!--$V0\\1:[?]-->[^<>]*|- \\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V9,s|<!--$V0\\1:[?]-->[^<>]*|(\\2)|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V8,s|<!--$V0\\1:[?]-->[^<>]*|- \\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V8,s|<!--$V0\\1:[?]-->[^<>]*|(\\2)|," \ + -e "/^<$Q/d" -e "/^<!/d" -e "$amp_value" $INP # $++ + test ".$updatevars" != ".no" && \ + $SED -e "/^=....=formatter /d" -e "$piped_value" \ + -e "/^<$Q'text'>/s,<$Q'text'>$V9,s|<!--$V1\\1:[=]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q'Text'>/s,<$Q'Text'>$V9,s|<!--$V1\\1:[=]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V9,s|<!--$V1\\1:[?]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V9,s|<!--$V1\\1:[?]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q'text'>/s,<$Q'text'>$V8,s|<!--$V1\\1:[=]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q'Text'>/s,<$Q'Text'>$V8,s|<!--$V1\\1:[=]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V8,s|<!--$V1\\1:[?]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V8,s|<!--$V1\\1:[?]-->[^<>]*|\\\\1$SS\\2|," \ + -e "/^<$Q/d" -e "/^<!/d" -e "$amp_value" $INP # $++ + test ".$attribvars" != ".no" && \ + $SED -e "/^=....=formatter /d" -e "$piped_value" \ + -e "/^<$Q'text'>/s,<$Q'text'>$V9,s|<$V1{\\1:[=]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e "/^<$Q'Text'>/s,<$Q'Text'>$V9,s|<$V1{\\1:[=]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V9,s|<$V1{\\1:[?]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e "/^<$Q'Name'>/s,<$Q'Name'>$V9,s|<$V1{\\1:[?]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e "/^<$Q'text'>/s,<$Q'text'>$V8,s|<$V1{\\1:[=]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e "/^<$Q'Text'>/s,<$Q'Text'>$V8,s|<$V1{\\1:[=]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e "/^<$Q'name'>/s,<$Q'name'>$V8,s|<$V1{\\1:[?]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e 
"/^<$Q'Name'>/s,<$Q'Name'>$V8,s|<$V1{\\1:[?]$V2}$V3>|<\\\\1$SS\\2\\\\3>|," \ + -e "/^<$Q/d" -e "/^<!/d" -e "$amp_value" $INP # $++ + # if value="2004" then generated sed might be "\\12004" which is bad + # instead we generate an edited value of "\\1$SS$value" and cut out + # the spacer now after expanding the variable values: + echo "s|$SS||g" # $++ +} + +info2meta_sed () # generate <meta name..> text portion +{ + # http://www.metatab.de/meta_tags/DC_type.htm + INP="$1" ; test ".$INP" = "." && INP="$tmp/$F.$DATA" + V6=" *HTTP[.]\\([^ ][^ ]*\\) \\(.*\\)<$QX>" + V7=" *DC[.]\\([^ ][^ ]*\\) \\(.*\\)<$QX>" + V8=" *\\([^ ][^ ]*\\) \\(.*\\)<$QX>" + DATA_META_TYPE_SCHEME="name=\"DC.type\" content=\"\\2\" scheme=\"\\1\"" + DATA_META_DCMI="name=\"\\1\" content=\"\\2\" scheme=\"DCMIType\"" + DATA_META_NAME_TZ="name=\"\\1\" content=\"\\2 `timezone`\"" + DATA_META_NAME="name=\"\\1\" content=\"\\2\"" + DATA_META_HTTP="http-equiv=\"\\1\" content=\"\\2\"" + $SED -e "/=....=today /d" \ + -e "/<$Q'meta'>HTTP[.]/s,<$Q'meta'>$V6, <meta $DATA_META_HTTP />," \ + -e "/<$Q'meta'>DC[.]DCMIType /s,<$Q'meta'>$V7, <meta $DATA_META_TYPE_SCHEME />," \ + -e "/<$Q'meta'>DC[.]type Collection$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]type Dataset$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]type Event$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]type Image$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]type Service$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]type Software$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]type Sound$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]type Text$/s,<$Q'meta'>$V8, <meta $DATA_META_DCMI />," \ + -e "/<$Q'meta'>DC[.]date[.].*[+]/s,<$Q'meta'>$V8, <meta $DATA_META_NAME />," \ + -e "/<$Q'meta'>DC[.]date[.].*[:]/s,<$Q'meta'>$V8, <meta $DATA_META_NAME_TZ />," \ + -e "/<$Q'meta'>/s,<$Q'meta'>$V8, <meta $DATA_META_NAME />," \ + -e "/<meta name=\"[^\"]*\" content=\"\" /d" \ + -e "/<meta http-equiv=\"[^\"]*\" content=\"\" /d" \ + -e "/^<$Q/d" -e "/^<!/d" $INP # $++ +} + +info_get_entry () # get the first <!--vars--> value known so far +{ + TXT="$1" ; test ".$TXT" = "." && TXT="sect" + INP="$2" ; test ".$INP" = "." && INP="$tmp/$F.$DATA" + $SED -e "/<$Q'text'>$TXT /!d" \ + -e "s|<$Q'text'>$TXT ||" -e "s|<$QX>||" -e "q" $INP # $++ +} + +info1grep () # test for a <!--vars--> substition to be already present +{ + TXT="$1" ; test ".$TXT" = "." && TXT="sect" + INP="$2" ; test ".$INP" = "." && INP="$tmp/$F.$DATA" + $GREP "^<$Q'text'>$TXT " $INP >$NULL + return $? +} + +dx_init() +{ + mkpathdir "$tmp" + dx_meta formatter `basename $opt_formatter` > "$tmp/$F.$DATA" + for opt in $opt_variables ; do case "$opt" in # commandline --def=value + *_*) op_=`echo "$opt" | sed -e "y/_/-/"` # makes for <!--$def--> + dx_meta "$op_" `eval echo "\\\$opt_$opt"` ;; + *) dx_text "$opt" `eval echo "\\\$opt_$opt"` ;; + esac ; done +} + +dx_line () +{ + echo "<$Q$1>$2 "`trimmm "$3"`"<$QX>" >> "$tmp/$F.$DATA" +} + +DX_line () +{ + dx_val_=`echo "$3" | sed -e "s/<[^<>]*>//g"` + dx_line "$1" "$2" "$dx_val_" +} + +dx_text () +{ + dx_line "'text'" "$1" "$2" +} + +DX_text () # add a <!--vars--> substition includings format variants +{ + N=`trimm "$1"` ; T=`trimm "$2"` + if test ".$N" != "." ; then + if test ".$T" != "." 
; then + text=`echo "$T" | $SED -e "y/$UPPER/$LOWER/" -e "s/<[^<>]*>//g"` + dx_line "'text'" "$N" "$T" + dx_line "'name'" "$N" "$text" + varname=`echo "$N" | $SED -e 's/.*[.]//'` # cut out front part + if test ".$N" != ".$varname" ; then + text=`echo "$varname $T" | $SED -e "y/$UPPER/$LOWER/" -e "s/<[^<>]*>//g"` + dx_line "'Text'" "$varname" "$T" + dx_line "'Name'" "$varname" "$text" + fi + fi + fi +} + +dx_meta () +{ + DX_line "'meta'" "$1" "$2" +} + +DX_meta () # add simple meta entry and its <!--vars--> subsitution +{ + DX_line "'meta'" "$1" "$2" + DX_text "$1" "$2" +} + +DC_meta () # add new DC.meta entry plus two <!--vars--> substitutions +{ + DX_line "'meta'" "DC.$1" "$2" + DX_text "DC.$1" "$2" + DX_text "$1" "$2" +} + +HTTP_meta () # add new HTTP.meta entry plus two <!--vars--> substitutions +{ + DX_line "'meta'" "HTTP.$1" "$2" + DX_text "HTTP.$1" "$2" + DX_text "$1" "$2" +} + +DC_VARS_Of () # check DC vars as listed in $DC_VARS global and generate DC_meta +{ # the results will be added to .meta.tmp and .vars.tmp later + FILENAME="$1" ; test ".$FILENAME" = "." && FILENAME="$SOURCEFILE" + for M in $DC_VARS title ; do + # scan for a <markup> of this name + part=`$SED -e "/<$M>/!d" -e "s|.*<$M>||" -e "s|</$M>.*||" -e q $FILENAME` + part=`trimm "$part"` + text=`echo "$part" | $SED -e "s|^[$AA]*:||"` + text=`trimm "$text"` + test ".$text" = "." && continue + # <mark:part> will be <meta name="mark.part"> + if test ".$text" != ".$part" ; then + N=`echo "$part" | $SED -e "s/:.*//"` + DC_meta "$M.$N" "$text" + elif test ".$M" = ".date" ; then + DC_meta "$M.issued" "$text" # "<date>" -> "<date>issued:" + else + DC_meta "$M" "$text" + fi + done +} + +HTTP_VARS_Of () # check HTTP-EQUIVs as listed in $_EQUIV global then +{ # generate meta tags that are http-equiv= instead of name= + FILENAME="$1" ; test ".$FILENAME" = "." && FILENAME="$SOURCEFILE" + for M in $_EQUIVS ; do + # scan for a <markup> of this name + part=`$SED -e "/<$M>/!d" -e "s|.*<$M>||" -e "s|</$M>.*||" -e q $FILENAME` + part=`trimm "$part"` + text=`echo "$part" | $SED -e "s|^[$AA]*:||"` + text=`trimm "$text"` + test ".$text" = "." && continue + if test ".$M" = ".redirect" ; then + HTTP_meta "refresh" "5; url=$text" ; DX_text "$M" "$text" + elif test ".$M" = ".charset" ; then + HTTP_meta "content-type" "text/html; charset=$text" + else + HTTP_meta "$M" "$text" + fi + done +} + +DC_isFormatOf () # make sure there is this DC.relation.isFormatOf tag +{ # choose argument for a fallback (usually $SOURCEFILE) + NAME="$1" ; test ".$NAME" = "." && NAME="$SOURCEFILE" + info1grep DC.relation.isFormatOf || DC_meta relation.isFormatOf "$NAME" +} + +DC_publisher () # make sure there is this DC.publisher meta tag +{ # choose argument for a fallback (often $USER) + NAME="$1" ; test ".$NAME" = "." && NAME="$USER" + info1grep DC.publisher || DC_meta publisher "$NAME" +} + +DC_modified () # make sure there is a DC.date.modified meta tag +{ # maybe choose from filesystem dates if possible + ZZ="$1" # target file + if info1grep DC.date.modified ; then : + else + _42_chars="........................................." + cut_42_55="s/^$_42_chars\\(.............\\).*/\\1/" # i.e.`cut -b 42-55` + text=`$STAT_R $ZZ 2>$NULL | $SED -e '/odify:/!d' -e 's|.*fy:||' -e q` + text=`echo "$text" | $SED -e "s/:..[.][$NN]*//"` + text=`trimm "$text"` + test ".$text" = "." && \ + text=`$DATE_R "$ZZ" +%Y-%m-%d 2>$NULL` # GNU sed + test ".$text" = "." 
&& + text=`$LS_L "$ZZ" | $SED -e "$cut_42_55" -e "s/^ *//g" -e "q"` + text=`echo "$text" | $SED -e "s/[$NN]*:.*//"` # cut way seconds + DC_meta date.modified `trimm "$text"` + fi +} + +DC_date () # make sure there is this DC.date meta tag +{ # choose from one of the available DC.date.* specials + ZZ="$1" # source file + if info1grep DC.date + then DX_text issue "dated `info_get_entry DC.date`" + DX_text updated "`info_get_entry DC.date`" + else text="" + for kind in available issued modified created ; do + text=`info_get_entry DC.date.$kind` + # test ".$text" != "." && echo "$kind = date = $text ($ZZ)" + test ".$text" != "." && break + done + if test ".$text" = "." ; then + M="date" + part=`$SED -e "/<$M>/!d" -e "s|.*<$M>||" -e "s|</$M>.*||" -e q $ZZ` + part=`trimm "$part"` + text=`echo "$part" | $SED -e "s|^[$AA]*:||"` + text=`trimm "$text"` + fi + if test ".$text" = "." ; then + M="!--date:*=*--" # takeover updateable variable... + part=`$SED -e "/<$M>/!d" -e "s|.*<$M>||" -e "s|</.*||" -e q $ZZ` + part=`trimm "$part"` + text=`echo "$part" | $SED -e "s|^[$AA]*:||" -e "s|\\&.*||"` + text=`trimm "$text"` + fi + text=`echo "$text" | $SED -e "s/[$NN]*:.*//"` # cut way seconds + DX_text updated "$text" + text1=`echo "$text" | $SED -e "s|^.* *updated ||"` + if test ".$text" != ".$text1" ; then + kind="modified" ; text=`echo "$text1" | $SED -e "s|,.*||"` + fi + text1=`echo "$text" | $SED -e "s|^.* *modified ||"` + if test ".$text" != ".$text1" ; then + kind="modified" ; text=`echo "$text1" | $SED -e "s|,.*||"` + fi + text1=`echo "$text" | $SED -e "s|^.* *created ||"` + if test ".$text" != ".$text1" ; then + kind="created" ; text=`echo "$text1" | $SED -e "s|,.*||"` + fi + text=`echo "$text" | $SED -e "s/[$NN]*:.*//"` # cut way seconds + DC_meta date `trimm "$text"` + DX_text issue `trimm "$kind $text"` + fi +} + +DC_title () +{ + # choose a title for the document, either an explicit title-tag + # or one of the section headers in the document or fallback to filename + ZZ="$1" # target file + if info1grep DC.title ; then : + else + for M in TITLE title H1 h1 H2 h2 H3 H3 H4 H4 H5 h5 H6 h6 ; do + text=`$SED -e "/<$M>/!d" -e "s|.*<$M>||" -e "s|</$M>.*||" -e q $ZZ` + text=`trimm "$text"` ; test ".$text" != "." && break + MM="$M [^<>]*" + text=`$SED -e "/<$MM>/!d" -e "s|.*<$MM>||" -e "s|</$M>.*||" -e q $ZZ` + text=`trimm "$text"` ; test ".$text" != "." && break + done + if test ".text" = "." ; then + text=`basename $ZZ .html` + text=`basename $text .htm | $SED -e 'y/_/ /' -e "s/\\$/ info/"` + fi + term=`echo "$text" | $SED -e 's/.*[(]//' -e 's/[)].*//'` + text=`echo "$text" | $SED -e 's/[(][^()]*[)]//'` + if test ".$term" = "." || test ".$term" = ".$text" ; then + DC_meta title "$text" + else + DC_meta title "$term - $text" + fi + fi +} + +site_get_section () # return parent section page of given page +{ + _F_=`sed_slash_key "$1"` + $SED -e "/^<$Q'sect'>$_F_ /!d" \ + -e "s|^<$Q'sect'>$_F_ ||" -e "s|<$QX>||" \ + -e q "$MK_DATA" # $++ +} + +DC_section () # not really a DC relation (shall we use isPartOf ?) +{ # each document should know its section father + sectn=`site_get_section "$F"` + if test ".$sectn" != "." ; then + DC_meta relation.section "$sectn" + fi +} + +info_get_entry_section() +{ + info_get_entry DC.relation.section # $++ +} + +site_get_selected () # return section of given page +{ + _F_=`sed_slash_key "$1"` + $SED -e "/<$Q'use.'>$_F_ /!d" \ + -e "s|<$Q'use.'>[^ ]* ||" -e "s|<$QX>||" \ + -e q "$MK_DATA" # $++ +} + +DC_selected () # not really a DC title (shall we use alternative ?) 
+{ + # each document might want to highlight the currently selected item + short=`site_get_selected "$F"` + if test ".$short" != "." ; then + DC_meta title.selected "$short" + fi +} + +info_get_entry_selected () +{ + info_get_entry DC.title.selected # $++ +} + +site_get_rootsections () # return all sections from root of nav tree +{ + $SED -e "/^<$Q'use1'>/!d" \ + -e "s|^<$Q'use.'>\\([^ ]*\\) .*|\\1|" "$MK_DATA" # $++ +} + +site_get_sectionpages () # return all children pages in the given section +{ + _F_=`sed_slash_key "$1"` + $SED -e "/^<$Q'sect'>[^ ]* $_F_<[^<>]*>\$/!d" \ + -e "s|^<$Q'sect'>||" -e "s|<$QX>||" \ + -e "s/ .*//" "$MK_DATA" # $++ +} + +site_get_subpages () # return all page children of given page +{ + _F_=`sed_slash_key "$1"` + $SED -e "/^<$Q'node'>[^ ]* $_F_<[^<>]*>\$/!d" \ + -e "s|^<$Q'node'>||" -e "s|<$QX>||" \ + -e "s/ .*//" "$MK_DATA" + # $++ +} + +site_get_parentpage () # return parent page for given page (".." for sections) +{ + _F_=`sed_slash_key "$1"` + $SED -e "/^<$Q'node'>$_F_ /!d" \ + -e "s|^<$Q'node'>[^ ]* ||" -e "s|<$QX>||" \ + -e "q" "$MK_DATA" # $++ +} + +DX_alternative () # detect wether page asks for alternative style +{ # which is generally a shortpage variant + x=`mksite_magic_option alternative $1 | sed -e "s/^ *//" -e "s/ .*//"` + if test ".$x" != "." ; then + DX_text alternative "$x" + fi +} + +info2head_sed () # append alternative handling script to $HEAD +{ + have=`info_get_entry alternative` + if test ".$have" != "." ; then + echo "/<!--mksite:alternative:$have .*-->/{" # $++ + echo "s/<!--mksite:alternative:$have\\( .*\\)-->/\\1/" # $++ + echo "q" # $++ + echo "}" # $++ + fi +} +info2body_sed () # append alternative handling script to $BODY +{ + have=`info_get_entry alternative` + if test ".$have" != "." ; then + echo "s/<!--mksite:alternative:$have\\( .*\\)-->/\\1/" # $++ + fi +} + +bodymaker_for_sectioninfo () +{ + test ".$sectioninfo" = ".no" && return + _x_="<!--mksite:sectioninfo::-->" + _q_="\\([^<>]*[$AX][^<>]*\\)" + test ".$sectioninfo" != ". " && _q_="[ ][ ]*$sectioninfo\\([ ]\\)" + echo "s|\\(^<[hH][$NN][ >].*</[hH][$NN]>\\)$_q_|\\1$_x_\\2|" # $++ + echo "/$_x_/s|^|<table width=\"100%\"><tr valign=\"bottom\"><td>|" # $++ + echo "/$_x_/s|</[hH][$NN]>|&</td><td align=\"right\"><i>|" # $++ + echo "/$_x_/s|\$|</i></td></tr></table>|" # $++ + echo "s|$_x_||" # $++ +} + +fast_href () # args "$FILETOREFERENCE" "$FROMCURRENTFILE:$F" +{ # prints path to $FILETOREFERENCE href-clickable in $FROMCURRENTFILE + # if no subdirectoy then output is the same as input $FILETOREFERENCE + R="$2" ; test ".$R" = "." && R="$F" + S=`back_path "$R"` + if test ".$S" = "." + then echo "$1" # $++ + else _1_=`echo "$1" | \ + $SED -e "/^ *\$/d" -e "/^\\//d" -e "/^[.][.]/d" -e "/^[$AA]*:/d" ` + if test ".$_1_" = "." # don't move any in the pattern above + then echo "$1" # $++ + else echo "$S$1" # $++ prefixed with backpath + fi fi +} + +make_back_path () # "$FILE" +{ + R="$1" ; test ".$R" = "." && R="$F" + S=`back_path "$R"` + if test ".$S" != "." ; then + echo "s|\\(<[^<>]* href=\\\"\\)\\([$AA][^<>:]*\\\"[^<>]*>\\)|\\1$S\\2|g" + echo "s|\\(<[^<>]* src=\\\"\\)\\([$AA][^<>:]*\\\"[^<>]*>\\)|\\1$S\\2|g" + fi +} + +# ============================================================== SITE MAP DATA +# each entry needs atleast a list-title, a long-title, and a list-date +# these are the basic information to be printed in the sitemap file +# where it is bound the hierarchy of sect/subsect of the entries. 
+ +site_map_list_title() # $file $text +{ + echo "<$Q'list'>$1 $2<$QX>" >> "$MK_DATA" +} +info_map_list_title() # $file $text +{ + echo "<$Q'list'>$2<$QX>" >> "$tmp/$1.$DATA" +} +site_map_long_title() # $file $text +{ + echo "<$Q'long'>$1 $2<$QX>" >> "$MK_DATA" +} +info_map_long_title() # $file $text +{ + echo "<$Q'long'>$2<$QX>" >> "$tmp/$1.$DATA" +} +site_map_list_date() # $file $text +{ + echo "<$Q'date'>$1 $2<$QX>" >> "$MK_DATA" +} +info_map_list_date() # $file $text +{ + echo "<$Q'date'>$2<$QX>" >> "$tmp/$1.$DATA" +} + +siteinfo2sitemap () # generate <name><page><date> addon sed scriptlet +{ # the resulting script will act on each item/line + # containing <!--"filename"--> and expand any following + # reference of <!--name--> or <!--date--> or <!--long--> + INP="$1" ; test ".$INP" = "." && INP="$MK_DATA" + _list_="s|\\\\(<!--\"\\1\"-->.*\\\\)<name href=[^<>]*>.*</name>|\\\\1<name href=\"\\1\">\\2</name>|" + _date_="s|\\\\(<!--\"\\1\"-->.*\\\\)<date>.*</date>|\\\\1<date>\\2</date>|" + _long_="s|\\\\(<!--\"\\1\"-->.*\\\\)<long>.*</long>|\\\\1<long>\\2</long>|" + $SED -e "s:&:\\\\&:g" \ + -e "s:<$Q'list'>\\([^ ]*\\) \\(.*\\)<$QX>:$_list_:" \ + -e "s:<$Q'date'>\\([^ ]*\\) \\(.*\\)<$QX>:$_date_:" \ + -e "s:<$Q'long'>\\([^ ]*\\) \\(.*\\)<$QX>:$_long_:" \ + -e "/^s|/!d" $INP # $++ +} + +make_multisitemap () +{ # each category gets its own column along with the usual entries + INPUTS="$1" ; test ".$INPUTS" = "." && INPUTS="$MK_DATA" + siteinfo2sitemap > "$MK_SITE" # have <name><long><date> addon-sed + _form_="<!--\"\\2\"--><!--use\\1--><long>\\3</long><!--end\\1-->" + _form_="$_form_<br><name href=\"\\2\">\\3</name><date>......</date>" + _tiny_="small><small><small" ; _tinyX_="small></small></small " + _tabb_="<br><$_tiny_> </$_tinyX_>" ; _bigg_="<big> </big>" + echo "<table width=\"100%\"><tr><td> " # $++ + $SED -e "/^<$Q'[Uu]se.'>/!d" \ + -e "/>[$AZ$az][$AZ$az][$AZ$az][$AZ$az]*:/d" \ + -e "s|^<$Q'[Uu]se\\(.\\)'>\\([^ ]*\\) \\(.*\\)<$QX>|$_form_|" \ + -f "$MK_SITE" -e "/<name/!d" \ + -e "s|<!--use1-->|</td><td valign=\"top\"><b>|" \ + -e "s|<!--end1-->|</b>|" \ + -e "s|<!--use2-->|<br>|" \ + -e "s|<!--use.-->|<br>|" -e "s/<!--[^<>]*-->/ /g" \ + -e "s|<name |<$_tiny_><a |" -e "s|</name>||" \ + -e "s|<date>|<small style=\"date\">|" \ + -e "s|</date>|</small></a><br></$_tinyX_>|" \ + -e "s|<long>|<!--long-->|" -e "s|</long>|<!--/long-->|" \ + $INPUTS # $++ + echo "</td><tr></table>" # $++ +} + +make_listsitemap () +{ # traditional - the body contains a list with date and title extras + INPUTS="$1" ; test ".$INPUTS" = "." 
&& INPUTS="$MK_DATA" + siteinfo2sitemap > "$MK_SITE" # have <name><long><date> addon-sed + _form_="<!--\"\\2\"--><!--use\\1--><name href=\"\\2\">\\3</name>" + _form_="$_form_<date>......</date><long>\\3</long>" + _tabb_="<td>\\ \\;</td>" + echo "<table cellspacing=\"0\" cellpadding=\"0\">" # $++ + $SED -e "/^<$Q'[Uu]se.'>/!d" \ + -e "/>[$AZ$az][$AZ$az][$AZ$az][$AZ$az]*:/d" \ + -e "s|^<$Q'[Uu]se\\(.\\)'>\\([^ ]*\\) \\(.*\\)<$QX>|$_form_|" \ + -f "$MK_SITE" -e "/<name/!d" \ + -e "s|<!--use\\(1\\)-->|<tr class=\"listsitemap\\1\"><td>*</td>|" \ + -e "s|<!--use\\(2\\)-->|<tr class=\"listsitemap\\1\"><td>-</td>|" \ + -e "s|<!--use\\(.\\)-->|<tr class=\"listsitemap\\1\"><td> </td>|" \ + -e "/<tr.class=\"listsitemap3\">/s|<name [^<>]*>|&- |" \ + -e "s|<!--[^<>]*-->| |g" \ + -e "s|<name href=\"name:sitemap:|<name href=\"|" \ + -e "s|<name |<td><a |" -e "s|</name>|</a></td>$_tabb_|" \ + -e "s|<date>|<td><small style=\"date\">|" \ + -e "s|</date>|</small></td>$_tabb_|" \ + -e "s|<long>|<td><em><!--long-->|" \ + -e "s|</long>|<!--/long--></em></td></tr>|" \ + "$INPUTS" # $++ + for xx in `grep "^<$Q'use.'>name:sitemap:" $INPUTS` ; do + xx=`echo $xx | sed -e "s/^<$Q'use.'>name:sitemap://" -e "s|<$QX>||"` + if test -f "$xx" ; then + grep "<tr.class=\"listsitemap[$NN]\">" $xx # $++ + fi + done + echo "</table>" # $++ +} + +_xi_include_=`echo \ + "<xi:include xmlns:xi=\"http://www.w3.org/2001/XInclude\" parse=\"xml\""` +make_xmlsitemap () +{ # traditional - the body contains a list with date and title extras + INPUTS="$1" ; test ".$INPUTS" = "." && INPUTS="$MK_DATA" + siteinfo2sitemap > "$MK_SITE" # have <name><long><date> addon-sed + _form_="<!--\"\\2\"--><name href=\"\\2\">\\3</name>" + _sitefile_=`sed_slash_key "$SITEFILE"` + $SED -e "/^<$Q'[Uu]se.'>/!d" \ + -e "/>[$AZ$az][$AZ$az][$AZ$az][$AZ$az]*:/d" \ + -e "s|^<$Q'[Uu]se\\(.\\)'>\\([^ ]*\\) \\(.*\\)<$QX>|$_form_|" \ + -f "$MK_SITE" -e "/<name/!d" \ + -e "/${_sitefile_}/d" \ + -e "/${_sitefile_}l/d" \ + -e "s|\\(href=\"[^<>]*\\)\\.html\\(\"\\)|\\1.xml\\2|g" \ + -e "s|.*<name|$_xi_include_\\n |" \ + -e "s|>.*</name>| />|" \ + "$INPUTS" # $++ +} + +print_extension () +{ + ARG="$1" ; test ".$ARG" = "." && ARG="$opt_print" + case "$ARG" in + -*|.*) echo "$ARG" ;; # $++ + *) echo ".print" ;; # $++ + esac +} + +from_sourcefile () +{ + if test -f "$1" + then echo "$1" + elif test -f "$opt_srcdir/$1" + then echo "$opt_srcdir/$1" + else echo "$1" + fi +} + +html_sourcefile () # generally just cut away the trailing "l" (ell) +{ # making "page.html" argument into "page.htm" return + # (as a new addtion the source may be in ".dbk" xml) + _SRCFILE_=`echo "$1" | $SED -e "s/l\\$//"` + _XMLFILE_=`echo "$1" | $SED -e "s/\\.html/.dbk/"` + if test -f "$_SRCFILE_" + then echo "$_SRCFILE_" # $++ + elif test -f "$_XMLFILE_" + then echo "$_XMLFILE_" # $++ + elif test -f "$opt_src_dir/$_SRCFILE_" + then echo "$opt_src_dir/$_SRCFILE_" # $++ + elif test -f "$opt_src_dir/$_XMLFILE_" + then echo "$opt_src_dir/$_XMLFILE_" # $++ + else echo ".//$_SRCFILE_" # $++ (not found?) + fi +} +html_printerfile_sourcefile () +{ + if test ".$printerfriendly" = "." 
+ then + echo "$1" | sed -e "s/l\$//" # $++ + else + _ext_=`print_extension "$printerfriendly"` + _ext_=`sed_slash_key "$_ext_"` + echo "$1" | sed -e "s/l\$//" -e "s/$_ext_\\([.][$AA]*\\)\$/\\1/" # $++ + fi +} + +fast_html_printerfile () { + x=`html_printerfile "$1"` ; basename "$x" # $++ +# x=`html_printerfile "$1"` ; fast_href "$x" $2 # $++ +} + +html_printerfile () # generate the printerfile for a given normal output +{ + _ext_=`print_extension "$printerfriendly" | sed -e "s/&/\\\\&/"` + echo "$1" | sed -e "s/\\([.][$AA]*\\)\$/$_ext_\\1/" # $++ +} + +make_printerfile_fast () # generate s/file.html/file.print.html/ for hrefs +{ # we do that only for the $FILELIST + ALLPAGES="$1" ; # ="$FILELIST" + for p in $ALLPAGES ; do + a=`sed_slash_key "$p"` + b=`html_printerfile "$p"` + if test "$b" != "$p" ; then + b=`html_printerfile "$p" | sed -e "s:&:\\\\&:g" -e "s:/:\\\\\\/:g"` + echo "s/<a href=\"$a\">/<a href=\"$b\">/" # $++ + echo "s/<a href=\"$a\" /<a href=\"$b\" /" # $++ + fi + done +} + +echo_printsitefile_style () +{ + _bold_="text-decoration : none ; font-weight : bold ; " + echo " <style>" # $+++ + echo " a:link { $_bold_ color : #000060 ; }" # $+++ + echo " a:visited { $_bold_ color : #000040 ; }" # $+++ + echo " body { background-color : white ; }" # $+++ + echo " </style>" # $+++ +} + +make_printsitefile_head() # $sitefile +{ + echo_printsitefile_style > "$MK_STYLE" + $SED -e "/<title>/p" -e "/<title>/d" \ + -e "/<head>/p" -e "/<head>/d" \ + -e "/<\/head>/p" -e "/<\/head>/d" \ + -e "/<body>/p" -e "/<body>/d" \ + -e "/^.*<link [^<>]*rel=\"shortcut icon\"[^<>]*>.*\$/p" \ + -e "d" $SITEFILE | $SED -e "/<head>/r $MK_STYLE" # $+++ +} + + +# ------------------------------------------------------------------------ +# The printsitefile is a long text containing html href markups where +# each of the href lines in the file is being prefixed with the section +# relation. During a secondary call the printsitefile can grepp'ed for +# those lines that match a given output fast-file. The result is a +# navigation header with 1...3 lines matching the nesting level + +# these alt-texts will be only visible in with a text-mode browser: +printsitefile_square="width=\"8\" height=\"8\" border=\"0\"" +printsitefile_img_1="<img alt=\"|go text:\" $printsitefile_square />" +printsitefile_img_2="<img alt=\"||topics:\" $printsitefile_square />" +printsitefile_img_3="<img alt=\"|||pages:\" $printsitefile_square />" +_SECT="mksite:sect:" + +echo_current_line () # $sect $extra +{ + echo "<!--$_SECT\"$1\"-->$2" # $++ +} +make_current_entry () # $sect $file ## requires $MK_SITE +{ + S="$1" ; R="$2" + SSS=`sed_slash_key "$S"` + sep=" - " ; _left_=" [ " ; _right_=" ] " + echo_current_line "$S" "<!--\"$R\"--><name href=\"$R\">$R</name>$sep" \ + | $SED -f "$MK_SITE" \ + -e "s|<!--[^<>]*--><name |<a |" -e "s|</name>|</a>|" \ + -e "/<a href=\"$SSS\"/s/<a href/$_left_&/" \ + -e "/<a href=\"$SSS\"/s/<\\/a>/&$_right_/" # $+++ +} +echo_subpage_line () # $sect $extra +{ + echo "<!--$_SECT*:\"$1\"-->$2" # $++ +} + +make_subpage_entry () +{ + S="$1" ; R="$2" + RR=`sed_slash_key "$R"` + sep=" - " ; + echo_subpage_line "$S" "<!--\"$R\"--><name href=\"$R\">$R</name>$sep" \ + | $SED -f "$MK_SITE" \ + -e "s|<!--[^<>]*--><name |<a |" -e "s|</name>|</a>|" # $+++ +} + +make_printsitefile () +{ + # building the printsitefile looks big but its really a loop over sects + INPUTS="$1" ; test ".$INPUTS" = "." 
&& INPUTS="$MK_DATA" + siteinfo2sitemap > "$MK_SITE" # have <name><long><date> addon-sed + if test -d DEBUG && test -f "$MK_SITE" + then FFFF=`echo "$F" | sed -e "s,/,:,g"` + cp "$MK_DATA" "DEBUG/$FFFF.SITE.tmp.sed" + fi + + make_printsitefile_head $SITEFILE # $++ + sep=" - " + _sect1="<a href=\"#.\" title=\"section\">$printsitefile_img_1</a> ||$sep" + _sect2="<a href=\"#.\" title=\"topics\">$printsitefile_img_2</a> ||$sep" + _sect3="<a href=\"#.\" title=\"pages\">$printsitefile_img_3</a> ||$sep" + site_get_rootsections > "$MK_SECT1" + # round one - for each root section print a current menu + for r in `cat "$MK_SECT1"` ; do + echo_current_line "$r" "<!--mksite:sect1:A--><br>$_sect1" # $++ + for s in `cat "$MK_SECT1"` ; do + make_current_entry "$r" "$s" # $++ + done + echo_current_line "$r" "<!--mksite:sect1:Z-->" # $++ + done # "$r" + + # round two - for each subsection print a current and subpage menu + for r in `cat "$MK_SECT1"` ; do + site_get_subpages "$r" > "$MK_SECT2" + for s in `cat "$MK_SECT2"` ; do test "$r" = "$s" && continue + echo_current_line "$s" "<!--mksite:sect2:A--><br>$_sect2" # $++ + for t in `cat "$MK_SECT2"` ; do test "$r" = "$t" && continue + make_current_entry "$s" "$t" # $++ + done # "$t" + echo_current_line "$s" "<!--mksite:sect2:Z-->" # $++ + done # "$s" + _have_children_="0" + for t in `cat "$MK_SECT2"` ; do test "$r" = "$t" && continue + test "$_have_children_" = "0" && _have_children_="1" && \ + echo_subpage_line "$r" "<!--mksite:sect2:A--><br>$_sect2" # $++ + make_subpage_entry "$r" "$t" # $++ + done # "$t" + test "$_have_children_" = "1" && \ + echo_subpage_line "$r" "<!--mksite:sect2:Z-->" # $++ + done # "$r" + + # round three - for each subsubsection print a current and subpage menu + for r in `cat "$MK_SECT1"` ; do + site_get_subpages "$r" > "$MK_SECT2" + for s in `cat "$MK_SECT2"` ; do test "$r" = "$s" && continue + site_get_subpages "$s" > "$MK_SECT3" + for t in `cat "$MK_SECT3"` ; do test "$s" = "$t" && continue + echo_current_line "$t" "<!--mksite:sect3:A--><br>$_sect3" # $++ + for u in `cat "$MK_SECT3"` ; do test "$s" = "$u" && continue + make_current_entry "$t" "$u" # $++ + done # "$u" + echo_current_line "$t" "<!--mksite:sect3:Z-->" # $++ + done # "$t" + _have_children_="0" + for u in `cat "$MK_SECT3"` ; do test "$u" = "$s" && continue + test "$_have_children_" = "0" && _have_children_="1" && \ + echo_subpage_line "$s" "<!--mksite:sect3:A--><br>$_sect3" # $++ + make_subpage_entry "$s" "$u" # $++ + done # "$u" + test "$_have_children_" = "1" && \ + echo_subpage_line "$s" "<!--mksite:sect3:Z-->" # $++ + done # "$s" + done # "$r" + echo "<a name=\".\"></a>" # $++ + echo "</body></html>" # $++ +} + +# create a selector that can grep a printsitefile for the matching entries +select_in_printsitefile () # arg = "page" : return to stdout >> $P.$HEAD +{ + _selected_="$1" ; test ".$_selected_" = "." 
&& _selected_="$F" + _section_=`sed_slash_key "$_selected_"` + echo "s/^<!--$_SECT\"$_section_\"-->//" # sect3 + echo "s/^<!--$_SECT[*]:\"$_section_\"-->//" # children + _selected_=`site_get_parentpage "$_selected_"` + _section_=`sed_slash_key "$_selected_"` + echo "s/^<!--$_SECT\"$_section_\"-->//" # sect2 + _selected_=`site_get_parentpage "$_selected_"` + _section_=`sed_slash_key "$_selected_"` + echo "s/^<!--$_SECT\"$_section_\"-->//" # sect1 + echo "/^<!--$_SECT\"[^\"]*\"-->/d" + echo "/^<!--$_SECT[*]:\"[^\"]*\"-->/d" + echo "s/^<!--mksite:sect[$NN]:[$AZ]-->//" +} + +body_for_emailfooter () +{ + test ".$emailfooter" = ".no" && return + _email_=`echo "$emailfooter" | sed -e "s|[?].*||"` + _dated_=`info_get_entry updated` + echo "<hr><table border=\"0\" width=\"100%\"><tr><td>" + echo "<a href=\"mailto:$emailfooter\">$_email_</a>" + echo "</td><td align=\"right\">" + echo "$_dated_</td></tr></table>" +} + +# =================================================================== CSS +# There was another project to support sitemap build from xml files. +# The source format was using .dbk+xml with embedded references to .css +# files for visual preview in a browser. An docbook xml file with semantic +# outlines is far better suited for quality documentation than any html +# source. It happens that the xml/css support in browsers is still not +# very portable - especially embedded css style blocks are a nightmare. +# Instead we (a) grab all non-html xml markup tags (b) grab all referenced +# css stylesheets (c) cut out css defs from [b] that are known by [a] and +# (d) append those to the <style> tag in the output html file as well as +# (e) reformatting the defs as well as markups from tags to tag classes. +# Input dbk/htm +# <?xml-stylesheet type="text/css" href="html.css" ?> <!-- dbk/xml --> +# <link rel="stylesheet" type="text/css" href="sdocbook.css" /> <!-- xhtml --> +# <article><para> +# Using some <command>exe</command> +# </para></article> +# Input css: +# article { .. ; display : block } +# para { .. ; display : block } +# command { .. ; display : inline } +# Output html: +# <html><style type="text/css"> +# div .article { .. } +# div .para { .. } +# span .command { .. 
} +# </style> +# <div class="article"><div class="para> +# Using some <span class="command">exe</span> +# </div></div> + +css_sourcefile () +{ + if test -f "$1" ; then echo "$1" + elif test -f "$opt_src_dir/$1" ; then echo "$opt_src_dir/$1" + elif echo "$1" | grep "^/" > $NULL ; then echo "$1" + else echo "./$1" + fi +} + +css_xmltags () # $SOURCEFILE +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + S="$SOURCEFILE" + cat "$S" | $SED -e "s|>[^<>]*<|><|g" -e "s|^[^<>]*<|<|" \ + -e "s|>[^<>]*\$|>|" -e "s|<|\\n|g" \ + | $SED -e "/^\\//d" -e "/^ *\$/d" -e "/>/!d" -e "s|>.*||" \ + | sort | uniq > "$tmp/$MK.$X.xmltags.tmp.txt" +} + +css_xmlstyles () # $SOURCEFILE +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + S="$SOURCEFILE" + cat "$S" "$SITEFILE" \ + | sed \ + -e "s|<link *rel=['\"]*stylesheet|<?xml-stylesheet |" \ + -e "/<.xml-stylesheet/!d" -e "/href/!N" -e "/href/!N" \ + -e "s|^.*<.xml-stylesheet||" -e 's|^.*href="||' -e 's|".*||' \ + | sort | uniq > "$tmp/$MK.$X.xmlstylesheets.tmp.txt" +} + +css_xmlstyles_sed () # $SOURCEFILE +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + S="$tmp/$MK.$X.xmltags.tmp.txt" + R="$tmp/$MK.$X.xmltags.tmp.sed" + rm -f "$R" + { + for x in 1 2 3 4 5 6 7 8 9 ; do echo "/}/d" ; echo "/{/!N" ; done + echo "s|\\r||g" + $SED "/^[$AZ$az$NN]/!d" "$S" | { while read xmltag ; do + xmltag=`echo "$xmltag" | sed -e "s/ .*//"` + _xmltag=`sed_slash_key "$xmltag"` + if echo " title section " | grep " $xmltag " > $NULL ; then + test "$xmltag" = "section" && continue; + echo "/^ *$_xmltag *[,\\n{]/bfound" >> "$R" + echo "/[,\\n] *$_xmltag *[,\\n{]/bfound" >> "$R" + $SED "/^[$AZ$az$NN]/!d" "$S" | { while read xmlparent ; do + xmlparent=`echo "$xmlparent" | sed -e "s/ .*//"` + _xmlparent=`sed_slash_key "$xmlparent"` + echo "/^ *$_xmlparent *$_xmltag *[,\\n{]/bfound" + echo "/[ ,\\n] *$_xmlparent *$_xmltag *[,\\n{]/bfound" + done } + else + echo "/^ *$_xmltag *[ ,\\n{]/bfound" + echo "/[ ,\\n] *$_xmltag *[ ,\\n{]/bfound" + fi + done } + echo "d" ; echo ":found" + for x in 1 2 3 4 5 6 7 8 9 ; do echo "/}/!N" ; done + $SED "/^[$AZ$az$NN]/!d" "$S" | { while read xmltag ; do + xmltag=`echo "$xmltag" | sed -e "s/ .*//"` + if echo " $HTMLTAGS $HTMLTAGS2" | grep " $xmltag " > $NULL ; then + continue # keep html tags + fi + echo "s|^\\( *\\)\\($xmltag *[ ,\\n{]\\)|\\1.\\2|g" + echo "s|\\([ ,\\n] *\\)\\($xmltag *[ ,\\n{]\\)|\\1.\\2|g" + done } + } > "$R" +} + +css_xmltags_css () # $SOURCEFILE +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + S="$tmp/$MK.$X.xmltags.tmp.sed" + R="$tmp/$MK.$X.xmltags.tmp.css" + { + cat "$tmp/$MK.$X.xmlstylesheets.tmp.txt" | { while read xmlstylesheet ; do + stylesheet=`css_sourcefile "$xmlstylesheet"` + if test -f "$stylesheet" ; then + echo "/* $xmlstylesheet */" + cat "$stylesheet" | $SED -f "$S" + else + error "$xmlstylesheet : ERROR, no such stylesheet" + fi + done } + } > "$R" +} + +css_xmlmapping_sed () # $SOURCEFILE +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + S="$tmp/$MK.$X.xmltags.tmp.txt" + R="$tmp/$MK.$X.xmlmapping.tmp.sed" + rm -f "$R" + { + for x in 1 2 3 4 5 6 7 8 9 ; do echo "/}/d" ; echo "/{/!N" ; done + echo "s|\\r||g" + $SED "/^[$AZ$az$NN]/!d" "$S" | { while read xmltag ; do + xmltag=`echo "$xmltag" | sed -e "s/ .*//"` + xmltag=`sed_slash_key "$xmltag"` + echo "/^ *\\.$xmltag *[ ,\\n{]/bfound" + echo "/[ ,\\n] *\\.$xmltag *[,\\n{]/bfound" + done } + echo "d" ; echo ":found" + for x in 1 2 3 4 5 6 7 8 9 ; do echo "/}/!N" ; done + echo "s/^/>>/" + echo "/[\\n ]display *: *list-item/s|^.*>>|li>>|" + echo "/[\\n ]display *: *table-caption/s|^.*>>|caption>>|" 
+ echo "/[\\n ]display *: *table-cell/s|^.*>>|td>>|" + echo "/[\\n ]display *: *table-row/s|^.*>>|tr>>|" + echo "/[\\n ]display *: *table/s|^.*>>|table>>|" + echo "/[\\n ]display *: *block/s|^.*>>|div>>|" + echo "/[\\n ]display *: *inline/s|^.*>>|span>>|" + echo "/[\\n ]display *: *none/s|^.*>>|small>>|" + echo "/^div>>.*[\\n ]list-style-type *: *disc/s|^.*>>|ul>>|" + echo "/^div>>.*[\\n ]list-style-type *: *decimal/s|^.*>>|ol>>|" + echo "/^span>>.*[\\n ]font-family *: *monospace/s|^.*>>|tt>>|" + echo "/^span>>.*[\\n ]font-style *: *italic/s|^.*>>|em>>|" + echo "/^span>>.*[\\n ]font-weight *: *bold/s|^.*>>|b>>|" + echo "/^div>>.*[\\n ]white-space *: *pre/s|^.*>>|pre>>|" + echo "/^div>>.*[\\n ]margin-left *: *[$NN]/s|^.*>>|blockquote>>|" + $SED "/^[$AZ$az$NN]/!d" "$S" | { while read xmltag ; do + xmltag=`echo "$xmltag" | sed -e "s/ .*//"` + echo "s|^\\(.*\\)>> *\\.$xmltag *[ ,\\n{].*|\\1 .$xmltag|" + echo "s|^\\(.*\\)>>.*[ ,\\n] *\\.$xmltag *[ ,\\n{].*|\\1 .$xmltag|" + done } + echo "s/^div \\.para\$/p .para/" + echo "s/^span \\.ulink\$/a .ulink/" + } > "$R" +} + +css_xmlmapping () # $SOURCEFILE +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + cat "$tmp/$MK.$X.xmltags.tmp.css" | \ + $SED -f "$tmp/$MK.$X.xmlmapping.tmp.sed" \ + > "$tmp/$MK.$X.xmlmapping.tmp.txt" +} + +css_scan() # $SOURCEFILE +{ + css_xmltags + css_xmlstyles + css_xmlstyles_sed + css_xmltags_css + css_xmlmapping_sed + css_xmlmapping +} + +tags2span_sed() # $SOURCEFILE > $++ +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + S="$tmp/$MK.$X.xmltags.tmp.txt" + R="$tmp/$MK.$X.xmltags.tmp.css" + echo "s|<[?]xml-stylesheet[^<>]*[?]>||" + echo "s|<link *rel=['\"]*stylesheet[^<>]*>||" + echo "s|<section[^<>]*>||g" + echo "s|</section>||g" + $SED "/^[$AZ$az$NN]/!d" "$S" | { while read xmltag ; do + # note "xmltag=$xmltag" + xmltag=`echo "$xmltag" | sed -e "s/ .*//"` + if echo " $HTMLTAGS $HTMLTAGS2" | grep " $xmltag " > $NULL ; then + continue # keep html tags + fi + _xmltag=`sed_slash_key "$xmltag"` + _span_=`$SED -e "/ \\.$_xmltag\$/!d" -e "s/ .*//" -e q \ + < "$tmp/$MK.$X.xmlmapping.tmp.txt"` + test ".$_span_" = "." && _span_="span" + _xmltag=`sed_piped_key "$xmltag"` + echo "s|<$xmltag\\([\\n\\t ][^<>]*\\)url=|<$_span_ class=\"$xmltag\"\\1href=|g" + echo "s|<$xmltag\\([\\n\\t >]\\)|<$_span_ class=\"$xmltag\"\\1|g" + echo "s|</$xmltag\\([\\n\\t >]\\)|</$_span_\\1|g" + done } + cat "$tmp/$MK.$X.xmlstylesheets.tmp.txt" | { while read xmlstylesheet ; do + if test -f "$xmlstylesheet" ; then + R="[^<>]*href=['"'"'"]$xmlstylesheet['"'"'"][^<>]*" + echo "s|<[?]xml-stylesheet$R>||" + echo "s|<link[^<>]* rel=['"'"'"]*stylesheet['"'"'" ]$R>||" + fi + done } +} + +tags2meta_sed() # $SOURCEFILE > $++ +{ + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + S="$tmp/$MK.$X.xmlstylesheets.tmp.txt" + R="$tmp/$MK.$X.xmltags.tmp.css" + cat "$tmp/$MK.$X.xmlstylesheets.tmp.txt" | { while read xmlstylesheet ; do + if test -f "$xmlstylesheet" ; then + echo " <style type=\"text/css\"><!--" + $SED -e "s/^/ /" < "$R" + echo " --></style>" + break + fi + done } +} + +# ========================================================================== +# xml/docbook support is taking an dbk input file converting any html DBK +# syntax into pure docbook tagging. Each file is being given a docbook +# doctype so that an xml/docbook viewer can render it correctly - that +# is needed atleast since docbook files do not embed stylesheet infos. +# Most of the processing is related to remap html markup and some other +# shortcut markup into correct docbook markup. 
The result is NOT checked +# for being well-formed or even matching the docbook schema DTD at all. + +scan_xml_rootnode () +{ + rootnode=`cat "$SOURCEFILE" | \ + $SED -e "/<[$AZ$az$NN]/!d" -e "s/<\\([$AZ$az$NN]*\\).*/\\1/" -e q` + echo "<$Q'root'>$F $rootnode<$QX>" >> "$MK_DATA" +} + +get_xml_rootnode () +{ + _file_=`sed_slash_key "$F"` + $SED -e "/^<$Q'root'>$_file_ /!d" \ + -e "s|.* ||" -e "s|<.*||" -e q "$MK_DATA" # + +} + +xml_sourcefile () +{ + _XMLFILE_=`echo "$1" | $SED -e "s/\\.xml\\$/.dbk/"` + _SRCFILE_=`echo "$1" | $SED -e "s/\\.xml\\$/.htm/"` + test "$1" = "$_XMLFILE_" && _XMLFILE_="///" + test "$1" = "$_SRCFILE_" && _SRCFILE_="///" + if test -f "$_XMLFILE_" + then echo "$_XMLFILE_" # $++ + elif test -f "$_SRCFILE_" + then echo "$_SRCFILE_" # $++ + elif test -f "$opt_src_dir/$_XMLFILE_" + then echo "$opt_src_dir/$_XMLFILE_" # $++ + elif test -f "$opt_src_dir/$_SRCFILE_" + then echo "$opt_src_dir/$_SRCFILE_" # $++ + else echo ".//$_XMLFILE_" # $++ (not found?) + fi +} + +scan_xmlfile() +{ + SOURCEFILE=`xml_sourcefile "$F"` + $hint "'$SOURCEFILE': scanning xml -> '$F'" + scan_xml_rootnode + rootnode=`get_xml_rootnode | sed -e "/^h[$NN]/s|\$| <?section?>|"` + $hint "'$SOURCEFILE': rootnode ('$rootnode')" +} + +make_xmlfile() +{ + SOURCEFILE=`xml_sourcefile "$F"` + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + article=`get_xml_rootnode` + test ".$article" = "." && article="article" + echo '<!DOCTYPE '$article' PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN"' \ + > "$F" + echo ' "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd">' \ + >> "$F" + cat "$tmp/$MK.$X.xmlstylesheets.tmp.txt" | { while read stylesheet ; do + echo "<?xml-stylesheet type=\"text/css\" href=\"$stylesheet\" ?>" \ + >> "$F" + done } + __secinfo="\\1<sectioninfo>\\2</sectioninfo>" + cat "$SOURCEFILE" | $SED \ + -e "s!<>!\ \;!g" \ + -e "s!\\(&\\)\\(&\\)!\\1amp;\\2amp;!g" \ + -e "s!\\(<[^<>]*\\)\\(width\\)\\(=\\)\\([$NN]*\%*\\)!\\1\\2\\3\"\\4\"!g" \ + -e "s!\\(<[^<>]*\\)\\(cellpadding\\)\\(=\\)\\([$NN]*\%*\\)!\\1\\2\\3\"\\4\"!g" \ + -e "s!\\(<[^<>]*\\)\\(border\\)\\(=\\)\\([$NN]*\%*\\)!\\1\\2\\3\"\\4\"!g" \ + -e "s!<[?]xml-stylesheet[^<>]*>!!" \ + -e "s!<link[^<>]* rel=[\'\"]*stylesheet[^<>]*>!!" \ + -e "s!<[hH][$NN]!<title!g" \ + -e "s!</[hH][$NN]!</title!g" \ + -e "s!\\(</title> *\\)\\([^<>]*[$AZ$az$NN][^<>\r\n]*\\)\$!\\1<sub>\\2</sub>!" 
\ + -e "s!\\(</title>.*\\)<sub>!\\1<subtitle>!g" \ + -e "s!\\(</title>.*\\)</sub>!\\1</subtitle>!g" \ + -e "s!\\(<section>[^<>]*\\)\\(<date>.*</date>[^<>]*\\)\$!\\1<sectioninfo>\\2</sectioninfo>!g" \ + -e "s!<em>!<emphasis>!g" \ + -e "s!</em>!</emphasis>!g" \ + -e "s!<i>!<emphasis>!g" \ + -e "s!</i>!</emphasis>!g" \ + -e "s!<b>!<emphasis role=\"bold\">!g" \ + -e "s!</b>!</emphasis>!g" \ + -e "s!<u>!<emphasis role=\"underline\">!g" \ + -e "s!</u>!</emphasis>!g" \ + -e "s!<big>!<emphasis role=\"strong\">!g" \ + -e "s!</big>!</emphasis>!g" \ + -e "s!<\\(strike\\)>!<emphasis role=\"strikethrough\">!g" \ + -e "s!<\\(s\\)>!<emphasis role=\"strikethrough\">!g" \ + -e "s!</\\(strike\\)>!</emphasis>!g" \ + -e "s!</\\(s\\)>!</emphasis>!g" \ + -e "s!<center>!<blockquote><para>!g" \ + -e "s!</center>!</para></blockquote>!g" \ + -e "s!<p align=\\(\"[$AZ$az$NN]*\"\\)>!<para role=\\1>!g" \ + -e "s!<[pP]>!<para>!g" \ + -e "s!</[pP]>!</para>!g" \ + -e "s!<\\(pre\\)>!<screen>!g" \ + -e "s!<\\(PRE\\)>!<screen>!g" \ + -e "s!</\\(pre\\)>!</screen>!g" \ + -e "s!</\\(PRE\\)>!</screen>!g" \ + -e "s!<a\\( [^<>]*\\)name=\\([^<>]*\\)/>!<anchor \\1id=\\2/>!g" \ + -e "s!<a\\( [^<>]*\\)name=\\([^<>]*\\)>!<anchor \\1id=\\2/>!g" \ + -e "s!<a\\( [^<>]*\\)href=!<ulink\\1url=!g" \ + -e "s!</a>!</ulink>!g" \ + -e "s! remap=\"url\">[^<>]*</ulink>! />!g" \ + -e "s!<\\(/*\\)span\\([ ][^<>]*\\)>!<\\1phrase\\2>!g" \ + -e "s!<\\(/*\\)span>!<\\1phrase>!g" \ + -e "s!<small\\([ ][^<>]*\\)>!<phrase role=\"small\"\\1>!g" \ + -e "s!<small>!<phrase role=\"small\">!g" \ + -e "s!</small>!</phrase>!g" \ + -e "s!<\\(/*\\)\\(sup\\)>!<\\1superscript>!g" \ + -e "s!<\\(/*\\)\\(sub\\)>!<\\1subscript>!g" \ + -e "s!\\(<\\)\\(li\\)\\(><\\)!\\1listitem\\3!g" \ + -e "s!\\(></\\)\\(li\\)\\(>\\)!\\1listitem\\3!g" \ + -e "s!\\(<\\)\\(li\\)\\(>\\)!\\1listitem\\3<para>!g" \ + -e "s!\\(</\\)\\(li\\)\\(>\\)!</para>\\1listitem\\3!g" \ + -e "s!\\(</*\\)\\(ul\\)>!\\1itemizedlist>!g" \ + -e "s!\\(</*\\)\\(ol\\)>!\\1orderedlist>!g" \ + -e "s!\\(</*\\)\\(dl\\)>!\\1variablelist>!g" \ + -e "s!<\\(/*\\)DT>!<\\1dt>!g" \ + -e "s!<\\(/*\\)DD>!<\\1dd>!g" \ + -e "s!<\\(/*\\)DL>!<\\1dl>!g" \ + -e "s!<BLOCKQUOTE>!<blockquote><para>!g" \ + -e "s!</BLOCKQUOTE>!</para></blockquote>!g" \ + -e "s!<\\(/*\\)dl>!<\\1variablelist>!g" \ + -e "s!<dt\\( [^<>]*\\)>!<varlistentry\\1><term>!g" \ + -e "s!<dt>!<varlistentry><term>!g" \ + -e "s!</dt>!</term>!g" \ + -e "s!<dd\\( [^<>]*\\)><!<listitem\\1><!g" \ + -e "s!<dd><!<listitem><!g" \ + -e "s!></dd>!></listitem></varlistentry>!g" \ + -e "s!<dd\\( [^<>]*\\)>!<listitem\\1><para>!g" \ + -e "s!<dd>!<listitem><para>!g" \ + -e "s!</dd>!</para></listitem></varlistentry>!g" \ + -e "s!<table[^<>]*><tr><td>\\(<table[^<>]*>\\)!\\1!" \ + -e "s!\\(</table>\\)</td></tr></table>!\\1!" 
\ + -e "s!<table\\( [^<>]*\\)>!<informaltable\\1><tgroup cols=\"2\"><tbody>!g" \ + -e "s!<table>!<informaltable><tgroup cols=\"2\"><tbody>!g" \ + -e "s!</table>!</tbody></tgroup></informaltable>!g" \ + -e "s!\\(</*\\)tr\\([ ][^<>]*\\)>!\\1row\\2>!g" \ + -e "s!\\(</*\\)tr>!\\1row>!g" \ + -e "s!\\(</*\\)td\\([ ][^<>]*\\)>!\\1entry\\2>!g" \ + -e "s!\\(</*\\)td>!\\1entry>!g" \ + -e "s!\\(<informaltable[^<>]*[ ]width=\"100\%\"\\)!\\1 pgwide=\"1\"!g" \ + -e "s!\\(<tgroup[<>]*[ ]cols=\"2\">\\)\\(<tbody>\\)!\\1<colspec colwidth=\"1*\" /><colspec colwidth=\"1*\" />\\2!g" \ + -e "s!\\(<entry[^<>]*[ ]\\)width=\\(\"[$NN]*\%*\"\\)!\\1remap=\\2!g" \ + -e "s!<nobr>\\([\'\`]*\\)<tt>!<cmdsynopsis><command>\\1!g" \ + -e "s!</tt>\\([\'\`]*\\)</nobr>!\\1</command></cmdsynopsis>!g" \ + -e "s!<nobr><\\(code\\)>\\([\`\"\']\\)!<cmdsynopsis><command>\\2!g" \ + -e "s!<\\(code\\)><nobr>\\([\`\"\']\\)!<cmdsynopsis><command>\\2!g" \ + -e "s!\\([\`\"\']\\)</\\(code\\)></nobr>!\\1</command></cmdsynopsis>!g" \ + -e "s!\\([\`\"\']\\)</nobr></\\(code\\)>!\\1</command></cmdsynopsis>!g" \ + -e "s!<nobr><\\(tt\\)>\\([\`\"\']\\)!<cmdsynopsis><command>\\2!g" \ + -e "s!<\\(tt\\)><nobr>\\([\`\"\']\\)!<cmdsynopsis><command>\\2!g" \ + -e "s!\\([\`\"\']\\)</\\(tt\\)></nobr>!\\1</command></cmdsynopsis>!g" \ + -e "s!\\([\`\"\']\\)</nobr></\\(tt\\)>!\\1</command></cmdsynopsis>!g" \ + -e "s!\\(</*\\)tt>!\\1constant>!g" \ + -e "s!\\(</*\\)code>!\\1literal>!g" \ + -e "s!<br>!<br />!g" \ + -e "s!<br */>!<screen role=\"linebreak\">\n</screen>!g" \ + >> "$F" + echo "'$SOURCEFILE': " `ls -s $SOURCEFILE` ">>" `ls -s $F` +} + +make_xmlmaster () +{ + SOURCEFILE=`xml_sourcefile "$F"` + X=`echo $SOURCEFILE | sed -e "y:/:~:"` + article="section" # book? chapter? + echo '<!DOCTYPE' $article 'PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN"' >$F + echo ' "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd">' >>$F + cat "$tmp/$MK.$X.xmlstylesheets.tmp.txt" | { while read stylesheet ; do + echo "<?xml-stylesheet type=\"text/css\" href=\"$stylesheet\" ?>" \ + >> "$F" + done } + echo "<section><title>Documentation</title>" >>$F + make_xmlsitemap >> $F + echo "</section>" >> $F + echo "'$SOURCEFILE': " `ls -s $SOURCEFILE` ">*>" `ls -s $F` +} + +# ========================================================================== +# +# During processing we will create a series of intermediate files that +# store relations. They all have the same format being +# =relationtype=key value +# where key is usually s filename or an anchor. For mere convenience +# we assume that the source html text does not have lines that start +# off with =xxxx= (btw, ye remember perl section notation...). Of course +# any other format would be usuable as well. 
+# + +# we scan the SITEFILE for href references to be converted +# - in the new variant we use a ".gets.tmp" sed script that SECTS +# marks all interesting lines so they can be checked later +# with an sed anchor of sect="[$NN]" (or sect="[$AZ]") +S="\\ \\;" +# S="[&]nbsp[;]" + +# HR and EM style markups must exist in input - BR sometimes left out +# these routines in(ter)ject hardspace before, between, after markups +# note that "<br>" is sometimes used with HR - it must exist in input +echo_HR_EM_PP () +{ + echo "s%^\\($1$2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\(<>$1$2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($S$1$2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($1<>$2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($1$S$2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($1$2<>$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($1$2$S$3*<a\\) \\(href=\\)%\\1 $4 \\2%" +} + +echo_br_EM_PP () +{ + echo_HR_EM_PP "$1" "$2" "$3" "$4" + echo "s%^\\($2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\(<>$2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($S$2$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($2<>$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($2$S$3*<a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($2$3*<><a\\) \\(href=\\)%\\1 $4 \\2%" + echo "s%^\\($2$3*$S<a\\) \\(href=\\)%\\1 $4 \\2%" +} + +echo_HR_PP () +{ + echo "s%^\\($1<a\\) \\(href=\\)%\\1 $3 \\2%" + echo "s%^\\($1$2*<a\\) \\(href=\\)%\\1 $3 \\2%" + echo "s%^\\(<>$1$2*<a\\) \\(href=\\)%\\1 $3 \\2%" + echo "s%^\\($S$1$2*<a\\) \\(href=\\)%\\1 $3 \\2%" + echo "s%^\\($1<>$2*<a\\) \\(href=\\)%\\1 $3 \\2%" + echo "s%^\\($1$S$2*<a\\) \\(href=\\)%\\1 $3 \\2%" +} +echo_br_PP () +{ + echo_HR_PP "$1" "$2" "$3" + echo "s%^\\($2*<a\\) \\(href=\\)%\\1 $3 \\2%" + echo "s%^\\(<>$2*<a\\) \\(href=\\)%\\1 $3 \\2%" + echo "s%^\\($S$2*<a\\) \\(href=\\)%\\1 $3 \\2%" +} +echo_sp_PP () +{ + echo "s%^\\(<>$1*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($S$1*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\(<><>$1*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($S$S$1*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\(<>$1<>*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($S$1$S*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1<><>*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1$S$S*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1<>*<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1$S*<a\\) \\(href=\\)%\\1 $2 \\2%" +} + +echo_sp_SP () +{ + echo "s%^\\($1<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\(<>$1<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($S$1<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\(<><>$1<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($S$S$1<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\(<>$1<><a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($S$1$S<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1<><><a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1$S$S<a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1<><a\\) \\(href=\\)%\\1 $2 \\2%" + echo "s%^\\($1$S<a\\) \\(href=\\)%\\1 $2 \\2%" +} + +echo_sp_sp () +{ + echo "s%^\\($1<a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\(<>$1<a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\($S$1<a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\(<><>$1<a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\($S$S$1<a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\(<>$1<><a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\($S$1$S<a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\($1<><><a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\($1$S$S<a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\($1<><a\\) \\(name=\\)%\\1 $2 \\2%" + echo "s%^\\($1$S<a\\) \\(name=\\)%\\1 $2 \\2%" +} + +make_sitemap_init() +{ + # build 
a list of detectors that map site.htm entries to a section table + # note that the resulting .gets.tmp / .puts.tmp are real sed-script + h1="[-$AP$AK]" + b1="[*=]" + b2="[-$AP$AK]" + b3="[:/]" + q3="[:/,$AK]" + echo_HR_PP "<hr>" "$h1" "sect=\"1\"" > "$MK_GETS" + echo_HR_EM_PP "<hr>" "<em>" "$h1" "sect=\"1\"" >> "$MK_GETS" + echo_HR_EM_PP "<hr>" "<strong>" "$h1" "sect=\"1\"" >> "$MK_GETS" + echo_HR_PP "<br>" "$b1$b1" "sect=\"1\"" >> "$MK_GETS" + echo_HR_PP "<br>" "$b2$b2" "sect=\"2\"" >> "$MK_GETS" + echo_HR_PP "<br>" "$b3$b3" "sect=\"3\"" >> "$MK_GETS" + echo_br_PP "<br>" "$b2$b2" "sect=\"2\"" >> "$MK_GETS" + echo_br_PP "<br>" "$b3$b3" "sect=\"3\"" >> "$MK_GETS" + echo_br_EM_PP "<br>" "<small>" "$q3" "sect=\"3\"" >> "$MK_GETS" + echo_br_EM_PP "<br>" "<em>" "$q3" "sect=\"3\"" >> "$MK_GETS" + echo_br_EM_PP "<br>" "<u>" "$q3" "sect=\"3\"" >> "$MK_GETS" + echo_HR_PP "<br>" "$q3" "sect=\"3\"" >> "$MK_GETS" + echo_br_PP "<u>" "$b2" "sect=\"2\"" >> "$MK_GETS" + echo_sp_PP "$q3" "sect=\"3\"" >> "$MK_GETS" + echo_sp_SP "" "sect=\"2\"" >> "$MK_GETS" + echo_sp_sp "$q3" "sect=\"9\"" >> "$MK_GETS" + echo_sp_sp "<br>" "sect=\"9\"" >> "$MK_GETS" + $SED -e "s/\\(>\\)\\(\\[\\)/\\1 *\\2/" "$MK_GETS" > "$MK_PUTS" + # the .puts.tmp variant is used to <b><a href=..></b> some hrefs which + # shall not be used otherwise for being generated - this is nice for + # some quicklinks somewhere. The difference: a whitspace "<hr> <a...>" + echo "" > "$MK_DATA" # fresh start +} + +_uses_="<$Q'use\\1'>\\2 \\3<$QX>" +_name_="<$Q'use\\1'>name:\\2 \\3<$QX>" ; + +make_sitemap_list() +{ + _sitefile_="$1" ; test ".$_sitefile_" = "." && _sitefile_="$SITEFILE" + # scan sitefile for references pages - store as "=use+=href+ anchortext" + $SED -f "$MK_GETS" -e "/<a sect=\"[$NN]\"/!d" \ + -e "s|.*<a sect=\"\\([^\"]*\\)\" href=\"\\([^\"]*\\)\"[^<>]*>\\(.*\\)</a>.*|$_uses_|" \ + -e "s|.*<a sect=\"\\([^\"]*\\)\" name=\"\\([^\"]*\\)\"[^<>]*>\\(.*\\)</a>.*|$_name_|" \ + -e "s|.*<a sect=\"\\([^\"]*\\)\" name=\"\\([^\"]*\\)\"[^<>]*>\\(.*\\)|$_name_|" \ + -e "/^<$Q/!d" -e "/^<!/d" \ + "$_sitefile_" >> "$MK_DATA" +} + +_Uses_="<$Q'Use\\1'>\\2 \\3<$QX>" +_Name_="<$Q'Use\\1'>name:\\2 \\3<$QX>" ; + +make_subsitemap_list() +{ + _sitefile_="$1" ; test ".$_sitefile_" = "." 
&& _sitefile_="$SITEFILE" + # scan sitefile for references pages - store as "=use+=href+ anchortext" + $SED -f "$MK_GETS" -e "/<a sect=\"[$NN]\"/!d" \ + -e "s|.*<a sect=\"\\([^\"]*\\)\" href=\"\\([^\"]*\\)\"[^<>]*>\\(.*\\)</a>.*|$_Uses_|" \ + -e "s|.*<a sect=\"\\([^\"]*\\)\" name=\"\\([^\"]*\\)\"[^<>]*>\\(.*\\)</a>.*|$_Name_|" \ + -e "s|.*<a sect=\"\\([^\"]*\\)\" name=\"\\([^\"]*\\)\"[^<>]*>\\(.*\\)|$_Name_|" \ + -e "/^<$Q/!d" -e "/^<!/d" \ + -e "s|>\\([^:./][^:./]*[./]\\)|>$2\\1|" \ + "$_sitefile_" >> "$MK_DATA" +} + +make_sitemap_sect() +{ + # scan used pages and store prime section group relation 'sect' and 'node' + # (A) each "use1" creates "'sect'>href+ href1" for all following non-"use1" + # (B) each "use1" creates "'node'>href2 href1" for all following "use2" + $SED -e "/^<$Q'use.'>/!d" \ + -e "/^<$Q'use1'>/{" \ + -e "h" -e "s|^<$Q'use1'>\\([^ ]*\\) .*|\\1|" \ + -e "x" -e "}" \ + -e "s|^<$Q'use.'>\\([^ ]*\\) .*|<$Q'sect'>\\1|" \ + -e G -e "s|\\n| |" -e "s|\$|<$QX>|" "$MK_DATA" >> "$MK_DATA" + $SED -e "/^<$Q'use.'>/!d" \ + -e "/^<$Q'use1'>/{" \ + -e "h" -e "s|^<$Q'use1'>\\([^ ]*\\) .*|\\1|" \ + -e "x" -e "}" \ + -e "/^<$Q'use[13456789]'>/d" \ + -e "s|<$Q'use.'>\\([^ ]*\\) .*|<$Q'node'>\\1|" \ + -e G -e "s|\\n| |" -e "s|\$|<$QX>|" "$MK_DATA" >> "$MK_DATA" +} + +make_sitemap_page() +{ + # scan used pages and store secondary group relation 'page' and 'node' + # the parenting 'node' for use3 is usually a use2 (or use1 if none there) + $SED -e "/^<$Q'use.'>/!d" \ + -e "/^<$Q'use1'>/{" \ + -e "h" -e "s|^<$Q'use1'>\\([^ ]*\\) .*|\\1|" \ + -e "x" -e "}" \ + -e "/^<$Q'use2'>/{" \ + -e "h" -e "s|^<$Q'use2'>\\([^ ]*\\) .*|\\1|" \ + -e "x" -e "}" \ + -e "/^<$Q'use1'>/d" \ + -e "s|^<$Q'use.'>\\([^ ]*\\) .*|<$Q'page'>\\1<$QX>|" \ + -e G -e "s|\\n| |" "$MK_DATA" >> "$MK_DATA" + $SED -e "/^<$Q'use.'>/!d" \ + -e "/^<$Q'use1'>/{" \ + -e "h" -e "s|^<$Q'use1'>\\([^ ]*\\) .*|\\1|" \ + -e "x" -e "}" \ + -e "/^<$Q'use2'>/{" \ + -e "h" -e "s|^<$Q'use2'>\\([^ ]*\\) .*|\\1|" \ + -e "x" -e "}" \ + -e "/^<$Q'use[12456789]'>/d" \ + -e "s|^<$Q'use.'>\\([^ ]*\\) .*|<$Q'node'>\\1<$QX>|" \ + -e G -e "s|\\n| |" "$MK_DATA" >> "$MK_DATA" + # and for the root sections we register ".." as the parenting group + $SED -e "/^<$Q'use1'>/!d" \ + -e "s|^<$Q'use.'>\\([^ ]*\\) .*|<$Q'node'>\\1 ..<$QX>|" "$MK_DATA" >> "$MK_DATA" +} + +echo_site_filelist() +{ + $SED -e "/^<$Q'use.'>/!d" \ + -e "s|^<$Q'use.'>||" -e "s| .*||" "$MK_DATA" +} + +# ========================================================================== +# originally this was a one-pass compiler but the more information +# we were scanning out the more slower the system ran - since we +# were rescanning files for things like section information. Now +# we scan the files first for global information. +# 1.PASS + +scan_sitefile () # $F +{ + SOURCEFILE=`html_sourcefile "$F"` + $hint "'$SOURCEFILE': scanning -> sitefile" + if test "$SOURCEFILE" != "$F" ; then + dx_init "$F" + dx_text today "`timetoday`" + short=`echo "$F" | $SED -e "s:.*/::" -e "s:[.].*::"` # basename for all exts + short="$short ~" + DC_meta title "$short" + DC_meta date.available "`timetoday`" + DC_meta subject sitemap + DC_meta DCMIType Collection + DC_VARS_Of "$SOURCEFILE" ; HTTP_VARS_Of "$SOURCEFILE" + DC_modified "$SOURCEFILE" ; DC_date "$SOURCEFILE" + DC_section "$F" + DX_text date.formatted `timetoday` + test ".$printerfriendly" != "." && \ + DX_text "printerfriendly" `fast_html_printerfile "$F"` + test ".$USER" != "." 
&& DC_publisher "$USER" + echo "'$SOURCEFILE': $short (sitemap)" + site_map_list_title "$F" "$short" + site_map_long_title "$F" "generated sitemap index" + site_map_list_date "$F" "`timetoday`" + fi +} + +scan_htmlfile() # "$F" +{ + SOURCEFILE=`html_sourcefile "$F"` # SCAN : + $hint "'$SOURCEFILE': scanning -> $F" # HTML : + if test "$SOURCEFILE" != "$F" ; then : + if test -f "$SOURCEFILE" ; then + dx_init "$F" + dx_text today "`timetoday`" + dx_text todays "`timetodays`" + DC_VARS_Of "$SOURCEFILE" ; HTTP_VARS_Of "$SOURCEFILE" + DC_title "$SOURCEFILE" + DC_isFormatOf "$SOURCEFILE" + DC_modified "$SOURCEFILE" ; DC_date "$SOURCEFILE" ; DC_date "$SITEFILE" + DC_section "$F" ; DC_selected "$F" ; DX_alternative "$SOURCEFILE" + test ".$USER" != "." && DC_publisher "$USER" + DX_text date.formatted "`timetoday`" + test ".$printerfriendly" != "." && \ + DX_text "printerfriendly" `fast_html_printerfile "$F"` + sectn=`info_get_entry DC.relation.section` + short=`info_get_entry DC.title.selected` + site_map_list_title "$F" "$short" + info_map_list_title "$F" "$short" + title=`info_get_entry DC.title` + site_map_long_title "$F" "$title" + info_map_long_title "$F" "$title" + edate=`info_get_entry DC.date` + issue=`info_get_entry issue` + site_map_list_date "$F" "$edate" + info_map_list_date "$F" "$edate" + css_scan + echo "'$SOURCEFILE': '$title' ('$short') @ '$issue' ('$sectn')" + else + echo "'$SOURCEFILE': does not exist" + site_map_list_title "$F" "$F" + site_map_long_title "$F" "$F (no source)" + fi ; else + echo "<$F> - skipped" + fi +} + +scan_subsitemap_long () +{ + grep "<a href=\"[^\"]*\">" "$1" | { + while read _line_ ; do + _href_=`echo "$_line_" | $SED -e "s|.*<a href=\"\\([^\"]*\\)\">.*|\\1|"` + _date_=`echo "$_line_" | $SED -e "s|.*<small style=\"date\">\\([^<>]*\\)</small>.*|\\1|" -e "/<a href=\"[^\"]*\">/d"` + _long_=`echo "$_line_" | $SED -e "s|.*<!--long-->\\([^<>]*\\)<!--/long-->.*|\\1|" -e "/<a href=\"[^\"]*\">/d"` + if test ".$_href_" != "." && test ".$_date_" != "." ; then + site_map_list_date "$2$_href_" "$_date_" + fi + if test ".$_href_" != "." && test ".$_long_" != "." ; then + site_map_long_title "$2$_href_" "$_long_" + fi + done + } +} + +scan_namespec () +{ + # nothing so far + case "$1" in + name:sitemap:*) + short=`echo "$F" | $SED -e "s:.*/::" -e "s:[.].*::"` + short=`echo "$short ~" | $SED -e "s/name:sitemap://"` + site_map_list_title "$F" "$short" + site_map_long_title "$F" "external sitemap index" + site_map_list_date "$F" "`timetoday`" + echo "'$F' external sitemap index$n" + ;; + name:*.htm|name:*.html) + FF=`echo "$1" | $SED -e "s|name:||"` + FFF=`echo "$FF" | $SED -e "s|/[^/]*\$|/|"` # dirname + case "$FFF" in */*) : ;; *) FFF="" ;; esac + make_subsitemap_list "$FF" "$FFF" + scan_subsitemap_long "$FF" "$FFF" + ;; + esac +} +scan_httpspec () +{ + # nothing so far + return; +} + +skip_namespec () +{ + # nothing so far + return; +} +skip_httpspec () +{ + # nothing so far + return; +} + +# ========================================================================== +# and now generate the output pages +# 2.PASS + +head_sed_sitemap() # $filename $section +{ + FF=`sed_piped_key "$1"` + SECTION=`sed_slash_key "$2"` + SECTS="sect=\"[$NN$AZ]\"" ; SECTN="sect=\"[$NN]\"" # lines with hrefs + echo "s|\\(<a $SECTS href=\"$FF\">.*</a>\\)|<b>\\1</b>|" # $++ + test ".$sectiontab" != ".no" && \ + echo "/ href=\"$SECTION\"/s|^<td class=\"[^\"]*\"|<td |" # $++ +} + +head_sed_listsection() # $filename $section +{ + # traditional.... 
the sitefile is the full navigation bar + FF=`sed_piped_key "$1"` + SECTION=`sed_slash_key "$2"` + SECTS="sect=\"[$NN$AZ]\"" ; SECTN="sect=\"[$NN]\"" # lines with hrefs + echo "s|\\(<a $SECTS href=\"$FF\">.*</a>\\)|<b>\\1</b>|" # $++ + test ".$sectiontab" != ".no" && \ + echo "/ href=\"$SECTION\"/s|^<td class=\"[^\"]*\"|<td |" # $++ +} + +head_sed_multisection() # $filename $section +{ + # sitefile navigation bar is split into sections + FF=`sed_piped_key "$1"` + SECTION=`sed_slash_key "$2"` + SECTS="sect=\"[$NN$AZ]\"" ; SECTN="sect=\"[$NN]\"" # lines with hrefs + # grep all pages with a class='sect' relation to current $SECTION and + # build foreach an sed line "s|$SECTS\(<a href=$F>\)|<!--sectX-->\1|" + # after that all the (still) numeric SECTNs are deactivated / killed. + for section in $SECTION $headsection $tailsection ; do + test ".$section" = ".no" && continue + $SED -e "/^<$Q'sect'>[^ ]* $section/!d" \ + -e "s|<$Q'sect'>||" -e "s| .*||" \ + -e "s/.*/s|<a $SECTS \\\\(href=\"&\"\\\\)|<a sect=\"X\" \\\\1|/" \ + "$MK_DATA" # $++ + $SED -e "/^<$Q'sect'>name:[^ ]* $section/!d" \ + -e "s|<$Q'sect'>name:||" -e "s| .*||" \ + -e "s/.*/s|<a $SECTS \\\\(name=\"&\"\\\\)|<a sect=\"X\" \\\\1|/" \ + "$MK_DATA" # $++ + done + echo "s|.*<a \\($SECTN href=[^<>]*\\)>.*|<!-- \\1 -->|" # $++ + echo "s|.*<a \\($SECTN name=[^<>]*\\)>.*|<!-- \\1 -->|" # $++ + echo "s|\\(<a $SECTS href=\"$FF\">\\)|<b>\\1</b>|" # $++ + test ".$sectiontab" != ".no" && \ + echo "/ href=\"$SECTION\"/s|^<td class=\"[^\"]*\"|<td |" # $++ +} + +make_sitefile () # "$F" +{ + SOURCEFILE=`html_sourcefile "$F"` + if test "$SOURCEFILE" != "$F" ; then + if test -f "$SOURCEFILE" ; then + # remember that in this case "${SITEFILE}l" = "$F" = "${SOURCEFILE}l" + info2vars_sed > $MK_VARS # have <!--title--> vars substituted + info2meta_sed > $MK_META # add <meta name="DC.title"> values + F_HEAD="$tmp/$F.$HEAD" ; F_FOOT="$tmp/$F.$FOOT" + $CAT "$MK_PUTS" > "$F_HEAD" + head_sed_sitemap "$F" "`info_get_entry_section`" >> "$F_HEAD" + echo "/<head>/r $MK_META" >> "$F_HEAD" + $CAT "$MK_VARS" "$MK_TAGS" >> "$F_HEAD" + echo "/<\\/body>/d" >> "$F_HEAD" + case "$sitemaplayout" in + multi) make_multisitemap > "$F_FOOT" ;; # here we use ~foot~ to + *) make_listsitemap > "$F_FOOT" ;; # hold the main text + esac + + mkpathfile "$F" + $SED_LONGSCRIPT "$F_HEAD" "$SITEFILE" > $F # ~head~ + $CAT "$F_FOOT" >> $F # ~body~ + $SED -e "/<\\/body>/!d" -f "$MK_VARS" "$SITEFILE" >> $F #</body> + echo "'$SOURCEFILE': " `ls -s $SOURCEFILE` ">->" `ls -s $F` "(sitemap)" + else + echo "'$SOURCEFILE': does not exist" + fi fi +} + +make_htmlfile() # "$F" +{ + SOURCEFILE=`html_sourcefile "$F"` # 2.PASS + if test "$SOURCEFILE" != "$F" ; then + if test -f "$SOURCEFILE" ; then + if grep '<meta name="formatter"' "$SOURCEFILE" > $NULL ; then + echo "'$SOURCEFILE': SKIP, this sourcefile looks like a formatted file" + echo "'$SOURCEFILE': (may be a sourcefile in place of a targetfile?)" + return + fi + info2vars_sed > $MK_VARS # have <!--$title--> vars substituted + info2meta_sed > $MK_META # add <meta name="DC.title"> values + tags2span_sed > $MK_SPAN # extern text/css -> intern css classes + tags2meta_sed >>$MK_META # extern text/css -> intern css classes + F_HEAD="$tmp/$F.$HEAD" ; F_BODY="$tmp/$F.$BODY" ; F_FOOT="$tmp/$F.$FOOT" + $CAT "$MK_PUTS" > "$F_HEAD" + case "$sectionlayout" in + multi) head_sed_multisection "$F" "`info_get_entry_section`" >> "$F_HEAD" ;; + *) head_sed_listsection "$F" "`info_get_entry_section`" >> "$F_HEAD" ;; + esac + $CAT "$MK_VARS" "$MK_TAGS" "$MK_SPAN" 
>> "$F_HEAD" #tag and vars + echo "/<\\/body>/d" >> "$F_HEAD" #cut lastline + echo "/<head>/r $MK_META" >> "$F_HEAD" #add metatags + echo "/<title>/d" > "$F_BODY" #not that line + $CAT "$MK_VARS" "$MK_TAGS" "$MK_SPAN" >> "$F_BODY" #tag and vars + bodymaker_for_sectioninfo >> "$F_BODY" #if sectioninfo + info2body_sed >> "$F_BODY" #cut early + info2head_sed >> "$F_HEAD" + make_back_path "$F" >> "$F_HEAD" + test ".$emailfooter" != ".no" && \ + body_for_emailfooter > "$F_FOOT" + + mkpathfile "$F" + $SED_LONGSCRIPT "$F_HEAD" $SITEFILE > $F # ~head~ + $SED_LONGSCRIPT "$F_BODY" $SOURCEFILE >> $F # ~body~ + test -f "$F_FOOT" && $CAT "$F_FOOT" >> $F # ~foot~ + $SED -e "/<\\/body>/!d" -f "$MK_VARS" "$SITEFILE" >> $F #</body> + echo "'$SOURCEFILE': " `ls -s $SOURCEFILE` "->" `ls -s $F` + else # test -f $SOURDEFILE + echo "'$SOURCEFILE': does not exist" + fi ; else + echo "<$F> - skipped" + fi +} + +make_printerfriendly () # "$F" +{ # PRINTER + printsitefile="0" # FRIENDLY + P=`html_printerfile "$F"` + P_HEAD="$tmp/$P.$HEAD" + P_BODY="$tmp/$P.$BODY" + case "$F" in + ${SITEFILE}|${SITEFILE}l) + printsitefile=">=>" ; BODY_TXT="$tmp/$F.$FOOT" ;; + *.html) printsitefile="=>" ; BODY_TXT="$SOURCEFILE" ;; + esac + if grep '<meta name="formatter"' "$BODY_TXT" > $NULL ; then return; fi + if test ".$printsitefile" != ".0" && test -f "$SOURCEFILE" ; then + make_printerfile_fast "$FILELIST" > ./$MK_FAST + $CAT "$MK_VARS" "$MK_TAGS" "$MK_FAST" > "$P_HEAD" + $SED -e "/DC.relation.isFormatOf/s|content=\"[^\"]*\"|content=\"$F\"|" \ + "$MK_META" > "$MK_METT" + echo "/<head>/r $MK_METT" >> "$P_HEAD" # meta + echo "/<\\/body>/d" >> "$P_HEAD" + select_in_printsitefile "$F" >> "$P_HEAD" + _ext_=`print_extension "$printerfriendly"` # head- + # line_=`sed_slash_key "$printsitefile_img_2"` # back- + echo "/||topics:/s| href=\"[#][.]\"| href=\"$F\"|" >> "$P_HEAD" + echo "/|||pages:/s| href=\"[#][.]\"| href=\"$F\"|" >> "$P_HEAD" + make_back_path "$F" >> "$P_HEAD" + $CAT "$MK_VARS" "$MK_TAGS" "$MK_FAST" > "$P_BODY" + make_back_path "$F" >> "$P_BODY" + + mkpathfile "$P" + $SED_LONGSCRIPT "$P_HEAD" $PRINTSITEFILE > $P # ~head~ + $SED_LONGSCRIPT "$P_BODY" $BODY_TXT >> $P # ~body~ + $SED -e "/<\\/body>/!d" -f $MK_VARS $PRINTSITEFILE >> $P #</body> + echo "'$SOURCEFILE': " `ls -s $SOURCEFILE` "$printsitefile" `ls -s $P` + fi +} + + +# ======================================================================== +# ======================================================================== +# ======================================================================== + +# ======================================================================== +# #### 0. INIT +make_sitemap_init +make_sitemap_list +make_sitemap_sect +make_sitemap_page + +if test -d DEBUG && test -f "$MK_DATA" +then FFFF=`echo "$F" | sed -e "s,/,:,g"` + cp "$MK_DATA" "DEBUG/$FFFF.DATA.tmp.htm" +fi + +FILELIST=`echo_site_filelist` +if test ".$opt_filelist" != "." || test ".$opt_list" = ".file"; then + for F in $FILELIST; do echo "$F" ; done ; exit # --filelist +fi +if test ".$opt_files" != "." ; then FILELIST="$opt_files" ; fi # --files +if test ".$FILELIST" = "."; then warn "nothing to do (no --filelist)" ; fi +if test ".$FILELIST" = ".SITEFILE" ; then warn "only '$SITEFILE'?!" ; fi + +for F in $FILELIST ; do case "$F" in #### 1. PASS +name:*) scan_namespec "$F" ;; +http:*|https:*|ftp:*|mailto:*|telnet:*|news:*|gopher:*|wais:*) + scan_httpspec "$F" ;; +${SITEFILE}|${SITEFILE}l) scan_sitefile "$F" ;; # ........... SCAN SITE +*@*.de) + echo "!! 
-> '$F' (skipping malformed mailto:-link)" + ;; +../*) + echo "!! -> '$F' (skipping topdir build)" + ;; +# */*.html) +# echo "!! -> '$F' (skipping subdir build)" +# ;; +# */*/*/|*/*/|*/|*/index.htm|*/index.html) +# echo "!! -> '$F' (skipping subdir index.html)" +# ;; +*.html) scan_htmlfile "$F" # ........... SCAN HTML + if test ".$opt_xml" != "." ; then + F=`echo "$F" | sed -e "s/\\.html$/.xml/"` + scan_xmlfile "$F" + fi ;; +*.xml) scan_xmlfile "$F" ;; +*/) echo "'$F' : directory - skipped" + site_map_list_title "$F" "`sed_slash_key $F`" + site_map_long_title "$F" "(directory)" + ;; +*) echo "?? -> '$F'" + ;; +esac done + +if test ".$printerfriendly" != "." ; then # .......... PRINT VERSION + _ext_=`print_extension "$printerfriendly" | sed -e "s/&/\\\\&/"` + PRINTSITEFILE=`echo "$SITEFILE" | sed -e "s/\\.[$AA]*\$/$_ext_&/"` + echo "NOTE: going to create printer-friendly sitefile $PRINTSITEFILE" + make_printsitefile > "$PRINTSITEFILE" +fi + +for F in $FILELIST ; do case "$F" in #### 2. PASS +name:*) skip_namespec "$F" ;; +http:*|https:*|ftp:*|mailto:*|telnet:*|news:*|gopher:*|wais:*) + skip_httpspec "$F" ;; +${SITEFILE}|${SITEFILE}l) make_sitefile "$F" # ........ SITE FILE + if test ".$printerfriendly" != "." ; then make_printerfriendly "$F" ; fi + if test ".$opt_xml" != "." ; then _old_F_="$F" + F=`echo "$F" | sed -e "s/\\.html$/.xml/"` + make_xmlmaster "$F" ;F="$_old_F_" + fi ;; +*@*.de) + echo "!! -> '$F' (skipping malformed mailto:-link)" + ;; +../*) + echo "!! -> '$F' (skipping topdir build)" + ;; +# */*.html) +# echo "!! -> '$F' (skipping subdir build)" +# ;; +# */*/*/|*/*/|*/|*/index.htm|*/index.html) +# echo "!! -> '$F' (skipping subdir index.html)" +# ;; +*.html) make_htmlfile "$F" # .................. HTML FILES + test ".$printerfriendly" != "." && make_printerfriendly "$F" + if test ".$opt_xml" != "." ; then _old_F_="$F" + F=`echo "$F" | sed -e "s/\\.html$/.xml/"` + make_xmlfile "$F" ;F="$_old_F_" + fi ;; +*.xml) make_xmlfile "$F" ;; +*/) echo "'$F' : directory - skipped" + ;; +*) echo "?? -> '$F'" + ;; +esac +# .............. debug .................... + if test -d DEBUG && test -f "./$F" ; then + FFFF=`echo "$F" | sed -e "s,/,:,g"` + test -f "$tmp/$F.$DATA" && cp "$tmp/$F.$DATA" DEBUG/$FFFF.data.tmp.htm + test -f "$tmp/$F.$HEAD" && cp "$tmp/$F.$HEAD" DEBUG/$FFFF.head.tmp.sed + test -f "$tmp/$F.$BODY" && cp "$tmp/$F.$BODY" DEBUG/$FFFF.body.tmp.sed + test -f "$tmp/$F.$FOOT" && cp "$tmp/$F.$FOOT" DEBUG/$FFFF.foot.tmp.sed + for P in tags vars span meta page date list html sect \ + data head body foot fast xmlmapping \ + gets puts site mett sect1 sect2 sect3 style ; do + test -f $tmp/$MK.$P.tmp.htm && cp $tmp/$MK.$P.tmp.htm DEBUG/$FFFF.$P.tmp.htm + test -f $tmp/$MK.$P.tmp.sed && cp $tmp/$MK.$P.tmp.sed DEBUG/$FFFF.$P.tmp.sed + done + fi +done + +if test ".$opt_keeptmpfiles" = "." ; then + for i in $tmp/$MK.*.tmp.htm $tmp/$MK.*.tmp.sed \ + $tmp/$MK.*.tmp.css $tmp/$MK.*.tmp.txt + do test -f "$i" && rm "$i" + done +fi +if test ".$tmp_dir_was_created" != ".no" ; then rm $tmp/* ; rmdir $tmp ; fi +exit 0 diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/mmapped.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/mmapped.htm new file mode 100644 index 00000000000..9ac35ece988 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/mmapped.htm @@ -0,0 +1,226 @@ +<section> <date> 2005 </date> +<H2> zzip/mmapped </H2> zip access for mmapped views + +<BLOCKQUOTE> + These routines are fully independent from the traditional zzip + implementation. 
They assume a readonly mmapped sharedmem block + representing a complete zip file. The functions show how to + parse the structure, find files and return a decoded bytestream. +</BLOCKQUOTE> + +<section> +<H3> zzip disk handle </H3> + +<P> + Other than with the <a href="fseeko.html">fseeko</a> alternative + interface there is no need to have an actual disk handle to the + zip archive. Instead you can use a bytewise copy of a file or + even use a mmapped view of a file. This is generally the fastest + way to get to the data contained in a zipped file. All it requires + is enough of virtual memory space but a desktop computer with a + a modern operating system will easily take care of that. +</P> + +<P> + The zzipmmapped library provides a number of calls to create a + disk handle representing a zip archive in virtual memory. Per + default we use the sys/mmap.h (or MappedView) functionality + of the operating system. The <code>zzip_disk_open</code> will + open a system file descriptor and try to <code>zzip_disk_mmap</code> + the complete zip content. When finished with the zip archive + call <code>zzip_disk_close</code> to release the mapped view + and all management data. +</P> + +<PRE> + ZZIP_DISK* zzip_disk_open(char* filename); + int zzip_disk_close(ZZIP_DISK* disk); + + ZZIP_DISK* zzip_disk_new(void); + ZZIP_DISK* zzip_disk_mmap(int fd); + int zzip_disk_munmap(ZZIP_DISK* disk); + int zzip_disk_init(ZZIP_DISK* disk, + char* buffer, zzip_size_t buflen); +</PRE> + +</section><section> +<H3> reading the central directory </H3> + +<P> + To get access to a zipped file, you need a pointer to an entry in the + mmapped zip disk known under the type <code>ZZIP_DISK_ENTRY</code>. + This is again modelled after the <code>DIR_ENTRY</code> type in being + a representation of a file name inside the zip central directory. To + get an initial zzip disk entry pointer, use <code>zzip_disk_findfirst</code>, + to move the pointer to the next entry use <code>zzip_disk_findnext</code>. +</P> +<PRE> + extern ZZIP_ENTRY* zzip_disk_findfirst(FILE* disk); + extern ZZIP_ENTRY* zzip_disk_findnext(ZZIP_ENTRY* entry); +</PRE> +<P> + These two calls will allow to walk all zip archive members in the + order listed in the zip central directory. To actually implement a + directory lister ("zzipdir"), you need to get the name string of the + zzip entry. This is not just a pointer: the zzip disk entry is not + null terminated actually. Therefore we have a helper function that + will <code>strdup</code> the entry name as a normal C string: +</P> +<PRE> + #include <zzip/mmapped.h> + void _zzip_dir(char* filename) + { + ZZIP_DISK* disk = zzip_disk_open (filename); + if (! disk) return disk; + for (ZZIP_DISK_ENTRY* entry = zzip_disk_findfirst (disk); + entry ; entry = zzip_disk_findnext (entry)) { + char* name = zzip_disk_entry_strdup_name (entry); + puts (name); free (name); + } + } +</PRE> + +</section><section> +<H3> find a zipped file </H3> + +<P> + The central directory walk can be used to find any file in the + zip archive. The <code>zzipfseeko</code> library however provides + two convenience functions that allow to jump directly to the + zip disk entry of a given name or pattern. You are free to use + the returned <code>ZZIP_DISK_ENTRY</code> pointer for later calls + that type. There is no need to free this pointer as it is really + a pointer into the mmapped area of the <code>ZZIP_DISK</code>. + But do not forget to free that one via <code>zzip_disk_close</code>. 
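+</P>
+<P>
+  As an illustration - and only as a hedged sketch, not verbatim library
+  code - the following lister loops over <code>zzip_disk_findmatch</code>
+  (its prototype follows right below) to print every entry that matches a
+  shell wildcard pattern. The function name is made up for this example,
+  and it assumes that a null "after" argument starts the walk at the first
+  entry and that a null compare function with zero flags selects the
+  posix <code>fnmatch</code> default:
+</P>
+<PRE>
+  #include <zzip/mmapped.h>
+  #include <stdio.h>
+  #include <stdlib.h>
+
+  void _zzip_globdir(char* zipname, char* pattern)
+  {
+      ZZIP_DISK* disk = zzip_disk_open (zipname);
+      if (! disk) return;
+      ZZIP_DISK_ENTRY* entry = 0;
+      /* null "after" = start at the first entry (assumption), then leap forward */
+      while ((entry = zzip_disk_findmatch (disk, pattern, entry, 0, 0)))
+      {
+          char* name = zzip_disk_entry_strdup_name (entry); /* malloc'ed copy */
+          puts (name); free (name);
+      }
+      zzip_disk_close (disk); /* entry pointers die with the mapped view */
+  }
+</PRE>
+<P>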
+</P> +<PRE> + ZZIP_DISK_ENTRY* zzip_disk_findfile(ZZIP_DISK* disk, char* filename, + ZZIP_DISK_ENTRY* after, + zzip_strcmp_fn_t compare); + + ZZIP_DISK_ENTRY* zzip_disk_findmatch(ZZIP_DISK* disk, char* filespec, + ZZIP_ENTRY* after, + zzip_fnmatch_fn_t compare, int flags); +</PRE> +<P> + In general only the first two arguments are non-null pointing to the + zip disk handle and the file name to look for. The "after" argument + is an old value and allows you to walk the zip directory similar to + <code>zzip_disk_entry_findnext</code> but actually leaping forward. The + compare function can be used for alternate match behavior: the default + of <code>strcmp</code> might be changed to <code>strncmp</code> for + a caseless match. The "flags" of the second call are forwarded to the + posix <code>fnmatch</code> which we use as the default function. +</P> +<P> + If you do know a specific zzipped filename then you can just use + <code>zzip_disk_entry_findfile</code> and supply the return value to + <code>zzip_disk_entry_fopen</code>. There is a convenience function + <code>zzip_disk_fopen</code> that will do just that and therefore + only requires a disk handle and a filename to find-n-open. +</P> +<PRE> + #include <zzip/mmapped.h> + + int _zzip_read(ZZIP_DISK* disk, char* filename, void* buffer, int bytes) + { + ZZIP_DISK_FILE* file = zzip_disk_fopen (disk, filename); + if (! file) return -1; + int bytes = zzip_disk_fread (buffer, 1, bytes, file); + zzip_disk_fclose (file); + return bytes; + } +</PRE> + +</section><section> +<H3> reading bytes </H3> + +<P> + The example has shown already how to read some bytes off the head of + a zipped file. In general the zzipmmapped api is used to replace a few + system file routines that access a file. For that purpose we provide three + functions that look very similar to the stdio functions of + <code>fopen()</code>, <code>fread()</code> and <code>fclose()</code>. + These work on an active file descriptor of type <code>ZZIP_DISK_FILE</code>. +</P> + +<PRE> + ZZIP_DISK_FILE* zzip_disk_entry_fopen (ZZIP_DISK* disk, + ZZIP_DISK_ENTRY* entry); + ZZIP_DISK_FILE* zzip_disk_fopen (ZZIP_DISK* disk, char* filename); + zzip_size_t zzip_disk_fread (void* ptr, + zzip_size_t sized, zzip_size_t nmemb, + ZZIP_DISK_FILE* file); + int zzip_disk_fclose (ZZIP_DISK_FILE* file); + int zzip_disk_feof (ZZIP_DISK_FILE* file); +</PRE> + +<P> + In all of the examples you need to remember that you provide a single + <code>ZZIP_DISK</code> descriptor for a memory block which is in reality + a virtual filesystem on its own. Per default filenames are matched case + sensitive also on win32 systems. The findnext function will walk all + files on the zip virtual filesystem table and return a name entry + with the full pathname, i.e. including any directory names to the + root of the zip disk <code>FILE</code>. +</P> + +</section><section> +<H3> ZZIP_DISK_ENTRY inspection </H3> + +<P> + The <code>ZZIP_DISK_FILE</code> is a special file descriptor handle + of the <code>zzipmmapped</code> library - but the + <code>ZZIP_DISK_ENTRY</code> is not so special. It is actually a pointer + directly into the zip central directory managed by <code>ZZIP_DISK</code>. + While <code>zzip/mmapped.h</code> will not reveal the structure on its own, + you can include <code>zzip/format.h</code> to get access to the actual + structure content of a <code>ZZIP_DISK_ENTRY</code> by its definition +<br><b><code> struct zzip_disk_entry</code></b>. 
+</P> + +<P> + In reality however it is not a good idea to actually read the bytes + in the <code>zzip_disk_entry</code> structure unless you seriously know + the internals of a zip archive entry. That includes any byteswapping + needed on bigendian platforms. Instead you want to take advantage of + helper macros defined in <code>zzip/fetch.h</code>. These will take + care to convert any struct data member to the host native format. +</P> +<PRE> +extern uint16_t zzip_disk_entry_get_flags( zzip_disk_entry* entry); +extern uint16_t zzip_disk_entry_get_compr( zzip_disk_entry* entry); +extern uint32_t zzip_disk_entry_get_crc32( zzip_disk_entry* entry); + +extern zzip_size_t zzip_disk_entry_csize( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_usize( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_namlen( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_extras( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_comment( zzip_disk_entry* entry); +extern int zzip_disk_entry_diskstart( zzip_disk_entry* entry); +extern int zzip_disk_entry_filetype( zzip_disk_entry* entry); +extern int zzip_disk_entry_filemode( zzip_disk_entry* entry); + +extern zzip_off_t zzip_disk_entry_fileoffset( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_sizeof_tail( zzip_disk_entry* entry); +extern zzip_size_t zzip_disk_entry_sizeto_end( zzip_disk_entry* entry); +extern char* zzip_disk_entry_skipto_end( zzip_disk_entry* entry); +</PRE> + +<P> + Additionally the <code>zzipmmapped</code> library has two additional + functions that can convert a mmapped disk entry to (a) the local + file header of a compressed file and (b) the start of the data area + of the compressed file. These are used internally upon opening of + a disk entry but they may be useful too for direct inspection of the + zip data area in special applications. +</P> +<PRE> + char* zzip_disk_entry_to_data(ZZIP_DISK* disk, + struct zzip_disk_entry* entry); + struct zzip_file_header* + zzip_disk_entry_to_file_header(ZZIP_DISK* disk, + struct zzip_disk_entry* entry); +</PRE> + +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/notes.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/notes.htm new file mode 100644 index 00000000000..4b6d78177c3 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/notes.htm @@ -0,0 +1,57 @@ +<H2> Some Notes </H2> + +<dl> +<dt> ClamAV thinks it decompress zip method 9 files </dt> +<dd> + In May 2005 the ClamAV Development team detected a problem + with zlib - which knows an undocumented inflate64 that maps + to the zip compression method number 9 (implemented in files + inftree9 and infback9). However the support for that method + is not being compiled into libz.so by default, so you have + to recompile zlib to get support for it - zziplib will not do + that on its own, nor will it check the actual availability. + So, zziplib users might be handicapped if the meet a zip + compressed with that method 9, at best they will get an + error code back to the application but that is mostly not + intuitive enough to point to the actual problem related to + the last breed of zip/zlib compression methods. Effectivly + you are restricted to methods 0 and 8. +</dd> +<dt> Ogre3D + Win64/AMD64 + zziplib = ZZIP_DIR_READ error </dt> +<dd> + As of December 2005 the thread at + http://www.ogre3d.org/phpBB2/viewtopic.php?p=110707#110667 + points to a problem in the 64bit variant of zziplib with + some zip archives. 
The actual source of the problem is + unknown. The Ogre project uses an internal copy of the + zziplib library being statically linked. The latest + zziplib version has been tested on a number of 64bit + system in the meantime - however those are 64bit Unix + variants (LP64). While Win32 (LP32) works okay there + might be some buglet left for Win64 (LLP64) that I can't + track down (system N/A to me) in the near future. +</dd> +<dt> PHP5 does not know --with-zip </dt> +<dd> + As of January 2005 I was hinted that some of the PHP + problems might see a new show. In the past there were + numerous queries about installation of zziplib to be + useful as the PHP-ZIP module but I could not answer them. + (I don't use PHP for real work). The standard php4 + docs were obviously insufficient with saying to just + configure --with-zip... but now even that option is + gone and there is no hint anywhere telling of the + replacement. +</dd> +<dt> sourcebase.sf using modified zziplib code </dt> +<dd> + In May 2003 I did notice that the sourcebase.sf + project - providing a generic virtual filesystem + for applications - has been reusing the zziplib + code. However the code has been modified in a + number of places and it was (at first) placed + under real GPL. That library was supposed to be + put under the hood of the GNOME desktop but at + the moment it does not seem to go nowhere further. +</dd> +</dl> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/referentials.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/referentials.htm new file mode 100644 index 00000000000..8519ed3cd99 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/referentials.htm @@ -0,0 +1,68 @@ +<section><date> 15. July 2002 </date> +<h2> ZZIP Referentials </h2> Where is it used. + +<!--noborder--> + +<section> +<h3> GPL Rant </h3> + +<BLOCKQUOTE> + The GPL/LGPL do not have a clause like MPL and others to notify the + original author about certain usages of the library - that's a pity + since I do not get to know many of the areas where zziplib has come + to be used. I can only ask you to send me an e-mail, so I can put a + link from here to your project. Within thousands of downloads less + than a handful of people wrote to me - mostly for having found a bug + or having a feature request. Be nice, and write even if you have had + successfully implanted zziplib in your project... I love to hear that ;-) +</BLOCKQUOTE> + +</section><section> +<h3> opensource games </h3> + +<BLOCKQUOTE> + Although the library has not been written focusing on game data, + it has it greatest success just there. The SDL-rwops example did + further it by great amounts, people just like it to have the + thousand of small bitmaps to be assembled into one big dat file, + and put the AI scripts just next to them. +</BLOCKQUOTE> + +<ul><li> Underworld Adventures: <br> +<a href="http://uwadv.sourceforge.net"> + http://uwadv.sourceforge.net</a> +</li><li> Ogre3D Game Development <br /> +<a href="http://www.ogre3d.org/"> + http://www.ogre3d.org/</a> +</li></ul> + +</section><section> +<h3> opensource apps/libs </h3> + +<BLOCKQUOTE> + Here the most important feature has been the smalls size of this + library and the possible to use its autoconf script and even for + those who don't, it is easy to make a custom configuration. The + source code is easy to understand and therefore to customize for + the needs of the app/lib that wants to use the functionality. 
+</BLOCKQUOTE> + +<ul><li> PHP ZIP Module <br> +<a href="http://www.php.net/manual/en/ref.zip.php"> + http://www.php.net/manual/en/ref.zip.php</a> +</li></ul> + +</section><section> +<h3> commercial usage </h3> + +<BLOCKQUOTE> + For commercial usage, you can bind many small files into a zip + file for easier handling. Obfuscation and io-wrapping help + greatly to implant it in areas even far from posix-io grounds. +</BLOCKQUOTE> + +<ul><li> Media Portal Backside <br> +<a href="http://www.appwares.com"> + http://www.appwares.com</a> +</li></ul> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/sdocbook.css b/Build/source/libs/zziplib/zziplib-0.13.60/docs/sdocbook.css new file mode 100644 index 00000000000..95a65fb2d24 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/sdocbook.css @@ -0,0 +1,679 @@ +abbrev +{ + display: inline; +} + +abstract +{ + margin-left: 0.5in; + margin-right: 0.5in; + display: inline; +} + +acronym +{ + display: inline; +} + +address +{ + white-space: pre; + display: block; +} + +anchor +{ + display: inline; +} + +appendix +{ + display: block; +} + +articleinfo +{ + display: none; +} + +article +{ + display: block; +} + +audiodata +{ + display: none; +} + +audioobject +{ + display: none; +} + +author +{ + display: inline; +} + +authorgroup +{ + display: inline; +} + +authorinitials +{ + display: inline; +} + +bibliomisc +{ + display: inline; +} + +bibliomset +{ + display: inline; +} + +biblioset +{ + display: inline; +} + +blockquote +{ + display: block; + margin-left: 0.5in; + margin-right: 0.5in; +} + +caption +{ + display: none; +} + +citetitle +{ + display: inline; + font-style: italic; +} + +city +{ + display: inline; +} + +colspec +{ + display: none; +} + +command +{ + display: inline; + font-style: italic; +} + +computeroutput +{ + display: inline; + font-family: monospace; +} + +copyright +{ + display: inline; +} + +corpauthor +{ + display: inline; +} + +country +{ + display: inline; +} + +date +{ + display: inline; +} + +articleinfo +{ + display: none; +} + +appendixinfo +{ + display: none; +} + +edition +{ + display: inline; +} + +editor +{ + display: inline; +} + +email +{ + display: inline; + font-style: italic; +} + +emphasis +{ + display: inline; + font-style: italic; +} + +entry +{ + display: table-cell; +} + +example +{ + display: block; +} + +fax +{ + display: inline; +} + +figure +{ + display: block; +} + +filename +{ + display: inline; + font-style: italic; +} + +firstname +{ + display: inline; +} + +footnote +{ + display: inline; +} + +holder +{ + display: inline; +} + +honorific +{ + display: inline; +} + +imagedata +{ + display: inline; +} + +imageobject +{ + display: inline; +} + +informaltable +{ + display: block; +} + +inlinemediaobject +{ + display: inline; +} + +isbn +{ + display: inline; +} + +issn +{ + display: inline; +} + +issuenum +{ + display: inline; +} + +itemizedlist +{ + display: block; + list-style-type: disc; +} + +keyword +{ + display: inline; +} + +keywordset +{ + display: inline; +} + +legalnotice +{ + display: inline; +} + +lineage +{ + display: inline; +} + +lineannotation +{ + display: inline; +} + +link +{ + display: inline; +} + +listitem +{ + display: list-item; +} + +literal +{ + display: inline; +} + +literallayout +{ + display: inline; +} + +mediaobject +{ + display: inline; +} + +member +{ + display: inline; +} + +note +{ + display: inline; +} + +objectinfo +{ + display: inline; +} + +option +{ + display: inline; +} + +orderedlist +{ + display: block; + 
list-style-type: decimal; +} + +otheraddr +{ + display: inline; +} + +othercredit +{ + display: inline; +} + +othername +{ + display: inline; +} + +pagenums +{ + display: inline; +} + +para +{ + display: block; +} + +phone +{ + display: inline; +} + +phrase +{ + display: inline; +} + +pob +{ + display: inline; +} + +postcode +{ + display: inline; +} + +printhistory +{ + display: inline; +} + +procedure +{ + display: inline; +} + +programlisting +{ + display: inline; +} + +pubdate +{ + display: inline; +} + +publisher +{ + display: inline; +} + +publishername +{ + display: inline; +} + +quote +{ + display: inline; +} + +replaceable +{ + display: inline; +} + +revhistory +{ + display: inline; +} + +revision +{ + display: inline; +} + +revnumber +{ + display: inline; +} + +revremark +{ + display: inline; +} + +row +{ + display: table-row; +} + +section +{ + display: block; +} + +sectioninfo +{ + display: none; +} + +sidebar +{ + display: block; +} + +simplelist +{ + display: inline; +} + +state +{ + display: inline; +} + +step +{ + display: inline; +} + +street +{ + display: inline; +} + +substeps +{ + display: inline; +} + +subtitle +{ + display: inline; +} + +surname +{ + display: inline; +} + +systemitem +{ + display: inline; +} + +tbody +{ + display: table-row-group; +} + +term +{ + display: inline; +} + +textobject +{ + display: inline; +} + +tgroup +{ + display: table; +} + +thead +{ + display: table-row-group; +} + +title +{ + display: block; +} + +article title +{ + font-size: 36pt; + font-weight: bold; + display: block; +} + +section title +{ + font-size: 24pt; + font-weight: bold; + display: block; +} + +section section title +{ + font-size: 20pt; + font-weight: bold; + display: block; +} + +section section section title +{ + font-size: 18pt; + font-weight: bold; + display: block; +} + +section section section section title +{ + font-size: 16pt; + font-weight: bold; + display: block; +} + +section section section section section title +{ + font-size: 14pt; + font-weight: bold; + display: block; +} + +section section section section section section title +{ + font-size: 12pt; + font-weight: bold; + display: block; +} + +appendix title +{ + font-size: 24pt; + font-weight: bold; + display: block; +} + +appendix section title +{ + font-size: 22pt; + font-weight: bold; + display: block; +} + +appendix section section title +{ + font-size: 18pt; + font-weight: bold; + display: block; +} + +appendix section section section title +{ + font-size: 16pt; + font-weight: bold; + display: block; +} + +appendix section section section section title +{ + font-size: 14pt; + font-weight: bold; + display: block; +} + +appendix section section section section section title +{ + font-size: 12pt; + font-weight: bold; + display: block; +} + +titleabbrev +{ + display: none; +} + +trademark +{ + display: inline; +} + +ulink +{ + display: inline; +} + +userinput +{ + display: inline; +} + +variablelist +{ + display: inline; +} + +varlistentry +{ + display: inline; +} + +videodata +{ + display: inline; +} + +videoobject +{ + display: inline; +} + +volumenum +{ + display: inline; +} + +xref +{ + display: inline; +} + +year +{ + display: inline; +} + diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/sfx-make.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/sfx-make.htm new file mode 100644 index 00000000000..5e092e47a11 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/sfx-make.htm @@ -0,0 +1,167 @@ +<section> <date> February 2003 </date> +<h2> SFX-Make </h2> combining an EXE with 
a ZIP archive + +<!--border--> + +<section> +<h3> How To </h3> +<P> + In this section we walk you through the steps of combining an EXE + with a ZIP archive. The basic scheme goes like this: the final + file will have an EXE starting at offset null, followed by the + data entries of a ZIP archive. The <em>last</em> part of the ZIP + archive is the ZIP central-directory which ends at the end of the file. +</P> +<P> + The basic problem lies in the fact that the zip central-directory + entries reference their data section with an offset from the + start-of-file so that you can not just append a zip archive after + an exe stub. The trick goes like adding the EXE as the first data + part of the ZIP archive - so that the offsets for each entry will + be correct when we are finished with it. +</P> +<P> + Again, one can not just use a zip tool to put the EXE as the first + part - since each data part is preceded with an infoblock of a + few bytes. The data of the first data part will therefore not + start at offset zero. We solve this problem with moving the data + a few bytes later - so that the final file will not start with a + "PK" magic (from the zip info header) but with an "MZ" or "ELF" + magic (from the exe info header). +</P> + +</section><section> +<h3> Step 1: Creating The Zip Combination </h3> +<P> + Choose your exe file (example.exe) and wrap that file into a + zip container - ensure that the zip tool does <em>not</em> + use any compression algorithm on the data. This is usually + done with saying "zero compression level" as an option to + the zip tool. Also note that <em>no other</em> file is + wrapped as some zip tools reorder the entries from the + order on the command line to alphabetic order. Here is an + example with infozip's `zip` (e.g. on linux): + <pre> zip -0 -j example.zip example.exe </pre> +</P> +<P> + There is no zip tool that would reorder the data entries in + an existing zip archive. This mode is used here - the real + compressed data entries can now be added to the existing + zip archive that currently just wraps the exe part. With + specifying maximum compression ("-9" = compression level 9) + and throwing away any subdirectory part ("-j" = junk path) + it might look like + <pre> zip -9 -j example.zip data/* </pre> +</P> +<P> + Now we need to move the exe part by a few bytes to the + real start of the file. This can be done as easily as + writing the exe file again on to the start of the file. + However, one can not just use a shell-direction or + copy-operation since that would truncate (!!) the zip + file to the length of the exe part. The overwrite-operation + must be done without truncation. For maximum OS independence + the zziplib ships with a little tool in "test/zzipsetstub.c" + that you can reuse for this task: + <pre> zzipsetstub example.zip example.exe </pre> +</P> +<P> + This is it - the `unzip` tool can still access all data + entries but the first EXE - the first EXE will be listed + in the central-directory of the ZIP archive but one can + not extract the data since the "PK" magic at offset null + has been overwritten with the EXE magic. The data of all + the other entries can still be extracted with a normal + `unzip` tool - or any tool from the zziplib be used for + the same task. +</P> + +</section><section> +<h3> Step 2: Accessing The Data From The Program </h3> + +<P> + There is an example in test/zzipself.c that show how to do + it. The OS will provide each program with its own name in + argv[0] of the main() routine. This program file (!!) 
is + also the zip archive that carries the compressed data + entries along. Therefore, we can just issue a zzip_opendir + on argv[0] to access the zip central-directory. +</P> +<P> + Likewise one can open a file within it by just prepending + the string argv[0] to the filename stem, i.e. you could + do like + <pre> ZZIP_FILE* f = zzip_fopen ("example.exe/start.gif", "rbi")</pre> +</P><P> + however you are advised to use the _ext_io cousin to be + platform independet - different Operating Systems use + different file extensions for executables, it's not always + an ".exe". +</P> +<P> + Once the file is opened, the data can be zzip_fread or + passed through an SDL_rwops structure into the inner + parts of your program. +</P> + +</section><section> +<h3> Step 3: Using Obfuscation Along </h3> + +<P> + The next level uses obfuscatation on the data part of the + application. That way there is no visible data to be seen + from outside, it looks like it had been compiled right into + the C source part. One can furthermore confuse a possible + attacker with staticlinking the zziplib into the executable + (this is possible in a limited set of conditions). +</P> +<P> + The first pass is again in creating the zip - here we must + ensure that only the ZIP archive part is obfuscated but + the EXE part must be plain data so that the operationg + system can read and relocate it into main memory. Using + xor-obfuscation this is easy - applying xor twice will + yield the original data. The steps look like this now: + <pre> + zzipxorcopy example.exe example.xor + zip -0 application.zip example.xor + zip -9 application.zip data/* + zzipsetstub application.zip example.xor + zzipxorcopy application.zip application.exe + </pre> +</P> +<P> + In the second step the open-routine in your application + needs to be modified - there are quite some examples in + the zziplib that show you how to add an xor-read routine + and passing it in the "io"-part of an zzip_open_ext_io + routine (see zzipxorcat.c). + <pre> + static int xor_value = 0x55; + + static zzip_ssize_t xor_read (int f, void* p, zzip_size_t l) + { + zzip_ssize_t r = read(f, p, l); + zzip_ssize_t x; char* q; for (x=0, q=p; x < r; x++) q[x] ^= xor_value; + return r; + } + + static struct zzip_plugin_io xor_handlers; + static zzip_strings_t xor_fileext[] = { ".exe", ".EXE", "", 0 }; + + main(...) + { + zzip_init_io (&xor_handlers, 0); xor_handlers.read = &xor_read; + + ZZIP_FILE* fp = zzip_open_ext_io (filename, + O_RDONLY|O_BINARY, ZZIP_CASELESS|ZZIP_ONLYZIP, + xor_fileext, &xor_handlers); + .... + </pre> +</P> +<P> + You may want to pick your own xor-value instead of the default 0x55, + the zziplib-shipped tool `zzipxorcopy` does know an option to just + set the xor-value with which to obfuscate the data. +</P> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zip-php.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zip-php.htm new file mode 100644 index 00000000000..04166673f38 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zip-php.htm @@ -0,0 +1,63 @@ +<H2> PHP-ZIP Installation </H2> + +<P> + There have been many problems about the installation of the php-zip + module. Since Mid of 2006 the php-zip module does not require the + zziplib anymore - it uses its own implementation (which is a clean + approach in a double sense - there are no source code comments). + So, the following might possibly be only relevant for older + installations. 
+</P> + +<P> Chris Branch has been kind enough to jot down the points of a + successful php-zip installation sending it to me in May 2006. + I am quoting his text verbatim - again, I do not know whether + it works or not as I am not using any PHP for real work. +</P> + +<hr width="60%" align="center"> +<DL> +<DT> Software Packages </DT> +<dd><ul> +<li> Apache 2.4.21 (Linux) </li> +<li> PHP 4.3.9 </li> +<li> ZZIPLIB 0.10.82 </li> +<li> Special requirement: static linking +</ul></dd> +<DT> Setting up ZZIPLIB </DT> +<dd><ul> +<li> Extract files from zziplib-0.10.82.tar.bz2 to a new folder. </li> +<li> ./configure --enable-static </li> +<li> make </li> +<li> make install </li> +</ul></dd> +<DT> Rebuilding PHP to include ZIP support </DT> +<dd><ul> +<li> Modify PHP build file and add "--with-zip" +[no dir needed because default /usr/local on my machine] </li> +<li> make </li> +<li> make install </li> +</ul></dd> +<DT> Modifying the Apache Installation </DT> +<dd><ul> +<li> Change to Apache source code directory </li> +<li> Change to "src" subdirectory and edit existing Makefile. [***] +<br> Add: EXTRA_LIBS=/usr/local/lib/libzzip.a </li> +<li> Change back to parent folder (cd ..) </li> +<li> make </li> +<li> /usr/local/etc/httpd/bin/apachectl stop </li> +<li> make install </li> +<li> /usr/local/etc/httpd/bin/apachectl start </li> +</ul></dd> +</DL> + +<p><b>[***] Note:</b> +That step is the critical step that's not obvious. Apparently, +when you build PHP as a static library and include the "--with-zip" +option, it creates a static library for PHP with an external dependency on +zziplib.a. However, the Apache configure script and resulting Makefile +doesn't take this into account, so Apache won't link unless you hand-edit +the Apache Makefile. (Maybe there's a better place to make this change so +that you don't have to re-fix Apache's Makefile each time you run Apache's +./configure. However, I didn't spend the time to investigate that). +</p> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-api.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-api.htm new file mode 100644 index 00000000000..79bed1401d4 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-api.htm @@ -0,0 +1,30 @@ +<section> <date> 20. July 2002 </date> +<h2> ZZIP Programmers Interface </h2> The complete API description. + +<!--border--> + +<P> + The zzip library was orginally developped by Tomi Ollila as a + set of zip decoder routines. Guido Draheim did pick it up and + wrapped them under a call synopsis matching their posix + api calls. Therefore <code>zzip_open()</code> has the same + synopsis as <code>open(2)</code> but it can open zipped files. + Later the distinction was made between magic wrappers and apis + for direct access to zip archives and the files contained + in the archive. +</P> +<P> + These (three) functional apis have little helper functions + alongside including those to get the posix filehandle out of a + zzip handle and to get some attributes about the data handle + represented by a zzip handle. Plus checking for error codes + that may have been generated from internal checks. 
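+</P>
+<P>
+  A tiny sketch of the error checking part - hedged, since it assumes the
+  <code>zzip_error_t</code> code filled in by <code>zzip_dir_open</code>
+  can be handed to <code>zzip_strerror</code> for a readable message, and
+  the helper name is made up for this page:
+</P>
+<PRE>
+  #include <zzip/zzip.h>
+  #include <stdio.h>
+
+  ZZIP_DIR* open_or_complain(char* zipname)
+  {
+      zzip_error_t error = 0;
+      ZZIP_DIR* dir = zzip_dir_open (zipname, &error);
+      if (! dir)  /* the error code tells why the open failed */
+          fprintf (stderr, "%s: %s\n", zipname, zzip_strerror (error));
+      return dir; /* null on failure, the parsed central directory otherwise */
+  }
+</PRE>
+<P>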
+</P> + +<dl> +<dt> <a href="zzip-basics.html">Basics</a> </dt> +<dd> Magic Wrappers, Zip Archive Dir access, Zipped File access </dd> +<dt> <a href="zzip-extras.html">Extras</a> </dt> +<dd> ext/io init, StdC calls, Error defs, ReOpen, FileStat </dd> +</dl> +</section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-basics.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-basics.htm new file mode 100644 index 00000000000..0d0504710e7 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-basics.htm @@ -0,0 +1,163 @@ +<section> <date> 20. July 2002 </date> +<h2> ZZIP API Basics </h2> The open/close API description. + +<!--border--> + +<section> +<h3> Basics </h3> + +<P> + The naming schem of functions in this library follow a simple rule: + if you see a function with a <code>zzip_</code> prefix followed by + compact name representing otherwise a C library or posix function then + it is a magic wrapper that can automagically handle both real + files/directories or zip-contained files. This includes: +</P> +<table cellpadding=10 width=100%><tr><td><table width=100% border=1> + <tr><td width=50%> zzip_opendir </td><td width=50%> opendir </td></tr> + <tr><td width=50%> zzip_readdir </td><td width=50%> readdir </td></tr> + <tr><td width=50%> zzip_closedir </td><td width=50%> closedir </td></tr> + <tr><td width=50%> zzip_rewinddir </td><td width=50%> rewinddir </td></tr> + <tr><td width=50%> zzip_telldir </td><td width=50%> telldir </td></tr> + <tr><td width=50%> zzip_seekdir </td><td width=50%> seekdir </td></tr> +</table></td></tr></table> +<P> + The ZZIP_DIR handle can wrap both a real directory or a zip-file. + Note that you can not open a virtual directory <em>within</em> a + zip-file, the ZZIP_DIR is either a real DIR-handle of a real + directory or the reference of ZIP-file but never a DIR-handle + within a ZIP-file - there is no such schema of a SUB-DIR handle + implemented in this library. A ZZIP_DIR does actually represent + the central directory of a ZIP-file, so that each file entry in + this ZZIP-DIR can possibly have a subpath prepended. +</P> + +<P> + This form of magic has historic reasons as originally the + magic wrappers of this library were not meant to wrap a complete + subtree of a real file tree but only a single directory being + wrapped with into a zip-file and placed instead. Later proposals + and patches were coming in to support subtree wrapping by not + only making a split between the dir-part and file-part but + going recursivly up through all "/"-dirseparators of a filepath + given to <code>zzip_open</code> and looking for zip-file there. +</P> + +<P> + To open a zip-file unconditionally one should be using their + respective methods that would return a ZZIP_DIR handle being + the representant memory instance of a ZIP-DIR, the central + directory of a zip-file. From that ZZIP-DIR one can open a + compressed file entry which will be returned as a ZZIP_FILE + pointer. 
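+</P>
+<P>
+  Before those unconditional calls are listed, a short sketch of the magic
+  wrappers from the first table above may help. It is a hedged example:
+  the function name is made up, and it only assumes that
+  <code>zzip_readdir</code> hands back entries carrying a
+  <code>d_name</code> member much like the real <code>readdir</code> does:
+</P>
+<pre>
+  #include <zzip/zzip.h>
+  #include <stdio.h>
+
+  /* works for a real directory "data/" just as for a zip archive "data.zip" */
+  void list_magic_dir(char* dirname)
+  {
+      ZZIP_DIR* dir = zzip_opendir (dirname);
+      if (! dir) return;
+      ZZIP_DIRENT* entry;
+      while ((entry = zzip_readdir (dir)))
+          puts (entry->d_name);   /* possibly with a subpath prepended */
+      zzip_closedir (dir);
+  }
+</pre>
+<P>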
+</P> +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_dir_open </td> + <td width=50%> open a zip-file and parse the central directory + to a memory shadow</td></tr> + <tr><td width=50%> zzip_dir_close </td> + <td width=50%> close a zip-file and free the memory shadow</td></tr> + <tr><td width=50%> zzip_dir_fdopen </td> + <td width=50%> aquire the given posix-file and try to parse it + as a zip-file.</td></tr> + <tr><td width=50%> zzip_dir_read </td> + <td width=50%> return the next info entry of a zip-file's central + directory - this would include a possible subpath </td></tr> +</table></td></tr></table> + +<P> + To unconditionally access a zipped-file (as the counter-part of a + zip-file's directory) you should be using the functions having a + <code>zzip_file_</code> prefix which are the methods working on + ZZIP_FILE pointers directly and assuming those are references of + a zipped file with a ZZIP_DIR. +</P> +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_file_open </td> + <td width=50%> open a file within a zip and prepare a zlib + compressor for it - note the ZZIP_DIR argument, + multiple ZZIP_FILE's may share the same central + directory shadow.</td></tr> + <tr><td width=50%> zzip_file_close </td> + <td width=50%> close the handle of zippedfile + and free zlib compressor of it</td></tr> + <tr><td width=50%> zzip_file_read </td> + <td width=50%> decompress the next part of a compressed file + within a zip-file</td></tr> +</table></td></tr></table> +<P> + From here it is only a short step to the magic wrappers for + file-access - when being given a filepath to zzip_open then + the filepath is checked first for being possibly a real file + (we can often do that by a <code>stat</code> call) and if there is + a real file under that name then the returned ZZIP_FILE is + nothing more than a wrapper around a file-descriptor of the + underlying operating system. Any other calls like zzip_read + will see the realfd-flag in the ZZIP_FILE and forward the + execution to the read() function of the underlying operating system. +</P> + +<P> + However if that fails then the filepath is cut at last directory + separator, i.e. a filepath of "this/test/README" is cut into the + dir-part "this/test" and a file-part "README". Then the possible + zip-extensions are attached (".zip" and ".ZIP") and we check if + there is a real file under that name. If a file "this/test.zip" + does exist then it is given to zzip_dir_open which will create + a ZZIP_DIR instance of it, and when that was successul (so it + was in zip-format) then we call zzip_file_open which will see + two arguments - the just opened ZZIP_DIR and the file-part. The + resulting ZZIP_FILE has its own copy of a ZZIP_DIR, so if you + open multiple files from the same zip-file than you will also + have multiple in-memory copies of the zip's central directory + whereas otherwise multiple ZZIP_FILE's may share a common + ZZIP_DIR when being opened with zzip_file_open directly - the + zzip_file_open's first argument is the ZZIP_DIR and the second + one the file-part to be looked up within that zip-directory. 
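+</P>
+<P>
+  To make that sharing visible, here is a hedged sketch (file names and the
+  function name are made up for the example) that opens one
+  <code>ZZIP_DIR</code> and then two <code>ZZIP_FILE</code> handles from it,
+  so both zipped files reuse the same in-memory central directory:
+</P>
+<pre>
+  #include <zzip/zzip.h>
+  #include <fcntl.h>
+
+  void read_two_shared(char* buf1, int len1, char* buf2, int len2)
+  {
+      zzip_error_t error = 0;
+      ZZIP_DIR* dir = zzip_dir_open ("this/test.zip", &error);
+      if (! dir) return;
+      /* both entries share the one central directory shadow held by dir */
+      ZZIP_FILE* readme  = zzip_file_open (dir, "README",  O_RDONLY);
+      ZZIP_FILE* changes = zzip_file_open (dir, "CHANGES", O_RDONLY);
+      if (readme)  { zzip_file_read (readme,  buf1, len1); zzip_file_close (readme); }
+      if (changes) { zzip_file_read (changes, buf2, len2); zzip_file_close (changes); }
+      zzip_dir_close (dir);   /* free the shared central directory last */
+  }
+</pre>
+<P>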
+</P> + +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_open </td> + <td width=50%> try the file-path as a real-file, and if not + there, look for the existance of ZZIP_DIR by + applying extensions, and open the file + contained within that one.</td></tr> + <tr><td width=50%> zzip_close </td> + <td width=50%> if the ZZIP_FILE wraps a real-file, then call + read(), otherwise call zzip_file_read() </td></tr> + <tr><td width=50%> zzip_close </td> + <td width=50%> if the ZZIP_FILE wraps a real-file, then call + close(), otherwise call zzip_file_close() </td></tr> +</table></td></tr></table> + +<P> + Up to here we have the original functionality of the zziplib + when I (Guido Draheim) created the magic functions around the work from + Tomi Ollila who wrote the routines to read and decompress files from + a zip archive - unlike other libraries it was quite readable and + intelligible source code (after many changes there is not much + left of the original zip08x source code but that's another story). + Later however some request and proposals and patches were coming in. +</P> + +<P> + Among the first extensions was the recursive zzip_open magic. In + the first instance, the library did just do as described above: + a file-path of "this/test/README" might be a zip-file known as + "this/test.zip" containing a compressed file "README". But if + there is neither a real file "this/test/README" and no real + zip-file "this/test.zip" then the call would have failed but + know the zzip_open call will recursivly check the parent + directories - so it can now find a zip-file "this.zip" which + contains a file-part "test/README". +</P> + +<P> + This dissolves the original meaning of a ZZIP_DIR and it has lead + to some confusion later on - you can not create a DIRENT-like handle + for "this/test/" being within a "test.zip" file. And actually, I did + never see a reason to implement it so far (open "this.zip" and set + an initial subpath of "test" and let zzip_readdir skip all entries + that do not start with "test/"). This is left for excercie ;-) +</P> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-crypt.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-crypt.htm new file mode 100644 index 00000000000..9d55ba97215 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-crypt.htm @@ -0,0 +1,92 @@ +<section> <date> 15. July 2002 </date> +<h2> ZIP Std Encryption </h2> Standard Zip Encryption is Weak! + +<!--border--> + +<section> +<h3> Some rationale </h3> + +<P> + Some people might ask why not adding standard zip-encryption. Well, + first of all the standard zip-encryption has not been strong enough + for modern computers, and there are hacker tools that even a + half-literate computer-user can use to crack the password of a + zip-archive. In other words: <b> every encrypted zip file can be + cracked using freely downloadable helper tools </b>. That's because + standard zip encryption is weak regarding modern personal computer + power. Furthermore, adding <em>real</em> encryption is a heavy weight + that many people do not need, see the last argument for seeing the + standard one is useless anyway, and adding a non-standard one + should not be the case of the standard zziplib either, ye know. +</P><P> + On the other hand, obfuscation is a means to fear off half-literates + just as well - there are no <em>premade</em> tools for the obfuscation you + can invent from the xor examples. 
And a hacker that can de-obfuscate + such a dat-file is able to dissassemble your program as well - just to + remind you that the disassembly of a program will reveal the decryption + routine <em>and</em> the decryption key, even for a heavyweight crypt + algorithm. Although there is a difference, it just ranges on about times + and exprience, not magnitudes. Remember the old saying: you can irritate + some people for some time but not irritate all people for all the time. + As for encryption of artwork and AI scripts in games and applications, + just keep in mind that the final recipient has the decryption key on + his system anyway, just obfuscated. So each such encryption is nothing + more than just a clever form of obfuscation, nothing mathematical strong. +</P><P> + Some other people might ask why to obfuscate anyway. Well, the reason + is theft. Even people who write opensource free software generally + like to get some reward for what they do, some fame or atleast some + sweet dream to have helped the world go a bit easier in the future. + As for program text this is quite natural for the programmers who + pick up some code from somewhere else - it happens that most of them + have gone through some formation and they know how hard it is to get + even some lines of code out of your brain. This is not the case for + some artwork and AI parameters, people do not have much respect for + those - they just pick it up, put it under their umbrella, and + that's it - they even claim they could have done that themselves, + and in most cases it is that they never have been really trying to + do it and think of it as being comparable to that action-art they've + seen on TV. +</P><P> + Just be sure that there is nothing wrong with obfuscating + things for a binary distribution of your program even for the + opensource case - the program text itself is an obfuscation in its + source form when being compiled into cpu instructions. Still, the + interested people can get hold of the source code since you provide + it somewhere and actually the original programmers like to hear + from literate people who could help with modifying the project. The + same is true for you artwork and AI scripts, the interested people + can still see them in the opensource project material, but only + those will look who dare to, not just the halfwit next door. +</P><P> + Well, you do not need to that on the other hand - ID software has + shown that it can be very helpful since people will start to + write new maps and new bots, pack them and publish them. An open + data format is a form of attraction for people who can use a + graphics program and an editor but who do not know how to program. + And if you use obfuscation within an opensource program, it is + surely enought to just use the xor-format presented here, so that + it easy for third people to get involved if they want to, they + just have to rewrite their new datapacks with zzxorcopy, and + that's it. +</P><P> + As for the non-opensource projects, be aware that there are + some ways to even staticlink the zziplib into your project, so + you can even hide that you used zip tools to create your dat files. + This is well enough for anyone to do - as soon as a hacker will + get to the point to notice you used a zip format, he would have + had found any other deobfusation or decryption routine as well. + If you are frightened, just encrypt the executable with tools + you bought from somewhere else. 
On the other hand, should there + be problems or bugs, you have an easier time to find them when + they could be caused by your dat entries, and it is again easy + to send a fixup file to your clients, since the command line + tools are just a breeze compared with some other anti-hacking + tools you'll find on the market. +</P><P> + Well, hope this is enough rationale to tell you that I do not + see a need to implement anything more than obfuscation within + zziplib - if you need real encryption, use real encryption + software and its fileformat that supports it, not zip files. +</P> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-cryptoid.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-cryptoid.htm new file mode 100644 index 00000000000..ee6b120d7f9 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-cryptoid.htm @@ -0,0 +1,106 @@ +<section> <date> 11. May 2004 </date> +<h2> ZIP Ext Encryption </h2> ext/io used for cryptoid plugins + +<!--border--> + +<section> +<h3> Stronger Obfuscation For ZZip </h3> + +<P> + Some people feel that a simple bytewise xor is not strong enough + as an obfuscation for the data. There we have the question how to + implant a stronger obfuscation routine to protect that data of an + application from artwork theft. Perhaps there is even the idea to + use an obfuscation in the range of a real crypt routine - in which + case I want to recommend strongly to read the + <a href="zzip-crypt.html"> reasoning page </a> why it can not be + real encryption and that the resulting obfuscation has an upper + limit being <em>lower</em> than the crypt routine complexity. +</P> + +<P> + After reminding you of this fact we can go at evaluationg how to + implant a stronger obfusction routine to protect your data. The + ext/io feature uses a callback routine "read" that must read a + block of the given size - for the obfuscation case it will call + the "read()" function of the underlying operation system, and + the obfuscated block will be deobfuscated before returning it to + the caller. +</P> + +<P> + In this mechanism there is not asseration at which file-offset + the ext/io-read() callback is triggered. That is the reason we + have shown obfuscation with bytewise xor-key example - formally + this is using obfuscation blocks of 8bit width being aligned + on 8bit boundaries in the data file, and our decryption stream + is stateless being the same for each obfuscation block (of 8bit + width). +</P> +<P> + In order for a stronger obfuscation we have to break those + limitations which are directly derived from the natural way + of the handling of files by a contemporary operating system. + This is triggered as the call synopsis of the ext/io read() + callback matches <em>exactly</em> the one of posix, so that + one can use the posix read() function reference as the default + for ensuring the most minimal overhead in accessing non-obfuscated + zip files. +<br><small>And btw, the abbreviation "posix" stands for + "Portable Open System in Unix".</small> +</P> + +<P> + The trick we show here: the first argument of the ext/io read + callback is the file descriptor of the underlying operationg + system. While we can not add another argument to the ext/io + read call we can pick up additional information with the help + of that file descriptor id being globally unique even across + multiple threads. 
One solution would make the application map + that descriptor id to a special argument but this is often too + much overhead: the current file position is enough. +</P> +<P> + The current file position is managed by the operation system + via the file descriptor table. There is a function call to + map a file descriptor to the current read position offset + usually named "tell(fd)". Since this call is not mandated by + posix, you can emulate it with the posix lseek() call which + returns the resulting offset after the operation was performed, + so we just seek by a zero offset: <br><code> + <> <> <> <> #define tell(fd) lseek(fd,0,SEEK_CUR) +</code> +</P> + +<P> + That file offset is measured from the start of the zip archive, + not per each zipped file. Remind yourself of that fact when + creating your own "zzobfuscate.exe" which should work on the + zip archive and not per file before zipping. That is a difference + over normal zip archives where the user can atleast recognized the + dat file as a zip archive and see a list of files contained in the + archive, atleast their names and data start offset. +</P> +<P> + Now, let's use the file read offset to break the blocking + limitations of 8bit/8bit to a larger xor-key. In our example + we expand to a 32bit/32bit xor-key giving a search space of + 4<>billion keys instead of the just 256<>keys in 8bit blocking. + That is simply done by a static 4<>byte xor-key sequence and using + modulo operations for alignment. For the 2^X cases any modulo + operations shrink to a set of ultra-fast bitwise-and operations. +</P> + +<pre> + static char xor_value[4] = { 0x55, 0x63, 0x27, 0x31 }; + static zzip_ssize_t xor_read (int f, void* p, zzip_size_t l) + { + zzip_off_t y = tell(f); + zzip_size_t r = read(f, p, l); + zzip_size_t x; char* q = p; + for (x=0; x < r; x++) q[x] ^= xor_value[(y+x)&3]; + return r; + } +</pre> + +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-extio.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-extio.htm new file mode 100644 index 00000000000..c7c47f1a37d --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-extio.htm @@ -0,0 +1,188 @@ +<section> <date>15. July 2002 </date> +<h2> ZZIP-EXT/IO </h2> Customizing the file access + +<!--border--> + +<section> +<h3> The EXT/IO calls </h3> + +<P> + There were quite some requests from game developers and graphics-apps + developers who wanted various extensions to be included into the + <a href="zziplib.html">zziplib library</a>, but most of them were + only of specific usage. After some discussions we came up with a + model to customize the <a href="zziplib.html">zziplib library</a> + calls in a number of ways - and adding two or three arguments to + the zzip_* calls. The standard <a href="zziplib.html">zziplib library</a> + will actually call these *_ext_io functions with these extra arguments + to be set to zero. +</P><P> + The EXT feature describes a way to customize the extensions used in + the magic wrapper to find a .ZIP file. It turned out that there are + quite a number of applications that did chose the zip file format as + their native document format and where just the file extension had + been changed. This includes file types like Quake3 ".PK3" files from + ID Software, the Java library files called ".JAR", and lately the + OpenOffice-6 (resp. StarOffice-6) documents which carry xml-files along. 
+ Just build a zero-termined string-list of file-extensions and submit it + to the _ext_io calls to let the <a href="zziplib.html">zziplib</a> find + parts of those zip-documents automagically. +</P><P> + In quite some of these cases, it is very benefical to make use of the + o_modes functionality that allows to submit extra bit-options into + the <a href="zziplib.html">zziplib</a> - this includes options like + <code>ZZIP_PREFERZIP</code> or even <code>ZZIP_ONLYZIP</code> which + modifies the default behaviour of looking for real files first instead + of some within a zipped directory tree. Other bit-options include + <code>ZZIP_CASELESS</code> to imitate win32-like filematching for a + zipped filetree. +</P><P> + Other wishes on <a href="zziplib.html">zziplib</a> circulated around + <a href="zzip-xor.html">obfuscation</a> or access to zip-files wrapped + in other data areas including encrpyted resources from other applications. + This has been adressed with the IO-handlers that you can explicitly + submit to the *_ext_io functions - the default will be posix-IO + open/read/write and seek/tell. An application using + <a href="zziplib.html">zziplib</a> can divert these to its own set of + these calls - and it only needs to declare them on opening a zipped file. +</P> + +</section><section> +<h3> The EXT stringlist </h3> + +<P> + Declaring an EXT stringlist is very simple as it is simply a + list of strings, the <a href="zziplib.html">zziplib</a> provides + you with a double-const <code>zzip_strings_t</code> type to help + you move a global declaration into the writeonly segment of your + app - it turned out that about all developers wanted just some + extensions on the default and they were fine with having them + global-const for their application, nothing like dynamically + modifying them. Well, you are still allowed to make it fully + dynamic... if you find a use case for that. +</P><P> + Extending the magic zip-extensions is just done by adding the + additional extensions to be recognized - just remember to add + the uppercased variants too since those will be needed on + (unx-like) filesystems that are case-sensitive. In the internet + age, quite some downloaded will appear in uppercased format since + the other side declared it as that and that other end was happy + with it as being a (w32-like) case-insensitive server. Therefore, + it should look like <pre> + static zzip_strings_t my_ext[] = { ".zip", ".ZIP", ".jar", ".JAR", 0 }; + </pre> +</P><P> + There is one frequently asked question in this area - how to open + a zipped file as "test.zip/README" instead of "test/README". Other + than some people might expect, the library will not find it - if + you want to have that one needs a fileext list that contains the + empty string - not the zero string, an empty string that is. It + looks like <pre> + static zzip_strings_t my_ext[] = { ".zip", ".ZIP", "", 0 }; + </pre> +</P><P> + And last not least, people want to tell the libary to not try to + open a real file that lives side by side with the same path as the + file path that can be matched by the zziplib. Actually, the magic + wrappers were never meant to be used like - the developer should + have used zzip_dir_* functions to open a zip-file and the + zzip_file_* functions to read entries from that zip-file. 
However, + the magic-wrappers look rather more familiar, and so you will find + now a bit-option ZZIP_ONLYZIP that can be passed down to the _ext_io + variants of the magic-wrapper calls, and a real-file will never get + tested for existance. Actually, I would rather recommend that for + application data the option ZZIP_PREFERZIP, so that one can enter + debugging mode by unpacking the zip-file as a real directory tree + in the place of the original zip. +</P> + +</section><section> +<h3> The IO handlers </h3> + +<P> + While you will find the zzip_plugin_io_t declared in the zziplib + headers, you are not advised to make much assumptions about their + structure. Still we gone the path of simplicity, so you can use + a global static for this struct too just like one can do for the + EXT-list. This again mimics the internals of zziplib. There is + even a helper function zzip_init_io that will copy the zziplib + internal handlers to your own handlers-set. Actually, this is + barely needed since the zziplib library will not check for nulls + in the plugin_io structure, all handlers must be filled, and the + zziplib routines call them unconditionally - that's simply + because a conditional-call will be ten times slower than an + unconditional call which adds mostly just one or two cpu cycles + in the place so you won't ever notice zziplib to be anywhat + slower than before adding IO-handlers. +</P><P> + However, you better instantiate your handlers in your application + and call that zzip_init_io on that instance to have everything + filled, only then modify the entry you actually wish to have + modified. For <a href="zzip-xor.html">obfuscation</a> this + will mostly be just the <code>read()</code> routine. But one can + also use IO-handlers to wrap zip-files into another data part + for which one (also) wants to modify the open/close routines + as well. +</P><P> + Therefore, you can modify your normal stdio code to start using + zipped files by exchaning the fopen/fread/fclose calls by their + magic counterparts, i.e. <pre> + // FILE* file = fopen ("test/README", "rb"); + ZZIP_FILE* file = zzip_fopen ("test/README", "rb"); + // while (0 < fread (buffer, 1, buflen, file))) + while (0 < zzip_fread (buffer, 1, buflen, file))) + { do something } + // fclose (file); + zzip_fclose (file); + </pre> +</P><P> + and you then need to prefix this code with some additional + code to support your own EXT/IO set, so the code will finally + look like <pre> + /* use .DAT extension to find some files */ + static zzip_strings_t ext[] = { ".dat", ".DAT", "", 0 }; + /* add obfuscation routine - see zzxorcat.c examples */ + static zzip_plugin_io_t io; + zzip_init_io (& io, 0); + io.read = xor_read; + /* and the rest of the code, just as above, but with ext/io */ + ZZIP_FILE* file = zzip_open_ext_io ("test/README", O_RDONLY|O_BINARY, + ZZIP_ONLYZIP|ZZIP_CASELESS, ext, io); + while (0 < zzip_fread (buffer, 1, buflen, file))) + { do something } + zzip_fclose (file); + </pre> +</P> + +</section><section> +<h3> Finally </h3> + +<P> + What's more to it? Well, if you have some ideas then please mail me + about it - don't worry, I'll probably reject it to be part of the + standard zziplib dll, but perhaps it is worth to be added as a + configure option and can help others later, and even more perhaps + it can be somehow generalized just as the ext/io features have been + generalized now. 
+ In most respects, this ext/io did not add much
+ code to the <a href="zziplib.html">zziplib</a> - the posix-calls
+ in the implementation, like <code>"read(file)"</code>, were simply
+ exchanged with <code>"zip->io->read(file)"</code>, and the
+ old <code>"zzip_open(name,mode)"</code> call is split up - the old
+ entry still persists but directly calls
+ <code>"zzip_open_ext_io(name,mode,0,0,0)"</code> which has the
+ old implementation code with just one addition: when the ZZIP_FILE
+ handle is created, it uses the transferred io-handlers (or the
+ default ones if io==0), and initializes the io-member of that
+ structure for usage within the <code>zzip_read</code> calls.
+</P><P>
+ This adds just a few bytes to the libs and consumes additional
+ cpu cycles that can rightfully be called negligible (unlike
+ what most commercial vendors will tell you when they want you to
+ believe that for so many new features you have to pay a price).
+ It makes for greater variability without adding fatness to the
+ core in the default case - this is truly efficient, I'd say. Well,
+ call this a German disease :-)=) ... and again, if you have another
+ idea, write today... or next week.
+</P>
+
+</section></section>
diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-extras.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-extras.htm
new file mode 100644
index 00000000000..1d771160cb9
--- /dev/null
+++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-extras.htm
@@ -0,0 +1,164 @@
+<section><date> 20. July 2002 </date>
+<h2> ZZIP API extras </h2> The check/init API description.
+
+<!--border-->
+
+<section>
+<h3> Extras </h3>
+
+<P>
+ The next requests circulated around other file-extensions to
+ automagically look inside filetypes that have zip-format too but
+ carry other file extensions - most famous might be the ".PK3"
+ files of ID's Quake game. There have been a number of these
+ requests and in a lot of cases it dawned on me that those guys
+ may have overlooked the zzip_dir_open functions to travel
+ through documents of zip-format under any name - that is, the
+ "magic" was not actually needed; they just wanted to read
+ files in zip-format with the zziplib.
+</P>
+
+<P>
+ Other requests circulated around encryption but I did reject
+ those bluntly, always. Instead there have always been examples
+ for doing some obfuscation around the zip-format so that the
+ stock zip/unzip tools do not recognize them but a game
+ software developer can pack/unpack his AI scripts and bitmaps
+ into such a zipformat-like file.
+</P>
+
+<P>
+ After some dead-end patches (being shipped along with the
+ zziplib as configure-time compile-options - greetings to
+ Lutz Sammer and Andreas Schiffler), the general approach
+ of _ext_io came up, and was finally implemented (greetings go
+ to Mike Nordell). The _open()-calls now each have a
+ cousin of _open_ext_io() with two/three additional arguments:
+ a set of extensions to loop through in our magic testing,
+ a callback-handler plugin-table for obfuscation means,
+ and (often) a bit-mask for extra options - this bitmask even
+ has "PREFERZIP" and "ONLYZIP" options to skip the real-file
+ test magic in those <code>zzip_*open</code> functions.
+</P> + +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_open(name,flags) </td> + <td width=50%> zzip_open_ext_io(name,flags,mode,ext,io) </td></tr> + <tr><td width=50%> zzip_opendir(name) </td> + <td width=50%> zzip_opendir_ext_io(name,mode,ext,io) </td></tr> + <tr><td width=50%> zzip_dir_open(name,errp) </td> + <td width=50%> zzip_dir_open_ext_io(name,errp,ext,io) </td></tr> + <tr><td width=50%> zzip_dir_fdopen(fd,errp) </td> + <td width=50%> zzip_dir_fdopen_ext_io(fd,errp,ext,io) </td></tr> + <tr><td width=50%> zzip_file_open(dir,name,mode) </td> + <td width=50%> zzip_file_open_ext_io(dir,name,mode,ext,io) </td></tr> +</table></td></tr></table> + +<P> + Oh, and note that the mode,ext,io extras are memorized + in the respecitive ZZIP_DIR handle attached, so each + of the other calls like <code>zzip_file_open()</code> + and <code>zzip_read()</code> will be using them. There + are a few helper routines to help setup a new io-plugin + where the init_io will currently just memcopy the + default_io entries into the user-supplied plugin-struct. +</P> + +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_init_io </td> + <td width=50%> the recommended way to do things </td></tr> + <tr><td width=50%> zzip_get_default_io </td> + <td width=50%> used internally whenever you supply a null + for the io-argument of a _ext_io()-call </td></tr> + <tr><td width=50%> zzip_get_default_ext </td> + <td width=50%> used internally but not exported </td></tr> +</table></td></tr></table> + + +<P> + And last some stdio-like replacements were build but these + happen to be actually just small wrappers around the other + posix-like magic-calls. It just offers some convenience + since wrappers like "SDL_rwops" tend to use a stringised + open-mode - and I took the occasion to fold the zzip-bits + for the _ext_io-calls right in there recognized via + special extensions to the openmode-string of zzip_fopen(). +</P> + +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_fopen </td> + <td width=50%> convert stringmode and call zzip_open_ext_io </td></tr> + <tr><td width=50%> zzip_fread </td> + <td width=50%> slower way to say zzip_read </td></tr> + <tr><td width=50%> zzip_fclose </td> + <td width=50%> a synonym of zzip_close </td></tr> +</table></td></tr></table> + +<P> + For some reason, people did need the full set of function-calls() + to be working on zzip-wrappers too, so here they are - if the + ZZIP_FILE instance did wrap a real file, then the real posix-call + will be used, otherwise it is simulated on the compressed stream + with a zip-contained file - especially <code>seek()</code> can be + a slow operation: + if the new point is later then just read out more bytes till we + hit that position but if it is an earlier point then rewind to the + beginning of the compressed data and start reading/decompression + until the position is met. +</P> + +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_rewind </td> + <td width=50%> magic for rewind() </td></tr> + <tr><td width=50%> zzip_tell </td> + <td width=50%> magic for tell() </td></tr> + <tr><td width=50%> zzip_seek </td> + <td width=50%> magic for seek() </td></tr> +</table></td></tr></table> + +<P> + And last not least, there are few informative functions to + use function-calls to read parts of the opaque structures + of zzip-objects and their zzip-factory. 
+</P> + +<table cellpadding=10 width=100%><tr><td><table border=1 width=100%> + <tr><td width=50%> zzip_dir_stat </td> + <td width=50%> a stat()-like thing on a file within a ZZIP_DIR </td></tr> + <tr><td width=50%> zzip_dir_real </td> + <td width=50%> check if ZZIP_DIR wraps a stat'able posix-dirent</td></tr> + <tr><td width=50%> zzip_file_real </td> + <td width=50%> check if ZZIP_FILE wraps a stat'able posix-file </td></tr> + <tr><td width=50%> zzip_realdir </td> + <td width=50%> if zzip_dir_real then return the posix-dirent </td></tr> + <tr><td width=50%> zzip_realfd </td> + <td width=50%> if zzip_file_real then return the posix-file </td></tr> + <tr><td width=50%> zzip_dirhandle </td> + <td width=50%> the attached ZZIP_DIR of compressed ZZIP_FILE </td></tr> + <tr><td width=50%> zzip_dirfd </td> + <td width=50%> the attached posix-file of ZZIP_DIR zip-file </td></tr> + <tr><td width=50%> zzip_set_error </td> + <td width=50%> set the last ZZIP_DIR error-code </td></tr> + <tr><td width=50%> zzip_error </td> + <td width=50%> get the last ZZIP_DIR error-code </td></tr> + <tr><td width=50%> zzip_strerror </td> + <td width=50%> convert a zzip_error into a readable string </td></tr> + <tr><td width=50%> zzip_strerror_of </td> + <td width=50%> combine both above zzip_strerror of zzip_error </td></tr> + <tr><td width=50%> zzip_errno </td> + <td width=50%> helper to wrap a zzip-error to a posix-errno </td></tr> + <tr><td width=50%> zzip_compr_str </td> + <td width=50%> helper to wrap a compr-number to a readable string + </td></tr> + <tr><td width=50%> zzip_dir_free </td> + <td width=50%> internally called by zzip_dir_close if the ref-count + of the ZZIP_DIR has gone zero</td></tr> + <tr><td width=50%> zzip_freopen </td> + <td width=50%> to reuse the ZZIP_DIR from another ZZIP_FILE so it does + not need to be parsed again </td></tr> + <tr><td width=50%> zzip_open_shared_io </td> + <td width=50%> the ext/io cousin but it does not close the old ZZIP_FILE + and instead just shares the ZZIP_DIR if possible</td></tr> +</table></td></tr></table> + +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-file.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-file.htm new file mode 100644 index 00000000000..241baef85a7 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-file.htm @@ -0,0 +1,183 @@ +<section> <date> 1. June 2000 </date> +<h2> ZIP File Access </h2> Using Zipped Files Transparently + +<!--border--> + +<section> +<h3>The Typedef</h3> + +<P> + The typedef <code>ZZIP_FILE</code> can serve as a replacement + for a normal file descriptor. As long as it is only used + for reading a file, the zzlib-user can actually replace + the posix functions <code>open/read/close</code> + by their counterparts from the + <a href="zziplib.html">zziplib library</a>: + <code>zzip_open/zzip_read/zzip_close</code>. +</P> +<P> + As long as the filename path given to <code>zzip_open</code> + refers to a real file in the filesystem, it will almost + directly forward the call to the respective posix <code>open</code> + call. The returned file descriptor is then stored in + a member-variable of the <code>ZZIP_FILE</code> structure. +</P> +<P> + Any subsequent calls to <code>zzip_read</code> will then + be forwarded to the posix <code>read</code> call on the + memorized file descriptor. The same about <code>zzip_close</code> + which will call the posix <code>close</code> function and then + <code>free</code> the <code>ZZIP_FILE</code> structure. 
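+</P>
+<P>
+ As a quick sketch (assuming the zziplib header is included and leaving
+ out any error handling), the replacement really is just a rename of
+ the three calls: <pre>
+   ZZIP_FILE* fp = zzip_open ("test/README", O_RDONLY);
+   if (fp)
+   {
+       char buf[256];
+       zzip_ssize_t n;
+       while (0 < (n = zzip_read (fp, buf, sizeof(buf))))
+           write (1, buf, n);     /* prints the (possibly unzipped) data */
+       zzip_close (fp);
+   }
+ </pre>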
+</P>
+<P>
+ The real benefit of the
+ <a href="zziplib.html">zziplib library</a>
+ comes about when the filename argument does actually refer
+ to a file that is zipped in a zip-archive. It can even happen
+ that both a real file and a zipped file live under the
+ same pathname given to the <code>zzip_open</code> call,
+ in which case the real file is used in preference.
+</P>
+
+</section><section>
+<h3>Zipped File</h3>
+
+<P>
+ Suppose you have a subdirectory called '<tt>test/</tt>'. In
+ this directory is just one file, called '<tt>README</tt>'.
+ Call the <code>zzip_open</code> function with an
+ argument of '<i>optional-path/</i> <tt>test/README</tt>'
+ and it will open that file for subsequent reading with
+ <code>zzip_read</code>. In this case the real (<i>stat'able</i>)
+ file is opened.
+</P>
+<P>
+ Now you can go to the '<tt>test/</tt>' directory and zip up
+ the files in there by calling
+ <nobr><tt>`zip ../test.zip *`</tt></nobr>.
+ After this, you can delete the '<tt>test/</tt>' directory and
+ the call to <code>zzip_open</code> will still succeed.
+ The reason is that the part of the path saying
+ '<tt>test/README</tt>' will be replaced by something like
+ '<tt>test.zip:README</tt>' - that is, the real file '<tt>test.zip</tt>'
+ is opened and searched for a contained file '<tt>README</tt>'.
+</P>
+<P>
+ Calling <code>zzip_read</code> on the zipped '<tt>README</tt>' file
+ will return the very same data as if it were a real file in a
+ real directory. If the zipped file is compressed it will be
+ decompressed on the fly.
+</P>
+
+</section><section>
+<h3>Zip Directory</h3>
+
+<P>
+ The same applies to the use of <code>opendir/readdir/closedir</code>
+ which can safely be replaced with their counterparts from the
+ <a href="zziplib.html">zziplib library</a> - again their prototype
+ follows the scheme of the original calls, just prepend <tt>zzip_</tt>
+ to the function calls and <tt>ZZIP_</tt> to the struct-typedefs.
+</P>
+<P>
+ Calling <code>zzip_opendir</code> on a real directory will then
+ return a <code>ZZIP_DIR</code> whose member-variable
+ <code>realdir</code> points to the actual <code>DIR</code>-structure
+ returned by the underlying posix <code>opendir</code>-call.
+</P>
+<P>
+ If a real directory '<tt>test</tt>' does not exist, then
+ <code>zzip_opendir</code> will try to open a file '<tt>test.zip</tt>'
+ with a call to <code>zzip_dir_open</code>.
+ Subsequent calls to <code>zzip_readdir</code> will then return
+ information as obtained from the central archive directory
+ of the zip-file.
+</P>
+
+</section><section>
+<h3>Differences</h3>
+
+<P>
+ There are no differences between the posix calls and their counterparts
+ from the <a href="zziplib.html">zziplib library</a> - well, just
+ as long as the zip-file contains just the plain files from a directory.
+</P>
+<P>
+ If the zip-file contains directory entries you may be faced with
+ some awkward behaviour, since in a zip-file a directory happens to be
+ just an empty file. Note that the posix function <code>open</code>
+ may also open a directory for reading - it will only return
+ <code>EISDIR</code> if the <code>open</code> mode-argument included
+ write-access.
+</P>
+<P>
+ What the current version of the
+ <a href="zziplib.html">zziplib library</a>
+ can definitely not do: calling zzip_opendir on a directory zipped
+ <em>inside</em> a zip-file.
+</P>
+<P>
+ To prevent the enrollment of directories into the zip-archive, you
+ can use the <tt>-D</tt> option of the <tt>zip</tt> program.
That + is in any <tt>Makefile</tt> you may want to use + <nobr><tt>`cd $(dir) && zip -D ../$(dir).zip *`</tt></nobr>. +</P> + +</section><section> +<h3>Advantages</h3> + +<P> + Distribution of a set of files is much easier if it just means + to wrap up a group of files into a zip-archive - and copy that + zip-archive to the respective destination directory. + Even more the files can be compressed and unlike a <tt>tar.gz</tt> + archive there is no need to decompress the archive in temporary + location before accessing a member-file. +</P> +<P> + On the other hand, there is no chance to scatter files around + on the disk like it could easily happen with a set of gzip'ed + man-pages in a single `<tt>man</tt>`-directory. The reader + application does not specifically need to know that the file + is compressed, so that reading a script like + `<tt>share/guile/x.x.x/ice-9/popen.scm</tt>` is done by simple + calls to <code>zzip_read</code> which works on zip-file named + `<tt>share/guile/x.x.x/ice-9.zip</tt>`. +</P> +<P> + A version mismatch between different files in a group is now + obvious: either the opened file belongs to the distribution + archive, or otherwise in resides in a real directory <em>just + next to the zip-archive that contains the original</em>. +</P> + +</section><section> +<h3>Issues</h3> + +<P> + The <a href="zziplib.html">zziplib library</a> does not + use any code piece from the <code>zip</code> programs, neither + <em>pkzip</em> nor <em>infozip</em>, so there is no license + issue here. The decompression is done by using the free + <a href="http://www.gzip.org/zlib">zlib library</a> which has no special + issues with respect to licensing. + The rights to the <a href="zziplib.html">zziplib library</a> + are reserved to the copyright holders, there is a public + license that puts most the sources themselves under + <a href="COPYING.LIB">the GNU Lesser General Public License</a>, + so that the use of a shared library instance of the + <a href="zziplib.html">zziplib library</a> + has no restrictions of interest to application programmers. + For more details and hints about static linking, check + the <a href="copying.html">COPYING</a> information. +</P> +<P> + The only issue you have with the + <a href="zziplib.html">zziplib library</a> + is the fact that you can only <em>read</em> the contained files. + Writing/Compression is not implemented. Even more, a compressed + file is not seekable at the moment although I hope that someone + will stand up to implement that functionality someday. +</P> + +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-index.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-index.htm new file mode 100644 index 00000000000..8b7a9bb6637 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-index.htm @@ -0,0 +1,55 @@ +<section> <date> created 1.Jun.2000, last updated 09.Feb.2003 </date> +<h2> The Library </h2> Overview + +<!--border--> + +<!-- 1. section of zzip-zip.html --> + +<P> + The <a href="zziplib.html">zziplib library</a> is intentionally + lightweight, it offers the ability to easily extract data from + files archived in a single zip file. Applications can bundle + files into a single zip archive and access them. + The implementation is based only on the (free) subset of + compression with the <a href="http://www.gzip.org/zlib"> + zlib algorithm</a> which is actually used by the <tt>zip/unzip</tt> tools. 
+</P> + +<center> + The library allows reading zip archives in a number of ways, +</center><dl> +<dt>archive mode:</dt> +<dd> reading the zip directory and extracting files from it. + This is the traditional mode as seen with unzip-utilities. + Some extra unzip-utiles for transparent/magic mode are + shipped as well. +</dd> +<dt>replacement mode:</dt> +<dd> Use ZZIP_FILE / ZZIP_DIR pointers provided by zziplib and + put them to work with routines originally developped to + work with real directories and file handles. The API calls + do follow traditional synopsis from posix/stdio. +</dd> +<dt>transparent mode:</dt> +<dd> Use replacement handles and allow the open()-calls to + automatically detect when a file is contained in a zip + archive or when it is a real file in the file system. + A filepath can be partly in a real filesystem and partly + within the zip archive when one is seen. +</dd> +<dt> ext magic </dt> +<dd> Use the same filepath to access either a zipped or real + file - it looks for a real file and there is none then + every subdirectory of the path is checked, a ".zip" + extension appended, and the zipped file transparently + opened. This can speed up dat-file development + dramatically. +</dd> +<dt> io/xor magic </dt> +<dd> The access to the filesystem can be hooked up - examples + are given for xor obfuscation which is great for game + artwork and AI data. A small intro for SDLrwops usage is + given as well. +</dd> +</dl> +</section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-parse.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-parse.htm new file mode 100644 index 00000000000..a3c49dcba76 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-parse.htm @@ -0,0 +1,219 @@ +<section> <date> 17. December 2002 </date> +<h2> ZIP Format </h2> About Zip Parsing Internals... + +<!--border--> + +<section> +<h3> ZIP Trailer Block </h3> + +<P> + The general ZIP file format is written sequentially - each file + being added gets a local file header and its inflated data. When + all files are written then a central directory is written - and + this central directory may even span multiple disks. And each + disk gets a descriptor block that contains a pointer to the start + of the central directory. This descriptor is always written last + and therefore we call it the "ZIP File Trailer Block". +</P> +<P> + Okay, so we know that this ZIP Trailer is always at the end of a zip + file and that is has a fixed length, and a magic four-byte value at + its block start. That should make it easy to detect zip files but in + the real world it is not that easy - it is allowed to add a zip + archive comment text <em>after</em> the Trailer block. It's rarely + used these days but it turns out that a zip reader must be ready + to search for the Trailer block starting at the end of the file + and looking upwards for the Trailer magic (it's "PK\5\6" btw). +</P> +<P> + Now that's what the internal function __zip_find_disk_trailer is + used for. It's somewhat optimized as we try to use mmap features + of the underlying operating system. The returned structure is + called zzip_disk_trailer in the library source code, and we only + need two values actually: u_rootseek and u_rootsize. The first of + these can be used to lseek to the place of the central directory + and the second value tells us the byte size of the central directory. +</P> + +</section><section> +<h3> ZIP Central Directory </h3> + +<P> + So here we are at the central directory. 
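+</P><P>
+ In terms of plain posix calls, that step is essentially the following
+ (a much simplified sketch with made-up variable names, not the real
+ library code, which prefers mmap and checks for errors): <pre>
+   /* u_rootseek and u_rootsize as taken from the disk trailer */
+   char* root = malloc (u_rootsize);
+   lseek (fd, u_rootseek, SEEK_SET);
+   read (fd, root, u_rootsize);   /* the whole central directory */
+ </pre>
+</P><P>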
The disk trailer did also + tell us how many entries are there but it is not that easy to read + them. Each directory entry (zzip_root_dirent type) has again a + magic value up front followed by a few items but they all have some + dos format - consider the timestamps, and atleast size/seek values + are in intel byteorder. So we might want to parse them into a format + that is easier to handle in internal code. +</P> +<P> + That is also needed for another reason - there are three items in that + directory entry being size values of three variadic fields following + right after the directory. That's right, three of these. The first + variadic field is the filename of this directory entry. In other + words, the root directory entry does not contain a seek value of + where the filename starts off, the start of the filename is + implicitly given with the end address of the directory entry. +</P> +<P> + The size value for the filename does simply say how long the + filename is - however, and more importantly, it allows us to + compute the start of the next variadic field, called the extra + info field. Well, we do not need any value from that extra info + block (it has unix filemode bits when packed under unix) but we + can be quite sure that this field is not null either. And that + was the second variadic field. +</P> +<P> + There is a third variadic field however - it's the comment field. + That was pretty heavily used in the good old DOS days. We are not + used to it anymore since filenames are generally self-descriptive + today but in the DOS days a filename was 8+3 chars maximum - and + it was in the comment field that told users what's in there. It + turned out that many software archives used zip format for just + that purpose as their primary distribution format - for being + able to attach a comment line with each entry. +</P> +<P> + Now, these three variadic fields have each an entry in the + directory entry header telling of their size. And after these + three variadic fields the next directory entry follows right in. + Yes, again there is no seek value here - we have to take the sum + of the three field sizes and add that to the end address of the + directory entry - just to be able to get to the next entry. +</P> + +</section><section> +<h3> Internal Directory </h3> + +<P> + Now, the external ZIP format is too complicated. We cut it down + to the bare minimum we actually need. The fields in the entry + are parsed into a format directly usable, and from the variadic + fields we only keep the filename. Oh, and we ensure that the + filename gets a trailing null byte, so it can surely be passed + down into libc routines. +</P> +<P> + There is another trick by the way - we use the u_rootsize value + to malloc a block for the internal directory. That ensures the + internal root directory entries are in nearby locations, and + including the filenames themselves which we put in between the + dirent entries. That's not only similar to the external directory + format, but when calling readdir and looking for a matching + filename of an zzip_open call, this will ensure the memory is + fetched in a linear fashion. Modern cpu architectures are able + to burst through it. +</P> +<P> + One might think to use a more complicated internal directory + format - like hash tables or something. However, they all suffer + from the fact that memory access patterns will be somewhat random + which eats a lot of speed. 
+ It is hardly predictable under what
+ circumstances it gets us a benefit, but the problem is certainly
+ not far-fetched: there are zzip archives with 13k+ entries. In a real
+ filesystem people will not put 13k files into one directory, of
+ course - but in the zip central directory all entries are listed
+ in parallel with their subdirectory paths attached. So, if the
+ original subtree had a number of directories, they'll end up in
+ parallel in the zip's central directory.
+</P>
+
+</section><section>
+<h3> File Entry </h3>
+
+<P>
+ The zip directory entry has one value that is called z_off in the
+ zziplib sources - it's the seek value to the start of the actual
+ file data, or more correctly it points to the "local file header".
+ Each file data block is preceded/followed by a little frame.
+ There is not much interesting information in these framing blocks;
+ the values are duplicates of the ones found in the zip central
+ directory - however, we must skip the local file header (and a
+ possible duplicate of filename and extra info) to arrive at the
+ actual file data.
+</P>
+<P>
+ At the start of the actual file data, we can finally read data.
+ The zziplib library knows only about two choices, defined by
+ the value in the z_compr field - a value of "0" means "stored":
+ the data has been stored in uncompressed format, so that we can
+ just copy it out of the file to the application buffer.
+</P>
+<P>
+ A value of "8" means "deflated", and here we initialize zlib
+ and the file data is decompressed before copying it to the
+ application buffer. Care must be taken here since zlib input data
+ and decompressed data may differ significantly. The zlib compression
+ will not even obey byte boundaries - a single bit may expand to
+ hundreds of bytes. That's why each ZZIP_FILE has a decompression
+ buffer attached.
+</P>
+<P>
+ All the other z_compr values are only of historical meaning;
+ the infozip unix tools will only create deflated content, and
+ the same applies to the pkzip 2.x tools. If there were any other
+ value than "0" or "8" then zziplib could not decompress it, simple
+ as that.
+</P>
+
+</section><section>
+<h3> ZZIP_DIR / ZZIP_FILE </h3>
+
+<P>
+ The internal ZZIP_DIR structure stores a posix handle to the
+ zip file, and a pointer to the parsed central directory block.
+ One can use readdir/rewinddir to walk each entry in the central
+ directory and compare with the filenames attached. And that's
+ what will be done at a zzip_open call to find the file entry.
+</P>
+<P>
+ There are a few more fields in the ZZIP_DIR structure, where
+ most of these are related to the use of this struct as a
+ shared resource. You can use zzip_file_open to walk the
+ preparsed central directory and return a new ZZIP_FILE handle
+ for that entry.
+</P>
+<P>
+ That ZZIP_FILE handle contains a back pointer to the ZZIP_DIR
+ that it was made from - and the back pointer also serves as a flag
+ that the ZZIP_FILE handle points to a file within a ZIP file as
+ opposed to wrapping a real file in the real directory tree.
+ Each ZZIP_FILE will increment a shared counter, so that the
+ next dir_close will be deferred until all ZZIP_FILE handles have
+ been destroyed.
+</P>
+<P>
+ Another optimization is the cache-pointer in the ZZIP_DIR. It is
+ quite common to read data entries sequentially, in that the
+ zip directory is scanned for files matching a specific pattern,
+ and when a match is seen, that file is opened.
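+</P><P>
+ As a sketch (with a made-up archive name and match pattern), that
+ scan-and-open pattern on one shared ZZIP_DIR might look like <pre>
+   ZZIP_DIR* dir = zzip_dir_open ("data.zip", 0);
+   if (dir)
+   {
+       ZZIP_DIRENT entry;
+       while (zzip_dir_read (dir, &entry))
+       {
+           if (! strstr (entry.d_name, ".conf")) continue; /* the "pattern" */
+           ZZIP_FILE* fp = zzip_file_open (dir, entry.d_name, 0);
+           if (fp) { /* ... zzip_file_read ... */ zzip_file_close (fp); }
+       }
+       zzip_dir_close (dir);
+   }
+ </pre>
+</P><P>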
However, each + ZZIP_FILE needs a decompression buffer, and we keep a cache of + the last one freed so that it can be picked up right away for the + next zzip_file_open. +</P> +<P> + Note that using multiple zzip_open() directly, each will open + and parse a zip directory of its own. That's bloat both in + terms of memory consumption and execution speed. One should try + to take advantage of the feature that multiple ZZIP_FILE's can + share a common ZZIP_DIR with a common preparsed copy of the + zip's central directory. That can be done directly with using + zzip_file_open to use a ZZIP_DIR as a factory for ZZIP_FILE, + but also zzip_freopen can be used to reuse the old internal + central directory, instead of parsing it again. +</P> +<P> + And while zzip_freopen would release the old ZZIP_FILE handle + only resuing the ZZIP_DIR attached, one can use another routine + directly called zzip_open_shared that will create a ZZIP_FILE + from an existing ZZIP_FILE. Oh, and not need to worry about + problems when a filepath given to zzip_freopen() happens to + be in another place, another directory, another zip archive. + In that case, the old zzip's internal directory is freed and + the others directory read - the preparsed central directory + is only used if that is actually possible. +</P> + +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-sdl-rwops.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-sdl-rwops.htm new file mode 100644 index 00000000000..38923f6f0be --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-sdl-rwops.htm @@ -0,0 +1,90 @@ +<section> <date> 19. Aug 2001 </date> +<h2> SDL rwops </h2> Example to make an SDL_rwops interface. + +<p><small> some <b>MSVC</b> help in + <a href="README.MSVC6">README.MSVC6</a> and + <a href="README.SDL">README.SDL</a> +</small></p> + +<!--border--> + +<section> +<h3> Source </h3> + +<P> + The example sources of the <a href="zziplib.html">zziplib library</a> + are usually put under the <a href="COPYING.ZLIB">ZLIB license</a> so + that you can reuse the code freely in your own projects. Here we talk + about the example that might be most useful for + <a href="http://libsdl.org">SDL</a> based programs. + Just copy the two files + <a href="SDL_rwops_zzip.h">SDL_rwops_zzip.h</a> + and + <a href="SDL_rwops_zzip.c">SDL_rwops_zzip.c</a> + to the directory with your other project sources, and make sure + to link it somehow to your programs. I did not make the effort to + create a seperate library out of it - it would just export one + single function <tt>SDL_RWFromZZIP</tt> that has the same call-synopsis + like <tt>SDL_RWFromFile</tt> (but it can not (yet) write a zip-file). +</P> + +<P> + The source file <a href="SDL_rwops_zzip.c">SDL_rwops_zzip.c</a> is + quite short - it just stores a ZZIP_FILE handle in the userdata + field of the <tt>SDL_rwops</tt> structure. The SDL'rwop calls will then + fetch that pointer and call the corresponding functions from the + <a href="zziplib.html">zziplib library</a>. Most of the glue code + is in the <tt>SDL_RWFromZZIP</tt> function that allocates an + <tt>SDL_rwops</tt> structure and fills the handler-functions + into the callback fields. +</P> + +</section><section> +<h3> Usage </h3> + +<P> + If you link this file to your project, remember that your executables + do now have additional dependencies - not only -lzzip to link with + the <a href="zziplib.html">zziplib library</a> - do not forget to + link with zlib library via -lz. 
+ Of course, there is a lib-config
+ script that you can use: `zzip-config --libs` will return these
+ linker-infos (unless you have a native-windows system - it is a
+ shell-script).
+</P>
+
+<P>
+ As an example, replace the <tt>SDL_RWFromFile</tt> call that accesses your
+ game-graphic files - these files are stored in share/myapp
+ of course, where they belong. When you've done that
+ then go to X/share/myapp and
+<br><code>
+ `(cd graphics/ && zip -9r ../graphics.zip .)` </code><br>
+ and rename the graphics/ subfolder - and still all your files
+ are found: a filepath like X/share/myapp/graphics/game/greetings.bmp
+ will open X/share/myapp/graphics.zip and return the zipped file
+ game/greetings.bmp in the zip-archive (for reading that is).
+</P>
+
+</section><section>
+<h3> Test </h3>
+
+<P>
+ The <a href="zziplib.html">zziplib</a> configure script does not
+ look for <a href="http://libsdl.org">SDL</a>. If you know that
+ you have <a href="http://libsdl.org">SDL</a> installed
+ then you can check this <tt>SDL_rwops</tt> example by using
+ <code><nobr>`make testsdl`</nobr></code>. This will compile the
+ two source files <a href="SDL_rwops_zzip.c">SDL_rwops_zzip.c</a>
+ and <a href="SDL_rwops_zzcat.c">SDL_rwops_zzcat.c</a> to be linked
+ together into an executable called <code>zzcatsdl</code>. The test
+ will continue with a <code><nobr>`zzcatsdl test/README`</nobr></code>
+ - just like it is done for <code><nobr>`make test3`</nobr></code>.
+</P>
+<P>
+ The corresponding section in the <a href="Makefile.am">Makefile.am</a>
+ is also an example of how to use lib-config scripts to build files. Here
+ the build-processing has not been tweaked much by automake/autoconf -
+ just use sdl-config and zzip-config to add the needed flags.
+</P>
+</section></section>
+
diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-xor.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-xor.htm
new file mode 100644
index 00000000000..2434c2974a9
--- /dev/null
+++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-xor.htm
@@ -0,0 +1,77 @@
+<section> <date> 15. July 2002 </date>
+<h2> ZIP Obfuscation </h2> Using obfuscations like XOR.
+
+<!--border-->
+
+<section>
+<h3> The EXT/IO calls </h3>
+
+<P>
+ You really should read the section about the
+ <a href="zzip-extio.html">EXT/IO feature</a> of the zziplib since the
+ obfuscation routines are built on top of it. In order to use obfuscation,
+ you will generally need to use all three additional arguments that
+ can be passed to the _open_ext_io functions. For the XOR-example, only
+ one IO-handler is modified, namely the read()-call, which will simply
+ xor each data byte with a specific value upon reading. This has two
+ advantages - doing an xor twice yields the original data, so as a
+ developer you do not have to wonder about the encryption/decryption
+ pair, and it is a stateless obfuscation that does not need to know
+ about the current position within the zip-datafile or zipped-file
+ data stream.
+</P><P>
+ The examples provided just use a simple routine for xoring data that
+ is defined in all three of the example programs: <pre>
+  static int xor_value = 0x55;
+  static zzip_ssize_t xor_read (int f, void* p, zzip_size_t l)
+  {
+      zzip_ssize_t r = read(f, p, l);
+      zzip_ssize_t x; char* q = p;
+      for (x=0; x < r; x++) q[x] ^= xor_value;
+      return r;
+  }
+  </pre>
+</P><P>
+ and place this routine into the io-handlers after initializing
+ the structure: <pre>
+  zzip_init_io (&xor_handlers, 0); xor_handlers.read = &xor_read;
+  </pre>
+</P>
+
+</section><section>
+<h3> The examples </h3>
+
+<P>
+ There are three example programs. The first one is
+ <a href="zzxorcopy.c">zzxorcopy.c</a> which actually is not a zziplib
+ based program. It just opens a file via stdio, loops through all data
+ bytes it can read, xor'ing each of them, and writes them out to the
+ output file. A
+ call like <code><nobr>"zzxorcopy file.zip file.dat"</nobr></code> will
+ create an obfuscated dat-file from a zip-file that has possibly been
+ created with the normal infozip tools or any other archive program that
+ can generate a zip-file. The output dat-file is not recognized by normal
+ zip-enabled apps - the file magic is obfuscated too. This output
+ dat-file however is what the other two example programs operate on.
+</P><P>
+ The <a href="zzxordir.c">zzxordir.c</a> program will open such an obfuscated
+ zip file and decode the central directory of that zip. Everything is
+ still there, just the way it can be shown with the normal unzip
+ programs and routines. And the <a href="zzxorcat.c">zzxorcat.c</a> program
+ can extract data from this obfuscated zip - and print it un-obfuscated
+ to the screen. These example programs can help you jumpstart with
+ your own set of obfuscator routines, possibly more complex ones.
+</P><P>
+ By the way, just compare those with their non-xor counterparts that
+ you can find in <a href="zzdir.c">zzdir.c</a> and
+ <a href="zzcat.c">zzcat.c</a>. Notice that the difference is
+ in the setup part until the _open_ call, after which one can just
+ use the normal zzip_ routines on that obfuscated file. This is
+ great for developing since you can start off with the magic-wrappers
+ working on real files, then slowly turn to pack-files that hold
+ most of the data, and finally end with a zip-only and obfuscated
+ dat-file for your project.
+</P>
+
+<p align="right"><small><small>
+<a href="copying.html">staticlinking?</a>
+</small></small></p>
+</section></section>
diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-zip.htm b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-zip.htm
new file mode 100644
index 00000000000..141700044a5
--- /dev/null
+++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzip-zip.htm
@@ -0,0 +1,146 @@
+<section><date> 1. June 2000 </date>
+<h2> ZIP Access </h2> Accessing Zip Archives with ZLib Decompression
+
+<!--border-->
+
+<section>
+<h3> The Library </h3>
+
+<P>
+ The <a href="zziplib.html">zziplib library</a> offers users the
+ ability to easily extract data from files archived in a single
+ zip file. This way, programs that use many "read-only" files from
+ a program specific source directory can have a single zip
+ archive.
+</P>
+<P>
+ This library offers only a (free) subset of the compression methods
+ provided in a full implementation but that is good enough. The
+ idea here is that <tt>zip/unzip</tt> utilities can be used
+ to create archives that will later be read by using this library.
+ Yet those programs (or a library with their functionality)
+ are not needed in that final operation.
+</P>
+
+</section><section>
+<h3> Using A Zip-File </h3>
+
+<P>
+ Before a file in the zip-archive is accessed, the application
+ must first get a handle to the central directory contained in the
+ zip-file. This is achieved by calling
+ <a href="zziplib.html#zzip_dir_open"> zzip_dir_open </a>
+ or
+ <a href="zziplib.html#zzip_dir_fdopen"> zzip_dir_fdopen </a>.
+ The directory entries in the zip-archive can be obtained
+ with
+ <a href="zziplib.html#zzip_dir_read"> zzip_dir_read </a>.
+ After being done, the zip-dir handle should be closed with
+ <a href="zziplib.html#zzip_dir_close"> zzip_dir_close </a>.
+</P>
+<p><pre>  ZZIP_DIR* dir = zzip_dir_open("test.zip",0);
+  if (dir) {
+    ZZIP_DIRENT dirent;
+    if (zzip_dir_read(dir,&dirent)) {
+      /* show info for first file */
+      printf("%s %i/%i\n", dirent.d_name, dirent.d_csize, dirent.st_size);
+    }
+    zzip_dir_close(dir);
+  }
+</pre></p>
+<P>
+ From the zip-dir handle a compressed file can be opened
+ for reading. This is achieved by using
+ <a href="zziplib.html#zzip_file_open"> zzip_file_open </a>
+ and providing it with the dir-handle and a name of the file.
+ The function
+ <a href="zziplib.html#zzip_file_read"> zzip_file_read </a>
+ is used to get pieces of uncompressed data from the file, and
+ the file-handle should be closed with
+ <a href="zziplib.html#zzip_file_close"> zzip_file_close </a>.
+</P>
+<p><pre>  ZZIP_FILE* fp = zzip_file_open(dir,"README",0);
+  if (fp) {
+    char buf[10];
+    zzip_ssize_t len = zzip_file_read(fp, buf, 10);
+    if (len) {
+      /* show head of README */
+      write(1, buf, len);
+    }
+    zzip_file_close(fp);
+  }
+</pre></p>
+
+</section><section>
+<h3> Magic Zipped Files </h3>
+
+<P>
+ There is actually no need to directly use the zip-centric functions
+ as described above. Instead there are magic replacements for the
+ posix calls <code>open/read/close</code> and
+ <code>opendir/readdir/closedir</code>. The prototypes of these
+ functions had been the guideline for the design of their magic
+ counterparts in the
+ <a href="zziplib.html">zziplib library</a>.
+</P>
+<P>
+ The magic functions are described in a separate document on
+ <a href="zzip-file.html"> Using Zipped Files </a>. In general,
+ the functions have a prefix <tt>zzip_</tt> and their argument
+ types have a prefix <tt>ZZIP_</tt> where appropriate. Calls
+ to the magic functions and the direct functions above can
+ be mixed as long as the magic functions have not opened
+ a real file.
+</P>
+<P>
+ To detect a real file (or directory), the info functions
+ <a href="zziplib.html#zzip_file_real"> zzip_file_real </a>
+ and
+ <a href="zziplib.html#zzip_dir_real"> zzip_dir_real </a>
+ can be used.
+ If these return a true value, the standard posix functions
+ are more appropriate. The posix handles can be obtained with
+ a call to
+ <a href="zziplib.html#zzip_realdir"> zzip_realdir </a> and
+ <a href="zziplib.html#zzip_realfd"> zzip_realfd </a> respectively.
+</P>
+
+</section><section>
+<h3> Errors & Infos </h3>
+
+<P>
+ There is a set of error and info functions available. To handle
+ error conditions specific to the
+ <a href="zziplib.html">zziplib library</a>
+ there are these functions:
+ <a href="zziplib.html#zzip_error"> zzip_error </a>,
+ <a href="zziplib.html#zzip_seterror"> zzip_seterror </a>
+ and their string representations with
+ <a href="zziplib.html#zzip_strerror"> zzip_strerror </a>,
+ <a href="zziplib.html#zzip_strerror_of"> zzip_strerror_of </a>.
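+</P><P>
+ For instance (just a small sketch, assuming stdio is available for the
+ message), opening a zip-archive with an explicit error slot might
+ look like <pre>
+   zzip_error_t error = 0;   /* as declared in the zziplib headers */
+   ZZIP_DIR* dir = zzip_dir_open("test.zip", &error);
+   if (! dir)
+       fprintf(stderr, "could not open test.zip: %s\n",
+               zzip_strerror(error));
+ </pre>
+</P><P>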
+ The magic functions will map any of these specific library + error conditions to the more generic system <code>errno</code> + codes with + <a href="zziplib.html#zzip_errno"> zzip_errno </a>. +</P> +<P> + More information on stream can be obtained with + <a href="zziplib.html#zzip_dir_stat"> zzip_dir_stat </a> and + <a href="zziplib.html#zzip_dirhandle"> zzip_dirhandle. </a> + The latter is used to obtain the dir-handle that every zipped file + handle has even if not explicitly opened. +</P> +<P> + The usage of many functions are shown in the example programs + that come along with the + <a href="zziplib.html">zziplib library</a>. See the files + <a href="zzcat.c"> zzcat.c </a> and + <a href="zzdir.c"> zzdir.c </a>. The + <a href="zziptest.c"> zziptest.c </a> program needs the + private header file + <a href="zzip.h"> zzip.h </a> whereas the library installer + will only copy the public include file + <a href="zziplib.h"> zziplib.h </a> to your system's + <tt>include</tt> directory. +</P> +</section></section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/__init__.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/__init__.py new file mode 100644 index 00000000000..e69de29bb2d --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/__init__.py diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/commentmarkup.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/commentmarkup.py new file mode 100644 index 00000000000..3f605a72d6c --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/commentmarkup.py @@ -0,0 +1,85 @@ +from match import Match + +def markup_link_syntax(text): + """ markup the link-syntax ` => somewhere ` in the text block """ + return (text + & Match(r"(?m)(^|\s)\=\>\"([^\"]*)\"") + >> r"\1<link>\2</link>" + & Match(r"(?m)(^|\s)\=\>\'([^\']*)\'") + >> r"\1<link>\2</link>" + & Match(r"(?m)(^|\s)\=\>\s(\w[\w.]*\w\(\d+\))") + >> r"\1<link>\2</link>" + & Match(r"(?m)(^|\s)\=\>\s([^\s\,\.\!\?]+)") + >> r"\1<link>\2</link>") + +class CommentMarkup: + """ using a structure having a '.comment' item - it does pick it up + and enhances its text with new markups so that they can be represented + in xml. 
Use self.xml_text() to get markup text (knows 'this function') """ + def __init__(self, header = None): + self.header = header + self.text = None # xml'text + def get_filename(self): + if self.header is None: + return None + return self.header.get_filename() + def parse(self, header = None): + if header is not None: + self.header = header + if self.header is None: + return False + comment = self.header.comment + try: + comment = self.header.get_otherlines() + except Exception, e: + pass + mode = "" + text = "" + for line in comment.split("\n"): + check = Match() + if line & check(r"^\s?\s?\s?[*]\s+[*]\s(.*)"): + if mode != "ul": + if mode: text += "</"+mode+">" + mode = "ul" ; text += "<"+mode+">" + line = check.group(1) + text += "<li><p> "+self.markup_para_line(line)+" </p></li>\n" + elif line & check(r"^\s?\s?\s?[*](.*)"): + if mode != "para": + if mode: text += "</"+mode+">" + mode = "para" ; text += "<"+mode+">" + line = check.group(1) + if line.strip() == "": + text += "</para><para>"+"\n" + else: + text += " "+self.markup_para_line(line)+"\n" + else: + if mode != "screen": + if mode: text += "</"+mode+">" + mode = "screen" ; text += "<"+mode+">" + text += " "+self.markup_screen_line(line)+"\n" + if mode: text += "</"+mode+">"+"\n" + self.text = (text + & Match(r"(<para>)(\s*[R]eturns)") >>r"\1This function\2" + & Match(r"(?s)<para>\s*</para><para>") >> "<para>" + & Match(r"(?s)<screen>\s*</screen>") >> "") + return True + def markup_screen_line(self, line): + return self.markup_line(line.replace("&","&") + .replace("<","<") + .replace(">",">")) + def markup_para_line(self, line): + return markup_link_syntax(self.markup_line(line)) + def markup_line(self, line): + return (line + .replace("<c>","<code>") + .replace("</c>","</code>")) + def xml_text(self, functionname = None): + if self.text is None: + if not self.parse(): return None + text = self.text + if functionname is not None: + def function(text): return "<function>"+text+"</function> function" + text = (text + .replace("this function", "the "+function(functionname)) + .replace("This function", "The "+function(functionname))) + return text diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/dbk2htm.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/dbk2htm.py new file mode 100644 index 00000000000..f8593e697b0 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/dbk2htm.py @@ -0,0 +1,26 @@ +from match import Match +import string + +class dbk2htm_conversion: + mapping = { "<screen>" : "<pre>", "</screen>" : "</pre>", + "<para>" : "<p>", "</para>" : "</p>" , + "<function>" : "<link>", "</function>" : "</link>" } + def __init__(self): + pass + def section2html(self, text): + for str in self.mapping: + text = string.replace(text, str, self.mapping[str]) + return text + def paramdef2html(self, text): + s = Match() + txt = text & s(r"\s+<paramdef>") >> r"\n<nobr>" + txt &= s(r"<paramdef>") >> r"<nobr>" + txt &= s(r"</paramdef>") >> r"</nobr>" + txt &= s(r"<parameters>") >> r"\n <code>" + txt &= s(r"</parameters>") >> r"</code>\n" + return txt + +def section2html(text): + return dbk2htm_conversion().section2html(text) +def paramdef2html(text): + return dbk2htm_conversion().paramdef2html(text) diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/docbookdocument.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/docbookdocument.py new file mode 100644 index 00000000000..c4602ad64d3 --- /dev/null +++ 
b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/docbookdocument.py @@ -0,0 +1,95 @@ +#! /usr/bin/env python +# -*- coding: UTF-8 -*- +from match import Match + +class DocbookDocument: + """ binds some xml content page with additional markup - in this + variant we set the rootnode container to 'reference' and the DTD + to the Docbook 4.1.2 version. Modify as you like.""" + has_title_child = [ "book", "chapter", "section", "reference" ] + docbook_dtd = ( + ' PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"'+"\n"+ + ' "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd"') + def __init__(self, o, filename = None): + self.o = o + self.rootnode = "reference" + self.filename = filename + self.title = "" + self.text = [] + def add(self, text): + """ add some content """ + self.text += [ text ] + return self + def get_title(self): + if self.title: return title + try: return self.text[0].get_title() + except Exception, e: pass + return self.title + def _xml_doctype(self, rootnode): + return "<!DOCTYPE "+rootnode+self.docbook_dtd+">" + def _xml_text(self, xml): + """ accepts adapter objects with .xml_text() """ + try: return xml.xml_text() + except Exception, e: print "DocbookDocument/text", e; pass + return str(xml) + def _fetch_rootnode(self, text): + fetch = Match(r"^[^<>]*<(\w+)\b") + if text & fetch: return fetch[1] + return self.rootnode + def _filename(self, filename): + if filename is not None: + self.filename = filename + filename = self.filename + if not filename & Match(r"\.\w+$"): + ext = self.o.docbook + if not ext: ext = "docbook" + filename += "."+ext + return filename + def save(self, filename = None): + filename = self._filename(filename) + print "writing '"+filename+"'" + if len(self.text) > 1: + self.save_all(filename) + else: + self.save_text(filename, self.text[0]) + def save_text(self, filename, text): + try: + fd = open(filename, "w") + xml_text = self._xml_text(text) + rootnode = self._fetch_rootnode(xml_text) + doctype = self._xml_doctype(rootnode) + print >>fd, doctype + print >>fd, xml_text + fd.close() + return True + except IOError, e: + print "could not open '"+filename+"'file", e + return False + def save_all(self, filename): + assert len(self.text) > 1 + try: + fd = open(filename, "w") + xml_text = self._xml_text(self.text[0]) + rootnode = self._fetch_rootnode(xml_text) + if rootnode == self.rootnode: + rootnode = "book" + else: + rootnode = self.rootnode + doctype = self._xml_doctype(rootnode) + print >>fd, doctype + title = self.get_title() + if title and self.rootnode in self.has_title_child: + print >>fd, "<"+self.rootnode+'><title>'+title+'</title>' + elif title: + print >>fd, "<"+self.rootnode+' id="'+title+'">' + else: + print >>fd, "<"+self.rootnode+'>' + for text in self.text: + text = self._xml_text(text) + print >>fd, text + print >>fd, "</"+self.rootnode+">" + fd.close() + return True + except IOError, e: + print "could not open '"+filename+"'file", e + return False diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionheader.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionheader.py new file mode 100644 index 00000000000..81bb385c408 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionheader.py @@ -0,0 +1,96 @@ +from match import Match + +class FunctionHeader: + """ parsing the comment block that is usually presented before + a function prototype - the prototype part is passed along + for further parsing through => FunctionPrototype """ + def __init__(self, 
functionheaderlist, comment, prototype): + self.parent = functionheaderlist + self.comment = comment + self.prototype = prototype + self.firstline = None + self.otherlines = None + self.titleline = None + self.alsolist = [] + def get_filename(self): + return self.parent.get_filename() + def parse_firstline(self): + if not self.comment: return False + x = self.comment.find("\n") + if x > 0: + self.firstline = self.comment[:x] + self.otherlines = self.comment[x:] + elif x == 0: + self.firstline = "..." + self.otherlines = self.comment[1:x] + else: + self.firstline = self.comment + self.otherlines = "" + return True + def get_firstline(self): + if self.firstline is None: + if not self.parse_firstline(): return "" + return self.firstline + def get_otherlines(self): + if self.firstline is None: + if not self.parse_firstline(): return "" + return self.otherlines + def parse_titleline(self): + """ split extra-notes from the firstline - keep only titleline """ + line = self.get_firstline() + if line is None: return False + self.titleline = line + self.alsolist = [] + x = line.find("also:") + if x > 0: + self.titleline = line[:x] + for also in line[x+5:].split(","): + self.alsolist += [ also.strip() ] + self._alsolist = self.alsolist + return True + def get_alsolist(self): + """ gets the see-also notes from the firstline """ + if self.titleline is None: + if not self.parse_titleline(): return None + return self.alsolist + def get_titleline(self): + """ gets firstline with see-also notes removed """ + if self.titleline is None: + if not self.parse_titleline(): return False + return self.titleline + def get_title(self): + """ gets titleline unless that is a redirect """ + titleline = self.get_titleline() + if titleline & Match(r"^\s*=>"): return "" + if titleline & Match(r"^\s*<link>"): return "" + return titleline + def get_prototype(self): + return self.prototype + +class FunctionHeaderList: + """ scan for comment blocks in the source file that are followed by + something quite like a C definition (probably a function definition). + Unpack the occurrences and fill self.comment and self.prototype. """ + def __init__(self, textfile = None): + self.textfile = textfile # TextFile + self.children = None # src'style + def parse(self, textfile = None): + if textfile is not None: + self.textfile = textfile + if self.textfile is None: + return False + text = self.textfile.get_src_text() + m = Match(r"(?s)\/\*[*]+(?=\s)" + r"((?:.(?!\*\/))*.)\*\/" + r"([^/\{\}\;\#]+)[\{\;]") + self.children = [] + for found in m.finditer(text): + child = FunctionHeader(self, found.group(1), found.group(2)) + self.children += [ child ] + return len(self.children) > 0 + def get_filename(self): + return self.textfile.get_filename() + def get_children(self): + if self.children is None: + if not self.parse(): return [] + return self.children diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionlisthtmlpage.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionlisthtmlpage.py new file mode 100644 index 00000000000..4ec9178ca10 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionlisthtmlpage.py @@ -0,0 +1,127 @@ +from options import * +from match import Match + +class FunctionListHtmlPage: + """ The main part here is to create a TOC (table of contents) at the + start of the page - linking down to the descriptions of the functions. + Sure we need to generate anchors on the fly. Additionally, all the + non-html (docbook-like) markup needs to be converted for ouput. 
- + each element to be added should implement get_name(), get_head() and + get_body() with the latter two having a xml_text() method.""" + _null_table100 = '<table border="0" width="100%"' \ + ' cellpadding="0" cellspacing="0">' + _ul_start = '<table width="100%">' + _ul_end = '</table>' + _li_start = '<tr><td valign="top">' + _li_end = '</td></tr>' + http_opengroup = "http://www.opengroup.org/onlinepubs/000095399/functions/" + http_zlib = "http://www.zlib.net/manual.html" + def __init__(self, o = None): + self.toc = "" + self.text = "" + self.head = "" + self.body = "" + self.anchors = [] + self.o = o + if self.o is None: self.o = Options() + self.not_found_in_anchors = [] + def cut(self): + self.text += ("<dt>"+self._ul_start+self.head+self._ul_end+"</dt>"+ + "<dd>"+self._ul_start+self.body+self._ul_end+"</dd>") + self.head = "" + self.body = "" + def add(self, entry): + name = entry.get_name() + head_text = entry.head_xml_text() + body_text = entry.body_xml_text(name) + if not head_text: + print "no head_text for", name + return + try: + prespec = entry.head_get_prespec() + namespec = entry.head_get_namespec() + callspec = entry.head_get_callspec() + head_text = ("<code><b><function>"+namespec+"</function></b>" + +callspec+" : "+prespec+"</code>") + except Exception, e: + pass + try: + extraline = "" + title = entry.get_title() + filename = entry.get_filename().replace("../","") + if title: + subtitle = ' <em>'+title+'</em>' + extraline = (self._null_table100+'<td> '+subtitle+' </td>'+ + '<td align="right"> '+ + '<em><small>'+filename+'</small></em>'+ + '</td></table>') + body_text = extraline + body_text + except Exception, e: + pass + def link(text): + return (text & Match("<function>(\w*)</function>") + >> "<link>\\1</link>") + def here(text): + has_function = Match("<function>(\w*)</function>") + if text & has_function: + func = has_function[1] + self.anchors += [ func ] + return (text & has_function + >> '<a name="'+"\\1"+'">'+"\\1"+'</a>') + else: + return text + self.toc += self._li_start+self.sane(link(head_text))+self._li_end + self.head += self._li_start+self.sane(here(head_text))+self._li_end + self.body += self._li_start+self.sane(body_text)+self._li_end + def get_title(self): + return self.o.package+" Library Functions" + def xml_text(self): + self.cut() + return ("<h2>"+self.get_title()+"</h2>"+ + self.version_line()+ + self.mainheader_line()+ + self._ul_start+ + self.resolve_links(self.toc)+ + self._ul_end+ + "<h3>Documentation</h3>"+ + "<dl>"+ + self.resolve_links(self.text)+ + "</dl>") + def version_line(self): + if self.o.version: + return "<p>Version "+self.o.version+"</p>" + return "" + def mainheader_line(self): + if self.o.onlymainheader: + include = "#include <"+self.o.onlymainheader+">" + return "<p><big><b><code>"+include+"</code></b></big></p>" + return "" + def resolve_links(self, text): + text &= (Match("(?s)<link>([^<>]*)(\(\d\))</link>") + >> (lambda x: self.resolve_external(x.group(1), x.group(2)))) + text &= (Match("(?s)<link>(\w+)</link>") + >> (lambda x: self.resolve_internal(x.group(1)))) + if len(self.not_found_in_anchors): + print "not found in anchors: ", self.not_found_in_anchors + return (text & Match("(?s)<link>([^<>]*)</link>") + >> "<code>\\1</code>") + def resolve_external(self, func, sect): + x = Match() + if func & x("^zlib(.*)"): + return ('<a href="'+self.http_zlib+x[1]+'">'+ + "<code>"+func+sect+"</code>"+'</a>') + if sect & x("[23]"): + return ('<a href="'+self.http_opengroup+func+'.html">'+ + "<code>"+func+sect+"</code>"+'</a>') + 
return "<code>"+func+"<em>"+sect+"</em></sect>" + def resolve_internal(self, func): + if func in self.anchors: + return '<code><a href="#'+func+'">'+func+"</a></code>" + if func not in self.not_found_in_anchors: + self.not_found_in_anchors += [ func ] + return "<code><u>"+func+"</u></code>" + def sane(self, text): + return (text + .replace("<function>", "<code>") + .replace("</function>", "</code>")) + diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionlistreference.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionlistreference.py new file mode 100644 index 00000000000..944d005c6fb --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionlistreference.py @@ -0,0 +1,270 @@ +#! /usr/bin/env python +# -*- coding: UTF-8 -*- +from match import Match +from htm2dbk import * + +class FunctionListReference: + """ Creating a docbook-style <reference> list of <refentry> parts + that will each be translated into a unix manual page in a second step """ + def __init__(self, o = None): + self.o = o + self.pages = [] + self.entry = None + def cut(self): + if not self.entry: return + self.pages += [ self.entry ] + self.entry = None + def add(self, entry): + name = entry.get_name() + description = entry.body_xml_text(name) + funcsynopsis = entry.head_xml_text() + if not funcsynopsis: + print "no funcsynopsis for", name + return + if self.entry is None: + self.entry = FunctionListRefEntry(entry, self.o) + self.entry.funcsynopsisinfo = entry.get_mainheader() + self.entry.refpurpose = entry.get_title() + self.entry.refentrytitle = entry.get_name() + # self.entry.refname = entry.get_name() + self.entry.funcsynopsis_list += [ funcsynopsis ] + self.entry.description_list += [ description ] + self.entry.refname_list += [ name ] + if entry.list_seealso(): + for item in entry.list_seealso(): + if item not in self.entry.seealso_list: + self.entry.seealso_list += [ item ] + def get_title(self): + return self.o.package+" Function List" + def xml_text(self): + T = "<reference><title>"+self.get_title()+"</title>\n" + for item in self.pages: + text = item.refentry_text() + if not text: "OOPS, no text for", item.name ; continue + T += self.sane(text) + T += "</reference>\n" + return T + def sane(self, text): + return (html2docbook(text) + .replace("<link>","<function>") + .replace("</link>","</function>") + .replace("<fu:protospec>","<funcprototype>") + .replace("</fu:protospec>","</funcprototype>") + .replace("<fu:prespec>","<funcdef>") + .replace("</fu:prespec>","") + .replace("<fu:namespec>","") + .replace("</fu:namespec>","</funcdef>") + .replace("</fu:callspec>","</paramdef>") + .replace("<fu:callspec>","<paramdef>")) + + +class FunctionListRefEntry: + def __init__(self, func, o): + """ initialize the fields needed for a man page entry - the fields are + named after the docbook-markup that encloses (!!) the text we store + the entries like X.refhint = "hello" will be printed therefore as + <refhint>hello</refhint>. Names with underscores are only used as + temporaries but they are memorized, perhaps for later usage. """ + self.name = func.get_name() + self.refhint = "\n<!--========= "+self.name+" (3) ============-->\n" + self.refentry = None + self.refentry_date = o.version.strip() #! //refentryinfo/date + self.refentry_productname = o.package.strip() #! //refentryinfo/prod* + self.refentry_title = None #! //refentryinfo/title + self.refentryinfo = None #! 
override + self.manvolnum = "3" # //refmeta/manvolnum + self.refentrytitle = None # //refmeta/refentrytitle + self.refmeta = None # override + self.refpurpose = None # //refnamediv/refpurpose + self.refname = None # //refnamediv/refname + self.refname_list = [] + self.refnamediv = None # override + self.mainheader = func.get_mainheader() + self.includes = func.get_includes() + self.funcsynopsisinfo = "" # //funcsynopsisdiv/funcsynopsisinfo + self.funcsynopsis = None # //funcsynopsisdiv/funcsynopsis + self.funcsynopsis_list = [] + self.description = None + self.description_list = [] + # optional sections + self.authors_list = [] # //sect1[authors]/listitem + self.authors = None # override + self.copyright = None + self.copyright_list = [] + self.seealso = None + self.seealso_list = [] + if func.list_seealso(): + for item in func.list_seealso(): + self.seealso_list += [ item ] + self.file_authors = None + if func.get_authors(): + self.file_authors = func.get_authors() + self.authors_list += [ self.file_authors ] + self.file_copyright = None + if func.get_copyright(): + self.file_copyright = func.get_copyright() + self.copyright_list += [ self.file_copyright ] + #fu + def refentryinfo_text(self): + """ the manvol formatter wants to render a footer line and header line + on each manpage and such info is set in <refentryinfo> """ + if self.refentryinfo: + return self.refentryinfo + if self.refentry_date and \ + self.refentry_productname and \ + self.refentry_title: return ( + "\n <date>"+self.refentry_date+"</date>"+ + "\n <productname>"+self.refentry_productname+"</productname>"+ + "\n <title>"+self.refentry_title+"</title>") + if self.refentry_date and \ + self.refentry_productname: return ( + "\n <date>"+self.refentry_date+"</date>"+ + "\n <productname>"+self.refentry_productname+"</productname>") + return "" + def refmeta_text(self): + """ the manvol formatter needs to know the filename of the manpage to + be made up and these parts are set in <refmeta> actually """ + if self.refmeta: + return self.refmeta + if self.manvolnum and self.refentrytitle: + return ( + "\n <refentrytitle>"+self.refentrytitle+"</refentrytitle>"+ + "\n <manvolnum>"+self.manvolnum+"</manvolnum>") + if self.manvolnum and self.name: + return ( + "\n <refentrytitle>"+self.name+"</refentrytitle>"+ + "\n <manvolnum>"+self.manvolnum+"</manvolnum>") + return "" + def refnamediv_text(self): + """ the manvol formatter prints a header line with a <refpurpose> line + and <refname>'d functions that are described later. 
For each of + the <refname>s listed here, a mangpage is generated, and for each + of the <refname>!=<refentrytitle> then a symlink is created """ + if self.refnamediv: + return self.refnamediv + if self.refpurpose and self.refname: + return ("\n <refname>"+self.refname+'</refname>'+ + "\n <refpurpose>"+self.refpurpose+" </refpurpose>") + if self.refpurpose and self.refname_list: + T = "" + for refname in self.refname_list: + T += "\n <refname>"+refname+'</refname>' + T += "\n <refpurpose>"+self.refpurpose+" </refpurpose>" + return T + return "" + def funcsynopsisdiv_text(self): + """ refsynopsisdiv shall be between the manvol mangemaent information + and the reference page description blocks """ + T="" + if self.funcsynopsis: + T += "\n<funcsynopsis>" + if self.funcsynopsisinfo: + T += "\n<funcsynopsisinfo>"+ self.funcsynopsisinfo + \ + "\n</funcsynopsisinfo>\n" + T += self.funcsynopsis + \ + "\n</funcsynopsis>\n" + if self.funcsynopsis_list: + T += "\n<funcsynopsis>" + if self.funcsynopsisinfo: + T += "\n<funcsynopsisinfo>"+ self.funcsynopsisinfo + \ + "\n</funcsynopsisinfo>\n" + for funcsynopsis in self.funcsynopsis_list: + T += funcsynopsis + T += "\n</funcsynopsis>\n" + #fi + return T + def description_text(self): + """ the description section on a manpage is the main part. Here + it is generated from the per-function comment area. """ + if self.description: + return self.description + if self.description_list: + T = "" + for description in self.description_list: + if not description: continue + T += description + if T.strip() != "": return T + return "<para>(missing description)</para>" + def authors_text(self): + """ part of the footer sections on a manpage and a description of + original authors. We prever an itimizedlist to let the manvol + show a nice vertical aligment of authors of this ref item """ + if self.authors: + return self.authors + if self.authors_list: + T = "<itemizedlist>" + previous="" + for authors in self.authors_list: + if not authors: continue + if previous == authors: continue + T += "\n <listitem><para>"+authors+"</para></listitem>" + previous = authors + T += "</itemizedlist>" + return T + if self.authors: + return self.authors + return "" + def copyright_text(self): + """ the copyright section is almost last on a manpage and purely + optional. We list the part of the per-file copyright info """ + if self.copyright: + return self.copyright + """ we only return the first valid instead of merging them """ + if self.copyright_list: + T = "" + for copyright in self.copyright_list: + if not copyright: continue + return copyright # !!! + return "" + def seealso_text(self): + """ the last section on a manpage is called 'SEE ALSO' usally and + contains a comma-separated list of references. 
Some manpage + viewers can parse these and convert them into hyperlinks """ + if self.seealso: + return self.seealso + if self.seealso_list: + T = "" + for seealso in self.seealso_list: + if not seealso: continue + if T: T += ", " + T += seealso + if T: return T + return "" + def refentry_text(self, id=None): + """ combine fields into a proper docbook refentry """ + if id is None: + id = self.refentry + if id: + T = '<refentry id="'+id+'">' + else: + T = '<refentry>' # this is an error + + if self.refentryinfo_text(): + T += "\n<refentryinfo>"+ self.refentryinfo_text()+ \ + "\n</refentryinfo>\n" + if self.refmeta_text(): + T += "\n<refmeta>"+ self.refmeta_text() + \ + "\n</refmeta>\n" + if self.refnamediv_text(): + T += "\n<refnamediv>"+ self.refnamediv_text() + \ + "\n</refnamediv>\n" + if self.funcsynopsisdiv_text(): + T += "\n<refsynopsisdiv>\n"+ self.funcsynopsisdiv_text()+ \ + "\n</refsynopsisdiv>\n" + if self.description_text(): + T += "\n<refsect1><title>Description</title> " + \ + self.description_text() + "\n</refsect1>" + if self.authors_text(): + T += "\n<refsect1><title>Author</title> " + \ + self.authors_text() + "\n</refsect1>" + if self.copyright_text(): + T += "\n<refsect1><title>Copyright</title> " + \ + self.copyright_text() + "\n</refsect1>\n" + if self.seealso_text(): + T += "\n<refsect1><title>See Also</title><para> " + \ + self.seealso_text() + "\n</para></refsect1>\n" + + T += "\n</refentry>\n" + return T + #fu +#end diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionprototype.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionprototype.py new file mode 100644 index 00000000000..fda85bb3117 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/functionprototype.py @@ -0,0 +1,60 @@ +from match import Match + +class FunctionPrototype: + """ takes a single function prototype line (cut from some source file) + and parses it into the relevant portions 'prespec', 'namespec' and + 'callspec'. 
Additionally we present 'name' from the namespec that is + usually used as the filename stem for a manual page """ + def __init__(self, functionheader = None): + self.functionheader = functionheader + self.prespec = None + self.namespec = None + self.callspec = None + self.name = None + def get_functionheader(self): + return self.functionheader + def get_prototype(self): + if self.functionheader is None: + return None + return self.functionheader.get_prototype() + def get_filename(self): + if self.functionheader is None: + return None + return self.functionheader.get_filename() + def parse(self, functionheader = None): + if functionheader is not None: + self.functionheader = functionheader + if self.functionheader is None: + return False + found = Match() + prototype = self.get_prototype() + if prototype & found(r"(?s)^(.*[^.])" + r"\b(\w[\w.]*\w)\b" + r"(\s*\(.*)$"): + self.prespec = found.group(1).lstrip() + self.namespec = found.group(2) + self.callspec = found.group(3).lstrip() + self.name = self.namespec.strip() + return True + return False + def _assert_parsed(self): + if self.name is None: + return self.parse() + return True + def get_prespec(self): + if not self._assert_parsed(): return None + return self.prespec + def get_namespec(self): + if not self._assert_parsed(): return None + return self.namespec + def get_callspec(self): + if not self._assert_parsed(): return None + return self.callspec + def get_name(self): + if not self._assert_parsed(): return None + return self.name + def xml_text(self): + if not self.namespec: return self.namespec + return ("<fu:protospec><fu:prespec>"+self.prespec+"</fu:prespec>"+ + "<fu:namespec>"+self.namespec+"</fu:namespec>"+ + "<fu:callspec>"+self.callspec+"</fu:callspec></fu:protospec>") diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/htm2dbk.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/htm2dbk.py new file mode 100644 index 00000000000..ec9685bfd3e --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/htm2dbk.py @@ -0,0 +1,158 @@ +#! /usr/bin/env python + +""" +this file converts simple html text into a docbook xml variant. +The mapping of markups and links is far from perfect. But all we +want is the docbook-to-pdf converter and similar technology being +present in the world of docbook-to-anything converters. 
""" + +from datetime import date +import match +import sys + +m = match.Match + +class htm2dbk_conversion_base: + regexlist = [ + m()("</[hH]2>(.*)", "m") >> "</title>\n<subtitle>\\1</subtitle>", + m()("<[hH]2>") >> "<sect1 id=\"--filename--\"><title>", + m()("<[Pp]([> ])","m") >> "<para\\1", + m()("</[Pp]>") >> "</para>", + m()("<(pre|PRE)>") >> "<screen>", + m()("</(pre|PRE)>") >> "</screen>", + m()("<[hH]3>") >> "<sect2><title>", + m()("</[hH]3>((?:.(?!<sect2>))*.?)", "s") >> "</title>\\1</sect2>", + m()("<!doctype [^<>]*>","s") >> "", + m()("<!DOCTYPE [^<>]*>","s") >> "", + m()("(<\w+\b[^<>]*\swidth=)(\d+\%)","s") >> "\\1\"\\2\"", + m()("(<\w+\b[^<>]*\s\w+=)(\d+)","s") >> "\\1\"\\2\"", + m()("&&") >> "\&\;\&\;", + m()("\$\<") >> "\$\<\;", + m()("&(\w+[\),])") >> "\&\;\\1", + m()("(</?)span(\s[^<>]*)?>","s") >> "\\1phrase\\2>", + m()("(</?)small(\s[^<>]*)?>","s") >> "\\1note\\2>", + m()("(</?)(b|em|i)>")>> "\\1emphasis>", + m()("(</?)(li)>") >> "\\1listitem>", + m()("(</?)(ul)>") >> "\\1itemizedlist>", + m()("(</?)(ol)>") >> "\\1orderedlist>", + m()("(</?)(dl)>") >> "\\1variablelist>", + m()("<dt\b([^<>]*)>","s") >> "<varlistentry\\1><term>", + m()("</dt\b([^<>]*)>","s") >> "</term>", + m()("<dd\b([^<>]*)>","s") >> "<listitem\\1>", + m()("</dd\b([^<>]*)>","s") >> "</listitem></varlistentry>", + m()("<table\b([^<>]*)>","s") + >> "<informaltable\\1><tgroup cols=\"2\"><tbody>", + m()("</table\b([^<>]*)>","s") >> "</tbody></tgroup></informaltable>", + m()("(</?)tr(\s[^<>]*)?>","s") >> "\\1row\\2>", + m()("(</?)td(\s[^<>]*)?>","s") >> "\\1entry\\2>", + m()("<informaltable\b[^<>]*>\s*<tgroup\b[^<>]*>\s*<tbody>"+ + "\s*<row\b[^<>]*>\s*<entry\b[^<>]*>\s*<informaltable\b","s") + >> "<informaltable", + m()("</informaltable>\s*</entry>\s*</row>"+ + "\s*</tbody>\s*</tgroup>\s*</informaltable>", "s") + >> "</informaltable>", + m()("(<informaltable[^<>]*\swidth=\"100\%\")","s") >> "\\1 pgwide=\"1\"", + m()("(<tbody>\s*<row[^<>]*>\s*<entry[^<>]*\s)(width=\"50\%\")","s") + >> "<colspec colwidth=\"1*\" /><colspec colwidth=\"1*\" />\n\\1\\2", + m()("<nobr>([\'\`]*)<tt>") >> "<cmdsynopsis>\\1", + m()("</tt>([\'\`]*)</nobr>") >> "\\1</cmdsynopsis>", + m()("<nobr><(?:tt|code)>([\`\"\'])") >> "<cmdsynopsis>\\1", + m()("<(?:tt|code)><nobr>([\`\"\'])") >> "<cmdsynopsis>\\1", + m()("([\`\"\'])</(?:tt|code)></nobr>") >> "\\1</cmdsynopsis>", + m()("([\`\"\'])</nobr></(?:tt|code)>") >> "\\1</cmdsynopsis>", + m()("(</?)tt>") >> "\\1constant>", + m()("(</?)code>") >> "\\1literal>", + m()(">([^<>]+)<br>","s") >> "><highlights>\\1</highlights>", + m()("<br>") >> "<br />", + # m()("<date>") >> "<sect1info><date>", + # m()("</date>") >> "</date></sect1info>", + m()("<reference>") >> "<reference id=\"reference\">" >> 1, + m()("<a\s+href=\"((?:http|ftp|mailto):[^<>]+)\"\s*>((?:.(?!</a>))*.)</a>" + ,"s") >> "<ulink url=\"\\1\">\\2</ulink>", + m()("<a\s+href=\"zziplib.html\#([\w_]+)\"\s*>((?:.(?!</a>))*.)</a>","s") + >> "<link linkend=\"$1\">$2</link>", + m()("<a\s+href=\"(zziplib.html)\"\s*>((?:.(?!</a>))*.)</a>","s") + >> "<link linkend=\"reference\">$2</link>", + m()("<a\s+href=\"([\w-]+[.]html)\"\s*>((?:.(?!</a>))*.)</a>","s") + >> "<link linkend=\"\\1\">\\2</link>", + m()("<a\s+href=\"([\w-]+[.](?:h|c|am|txt))\"\s*>((?:.(?!</a>))*.)</a>" + ,"s") >> "<ulink url=\"file:\\1\">\\2</ulink>", + m()("<a\s+href=\"([A-Z0-9]+[.][A-Z0-9]+)\"\s*>((?:.(?!</a>))*.)</a>","s") + >> "<ulink url=\"file:\\1\">\\2</ulink>" + # m()("(</?)subtitle>") >> "\\1para>" + # $_ .= "</sect1>" if /<sect1[> ]/ + ] + regexlist2 = [ + m()(r"<br\s*/?>") 
>> "", + m()(r"(</?)em>") >> r"\1emphasis>", + m()(r"<code>") >> "<userinput>", + m()(r"</code>") >> "</userinput>", + m()(r"<link>") >> "<function>", + m()(r"</link>") >> "</function>", + m()(r"(?s)\s*</screen>") >> "</screen>", + # m()(r"<ul>") >> "</para><programlisting>\n", + # m()(r"</ul>") >> "</programlisting><para>", + m()(r"<ul>") >> "<itemizedlist>", + m()(r"</ul>") >> "</itemizedlist>", + # m()(r"<li>") >> "", + # m()(r"</li>") >> "" + m()(r"<li>") >> "<listitem><para>", + m()(r"</li>") >> "</para></listitem>\n", + ] +class htm2dbk_conversion(htm2dbk_conversion_base): + def __init__(self): + self.version = "" # str(date.today) + self.filename = "." + def convert(self,text): # $text + txt = text.replace("<!--VERSION-->", self.version) + for conv in self.regexlist: + txt &= conv + return txt.replace("--filename--", self.filename) + def convert2(self,text): # $text + txt = text.replace("<!--VERSION-->", self.version) + for conv in self.regexlist: + txt &= conv + return txt + +class htm2dbk_document(htm2dbk_conversion): + """ create document, add(text) and get the value() """ + doctype = ( + '<!DOCTYPE book PUBLIC "-//OASIS//DTD'+ + ' DocBook XML V4.1.2//EN"'+"\n"+ + ' "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd">'+ + "\n") + book_start = '<book><chapter><title>Documentation</title>'+"\n" + book_end_chapters = '</chapter>'+"\n" + book_end = '</book>'+"\n" + def __init__(self): + htm2dbk_conversion.__init__(self) + self.text = self.doctype + self.book_start + def add(self,text): + if self.text & m()("<reference"): + self.text += self.book_end_chapters ; self.book_end_chapters = "" + self.text += self.convert(text).replace( + "<br />","") & ( + m()("<link>([^<>]*)</link>") >> "<function>\\1</function>") & ( + m()("(?s)(<refentryinfo>\s*)<sect1info>" + + "(<date>[^<>]*</date>)</sect1info>") >> "\\1\\2") + def value(self): + return self.text + self.book_end_chapters + self.book_end + +def htm2dbk_files(args): + doc = htm2dbk_document() + for filename in args: + try: + f = open(filename, "r") + doc.filename = filename + doc.add(f.read()) + f.close() + except IOError, e: + print >> sys.stderr, "can not open "+filename + return doc.value() + +def html2docbook(text): + """ the C comment may contain html markup - simulate with docbook tags """ + return htm2dbk_conversion().convert2(text) + +if __name__ == "__main__": + print htm2dbk_files(sys.argv[1:]) diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/htmldocument.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/htmldocument.py new file mode 100644 index 00000000000..47d58dc6ad2 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/htmldocument.py @@ -0,0 +1,117 @@ +#! 
/usr/bin/env python +# -*- coding: UTF-8 -*- +from match import Match + +class HtmlDocument: + """ binds some html content page with additional markup - in this + base version it is just the header information while other variants + might add navigation items around the content block elements """ + def __init__(self, o, filename = None): + self.o = o + self.filename = filename + self.title = "" + self.meta = [] + self.style = [] + self.text = [] + self.navi = None + def meta(self, style): + """ add some header meta entry """ + self.meta += [ meta ] + return self + def style(self, style): + """ add a style block """ + self.style += [ style ] + return self + def add(self, text): + """ add some content """ + self.text += [ text ] + return self + def get_title(self): + if self.title: return self.title + try: return self.text[0].get_title() + except Exception, e: pass + return self.title + def _html_meta(self, meta): + """ accepts adapter objects with .html_meta() """ + try: return meta.html_meta() + except Exception, e: pass + return str(meta) + def _html_style(self, style): + """ accepts adapter objects with .html_style() and .xml_style() """ + ee = None + try: return style.html_style() + except Exception, e: ee = e; pass + try: return style.xml_style() + except Exception, e: print "HtmlDocument/style", ee, e; pass + try: return str(style) + except Exception, e: print "HtmlDocument/style", e; return "" + def _html_text(self, html): + """ accepts adapter objects with .html_text() and .xml_text() """ + ee = None + try: return html.html_text() + except Exception, e: ee = e; pass + try: return html.xml_text() + except Exception, e: print "HtmlDocument/text", ee, e; pass + try: return str(html) + except Exception, e: print "HtmlDocument/text", e; return " " + def navigation(self): + if self.navi: + return self.navi + if self.o.body: + try: + fd = open(self.o.body, "r") + self.navi = fd.read() + fd.close() + return self.navi + except Exception, e: + pass + return None + def html_header(self): + navi = self.navigation() + if not navi: + T = "<html><head>" + title = self.get_title() + if title: + T += "<title>"+title+"</title>" + T += "\n" + for style in self.style: + T += self._html_style(style) + T += "\n" + return T+"</head><body>" + else: + title = self.get_title() + return navi & ( + Match(r"<!--title-->") >> " - "+title) & ( + Match(r"<!--VERSION-->") >> self.o.version) & ( + Match(r"(?m).*</body></html>") >> "") + def html_footer(self): + navi = self.navigation() + if not navi: + return "</body></html>" + else: + return navi & ( + Match(r"(?m)(.*</body></html>)") >> "%&%&%&%\\1") & ( + Match(r"(?s).*%&%&%&%") >> "") + def _filename(self, filename): + if filename is not None: + self.filename = filename + filename = self.filename + if not filename & Match(r"\.\w+$"): + ext = self.o.html + if not ext: ext = "html" + filename += "."+ext + return filename + def save(self, filename = None): + filename = self._filename(filename) + print "writing '"+filename+"'" + try: + fd = open(filename, "w") + print >>fd, self.html_header() + for text in self.text: + print >>fd, self._html_text(text) + print >>fd, self.html_footer() + fd.close() + return True + except IOError, e: + print "could not open '"+filename+"'file", e + return False diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/match.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/match.py new file mode 100644 index 00000000000..a089ec399c9 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/match.py 
@@ -0,0 +1,103 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- +# @creator (C) 2003 Guido U. Draheim +# @license http://creativecommons.org/licenses/by-nc-sa/2.0/de/ + +import re + +# ---------------------------------------------------------- Regex Match() +# beware, stupid python interprets backslashes in replace-parts only partially! +class MatchReplace: + """ A MatchReplace is a mix of a Python Pattern and a Replace-Template """ + def __init__(self, matching, template, count = 0, flags = None): + """ setup a substition from regex 'matching' into 'template', + the replacement count default of 0 will replace all occurrences. + The first argument may be a Match object or it is a string that + will be turned into one by using Match(matching, flags). """ + self.template = template + MatchReplace.__call__(self, matching, template, count, flags) + def __call__(self, matching, template = None, count = 0, flags = None): + """ other than __init__ the template may be left off to be unchanged""" + if isinstance(count, basestring): # count/flags swapped over? + flags = count; count = 0 + if isinstance(matching, Match): + self.matching = matching + else: + self.matching = Match()(matching, flags) ## python 2.4.2 bug + if template is not None: + self.template = template + self.count = count + def __and__(self, string): + """ z = MatchReplace('foo', 'bar') & 'foo'; assert z = 'bar' """ + text, self.matching.replaced = \ + self.matching.regex.subn(self.template, string, self.count) + return text + def __rand__(self, string): + """ z = 'foo' & Match('foo') >> 'bar'; assert z = 'bar' """ + text, self.matching.replaced = \ + self.matching.regex.subn(self.template, string, self.count) + return text + def __iand__(self, string): + """ x = 'foo' ; x &= Match('foo') >> 'bar'; assert x == 'bar' """ + string, self.matching.replaced = \ + self.matching.regex.subn(self.template, string, self.count) + return string + def __rshift__(self, count): + " shorthand to set the replacement count: Match('foo') >> 'bar' >> 1 " + self.count = count ; return self + def __rlshift__(self, count): + self.count = count ; return self + +class Match(str): + """ A Match is actually a mix of a Python Pattern and MatchObject """ + def __init__(self, pattern = None, flags = None): + """ flags is a string: 'i' for case-insensitive etc.; it is just + short for a regex prefix: Match('foo','i') == Match('(?i)foo') """ + Match.__call__(self, pattern, flags) + def __call__(self, pattern, flags = None): + assert isinstance(pattern, str) or pattern is None + assert isinstance(flags, str) or flags is None + str.__init__(self, pattern) + self.replaced = 0 # set by subn() inside MatchReplace + self.found = None # set by search() to a MatchObject + self.pattern = pattern + if pattern is not None: + if flags: + self.regex = re.compile("(?"+flags+")"+self.pattern) + else: + self.regex = re.compile(self.pattern) + return self + def __truth__(self): + return self.found is not None + def __and__(self, string): + self.found = self.regex.search(string) + return self.__truth__() + def __rand__(self, string): + self.found = self.regex.search(string) + return self.__truth__() + def __rshift__(self, template): + return MatchReplace(self, template) + def __rlshift__(self, template): + return MatchReplace(self, template) + def __getitem__(self, index): + return self.group(index) + def group(self, index): + assert self.found is not None + return self.found.group(index) + def finditer(self, string): + return self.regex.finditer(string) + +if __name__ == 
"__main__": + # matching: + if "foo" & Match("oo"): + print "oo" + x = Match() + if "foo" & x("(o+)"): + print x[1] + # replacing: + y = "fooboo" & Match("oo") >> "ee" + print y + r = Match("oo") >> "ee" + print "fooboo" & r + s = MatchReplace("oo", "ee") + print "fooboo" & s diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/options.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/options.py new file mode 100644 index 00000000000..c6758d5fabf --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/options.py @@ -0,0 +1,31 @@ +#! /usr/bin/python +# -*- coding: utf-8 -*- +# @creator (C) 2003 Guido U. Draheim +# @license http://creativecommons.org/licenses/by-nc-sa/2.0/de/ + +from match import Match + +# use as o.optionname to check for commandline options. +class Options: + var = {} + def __getattr__(self, name): + if not self.var.has_key(name): return None + return self.var[name] + def __setattr__(self, name, value): + self.var[name] = value + def scan(self, optionstring): # option-name or None + x = Match() + if optionstring & x(r"^--?(\w+)=(.*)"): + self.var[x[1]] = x[2] ; return x[1] + if optionstring & x(r"^--?no-(\w+)$"): + self.var[x[1]] = "" ; return x[1] + if optionstring & x(r"^--?(\w+)$"): + self.var[x[1]] = "*"; return x[1] + return None +#end Options + +if False: + o = Options() + o.help = """ + scans for options + """ diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/textfile.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/textfile.py new file mode 100644 index 00000000000..bfaff8dbdfa --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/textfile.py @@ -0,0 +1,49 @@ + +def _src_to_xml(text): + return text.replace("&", "&").replace("<", "<").replace(">", ">") + +class TextFile: + def __init__(self, filename = None): + self.filename = filename + self.src_text = None + self.xml_text = None + def parse(self, filename = None): + if filename is not None: + self.filename = filename + if self.filename is None: + return False + try: + fd = open(self.filename, "r") + self.src_text = fd.read() + fd.close() + return True + except IOError, e: + pass + return False + def assert_src_text(self): + if self.src_text: return True + return self.parse() + def assert_xml_text(self): + if self.xml_text: return True + if not self.assert_src_text(): return False + self.xml_text = _src_to_xml(self.src_text) + def get_src_text(self): + self.assert_src_text() + return self.src_text + def get_xml_text(self): + self.assert_xml_text() + return self.xml_text + def get_filename(self): + return self.filename + def line_xml_text(self, offset): + self._line(self.xml_text, offset) + def line_src_text(self, offset): + self._line(self.src_text, offset) + def _line(self, text, offset): + line = 1 + for x in xrange(0,offset): + if x == "\n": + line += 1 + return line + + diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/textfileheader.py b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/textfileheader.py new file mode 100644 index 00000000000..2ac0896e5fd --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipdoc/textfileheader.py @@ -0,0 +1,47 @@ +from match import Match + +class TextFileHeader: + """ scan for a comment block at the source file start and fill the + inner text into self.comment - additionally scan for the first + #include statement and put the includename into self.mainheader + (TextFileHeader re-exports all => TextFile methods for processing)""" + def 
__init__(self, textfile = None): + self.textfile = textfile # TextFile + self.comment = "" # src'style + self.mainheader = "" # src'style + def parse(self, textfile = None): + if textfile is not None: + self.textfile = textfile + if self.textfile is None: + return False + x = Match() + text = self.textfile.get_src_text() + if not text: + print "nonexistant file:", self.textfile.get_filename() + return False + if text & x(r"(?s)[/][*]+(\s(?:.(?!\*\/))*.)\*\/" + r"(?:\s*\#(?:define|ifdef|endif)[ ]*\S*[ ]*\S*)*" + r"(\s*\#include\s*<[^<>]*>(?:\s*//[^\n]*)?)"): + self.comment = x[1] + self.mainheader = x[2].strip() + elif text & x(r"(?s)[/][*]+(\s(?:.(?!\*\/))*.)\*\/"): + self.comment = x[1] + elif text & x(r"(?s)(?:\s*\#(?:define|ifdef|endif)[ ]*\S*[ ]*\S*)*" + r"(\s*\#include\s*<[^<>]*>(?:\s*//[^\n]*)?)"): + self.mainheader = x[1].strip() + return True + def src_mainheader(self): + return self.mainheader + def src_filecomment(self): + return self.comment + # re-export textfile functions - allows textfileheader to be used instead + def get_filename(self): + return self.textfile.get_filename() + def get_src_text(self): + return self.textfile.get_src_text() + def get_xml_text(self): + return self.textfile.get_src_text() + def line_src__text(self, offset): + return self.textfile.line_src_text(offset) + def line_xml__text(self, offset): + return self.textfile.line_xml_text(offset) diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipfseeko.html b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipfseeko.html new file mode 100644 index 00000000000..1f103b9a7dd --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipfseeko.html @@ -0,0 +1,176 @@ +<html><head><title>zziplib Library Functions</title> +</head><body> +<h2>zziplib Library Functions</h2><p>Version 0.13.60</p><p><big><b><code>#include <zzip/fseeko.h></code></b></big></p><table width="100%"><tr><td valign="top"><code><b><code><a href="#zzip_entry_fopen">zzip_entry_fopen</a></code></b>(ZZIP_ENTRY * entry, int takeover) + : zzip__new__ ZZIP_ENTRY_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_ffile">zzip_entry_ffile</a></code></b>(FILE * disk, char *filename) + : zzip__new__ ZZIP_ENTRY_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_fread">zzip_entry_fread</a></code></b>(void *ptr, zzip_size_t sized, zzip_size_t nmemb, + ZZIP_ENTRY_FILE * file) + : zzip_size_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_fclose">zzip_entry_fclose</a></code></b>(ZZIP_ENTRY_FILE * file) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_feof">zzip_entry_feof</a></code></b>(ZZIP_ENTRY_FILE * file) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_data_offset">zzip_entry_data_offset</a></code></b>(ZZIP_ENTRY * entry) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_fread_file_header">zzip_entry_fread_file_header</a></code></b>(ZZIP_ENTRY * entry, + struct zzip_file_header *file_header) + : static zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_strdup_name">zzip_entry_strdup_name</a></code></b>(ZZIP_ENTRY * entry) + : zzip__new__ char * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_findfile">zzip_entry_findfile</a></code></b>(FILE * disk, char *filename, + ZZIP_ENTRY * _zzip_restrict entry, zzip_strcmp_fn_t compare) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr><tr><td 
valign="top"><code><b><code><a href="#zzip_entry_findfirst">zzip_entry_findfirst</a></code></b>(FILE * disk) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_findnext">zzip_entry_findnext</a></code></b>(ZZIP_ENTRY * _zzip_restrict entry) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_free">zzip_entry_free</a></code></b>(ZZIP_ENTRY * entry) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_entry_findmatch">zzip_entry_findmatch</a></code></b>(FILE * disk, char *filespec, + ZZIP_ENTRY * _zzip_restrict entry, + zzip_fnmatch_fn_t compare, int flags) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr></table><h3>Documentation</h3><dl><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_entry_fopen">zzip_entry_fopen</a></b>(ZZIP_ENTRY * entry, int takeover) + : zzip__new__ ZZIP_ENTRY_FILE * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_ffile">zzip_entry_ffile</a></b>(FILE * disk, char *filename) + : zzip__new__ ZZIP_ENTRY_FILE * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_fread">zzip_entry_fread</a></b>(void *ptr, zzip_size_t sized, zzip_size_t nmemb, + ZZIP_ENTRY_FILE * file) + : zzip_size_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_fclose">zzip_entry_fclose</a></b>(ZZIP_ENTRY_FILE * file) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_feof">zzip_entry_feof</a></b>(ZZIP_ENTRY_FILE * file) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> open a file within a zip disk for reading</em> </td><td align="right"> <em><small>zzip/fseeko.c</small></em></td></table><p> + The <code>zzip_entry_fopen</code> function does take an "entry" argument and copies it (or just takes + it over as owner) to a new ZZIP_ENTRY_FILE handle structure. That + structure contains also a zlib buffer for decoding. The <code>zzip_entry_fopen</code> function does + seek to the file_header of the given "entry" and validates it for the + data buffer following it. We do also prefetch some data from the data + buffer thereby trying to match the disk pagesize for faster access later. + The <code><a href="#zzip_entry_fread">zzip_entry_fread</a></code> will then read in chunks of pagesizes which is + the size of the internal readahead buffer. If an error occurs then null + is returned. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_entry_ffile</code> function opens a file found by name, so it does a search into + the zip central directory with <code><a href="#zzip_entry_findfile">zzip_entry_findfile</a></code> and whatever + is found first is given to <code><a href="#zzip_entry_fopen">zzip_entry_fopen</a></code> +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_entry_fread</code> function reads more bytes into the output buffer specified as + arguments. 
The return value is null on eof or error, the stdio-like + interface can not distinguish between these so you need to check + with <code><a href="#zzip_entry_feof">zzip_entry_feof</a></code> for the difference. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_entry_fclose</code> function releases any zlib decoder info needed for decompression + and dumps the ZZIP_ENTRY_FILE struct then. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_entry_feof</code> function allows to distinguish an error from an eof condition. + Actually, if we found an error but we did already reach eof then we + just keep on saying that it was an eof, so the app can just continue. +</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_entry_data_offset">zzip_entry_data_offset</a></b>(ZZIP_ENTRY * entry) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_fread_file_header">zzip_entry_fread_file_header</a></b>(ZZIP_ENTRY * entry, + struct zzip_file_header *file_header) + : static zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_strdup_name">zzip_entry_strdup_name</a></b>(ZZIP_ENTRY * entry) + : zzip__new__ char * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> helper functions for (fseeko) zip access api</em> </td><td align="right"> <em><small>zzip/fseeko.c</small></em></td></table><p> + The <code>zzip_entry_data_offset</code> functions returns the seekval offset of the data portion of the + file referenced by the given zzip_entry. It requires an intermediate + check of the file_header structure (i.e. it reads it from disk). After + this call, the contained diskfile readposition is already set to the + data_offset returned here. On error -1 is returned. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_entry_fread_file_header</code> functions read the correspoding struct zzip_file_header from + the zip disk of the given "entry". The returned off_t points to the + end of the file_header where the current fseek pointer has stopped. + This is used to immediatly parse out any filename/extras block following + the file_header. The return value is null on error. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_entry_strdup_name</code> function is a big helper despite its little name: in a zip file the + encoded filenames are usually NOT zero-terminated but for common usage + with libc we need it that way. Secondly, the filename SHOULD be present + in the zip central directory but if not then we fallback to the filename + given in the file_header of each compressed data portion. 
+</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_entry_findfile">zzip_entry_findfile</a></b>(FILE * disk, char *filename, + ZZIP_ENTRY * _zzip_restrict entry, zzip_strcmp_fn_t compare) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_findfirst">zzip_entry_findfirst</a></b>(FILE * disk) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_findnext">zzip_entry_findnext</a></b>(ZZIP_ENTRY * _zzip_restrict entry) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_free">zzip_entry_free</a></b>(ZZIP_ENTRY * entry) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_entry_findmatch">zzip_entry_findmatch</a></b>(FILE * disk, char *filespec, + ZZIP_ENTRY * _zzip_restrict entry, + zzip_fnmatch_fn_t compare, int flags) + : zzip__new__ ZZIP_ENTRY * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> search for files in the (fseeko) zip central directory</em> </td><td align="right"> <em><small>zzip/fseeko.c</small></em></td></table><p> + The <code>zzip_entry_findfile</code> function is given a filename as an additional argument, to find the + disk_entry matching a given filename. The compare-function is usually + strcmp or strcasecmp or perhaps strcoll, if null then strcmp is used. + - use null as argument for "old"-entry when searching the first + matching entry, otherwise the last returned value if you look for other + entries with a special "compare" function (if null then a doubled search + is rather useless with this variant of _findfile). If no further entry is + found then null is returned and any "old"-entry gets already free()d. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_entry_findfirst</code> function is the first call of all the zip access functions here. + It contains the code to find the first entry of the zip central directory. + Here we require the stdio handle to represent a real zip file where the + disk_trailer is _last_ in the file area, so that its position would be at + a fixed offset from the end of the file area if not for the comment field + allowed to be of variable length (which needs us to do a little search + for the disk_tailer). However, in this simple implementation we disregard + any disk_trailer info telling about multidisk archives, so we just return + a pointer to the first entry in the zip central directory of that file. 
+</p><p> + For an actual means, we are going to search backwards from the end + of the mmaped block looking for the PK-magic signature of a + disk_trailer. If we see one then we check the rootseek value to + find the first disk_entry of the root central directory. If we find + the correct PK-magic signature of a disk_entry over there then we + assume we are done and we are going to return a pointer to that label. +</p><p> + The return value is a pointer to the first zzip_disk_entry being checked + to be within the bounds of the file area specified by the arguments. If + no disk_trailer was found then null is returned, and likewise we only + accept a disk_trailer with a seekvalue that points to a disk_entry and + both parts have valid PK-magic parts. Beyond some sanity check we try to + catch a common brokeness with zip archives that still allows us to find + the start of the zip central directory. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_entry_findnext</code> function takes an existing "entry" in the central root directory + (e.g. from zzip_entry_findfirst) and moves it to point to the next entry. + On error it returns 0, otherwise the old entry. If no further match is + found then null is returned and the entry already free()d. If you want + to stop searching for matches before that case then please call + <code><a href="#zzip_entry_free">zzip_entry_free</a></code> on the cursor struct ZZIP_ENTRY. +</p> +</td></tr><tr><td valign="top"><p> the <code>zzip_entry_free</code> function releases the malloc()ed areas needed for zzip_entry, the + pointer is invalid afterwards. The <code>zzip_entry_free</code> function has #define synonyms of + zzip_entry_findlast(), zzip_entry_findlastfile(), zzip_entry_findlastmatch() +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_entry_findmatch</code> function uses a compare-function with an additional argument + and it is called just like fnmatch(3) from POSIX.2 AD:1993), i.e. + the argument filespec first and the ziplocal filename second with + the integer-flags put in as third to the indirect call. If the + platform has fnmatch available then null-compare will use that one + and otherwise we fall back to mere strcmp, so if you need fnmatch + searching then please provide an implementation somewhere else. + - use null as argument for "after"-entry when searching the first + matching entry, or the last disk_entry return-value to find the + next entry matching the given filespec. If no further entry is + found then null is returned and any "old"-entry gets already free()d. 
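The find-calls just described combine into a simple central-directory walk; the sketch below assumes nothing beyond the prototypes on this page and frees each strdup'ed name itself.

#include <stdio.h>
#include <stdlib.h>
#include <zzip/fseeko.h>

/* sketch: list the members of an archive with the fseeko find-calls */
void list_members(FILE *disk)
{
    ZZIP_ENTRY *entry;
    for (entry = zzip_entry_findfirst(disk);
         entry; entry = zzip_entry_findnext(entry))
    {
        char *name = zzip_entry_strdup_name(entry);
        if (name) { puts(name); free(name); }
    }
    /* when findnext returns null the last entry was already free()d,
       so no extra zzip_entry_free is needed after a completed loop */
}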
+</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd></dl> +</body></html> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-manpages.dbk b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-manpages.dbk new file mode 100644 index 00000000000..d3c8998bea7 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-manpages.dbk @@ -0,0 +1,12 @@ +<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN" + "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd"> +<!-- using <chapter> allows to append a <reference> with manpages --> +<book><title> ZZIPlib Manual Pages </title> + +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zziplib.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzipmmapped.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzipfseeko.xml" /> +</book> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-manpages.tar b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-manpages.tar Binary files differnew file mode 100644 index 00000000000..d200b2b80cf --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-manpages.tar diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-master.dbk b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-master.dbk new file mode 100644 index 00000000000..8384c282428 --- /dev/null +++ 
b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib-master.dbk @@ -0,0 +1,59 @@ +<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.4//EN" + "http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd"> +<!-- using <chapter> allows to append a <reference> with manpages --> +<section> +<sectioninfo> +<date> 2006-01-01</date> +<authorblurb><simpara> Guido Draheim </simpara></authorblurb> +</sectioninfo> +<title> ZZIPlib Documentation </title> + +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-index.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-zip.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-file.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-sdl-rwops.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-extio.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-xor.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-crypt.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-cryptoid.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-api.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-basics.xml" /> +<!-- +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-extras.xml" /> +--> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="zzip-parse.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="64on32.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="future.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="fseeko.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="mmapped.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="memdisk.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="configs.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="sfx-make.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="history.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="referentials.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="copying.xml" /> +<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" parse="xml" + href="faq.xml" /> +</section> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib.html b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib.html new file mode 100644 index 00000000000..89336cc12d6 --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zziplib.html @@ -0,0 +1,464 @@ +<html><head><title>zziplib Library Functions</title> +</head><body> +<h2>zziplib Library Functions</h2><p>Version 0.13.60</p><p><big><b><code>#include <zzip/lib.h></code></b></big></p><table width="100%"><tr><td valign="top"><code><b><code><a href="#zzip_error">zzip_error</a></code></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_seterror">zzip_seterror</a></code></b>(ZZIP_DIR * dir, int errcode) : void +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_open">zzip_open</a></code></b>(zzip_char_t * filename, int o_flags) + : ZZIP_FILE * +</code></td></tr><tr><td 
valign="top"><code><b><code><a href="#zzip_open_ext_io">zzip_open_ext_io</a></code></b>(zzip_char_t * filename, int o_flags, int o_modes, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_open_shared_io">zzip_open_shared_io</a></code></b>(ZZIP_FILE * stream, + zzip_char_t * filename, int o_flags, int o_modes, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_opendir">zzip_opendir</a></code></b>(zzip_char_t * filename) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_opendir_ext_io">zzip_opendir_ext_io</a></code></b>(zzip_char_t * filename, int o_modes, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_file_real">zzip_file_real</a></code></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_real">zzip_dir_real</a></code></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_realdir">zzip_realdir</a></code></b>(ZZIP_DIR * dir) + : void * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_realfd">zzip_realfd</a></code></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_tell">zzip_tell</a></code></b>(ZZIP_FILE * fp) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_tell32">zzip_tell32</a></code></b>(ZZIP_FILE * fp) + : long +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_stat">zzip_dir_stat</a></code></b>(ZZIP_DIR * dir, zzip_char_t * name, ZZIP_STAT * zs, int flags) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_file_stat">zzip_file_stat</a></code></b>(ZZIP_FILE * file, ZZIP_STAT * zs) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_fstat">zzip_fstat</a></code></b>(ZZIP_FILE * file, ZZIP_STAT * zs) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_strerror">zzip_strerror</a></code></b>(int errcode) + : zzip_char_t * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_strerror_of">zzip_strerror_of</a></code></b>(ZZIP_DIR * dir) + : zzip_char_t * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_open">zzip_dir_open</a></code></b>(zzip_char_t * filename, zzip_error_t * e) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_open_ext_io">zzip_dir_open_ext_io</a></code></b>(zzip_char_t * filename, zzip_error_t * e, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_read">zzip_dir_read</a></code></b>(ZZIP_DIR * dir, ZZIP_DIRENT * d) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_init_io">zzip_init_io</a></code></b>(zzip_plugin_io_handlers_t io, int flags) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_get_default_io">zzip_get_default_io</a></code></b>(void) + : zzip_plugin_io_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_rewinddir">zzip_rewinddir</a></code></b>(ZZIP_DIR * dir) + : void +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_telldir">zzip_telldir</a></code></b>(ZZIP_DIR * dir) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><code><a 
href="#zzip_seekdir">zzip_seekdir</a></code></b>(ZZIP_DIR * dir, zzip_off_t offset) + : void +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_telldir32">zzip_telldir32</a></code></b>(ZZIP_DIR * dir) + : long +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_seekdir32">zzip_seekdir32</a></code></b>(ZZIP_DIR * dir, long offset) + : void +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_fopen">zzip_fopen</a></code></b>(zzip_char_t * filename, zzip_char_t * mode) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_freopen">zzip_freopen</a></code></b>(zzip_char_t * filename, zzip_char_t * mode, ZZIP_FILE * stream) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dirhandle">zzip_dirhandle</a></code></b>(ZZIP_FILE * fp) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dirfd">zzip_dirfd</a></code></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_seek">zzip_seek</a></code></b>(ZZIP_FILE * fp, zzip_off_t offset, int whence) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_seek32">zzip_seek32</a></code></b>(ZZIP_FILE * fp, long offset, int whence) + : long +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_read">zzip_read</a></code></b>(ZZIP_FILE * fp, void *buf, zzip_size_t len) + : zzip_ssize_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_fread">zzip_fread</a></code></b>(void *ptr, zzip_size_t size, zzip_size_t nmemb, ZZIP_FILE * file) + : zzip_size_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_free">zzip_dir_free</a></code></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_close">zzip_dir_close</a></code></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_fclose">zzip_fclose</a></code></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_close">zzip_close</a></code></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_fdopen">zzip_dir_fdopen</a></code></b>(int fd, zzip_error_t * errcode_p) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_fdopen_ext_io">zzip_dir_fdopen_ext_io</a></code></b>(int fd, zzip_error_t * errcode_p, + zzip_strings_t * ext, const zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_alloc_ext_io">zzip_dir_alloc_ext_io</a></code></b>(zzip_strings_t * ext, const zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_dir_alloc">zzip_dir_alloc</a></code></b>(zzip_strings_t * fileext) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_readdir">zzip_readdir</a></code></b>(ZZIP_DIR * dir) + : ZZIP_DIRENT * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_closedir">zzip_closedir</a></code></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_errno">zzip_errno</a></code></b>(int errcode) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_file_close">zzip_file_close</a></code></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_file_open">zzip_file_open</a></code></b>(ZZIP_DIR * dir, 
zzip_char_t * name, int o_mode) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_inflate_init">zzip_inflate_init</a></code></b>(ZZIP_FILE * fp, struct zzip_dir_hdr *hdr) + : static int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_file_read">zzip_file_read</a></code></b>(ZZIP_FILE * fp, void *buf, zzip_size_t len) + : zzip_ssize_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_rewind">zzip_rewind</a></code></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_compr_str">zzip_compr_str</a></code></b>(int compr) + : zzip_char_t * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#__zzip_fetch_disk_trailer">__zzip_fetch_disk_trailer</a></code></b>(int fd, zzip_off_t filesize, + struct _disk_trailer *_zzip_restrict trailer, + zzip_plugin_io_t io) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#__zzip_parse_root_directory">__zzip_parse_root_directory</a></code></b>(int fd, + struct _disk_trailer *trailer, + struct zzip_dir_hdr **hdr_return, + zzip_plugin_io_t io) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#__zzip_try_open">__zzip_try_open</a></code></b>(zzip_char_t * filename, int filemode, + zzip_strings_t * ext, zzip_plugin_io_t io) + : int +</code></td></tr></table><h3>Documentation</h3><dl><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_error">zzip_error</a></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_seterror">zzip_seterror</a></b>(ZZIP_DIR * dir, int errcode) : void +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/info.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_seterror</code> function just does dir->errcode = errcode +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_open">zzip_open</a></b>(zzip_char_t * filename, int o_flags) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_open_ext_io">zzip_open_ext_io</a></b>(zzip_char_t * filename, int o_flags, int o_modes, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_open_shared_io">zzip_open_shared_io</a></b>(ZZIP_FILE * stream, + zzip_char_t * filename, int o_flags, int o_modes, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_FILE * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> + The <code>zzip_open_ext_io</code> function uses explicit ext and io instead of the internal + defaults, setting them to zero is equivalent to <code><a href="#zzip_open">zzip_open</a></code> +</p><p> + note that the two flag types have been split into an o_flags + (for fcntl-like openflags) and o_modes where the latter shall + carry the zzip_flags and possibly accessmodes for unix filesystems. 
+ Since this version of zziplib can not write zipfiles, it is not + yet used for anything else than zzip-specific modeflags. +</p><p> + The <code>zzip_open_ext_io</code> function returns a new zzip-handle (use <code><a href="#zzip_close">zzip_close</a></code> to return + it). On error the <code>zzip_open_ext_io</code> function will return null setting <a href="http://www.opengroup.org/onlinepubs/000095399/functions/errno.html"><code>errno(3)</code></a>. +</p><p> + If any ext_io handlers were used then the referenced structure + should be static as the allocated ZZIP_FILE does not copy them. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_open_shared_io</code> function takes an extra stream argument - if a handle has been given + then ext/io can be left null and the new stream handle will pick up + the ext/io. This should be used only in specific environments, however, + since <code><a href="#zzip_file_real">zzip_file_real</a></code> does not store any ext-sequence. +</p><p> + The benefit for the <code>zzip_open_shared_io</code> function comes in when the old file handle + was opened from a file within a zip archive. When the new file + is in the same zip archive then the internal zzip_dir structures + will be shared. It is even quicker, as no check needs to be done + anymore trying to guess the zip archive place in the filesystem; + here we just check whether the zip archive's filepath is a prefix + part of the filename to be opened. +</p><p> + Note that the <code>zzip_open_shared_io</code> function is also used by <code><a href="#zzip_freopen">zzip_freopen</a></code> which + will unshare the old handle, thereby possibly closing the handle. +</p><p> + The <code>zzip_open_shared_io</code> function returns a new zzip-handle (use <code><a href="#zzip_close">zzip_close</a></code> to return + it). On error the <code>zzip_open_shared_io</code> function will return null setting <a href="http://www.opengroup.org/onlinepubs/000095399/functions/errno.html"><code>errno(3)</code></a>. 
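</p><p>
  A minimal usage sketch for this call family, assuming a hypothetical
  archive <code>data.zip</code> that wraps a member <code>readme.txt</code>;
  the plain <code>zzip_open</code> form is shown, and passing zero for ext
  and io in <code>zzip_open_ext_io</code> behaves the same:
</p><pre>
/* sketch: read a (possibly zipped) file through the magic wrapper call */
#include <zzip/zzip.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    /* "data/readme.txt" may be a real file or a member of data.zip */
    ZZIP_FILE *fp = zzip_open("data/readme.txt", O_RDONLY);
    if (! fp) { perror("zzip_open"); return 1; }

    char buf[512];
    zzip_ssize_t n;
    while ((n = zzip_read(fp, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t) n, stdout);

    zzip_close(fp);
    return 0;
}
</pre><p>
  When several members of the same archive are opened in sequence,
  <code>zzip_open_shared_io</code> (or <code>zzip_freopen</code>) lets the new
  handle reuse the already parsed zzip_dir structures of the old one.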
+</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_opendir">zzip_opendir</a></b>(zzip_char_t * filename) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_opendir_ext_io">zzip_opendir_ext_io</a></b>(zzip_char_t * filename, int o_modes, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/dir.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_opendir_ext_io</code> function uses explicit ext and io instead of the internal + defaults, setting them to zero is equivalent to <code><a href="#zzip_opendir">zzip_opendir</a></code> +</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_file_real">zzip_file_real</a></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_dir_real">zzip_dir_real</a></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_realdir">zzip_realdir</a></b>(ZZIP_DIR * dir) + : void * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_realfd">zzip_realfd</a></b>(ZZIP_FILE * fp) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/info.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_dir_real</code> function checks if the ZZIP_DIR-handle is wrapping + a real directory or a zip-archive. + Returns 1 for a stat'able directory, and 0 for a handle to zip-archive. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_realdir</code> function returns the posix DIR* handle (if one exists). + Check before with <code><a href="#zzip_dir_real">zzip_dir_real</a></code> if the + the ZZIP_DIR points to a real directory. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_realfd</code> function returns the posix file descriptor (if one exists). + Check before with <code><a href="#zzip_file_real">zzip_file_real</a></code> if the + the ZZIP_FILE points to a real file. +</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_tell">zzip_tell</a></b>(ZZIP_FILE * fp) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_tell32">zzip_tell32</a></b>(ZZIP_FILE * fp) + : long +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_tell32</code> function is provided for users who can not use any largefile-mode. 
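</p><p>
  A small sketch of the real/zipped distinction, assuming <code>fp</code> was
  obtained from <code>zzip_open</code> or <code>zzip_fopen</code>:
</p><pre>
/* sketch: report whether a handle wraps a real file or a zipped entry */
#include <zzip/zzip.h>
#include <stdio.h>

static void describe(ZZIP_FILE *fp)
{
    if (zzip_file_real(fp))
        printf("real file, posix fd = %d\n", zzip_realfd(fp));
    else
        printf("zipped entry, dir handle = %p\n", (void *) zzip_dirhandle(fp));

    printf("current offset = %ld\n", (long) zzip_tell(fp));
}
</pre><p>
  The cast of the <code>zzip_tell</code> result to long is for printing only;
  <code>zzip_tell32</code> offers the same information for code built without
  largefile support.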
+</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_dir_stat">zzip_dir_stat</a></b>(ZZIP_DIR * dir, zzip_char_t * name, ZZIP_STAT * zs, int flags) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_file_stat">zzip_file_stat</a></b>(ZZIP_FILE * file, ZZIP_STAT * zs) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_fstat">zzip_fstat</a></b>(ZZIP_FILE * file, ZZIP_STAT * zs) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/stat.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_file_stat</code> function will obtain information about an opened file _within_ a + zip-archive. The file is supposed to be open (otherwise -1 is returned). + The st_size stat-member contains the uncompressed size. The optional + d_name is never set here. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_fstat</code> function will obtain information about an opened file which may be + either real/zipped. The file is supposed to be open (otherwise -1 is + returned). The st_size stat-member contains the uncompressed size. + The optional d_name is never set here. For a real file, we do set the + d_csize := st_size and d_compr := 0 for meaningful defaults. +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_strerror">zzip_strerror</a></b>(int errcode) + : zzip_char_t * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_strerror_of">zzip_strerror_of</a></b>(ZZIP_DIR * dir) + : zzip_char_t * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/err.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_strerror_of</code> function fetches the errorcode from the <code>DIR-handle</code> and + runs it through <code><a href="#zzip_strerror">zzip_strerror</a></code> to obtain the static string + describing the error. +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_dir_open">zzip_dir_open</a></b>(zzip_char_t * filename, zzip_error_t * e) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_dir_open_ext_io">zzip_dir_open_ext_io</a></b>(zzip_char_t * filename, zzip_error_t * e, + zzip_strings_t * ext, zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_dir_read">zzip_dir_read</a></b>(ZZIP_DIR * dir, ZZIP_DIRENT * d) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/zip.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_dir_open_ext_io</code> function uses explicit ext and io instead of the internal + defaults. Setting these to zero is equivalent to <code><a href="#zzip_dir_open">zzip_dir_open</a></code>. + Note that the referenced ext_io plugin handlers structure must be + static as it is not copied to the returned ZZIP_DIR structure. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_dir_read</code> function fills the dirent-argument with the values and + increments the read-pointer of the dir-argument. 
+</p><p> + returns 0 if there no entry (anymore). +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_init_io">zzip_init_io</a></b>(zzip_plugin_io_handlers_t io, int flags) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_get_default_io">zzip_get_default_io</a></b>(void) + : zzip_plugin_io_t +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/plugin.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_get_default_io</code> function returns a zzip_plugin_io_t handle to static defaults + wrapping the posix io file functions for actual file access. The + returned structure is shared by all threads in the system. +</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_rewinddir">zzip_rewinddir</a></b>(ZZIP_DIR * dir) + : void +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_telldir">zzip_telldir</a></b>(ZZIP_DIR * dir) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_seekdir">zzip_seekdir</a></b>(ZZIP_DIR * dir, zzip_off_t offset) + : void +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_telldir32">zzip_telldir32</a></b>(ZZIP_DIR * dir) + : long +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_seekdir32">zzip_seekdir32</a></b>(ZZIP_DIR * dir, long offset) + : void +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> </em> </td><td align="right"> <em><small>zzip/dir.c</small></em></td></table><p> The <code>zzip_rewinddir</code> function is the equivalent of a <a href="http://www.opengroup.org/onlinepubs/000095399/functions/rewinddir.html"><code>rewinddir(2)</code></a> for a realdir or + the zipfile in place of a directory. The ZZIP_DIR handle returned from + <code><a href="#zzip_opendir">zzip_opendir</a></code> has a flag saying realdir or zipfile. As for a zipfile, + the filenames will include the filesubpath, so take care. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_telldir</code> function is the equivalent of <a href="http://www.opengroup.org/onlinepubs/000095399/functions/telldir.html"><code>telldir(2)</code></a> for a realdir or zipfile. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_seekdir</code> function is the equivalent of <a href="http://www.opengroup.org/onlinepubs/000095399/functions/seekdir.html"><code>seekdir(2)</code></a> for a realdir or zipfile. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_telldir32</code> function is provided for users who can not use any largefile-mode. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_seekdir32</code> function is provided for users who can not use any largefile-mode. 
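</p><p>
  A short sketch of directory reading with a remembered position, assuming a
  hypothetical archive <code>test.zip</code>:
</p><pre>
/* sketch: list the entries of an archive and rewind via telldir/seekdir */
#include <zzip/zzip.h>
#include <stdio.h>

int main(void)
{
    ZZIP_DIR *dir = zzip_opendir("test.zip");
    if (! dir) { perror("zzip_opendir"); return 1; }

    zzip_off_t mark = zzip_telldir(dir);      /* remember the start position */

    ZZIP_DIRENT *d;
    while ((d = zzip_readdir(dir)) != 0)
        printf("%s (%ld bytes)\n", d->d_name, (long) d->st_size);

    zzip_seekdir(dir, mark);                  /* back to the remembered spot */
    zzip_closedir(dir);
    return 0;
}
</pre><p>
  As noted above, the names returned for a zipfile include the filesubpath,
  so the listing differs from that of a plain directory.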
+</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_fopen">zzip_fopen</a></b>(zzip_char_t * filename, zzip_char_t * mode) + : ZZIP_FILE * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_freopen">zzip_freopen</a></b>(zzip_char_t * filename, zzip_char_t * mode, ZZIP_FILE * stream) + : ZZIP_FILE * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> </em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table><p> The <code>zzip_fopen</code> function will <a href="http://www.opengroup.org/onlinepubs/000095399/functions/fopen.html"><code>fopen(3)</code></a> a real/zipped file. +</p><p> + It has some magic functionality builtin - it will first try to open + the given <em>filename</em> as a normal file. If it does not + exist, the given path to the filename (if any) is split into + its directory-part and the file-part. A ".zip" extension is + then added to the directory-part to create the name of a + zip-archive. That zip-archive (if it exists) is being searched + for the file-part, and if found a zzip-handle is returned. +</p><p> + Note that if the file is found in the normal fs-directory the + returned structure is mostly empty and the <code><a href="#zzip_read">zzip_read</a></code> call will + use the libc <a href="http://www.opengroup.org/onlinepubs/000095399/functions/read.html"><code>read(2)</code></a> to obtain data. Otherwise a <code><a href="#zzip_file_open">zzip_file_open</a></code> + is performed and any error mapped to <a href="http://www.opengroup.org/onlinepubs/000095399/functions/errno.html"><code>errno(3)</code></a>. +</p><p> + unlike the posix-wrapper <code><a href="#zzip_open">zzip_open</a></code> the mode-argument is + a string which allows for more freedom to support the extra + zzip modes called ZZIP_CASEINSENSITIVE and ZZIP_IGNOREPATH. + Currently, this <code><a href="#zzip_fopen">zzip_fopen</a></code> call will convert the following + characters in the mode-string into their corrsponding mode-bits: +</p><ul><li><p> <code> "r" : O_RDONLY : </code> read-only </p></li> +<li><p> <code> "b" : O_BINARY : </code> binary (win32 specific) </p></li> +<li><p> <code> "f" : O_NOCTTY : </code> no char device (unix) </p></li> +<li><p> <code> "i" : ZZIP_CASELESS : </code> inside zip file </p></li> +<li><p> <code> "*" : ZZIP_NOPATHS : </code> inside zip file only </p></li> +</ul><p> all other modes will be ignored for zip-contained entries + but they are transferred for compatibility and portability, + including these extra sugar bits: +</p><ul><li><p> <code> "x" : O_EXCL :</code> fail if file did exist </p></li> +<li><p> <code> "s" : O_SYNC :</code> synchronized access </p></li> +<li><p> <code> "n" : O_NONBLOCK :</code> nonblocking access </p></li> +<li><p> <code> "z#" : compression level :</code> for zlib </p></li> +<li><p> <code> "g#" : group access :</code> unix access bits </p></li> +<li><p> <code> "u#" : owner access :</code> unix access bits </p></li> +<li><p> <code> "o#" : world access :</code> unix access bits </p></li> +</ul><p> ... the access bits are in traditional unix bit format + with 7 = read/write/execute, 6 = read/write, 4 = read-only. 
+</p><p> + The default access mode is 0664, and the compression level + is ignored since the lib can not yet write zip files, otherwise + it would be the initialisation value for the zlib deflateInit + where 0 = no-compression, 1 = best-speed, 9 = best-compression. +</p><p> + The <code>zzip_fopen</code> function returns a new zzip-handle (use <code><a href="#zzip_close">zzip_close</a></code> to return + it). On error the <code>zzip_fopen</code> function will return null setting <a href="http://www.opengroup.org/onlinepubs/000095399/functions/errno.html"><code>errno(3)</code></a>. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_freopen</code> function receives an additional argument pointing to + a ZZIP_FILE* being already in use. If this extra argument is + null then the <code>zzip_freopen</code> function is identical with calling <code><a href="#zzip_fopen">zzip_fopen</a></code> +</p><p> + Per default, the old file stream is closed and only the internal + structures associated with it are kept. These internal structures + may be reused for the return value, and this is a lot quicker when + the filename matches a zipped file that is incidently in the very + same zip arch as the old filename wrapped in the stream struct. +</p><p> + That's simply because the zip arch's central directory does not + need to be read again. As an extension for the <code>zzip_freopen</code> function, if the + mode-string contains a "q" then the old stream is not closed but + left untouched, instead it is only given as a hint that a new + file handle may share/copy the zip arch structures of the old file + handle if that is possible, i.e when they are in the same zip arch. +</p><p> + The <code>zzip_freopen</code> function returns a new zzip-handle (use <code><a href="#zzip_close">zzip_close</a></code> to return + it). On error the <code>zzip_freopen</code> function will return null setting <a href="http://www.opengroup.org/onlinepubs/000095399/functions/errno.html"><code>errno(3)</code></a>. +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_dirhandle">zzip_dirhandle</a></b>(ZZIP_FILE * fp) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_dirfd">zzip_dirfd</a></b>(ZZIP_DIR * dir) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/info.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_dirfd</code> function will just return dir->fd +</p><p> + If a ZZIP_DIR does point to a zipfile then the file-descriptor of that + zipfile is returned, otherwise a NULL is returned and the ZZIP_DIR wraps + a real directory DIR (if you have dirent on your system). +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_seek">zzip_seek</a></b>(ZZIP_FILE * fp, zzip_off_t offset, int whence) + : zzip_off_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_seek32">zzip_seek32</a></b>(ZZIP_FILE * fp, long offset, int whence) + : long +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> The <code>zzip_seek32</code> function is provided for users who can not use any largefile-mode. 
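</p><p>
  A minimal sketch of the stdio-like interface, assuming a hypothetical
  member <code>data/README.TXT</code>; the "i" mode character asks for
  case-insensitive matching inside the zip as described above:
</p><pre>
/* sketch: open with a mode-string, skip a few bytes, read and close */
#include <zzip/zzip.h>
#include <stdio.h>

int main(void)
{
    ZZIP_FILE *fp = zzip_fopen("data/README.TXT", "rbi");
    if (! fp) { perror("zzip_fopen"); return 1; }

    zzip_seek(fp, 16, SEEK_SET);              /* skip a 16-byte header */

    char buf[256];
    zzip_ssize_t n = zzip_read(fp, buf, sizeof buf);
    if (n > 0)
        fwrite(buf, 1, (size_t) n, stdout);

    zzip_fclose(fp);
    return 0;
}
</pre><p>
  The same handle could be passed to <code>zzip_freopen</code> to open the
  next member of the archive without reparsing its central directory.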
+</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_read">zzip_read</a></b>(ZZIP_FILE * fp, void *buf, zzip_size_t len) + : zzip_ssize_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_fread">zzip_fread</a></b>(void *ptr, zzip_size_t size, zzip_size_t nmemb, ZZIP_FILE * file) + : zzip_size_t +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr><tr><td valign="top"> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_dir_free">zzip_dir_free</a></b>(ZZIP_DIR * dir) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_dir_close">zzip_dir_close</a></b>(ZZIP_DIR * dir) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/zip.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> It will also <a href="http://www.opengroup.org/onlinepubs/000095399/functions/free.html"><code>free(2)</code></a> the <code>ZZIP_DIR-handle</code> given. + the counterpart for <code><a href="#zzip_dir_open">zzip_dir_open</a></code> + see also <code><a href="#zzip_dir_free">zzip_dir_free</a></code> +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_fclose">zzip_fclose</a></b>(ZZIP_FILE * fp) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_close">zzip_close</a></b>(ZZIP_FILE * fp) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr><tr><td valign="top"> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_dir_fdopen">zzip_dir_fdopen</a></b>(int fd, zzip_error_t * errcode_p) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_dir_fdopen_ext_io">zzip_dir_fdopen_ext_io</a></b>(int fd, zzip_error_t * errcode_p, + zzip_strings_t * ext, const zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/zip.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> the <code>zzip_dir_fdopen_ext_io</code> function uses explicit ext and io instead of the internal + defaults, setting these to zero is equivalent to <code><a href="#zzip_dir_fdopen">zzip_dir_fdopen</a></code> +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_dir_alloc_ext_io">zzip_dir_alloc_ext_io</a></b>(zzip_strings_t * ext, const zzip_plugin_io_t io) + : ZZIP_DIR * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_dir_alloc">zzip_dir_alloc</a></b>(zzip_strings_t * fileext) + : ZZIP_DIR * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/zip.c</small></em></td></table> +</td></tr><tr><td valign="top"><p> the <code>zzip_dir_alloc</code> function is obsolete - it was generally used for implementation + and 
exported to let other code build on it. It is now advised to + use <code><a href="#zzip_dir_alloc_ext_io">zzip_dir_alloc_ext_io</a></code> now on explicitly, just set that second + argument to zero to achieve the same functionality as the old style. +</p> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_readdir">zzip_readdir</a></b>(ZZIP_DIR * dir) + : ZZIP_DIRENT * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/dir.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_closedir">zzip_closedir</a></b>(ZZIP_DIR * dir) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/dir.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_errno">zzip_errno</a></b>(int errcode) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/err.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_file_close">zzip_file_close</a></b>(ZZIP_FILE * fp) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_file_open">zzip_file_open</a></b>(ZZIP_DIR * dir, zzip_char_t * name, int o_mode) + : ZZIP_FILE * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_inflate_init">zzip_inflate_init</a></b>(ZZIP_FILE * fp, struct zzip_dir_hdr *hdr) + : static int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_file_read">zzip_file_read</a></b>(ZZIP_FILE * fp, void *buf, zzip_size_t len) + : zzip_ssize_t +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_rewind">zzip_rewind</a></b>(ZZIP_FILE * fp) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/file.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_compr_str">zzip_compr_str</a></b>(int compr) + : zzip_char_t * 
+</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/info.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="__zzip_fetch_disk_trailer">__zzip_fetch_disk_trailer</a></b>(int fd, zzip_off_t filesize, + struct _disk_trailer *_zzip_restrict trailer, + zzip_plugin_io_t io) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/zip.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="__zzip_parse_root_directory">__zzip_parse_root_directory</a></b>(int fd, + struct _disk_trailer *trailer, + struct zzip_dir_hdr **hdr_return, + zzip_plugin_io_t io) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/zip.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="__zzip_try_open">__zzip_try_open</a></b>(zzip_char_t * filename, int filemode, + zzip_strings_t * ext, zzip_plugin_io_t io) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em>...</em> </td><td align="right"> <em><small>zzip/zip.c</small></em></td></table> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd></dl> +</body></html> diff --git a/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipmmapped.html b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipmmapped.html new file mode 100644 index 00000000000..d5b82aea8ad --- /dev/null +++ b/Build/source/libs/zziplib/zziplib-0.13.60/docs/zzipmmapped.html @@ -0,0 +1,222 @@ +<html><head><title>zziplib Library Functions</title> +</head><body> +<h2>zziplib Library Functions</h2><p>Version 0.13.60</p><p><big><b><code>#include <zzip/mmapped.h></code></b></big></p><table width="100%"><tr><td valign="top"><code><b><code><a href="#zzip_disk_entry_to_data">zzip_disk_entry_to_data</a></code></b>(ZZIP_DISK * disk, struct zzip_disk_entry * entry) + : zzip_byte_t * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_entry_to_file_header">zzip_disk_entry_to_file_header</a></code></b>(ZZIP_DISK * disk, struct zzip_disk_entry *entry) + : struct zzip_file_header * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_entry_strdup_name">zzip_disk_entry_strdup_name</a></code></b>(ZZIP_DISK * disk, struct zzip_disk_entry *entry) + : zzip__new__ char * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_entry_strdup_comment">zzip_disk_entry_strdup_comment</a></code></b>(ZZIP_DISK 
* disk, struct zzip_disk_entry *entry) + : zzip__new__ char * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_findfile">zzip_disk_findfile</a></code></b>(ZZIP_DISK * disk, char *filename, + struct zzip_disk_entry *after, zzip_strcmp_fn_t compare) + : struct zzip_disk_entry * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_findfirst">zzip_disk_findfirst</a></code></b>(ZZIP_DISK * disk) + : struct zzip_disk_entry * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_findnext">zzip_disk_findnext</a></code></b>(ZZIP_DISK * disk, struct zzip_disk_entry *entry) + : struct zzip_disk_entry * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_findmatch">zzip_disk_findmatch</a></code></b>(ZZIP_DISK * disk, char *filespec, + struct zzip_disk_entry *after, + zzip_fnmatch_fn_t compare, int flags) + : struct zzip_disk_entry * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_fopen">zzip_disk_fopen</a></code></b>(ZZIP_DISK * disk, char *filename) + : zzip__new__ ZZIP_DISK_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_entry_fopen">zzip_disk_entry_fopen</a></code></b>(ZZIP_DISK * disk, ZZIP_DISK_ENTRY * entry) + : zzip__new__ ZZIP_DISK_FILE * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_fread">zzip_disk_fread</a></code></b>(void *ptr, zzip_size_t sized, zzip_size_t nmemb, + ZZIP_DISK_FILE * file) + : zzip_size_t +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_fclose">zzip_disk_fclose</a></code></b>(ZZIP_DISK_FILE * file) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_feof">zzip_disk_feof</a></code></b>(ZZIP_DISK_FILE * file) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_mmap">zzip_disk_mmap</a></code></b>(int fd) + : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_init">zzip_disk_init</a></code></b>(ZZIP_DISK * disk, void *buffer, zzip_size_t buflen) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_new">zzip_disk_new</a></code></b>(void) + : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_munmap">zzip_disk_munmap</a></code></b>(ZZIP_DISK * disk) + : int +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_open">zzip_disk_open</a></code></b>(char *filename) + : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_buffer">zzip_disk_buffer</a></code></b>(void *buffer, size_t buflen) : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><code><a href="#zzip_disk_close">zzip_disk_close</a></code></b>(ZZIP_DISK * disk) + : int +</code></td></tr></table><h3>Documentation</h3><dl><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_disk_entry_to_data">zzip_disk_entry_to_data</a></b>(ZZIP_DISK * disk, struct zzip_disk_entry * entry) + : zzip_byte_t * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_entry_to_file_header">zzip_disk_entry_to_file_header</a></b>(ZZIP_DISK * disk, struct zzip_disk_entry *entry) + : struct zzip_file_header * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_entry_strdup_name">zzip_disk_entry_strdup_name</a></b>(ZZIP_DISK * disk, struct zzip_disk_entry *entry) + : zzip__new__ char * +</code></td></tr><tr><td valign="top"><code><b><a 
name="zzip_disk_entry_strdup_comment">zzip_disk_entry_strdup_comment</a></b>(ZZIP_DISK * disk, struct zzip_disk_entry *entry) + : zzip__new__ char * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> helper functions for (mmapped) zip access api</em> </td><td align="right"> <em><small>zzip/mmapped.c</small></em></td></table><p> + The <code>zzip_disk_entry_to_data</code> function augments the other zzip_disk_entry_* helpers: here we move + a disk_entry pointer (as returned by _find* functions) into a pointer to + the data block right after the file_header. Only disk->buffer would be + needed to perform the seek but we check the mmapped range end as well. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_entry_to_file_header</code> function does half the job of <code><a href="#zzip_disk_entry_to_data">zzip_disk_entry_to_data</a></code> where it + can augment with <code><u>zzip_file_header_to_data</u></code> helper from format/fetch.h +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_entry_strdup_name</code> function is a big helper despite its little name: in a zip file the + encoded filenames are usually NOT zero-terminated but for common usage + with libc we need it that way. Secondly, the filename SHOULD be present + in the zip central directory but if not then we fallback to the filename + given in the file_header of each compressed data portion. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_entry_strdup_comment</code> function is similar creating a reference to a zero terminated + string but it can only exist in the zip central directory entry. +</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_disk_findfile">zzip_disk_findfile</a></b>(ZZIP_DISK * disk, char *filename, + struct zzip_disk_entry *after, zzip_strcmp_fn_t compare) + : struct zzip_disk_entry * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_findfirst">zzip_disk_findfirst</a></b>(ZZIP_DISK * disk) + : struct zzip_disk_entry * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_findnext">zzip_disk_findnext</a></b>(ZZIP_DISK * disk, struct zzip_disk_entry *entry) + : struct zzip_disk_entry * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_findmatch">zzip_disk_findmatch</a></b>(ZZIP_DISK * disk, char *filespec, + struct zzip_disk_entry *after, + zzip_fnmatch_fn_t compare, int flags) + : struct zzip_disk_entry * +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> search for files in the (mmapped) zip central directory</em> </td><td align="right"> <em><small>zzip/mmapped.c</small></em></td></table><p> + The <code>zzip_disk_findfile</code> function is given a filename as an additional argument, to find the + disk_entry matching a given filename. The compare-function is usually + strcmp or strcasecmp or perhaps strcoll, if null then strcmp is used. + - use null as argument for "after"-entry when searching the first + matching entry, otherwise the last returned value if you look for other + entries with a special "compare" function (if null then a doubled search + is rather useless with this variant of _findfile). 
+</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_disk_findfirst</code> function is the first call of all the zip access functions here. + It contains the code to find the first entry of the zip central directory. + Here we require the mmapped block to represent a real zip file where the + disk_trailer is _last_ in the file area, so that its position would be at + a fixed offset from the end of the file area if not for the comment field + allowed to be of variable length (which requires us to do a little search + for the disk_trailer). However, in this simple implementation we disregard + any disk_trailer info telling about multidisk archives, so we just return + a pointer to the zip central directory. +</p><p> + In practice, we are going to search backwards from the end + of the mmapped block looking for the PK-magic signature of a + disk_trailer. If we see one then we check the rootseek value to + find the first disk_entry of the root central directory. If we find + the correct PK-magic signature of a disk_entry over there then we + assume we are done and we are going to return a pointer to that label. +</p><p> + The return value is a pointer to the first zzip_disk_entry being checked + to be within the bounds of the file area specified by the arguments. If + no disk_trailer was found then null is returned, and likewise we only + accept a disk_trailer with a seekvalue that points to a disk_entry where + both parts have valid PK-magic values. Beyond some sanity checks we try to + catch a common brokenness with zip archives that still allows us to find + the start of the zip central directory. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_disk_findnext</code> function takes an existing disk_entry in the central root directory + (e.g. from zzip_disk_findfirst) and returns the next entry within + the given bounds of the mmapped file area. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_disk_findmatch</code> function uses a compare-function with an additional argument + and it is called just like fnmatch(3) from POSIX.2 (AD:1993), i.e. + the argument filespec first and the ziplocal filename second with + the integer-flags put in as the third argument to the indirect call. If the + platform has fnmatch available then a null compare-function will use that one + and otherwise we fall back to mere strcmp, so if you need fnmatch + searching then please provide an implementation somewhere else. + - use null as argument for "after"-entry when searching the first + matching entry, or the last disk_entry return-value to find the + next entry matching the given filespec. 
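</p><p>
  A sketch of pattern matching over the central directory, assuming again an
  already established <code>ZZIP_DISK</code> handle and an example filespec
  <code>*.txt</code>:
</p><pre>
/* sketch: enumerate all "*.txt" members of an mmapped zip archive */
#include <zzip/mmapped.h>
#include <stdio.h>
#include <stdlib.h>

static void list_txt(ZZIP_DISK *disk)
{
    struct zzip_disk_entry *entry = 0;   /* null "after" = start at the first match */
    while ((entry = zzip_disk_findmatch(disk, "*.txt", entry, 0, 0)) != 0)
    {
        char *name = zzip_disk_entry_strdup_name(disk, entry);
        if (name) { puts(name); free(name); }
    }
}
</pre><p>
  The same loop structure works with <code>zzip_disk_findfirst</code> and
  <code>zzip_disk_findnext</code> when every entry should be visited.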
+</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_disk_fopen">zzip_disk_fopen</a></b>(ZZIP_DISK * disk, char *filename) + : zzip__new__ ZZIP_DISK_FILE * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_entry_fopen">zzip_disk_entry_fopen</a></b>(ZZIP_DISK * disk, ZZIP_DISK_ENTRY * entry) + : zzip__new__ ZZIP_DISK_FILE * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_fread">zzip_disk_fread</a></b>(void *ptr, zzip_size_t sized, zzip_size_t nmemb, + ZZIP_DISK_FILE * file) + : zzip_size_t +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_fclose">zzip_disk_fclose</a></b>(ZZIP_DISK_FILE * file) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_feof">zzip_disk_feof</a></b>(ZZIP_DISK_FILE * file) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> opening a file part wrapped within a (mmapped) zip archive</em> </td><td align="right"> <em><small>zzip/mmapped.c</small></em></td></table><p> + The <code>zzip_disk_fopen</code> function opens a file found by name, so it does a search into + the zip central directory with <code><a href="#zzip_disk_findfile">zzip_disk_findfile</a></code> and whatever + is found first is given to <code><a href="#zzip_disk_entry_fopen">zzip_disk_entry_fopen</a></code>. +</p> +</td></tr><tr><td valign="top"><p> + The ZZIP_DISK_FILE* is rather simple in just encapsulating the + arguments given to the <code>zzip_disk_entry_fopen</code> function plus a zlib deflate buffer. + Note that the ZZIP_DISK pointer does already contain the full + mmapped file area of a zip disk, so open()ing a file part within + that area happens to be a lookup of its bounds and encoding. That + information is memorized on the ZZIP_DISK_FILE so that subsequent + _read() operations will be able to get the next data portion or + return an eof condition for that file part wrapped in the zip archive. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_disk_fread</code> function reads more bytes into the output buffer specified as + arguments. The return value is zero on eof or error; the stdio-like + interface can not distinguish between these, so you need to check + with <code><a href="#zzip_disk_feof">zzip_disk_feof</a></code> for the difference. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_fclose</code> function releases any zlib decoder info needed for decompression + and then dumps the ZZIP_DISK_FILE*. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_disk_feof</code> function allows the caller to distinguish an error from an eof condition. + Actually, if we found an error but we did already reach eof then we + just keep on saying that it was an eof, so the app can just continue. 
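</p><p>
  A compact sketch that combines the calls of this section with
  <code>zzip_disk_open</code>/<code>zzip_disk_close</code> from below; the
  archive name <code>test.zip</code> and member <code>README</code> are
  examples only:
</p><pre>
/* sketch: mmap an archive, stream one member, distinguish eof from error */
#include <zzip/mmapped.h>
#include <stdio.h>

int main(void)
{
    ZZIP_DISK *disk = zzip_disk_open("test.zip");
    if (! disk) { perror("zzip_disk_open"); return 1; }

    ZZIP_DISK_FILE *file = zzip_disk_fopen(disk, "README");
    if (file)
    {
        char buf[1024];
        zzip_size_t n;
        while ((n = zzip_disk_fread(buf, 1, sizeof buf, file)) > 0)
            fwrite(buf, 1, n, stdout);

        if (! zzip_disk_feof(file))
            fprintf(stderr, "read error before eof\n");
        zzip_disk_fclose(file);
    }
    zzip_disk_close(disk);
    return 0;
}
</pre><p>
  Checking <code>zzip_disk_feof</code> after the read loop is the only way to
  tell a short read caused by an error from a normal end of file.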
+</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"><tr><td valign="top"><code><b><a name="zzip_disk_mmap">zzip_disk_mmap</a></b>(int fd) + : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_init">zzip_disk_init</a></b>(ZZIP_DISK * disk, void *buffer, zzip_size_t buflen) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_new">zzip_disk_new</a></b>(void) + : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_munmap">zzip_disk_munmap</a></b>(ZZIP_DISK * disk) + : int +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_open">zzip_disk_open</a></b>(char *filename) + : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_buffer">zzip_disk_buffer</a></b>(void *buffer, size_t buflen) : zzip__new__ ZZIP_DISK * +</code></td></tr><tr><td valign="top"><code><b><a name="zzip_disk_close">zzip_disk_close</a></b>(ZZIP_DISK * disk) + : int +</code></td></tr></table></dt><dd><table width="100%"><tr><td valign="top"><table border="0" width="100%" cellpadding="0" cellspacing="0"><td> <em> turn a filehandle into a mmapped zip disk archive handle</em> </td><td align="right"> <em><small>zzip/mmapped.c</small></em></td></table><p> + The <code>zzip_disk_mmap</code> function uses the given file-descriptor to detect the length of the + file and calls the system <a href="http://www.opengroup.org/onlinepubs/000095399/functions/mmap.html"><code>mmap(2)</code></a> to put it in main memory. If it is + successful then a newly allocated ZZIP_DISK* is returned with + disk->buffer pointing to the mapview of the zipdisk content. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_init</code> function does primary initialization of a disk-buffer struct. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_new</code> function allocates a new disk-buffer with <a href="http://www.opengroup.org/onlinepubs/000095399/functions/malloc.html"><code>malloc(3)</code></a>. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_munmap</code> function is the inverse of <code><a href="#zzip_disk_mmap">zzip_disk_mmap</a></code>, using the system + munmap(2) on the buffer area and <a href="http://www.opengroup.org/onlinepubs/000095399/functions/free.html"><code>free(3)</code></a> on the ZZIP_DISK structure. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_disk_open</code> function opens the given archive by name and turns the filehandle + over to <code><a href="#zzip_disk_mmap">zzip_disk_mmap</a></code> to bring it into main memory. If it can not + be <a href="http://www.opengroup.org/onlinepubs/000095399/functions/mmap.html"><code>mmap(2)</code></a>'ed then we slurp the whole file into a newly <a href="http://www.opengroup.org/onlinepubs/000095399/functions/malloc.html"><code>malloc(3)</code></a>'ed + memory block. Only if that fails too do we return null. Since handling + of disk->buffer is ambiguous in that case, it should not be snatched away by the caller. +</p> +</td></tr><tr><td valign="top"><p> The <code>zzip_disk_buffer</code> function will attach a buffer with a zip image + that was acquired from another source than a file. 
+ Note that if zzip_disk_mmap fails then zzip_disk_open + will fall back and try to read the full file to memory + wrapping a ZZIP_DISK around the memory buffer just as + the <code>zzip_disk_buffer</code> function will do. Note that the <code>zzip_disk_buffer</code> function will not + own the buffer, it will neither be written nor free()d. +</p> +</td></tr><tr><td valign="top"><p> + The <code>zzip_disk_close</code> function will release all data needed to access a (mmapped) + zip archive, including any malloc()ed blocks, sharedmem mappings + and it dumps the handle struct as well. +</p> +</td></tr></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table width="100%"></table></dd><dt><table width="100%"></table></dt><dd><table 
width="100%"></table></dd></dl> +</body></html> |