Diffstat (limited to 'Master/tlpkg/tlperl0/lib/pods/perltodo.pod')
-rwxr-xr-x  Master/tlpkg/tlperl0/lib/pods/perltodo.pod  1229
1 files changed, 1229 insertions, 0 deletions
diff --git a/Master/tlpkg/tlperl0/lib/pods/perltodo.pod b/Master/tlpkg/tlperl0/lib/pods/perltodo.pod
new file mode 100755
index 00000000000..7e7a3526742
--- /dev/null
+++ b/Master/tlpkg/tlperl0/lib/pods/perltodo.pod
@@ -0,0 +1,1229 @@
+=head1 NAME
+
+perltodo - Perl TO-DO List
+
+=head1 DESCRIPTION
+
+This is a list of wishes for Perl. The most up-to-date version of this file
+is at http://perl5.git.perl.org/perl.git/blob_plain/HEAD:/pod/perltodo.pod
+
+The tasks we think are smaller or easier are listed first. Anyone is welcome
+to work on any of these, but it's a good idea to first contact
+I<perl5-porters@perl.org> to avoid duplication of effort, and to learn from
+any previous attempts. By all means contact a pumpking privately first if you
+prefer.
+
+Whilst patches to make the list shorter are most welcome, ideas to add to
+the list are also encouraged. Check the perl5-porters archives for past
+ideas, and any discussion about them. One set of archives may be found at:
+
+ http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/
+
+What can we offer you in return? Fame, fortune, and everlasting glory? Maybe
+not, but if your patch is incorporated, then we'll add your name to the
+F<AUTHORS> file, which ships in the official distribution. How many other
+programming languages offer you one line of immortality?
+
+=head1 Tasks that only need Perl knowledge
+
+=head2 Remove MacPerl references from tests
+
+MacPerl is gone. The MacPerl-specific checks in the tests don't need to be there.
+
+=head2 Remove duplication of test setup
+
+Schwern notes that there's duplication of code - lots and lots of tests have
+some variation on the big block of C<$Is_Foo> checks. We can safely put this
+into a file, change it to build an C<%Is> hash and require it. Maybe just put
+it into F<test.pl>. Throw in the handy tainting subroutines.
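+
+One minimal shape this could take (the hash name and placement in F<t/test.pl>
+are only suggestions):
+
+    # hypothetical shared setup, e.g. in t/test.pl:
+    our %Is;
+    $Is{$^O} = 1;            # $Is{MSWin32}, $Is{VMS}, $Is{cygwin}, ...
+    # individual tests would then write something like:
+    #     skip "Windows only", 3 unless $Is{MSWin32};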
+
+=head2 POD -E<gt> HTML conversion in the core still sucks
+
+Which is crazy given just how simple POD purports to be, and how simple HTML
+can be. It's not actually I<as> simple as it sounds, particularly with the
+flexibility POD allows for C<=item>, but it would be good to improve the
+visual appeal of the HTML generated, and to avoid it having any validation
+errors. See also L</make HTML install work>, as the layout of the installation
+tree is needed to improve the cross-linking.
+
+The addition of C<Pod::Simple> and its related modules may make this task
+easier to complete.
+
+=head2 Make ExtUtils::ParseXS use strict;
+
+F<lib/ExtUtils/ParseXS.pm> contains this line
+
+ # use strict; # One of these days...
+
+Simply uncomment it, and fix all the resulting issues :-)
+
+The more practical approach, to break the task down into manageable chunks, is
+to work your way through the code from bottom to top, adding extra
+C<{ ... }> blocks where necessary and turning on strict within them.
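+
+Because C<use strict> is lexically scoped, a block converted near the bottom of
+the file doesn't disturb the code above it; a rough illustration (the
+subroutine is made up):
+
+    {
+        use strict;                       # in force only inside this block
+        sub print_footer { print "1;\n" }
+    }
+    # code outside the block is still compiled without strict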
+
+=head2 Parallel testing
+
+(This probably impacts much more than the core: also the Test::Harness
+and TAP::* modules on CPAN.)
+
+All of the tests in F<t/> can now be run in parallel, if C<$ENV{TEST_JOBS}>
+is set. However, tests within each directory in F<ext> and F<lib> are still
+run in series, with directories run in parallel. This is an adequate
+heuristic, but it might be possible to relax it further, and get more
+throughput. Specifically, it would be good to audit all of F<lib/*.t>, and
+make them use C<File::Temp>.
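+
+A typical conversion would replace ad-hoc scratch files with something like
+this (a sketch, not tied to any particular test):
+
+    use File::Temp qw(tempfile tempdir);
+    my $dir = tempdir( CLEANUP => 1 );             # removed automatically at exit
+    my ($fh, $filename) = tempfile( DIR => $dir );
+    print $fh "scratch data\n";
+    close $fh or die "close: $!";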
+
+=head2 Make Schwern poorer
+
+We should have tests for everything. When all the core's modules are tested,
+Schwern has promised to donate $500 to TPF. We may need volunteers to
+hold him upside down and shake vigorously in order to actually extract the
+cash.
+
+=head2 Improve the coverage of the core tests
+
+Use Devel::Cover to ascertain the core modules' test coverage, then add
+tests that are currently missing.
+
+=head2 test B
+
+A full test suite for the B module would be nice.
+
+=head2 A decent benchmark
+
+C<perlbench> seems impervious to any recent changes made to the perl core. It
+would be useful to have a reasonable general benchmarking suite that roughly
+represented what current perl programs do, and measurably reported whether
+tweaks to the core improve, degrade or don't really affect performance, to
+guide people attempting to optimise the guts of perl. Gisle would welcome
+new tests for perlbench.
+
+=head2 fix tainting bugs
+
+Fix the bugs revealed by running the test suite with the C<-t> switch (via
+C<make test.taintwarn>).
+
+=head2 Dual life everything
+
+As part of the "dists" plan, anything that doesn't belong in the smallest perl
+distribution needs to be dual-lifed. Anything else can be too. Figure out what
+changes would be needed to package that module and its tests up for CPAN, and
+do so. Test it with older perl releases, and fix the problems you find.
+
+To make a minimal perl distribution, it's useful to look at
+F<t/lib/commonsense.t>.
+
+=head2 Bundle dual life modules in ext/
+
+For maintenance (and branch merging) reasons, it would be useful to move
+some architecture-independent dual-life modules from lib/ to ext/, if this
+has no negative impact on the build of perl itself.
+
+=head2 POSIX memory footprint
+
+Ilya observed that C<use POSIX;> eats memory like there's no tomorrow, and at
+various times worked to cut it down. There is probably still fat to cut out -
+for example POSIX passes Exporter some very memory-hungry data structures.
+
+=head2 embed.pl/makedef.pl
+
+There is a script F<embed.pl> that generates several header files to prefix
+all of Perl's symbols in a consistent way, to provide some semblance of
+namespace support in C<C>. Functions are declared in F<embed.fnc>, variables
+in F<interpvar.h>. Quite a few of the functions and variables
+are conditionally declared there, using C<#ifdef>. However, F<embed.pl>
+doesn't understand the C macros, so the rules about which symbols are present
+when are duplicated in F<makedef.pl>. Writing things twice is bad, m'kay.
+It would be good to teach C<embed.pl> to understand the conditional
+compilation, and hence remove the duplication, and the mistakes it has caused.
+
+=head2 use strict; and AutoLoad
+
+Currently if you write
+
+ package Whack;
+ use AutoLoader 'AUTOLOAD';
+ use strict;
+ 1;
+ __END__
+ sub bloop {
+ print join (' ', No, strict, here), "!\n";
+ }
+
+then C<use strict;> isn't in force within the autoloaded subroutines. It would
+be more consistent (and less surprising) to arrange for all lexical pragmas
+in force at the __END__ block to be in force within each autoloaded subroutine.
+
+There's a similar problem with SelfLoader.
+
+=head2 profile installman
+
+The F<installman> script is slow. All it is doing is text processing, which we're
+told is something Perl is good at. So it would be nice to know what it is doing
+that is taking so much CPU, and where possible address it.
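+
+One way to start, assuming C<Devel::NYTProf> from CPAN is available (any other
+profiler would do just as well):
+
+    ./perl -Ilib -d:NYTProf installman
+    nytprofhtml                 # writes an HTML report under nytprof/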
+
+
+=head1 Tasks that need a little sysadmin-type knowledge
+
+Or if you prefer, tasks that you would learn from, and broaden your skills
+base...
+
+=head2 make HTML install work
+
+There is an C<installhtml> target in the Makefile. It's marked as
+"experimental". It would be good to get this tested, make it work reliably, and
+remove the "experimental" tag. This would include
+
+=over 4
+
+=item 1
+
+Checking that cross linking between various parts of the documentation works.
+In particular that links work between the modules (files with POD in F<lib/>)
+and the core documentation (files in F<pod/>).
+
+=item 2
+
+Work out how to split C<perlfunc> into chunks, preferably one per function
+group, preferably with general case code that could be used elsewhere.
+Challenges here are correctly identifying the groups of functions that go
+together, and making the right named external cross-links point to the right
+page. Things to be aware of are C<-X>, groups such as C<getpwnam> to
+C<endservent>, two or more C<=item>s giving the different parameter lists, such
+as
+
+ =item substr EXPR,OFFSET,LENGTH,REPLACEMENT
+ =item substr EXPR,OFFSET,LENGTH
+ =item substr EXPR,OFFSET
+
+and different parameter lists having different meanings. (e.g. C<select>)
+
+=back
+
+=head2 compressed man pages
+
+Be able to install them. This would probably need a configure test to see how
+the system does compressed man pages (same directory/different directory?
+same filename/different filename), as well as tweaking the F<installman> script
+to compress as necessary.
+
+=head2 Add a code coverage target to the Makefile
+
+Make it easy for anyone to run Devel::Cover on the core's tests. The steps
+to do this manually are roughly
+
+=over 4
+
+=item *
+
+do a normal C<Configure>, but include Devel::Cover as a module to install
+(see F<INSTALL> for how to do this)
+
+=item *
+
+ make perl
+
+=item *
+
+ cd t; HARNESS_PERL_SWITCHES=-MDevel::Cover ./perl -I../lib harness
+
+=item *
+
+Process the resulting Devel::Cover database
+
+=back
+
+This just gives you the coverage of the F<.pm>s. To also get the C level
+coverage you need to
+
+=over 4
+
+=item *
+
+Additionally tell C<Configure> to use the appropriate C compiler flags for
+C<gcov>
+
+=item *
+
+ make perl.gcov
+
+(instead of C<make perl>)
+
+=item *
+
+After running the tests run C<gcov> to generate all the F<.gcov> files.
+(Including down in the subdirectories of F<ext/>.)
+
+=item *
+
+(From the top level perl directory) run C<gcov2perl> on all the F<.gcov> files
+to get their stats into the cover_db directory.
+
+=item *
+
+Then process the Devel::Cover database
+
+=back
+
+It would be good to add a single switch to C<Configure> to specify that you
+wanted to perform perl level coverage, and another to specify C level
+coverage, and have C<Configure> and the F<Makefile> do all the right things
+automatically.
+
+=head2 Make Config.pm cope with differences between built and installed perl
+
+Quite often vendors ship a perl binary compiled with their (pay-for)
+compilers. People install a free compiler, such as gcc. To work out how to
+build extensions, Perl interrogates C<%Config>, so in this situation
+C<%Config> describes compilers that aren't there, and extension building
+fails. This forces people into choosing between re-compiling perl themselves
+using the compiler they have, or only using modules that the vendor ships.
+
+It would be good to find a way to teach C<Config.pm> about the installation setup,
+possibly involving probing at install time or later, so that the C<%Config> in
+a binary distribution better describes the installed machine, when the
+installed machine differs from the build machine in some significant way.
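+
+A minimal sketch of the kind of probe this might involve (the warning text and
+where such a check would live are entirely hypothetical):
+
+    use Config;
+    use File::Spec;
+    my ($cc) = split ' ', $Config{cc};
+    my $found = grep { -x File::Spec->catfile($_, $cc) } File::Spec->path;
+    warn "compiler '$cc' recorded in %Config not found in PATH\n" unless $found;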
+
+=head2 linker specification files
+
+Some platforms mandate that you provide a list of a shared library's external
+symbols to the linker, so the core already has the infrastructure in place to
+do this for generating shared perl libraries. My understanding is that the
+GNU toolchain can accept an optional linker specification file, and restrict
+visibility just to symbols declared in that file. It would be good to extend
+F<makedef.pl> to support this format, and to provide a means within
+C<Configure> to enable it. This would allow Unix users to test that the
+export list is correct, and to build a perl that does not pollute the global
+namespace with private symbols.
+
+=head2 Cross-compile support
+
+Currently C<Configure> understands the C<-Dusecrosscompile> option. This option
+arranges for building C<miniperl> for the TARGET machine; this C<miniperl> is
+then assumed to be copied to the TARGET machine and used as a replacement for
+the full C<perl> executable.
+
+This could be done a little differently. Namely, C<miniperl> should be built for
+the HOST and then the full C<perl> with extensions should be compiled for the
+TARGET. This, however, might require extra trickery for %Config: we have one
+config first for the HOST and then another for the TARGET. Tools like MakeMaker
+will be mightily confused. Having two different types of executables and
+libraries around (HOST and TARGET) makes life interesting for Makefiles and
+shell (and Perl) scripts. There is $Config{run}, normally empty, which
+can be used as an execution wrapper. Also note that in some
+cross-compilation/execution environments the HOST and the TARGET do
+not see the same filesystem(s), so the $Config{run} wrapper may need to do some
+file/directory copying back and forth.
+
+=head2 roffitall
+
+Make F<pod/roffitall> be updated by F<pod/buildtoc>.
+
+=head2 Split "linker" from "compiler"
+
+Right now, Configure probes for two commands, and sets two variables:
+
+=over 4
+
+=item * C<cc> (in F<cc.U>)
+
+This variable holds the name of a command to execute a C compiler which
+can resolve multiple global references that happen to have the same
+name. Usual values are F<cc> and F<gcc>.
+Fervent ANSI compilers may be called F<c89>. AIX has F<xlc>.
+
+=item * C<ld> (in F<dlsrc.U>)
+
+This variable indicates the program to be used to link
+libraries for dynamic loading. On some systems, it is F<ld>.
+On ELF systems, it should be C<$cc>. Mostly, we'll try to respect
+the hint file setting.
+
+=back
+
+There is an implicit historical assumption from around Perl 5.000alpha
+something, that C<$cc> is also the correct command for linking object files
+together to make an executable. This may be true on Unix, but it's not true
+on other platforms, and there is a maze of workarounds in other places (such
+as F<Makefile.SH>) to cope with this.
+
+Ideally, we should create a new variable to hold the name of the executable
+linker program, probe for it in F<Configure>, and centralise all the special
+case logic there or in hints files.
+
+A small bikeshed issue remains - what to call it, given that C<$ld> is already
+taken (arguably for the wrong thing now, but on SunOS 4.1 it is the command
+for creating dynamically-loadable modules) and C<$link> could be confused with
+the Unix command line executable of the same name, which does something
+completely different. Andy Dougherty makes the counter argument "In parrot, I
+tried to call the command used to link object files and libraries into an
+executable F<link>, since that's what my vaguely-remembered DOS and VMS
+experience suggested. I don't think any real confusion has ensued, so it's
+probably a reasonable name for perl5 to use."
+
+"Alas, I've always worried that introducing it would make things worse,
+since now the module building utilities would have to look for
+C<$Config{link}> and institute a fall-back plan if it weren't found."
+Although I can see that as confusing, given that C<$Config{d_link}> is true
+when (hard) links are available.
+
+=head2 Configure Windows using PowerShell
+
+Currently, Windows uses hard-coded config files to build the
+config.h for compiling Perl. Makefiles are also hard-coded and need to be
+hand edited prior to building Perl. While this makes it easy to create a perl.exe
+that works across multiple Windows versions, being able to accurately
+configure a perl.exe for a specific Windows version and VS C++ would be
+a nice enhancement. With PowerShell available on Windows XP and up, this
+may now be possible. Step 1 might be to investigate whether this is possible
+and use this to clean up our current makefile situation. Step 2 would be to
+see if there would be a way to use our existing metaconfig units to configure a
+Windows Perl or whether we go in a separate direction and make it so. Of
+course, we all know what step 3 is.
+
+=head2 decouple -g and -DDEBUGGING
+
+Currently F<Configure> automatically adds C<-DDEBUGGING> to the C compiler
+flags if it spots C<-g> in the optimiser flags. The pre-processor directive
+C<DEBUGGING> enables F<perl>'s command line C<-D> options, but in the process
+makes F<perl> slower. It would be good to disentangle this logic, so that
+C-level debugging with C<-g> and Perl level debugging with C<-D> can easily
+be enabled independently.
+
+=head1 Tasks that need a little C knowledge
+
+These tasks would need a little C knowledge, but don't need any specific
+background or experience with XS, or how the Perl interpreter works.
+
+=head2 Weed out needless PERL_UNUSED_ARG
+
+The C code uses the macro C<PERL_UNUSED_ARG> to stop compilers warning about
+unused arguments. Often the arguments can't be removed, as there is an
+external constraint that determines the prototype of the function, so this
+approach is valid. However, there are some cases where C<PERL_UNUSED_ARG>
+could be removed. Specifically
+
+=over 4
+
+=item *
+
+The prototypes of (nearly all) static functions can be changed
+
+=item *
+
+Unused arguments generated by short cut macros are wasteful - the short cut
+macro used can be changed.
+
+=back
+
+=head2 Modernize the order of directories in @INC
+
+The way @INC is laid out by default, one cannot upgrade core (dual-life)
+modules without overwriting files. This causes problems for binary
+package builders. One possible proposal is laid out in this
+message:
+L<http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2002-04/msg02380.html>.
+
+=head2 -Duse32bit*
+
+Natively 64-bit systems need neither -Duse64bitint nor -Duse64bitall.
+On these systems, it might be the default compilation mode, and there
+is currently no guarantee that omitting the use64bitall option from the
+Configure process will build a 32-bit perl. Implementing -Duse32bit*
+options would be nice for perl 5.12.
+
+=head2 Profile Perl - am I hot or not?
+
+The Perl source code is stable enough that it makes sense to profile it,
+identify and optimise the hotspots. It would be good to measure the
+performance of the Perl interpreter using free tools such as cachegrind,
+gprof, and dtrace, and work to reduce the bottlenecks they reveal.
+
+As part of this, the idea of F<pp_hot.c> is that it contains the I<hot> ops,
+the ops that are most commonly used. The idea is that by grouping them, their
+object code will be adjacent in the executable, so they have a greater chance
+of already being in the CPU cache (or swapped in) due to being near another op
+already in use.
+
+Except that it's not clear if these really are the most commonly used ops. So
+as part of exercising your skills with coverage and profiling tools you might
+want to determine what ops I<really> are the most commonly used. And in turn
+suggest evictions and promotions to achieve a better F<pp_hot.c>.
+
+One piece of Perl code that might make a good testbed is F<installman>.
+
+=head2 Allocate OPs from arenas
+
+Currently all new OP structures are individually malloc()ed and free()d.
+All C<malloc> implementations have space overheads, and are not as fast as
+custom allocators, so allocating the various OP structures from arenas would
+use both less memory and less CPU. The SV arena code can probably be
+re-used for this.
+
+Note that Configuring perl with C<-Accflags=-DPL_OP_SLAB_ALLOC> will use
+Perl_Slab_alloc() to pack optrees into a contiguous block, which is
+probably superior to the use of OP arenas, esp. from a cache locality
+standpoint. See L<Profile Perl - am I hot or not?>.
+
+=head2 Improve win32/wince.c
+
+Currently, numerous functions look virtually, if not completely,
+identical in both F<win32/wince.c> and F<win32/win32.c>, which can't
+be good.
+
+=head2 Use secure CRT functions when building with VC8 on Win32
+
+Visual C++ 2005 (VC++ 8.x) deprecated a number of CRT functions on the basis
+that they were "unsafe" and introduced differently named secure versions of
+them as replacements, e.g. instead of writing
+
+ FILE* f = fopen(__FILE__, "r");
+
+one should now write
+
+ FILE* f;
+ errno_t err = fopen_s(&f, __FILE__, "r");
+
+Currently, the warnings about these deprecations have been disabled by adding
+-D_CRT_SECURE_NO_DEPRECATE to the CFLAGS. It would be nice to remove that
+warning suppressant and actually make use of the new secure CRT functions.
+
+There is also a similar issue with POSIX CRT function names like fileno having
+been deprecated in favour of ISO C++ conformant names like _fileno. These
+warnings are also currently suppressed by adding -D_CRT_NONSTDC_NO_DEPRECATE. It
+might be nice to do as Microsoft suggest here too, although, unlike the secure
+functions issue, there is presumably little or no benefit in this case.
+
+=head2 Fix POSIX::access() and chdir() on Win32
+
+These functions currently take no account of DACLs and therefore do not behave
+correctly in situations where access is restricted by DACLs (as opposed to the
+read-only attribute).
+
+Furthermore, POSIX::access() behaves differently for directories having the
+read-only attribute set depending on what CRT library is being used. For
+example, the _access() function in the VC6 and VC7 CRTs (wrongly) claim that
+such directories are not writable, whereas in fact all directories are writable
+unless access is denied by DACLs. (In the case of directories, the read-only
+attribute actually only means that the directory cannot be deleted.) This CRT
+bug is fixed in the VC8 and VC9 CRTs (but, of course, the directory may still
+not actually be writable if access is indeed denied by DACLs).
+
+For the chdir() issue, see ActiveState bug #74552:
+http://bugs.activestate.com/show_bug.cgi?id=74552
+
+Therefore, DACLs should be checked both for consistency across CRTs and for
+the correct answer.
+
+(Note that perl's -w operator should not be modified to check DACLs. It has
+been written so that it reflects the state of the read-only attribute, even
+for directories (whatever CRT is being used), for symmetry with chmod().)
+
+=head2 strcat(), strcpy(), strncat(), strncpy(), sprintf(), vsprintf()
+
+Maybe create a utility that checks after each libperl.a creation that
+none of the above (nor sprintf(), vsprintf(), or *SHUDDER* gets())
+ever creep back to libperl.a.
+
+ nm libperl.a | ./miniperl -alne '$o = $F[0] if /:$/; print "$o $F[1]" if $F[0] eq "U" && $F[1] =~ /^(?:strn?c(?:at|py)|v?sprintf|gets)$/'
+
+Note, of course, that this will only tell whether B<your> platform
+is using those naughty interfaces.
+
+=head2 -D_FORTIFY_SOURCE=2, -fstack-protector
+
+Recent glibcs support C<-D_FORTIFY_SOURCE=2> and recent gcc
+(4.1 onwards?) supports C<-fstack-protector>, both of which give
+protection against various kinds of buffer overflow problems.
+These should probably be used for compiling Perl whenever available;
+Configure and/or hints files should be adjusted to probe for the
+availability of these features and enable them as appropriate.
+
+=head2 Arenas for GPs? For MAGIC?
+
+C<struct gp> and C<struct magic> are both currently allocated by C<malloc>.
+It might be a speed or memory saving to change to using arenas. Or it might
+not. It would need some suitable benchmarking first. In particular, C<GP>s
+can probably be changed with minimal compatibility impact (probably nothing
+outside of the core, or even outside of F<gv.c> allocates them), but they
+probably aren't allocated/deallocated often enough for a speed saving. Whereas
+C<MAGIC> is allocated/deallocated more often, but in turn, is also something
+more externally visible, so changing the rules here may bite external code.
+
+=head2 Shared arenas
+
+Several SV body structs are now the same size, notably PVMG and PVGV, PVAV and
+PVHV, and PVCV and PVFM. It should be possible to allocate and return same
+sized bodies from the same actual arena, rather than maintaining one arena for
+each. This could save 4-6K of memory per thread, no longer tied up in the
+not-yet-allocated part of an arena.
+
+
+=head1 Tasks that need a knowledge of XS
+
+These tasks would need C knowledge, and roughly the level of knowledge of
+the perl API that comes from writing modules that use XS to interface to
+C.
+
+=head2 Remove the use of SVs as temporaries in dump.c
+
+F<dump.c> contains debugging routines to dump out the contents of perl data
+structures, such as C<SV>s, C<AV>s and C<HV>s. Currently, the dumping code
+B<uses> C<SV>s for its temporary buffers, which was a logical initial
+implementation choice, as they provide ready made memory handling.
+
+However, they also lead to a lot of confusion when it happens that what you're
+trying to debug is seen by the code in F<dump.c>, correctly or incorrectly, as
+a temporary scalar it can use for a temporary buffer. It's also not possible
+to dump scalars before the interpreter is properly set up, such as during
+ithreads cloning. It would be good to progressively replace the use of scalars
+as string accumulation buffers with something much simpler, directly allocated
+by C<malloc>. The F<dump.c> code is (or should be) only producing 7 bit
+US-ASCII, so output character sets are not an issue.
+
+Producing and proving a simple internal buffer allocation scheme would make it
+easier to re-write the internals of the PerlIO subsystem to avoid using C<SV>s for
+B<its> buffers, use of which can cause problems similar to those of F<dump.c>,
+at similar times.
+
+=head2 safely supporting POSIX SA_SIGINFO
+
+Some years ago Jarkko supplied patches to provide support for the POSIX
+SA_SIGINFO feature in Perl, passing the extra data to the Perl signal handler.
+
+Unfortunately, it only works with "unsafe" signals, because under safe
+signals, by the time Perl gets to run the signal handler, the extra
+information has been lost. Moreover, it's not easy to store it somewhere,
+as you can't call mutexs, or do anything else fancy, from inside a signal
+handler.
+
+So it strikes me that we could provide safe SA_SIGINFO support:
+
+=over 4
+
+=item 1
+
+Provide global variables for two file descriptors
+
+=item 2
+
+When the first request is made via C<sigaction> for C<SA_SIGINFO>, create a
+pipe, store the reader in one, the writer in the other
+
+=item 3
+
+In the "safe" signal handler (C<Perl_csighandler()>/C<S_raise_signal()>), if
+the C<siginfo_t> pointer is non-C<NULL>, and the writer file handle is open,
+
+=over 8
+
+=item 1
+
+serialise the signal number and the C<siginfo_t> structure (or at least the
+parts we care about) into a small auto char buffer
+
+=item 2
+
+C<write()> that (non-blocking) to the writer fd
+
+=over 12
+
+=item 1
+
+if it writes 100%, flag the signal in a counter of "signals on the pipe" akin
+to the current per-signal-number counts
+
+=item 2
+
+if it writes 0%, assume the pipe is full. Flag the data as lost?
+
+=item 3
+
+if it writes partially, croak a panic, as your OS is broken.
+
+=back
+
+=back
+
+=item 4
+
+in the regular C<PERL_ASYNC_CHECK()> processing, if there are "signals on
+the pipe", read the data out, deserialise, build the Perl structures on
+the stack (code in C<Perl_sighandler()>, the "unsafe" handler), and call as
+usual.
+
+=back
+
+I think that this gets us decent C<SA_SIGINFO> support, without the current risk
+of running Perl code inside the signal handler context (with all the dangers
+of things like C<malloc> corruption that this currently entails).
+
+For more information see the thread starting with this message:
+http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2008-03/msg00305.html
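+
+For reference, this is roughly how Perl-level code requests C<SA_SIGINFO>
+today (the signal and handler are illustrative); at present the extra
+information only reaches the handler when "unsafe" signals are in use:
+
+    use POSIX;
+    sub handler {
+        my ($sig, $info) = @_;    # $info is lost under "safe" signals
+        warn "got $sig\n";
+    }
+    sigaction(SIGUSR1,
+              POSIX::SigAction->new(\&handler, POSIX::SigSet->new, SA_SIGINFO))
+        or die "sigaction: $!";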
+
+=head2 autovivification
+
+Make all autovivification consistent w.r.t. LVALUE/RVALUE and strict/no strict.
+
+This task is incremental - even a little bit of work on it will help.
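+
+One example of the current behaviour: a purely rvalue lookup still
+autovivifies the intermediate levels.
+
+    my %h;
+    if (exists $h{deep}{key}) { }       # never true here...
+    print exists $h{deep} ? "sprang into existence\n" : "untouched\n";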
+
+=head2 Unicode in Filenames
+
+chdir, chmod, chown, chroot, exec, glob, link, lstat, mkdir, open,
+opendir, qx, readdir, readlink, rename, rmdir, stat, symlink, sysopen,
+system, truncate, unlink, utime, -X. All these could potentially accept
+Unicode filenames either as input or output (and in the case of system
+and qx Unicode in general, as input or output to/from the shell).
+Whether a particular filesystem and operating system pair understands
+Unicode in filenames varies.
+
+Known combinations that have some level of understanding include
+Microsoft NTFS, Apple HFS+ (In Mac OS 9 and X) and Apple UFS (in Mac
+OS X), NFS v4 is rumored to be Unicode, and of course Plan 9. How to
+create Unicode filenames, what forms of Unicode are accepted and used
+(UCS-2, UTF-16, UTF-8), what (if any) is the normalization form used,
+and so on, varies. Finding the right level of interfacing to Perl
+requires some thought. Remember that an OS does not imply a particular
+filesystem.
+
+(The Windows -C command flag "wide API support" has been at least
+temporarily retired in 5.8.1, and the -C has been repurposed, see
+L<perlrun>.)
+
+Most probably the right way to do this would be this:
+L</"Virtualize operating system access">.
+
+=head2 Unicode in %ENV
+
+Currently the %ENV entries are always byte strings.
+See L</"Virtualize operating system access">.
+
+=head2 Unicode and glob()
+
+Currently glob patterns and filenames returned from File::Glob::glob()
+are always byte strings. See L</"Virtualize operating system access">.
+
+=head2 Unicode and lc/uc operators
+
+Some built-in operators (C<lc>, C<uc>, etc.) behave differently, based on
+what the internal encoding of their argument is. That should not be the
+case. Maybe add a pragma to switch behaviour.
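+
+A small demonstration of the current behaviour (results are typical for the
+perls this list targets, with no locale in effect):
+
+    my $bytes = "\xe9";                        # e-acute, stored as a byte string
+    my $chars = $bytes;
+    utf8::upgrade($chars);                     # same character, UTF-8 internals
+    printf "%vd %vd\n", uc $bytes, uc $chars;  # typically prints "233 201"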
+
+=head2 use less 'memory'
+
+Investigate trade-offs in perl's choices on memory usage. In particular,
+perl should be able to give memory back.
+
+This task is incremental - even a little bit of work on it will help.
+
+=head2 Re-implement C<:unique> in a way that is actually thread-safe
+
+The old implementation made bad assumptions on several levels. A good 90%
+solution might be just to make C<:unique> work to share the string buffer
+of SvPVs. That way large constant strings can be shared between ithreads,
+such as the configuration information in F<Config>.
+
+=head2 Make tainting consistent
+
+Tainting would be easier to use if it didn't take documented shortcuts and
+allow taint to "leak" everywhere within an expression.
+
+=head2 readpipe(LIST)
+
+system() accepts a LIST syntax (and a PROGRAM LIST syntax) to avoid
+running a shell. readpipe() (the function behind qx//) could be similarly
+extended.
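+
+Roughly what the extension would look like (the LIST form of readpipe() is the
+proposal, not current behaviour):
+
+    system('ls', '-l', '/tmp');           # LIST form, no shell involved
+    my $out = readpipe('ls -l /tmp');     # qx// machinery: always goes via a shell
+    # proposed: readpipe('ls', '-l', '/tmp') would bypass the shell in the same way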
+
+=head2 Audit the code for destruction ordering assumptions
+
+Change 25773 notes
+
+ /* Need to check SvMAGICAL, as during global destruction it may be that
+ AvARYLEN(av) has been freed before av, and hence the SvANY() pointer
+ is now part of the linked list of SV heads, rather than pointing to
+ the original body. */
+ /* FIXME - audit the code for other bugs like this one. */
+
+adding the C<SvMAGICAL> check to
+
+ if (AvARYLEN(av) && SvMAGICAL(AvARYLEN(av))) {
+ MAGIC *mg = mg_find (AvARYLEN(av), PERL_MAGIC_arylen);
+
+Go through the core and look for similar assumptions that SVs have particular
+types, as all bets are off during global destruction.
+
+=head2 Extend PerlIO and PerlIO::Scalar
+
+PerlIO::Scalar doesn't know how to truncate(). Implementing this
+would require extending the PerlIO vtable.
+
+Similarly the PerlIO vtable doesn't know about formats (write()), or
+about stat(), or chmod()/chown(), utime(), or flock().
+
+(For PerlIO::Scalar it's hard to see what e.g. mode bits or ownership
+would mean.)
+
+PerlIO doesn't do directories or symlinks, either: mkdir(), rmdir(),
+opendir(), closedir(), seekdir(), rewinddir(), glob(); symlink(),
+readlink().
+
+See also L</"Virtualize operating system access">.
+
+=head2 -C on the #! line
+
+It should be possible to make -C work correctly if found on the #! line,
+given that all perl command line options are strict ASCII, and -C changes
+only the interpretation of non-ASCII characters, and not for the script file
+handle. To make it work needs some investigation of the ordering of function
+calls during startup, and (by implication) a bit of tweaking of that order.
+
+=head2 Duplicate logic in S_method_common() and Perl_gv_fetchmethod_autoload()
+
+A comment in C<S_method_common> notes
+
+ /* This code tries to figure out just what went wrong with
+ gv_fetchmethod. It therefore needs to duplicate a lot of
+ the internals of that function. We can't move it inside
+ Perl_gv_fetchmethod_autoload(), however, since that would
+ cause UNIVERSAL->can("NoSuchPackage::foo") to croak, and we
+ don't want that.
+ */
+
+If C<Perl_gv_fetchmethod_autoload> gets rewritten to take (more) flag bits,
+then it ought to be possible to move the logic from C<S_method_common> to
+the "right" place. When making this change it would probably be good to also
+pass in at least the method name length, if not also pre-computed hash values
+when known. (I'm contemplating a plan to pre-compute hash values for common
+fixed strings such as C<ISA> and pass them in to functions.)
+
+=head2 Organize error messages
+
+Perl's diagnostics (error messages, see L<perldiag>) could use
+reorganizing and formalizing so that each error message has its
+stable-for-all-eternity unique id, categorized by severity, type, and
+subsystem. (The error messages would be listed in a datafile outside
+of the Perl source code, and the source code would only refer to the
+messages by the id.) This clean-up and regularizing should apply
+for all croak() messages.
+
+This would enable all sorts of things: easier translation/localization
+of the messages (though please do keep in mind the caveats of
+L<Locale::Maketext> about too straightforward approaches to
+translation), filtering by severity, and instead of grepping for a
+particular error message one could look for a stable error id. (Of
+course, changing the error messages by default would break all the
+existing software depending on some particular error message...)
+
+This kind of functionality is known as I<message catalogs>. Look for
+inspiration for example in the catgets() system, possibly even use it
+if available - but B<only> if available, since not all platforms
+have catgets().
+
+For the really pure at heart, consider extending this item to cover
+also the warning messages (see L<perllexwarn>, C<warnings.pl>).
+
+=head1 Tasks that need a knowledge of the interpreter
+
+These tasks would need C knowledge, and knowledge of how the interpreter works,
+or a willingness to learn.
+
+=head2 forbid labels with keyword names
+
+Currently C<goto keyword> "computes" the label value:
+
+ $ perl -e 'goto print'
+ Can't find label 1 at -e line 1.
+
+It would be nice to forbid labels with keyword names, to avoid confusion.
+
+=head2 truncate() prototype
+
+The prototype of truncate() is currently C<$$>. It should probably
+be C<*$> instead. (This is changed in F<opcode.pl>.)
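+
+The current prototype can be checked from Perl; on the perls this list
+describes the output is:
+
+    $ perl -le 'print prototype "CORE::truncate"'
+    $$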
+
+=head2 decapsulation of smart match argument
+
+Currently C<$foo ~~ $object> will die with the message "Smart matching a
+non-overloaded object breaks encapsulation". It would be nice to allow
+bypassing this check by explicitly using the syntax C<$foo ~~ %$object> or
+C<$foo ~~ @$object>.
+
+=head2 error reporting of [$a ; $b]
+
+Using C<;> inside brackets is a syntax error, and we don't propose to change
+that by giving it any meaning. However, it's not reported very helpfully:
+
+ $ perl -e '$a = [$b; $c];'
+ syntax error at -e line 1, near "$b;"
+ syntax error at -e line 1, near "$c]"
+ Execution of -e aborted due to compilation errors.
+
+It should be possible to hook into the tokeniser or the lexer, so that when a
+C<;> is parsed where it is not legal as a statement terminator (i.e. inside
+C<{}> used as a hashref, C<[]> or C<()>) it issues an error something like
+I<';' isn't legal inside an expression - if you need multiple statements use a
+do {...} block>. See the thread starting at
+http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2008-09/msg00573.html
+
+=head2 lexicals used only once
+
+This warns:
+
+ $ perl -we '$pie = 42'
+ Name "main::pie" used only once: possible typo at -e line 1.
+
+This does not:
+
+ $ perl -we 'my $pie = 42'
+
+Logically all lexicals used only once should warn, if the user asks for
+warnings. An unworked RT ticket (#5087) has been open for almost seven
+years for this discrepancy.
+
+=head2 UTF-8 revamp
+
+The handling of Unicode is unclean in many places. For example, the regexp
+engine matches in Unicode semantics whenever the string or the pattern is
+flagged as UTF-8, but that should not be dependent on an internal storage
+detail of the string. Likewise, case folding behaviour is dependent on the
+UTF8 internal flag being on or off.
+
+=head2 Properly Unicode safe tokeniser and pads.
+
+The tokeniser isn't actually very UTF-8 clean. C<use utf8;> is a hack -
+variable names are stored in stashes as raw bytes, without the utf-8 flag
+set. The pad API only takes a C<char *> pointer, so that's all bytes too. The
+tokeniser ignores the UTF-8-ness of C<PL_rsfp>, or any SVs returned from
+source filters. All this could be fixed.
+
+=head2 state variable initialization in list context
+
+Currently this is illegal:
+
+ state ($a, $b) = foo();
+
+In Perl 6, C<state ($a) = foo();> and C<(state $a) = foo();> have different
+semantics, which is tricky to implement in Perl 5 as currently they produce
+the same opcode trees. The Perl 6 design is firm, so it would be good to
+implement the necessary code in Perl 5. There are comments in
+C<Perl_newASSIGNOP()> that show the code paths taken by various assignment
+constructions involving state variables.
+
+=head2 Implement $value ~~ 0 .. $range
+
+It would be nice to extend the syntax of the C<~~> operator to also
+understand numeric (and maybe alphanumeric) ranges.
+
+=head2 A does() built-in
+
+Like ref(), only useful. It would call the C<DOES> method on objects; it
+would also tell whether something can be dereferenced as an
+array/hash/etc., or used as a regexp, etc.
+L<http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2007-03/msg00481.html>
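+
+A rough pure-Perl approximation of the idea (not the proposed built-in; the
+dereference check in particular is only a sketch):
+
+    use Scalar::Util qw(blessed reftype);
+    sub does {
+        my ($thing, $kind) = @_;
+        return $thing->DOES($kind) if blessed $thing;  # UNIVERSAL::DOES, 5.10+
+        return (reftype($thing) || '') eq uc $kind;    # does($ref, 'array'), etc.
+    }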
+
+=head2 Tied filehandles and write() don't mix
+
+There is no method on tied filehandles to allow them to be called back by
+formats.
+
+=head2 Propagate compilation hints to the debugger
+
+Currently a debugger started with -dE on the command-line doesn't see the
+features enabled by -E. More generally hints (C<$^H> and C<%^H>) aren't
+propagated to the debugger. Probably it would be a good thing to propagate
+hints from the innermost non-C<DB::> scope: this would make code eval'ed
+in the debugger see the features (and strictures, etc.) currently in
+scope.
+
+=head2 Attach/detach debugger from running program
+
+The old perltodo notes "With C<gdb>, you can attach the debugger to a running
+program if you pass the process ID. It would be good to do this with the Perl
+debugger on a running Perl program, although I'm not sure how it would be
+done." ssh and screen do this with named pipes in /tmp. Maybe we can too.
+
+=head2 LVALUE functions for lists
+
+The old perltodo notes that lvalue functions don't work for list or hash
+slices. This would be good to fix.
+
+=head2 regexp optimiser optional
+
+The regexp optimiser is not optional. It should be possible to make it so, to
+allow its performance to be measured, and its bugs to be easily demonstrated.
+
+=head2 delete &function
+
+Make it possible to delete functions. One can already undef them, but they're
+still in the stash.
+
+=head2 C</w> regex modifier
+
+That flag would make it possible to match whole words, and also to interpolate
+arrays as alternations. With it, C</P/w> would be roughly equivalent to:
+
+ do { local $"='|'; /\b(?:P)\b/ }
+
+See L<http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2007-01/msg00400.html>
+for the discussion.
+
+=head2 optional optimizer
+
+Make the peephole optimizer optional. Currently it performs two tasks as
+it walks the optree - genuine peephole optimisations, and necessary fixups of
+ops. It would be good to find an efficient way to switch out the
+optimisations whilst keeping the fixups.
+
+=head2 You WANT *how* many
+
+Currently contexts are void, scalar and list. split has a special mechanism in
+place to pass in the number of return values wanted. It would be useful to
+have a general mechanism for this, backwards compatible and with little speed hit.
+This would allow proposals such as short circuiting sort to be implemented
+as a module on CPAN.
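+
+For comparison, this is the special mechanism that split already enjoys:
+
+    my $line = "root:x:0:0:root:/root:/bin/sh";
+    my ($user, $pass) = split /:/, $line;   # compiled as  split /:/, $line, 3
+                                            # - one more field than there are lvalues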
+
+=head2 lexical aliases
+
+Allow lexical aliases (maybe via the syntax C<my \$alias = \$foo>).
+
+=head2 entersub XS vs Perl
+
+At the moment pp_entersub is huge, and has code to deal with entering both
+perl and XS subroutines. Subroutine implementations rarely change between
+perl and XS at run time, so investigate using 2 ops to enter subs (one for
+XS, one for perl) and swapping between them if a sub is redefined.
+
+=head2 Self-ties
+
+Self-ties are currently illegal because they caused too many segfaults. Maybe
+the causes of these could be tracked down and self-ties on all types
+reinstated.
+
+=head2 Optimize away @_
+
+The old perltodo notes "Look at the "reification" code in C<av.c>".
+
+=head2 The yada yada yada operators
+
+Perl 6's Synopsis 3 says:
+
+I<The ... operator is the "yada, yada, yada" list operator, which is used as
+the body in function prototypes. It complains bitterly (by calling fail)
+if it is ever executed. Variant ??? calls warn, and !!! calls die.>
+
+Those would be nice to add to Perl 5. That could be done without new ops.
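+
+In Perl 5 the statement form might look roughly like this (illustrative only,
+not an implemented feature at the time of writing):
+
+    sub not_written_yet { ... }    # compiles; would die "Unimplemented" if called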
+
+=head2 Virtualize operating system access
+
+Implement a set of "vtables" that virtualizes operating system access
+(open(), mkdir(), unlink(), readdir(), getenv(), etc.). At the very
+least these interfaces should take SVs as "name" arguments instead of
+bare char pointers; probably the most flexible and extensible way
+would be for the Perl-facing interfaces to accept HVs. The system
+needs to be per-operating-system and per-file-system
+hookable/filterable, preferably both from XS and Perl level
+(L<perlport/"Files and Filesystems"> is good reading at this point,
+in fact, all of L<perlport> is.)
+
+This has actually already been implemented (but only for Win32),
+take a look at F<iperlsys.h> and F<win32/perlhost.h>. While all Win32
+variants go through a set of "vtables" for operating system access,
+non-Win32 systems currently go straight for the POSIX/UNIX-style
+system/library call. A similar system to the Win32 one should be
+implemented for all platforms. The existing Win32 implementation
+probably does not need to survive alongside this proposed new
+implementation; the approaches could be merged.
+
+What would this give us? One often-asked-for feature this would
+enable is using Unicode for filenames, and other "names" like %ENV,
+usernames, hostnames, and so forth.
+(See L<perlunicode/"When Unicode Does Not Happen">.)
+
+But this kind of virtualization would also allow for things like
+virtual filesystems, virtual networks, and "sandboxes" (though as long
+as dynamic loading of random object code is allowed, not very safe
+sandboxes since external code of course knows nothing of Perl's vtables).
+An example of a smaller "sandbox" is that this feature can be used to
+implement per-thread working directories: Win32 already does this.
+
+See also L</"Extend PerlIO and PerlIO::Scalar">.
+
+=head2 Investigate PADTMP hash pessimisation
+
+The peephole optimiser converts constants used for hash key lookups to shared
+hash key scalars. Under ithreads, something is undoing this work.
+See http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2007-09/msg00793.html
+
+=head2 Store the current pad in the OP slab allocator
+
+=for clarification
+I hope that I got that "current pad" part correct
+
+Currently we leak ops in various cases of parse failure. I suggested that we
+could solve this by always using the op slab allocator, and walking it to
+free ops. Dave comments that as some ops are already freed during optree
+creation one would have to mark which ops are freed, and not double free them
+when walking the slab. He notes that one problem with this is that for some ops
+you have to know which pad was current at the time of allocation, which does
+change. I suggested storing a pointer to the current pad in the memory allocated
+for the slab, and swapping to a new slab each time the pad changes. Dave thinks
+that this would work.
+
+=head2 repack the optree
+
+Repacking the optree after execution order is determined could allow
+removal of NULL ops, and optimal ordering of OPs with respect to cache-line
+filling. The slab allocator could be reused for this purpose. I think that
+the best way to do this is to make it an optional step just before the
+completed optree is attached to anything else, and to use the slab allocator
+unchanged, so that freeing ops is identical whether or not this step runs.
+Note that the slab allocator allocates ops downwards in memory, so one would
+have to actually "allocate" the ops in reverse-execution order to get them
+contiguous in memory in execution order.
+
+See http://www.nntp.perl.org/group/perl.perl5.porters/2007/12/msg131975.html
+
+Note that running this copy, and then freeing all the old location ops would
+cause their slabs to be freed, which would eliminate possible memory wastage if
+the previous suggestion is implemented, and we swap slabs more frequently.
+
+=head2 eliminate incorrect line numbers in warnings
+
+This code
+
+ use warnings;
+ my $undef;
+
+ if ($undef == 3) {
+ } elsif ($undef == 0) {
+ }
+
+used to produce this output:
+
+ Use of uninitialized value in numeric eq (==) at wrong.pl line 4.
+ Use of uninitialized value in numeric eq (==) at wrong.pl line 4.
+
+where the line of the second warning was misreported - it should be line 5.
+Rafael fixed this - the problem arose because there was no nextstate OP
+between the execution of the C<if> and the C<elsif>, hence C<PL_curcop> still
+reports that the currently executing line is line 4. The solution was to inject
+a nextstate OP for each C<elsif>, although it turned out that the nextstate
+OP needed to be a nulled OP, rather than a live nextstate OP, else other line
+numbers became misreported. (Jenga!)
+
+The problem is more general than C<elsif> (although the C<elsif> case is the
+most common and the most confusing). Ideally this code
+
+ use warnings;
+ my $undef;
+
+ my $a = $undef + 1;
+ my $b
+ = $undef
+ + 1;
+
+would produce this output
+
+ Use of uninitialized value $undef in addition (+) at wrong.pl line 4.
+ Use of uninitialized value $undef in addition (+) at wrong.pl line 7.
+
+(rather than lines 4 and 5), but this would seem to require every OP to carry
+(at least) line number information.
+
+What might work is to have an optional line number in memory just before the
+BASEOP structure, with a flag bit in the op to say whether it's present.
+Initially during compile every OP would carry its line number. Then add a late
+pass to the optimiser (potentially combined with L</repack the optree>) which
+looks at the two ops on every edge of the graph of the execution path. If
+the line number changes, it flags the destination OP with this information.
+Once all paths are traced, replace every op with the flag with a
+nextstate-light op (that just updates C<PL_curcop>), which in turn then passes
+control on to the true op. All ops would then be replaced by variants that
+do not store the line number. (Which, logically, is why it would work best in
+conjunction with L</repack the optree>, as that is already copying/reallocating
+all the OPs)
+
+(Although I should note that we're not certain that doing this for the general
+case is worth it.)
+
+=head2 optimize tail-calls
+
+Tail-calls present an opportunity for broadly applicable optimization;
+anywhere that C<< return foo(...) >> is called, the outer return can
+be replaced by a goto, and foo will return directly to the outer
+caller, saving (conservatively) 25% of perl's call&return cost, which
+is relatively higher than in C. The Scheme language is known to do
+this heavily. B::Concise provides good insight into where this
+optimization is possible, i.e. anywhere an entersub, leavesub op-sequence
+occurs.
+
+ perl -MO=Concise,-exec,a,b,-main -e 'sub a{ 1 }; sub b {a()}; b(2)'
+
+Bottom line on this is probably a new pp_tailcall function which
+combines the code in pp_entersub, pp_leavesub. This should probably
+be done first in XS, using B::Generate to patch the new OP into the
+optrees.
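+
+The transformation can already be spelled by hand with C<goto &sub> (the
+subroutine names here are made up):
+
+    sub delegate {
+        # replace the current frame; inner() then returns directly to
+        # delegate()'s caller, which is what the optimisation would do implicitly
+        goto &inner;
+    }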
+
+=head2 C<\N>
+
+It should be possible to add a C<\N> regex assertion, meaning "every
+character except C<\n>", independently of the context. That would
+of course imply that C<\N> couldn't be followed by an opening C<{>.
+
+=head1 Big projects
+
+Tasks that will get your name mentioned in the description of the "Highlights
+of 5.12"
+
+=head2 make ithreads more robust
+
+Generally make ithreads more robust. See also L</iCOW>.
+
+This task is incremental - even a little bit of work on it will help, and
+will be greatly appreciated.
+
+One bit would be to write the missing code in sv.c:Perl_dirp_dup.
+
+Fix Perl_sv_dup, et al. so that threads can return objects.
+
+=head2 iCOW
+
+Sarathy and Arthur have a proposal for an improved Copy On Write which
+specifically will be able to COW new ithreads. If this can be implemented
+it would be a good thing.
+
+=head2 (?{...}) closures in regexps
+
+Fix (or rewrite) the implementation of the C</(?{...})/> closures.
+
+=head2 A re-entrant regexp engine
+
+This will allow the use of a regex from inside (?{ }), (??{ }) and
+(?(?{ })|) constructs.
+
+=head2 Add class set operations to regexp engine
+
+Apparently these are quite useful. Anyway, Jeffrey Friedl wants them.
+
+demerphq has this on his todo list, but right at the bottom.
+
+
+=head1 Tasks for microperl
+
+
+[Each and every one of these may be obsolete, but they were listed
+in the old F<Todo.micro> file.]
+
+
+=head2 make creating uconfig.sh automatic
+
+=head2 make creating Makefile.micro automatic
+
+=head2 do away with fork/exec/wait?
+
+(system, popen should be enough?)
+
+=head2 some of uconfig.sh really needs to be probed (using cc) at build time:
+
+(uConfigure? :-) native datatype widths and endianness come to mind
+