[developers] ACL running out of memory

Stephan Oepen oe at ifi.uio.no
Sun Nov 2 14:36:29 CET 2008

hi mike,

the kind of problem you are experiencing is one we have battled for the
past twelve or so years.  ACL and memory management are complex issues,
i am afraid i would conclude from my own experience.  but the situation
is not hopeless either ...

first, 2 gbyte of RAM is not a lot of memory, considering that most of
the ERG and JaCY development nowadays is done on 64-bit machines with 4
or 6 (or 32) gbyte of RAM.  especially for batch runs, i would recommend
you look for a 64-bit machine with more RAM.

in 32-bit mode, i have never been able to get ACL to use more than some
1.8 gbyte in total process size.  given that (by default) [incr tsdb()]
disables tenuring, i.e. keeps all Lisp objects in newspace, that means
about 800 mbyte are actually available for Lisp data while generating.
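to see how much of that is actually in use at any point, the standard
common lisp room function gives a heap summary; a minimal sketch (the
excl:gc call is ACL-specific, and the exact output format varies by
lisp and version):

```lisp
;; force a global GC first, so the numbers reflect live data only
;; (excl:gc is ACL-specific; on other lisps, omit this line)
#+allegro (excl:gc t)

;; standard CL: print a heap usage summary; (room t) is the verbose
;; form and, in ACL, breaks down new- and oldspace separately.
(room t)
```

comparing that summary against the total process size (as reported by
the operating system) is usually the quickest way to tell whether the
memory is held by live Lisp data or lost to heap fragmentation.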
furthermore, it seems you are using the LOGON run-time images, and from
what i remember these have somewhat more conservative heap placement,
to increase portability across Linux kernel versions.  the underlying
issues are discussed in the ACL FAQ; see:


the initial heap placement interacts with virtual memory management by
the kernel, and the message `lack of swap space or some other operating
system imposed limit or memory mapping collision' can indicate several
underlying problems (e.g. in the placement of dynamic libraries, which
can further limit how much the Lisp heap can grow: the heap needs to be
a contiguous block of virtual memory, which is the root cause of most
such problems).

did you observe process size at the time of the out-of-memory error?  i
take it you are actually running two active processes here, which would
further reduce the total amount of memory available.

in a 64-bit universe, things become a lot simpler.  i would expect that
even with the run-time images (where you have reduced control over the
memory layout) you should be able to run some serious batches.

second, i was going to say that there are things on the Lisp side that
can reduce `memory leaking', and [incr tsdb()] tries to work around the
tendency of dags to hold on to invalid `pointers' (a consequence of the
quasi-destructive unification approach, which does not interact nicely
with GC-based memory management: pointers become logically invalid, but
Lisp has no way of knowing that).  release-temporary-storage() makes an
attempt at working around that (as far as possible), and in principle
i would expect batch parsing or generation to grow only moderately over
time.  though, looking at the code just now, batch generation actually
fails to call release-temporary-storage()!  i will fix that.
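until that fix propagates, one could approximate the intended behaviour
in one's own driver loop; a sketch (the package qualifier and the
per-item generate-item driver are assumptions on my part, for
illustration only):

```lisp
;; hypothetical batch driver: generate from each input item, then
;; release dag scratch storage, so stale `pointers' left behind by
;; quasi-destructive unification do not pin garbage in the heap.
(defun batch-generate (items)
  (loop for item in items
        do (generate-item item)            ; hypothetical per-item driver
           ;; work around dags holding on to invalid pointers;
           ;; the lkb:: package is a guess on my part:
           (lkb::release-temporary-storage)))
```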

however, from reading your log file, it appears the process that runs
out of memory is the foreground process, actually, while parsing with
PET.  that seems rather surprising to me, as [incr tsdb()] should not
have much work to do when using an external `cpu' (aka process) to do
the actual parsing.  however, i noticed

  (setf *tsdb-trees-hook* "lkb::parse-tree-structure")

why are you doing this?  the effect is that [incr tsdb()] will rebuild
/all/ derivations received from PET, label them with node labels, and
write the results into its profile.  i imagine this part could be quite
`leaky' in terms of memory use, so could you try the experiment without
the trees hook?  in case you actually want those trees, i would suggest
treebanking first, followed by a `thinning' normalize step, where there
is a parallel *redwoods-trees-hook* that can be activated.
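concretely, the experiment i am suggesting would be just (in your
.tsdbrc or equivalent; treat the second form as a sketch):

```lisp
;; disable derivation rebuilding during the parse run ...
(setf *tsdb-trees-hook* nil)

;; ... and, if you do want labelled trees in the final profile,
;; activate the parallel hook for the `thinning' normalize step
;; that follows treebanking instead:
(setf *redwoods-trees-hook* "lkb::parse-tree-structure")
```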

so much for today; i hope this may help you get a little further!

                                                      all best  -  oe

+++ Universitetet i Oslo (IFI); Boks 1080 Blindern; 0316 Oslo; (+47) 2284 0125
+++     CSLI Stanford; Ventura Hall; Stanford, CA 94305; (+1 650) 723 0515
+++       --- oe at ifi.uio.no; oe at csli.stanford.edu; stephan at oepen.net ---
