[developers] speeding up grammar loading in the LKB

John Carroll J.A.Carroll at sussex.ac.uk
Fri Sep 1 10:47:40 CEST 2017

Hi all,

Glenn and I have been exchanging emails off-list about implementation issues in GLB computation and speeding up logical operations on sparse bit vectors. We have also run agree and the new version of the LKB on Petter’s whole grammar (see his posting to the list on 26 August). For the benefit of Woodley, Ann and anyone else who’s interested, I append a few excerpts.


As I mentioned in my previous note, given a summary qword, I use (x & -x) to isolate its lowest 1 bit and so directly access, in time proportional to the number of summary 1 bits, only the interesting main qwords. Given such a single-bit mask, I use the 64-bit de Bruijn constant 0x07EDD5E59A4E28C2 to find its log2, so as to index into the main qwords. The code is hairy, but I seem to recall that testing showed large speedups. ...
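A minimal sketch of the trick described above, in C. The de Bruijn constant is the one quoted in the message; the table, function names and the visiting helper are invented for illustration and are not agree's actual code.

```c
#include <stdint.h>

/* 64-bit de Bruijn constant from the message above.  Multiplying a
   single-bit value (1 << i) by it shifts the constant left by i, so the
   top 6 bits of the product form a unique index for each i. */
static const uint64_t DEBRUIJN64 = 0x07EDD5E59A4E28C2ULL;
static int debruijn_table[64];

static void init_debruijn_table(void) {
    for (int i = 0; i < 64; i++)
        debruijn_table[(DEBRUIJN64 << i) >> 58] = i;
}

/* Given x with exactly one bit set, return its bit index (log2 x)
   without a loop or a branch. */
static int bit_index(uint64_t x) {
    return debruijn_table[(x * DEBRUIJN64) >> 58];
}

/* Walk the 1 bits of a summary qword, lowest first, recording the
   indices of the "interesting" main qwords in out[]; returns the count.
   Runs in time proportional to the number of set bits. */
static int visit_interesting(uint64_t summary, int out[64]) {
    int n = 0;
    while (summary) {
        uint64_t lowest = summary & (~summary + 1);  /* x & -x: isolate lowest 1 bit */
        out[n++] = bit_index(lowest);
        summary &= summary - 1;                      /* clear that bit */
    }
    return n;
}
```

Building the lookup table from the same shift construction that `bit_index` inverts means correctness depends only on the 64 six-bit windows of the constant being distinct, which is what makes it a de Bruijn sequence.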


Looks like the agree results are similar to yours, with a dramatic speedup for computing the glb closure of the full Norwegian type hierarchy that Petter sent (norsyg.2018-08-26.tgz) in about a minute and a half:

00:00:00 iter 0, glbs: 4943
00:00:31 iter 1, glbs: 13061
00:01:15 iter 2, glbs: 7516
00:01:28 iter 3, glbs: 374
00:01:28 types:63251 glbs:20951

Best regards,


Here are my results for loading Petter’s norsyg.2018-08-26.tgz with the latest LKB:

  grammar        largest partition   time
  tiny-script    736 types           3 secs
  small-script   4297 types          17 secs
  script         40658 types         4 mins 50 secs

For the LKB, the most expensive operation is not computing the glb types but finding the correct place to insert them into the type hierarchy. I’m sure this could be improved.


On 28 Aug 2017, at 20:33, Glenn Slayden <glenn at thai-language.com> wrote:

Thanks John,

Thanks for clarifying how you use wrap-around in your 192-bit scheme to indicate the areas with signal. The reason for my open-ended (auto-expanding) 1:64 ratio system was that I wanted the bitarray implementation to be a general-purpose component that could be reused in other sparse-representation applications. Hence also the need to maintain the summary bits correctly across arbitrary bitwise operations (some of which, indeed, they helpfully inform).

I have received your files and will try to load them and report performance figures soon.

Best regards,


From: John Carroll [mailto:J.A.Carroll at sussex.ac.uk]
Sent: Thursday, August 24, 2017 5:42 AM
To: developers at delph-in.net
Cc: Glenn Slayden <glenn at thai-language.com>; gslayden at uw.edu
Subject: Re: [developers] speeding up grammar loading in the LKB

Hi Glenn,

A quick follow-up: I like your idea of the summary bits potentially allowing large uninteresting segments of the full bit vectors to be skipped.

I use 192 summary bits (= 3 x 64-bit words), since in my experiments using more bits didn’t give a significant improvement. Although a fixed-size summary representation doesn’t unambiguously identify which words in the full bit vector are zero, it allows the compiler to unroll the loops of logical operations and inline them efficiently.
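To illustrate the point about unrolling: with the summary fixed at 3 qwords, the all-zero test on the AND of two summaries is a handful of straight-line instructions rather than a loop over a variable-length vector. This is only a sketch with an invented name, not the LKB's actual (Lisp) code.

```c
#include <stdbool.h>
#include <stdint.h>

/* True iff two fixed-size 192-bit summaries share no 1 bits.  Because
   the size is a compile-time constant, the compiler can emit this as
   three ANDs, two ORs and a compare, and inline it at every call site. */
static inline bool summaries_disjoint(const uint64_t a[3], const uint64_t b[3]) {
    return ((a[0] & b[0]) | (a[1] & b[1]) | (a[2] & b[2])) == 0;
}
```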

I’ll be interested in your results with the type files I sent you. (To get the graph, I produced cut-down versions by removing final segments of the verbrels.tdl file.)


On 22 Aug 2017, at 23:26, John Carroll <J.A.Carroll at sussex.ac.uk> wrote:

Hi Glenn,

I think my scheme is very similar to yours. Each successive bit in my 192-bit “summary” representation encodes whether the next 64 bits of the full representation have any 1s in them. On reaching the end of the 192 bits, it wraps around (so bit zero of the summary also covers the 193rd group of 64 bits, etc).

I attach the type files. They should be loaded in the following order:



On 22 Aug 2017, at 22:57, Glenn Slayden <glenn at thai-language.com> wrote:

Hello All,

I apologize for not communicating this earlier, but since 2009 Agree has used a similar approach of carrying and maintaining supplemental bits, which I call “summary” bits, alongside each of the large bit vectors used during the GLB computation. Instead of a fixed 192 bits, Agree uses one summary bit per 64-bit ‘qword’ of main bits, with the summary bits all stored together (allocated in chunks of 64). Each individual summary bit indicates whether its corresponding full qword has any 1s in it, and is correctly maintained across all logical operations.

In the best case of an extremely sparse vector, finding that a single summary qword is zero avoids evaluating 4096 bits. More realistically, it’s possible to walk the summary bits in O(number-of-set-bits) time, and this gives direct access to only those 64-bit qwords that are interesting.
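A sketch of what maintaining summary bits across a logical operation might look like, under the layout described above (one summary bit per 64-bit main qword, summary bits packed 64 per qword, so one zero summary qword rules out 4096 main bits at once). The struct, names and helpers are invented for this example; agree itself is written in C#, not C.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t *main;     /* nwords qwords of payload bits */
    uint64_t *summary;  /* (nwords + 63) / 64 qwords of summary bits */
    size_t nwords;
} bitvec;

static bitvec bv_alloc(size_t nwords) {
    bitvec v = { calloc(nwords, sizeof(uint64_t)),
                 calloc((nwords + 63) / 64, sizeof(uint64_t)), nwords };
    return v;
}

static void bv_set(bitvec *v, size_t pos) {
    size_t j = pos / 64;
    v->main[j] |= 1ULL << (pos % 64);
    v->summary[j / 64] |= 1ULL << (j % 64);   /* keep summary in sync */
}

/* dst = a & b, visiting only the qwords whose summary bits are set in
   both inputs (O(set bits), not O(nwords)); dst must start all-zero.
   The result's summary bits are produced as a side effect.  Returns
   true iff any bit survives the AND. */
static bool bv_and(bitvec *dst, const bitvec *a, const bitvec *b) {
    bool any = false;
    size_t nsum = (a->nwords + 63) / 64;
    for (size_t s = 0; s < nsum; s++) {
        uint64_t sum = a->summary[s] & b->summary[s];
        while (sum) {   /* a zero summary qword skips 4096 main bits */
            size_t j = s * 64 + (size_t)__builtin_ctzll(sum);
            sum &= sum - 1;
            uint64_t w = a->main[j] & b->main[j];
            if (w) {
                dst->main[j] = w;
                dst->summary[s] |= 1ULL << (j % 64);
                any = true;
            }
        }
    }
    return any;
}
```

`__builtin_ctzll` (GCC/Clang) stands in here for the de Bruijn bit-scan discussed elsewhere in the thread.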

I would welcome the chance to test Petter’s grammar in Agree.

Best regards,


From: developers-bounces at emmtee.net On Behalf Of John Carroll
Sent: Friday, August 18, 2017 3:26 PM
To: developers at delph-in.net
Subject: Re: [developers] speeding up grammar loading in the LKB


[This is a more detailed follow-up to emails that Petter, Stephan, Woodley and I have been exchanging over the past couple of days]

At the very pleasant DELPH-IN Summit last week in Oslo, Petter mentioned to me that the full version of his Norwegian grammar takes hours to load into the LKB. He gave me some of his grammar files, and it turns out that the time goes into computing glb types for a partition of the type hierarchy that contains almost all the types. In this example grammar, there is a partition of almost 40000 types which cannot be split into smaller disjoint sets of non-interacting types. The LKB was therefore having to consider 40000^2/2 (= 800 million) type/type combinations, each combination taking time linear in the number of types. Although this is an efficiently coded part of the LKB, the computation still took around 30 minutes.

One fortunate property of the glb type computation algorithm is that very few of the type/type combinations are “interesting” in that they actually lead to creation of a glb. So I came up with a scheme to quickly filter out pairs that could not possibly produce glbs (always erring on the permissive side in order not to make the algorithm incorrect).

In this scheme, the bit code representing each type (40000 bits long for this example grammar) is augmented with a relatively short “type filter” code (192 bits empirically gives good results). The two main operations in computing glb types are ANDing pairs of these bit codes and testing whether the result is all zeros, and determining whether one bit code subsumes another (for every zero bit in code 1, the corresponding bit in code 2 must also be zero). By making each bit of a filter code the logical OR of a specific set of bits in the corresponding type bit code, the AND and subsumption tests can also be applied to these short codes as a quick pre-filter.
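The scheme might be sketched as follows in C, using the wrap-around layout described elsewhere in the thread (filter bit i ORs together the occupancy of every 64-bit word j of the full code with j mod 192 == i). All names are invented for illustration; the LKB's actual implementation is in Lisp. Both pre-filters are permissive: they may pass a pair spuriously, but they never reject a pair the full test would accept, which is what keeps the algorithm correct.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define FILTER_QWORDS 3   /* 192 filter bits */

/* Build the 192-bit filter for a full bit code of nwords qwords:
   filter bit (j mod 192) is set iff word j of the code is nonzero. */
static void make_filter(const uint64_t *code, size_t nwords,
                        uint64_t filter[FILTER_QWORDS]) {
    memset(filter, 0, FILTER_QWORDS * sizeof(uint64_t));
    for (size_t j = 0; j < nwords; j++)
        if (code[j] != 0) {
            size_t bit = j % (FILTER_QWORDS * 64);
            filter[bit / 64] |= 1ULL << (bit % 64);
        }
}

/* AND pre-filter: if the filters are disjoint, the full codes must be
   too, so the expensive full AND-and-test can be skipped. */
static bool filters_intersect(const uint64_t a[FILTER_QWORDS],
                              const uint64_t b[FILTER_QWORDS]) {
    for (int i = 0; i < FILTER_QWORDS; i++)
        if (a[i] & b[i]) return true;
    return false;
}

/* Subsumption pre-filter: code a can subsume code b (b's 1 bits are a
   subset of a's) only if the same holds of their filters. */
static bool filter_may_subsume(const uint64_t a[FILTER_QWORDS],
                               const uint64_t b[FILTER_QWORDS]) {
    for (int i = 0; i < FILTER_QWORDS; i++)
        if (b[i] & ~a[i]) return false;
    return true;
}
```

The soundness argument is one line: any 1 bit shared by the full codes lives in some word j that is nonzero in both, so both filters have bit (j mod 192) set; likewise, if b's bits are a subset of a's, every word nonzero in b is nonzero in a.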

This approach reduces the load time for Petter's example grammar from 30 minutes to 4.5 minutes. It also seems to make the computation scale a bit more happily with increasing numbers of types. I attach a graph showing a comparison for this grammar and cut-down versions.

So that grammar writers can benefit soon, Stephan will shortly re-build the LOGON image to include this new algorithm.


