[developers] speeding up grammar loading in the LKB

John Carroll J.A.Carroll at sussex.ac.uk
Sat Oct 21 01:27:54 CEST 2017


Hi developers,

I thought I'd send an update on the new LKB's type hierarchy processing. At the beginning of September, I had this running in around 190 secs for Norsyg (norsyg.2018-08-26.tgz) on a mid-range Intel i5 desktop machine. But I convinced myself that it could be better.

In an email of 1 September, Glenn mentions some special configurations of the hierarchy that can be exploited to speed up processing. I added the ones he mentions, and also a couple of others that I think he doesn't. Below are the main steps performed by the LKB and the improvements I made.

* Partition types into non-interacting cliques
- Removed code that looked for tree-shaped configurations of the hierarchy and excluded them, since a more general case is dealt with below.

* Assign a bit code to each type in the current partition
- Only assign bit codes to types that are 'active' with respect to GLB computation: an active type has more than 1 parent and/or more than 1 daughter, and is not inside a tree-shaped part of the type hierarchy (i.e. no descendant has more than 1 parent).

* Check pairs of types and create GLB type if no unique common subtype
- Only check pairs of types that both have more than 1 daughter; other pairs could have a non-unique GLB but this would be found from checks on their descendants.

* Insert GLB types into hierarchy, recomputing parent/daughter links
- Insert each GLB type sequentially, by checking subsumption relationships with all other 'active' types. The subsumption relation could be in either direction with respect to other GLB types, but for authored types the 'subsume' relation is only relevant to types with more than one parent, and the 'subsumed by' relation only to types with more than one daughter.
- Fixed the LKB's long-standing redundant links bug by adding a final step to adjust the links from the parents and daughters of each GLB type (fortunately this is pretty cheap).
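The 'active' test in the bit-code assignment step above can be illustrated with a minimal C sketch (hypothetical data structures and field names; the LKB itself is written in Common Lisp):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical type node; field names are illustrative, not the LKB's. */
typedef struct type_node {
    size_t n_parents;
    size_t n_daughters;
    struct type_node **daughters;
} type_node;

/* True iff some descendant of t (t included) has more than one parent,
   i.e. t is NOT inside a purely tree-shaped region of the hierarchy. */
static bool has_multi_parent_descendant(const type_node *t) {
    if (t->n_parents > 1) return true;
    for (size_t i = 0; i < t->n_daughters; i++)
        if (has_multi_parent_descendant(t->daughters[i])) return true;
    return false;
}

/* A type is 'active' for GLB computation if it has more than one parent
   and/or more than one daughter, and is not inside a tree-shaped part
   of the hierarchy. */
bool active_p(const type_node *t) {
    if (t->n_parents <= 1 && t->n_daughters <= 1) return false;
    for (size_t i = 0; i < t->n_daughters; i++)
        if (has_multi_parent_descendant(t->daughters[i])) return true;
    return t->n_parents > 1;
}
```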

I also restructured some of the code, and changed a few data structures into more suitable ones, e.g. lists to hash tables.

At this point, I found that three 64-bit "summary" words worked best, and Norsyg was down to 120 secs. But I still wasn't happy, and started experimenting first with an index of the first non-zero word in the type bit code, and then an index of the last non-zero word. I don't know why I didn't try this before, since the bit codes are so sparse. Taking account of this information in the basic logical operations on bit codes gave a 3x speed-up.
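The use of first/last non-zero word indices can be sketched in C roughly as follows (illustrative types and names, not the LKB's actual Lisp code):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sparse bit code: only words in [first, last] may be non-zero. */
typedef struct {
    uint64_t *words;   /* full-length word array */
    size_t first;      /* index of first non-zero word */
    size_t last;       /* index of last non-zero word */
} bit_code;

/* Test whether the AND of two codes is all zeros, touching only the
   overlap of their non-zero ranges; codes with disjoint ranges are
   decided immediately without looking at any words. */
bool and_is_zero(const bit_code *a, const bit_code *b) {
    size_t lo = a->first > b->first ? a->first : b->first;
    size_t hi = a->last  < b->last  ? a->last  : b->last;
    if (lo > hi) return true;            /* ranges don't overlap */
    for (size_t i = lo; i <= hi; i++)
        if (a->words[i] & b->words[i]) return false;
    return true;
}
```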

It's possible to push this indexing further, and I am also now doing some judicious sorting of lists of types based on their start index, so that some highly combinatorial computations can be terminated early. I am also indexing some of these sorted lists in order to start computations in the middle of the list rather than at the beginning. This gives another 2x speed-up. (Interestingly, the summary words now provide much less benefit, and I have gone down to just one of them (= 64 bits), which is only 15% faster than no summary words at all.)

So now the type hierarchy processing for the full Norsyg takes only 20 secs! Here are timings for other grammars:

ERG  0.10 sec
JACY  0.007 sec
GG  0.16 sec

The percentage of time taken by each of the steps is:

0.2%  Partition types into non-interacting cliques
0.8%  Assign a bit code to each type
26.0%  Check pairs of types and create GLB type if no unique common subtype
73.0%  Insert GLB types into hierarchy, recomputing parent/daughter links

The first step, partitioning the types, is cheap but gives modest speed gains: with Norsyg there's an overall 5% speed improvement compared to processing the whole hierarchy at once, and with the ERG there's a 30% improvement.

I attach the LKB source file concerned, which contains more detail in the comments and obviously the full story in the code.

John


On 1 Sep 2017, at 09:47, John Carroll <J.A.Carroll at sussex.ac.uk<mailto:J.A.Carroll at sussex.ac.uk>> wrote:

Hi all,

Glenn and I have been exchanging emails off-list about implementation issues in GLB computation and speeding up logical operations on sparse bit vectors. We have also run agree and the new version of the LKB on Petter’s whole grammar (see his posting to the list on 26 August). For the benefit of Woodley, Ann and anyone else who’s interested, I append a few excerpts.

John


As I mentioned in my previous note, given a summary qword, I use (x & -x) to directly access, in time linear in the number of summary 1 bits, only the interesting qwords. Given such a single-bit mask, I use the 64-bit de Bruijn number “0x07EDD5E59A4E28C2” to find its log, so as to index into the main qwords. The code is hairy, but I seem to recall that testing showed large speedups. ...
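The de Bruijn trick Glenn describes can be sketched in C as follows (a sketch of the general technique, not agree's actual C# code):

```c
#include <stdint.h>

/* The 64-bit de Bruijn constant quoted above. Multiplying a single-bit
   value (1 << n) by it shifts a distinct 6-bit pattern into the top
   bits, so (x * K) >> 58 maps each power of two to a unique table slot. */
#define DEBRUIJN 0x07EDD5E59A4E28C2ULL

static int debruijn_table[64];

/* Build the lookup table once: the slot reached from (1 << i) maps back to i. */
void init_debruijn_table(void) {
    for (int i = 0; i < 64; i++)
        debruijn_table[(uint64_t)(DEBRUIJN << i) >> 58] = i;
}

/* log2 of x, where x must have exactly one bit set,
   e.g. the result of (x & -x) on a non-zero qword. */
int log2_single_bit(uint64_t x) {
    return debruijn_table[(x * DEBRUIJN) >> 58];
}
```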

Glenn


Looks like the agree results are similar to yours: a dramatic speedup, computing the glb closure of the full Norwegian type hierarchy that Petter sent (norsyg.2018-08-26.tgz) in about a minute and a half:

00:00:00 iter 0, glbs: 4943
00:00:31 iter 1, glbs: 13061
00:01:15 iter 2, glbs: 7516
00:01:28 iter 3, glbs: 374
00:01:28 types:63251 glbs:20951

Best regards,

Glenn


Here are my results for loading Petter’s norsyg.2018-08-26.tgz with the latest LKB:

  grammar        largest partition   time
  tiny-script    736 types           3 secs
  small-script   4297 types          17 secs
  script         40658 types         4 mins 50 secs

For the LKB, the most expensive operation is not computing the glb types but finding the correct place to insert them into the type hierarchy. I’m sure this could be improved.

John



On 28 Aug 2017, at 20:33, Glenn Slayden <glenn at thai-language.com<mailto:glenn at thai-language.com>> wrote:

Thanks John,

Thanks for clarifying how you use wrap-around with your 192-bit scheme to indicate the areas with signal. The reason for my open-ended (auto-expanding) 1:64 ratio system was that I wanted the bitarray implementation to be a general-purpose component that could possibly be used in other sparse-representation applications. Hence also the correct maintenance of the summary bits across arbitrary bitwise operations (some of which they in fact helpfully inform).

I have received your files and will try to load them and report performance figures soon.

Best regards,

Glenn

From: John Carroll [mailto:J.A.Carroll at sussex.ac.uk]
Sent: Thursday, August 24, 2017 5:42 AM
To: developers at delph-in.net<mailto:developers at delph-in.net>
Cc: Glenn Slayden <glenn at thai-language.com<mailto:glenn at thai-language.com>>; gslayden at uw.edu<mailto:gslayden at uw.edu>
Subject: Re: [developers] speeding up grammar loading in the LKB

Hi Glenn,

A quick follow-up: I like your idea of the summary bits potentially allowing large uninteresting segments of the full bit vectors to be skipped.

I use 192 summary bits (= 3 x 64 bit words) since in my experiments using more bits didn’t give a significant improvement. Although a fixed-size summary representation doesn’t unambiguously identify those words in the full bit vector that are zero, it allows the compiler to unroll loops of logical operations and inline them efficiently.

I’ll be interested in your results with the type files I sent you. (To produce the graph, I made cut-down versions of the grammar by removing final segments of the verbrels.tdl file.)

John



On 22 Aug 2017, at 23:26, John Carroll <J.A.Carroll at sussex.ac.uk<mailto:J.A.Carroll at sussex.ac.uk>> wrote:

Hi Glenn,

I think my scheme is very similar to yours. Each successive bit in my 192-bit “summary” representation encodes whether the next 64 bits of the full representation contain any 1s. On reaching the end of the 192 bits, it starts again (so bit zero of the summary also encodes the 193rd group of 64 bits, etc).

I attach the type files. They should be loaded in the following order:

 coretypes.tdl
 extratypes.tdl
 linktypes.tdl
 verbrels.tdl
 reltypes.tdl

John


On 22 Aug 2017, at 22:57, Glenn Slayden <glenn at thai-language.com<mailto:glenn at thai-language.com>> wrote:

Hello All,

I apologize for not communicating this earlier, but since 2009 Agree has used a similar approach of carrying and maintaining supplemental bits, which I call “summary” bits, along with each of the large bit vectors for use during the GLB computation. Instead of a fixed 192 bits, Agree uses one “summary” bit per 64-bit ‘qword’ of main bits, where summary bits are all stored together (allocated in chunks of 64). Each individual summary bit indicates whether its corresponding full qword has any 1s in it, and is correctly maintained across all logical operations.

In the best case of an extremely sparse vector, finding that one summary qword is zero avoids evaluating 4096 bits. More realistically, however, it’s possible to walk the summary bits in O(number-of-set-bits), and this provides direct access to (only) those 64-bit qwords that are interesting.
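The O(number-of-set-bits) walk can be sketched in C like this (agree itself is written in C#; here the bit position is found with a naive loop, where the de Bruijn lookup mentioned elsewhere in this thread would be faster):

```c
#include <stddef.h>
#include <stdint.h>

/* Sum the non-zero qwords of main_words, visiting only those flagged
   by the summary qword (summary bit i set means main_words[i] != 0).
   The loop body runs once per set summary bit, not once per qword. */
uint64_t sum_nonzero_qwords(uint64_t summary, const uint64_t *main_words) {
    uint64_t total = 0;
    while (summary) {
        uint64_t lowest = summary & -summary;   /* isolate lowest set bit: (x & -x) */
        size_t index = 0;
        while ((lowest >> index) != 1) index++; /* position of that bit */
        total += main_words[index];
        summary &= summary - 1;                 /* clear lowest set bit */
    }
    return total;
}
```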

I would welcome the chance to test Petter’s grammar in Agree.

Best regards,

Glenn



From: developers-bounces at emmtee.net<mailto:developers-bounces at emmtee.net> [mailto:developers-bounces at emmtee.net] On Behalf Of John Carroll
Sent: Friday, August 18, 2017 3:26 PM
To: developers at delph-in.net<mailto:developers at delph-in.net>
Subject: Re: [developers] speeding up grammar loading in the LKB


Hi,

[This is a more detailed follow-up to emails that Petter, Stephan, Woodley and I have been exchanging over the past couple of days]

At the very pleasant DELPH-IN Summit last week in Oslo, Petter mentioned to me that the full version of his Norwegian grammar takes hours to load into the LKB. He gave me some of his grammar files, and it turns out that the time goes into computing glb types for a partition of the type hierarchy that contains almost all the types. In this example grammar, there is a partition of almost 40000 types which cannot be split into smaller disjoint sets of non-interacting types. The LKB was having to consider 40000^2/2 (= 800 million) type/type combinations, each combination taking time linear in the number of types. Although this is an efficiently coded part of the LKB, the computation still took around 30 minutes.

One fortunate property of the glb type computation algorithm is that very few of the type/type combinations are “interesting” in that they actually lead to creation of a glb. So I came up with a scheme to quickly filter out pairs that could not possibly produce glbs (always erring on the permissive side in order not to make the algorithm incorrect).

In this scheme, the bit code representing each type (40000 bits long for this example grammar) is augmented with a relatively short “type filter” code (192 bits empirically gives good results). The two main operations in computing glb types are ANDing pairs of these bit codes and testing whether the result is all zeros, and determining whether one bit code subsumes another (for every zero bit in code 1, the corresponding bit in code 2 must also be zero). By making each bit of a filter code the logical OR of a specific set of bits in the corresponding type bit code, the AND and subsumption tests can also be applied to these short codes as a quick pre-filter.
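A C sketch of how such a filter code might be built and used as a pre-filter, assuming the wrap-around mapping described elsewhere in this thread (full word w folds onto filter bit w mod 192; names are illustrative, and the LKB itself is Common Lisp):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define FILTER_BITS 192
#define FILTER_WORDS (FILTER_BITS / 64)

/* Fold a full type bit code into a 192-bit filter: filter bit (w mod 192)
   is set iff any of full words w, w+192, w+384, ... is non-zero. */
void make_filter(const uint64_t *full, size_t n_words,
                 uint64_t filter[FILTER_WORDS]) {
    for (size_t k = 0; k < FILTER_WORDS; k++) filter[k] = 0;
    for (size_t w = 0; w < n_words; w++)
        if (full[w]) {
            size_t bit = w % FILTER_BITS;
            filter[bit / 64] |= 1ULL << (bit % 64);
        }
}

/* Pre-filter for the AND test: if the filters are disjoint, the full
   codes are certainly disjoint too; otherwise the full test is needed
   (always erring on the permissive side). */
bool filters_maybe_intersect(const uint64_t a[FILTER_WORDS],
                             const uint64_t b[FILTER_WORDS]) {
    for (size_t k = 0; k < FILTER_WORDS; k++)
        if (a[k] & b[k]) return true;
    return false;
}

/* Pre-filter for subsumption (a subsumes b): if b's filter has a bit
   outside a's, the full code of a cannot subsume that of b. */
bool filter_may_subsume(const uint64_t a[FILTER_WORDS],
                        const uint64_t b[FILTER_WORDS]) {
    for (size_t k = 0; k < FILTER_WORDS; k++)
        if (b[k] & ~a[k]) return false;
    return true;
}
```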

This approach reduces the load time for Petter's example grammar from 30 minutes to 4.5 minutes. It also seems to make the computation scale a bit more happily with increasing numbers of types. I attach a graph showing a comparison for this grammar and cut-down versions.

So that grammar writers can benefit soon, Stephan will shortly re-build the LOGON image to include this new algorithm.

John



-------------- next part --------------
A non-text attachment was scrubbed...
Name: checktypes.lsp
Type: application/octet-stream
Size: 65026 bytes
Desc: checktypes.lsp
URL: <http://lists.delph-in.net/archives/developers/attachments/20171020/582f0f75/attachment-0001.obj>

