[developers] preprocessing functionality in pet

Bernd Kiefer kiefer at dfki.de
Thu Feb 8 18:10:58 CET 2007

> maybe instead of spending the time having a lively discussion, we
> should stat off by all just sitting down and writing documentation?
> Our discussions tend to end with the decision that we need
> documentation and then it doesn't happen.  

I agree that this is a good and valid point, and I won't exclude myself from the criticism.

> I'm somewhat guilty of this
> with the morphology stuff, I admit, although I have emailed a fairly
> detailed account.  I'd welcome specific questions if people don't
> understand it.  I didn't think there was an urgent need to document
> the details of how the morphological rule filter is applied, though,
> because the behaviour is declarative and gives the same results as if
> the filter were not there (except much faster).  There is an unusually
> high amount of comments in that part of the LKB code btw. 

I think the morphology stuff itself is quite settled; sorry I didn't
make this clearer. Responsibility for the better filter is with me, no
question.

What is not clear, however, is whether this processing should also be
applied to, for example, generic entries (resp. their surface forms) or
similar things coming from the input chart, where a (lexical) type is
supplied in addition to the surface form (see the mail of Tim and the request by

> My belief about case is that, in the long-term, the systems should not
> be normalising case, except as defined by a grammar-specific
> preprocessor.  Wasn't this a conclusion from Jerez?  I still intend to
> take all the case conversion out of the LKB.

OK with me. Still, Berthold wants a "super robust" mode where both the
input and lexicon access are normalized. Otherwise, he (and PET) has to
provide functionality for handling, e.g., sentence-initial capitals. And
this again raises the question of the preprocessor
formalism/implementation.
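For illustration, here is a minimal sketch of what such a grammar-side normalization step for sentence-initial capitals might look like. Everything here is hypothetical (the function name, the lexicon-lookup interface), not actual PET or LKB code; the point is just that decapitalization should consult the lexicon so that proper nouns survive:

```python
def normalize_initial_capital(tokens, in_lexicon):
    """Lowercase the sentence-initial token, but only if the
    capitalized form is not itself a lexicon entry (e.g. a proper
    noun like "Bernd") and the lowercased form is known.
    `in_lexicon` is an assumed predicate over surface forms."""
    if not tokens:
        return tokens
    first = tokens[0]
    if first[:1].isupper() and not in_lexicon(first) and in_lexicon(first.lower()):
        return [first.lower()] + tokens[1:]
    return tokens

# Toy lexicon standing in for real lexicon access:
lexicon = {"the", "dog", "barks", "Bernd"}
normalize_initial_capital(["The", "dog", "barks"], lexicon.__contains__)
# -> ["the", "dog", "barks"], while a sentence starting with "Bernd"
#    would be left untouched.
```

Whether this lives in a grammar-specific preprocessor or in a "super robust" lexicon-access mode is exactly the open question above.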

> I would like to know where/why the ECL preprocessor is so slow - I
> hadn't heard this.  Is it because it's writing out a full PET input
> chart or something?  I would be surprised if we couldn't make the
> speed acceptable in Lisp unless ECL itself is very inefficient, but
> then the MRS stuff runs reasonably, doesn't it?

This seems to be an issue with the fspp library and ECL. At least that
was my impression when I tried it last time (which was some time
ago). Maybe this has improved?

Finally, this wasn't meant as a call to arms. I would just like these
things to be settled.


In a world without walls and fences, who needs Windows or Gates?

Bernd Kiefer                                            Am Blauberg 16
kiefer at dfki.de                                      66119 Saarbruecken
+49-681/302-5301 (office)                      +49-681/3904507  (home)
