[Fwd: Re: [developers] processing of lexical rules]
Stephan Oepen
oe at csli.Stanford.EDU
Tue Feb 15 03:17:39 CET 2005
hi bernd,
> I think if there is a list of options to gain efficiency here, i'd
> like to add the code i just implemented to activate the rule filter
> as (e), since it seems to work for German, at least.
certainly, i even offered (in private email to berthold) to try and
implement the rule filter approach in the LKB (as another temporary
measure while re-design and re-implementation proceed).
maybe we should then agree on a coherent naming scheme, so that for a
change both systems can use the same names for these parameters. how
about the following:
(a) orthographemics-maximum-chain-depth := 2.
(b) orthographemics-minimum-stem-length := 1.
(c) orthographemics-duplicate-filter.
(d) orthographemics-bottom-rules :=
a-lexeme-negative-cons-stem-infl-rule
adj2adv-lexeme-infl-rule.
(e) orthographemics-cohesive-chains.
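for readers less familiar with these settings, a rough sketch of what
(a) and (b) are meant to constrain — the rule inventory, helper names,
and logic below are purely illustrative, not the actual LKB or PET
implementation:

```python
# Illustrative sketch only: how limits like (a) and (b) could prune
# orthographemic rule chains during morphological analysis.  The toy
# rules and names here are hypothetical, not LKB/PET code.

MAX_CHAIN_DEPTH = 2   # (a) orthographemics-maximum-chain-depth
MIN_STEM_LENGTH = 1   # (b) orthographemics-minimum-stem-length

# toy orthographemic rules: (suffix to strip, rule name)
RULES = [("s", "plural-rule"), ("ed", "past-rule")]

def chains(form, depth=0, applied=()):
    """Yield (stem, rule-chain) hypotheses for a surface form."""
    yield form, applied
    if depth >= MAX_CHAIN_DEPTH:          # (a): cap chain length
        return
    for suffix, rule in RULES:
        if form.endswith(suffix):
            stem = form[:-len(suffix)]
            if len(stem) < MIN_STEM_LENGTH:   # (b): drop short stems
                continue
            yield from chains(stem, depth + 1, applied + (rule,))

print(sorted(chains("walked")))
# → [('walk', ('past-rule',)), ('walked', ())]
```

the filters in (c)-(e) would prune this hypothesis space further, e.g.
by discarding duplicate chains or chains that cannot be completed.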
i would of course agree with ann that we should not have more of these
than is practically needed, since they are all arbitrary stipulations.
cheers - oe
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++ Universitetet i Oslo (ILF); Boks 1102 Blindern; 0317 Oslo; (+47) 2285 7989
+++ CSLI Stanford; Ventura Hall; Stanford, CA 94305; (+1 650) 723 0515
+++ --- oe at csli.stanford.edu; oe at hf.uio.no; stephan at oepen.net ---
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++