[developers] PET ranking with the ERG

Stephan Oepen oe at ifi.uio.no
Sat Aug 29 19:56:01 CEST 2009


dan and fran,

> It appears that the PET ranking machinery is behaving weirdly.  When
> we parse, as one does, "The dog barks.", the fragment analysis is more
> highly ranked, although we think this was not the case in the recent
> past.

i suspect you thought there was a change in ranking behavior, where in
fact the change was a temporary bug in PET, which caused the dynamic
choice of root symbols from the web demo to fail.  in the past, one
would not typically have seen the sentential and the fragment readings
together, but for several weeks this summer the ERG on-line demo
erroneously allowed all root symbols.

when we restored the mechanism for selecting root symbols at run-time
about a week ago, the ERG web demo returned to its normal behavior.
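
for the record, the run-time selection is nothing more than a filter
over complete analyses.  a minimal sketch in python (the symbol names
are made up for illustration and need not match the actual ERG root
symbols):

    from dataclasses import dataclass

    @dataclass
    class Analysis:
        root_symbol: str   # e.g. "root_strict" (sentence) vs. "root_frag"
        score: float       # parse selection (MaxEnt) score

    def filter_by_root(parses, active_roots):
        # only analyses whose root symbol is currently enabled survive;
        # with all roots active (the summer bug), fragments compete with
        # sentences on model score alone.
        return [p for p in parses if p.root_symbol in active_roots]

    parses = [Analysis("root_frag", -1.2), Analysis("root_strict", -1.8)]
    print(filter_by_root(parses, {"root_strict"}))  # fragment filtered out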

however, if one asks for both sentences and fragments, then the
fragment reading really does have a higher probability than the
sentence.  it is a very common NP structure, after all (with `barks'
as a plural noun), and our current parse selection models do /not/
take the root symbols into account.  i suspect examples like this one
simply expose a shortcoming in our model design; we plan to include
the root symbols in a fresh series of experiments this fall.
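
to make that concrete, consider a toy log-linear ranking in python;
the feature names and weights below are invented and bear no relation
to the actual treebank-trained model, but they show how the (fragment)
NP analysis can out-score the sentential one when the root symbol is
invisible to the model:

    import math

    # invented features and weights, purely for illustration
    WEIGHTS = {"np_det_n": 1.2, "compound_n": 1.4,
               "subj_head": 0.6, "v_intrans": 0.4}

    def score(features):
        # log-linear score: sum of the weights of the features that fire;
        # the root symbol (sentence vs. fragment) contributes no feature.
        return sum(WEIGHTS.get(f, 0.0) for f in features)

    fragment = ["np_det_n", "compound_n"]              # [the [dog barks]]
    sentence = ["np_det_n", "subj_head", "v_intrans"]
    z = sum(math.exp(score(f)) for f in (fragment, sentence))
    for name, feats in (("fragment", fragment), ("sentence", sentence)):
        print(name, round(math.exp(score(feats)) / z, 3))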

but there was a deficiency in the interplay between the web demo and
the [incr tsdb()] protocol it uses to drive parsing and generation
clients: it would never activate n-best mode (selective unpacking),
and for PET this means that only a subset of the MaxEnt features are
used (because exhaustive unpacking predates the introduction of
grandparenting).  with a model trained to take advantage of
grandparenting, it is not surprising that ranking degrades once the
grandparenting features are unavailable.  i first noticed this when
looking into your original report, parsing, as one does, `Kim loves
Sandy.'  without selective unpacking, a depictive reading (parallel to
`Kim arrived naked.') is ranked best.
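
for readers not steeped in the terminology: grandparenting pairs each
rule instantiation with up to k ancestor labels.  a toy sketch in
python (the tree encoding and feature format are simplified stand-ins,
not PET's actual templates) of what the ranker sees with and without
grandparenting:

    # toy grandparenting feature extraction, purely for illustration
    def features(tree, ancestors=(), max_gp=2):
        label, *daughters = tree
        rhs = " ".join(d if isinstance(d, str) else d[0] for d in daughters)
        feats = []
        # one feature per grandparenting level, as far up as the tree goes
        for level in range(min(max_gp, len(ancestors)) + 1):
            context = ancestors[-level:] if level else ()
            feats.append("^".join(context + (label,)) + " -> " + rhs)
        for d in daughters:
            if not isinstance(d, str):
                feats.extend(features(d, ancestors + (label,), max_gp))
        return feats

    tree = ("S", ("NP", "kim"), ("VP", ("V", "loves"), ("NP", "sandy")))
    all_feats = features(tree)
    # exhaustive unpacking, in effect, only ever scores the level-0 subset
    no_gp = [f for f in all_feats if "^" not in f]

a model whose weight mass largely sits on the grandparented features
is, in effect, crippled when only the level-0 subset can fire.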

intuitively, one would expect the depictive construction to be less
common than a plain SVO structure.  and indeed, once selective
unpacking (and with it grandparenting, as used when training the
model) is turned on, the relative ranking is reversed.

yesterday, i revised the LOGON code for on-line demonstrators to turn
on n-best search by default.  for the depictives, at least, things are
looking better now.

                                                    all best  -  oe

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++ Universitetet i Oslo (IFI); Boks 1080 Blindern; 0316 Oslo; (+47) 2284 0125
+++     CSLI Stanford; Ventura Hall; Stanford, CA 94305; (+1 650) 723 0515
+++       --- oe at ifi.uio.no; oe at csli.stanford.edu; stephan at oepen.net ---
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


