[developers] results after edge limit reached?

Stephan Oepen oe at ifi.uio.no
Sat Aug 15 19:19:41 CEST 2009


hi again,

are you quite sure of that observed asymmetry in PET behavior,  
rebecca?  even though i'm no fan of the -tsdbdump mode of operation  
(as it duplicates existing [incr tsdb()] internals in the PET code  
base), i was under the impression that it shared enough of the result  
reporting code with standard [incr tsdb()] client mode to make it  
unlikely that one would see different outcomes.

either way, which statistics to use should ideally depend on context.   
in contrasting parsing algorithms and for item-by-item profile  
comparison, i find it convenient to ignore any out-of-scope items,  
including ones that time out after having found a sub-set of solutions.

in an application-oriented perspective, however, it might be tempting  
to count those partial solutions as coverage (in principle they should  
be qualified as owed to a robustness heuristic), and certainly  
substantial cpu time was expended on these items (just as our parsers  
tend to reject some inputs in zero time, e.g. ones exposing lexical  
gaps).  in this respect, averaging over all inputs gives a more  
practical estimate of processing cost.  for your thesis, i believe i  
would recommend you use this latter approach.
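the two conventions can be contrasted in a minimal python sketch (the item tuples below are invented for illustration; the only assumption taken from the thread is that readings = -1 flags an item that hit a resource limit):

```python
# each item: (readings, cpu time in ms); readings == -1 marks a
# resource failure (time-out or edge limit), per the convention
# discussed in this thread -- the numbers themselves are made up
items = [(1, 320), (0, 150), (-1, 60000), (3, 480)]

# algorithm comparison: ignore items that hit a resource limit
scoped = [t for r, t in items if r >= 0]
mean_scoped = sum(scoped) / len(scoped)

# application-oriented view: every input costs cpu time, so
# average over all items, including the ones that timed out
mean_all = sum(t for _, t in items) / len(items)

print(round(mean_scoped, 1), mean_all)  # → 316.7 15237.5
```

as the toy numbers show, a single timed-out item can dominate the second average, which is exactly why it is the more honest estimate of practical processing cost.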

in what i described in my earlier message (so-called ‘müller mode’  
of reporting parse results in PET), more information is available in  
the profiles (as stefan used to argue forcefully): exclusion of items  
that timed out can easily be accomplished through a TSQL condition on  
the ‘error’ field.  so in principle reporting in the LKB should be  
revised in this spirit, but then we hardly look at the LKB in an  
application-oriented perspective.
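the same exclusion can of course be done after export; a sketch in python over hypothetical profile rows (the ‘error’ strings here are placeholders, not the exact text cheap writes, and the dict layout only loosely mirrors the parse relation):

```python
# hypothetical rows from a parse relation; in [incr tsdb()] the
# time-out is recorded in the 'error' field, so an empty error
# string is taken to mean the search ran to completion
rows = [
    {"i-id": 1, "readings": 2, "error": ""},
    {"i-id": 2, "readings": 1, "error": "timeout"},     # partial result
    {"i-id": 3, "readings": -1, "error": "edge limit"}, # no result
]

# keep only items whose search completed, i.e. the analogue of a
# TSQL condition on the 'error' field
complete = [r for r in rows if not r["error"]]
print([r["i-id"] for r in complete])  # → [1]
```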

if -tsdbdump mode does indeed report in pre-müller mode, i think that  
should be considered a bug.

best, oe




On Aug 15, 2009, at 2:33 PM, Rebecca Dridan <bec.dridan at gmail.com>  
wrote:

> Sorry, that should have said the parse script, not the export script.
>
> Rebecca Dridan wrote:
>> Maybe this is related to an inconsistency I am seeing between test  
>> runs using the export script (with --terg) and running cheap with  
>> the -tsdbdump option?
>> Running cheap directly, all time outs have readings = -1, but  
>> through the export script they are mostly 0, and some greater than  
>> 0. As you say, it changes the [incr tsdb()] statistics. Should I  
>> assume the export script represents consensus and time outs should  
>> be considered in the time and memory use figures? (Ultimately I'm  
>> using a perl script anyway, to save loading the fine system, so my  
>> calculations can go either way.)
>> Rebecca
>> Stephan Oepen wrote:
>>> jeff,
>>>
>>> i have no source code accessible right now, but i'm pretty sure  
>>> that PET vs. the LKB differ in this respect.  from the profiling  
>>> point of view, i (used to) think that items that hit any kind of  
>>> resource limit should be flagged (readings = -1) and excluded from  
>>> many of the [incr tsdb()] statistics.  years ago, stefan müller  
>>> argued forcefully for results that had been found at that point  
>>> to be returned, nevertheless, which is what PET implements today 
>>> .  note that the time-out is recorded in the error field alright 
>>> , but seeing that the search for analyses was not complete, sele 
>>> ctive unpacking cannot guaratee returning the correct n-best lis 
>>> t.  in the LKB, on the other hand, time-outs are signalled (inte 
>>> rnally) by raising an exception.  in these cases, control is ret 
>>> urned immediately to [incr tsdb()], and no attempt is made at en 
>>> umerating solutions.  hence, i would expect that you are seeing  
>>> the results of a complete search whenever readings >= 0.  this s 
>>> hould be true of parsing and generation alike (in the LKB).
>>>
>>> all best, oe
>>>
>>>
>>> On Aug 14, 2009, at 1:47 PM, Michael Wayne Goodman  
>>> <goodmami at u.washington.edu> wrote:
>>>
>>>> Hi there,
>>>>
>>>> In parsing with PET and (particularly) generation with the LKB,  
>>>> if the
>>>> edge-limit has been exhausted after some results have been found,  
>>>> are
>>>> those results returned? I checked some items where the edge-limit  
>>>> was
>>>> reached and did not observe any results for them, but I can't be
>>>> certain that that will always be the case.
>>>>
>>>> Any help or insight would be appreciated.
>>>>
>>>> Thanks,
>>>>
>>>> -- 
>>>> -Michael Wayne Goodman
>>>
>>>
>>>
>




