[developers] Questions about the MRS algebra from Seattle

Emily M. Bender ebender at uw.edu
Thu Jul 2 05:03:08 CEST 2015


Hi again,

Still replying without much deep thought --- I hope others will jump in,
too!

On Tue, Jun 30, 2015 at 2:41 AM, Ann Copestake <aac10 at cl.cam.ac.uk> wrote:

>  Hi Emily,
>
> Also quick replies ....
>
>
> On 29/06/2015 17:46, Emily M. Bender wrote:
>
> Dear Ann,
>
>  Thanks for the quick answers!  Some further comments/questions below:
>
> On Fri, Jun 26, 2015 at 4:09 PM, Ann Copestake <aac10 at cl.cam.ac.uk> wrote:
>
>> here's some quick answers (on the basis I may never get round to replying
>> if I try and reply more carefully)
>>
>> On 26/06/2015 22:13, Emily M. Bender wrote:
>>
>>> Dear all,
>>>
>>> The UW group has been reading and discussing Copestake et al 2001
>>> and Copestake 2007, trying to get a better understanding of the MRS
>>> algebra.  We have a few questions---I think some of these issues have
>>> been
>>> proposed for the Summit, but I'm impatient, so I thought I'd try to get
>>> a discussion going over email.  UW folks: Please feel free to chime in
>>> with other questions I haven't remembered just now.
>>>
>>> The two big ones are:
>>>
>>> (1) Copestake et al 2001 don't explicitly state what the purpose of the
>>> algebra is.  My understanding is that it provides a guarantee that the
>>> MRSs
>>> produced by a grammar are well-formed, so long as the grammar is
>>> algebra-compliant.   Well-formed MRSs (in this sense) would necessarily
>>> have an interpretation because the algebra shows how to compose the
>>> interpretation for each sement.  Is this on track?  Are there other
>>> reasons
>>> to want an algebra?
>>>
>>
>>  We have never managed to prove that MRSs constructed according to the
>> algebra will be scopable, but I think that is the case.  But more
>> generally, the algebra adds constraints to the idea of
>> compositionality that you don't get if you simply use feature
>> structures.  It excludes some possible ways of doing semantic composition
>> and therefore constitutes a testable hypothesis about the nature of the
>> syntax-semantics interface.  It also allows one to do the same semantic
>> composition with grammars in formalisms other than typed feature structures.
>>
>>
>  I'm still trying to understand why it's important (or maybe interesting
> is
> the goal, rather than important?) to exclude some possible ways of doing
> semantic composition.  Are there ways that are problematic for some reason?
> Is it a question of constraining the space of possible grammars (e.g. for
> learnability concerns)?  Related to issues of incremental processing?
>
>
> The following may sound snarky but I really don't mean it this way - would
> you have the same type of questions about syntactic formalism?  Because I
> can answer in two ways - one is about why I think it's important to build
> testable formal models for language (models which aren't equivalent to
> general programming languages, so more constrained than typed feature
> structures) and the other is about what the particular issues are for
> compositional semantics.  I don't want to reach the conclusion that
> semantic composition requires arbitrary programs without looking at more
> constrained alternatives.  So yes: learnability, processing efficiency
> (human and computational), incrementality and so on, but one can't really
> look at these in detail without first having an idea of plausible models.
>
>
Not snarky and totally a fair question.  Our syntactic formalism in fact
isn't constrained.  Part of what's appealing about HPSG is that the
formalism is flexible enough to state different theories in.  (As opposed
to certain other approaches that have to hard-code theoretical claims
into the formalism.)  So at that point, I think it makes sense to see the
algebra as a theory of composition that can be implemented in the HPSG
formalism (or others).  I'm totally with you on building testable formal
models (and testable at scale), so the program of formalizing the general
approach of the ERG and then seeing whether the whole grammar can be made
consistent with that set of 'best practices' makes a lot of sense.  I
guess the piece that's new to me is why this should be considered for
composition separately from the rest of the grammar.

(Also, for the record, I'm always skeptical about arguments grounded in
learnability, since I think they require a leap I'm not ready to make:
that our models are actually relatable to what's 'really' going on in
wetware.  But processing efficiency, incrementality, etc. are still
interesting to me.)
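[Editor's note: to make the idea of "algebra-constrained composition" concrete, here is a minimal illustrative sketch in Python. All names here (`Sement`, `compose`, the tuple encoding of hooks, slots, and rels) are invented for illustration; they are not the LKB's or the ERG's actual data structures, and real composition uses feature-structure unification rather than variable renaming.]

```python
class Sement:
    """Toy stand-in for an algebra sement: hook + slots + rels + hcons.

    Hypothetical encoding: a hook is a (label, index) pair of variable
    names; slots map a slot name to the hook it expects; rels are
    elementary predications as plain tuples.
    """
    def __init__(self, hook, slots, rels, hcons=()):
        self.hook = hook          # (label, index) visible to the outside
        self.slots = dict(slots)  # name -> (label, index) holes to fill
        self.rels = list(rels)    # elementary predications
        self.hcons = list(hcons)  # handle constraints, e.g. ('qeq', h, h2)

def compose(functor, argument, slot):
    """Fill one named slot of `functor` with `argument`'s hook.

    The point of the algebra is that composition may do ONLY this:
    equate the slot with the argument's hook, union the rels and hcons,
    and keep the functor's hook.  No other information passes between
    the two sements.
    """
    # Equate the argument's hook variables with the functor's slot variables.
    eqs = dict(zip(argument.hook, functor.slots[slot]))
    ren = lambda v: eqs.get(v, v)
    new_rels = [tuple(ren(v) for v in r) for r in argument.rels]
    new_hcons = [tuple(ren(v) for v in c) for c in argument.hcons]
    # The filled slot disappears; the argument's unfilled slots pass up.
    slots = {n: h for n, h in functor.slots.items() if n != slot}
    slots.update({n: tuple(ren(v) for v in h)
                  for n, h in argument.slots.items()})
    return Sement(functor.hook, slots,
                  functor.rels + new_rels,
                  functor.hcons + new_hcons)

# Illustrative use: saturate the subject slot of an intransitive verb.
bark = Sement(('h1', 'e1'), {'SUBJ': ('h1', 'x1')},
              [('_bark_v', 'h1', 'e1', 'x1')])
dog = Sement(('h2', 'x2'), {}, [('_dog_n_1', 'h2', 'x2')])
clause = compose(bark, dog, 'SUBJ')
```

A grammar rule that, say, reached into `argument.rels` directly or overwrote the functor's hook would be exactly the kind of composition the algebra is meant to exclude.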


>
>
>>
>>> Subquestions:
>>>
>>>  (1a) I was a bit surprised to see the positing of labels in the model.
>>> What
>>> would a label correspond to in the world?  Is this akin to reification
>>> of propositions?
>>> Are we really talking about all the labels here, or just those that
>>> survive once
>>> an MRS is fully scoped?
>>>
>>
>>  the model here is not a model of the world - it's a model of semantic
>> structures (fully scoped MRSs).
>>
>>
>  Oh, I missed that in the paper.  So are we then talking about
> "interpretation"
> because the fully scoped MRSs are assumed to have interpretations via
> a second layer of modeling?
>
>
> yes
>
>
>
>>   (1b) How does this discussion relate to what Ann was talking about at
>>> IWCS
>>> regarding the logical fragment of the ERG and the rest of the ERG?  That
>>> is,
>>> if all of the ERG were algebra-compliant, does that mean that all of the
>>> ERSs
>>> it can produce are compositional in their interpretation? Or does that
>>> require
>>> a model that can "keep up"?
>>>
>>
>>  it's really orthogonal - what I was talking about at IWCS was about the
>> complete MRSs.
>>
>>
>  Got it.
>
>
>>  (2) Copestake et al state: "Since the constraints [= constraints on
>>> grammar rules
>>> that make them algebra-compliant] need not be checked at runtime, it
>>> seems
>>> better to regard them as metalevel conditions on the description of the
>>> grammar,
>>> which can anyway easily be checked by code which converts the TFS into
>>> the
>>> algebraic representation."  What is the current thinking on this?  Is it
>>> in fact
>>> possible to convert TFSs (here I assume that means lexical entries & rules?)
>>> to
>>> algebraic representation?  Has this been done?
>>>
>>>
>>  `easily' might be an exaggeration, but the code is in the LKB, though it
>> has to be parameterised for the grammar and may not work with the current
>> ERG. You can access it via the menu on the trees, if I remember correctly.
>> The small mrscomp grammar is algebra compliant, the ERG wasn't entirely
>> when I tested it.
>>
>>
>  In the spirit of keeping the discussion going without delays, I haven't
> actually
> played with this yet.  But: accessible from the trees seems to suggest
> that the
> testing takes place over particular analyses of particular inputs, and not
> directly
> on the grammar as static code analysis.  Is that right?
>
>
> I think one could use static analysis based on that code on a grammar
> which was very careful about not having pointers into the semantics other
> than those licensed by the algebra.  Either no such pointers at all, or
> ones in which there was a clear locality, so it was never possible for
> information to sneak back into the semantics bypassing the official
> composition.   Proving that can't happen with a grammar like the ERG is
> difficult.
>
>
Would it be possible at least to detect pointers into the semantics, for
hand-investigation?
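[Editor's note: a static check along these lines seems feasible in principle. The sketch below is purely illustrative: it assumes a hypothetical dict encoding of a rule's feature structure and it ignores reentrancy, which is precisely what real "pointers into the semantics" exploit. It only enumerates every path that reaches a `CONT` feature, so that licensed hook/slot paths could be separated from rogue ones by hand.]

```python
def paths_into_semantics(fs, sem_feature='CONT', prefix=()):
    """List every feature path in `fs` that reaches the semantics.

    `fs` is a nested dict standing in for a typed feature structure
    (hypothetical encoding; a real TFS check would follow reentrancies
    in the grammar's own representation).
    """
    hits = []
    for feat, val in fs.items():
        path = prefix + (feat,)
        if feat == sem_feature:
            hits.append(path)
        if isinstance(val, dict):
            hits.extend(paths_into_semantics(val, sem_feature, path))
    return hits

# Illustrative rule skeleton: the mother's CONT plus one daughter's CONT.
rule = {'SYNSEM': {'LOCAL': {'CONT': {'HOOK': {'LTOP': 'h1'}}}},
        'ARGS': {'FIRST': {'SYNSEM': {'LOCAL': {'CONT': {'HOOK': {}}}}}}}
```

Running `paths_into_semantics(rule)` on this skeleton yields both paths; a hand-investigation would then check each reported path against the ones the algebra licenses.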

Emily


>   Thanks again,
> Emily
>
>
> you're welcome!
>
> All best,
>
> Ann
>
>


-- 
Emily M. Bender
Professor, Department of Linguistics
Check out CLMS on facebook! http://www.facebook.com/uwclma

